scikit-bio not working after installation
Question: I have installed scikit-bio on my mac and when I run `python -m skbio.test`, I
get the following error:
File "/macqiime/anaconda/lib/python2.7/site-packages/skbio/io/tests/test_util.py", line 17, in <module>
import httpretty
ImportError: No module named httpretty.
Could this be related to my `PATH`? (macqiime is in the Users directory.)
Answer: **Edit** : Changed my answer to use conda instead of pip
Try executing:
conda install -c https://conda.anaconda.org/hargup httpretty
|
How to use pydoc to generate a list of methods and properties in a Python file
Question: I have some classes that I wrote, and in some of them I added docstrings,
such as in the class header.
Now I would like to use pydoc to generate documentation, but I realized that
pydoc won't print anything unless I actually write docstrings inside the
class, which is not what I want.
Is there a way to have pydoc generate a list of all the properties, methods
and their type, including the type of the parameter required (if any), and the
type of the return (if any)?
If I have a class like this:
class myclass(object):
    def __init__(self, anumber=2, astring="hello"):
        self.a = anumber
        self.b = astring
    def printme(self):
        thestring = str(self.a) + self.b + "\nthat's all folks\n"
        return thestring
    def setvalues(self, a_number, a_string):
        self.a = a_number
        self.b = a_string
I would like to print something that includes the class name, class method,
datatype:
class name
what parameters it takes in the init and the type of the parameters
method name
what parameters it takes and the type of the parameters
what value return and its type.
I believe pydoc won't do that. Is there any other way to do so?
I can add the doc strings for explanation later, but first of all, I would
like to print out what is in my modules, to know what it takes, what it
returns and so on.
Answer: Pydoc should provide you with the skeleton details, the same as `help(myclass)`;
this will show you the function signatures of your class without any
docstrings. You can use the pydoc command to get this help information:
$ pydoc MyClass.myclass
Help on class myclass in MyClass:
class myclass(builtins.object)
| Methods defined here:
|
| __init__(self, anumber=2, astring='hello')
|
| printme(self)
|
| setvalues(self, a_number, a_string)
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
You can do this programmatically with the `pydoc` module:
>>> import pydoc
>>> import MyClass
>>> h = pydoc.plain(pydoc.render_doc(MyClass))
>>> print h
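If you want the signatures as data rather than rendered text, the standard `inspect` module can enumerate the methods. A minimal sketch (Python 2, to match the above; note that parameter and return types are not recoverable without docstrings or annotations):
import inspect
import MyClass
# list every method of the class together with its argument list
for name, method in inspect.getmembers(MyClass.myclass, inspect.ismethod):
    argspec = inspect.getargspec(method)  # (args, varargs, keywords, defaults)
    print name, inspect.formatargspec(*argspec)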
|
Split a list into chunks with multiple integer delimiters
Question: I have two list of integers, namely `numbers` and `delimiters`. I want to
divide `numbers` into chunks divided by `delimiters`.
## Example:
numbers = [10, 13, 7, 204, 129, 44, 12, 7, 17, 49, 216, 43, 16, 7, 7, 38, 29, 76, 54, 17, 39, 7, 17, 77, 7, 24, 19]
delimiters = [7,17,77]
## Output:
result = [[10,13],[204, 129, 44, 12],[49, 216, 43, 16],[38, 29, 76, 54],[39],[24, 19]]
As already seen in the example, I want to split the list if I see one or
multiple occurrences of a `delimiter`.
I can easily do it with **a simple loop**, but I am sure there should be a
_better_, probably more _Pythonic_, way of doing it. I also feel like
[`itertools`](https://docs.python.org/2/library/itertools.html) is the way to
go; however, I couldn't find a good function that fits this problem.
Answer: Here you go, using
[`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby):
>>> from itertools import groupby
>>> [list(g) for k, g in groupby(numbers, delimiters.__contains__) if not k]
[[10, 13], [204, 129, 44, 12], [49, 216, 43, 16], [38, 29, 76, 54], [39], [24, 19]]
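One refinement worth noting: `delimiters.__contains__` scans the whole list for every element, so if the delimiter list is long it is cheaper to test membership against a set:
>>> delims = set(delimiters)
>>> [list(g) for k, g in groupby(numbers, delims.__contains__) if not k]
[[10, 13], [204, 129, 44, 12], [49, 216, 43, 16], [38, 29, 76, 54], [39], [24, 19]]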
|
Can't import edgelist to igraph python
Question: I have a list of twitter followers in text file that I want to import to
iGraph.
Here's the sample of my list
393795446 18215973
393795446 582203919
393795446 190709835
393795446 1093090866
393795446 157780872
393795446 1580109739
393795446 3301748909
393795446 1536791610
393795446 106170345
393795446 9409752
And this is how I import it
from igraph import *
twitter_igraph = Graph.Read_Edgelist('twitter_edgelist.txt', directed=True)
But I get this error.
---------------------------------------------------------------------------
InternalError Traceback (most recent call last)
<ipython-input-10-d808f2237fa8> in <module>()
----> 1 twitter_igraph = Graph.Read_Edgelist('twitter_edgelist.txt', directed=True)
InternalError: Error at type_indexededgelist.c:369: cannot add negative number of vertices, Invalid value
I'm not sure why it's saying something about a negative number. I checked the
file and it doesn't contain any negative numbers or IDs.
Answer: You need to use `Graph.Read_Ncol` for this type of file format. Why your file
doesn't conform to a typical "edgelist" format: most likely because
`Read_Edgelist` treats each number as a zero-based vertex index, so Twitter IDs
this large overflow the vertex count (hence the complaint about a negative
number). I should also mention that I grabbed the answer from
[here](http://stackoverflow.com/questions/14471473/format-for-importing-edgelist-into-igraph-in-python).
Tamàs seems to be the main igraph guy around here. I'm sure he can give a more
detailed reason as to why you need to use `Ncol` as opposed to `Edgelist`.
This works for me.
from igraph import *
twitter_igraph = Graph.Read_Ncol('twitter_edgelist.txt', directed=True)
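As a side note (assuming igraph's standard NCOL behaviour), `Read_Ncol` renumbers vertices from zero and keeps the original Twitter IDs in the `name` vertex attribute, so the IDs are still available:
>>> twitter_igraph.vs[0]['name']
'393795446'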
* * *
**Personal Plug**
This is a great example of where igraph's documentation could be improved.
For example: The only accompanying text with the
[Graph.Read_Edgelist() doc](http://igraph.org/python/doc/igraph.GraphBase-class.html#Read_Edgelist)
says...
> Reads an edge list from a file and creates a graph based on it. Please note
> that the vertex indices are zero-based.
This doesn't really tell me anything when obviously there are nuances with how
the file needs to be formatted. Saying what format this function expects the
file to be in would save a lot of people their sanity.
|
How to make hexbin plots from a data file using seaborn?
Question: I'm pretty new to using matplotlib and seaborn, and I couldn't really find any
"for dummies" guides on how to do this. I keep getting error messages trying
to use code from the guides I can find. I guess I'm having difficulty taking
their pieces of code and knowing how to apply them to my problem.
I'd like to make a plot like the ones here:
[1](http://stanford.edu/~mwaskom/software/seaborn/tutorial/distributions.html)
and
[2](http://stanford.edu/~mwaskom/software/seaborn/examples/hexbin_marginals.html).
I have a data file with two columns of data ranging from -180 to 180.
This is my attempt at the code:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import kendalltau
sns.set(style="ticks")
f2 = open("dihs23")
lines = f2.readlines()
f2.close()
x = []
y = []
for line in lines:
p = line.split()
x.append(float(p[0]))
y.append(float(p[1]))
sns.jointplot(x, y, kind="hex", stat_func=kendalltau, color="#4CB391")
sns.plt.show()
Which returns the error
Traceback (most recent call last):
File "heatmap.py", line 30, in <module>
sns.jointplot(x, y, kind="hex", stat_func=kendalltau, color="#4CB391")
File "/usr/local/lib/python2.7/dist-packages/seaborn/distributions.py", line 973, in jointplot
xlim=xlim, ylim=ylim)
File "/usr/local/lib/python2.7/dist-packages/seaborn/axisgrid.py", line 1133, in __init__
x = x[not_na]
TypeError: only integer arrays with one element can be converted to an index
I'm guessing there's some aspect to the format of the data that is part of the
problem, but I'm not sure how to fix it.
Thank you for the help!
Answer: Try transforming your lists to `arrays` with NumPy:
x_axis = np.asarray(x)
y_axis = np.asarray(y)
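As a side note, the manual read-and-split loop can be replaced by NumPy's text loader, assuming the file really is two whitespace-separated numeric columns:
import numpy as np

# unpack=True transposes the result, giving one array per column
x, y = np.loadtxt("dihs23", unpack=True)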
|
transforming wordnet txt into lists in python nltk
Question: **I am running the following function:**
import nltk
from nltk.corpus import wordnet as wn
def noun_names(list):
for synset in list:
for lemma in synset.lemmas():
print lemma.name()
noun_names(list(wn.all_synsets(wn.NOUN)))
**and it returns a long list of all the names of nouns in wordnet:**
e.g.
epoch
Caliphate
Christian_era
Common_era
day
year_of_grace
Y2K
generation
anniversary
How do I take this output, which is neither a string nor a list, and turn it
into a list? Thanks so much.
Answer: Instead of printing to the stdout with your:
print lemma.name()
Why not append it to a list and return the list?
def noun_names(synsets):  # avoid calling the parameter "list": it shadows the builtin
    names = []
    for synset in synsets:
        for lemma in synset.lemmas():
            names.append(lemma.name())
    return names
names = noun_names(list(wn.all_synsets(wn.NOUN)))
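Equivalently, the whole collection step collapses into a nested list comprehension:
names = [lemma.name()
         for synset in wn.all_synsets(wn.NOUN)
         for lemma in synset.lemmas()]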
|
Module being imported from another Python install
Question: I'm running multiple installs of Python on Windows Server 2012. I can probably
find a way to work around this, but I'm curious as to what is going on. I'm
wary about radically changing the installs in case I break other people's
Python scheduled tasks that I may not be aware of.
(All the code boxes below are PowerShell).
PS C:\> C:\Python34\Scripts\pip.exe list
jdcal (1.0)
pip (7.1.2)
setuptools (12.0.5)
virtualenv (13.1.2)
Although this Python 3.4 install doesn't have Django installed, it appears to
pick up the version from the Python 33x86 install. Is that normal?
PS C:\> C:\Python34\python.exe -c "import django; print(django.get_version())"
1.6.5
PS C:\> C:\Python33x86\python.exe -c "import django; print(django.get_version())"
1.6.5
I've created a Python virtualenv based on Python 3.4 and installed Django
1.8.4 in it. Doing a "pip list" confirms that it is installed correctly:-
PS C:\> D:\PyVirtualEnvs\example_py34\Scripts\activate.bat
PS C:\> D:\PyVirtualEnvs\example_py34\Scripts\pip.exe list | Select-String "Django "
Django (1.8.4)
However, when I import within that virtualenv, I get Django version 1.6.5:-
PS C:\> D:\PyVirtualEnvs\example_py34\Scripts\python.exe -c "import django; print(django.get_version())"
1.6.5
Is this a bug in virtualenv or am I missing something?
EDIT: Could it be related to [this question](http://stackoverflow.com/questions/25920265/python-module-imported-from-outside-virtualenv)?
EDIT2: The same thing happens when using
[pyvenv](https://docs.python.org/3/library/venv.html), as suggested by ham-sandwich.
Answer: The only thing that looks strange to me is that you are running
D:\PyVirtualEnvs\example_py34\Scripts\activate.bat
in PowerShell when there is an activate.ps1. A .bat file runs in a separate
cmd.exe process, so the environment changes it makes do not persist in your
PowerShell session, which would explain why the wrong interpreter keeps being
picked up.
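A sketch of activating with the PowerShell script instead (same path as in the question):
PS C:\> D:\PyVirtualEnvs\example_py34\Scripts\activate.ps1
PS C:\> python -c "import django; print(django.get_version())"
which should now report the 1.8.4 installed in the virtualenv.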
|
python, how to parallelize several calls to the same function without losing execution time
Question: I have some trouble parallelizing a function. I would like to use a function
several times in a parallel way:
def f_diff_coord(vect1,vect2):
return vect1-vect2
This function needs to be computed several times with different vectors in a
general for loop. So my code is written this way:
from multiprocessing import Pool
import numpy as np

def f_diff_coord(vect1,vect2):
    return vect1-vect2,vect1+vect2

if __name__ == '__main__':
    p = Pool(3)
    for _ in range(many_times):  # many iterations
        vect_a = np.arange(10)+2
        vect_b = np.arange(10)
        vect_c = np.arange(10)
        vect_d = np.arange(10)+3
        #vect_ are just for example
        r1 = p.apply_async(f_diff_coord, (vect_a, vect_b))
        r2 = p.apply_async(f_diff_coord, (vect_c, vect_d))
        data_a = r1.get()
        data_b = r2.get()
        #do something with data_
I run this kind of code with a pool of 3 workers and it seems to be parallelized
(in my Task Manager on Windows). However, the computation time is quite a bit
longer than in the serial code. Am I missing something, or is it the fact that
the calls to several processes take a lot of time to initiate?
Answer: You are already using the
[multiprocessing](https://pymotw.com/2/multiprocessing/basics.html) module; the
problem is how the work is dispatched. Every `apply_async` call pickles its
arguments and ships them to a worker, and the immediate `r1.get()` blocks until
the result has been pickled and shipped back. For a function as cheap as a
vector subtraction, that per-call messaging (plus the one-time cost of starting
the worker processes) dwarfs the actual computation, so the parallel version
ends up slower than the serial one. Batch many inputs into a single `Pool.map`
call so each worker receives a sizeable chunk of work per round trip.
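A minimal sketch of the batched version (the iteration count of 1000 is made up for illustration):
import numpy as np
from multiprocessing import Pool

def f_diff_coord(pair):
    vect1, vect2 = pair  # Pool.map passes each item as a single argument
    return vect1 - vect2, vect1 + vect2

if __name__ == '__main__':
    pairs = [(np.arange(10) + i, np.arange(10)) for i in range(1000)]
    p = Pool(3)
    results = p.map(f_diff_coord, pairs, chunksize=100)  # few IPC round trips
    p.close()
    p.join()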
|
python unicode in subprocess does not work same in console and in mod_wsgi
Question: On a python console I can run (echo of e acute):
import subprocess
cmd = u'echo "é"'
subprocess.call(cmd,shell=True)
But if I run that code in a Django view (mod_wsgi), it crashes:
subprocess.call(cmd,shell=True)
File "/usr/lib64/python2.6/subprocess.py", line 444, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib64/python2.6/subprocess.py", line 595, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1106, in _execute_child
raise child_exception
TypeError: execv() arg 2 must contain only strings
I can solve this problem by doing a `.encode('ascii','replace')` but it strips
the accented chars.
I cannot see any way to specify a locale or an encoding while calling the
subshell. I tried to configure mod_wsgi with lang and locale set to utf-8 but it
did not help.
How can I run a subprocess call with accented chars on mod_wsgi?
Answer: I finally found a way: encode the command to UTF-8 bytes explicitly. In
Python 2, `subprocess` ultimately needs byte strings for `execv()`; on an
interactive console the unicode command is encoded implicitly using the
terminal's locale, but under mod_wsgi the default encoding is ASCII, so a
unicode argument containing non-ASCII characters fails.
subprocess.call(cmd.encode('utf-8'),shell=True)
|
Store User Value in an Entry box to calculate a value in Tkinter
Question: I am trying to calculate an equation from user input using Tkinter in python.
The code is as follows:
import math
from tkinter import *
def Solar_Param():
d = Interface.get()
S_E = 1367*(1 + 0.0334 * math.cos(((math.pi*360)/180) * (d - 2.7206) / 365.25))
nlabel1 = Label(nGui, text = S_E).pack(side="left")
return S_E
nGui = Tk()
Interface = IntVar()
nGui.title("Solar Calculations")
nlabel = Label(text = "User Interface for Solar Calculation")
nlabel.pack()
nbutton = Button(nGui, text = "Calculate", command = Solar_Param).pack()
nEntry = Entry(nGui, textvariable = Interface).pack()
nGui.mainloop()
Here, the value of S_E is calculated automatically using the default value of d,
i.e. 0, which I do not want. And even though I change the input to some other
value in the UI, the output is still calculated for the default value.
I tried using a class-based approach with self, but my superiors don't want the
code to get complex. What should I do to calculate the value of S_E without
changing the source code much?
Answer: Your calculation seems perfectly fine. I think the problem is that you keep
creating new labels without destroying the old ones, so you aren't seeing the
new calculations.
Create the result label once, and then modify it for each calculation:
import math
from tkinter import *  # "Tkinter" on Python 2
def Solar_Param():
d = Interface.get()
S_E = 1367*(1 + 0.0334 * math.cos(((math.pi*360)/180) * (d - 2.7206) / 365.25))
result_label.configure(text=S_E)
return S_E
nGui = Tk()
Interface = IntVar()
nGui.title("Solar Calculations")
nlabel = Label(text = "User Interface for Solar Calculation")
nlabel.pack()
nbutton = Button(nGui, text = "Calculate", command = Solar_Param)
nbutton.pack()  # pack() returns None, so keep the widget reference separate
nEntry = Entry(nGui, textvariable = Interface)
nEntry.pack()
result_label = Label(nGui, text="")
result_label.pack(side="top", fill="x")
nGui.mainloop()
|
How to create a module object by content in Python
Question: I have a string which contains Python code. Is there a way to create a
Python module object from the string, without an additional file?
content = "import math\n\ndef f(x):\n return math.log(x)"
my_module = needed_function(content) # <- ???
print my_module.f(2) # prints 0.6931471805599453
Please, don't suggest using `eval` or `exec`. I need a python module object
exactly. Thanks!
Answer: You can create an empty module with the `imp` module, then load your code
into the module with `exec`.
content = "import math\n\ndef f(x):\n return math.log(x)"
import imp
my_module = imp.new_module('my_module')
exec content in my_module.__dict__ # in python 3, use exec() function
print my_module.f(2)
This is my answer, but I do not recommend using this in an actual application.
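For Python 3, where `imp` is deprecated, the same idea works with `types.ModuleType`; a sketch of the equivalent:
import types

content = "import math\n\ndef f(x):\n    return math.log(x)"
my_module = types.ModuleType('my_module')
exec(content, my_module.__dict__)  # run the source inside the module's namespace
print(my_module.f(2))  # 0.6931471805599453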
|
TortoiseHg error running HG
Question: I'm running into an error when pushing a local repository to the master
repository located on a network share in Windows 7. I've added a hook on the
master repository to perform "hg update" on push. When I run push from
the local repository in TortoiseHg, I get this error in the console:
Traceback (most recent call last):
File "c:\Python26\lib\site-packages\py2exe\boot_common.py", line 92, in <module>
ImportError: No module named linecache
Traceback (most recent call last):
File "<install zipextimporter>", line 1, in <module>
ImportError: No module named zipextimporter
Traceback (most recent call last):
File "hg", line 10, in <module>
ImportError: No module named os
warning: changegroup hook exited with status 255
The push happens, but does not successfully execute the hook. Furthermore, I
get this error every time I run the "hg" command on the command
line, except when running it inside the C:\Program Files\TortoiseHg directory.
I've put the "C:\Program Files\TortoiseHg" in the `PATH` environment variable,
but without any success. The system is Windows 7 x64.
In general TortoiseHg seems to work: commit, update, push, pull... But
finer details like hooks do not. I've installed TortoiseHg 3.5.1 ONLY,
without any separate Mercurial or Python. Here is where I've installed it from:
<http://bitbucket.org/tortoisehg/files/downloads/tortoisehg-3.5.1-x64.msi>
<http://tortoisehg.bitbucket.org/download/index.html>
Can somebody help me? I've seen a similar question on StackOverflow, but the
person had both Mercurial and TortoiseHg installed.
So, why do I get these errors when running hg in the command line? Why pushing
from TortoiseHg GUI doesn't successfully execute the hook on the remote
repository?
Answer: If the traceback is from the push target, then:
* You **have** Python 2.6 on it ('File "c:\Python26\lib\site-packages\py2exe\boot_common.py"...')
* Mercurial 3.5 (and thus TortoiseHG as well) dropped compatibility with Python 2.6, AFAICR, but:
THG picks up the system Python's modules if the Python directory comes earlier
in PATH and hg is called outside the THG home directory (when called inside it,
the modules bundled with THG are used); and the working directory for a running
hook is always the repository directory.
Change the order in PATH (to be tested!), or update Python to 2.7, or remove the
current Python from PATH.
> Why pushing from TortoiseHg GUI doesn't successfully execute the hook on the
> remote repository?
Because:
1. hook is push-target task, unrelated to push-source
2. You have troubles with hg (called in hook) on server's side, which are not related to client's THG
|
Retrieving Ms-access form properties in python
Question: I want to retrieve MS Access form dimensions through Python. Getting the
"width" is no problem, but "height" is not a direct property; it consists of the
partial heights of sections. In VBA these can be obtained via
Form!fName.Section(n).Height. In Python this fails. Does anyone know how to
access these properties directly? The code used is:
formNames = []
strDbName = 'D:\\Python\\workspace\\Test_MDB\\kaders.mdb'
oApp = Dispatch("Access.Application")
oApp.Visible = True
win32api.keybd_event(0x10, 0x10, 0, 0)
oApp.OpenCurrentDatabase(strDbName)
win32api.keybd_event(0x10, 0x10, 2, 0)
print("Program start .....")
for form in oApp.CurrentProject.AllForms:
formName = form.name # get form name and open
oApp.DoCmd.OpenForm(formName, 1) # acDesign = 1
frm = oApp.Forms(formName) # point to form
print (frm.width)
print (frm.section(0).height)
last line crashes with
print (frm.section(0).height)
File ">", line 2, in section
pywintypes.com_error: (-2147352573, 'can't find member.', None, None)
any ideas?
thanks,
Walter
Answer: To get a form section's height, simply reference the named section of the
form without a numeric index: Detail, FormHeader, PageHeader.
Also, I adjusted your code somewhat to convert [twips to
inches](https://support.microsoft.com/en-us/kb/76388) in the form's dimension
print statements, and wrapped the loop in a `try/except` (the counterpart to
VBA's `On Error` handler), since you want to always uninitialize COM objects
from memory after the script runs, regardless of errors.
import win32api
import win32com.client
strDbName = 'D:\\Python\\workspace\\Test_MDB\\kaders.mdb'
oApp = win32com.client.Dispatch("Access.Application")
oApp.Visible = True
win32api.keybd_event(0x10, 0x10, 0, 0)
oApp.OpenCurrentDatabase(strDbName)
win32api.keybd_event(0x10, 0x10, 2, 0)
print("Program start .....")
try:
for form in oApp.CurrentProject.AllForms:
formName = form.name # get form name and open
oApp.DoCmd.OpenForm(formName, 1) # acDesign = 1
frm = oApp.Forms(formName) # point to form
print (form.name)
print ((frm.width)/1440) # convert twips to inches (1440 twips = 1 inch)
print ((frm.Detail.height/1440)) # convert twips to inches (1440 twips = 1 inch)
print ("-----------------------")
oApp.DoCmd.Close(2, formName) # closing form to remove instance (in case of subforms)
except Exception as e :
print (e)
oApp.DoCmd.CloseDatabase # closing database to release from memory
oApp = None # uninitializing com object
|
Python - Using function in parent from child
Question: ~~I am trying to make a module that when imported can be used to easily define
commands for an interactive 'console'. However this requires me to be able to
run a function from the parent file, which when I do so I get this:`<function
Test at 0x027234B0>` instead of the function being run.
I am somewhat new to using classes and modules in python so I am not sure what
I am meant to be doing.
Here is the module for the Menu (Menu.py): (Not complete, just trying to get
this working) ~~
I have just been an imbecile and forgot to put the thing in quotes.
class Menu:
def __init__(self):
self.temp=0
self.menuobj = dict()
def add(self, command, function):
self.menuobj[command] = function
print(command)
return 0
def debug(self):
print(self.menuobj)
def lookup(self, command):
return self.menuobj[command]
def mainloop(self):
while 1:
x = input("> ")
try:
self.menuobj[x]()
except KeyError:
print("Not Found")
if __name__ == "__main__":
print("This module is meant to be imported")
And the module that calls it:
import Menu
def Men():
a = Menu.Menu()
a.add("1",Test)
a.mainloop()
def Test():
print(Test)
Men()
Answer: The issue is in your `Test()` function, not that its not getting called -
def Test():
print(Test)
You are printing the reference to `Test` itself, hence it prints what you get
- `<function Test at 0x027234B0>` .
Example to show this -
>>> def a():
... print(a)
...
>>> a()
<function a at 0x0018B198>
You should print something meaningful.
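For instance (a hypothetical fix), print a string literal rather than the function object:
def Test():
    print("Test was run")  # a string, not the function object itself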
|
Using setuptools to create a cython package calling an external C library
Question: I am trying to compile, install and run a package that we'll call `myPackage`.
It contains a `*.pyx` file that calls the function `fftw_set_timelimit()` from
the `fftw` library. Currently, when I run a script `clientScript.py` that imports
the package, I obtain the following error message:
Traceback (most recent call last):
File "clientScript.py", line 5, in <module>
import myPackage.myModule
ImportError: /usr/local/lib/python2.7/dist-packages/myPackage/myModule.so: undefined symbol: fftw_set_timelimit
From what I understand (I am quite new to Python and Cython), the linking with
the C library is not yet performed in my package. Indeed, my `setup.py` file
looks like this:
from setuptools import setup,find_packages
from Cython.Build import cythonize
import os
setup(
name = "myPackage",
version = "0.0.1",
url = "none",
author = "me",
author_email = "[email protected]",
packages=find_packages(),
ext_modules = cythonize("pyClo/pyClo.pyx"),
)
As you can see, my `setup.py` file uses `setuptools`. I decided to do so since
it is recommended by the [Python Packaging User
Guide](https://python-packaging-user-guide.readthedocs.org/en/latest/current.html).
However, the instructions in the [Cython
documentation](http://docs.cython.org/src/tutorial/cython_tutorial.html) use
`distutils` instead. Linking libraries is done through a call to
`distutils.Extension('file', ['file.pyx'], libraries=['fftw'])`. How do I achieve
the same result using `setuptools`?
Answer: It turns out `setuptools` has a class `setuptools.extension.Extension` which
is used in the same way as the `distutils.extension.Extension` class.
In the end, the `setup.py` file looks something like :
from setuptools import setup, find_packages
from setuptools.extension import Extension
from Cython.Build import cythonize
extensions = [
Extension(
"myPackage/myModule",
["myPackage/myModule.pyx"],
include_dirs=['/some/path/to/include/'], # not needed for fftw unless it is installed in an unusual place
libraries=['fftw3', 'fftw3f', 'fftw3l', 'fftw3_threads', 'fftw3f_threads', 'fftw3l_threads'],
library_dirs=['/some/path/to/lib/'], # not needed for fftw unless it is installed in an unusual place
),
]
setup(
name = "myPackage",
packages = find_packages(),
ext_modules = cythonize(extensions)
)
Here is an overview of my installation directory :
.
├── MANIFEST.in
├── myPackage
│ └── myModule.pyx
├── README.rst
└── setup.py
where `myModule.pyx` is the file that calls `fftw_set_timelimit()`.
`MANIFEST.in` contains :
include myPackage/*.*
and `README.rst` is a mere plain text file.
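With this layout, the extension can be built in place for testing, or installed, in the usual way:
python setup.py build_ext --inplace
pip install .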
|
Not turning variable to an int
Question: I wanted to make an RPG game based on the manga Deadmans Wonderland. I want to
change 'james', the variable which represents strength, agility etc. It works OK,
but it doesn't convert it to an int; it keeps it as (hexadecimal, I think?). I
know this because I asked it to print james.
Here's the code so far:
import random
import time
import sys
global james
global var
global var2
global var3
global var4
global var5
global var6
global var7
def james ():
print ("Hello and welcome to DeadMans Wounder Land Python Adventure!")
print ("Hello and welcome to DeadMans Wounder Land Python Adventure!")
time.sleep (1)
name = input("The story begins with you: ")
print ("So" +name+ " You will first need to choose your skill points you can choose from; Streanth, Perception, Endurance, Charisma, Inteligence, Agility, Luck.")
print ("You have 40 points to alocate between 7 skills sets.
choose now!")
time.sleep (1)
var = int(input("What is your streanth?"))
var2 = int(input("What is your Perception?"))
var3 = int(input("What is your Endurance?"))
var4 = int(input("What is your carisma?"))
var5 = int(input("What is your Inteligence?"))
var6 = int(input("What is you Agility?"))
var7 = int(input("What is your Luck?"))
int(james == var+var2+var3+var4+var5+var6+var7)
time.sleep (1)
james == int(40)
if james == 40:
print ("Well done! You can count to 40! these are valuable skills!")
else:
print (james)
print ("Sorry but your varible is not equal to 40, please boot the game again because i cba to make a loop (: ")
sys.exit("Do it again!")
james()
#charecter creation over, now the real wounderland starts!
print (" You wake up, a dry steanch of sweat hits you, your head hurts. You can feel the dry, rough sheet of this bed stick to you as you sit up and rub your eyes. this is not familiar surroundings...")
print ("[this is the ooc/oog speach help. to look around type 'look']")
print ("try it now:")
if input == "look":
print (" You look around, you can see a blank prison cell , a dry bed whitch you are sitting on. a toilet and a sink stationed next to your bed and furter on down the cell you can see a table with 4 of draws.")
print ("[you can search an item by typing serch.")
if input == "serch":
print ("You serch the desk draws...")
time.sleep (0.5)
print (".")
time.sleep (0.5)
print (".")
time.sleep (0.5)
print (".")
time.sleep (1)
print ("You found a small leather bag, this bag contains a manual, a credit card of some sort and a small sweet.")
print ("[type serch again, this might help (: ]")
Answer: Try changing this to:
jamesValue = var+var2+var3+var4+var5+var6+var7
time.sleep (1)
if jamesValue == 40:
print ("Well done! You can count to 40! these are valuable skills!")
and then fixing the subsequent references.
The problems are that `james` is a function the way you have it set up, and that
you're attempting to assign a value with a comparator ('==' is a question, not
a statement).
|
flask-sqlalchemy: AttributeError: type object has no attribute 'query', works in ipython
Question: I'm creating a new flask app using flask-sqlalchemy and flask-restful with
Python 3.4. I've defined my User model as such:
from mytvpy import db
from sqlalchemy.ext.declarative import declared_attr
class BaseModel(db.Model):
__abstract__ = True
id = db.Column(db.Integer, primary_key=True)
created = db.Column(db.TIMESTAMP, server_default=db.func.now())
last_updated = db.Column(db.TIMESTAMP, server_default=db.func.now(), onupdate=db.func.current_timestamp())
@declared_attr
def __tablename__(cls):
return cls.__name__
class User(BaseModel):
username = db.Column(db.String(255), unique=True)
password = db.Column(db.String(255))
email = db.Column(db.String(255), unique=True)
def __init__(self, username, password, email):
super(User, self).__init__()
self.username = username
self.password = password
self.email = email
If I try to query in ipython, it works:
In [15]: from mytvpy.models.base import User
In [16]: User.query.all()
Out[16]: [<mytvpy.models.base.User at 0x7fac65b1c6a0>]
But if I try to hit it from an endpoint:
class User(Resource):
def get(self, user):
return User.query.filter(User.username==user).scalar()
It craps out:
Traceback (most recent call last):
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1820, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask_restful/__init__.py", line 270, in error_router
return original_handler(e)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1403, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/_compat.py", line 32, in reraise
raise value.with_traceback(tb)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1817, in wsgi_app
response = self.full_dispatch_request()
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1477, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask_restful/__init__.py", line 270, in error_router
return original_handler(e)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1381, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/_compat.py", line 32, in reraise
raise value.with_traceback(tb)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask_restful/__init__.py", line 471, in wrapper
resp = resource(*args, **kwargs)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/home/rich/anaconda/envs/mytv/lib/python3.5/site-packages/flask_restful/__init__.py", line 581, in dispatch_request
resp = meth(*args, **kwargs)
File "/home/rich/prj/mytv/mytvpy/blueprints/base.py", line 12, in get
return User.query.filter(User.username==user).scalar()
AttributeError: type object 'User' has no attribute 'query'
Answer: You have two classes named `User`, one extending `BaseModel` and one extending
`Resource`. The latter is shadowing the former.
Change how you import/reference the model and your code will work.
from mytvpy.models import base
base.User.query.all()
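Alternatively, a hypothetical rename of the resource class avoids the shadowing entirely:
from mytvpy.models.base import User

class UserResource(Resource):  # no longer shares a name with the model
    def get(self, user):
        return User.query.filter(User.username == user).scalar()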
|
Generate keystrokes in Linux from Python3
Question: I need to generate keystrokes in Linux (Raspbian) from Python3. Something like
`uinput` but for Python3. I'd prefer not to use `subprocess` for this.
The easier it is to install (apt-get), the better, as it will be used in a guide
to show others.
Any ideas? Thomas
Answer: Found PyUserInput that works.
<https://github.com/SavinaRoja/PyUserInput/wiki/Installation>
<https://github.com/SavinaRoja/PyUserInput>
sudo apt-get install python3-pip
sudo pip-3.2 install python3-xlib
sudo pip-3.2 install PyUserInput
And the Python:
from pykeyboard import PyKeyboard
keyboard = PyKeyboard()
keyboard.tap_key("a")
|
Python Regex - Capturing a sentence based on the beginning and ending
Question: I'm fairly new to Python, but I'm attempting to write a program that will
capture a sentence out of a string, based on the beginning and ending of the
sentence.
For example if my string was
description = "11:26:16 ENTRY 'Insert Imaginative Description of a person' 11:29:17 EXIT 'Insert The Description of the Same Person'"
I know how to do the regex to detect the date stamp and the word entry. I'd
use:
re.search(r'\d{2}:\d{2}:\d{2} ENTRY', description)
Which would of course tell me that there was one entry at that position, but
how would I make the regex capture the date stamp, entry and the following
sentence, but leave out the EXIT log?
Answer: You may try this.
re.search(r'\b(\d{2}:\d{2}:\d{2}(?:\.\d{3})?) ENTRY', description)
Use `re.findall` if you want to do a global match since `re.search` would
return only the first match.
**Example:**
>>> import re
>>> description = "11:26:16 ENTRY 'Insert Imaginative Description of a person' 11:29:17 EXIT 'Insert The Description of the Same Person'"
>>> re.search(r'\b(\d{2}:\d{2}:\d{2}(?:\.\d{3})?) ENTRY', description).group(1)
'11:26:16'
To also get the log after the `ENTRY`:
>>> re.search(r"\b(\d{2}:\d{2}:\d{2}(?:\.\d{3})?) ENTRY '([^']*)'", description).group(1)
'11:26:16'
>>> re.search(r"\b(\d{2}:\d{2}:\d{2}(?:\.\d{3})?) ENTRY '([^']*)'", description).group(2)
'Insert Imaginative Description of a person'
>>> re.search(r"\b(\d{2}:\d{2}:\d{2}(?:\.\d{3})?) ENTRY '([^']*)'", description).group()
"11:26:16 ENTRY 'Insert Imaginative Description of a person'"
|
Python - search list with only a few characters entered
Question: I want to find the string that the user searches for. For example, if the user
enters FD-2** (he doesn't know what the * characters are), the search should
give FD-234 and FD-285. I have a piece of code:
rendszam = input('Adja meg a rendszámot! ')  # "Enter the registration number!"
matching = [s for s in rszamok if rendszam in s]
print(matching)
How can I do that?
Answer: The easiest way is to use a regular expression:
>>> import re
>>> targets = 'FD-234 XY-456 FD-285 XY-890 FD-999'
>>> search = 'FD-2**'
>>> pattern = search.replace('*', '.')
>>> re.findall(pattern, targets)
['FD-234', 'FD-285']
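If the user's input may contain other regex metacharacters, escape it first and then substitute the wildcard; a small hardening sketch:
>>> pattern = re.escape(search).replace(r'\*', '.')
>>> re.findall(pattern, targets)
['FD-234', 'FD-285']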
|
Selenium Python Configure Jenkins to run build. My build fails
Question: I am trying to configure Jenkins to build my Selenium WebDriver Python code.
When I click Build Now, it fails. The console output shows the following:
Building in workspace C:\Program Files\Jenkins\workspace\ClearCore
[ClearCore] $ cmd /c call C:\Windows\TEMP\hudson6133135491793466847.bat
C:\Program Files\Jenkins\workspace\ClearCore>copy E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\*.py
The system cannot find the file specified.
C:\Program Files\Jenkins\workspace\ClearCore>python smoketests.py
python: can't open file 'smoketests.py': [Errno 2] No such file or directory
C:\Program Files\Jenkins\workspace\ClearCore>exit 2
Build step 'Execute Windows batch command' marked build as failure
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Finished: FAILURE
In PyCharm I have a smoketests.py file as follows:
import unittest
from xmlrunner import xmlrunner
from TestCases.LoginPage_TestCase import LoginPage_TestCase
from TestCases.AdministrationPage_TestCase import AdministrationPage_TestCase
from TestCases.DataConfigurationPage_TestCase import DataConfigurationPage_TestCase
login_tests = unittest.TestLoader().loadTestsFromTestCase(LoginPage_TestCase)
admin_tests = unittest.TestLoader().loadTestsFromTestCase(AdministrationPage_TestCase)
dataconf_tests = unittest.TestLoader().loadTestsFromTestCase(DataConfigurationPage_TestCase)
smoke_tests = unittest.TestSuite([login_tests, admin_tests, dataconf_tests])
xmlrunner.XMLTestRunner(verbosity=2, output='test-reports').run(smoke_tests)
I have a test_HTMLRunner.py as follows:
import unittest
import HTMLTestRunner
import os
from TestCases.LoginPage_TestCase import LoginPage_TestCase
from TestCases.AdministrationPage_TestCase import AdministrationPage_TestCase
from TestCases.DataConfigurationPage_TestCase import DataConfigurationPage_TestCase
# get the directory path to output report file
result_dir = os.getcwd()
login_tests = unittest.TestLoader().loadTestsFromTestCase(LoginPage_TestCase)
admin_tests = unittest.TestLoader().loadTestsFromTestCase(AdministrationPage_TestCase)
dataconf_tests = unittest.TestLoader().loadTestsFromTestCase(DataConfigurationPage_TestCase)
smoke_tests = unittest.TestSuite([login_tests, admin_tests, dataconf_tests])
# open the report file
outfile = open(result_dir + "\TestReport.html", "w")
# configure HTMLTestRunner options
runner = HTMLTestRunner.HTMLTestRunner(stream=outfile,
title='Test Report',
description='LADEMO create a basic project test')
# run the suite using HTMLTestRunner
runner.run(smoke_tests)
I have a suite.py as follows:
import sys
import unittest
import HTMLTestRunner
import os
import unittest
import AdministrationPage_TestCase
import LoginPage_TestCase
import DataConfigurationPage_TestCase
class Test_Suite(unittest.TestCase):
def test_main(self):
# suite of TestCases
self.suite = unittest.TestSuite()
self.suite.addTests([
unittest.defaultTestLoader.loadTestsFromTestCase(LoginPage_TestCase.LoginPage_TestCase),
unittest.defaultTestLoader.loadTestsFromTestCase(AdministrationPage_TestCase.AdministrationPage_TestCase),
unittest.defaultTestLoader.loadTestsFromTestCase(DataConfigurationPage_TestCase.DataConfigurationPage_TestCase),
])
runner = unittest.TextTestRunner()
runner.run (self.suite)
import unittest
if __name__ == "__main__":
unittest.main()
In Jenkins I have configured the following:
From the section Build, Execute Windows Batch Command
copy E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\*.py
python smoketests.py
From the section Post-Build Actions, Publish JUnit test result report
test_reports/*..xml
Below test_reports/*..xml it shows:
‘test_reports/*..xml’ doesn’t match anything: even ‘test_reports’ doesn’t exist
How do I get this to work, please? What am I doing wrong? Is there any sample
demo I could follow so that I can get my setup to work? Thanks,
Riaz
Answer: The problem looks to be in the copy step of your batch file. Notice how it says
it can't find the file. Surround the source and destination paths with double
quotes so that Windows knows your path has spaces in it.
It also appears the copy operation doesn't have a destination specified. You
~~should~~ may want to specify that too. Although apparently that isn't a
requirement, as I just found out :).
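For example, the first build step could be quoted like this (paths copied from the question; the trailing dot means the current workspace directory):
copy "E:\RL Fusion\projects\Jenkins sample\ClearCore501\TestCases\*.py" .
python smoketests.py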
Once the copy operation succeeds, check the workspace directory to see if the
file(s) you expect are present.
Alternatively, you can tell the Jenkins job to use a custom workspace, the
directory where your tests live. With this configuration you don't even have
to worry about copying files.
Here's how:
In the job config in Jenkins, open the **_Advanced Project Options_** and
select **_use custom workspace_** and set the directory to `E:\RL
Fusion\projects\Jenkins sample\ClearCore501\TestCases\`.
Then the build command can just be `python smoketests.py`.
|
How to set python exception messages to use a special language?
Question: I'd like to change the way Python handles the language of the returned
error strings, for example for the WinError/OSError exception. I'm working with
ctypes, and WinError is defined as:
def WinError(code=None, descr=None):
if code is None:
code = GetLastError()
if descr is None:
descr = FormatError(code).strip()
return OSError(None, descr, None, code)
The FormatError function is exposed by ..\Python34\DLLs\_ctypes.pyd and is
the Python version of the Win32 FormatMessage function.
DWORD WINAPI FormatMessage(
_In_ DWORD dwFlags,
_In_opt_ LPCVOID lpSource,
_In_ DWORD dwMessageId,
_In_ DWORD dwLanguageId,
_Out_ LPTSTR lpBuffer,
_In_ DWORD nSize,
_In_opt_ va_list *Arguments
);
Ideally the Python equivalent should have the same parameters, but FormatError
takes only one optional parameter: FormatError([code]). I found the
source code of ctypes, written in C. There is a file called callproc.c in which
the FormatError function is defined as:
static TCHAR *FormatError(DWORD code)
{
TCHAR *lpMsgBuf;
DWORD n;
n = FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM,
NULL,
code,
MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), // Default language
(LPTSTR) &lpMsgBuf,
0,
NULL);
if (n) {
while (isspace(lpMsgBuf[n-1]))
--n;
lpMsgBuf[n] = '\0'; /* rstrip() */
}
return lpMsgBuf;
}
LANG_NEUTRAL|SUBLANG_DEFAULT = fallback to user's default language.
Is there a way to control the language of the error strings, perhaps by
setting the locale, an environment variable or something else?
Thanks in advance!
Edit: I think I found something interesting, but I will test it later because
I'm really sleepy. Should this work?
<https://gist.github.com/EBNull/6135237>
Answer: On Windows Vista and later you can call
[`SetThreadPreferredUILanguages`](https://msdn.microsoft.com/en-us/library/dd374052)
to set a list of up to 5 languages for the current thread, ordered by
preference. Windows 7+ also has `SetProcessPreferredUILanguages` to change this
setting for the entire process.
Windows language packs provide the required Multilingual User Interface (MUI)
resource libraries. For the system libraries, you'll find the MUI resources in
subdirectories of System32 that are named by language. For example, if a
Spanish language pack is installed, then `System32\es-ES\kernel32.dll.mui`
should exist. For more info on MUI, read [Understanding
MUI](https://msdn.microsoft.com/en-us/goglobal/dd218459) and [MUI Fundamental
Concepts Explained](https://msdn.microsoft.com/en-us/goglobal/dd218460).
Here's an example ctypes definition for calling
`SetThreadPreferredUILanguages`:
import ctypes
from ctypes import wintypes
kernel32 = ctypes.WinDLL('kernel32')
MUI_LANGUAGE_ID = 0x004
MUI_LANGUAGE_NAME = 0x008
MUI_RESET_FILTERS = 0x001
MUI_CONSOLE_FILTER = 0x100
MUI_COMPLEX_SCRIPT_FILTER = 0x200
def errcheck_bool(result, func, args):
if not result:
raise ctypes.WinError(ctypes.get_last_error())
return args
kernel32.SetThreadPreferredUILanguages = ctypes.WINFUNCTYPE(
wintypes.BOOL,
wintypes.DWORD,
wintypes.LPCWSTR,
wintypes.PULONG,
use_last_error=True
)(('SetThreadPreferredUILanguages', kernel32),
((1, 'flags', MUI_LANGUAGE_NAME),
(1, 'languages', None),
(2, 'pulNumLanguages')))
kernel32.SetThreadPreferredUILanguages.errcheck = errcheck_bool
I defined the `pulNumLanguages` parameter as an out parameter (i.e. type 2),
so ctypes takes care of passing a reference to a temporary `c_ulong`, and then
returns the value as the high-level Python result.
The low-level `BOOL` result of the C call is handled by `errcheck_bool`, which
potentially raises an exception based on the Windows `LastError` code. To
ensure the accuracy of this value, the native call is configured with
`use_last_error=True`, which captures the `LastError` value in a thread-local
storage variable. This captured error value is retrieved by calling
`ctypes.get_last_error`.
The list of languages is passed as a single wide-character string, with each
element terminated by a `nul` character (i.e. `'\0'`). Using [language
names](https://msdn.microsoft.com/en-us/library/dd318696) instead of the old
language IDs is recommended, so I've set `MUI_LANGUAGE_NAME` as the default
flag value.
Example:
>>> ERROR_FILE_NOT_FOUND = 2
>>> kernel32.SetThreadPreferredUILanguages(languages='es-es\0en-us\0')
2
>>> ctypes.FormatError(ERROR_FILE_NOT_FOUND)
'El sistema no puede encontrar el archivo especificado.'
Reset:
>>> kernel32.SetThreadPreferredUILanguages(flags=0)
0
>>> ctypes.FormatError(ERROR_FILE_NOT_FOUND)
'The system cannot find the file specified.'
* * *
Unfortunately this doesn't set the language used by C runtime error messages.
Generally CRT error messages are used on Windows whenever Python relies on the
CRT's low-level POSIX API, e.g. `_open` / `_wopen`:
>>> kernel32.SetThreadPreferredUILanguages(languages='es-es\0en-us\0')
2
>>> open('spam')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'spam'
The above `ENOENT` message is hard-coded in the CRT's `_sys_errlist` array.
For example here's the first 10 entries of this array in Python 3.5 (using the
new universal CRT):
ucrtbase = ctypes.CDLL('ucrtbase')
ucrtbase.__sys_nerr.restype = ctypes.POINTER(ctypes.c_int)
nerr = ucrtbase.__sys_nerr()[0]
ucrtbase.__sys_errlist.restype = ctypes.POINTER(ctypes.c_char_p * nerr)
errlist = ucrtbase.__sys_errlist()[0]
>>> print(*(m.decode() for m in errlist[:10]), sep='\n')
No error
Operation not permitted
No such file or directory
No such process
Interrupted function call
Input/output error
No such device or address
Arg list too long
Exec format error
Bad file descriptor
(Don't actually use ctypes for this. The above was just to show the
implementation. In Python use `os.strerror` to get these CRT error messages.)
In this case you'd have to handle the exception by re-raising it with a custom
or machine translated error message.
|
Python Pandas: How to move one row to the first row of a Dataframe?
Question: Given an existing Dataframe that is indexed.
>>> df = pd.DataFrame(np.random.randn(10, 5),columns=['a', 'b', 'c', 'd', 'e'])
>>> df
a b c d e
0 -0.131666 -0.315019 0.306728 -0.642224 -0.294562
1 0.769310 -1.277065 0.735549 -0.900214 -1.826320
2 -1.561325 -0.155571 0.544697 0.275880 -0.451564
3 0.612561 -0.540457 2.390871 -2.699741 0.534807
4 -1.504476 -2.113726 0.785208 -1.037256 -0.292959
5 0.467429 1.327839 -1.666649 1.144189 0.322896
6 -0.306556 1.668364 0.036508 0.596452 0.066755
7 -1.689779 1.469891 -0.068087 -1.113231 0.382235
8 0.028250 -2.145618 0.555973 -0.473131 -0.638056
9 0.633408 -0.791857 0.933033 1.485575 -0.021429
>>> df.set_index("a")
b c d e
a
-0.131666 -0.315019 0.306728 -0.642224 -0.294562
0.769310 -1.277065 0.735549 -0.900214 -1.826320
-1.561325 -0.155571 0.544697 0.275880 -0.451564
0.612561 -0.540457 2.390871 -2.699741 0.534807
-1.504476 -2.113726 0.785208 -1.037256 -0.292959
0.467429 1.327839 -1.666649 1.144189 0.322896
-0.306556 1.668364 0.036508 0.596452 0.066755
-1.689779 1.469891 -0.068087 -1.113231 0.382235
0.028250 -2.145618 0.555973 -0.473131 -0.638056
0.633408 -0.791857 0.933033 1.485575 -0.021429
How to move the 3rd row to the first row?
That is, the expected result:
b c d e
a
-1.561325 -0.155571 0.544697 0.275880 -0.451564
-0.131666 -0.315019 0.306728 -0.642224 -0.294562
0.769310 -1.277065 0.735549 -0.900214 -1.826320
0.612561 -0.540457 2.390871 -2.699741 0.534807
-1.504476 -2.113726 0.785208 -1.037256 -0.292959
0.467429 1.327839 -1.666649 1.144189 0.322896
-0.306556 1.668364 0.036508 0.596452 0.066755
-1.689779 1.469891 -0.068087 -1.113231 0.382235
0.028250 -2.145618 0.555973 -0.473131 -0.638056
0.633408 -0.791857 0.933033 1.485575 -0.021429
Now the original first row should become the second row.
Answer: Reindexing is probably the optimal solution for putting the rows in any new
order in one apparent step, except that it may require producing a new
DataFrame, which could be prohibitively large.
For example
import pandas as pd
t = pd.read_csv('table.txt',sep='\s+')
t
Out[81]:
DG/VD TYPE State Access Consist Cache sCC Size Units Name
0 0/0 RAID1 Optl RW No RWTD - 1.818 TB one
1 1/1 RAID1 Optl RW No RWTD - 1.818 TB two
2 2/2 RAID1 Optl RW No RWTD - 1.818 TB three
3 3/3 RAID1 Optl RW No RWTD - 1.818 TB four
t.index
Out[82]: Int64Index([0, 1, 2, 3], dtype='int64')
t2 = t.reindex([2,0,1,3]) # cannot do this in place
t2
Out[93]:
DG/VD TYPE State Access Consist Cache sCC Size Units Name
2 2/2 RAID1 Optl RW No RWTD - 1.818 TB three
0 0/0 RAID1 Optl RW No RWTD - 1.818 TB one
1 1/1 RAID1 Optl RW No RWTD - 1.818 TB two
3 3/3 RAID1 Optl RW No RWTD - 1.818 TB four
Now the index can be set back to range(4) without reindexing:
t2.index=range(4)
Out[102]:
DG/VD TYPE State Access Consist Cache sCC Size Units Name
0 2/2 RAID1 Optl RW No RWTD - 1.818 TB three
1 0/0 RAID1 Optl RW No RWTD - 1.818 TB one
2 1/1 RAID1 Optl RW No RWTD - 1.818 TB two
3 3/3 RAID1 Optl RW No RWTD - 1.818 TB four
It can also be done with 'tuple switching' and row selection as a basic
mechanism and without creating a new DataFrame. For example:
import pandas as pd
t = pd.read_csv('table.txt',sep='\s+')
t.ix[1], t.ix[2] = t.ix[2], t.ix[1]
t.ix[0], t.ix[1] = t.ix[1], t.ix[0]
t
Out[96]:
DG/VD TYPE State Access Consist Cache sCC Size Units Name
0 2/2 RAID1 Optl RW No RWTD - 1.818 TB three
1 0/0 RAID1 Optl RW No RWTD - 1.818 TB one
2 1/1 RAID1 Optl RW No RWTD - 1.818 TB two
3 3/3 RAID1 Optl RW No RWTD - 1.818 TB four
Another in-place method sets the DataFrame index to the desired ordering, so
that, for example, the 3rd row gets index 0, etc., and then the DataFrame is
sorted in place. It's encapsulated in the following function, which assumes the
rows are indexed with some range(m) for positive integer m and the DataFrame
is simply indexed (no MultiIndex), as in the example provided in the question.
def putfirst(n,df):
if not isinstance(n, int):
print 'error: 1st arg must be an int'
return
if n < 1:
print 'error: 1st arg must be an int > 0'
return
if n == 1:
print 'nothing to do when first arg == 1'
return
if n > len(df):
print 'error: n exceeds the number of rows in the DataFrame'
return
df.index = range(1,n) + [0] + range(n,df.index[-1]+1)
df.sort(inplace=True)
The arguments of putfirst are n, which is the ordinal position of the row to
relocate to the first row position, so that if the 3rd row is to be so
relocated then n = 3; and df is the DataFrame containing the row to be
relocated.
Here is a demo:
import pandas as pd
df = pd.DataFrame(np.random.randn(10, 5),columns=['a', 'b', 'c', 'd', 'e'])
df.set_index("a") # ineffective without assignment or inplace=True
Out[182]:
b c d e
a
1.394072 -1.076742 -0.192466 -0.871188 0.420852
-1.211411 -0.258867 -0.581647 -1.260421 0.464575
-1.070241 0.804223 -0.156736 2.010390 -0.887104
-0.977936 -0.267217 0.483338 -0.400333 0.449880
0.399594 -0.151575 -2.557934 0.160807 0.076525
-0.297204 -1.294274 -0.885180 -0.187497 -0.493560
-0.115413 -0.350745 0.044697 -0.897756 0.890874
-1.151185 -2.612303 1.141250 -0.867136 0.383583
-0.437030 0.347489 -1.230179 0.571078 0.060061
-0.225524 1.349726 1.350300 -0.386653 0.865990
df
Out[183]:
a b c d e
0 1.394072 -1.076742 -0.192466 -0.871188 0.420852
1 -1.211411 -0.258867 -0.581647 -1.260421 0.464575
2 -1.070241 0.804223 -0.156736 2.010390 -0.887104
3 -0.977936 -0.267217 0.483338 -0.400333 0.449880
4 0.399594 -0.151575 -2.557934 0.160807 0.076525
5 -0.297204 -1.294274 -0.885180 -0.187497 -0.493560
6 -0.115413 -0.350745 0.044697 -0.897756 0.890874
7 -1.151185 -2.612303 1.141250 -0.867136 0.383583
8 -0.437030 0.347489 -1.230179 0.571078 0.060061
9 -0.225524 1.349726 1.350300 -0.386653 0.865990
df.index
Out[184]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64')
putfirst(3,df)
df
Out[186]:
a b c d e
0 -1.070241 0.804223 -0.156736 2.010390 -0.887104
1 1.394072 -1.076742 -0.192466 -0.871188 0.420852
2 -1.211411 -0.258867 -0.581647 -1.260421 0.464575
3 -0.977936 -0.267217 0.483338 -0.400333 0.449880
4 0.399594 -0.151575 -2.557934 0.160807 0.076525
5 -0.297204 -1.294274 -0.885180 -0.187497 -0.493560
6 -0.115413 -0.350745 0.044697 -0.897756 0.890874
7 -1.151185 -2.612303 1.141250 -0.867136 0.383583
8 -0.437030 0.347489 -1.230179 0.571078 0.060061
9 -0.225524 1.349726 1.350300 -0.386653 0.865990
|
Count throws for sixes in rolling dice python
Question: For an introductory course in Python I got an assignment to make a
simulation of rolling dice.
You want all of your dice (5 in total) to get the value six, and count how
many throws in total it takes for a person to get all sixes. I need a loop
that simulates this problem 100,000 times and then divides the total
count by 100,000 to get the outcome. I know that the final outcome
should be somewhere around 13, but I am not getting that and I am not sure
why.
I know something is wrong in my approach to this problem, but what?
import random
count1=0
count2=0
count3=0
count4=0
count5=0
loopcounter = 0
for loopcouter in range (1,100000):
dice1=int( random.random()*6)+1
if dice1 != 6:
#reroll
while dice1 != 6:
dice1=int( random.random()*6)+1
#set counter1
count1 = count1+1
else:
count1 = 1
dice2=int( random.random()*6)+1
if dice2 != 6:
#reroll while not six
while dice2 != 6:
dice2=int( random.random()*6)+1
#set counter2
count2 = count2+1
else:
count2 = 1
dice3=int( random.random()*6)+1
if dice3 != 6:
#reroll while not six
while dice3 != 6:
dice3=int( random.random()*6)+1
#set counter3
count3 = count3+1
else:
count3 = 1
dice4=int( random.random()*6)+1
if dice4 != 6:
#reroll while not six
while dice4 != 6:
dice4=int( random.random()*6)+1
#set counter4
count4 = count4+1
else:
count4 = 1
dice5=int( random.random()*6)+1
if dice5 != 6:
#reroll while not six
while dice5 != 6:
dice5=int( random.random()*6)+1
#set counter5
count5 = count5+1
else:
count5 = 1
#print (dice1)
print (count1)
#print (dice2)
print (count2)
#print (dice3)
print (count3)
#print (dice4)
print (count4)
#print (dice5)
print (count5)
allcount = count1+count2+count3+count4+count5
averagecount = int(allcount / 100000)
print ("the total number of throws is",allcount)
print ("the average number of throws is",averagecount)
So, if anyone could tell me what I am doing wrong, that would be perfect!
Answer: According to the instructions, you need the maximum count for each round, as
this will tell you how many rolls were needed to get all 6s. This is a re-write
of your code using a loop for each die:
import random
allcount = 0
for loopcouter in range(100000): # 1,100000 would only loop 99999 times
count = [0]*5
for i in range(5): # 5 dice
while True:
dice = random.randint(1,6) # Use randint
count[i] += 1
if dice == 6:
break
allcount += max(count) # The number of rolls needed to get all 6s
averagecount = allcount // 100000
print("the total number of throws is", allcount)
print("the average number of throws is", averagecount)
And this seems to average in 12/13 range.
There are many ways to solve this; for example, you can use `iter` with an
anonymous function (`lambda`) to replace the inner while loop. These start to
use more advanced features of Python (iterators and generators):
from random import randint
allcount = 0
for _ in range(100000):
counts = [1]*5
for i in range(5):
dice = list(iter(lambda: randint(1,6), 6))
counts[i] += len(dice)
allcount += max(counts)
averagecount = allcount // 100000
In fact, you can completely collapse this into one line of code, but it gets
increasingly harder to read and breaks all manner of style rules:
allcount = sum(max((1 + sum(1 for _ in iter(lambda: randint(1, 6), 6)))
for _ in range(5)) for _ in range(100000))
averagecount = allcount // 100000
|
Overlapping probability of two normal distribution with scipy
Question: I have two normal distribution curves, scipy.stats.norm(mean, std).pdf(x), and
I am trying to find the overlap of the two curves.
How do I calculate it with scipy in Python? Thanks
Answer: You can use the answer suggested by @duhalme to get the intersection point and
then use this point to define the integration limits.
(Plot: the two Gaussian pdfs with the overlap region shaded; see http://i.stack.imgur.com/7xtWy.png)
The code for this looks like:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
def solve(m1,m2,std1,std2):
a = 1/(2*std1**2) - 1/(2*std2**2)
b = m2/(std2**2) - m1/(std1**2)
c = m1**2 /(2*std1**2) - m2**2 / (2*std2**2) - np.log(std2/std1)
return np.roots([a,b,c])
m1 = 2.5
std1 = 1.0
m2 = 5.0
std2 = 1.0
#Get point of intersect
result = solve(m1,m2,std1,std2)
#Get point on surface
x = np.linspace(-5,9,10000)
plot1=plt.plot(x,norm.pdf(x,m1,std1))
plot2=plt.plot(x,norm.pdf(x,m2,std2))
plot3=plt.plot(result,norm.pdf(result,m1,std1),'o')
#Plots integrated area
r = result[0]
olap = plt.fill_between(x[x>r], 0, norm.pdf(x[x>r],m1,std1),alpha=0.3)
olap = plt.fill_between(x[x<r], 0, norm.pdf(x[x<r],m2,std2),alpha=0.3)
# integrate
area = norm.cdf(r,m2,std2) + (1.-norm.cdf(r,m1,std1))
print("Area under curves ", area)
plt.show()
The cdf is used to obtain the integral of the Gaussian here, although a symbolic
version of the Gaussian could be defined and `scipy.integrate.quad` employed (or
something else). Alternatively, you could use a Monte Carlo method like this
[link](http://rpsychologist.com/calculating-the-overlap-of-two-normal-distributions-using-monte-carlo-integration)
(i.e. generate random numbers and reject any outside the range you want).
|
What is installing Python modules or packages?
Question: A Python module is just a `.py` source file. A Python package is simply a
collection of modules.
So why do we need programs such as `pip` to 'install' Python modules? Why not
just download the files, put them in our project's folder and `import` them?
What exactly does it mean to 'install' a module or a package? And what exactly
does `pip` do?
Are things different on Windows and on Linux?
Answer: > So why do we need programs such as pip to 'install' Python modules? Why not
> just download the files, put them in our project's folder and import them?
It's just meant to facilitate the installation of software without having to
bundle all the dependencies or ask the user to download the files.
You can type `pip install mysoftware` and that will also install the required
dependencies. You can also upgrade a package easily.
> What exactly does it mean to 'install' a module or a package? And what
> exactly does pip do?
It will copy the files to a directory that is on your Python path. This way
you will be able to import the package without having to copy the directory
into your project.
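A quick way to see this for yourself (a small sketch; the printed paths will
vary by system, and `getsitepackages` is unavailable inside some virtualenvs):
    import sys
    import site

    print(site.getsitepackages())  # the directories pip copies packages into
    print(sys.path)                # the directories Python searches on import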
|
Natural time interval processing in Python
Question: I am wondering how to take a user-inputted string (i.e. `1 day, 5 hours, 15
minutes, 2 seconds`) and convert it to either a `timedelta` object or
(preferably) the number of seconds in that interval.
Note that:
* This is not a question about _[datetime](/questions/tagged/datetime "show questions tagged 'datetime'")s_, it is about **[timedelta](/questions/tagged/timedelta "show questions tagged 'timedelta'")s**. I don’t need “Tomorrow” or “In 5 minutes”, I need “1 day” or “5 minutes.”
* All fields are optional
* These are the possible fields:
* `year`, `years`, or `y`
* `month`, `months`, or `m`
* `week`, `weeks`, or `w`
* `day`, `days`, or `d`
* `hour`, `hours`, or `h`
* `minute`, `minutes`, or `m`
* `second`, `seconds`, or `s`
* If you can get me started, I can probably do the rest
* The input can either be delimited by `,` or whitespace
Thank you!
Answer: You could use [`parsedatetime`
module](https://pypi.python.org/pypi/parsedatetime/):
#!/usr/bin/env python
from datetime import date, datetime, timedelta
import parsedatetime as pdt # $ pip install parsedatetime
cal = pdt.Calendar()
midnight = datetime.fromordinal(date.today().toordinal())
for s in "1 day, 5 hours, 15 minutes, 2 seconds".split(', '):
print(repr(cal.parseDT(s, midnight)[0] - midnight))
### Output
datetime.timedelta(1)
datetime.timedelta(0, 18000)
datetime.timedelta(0, 900)
datetime.timedelta(0, 2)
To get the number of seconds, call `.total_seconds()`; or, if you don't need
fractions of a second, you could truncate it:
integer_seconds = td // timedelta(seconds=1)
|
OS X: Self Packaged Kivy-Application does not work
Question: I wanted to get a bit into Python GUI programming and found Kivy, a
really cool framework, cross-platform and open source.
Before I started to put my console-based python script into Kivy, I wanted to
test if I can make the program self running without the need to install Kivy
or other packages to the system. I started with OS X 10.10.5 Yosemite and the
simple "Hello World" usage example provided on the homepage.
There is a guide on the webpage [here](http://kivy.org/docs/guide/packaging-
macosx.html#build-the-spec-and-create-a-dmg) which I followed but when it says
"That’s it, your self contained package is ready to be deployed! when you
double click this app you can see your app run." it just does nothing. No
window opens, no message comes up. system.log just says "Kivy[11273]: App did
finish launching".
I'm using the newest Kivy package from kivy.org (1.9.0-rev3), symlinks have
been created.
Maybe it's my fault: In the guide it says "Now all you need to do is to
include your compiled app into the Kivy.app". What's meant by a "compiled"
app? How do I compile my .py script before I call the package-app.sh script?
I'm happy with any answer!
Answer: I used the first method described on that page and experienced the same issue.
Looking inside the .app (right click >> show package contents) it seems Kivy
doesn't automatically package external libraries.
After I manually copied the required external libraries I was importing, my
app opened without issue.
I copied the libraries from my python directory into: Contents >> Resources >>
venv >> lib >> python2.7 >> site-packages
The python2.7 folder will depend on the version you're using.
Let me know if this works for you.
|
Display two frames of the same class with Python and Tkinter
Question: I wrote a simple game in Python. Now I'm trying to give it a GUI (currently it
is text based). Since there aren't moving objects, I decided to use Tkinter
(I like the grid arrangement, the easy coding to get a menu and the fact that
there is no need to install anything).
I have a few problems I can't seem to solve:
1. I want two display two game boards on the screen. I created a frame class that contain the information and displays it, but it only shows the first one. The second is created but does not appear on screen.
2. All the tiles on the board has the same functionality, but when I press a tile (label object) I want to use some metadata of the object. That is, I want to assign it a string and when I press it I want to get the string attached to the label I pressed. I can't figure out how to do it. **\--- Solved in the comments by Eric Levieil**
3. (least important) Can I put two images, one on top of the other without using canvas/absolute position?
Here is my code:
import tkinter as tk
from tkinter import ttk
class GameBoard(ttk.Frame):
tiles = []
def __init__(self, parent, temp, *args, **kwargs):
ttk.Frame.__init__(self, parent)
self.parent = parent
self.graphics = {'1H':tk.PhotoImage(file='graphics/1H.gif'),
'0': tk.PhotoImage(file='graphics/0.gif')}
for row_num in range(5):
row = []
for cell_num in range(10):
row.append(ttk.Label(self))
row[cell_num]['image'] = self.graphics[temp]
row[cell_num].bind('<Button-1>', lambda event: print('(%d,%d)'%(row_num,cell_num)))
self.tiles.append(row)
for row_num in range(5):
for cell_num in range(10):
self.tiles[row_num][cell_num].grid(row=row_num, column=cell_num)
class MainBoard(ttk.Frame):
def __init__(self, parent, *args, **kwargs):
ttk.Frame.__init__(self, parent)
player_1_name = ttk.Label(self, text='Player 1').grid(row=0, sticky=tk.W)
player_1_board = GameBoard(self, '0')
player_1_board.grid(row=1, sticky=(tk.N, tk.W))
player_2_name = ttk.Label(self, text='Player 2').grid(row=2, sticky=tk.W)
player_2_board = GameBoard(self, '1H')
player_2_board.grid(row=3, sticky=(tk.S, tk.W))
self.grid(row=0)
def main():
root = tk.Tk()
root.title('Battleship')
MainBoard(root)
root.mainloop()
if __name__ == '__main__':
main()
Thanks, Uri
Edit: Here are the two images I'm using
[graphics.zip](http://www.math.tau.ac.il/~urigrupe/graphics.zip).
Answer: Managed it. The problem is that `tiles = []` is a class attribute, so both
`GameBoard` instances share the same list of tiles. Reinitialize the tiles as an
instance attribute in `GameBoard.__init__` and it will work. Just before your
first `for` loop:
    self.tiles = []
    for row_num in range(5):
        ...
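For the metadata issue in point 2 (solved in the comments), the usual pattern is
to bind the loop variables as default arguments of the `lambda`, so each label
captures its own coordinates rather than the final loop values; a sketch of the
changed line:
    row[cell_num].bind(
        '<Button-1>',
        lambda event, r=row_num, c=cell_num: print('(%d,%d)' % (r, c)))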
|
How to run Scrapy/Portia on Azure Web App
Question: I am trying to run Scrapy or Portia on a Microsoft Azure Web App. I have
installed Scrapy by creating a virtual environment:
D:\Python27\Scripts\virtualenv.exe D:\home\Python
And then installed Scrapy:
D:\home\Python\Scripts\pip install Scrapy
The installation seemed to work. But executing a spider returns the following
output:
D:\home\Python\Scripts\tutorial>d:\home\python\scripts\scrapy.exe crawl example
2015-09-13 23:09:31 [scrapy] INFO: Scrapy 1.0.3 started (bot: tutorial)
2015-09-13 23:09:31 [scrapy] INFO: Optional features available: ssl, http11
2015-09-13 23:09:31 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2015-09-13 23:09:34 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
Unhandled error in Deferred:
2015-09-13 23:09:35 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File "D:\home\Python\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "D:\home\Python\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "D:\home\Python\lib\site-packages\scrapy\crawler.py", line 153, in crawl
d = crawler.crawl(*args, **kwargs)
File "D:\home\Python\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "D:\home\Python\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "D:\home\Python\lib\site-packages\scrapy\crawler.py", line 71, in crawl
self.engine = self._create_engine()
File "D:\home\Python\lib\site-packages\scrapy\crawler.py", line 83, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "D:\home\Python\lib\site-packages\scrapy\core\engine.py", line 66, in __init__
self.downloader = downloader_cls(crawler)
File "D:\home\Python\lib\site-packages\scrapy\core\downloader\__init__.py", line 65, in __init__
self.handlers = DownloadHandlers(crawler)
File "D:\home\Python\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 23, in __init__
cls = load_object(clspath)
File "D:\home\Python\lib\site-packages\scrapy\utils\misc.py", line 44, in load_object
mod = import_module(module)
File "D:\Python27\Lib\importlib\__init__.py", line 37, in import_module
__import__(name)
File "D:\home\Python\lib\site-packages\scrapy\core\downloader\handlers\s3.py", line 6, in <module>
from .http import HTTPDownloadHandler
File "D:\home\Python\lib\site-packages\scrapy\core\downloader\handlers\http.py", line 5, in <module>
from .http11 import HTTP11DownloadHandler as HTTPDownloadHandler
File "D:\home\Python\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 15, in <module>
from scrapy.xlib.tx import Agent, ProxyAgent, ResponseDone, \
File "D:\home\Python\lib\site-packages\scrapy\xlib\tx\__init__.py", line 3, in <module>
from twisted.web import client
File "D:\home\Python\lib\site-packages\twisted\web\client.py", line 42, in <module>
from twisted.internet.endpoints import TCP4ClientEndpoint, SSL4ClientEndpoint
File "D:\home\Python\lib\site-packages\twisted\internet\endpoints.py", line 34, in <module>
from twisted.internet.stdio import StandardIO, PipeAddress
File "D:\home\Python\lib\site-packages\twisted\internet\stdio.py", line 30, in <module>
from twisted.internet import _win32stdio
File "D:\home\Python\lib\site-packages\twisted\internet\_win32stdio.py", line 7, in <module>
import win32api
exceptions.ImportError: No module named win32api
2015-09-13 23:09:35 [twisted] CRITICAL:
The documentation <http://doc.scrapy.org/en/latest/intro/install.html> says
that I have to install **pywin32**. I don't know how I can download/install it
via command line since I am in the web app environment.
Is it even possible to run Scrapy or Portia on an Azure Web App or do I have
to use a fully fledged Virtual Machine on Azure?
Thank you!
Answer: You can't run general purpose Windows applications "on" an Azure Web App.
Things that run on Azure as web apps have to be built specifically to do so.
So, you have to use a full-fledged Virtual Machine on Azure.
It seems Azure Webapps can run some Python apps, if they are built on certain
frameworks: <https://azure.microsoft.com/en-us/documentation/articles/web-
sites-python-configure/>
|
How to subtract the value from the previous value in a list in python?
Question: I am trying to take values in a list, such as `[1,2,3]` and subtract them from
each other. So it would return `[-1,-1]` because the first value is `1-2` and
the second value is `2-3`. How would i achieve this in python? I have tried
[x-y for (x,y) in list]
but this gives a 'need more than one value to unpack' error.
Answer: You can also use a generator created by izipping the list with itself, offset
by one index.
from itertools import izip, islice
[x - y for x,y in izip(lst, islice(lst, 1, None))]
This is handy if for some reason `lst` was itself a generator, or otherwise
was not easily examined for its length ahead of time, or you just didn't want
to consume it directly.
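Note that `izip` is Python 2 only; on Python 3, plain `zip` with a slice gives
the same result when the input is a list:
    lst = [1, 2, 3]
    result = [x - y for x, y in zip(lst, lst[1:])]  # pair each item with its successor
    print(result)  # [-1, -1]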
|
PySide/python video player issue
Question: I am trying to write a simple YUV video player using python. After some
initial study, I thought I could use PySide and started with it. As a first
step, I have taken the following approach without consideration for real-time
performance. Read YUV buffer (420 planar) -> convert the YUV image to RGB
(32bit format) - > call PySide utilities for display. The basic problem that I
have with my simple program is that I am able to get only the first frame to
display and the rest are not displayed, eventhough the paint event seems to be
happening according to the counter in the (below) code. I would appreciate any
comments to understand (i) any mistakes and lack of understanding from my side
regarding painting/repainting at regular intervals on QLabel/QWidget. (ii) Any
pointers to Python based video players/display from YUV or RGB source.
#!/usr/bin/python
import sys
from PySide.QtCore import *
from PySide.QtGui import *
import array
import numpy as np
class VideoWin(QWidget):
def __init__(self, width, height, f_yuv):
QWidget.__init__(self)
self.width = width
self.height = height
self.f_yuv = f_yuv
self.setWindowTitle('Video Window')
self.setGeometry(10, 10, width, height)
self.display_counter = 0
self.img = QImage(width, height, QImage.Format_ARGB32)
#qApp.processEvents()
def getImageBuf(self):
return self.img.bits()
def paintEvent(self, e):
painter = QPainter(self)
self.display_counter += 1
painter.drawImage(QPoint(0, 0), self.img)
def timerSlot(self):
print "In timer"
yuv = array.array('B')
pix = np.ndarray(shape=(height, width), dtype=np.uint32, buffer=self.getImageBuf())
for i in range(0,self.height):
for j in range(0, self.width):
pix[i, j] = 0
for k in range (0, 10):
#qApp.processEvents()
yuv.fromfile(self.f_yuv, 3*self.width*self.height/2)
for i in range(0, self.height):
for j in range(0, self.width):
Y_val = yuv[(i*self.width)+j]
U_val = yuv[self.width*self.height + ((i/2)*(self.width/2))+(j/2)]
V_val = yuv[self.width*self.height + self.width*self.height/4 + ((i/2)*(self.width/2))+(j/2)]
C = Y_val - 16
D = U_val - 128
E = V_val - 128
R = (( 298 * C + 409 * E + 128) >> 8)
G = (( 298 * C - 100 * D - 208 * E + 128) >> 8)
B = (( 298 * C + 516 * D + 128) >> 8)
if R > 255:
R = 255
if G > 255:
G = 255
if B > 255:
B = 255
assert(int(R) < 256)
pix[i, j] = (255 << 24 | ((int(R) % 256 )<< 16) | ((int(G) % 256 ) << 8) | (int(B) % 256))
self.repaint()
print "videowin.display_counter = %d" % videowin.display_counter
if __name__ == "__main__":
try:
yuv_file_name = sys.argv[1]
width = int(sys.argv[2])
height = int(sys.argv[3])
f_yuv = open(yuv_file_name, "rb")
videoApp = QApplication(sys.argv)
videowin = VideoWin(width, height, f_yuv)
timer = QTimer()
timer.singleShot(100, videowin.timerSlot)
videowin.show()
videoApp.exec_()
sys.exit(0)
except NameError:
print("Name Error : ", sys.exc_info()[1])
except SystemExit:
print("Closing Window...")
except Exception:
print(sys.exc_info()[1])
I have tried a second approach where I have tried a combination of creating a
Signal object which "emits" each decoded RGB image (converted from YUV)as a
signal which is caught by the "updateFrame" method in the displaying class
which displays the received RGB buffer/frame using QPainter.drawImage(...)
method. YUV-to-RGB decode--->Signal(Image buffer) --->updateFrame --->
QPainter.drawImage(...) This also displays only the first image alone although
the slot which catches the signal (getting the image) shows that it is called
as many times as the signal is sent by the YUV->RGB converter/decoder. I have
also tried running the YUV->RGB converter and Video display (calling
drawImage) in separate threads, but the result is the same.
Please note that in both the cases, I am writing the RGB pixel values directly
into the bit buffer of the QImage object which is part of the VideoWin class
in the code shown (NOTE: the code line pix = np.ndarray(shape=(height, width),
dtype=np.uint32, buffer=videowin.getImageBuf()) which gets the img.bits()
buffer of the QImage class) Also, for this test I am decoding and displaying
only the first 10 frames of the video file. Versions: Python - 2.7, Qt - 4.8.5
using Pyside
Answer: From the docs for
[`array.fromfile()`](https://docs.python.org/2/library/array.html#array.array.fromfile):
> Read n items (as machine values) from the file object f and **append them to
> the end of the array**. [emphasis added]
The example code does not include an offset into the array, and so the first
frame is read over and over again. A simple fix would be to clear the array
before reading the next frame:
for k in range (0, 100):
del yuv[:]
yuv.fromfile(self.f_yuv, 3*self.width*self.height/2)
And note that, to see a difference, you will need to read at least sixty
frames of the test file you linked to, because the first fifty or so are all
the same (i.e. a plain green background).
|
PyEphem reporting different latitude and longitude than input values
Question: I have used PyEphem to calculate the latitude and longitude of the sun given
date, latitude and longitude of an observer at sea level. I get results I do
not understand. The code I ran follows (on Ipython notebook in windows 7):
import ephem
date = '2015-04-17 12:30:00'
Amundsen = ephem.Observer()
Amundsen.lat = '46.8'
Amundsen.lon = '-71.2'
Amundsen.date = date
sun = ephem.Sun(Amundsen)
sun.compute(Amundsen)
print Amundsen
print "sun latitude: {0}, sun longitude: {1}".format(sun.hlat,sun.hlon)
The resulst I obtained are the following:
<ephem.Observer date='2015/4/17 12:30:00' epoch='2000/1/1 12:00:00'lon='-71:12:00.0' lat='46:48:00.0' elevation=0.0m horizon=0:00:00.0 temp=15.0C pressure=1010.0mBar>
sun latitude: -0:00:00.1, sun longitude: 207:11:10.2
As you can see, when printing the input data, the latitude and the longitude
of my observer seem to have been changed from 46.8 and -71.2 to 46:48 and
-71:12. This is maybe a basic fact, but why does this happen? And how do I
correct for it?
Thanks in advance,
Grégory
Answer: The values, happily, are not changing. You have entered them as decimals with
a decimal point `.` separating each degree into ten tenths, but that is not
how longitude and latitude are traditionally expressed — they are usually
written as a whole number, then sixtieths called “minutes” and then sixtieths-
of-sixtieths called “seconds” which PyEphem separates with the `:` character.
So `46:48` means “46 degrees 48 minutes latitude” because 48/60 = 0.8.
The `libastro` use of `:` is a compromise given the limitations of ASCII.
Traditionally, degrees, minutes, and seconds would be delimited with a degree
symbol, a prime, and a double prime, which is now possible in Unicode but not
widespread in programming languages:
46°48′00″
I note that in the academic papers of Bernard R. Goldstein, on the astronomy
of the late Middle Ages and Renaissance, that an academic notation using a
semicolon for the decimal point and a comma between the minutes and seconds is
used, that looks like:
46;48,00°
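You can verify that the underlying angle never changes; a short sketch,
assuming the standard PyEphem behavior that strings are parsed as degrees,
angles are stored in radians, and printed in sexagesimal form:
    import ephem

    obs = ephem.Observer()
    obs.lat = '46.8'        # a string is parsed as decimal degrees
    print(obs.lat)          # 46:48:00.0 -- the same angle, shown sexagesimally
    print(float(obs.lat))   # the value PyEphem stores internally, in radians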
|
What are Type hints in Python 3.5
Question: One of the talked-about features in `Python 3.5` is said to be `type hints`.
An example of `type hints` is mentioned in this
[article](http://lwn.net/Articles/650904/) and
[this one](http://lwn.net/Articles/640359/), which also mention using type
hints responsibly. Can someone explain more about them, and when they should be
used and when not?
Answer: I would suggest reading [PEP 483](https://www.python.org/dev/peps/pep-0483/)
and [PEP 484](https://www.python.org/dev/peps/pep-0484/) and watching
[this](https://www.youtube.com/watch?v=2wDvzy6Hgxg) presentation by Guido on
Type Hinting. In addition, more examples on Type Hints can be found at their
[documentation topic](http://stackoverflow.com/documentation/python/1766/type-
hints#t=201607251908319482596).
**In a nutshell** : **Type hinting is _literally what it means_ , you hint the
type of the object(s) you're using**.
Due to the highly **dynamic** nature of Python, _inferring or checking the
type_ of an object being used is especially hard. This fact makes it hard for
developers to understand what exactly is going on in code they haven't written
and, most importantly, for type checking tools found in many IDEs [PyCharm,
PyDev come to mind] that are limited due to the fact that they don't have any
indicator of what type the objects are. As a result they resort to trying to
infer the type with (as mentioned in the presentation) around 50% success
rate.
* * *
To take two important slides from the Type Hinting presentation:
### **_Why Type Hints?_**
1. **Helps Type Checkers:** By hinting at what type you want the object to be the type checker can easily detect if, for instance, you're passing an object with a type that isn't expected.
2. **Helps with documentation:** A third person viewing your code will know what is expected where, ergo, how to use it without getting `TypeError`s.
3. **Helps IDEs develop more accurate and robust tools:** Development environments will be better suited to suggesting appropriate methods when they know what type your object is. You have probably experienced this with some IDE at some point, hitting the `.` and having methods/attributes pop up which aren't defined for the object.
### **_Why Static Type Checkers?_**
* **Find bugs sooner** : This is self evident, I believe.
* **The larger your project the more you need it** : Again, makes sense. Static languages offer a robustness and control that dynamic languages lack. The bigger and more complex your application becomes the more control and predictability (from a behavioral aspect) you require.
* **Large teams are already running static analysis** : I'm guessing this verifies the first two points.
**As a closing note for this small introduction** : This is an **optional**
feature and from what I understand it has been introduced in order to reap
some of the benefits of static typing.
You generally **do not** need to worry about it and **definitely** don't need
to use it (especially in cases where you use Python as an auxiliary scripting
language). It should be helpful when developing large projects as _it offers
much needed robustness, control and additional debugging capabilities_.
* * *
## **Type Hinting with mypy** :
In order to make this answer more complete, I think a little demonstration
would be suitable. I'll be using [`mypy`](http://mypy-lang.org/), the library
which inspired Type Hints as they are presented in the PEP. This is mainly
written for anybody bumping into this question and wondering where to begin.
Before I do that let reiterate the following: [PEP
484](https://www.python.org/dev/peps/pep-0484/) doesn't enforce anything; it
is simply setting a direction for function annotations and proposing
guidelines for **how** type checking can/should be performed. You can annotate
your functions and hint as many things as you want; your scripts will still
run regardless of the presence of annotations.
Anyways, as noted in the PEP, hinting types should generally take three forms:
* Function annotations. ([PEP 3107](https://www.python.org/dev/peps/pep-3107/))
* Stub files for built-in/user modules. (Ideal future for type checking)
* Special `# type: type` comments. (Complementing the first two forms)**
Additionally, you'll want to use type hints in conjunction with the new
[`typing`](https://docs.python.org/3/library/typing.html) module introduced
with `Py3.5`. The typing module will save your life in this situation; in it,
many (additional) ABCs are defined along with helper functions and decorators
for use in static checking. Most `ABCs` in `collections.abc` are included but
in a `Generic` form in order to allow subscription (by defining a
`__getitem__()` method).
For anyone interested in a more in-depth explanation of these, the [`mypy
documentation`](http://mypy.readthedocs.org/en/latest/) is written very nicely
and has a lot of code samples demonstrating/describing the functionality of
their checker; it is definitely worth a read.
### Function annotations and special comments:
First, it's interesting to observe some of the behavior we can get when using
special comments. Special `# type: type` comments can be added during variable
assignments to indicate the type of an object if one cannot be directly
inferred. Simple assignments are generally easily inferred, but others, like
lists (with regard to their contents), cannot be.
**Note:** If we want to use any derivative of `Containers` and need to specify
the contents for that container we **must** use the **_generic_** types from
the `typing` module. **These support indexing.**
# generic List, supports indexing.
from typing import List
# In this case, the type is easily inferred as type: int.
i = 0
# Even though the type can be inferred as of type list
# there is no way to know the contents of this list.
# By using type: List[str] we indicate we want to use a list of strings.
a = [] # type: List[str]
# Appending an int to our list
# is statically not correct.
a.append(i)
# Appending a string is fine.
a.append("i")
print(a) # [0, 'i']
If we add these commands to a file and execute them with our interpreter,
everything works just fine and `print(a)` just prints the contents of list
`a`. The `# type` comments have been discarded, _treated as plain comments
which have no additional semantic meaning_.
By running this with `mypy`, on the other hand, we get the following response:
(Python3)jimmi@jim: mypy typeHintsCode.py
typesInline.py:14: error: Argument 1 to "append" of "list" has incompatible type "int"; expected "str"
Indicating that a list of `str` objects cannot contain an `int`, which,
statically speaking, is sound. This can be fixed by either abiding to the type
of `a` and only appending `str` objects or by changing the type of the
contents of `a` to indicate that any value is acceptable (Intuitively
performed with `List[Any]` after `Any` has been imported from `typing`).
Function annotations are added in the form `param_name : type` after each
parameter in your function signature and a return type is specified using the
`-> type` notation before the ending function colon; all annotations are
stored in the `__annotations__` attribute for that function in a handy
dictionary form. Using a trivial example (which doesn't require extra types
from the `typing` module):
def annotated(x: int, y: str) -> bool:
return x < y
The `annotated.__annotations__` attribute now has the following values:
{'y': <class 'str'>, 'return': <class 'bool'>, 'x': <class 'int'>}
If we're a complete noobie, or we are familiar with `Py2.7` concepts and are
consequently unaware of the `TypeError` lurking in the comparison of
`annotated`, we can perform another static check, catch the error and save us
some trouble:
(Python3)jimmi@jim: mypy typeHintsCode.py
typeFunction.py: note: In function "annotated":
typeFunction.py:2: error: Unsupported operand types for > ("str" and "int")
Among other things, calling the function with invalid arguments will also get
caught:
annotated(20, 20)
# mypy complains:
typeHintsCode.py:4: error: Argument 2 to "annotated" has incompatible type "int"; expected "str"
These can be extended to basically any use-case and the errors caught extend
further than basic calls and operations. The types you can check for are
really flexible and I have merely given a small sneak peak of its potential. A
look in the `typing` module, the PEPs or the `mypy` docs will give you a more
comprehensive idea of the capabilities offered.
### Stub Files:
Stub files can be used in two different non mutually exclusive cases:
* You need to type check a module for which you do not want to directly alter the function signatures
* You want to write modules and have type-checking but additionally want to separate annotations from content.
What stub files (with an extension of `.pyi`) are is an annotated interface of
the module you are making/want to use. They contain the signatures of the
functions you want to type-check with the body of the functions discarded. To
get a feel of this, given a set of three random functions in a module named
`randfunc.py`:
def message(s):
print(s)
def alterContents(myIterable):
return [i for i in myIterable if i % 2 == 0]
def combine(messageFunc, itFunc):
messageFunc("Printing the Iterable")
a = alterContents(range(1, 20))
return set(a)
We can create a stub file `randfunc.pyi`, in which we can place some
restrictions if we wish to do so. The downside is that somebody viewing the
source without the stub won't really get that annotation assistance when
trying to understand what is supposed to be passed where.
Anyway, the structure of a stub file is pretty simplistic: Add all function
definitions with empty bodies (`pass` filled) and supply the annotations based
on your requirements. Here, let's assume we only want to work with `int` types
for our Containers.
# Stub for randfunc.py
from typing import Any, Callable, Iterable, List, Set
def message(s: str) -> None: pass
def alterContents(myIterable: Iterable[int]) -> List[int]: pass
def combine(
    messageFunc: Callable[[str], Any],
    itFunc: Callable[[Iterable[int]], List[int]]
) -> Set[int]: pass
The `combine` function gives an indication of why you might want to use
annotations in a different file: they sometimes clutter up the code and
reduce readability (a big no-no for Python). You could of course use type
aliases but that sometimes confuses more than it helps (so use them wisely).
* * *
This should get you familiarized with the basic concepts of Type Hints in
Python. Even though the type checker used has been `mypy` you should gradually
start to see more of them pop-up, some internally in IDEs
([**PyCharm**](http://blog.jetbrains.com/pycharm/2015/11/python-3-5-type-
hinting-in-pycharm-5/),) and others as standard python modules. I'll try and
add additional checkers/related packages in the following list when and if I
find them (or if suggested).
**_Checkers I know of_** :
* [**Mypy**](http://mypy-lang.org/): as described here.
* [**PyType**](https://github.com/google/pytype): By Google, uses different notation from what I gather, probably worth a look.
**_Related Packages/Projects_** :
* [**typeshed:**](https://github.com/python/typeshed/) Official Python repo housing an assortment of stub files for the standard library.
The `typeshed` project is actually one of the best places you can look to see
how type hinting might be used in a project of your own. Let's take as an
example [the `__init__` dunders of the `Counter`
class](https://github.com/python/typeshed/blob/master/stdlib/3/collections.pyi#L78)
in the corresponding `.pyi` file:
class Counter(Dict[_T, int], Generic[_T]):
@overload
def __init__(self) -> None: ...
@overload
def __init__(self, Mapping: Mapping[_T, int]) -> None: ...
@overload
def __init__(self, iterable: Iterable[_T]) -> None: ...
[Where `_T = TypeVar('_T')` is used to define generic
classes](http://mypy.readthedocs.org/en/latest/generics.html#defining-generic-
classes). For the `Counter` class we can see that it can either take no
arguments in its initializer, get a single `Mapping` from any type to an `int`
_or_ take an `Iterable` of any type.
* * *
**Notice** : One thing I forgot to mention was that the `typing` module has
been introduced on a _provisional basis_. From **[PEP
411](https://www.python.org/dev/peps/pep-0411/)** :
> A provisional package may have its API modified prior to "graduating" into a
> "stable" state. On one hand, this state provides the package with the
> benefits of being formally part of the Python distribution. On the other
> hand, the core development team explicitly states that no promises are made
> with regards to the stability of the package's API, which may change for
> the next release. While it is considered an unlikely outcome, such packages
> may even be removed from the standard library without a deprecation period
> if the concerns regarding their API or maintenance prove well-founded.
So take things here with a pinch of salt; I'm doubtful it will be removed or
altered in significant ways but one can never know.
* * *
** Another topic altogether but valid in the scope of type-hints: [`PEP
526`](https://docs.python.org/3.6/whatsnew/3.6.html#pep-526-syntax-for-
variable-annotations) is an effort to replace `# type` comments by introducing
new syntax which allows users to annotate the type of variables in simple
`varname: type` statements.
|
Can't make migrations run after upgrading to Django 1.7
Question: I'm trying to upgrade my Django 1.6.2 application to Django 1.7.10 but am
stuck because the makemigrations command keeps raising an error. I've never
used migrations in this application. When I run the command "python
./manage.py makemigrations", I get the following error:
... # stacktrace
File "/Users/myname/venv/myproject/lib/python2.7/site-packages/django/db/migrations/state.py", line 248, in __init__
raise ValueError(msg.format(field=operations[0][1], model=lookup_model))
ValueError: Lookup failed for model referenced by field my.admin.PhotoQueue.review_queue: my.admin.my.admin.ReviewQueue
where my.admin is the AppConfig label for the "admin" app whose models module
contains the classes in question:
# apps/admin/models.py <- I keep all my apps in an "apps" subdirectory in my project
from django.contrib.auth.models import User
class ReviewQueue(models.Model):
"""Queue contains changes that need to be reviewed."""
user = models.ForeignKey(User)
... # more declarations
class PhotoQueue(models.Model):
"""Queue contains information about photos uploaded by a user."""
review_queue = models.OneToOneField(ReviewQueue, primary_key=True)
As you can see, an item in my review queue can optionally be related to an
item in my photo queue. The ReviewQueue and PhotoQueue classes reside in the
same module and the ReviewQueue is declared right before the PhotoQueue. I've
looked online to see if anyone else has had this problem but didn't see
anything. I've also looked to see if there are any issues relating to
migrations and OneToOneFields, again with no luck. Does anyone know what is
causing this problem? My business is dead if I can't resolve it.
Here are my installed apps and appconfig:
# conf/settings/base.py
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.admindocs',
# Project apps
'apps.admin',
'apps.members',
)
# apps/admin/models/apps.py
from django.apps import AppConfig
class AdminConfig(AppConfig):
name = 'apps.admin'
label = 'my.admin'
Thanks!
Answer: The
[label](https://docs.djangoproject.com/en/1.8/ref/applications/#django.apps.AppConfig.label)
in your app config should not have a dot in it. You could do:
class AdminConfig(AppConfig):
name = 'apps.admin'
label = 'myadmin'
|
Gunicorn Internal Server Error
Question: I'm setting up Gunicorn with my Flask application. I have the following file
structure:
Flask/
run.py
myapp/
__init__.py
views/
homepage/
__init__.py
homepage.py
login/
__init__.py
login.py
blog/
__init__.py
blog.py
The `run.py` file imports the app instance in `Flask/myapp/__init__.py` and
runs it like so:
from myapp import app
def run():
app.run()
Using the command line, I run `gunicorn run:run` and the website starts up. I
go to the website and I get an internal server error stating this:
[2015-09-14 19:00:41 +0100] [35529] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/pavsidhu/.virtualenvs/environment/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 130, in handle
self.handle_request(listener, req, client, addr)
File "/Users/pavsidhu/.virtualenvs/environment/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 171, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: run() takes no arguments (2 given)
What is the issue? Thanks.
Answer: You're supposed to pass [a WSGI callable to `gunicorn`](http://gunicorn-
docs.readthedocs.org/en/latest/run.html#gunicorn). As it turns out `app` is
one, but your `run` function isn't.
So, instead of running `gunicorn run:run`, run: `gunicorn run:app`.
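For context, gunicorn invokes the object you name as `callable(environ,
start_response)`, which is exactly why the traceback says `run() takes no
arguments (2 given)`. Flask's `app` already implements that interface, so
`run.py` can stay as simple as this sketch:
    # run.py
    from myapp import app  # a Flask instance is itself a WSGI callable

    if __name__ == "__main__":
        # app.run() only starts the local development server;
        # gunicorn imports and calls `app` directly.
        app.run()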
|
Python library module implementation
Question: I am trying to write - and understand - some Python code, and I have been
struggling to understand how Python libraries are imported. Let me describe my
situation.
I am trying to mock a raspberry-pi-only python library (RPi.GPIO) in order to
run some unittests in my (x86) laptop. In order to accomplish that, I thought
I should just define the same functions, variables as the GPIO class, and have
all the functions emtpy (just pass). So I had a look at the RPi.GPIO module.
Although I thought I would find the actual implementation of the GPIO class
methods there, I actually saw that their body was empty. For example:
def add_event_detect(*args, **kwargs): # real signature unknown
"""
Enable edge detection events for a particular GPIO channel.
channel - either board pin number or BCM number depending on which mode is set.
edge - RISING, FALLING or BOTH
[callback] - A callback function for the event (optional)
[bouncetime] - Switch bounce timeout in ms for callback
"""
pass
So the question is: where is the actual implementation of these functions, and
what is the point of the empty body (just the `pass` keyword and the
documentation)? How and by whom is this method overridden to get the desired
functionality?
Answer: It should be a wrapper for a C function. And if you want to override
`__import__` as Zizouz212 mentioned, use import hooks instead.
Here is a PEP describing import hooks:
<https://www.python.org/dev/peps/pep-0302/>
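For the original goal of mocking `RPi.GPIO` on x86, one common sketch is to
register a stand-in module in `sys.modules` before the code under test imports
it (the module name under test below is hypothetical):
    import sys
    from unittest.mock import MagicMock  # use the `mock` backport on Python 2

    # Register the fakes before anything executes `import RPi.GPIO as GPIO`
    sys.modules['RPi'] = MagicMock()
    sys.modules['RPi.GPIO'] = MagicMock()

    import code_under_test  # hypothetical module that imports RPi.GPIO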
|
JSON and Python - flatten JSON api for CSV with TypeError: list indices must be integers, not str
Question: I'm trying to flatten JSON-formatted data received via an API with a somewhat
unpredictable hierarchy, with the goal of saving it as a CSV for import into
MySQL, and I'm running into an error: **_TypeError: list indices must be
integers, not str_**
* data1 is a list with the results of the API call ...
CODE:
json_data1 = json.dumps(data1)
## Flatten JSON
from collections import OrderedDict
import csv
outfile = open("output.csv", "w")
writer = csv.writer(outfile, delimiter=",")
data = json.loads(json_data1, object_pairs_hook=OrderedDict)
# Recursively flatten JSON
def flatten(structure, key="", path="", flattened=None):
if flattened is None:
flattened = OrderedDict()
if type(structure) not in(OrderedDict, list):
flattened[((path + "_") if path else "") + key] = structure
elif isinstance(structure, list):
for i, item in enumerate(structure):
flatten(item, "", path + "_" + key, flattened)
else:
for new_key, value in structure.items():
flatten(value, new_key, path + "_" + key, flattened)
return flattened
# Write fields
fields = []
for result in data["results"]:
flattened = flatten(data["results"][0])
for k, v in flattened.iteritems():
if k not in fields:
fields.append(k)
writer.writerow(fields)
# Write values
for result in data["results"]:
flattened = flatten(result)
row = []
for field in fields:
if field in flattened.iterkeys():
row.append(flattened[field])
else:
row.append("")
writer.writerow(row)
Any ideas on where the error is?
Answer: I think you'll need to post more output for someone to troubleshoot this. What
line number is the error on?
In general this error means you're trying to index a list with a non-integer
value. For example, if you have this list:
    items = ["a", "b", "c"]
You can do `items[0]` to get "a". If you do `items["pig"]` you get "TypeError: list
indices must be integers, not str".
Just add some print statements before your list accesses and you should be able
to spot this.
|
Python pandas multiple conditions
Question: Sorry, I apologise now, just started learning Python and trying to get
something working.
Ok dataset is
Buy, typeid, volume, issued, duration, Volume Entered,Minimum Volume, range, price, locationid, locationname
SELL 20 2076541 2015-09-12T06:31:13 90 2076541 1 region 331.21 60008494 Amarr
SELL 20 194642 2015-09-07T19:36:49 90 194642 1 region 300 60008494 Amarr
SELL 20 2320 2015-09-13T07:48:54 3 2320 1 region 211 60008491 Irnin
I would like to filter for a specific location either by name or ID, doesn't
bother me, then to pick the minimum price for that location. Preferably to
hardcode it in, since I only have a few locations I'm interested. e.g
locationid = 60008494.
I see you can do two conditions on one line, but I don't see how to apply it.
So I'm trying to nest it. Doesn't have to be pandas, just seems the first
thing I found that did one part of what I required.
The code I've gotten so far is, and only does the minimum part of what I'm
looking to achieve.
data = pd.read_csv('orders.csv')
length = len(data['typeid'].unique())
res = pd.DataFrame(columns=('Buy', 'typeid', 'volume','duration','volumeE','Minimum','range','price','locationid','locationname'))
for i in range(0,length):
name_filter = data[data['typeid'] == data['typeid'].unique()[i]]
price_min_filter = name_filter[name_filter['price'] == name_filter['price'].min() ]
res = res.append(price_min_filter, ignore_index=True)
i=i+1
res.to_csv('format.csv') # writes output to csv
print "Complete"
UPDATED. OK, so for the latest part, it seems like the following code is the
direction I should be going in. If I could have s = typeid, locationid and
price, that's perfect. So I've written what I want to do; what's the correct
syntax to get that in Python? Sorry, I'm used to Excel and SQL.
import pandas as pd
df = pd.read_csv('orders.csv')
df[df['locationid'] ==60008494]
s= df.groupby(['typeid'])['price'].min()
s.to_csv('format.csv')
Answer: If what you really want is -
> I would like to filter for a specific location either by name or ID, doesn't
> bother me, then to pick the minimum price for that location. Preferably to
> hardcode it in, since I only have a few locations I'm interested. e.g
> locationid = 60008494.
You can simply filter the df on the `locationid` first and then use
`['price'].min()`. Example -
In [1]: import pandas as pd
In [2]: s = """Buy,typeid,volume,issued,duration,Volume Entered,Minimum Volume,range,price,locationid,locationname
...: SELL,20,2076541,2015-09-12T06:31:13,90,2076541,1,region,331.21,60008494,Amarr
...: SELL,20,194642,2015-09-07T19:36:49,90,194642,1,region,300,60008494,Amarr
...: SELL,20,2320,2015-09-13T07:48:54,3,2320,1,region,211,60008491,Irnin"""
In [3]: import io
In [4]: df = pd.read_csv(io.StringIO(s))
In [5]: df
Out[5]:
Buy typeid volume issued duration Volume Entered \
0 SELL 20 2076541 2015-09-12T06:31:13 90 2076541
1 SELL 20 194642 2015-09-07T19:36:49 90 194642
2 SELL 20 2320 2015-09-13T07:48:54 3 2320
Minimum Volume range price locationid locationname
0 1 region 331.21 60008494 Amarr
1 1 region 300.00 60008494 Amarr
2 1 region 211.00 60008491 Irnin
In [8]: df[df['locationid']==60008494]['price'].min()
Out[8]: 300.0
If you want to do it for all the locationids', then as said in the other
answer you can use `DataFrame.groupby` for that and then take the `['price']`
column for the group you want and use `.min()`. Example -
data = pd.read_csv('orders.csv')
data.groupby(['locationid'])['price'].min()
Demo -
In [9]: df.groupby(['locationid'])['price'].min()
Out[9]:
locationid
60008491 211
60008494 300
Name: price, dtype: float64
* * *
For getting the complete row which has minimum values in the corresponding
groups, you can use `idxmin()` to get the index for the minimum value and then
pass it to `df.loc` to get those rows. Example -
In [9]: df.loc[df.groupby(['locationid'])['price'].idxmin()]
Out[9]:
Buy typeid volume issued duration Volume Entered \
2 SELL 20 2320 2015-09-13T07:48:54 3 2320
1 SELL 20 194642 2015-09-07T19:36:49 90 194642
Minimum Volume range price locationid locationname
2 1 region 211 60008491 Irnin
1 1 region 300 60008494 Amarr
|
python "cannot import name 'geolite2'"
Question: I'm having trouble importing a module in Python.
Here's the list of my installed packages from `pip list`:
geoip2
maxmindb
pip
python-geoip
python-geoip-geolite2
request
setuptools
wheel
My Python code starts with:
import sys
from geoip import geolite2
<my code>
The import error occurred at line 2.
I have no idea about this... Is there anyone who can help me?
My code is running on Windows. Please help me.
Thanks for your help in advance.
Answer: You've installed `geoip2` and you're looking to use `geoipe2` (extra "e"). In
addition, in your import you're using `geoip`.
Is wait_window() required for creating a modal dialog in Python Tkinter?
Question: I try to create a modal dialog using Python Tkinter. I found no difference
between using and not using wait_window().
import tkinter as tk
def button_click():
dlg = tk.Toplevel(master=window)
tk.Button(dlg, text="Dismiss", command=dlg.destroy).pack()
dlg.transient(window) # only one window in the task bar
dlg.grab_set() # modal
#window.wait_window(dlg) # why?
window = tk.Tk()
tk.Button(window, text="Click Me", command=button_click).pack()
window.mainloop()
I've seen some examples in that use wait_window() for creating a modal dialog.
So I'm not sure whether the function is required for creating a modal dialog.
I'm using Python 3.5.
Answer: Running your code with `window.wait_window(dlg)` won't change anything, as
`dlg.grab_set()` already creates a modal dialog. Being modal only means that you
cannot close `window` while `dlg` is still alive; you cannot close `window`
because the modal dialog grabs all mouse events from `window` and redirects them
to `null`.
If you want to create a modal dialog without `grab_set()`, you would need to
bind all mouse events to one handler and then decide if they should be allowed
or dismissed, **and** use `wait_window`.
As a modal dialog is defined by _"is anything outside the dialog **and** in my
application available to be clicked"_ == _False_, you already have a modal
dialog using only `grab_set()`.
If your application shall not be able to programmatically close `window`, you
would need `wait_window()` as well.
Hope I made everything clear.
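A case where `wait_window()` does matter is when the calling code must block
until the dialog is closed, e.g. to read a result from it afterwards; a minimal
sketch based on the question's code:
    def button_click():
        dlg = tk.Toplevel(master=window)
        result = tk.StringVar(value="cancelled")
        tk.Button(dlg, text="OK",
                  command=lambda: (result.set("ok"), dlg.destroy())).pack()
        dlg.transient(window)
        dlg.grab_set()
        window.wait_window(dlg)  # blocks here until dlg is destroyed
        print("dialog returned:", result.get())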
|
Mock with submodules for ReadTheDocs
Question: I'm trying to document a Python project with ReadTheDocs. Initially, the build
process would die when it got to:
from osgeo import gdal, osr
I've read the [rtd faq](https://read-the-
docs.readthedocs.org/en/latest/faq.html#i-get-import-errors-on-libraries-that-
depend-on-c-modules) and used mock for the osgeo module that was giving me
trouble. Now the build process makes it past that import but chokes on:
from osgeo.gdalconst import *
With this rather unhelpful error:
RuntimeError: sys.path must be a list of directory names
I'm completely new to using mock but I think the problem is that `osgeo` is a
mock module and therefore does not have the submodule `gdalconst`. How do I
get around that? Is there a way to mock the submodule too?
Answer: A bit late… but I ran across this looking for a solution (using nested modules
with `mock`). I've mocked module and submodules like this:
MOCK_MODULES = ['dbs', 'dbs.apis', 'dbs.apis.dbsClient']
sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
where the order mattered. Hope this helps anyone else looking to solve this.
|
Python - Loadtext with specific lines for huge file
Question: I have to get specific lines from a huge text file. Until now I have tried the
approach below. My aim is to extract columns for a specific iteration, here each
500 lines. But by proceeding with "readlines", I sometimes get crashes because
of the size of the file (up to 4 GB). So I would like to find another way in
order to avoid these problems...
with open('/test.txt') as f:
text = f.readlines()
A = ""
for i in text[3000:3500]:
A+=i
B=A.splitlines()
listed = []
for i in range(len(B)):
C=B[i][3:47].split(" ")
while True:
try:
C.remove("")
except ValueError:
break
listed.append(C)
import numpy as np
import matplotlib.pyplot as plt
#print listed
x = np.array(listed, dtype=float)
y = x.astype(np.float)
plt.plot(y[:,1]);plt.ylim(0,5);plt.show()
This post follows a former
[question](http://stackoverflow.com/questions/32559304/python-loadtext-for-
specific-number-of-lines).
Answer: If I understand correctly you want to get the lines 3000 to 3500. You can do
this like such:
import itertools
with open('test.txt') as f:
lines = list(itertools.islice(f, 3000, 3500))
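If you want to avoid holding even those 500 lines in memory at once, you can
iterate over the slice directly; `islice` skips the first 3000 lines lazily
without reading the whole file into memory:
    import itertools

    with open('test.txt') as f:
        for line in itertools.islice(f, 3000, 3500):
            process(line)  # `process` is a placeholder for your per-line handling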
|
How to make ffmpeg write its output to a named pipe
Question: I know i can make `ffmpeg` put its output to `stdout` and `stderr` using
`pipe:1` and `pipe:2`, respectively, as `output_file` parameter.
([Docs](http://ffmpeg.org/ffmpeg-protocols.html#pipe))
But what about [named pipes](https://en.wikipedia.org/wiki/Named_pipe), can i
make it write to one?
If not, is there a way to redirect the data in `stdout` to a named pipe in Linux? (something like `ffmpeg <parameters> | pipe123`)
This question is a follow-up of [this
question](http://stackoverflow.com/questions/32563527/how-to-keep-piping-
output-of-a-program-to-a-python-script-if-the-program-is-res).
Answer: You could create a named pipe first and have `ffmpeg` write to it using the
following approach:
`ffmpeg` output to named pipe:
# mkfifo outpipe
# ffmpeg -i input_file.avi -f avi pipe:1 > outpipe
FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
...
[avi @ 0x1959670]non-interleaved AVI
Input #0, avi, from 'input_file.avi':
Duration: 00:00:34.00, start: 0.000000, bitrate: 1433 kb/s
Stream #0.0: Video: cinepak, yuv420p, 320x240, 15 tbr, 15 tbn, 15 tbc
Stream #0.1: Audio: pcm_u8, 22050 Hz, 1 channels, u8, 176 kb/s
Output #0, avi, to 'pipe:1':
Metadata:
ISFT : Lavf52.64.2
Stream #0.0: Video: mpeg4, yuv420p, 320x240, q=2-31, 200 kb/s, 15 tbn, 15 tbc
Stream #0.1: Audio: mp2, 22050 Hz, 1 channels, s16, 64 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Press [q] to stop encoding
frame= 510 fps= 0 q=11.5 Lsize= 1292kB time=33.96 bitrate= 311.7kbits/s
video:1016kB audio:265kB global headers:0kB muxing overhead 0.835379%
reading `outpipe` named pipe (`Python` example):
# python -c "import os; fifo_read = open('outpipe', 'r', 0); print fifo_read.read().splitlines()[0]"
RIFFAVI LIST<hdrlavih8j...
...
|
Convert cURL request to Python-Requests request
Question: I want to convert this cURL request to a Python-Requests request since I am
working on a Python wrapper for a REST service
MS_WORD_DOCUMENT=...
CONTENT_TYPE="application/msword"
JSON_REQUEST="{\"documentType\" : \"$CONTENT_TYPE\"}"
curl -X POST -F "meta=$JSON_REQUEST;type=application/json" -F "data=@$MS_WORD_DOCUMENT" $SERVICE_ENDPOINT
How can I convert this to a Python3 Requests library request?
So far I've got to
data = {"metadata": {"documentType": "application/msword",
"Content-Type": "application/json"}}
req = requests.post(
"https://text.s4.ontotext.com/v1/twitie",
auth=("user", "pass"),
headers={"Content-Type": "multipart/mixed"},
data=data,
files={"file": ("sample.docx", content,
"application/octet-stream")})
I don't know if that's the right way to process multipart requests of this type
with requests.
Answer: The Curl command sends specific `multipart/form-data` field names; `meta` and
`data`, and the [documentation for the
API](http://docs.s4.ontotext.com/display/S4docs/Text+Analytics) specifies
specific meta-types to be used.
Moreover, the metadata should be encoded to JSON.
The following should work:
import json
import requests
metadata = json.dumps({"documentType": "application/msword"})
files = {
'meta': ('', metadata, 'application/json'),
'data': ('sample.docx', content, 'application/octet-stream'),
}
req = requests.post(
"https://text.s4.ontotext.com/v1/twitie",
auth=("user", "pass"),
files=files)
The `files` parameter is all that is needed here; each value is a tuple with a
filename, the data to be sent, and the mimetype for that part.
|
Python: List files in subdirectories with specific extension created in last 24 hours
Question: First of all, I'm new to programming and Python in particular, therefore I'm
struggling to find the right solution.
I'm trying to recursively search for files with a specific extension which have
been created only in the last 24 hours, and either print the result to the
screen, save it to a file, or copy those files to a directory.
Below is an example of the code which does most of what I would like to achieve,
except that it finds all files with the given extension; however, I need only
files created in the last 24 or fewer hours.
import os
import shutil
topdir = r"C:\Docs"
dstdir = r"C:\test"
exten = ".png"
for dname, names, files in os.walk(topdir):
for name in files:
if name.lower().endswith(exten):
# Prints result of walk
print(os.path.join(dname, name))
#copy all files with given extension to the dst folder
path = os.path.realpath(os.path.join(dname, name))
shutil.copy2(path, dstdir)
Answer:
import datetime
compare_date = datetime.datetime.today() - datetime.timedelta(hours=24)
Inside the nested loop, you can add this code (note that `st_mtime` is the last
modification time; on Windows, `st_ctime` holds the creation time):
full_path = os.path.join(dname, name)
created_date = datetime.datetime.fromtimestamp(os.stat(full_path).st_mtime)
if created_date > compare_date:
    print(full_path)
|
How to do mutation with word in python?
Question: I'm trying to play with words. For example, I read the word "script" from a txt
file. Then I would like to do all mutations of the letters and write each of
them. So here are the changes:
s=$
s=5
s=S
s=s
c=(
c=[
c={
c=<
c=c
c=C
r=r
r=R
i=i
i=I
i=|
i=1
i=!
.
.
.
What i was thinking to get is
scrypt
$crypt
5cript
Scrypt
s(ript
$(cript
.
.
.
So all possible combinations, similar to `itertools.product`. I'm a little
confused about how to do that. I started like this:
def main():
with open('file1.txt', 'r') as f1, open('file2.txt', 'w') as f2:
for word in f1:
l=len(word)
for i in range(l):
if word[i] = s:
word1=word[i].raplace("$") #don't know if sintacs is ok
f2.write(word1)
else:
if word[i] = c:
.
.
Now I'm confused here. I would have to check for every letter in the alphabet
and I'm complicating this.
I would probably need to have lots of loops inside loops. I think as many
characters as there are in the word, that's how many for loops.
It's too complicated and probably too slow. Is there some easy way? Some function
(tool) to import?
The question is also how to deal with the same letter appearing twice in one word.
P.S. I'm using Python 3.4.2
Answer: Construct a dictionary that maps each letter to its permitted replacements.
Then use `itertools.product` to find all the combinations of those
replacements.
import string
import itertools
replacements = """
s=$
s=5
s=S
c=(
c=[
c={
c=<
c=C
r=R
i=I
i=|
i=1
i=!
"""
d = {c:[c] for c in string.printable}
for line in replacements.strip().split("\n"):
c, replacement = line.split("=")
d[c].append(replacement)
word = "script"
for letters in itertools.product(*[d[c] for c in word]):
print("".join(letters))
Result:
script
scrIpt
scr|pt
scr1pt
...
S(R1pt
S(R!pt
S[ript
S[rIpt
...
SCRIpt
SCR|pt
SCR1pt
SCR!pt
|
remove all elements in a list not relatively prime
Question: I have a list `m` in Python, and I want to remove all the elements in `m` that
are not relatively prime to all previous elements. So if `m=[2,3,4]` I want
the output to be `[2,3]`.
I tried iterating through the values of `m`, but it doesn't work since the size
of `m` changes and then the index value is out of range.
Answer: You can use
[`enumerate`](https://docs.python.org/3/library/functions.html#enumerate) and
[`any`](https://docs.python.org/3/library/functions.html#any) within a list
comprehension and use `fractions.gcd` function to get the gcd for each pairs :
>>> from fractions import gcd
>>> [j for i,j in enumerate(m) if not any(gcd(j,t)!=1 for t in m[:i])]
[3, 100, 7, 11, 17]
Also, as @mgilson mentioned in a comment, as a more efficient alternative to the
simple slicing in `any` you can use the `itertools.islice` function.
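Note that on Python 3.5+, `fractions.gcd` is deprecated in favor of `math.gcd`,
which is a drop-in replacement here:
    >>> from math import gcd  # Python 3.5+
    >>> m = [2, 3, 4]
    >>> [j for i, j in enumerate(m) if not any(gcd(j, t) != 1 for t in m[:i])]
    [2, 3]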
|
How can I select lines in reason of a value in awk?
Question: Let's assume I have a file which structure looks like this:
AAAA 700 something1 something_else1
AAAA 98 something2 something_else2
AAAA 2000 something3 something_else3
BBBB 200 something4 something_else4
BBBB 21 something5 something_else5
BBBB 300 something6 something_else6
I need to extract, for each value in column $1, the whole line having the
highest value in column $2. This means that, for the field AAAA, I would need
to print the line in which $2=2000. The output should thus look like:
AAAA 2000 something3 something_else3
BBBB 300 something6 something_else6
I did it with Python, but the file is huge and the process is very time-
consuming. Is there any way to do it with awk?
Answer:
$ cat tst.awk
$1!=prev { if (rec!="") print rec; max=$2; rec=$0 }
$2 > max { max=$2; rec=$0 }
{ prev=$1 }
END { if (rec!="") print rec }
$ awk -f tst.awk file
AAAA 2000 something3 something_else3
BBBB 300 something6 something_else6
The above assumes the `$1` values are always grouped together as shown in your
sample input. Given that, it only stores 1 record in memory at a time (since
you say your input file is huge that could be important), prints the records
in the same order they were read, will work even for zero or negative `$2`
values, and will not output anything for an empty input file.
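If you'd rather stay in Python, a minimal sketch of the same single-pass idea; it
keeps one record per key in a dict, so it also works when the `$1` values are not
grouped (at the cost of not preserving input order):

    best = {}  # key -> (max value seen so far, full record)
    with open("file") as f:
        for line in f:
            fields = line.split()
            key, value = fields[0], float(fields[1])
            if key not in best or value > best[key][0]:
                best[key] = (value, line.rstrip("\n"))

    for value, record in best.values():
        print(record)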
|
Python OpenCV Template Matching error
Question: I've been messing around with the OpenCV bindings for Python for a while now,
and I wanted to try template matching. I get this error and I have no idea why:
C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\imgproc\src\templmatch.cpp:910: error: (-215) (depth == CV_8U || depth == CV_32F) && type == _templ.type() && _img.dims() <= 2 in function cv::matchTemplate
Anyone have any clues as to why this might be happening?
Source code:
import win32gui
from PIL import ImageGrab
import win32api, win32con
import numpy
deckVar = "deck.png" # Temporary
def click(x,y):
win32api.SetCursorPos((x,y))
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)
margin = 10
def OOO(): # Order Of Operations
print None
def main():
deck = "test"
img = ImageGrab.grab()
HWNDHandle = win32gui.FindWindow(None, "Hearthstone");
x,y,x2,y2 = win32gui.GetWindowRect(HWNDHandle)
print x,y,x2,y2
l = x2-x
h = y2-y
print l,h
img_recog(img,"imgs/my_collection.png")
def img_recog(img,templ):
import cv2
import numpy as np
from matplotlib import pyplot as plt
img2 = numpy.array(img.getdata()).reshape(img.size[0], img.size[1], 3)
template = cv2.imread(templ,0)
w, h = template.shape[::-1]
# All the 6 methods for comparison in a list
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']
img = img2.copy()
method = eval(methods[1])
# Apply template Matching
try:
res = cv2.matchTemplate(img,template,method)
except Exception as e:
print str(e)
raw_input()
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# If the method is TM_SQDIFF or TM_SQDIFF_NORMED, take minimum
if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
top_left = min_loc
else:
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
return cv2.rectangle(img,top_left, bottom_right, 255, 2)
main()
Answer: Pay attention to the error message:
> error: (-215) (depth == CV_8U || depth == CV_32F) && type == _templ.type()
> && _img.dims() <= 2 in function cv::matchTemplate
It means the data type of the image should be CV_8U or CV_32F, and it should
have 3 or less channels.
If you don't know what CV_8U, CV_32F means, see [this
list](http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-
depth).
Probably you are passing numpy arrays with a dtype other than np.uint8 or
np.float32. You can easily convert a numpy array to 8-bit or 32-bit using:
img.astype(np.float32)
img.astype(np.uint8)
Just pay attention that OpenCV expect CV_8U 8-bit data to be in the range
0..255 and CV_32F can be in any range.
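In the code above, `numpy.array(img.getdata())` produces an int64 array (and the
`reshape` also swaps width and height), which is what trips this check. A minimal
sketch of a fix, under the assumption that you also convert the screenshot to
grayscale so its type matches the grayscale template:

    import numpy as np
    import cv2

    img2 = np.array(img)                               # PIL image -> uint8 array of shape (h, w, 3)
    img_gray = cv2.cvtColor(img2, cv2.COLOR_RGB2GRAY)  # single channel, same type as the template
    res = cv2.matchTemplate(img_gray, template, method)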
|
MQTT subscriber to know who is the publisher
Question: From what I read in MQTT protocol [message
payload](http://www.hivemq.com/mqtt-essentials-part-4-mqtt-publish-subscribe-
unsubscribe/), it doesn't seem to support telling who is the publisher on a
published message. But is it possible that a MQTT subscriber to know which
publisher the message are from?
A 'msg.publisher' workaround maybe?
#!/usr/bin/env python
import mosquitto
def on_message(mosq, obj, msg):
print "Publisher: %s, Topic: %s, "Msg: %s" % (msg.publisher, msg.topic, msg.payload)
cli = mosquitto.Mosquitto()
cli.on_message = on_message
cli.connect("127.0.0.1", 1883, 60)
cli.subscribe("dns/all", 0)
cli.subscribe("nagios/#", 0)
while cli.loop() == 0:
pass
Answer: You're right, the MQTT specification has no field in the PUBLISH packet to
specify which publisher a certain message comes from.
I can think of two possible "work-around" implementations:
1) Add the publisher information in the payload of the message. Application
level parsing would allow you to retrieve the publisher ID from the message
payload.
2) Add the publisher information in the topic. You could concoct a clever
topic hierarchy with a level dedicated to the publisher.
For example: `data/<publisher_id>`. Your subscriber could then subscribe to
`data/+` and parse the last level to retrieve the publisher ID.
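A minimal sketch of that second work-around, reusing the mosquitto client from the
question and assuming each publisher publishes to `data/<its_client_id>`:

    def on_message(mosq, obj, msg):
        publisher = msg.topic.split("/")[-1]  # the last topic level carries the publisher ID
        print "Publisher: %s, Msg: %s" % (publisher, msg.payload)

    cli.subscribe("data/+", 0)  # '+' matches exactly one topic level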
|
Is it possible to make a module iterable in Python?
Question: I am making a module that I want to treat as a static container of objects.
These objects are of a class type that I have defined. I want to be able to
import this module and then loop over the objects within. Here is some code
explaining what I mean:
**example.py**
class MyExampleClass(object):
        def __init__(self, var1, var2, var3, var4):
self.var1 = var1
self.var2 = var2
self.var3 = var3
self.var4 = var4
instanceA = MyExampleClass(1, 2, 3, 4)
instanceB = MyExampleClass(4, 3, 6, 7)
instanceC = MyExampleClass(5, 3, 4, 5)
# something like this
def __iter__():
return (instanceA, instanceB, instanceC)
Then I would like to be able to import this and use it like an enum:
import example
for e in example:
# do stuff with e
Is this possible to do in Python? Or will I have to import a list from within
the `example` package?
**example.py**
objects = (instanceA, instanceB, instanceC)
and then
import example
for e in example.objects:
# do stuff with e
Answer: You can achieve that by defining a root class in your module and replacing the
module with an instance of that class. Here is an example:
    import sys

    class ModuleClass(object):
        def __init__(self):
            self.instanceA = MyExampleClass(1, 2, 3, 4)
            ...
        def __iter__(self):
            # Your iterator logic here, for example:
            return iter((self.instanceA, self.instanceB, self.instanceC))

    # and then in the same module code
    sys.modules[__name__] = ModuleClass()
So then you can do what you want, because when you import that module, it will
actually be an instance of your custom iterable `ModuleClass`:
import example
for e in example:
# do stuff with e
|
Python Datareader Stock Exchange markets options
Question: I'm using Datareader to get some stock quotes from Yahoo finance. I would like
to get the values of Euronext Paris Stock exchange market and not the standard
values (NYSE ones I think).
import pandas.io.data as web
import time
today = time.strftime(\"%m/%d/%Y\")
valeur = web.DataReader('ING.PA',data_source='yahoo',start='1/1/2000',end=today)
Is there an option in Datareader method to indicate I want Euronext closing
values ?
I have seen in Yahoo Finance API that there is a tag x for Stock Exchange that
can be useful to precise the market on which you want to get values
(<http://www.marketindex.com.au/yahoo-finance-api>) but I can't see any
example value I can pass to this x tag to try it. And I don't know if I can
use an equivalent in Datareader afterwards.
I have also found a page describing Google exchange codes, including the one
I'm looking for ('EPA'), but I don't know how to use it in DataReader.
<https://github.com/mdengler/stockquote/blob/master/stockquote.py>
Does anyone have a clue about this? Thanks in advance.
Answer: The suffix in the code `.PA` indicates the exchange you want to obtain, Paris
in this case.
Take a look at [Yahoo
Finance](http://finance.yahoo.com/lookup/all;_ylt=Ak0bQJEijLNpCehrNmNeE8LXVax_;_ylu=X3oDMTE4MnA1bzlpBHBvcwMxBHNlYwN5ZmlTeW1ib2xMb29rdXBSZXN1bHRzBHNsawNhbGw-?s=Ingenico&t=A&m=ALL&r=)
to see the different symbols you can use according to the exchange you want to
get data from. Then use the appropriate symbol in the call.
Symbol Name Last Trade Type Industry/Category Exchange
ING.PA Ingenico Group 101.45 Stock Business Services PAR
ING.SW INGENICO GROUP 116.30 Stock EBS
IIE.F INGENICO GROUP 100.90 Stock Business Services FRA
IIE.SG INGENICO GROUP 102.88 Stock Business Services STU
INGIY Ingenico Group 23.05 Stock PNK
INGNV.PA Ingenico S.A. 98.33 Stock PAR
IIEF.EX INGENICO GROUP 115.49 Stock EUX
IIE.BE INGENICO GROUP 106.70 Stock Business Services BER
Here is a quote from the [Yahoo Finance
API](http://www.marketindex.com.au/yahoo-finance-api) page.
> All listed companies have a stock ticker between 1 and 4 characters. E.g.
> Apple has the stock ticker AAPL. As there are multiple exchanges around the
> world, you must specify which exchange your code relates to by adding a
> suffix.
>
> * Australian listed companies require the suffix “.AX” to be added to the
> companies stock code (e.g. BHP.AX).
> * UK listed companies require the suffix “.L” to be added to the companies
> stock code (e.g. BLT.L).
>
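So, for example, to pull the Frankfurt listing from the table above instead of the
Paris one, a sketch reusing the question's own setup with the `.F` suffix:

    import pandas.io.data as web
    import time

    today = time.strftime("%m/%d/%Y")
    # same call as in the question, just with the Frankfurt suffix instead of .PA
    valeur_fra = web.DataReader('IIE.F', data_source='yahoo', start='1/1/2000', end=today)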
|
sudo pip install python-Levenshtein failed with error code 1
Question: I'm trying to install the python-Levenshtein library on linux, but whenever I
try to install it via:
sudo pip install python-Levenshtein
I get this error:
> Command "/usr/bin/python -c "import setuptools, tokenize;**file**
> ='/tmp/pip-build-LAmG4b/python-
> Levenshtein/setup.py';exec(compile(getattr(tokenize, 'open',
> open)(**file**).read().replace('\r\n', '\n'), **file** , 'exec'))" install
> --record /tmp/pip-KGiQPH-record/install-record.txt --single-version-
> externally-managed --compile" failed with error code 1 in /tmp/pip-build-
> LAmG4b/python-Levenshtein
And the error code: error: command 'gcc' failed with exit status 1
I'm using debian linux.
Answer: One of the `python-Levenshtein` maintainers here.
Make sure you have `python-dev` and `build-essential` packages.
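On Debian, assuming apt, that would be something like:

    sudo apt-get install python-dev build-essential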
Are you sure that's the full error message? The actual error seems to be
missing. If a log file is created, can you peek into it and add its contents to
the question?
Also read the official Python package installation guide. [Use virtual
environments](https://packaging.python.org/en/latest/installing/#creating-
virtual-environments). Never do `sudo pip install` unless you have a specific
reason to do so.
|
javascript HTML5 canvas display from python websocket server
Question: I created a websocket server that uses ZeroMQ4 to talk to a middleware. I also
created a piece of JavaScript to display information back from the middleware.
I know the websocket server works and is able to send to the JavaScript, as I
tested it with small string output.
So, I want to send a PNG image from the websocket server to the JavaScript, but
the JavaScript documentation of canvas is confusing and I haven't found a
solid example that is good for a newbie with JavaScript.
This is the JavaScript I have so far; it is able to take data in, but does not
display any image.
var canvas = document.getElementById('stage');
var context = canvas.getContext("2d");
function openWS(){
var websocket = new WebSocket("ws://raptorweb1.no-ip.org:10000");
websocket.binaryType = "arraybuffer";
websocket.onmessage = function(evt) { onMessage(evt) };
websocket.onerror = function(evt) { onError(evt) };
websocket.onopen = function(evt) { onOpen(evt) };
function onOpen(evt){
var luser = document.getElementById("lusername").value;
var ruser = document.getElementById("rusername").value;
var pwd = document.getElementById("password").value;
console.log("Connecting.. ");
websocket.send("SUB[00100]" + luser);
websocket.send("MESSAGE[00100]" + ruser + "[11111]" + pwd);
console.log("Connected.");
}
function onMessage(evt) {
console.log("received: " + evt.data);
drawImageBinary(evt.data);
}
function onError(evt) {
console.log(evt.data);
}
function drawImageBinary(blob) {
var bytes = new Uint8Array(blob);
// console.log('drawImageBinary (bytes.length): ' + bytes.length);
var imageData = context.createImageData(canvas.width, canvas.height);
for (var i=8; i<imageData.data.length; i++) {
imageData.data[i] = bytes[i];
}
context.putImageData(imageData, 0, 0);
var img = document.createElement('img');
//img.height = canvas.height;
//img.width = canvas.width;
img.src = canvas.toDataURL();
}
}
This is the websocket server:
clients = []
from SimpleWebSocketServer import WebSocket, SimpleWebSocketServer, SimpleSSLWebSocketServer
import zmq
import zmq.auth
from zmq.auth.thread import ThreadAuthenticator
import sys
import os
import random
import pygame
from pygame.locals import *
import base64
import string
from threading import Thread
def id_generator(size=10, chars=string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for _ in range(size))
class SimpleChat(WebSocket):
def initZMQ(self):
file = sys.argv[0]
base_dir = os.path.dirname(file)
keys_dir = os.path.join(base_dir, 'certificates')
public_keys_dir = os.path.join(base_dir, 'public_keys')
secret_keys_dir = os.path.join(base_dir, 'private_keys')
self.context = zmq.Context()
self.socket = self.context.socket(zmq.DEALER)
client_secret_file = os.path.join(secret_keys_dir, "client.key_secret")
client_public, client_secret = zmq.auth.load_certificate(client_secret_file)
self.socket.curve_publickey = client_public
self.socket.curve_secretkey = client_secret
server_public_file = os.path.join(public_keys_dir, "server.key")
server_public, _ = zmq.auth.load_certificate(server_public_file)
self.socket.curve_serverkey = server_public
self.width = "0"
self.height = "0"
def ondata(self):
while True:
try:
data = self.socket.recv()
code, self.width = data.split('[55555]')
data = self.socket.recv()
code, self.height = data.split('[55555]')
self.width = int(self.width)
self.height = int(self.height)
self.width = float(self.width /1.5)
self.height = float(self.height /1.5)
print (self.width, self.height)
data = self.socket.recv()
image = pygame.image.frombuffer(data, (int(self.width),int(self.height)),"RGB")
randname = id_generator()
pygame.image.save(image,randname+".png")
out = open(randname+".png","rb").read()
self.sendMessage(out)
print("data sent")
os.remove(randname+".png")
except Exception as e:
print (e)
def handleMessage(self):
try:
message = str(self.data)
protocode, msg = message.split("[00100]")
if protocode == ("SUB"):
print("SUB")
self.socket.setsockopt(zmq.IDENTITY, str(msg))
self.socket.connect("tcp://127.0.0.1:9001")
Thread(target=self.ondata).start()
elif protocode == ("MESSAGE"):
print("MESSAGE")
msg = str(msg)
ident, mdata = msg.split("[11111]")
msg = ('%sSPLIT%s' % (ident, mdata))
self.socket.send(str(msg))
else:
raise Exception
except Exception as e:
print (e)
def handleConnected(self):
print (self.address, 'connected')
clients.append(self)
self.initZMQ()
def handleClose(self):
clients.remove(self)
print (self.address, 'closed')
for client in clients:
client.sendMessage(self.address[0] + u' - disconnected')
server = SimpleWebSocketServer('', 10000, SimpleChat)
server.serveforever()
Answer: I managed to solve this problem.
It turns out that both my JavaScript and my Python server were wrong.
This is the function that works for me when processing the message from the
server:
function onMessage(evt) {
var img = new Image();
img.src = "data:image/png;base64,"+evt.data;
img.onload = function () {
context.drawImage(img,0,0);
}
}
I had to add a base64.b64encode on my server right before I send the picture.
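On the server side, that change is a one-line encode in `ondata`; a sketch of the
relevant lines:

    import base64

    out = open(randname + ".png", "rb").read()
    self.sendMessage(base64.b64encode(out))  # base64-encode so the JS above can build a data URL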
|
Cannot retrieve data from Strawpoll API
Question: I have a simple python code that goes to [this
link](http://maps.googleapis.com/maps/api/geocode/json?address=googleplex&sensor=false)
and retrieves it's data. Here is the code
import urllib, json
url = "http://maps.googleapis.com/maps/api/geocode/json?address=googleplex&sensor=false"
htmlfile = urllib.urlopen(url)
data = json.loads(htmlfile.read())
print data
Running the code returns the data from the url.
{u'status': u'ZERO_RESULTS', u'results': []}
I'd like to do the same, but for Strawpoll. After reading through their [API
documentation](https://github.com/strawpoll/strawpoll/wiki/API), it appears to
be the same formula. Going to the strawpoll link for testing, it shows me the
same content structure as the Google link shown above. The API documentation
states that "All resources will return data in JSON." But I am not getting
any data back; I'm getting errors. The code is exactly the same, but with the
edited url.
import urllib, json
url = "http://strawpoll.me/api/v2/polls/1/"
htmlfile = urllib.urlopen(url)
data = json.loads(htmlfile.read())
print data
Running the code gives me a few errors; I would post an image, but Stack
Overflow won't let me...
The last error I receive is "ValueError: No JSON object could be loaded". But
the API Documentation said that data is returned as JSON.
Removing the `json.loads` gives me pure html instead. Here is the code for
that. Again, exactly the same but removed the `json.loads`.
import urllib
url = "http://strawpoll.me/api/v2/polls/1/"
htmlfile = urllib.urlopen(url)
data = htmlfile.read()
print data
What am I doing wrong?
Answer: I just ran your code and looked into the HTML response. Perhaps you are not
setting the right HTTP headers? It says that access is denied, but I'm not
sure why that would be. I'd recommend using [requests](http://www.python-
requests.org/en/latest/).
>>> url = "http://strawpoll.me/api/v2/polls/1/"
>>> import requests
>>> requests.get(url).json()
{u'id': 1, u'multi': False, u'votes': [14683, 31165, 5635, 7397], u'options': [u'Sucker punch ', u'Pirates of carribian ', u'Prison logic', u'Witchhunter'], u'title': u'What movie should we watch'}
Since you are not opening the URL from a browser, it could be an effort by
strawpoll.me to protect their content from being scraped. In fact, I found
this line in the HTML response:
<p>The owner of this website (strawpoll.me) has banned your access based on your browser's signature (***).</p>
|
Python - Permutation/Combination column-wise
Question: I have a list
mylist = [
['f', 'l', 'a', 'd', 'l', 'f', 'k'],
['g', 'm', 'b', 'b', 'k', 'g', 'l'],
['h', 'n', 'c', 'a', 'm', 'j', 'o'],
['i', 'o', 'd', 'c', 'n', 'i', 'm'],
['j', 'p', 'e', 'e', 'o', 'h', 'n'],
]
I want to do permutations/combinations column-wise, such that the elements of each
column are restricted to that column, i.e. f,g,h,i,j remain in Column 1, l,m,n,o,p
remain in Column 2 and so on, in the results of the permutation/combination. How
can this be achieved in Python 2.7?
Answer: You could [use
`zip(*mylist)`](https://docs.python.org/2/library/functions.html#zip) to list
the "columns" of `mylist`. Then use [the `*`
operator](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-
in-python/) (again) to unpack those lists as arguments to `IT.product` or
`IT.combinations`. For example,
import itertools as IT
list(IT.product(*zip(*mylist)))
yields
[('f', 'l', 'a', 'd', 'l', 'f', 'k'),
('f', 'l', 'a', 'd', 'l', 'f', 'l'),
('f', 'l', 'a', 'd', 'l', 'f', 'o'),
('f', 'l', 'a', 'd', 'l', 'f', 'm'),
...]
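As a quick sanity check on the size of the result: each of the 7 columns contributes
5 choices, so the full product should have 5**7 entries. A sketch (Python 2.7, where
`zip` returns a list):

    import itertools as IT

    cols = zip(*mylist)               # 7 columns of 5 entries each
    combos = list(IT.product(*cols))
    print len(combos)                 # 5 ** 7 == 78125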
|
Trouble with command line execution or R script file from within Python shell
Question: I'm currently using the `optparse` package cast an R script file as a command
line executable file that accepts C-style long and short flags. The program is
running on Ubuntu. The execution of the overall application is controlled by a
Python script that (1) first uses `os.system` to call `chmod` on the
`script.R` file as follows:
import os
os.system("chmod +x script.R; export PATH=$PATH:`pwd`")
I then attempt to execute the program, again from within Python using
`os.system` as follows:
program_call = "script.R --arg1 1"
os.system(program_call)
This returns the error:
sh: 1: script.R: not found
32512
The really puzzling thing is that this was working fine just a day ago and now
it's erroring out. I'm developing this application with several other people,
so I'm wondering if this could be caused by a change in my administrative
permissions. I've verified that all necessary file are contained within the
current working directory.
Answer: The change to the `PATH` environment variable in your first call to
`os.system` won't carry over to the second call, since it's a separate shell
process. If you instead modify `PATH` within Python, it should work. Try
os.environ['PATH'] += ":" + os.getcwd()
os.system("chmod +x script.R")
program_call = "script.R --arg1 1"
os.system(program_call)
|
Web scraper script - How can I make it run quicker?
Question: I just started with python 3 and love reading light novels, so the first
python project I made is a script which web scrapes & downloads my favourite
light novel.
Everything works fine so far but it is really slow, especially checking
whether a chapter is actually in the folder and downloading the chapters.
Right now the script needs 17.8 minutes to check and download 694 chapters.
Are there any ways to at least speed up the checking process? Because all the
actual chapters only have to be downloaded once.
<https://github.com/alpenmilch411/LN_scrape/blob/master/LN_scraper.py>
import requests
from bs4 import BeautifulSoup
import os
import getpass
#Gets chapter links
def get_chapter_links(index_url):
r = requests.get(index_url)
soup = BeautifulSoup(r.content, 'html.parser')
links = soup.find_all('a')
url_list = []
for url in links:
if 'http://www.wuxiaworld.com/cdindex-html/book' in str(url):
url_list.append((url.get('href')))
return url_list
#Gets chapter content
def get_chapters(url):
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
chapter_text = soup.find_all('div',{'class':"entry-content"})
#Puts chapter text into 'chapter'-variable
chapter = ''
for c in chapter_text:
#Removing 'Previous Next Chapter'
content = c.text.strip() # strip??
chapter += content.strip('Previous Next Chapter') # strip??
return chapter
#Gets title of chapter
def get_title(url):
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
title = soup.find_all('h1',{'class':'entry-title'})
chapter_title = ''
for l in title:
chapter_title += l.text
return chapter_title
#Gets title of story
def get_story_title(url):
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
story = soup.find_all('h1',{'class':"entry-title"})
story_title = ''
for content in story:
story_title += content.text
return story_title
#url on which links can be found
links = 'http://www.wuxiaworld.com/cdindex-html/'
#Checks whether a directory already exists and creates a new one if necessary
story_title = get_story_title(links)
path = '/users/{}/documents/'.format(getpass.getuser())+'{}'.format(story_title)
if not os.path.isdir(path):
os.mkdir(path)
link_list = get_chapter_links(links)
#Copys chapters into text file
for x in link_list:
#Checks whether chapter already exists
#TODO Make checking process quicker
chapter_title = get_title(str(x)).replace(',','') + '.txt'
if not os.path.isfile(path + '/' + chapter_title):
story_title = get_story_title(links)
chapter_text = get_chapters(str(x))
file = open(path + '/' + chapter_title, 'w')
file.write(chapter_text)
file.close()
print('{} saved.'.format(chapter_title.replace(',','')))
print('All chapters are up to date.')
Answer: Use lxml with BeautifulSoup; it is faster than html.parser.
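For example, assuming lxml is installed (`pip install lxml`), the only change needed
in each helper is the parser argument:

    soup = BeautifulSoup(r.content, 'lxml')  # instead of 'html.parser'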
|
Undeclared, uninitialised variables being used in expressions: what to think?
Question: I'm struggling to understand the following code snippet (located in a file
called `program.js`). My issue is that I can't find where
`CODERBOT_PROG_SAVEONRUN` is declared and/or initialized in this file. No
external code or library appears to be imported.
I'm running into the same issue in many other places in
[this](http://www.coderbot.org/en/index.html) particular
[project](https://github.com/CoderBotOrg/coderbot). Is this a quirky feature
of JavaScript, or is there somewhere else I should be looking?
What should I think if a variable is used but not initialized and declared in
a given JavaScript file?
Where is it coming from if there is no obvious "import" statement?
function runProg() {
var bot = new CoderBot();
// Generate JavaScript code and run it.
window.LoopTrap = 1000;
Blockly.Python.INFINITE_LOOP_TRAP = ' get_prog_eng().check_end()\n';
var code = Blockly.Python.workspaceToCode();
if(CODERBOT_PROG_SAVEONRUN) {
Blockly.Python.INFINITE_LOOP_TRAP = null;
var xml_code = Blockly.Xml.workspaceToDom(Blockly.mainWorkspace);
var dom_code = Blockly.Xml.domToText(xml_code);
var data = {'name': prog.name, 'dom_code': dom_code, 'code': code};
try {
$.ajax({url: '/program/save', data: data, type: "POST", success:function(){
loadProgList();
}});
}catch (e) {
alert(e);
}
}
try {
var data = {'name': prog.name, 'code': code};
$.ajax({url: '/program/exec', data: data, type: "POST"});
$("#dialogRunning").popup("open", {transition: "pop"});
setTimeout(statusProg, 1000);
} catch (e) {
alert(e);
}
}
Answer: In JavaScript, there is a global context, and local contexts defined by
functions. If a variable is not defined inside a function, it is defined on a
global context. In a browser, the global context is `window`; all scripts you
run in that window share the same global context.
The variable you are looking for is defined in `templates/config_params.html`.
Both it, and the `program.js` script are included from `templates/main.html`,
which makes global variables of each visible to the other when displaying that
page.
|
Simple Calculator using HTML forms in python django
Question: Below is my html form:
<form id="calci_form" method="get" action="#">
<input type="hidden" name="prev_val" value="{{prev_val}}"></input>
<input type="hidden" name="curr_val" value="{{curr_val}}"></input>
<input type="hidden" name="op_sign" value="{{opsign}}"></input>
<div id="calculator">
<table id="tableCalci">
<tr id="row1">
<td colspan="4"><input type="text" value="0" class="display" name="user_input">{{result}}</input></td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr class="hover">
<td><button id="7" name="numval" value="7" type="submit"></button></td>
<td><button id="8" name="numval" value="8" type="submit"></button></td>
<td><button id="9" name="numval" value="9" type="submit"></button></td>
<td><button id="plus" name="sym" type="submit" value="add">+</button>
</td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr class="hover">
<td><button id="4" name="numval" value="4" type="submit"></button></td>
<td><button id="5" name="numval" value="5" type="submit"></button></td>
<td><button id="6" name="numval" value="6" type="submit"></button></td>
<td><button id="minus" name="sym" value="minus" type="submit">−
</button></td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr class="hover">
<td><button id="1" name="numval" value="1" type="submit"></button></td>
<td><button id="2" name="numval" value="2" type="submit"></button></td>
<td><button id="3" name="numval" value="3" type="submit"></button></td>
<td><button id="times" name="sym" value="times" type="submit">×
</button></td>
</tr>
<tr>
<td> </td>
<td> </td>
<td> </td>
<td> </td>
</tr>
<tr class="hover">
<td> </td>
<td><button id="0" name="numval" value="0" type="submit"></button></td>
<td><button id="equal" name="sym" type="submit">=</button></td>
<td><button id="divide"; name="sym" type="submit" value="divide">÷
</button></td>
</tr>
</table>
</div>
</form>
Views.py file:
if 'sym' in request.GET:
if request.GET['sym'] == 'add':
first=request.GET['result']
opsign='\+'
return render(request,'calculator.html',{'result':first,'prev_val':first,'curr_val':second,'opsign':opsign})
elif request.GET['sym'] == '=':
if 'prev_val' in request.GET and request.GET['prev_val']:
first=request.GET['prev_val']
if 'result' in request.GET and request.GET['result']:
second=request.GET['result']
try:
result=add(10,20)
except ValueError:
err="Error: Incorrect Number"
except ZeroDivisionError:
err="Error: Division by zero"
return render(request,'calculator.html',{'result':result,'error':err})
else:
return render(request,'calculator.html',{'error':'No Operation selected'})
Solution required: when I click the "Plus" or "Equal" button, the above-mentioned
function is not invoked at all. Control is not passed from the HTML page to the
function in the views.py file. Where am I making the mistake, and why is control
not reaching the function? Any help is appreciated.
Answer: In my views.py file I had written the operations such as add, sub and mul at
the end. So I just moved them to the top of the file, below the import. It worked.
Views.py file:
    from django.shortcuts import render
def add(a,b):
return a+b
def mul(a,b):
return a*b
def sub(a,b):
return a-b
def operation(request):
if 'sym' in request.GET:
if request.GET['sym'] == 'add':
first=request.GET['result']
opsign='\+'
return render(request,'calculator.html',{'result':first,'prev_val':first,'curr_val':second,'opsign':opsign})
elif request.GET['sym'] == '=':
if 'prev_val' in request.GET and request.GET['prev_val']:
first=request.GET['prev_val']
if 'result' in request.GET and request.GET['result']:
second=request.GET['result']
try:
result=add(10,20)
except ValueError:
err="Error: Incorrect Number"
except ZeroDivisionError:
err="Error: Division by zero"
return render(request,'calculator.html',{'result':result,'error':err})
else:
return render(request,'calculator.html',{'error':'No Operation selected'})
|
How to arrange lines into columns of a csv file with python?
Question: I am a chemistry student and I am interested in performing conformational
analysis of molecules. I have performed a Potential Energy Surface scan on
coumaric acid in order to find the most stable conformer. With this simple
process the different spatial arrangements due to rotations of groups of atoms
about a bond are visualized. The image **pes_molecule.png** of the molecule
shows clearly the two varying dihedrals of the chain.
The program used for this purpose is called Gaussian 09 and gives the
following **pes5part.csv** output for the first five conformers:
1 2 3 4 5
    Eigenvalues --   -570.08934 -570.08821 -570.08676 -570.08521 -570.08384
B1 1.38384 1.38327 1.38324 1.38348 1.38413
B2 1.38571 1.38662 1.38692 1.38687 1.38631
A2 119.68274 119.74315 119.80026 119.84218 119.85816
B3 1.39004 1.38856 1.38754 1.38685 1.38683
A3 119.90377 119.88911 119.86542 119.83707 119.82679
D3 359.78590 359.83552 359.88306 359.93484 359.98413
B4 1.37736 1.37902 1.38023 1.38107 1.38117
A4 119.75636 119.73537 119.72486 119.72923 119.74312
D4 0.71367 0.72647 0.69117 0.56509 0.38069
B5 1.39645 1.39466 1.39330 1.39215 1.39158
A5 121.33129 121.30763 121.28873 121.27166 121.23298
D5 0.35956 0.44698 0.45240 0.42630 0.33448
B6 1.47220 1.47528 1.47926 1.48347 1.48738
A6 122.40820 121.98088 121.61637 121.36363 121.16036
D6 180.48284 181.09688 181.65183 182.01495 181.86758
B7 1.32697 1.32601 1.32486 1.32369 1.32268
A7 126.15279 125.45399 124.91354 124.58356 124.35302
D7 326.35068 316.35068 306.35068 296.35068 286.35068
B8 1.47594 1.47706 1.47838 1.47958 1.48079
A8 119.99708 120.12965 120.23195 120.29720 120.33716
D8 180.53457 180.77470 180.92143 180.91869 180.76068
B9 1.07411 1.07413 1.07416 1.07418 1.07420
A9 118.93985 118.98599 119.01911 119.04122 119.05329
D9 181.37285 181.38492 181.22672 180.94401 180.58221
B10 1.34694 1.34770 1.34843 1.34907 1.34959
A10 122.64744 122.58131 122.55418 122.55000 122.56749
D10 180.42161 180.46502 180.42820 180.34924 180.21926
B11 1.07626 1.07630 1.07630 1.07624 1.07612
A11 119.03402 119.08722 119.10807 119.12392 119.13418
D11 179.35212 179.21303 179.20177 179.31786 179.55673
B12 1.07697 1.07704 1.07710 1.07715 1.07720
A12 120.07413 120.05334 120.01240 119.97693 119.94390
D12 180.48654 180.55485 180.52338 180.39366 180.25905
B13 1.07508 1.07529 1.07540 1.07548 1.07561
A13 119.03861 119.18885 119.28342 119.31016 119.29960
D13 181.28569 181.16448 180.90103 180.58626 180.30590
B14 0.94291 0.94286 0.94282 0.94279 0.94274
A14 111.19697 111.19860 111.17512 111.14446 111.13678
D14 359.87694 359.98739 360.03935 359.94679 360.14975
B15 1.33041 1.33009 1.32973 1.32951 1.32933
A15 111.93106 111.92554 111.91202 111.89198 111.87131
D15 180.31345 180.31345 180.31345 180.31345 180.31345
B16 1.19235 1.19199 1.19165 1.19132 1.19107
A16 126.00937 125.96822 125.92197 125.88559 125.85792
D16 0.53326 0.61269 0.54073 0.55376 0.45438
B17 1.07741 1.07759 1.07781 1.07807 1.07828
A17 116.61938 117.00542 117.31889 117.52706 117.69428
D17 149.32579 139.91922 130.07838 119.74879 108.88744
B18 1.07393 1.07424 1.07440 1.07445 1.07448
A18 123.00819 122.72745 122.54598 122.45741 122.42974
D18 0.14076 0.61929 0.95343 1.10958 0.96334
B19 0.94770 0.94770 0.94774 0.94780 0.94787
A19 108.07785 108.09603 108.12787 108.16255 108.20337
D19 180.24961 180.28903 180.28314 180.25552 180.18273
**My goal is to create a csv file** with the following arrangement:
Eigenvalues D7 D15
-570.08934 326.35068 180.31345
-570.08821 316.35068 180.31345
-570.08676 306.35068 180.31345
-570.08521 296.35068 180.31345
-570.08384 286.35068 180.31345
The reason I need this is to create the 3D PES graph of the energy and the two
dihedrals and afterwards to retrieve the conformer with the lowest energy. For
this purpose I have created the following script:
#! /usr/bin/python2.7
import csv
import re
ifile =open('pes5part.csv', 'rb')
infile = csv.reader(ifile)
for line in open('pes5part.csv'):
rec = line.strip()
if rec.startswith('Eigenvalues') or rec.startswith('D7') or rec.startswith('D15'):
print line
When the script runs the following is printed into the terminal:
Eigenvalues -- -570.08934 -570.08821 -570.08676 -570.08521 -570.08384
D7 326.35068 316.35068 306.35068 296.35068 286.35068
D15 180.31345 180.31345 180.31345 180.31345 180.31345
So, in order to proceed, I need your help to arrange the eigenvalues from the
first line into the first column, then the values of the second line (angle D7)
into the second column, and finally the values of angle D15 into the third
column, as depicted in **my goal csv file** above.
The complete PES scan file output from Gaussian with all 361 conformers is the
**pesFULL.csv** :
The final complete desired PES file created by hand with all 361 conformers
after 5 hours of typing is **pes.ods**
while the final PES graphs are depicted in the files **pes_graph1.png** and
**pes_graph2.png**
I have attached all above files inside the shared dropbox folder
<https://www.dropbox.com/sh/5185f19tifpfr8s/AAB8cj0-niTFGbfGtEvjmfdGa?dl=0>
Thank you in advance developers for any suggestion or help.
Answer: This is a very basic example, but it should do the job. Pay attention to use
the right delimiter. You can modify the print statement to get the right
formatting.
CSV: CSV stands for Comma Separated Values, but there are at least three
possible separators in CSV files. Tools and libraries can use semicolons,
commas or the tab character as the separator. Depending on with which
separator the file was created you have to make sure to use the same separator
when reading it. The csv library in python calls the separator delimiter.
Since the input file wasn't posted I can't know which separator is being used
in it.
import csv
D = list(csv.reader(open(r"pes5part.csv"), delimiter=";"))
for l in zip(*filter(lambda e: e[0].strip() in ["Eigenvalues", "D7", "D15"], D)):
print "\t".join(l)
Of course doing it step-wise is not necessary, but this way I find it easier
to read.
Upon further study of your question and example, I think the issue is that
although the file has the csv extension, it's not a proper CSV. So try this
instead:
import re
splitter = re.compile("\s+")
D = [splitter.split(a) for a in open(r"pes5part.csv").readlines()]
for l in zip(*filter(lambda e: e[0] in ["Eigenvalues", "D7", "D15"], D)):
print "\t".join(l)
|
Python: How can the same socket object serve different clients?
Question: As is mentioned in the documentation: [Python
socket.accept()](https://docs.python.org/3/library/socket.html#socket.socket.accept)
> Accept a connection. The socket must be bound to an address and listening
> for connections. The return value is a pair (conn, address) where conn is a
> new socket object usable to send and receive data on the connection, and
> address is the address bound to the socket on the other end of the
> connection.
>
> The newly created socket is non-inheritable.
>
> Changed in version 3.4: The socket is now non-inheritable.
_server code_
>>> from socket import *
>>> sock = socket(AF_INET, SOCK_STREAM)
>>> sock.bind(("localhost", 20000))
>>> sock.getsockname()
('127.0.0.1', 20000)
>>> sock.listen(1)
>>> while True:
... conn, address = sock.accept()
... print("Address of client : {0}".format(address))
... print("Address of socket : {0}".format(conn.getsockname()))
...
Address of client : ('127.0.0.1', 47165)
Address of socket : ('127.0.0.1', 20000)
Address of client : ('127.0.0.1', 47166)
Address of socket : ('127.0.0.1', 20000)
_client code_
>>> from socket import *
>>> sclient1 = socket(AF_INET, SOCK_STREAM)
>>> sclient2 = socket(AF_INET, SOCK_STREAM)
>>> sclient1.connect(("localhost", 20000))
>>> sclient2.connect(("localhost", 20000))
The address of the new `socket` object that is returned, is always the same as
the original `socket` which was actually accepting connections.
I always thought that the server would create a new `socket` object with a
different random port, but as can be seen above, even for multiple clients,
the address and port of new `conn` object is still the same. How is then the
server able to handle multiple clients?
**EDIT** : I know the above code is blocking. If I use multiple threads to
handle different client connections, I'll have to send the new socket object
and client address to my thread function. Therefore, multiple threads are then
handling multiple clients using the same server address and port.
_Threaded server_
>>> from socket import *
>>> import threading
>>> def handler(conn, address):
... print("Address of client : {0}".format(address))
... print("Address of socket : {0}".format(conn.getsockname()))
...
>>> sock = socket(AF_INET, SOCK_STREAM)
>>> sock.bind(("localhost", 20000))
>>> sock.listen(1)
>>> while 1:
... conn, address = sock.accept()
... t = threading.Thread(target=handler, args=[conn, address])
... t.start()
...
Address of client : ('127.0.0.1', 47169)
Address of socket : ('127.0.0.1', 20000)
Address of client : ('127.0.0.1', 47170)
Address of socket : ('127.0.0.1', 20000)
Answer: > I always thought that the server would create a new socket object with a
> different random port
That would require telling the client which _new_ port to use for the
connection. Nicely enough, it is not necessary, see below.
> How is then the server able to handle multiple clients?
The tuple `(server_addr, server_port, client_addr, client_port)` is unique
after a client connects. When a packet comes in, the network stack searches
for open connections matching this tuple and redirects incoming packets to the
relevant socket (/file descriptor).
The _server socket_ (which you perform `accept()` on), is _unconnected_ (it is
_`listen()`ing_), but bound. This means it doesn't have a peer on the other
side (no client address), but it does have a local address (server side).
`accept()` returns a new socket. This one is _bound_ and _connected_. Bound is
the same as above: it has a local address and port. That address is the same
as for the server socket, however the client socket state is different from
the state of the server socket: It is connected. This means that there is a
known peer (with an address) on the other side that we can communicate with.
We also have that peer's address (the _peer address_) and port. This
information is enough to uniquely identify the connection.
The client socket only accepts data that matches all four of `(server_addr,
server_port, client_addr, client_port)`.
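You can see the distinguishing half of that tuple on the server by also printing the
peer address of the accepted socket, for example:

    conn, address = sock.accept()
    print("Local end : {0}".format(conn.getsockname()))  # always ('127.0.0.1', 20000)
    print("Remote end: {0}".format(conn.getpeername()))  # differs per client, e.g. ('127.0.0.1', 47165)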
|
Get the last date of each month in a list of dates in Python
Question: I'm using Python 2.7, PyCharm and Anaconda.
I have a `list` of dates and I'd like to retrieve the last date of each month
present in the array.
Are there any functions or libraries that could help me to do this?
I read the dates from a CSV file and stored them as `datetime`.
I have the following code:
Dates=[]
Dates1=[]
for date in dates:
temp=xlrd.xldate_as_tuple(int(date),0)
Dates1.append(datetime.datetime(temp[0],temp[1],temp[2]))
for date in Dates1:
if not (date<startDate or date>endDate):
Dates.append(date)
To make it clear, suppose I have:
Dates = [2015-01-20, 2015-01-15, 2015-01-17, 2015-02-21, 2015-02-06]
(Consider it being in `datetime` format.)
The list I'd like to retrieve is:
[2015-01-20, 2015-02-21]
So far I've googled around, especially in Stack Overflow, but I could only
find answers to how I could get the last date of each month, but not from a
user-specified list.
Answer: For year `y` and month `m`, `calendar.monthrange(y, m)[1]` returns the day
number of the last day of the month.
The following script takes a list of `datetime` object called `dates` and
makes a new list, `month_last_dates`, containing `datetime` objects
corresponding to the last date of each month in which the members of `dates`
fall.
import datetime
import calendar
tuples = [(2015, 8, 1), (2015, 9, 16), (2015, 10, 4)]
dates = [datetime.datetime(y, m, d) for y, m, d in tuples]
month_last_dates = len(dates) * [None]
for i, date in enumerate(dates):
y, m, d = date.year, date.month, date.day
last = calendar.monthrange(y, m)[1]
print y, m, last # Output for testing purposes.
month_last_dates[i] = datetime.datetime(y, m, last)
Here is an equivalent script written more concisely with the help of a list
comprehension:
import datetime
import calendar
tuples = [(2015, 8, 1), (2015, 9, 16), (2015, 10, 4)]
dates = [datetime.datetime(y, m, d) for y, m, d in tuples]
month_last_dates = [datetime.datetime(date.year, date.month,
calendar.monthrange(date.year, date.month)[1]) for date in dates]
# Output for testing purposes.
for date in month_last_dates:
print date.year, date.month, date.day
In your case, given the list `Dates`, you can make a new list like this:
last_dates = [datetime.datetime(date.year, date.month,
calendar.monthrange(date.year, date.month)[1]) for date in Dates]
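Note that the example output in the question (`[2015-01-20, 2015-02-21]`) is the
latest date *from the list* in each month, not the calendar month-end. If that is
what you actually want, a sketch:

    latest = {}
    for date in Dates:
        key = (date.year, date.month)
        if key not in latest or date > latest[key]:
            latest[key] = date
    result = sorted(latest.values())  # [datetime(2015, 1, 20), datetime(2015, 2, 21)] for the example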
|
Eliminating warnings from scikit-learn
Question: I would like to ignore warnings from all packages when I am teaching, but
scikit-learn seems to work around the use of the `warnings` package to control
this. For example:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
from sklearn import preprocessing
/usr/local/lib/python3.5/site-packages/sklearn/utils/fixes.py:66: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if 'order' in inspect.getargspec(np.copy)[0]:
/usr/local/lib/python3.5/site-packages/sklearn/utils/fixes.py:358: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
if 'exist_ok' in inspect.getargspec(os.makedirs).args:
Am I using this module incorrectly, or is sklearn doing something it's not
supposed to?
Answer: It annoys me to the extreme that sklearn [forces
warnings](https://github.com/scikit-learn/scikit-learn/issues/2531).
I started using this at the top of main.py:
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
#... import sklearn stuff...
|
Python: Writing Counter to a csv file
Question: I have a csv file of data that has the columns 'number', 'colour',
'number2', 'foo', 'bar', which looks like:
12, red, 124, a, 15p
14, blue, 353, c, 7g
12, blue, 125, d, 65h
12, red, 124, c, 12d
I want to count the number of times number, colour and number2 occur together;
for example, the output from the above list would be: '12, red, 124: 2',
'14, blue, 353: 1', '12, blue, 125: 1'. I've done this by using:
import csv
datafile=open('myfile.csv','r')
usefuldata=[]
for line in datafile:
usefuldata.append(line)
from collections import Counter
outfile1=Counter((line[1],line[2],line[3]) for line in usefuldata)
print(outfile1)
This gives me :
    Counter({('12','red','135'): 21, ('15','blue','152'): 18, ('34','green','123'): 16, etc})
Which is great, but I'd like to write this out to a file. I'd like the file to
have 4 columns: number, colour, number2, and count. I realise this is a common
question and I've tried a few different approaches suggested on other threads,
but none have worked.
    newfile=open('newfile.csv','wb')
fieldnames=['a','b']
csvwriter=csv.DictWriter(newfile, delimiter=',', fieldnames=fieldnames)
csvwriter.writerow(dict((fn,fn) for fn in fieldnames))
for row in outfile1:
csvwriter.writerow(row)
And
with open('newfile.csv','wb') as csvfile:
fieldnames=['number','colour','number2']
writer=csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerow(Counter((line[1],line[2],line[3]) for line in usefuldata))
countwriter=csv.writer(csvfile, delimiter=', ')
countwriter.writerow(outfile1)
Both give me the error
return self.writer.writerow(self._dict_to_list(rowdict))
TypeError: 'str' does not support the buffer interface
I've also tried using pickle:
import pickle
with open('newfile.csv','wb') as outputfile:
pickle.dump(outfile1, outputfile)
gives me gibberish files.
My current attempt is to use
writer=csv.DictWriter(newfile, outfile1)
for line in outfile1:
writer.writerow(line)
but this gives me an error about fieldnames.
I know this is a common question and I'm conscious that I'm only struggling
because I really don't know what I'm doing- it has been a few years since I've
used python and I've forgotten so much. Any help would be greatly appreciated.
Answer: First of all, the reason for the main issue -
TypeError: 'str' does not support the buffer interface
is that you are openning the file in binary mode, you should open the file in
text mode ( without `b` ).
Secondly, I would say it would be easier to use normal
[`csv.writer`](https://docs.python.org/2/library/csv.html#csv.writer) than
[`csv.DictWriter()`](https://docs.python.org/2/library/csv.html#csv.DictWriter)
in your case, because of the way your dictionary is created.
A way to write your result to csv would be -
#Assuming you have previously created the counter you want to write
#lets say you stored the counter in a variable called cnter
with open('newfile.csv','w') as csvfile:
fieldnames=['number','colour','number2','count']
writer=csv.writer(csvfile)
writer.writerow(fieldnames)
for key, value in cnter.items():
writer.writerow(list(key) + [value])
|
matplotlib pyplot show capture image
Question: In a python interactive session the following code is run:
import matplotlib.pyplot as plt
plt.plot([1,2,3])
plt.show()
how do I let show() not block and not actually show the image, but instead
store it in a file? I can not change the code to `plt.savefig('figure.png')`.
(There is a very good reason for this, I can explain if interested.)
The way to go seems to be specifying a custom backend renderer but so far no
success. Is it possible to take an existing backend renderer and change the
show() method to save to a file? (Let's say "figure%d.png", with %d the number
of times show() has been called so far.)
Other suggestions next to a custom backend renderer are welcome as well. In
IPython notebook if you execute plt.show(), it manages to take the image and
place it beneath the active code block. How's that done?
Answer: You could use the solution from
[here](http://www.dalkescientific.com/writings/diary/archive/2005/04/23/matplotlib_without_gui.html)
for creating a figure without a gui. This uses `FigureCanvasAgg` (which is the
gui built by [default](http://stackoverflow.com/questions/4931376/generating-
matplotlib-graphs-without-a-running-x-server), you could probably use a
different one). The show in pyplot can then be monkey patched,
import matplotlib.pyplot as plt
from matplotlib.backends.backend_agg import FigureCanvasAgg
def show(fig=None):
        if fig is None:
            fig = plt.gcf()
canvas = FigureCanvasAgg(fig)
canvas.print_figure("./out.png", dpi=80)
plt.show = show
plt.plot([1,2,3])
plt.show()
Not sure if this avoids the problem you have with savefig?
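And to get the "figure%d.png" naming mentioned in the question, the patched show
can keep its own counter; a sketch:

    _count = [0]  # a list, so the closure can mutate it

    def show(fig=None):
        if fig is None:
            fig = plt.gcf()
        canvas = FigureCanvasAgg(fig)
        canvas.print_figure("figure%d.png" % _count[0], dpi=80)
        _count[0] += 1

    plt.show = show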
|
Why does Python's print function act this way?
Question: I was writing a simple Hello World Python script and tried a couple different
things to see what I would get.
Results:
print "Hello World" ===> Hello World
print "Hello"," ","World ===> Hello World
print "Hello" "World" ===> HelloWorld
The results surprised me... from experience with other languages I expected to
get something more like this:
print "Hello World" ===> Hello World
print "Hello"," ","World ===> Hello World
print "Hello" "World" ===> Syntax Error!
After trying a couple more examples I realized that it seems to add a space
whenever you separate strings with a ",".
...Even more strangely, it doesn't seem to care if you give it multiple
strings without a "," separating them as the third example shows.
* * *
Why does Python's print function act this way???
Also is there a way to stop it from adding spaces for "," separated strings?
Answer: Because the `print` statement adds spaces between _separate values_ , as
[documented](https://docs.python.org/2/reference/simple_stmts.html#the-print-
statement):
> A space is written before each object is (converted and) written, unless the
> output system believes it is positioned at the beginning of a line.
However, `"Hello" "World"` is not two values; it is _one_ string. Only
whitespace between two string literals is ignored and those string literals
are concatenated ([by the
parser](http://stackoverflow.com/questions/26433138/what-is-under-the-hood-of-
x-y-z-in-python/26433185#26433185)):
>>> "Hello" "World"
"HelloWorld"
See the [_String literal concatenation_
section](https://docs.python.org/2/reference/lexical_analysis.html#string-
literal-concatenation):
> Multiple adjacent string literals (delimited by whitespace), possibly using
> different quoting conventions, are allowed, and their meaning is the same as
> their concatenation.
This makes it easier to combine different string literal styles (triple
quoting and raw string literals and 'regular' string literals can all be used
to create one value), as well as make creating a _long_ string value easier to
format:
long_string_value = (
"This is the first chuck of a longer string, it fits within the "
'limits of a "style guide" that sets a shorter line limit while '
r'at the same time letting you use \n as literal text instead of '
"escape sequences.\n")
This feature is in fact inherited from C, it is not a Python invention.
In Python 3, where [`print()` is a
_function_](https://docs.python.org/3/library/functions.html#print) rather
than a _statement_ , you are given more control over how multiple arguments
are handled. Separate arguments are delimited by the `sep` argument to the
function, which defaults to a space.
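For example, in Python 3:

    >>> print("Hello", "World")            # default sep=" "
    Hello World
    >>> print("Hello", "World", sep="")    # no separator at all
    HelloWorld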
In Python 2, you can get the same functionality by adding `from __future__
import print_function` to the top of your module. This disables the statement,
making it possible to use the [same function in Python 2
code](https://docs.python.org/3/library/functions.html#print).
|
How to loop sklearn linear regression by values within a column - python
Question: I am relatively new to python and programming in general.
I have a csv with three columns, an identifier, distance, and date. The data
looks something like:
id, date, distance
1, 1850, 150
1, 1950, 200
1, 1990, 250
2, 1850, 130
2, 1950, 180
2, 1990, 210
3, 1850, 200
3, 1950, 220
3, 1990, 250
etc...
I'm trying to use sklearn.linear_model's LinearRegression to run a regression
on each of these series. So I basically need to separate the rows by id; I
want a separate result for each group of ids (1, 2, 3, etc.).
I was able to do this easily with R by using:
reg1 <- lmList(distance ~ date | id, mydata)
However, for several reasons, I need to do this with python.
I've imported the csv using pandas.read_csv. Then I assumed I need to
attribute each of these to a variable:
dataset = pandas.read_csv(location, sep = ",", header = 0)
x = dataset['date']
y = dataset['distance']
z = dataset['id']
Then apparently to use them in LinearRegression() and model.fit(x,y) I needed
to use
    x2 = numpy.array(x).T
    y2 = numpy.array(y).T
So now that that is all set up, what should I do to actually loop the
regression through those id values?
when I try:
for n in main_reg2.id:
model1 = LinearRegression()
model1.fit(xT1,yT)
print(model1.coef_, model1.intercept_)
break
It just gives me one result for the whole dataset. And I'm not sure where to
tell it to go through the id values.
Any help would be greatly appreciated. Sorry for the mixing of jargon, I've
had to dabble in 5 different programming and statistical languages over the
past 2 years of grad school and am very confused sometimes.
Perhaps I'm overthinking this, or maybe there is some way towards the
beginning that I can separate the data?
Also, how would I create a table from this that lists the coefficients and
intercepts and whatnot.
Thanks
Updated code so far:

    with open(r'file_location', 'rb') as f:
        reader = csv.reader(f)
        headers = reader.next()
        for fields in reader:
            your_list = list(reader)
z = [_[0] for _ in your_list]
x = [_[1] for _ in your_list]
y = [_[2] for _ in your_list]
id_max = max(z)
new_list = [[]]
for _ in range(id_max):
new_list.append([])
for index in range(len(z)):
id = z[index]
new_entry = [x[index],y[index]]
print "Adding to index", id, new_entry
new_list[id].append(new_entry)
print new_list
I used header = reader.next() code to get rid of the headers. When I was doing
the max(z) before, the header was coming up as the max...
Going through line by line: I get...
id_max = max(z)
print(id_max)
999
Which I already know is wrong since the max is 1576.
But moving on.... after the append I get
Type Error: range() integer end argument expected, got str
Which I don't understand since z is a list of integers from 1 to 1576.
Some examples of the indexes are:
z[1] = '10'
y[1] = '2011'
x[1] = '515.8938'
Answer: I'd simply separate the data by ID at the very start: make this a list of
lists:
data_list = [
[1, 1850, 150],
[1, 1950, 200],
[1, 1990, 250],
[2, 1850, 130],
[2, 1950, 180],
[2, 1990, 210],
[3, 1850, 200],
[3, 1950, 220],
[3, 1990, 250]
]
z = [_[0] for _ in data_list]
x = [_[1] for _ in data_list]
y = [_[2] for _ in data_list]
# Create a list of empty lists, one for each ID
id_max = max(z)
new_list = [[]]
for _ in range(id_max):
new_list.append([])
# For each row, add the data to the sub-list for the row ID;
# ID 0 is not used.
for index in range(len(z)):
id = z[index]
new_entry = [x[index], y[index]]
print "Adding to index", id, new_entry
new_list[id].append(new_entry)
print new_list
new_list is now a list of lists, each sub-list having rows that share the same
id. There are more "Pythonic" ways to do some of these things, but I kept the
code to where I expect you can follow the logic.
Output, complete with trace:
Adding to index 1 [1850, 150]
[[], [[1850, 150]], [], []]
Adding to index 1 [1950, 200]
[[], [[1850, 150], [1950, 200]], [], []]
Adding to index 1 [1990, 250]
[[], [[1850, 150], [1950, 200], [1990, 250]], [], []]
Adding to index 2 [1850, 130]
[[], [[1850, 150], [1950, 200], [1990, 250]], [[1850, 130]], []]
Adding to index 2 [1950, 180]
[[], [[1850, 150], [1950, 200], [1990, 250]], [[1850, 130], [1950, 180]], []]
Adding to index 2 [1990, 210]
[[], [[1850, 150], [1950, 200], [1990, 250]], [[1850, 130], [1950, 180], [1990, 210]], []]
Adding to index 3 [1850, 200]
[[], [[1850, 150], [1950, 200], [1990, 250]], [[1850, 130], [1950, 180], [1990, 210]], [[1850, 200]]]
Adding to index 3 [1950, 220]
[[], [[1850, 150], [1950, 200], [1990, 250]], [[1850, 130], [1950, 180], [1990, 210]], [[1850, 200], [1950, 220]]]
Adding to index 3 [1990, 250]
[[], [[1850, 150], [1950, 200], [1990, 250]], [[1850, 130], [1950, 180], [1990, 210]], [[1850, 200], [1950, 220], [1990, 250]]]
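Regarding the follow-up errors in the updated question: both come from csv.reader
yielding strings. `max(z)` on strings compares lexicographically (so '999' > '1576'),
and `range()` refuses a str end argument. A sketch of the fix, converting the columns
right after reading:

    z = [int(_[0]) for _ in your_list]    # ids as ints, so max() and range() behave
    x = [float(_[1]) for _ in your_list]
    y = [float(_[2]) for _ in your_list]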
|
Python 3, Tkinter, How to update button text
Question: I'm trying to make it so that when the user clicks a button, it becomes "X" or
"0" (depending on their team). How can I make it so that the text on the
button is updated? My best idea so far has been to delete the buttons and then
print them again, but that only deletes one button. Here's what I have so far:
from tkinter import *
BoardValue = ["-","-","-","-","-","-","-","-","-"]
window = Tk()
window.title("Noughts And Crosses")
window.geometry("10x200")
v = StringVar()
Label(window, textvariable=v,pady=10).pack()
v.set("Noughts And Crosses")
def DrawBoard():
for i, b in enumerate(BoardValue):
global btn
if i%3 == 0:
row_frame = Frame(window)
row_frame.pack(side="top")
btn = Button(row_frame, text=b, relief=GROOVE, width=2, command = lambda: PlayMove())
btn.pack(side="left")
def PlayMove():
BoardValue[0] = "X"
btn.destroy()
DrawBoard()
DrawBoard()
window.mainloop()
Answer: The Button widget, just like your Label, also has a `textvariable=` option.
You can use `StringVar.set()` to update the Button. Minimal example:
import tkinter as tk
root = tk.Tk()
def update_btn_text():
btn_text.set("b")
btn_text = tk.StringVar()
btn = tk.Button(root, textvariable=btn_text, command=update_btn_text)
btn_text.set("a")
btn.pack()
root.mainloop()
|
Pinging in Python with Feedback
Question: So, I'm such a noob in Python it's painful, but I'm trying to come up with a way
to PING a site and then branch on the result with an 'if/else' clause.
So far I have this:
import subprocess
command = "ping -c 3 www.google.com" # the shell command
process = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=None, shell=True)
#Launch the shell command:
output = process.communicate()
print output[0]
Below is where I go awry, here is part two:
if output == 0.00
print "server is good"
else
print "server is hosed"
Obviously part 2 is not working out.
My question is, how do I "read" the results from the ping (the milliseconds)
icmp_seq=0 ttl=44 time=13.384 ms
and say "if the ping time is faster than 12.000 ms then do THIS" else "do
THAT"
Right now I'm just doing a print, but soon I'd like to change that to something
else.
Answer: `subprocess.Popen` is usually not the one you want. There are convenience
functions for all of the simple tasks. In your case, I think you want
`subprocess.check_output`:
output = subprocess.check_output(command, shell=True)
There are many ways to parse the resulting output string. I am fond of regular
expressions:
matches = re.findall(" time=([\d.]+) ms", output)
`re.findall` returns a `list` of `str`, but you want to convert that to a
single number so that you can do numerical comparisons. Use the `float()`
constructor to convert the `str`s to `float`s, and then compute the average:
matches = [float(match) for match in matches]
ms = sum(matches)/len(matches)
Sample program:
import subprocess
import re
# Run the "ping" command
command = "ping -c 3 www.google.com" # the shell command
output = subprocess.check_output(command, shell=True)
# And interpret the output
matches = re.findall(" time=([\d.]+) ms", output)
matches = [float(match) for match in matches]
ms = sum(matches)/len(matches)
if ms < 12:
print "Yay"
else:
print "Boo"
Note that the output of `ping` is not standardized. On my machine, running
Ubuntu 14.04, the above regular expression works. On your machine, running
some other OS, it might need to be different.
|
Weird behavior of T distribution
Question: I have an empirical distribution and I am trying to fit a `T` distribution to
it using `numpy` and plotting it with `matplotlib`.
Here is something I cannot understand:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import t
arr = np.array( [140, 36, 44, 24, 15, 48, 19, 2, 84, 6, 70, 3, 20, 6, 133, 23, 30, 7, 37, 165] )
params = t.fit( arr )
mean = arr.mean()
std = arr.std()
r = np.arange( mean - 3 * std, mean + 3 * std, 0.01 )
pdf_fitted = t.pdf(r, *params[0:-2], loc=params[-2], scale=params[-1])
plt.plot( r, pdf_fitted )
plt.plot( [mean, mean], [0, max(pdf_fitted)] )
plt.show()
This plots:
[](http://i.stack.imgur.com/NkcNi.png)
The green line is the mean of the emprical data, and the blue line is the
fitted `T` distribution to the same data.
The problem is the empirical mean and the peak of the distribution do not
match. When I fit a `normal` distribution to the same data, I get a perfect
match with the green line and the peak of the distribution, as expected.
Now, looking to the [Wikipedia T
distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution):
> The t-distribution is symmetric and bell-shaped, like the normal
> distribution, but has heavier tails...
Since it says it is **symmetric** I expect that my mean and the peak match
perfectly, but it does not.
**My question is:** Is there anything wrong with my Python code or is it the
expected behavior of the `T` distribution? If yes, why? If no, what I am doing
wrong with my code?
Answer: There's no bug in the Python code as far as I can see; actually this is a good
example to illustrate the robustness of the Student _t_ distribution compared
to the Gaussian. One characteristic of exponential family distributions
(Gaussian, Exponential, Binomial, Poisson, etc.) is that they have really thin
tails, meaning that the pdf decreases exponentially as you deviate from the
mean. This characteristic gives them nice theoretical properties, but is often
the bottleneck in applying them to model real-world distributions, where
outliers abound in the dataset. The _t_ distribution is therefore a popular
alternative, because a couple of outliers in your observed dataset won't
affect your inferences much.
In your example, think of the original dataset as consisting of all points
except the three high outliers, and suppose those outliers were introduced by
some noisy process. Statistical inference aims to describe properties (say,
the mean) of the original dataset, so if you used a Gaussian in this case you
would grossly over-estimate the true mean. The fitted _t_ does not match the
mean of your noisy sample, but it is a much more accurate estimate of the
original true mean, regardless of the outliers.
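To see this numerically, you can compare the fitted location parameters
directly (a small sketch reusing the same data):
    import numpy as np
    from scipy.stats import t, norm
    arr = np.array([140, 36, 44, 24, 15, 48, 19, 2, 84, 6, 70, 3,
                    20, 6, 133, 23, 30, 7, 37, 165])
    t_df, t_loc, t_scale = t.fit(arr)   # t.fit returns (df, loc, scale)
    n_loc, n_scale = norm.fit(arr)      # the normal MLE loc equals arr.mean()
    print("t loc:", t_loc)              # pulled toward the bulk of the data
    print("norm loc:", n_loc)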
|
Solving vector ordinary differential equations in python with scipy odeint
Question: I'm trying to solve and ode that involves vectors and can't come out with a
feasible answer. So i split it into 6 components, one for each time derivative
of a component, and one for each time derivative of a velocity component. The
first value seems reasonable but then it jumps to to numbers in the millions
and I'm not sure why. I'm honestly not really sure how to do this in the first
place and am just giving it a shot right now. I couldn't seem to find any info
on it online and could use some help or some links if there are examples of
this type of problem. Any info would be much appreciated on how to get this to
solve the ODE.
def dr_dt(y, t):
"""Integration of the governing vector differential equation.
d2r_dt2 = -(mu/R^3)*r with d2r_dt2 and r as vectors.
Initial position and velocity are given.
y[0:2] = position components
y[3:] = velocity components"""
G = 6.672*(10**-11)
M = 5.972*(10**24)
mu = G*M
r = np.sqrt(y[0]**2 + y[1]**2 + y[2]**2)
dy0 = y[3]
dy1 = y[4]
dy2 = y[5]
dy3 = -(mu / (r**3)) * y[0]
dy4 = -(mu / (r**3)) * y[1]
dy5 = -(mu / (r**3)) * y[2]
return [dy0, dy3, dy1, dy4, dy2, dy5]
After this is solved, I want to plot it. It should come out to an ellipse but
to be honest I'm not exactly sure how to do that either. I was thinking of
taking the magnitude of the position and then plotting it with time. If
there's a better way of doing this please feel free to let me know.
Thanks.
Answer: First, if your `y[:3]` is position and `y[3:]` is velocity, then `dr_dt`
function should return the components in exactly this order. Second, to plot
the trajectory we can either use excellent matplotlib `mplot3d` module, or
omit `z`th component of position and velocity (so our motion is on XY plane),
and plot `y` versus `x`. The sample code (with corrected order of return
values) is given below:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
def dr_dt(y, t):
"""Integration of the governing vector differential equation.
d2r_dt2 = -(mu/R^3)*r with d2r_dt2 and r as vectors.
Initial position and velocity are given.
y[0:3] = position components
y[3:] = velocity components"""
G = 6.672*(10**-11)
M = 5.972*(10**24)
mu = G*M
r = np.sqrt(y[0]**2 + y[1]**2 + y[2]**2)
dy0 = y[3]
dy1 = y[4]
dy2 = y[5]
dy3 = -(mu / (r**3)) * y[0]
dy4 = -(mu / (r**3)) * y[1]
dy5 = -(mu / (r**3)) * y[2]
return [dy0, dy1, dy2, dy3, dy4, dy5]
t = np.arange(0, 100000, 0.1)
y0 = [7.e6, 0., 0., 0., 1.e3, 0.]
y = odeint(dr_dt, y0, t)
plt.plot(y[:,0], y[:,1])
plt.show()
This yields a nice ellipse:
[](http://i.stack.imgur.com/tey09.png)
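If you do want the full 3D trajectory instead of the planar projection, the
`mplot3d` toolkit mentioned above works with the same `y` array from `odeint`
(a minimal sketch, appended to the script above):
    from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.plot(y[:, 0], y[:, 1], y[:, 2])
    plt.show()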
|
installing cdat-lite with Anaconda
Question: I'm trying to get cdat-lite to use with Python (and the Anaconda package I
have downloaded). The procedure for installing cdat-lite with Anaconda is very
simple.
conda install -c <https://conda.binstar.org/scitools> cdat-lite
I am on CentOS (version 6.5) and have troubles with python module cdtime.so
about library stuff, when I try importing modules cdms2 and regrid2, cdtime:
ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by
/home/ramus/anaconda/lib/python2.7/site-packages/cdtime.so)
Any help is much appreciated! I'm completely lost and have no idea how to
proceed
Answer: It looks like that package was built against a newer version of Linux than the
one you are using. You won't be able to use it. You might ask the owner of the
binstar channel if they can build it against an older version of Linux, or see
if you can build it yourself from their recipe.
|
Error when config logging use logging.config.dictConfig in python
Question: First of all, I am using Python, trying to configure logging from a JSON config
file with **logging.config.dictConfig()**. I used it before and it worked well,
but this time I deleted a little bit and somehow it just doesn't work. My
config file looks like this:
{
"version": 1,
"disable_existing_loggers": false,
"formatters": {
"simple": {
"format": "%(name)-6s | %(levelname)-8s | %(message)s"
},
"info_format": {
"format": "%(name)-6s | %(levelname)-6s | %(message)s"
},
"error_format": {
"format": "%(filename)-8s | %(module)-12s | %(funcName)s | %(lineno)d : %(message)s"
},
"debug_format": {
"format": "%(filename)-8s | %(module)-12s | %(funcName)s | %(lineno)d : %(message)s"
}
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"level": "DEBUG",
"formatter": "simple",
"stream": "ext://sys.stdout"
},
"info_handler": {
"level": "INFO",
"formatter": "info_format",
"encoding": "utf8",
"stream": "ext://sys.stdout"
},
"error_handler": {
"level": "ERROR",
"formatter": "error_format",
"encoding": "utf8",
"stream": "ext://sys.stderr"
}
},
"loggers": {
"info": {
"level": "INFO",
"handlers": ["console"],
"propagate": "no"
},
"error": {
"level": "ERROR",
"handlers": ["console"],
"propagate": "no"
},
"tornado.access": {
"level": "INFO",
"handlers": ["console"],
"propagate": "no"
},
"tornado.general": {
"level": "INFO",
"handlers": ["console"],
"propagate": "no"
},
"tornado.application": {
"level": "ERROR",
"handlers": ["console"],
"propagate": "no"
}
}
}
Here is how I config logging:
configParser = ConfigParser.ConfigParser()
__ProjectPath = os.getcwd()
config_path = __ProjectPath + "/configs/logconfig.json"
with open(config_path, 'rt') as f:
configs = json.loads(f.read())
logging.config.dictConfig(configs)
That is the error message:
Traceback (most recent call last):
File "/home/jonah/code/leancloud-demo/wsgi.py", line 13, in <module>
from app import app
File "/home/jonah/code/leancloud-demo/app.py", line 14, in <module>
import log
File "/home/jonah/code/leancloud-demo/log.py", line 51, in <module>
logging.config.dictConfig(configs)
File "/usr/lib/python2.7/logging/config.py", line 794, in dictConfig
dictConfigClass(config).configure()
File "/usr/lib/python2.7/logging/config.py", line 576, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler u'error_handler': 'NoneType' object has no attribute 'split'
Process finished with exit code 1
I have googled for a while but just cannot find an answer.
Answer: Each handler in the dictionary schema requires a mandatory `class` key -- the
fully qualified name of the handler class. Your `info_handler` and
`error_handler` are missing it, which is why configuration fails for them. See
<https://docs.python.org/2/library/logging.config.html#dictionary-schema-details>:
"info_handler": {
"class": "logging.StreamHandler",
"level": "INFO",
"formatter": "info_format",
"stream": "ext://sys.stdout"
},
"error_handler": {
"class": "logging.StreamHandler",
"level": "ERROR",
"formatter": "error_format",
"stream": "ext://sys.stderr"
}
|
MemoryError in Python
Question: I have text file, its size is 300 MB. I want to read it and then print 50 most
frequently used words. When i run the program it gives me MemoryError. My code
is as under:-
import sys, string
import codecs
import re
from collections import Counter
import collections
import itertools
import csv
import re
import unicodedata
words_1800 = []
with open('E:\\Book\\1800.txt', "r", encoding='ISO-8859-1') as File_1800:
for line in File_1800:
sepFile_1800 = line.lower()
words_1800.extend(re.findall('\w+', sepFile_1800))
for wrd_1800 in [words_1800]:
long_1800=[w for w in words_1800 if len(w)>3]
common_words_1800 = dict(Counter(long_1800).most_common(50))
print(common_words_1800)
It give me the following error:-
Traceback (most recent call last):
File "C:\Python34\CommonWords.py", line 17, in <module>
words_1800.extend(re.findall('\w+', sepFile_1800))
MemoryError
Answer: You can use a [generator](http://scottmoonen.com/2008/02/01/python-
generators-saving-time-and-memory/) instead of a list to store the results of
`re.findall`, which is far better in terms of memory use; you can also use
`re.finditer` instead of `findall`, which returns an iterator.
with open('E:\\Book\\1800.txt', "r", encoding='ISO-8859-1') as File_1800:
words_1800=(re.findall('\w+', line.lower()) for line in File_1800)
Then `words_1800` will be a generator yielding lists of the found words, or use
with open('E:\\Book\\1800.txt', "r", encoding='ISO-8859-1') as File_1800:
words_1800=(re.finditer('\w+', line.lower()) for line in File_1800)
to get a generator yielding iterators.
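Putting it together, here is a sketch of the whole task that feeds the counter
one line at a time, so the full word list never has to fit in memory (assuming
the same file and the same `len(w) > 3` filter as your code):
    import re
    from collections import Counter
    counter = Counter()
    with open('E:\\Book\\1800.txt', 'r', encoding='ISO-8859-1') as f:
        for line in f:
            # count words longer than 3 characters, line by line
            counter.update(w for w in re.findall(r'\w+', line.lower()) if len(w) > 3)
    print(dict(counter.most_common(50)))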
|
Python multiline regex from text to text MULTILINE
Question: I want to extract info from a .cdp file (a Content Downloader project file,
for a parsing program; it can be opened in Notepad). The file looks like:
....
...
<CD_PARSING_RB_9>0</CD_PARSING_RB_9>
<CD_PARSING_RB_F9_3>0</CD_PARSING_RB_F9_3>
<CD_PARSING_LB_1>http://www.prospect.chisites.net/opportunities/?pageno=1
http://www.prospect.chisites.net/opportunities/?pageno=2
http://www.prospect.chisites.net/opportunities/?pageno=3
http://www.prospect.chisites.net/opportunities/?pageno=4</CD_PARSING_LB_1>
<CD_PARSING_EDIT_26>0</CD_PARSING_EDIT_26>
<CD_PARSING_EDIT_27><a href=/jobs/</CD_PARSING_EDIT_27>
<CD_PARSING_EDIT_28>0</CD_PARSING_EDIT_28>
I want to extract the links using Python. I found some solution but it works
only partially (it just deletes the `<CD_PARSING_LB_1>` tags); it should delete
everything but the links between those two tags. A solution using search might
also work, but this one wouldn't for some reason.
code:
import string
import codecs
import re
import glob
outfile = open('newout.txt', 'w+')
try:
for file in glob.glob("*.cdp"):
print(file)
infile = open(file, 'r')
step1 = re.sub('.*<CD_PARSING_LB_1>', '',infile.read(), re.DOTALL)
step2 = re.sub('</CD_PARSING_LB_1>.*','', step1, re.DOTALL)
outfile.write(str(step1))
except Exception as ex:
print ex
raw_input()
Please help me in any way to get those links separated... Thanks. Full file
example:
Content Downloader X1 (11.9940) project file (parsing)
<F68_CB_5>0</F68_CB_5>
<F68_CB_8>0</F68_CB_8>
<F34_CB_4>0</F34_CB_4>
<F70_CB_4>0</F70_CB_4>
<F34_CB_5>0</F34_CB_5>
<F34_SE_1>0</F34_SE_1>
<F82_SE_2>0</F82_SE_2>
<F69_SE_1>1</F69_SE_1>
<F1_CMBO_8>0</F1_CMBO_8>
<F105_MEMO_1></F105_MEMO_1>
<F9_RBN_01>2</F9_RBN_01>
<F96_RB_01>1</F96_RB_01>
<F1_RBN_15>1</F1_RBN_15>
<F1_N120>1</F1_N120>
<F64_CB_01>0</F64_CB_01>
<F64_RB_01>1</F64_RB_01>
<F70_CB_03>0</F70_CB_03>
<CD_PARSING_COMBO_5>0</CD_PARSING_COMBO_5>
<F64_CB_02>0</F64_CB_02>
<F60_CB_02>0</F60_CB_02>
<F64_RE_1></F64_RE_1>
<F95_M_1></F95_M_1>
<F1_COMBO_6>0</F1_COMBO_6>
<F40_CHCKBX_555>0</F40_CHCKBX_555>
<F09_CB_01>0</F09_CB_01>
<F48_CB_02>0</F48_CB_02>
<F68_CB_01>0</F68_CB_01>
<F68_CB_02>0</F68_CB_02>
<F68_CB_03>0</F68_CB_03>
<F57_CB_41>0</F57_CB_41>
<F57_CB_43>0</F57_CB_43>
<F57_CB_45>0</F57_CB_45>
<F57_CB_47>0</F57_CB_47>
<F57_CB_49>0</F57_CB_49>
<F57_CB_51>0</F57_CB_51>
<F57_CB_53>0</F57_CB_53>
<F57_CB_55>0</F57_CB_55>
<F57_CB_57>0</F57_CB_57>
<F57_CB_59>0</F57_CB_59>
<F57_CB_61>0</F57_CB_61>
<F57_CB_63>0</F57_CB_63>
<F57_CB_65>0</F57_CB_65>
<F57_CB_67>0</F57_CB_67>
<F57_CB_69>0</F57_CB_69>
<F57_CB_71>0</F57_CB_71>
<F57_CB_73>0</F57_CB_73>
<F57_CB_75>0</F57_CB_75>
<F57_CB_77>0</F57_CB_77>
<F57_CB_79>0</F57_CB_79>
<F57_CB_42>0</F57_CB_42>
<F57_CB_44>0</F57_CB_44>
<F57_CB_46>0</F57_CB_46>
<F57_CB_48>0</F57_CB_48>
<F57_CB_50>0</F57_CB_50>
<F57_CB_52>0</F57_CB_52>
<CD_PARSING_EDIT_93>0</CD_PARSING_EDIT_93>
<CD_PARSING_EDIT_94></CD_PARSING_EDIT_94>
<CD_PARSING_EDIT_57_12></CD_PARSING_EDIT_57_12>
<CD_PARSING_EDIT_57_13></CD_PARSING_EDIT_57_13>
<CD_PARSING_EDIT_57_14></CD_PARSING_EDIT_57_14>
<CD_PARSING_EDIT_57_15></CD_PARSING_EDIT_57_15>
<CD_PARSING_EDIT_57_16></CD_PARSING_EDIT_57_16>
<CD_PARSING_EDIT_57_17></CD_PARSING_EDIT_57_17>
<CD_PARSING_EDIT_57_18></CD_PARSING_EDIT_57_18>
<CD_PARSING_RICH_50_1>[VALUE]</CD_PARSING_RICH_50_1>
<CD_PARSING_EDIT_F9_13>3</CD_PARSING_EDIT_F9_13>
<CD_PARSING_EDIT_F9_18>http://sitename.com</CD_PARSING_EDIT_F9_18>
<CD_PARSING_EDIT_F24_2>1</CD_PARSING_EDIT_F24_2>
<CD_PARSING_EDIT_F48_1></CD_PARSING_EDIT_F48_1>
<CD_PARSING_EDIT_F48_2>10</CD_PARSING_EDIT_F48_2>
<CD_PARSING_EDIT_F48_5>0</CD_PARSING_EDIT_F48_5>
<CD_PARSING_EDIT_F48_3>0</CD_PARSING_EDIT_F48_3>
<CD_PARSING_EDIT_F56_1></CD_PARSING_EDIT_F56_1>
<CD_PARSING_EDIT_F56_2>-</CD_PARSING_EDIT_F56_2>
<CD_PARSING_EDIT_F34_1></CD_PARSING_EDIT_F34_1>
<CD_PARSING_EDIT_F34_3>http://</CD_PARSING_EDIT_F34_3>
<CD_PARSING_EDIT_F40_2>Mozilla/5.0 (Windows; U; Windows NT 6.1; ru; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 sputnik 2.1.0.18 YB/4.3.0</CD_PARSING_EDIT_F40_2>
<CD_PARSING_EDIT_F46_1></CD_PARSING_EDIT_F46_1>
<CD_PARSING_M49_1> class="entry"
id="news-id-
id="article-text"
</CD_PARSING_M49_1>
<CD_PARSING_M48_1></CD_PARSING_M48_1>
<F90_M_1></F90_M_1>
<CD_PARSING_M48_3></CD_PARSING_M48_3>
<CD_PARSING_SYN_F46_1><CD_CYCLE_GRAN_ALL!></CD_PARSING_SYN_F46_1>
<CD_PARSING_RICH_F9_1></CD_PARSING_RICH_F9_1>
<CD_PARSING_RICH_F9_2></CD_PARSING_RICH_F9_2>
<CD_PARSING_R24_1>0</CD_PARSING_R24_1>
<F1_COMBOBOX_9>0</F1_COMBOBOX_9>
<F1_COMBOBOX_10>2</F1_COMBOBOX_10>
<CD_PARSING_RB_9>0</CD_PARSING_RB_9>
<CD_PARSING_RB_F9_3>0</CD_PARSING_RB_F9_3>
<CD_PARSING_LB_1>http://www.latestvacancies.com/wates/</CD_PARSING_LB_1>
<CD_PARSING_EDIT_26>0</CD_PARSING_EDIT_26>
<CD_PARSING_EDIT_27>Jobs/Advert/</CD_PARSING_EDIT_27>
<CD_PARSING_EDIT_28>0</CD_PARSING_EDIT_28>
<CD_PARSING_EDIT_29>?</CD_PARSING_EDIT_29>
<CD_PARSING_COMBOBOX_1>csv</CD_PARSING_COMBOBOX_1>
<CD_PARSING_RE61_1></CD_PARSING_RE61_1>
<CD_PARSING_CHECK_61_1>1</CD_PARSING_CHECK_61_1>
<CD_PARSING_RB60_1>1</CD_PARSING_RB60_1>
<CD_PARSING_SE60_1>1</CD_PARSING_SE60_1>
Answer: Try this.
1. Use a `with` statement to read/write files.
2. `file` is a **builtin** name; use something like `ifile` instead.
3. The regex only needs to match the pattern `http:[^<]*`.
Also note that in your original code `re.DOTALL` was being passed as the
`count` argument of `re.sub` (its fourth positional parameter), not as
`flags` -- that is why it only worked partially.
## code
import string
import codecs
import re
import glob
with open('newout.txt', 'w+') as outfile:
try:
for ifile in glob.glob("*.cdp"):
print (ifile)
with open(ifile, 'r') as infile:
for line in infile:
step1 = re.findall(r'(http:[^<]+)', line)
if len(step1) > 0:
outfile.write("%s\n" % step1[0].strip())
except Exception as ex:
print (ex)
|
imports fail in subfolder
Question: I have a Python project that uses some code from a GitHub repo. I added the
repo using `git submodule add`, so now I have the following file structure:
ProjectFolder\
foo.py
BarProject\ (the Github repo added with submodule)
bar.py
baz.py
In my main file `foo.py` I want to import the method `bar` from the file
`bar.py`:
from BarProject.bar import bar
This fails, because the first line of `bar.py` is:
from baz import *
And Python throws an `ImportError`, because it cannot find the module baz.
* * *
Is there a way to import the file `bar.py` in a way, so that the relative
imports don't get screwed up? I don't really want to modify `bar.py` or
`baz.py`, because they are part of an external Github project.
Answer: Remember to add an `__init__.py` file to the `BarProject` folder to indicate
that the folder is a package.
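If that alone is not enough (for example because `from baz import *` relies on
implicit relative imports that Python 3 removed), another option that avoids
modifying the external code is to put `BarProject` itself on `sys.path` before
importing -- a sketch, placed at the top of `foo.py`:
    import os
    import sys
    # make BarProject's own files importable as top-level modules
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                    'BarProject'))
    from bar import bar  # now `from baz import *` inside bar.py resolves too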
|
Maya Python Create and Use Zipped Package?
Question: Can anyone describe exactly how someone can create and execute a python zip
package in Maya? A lot of tutorials and questions regarding this jump into the
middle and assume certain knowledge. I need a simple example as a starting
point.
Folder/
scriptA.py
scriptB.py
scriptC.py
ScriptA:
Import ScriptB
Import ScriptC
Create zip of Folder
In Maya
Code to run Folder as if not zipped
ScriptA.foo()
In folder we have three scripts. ScriptA references the other two. I zip the
folder with a program like winrar. In maya, I want to load from this zip as if
the files inside were any other module sitting in my script folder (without
unzipping preferably). How do I do this?
Answer: Any zip on your python path is treated like a folder, so:
import sys
sys.path.append('path/to/archive.zip')
import thingInZip
thingInZip.do_something()
The only issue is depth: the zipImporter does not expect nested directory
structures. So this is OK:
ziparchive.zip
+--- module1.py
+----module2.py
+----package_folder
|
+-- __init.__py
+-- submodule1.py
+-- submodule2.py
+--- subpackage
|
+- __init__.py
But this is not:
ziparchive.zip
+ --- folder
+- just_a_python_file_not_part_of_a_package.py
also the `site` module can't add paths _inside_ a zip. There's a workaround
[here](http://techartsurvival.blogspot.com/2014/07/save-environment-2-i-am-
egg-man.html). You will also probably need to be careful about the order of
your sys.path: you want to make sure you know if you are working from the
zipped one or from loose files on your disk.
You can save space by zipping only the .pyc files instead of the whole thing,
btw.
PS beware of using backslashes in `sys.path.append`: they have to be escaped
(`\\`) -- forward slashes work on both Windows and *nix
|
Pull out chunks of a plot made in python and re-display
Question: I have made a plot in jupyter that has an x-axis spanning for about 40
seconds. I want to pull out sections that are milliseconds long and re-display
them as separate plots (so that they can be better viewed). How would I go
about doing this?
Answer: You could use some subplots, and slice the original data arrays. For example:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,40,1000)
y = np.random.random(1000)
fig, [ax1,ax2,ax3] = plt.subplots(3,1)
ax1.plot(x,y)
ax2.plot(x[100:120],y[100:120])
ax3.plot(x[500:520],y[500:520])
plt.show()
[](http://i.stack.imgur.com/I8h19.png)
|
Error sorting in MongoDB
Question: Going through the Udacity course "Data Wrangling with MongoDB" and they have
the following question. I tried solving (as you see below). However, it's
giving me Python error and I am not sure what's wrong.
The format of JSON it's going on is this:
{
"_id" : ObjectId("5304e2e3cc9e684aa98bef97"),
"text" : "First week of school is over :P",
"in_reply_to_status_id" : null,
"retweet_count" : null,
"contributors" : null,
"created_at" : "Thu Sep 02 18:11:25 +0000 2010",
"geo" : null,
"source" : "web",
"coordinates" : null,
"in_reply_to_screen_name" : null,
"truncated" : false,
"entities" : {
"user_mentions" : [ ],
"urls" : [ ],
"hashtags" : [ ]
},
"retweeted" : false,
"place" : null,
"user" : {
"friends_count" : 145,
"profile_sidebar_fill_color" : "E5507E",
"location" : "Ireland :)",
"verified" : false,
"follow_request_sent" : null,
"favourites_count" : 1,
"profile_sidebar_border_color" : "CC3366",
"profile_image_url" : "http://a1.twimg.com/profile_images/1107778717/phpkHoxzmAM_normal.jpg",
"geo_enabled" : false,
"created_at" : "Sun May 03 19:51:04 +0000 2009",
"description" : "",
"time_zone" : null,
"url" : null,
"screen_name" : "Catherinemull",
"notifications" : null,
"profile_background_color" : "FF6699",
"listed_count" : 77,
"lang" : "en",
"profile_background_image_url" : "http://a3.twimg.com/profile_background_images/138228501/149174881-8cd806890274b828ed56598091c84e71_4c6fd4d8-full.jpg",
"statuses_count" : 2475,
"following" : null,
"profile_text_color" : "362720",
"protected" : false,
"show_all_inline_media" : false,
"profile_background_tile" : true,
"name" : "Catherine Mullane",
"contributors_enabled" : false,
"profile_link_color" : "B40B43",
"followers_count" : 169,
"id" : 37486277,
"profile_use_background_image" : true,
"utc_offset" : null
},
"favorited" : false,
"in_reply_to_user_id" : null,
"id" : NumberLong("22819398300")
}
Here the code with instructions:
#!/usr/bin/env python
"""
Write an aggregation query to answer this question:
Of the users in the "Brasilia" timezone who have tweeted 100 times or more,
who has the largest number of followers?
The following hints will help you solve this problem:
- Time zone is found in the "time_zone" field of the user object in each tweet.
- The number of tweets for each user is found in the "statuses_count" field.
To access these fields you will need to use dot notation (from Lesson 4)
- Your aggregation query should return something like the following:
{u'ok': 1.0,
u'result': [{u'_id': ObjectId('52fd2490bac3fa1975477702'),
u'followers': 2597,
u'screen_name': u'marbles',
u'tweets': 12334}]}
Note that you will need to create the fields 'followers', 'screen_name' and 'tweets'.
Please modify only the 'make_pipeline' function so that it creates and returns an aggregation
pipeline that can be passed to the MongoDB aggregate function. As in our examples in this lesson,
the aggregation pipeline should be a list of one or more dictionary objects.
Please review the lesson examples if you are unsure of the syntax.
Your code will be run against a MongoDB instance that we have provided. If you want to run this code
locally on your machine, you have to install MongoDB, download and insert the dataset.
For instructions related to MongoDB setup and datasets please see Course Materials.
Please note that the dataset you are using here is a smaller version of the twitter dataset used
in examples in this lesson. If you attempt some of the same queries that we looked at in the lesson
examples, your results will be different.
"""
def get_db(db_name):
from pymongo import MongoClient
client = MongoClient('localhost:27017')
db = client[db_name]
return db
def make_pipeline():
# complete the aggregation pipeline
pipeline = [
{
"$match": {
"user.time_zone": "Brasilia",
"user.statuses_count": {"$gte": 100}
}
},
{
"$sort": { "$user.friends_count", -1}
},
{
"$limit": 1
},
{
"$project": {
"followers": "$user.friends_count",
"screen_name": "$user.screen_name",
"tweets": "$user.statuses_count"
}
}
]
return pipeline
def aggregate(db, pipeline):
result = db.tweets.aggregate(pipeline)
return result
if __name__ == '__main__':
db = get_db('twitter')
pipeline = make_pipeline()
result = aggregate(db, pipeline)
import pprint
pprint.pprint(result)
assert len(result["result"]) == 1
assert result["result"][0]["followers"] == 17209
Here is the error it's giving me:
Traceback (most recent call last):
File "vm_main.py", line 33, in <module>
import main
File "/tmp/vmuser_hnypkpkult/main.py", line 2, in <module>
import studentMain
File "/tmp/vmuser_hnypkpkult/studentMain.py", line 43, in <module>
result = aggregate(db, pipeline)
File "/tmp/vmuser_hnypkpkult/studentMain.py", line 37, in aggregate
result = db.tweets.aggregate(pipeline)
File "/usr/local/lib/python2.7/dist-packages/pymongo/collection.py", line 1390, in aggregate
"aggregate", self.__name, **command_kwargs)
File "/usr/local/lib/python2.7/dist-packages/pymongo/database.py", line 338, in _command
for doc in cursor:
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 1076, in next
if len(self.__data) or self._refresh():
File "/usr/local/lib/python2.7/dist-packages/pymongo/cursor.py", line 1020, in _refresh
self.__uuid_subtype))
bson.errors.InvalidDocument: Cannot encode object: set(['$user.friends_count', -1])
Answer: Your `$sort` clause is getting interpreted as a Python set instead of a
dictionary. Additionally, I believe you need to refer to the field without a
dollar sign in that clause. Change it to the following (note the colon instead
of the comma):
{
"$sort": { "user.friends_count": -1}
},
|
Python Flask: keeping track of user sessions? How to get Session Cookie ID?
Question: I want to build a simple webapp as part of my learning activity. The webapp is
supposed to ask the user to input their email_id if it encounters a first-time
visitor; otherwise it remembers the user through a cookie and automatically
logs him/her in to carry out its functions.
This is my first time creating a user-based web app. I have a blueprint in my
mind but am unable to figure out how to implement it. Primarily I am confused
about the way to collect the user cookie. I have looked into various tutorials
and flask_login, but I think what I want to implement is much simpler than
what flask_login provides.
Here is what I have so far (it is rudimentary and meant to communicate my use
case):
from flask import render_template, request, redirect, url_for
@app.route("/", methods= ["GET"])
def first_page():
cookie = response.headers['cookie']
if database.lookup(cookie):
user = database.get(cookie) # it returns user_email related to that cookie id
else:
return redirect_url(url_for('login'))
data = generateSomeData() # some function
return redirect(url_for('do_that'), user_id, data, stats)
@app.route('/do_that', methods =['GET'])
def do_that(user_id):
return render_template('interface.html', user_id, stats,data) # it uses Jinja template
@app.route('/submit', methods =["GET"])
def submit():
# i want to get all the information here
user_id = request.form['user_id']# some data
answer = request.form['answer'] # some response to be recorded
data = request.form['data'] # same data that I passed in do_that to keep
database.update(data,answer,user_id)
return redirect(url_for('/do_that'))
@app.route('/login', methods=['GET'])
def login():
return render_template('login.html')
@app.route('/loggedIn', methods =['GET'])
def loggedIn():
cookie = response.headers['cookie']
user_email = response.form['user_email']
database.insert(cookie, user_email)
return redirect(url_for('first_page'))
Answer: You can access request cookies through the [`request.cookies`
dictionary](http://flask.pocoo.org/docs/0.10/api/#flask.Request.cookies) and
set cookies by using either `make_response` or just storing the result of
calling `render_template` in a variable and then calling [`set_cookie` on the
response
object](http://flask.pocoo.org/docs/0.10/api/#flask.Response.set_cookie):
@app.route("/")
def home():
user_id = request.cookies.get('YourSessionCookie')
if user_id:
user = database.get(user_id)
if user:
# Success!
return render_template('welcome.html', user=user)
else:
return redirect(url_for('login'))
else:
return redirect(url_for('login'))
@app.route("/login", methods=["GET", "POST"])
def login():
if request.method == "POST":
# You should really validate that these fields
# are provided, rather than displaying an ugly
# error message, but for the sake of a simple
# example we'll just assume they are provided
user_name = request.form["name"]
password = request.form["password"]
user = db.find_by_name_and_password(user_name, password)
if not user:
# Again, throwing an error is not a user-friendly
# way of handling this, but this is just an example
raise ValueError("Invalid username or password supplied")
# Note we don't *return* the response immediately
response = redirect(url_for("do_that"))
response.set_cookie('YourSessionCookie', user.id)
return response
@app.route("/do-that")
def do_that():
user_id = request.cookies.get('YourSessionCookie')
if user_id:
user = database.get(user_id)
if user:
# Success!
return render_template('do_that.html', user=user)
else:
return redirect(url_for('login'))
else:
return redirect(url_for('login'))
### DRYing up the code
Now, you'll note there is a _lot_ of boilerplate in the `home` and `do_that`
methods, all related to login. You can avoid that by writing your own
decorator (see [_What is a
decorator_](http://stackoverflow.com/a/1594484/135978) if you want to learn
more about them):
from functools import wraps
from flask import flash
def login_required(function_to_protect):
@wraps(function_to_protect)
def wrapper(*args, **kwargs):
user_id = request.cookies.get('YourSessionCookie')
if user_id:
user = database.get(user_id)
if user:
# Success!
return function_to_protect(*args, **kwargs)
else:
flash("Session exists, but user does not exist (anymore)")
return redirect(url_for('login'))
else:
flash("Please log in")
return redirect(url_for('login'))
return wrapper
Then your `home` and `do_that` methods get _much_ shorter:
# Note that login_required needs to come before app.route
# Because decorators are applied from closest to furthest
# and we don't want to route and then check login status
@app.route("/")
@login_required
def home():
# For bonus points we *could* store the user
# in a thread-local so we don't have to hit
# the database again (and we get rid of *this* boilerplate too).
user = database.get(request.cookies['YourSessionCookie'])
return render_template('welcome.html', user=user)
@app.route("/do-that")
@login_required
def do_that():
user = database.get(request.cookies['YourSessionCookie'])
return render_template('welcome.html', user=user)
### Using what's provided
If you don't _need_ your cookie to have a particular name, I would recommend
using [`flask.session`](http://flask.pocoo.org/docs/0.10/quickstart/#sessions)
as it already has a lot of niceties built into it (it's signed so it can't be
tampered with, can be set to be HTTP only, etc.). That DRYs up our
`login_required` decorator even more:
# You have to set the secret key for sessions to work
# Make sure you keep this secret
app.secret_key = 'something simple for now'
from flask import flash, session
def login_required(function_to_protect):
@wraps(function_to_protect)
def wrapper(*args, **kwargs):
user_id = session.get('user_id')
if user_id:
user = database.get(user_id)
if user:
# Success!
return function_to_protect(*args, **kwargs)
else:
flash("Session exists, but user does not exist (anymore)")
return redirect(url_for('login'))
        else:
            flash("Please log in")
            return redirect(url_for('login'))
    return wrapper
And then your individual methods can get the user via:
user = database.get(session['user_id'])
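For completeness, a sketch of the matching login view with `flask.session`
(reusing the hypothetical `db.find_by_name_and_password` helper from the
cookie example above):
    @app.route("/login", methods=["GET", "POST"])
    def login():
        if request.method == "POST":
            user = db.find_by_name_and_password(request.form["name"],
                                                request.form["password"])
            if not user:
                raise ValueError("Invalid username or password supplied")
            session['user_id'] = user.id  # stored in the signed session cookie
            return redirect(url_for("do_that"))
        return render_template("login.html")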
|
Aerospike losing documents when node goes down
Question: I've been doing some tests using Aerospike and I noticed behavior different
from what is advertised.
I have a cluster of 4 nodes running on AWS in the same AZ, the instances are
t2micro (1cpu, 1gb RAM, 25gb SSD) using the aws linux with the AMI aerospike
aerospike.conf:
heartbeat {
mode mesh
port 3002
mesh-seed-address-port XXX.XX.XXX.164 3002
mesh-seed-address-port XXX.XX.XXX.167 3002
mesh-seed-address-port XXX.XX.XXX.165 3002
#internal aws IPs
...
namespace teste2 {
replication-factor 2
memory-size 650M
default-ttl 365d
storage-engine device {
file /opt/aerospike/data/bar.dat
filesize 22G
data-in-memory false
}
}
What I did was a test to see if I would loose documents when a node goes down.
For that I wrote a little code on python:
from __future__ import print_function
import aerospike
import pandas as pd
import numpy as np
import time
import sys
config = {
'hosts': [ ('XX.XX.XX.XX', 3000),('XX.XX.XX.XX',3000),
('XX.XX.XX.XX',3000), ('XX.XX.XX.XX',3000)]
} # external aws ips
client = aerospike.client(config).connect()
for i in range(1,10000):
key = ('teste2', 'setTest3', ''.join(('p',str(i))))
try:
client.put(key, {'id11': i})
print(i)
except Exception as e:
print("error: {0}".format(e), file=sys.stderr)
time.sleep(1)
I used this code just to insert a sequence of integers that I could check
afterwards. I ran the code and after a few seconds I stopped the aerospike
service on one node for 10 seconds, using `sudo service aerospike stop` and
`sudo service aerospike colstart` to restart.
I waited for a few seconds until the nodes did all the migration and executed
the following python script:
query = client.query('teste2', 'setTest3')
query.select('id11')
te = []
def save_result((key, metadata, record)):
te.append(record)
query.foreach(save_result)
d = pd.DataFrame(te)
d2 = d.sort(columns='id11')
te2 = np.array(d2.id11)
for i in range(0,len(te2)):
if i > 0:
if (te2[i] != (te2[i-1]+1) ):
print('no %d'% int(te2[i-1]+1))
print(te2)
And got as response:
no 3
no 6
no 8
no 11
no 13
no 17
no 20
no 22
no 24
no 26
no 30
no 34
no 39
no 41
no 48
no 53
[ 1 2 5 7 10 12 16 19 21 23 25 27 28 29 33 35 36 37 38 40 43 44 45 46 47 51 52 54]
Is my cluster configured wrong, or is this normal?
ps: I tried to include as much as I could; if you can suggest more information
to include, I will appreciate it.
Answer: Actually I found a solution, and it is pretty simple and foolish to be honest.
In the configuration file we have some parameters for network communication
between nodes, such as:
interval 150 # Number of milliseconds between heartbeats
timeout 10 # Number of heartbeat intervals to wait
# before timing out a node
These two parameters set the time it takes the cluster to realize a node
is down and out of the cluster (in this case 1.5 seconds).
What we found useful was to tune the write policies at the client to work
along this parameters.
Depending on the client you will have some policies like number of tries until
the operation fails, timeout for the operation, time between tries.
You just need to adapt the client parameters. For example: set the number of
retries to 4 (each is executed after 500 ms) and the timeout to 2 sec. Doing
that the client will recognize the node is down and redirect the operation to
another node.
This setup can put significant extra load on the cluster, but it worked for
us.
|
How many attempts does websocket-client's send() make?
Question: I use the Python package [websocket-
client](http://pypi.python.org/pypi/websocket-client/) to handle some
client-server communication.
Assume I do the following:
import websocket
MAX_TIMEOUT = 1 * 60 # Maximum time to wait to establish the connection to the server.
ws = websocket.create_connection("ws://128.52.195.211:8080/websocket")
ws.settimeout(MAX_TIMEOUT)
ws.send("Hello, World!")
How many attempts will `ws.send()` make to send the message to the server?
Only one?
Answer: If you look at the [library source code](https://github.com/liris/websocket-
client/blob/master/websocket/_core.py), it clearly shows that there's no retry
implemented.
`ws.send` creates frames (chunks) from the payload, and then sends each frame
one by one in `ws.send_frame`.
|
How to scrape javascript webpage using python standard libs only
Question: I have to scrape a website that uses JavaScript to display content. I have to
use standard libs only, as I will run this script on a server where there is
no browser. I found Selenium, but it requires a browser, which in my case is
not possible to install.
Any idea or solution?
Answer: Have a look at Ghost.py <http://jeanphix.me/Ghost.py/>. It doesn't require a
browser.
pip install Ghost.py
from ghost import Ghost
ghost = Ghost()
page, resources = ghost.open('http://stackoverflow.com/')
|
New python versions add to the existing ones, rather than upgrading
Question: I'm new to Python. I installed Python 3.4 on OS X some time ago and now I
have installed Python 3.5 using the installer you can download from the site.
I noticed that in /Library/Frameworks/Python.framework/Versions/ I have both
3.4 and 3.5. I wasn't expecting that - I was expecting an upgrade where 3.5
replaced 3.4
So, if I run python3.5 and I try to import the packages I installed when using
3.4, they are not found. Furthermore if I use pip install to reinstall them,
it says the packages are already installed, therefore I can see that it's
pointing to the 3.4 version.
What am I doing wrong? I assumed that installing the new Python would upgrade
my existing installation (bringing installed packages with it) rather than add
a completely new install.
I'm not sure what to do now:
1. Should I keep every old version?
2. Should I manually change which pip is used every time?
3. (is there a more streamlined update procedure for next time?)
Answer: A lot of Python packages are 3rd party. The community is always moving forward
and this may take some getting used to!
That said, my recommendation is to start using venv. It gives you (mostly)
isolated Python virtual environments in which you can install whatever
packages you like (via pip) without polluting the global installation. This
also allows you to configure various virtual environments with varying
packages and versions. It's really handy!
Link: <https://docs.python.org/3.4/library/venv.html>
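For example (a sketch assuming the `python3.5` from the installer is on your
PATH; `requests` is just a stand-in for whatever package you need):
    $ python3.5 -m venv ~/envs/myproject
    $ source ~/envs/myproject/bin/activate
    (myproject) $ pip install requests   # installed into this venv only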
|
python, find and print specific cells in csv files that are in different directories
Question: I have different csv files in different directories, and I want to find
specific cells in different columns that correspond to a specific date in my
input.txt file. Here is what I have so far:
import re, csv
if __name__ == '__main__':
Input=open('Input.txt','r');
output = []
for i, line in enumerate(Input):
if i==0:
header_Input = Input.readline().replace('\n','').split(',');
else:
date_input = Input.readline().replace('\n','').split(',');
a=os.walk("path to the directory")
[x[0] for x in os.walk("path to the directory")]
print(a)
b=next(os.walk('.'))[1] # immediate child directories.
for dirname, dirnames, filenames in os.walk('.'):
# print path to all subdirectories first.
for subdirname in dirnames:
print(os.path.join(dirname, subdirname))
# print path to all filenames.
for filename in filenames:
#print(os.path.join(dirname, filename))
csvfile = 'csv_file'
if csvfile in filename:
print(os.path.join(dirname, filename))
Now I have the csv files, so i need to find the date_input in every file, and
print the line that contains all the information. Or if possible, to get only
the cells that are in the columns with header == header_input.
Answer: This is not intended to be a full answer to your question. But you may want to
consider replacing
for i, line in enumerate(Input):
if i==0:
header_Input = Input.readline().replace('\n','').split(',');
else:
date_input = Input.readline().replace('\n','').split(',');
with
header_Input = Input.readline().strip().split(',')
date_input = Input.readline().strip().split(',')
The `enumerate(Input)` expression reads lines from the file, and so do calls
to `readline()` in the loop body. This will most likely result in some
unfortunate results like reading alternating lines from the file.
The `strip()` method removes whitespace from the start and end of the line.
Alternatively you may want to know that `s[:-1]` strips off the last character
of `s`.
|
Using a for loop in python to generate mySQL insert statements
Question: I am working on a Python script that takes an incoming string of data from
several Arduinos and reads, splits and inserts that data into a database.
The problem is the sensor data varies in the number of values depending on
what kind of sensor is used. I cannot figure out the proper way to loop
through the separated pieces and insert them properly.
A '1' specifies a 10HS sensor and needs 1 space for the incoming value.
The String comes in like this for 10HS sensors:
"Cucumber2015,Arduino01,1,20150918124200,25.3,75.5,**_1_ ,12 .....**
A '2' specifies a GS3 sensor and needs 3 spaces for the incoming values; the string then looks like this:
"Cucumber2015,Arduino01,1,20150918124200,25.3,75.5,**_2_ ,12,24,23 ......**
The for loop should repeat until all the sensor values have an insert
statement for their respective tables.
I have tried the code shown below and keep getting errors.
How can I accomplish this? Besides the syntax problem, am I going about this
correctly?
**My current error**
File "serialToDbV3.py", line 50
index =index+1
^
SyntaxError: invalid syntax
**Python code**
#!/usr/bin/python
import serial
import MySQLdb
#establish connection to MySQL. You'll have to change this for your database.
dbConn = MySQLdb.connect("localhost","python_user","password","IrrigationDB") or die ("could not connect to database")
#open a cursor to the database
cursor = dbConn.cursor()
device = '/dev/ttyUSB0' #this will have to be changed to the serial port you are using
baudrate = 9600
def getSerialData():
try:
print "Trying...",device
arduino = serial.Serial(device, baudrate)
except:
print "Failed to connect on",device
try:
print "Trying to get data"
next(arduino)
data = arduino.readline() #read the data from the Arduino
pieces = data.split(",") #split the data by the tab
print "Data: %s" % data
print "Piece 0: ProjectID %s" % pieces[0]
print "Piece 1: ArduinoID %s" % pieces[1]
print "Piece 2: Plot# %s" % pieces[2]
print "Piece 3: SQLTime %s" % pieces[3]
print "Piece 4: AirTemp %s" % pieces[4]
print "Piece 5: Humidity %s" % pieces[5]
print "Piece 6: SensType %s" % pieces[6]
print "Piece 7: SensData %s" % pieces[7]
print "Piece 8: %s" % pieces[8]
print "Piece 9: %s" % pieces[9]
print "Piece 10: %s" % pieces[10]
#Here we are going to insert the data into the Database
try:
print "Trying insertion..."
cursor.execute("INSERT IGNORE INTO `IrrigationDB`.`Project`(`idProject`)VALUES (%s)", (pieces[0]))
cursor.execute("INSERT IGNORE INTO `IrrigationDB`.`Arduino`(`idArduino`,`FK_ProjectID`)VALUES (%s,%s)", (pieces[1],pieces[0]))
cursor.execute("INSERT IGNORE INTO `IrrigationDB`.`Plot`(`idPlot`,`FK_ArduinoID`)VALUES (%s,%s)", (pieces[2],pieces[1]))
cursor.execute("INSERT INTO `IrrigationDB`.`Reading`(`DateAndTime`,`airTempC`,`relativeHumidity`,`FK_PlotID`)VALUES (%s,%s,%s,%s)", (pieces[3],pieces[4],pieces[5],pieces[2]))
startingPosition = 6
for index in xrange(startingPosition, len(pieces), step):
if pieces[6] == 1 :
cursor.execute("INSERT INTO `IrrigationDB`.`10HS_Sensor`(`id10HS_Sensor`,`DielectricPermittivity`)VALUES (%s,%s,%s)", (pieces[2],pieces[i+1])
index =index+1
if pieces[6] == 2 :
cursor.execute("INSERT INTO `IrrigationDB`.`GS3_Sensor`(`idGS3_Sensor`,`DielectricPermittivity`,`soilTempC`,`electricalConductivity`)VALUES (%s,%s,%s)", (pieces[2],pieces[i+1],pieces[i+2],pieces[i+3]))
index=index+3
except MySQLdb.IntegrityError:
print "failed to insert data"
except:
print "Failed to get data from Arduino!"
val = 0
while val == 0 :
getSerialData()
Answer:
cursor.execute("INSERT INTO `IrrigationDB`.`10HS_Sensor`(`id10HS_Sensor`,`DielectricPermittivity`)VALUES (%s,%s,%s)", (pieces[2],pieces[i+1])
You're missing a parentheses at the end, it should be:
cursor.execute("INSERT INTO `IrrigationDB`.`10HS_Sensor`(`id10HS_Sensor`,`DielectricPermittivity`)VALUES (%s,%s,%s)", (pieces[2],pieces[i+1]))
|
class GzipFile(io.BufferedIOBase): AttributeError: 'module' object has no attribute 'BufferedIOBase' installing python tables
Question: I have a problem while installing PyTables. There is something missing or
corrupt in my path or dependencies that I cannot solve.
When I run a Python program that uses tables, it returns:
File "/usr/lib/python2.7/dist-packages/io.py", line 331, in
def bget(imfile,shp=None,unpackstr=N.int16,bytesperpixel=2.0,sliceinit=0):
AttributeError: 'module' object has no attribute 'int16'
And simply importing tables from python shell:
...
File "/usr/lib/python2.7/gzip.py", line 36, in
class GzipFile(io.BufferedIOBase):
AttributeError: 'module' object has no attribute 'BufferedIOBase'
tables is installed in:
/usr/local/lib/python2.7/dist-packages/
My PYTHONPATH is:
['', '/usr/local/lib/python2.7/dist-packages/bbfreeze-1.0.2-py2.7-linux-x86_64.egg', '/usr/local/lib/python2.7/dist-packages/altgraph-0.9-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/phylonetwork-1.0b6-py2.7.egg', '/usr/local/lib/python2.7/dist-packages/tables-3.2.0-py2.7-linux-x86_64.egg', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode']
What am I missing?
Any healp would be appreaciated.
Answer: Facing the same issue, I found this interesting answer: [How to solve
AttributeError when importing
igraph?](http://stackoverflow.com/questions/6315440/how-to-solve-
attributeerror-when-importing-igraph)
You should check for modules named `io` and `N` in your own sources. Make sure
they do not shadow the builtin `io` module or numpy (which appears to be
imported `as N` in the offending file).
Renaming them solved this issue for me.
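A quick way to find the culprit is to ask Python where the module is loaded
from (a small diagnostic sketch):
    import io
    print(io.__file__)
    # The standard library lives at /usr/lib/python2.7/io.py(c); if this prints
    # a path under dist-packages or your project, that file is shadowing it.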
|
Parsing JSON array in Python to create a delimited string
Question: Brand new to Python and trying to parse a JSON array like the one below:
[
{"Event":"start","EventDateTime":"2015-09-15T03:45:16.681428Z"},
{"Event":"process","EventDateTime":"2015-09-15T03:45:16.681428Z"},
{"Event":"end","EventDateTime":"2015-09-15T03:45:16.681428Z"}
]
I need the output to be on string tab delimited fields and new line delimted
row, like this:
start \t 2015-09-15T03:45:16.681428Z \n
process \t 2015-09-15T03:45:16.681428Z \n
end \t 2015-09-15T03:45:16.681428Z
here's the code I have so far:
import json
if not j or not i:
return None
try:
arr = json.loads(j)
except ValueError:
return None
if len(arr) <= 0:
return None
row=i
for li in arr
elem = json.loads(li, object_pairs_hook=collections.OrderedDict)
row=row + '\t' + elem.Key + '\t' + elem.Value + \n
return row
First I get indentation errors; after fixing the indents I get an error about
'collections' not being defined.
Is there a way to do what I need without collections? When I remove the
collections object I get other errors.
Thanks!!
Answer:
import json
j = """[
{"Event":"start","EventDateTime":"2015-09-15T03:45:16.681428Z"},
{"Event":"process","EventDateTime":"2015-09-15T03:45:16.681428Z"},
{"Event":"end","EventDateTime":"2015-09-15T03:45:16.681428Z"}
]"""
j = json.loads(j)
for item in j:
print '%s\t%s' % (item['Event'], item['EventDateTime'])
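If you need the result as one tab/newline delimited string rather than printed
lines (a sketch using the same `j`):
    rows = '\n'.join('%s\t%s' % (item['Event'], item['EventDateTime']) for item in j)
    print rows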
|
Python 3 Regex Issue
Question: Why does the following `string1` regexp not match? I have tested it using
[this](https://regex101.com/ "this") and it appears my regex-fu is accurate,
so I figure I must be missing something in the python implementation:
import re
pattern = r".*W([0-9]+(\.5)?)[^\.]?.*$"
string1 = '6013-SFR6W4.5'
string2 = '6013-SFR6W4.5L'
print(re.match(pattern, string1)) # the return value is None
print(re.match(pattern, string2)) # this returns a re.match object
[Here](http://imgur.com/gWn1XnE "here") is a screenshot of an interactive
session showing this issue.
**EDIT**
sys.version outputs 3.4.3
Answer: When I run the code you provided, I get return values on both:
$ python3 test.py
<_sre.SRE_Match object at 0x6ffffedc3e8>
<_sre.SRE_Match object at 0x6ffffedc3e8>
|
Mongodb lock prevention
Question: After some time looking at the mongodb documentation and the pymongo API, I am
still no clearer on what route I take as the way forward (more confused now
that when I started) . My problem concerns locks ... not so much that I have
tested and found there to be major concurrency problems, but that I don't want
to run into them after the fact.
I have a tkinter script with several functions, all of them need access to the
same document collection, and most of them access the same single document
within that collection.
client = MongoClient()
def 1 ():
glob_client = client['ALPHA']['A-Z']
#do work:
"""Also call subprocesses that use the same database document (glob_client) in another script.
There can be 3 -10 instances of this subprocess running, listening to various http streams in a while loop,
collecting data that can come in at 100's of times per second."""
def2 ():
glob_client = client['ALPHA']['A-Z']
...
def32 ():
glob_client = client['ALPHA']['A-Z']
And the called subprocess(in separate scripts), multiple instances possible:
client = MongoClient()
glob_client = client['ALPHA']['A-Z']
while True:
#do work with glob_client; updates, push, pull, reads,
So, would it be enough in this case to just use client.close() in every
function?
def 1 ():
glob_client = client['ALPHA']['A-Z']
#do work
client.close()
Similarly in the while loops:
while True:
#do work with glob_client; updates, push, pull, reads,
Client.close()
Would that suffice, or should I be looking to shard in this case? Or should I
just go back to SQL!
Mongodb 3.0.6 32-bit, pymongo 3.03, python 2.7.
Answer: As far as avoiding a lock in this case I put the client in a separate script,
foo.py:
import pymongo
CLIENT = pymongo.MongoClient(maxPoolSize=None,w=1)
COLLEC = CLIENT['ABC']['XYZ']
And then imported the collection anywhere I needed it throughout various
scripts:
from foo import COLLEC
|
Comparing first element of the consecutive lists of tuples in Python
Question: I have a list of tuples, each containing two elements. The first element of
a few sublists is common. I want to compare the first elements of these
sublists and append the second elements into one list. Here is my list:
myList=[(1,2),(1,3),(1,4),(1,5),(2,6),(2,7),(2,8),(3,9),(3,10)]
I would like to make a list of lists out of it which looks something like this:
NewList=[(2,3,4,5),(6,7,8),(9,10)]
I hope there is an efficient way.
Answer: You can use an
[OrderedDict](https://docs.python.org/2/library/collections.html#collections.OrderedDict)
to group the elements by the first subelement of each tuple:
myList=[(1,2),(1,3),(1,4),(1,5),(2,6),(2,7),(2,8),(3,9),(3,10)]
from collections import OrderedDict
od = OrderedDict()
for a,b in myList:
od.setdefault(a,[]).append(b)
print(list(od.values()))
[[2, 3, 4, 5], [6, 7, 8], [9, 10]]
If you really want tuples:
print(list(map(tuple,od.values())))
[(2, 3, 4, 5), (6, 7, 8), (9, 10)]
If you did not care about the order the elements appeared and just wanted the
most efficient way to group you could use a
[collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict):
from collections import defaultdict
od = defaultdict(list)
for a,b in myList:
od[a].append(b)
print(list(od.values()))
Lastly, if your data is in order as per your input example i.e sorted you
could simply use
[itertools.groupby](https://docs.python.org/2/library/itertools.html#itertools.groupby)
to group by the first subelement from each tuple and extract the second
element from the grouped tuples:
from itertools import groupby
from operator import itemgetter
print([tuple(t[1] for t in v) for k,v in groupby(myList,key=itemgetter(0))])
Output:
[(2, 3, 4, 5), (6, 7, 8), (9, 10)]
Again the groupby will only work if your data is _sorted_ by at least the
first element.
Some timings on a reasonable sized list:
In [33]: myList = [(randint(1,10000),randint(1,10000)) for _ in range(100000)]
In [34]: myList.sort()
In [35]: timeit ([tuple(t[1] for t in v) for k,v in groupby(myList,key=itemgetter(0))])
10 loops, best of 3: 44.5 ms per loop
In [36]: %%timeit od = defaultdict(list)
for a,b in myList:
od[a].append(b)
....:
10 loops, best of 3: 33.8 ms per loop
In [37]: %%timeit
dictionary = OrderedDict()
for x, y in myList:
if x not in dictionary:
dictionary[x] = [] # new empty list
dictionary[x].append(y)
....:
10 loops, best of 3: 63.3 ms per loop
In [38]: %%timeit
od = OrderedDict()
for a,b in myList:
od.setdefault(a,[]).append(b)
....:
10 loops, best of 3: 80.3 ms per loop
If order matters and the data is _sorted_ , go with the _groupby_ , it will
get even closer to the defaultdict approach if it is necessary to map all the
elements to tuple in the defaultdict.
If the data is not sorted or you don't care about any order, you won't find a
faster way to group than using the _defaultdict_ approach.
|
Unable to fetch element using scrapy
Question: I have wrote a spider to scrap a few elements from a website but the problem
is i am unable to fetch some of the elements and some are working fine. Please
help me in right direction.
Here is my spider code:
from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from ScrapyScraper.items import ScrapyscraperItem
class ScrapyscraperSpider(CrawlSpider) :
name = "rs"
allowed_domains = ["mega.pk"]
start_urls = ["http://www.mega.pk/mobiles/"]
rules = (
Rule(SgmlLinkExtractor(allow = ("http://www\.mega\.pk/mobiles_products/[0-9]+\/[a-zA-Z-0-9.]+",)), callback = 'parse_item', follow = True),
)
def parse_item(self, response) :
sel = Selector(response)
item = ScrapyscraperItem()
item['Heading'] = sel.xpath('//*[@id="main1"]/div[1]/div[1]/div/div[2]/div[2]/div/div[1]/h2/span/text()').extract()
item['Content'] = sel.xpath('//*[@id="main1"]/div[1]/div[1]/div/div[2]/div[2]/div/p/text()').extract()
item['Price'] = sel.xpath('//*[@id="main1"]/div[1]/div[1]/div/div[2]/div[2]/div/div[2]/div[1]/div[2]/span/text()').extract()
item['WiFi'] = sel.xpath('//*[@id="laptop_detail"]/tbody/tr/td[contains(. ,"Wireless")]/text()').extract()
return item
Now I am able to get Heading, Content and Price, but WiFi returns nothing. The
point where I get totally confused is that the same xpath works in Chrome but
not in Python (Scrapy).
Answer: I'm still learning myself, though I think I may see your problem.
I would imagine you are looking to find the wifi status - in which case you
need the text of the span of the next element:
import urllib2
import lxml.html as LH
url = 'http://www.mega.pk/laptop_products/13242/Apple-MacBook-Pro-with-Retina-Display-Z0RG0000V.html'
response = urllib2.urlopen(url)
html = response.read()
doc=LH.fromstring(html)
heading = doc.xpath('//*[@id="main1"]/div[1]/div[1]/div/div[2]/div[2]/div/div[1]/h2/span/text()')
content = doc.xpath('//*[@id="main1"]/div[1]/div[1]/div/div[2]/div[2]/div/p/text()')
price = doc.xpath('//*[@id="main1"]/div[1]/div[1]/div/div[2]/div[2]/div/div[2]/div[1]/div[2]/span/text()')
# locate the <td> holding the "Wireless" label, then read the text of the
# <span> inside the element that follows it
wifi_location = doc.xpath('//*[@id="laptop_detail"]//tr/td[contains(. ,"Wireless")]')[0]
wifi_status = wifi_location.getnext().find('span').text
I only checked a single page, but hopefully this helps. I was unsure why the
xpath does not work; I often find that including `tbody` does not behave
properly in this setting, so I typically opt to skip straight to `td` via `//`.
**Edit**
Found the reason: Chrome inserts `tbody` when it is not present in the
original HTML. Scrapy parses the original HTML, which lacks this element.
[Problem with lxml xpath for html table
extracting](http://stackoverflow.com/questions/5586296/problem-with-lxml-
xpath-for-html-table-extracting)
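Applied back to the Scrapy spider, a sketch of the adjusted selector (the `following-sibling` step is my assumption about where the value sits, based on the laptop page checked above):
item['WiFi'] = sel.xpath(
    '//*[@id="laptop_detail"]//tr/td[contains(., "Wireless")]'
    '/following-sibling::td[1]/span/text()').extract()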
|
Running a module of a Python package
Question: I am trying to run a package from GitHub called
[plagcomps](https://github.com/NoahCarnahan/plagcomps).
I try to execute the extrinsic_testing module using the following command:
python -m plagcomps.extrinsic.extrinsic_testing
I get an error as follows:
/usr/bin/python: No module named dbconstants
I tried looking for this module with pip but can't find it.
Is there something else I am missing?
Answer: Looking at [the file that produces the
exception](https://github.com/NoahCarnahan/plagcomps/blob/master/extrinsic/extrinsic_testing.py#L19)
(next time, please post the full traceback):
from ..dbconstants import username, password, dbname
...
url = "postgresql://%s:%s@%s" % (username, password, dbname)
It seems that it expects you to create a file named `dbconstants.py` at the
top level of the `plagcomps` package (the relative import `..dbconstants`
goes up one package level) with the following content:
username = '...'
password = '...'
dbname = '...'
Replace the dots with the information about your postgres database.
|
Python lambda function printing <function <lambda> at 0x7fcbbc740668> instead of value
Question: I am a beginner in Python, and I was playing around with lambda functions. I
was writing a program using a lambda function to print characters whose ASCII
value is one greater than that of the input characters. My code is
#!/usr/bin/python
import sys

try:
    word = sys.argv[1]
except:
    print "No arguments passed"
    sys.exit(1)

def convert_ascii(char):
    return "".join(chr(ord(char) + 1))

for i in word:
    print convert_ascii(i)
    print lambda x: chr(ord(i) + 1)
I have a function convert_ascii that does the same thing as the lambda.
However, my output is
/usr/bin/python2.7 /home/user1/PycharmProjects/test/Tut1/asciipl2.py "abc def ghi"
b
<function <lambda> at 0x7f0310160668>
c
<function <lambda> at 0x7f0310160668>
d
<function <lambda> at 0x7f0310160668>
!
<function <lambda> at 0x7f0310160668>
e
<function <lambda> at 0x7f0310160668>
f
<function <lambda> at 0x7f0310160668>
g
<function <lambda> at 0x7f0310160668>
!
<function <lambda> at 0x7f0310160668>
h
<function <lambda> at 0x7f0310160668>
i
<function <lambda> at 0x7f0310160668>
j
<function <lambda> at 0x7f0310160668>
The purpose of this script is to learn lambda, though there are other ways to
write this program. Please let me know what I am doing wrong. (The process
finished with exit code 0.)
Answer: Currently you are printing a function object. You have to call the function:
assign it to a variable, then call that variable with an argument.
for i in word:
    print convert_ascii(i)
    fun = lambda x: chr(ord(x) + 1)
    print fun(i)
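If you just want the value without naming the function, a minimal sketch calls the lambda inline:
for i in word:
    print (lambda x: chr(ord(x) + 1))(i)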
|
Python zipfile module adds files with null bytes instead of correct content
Question: When I use `zipfile.ZipFile.writestr`, the file contains the correct number of
characters afterwards, but all of them are null bytes.
Minimal example:
import zipfile
z=zipfile.ZipFile("test.zip", "w")
z.writestr("foo", "test")
z.close()
The resulting test.zip has a file "foo" inside, which contains 4 null bytes.
Answer: Got the same problem, and it seems ZipInfo is the obvious workaround.
import zipfile, os
name = 'foo.txt'
data = b'This is a test text.'
open(name, 'wb').write(data)
zipfile.ZipFile('write.zip', 'w').write(name) # OK for Ark
zipfile.ZipFile('writestr.zip', 'w').writestr(name, data) # nulls by Ark
wrt_attr = zipfile.ZipFile('write.zip').getinfo(name)
wrts_attr = zipfile.ZipFile('writestr.zip').getinfo(name)
os.remove(name)
os.remove('write.zip')
os.remove('writestr.zip')
for attr in wrt_attr.__slots__:
    if getattr(wrt_attr, attr) != getattr(wrts_attr, attr):
        print attr, getattr(wrt_attr, attr), getattr(wrts_attr, attr)

attr = 'external_attr'
print oct(getattr(wrt_attr, attr) >> 16), oct(getattr(wrts_attr, attr) >> 16)
The [ZIP spec](https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT)
says `external_attr` should be set to zero if the content came from `stdin`.
However, `writestr` constructs an
[invalid](https://github.com/python/cpython/blob/3.0/Lib/zipfile.py#1096)
`external_attr` when the first argument is a str. A valid value would be
either `0o100xxx` (a regular file with umasked permissions) or zero (as the
spec says), but not `0oxxx` (with the file type bits absent).
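A minimal sketch of the ZipInfo workaround (the permission bits here are my assumption; adjust as needed):
import zipfile

info = zipfile.ZipInfo('foo.txt')
info.external_attr = 0o100644 << 16  # regular file with rw-r--r-- permissions
with zipfile.ZipFile('fixed.zip', 'w') as z:
    z.writestr(info, b'This is a test text.')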
|
redirect all requests from one domain to another with Google App Engine but keep static routing rules in yaml
Question: I have a GAE app serving static files defined by rules in the yaml file under
two different domain names as configured in DNS, an old one and a new one, but
otherwise it's the same content served for each. I'd like to redirect requests
from the old domain to the new domain. I've seen [this
question](http://stackoverflow.com/questions/1058119/how-to-redirect-all-urls-
with-google-app-engine), but from what I can tell that approach loses the
ability to use the static asset handlers in the yaml, and I think I would have
to set up static asset serving explicitly in my main.py. Is there a simple way
(ideally in the yaml file itself) to do a redirect when the hostname is the
old domain, but keep my static file rules in place for the new domain?
**Update**
Here's a complete solution that I ended up using:
### dispatch.yaml ###

dispatch:
- url: "*my.domain/*"
  module: redirect-module

### redirector.yaml ###

module: redirect-module
runtime: python27
threadsafe: true
api_version: 1

skip_files:
- ^(?!redirector.py$)

handlers:
# Redirect everything via our redirector
- url: /.*
  script: redirector.app

### redirector.py ###

import webapp2

def get_redirect_uri(handler, *args, **kwargs):
    return 'https://my.domain/' + kwargs.get('path')

app = webapp2.WSGIApplication([
    webapp2.Route('/<path:.*>', webapp2.RedirectHandler, defaults={'_uri': get_redirect_uri}),
], debug=False)
Some extra docs:
<https://cloud.google.com/appengine/docs/python/modules/routing#routing_with_a_dispatch_file>
Answer: AFAIK you can't do redirection for the static assets, since GAE serves them
directly according to the .yaml file rules, without even hitting your app
code.
You could add a module (let's call it **redirect-module** for example) to your
app, route ALL old domain URLs to it using a dispatcher file and use a dynamic
handler in this module to redirect URLs to the new domain equivalents, along
the lines suggested in the answers to the question you referenced. The new
domain requests will continue to work unmodified, served either as static
assets or by the existing module(s) of your app. The **dispatch.yaml** file
would look like this:
application: your-app-name

dispatch:
- url: "your.old.domain.com/*"
  module: redirect-module
Another thought that comes to mind (I didn't actually try this, so I'm unsure
if it would address your problem) is to avoid the redirect altogether: instead
of mapping your app to two different domains, map it only to the new domain
and make the old domain a DNS CNAME/alias of the new one.
|
Python exception not caught
Question: I am currently playing with sockets and JSON in Python where I have the
following code:
class RCHandler(SocketServer.BaseRequestHandler):
    def setup(self):
        pass

    def handle(self):
        raw = self.request.recv(1024)
        recv = raw.strip()
        if not recv:
            return

        # do some logging
        logging.debug("RECV: (%s)" % recv)

        try:
            data = json.loads(recv)
        except:
            logging.exception('JSON parse error: %s' % recv)
            return

        if recv == 'quit':
            return
The problem is that when I send a faulty JSON string, e.g. `'{"method":
"test"'`, the exception seems to be caught, but I still get the following
traceback:
DEBUG: 20/09/2015 12:33:57 - RECV: ({"method": "test")
ERROR: 20/09/2015 12:33:57 - JSON parse error: {"method": "test"
Traceback (most recent call last):
File "./remoteControl.py", line 68, in handle
data = json.loads(recv)
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting object: line 1 column 17 (char 16)
What am I missing here? I am not supposed to get a traceback if I catch the
exception, right? My server class extends ThreadingTCPServer, if that has
anything to do with it.
When I ran another python script:
#!/usr/bin/python
import json
import socket

d = '{"method": "test"'
try:
    data = json.loads(d)
except:
    print "fail"
Answer: You _did_ catch the exception, but you are telling `logging` to include the
exception information (which includes the traceback), by using the
`Logger.exception` method here:
except:
    logging.exception('JSON parse error: %s' % recv)
From the [method
documentation](https://docs.python.org/2/library/logging.html#logging.exception):
> Logs a message with level `ERROR` on the root logger. The arguments are
> interpreted as for `debug()`, except that any passed `exc_info` is not
> inspected. **Exception info is always added to the logging message**. This
> function should only be called from an exception handler.
_Emphasis mine_.
Also see the [`Formatter.formatException()`
documentation](https://docs.python.org/2/library/logging.html#logging.Formatter.formatException);
it is this method that does the exception formatting here:
> Formats the specified exception information (a standard exception tuple as
> returned by `sys.exc_info()`) as a string. This default implementation just
> uses `traceback.print_exception()`. The resulting string is returned.
and
[`traceback.print_exception()`](https://docs.python.org/2/library/traceback.html#traceback.print_exception)
does this:
> Print exception information and up to _limit_ stack trace entries from
> _traceback_ to _file_.
If you did not want the exception to be included, use `logging.error()`
instead:
except:
    logging.error('JSON parse error: %s' % recv)
or provide a custom [`Formatter`
subclass](https://docs.python.org/2/library/logging.html#logging.Formatter)
that formats the exception information without the traceback.
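For reference, `logging.exception(msg)` behaves like `logging.error(msg, exc_info=True)`; a minimal sketch of the difference:
import json
import logging

try:
    json.loads('{"method": "test"')
except ValueError:
    logging.error('message only, no traceback')
    logging.error('message plus traceback', exc_info=True)  # same as logging.exception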
|
Slate in Python stumbles on foothills
Question: I'm trying to parse PDFs into text and wanted to start with Slate. However,
just following the basic example posted everywhere, I get the following:
>>> import slate
>>> with open('pytest.PDF') as fp:
... doc = slate.PDF(fp)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/slate/slate.py", line 52, in __init__
self.append(self.interpreter.process_page(page))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/slate/slate.py", line 36, in process_page
self.device.outfp.buf = ''
AttributeError: 'cStringIO.StringO' object has no attribute 'buf'
Any ideas?
Answer: This can be fixed by editing slate.py and changing line 36, where the error
occurred, to read:
self.device.outfp.truncate(0)
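Once that change is made, the basic example should run; a minimal sketch (opening in binary mode is my addition, but it is safer for PDFs):
import slate

with open('pytest.PDF', 'rb') as fp:
    doc = slate.PDF(fp)

print doc[0]  # text of the first page; slate.PDF behaves like a list of page texts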
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.