Insert csv file into SQL Server database: error with listdir?
Question: I know it's bad to just throw code here and ask for help troubleshooting, but this
problem seems a little over my head.
The code is supposed to loop through all the files and subfolders. I don't
think there is any logic error here. The problem is that the same file gets
processed again, which causes the DB insert to fail on a primary key
constraint.
This is my code:
import csv
import pypyodbc
import os
import sys

extension = ".tsv"
connStr = """DSN=database_test;"""
sys.stdout = open('c:\\temp\\python.log', 'w')
print 'starting ...'

def LoadFile(path):
    i = 0
    for item in os.listdir(path):  # loop through items in dir
        full_path = os.path.join(path, item)
        if os.path.isfile(full_path) and full_path.endswith(extension):  # check for ".tsv" extension
            if full_path.find('IM') > 0:
                table_name = 'table_a'
            else:
                table_name = 'table_b'
            if os.stat(full_path).st_size > 0:
                print "Processing file:", i, "|", full_path
                i = i + 1
                with open(full_path, 'r') as f:
                    reader = csv.reader(f, delimiter='\t')
                    columns = next(reader)
                    query = 'insert into ' + table_name + '({0}) values ({1})'
                    crsr = cnxn.cursor()
                    for data in reader:
                        query = query.format(', '.join(columns), ', '.join('?' * len(columns)))
                        #print(query, "with ", data)
                        if (data[1] != ''):
                            crsr.execute(query, data)
                    crsr.commit()
                    crsr.close()
        elif os.path.isdir(full_path):
            print "Process Folder: ", full_path
            LoadFile(full_path)
        else:
            print("invalid file name:", item)
    print "Process Folder total files: ", i, ":", full_path
    return

cnxn = pypyodbc.connect(connStr)
dir_name = 'X:\\TopLevelFolder'
LoadFile(dir_name)
cnxn.close()
print("Completed")
Answer: My original question was a little misleading... the code didn't work because
of some hardware issues that cannot be explained. After I switched to Python
2.7 and reinstalled all the libraries, the code is working fine.
|
How to match two equal string with IF statement in python
Question: My Python code:
import re

output = "your test contains errors"
match2 = re.findall('(.* contains errors)', output)
mat2 = "['your test contains errors'] "
if match2 == mat2:
    print "PASS"
In the above Python program, I have a value in 'match2' and in 'mat2'. If they are
the same, it should print PASS.
I am not getting any error when I run this program, and printing "match2" and
"mat2" appears to give the same output, but "if match2 == mat2" does not
print 'PASS'.
Can anyone please help me fix this?
Thanks in advance.
Thanks,
Kumar.
Answer: [`re.findall`](https://docs.python.org/2/library/re.html#re.findall) returns a
list, not a string. So `mat2` should be a list, too:
mat2 = ['your test contains errors']
If you want to check `your test contains errors` in the string, you can use
`in` operator:
if "your test contains errors" in output:
print "PASS"
|
Debian No Module named numpy
Question: I've installed Python Numpy on Debian using...
> apt-get install python-numpy
But when run the Python shell I get the following...
Python 2.7.10 (default, Sep 9 2015, 20:21:51)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
When I view the contents of `/usr/local/lib/python2.7/site-packages/` I
noticed numpy is not listed.
If I install it via pip, i.e. `pip install numpy`, it works just fine. However, I
want to use the apt-get method. What am I doing wrong?
Other:
> echo $PYTHONPATH
/usr/local/lib/python2.7
dpkg -l python-numpy...
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===============================================-============================-============================-====================================================================================================
ii python-numpy 1:1.8.2-2 amd64 Numerical Python adds a fast array facility to the Python language
> Python 2.7.10
['', '/usr/local/lib/python2.7', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7/plat-linux2', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages']
which -a python...
/usr/local/bin/python
/usr/bin/python
echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Answer: As you can tell from your `which` result, the python you are running when just
typing `python` is `/usr/local/bin/python`.
It's a python you probably installed there yourself, as [Debian will never put
anything in `/usr/local`](https://www.debian.org/doc/debian-policy/ch-opersys.html#s9.1.2)
by itself (except for empty directories).
How? Well, by running `pip` for instance. As a rule, you should never use
`pip` outside of a [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/),
because it will install stuff on your system that your package manager will not
know about, and maybe break things, like what you see on your system.
So, if you run `/usr/bin/python`, it should see the numpy package you
installed using your package manager.
How to fix it? Well, I would clear out anything in `/usr/local` (beware, it will
definitely break things that rely on what you installed locally). Then I
would `apt-get install python-virtualenv`, and always work with a virtualenv.
$ virtualenv -p /usr/bin/python env
$ . env/bin/activate
(env)$ pip install numpy
(env)$ python
>>> import numpy
>>>
That way, packages will be installed in the `env` directory. You do all this
as a regular user, not root. And your different projects can have different
environments with different packages installed.
|
Match text between two strings with regular expression
Question: I would like to use a regular expression that matches any text between two
strings:
Part 1. Part 2. Part 3 then more text
In this example, I would like to search for "Part 1" and "Part 3" and then get
everything in between which would be: ". Part 2. "
I'm using Python 2x.
Answer: Use `re.search`
>>> import re
>>> s = 'Part 1. Part 2. Part 3 then more text'
>>> re.search(r'Part 1\.(.*?)Part 3', s).group(1)
' Part 2. '
>>> re.search(r'Part 1(.*?)Part 3', s).group(1)
'. Part 2. '
Or use `re.findall` if there is more than one occurrence.
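For example (an illustrative session), `re.findall` returns every non-overlapping captured span at once:

>>> re.findall(r'Part 1(.*?)Part 3', 'Part 1. Part 2. Part 3 then Part 1! Part 3')
['. Part 2. ', '! ']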
|
How to pass socket objects between two clients in any language C++, Python, Java, C
Question: I have an idea of how basic communication between _client_ and _server_ is
established, so serialized data streams can be passed between _client_ and
_server_. But I want to know how **_socket objects_** can be passed between
two _clients_ : is it possible to **_pass socket objects
between two clients_** so that both share the same socket instance? Please
suggest.
Client class:
import socket
import sys

# create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# connect the socket to the port where server is listening
server_address = ('localhost', 2000)
print >>sys.stderr, 'connecting to %s port %s' % server_address
sock.connect(server_address)

# after connection is established, data can be sent through the socket with sendall() and recv()
try:
    # send data
    message = 'This is Message. It will be repeated'
    print >>sys.stderr, 'sending "%s"' % message
    sock.sendall(message)
    # look for the response
    amount_received = 0
    amount_expected = len(message)
    while amount_received < amount_expected:
        data = sock.recv(16)
        amount_received += len(data)
        print amount_received
        print >>sys.stderr, 'received "%s"' % data
finally:
    print >>sys.stderr, 'closing socket'
    sock.close()
A _Server_ class was created to receive the message from the _client_ and reply with
some message.
Server class:
import socket
import sys

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ("localhost", 2000)
print >>sys.stderr, 'starting up on %s port %s' % server_address
sock.bind(server_address)
sock.listen(1)
while True:
    print >>sys.stderr, 'waiting for connection'
    connection, client_address = sock.accept()
    try:
        print >>sys.stderr, 'connection from', client_address
        while True:
            data = connection.recv(16)
            print >>sys.stderr, 'received "%s"' % data
            if data:
                print >>sys.stderr, 'sending data back to the client'
                connection.sendall(data)
            else:
                print >>sys.stderr, 'no more data from', client_address
                break
    finally:
        connection.close()
After the _server_ is started, the _client_ connects to the _server_ and displays
suitable messages. Now, instead of sending messages between _client_ and
_server_, I want to send the **_socket object_** to another _client_, which could be
achieved using either `TCP` or `UDP`. With `TCP`, serialization of data is
required. I want to know whether there is any way to wrap a **socket object** and pass
it over.
Answer: Socket objects can not be transported (or you know, teleported :D ) to another
language or anything. At most, you can create a protocol by which an instance
of the socket can be transferred to another language. But again, I don't see
how it may help you.
You have a server socket listening on port 2000. And another Java program may
connect to it using a client socket. So, what's the point of sending one of
the sockets to the other? The communication link is somehow twirled. Like, we
could just eat ourselves to regenerate us. But that would be impossible.
Similarly, at most, you can send an instance of the server socket to the Java
program. But on the same computer, Java won't be able to recreate it,
because the port is already being listened on by another program.
Next, if two programs could listen on the same port, that would make stealing
of data and forging quite easy. So, it is not possible for two programs to
listen on the same port.
I think what you are looking for is that two programs combinedly handle the
I/O of the same socket. That is rational, at least.
For that, you should create some sort of bidirectional communication link
between these two processes, like another socket on a different port.
Say S is the Server (the sole owner of the socket S1) meanwhile A and B are
the handlers. S should be listening on two different ports, to which only A and B
are connected. Then any data that comes to S is, at S's discretion, routed to A or B
as appropriate; A or B will reply to that request, and then S will respond
appropriately.
Another approach would be that S is the `main` server socket, and A and B are
servers listening on different ports. Whenever data comes to S, S sends it to
A or B depending on content.
Thirdly, and the messiest solution, would be that A is the server and it
offloads some tasks to B via some sort of communication (server-client or
threads or a subprocess) and they handle data appropriately. The reason for
calling it messy is that one program has to handle two tasks and it's harder to
maintain its functionality.
But still, sharing a socket is like using the same page of a notebook for two
different tasks. Hope it helped.
|
splitting a file randomly in python
Question: I have an input file word.txt. I am trying to split the file 75%-25%
randomly in Python.
def shuffle_split(infilename, outfilename1, outfilename2):
    from random import shuffle
    with open(infilename, 'r') as f:
        lines = f.readlines()
    # append a newline in case the last line didn't end with one
    lines[-1] = lines[-1].rstrip('\n') + '\n'
    traingdata = len(lines) * 75 // 100
    testdata = len(lines) - traingdata
    with open(outfilename1, 'w') as f:
        f.writelines(lines[:traingdata])
    with open(outfilename2, 'w') as f:
        f.writelines(lines[:testdata])
But this code is writing the first 75% of the original file to the first
output file and then the first 25% of the same original file to the second output
file. Could you please suggest a way to solve it?
Answer: The problem is that in this line

f.writelines(lines[:testdata])

you are saying "everything from index 0 to index testdata":

f.writelines(lines[0:testdata])

which is not what you want. Just change it to

f.writelines(lines[traingdata:])

which means "everything from index traingdata to the end of the list". That way
the two slices cover every line exactly once, with no overlap.
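Note also that for the split to actually be random, the lines need to be shuffled before slicing; the question imports `shuffle` but never calls it. A corrected sketch of the whole function:

from random import shuffle

def shuffle_split(infilename, outfilename1, outfilename2):
    with open(infilename, 'r') as f:
        lines = f.readlines()
    # append a newline in case the last line didn't end with one
    lines[-1] = lines[-1].rstrip('\n') + '\n'
    shuffle(lines)  # randomize the order before splitting
    traingdata = len(lines) * 75 // 100
    with open(outfilename1, 'w') as f:
        f.writelines(lines[:traingdata])
    with open(outfilename2, 'w') as f:
        f.writelines(lines[traingdata:])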
|
Python unittest Mock patch object not methods
Question: I am having trouble getting my head around unit testing with Mock in Python. I
have a method `start_thing()` in a class I'd like to test:
class ComplexClass:
    def __init__(self, lots, of, args):
        self.lots = lots
        ..

    def start_thing(self):
        import pdb; pdb.set_trace()
        print "some error!!"
        assert False
The class this belongs to is quite complex and a pain to try and mock
manually. That is why I started to look at using Mock.
I would like to set up a mock that mocks an instance of this class to make it
easy to run a unittest, but not mock the method `start_thing()`, so that the
real `start_thing()` implementation is tested, not a mocked version... so I
created this:
class TestComplexClass(TestCase):
    @patch.object(module.ComplexClass, 'start_thing')
    def test_start_thing(self, mock_method):
        ComplexClass.start_thing()
When running my test, the debug trace, assert, and print in my real
method `start_thing()` are not hit, which suggests I have mocked both the class and the method
- where I just want to mock the object and test the real methods. What am I
doing wrong here? Is that possible?
I have found lots of examples with Mock showing how to create a mock version
of the method I want to test, which I think is kind of pointless since I don't
want to check if it's being called correctly, rather I want to test the
implementation in the real code, and mock the class it belongs to so it's
easier to create.
Perhaps there's something I don't understand about the Mock testing idea as a
whole?
Answer: I don't think you want to mock that class but to stub it out, ex:

import mock
import unittest

class ComplexClassStub(ComplexClass):
    def __init__(self):
        self.lots = None
        self.the_rest_of_the_args = None  # Now your complex class isn't so complex.

class ComplexClassTest(unittest.TestCase):
    def setUp(self):
        self.helper = ComplexClassStub()

    def testStartThing(self):
        with mock.patch.object(self.helper, 'SomethingToMock') as something_mocked:
            expected = 'Fake value'
            actual = self.helper.start_thing()
            self.assertEqual(expected, actual)
            something_mocked.assert_called_once_with()
|
Python regular expression using the OR operator
Question: I am trying to parse a large sample of text files with regular expressions
(RE). I am trying to extract from these files the part of the text which
contains _'vu'_ and ends with a newline _'\n'_.
Patterns differ from one file to another, so I tried to look for combinations
of RE in my files using the _OR_ operator. However, I did not find a way to
automate my code so that the _re.findall()_ function looks for a combination
of RE.
Here is an example of how I tried to tackle this issue, but apparently I still
can not evaluate both my regular expressions and the OR operator in
_re.findall()_ :
import re

def series2string(myserie):
    myserie2 = ' or '.join(serie for serie in myserie)
    return myserie2

def expression(pattern, mystring):
    x = re.findall(pattern, mystring)
    if len(x) > 0:
        return 1
    else:
        return 0

#text example
text = "\n\n (troisième chambre)\n i - vu la requête, enregistrée le 28 février 1997 sous le n° 97nc00465, présentée pour m. z... farinez, demeurant ... à dommartin-aux-bois (vosges), par me y..., avocat ;\n"

#expressions to look out
pattern1 = '^\s*vu.*\n'
pattern2 = '^\s*\(\w*\s*\w*\)\s*.*?vu.*\n'
pattern = [pattern1, pattern2]
pattern = series2string(pattern)
expression(pattern, text)
_Note_ : I circumvented this problem by looking for each pattern in a _for
loop_ but my code would run faster if I could just use _re.findall()_ once.
Answer: Python regular expressions uses the `|` operator for alternation.
def series2string(myserie):
    myserie2 = '|'.join(serie for serie in myserie)
    myserie2 = '(' + myserie2 + ')'
    return myserie2
More information: <https://docs.python.org/3/library/re.html>
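For instance, a small illustrative check that the joined pattern behaves like any other alternation:

>>> import re
>>> re.findall(r'(foo\d|bar\d)', 'foo1 bar2 baz3')
['foo1', 'bar2']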
* * *
The individual patterns look really messy, so I don't know what is a mistake,
and what is intentional. I am guessing you are looking for the word "vu" in a
few different contexts.
1. Always use Python raw strings for regular expressions, prefixed with `r` (`r'pattern here'`). It allows you to use `\` in a pattern without python trying to interpret it as a string escape. It is passed directly to the regex engine. [(ref)](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals)
2. Use `\s` to match white-space (spaces and line-breaks).
3. Since you already have several alternative patterns, don't make `(` and `)` optional. It can result in catastrophic backtracking, which can make matching large strings really slow.
`\(?` -> `\(`
`\)?` -> `\)`
4. `{1}` doesn't do anything. It just repeats the previous sub-pattern once, which is the same as not specifying anything.
5. `\br` is invalid. It is interpreted as `\b` (the ASCII backspace character) + the letter `r`.
6. You have a quote character (`'`) at the beginning of your text-string. Either you intend `^` to match the start of any line, or the `'` is a copy/paste error.
7. Some errors when combining the patterns:
pattern = [pattern1, pattern2, pattern3, pattern4]
pattern = series2string(pattern)
expression(re.compile(pattern), text)
|
Splitting paragraph into sentences
Question: I'm using the following Python code (which I found online a while ago) to
split paragraphs into sentences.
def splitParagraphIntoSentences(paragraph):
    import re
    sentenceEnders = re.compile(r"""
        # Split sentences on whitespace between them.
        (?:                # Group for two positive lookbehinds.
          (?<=[.!?])       # Either an end of sentence punct,
        | (?<=[.!?]['"])   # or end of sentence punct and quote.
        )                  # End group of two positive lookbehinds.
        (?<!  Mr\.   )     # Don't end sentence on "Mr."
        (?<!  Mrs\.  )     # Don't end sentence on "Mrs."
        (?<!  Jr\.   )     # Don't end sentence on "Jr."
        (?<!  Dr\.   )     # Don't end sentence on "Dr."
        (?<!  Prof\. )     # Don't end sentence on "Prof."
        (?<!  Sr\.   )     # Don't end sentence on "Sr."
        \s+                # Split on whitespace between sentences.
        """,
        re.IGNORECASE | re.VERBOSE)
    sentenceList = sentenceEnders.split(paragraph)
    return sentenceList
It works just fine for my purpose, but now I need the exact same regex in
Javascript (to make sure that the outputs are consistent) and I'm struggling
to translate this Python regex into one compatible with Javascript.
Answer: It is not a regex for a direct split, but a kind of workaround:

(?!Mrs?\.|Jr\.|Dr\.|Sr\.|Prof\.)(\b\S+[.?!]["']?)\s

[DEMO](https://regex101.com/r/nH6eH3/1)
You can replace each matched fragment with, for example, `$1#` (or another character not
occurring in the text, instead of `#`), and then split on `#`
[DEMO](https://jsfiddle.net/onssubo7/). However, it is not a very elegant
solution.
|
Python 2.7, Having problems with having iteration
Question:
from __future__ import print_function
import random as rand

def ex_game():
    print('Instructions:','\n','Your job is to try to figure out a number of problems by your choosing and get the progress bar all GREEN, YELLOW means its there, but wrong spot, RED means its not there and there are no more of that left. After a light turns GREEN you do not have to get it right again. Trust me its easy.(Press enter to get started)')
    raw_input()
    ex_secret(int(raw_input('Number of items you would like to TRY to solve(WARNING: GAME GETS HARDER THE LARGER THE NUMBER YOU CHOOSE):')),int(raw_input('At what level would you like to play at.(1-5)')))

def ex_guess(l_sec,tries,cheat,number_items,level):
    #This is the guess choice. I separated it so that I could rerun it as pleased, to put a special code in, and show the progression table
    guess_ex = []
    option_2 = ''
    option_3 = ''
    option_4 = ''
    option_5 = ''
    if level == 2 or level == 3 or level == 4 or level == 5:
        option_2 = ', white'
    if level == 3 or level == 4 or level == 5:
        option_3 = ', orange'
    if level == 4 or level == 5:
        option_4 = ', purple'
    if level == 5:
        option_5 = ', brown'
    for loop in range(number_items):
        append_guess = 'What do you beleive the solution is?(One at a time | choices: yellow, blue, black, green, red' + option_2 + option_3 + option_4 + option_5 + '):'
        guess_ex.append(raw_input(append_guess).lower())
    return guess_ex

def ex_secret(number_items,level):
    #Makes the random list of colors to solve
    secret_ex = []
    option_2 = ''
    option_3 = ''
    option_4 = ''
    option_5 = ''
    if level == 2 or level == 3 or level == 4 or level == 5:
        option_2 = ', white'
    if level == 3 or level == 4 or level == 5:
        option_3 = ', orange'
    if level == 4 or level == 5:
        option_4 = ', purple'
    if level == 5:
        option_5 = ', brown'
    for loop in range(number_items):
        secret_ex.append(rand.choice(['yellow','blue','black','green','red', option_2, option_3, option_4, option_5]))
    for empty_check in secret_ex:
        if empty_check == '':
            ex_secret(number_items,level)
    report(secret_ex,[],0,False,number_items,level,False,False)

def report(secret,progress,tries,cheat,number_items,level,correct,end):
    #The "game"
    safe_secret = []
    for record in secret:
        safe_secret.append(record)
    if correct == True:
        print('YOU WON!')
        end = True
    if tries > 0 and cheat == False or tries > 4 and cheat:
        print(progress)
        progress = []
    if not end:
        while not correct:
            if tries == 5 and not correct:
                print('YOU LOST?!?!')
                end = True
                return end
            guess = ex_guess(len(secret),tries,cheat,number_items,level)
            super_secret = 'up up down down left right left right b a start'
            if 'give' in guess:
                tries = 4
            if super_secret in guess and not cheat:
                print('The deal has been made. Now make your "new" choices.')
                cheat = True
                tries += 6
                guess = ex_guess(len(secret),progress,tries,cheat,number_items,level)
            elif super_secret in guess and cheat:
                print('I said the deal was made now go make your new choices and leave me alone you prick.')
                guess = ex_guess(len(secret),progress,tries,cheat,number_items,level)
            elif 'stop_cheat' in guess and cheat:
                if tries > 8:
                    return 'YOU LOST, thank you for admitting to cheating though.'
                else:
                    cheat = False
                    tries -= 6
            list_counter_in = 0
            list_counter_out = 0
            list_counter_incor = 0
            while list_counter_in <= number_items - 1:
                if guess[list_counter_in] == secret[list_counter_in] and '_g' not in guess[list_counter_in]:
                    secret[list_counter_in] += ('_g')
                    guess[list_counter_in] += ('_g')
                list_counter_in += 1
            while list_counter_out <= number_items - 1:
                if guess[list_counter_out] in secret and '_g' not in guess[list_counter_out]:
                    secret_yellow_check = secret[secret.index(guess[list_counter_out])]
                    if '_y' not in secret_yellow_check:
                        secret[secret.index(guess[list_counter_out])] += '_y'
                        guess[list_counter_out] += '_y'
                list_counter_out += 1
            while list_counter_incor <= number_items - 1:
                if '_g' not in guess[list_counter_incor] and '_y' not in guess[list_counter_incor]:
                    guess[list_counter_incor] += '_r'
                list_counter_incor += 1
            for check_status in guess:
                if '_g' in check_status:
                    progress.append('GREEN')
                if '_y' in check_status:
                    progress.append('YELLOW')
                if '_r' in check_status:
                    progress.append('RED')
            if 'RED' not in progress and 'YELLOW' not in progress:
                correct = True
            if 'RED' in progress or 'YELLOW' in progress:
                tries += 1
                print('Not quite right...Lets try again!')
            report(safe_secret,progress,tries,cheat,number_items,level,correct,end)
Basically I have only one large issue with this code: it keeps repeating.
After the player wins or loses, it repeats. Could someone help me with this? And
yes, I have tried using returns instead. I am basically trying to have the player
win if they get all the correct colors in a random secret, and lose if they
can't figure it out in 5 moves.
Answer: So there are actually a couple of problems here, and they mainly boil down to the
fact that you are calling functions inside themselves, which is generally a
REALLY bad idea. I rewrote your functions so that they do not need to call
themselves. Here is what I came up with. :) As a bonus I added the option to
type 'quit' at any time while playing to quit the game (but not while configuring;
actually typing 'quit' while configuring will crash the game).
from __future__ import print_function
import random as rand

def ex_game():
    print('Instructions:','\n','Your job is to try to figure out a number of problems by your choosing and get the progress bar all GREEN, YELLOW means its there, but wrong spot, RED means its not there and there are no more of that left. After a light turns GREEN you do not have to get it right again. Trust me its easy.(Press enter to get started)')
    raw_input()
    ex_secret(int(raw_input('Number of items you would like to TRY to solve(WARNING: GAME GETS HARDER THE LARGER THE NUMBER YOU CHOOSE):')),int(raw_input('At what level would you like to play at.(1-5)')))

def ex_guess(l_sec,tries,cheat,number_items,level):
    #This is the guess choice. I separated it so that I could rerun it as pleased, to put a special code in, and show the progression table
    guess_ex = []
    colors = ['yellow', 'blue', 'black', 'green', 'red']
    if level >= 2:
        colors.append('white')
    if level >= 3:
        colors.append('orange')
    if level >= 4:
        colors.append('purple')
    if level >= 5:
        colors.append('brown')
    for loop in range(number_items):
        append_guess = 'What do you beleive the solution is?(One at a time | choices: ' + str(colors) + '):'
        input = raw_input(append_guess).lower()
        if input == 'quit':
            return 'quit'
        guess_ex.append(input)
    return guess_ex

def ex_secret(number_items,level):
    secret_ex = []
    optionalColors = []
    if level >= 2:
        optionalColors.append('white')
    if level >= 3:
        optionalColors.append('orange')
    if level >= 4:
        optionalColors.append('purple')
    if level >= 5:
        optionalColors.append('brown')
    for loop in range(number_items):
        secret_ex.append(rand.choice(['yellow','blue','black','green','red']+optionalColors))
    report(secret_ex,[],0,False,number_items,level,False)

def report(secret,progress,tries,cheat,number_items,level,correct):
    #The "game"
    safe_secret = []
    for record in secret:
        safe_secret.append(record)
    if correct == True:
        print('YOU WON!')
        end = True
    if tries > 0 and cheat == False or tries > 4 and cheat:
        print(progress)
        progress = []
    while not correct:
        if tries == 5 and not correct:
            print('YOU LOST?!?!')
            raw_input('Thanks for Playing!')
            return
        guess = ex_guess(len(secret),tries,cheat,number_items,level)
        super_secret = 'up up down down left right left right b a start'
        if 'give' in guess:
            tries = 4
        if super_secret in guess and not cheat:
            print('The deal has been made. Now make your "new" choices.')
            cheat = True
            tries += 6
            guess = ex_guess(len(secret),progress,tries,cheat,number_items,level)
        elif super_secret in guess and cheat:
            print('I said the deal was made now go make your new choices and leave me alone you prick.')
            guess = ex_guess(len(secret),progress,tries,cheat,number_items,level)
        elif 'stop_cheat' in guess and cheat:
            if tries > 8:
                return 'YOU LOST, thank you for admitting to cheating though.'
            else:
                cheat = False
                tries -= 6
        if guess == 'quit':
            raw_input('Thanks for Playing!')
            return
        list_counter_in = 0
        list_counter_out = 0
        list_counter_incor = 0
        while list_counter_in <= number_items - 1:
            if guess[list_counter_in] == secret[list_counter_in] and '_g' not in guess[list_counter_in]:
                secret[list_counter_in] += ('_g')
                guess[list_counter_in] += ('_g')
            list_counter_in += 1
        while list_counter_out <= number_items - 1:
            if guess[list_counter_out] in secret and '_g' not in guess[list_counter_out]:
                secret_yellow_check = secret[secret.index(guess[list_counter_out])]
                if '_y' not in secret_yellow_check:
                    secret[secret.index(guess[list_counter_out])] += '_y'
                    guess[list_counter_out] += '_y'
            list_counter_out += 1
        while list_counter_incor <= number_items - 1:
            if '_g' not in guess[list_counter_incor] and '_y' not in guess[list_counter_incor]:
                guess[list_counter_incor] += '_r'
            list_counter_incor += 1
        for check_status in guess:
            if '_g' in check_status:
                progress.append('GREEN')
            if '_y' in check_status:
                progress.append('YELLOW')
            if '_r' in check_status:
                progress.append('RED')
        if 'RED' not in progress and 'YELLOW' not in progress:
            correct = True
        if 'RED' in progress or 'YELLOW' in progress:
            tries += 1
            print('Not quite right...Lets try again!')
ex_game()
|
ANOVA syntax in RPy2
Question: First time using the RPy2 implementation in Python. Attempting to do a one-
way ANOVA with two factors. It works in R on another machine, but Python does
not like the syntax. Any thoughts are appreciated!
from rpy2.robjects import aov
huanova = aov(formula = df1['human_den'] ~ df1['region']+df1['years'])
The error message points at the tilde.
huanova = aov(formula=df1['human_den'] ~ df1['region']+df1['years'])
^
SyntaxError: invalid syntax
Answer: As per the [documentation about Formulae in
Rpy2](http://rpy.sourceforge.net/rpy2/doc-2.2/html/robjects_formulae.html),
you have to pass the formula as a string. This is one way of doing it:
from rpy2.robjects import aov
from rpy2.robjects import Formula
formula = Formula('human_den ~ region + years')
env = formula.environment
env['human_den'] = df1['human_den']
env['region'] = df1['region']
env['years'] = df1['years']
huanova = aov(formula = formula)
|
Recursive function returns redundant print statements
Question: I have a simple Python application that counts down from 10 to 0. I have it
working, except it prints an exit message 10 times. Here is my code:
`CountDown.py`:
import sys
import counter

def main():
    A = counter.counter()
    A.counter(10)

if __name__ == "__main__":
    sys.exit(int(main() or 0))
`counter.py`:
class counter(object):
    """description of class"""

    def __init__(self):
        pass
        #return super(counter, self).__init__(*args, **kwargs)

    def counter(self, a):
        if a == 0:
            print ('BlastOff')
        else:
            print ('T equal:', a)
            a -= 1
            self.counter(a)
        print ('Exiting from countdown(', a, ')')
        return 1
This is what I'm receiving at the console window.
('T equal:', 10)
('T equal:', 9)
('T equal:', 8)
('T equal:', 7)
('T equal:', 6)
('T equal:', 5)
('T equal:', 4)
('T equal:', 3)
('T equal:', 2)
('T equal:', 1)
BlastOff
('Exiting from countdown(', 0, ')')
('Exiting from countdown(', 0, ')')
('Exiting from countdown(', 1, ')')
('Exiting from countdown(', 2, ')')
('Exiting from countdown(', 3, ')')
('Exiting from countdown(', 4, ')')
('Exiting from countdown(', 5, ')')
('Exiting from countdown(', 6, ')')
('Exiting from countdown(', 7, ')')
('Exiting from countdown(', 8, ')')
('Exiting from countdown(', 9, ')')
Press any key to continue . . .
How do I prevent the multiple `'Exiting from countdown'` lines?
Answer: If you add a `return` statement to the `else:` clause, you will avoid printing
most of the "Exiting" printouts.

        else:
            print ('T equal:', a)
            a -= 1
            return self.counter(a)
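With that early return, only the innermost call (the `a == 0` base case) falls through to the final print, so the expected output (illustrative, middle lines elided) becomes:

('T equal:', 10)
...
('T equal:', 1)
BlastOff
('Exiting from countdown(', 0, ')')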
|
r.headers['Authorization'] in python2.7 class's __call__ function
Question: I'm digging a little into the requests/requests/auth.py file at master on
kennethreitz/requests on github:
<https://github.com/kennethreitz/requests/blob/master/requests/auth.py>
And I saw this code:
class HTTPBasicAuth(AuthBase):
    """Attaches HTTP Basic Authentication to the given Request object."""

    def __init__(self, username, password):
        self.username = username
        self.password = password

    def __call__(self, r):
        r.headers['Authorization'] = _basic_auth_str(self.username, self.password)
        return r
I just can't understand how he comes up with `r.headers['Authorization']`,
which hasn't been defined anywhere before. Am I missing something?
Many thanks to anyone who can answer the problem :)
Answer: I'm assuming the r.headers object is just a dictionary data structure.
In Python you can take a dictionary and assign to any key, existing or not,
and if it doesn't exist already it'll be created.
You can see this if you fire up a python shell

# Create a new empty dictionary, no keys
>>> obj = {}
# Assign the string "hello header" to the "headers" key
>>> obj['headers'] = "hello header"
# Print it
>>> print(obj['headers'])
hello header

See [this](http://www.tutorialspoint.com/python/python_dictionary.htm) page on
python dictionary data structures to learn more.
**EDIT:**
A 2 second glance at the source file you linked to shows the line
from .utils import parse_dict_header, to_native_string
I think without looking into it it's safe to say the header attribute is just
a dictionary - `parse_dict_header`.
**EDIT 2:** To answer your more specific question as to where `r.headers`
comes from.
the `__call__` method is when the `HTTPBasicAuth` object is called like a
function which I could trace through the code and see happening
[here](https://github.com/kennethreitz/requests/blob/f5dacf84468ab7e0631cc61a3f1431a32e3e143c/requests/models.py)
at line 487 which is in the `prepare_auth` method of the `Request` object.
r = auth(self)
where self is the `Request` object instance. The `Request` has the headers
attribute of itself set in the `__init__` at line 225.
self.headers = headers
It is this `Request` object defined in `models.py` which is the parameter `r`
passed to the `__call__` method of the `HTTPBasicAuth` class.
If you use something like Visual Studio's Python tools, you could run this code,
break on the line you're interested in (in this case where r.headers is used),
and see the call stack at that point and explore the objects in scope.
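As an aside, this `__call__` hook is also how you plug in custom auth: requests accepts any callable as `auth` and passes it the prepared request to mutate. A minimal illustrative sketch (the header name here is made up):

class DummyAuth(object):
    def __call__(self, r):
        # r is the PreparedRequest; r.headers behaves like a dict
        r.headers['X-Dummy'] = 'yes'
        return r

# requests.get('http://example.com', auth=DummyAuth())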
|
ImportError: no module named 'psycopg2'
Question: I'm attempting to use PostgreSQL as my database and I'm running into an issue
when I try to start my server. Here's what I'm doing:
* I have a virtual environment set up and activated
* Django 1.8.4 is installed
* psycopg2 2.5.2 is installed
* wheel 0.24.0 is installed
I'm using python 3.4.
Adding pip freeze output as requested:
Django==1.8.4
psycopg2==2.5.2
wheel==0.24.0
When I run the server using the default sqlite3 db I have no issue; it runs
fine. As soon as I switch over to postgres I get the following error:
ImportError: no module named psycopg2.
`pip install psycopg2` wasn't working so I installed psycopg2 from github
using this command: `pip install
git+https://github.com/nwcell/psycopg2-windows.git@win64-py34#egg=psycopg2`
Guidance is greatly appreciated.
Answer: It is important to specify which operating system you are on; I guess from the
comments that you are using Windows.
On Windows, if you do not want to install the Visual C++ libraries, you can just
download the whl file of the package:
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#psycopg>
Choose the correct architecture and Python version for you and run

pip install path/to/packagename.whl

Make sure you are using pip version 6 or newer.
|
Python - global Serial obj instance accessible from multiple modules
Question: I have 5 different games written in python that run on a raspberry pi. Each
game needs to pass data in and out to a controller using a serial connection.
The games get called by some other code (written in nodeJS) that lets the user
select any of the games.
I'm thinking I don't want to open and close a serial port every time I start
and finish a game. Is there any way to make a serial object instance "global",
open it once, and then access it from multiple game modules, all of which can
open and close at will?
I see that if I make a module which assigns a Serial object to a variable
(using PySerial) I can access that variable from any module that goes on to
import this first module, but I can see using the id() function that they are
actually different objects - different instances - when they are imported by
the various games.
Any ideas about how to do this?
Answer: Delegate the opening and management of the serial port to a separate daemon,
and use a UNIX domain socket to transfer the file descriptor for the serial
port to the client programs.
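A minimal sketch of that idea, assuming Python 3.9+ (`socket.send_fds`/`socket.recv_fds` wrap the underlying SCM_RIGHTS mechanism; the socket path is made up):

import socket

SOCK_PATH = '/tmp/serial-broker.sock'  # hypothetical path

def serve_fd(serial_fd):
    # daemon side: hand the serial port's file descriptor to one client
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK_PATH)
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            socket.send_fds(conn, [b'fd'], [serial_fd])

def receive_fd():
    # client side (a game): fetch the descriptor and use it directly
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
        cli.connect(SOCK_PATH)
        msg, fds, flags, addr = socket.recv_fds(cli, 1024, 1)
        return fds[0]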
|
Issue with python 3.0 pexpect module
Question: Here is the piece of basic code which I wrote to do an automatic ssh to a
Linux host, but every time it falls into cases==0, which means it thinks
every time that it hit the "newkey" (yes/no) prompt:
Please help me solve this. I am stuck at a basic level.
#!/home/python/Python-3.4.3/python
import subprocess;
import pexpect;

def f1_input():
    global server, id, password, commands;
    server = input("Enter Server: ");
    id = input("Enter User ID: ");
    password = input("Enter Password: ");
    commands = input("Enter Commands: ");
    return server, id, password, commands;

def f2_exec():
    child = pexpect.spawn('ssh %s@%s %s'%(id,server,commands));
    newkey = 'Are you sure you want to continue connecting (yes/no)? ';
    passkey = 'password:';
    cases = child.expect([newkey, passkey, pexpect.EOF]);
    print("cases=", cases);
    if cases==0:
        print("cases=", cases);
        child.sendline('yes');
        child.expect('password:');
        child.sendline(password);
        child.expect(pexpect.EOF);
        print(child.before);
    elif cases==1:
        print("cases=", cases);
        child.sendline(password);
        child.expect(pexpect.EOF);
        print(child.before);
    elif cases==2:
        print("cases=", cases);
        print("Timeout!!!");
    else:
        print("cases=", cases);
        print("EXIT");

f1_input();
f2_exec();
Answer: Your problem is with ssh itself, and should reproduce without pexpect. ssh may
not be able to write to the ~/.ssh/known_hosts file, or you may have the option
StrictHostKeyChecking=no in ~/.ssh/config, or similar.
Furthermore, this task is better solved with a program such as
sshpass(1) or a Python module like paramiko.
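For example, a minimal paramiko sketch of the same connect-and-run flow (hostname and credentials are placeholders):

import paramiko

client = paramiko.SSHClient()
# auto-accept unknown host keys, sidestepping the (yes/no) prompt entirely
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('server.example.com', username='user', password='secret')
stdin, stdout, stderr = client.exec_command('uname -a')
print(stdout.read().decode())
client.close()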
|
python, plot planck curves looping through arrays
Question: I am trying to get myself familiar with programming in Python but have just
started and am struggling with the following problem. Maybe someone can give me a hint
how to proceed or where I can look for a nice solution.
I'd like to plot Planck curves for 132 wavelengths at 6 different temperatures
via a loop in a loop. The function planckwavel receives two parameters,
wavelength and temperature, which I separated into two loops.
So far I managed to use lists, which worked, although it is probably not solved in an
elegant way:
plancks = []
temp = [280, 300, 320, 340, 360, 380]
temp_len = len(temp)

### via fun planckwavel
for i in range(temp_len):
    t_list = []  # empty the list again after each j loop
    for j in range(wl_centers_ar.shape[0]):
        t = planckwavel(wl_centers_ar[j], temp[i])
        t_list.append(t)
    plancks.append(t_list)

### PLOT Planck curves
plancks = np.array(plancks).T  # convert list to array and transpose
view_7 = plt.figure(figsize=(8.5, 4.5))
plt.plot(wl_centers_ar, plancks)
plt.xticks(rotation='vertical')
But I would like to use arrays instead of lists, as I'd like to continue
afterwards with much larger, multi-dimensional images. So I tried the same with arrays,
but unfortunately failed with this code:
plancks_ar = zeros([132, 6], dtype=float)  # create array and fill with zeros
temp_ar = array([273, 300, 310, 320, 350, 373])

for i in range(temp_ar.shape[0]):
    t_ar = np.zeros(plancks_ar.shape[0])
    for j in range(plancks_ar.shape[0]):
        t = planck(wl_centers_ar[j]*1e-6, temp[1])/10**6
        np.append(t_ar, t)
    np.append(plancks_ar, t_ar)

plt.plot(wl_centers_ar, plancks)
I would be very thankful if someone could give me some advice.
Thanks, best regards,
peter
Answer: I think you're asking about how to use NumPy's [broadcasting and
vectorization](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
Here's a way to remove the explicit Python loops:
import numpy as np

# Some physical constants we'll need
h, kB, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(lam, T):
    # The Planck function, using NumPy vectorization
    return 2*h*c**2/lam**5 / (np.exp(h*c/lam/kB/T) - 1)

# wavelength array, 3 - 75 um
lam = np.linspace(3, 75, 132)
# temperature array
T = np.array([280, 300, 320, 340, 360, 380])
# Remember to convert wavelength from um to m
pfuncs = planck(lam * 1.e-6, T[:,None])

import pylab
for pfunc in pfuncs:
    pylab.plot(lam, pfunc)
pylab.show()
[plot of the resulting Planck curves](http://i.stack.imgur.com/tfWPV.png)
We want to calculate `planck` for each wavelength and for each T, so we need
to broadcast the calculation over the two arrays. Following the rules laid out
in the documentation linked to above, we can do that by adding a new axis to
the temperature array (with `T[:, None]`):
lam:       132
T:     6 x   1
--------------
       6 x 132
The final dimension of `T[:, None]` is 1, so the 132 values of `lam` can be
broadcast across it to produce a `6 x 132` array: 6 rows (one for each T) of
132 values (the wavelengths).
|
how to loop through list multiple times in Python
Question: Can you loop through a list (using a range that has a step in it) over and over
again until all the elements in the list are accessed by the loop?
I have the following lists:

result = []
list = ['ba', 'cb', 'dc', 'ed', 'gf', 'jh']

I want the outcome (result) to be:

result = ['dc', 'cb', 'ba', 'jh', 'gf', 'ed']

How do I make it loop through the first list, appending each element to the
result list, starting from the third element and using 5 as a step, until all
the elements are in the result list?
Answer: There is no need to loop through a list multiple times. As a more pythonic way,
you can use
[`itertools.cycle`](https://docs.python.org/3/library/itertools.html#itertools.cycle)
and
[`islice`](https://docs.python.org/3/library/itertools.html#itertools.islice):
>>> from itertools import cycle,islice
>>> li= ['ba', 'cb', 'dc', 'ed', 'gf', 'jh']
>>> sl=islice(cycle(li),2,None,4)
>>> [next(sl) for _ in range(len(li))]
['dc', 'ba', 'gf', 'dc', 'ba', 'gf']
Note that in your expected output the step is 5, not 4. So if you use 5 as the slice
step you'll get your expected output:
>>> sl=islice(cycle(li),2,None,5)
>>> [next(sl) for _ in range(len(li))]
['dc', 'cb', 'ba', 'jh', 'gf', 'ed']
|
Avoid shouldInterruptJavaScript in PySide QT4 - Python
Question: I am using PySide '1.2.2' and trying to avoid the msgbox alerting a potential
javascript error, since it is due to the site being sizeable. I am using the code
from this other answer:
[Override shouldInterruptJavaScript in QWebPage with
PySide](http://stackoverflow.com/questions/6868286/override-shouldinterruptjavascript-in-qwebpage-with-pyside)
import sys

from PySide import QtCore
from PySide.QtGui import QApplication
from PySide.QtWebKit import QWebPage

class QWebPageHeadless(QWebPage):
    # FIXME: This is not working, the slot is not overriden!
    @QtCore.Slot()
    def shouldInterruptJavaScript(self):
        print('not interrupting')
        return False
I have tried implementing the class above, and all sorts of derivatives of it,
but with no success; it never gets executed. Any thoughts on how to do this?
Thank you
Answer: I found the "solution". Forget about PySide and PyQt, you will just end up
with headaches, use Selenium. Very easy to implement and powerful for this
purpose. Works great!
|
Python requests - how to add multiple own certificates
Question: Is there a way to tell the requests lib to add multiple certificates, like all
.pem files from a specified folder?

import requests, glob

CERTIFICATES = glob.glob('/certs/*.pem')
url = 'https://127.0.0.1:8080'
requests.get(url, cert=CERTIFICATES)

This seems to work only for a single certificate.
I already searched Google and the Python docs. The best tutorial I found was [the
SSL certification section in the official documentation](http://docs.python-requests.org/en/latest/user/advanced/#ssl-cert-verification).
Answer: You can only pass in one certificate file at a time.
Either merge those files into one `.pem` file, or loop over the certificate
files and try each one in turn until the connection succeeds.
A `.pem` file can hold multiple certificates; it should be safe to concatenate
all your files together. See
<http://how2ssl.com/articles/working_with_pem_files/>.
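A small illustrative sketch of the concatenation approach (paths are made up; note that if these are CA certificates used to verify the server, rather than a client certificate, they belong in `verify=` instead of `cert=`):

import glob

with open('/certs/bundle.pem', 'wb') as bundle:
    for name in sorted(glob.glob('/certs/*.pem')):
        with open(name, 'rb') as pem:
            bundle.write(pem.read())

import requests
requests.get('https://127.0.0.1:8080', verify='/certs/bundle.pem')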
|
Python PIL save Image in directory no override if the name is same
Question: I am trying to make a backup copy of an image, because it will be resized
often. I am asking for the path where the image is (Tkinter), then I am adding
"-original" to the image's filename and saving it in the same directory
where I got it from.
The problem is that every time I use this function it overwrites the original
backup, because there is no check whether there already exists a file
with "-original".
That's how I make the backup save:
pfad = askopenfilename()
im_backup = Image.open(pfad)
start_string = pfad[:pfad.index(".")]
ende_string = pfad[pfad.index("."):]
im_backup.save(start_string + "-original" + ende_string)
Currently I am working on a solution with os which could work, but I have the
feeling it has to be simpler. I read the documentation of PIL.Image.save; there
are more arguments which can be passed in to save, but I haven't figured out
which one has to be used to prevent overwriting.
My current solution (not working yet) is to check with os.listdir(directory)
whether there is already a (start_string + "-original" + ende_string) file and
only save it there if that is false.
Thanks in advance!
Answer: Consider using `os.path.splitext` instead of slicing and indexing. You can
also use `os.path.isfile` instead of `listdir`.

import os

pfad = askopenfilename()
name, ext = os.path.splitext(pfad)
backup_name = name + "-original" + ext
if not os.path.isfile(backup_name):
    im_backup = Image.open(pfad)
    im_backup.save(backup_name)
|
Access to Spark from Flask app
Question: I wrote a simple Flask app to pass some data to Spark. The script works in
IPython Notebook, but not when I try to run it on its own server. I don't
think that the Spark context is running within the script. How do I get Spark
working in the following example?
from flask import Flask, request
from pyspark import SparkConf, SparkContext

app = Flask(__name__)

conf = SparkConf()
conf.setMaster("local")
conf.setAppName("SparkContext1")
conf.set("spark.executor.memory", "1g")
sc = SparkContext(conf=conf)

@app.route('/accessFunction', methods=['POST'])
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)
In IPython Notebook I don't define the `SparkContext` because it is
automatically configured. I don't remember how I did this, I followed some
blogs.
On the Linux server I have set the .py to always be running and installed the
latest Spark by following up to step 5 of [this
guide](https://districtdatalabs.silvrback.com/getting-started-with-spark-in-
python).
**Edit** :
Following the advice by davidism I have now instead resorted to simple
programs with increasing complexity to localise the error.
Firstly I created .py with just the script from the answer below (after
appropriately adjusting the links):
import sys
try:
    sys.path.append("your/spark/home/python")
    from pyspark import context
    print ("Successfully imported Spark Modules")
except ImportError as e:
    print ("Can not import Spark Modules", e)
This returns "Successfully imported Spark Modules". However, the next .py file
I made returns an exception:
from pyspark import SparkContext
sc = SparkContext('local')
rdd = sc.parallelize([0])
print rdd.count()
This returns exception:
"Java gateway process exited before sending the driver its port number"
Searching around for similar problems I found [this
page](https://forums.databricks.com/questions/1662/spark-python-java-gateway-
process-exited-before-se.html) but when I run this code nothing happens, no
print on the console and no error messages. Similarly,
[this](http://ambracode.com/index/show/153755) did not help either, I get the
same Java gateway exception as above. I have also installed anaconda as I
heard this may help unite python and java, again no success...
Any suggestions about what to try next? I am at a loss.
Answer: Okay, so I'm going to answer my own question in the hope that someone out
there won't suffer the same days of frustration! It turns out it was a
combination of missing code and bad set up.
**Editing the code** : I did indeed need to initialise a Spark Context by
appending the following in the preamble of my code:
from pyspark import SparkContext
sc = SparkContext('local')
So the full code will be:
from pyspark import SparkContext
sc = SparkContext('local')

from flask import Flask, request
app = Flask(__name__)

@app.route('/whateverYouWant', methods=['POST'])  #can set first param to '/'
def toyFunction():
    posted_data = sc.parallelize([request.get_data()])
    return str(posted_data.collect()[0])

if __name__ == '__main__':
    app.run(port=8080)  #note set to 8080!
**Editing the setup** : It is essential that the file (yourrfilename.py) is in
the correct directory, namely it must be saved to the folder
/home/ubuntu/spark-1.5.0-bin-hadoop2.6.
Then issue the following command within the directory:
./bin/spark-submit yourfilename.py
which initiates the service at 10.0.0.XX:8080/accessFunction/ .
**Note that the port must be set to 8080 or 8081: Spark only allows web UI for
these ports by default for master and worker respectively**
You can test out the service with a restful service or by opening up a new
terminal and sending POST requests with cURL commands:
curl --data "DATA YOU WANT TO POST" http://10.0.0.XX:8080/accessFunction/
|
Rounding up to nearest 30 minutes in python
Question: I have the following code below.
I would like to roundup TIME to the nearest 30 minutes in the hour. For
example: 12:00PM or 12:30PM and so on.
EASTERN_NOW = timezone.localtime(timezone.now() + timedelta(minutes=30))
TIME = datetime.time(EASTERN_NOW.time().hour, EASTERN_NOW.time().minute).strftime(
    VALID_TIME_FORMATS[2])
Thanks in advance
Answer: To round _up_ to the nearest 30 minutes:
#!/usr/bin/env python3
from datetime import datetime, timedelta

def ceil_dt(dt, delta):
    return dt + (datetime.min - dt) % delta

now = datetime.now()
print(now)
print(ceil_dt(now, timedelta(minutes=30)))
[The formula is suggested by @Mark Dickinson (for a different
question)](http://stackoverflow.com/questions/13071384/python-ceil-a-datetime-
to-next-quarter-of-an-hour/32657466#comment53190737_32657466).
### Output
2015-09-22 19:08:34.839915
2015-09-22 19:30:00
Note: if the input is timezone-aware datetime object such as `EASTERN_NOW` in
your case then you should call
`timezone.make_aware(rounded_dt.replace(tzinfo=None))` if you want to preserve
the rounded local time and to attach the correct tzinfo, otherwise you may get
wrong timezone info if the rounding crosses DST boundaries. Or to avoid
failing for ambiguous local time, call `.localize()` manually:
localize = getattr(rounded_dt.tzinfo, 'localize', None)
if localize:
    rounded_dt = localize(rounded_dt.replace(tzinfo=None),
                          is_dst=bool(rounded_dt.dst()))
|
How can I make this code more Pythonic - specifically, merging something into a function?
Question: I know that this code would look a lot better if the checking of the current
prices against the open prices were in a function, so I wouldn't
have to re-write it for every stock I want to check, but I'm not sure how to
get started on doing that properly. Do any of you have some tips to get me
started?
from yahoo_finance import Share

apple = Share('AAPL')
appleopen = float(apple.get_open())
applecurrent = float(apple.get_price())

if appleopen > applecurrent:
    print(("Apple is down for the day. Current price is"), applecurrent)
else:
    print(("Apple is up for the day! Current price is "), applecurrent)

applechange = (applecurrent - appleopen)
if applechange > 0:
    print(('The price moved'), abs(applechange), ("to the upside today."))
else:
    print(('The priced moved'), abs(applechange), ("to the downside today."))

print('-----------------------')

nflx = Share('NFLX')
nflxopen = float(nflx.get_open())
nflxcurrent = float(nflx.get_price())

if nflxopen > nflxcurrent:
    print(("Netflix is down for the day. Current price is"), nflxcurrent)
else:
    print(("Netflix is up for the day! Current price is "), nflxcurrent)

nflxchange = (nflxcurrent - nflxopen)
if nflxchange > 0:
    print(('The price moved'), abs(nflxchange), ("to the upside today."))
else:
    print(('The priced moved'), abs(nflxchange), ("to the downside today."))
Answer: Try this:

from yahoo_finance import Share

Store = {
    'AAPL': 'Apple',
    'NFLX': 'Netflix'
}

for code in Store:
    name, shr = Store[code], Share(code)
    sopen = float(shr.get_open())
    scurr = float(shr.get_price())
    schange = scurr - sopen
    movement = 'down' if schange < 0 else 'up'
    print("{} is {} for the day. Current price is {}".format(name, movement, scurr))
    print("The price moved {} to the {}side today.".format(abs(schange), movement))
    print('-----------------------')
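Since the question specifically asked about moving the check into a function, the same idea factors out naturally (a sketch under the same yahoo_finance assumptions as above):

def report_stock(code, name):
    shr = Share(code)
    sopen = float(shr.get_open())
    scurr = float(shr.get_price())
    change = scurr - sopen
    movement = 'down' if change < 0 else 'up'
    print("{} is {} for the day. Current price is {}".format(name, movement, scurr))
    print("The price moved {} to the {}side today.".format(abs(change), movement))

for code, name in [('AAPL', 'Apple'), ('NFLX', 'Netflix')]:
    report_stock(code, name)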
|
Randomly fill a 3D grid according to a probability density function p(x,y,z)
Question: **How can I fill a 3D grid in the order specified by a given probability
density function?**
Using python, I'd like to lay down points in a _random_ order, but according
to some specified probability distribution over that region, with no repeated
points.
Sequentially:
* create a discrete 3D grid
* specify a probability density function for every grid point, pdf(x,y,z)
* lay down a point (x0,y0,z0) whose random location is proportional to the pdf(x,y,z)
* continue adding additional points, not recording entries if a location has already been filled
* repeat until all spaces are filled
The desired result is a list (no repeats) of all the points in
the grid, in the order in which they were filled.
Answer: The below does not implement drawing from a multivariate gaussian:
xi_sorted = np.random.choice(x_grid.ravel(),x_grid.ravel().shape, replace=False, p = pdf.ravel())
yi_sorted = np.random.choice(x_grid.ravel(),x_grid.ravel().shape, replace=False, p = pdf.ravel())
zi_sorted = np.random.choice(x_grid.ravel(),x_grid.ravel().shape, replace=False, p = pdf.ravel())
That is because `p(x)*p(y)*p(z) != p(x,y,z)` unless the three variables are
independent. You can consider something like a [Gibbs
sampler](https://en.wikipedia.org/wiki/Gibbs_sampling) to draw from the joint
distribution by sequentially drawing from univariate distributions.
In the specific case of the multivariate normal, you can use (full example)
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from mpl_toolkits.mplot3d import Axes3D
from math import *
num_points = 4000
sigma = .5;
mean = [0, 0, 0]
cov = [[sigma**2,0,0],[0,sigma**2,0],[0,0,sigma**2]]
x,y,z = np.random.multivariate_normal(mean,cov,num_points).T
svals = 16
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d',aspect='equal')
ax.scatter(x,y,z, s=svals, alpha=.1,cmap=cm.gray)
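For the original fill-order problem, one hypothetical approach (assuming `pdf` is a 3-D array of probabilities summing to 1) is to sample the flattened grid indices without replacement, weighted by the joint pdf, then unravel them back into 3-D coordinates:

import numpy as np

flat = pdf.ravel()
order = np.random.choice(flat.size, size=flat.size, replace=False, p=flat)
ix, iy, iz = np.unravel_index(order, pdf.shape)
# (ix[k], iy[k], iz[k]) is the k-th grid point to lay down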
|
Sci-Kit Learn SGD Classifier problems predicting
Question: I may not be able to find the help I need here, but I am hoping the smart
coders of the internet can help me. I am attempting to use Python's Sci-Kit
learn SGDClassifier to classify physical events. These physical events create
an image of a track (black and white) and I am trying to get a classifier to
classify them. The images are approx 500 * 400 pixels (not quite sure), but for
machine-learning purposes each one gives me a 200640-dimensional vector. I have
20000 train events serialized in data packages of 200 events. Then I have an
extra 2000 train events. Here is how I go about extracting and training.
>>> from sklearn.linear_model import SGDClassifier
>>> import dill
>>> import glob
>>> import numpy as np
>>> clf = SGDClassifier(loss='hinge')
>>> for file in glob.glob('./SerializedData/Batch1/*.pkl'):
...     with open(file, 'rb') as stream:
...         minibatch = dill.load(stream)
...     clf.partial_fit(minibatch.data, minibatch.target, classes=np.array([1, 2]))
(Some output stuff about the classifier)
>>>
This is the train part of my code, or at least a rough abbreviation of it. I
do have a little bit more complicated initialization of the classifier. Just
for more info: `minibatch.data` gives a numpy array of samples and features,
i.e. a "2-dimensional numpy array" with shape (200, 200640). To clear that up,
there are arrays describing the pixel values of each image, and 200 of them are
contained in one big array.
`minibatch.target` gives a numpy array of all the class values of each event.
it just does not seem to have been trained at all:
>>> file = open('./SerializedData/Batch2/train1.pkl', 'rb')
>>> test = dill.load(file)
>>> clf.predict(test.data)
array([ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2])
>>> clf.score(test.data, test.target)
.484999999999999999999
As you can see the classifier is predicting class 2 for all the test events.
The only problem I can think of at the moment is that I do not have enough
test events but I find that hard to believe. Does anybody have any
suggestions/solutions/answers to this predicament?
Answer: Unless your images are exceptionally simple, you aren't going to get good
results using just scikit learn if your inputs are images. You need to
transform the images in some way to obtain actually useful features, pixel
values make terrible features. You could try using some of the tools in
[scikit-image](http://scikit-image.org/) to create better features, or you
could use some of the pre-trained convolutional neural networks to extract
features for you. If you were feeling more adventurous, you could try and
train a CNN to do the classification for your problem in particular.
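As an illustration of the scikit-image route, here is a sketch that feeds HOG descriptors instead of raw pixels into the same partial-fit loop (the image shape and parameter values are assumptions, not taken from your setup; 440 * 456 is just one factorization of 200640):

import numpy as np
from skimage.feature import hog
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss='hinge')

def extract_features(flat_images, height=440, width=456):
    # reshape each flattened vector back to 2-D and compute a HOG descriptor
    feats = [hog(img.reshape(height, width), pixels_per_cell=(16, 16))
             for img in flat_images]
    return np.asarray(feats)

# inside the minibatch loop, train on the features instead of the raw pixels:
# clf.partial_fit(extract_features(minibatch.data), minibatch.target,
#                 classes=np.array([1, 2]))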
|
Python's equivalent of Ruby's ||=
Question: To check if a variable exists: if it exists, use the original value; otherwise, use the newly assigned value. In Ruby, it's `var ||= var_new`
How to write it in python?
PS: I don't know the `name` of `||=`, I simply can't search it in Bing.
Answer: I think there is some confusion from the people who aren't really sure what
the conditional assignment operator (`||=`) does, and also some
misunderstanding about how variables are spawned in Ruby.
Everyone should read [this article](http://www.rubyinside.com/what-rubys-
double-pipe-or-equals-really-does-5488.html) on the subject. A TLDR quote:
> A common misconception is that a ||= b is equivalent to a = a || b, but it
> behaves like a || a = b
>
> In a = a || b, a is set to something by the statement on every run, whereas
> with a || a = b, a is only set if a is logically false (i.e. if it's nil or
> false) because || is 'short circuiting'. That is, if the left hand side of
> the || comparison is true, there's no need to check the right hand side.
And another very important note:
> ...a variable assignment, even if not run, immediately summons that variable
> into being.
# Ruby
x = 10 if 2 == 5
puts x
> Even though the first line won't be run, x will exist on the second line and
> no exception will be raised.
This means that Ruby will absolutely _ensure_ that there is a variable
container for a value to be placed into before any righthand conditionals take
place. `||=` doesn't assign if `a` is _not defined_ , it assigns if `a` is
falsy (again, `false` or `nil` \- `nil` being the default _nothingness_ value
in Ruby), whilst guaranteeing `a` _is_ defined.
### What does this mean for Python?
Well, if `a` is defined, the following:
# Ruby
a ||= 10
is actually equivalent to:
# Python
if not a:
a = 10
while the following:
# Either language
a = a or 10
is close, but it _always_ assigns a value, whereas the previous examples do
not.
And if `a` is not defined the whole operation is closer to:
# Python
a = None
if not a:
a = 10
Because a very explicit example of what `a ||= 10` does when `a` is not
defined would be:
# Ruby
if not defined? a
a = nil
end
if not a
a = 10
end
At the end of the day, the `||=` operator is not _completely_ translatable to
Python in any kind of 'Pythonic' way, because of how it relies on the
underlying variable spawning in Ruby.
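That said, if you also need to handle the possibly-undefined case at runtime, the closest literal translation is a sketch like:

try:
    a
except NameError:
    a = None
if not a:
    a = 10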
|
convert json to xml without changing the order of parameters in python
Question: I'm using Json2xml module for converting json format to xml format. But, while
converting it changes the order of the parameters. How do I convert without
changing the order of parameters? Here's my python code.
from json2xml.json2xml import Json2xml
data = Json2xml.fromjsonfile('example.json').data
data_object = Json2xml(data)
xml_output = data_object.json2xml()
print xml_output
example.json
{
"action": {
"param1": "aaa",
"param2": "bbb"
}
}
The output is
<action>
<param2>bbb</param2>
<param1>aaa</param1>
</action>
Is there a way to convert json to xml without changing the order of
parameters?
Answer: Try using an `OrderedDict`:
from collections import OrderedDict
from json2xml.json2xml import Json2xml
data = Json2xml.fromjsonfile('example.json').data
data = OrderedDict(data)
data_object = Json2xml(data)
xml_output = data_object.json2xml()
print xml_output
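One caveat: in Python 2 a plain `dict` has already forgotten the file order by the time you wrap it, so `OrderedDict(data)` may not restore it. If that happens, load the JSON yourself with the `json` module's `object_pairs_hook`, which preserves the key order from the file, and pass the result to `Json2xml` directly:

import json
from collections import OrderedDict

with open('example.json') as f:
    data = json.load(f, object_pairs_hook=OrderedDict)
# data now preserves the key order exactly as written in the file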
|
PUT not updating Pipedrive API (Python wrapper)
Question: Here's a brief description of what I'm trying to do:
* get a field's value
* multiply that value by a constant
* update the field with the adjusted value
I am using a nice wrapper found here: <https://github.com/hiway/pipedrive-api>
Here is my code:
from pipedrive import Pipedrive
pd = Pipedrive('API_token')
# ^ insert API token
EAAR = pd.deals.get(id=693) ## parse info from given deal/field
Current_value = float(EAAR.value) ## convert value to decimal
print 'Previous value was ', Current_value
New_value = Current_value * 0.96
print 'New Value is ', New_value
pd.deals.put({
id:693,
'value': New_value})
EAAR2 = pd.deals.get(id=693)
print EAAR2.value
So expected output would be:
>>>Previous value was 5.0
>>>New Value is 4.8
>>>4.8
However, I'm getting:
>>>Previous value was 5.0
>>>New Value is 4.8
>>>5
Any ideas would be greatly appreciated!
Answer: Your PUT is probably failing. Put quotes around `id`:
pd.deals.put({ 'id':693, 'value': New_value})
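The reason the quotes matter: without them, `id` refers to the Python builtin function, so the dictionary never contains the string key the API expects:

>>> {id: 693, 'value': 4.8}
{<built-in function id>: 693, 'value': 4.8}
>>> {'id': 693, 'value': 4.8}
{'id': 693, 'value': 4.8}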
|
Pickle errors with Python 3
Question: I'm converting some code from Python **2** to Python **3**, and I'm having a
hard time with a pickle problem! Here is a simple example of what I'm trying to do:
class test(str):
def __new__(self, value, a):
return (str.__new__(self, value))
def __init__(self, value, a):
self.a = a
if __name__ == '__main__':
import pickle
t = test("abs", 5)
print (t)
print( t.a)
wdfh = open("./test.dump", "wb")
pickle.dump(t, wdfh)
wdfh.close()
awfh = open("./test.dump", "rb")
newt = pickle.load(awfh)
awfh.close()
print (t)
print (newt.a)
This works just fine with Python 2 but I have the following error with Python
3:
> Traceback (most recent call last):
>
> File "test.py", line 21, in <module>
>
> newt = pickle.load(awfh)
>
> TypeError: __new__() takes exactly 3 arguments (2 given)
I do not understand what the difference is. Any ideas?
Answer: The problem here is that your code only works with protocol 0 or 1. By
default, Python 2 uses protocol 0, whereas Python 3 uses protocol 3.
For protocol 2 and above you can't have additional arguments to the `__new__`
method unless you implement the `__getnewargs__` method.
In this case simply adding:
def __getnewargs__(self):
return (str(self),self.a)
should do the trick.
Or you could stick with protocol 0 or 1 and change the dump call:
pickle.dump(t, wdfh, 0)
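Putting the first fix together, a minimal runnable version of the class (a sketch) looks like:

import pickle

class test(str):
    def __new__(cls, value, a):
        return str.__new__(cls, value)

    def __init__(self, value, a):
        self.a = a

    def __getnewargs__(self):
        # tells pickle which arguments to pass to __new__ when loading
        return (str(self), self.a)

t = test("abs", 5)
newt = pickle.loads(pickle.dumps(t))  # the default protocol now round-trips
print(newt.a)  # -> 5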
|
java.lang.NoClassDefFoundError when trying to instantiate class from jar
Question: I did find quite a lot about this error, but somehow none of the suggested
solutions resolved the problem.
I am trying to use JNA bindings for libgphoto2 under Ubuntu in Eclipse
(moderate experience with Java on Eclipse, none whatsoever on Ubuntu, I'm
afraid). The bindings in question I want to use are here:
<http://angryelectron.com/projects/libgphoto2-jna/>
I followed the steps described on that page, and made a simple test client
that failed with the above error. So I reduced the test client until the only
thing I tried to do was to instantiate a GPhoto2 object, which still produced
the error. The test client looks like this:
import com.angryelectron.gphoto2.*;
public class test_class
{
public static void main(String[] args)
{
GPhoto2 cam = new GPhoto2();
}
}
The errors I get take up considerably more space:
Exception in thread "main" java.lang.NoClassDefFoundError: com/sun/jna/Structure
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at test_class.main(test_class.java:12)
Caused by: java.lang.ClassNotFoundException: com.sun.jna.Structure
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more
libgphoto2 itself is installed and runs from the command line; I even have the
development headers and am able to call GPhoto2 functions from Python, so the
problem can't be located there.
When looking at the .class files in Eclipse, however, they didn't have any
definitions. So I figured that might be the problem, especially since there
was an error when building the whole thing with ant (although the .jar was
succesfully exported, from what I could make out the error concerned only the
generation of documentation). So I loaded the source into eclipse and built
the .jar myself. At this occasion Eclipse stated there were warnings during
the build (though no errors), but didn't show me the actual warnings. If
anyone could tell me where the hell the build log went, that might already
help something. I searched for it everywhere without success, and if I click
on "details" in eclipse it merely tells me where the warnings occured, not
what they were.
Be that as it may, a warning isn't necessarily devastating, so I imported the
resulting Jar into the above client. I checked the .class files, this time
they contained all the code. But I still get the exact same list of errors
(yes, I have made very sure that the old library was removed from the
classpath and the new ones added. I repeated the process several times, just
in case).
Since I don't have experience with building jars, I made a small helloworld
jar, just to see if I could call that from another program or if I'd be
getting similar errors. It worked without a hitch. I even tried to reproduce
the problem deliberately by exporting it with various options, but it still
worked. I tried re-exporting the library I actully need with the settings that
had worked during my experiment, but they still wouldn't run. I'm pretty much
stuck by now. Any hints that help me resolve the problem would be greatly
appreciated.
Answer: In addition to what @Paul Whelan has said: you might have better luck by just
getting the missing jar directly.
Get the missing library [here](https://github.com/java-native-access/jna), set
the classpath and then re-run the application again and see whether it will
run fine or not.
|
Selenium Webdriver / Beautifulsoup + Web Scraping + Error 416
Question: I'm doing web scraping using selenium webdriver in Python with
[Proxy](http://www.us-proxy.org/).
I want to browse more than 10k pages of single site using this scraping.
**Issue** is using this proxy I'm able to send request for single time only.
when I'm sending another request on same link or another link of this site,
I'm getting 416 error (kind of block IP using firewall) for 1-2 hours.
**Note:** I'm able to do scraping all normal sites with this code, but this
site has kind of security which is prevent me for scraping.
Here is code.
profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)
profile.set_preference(
"network.proxy.http", "74.73.148.42")
profile.set_preference("network.proxy.http_port", 3128)
profile.update_preferences()
browser = webdriver.Firefox(firefox_profile=profile)
browser.get('http://www.example.com/')
time.sleep(5)
element = browser.find_elements_by_css_selector(
'.well-sm:not(.mbn) .row .col-md-4 ul .fs-small a')
for ele in element:
print ele.get_attribute('href')
browser.quit()
Any solution ??
Answer: Selenium wasn't helpful for me, so I solved the problem by using
[beautifulsoup](http://www.crummy.com/software/BeautifulSoup/bs3/documentation.html).
The website blocks a proxy as soon as it receives a request from it, so I
continuously change the [proxy url](http://www.proxynova.com/proxy-server-
list/country-us/) and [User-Agent](http://www.useragentstring.com/pages/All/)
whenever the server blocks the currently used proxy.
I'm pasting my code here
from bs4 import BeautifulSoup
import urllib2

url = 'http://terriblewebsite.com/'

proxy = urllib2.ProxyHandler({'http': '130.0.89.75:8080'})
# Create a URL opener that routes requests through the proxy
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)

request = urllib2.Request(url)
request.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15')
result = urllib2.urlopen(request)
data = result.read()
soup = BeautifulSoup(data, 'html.parser')
ptag = soup.find('p', {'class': 'text-primary'}).text
print ptag
**Note:**
1. Change the **proxy and User-Agent**, and use only recently updated proxies.
2. Some servers accept proxies only from a specific country; in my case I used proxies from the United States.
This process might be slow, but you can still scrape the data.
|
No such file or directory webdriver_prefs.json when compiling to exe with cx_Freeze
Question: I wrote an application using selenium firefox webdriver and compiled it with
cx_Freeze. When I start my application I get an error:
Traceback (most recent call last):
File "c:\111\ui\__init__.py", line 27, in login
self.browser = self.webdriver.Firefox()
File "C:\Python34\lib\site-packages\selenium\webdriver\firefox\webdriver.py", line 47, in __init__
self.profile = FirefoxProfile()
File "C:\Python34\lib\site-packages\selenium\webdriver\firefox\firefox_profile.py", line 63, in __init__
WEBDRIVER_PREFERENCES)) as default_prefs:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\111\\build\\exe.win32-3.4\\library.zip\\selenium\\webdriver\\firefox\\webdriver_prefs.json'
But my library.zip actually contains webdriver_prefs.json and webdriver.xpi. I
use next setup.py file to add it:
import sys
from cx_Freeze import setup, Executable
base= 'C:\\Python34\\Lib\\site-packages\\selenium\\webdriver'
includes = [
('%s\\firefox\\webdriver.xpi' %(base), 'selenium/webdriver/firefox/webdriver.xpi'),
('%s\\firefox\\webdriver_prefs.json '%(base), 'selenium/webdriver/firefox/webdriver_prefs.json')
]
build_exe_options = {
"packages": ["os"],
"excludes": ["tkinter"],
"zip_includes": includes,
}
setup(
name = "lala",
version = "0.1",
description = "lalala",
options = {"build_exe": build_exe_options},
executables = [Executable("app.py", base=base)],
)
Should I somehow register these files for my executable? And why does the traceback
print file paths in two ways (one backslash and two backslashes)?
Answer: In the end I wasn't able to solve the problem with `cx_Freeze`, but then I tried
`PyInstaller` and it works like a charm! It supports Python 3 already, by the
way. I used this command:
`c:\Python34\Scripts\pyinstaller.exe -p C:\Python34\Lib\site-packages -F
app.py `
|
using python 3.5 saving csv from url drops CR and LF
Question: I'm using Python 3.5.0 to grab some census data. When I use my script it does
retrieve the data from the URL and saves it, but the saved file can't be
imported into SQL because it somehow dropped the {CR}{LF}. How can I make the
saved file importable into SQL?
try:
url = 'https://www.census.gov/popest/data/counties/asrh/2014/files/CC-EST2014-ALLDATA.csv'
headers = {}
headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0'
req = urllib.request.Request(url,headers=headers)
resp = urllib.request.urlopen(req)
respData = resp.read()
saveFile = open('Vintage2014.csv' ,'w')
saveFile.write(str(respData))
saveFile.close()
except Exception as e:
print(str(e))
Answer: Note: the file you are trying to download does not contain `CRLF`, only `LF`.
You could use the following approach to convert the bytes to a suitable
string. This should also result in you getting `CRLF`:
import urllib.request
try:
url = 'https://www.census.gov/popest/data/counties/asrh/2014/files/CC-EST2014-ALLDATA.csv'
headers = {}
headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0'
req = urllib.request.Request(url, headers=headers)
resp = urllib.request.urlopen(req)
respData = resp.read()
with open('Vintage2014.csv', 'w') as saveFile:
saveFile.write(respData.decode('latin-1'))
except Exception as e:
print(str(e))
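Alternatively, since the underlying problem is that `str(respData)` turns the bytes into a literal `b'...'` string with escaped line endings, writing the raw bytes in binary mode also avoids the mangling:

with open('Vintage2014.csv', 'wb') as saveFile:
    saveFile.write(respData)  # write the downloaded bytes untouched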
|
Copy and paste region of image in opencv?
Question: I'm stuck at [this](http://opencv-python-
tutroals.readthedocs.org/en/latest/py_tutorials/py_core/py_basic_ops/py_basic_ops.html#image-
roi "this") tutorial where a ROI is pasted over another region of same image.
Python trows a value error when I try something similar:
img = cv2.imread(path, -1)
eye = img[349:307, 410:383]
img[30:180, 91:256] = eye
Exeption:
Traceback (most recent call last):
File "test.py", line 13, in <module>
img[30:180, 91:256] = eye
ValueError: could not broadcast input array from shape (0,0,3) into shape (150,165,3)
This might be a very newb question, but I couldn't come up with an answer by
searching on Google. Are there other numpy methods for doing this?
EDIT: Also, the tutorial doesn't specify how the coordinates should be
entered. E.g., should I enter the coords of the region I want as `eye =
img[x1:y1, x2:y2]` or as `img[x1:x2, y1:y2]`? This is what's confusing to me.
I actually got these coords from a mouse callback method which printed the
position of the mouse click, so the coordinates are surely from inside the
image.
Answer: Your slice `[349:307, 410:383]` returns an empty array `eye`, which could not
be assigned to an array view of different **shape**.
E.g.:
In [8]: import cv2
...: fn=r'D:\Documents\Desktop\1.jpg'
...: img=cv2.imread(fn, -1)
...: roi=img[200:400, 200:300]
In [9]: roi.shape
Out[9]: (200, 100, 3)
In [10]: img2=img.copy()
In [11]: img2[:roi.shape[0], :roi.shape[1]]=roi
In [12]: cv2.imshow('img', img)
...: cv2.imshow('roi', roi)
...: cv2.imshow('img2', img2)
...: cv2.waitKey(0)
...: cv2.destroyAllWindows()
result:
img & roi:
[](http://i.stack.imgur.com/7wqct.png)
[](http://i.stack.imgur.com/2i2ae.png)
img2:
[](http://i.stack.imgur.com/MavRu.png)
NOTE that even if `roi` is not an empty array, assignment with mismatching
shapes will raise errors:
In [13]: img2[:100, :100]=roi
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-85de95cf3ded> in <module>()
----> 1 img2[:100, :100]=roi
ValueError: could not broadcast input array from shape (200,100,3) into shape (100,100,3)
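To answer the coordinate question from the edit: numpy image slices are `img[y1:y2, x1:x2]` (rows first, then columns), and each range must be increasing. A corrected version of the snippet from the question would therefore look like:

# rows (y) come first, columns (x) second, and both ranges must increase
eye = img[307:349, 383:410]
# the target region must have exactly the same shape as eye
img[30:30 + eye.shape[0], 91:91 + eye.shape[1]] = eye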
|
Python documentation equivalent for Perl's "perldoc"
Question: ## Quick [perldoc](http://search.cpan.org/~dapm/perl-5.14.4/pod/perldoc.pod)
overview:
When writing a Perl module you can document it with
[`POD`](http://perldoc.perl.org/perlpod.html) style documentation. Then to get
an overview of how the module works you can just type this into the command
line:
`perldoc <module_name>`
* * *
## How do I do this with Python?
I understand that Python has a standard form of documenting code using
"docstrings" that is somewhat similar to Perl's POD style. The information
about the Python module can then be extracted using the `help()` function, but
this is _far from elegant_.
First you need to start the Python interpreter, then import the module you
want to get help for, and then finally you can use the `help()` function to
get information about the module.
**example:**
>python
# Prints Python version info
>>>import <module_name>
>>>help(<module_name>)
# Prints documentation!
* * *
## Is there a better way?
I would like a Python equivalent to the way this works for Perl:
`pydoc <module_name>`
but when I try this I get the following output:
'pydoc' is not recognized as an internal or external command,
operable program or batch file.
Answer: ## It turns out that `pydoc` actually does work just like `perldoc`!
With a catch ... you just need to type it a little different!
`python -m pydoc <module_name>`
* * *
## And with a little "haquery"... :)
Create a pydoc.bat file as shown below.
pydoc.bat:
python -m pydoc %1
Then store this file in "C:\Python27" (or in the same location as your
python.exe file)
> Then swiftly wave your hands over the keyboard and exclaim "**_SIM SALA
> BIM!_** "
your `pydoc` command will now work!
`pydoc <module_name>`
|
Trees and path finding in Fortran
Question: I am attempting to replicate some Python code in Fortran 90 to make it work
within a larger Fortran project I am contributing to. Specifically, I am
trying to convert some code that recursively identifies upstream paths in a
binary tree, such as in the following example:
4 -- 5 -- 8
/
2 --- 6 - 9 -- 10
/ \
1 -- 11
\
3 ----7
This tree is represented and traversed by:
class Node(object):
def __init__(self):
self.name = None
self.parent = None
self.children = set()
        self._paths = []
def __repr__(self):
return "Node({})".format(self.name)
# Recursively search upstream in the drainage network, returns a set of all paths
@property
def upstream_paths(self):
if not self._paths:
for child in self.children:
if child.upstream_paths:
self._paths.extend([child] + path for path in child.upstream_paths)
else:
self._paths.append([child])
return self._paths
from collections import defaultdict
edges = {(11, 9), (10, 9), (9, 6), (6, 2), (8, 5), (5, 4), (4, 2), (2, 1), (3, 1), (7, 3)}
nodes = defaultdict(lambda: Node())
for node, parent in edges:
nodes[node].name = node
nodes[parent].name = parent
nodes[node].parent = nodes[parent]
nodes[parent].children.add(nodes[node])
Is it possible to implement anything like this in Fortran 90? I have a decent
understanding of recursion in f90 but without the object-orientedness of
Python, I can't imagine how this can be done.
EDIT:
For further description:
What I intend to do is identify upstream drainage paths in a dendritic stream
network. For any given outlet (root) there may be hundreds or thousands
upstream paths. There would be no modification of the network required once it
is initialized, although there will be calls to many different nodes within
the network (in the above example, a call will be made for all upstream paths
from 1, from 6, from 5, etc.) I've been looking into using pointers but can't
seem to find any examples out there of this kind of path-finding.
Answer: I find that it's much easier to convert Python towards another language while
you're still in Python (and then you can test in Python every step of the
way). Python is so flexible that it is much easier to make Python that looks
like F90 (or almost anything else) than the other way around.
I used to do this with assembly language. I'd modify my python to make it look
more like assembler, then try to code it in assembler, realize I'd missed
something, then modify the Python again. By the time I finished, I had Python
and assembly language that were easy to read and had one-to-one
correspondence, and _the Python had been tested_ every step of the way. The
assembly language just worked(TM) when I ran it.
FWIW, if removing the recursion is something you are considering,
[here](http://blog.moertel.com/posts/2013-05-11-recursive-to-iterative.html)
is an excellent guide on the best way to do exactly that.
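Before porting, it may also help to rewrite the recursive property as an explicit-stack loop in Python first; this sketch performs the same traversal and maps more directly onto Fortran-style arrays and loops:

def upstream_paths_iterative(node):
    # depth-first traversal with an explicit stack instead of recursion
    paths = []
    stack = [(child, [child]) for child in node.children]
    while stack:
        current, path = stack.pop()
        if current.children:
            stack.extend((c, path + [c]) for c in current.children)
        else:
            paths.append(path)
    return paths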
|
Python 2d Ball Collision
Question: Here is what I have mustered up:
from graphics import *
from random import *
from math import *
import time

class Ball(Circle):
    def __init__(self, win_width, win_high, point, r, vel1, vel2):
        Circle.__init__(self, point, r)
        self.width = win_width
        self.high = win_high
        self.vecti1 = vel1
        self.vecti2 = vel2

    def collide_wall(self):
        bound1 = self.getP1()
        bound2 = self.getP2()
        if (bound2.y >= self.width):
            self.vecti2 = -self.vecti2
            self.move(0, -1)
        if (bound2.x >= self.high):
            self.vecti1 = -self.vecti1
            self.move(-1, 0)
        if (bound1.x <= 0):
            self.vecti1 = -self.vecti1
            self.move(1, 0)
        if (bound2.y <= 0):
            self.vecti2 = -self.vecti2
            self.move(0, 1)

    def ball_collision(self, cir2):
        radius = self.getRadius()
        radius2 = cir2.getRadius()
        bound1 = self.getP1()
        bound3 = cir2.getP1()
        center1 = Point(radius + bound1.x, radius + bound1.y)
        center2 = Point(radius2 + bound3.x, radius2 + bound3.y)
        centerx = center2.getX() - center1.getX()
        centery = center2.getY() - center1.getY()
        distance = sqrt((centerx * centerx) + (centery * centery))
        if (distance <= (radius + radius2)):
            xdistance = abs(center1.getX() - center2.getX())
            ydistance = abs(center1.getY() - center2.getY())
            if (xdistance <= ydistance):
                if ((self.vecti2 > 0 & bound1.y < bound3.y) | (self.vecti2 < 0 & bound1.y > bound3.y)):
                    self.vecti2 = -self.vecti2
                if ((cir2.vecti2 > 0 & bound3.y < bound1.y) | (cir2.vecti2 < 0 & bound3.y > bound1.y)):
                    cir2.vecti2 = -cir2.vecti2
            elif (xdistance > ydistance):
                if ((self.vecti1 > 0 & bound1.x < bound3.x) | (self.vecti1 < 0 & bound1.x > bound3.x)):
                    self.vecti1 = -self.vecti1
                if ((cir2.vecti1 > 0 & bound3.x < bound1.x) | (cir2.vecti1 < 0 & bound3.x > bound1.x)):
                    cir2.vecti1 = -cir2.vecti1

def main():
    win = GraphWin("Ball screensaver", 700, 700)
    velo1 = 4
    velo2 = 3
    velo3 = -4
    velo4 = -3
    cir1 = Ball(win.getWidth(), win.getHeight(), Point(50, 50), 20, velo1, velo2)
    cir1.setOutline("red")
    cir1.setFill("red")
    cir1.draw(win)
    cir2 = Ball(win.getWidth(), win.getHeight(), Point(200, 200), 20, velo3, velo4)
    cir2.setOutline("blue")
    cir2.setFill("blue")
    cir2.draw(win)
    while True:
        cir1.move(cir1.vecti1, cir1.vecti2)
        cir2.move(cir2.vecti1, cir2.vecti2)
        time.sleep(.010)
        cir1.collide_wall()
        cir2.collide_wall()
        cir1.ball_collision(cir2)
        #cir2.ball_collision(cir1)

main()
OK, so here is the problem: the math is not working correctly at all.
Sometimes it works perfectly; sometimes one ball overpowers the other ball, or
they don't react like a ball collision should. I am racking my brain trying to
figure out what the problem is, but I feel I am too close to the project at the
moment to see it. Any help would be appreciated.
Answer: Fix your "if" statements to be legal and straightforward. I think that you
might be trying to say something like what's below. It's hard to tell, since
you haven't documented your code.
if cir2.vecti2 > 0 and bound3.y > bound1.y:
cir2.vecti2 = -cir2.vecti2
Note that bound3 has no value. You will find other problems, I'm sure.
I suggest that you back up and try incremental coding. First, try getting one
ball to move around legally, bouncing off walls. Put in tracing print
statements for the position, and label them so you know where you are in your
code.
Once you have that working, then add the second ball. Continue with the print
statements, commenting out the ones you don't think you need any more. Don't
delete them until you have the whole program working.
Does this get you going?
|
Trendline in Plotly Python
Question: I am generating a plot in Python using Plotly, which shows data in a
timeseries. I am using the following data from my SQLite database (as _dates_
and _lines_ below):
[(u'2015-12-08 00:00:00',), (u'2015-11-06 00:00:00',), (u'2015-11-06 00:00:00',), (u'2015-10-07 00:00:00',), (u'2015-10-06 00:00:00',), (u'2015-10-06 00:00:00',), (u'2015-09-17 00:00:00',), (u'2015-09-17 00:00:00',), (u'2015-09-17 00:00:00',), (u'2015-09-17 00:00:00',), (u'2015-09-16 00:00:00',), (u'2015-09-15 00:00:00',), (u'2015-09-15 00:00:00',), (u'2015-09-15 00:00:00',), (u'2015-08-30 00:00:00',), (u'2015-08-22 00:00:00',), (u'2015-08-22 00:00:00',), (u'2015-08-17 00:00:00',), (u'2015-08-09 00:00:00',), (u'2015-08-09 00:00:00',), (u'2015-08-08 00:00:00',), (u'2015-08-07 00:00:00',), (u'2015-07-28 00:00:00',), (u'2015-07-26 00:00:00',), (u'2015-07-22 00:00:00',), (u'2015-07-22 00:00:00',), (u'2015-07-22 00:00:00',), (u'2015-07-13 00:00:00',), (u'2015-07-13 00:00:00',), (u'2015-07-13 00:00:00',), (u'2015-07-13 00:00:00',), (u'2015-07-09 00:00:00',), (u'2015-07-09 00:00:00',), (u'2015-07-09 00:00:00',), (u'2015-07-09 00:00:00',), (u'2015-06-28 00:00:00',), (u'2015-06-28 00:00:00',), (u'2015-06-28 00:00:00',), (u'2015-06-16 00:00:00',), (u'2015-06-14 00:00:00',), (u'2015-06-14 00:00:00',), (u'2015-06-14 00:00:00',), (u'2015-06-04 00:00:00',), (u'2015-04-09 00:00:00',), (u'2015-03-31 00:00:00',), (u'2015-03-09 00:00:00',), (u'2015-03-09 00:00:00',), (u'2015-03-09 00:00:00',), (u'2015-03-09 00:00:00',), (u'2015-03-09 00:00:00',), (u'2015-03-09 00:00:00',)]
[(18,), (24,), (17,), (22,), (16,), (18,), (24,), (20,), (16,), (14,), (21,), (21,), (24,), (15,), (23,), (22,), (22,), (20,), (24,), (20,), (20,), (20,), (22,), (21,), (21,), (23,), (23,), (17,), (25,), (20,), (25,), (25,), (25,), (26,), (26,), (19,), (17,), (16,), (16,), (14,), (17,), (17,), (13,), (27,), (19,), (19,), (12,), (17,), (20,), (12,), (21,)]
Some data is overlapping (multiple instances in the same day), but presumably
this would not matter for a fitted line. My code looks like this:
import sqlite3
import plotly.plotly as py
from plotly.graph_objs import *
import numpy as np
db = sqlite3.connect("Applications.db")
cursor = db.cursor()
cursor.execute('SELECT date FROM applications ORDER BY date(date) DESC')
dates = cursor.fetchall()
cursor.execute('SELECT lines FROM applications ORDER BY date(date) DESC')
lines = cursor.fetchall()
trace0 = Scatter(
x=dates,
y=lines,
name='Amount of lines',
mode='markers'
)
trace1 = Scatter(
x=dates,
y=lines,
name='Fit',
mode='markers'
)
data = Data([trace0, trace1])
py.iplot(data, filename = 'date-axes')
How do I make _trace1_ a fitted trendline base on this data? That is, a smooth
representation showing the development of the data.
Answer: Per Plotly support: "Unfortunately fits aren't exposed through the API right
now. We're working on add the fit GUI to the IPython interface though and
eventually the API" (25th of September, 2015).
I found the easiest way of doing this, after an inordinate amount of reading
and googling, was through Matplotlib, NumPy, and SciPy. Having cleaned up the
data a bit, the following code worked:
import plotly.plotly as py
import plotly.tools as tls
from plotly.graph_objs import *
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import matplotlib.dates as dates
def line(x, a, b):
return a * x + b
popt, pcov = curve_fit(line, trend_dates.ravel(), trend_lines.ravel())
fig1 = plt.figure(figsize=(8,6))
plt.plot_date(new_x, trend_lines, 'o', label='Lines')
z = np.polyfit(new_x, trend_lines, 1)
p = np.poly1d(z)
plt.plot(new_x, p(new_x), '-', label='Fit')
plt.title('Lines per day')
fig = tls.mpl_to_plotly(fig1)
fig['layout'].update(showlegend=True)
fig.strip_style()
py.iplot(fig)
Where essentially `new_x` are dates as expected by Matplotlib, and
`trend_lines` regular data as in the question. This is not a full example, as
a fair amount of the aforementioned data cleaning and importing of libraries
precedes it, but it shows a way of getting the Plotly figure as output by
going through Matplotlib, NumPy, and SciPy.
|
Differential Testing of GNUs Coreutils 'fmt' Utility
Question: I am exploring various testing strategies (differential, regression, unit,
etc...), and have been assigned the task of testing GNUs `Coreutils`
_[`fmt`](http://www.gnu.org/software/coreutils/manual/html_node/fmt-
invocation.html#fmt-invocation)_ utility. I am trying to apply randomized
differential testing and create an oracle, so as to assert the described
postconditions of the utility are met.
* * *
What I would like to do is create a Python utility that generates a randomized
string, applies text wrapping to the string (up to a given line width) to
generate an expected output, and then invoke the fmt utility on the generated
string and assert that the output matches the expected output. To do this, I
am trying to leverage the
[`textwrap`](https://docs.python.org/2/library/textwrap.html#module-textwrap)
Python module. However, I have not found a way to ensure that indentation is
maintained. Consider a file (file.txt) with contents
\s\s\s\sLorem ipsum dolor sit amet, consectetuer-adipiscing elit. Curabitur dignissim venenatis pede. Quisque dui dui, ultricies ut, facilisis non, pulvinar non.
as input to the fmt utility. Invoking the command `fmt -w 50 file.txt` leads
to the output:
\s\s\s\sLorem ipsum dolor sit amet,
\s\s\s\sconsectetuer-adipiscing elit. Curabitur
\s\s\s\sdignissim venenatis pede. Quisque dui dui,
\s\s\s\sultricies ut, facilisis non, pulvinar non.
* * *
According to the fmt utilities documentation,
> By default, blank lines, spaces between words, and indentation are preserved
> in the output; successive input lines with different indentation are not
> joined; tabs are expanded on input and introduced on output.
>
> fmt prefers breaking lines at the end of a sentence, and tries to avoid line
> breaks after the first word of a sentence or before the last word of a
> sentence. A sentence break is defined as either the end of a paragraph or a
> word ending in any of ‘.?!’, followed by two spaces or end of line, ignoring
> any intervening parentheses or quotes.
In my attempt to mimic the same output behavior as the fmt utility, I decided
to use the textwrap module's
_[`fill`](https://docs.python.org/2/library/textwrap.html#textwrap.fill)_
function as follows:
textwrap.fill(in_str, width=50, expand_tabs=True, drop_whitespace=False, fix_sentence_endings=True, break_on_hyphens=False)
Which, according to the Python documentation, should do the following:
1. The maximum length of the wrapped lines will be equal to the width parameter (50).
2. All tab characters in the input will be expanded to spaces.
3. Whitespace at the beginning and ending of every line (after wrapping but before indenting) will **not** be dropped.
4. Assume that a sentence ending consists of a lowercase letter followed by one of '.', '!', or '?', possibly followed by one of '"' or "'", followed by a space.
5. Only white spaces will be considered as potentially good places for line breaks.
However, the output of the textwrap.fill function on the same input returns:
\s\s\s\sLorem ipsum dolor sit amet,
consectetuer-adipiscing elit. Curabitur dignissim
venenatis pede. Quisque dui dui, ultricies ut,
facilisis non, pulvinar non. Duis quis arcu a
purus volutpat iaculis. Morbi id dui in diam
ornare dictum. Praesent consectetuer vehicula
ipsum. Praesent tortor massa, congue et, ornare
in, posuere eget, pede.
As you can see, the indentation level is not maintained.
* * *
What tool and/or differential testing strategy could I best utilize to test
the fmt utility most effectively? Any suggestions are much appreciated!
Answer: Try to run this snippet with `py.test`:
#!/usr/bin/env python2.7
from textwrap import TextWrapper
input_tx = """\s\s\s\sLorem ipsum dolor sit amet, consectetuer-adipiscing elit. Curabitur dignissim venenatis pede. Quisque dui dui, ultricies ut, facilisis non, pulvinar non."""
output_tx1 = """\s\s\s\sLorem ipsum dolor sit amet, consectetuer-
\s\s\s\sadipiscing elit. Curabitur dignissim
\s\s\s\svenenatis pede. Quisque dui dui, ultricies
\s\s\s\sut, facilisis non, pulvinar non."""
output_tx2 = """\s\s\s\sLorem ipsum dolor sit amet,
\s\s\s\sconsectetuer-adipiscing elit. Curabitur
\s\s\s\sdignissim venenatis pede. Quisque dui dui,
\s\s\s\sultricies ut, facilisis non, pulvinar non."""
class Test_TextWrapper:
def test_stackoverflow_q32753000_1(self):
sut = TextWrapper(width=50, subsequent_indent="\s\s\s\s")
assert sut.fill(input_tx) == output_tx1
def test_stackoverflow_q32753000_2(self):
sut = TextWrapper(width=50, subsequent_indent="\s\s\s\s",
fix_sentence_endings=False,
break_on_hyphens=False)
assert sut.fill(input_tx) == output_tx2
It should show you something like this:
> py.test -v -k Test_TextWrapper
============================================== test session starts ===============================================
platform linux2 -- Python 2.7.11, pytest-2.9.0, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
...
plugins: cov-2.2.1
collected 6 items
test_fmt.py::Test_TextWrapper::test_stackoverflow_q32753000_1 PASSED
test_fmt.py::Test_TextWrapper::test_stackoverflow_q32753000_2 PASSED
=================================== 4 tests deselected by '-kTest_TextWrapper' ===================================
===================================== 2 passed, 4 deselected in 0.03 seconds =====================================
As you can see, my second test case produces the same output as shown in the
example of your question.
I'm currently writing a small Python wrapper class for Python standard library
`textwrap.TextWrapper` class. This derived class will provide a new `prefix`
keyword argument combining the effects of the `initial_indent` and the
`subsequent_indent` parameters with a removal of the prefix from the input
text. (Similar to what the `-p` option of the `fmt` utility program does)
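A minimal sketch of that idea (the class name and exact `prefix` semantics are mine, not a finished implementation):

from textwrap import TextWrapper

class PrefixWrapper(TextWrapper):
    def __init__(self, prefix="", **kwargs):
        TextWrapper.__init__(self, initial_indent=prefix,
                             subsequent_indent=prefix, **kwargs)
        self.prefix = prefix

    def fill(self, text):
        # strip the prefix from each input line, wrap, then re-indent
        lines = [l[len(self.prefix):] if l.startswith(self.prefix) else l
                 for l in text.splitlines()]
        return TextWrapper.fill(self, " ".join(lines))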
While looking for prior art and inspiration I found your question here.
|
RuntimeWarning: PyOS_InputHook is not available for interactive use of PyGTK
Question: I'm using PyGTK for Python 2.7 in Ubuntu 14.04, but I get the following
message:
RuntimeWarning: PyOS_InputHook is not available for interactive use of PyGTK
What could be the reason ?
Answer: When does it trigger? Are you trying to run some script or just use PyGTK
interactively?
Most likely, your input hook is grabbed by another interactive loop, e.g.:
>>> import Tkinter
>>> root = Tkinter.Tk() # input hook is grabbed by Tkinter for immediate result evaluation
>>> import gtk # gtk tries to grab the hook, but fails
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:127: RuntimeWarning: PyOS_InputHook is not available for interactive use of PyGTK
The _immediate result evaluation_ means that an expression results are
evaluated immediately (e.g. window is displayed) before entering the main
loop.
Have in mind, that this is a warning, not an error, but if it bothers you, you
can import gtk module as early as you can (or, well, reasonably early) and
release the input hook:
import gtk
gtk.set_interactive(False)
import Tkinter
root = Tkinter.Tk()
# no warning here
|
Concurrency with shell scripts in failure-prone environments
Question: Good morning all,
I am trying to implement concurrency in a very specific environment, and keep
getting stuck. Maybe you can help me.
This is the situation:
- I have N nodes that can read/write in a shared folder.
- I want to execute an application in one of them. This can be anything, like a shell script, an installed code, or whatever.
- To do so, I have to send the same command to all of them. The first one should start the execution, and the rest should see that somebody else is running the desired application and exit.
- The execution of the application can be killed at any time. This is important because it does not allow relying on any cleaning step after the execution.
- If the application gets killed, the user may want to execute it again. He would then send the very same command.
My current approach is to create a shell script that wraps the command to be
executed. This could also be implemented in C. Not python or other languages,
to avoid library dependencies.
#!/bin/sh
# (folder structure simplified for legibility)
mutex(){
lockdir=".lock"
firstTask=1 #false
if mkdir "$lockdir" &> /dev/null
then
controlFile="controlFile"
#if this is the first node, start coordinator
if [ ! -f $controlFile ]; then
firstTask=0 #true
#tell the rest of nodes that I am in control
echo "some info" > $controlFile
fi
# remove control File when script finishes
trap 'rm $controlFile' EXIT
fi
return $firstTask
}
#The basic idea is that a task executes the desire command, stated as arguments to this script. The rest do nothing
if ! mutex ;
then
exit 0
fi
#I am the first node and the only one reaching this, so I execute whatever
$@
If there are no failures, this wrapper works great. The problem is that, if
the script is killed before the execution, the trap is not executed and the
control file is not removed. Then, when we execute the wrapper again to
restart the task, it won't work as every node will think that somebody else is
running the application.
A possible solution would be to remove the control file just before the "$@"
call, but that would lead to a race condition.
Any suggestion or idea?
Thanks for your help.
**edit** : edited with correct solution as future reference
Answer: Your trap condition is likely the problem: an `EXIT` trap does not run when the script is killed by an untrapped signal. Per POSIX the syntax is:
trap [action condition ...]
e.g.:
trap 'rm $controlFile' HUP INT TERM
trap 'rm $controlFile' 1 2 15
Note that `$controlFile` will not be expanded until the trap is executed if
you use single quotes.
|
Python-Modelica interface: Data
Question: What is, in your opinion, the best way to get data (measured data, for example)
into Modelica (Dymola)? Is it possible to import data from Python into Modelica
(for example into a combi-time-table)? My idea would be as follows:
1. pre-process the measured data in Python
2. load the data from Python into Modelica (combi-time-table)
3. run simulation studies (scripted in Python)
I would appreciate any suggestions.
Answer: That's probably a matter of opinion. But since you have to do much of your
data post- and preprocessing in Python I would definitely export my (plant)
model from Dymola as a co-simulation FMU and run it in Python.
In Dymola you can export FMU's and 'execute' them **on the same pc** that
holds the Dymola license file. If you need to run the FMU on another pc you'll
have to buy a special binary export license.
There is a free Python package called PyFMI
([www.pyfmi.org](http://www.pyfmi.org)) which makes it easy to run an FMU in
Python. See the examples at
[http://www.jmodelica.org/page/4924](http://www.jmodelica.org/assimulo_home/pyfmi_1.0/examples.html).
PyFMI can be a bit tricky to get up and running (with the right Python package
dependencies and so on). So if you are not an experienced Python user I would
suggest that you download the installer for
[JModelica.org](http://www.jmodelica.org/binary) which will do much the
setting up for you.
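For orientation, running an exported FMU with PyFMI is only a few lines (a sketch; 'plant.fmu' is a placeholder file name):

from pyfmi import load_fmu

model = load_fmu('plant.fmu')          # FMU exported from Dymola
res = model.simulate(final_time=10.0)  # run the (co-)simulation
print(res['time'][-1])                 # result trajectories are accessed like a dict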
Best regards,
Rene Just Nielsen
|
Timeout a file download with Python urllib?
Question: Python beginner here. I want to be able to timeout my download of a video file
if the process takes longer than 500 seconds.
import urllib
try:
urllib.urlretrieve ("http://www.videoURL.mp4", "filename.mp4")
except Exception as e:
print("error")
How do I amend my code to make that happen?
Answer: Better way is to use `requests` so you can stream the results and easily check
for timeouts:
import requests
# Make the actual request, set the timeout for no data to 10 seconds and enable streaming responses so we don't have to keep the large files in memory
request = requests.get('http://www.videoURL.mp4', timeout=10, stream=True)
# Open the output file and make sure we write in binary mode
with open('filename.mp4', 'wb') as fh:
# Walk through the request response in chunks of 1024 * 1024 bytes, so 1MiB
for chunk in request.iter_content(1024 * 1024):
# Write the chunk to the file
fh.write(chunk)
# Optionally we can check here if the download is taking too long
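If you need to stick with `urllib`, a rough sketch is to set a global socket timeout. Note that this bounds each individual socket read, not the total 500-second download time:

import socket
import urllib

socket.setdefaulttimeout(500)  # applies per socket operation, not to the whole transfer
try:
    urllib.urlretrieve("http://www.videoURL.mp4", "filename.mp4")
except Exception as e:
    print("error")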
|
Python: Continuously check size of files being added to list, stop at size, zip list, continue
Question: I am trying to loop through a directory, check the size of each file, and add
the files to a list until they reach a certain size (2040 MB). At that point,
I want to put the list into a zip archive, and then continue looping through
the next set of files in the directory and continue to do the same thing. The
other constraint is that files with the same name but different extension need
to be added together into the zip, and can't be separated. I hope that makes
sense.
The issue I am having is that my code basically ignores the size constraint
that I have added, and just zips up all the files in the directory anyway.
I suspect there is some logic issue, but I am failing to see it. Any help
would be appreciated. Here is my code:
import os,os.path, zipfile
from time import *
#### Function to create zip file ####
# Add the files from the list to the zip archive
def zipFunction(zipList):
# Specify zip archive output location and file name
zipName = "D:\Documents\ziptest1.zip"
# Create the zip file object
zipA = zipfile.ZipFile(zipName, "w", allowZip64=True)
# Go through the list and add files to the zip archive
for w in zipList:
# Create the arcname parameter for the .write method. Otherwise the zip file
# mirrors the directory structure within the zip archive (annoying).
arcname = w[len(root)+1:]
# Write the files to a zip
zipA.write(w, arcname, zipfile.ZIP_DEFLATED)
# Close the zip process
zipA.close()
return
#################################################
#################################################
sTime = clock()
# Set the size counter
totalSize = 0
# Create an empty list for adding files to count MB and make zip file
zipList = []
tifList = []
xmlList = []
# Specify the directory to look at
searchDirectory = "Y:\test"
# Create a counter to check number of files
count = 0
# Set the root, directory, and file name
for root,direc,f in os.walk(searchDirectory):
#Go through the files in directory
for name in f:
# Set the os.path file root and name
full = os.path.join(root,name)
# Split the file name from the file extension
n, ext = os.path.splitext(name)
# Get size of each file in directory, size is obtained in BYTES
fileSize = os.path.getsize(full)
# Add up the total sizes for all the files in the directory
totalSize += fileSize
# Convert from bytes to megabytes
# 1 kilobyte = 1,024 bytes
# 1 megabyte = 1,048,576 bytes
# 1 gigabyte = 1,073,741,824 bytes
megabytes = float(totalSize)/float(1048576)
if ext == ".tif": # should be everything that is not equal to XML (could be TIF, PDF, etc.) need to fix this later
tifList.append(n)#, fileSize/1048576])
tifSorted = sorted(tifList)
elif ext == ".xml":
xmlList.append(n)#, fileSize/1048576])
xmlSorted = sorted(xmlList)
if full.endswith(".xml") or full.endswith(".tif"):
zipList.append(full)
count +=1
if megabytes == 2040 and len(tifList) == len(xmlList):
zipFunction(zipList)
else:
continue
eTime = clock()
elapsedTime = eTime - sTime
print "Run time is %s seconds"%(elapsedTime)
The only thing I can think of is that there is never an instance where my
variable `megabytes==2040` exactly. I can't figure out how to make the code
stop at that point otherwise though; I wonder if using a range would work? I
also tried:
if megabytes < 2040:
zipList.append(full)
continue
elif megabytes == 2040:
zipFunction(zipList)
Answer: Your main problem is that you need to reset your file size tally when you
archive the current list of files. Eg
if megabytes >= 2040:
zipFunction(zipList)
totalSize = 0
BTW, you don't need
else:
continue
there, since it's the end of the loop.
As for the constraint that you need to keep files together that have the same
main file name but different extensions, the only fool-proof way to do that is
to sort the file names before processing them.
If you want to guarantee that the total file size in each archive is under the
limit you need to test the size before you add the file(s) to the list. Eg,
if (totalSize + fileSize) // 1048576 > 2040:
    zipFunction(zipList)
    totalSize = 0
totalSize += fileSize
That logic will need to be modified slightly to handle keeping a group of
files together: you'll need to add the filesizes of each file in the group
together into a sub-total, and then see if adding that sub-total to
`totalSize` takes it over the limit.
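A rough sketch of that grouping logic, reusing `searchDirectory` and `zipFunction` from the question (note that `zipFunction` would also need a unique archive name per batch, since it currently writes to a fixed path):

import os
from collections import defaultdict

# group files by base name so e.g. foo.tif and foo.xml stay in the same zip
groups = defaultdict(list)
for root, dirs, files in os.walk(searchDirectory):
    for name in files:
        base, ext = os.path.splitext(name)
        if ext in ('.tif', '.xml'):
            groups[os.path.join(root, base)].append(os.path.join(root, name))

totalSize = 0
zipList = []
for base in sorted(groups):
    subTotal = sum(os.path.getsize(p) for p in groups[base])
    # flush the current batch before this group would push it over the limit
    if zipList and (totalSize + subTotal) > 2040 * 1048576:
        zipFunction(zipList)
        zipList = []
        totalSize = 0
    zipList.extend(groups[base])
    totalSize += subTotal
if zipList:
    zipFunction(zipList)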
|
Making a vectorized numpy function behave like a ufunc
Question: Let's suppose that we have a Python function that takes in Numpy arrays and
returns another array:
import numpy as np
def f(x, y, method='p'):
"""Parameters: x (np.ndarray) , y (np.ndarray), method (str)
Returns: np.ndarray"""
z = x.copy()
if method == 'p':
mask = x < 0
else:
mask = x > 0
z[mask] = 0
return z*y
although the actual implementation does not matter. We can assume that `x` and
`y` will always be arrays of the same shape, and that the output is of the
same shape as `x`.
The question is what would be the simplest/most elegant way of wrapping such
function so it would work with both ND arrays (N>1) and scalar arguments, in a
manner somewhat similar to [universal functions in
Numpy](https://docs.scipy.org/doc/numpy/reference/ufuncs.html).
For instance, the expected output for the above function should be,
In [1]: f_ufunc(np.arange(-1,2), np.ones(3), method='p')
Out[1]: array([ 0., 0., 1.]) # random array input -> output of the same shape
In [2]: f_ufunc(np.array([1]), np.array([1]), method='p')
Out[2]: array([1]) # array input of len 1 -> output of len 1
In [3]: f_ufunc(1, 1, method='p')
Out[3]: 1 # scalar input -> scalar output
* The function `f` cannot be changed, and it will fail if given a scalar argument for `x` or `y`.
* When `x` and `y` are scalars, we transform them to 1D arrays, do the calculation then transform them back to scalars at the end.
* `f` is optimized to work with arrays, scalar input being mostly a convenience. So writing a function that work with scalars and then using `np.vectorize` or `np.frompyfunc` would not be acceptable.
A beginning of an implementation could be,
def atleast_1d_inverse(res, x):
# this function fails in some cases (see point 1 below).
if res.shape[0] == 1:
return res[0]
else:
return res
def ufunc_wrapper(func, args=[]):
""" func: the wrapped function
args: arguments of func to which we apply np.atleast_1d """
# this needs to be generated dynamically depending on the definition of func
def wrapper(x, y, method='p'):
# we apply np.atleast_1d to the variables given in args
x = np.atleast_1d(x)
y = np.atleast_1d(y)
res = func(x, y, method='p')
return atleast_1d_inverse(res, x)
return wrapper
f_ufunc = ufunc_wrapper(f, args=['x', 'y'])
which mostly works, but will fail the tests 2 above, producing a scalar output
instead of a vector one. If we want to fix that, we would need to add more
tests on the type of the input (e.g. `isinstance(x, np.ndarray)`, `x.ndim>0`,
etc), but I'm afraid to forget some corner cases there. Furthermore, the above
implementation is not generic enough to wrap a function with a different
number of arguments (see point 2 below).
This seems to be a rather common problem, when working with Cython / f2py
function, and I was wondering if there was a generic solution for this
somewhere?
**Edit:** a bit more precision following @hpaulj's comments. Essentially, I'm
looking for
1. a function that would be the inverse of `np.atleast_1d`, such as `atleast_1d_inverse( np.atleast_1d(x), x) == x`, where the second argument is only used to determine the type or the number of dimensions of the original object `x`. Returning numpy scalars (i.e. arrays with `ndim = 0`) instead of a python scalar is ok.
2. A way to inspect the function f and generate a wrapper that is consistent with its definition. For instance, such wrapper could be used as,
`f_ufunc = ufunc_wrapper(f, args=['x', 'y'])`
and then if we have a different function `def f2(x, option=2): return x**2`,
we could also use
`f2_ufunc = ufunc_wrapper(f2, args=['x'])`.
_Note:_ the analogy with ufuncs might be a bit limited, as this corresponds to
the opposite problem. Instead of having a scalar function that we transform to
accept both vector and scalar input, I have a function designed to work with
vectors (that can be seen as something that was previously vectorized), that I
would like to accept scalars again, without changing the original function.
Answer: This doesn't fully answer the question of making a vectorized function truly
behave like a `ufunc`, but I did recently run into a slight annoyance with
`numpy.vectorize` that sounds similar to your issue. That wrapper insists on
returning an `array` (with `ndim=0` and `shape=()`) even if passed scalar
inputs.
But it appears that the following does the right thing. In this case I am
vectorizing a simple function to return a floating point value to a certain
number of significant digits.
def signif(x, digits):
return round(x, digits - int(np.floor(np.log10(abs(x)))) - 1)
def vectorize(f):
vf = np.vectorize(f)
def newfunc(*args, **kwargs):
return vf(*args, **kwargs)[()]
return newfunc
vsignif = vectorize(signif)
This gives
>>> vsignif(0.123123, 2)
0.12
>>> vsignif([[0.123123, 123.2]], 2)
array([[ 0.12, 120. ]])
>>> vsignif([[0.123123, 123.2]], [2, 1])
array([[ 0.12, 100. ]])
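The `[()]` indexing is what does the unwrapping here: indexing a 0-d array with an empty tuple returns a scalar, while arrays with one or more dimensions are returned unchanged:

>>> import numpy as np
>>> np.array(1.5)[()]    # 0-d array -> scalar
1.5
>>> np.array([1.5])[()]  # 1-d array comes back as-is
array([ 1.5])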
|
Merging two csv files where common column matches
Question: I have a csv of users, and a csv of virtual machines, and i need to merge the
users into their vms only where their id match.
But all im getting is a huge file containing everything.
file_names = ['vms.csv', 'users.csv']
o_data = []
for afile in file_names:
file_h = open(afile)
a_list = []
a_list.append(afile)
csv_reader = csv.reader(file_h, delimiter=';')
for row in csv_reader:
a_list.append(row[0])
o_data.append((n for n in a_list))
file_h.close()
with open('output.csv', 'w') as op_file:
csv_writer = csv.writer(op_file, delimiter=';')
for row in list(zip(*o_data)):
csv_writer.writerow(row)
op_file.close()
I'm relatively new to Python, am I missing something?
Answer: I've always found pandas really helpful for tasks like this. You can simply
load the datasets into pandas data frames and then use the merge function to
merge them where the values in a column are the same.
import pandas
vms = pandas.read_csv('vms.csv')
users = pandas.read_csv('users.csv')
output = pandas.merge(vms, users)
output.to_csv('output.csv')
You can find the documentation for the different options at
<http://pandas.pydata.org/pandas-docs/stable/merging.html>
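By default `pandas.merge` joins on all columns the two frames have in common; if the shared key has a specific name you can be explicit (here `'id'` is an assumed column name, since the question mentions matching on id):

output = pandas.merge(vms, users, on='id')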
|
Jupyter / Ipython not displaying correctly in browser
Question: I have installed Anaconda python 3.4 distribution for windows 64. This was a
fresh install today of all components. I am super excited to start learning
python. However, when I run 'ipython notebook' the browser page has formatting
issues (see image in first link below). This occurs on firefox, chrome, and
IE. On IE, the 'compatibility view' icon pops up. This is what I have tried:
* updated conda and anaconda
* Installed jupyter ('conda update jupyter' couldn't find the package)
* refreshed browser with ctrl + F5
* Checked that chrome and firefox are up-to-date.
There are others who have reported a similar problem, but no solutions have
been given yet:
[Jupyter / Ipython Notebook Html Page
View](http://stackoverflow.com/questions/29020556/jupyter-ipython-notebook-
html-page-view)
[Jupyter webpages not displaying
properly](http://stackoverflow.com/questions/31899764/jupyter-webpages-not-
displaying-properly)
Seems like it should be a simple fix, but I haven't figured it out after hours
of messing around. Any help would be greatly appreciated!!!
UPDATE #1 Following Bubbafat advice, I opened the page using incognito, opened
the debugging console, and refreshed the page (ctrl + F5). There were errors
with the Stylesheet:
Resource interpreted as Stylesheet but transferred with MIME type application/x-css: "http://localhost:8888/static/components/jquery-ui/themes/smoothness/jquery-ui.min.css?v=9b2c8d3489227115310662a343fce11c".
Resource interpreted as Stylesheet but transferred with MIME type application/x-css: "http://localhost:8888/static/style/style.min.css?v=b2822da270f572199d71df9279c2c9e8".
Resource interpreted as Stylesheet but transferred with MIME type application/x-css: "http://localhost:8888/custom/custom.css".
If anyone know how to fix this for Windows 7, I would greatly appreciate the
advice. The link in Bubbafat's comment is for linux and I don't know how to
translate to windows. In addition, if there is an older version of anaconda
that is known to work well with Windows 7 (perhaps before jupyter was rolled
out) I could also downgrade. Any advice is appreciated.
UPDATE #2 Looking around on the web, I think my SOPHOS anti-virsus software
might be the issue. Unfortunately, it is a company computer, so I'll need IT
to turn it off.
Answer: I did have a very similar issue on Windows recently. Some program has
overwritten your mime type associations. For me it was Inkscape killing SVG by
setting the mime type to application/svg.
Try this in a Python shell:
import mimetypes
mimetypes.guess_type('file.css')
You should get text/css. If you get application/css this is most likely your
problem. My solution: Change back the mime type association. Start regedit and
search for application/css in HKEY_CLASSES_ROOT. Replace it with text/css.
Please refer to this <https://github.com/ipython/ipython/issues/7024> for
further reference.
|
read numbers from a text file in python
Question: I am trying to read a column of numbers from a text file that looks like this:
some text and numbers..., then:
q-pt= 1 0.000000 0.000000 0.000000 1.0000000000
1 -0.066408 0.0000000
2 -0.053094 0.0000000
3 -0.037643 0.0000000
...
156 3107.735577 6.8945617
...more text file
I am interested in reading the second column, the one that contains -0.066408,
-0.053094 and so on.
The code I have tried to write is somehow not doing the job, without giving any
error. I have tried this:
import re
import sys
from string import atof
from math import exp
from numpy import *
file1 = open('castepfreq.dat', 'w')
with open('xd_geo_Efield.phonon') as file:
File = file.readlines()
p1 = re.compile("q-pt= 1 0.000000 0.000000 0.000000 1.0000000000")
for i in range(len(File)):
m1 = p1.search(File[i])
if m1:
read = int(float(File[i+1][10:23]))
freq = (read)
print >> file1, freq
file1.close()
If anyone can help me with this, it will be great.
Answer: You can split each line on whitespace and then extract the second element:
with open('xd_geo_Efield.phonon') as f:
col = [line.split()[1] for line in f]
print(col)
If your input is:
q-pt= 1 0.000000 0.000000 0.000000 1.0000000000
1 -0.066408 0.0000000
2 -0.053094 0.0000000
3 -0.037643 0.0000000
Output will be:
['1', '-0.066408', '-0.053094', '-0.037643']
Or using itertools and transposing:
from itertools import izip, islice, imap
with open('xd_geo_Efield.phonon') as f:
col = islice(izip(*imap(str.split,f)), 1,2)
print(list(col))
If you want to cast, cast the value to float:
[float(line.split()[1]) for line in f]
Also, if you want to skip the header line (ignoring the leading `1`), call `next(f)` on the file
object before you use the rest of the code, i.e.:
with open('xd_geo_Efield.phonon') as f:
next(f)
col = [float(line.split()[1]) for line in f]
print(list(col))
Which would output:
[-0.066408, -0.053094, -0.037643]
If you have data you want to ignore and only start at the line `q-pt=..`, you
can use itertools.dropwhile to ignore the lines at the start:
from itertools import dropwhile
with open('xd_geo_Efield.phonon') as f:
col = [float(line.split()[1]) for line in dropwhile(
lambda x: not x.startswith("q-pt="), f)]
print(list(col))
If you want to also ignore that line, you can call next again but this time on
the dropwhile object:
from itertools import dropwhile
with open('xd_geo_Efield.phonon') as f:
dp = dropwhile(lambda x: not x.startswith("q-pt="), f)
next(dp)
col = [float(line.split()[1]) for line in dp]
print(list(col))
So for the input:
some 1 1 1 1 1
meta 2 2 2 2 2
data 3 3 3 3 3
and 4 4 4 4 4
numbers 5 5 5 5 5
q-pt= 1 0.000000 0.000000 0.000000 1.0000000000
1 -0.066408 0.0000000
2 -0.053094 0.0000000
3 -0.037643 0.0000000
3 -0.037643 0.0000000
The output will be:
[-0.066408, -0.053094, -0.037643, -0.037643]
For leading spaces, `lstrip` it off:
from itertools import dropwhile, imap, takewhile
with open('xd_geo_Efield.phonon') as f:
# for python3 just use map
dp = dropwhile(lambda x: not x.startswith("q-pt="), imap(str.lstrip,f))
next(dp)
col = [float(line.split(None,2)[1]) for line in takewhile(lambda x: x.strip() != "", dp)]
print(list(col))
`takewhile` will keep taking lines until we hit the first empty line at the
end of the file.
|
How to import a .profile into ipython's bash shell?
Question: I like to run bash commands in ipython via `!`; however, the default path in the
ipython shell (e.g. the output from !$PATH) doesn't match up with `$PATH` from the
system command line.
I've already tried
! . ~/.profile
but I get an error. Here is my output (from ipython notebook after running the
above command):
////////////////////////////////////////////////////////////
//
// SHELL: /bin/bash
// /ifhom/myusername/.profile integrated.
//
////////////////////////////////////////////////////////////
/bin/sh: line 1: /ifhom/myusername/.kshout: No such file or directory
Answer: I can't believe it's this easy:
just run:
ipython --profile ~/.profile
|
secant line too long, trying not to use y limits
Question: I'm currently trying to use python to teach myself the basics of calc, so
please bear with me as I'm pretty much a beginner.
I am using matplotlib pyplot to trace a function curve and then draw a secant
line from two points I specify (p1 & p2).
I think I have most of the math working ok, but if I don't use limits
(Y.min, Y.max of the curve) on my Y axis then the secant line can sometimes get
really long and "crush" my curve.
I'm just not sure what would be the best mathematical approach to limiting the
line so it doesn't overshoot the curve's max or min y values:
No Limits Set: [](http://i.stack.imgur.com/55oZb.png)
Limits Set to (Y.min,Y.max) of curve: [](http://i.stack.imgur.com/OuIno.png)
Heres my code:
import matplotlib.pyplot as plt
import numpy as np
ls = np.linspace(-2,2,100)
p1 = 2 #starts from zero
p2 = 50 #starts from zero
def draw_func(ls):
x = ls
y = x**2
#y = (x**4-2*x**2+x)
plt.plot(x,y)
return y
x = ls
y = draw_func(ls)
x1 = x[p1]
x2 = x[p2]
y1 = y[p1]
y2 = y[p2]
m = (y2-y1) / (x2-x1)
delta_x = x2 - x1
delta_x = x1 + (delta_x/2)
b = y2 - y1
b = y1 + (b/2)
def draw_line(ls, m, b):
x = ls
y = (m * (x-delta_x)) + b
plt.plot(x,y)
draw_line(ls, m, b)
plt.ylim(y.min(),y.max()) #This Is What I Was Hoping To Avoid
plt.show()
Thanks!!
Answer: You can compute the coordinates of the curve's apex. Draw the line from the
start to that x value, instead of continuing through the entire range.
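One way to implement that without hard-coding axis limits (a sketch of my own, reusing the arrays from the question's code) is to mask out the x positions where the secant leaves the curve's y-range:

line_y = m * (ls - delta_x) + b
# keep only the x positions where the secant's y stays inside the curve's y-range
mask = (line_y >= y.min()) & (line_y <= y.max())
plt.plot(ls[mask], line_y[mask])
plt.show()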
|
Summation of only consecutive values in a python array
Question: I am new to Python (and even to programming!), so I will try to be as clear as I
can in explaining my question. It may be easy for you guys, but I have not
found a satisfactory solution for this yet.
Here is the problem:
I have an array with both negative and positive values, say:
x = numpy.array([1, 4, 2, 3, -1, -6, -6, 5, 6, 7, 3, 1, -5, 4, 9, -5, -2, -1, -4])
I would like to sum ONLY the **negative** values that are **continuous** ,
i.e. only _sum_(-1, -6, -6), _sum_(-5, -2, -1, -4) and so on. I have tried
using _numpy.where_ , as well as _numpy.split_ based on the condition.
For example:
for i in range(len(x)):
if x[i] < 0.:
y[i] = sum(x[i])
However, as you can expect, I just got the summation of all negative values in
the array instead, in this case **sum**(-1, -6, -6, -5, -5, -2, -1, -4). Could
you guys share with me an aesthetic and efficient way to solve this problem? I
will appreciate any response.
Thank you very much
Answer: You can use the [`itertools`](https://docs.python.org/3/library/itertools.html)
module. With `groupby` you can group your items based on their sign: the `key`
function checks whether each item is negative, so consecutive negative numbers
end up in one group. For a negative group, yield the sum; otherwise yield the
group unchanged. At last you can use the `chain.from_iterable` function to
chain the results:
>>> from itertools import groupby, chain
>>> def summ_neg(li):
... for k,g in groupby(li,key=lambda i:i<0) :
... if k:
... yield [sum(g)]
... yield g
...
>>> list(chain.from_iterable(summ_neg(x)))
[1, 4, 2, 3, -13, 5, 6, 7, 3, 1, -5, 4, 9, -12]
Or as a more pythonic way use a list comprehension :
list(chain.from_iterable([[sum(g)] if k else list(g) for k,g in groupby(x,key=lambda i:i<0)]))
[1, 4, 2, 3, -13, 5, 6, 7, 3, 1, -5, 4, 9, -12]
|
How do I do an animation that works with a set timer in python?
Question: I am trying to do some animation that works with the timer. When the time
finishes, the animation should finish at the same time. I was thinking of
doing something like a battery bar on a cell phone. That's why I have a green
rectangle in another window, but I don't know how to make the green turn black
or the rectangle empty slowly with the timer. If you have another suggestion
for the animation, feel free to share it. Thanks
* * *
from tkinter import *
import math
import time
aux=False
segundo=60
Ventana1 = Tk()
Ventana1.title("Timer")
Ventana1.geometry("500x350+100+100")
def paso():
global aux
global segundo
if aux:
segundo -= 1
tiempo["text"] = segundo
tiempo.after(1000, paso)
if segundo==0:
aux=False
tiempo.configure(text=segundo, fg="blue")
if segundo<21:
tiempo.configure(text=segundo, fg="red")
def inicio ():
global aux
global segundo
segundo=segundo
if aux:
pass
else:
aux=True
paso()
tiempo.configure(text=segundo, fg="blue")
def pausa ():
global aux, segundo
aux=False
tiempo.configure(text=segundo, fg="blue")
def reset ():
global aux
global segundo
segundo=int(Entry.get(Segunditos)) + 60*int(Entry.get(Minuticos))
aux=False
tiempo["text"] = segundo
tiempo.configure(text=segundo, fg="blue")
tiempo = Label(Ventana1, text=segundo, font=("calibri", 200))
tiempo.pack()
Button(Ventana1, text="Arranquelo", command= inicio).place(x=120, y=220)#Button start
Button(Ventana1, text= "Parelo", command= pausa).place(x=200, y=220)#button pause
Button(Ventana1, text= "Acabelo", command= reset).place(x=300, y=220)#button reset
MinAviso=Label(Ventana1, text="Minutos").place(x=125, y=260)
SegAviso=Label(Ventana1, text="Segundos").place(x=305, y=260)
Minuticos=Entry(Ventana1)
Minuticos.place(x=70, y=290)
Segunditos=Entry(Ventana1)
Segunditos.place(x=250, y=290)
from tkinter import *
Ventana2 = Tk()
Canvas = Canvas(Ventana2, width=2000, height=1500)
Canvas.pack()
Canvas.create_rectangle(200, 200, 1000, 300, fill="green")
mainloop()
#Ventana1.mainloop()
Answer: The biggest problem is that you are creating two instances of `Tk`. Tkinter
isn't designed to work that way. If you want a floating window, create
instances of `Toplevel`.
Next, you need to save a reference to the rectangle you are drawing on the
canvas.
Canvas = Canvas(Ventana2, ...)
rect = Canvas.create_rectangle(...)
As your timer runs, you can modify the object using that reference.
def paso():
...
if aux:
...
Canvas.itemconfigure(rect, ...)
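A minimal sketch of such a modification using the names from the question (`total` is hypothetical and would hold the starting number of seconds; `rect` is the saved reference from above):

total = segundo  # hypothetical: remember the starting seconds once, before paso() runs

def paso():
    global aux, segundo
    if aux:
        segundo -= 1
        tiempo["text"] = segundo
        # shrink the green bar (x from 200 to 1000) in proportion to the time left
        Canvas.coords(rect, 200, 200, 200 + 800 * segundo / float(total), 300)
        tiempo.after(1000, paso)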
|
How can I write a fits table into an output LDAC fits catalog using Python
Question: I have an [LDAC](http://marvinweb.astro.uni-
bonn.de/data_products/THELIWWW/LDAC/LDAC_concepts.html) fits catalog which in
a Python code I need to add the elements of two arrays as two new columns to
it.
I open the original catalog in python:
from astropy.io import fits
from astropy.table import Table
import astromatic_wrapper as aw
cat1='catalog.cat'
hdulist1 =fits.open(cat1)
data1=hdulist1[1].data
The two arrays are ready and called **ra** and **dec**. I give them the key
name, format and other needed info and invert them to columns. Finally, I join
the two new columns to the original table (Checking **newtab.columns** and
**newtab.data** shows that the new columns are attached successfully).
racol=fits.Column(name = 'ALPHA_J2000', format = '1D', unit = 'deg', disp = 'F11.7',array=ra)
deccol=fits.Column(name = 'DELTA_J2000', format = '1D', unit = 'deg', disp = 'F11.7',array=dec)
cols = fits.ColDefs([racol, deccol])
tbhdu = fits.BinTableHDU.from_columns(cols)
orig_cols= data1.columns
newtab = fits.BinTableHDU.from_columns(cols + orig_cols)
When I save the new table into a new catalog:
newtab.writeto('newcatalog.cat')
it is not in the format that I need. If I look into the description of each
catalog with
ldacdes -i
I see for _catalog.cat_ :
> Reading catalog(s)
------------------Catalog information----------------
Filename:..............catalog.cat
Number of segments:....3
****** Table #1
Extension type:.........(Primary HDU)
Extension name:.........
****** Table #2
Extension type:.........BINTABLE
Extension name:.........OBJECTS
Number of dimensions:...2
Number of elements:.....24960
Number of data fields...23
Body size:..............4442880 bytes
****** Table #3
Extension type:.........BINTABLE
Extension name:.........FIELDS
Number of dimensions:...2
Number of elements:.....1
Number of data fields...4
Body size:..............28 bytes
> All done
and for the new one:
> Reading catalog(s)
------------------Catalog information----------------
Filename:..............newcatalog.cat
Number of segments:....2
****** Table #1
Extension type:.........(Primary HDU)
Extension name:.........
****** Table #2
Extension type:.........BINTABLE
Extension name:.........
Number of dimensions:...2
Number of elements:.....24960
Number of data fields...25
Body size:..............4842240 bytes
> All done
As seen above, in the original catalog _catalog.cat_ there are three tables
and I tried to add two columns to the OBJECTS table.
I need that _newcatalog.cat_ also keeps the same structure which is required
by other programs, but it does not have the OBJECTS table and considering the
"Number of elements" and the "Number of data fields" the newtab is saved into
the Table #2.
Is there any solution for controlling the output fits catalog format?
Thank you for your help; I hope I have structured my very first
question on Stack Overflow properly.
Answer: I don't know specifically about the LDAC format, but from your example file
_catalog.cat_, it appears to be a _multi-extension_ FITS file. That is, each
table is stored in a separate HDU (as is typical for any file containing
multiple tables with different sets of columns).
When you do something like
newtab = fits.BinTableHDU.from_columns(cols + orig_cols)
newtab.writeto('newcatalog.cat')
You're just creating a single new binary table HDU and writing that HDU to a
file by itself (along with the mandatory primary HDU). What you really want is
to take the same HDU structure as the original file and replace the existing
table HDU with the one to which you added new columns.
Creating multi-extension FITS is discussed some
[here](http://docs.astropy.org/en/stable/io/fits/appendix/faq.html#how-do-i-
create-a-multi-extension-fits-file-from-scratch), but you don't even need to
recreate the full HDU structure from scratch. The
[`HDUList`](http://docs.astropy.org/en/stable/io/fits/api/hdulists.html#astropy.io.fits.HDUList)
object returned from `fits.open` is just a list of HDUs that can be manipulated
like a normal Python list (with some extensions, for example, to support
indexing by EXTNAME) and written out to a file:
hdulist = fits.open(cat1)
hdulist['OBJECTS'] = newtab
hdulist.writeto('newcatalog.cat')
|
Code to do a "bit iterator" but with 3 states
Question: I am wanting to do a counter that iterates over 3 states. I know how to do
this for 2 states using the bit operator `i^=1`.
I want to know if there is a way to do similiar but with three states?
I realize I can just do:
i = 0
while True:
    if i == 3:
        i = 0
    # do stuff here
    i += 1
But I was hoping for something more elegant and efficient. I was thinking of a
modulus trick or something.
(Right now I am using python)
Answer: Could
[itertools](https://docs.python.org/2/library/itertools.html#itertools.cycle
"itertools") be useful for you?
import itertools
states = itertools.cycle([0, 1, 2])
while True:
    i = next(states)  # states.next() also works, but only on Python 2
    <do stuff>
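If you prefer the modulus trick hinted at in the question, a minimal sketch without any import:

i = 0
while True:
    # do stuff with i here
    i = (i + 1) % 3  # cycles 0, 1, 2, 0, 1, 2, ...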
|
Cassandra json2sstable and sstableloader reporting positive results, but no data change happening
Question: I am fairly new to Cassandra - within the month, having come from a long SQL
Server background. I have been tasked with stubbing out some Python to
automate bulk loading of sstables. Enter sstableloader. Everything I have
installed so far is for testing. I have 1 virtual machine set up with
Cassandra installed on a single-node cluster. This required a bit of setup and
a loopback ipaddress. So I have 127.0.0.1 and 127.0.0.2, seed set up at
127.0.0.1. I successfully got Cassandra up and running, and can access it via
simple connection strings in Python from other boxes - so most of my
requirements are met. Where I am running into problems is loading data in via
anything but cql. I can use insert statements to get data in all day -- what I
need to successfully do is run json2sstable and sstableloader (separately at
this point) successfully. The kicker is it reports back that everything is
fine... and my data never shows up in either case. The following is my way to
recreate the issue.
Keyspace, column family and folder: sampledb_adl, emp_new_9
/var/lib/cassandra/data/emp_new_9
Table created at cqlsh prompt: CREATE TABLE emp_new_9 (pkreq uuid, empid int, deptid int, first_name text, last_name text, PRIMARY KEY ((pkreq))) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.100000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.000000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
Initial data entered into table via cqlsh: INSERT INTO emp_new_9
(pkreq,empid,deptid,first_name,last_name) VALUES
(uuid(),30001,235,'yogi','bear');
Results of 'select * from emp_new_9':
 pkreq                                | deptid | empid | first_name | last_name
--------------------------------------+--------+-------+------------+-----------
 9c6dd9de-f6b1-4312-9737-e9d00b8187f3 |    235 | 30001 |       yogi |      bear
Initiated nodetool flush
Contents of emp_new_9 folder at this point:
sampledb_adl-emp_new_9-jb-1-CompressionInfo.db sampledb_adl-emp_new_9-jb-1-Index.db sampledb_adl-emp_new_9-jb-1-TOC.txt
sampledb_adl-emp_new_9-jb-1-Data.db sampledb_adl-emp_new_9-jb-1-Statistics.db
sampledb_adl-emp_new_9-jb-1-Filter.db sampledb_adl-emp_new_9-jb-1-Summary.db
Current results of: [root@localhost emp_new_9]# sstable2json
/var/lib/cassandra/data/sampledb_adl/emp_new_9/sampledb_adl-
emp_new_9-jb-1-Data.db
[
{"key": "9c6dd9def6b143129737e9d00b8187f3","columns": [["","",1443108919841000], ["deptid","235",1443108919841000], ["empid","30001",1443108919841000], ["first_name","yogi",1443108919841000], ["last_name","bear",1443108919841000]]}
]
Now to create emp_new_10 with different data:
Keyspace, column family and folder: sampledb_adl, emp_new_10
/var/lib/cassandra/data/emp_new_10
Table created at cqlsh prompt: CREATE TABLE emp_new_10 (pkreq uuid, empid int, deptid int, first_name text, last_name text, PRIMARY KEY ((pkreq))) WITH
bloom_filter_fp_chance=0.010000 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.100000 AND
gc_grace_seconds=864000 AND
index_interval=128 AND
read_repair_chance=0.000000 AND
replicate_on_write='true' AND
populate_io_cache_on_flush='false' AND
default_time_to_live=0 AND
speculative_retry='99.0PERCENTILE' AND
memtable_flush_period_in_ms=0 AND
compaction={'class': 'SizeTieredCompactionStrategy'} AND
compression={'sstable_compression': 'LZ4Compressor'};
Initial data entered into table via cqlsh: INSERT INTO emp_new_10
(pkreq,empid,deptid,first_name,last_name) VALUES
(uuid(),30101,298,'scoobie','doo');
Results of 'select * from emp_new_10':
 pkreq                                | deptid | empid | first_name | last_name
--------------------------------------+--------+-------+------------+-----------
 c0e1763d-8b2b-4593-9daf-af3596ed08be |    298 | 30101 |    scoobie |       doo
Initiated nodetool flush
Contents of emp_new_10 folder at this point:
sampledb_adl-emp_new_10-jb-1-CompressionInfo.db sampledb_adl-emp_new_10-jb-1-Index.db sampledb_adl-emp_new_10-jb-1-TOC.txt
sampledb_adl-emp_new_10-jb-1-Data.db sampledb_adl-emp_new_10-jb-1-Statistics.db
sampledb_adl-emp_new_10-jb-1-Filter.db sampledb_adl-emp_new_10-jb-1-Summary.db
Current results of: [root@localhost emp_new_10]# sstable2json
/var/lib/cassandra/data/sampledb_adl/emp_new_10/sampledb_adl-
emp_new_10-jb-1-Data.db
[
{"key": "c0e1763d8b2b45939dafaf3596ed08be","columns": [["","",1443109509458000], ["deptid","298",1443109509458000], ["empid","30101",1443109509458000], ["first_name","scoobie",1443109509458000], ["last_name","doo",1443109509458000]]}
]
So, yogi 9, scoobie 10.
Now I am going to try first to use json2sstable with the file from emp_new_10
which I named (original, I know): emp_new_10.json
json2sstable -K sampledb_adl -c emp_new_9 /home/tdmcoe_admin/Desktop/emp_new_10.json /var/lib/cassandra/data/sampledb_adl/emp_new_10/sampledb_adl-emp_new_10-jb-1-Data.db
Results printed to terminal window:
ERROR 08:56:48,581 Unable to initialize MemoryMeter (jamm not specified as javaagent). This means Cassandra will be unable to measure object sizes accurately and may consequently OOM.
Importing 1 keys...
1 keys imported successfully.
I get the MemoryMeter error all the time and ignore it, as googling said it doesn't
affect results.
SO, my folder contents have not changed, 'select * from emp_new_9;' still
gives the same single original record result. emp_new_10 has not changed,
either. What the heck happened to my '1 keys imported successfully'?
Successfully where?
Now for the related sstableloader. Same base folders/data, but now running
sstableloader:
[root@localhost emp_new_10]# sstableloader -d 127.0.0.1 /var/lib/cassandra/data/sampledb_adl/emp_new_9
NOTE: I ALSO RAN THE LINE ABOVE WITH 127.0.0.2, and with 127.0.0.1,127.0.0.2
just in case, but same results.
Results printed to terminal window:
ERROR 09:05:07,686 Unable to initialize MemoryMeter (jamm not specified as javaagent). This means Cassandra will be unable to measure object sizes accurately and may consequently OOM.
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of /var/lib/cassandra/data/sampledb_adl/emp_new_9/sampledb_adl-emp_new_9-jb-1-Data.db to [/<my machine ip>]
Streaming session ID: 06a9c1a0-62d6-11e5-b85d-597b365ae56f
progress: [/<my machine ip> 1/1 (100%)] [total: 100% - 0MB/s (avg: 0MB/s)]
So - 100% - yay! 0MB/s boo!
Now for the contents of the emp_new_9 folder, which I have not touched; it now has a
second set of files:
sampledb_adl-emp_new_9-jb-1-CompressionInfo.db sampledb_adl-emp_new_9-jb-1-TOC.txt sampledb_adl-emp_new_9-jb-2-Statistics.db
sampledb_adl-emp_new_9-jb-1-Data.db sampledb_adl-emp_new_9-jb-2-CompressionInfo.db sampledb_adl-emp_new_9-jb-2-Summary.db
sampledb_adl-emp_new_9-jb-1-Filter.db sampledb_adl-emp_new_9-jb-2-Data.db sampledb_adl-emp_new_9-jb-2-TOC.txt
sampledb_adl-emp_new_9-jb-1-Index.db sampledb_adl-emp_new_9-jb-2-Filter.db
sampledb_adl-emp_new_9-jb-1-Statistics.db sampledb_adl-emp_new_9-jb-2-Index.db
Results of 'select * from emp_new_9;' have not changed; using sstable2json on
BOTH of the data files also shows just the 1 old yogi entry. When I run
nodetool compact it goes back down to 1 set of files with only the 1 yogi
line. So what 100% happened?!? 100% of what?
Any help is appreciated. I am very confused.
Answer: When using json2sstable, you should specify the name of a new, non-existent .db
file. As designed, SSTables are immutable, so json2sstable will not allow them to be
updated in place.
For whatever reason, the tool doesn't complain about an existing SSTable. If
you specify a new .db file, you will find that the SSTable files will be
created with what you expect.
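For example, reusing the paths from the question (the target filename here is hypothetical; the only requirement is that the .db file does not already exist):

json2sstable -K sampledb_adl -c emp_new_9 /home/tdmcoe_admin/Desktop/emp_new_10.json /var/lib/cassandra/data/sampledb_adl/emp_new_9/sampledb_adl-emp_new_9-jb-2-Data.db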
|
why does matplotlib give the error [<matplotlib.lines.Line2D object at 0x0392A9D0>]?
Question: I am using python 2.7.9 on win8. When I tried to plot using matplotlib, the
following error showed up:
> from pylab import *
> plot([1,2,3,4])
>
> **[matplotlib.lines.Line2D object at 0x0392A9D0]**
I tried the test code "python simple_plot.py --verbose-helpful", and the
following warning showed up:
> $HOME=C:\Users\XX matplotlib data path C:\Python27\lib\site-
> packages\matplotlib\mpl-data
>
> * * *
>
> You have the following UNSUPPORTED LaTeX preamble customizations:
>
> Please do not ask for support with these customizations active.
>
> * * *
>
> loaded rc file C:\Python27\lib\site-packages\matplotlib\mpl-
> data\matplotlibrc matplotlib version 1.4.3 verbose.level helpful interactive
> is False platform is win32 CACHEDIR=C:\Users\XX.matplotlib Using fontManager
> instance from C:\Users\XX.matplotlib\fontList.cache backend TkAgg version
> 8.5 findfont: Matching :family=sans-
> serif:style=normal:variant=normal:weight=normal:stretch=normal:size=medium
> to Bitstream Vera Sans (u'C:\Python27\lib\site-packages\matplotlib\mpl-
> data\fonts\ttf\Vera.ttf') with score of 0.000000
What does this mean? How could I get matplotlib working? Thank you very much!
Answer: That isn't an error. That has created a plot object but you need to show the
window. That's done using
[`pyplot.show()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.show)...
so you seriously just have to do...
show()
If you don't believe me, here's a trace from IPython:
In [9]: from pylab import *
In [10]: plot([1,2,3,4])
Out[10]: [<matplotlib.lines.Line2D at 0x123245290>]
In [11]: show()
We get:
[](http://i.stack.imgur.com/snnS0.png)
* * *
As mentioned in the comments, you should avoid using `pylab`. You should use
`matplotlib.pyplot` instead.... so:
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.show()
|
Clean way to read a null-terminated (C-style) string from a file?
Question: I'm looking for a clean and simple way to read a null-terminated C string from
a file or file-like object in Python. In a way that doesn't consume more input
from the file than it needs, or pushes it back onto whatever file/buffer it
works with such that other code can read the data immediately after a null-
terminated string.
I've seen [a bit of rather ugly
code](http://bytes.com/topic/python/answers/41987-canonical-way-dealing-null-
separated-lines) to do it, but not much that I'd like to use.
[universal newlines
support](https://docs.python.org/2/library/functions.html#open) only works for
`open()`ed files, not StringIO objects etc, and doesn't look like it handles
unconventional newlines. Also, if it did work it'd result in strings with `\n`
appended, which is undesirable.
[struct](https://docs.python.org/2/library/struct.html) [doesn't look like it
supports reading arbitrary-length C strings at
all](http://stackoverflow.com/q/26060110/398670), requiring a length as part
of the format.
[ctypes](https://docs.python.org/2/library/ctypes.html) has `c_buffer`, which
can be constructed from a byte string and will return the first null
terminated string as its `value`. Again, this requires determining how much
must be read in advance, and it doesn't differentiate between null-terminated
and unterminated strings. The same is true of `c_char_p`. So it doesn't seem
to help much, since you already have to know you've read enough of the string
and have to handle buffer splits.
The usual way to do this in C is read chunks into a buffer, copying and
resizing the buffer if needed, then check if the newest chunk read contains a
null byte. If it does, return everything up to the null byte and either
realign the buffer or if you're being fancy, keep on reading and use it as a
ring buffer. (This only works if you can hand the excess data read back to the
caller, or if your platform's `ungetc` lets you push a lot back onto the file,
of course.)
Is it necessary to spell out similar code in Python? I was surprised not to
find anything canned in `io`, `ctypes` or `struct`.
file objects don't seem to have a way to push back onto their buffer, like
`ungetc`, and neither do buffered I/O streams in the `io` module.
I feel like I must be missing the obvious here. I'd really rather avoid byte-
by-byte reading:
def readcstr(f):
buf = bytearray()
while True:
b = f.read(1)
if b is None or b == '\0':
return str(buf)
else:
buf.append(b)
but right now that's what I'm doing.
Answer: Incredibly mild improvement on what you have (mostly in that it uses more
built-ins that, in CPython, are implemented in C, which usually runs faster):
import functools
import itertools
def readcstr(f):
toeof = iter(functools.partial(f.read, 1), '')
return ''.join(itertools.takewhile('\0'.__ne__, toeof))
This is relatively ugly (and sensitive to the type of the file object; it
won't work with file objects that return `unicode`), but pushes all the work
to the C layer. The two arg iter ensures you stop if the file is exhausted,
while `itertools.takewhile` looks for (and consumes) the `NUL` terminator but
no more; `''.join` then combines the bytes read into a single return value.
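As a side note (a sketch of my own, not part of the original answer): for files opened in binary mode on Python 3, the same idea works with bytes:

def readcstr_bytes(f):
    # f must be opened in 'rb' mode; stops at EOF or at the first NUL byte
    toeof = iter(functools.partial(f.read, 1), b'')
    return b''.join(itertools.takewhile(b'\0'.__ne__, toeof))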
|
python-3.x pickling creates empty file
Question: I'm new to python, trying to store/retrieve some complex data structures into
files, and am experimenting with pickling. The below example, however, keeps
creating a blank file (nothing is stored there), and I run into an error in
the second step. I've been googling around, only to find other examples which
exactly match mine - yet, it does not appear to be working. What may I
be missing? Thanks in advance!
import pickle
messageToSend = ["Pickle", "this!"]
print("before: \n",messageToSend)
f = open("pickletest.pickle","wb")
pickle.dump(messageToSend,f)
f.close
g = open("pickletest.pickle","rb")
messageReceived = pickle.load(g)
print("after: \n",messageReceived)
g.close
Answer: You are not closing the files. Note that you wrote `f.close` instead of `f.close()`.
The proper way to handle files in python is:
with open("pickletest.pickle", "wb") as f:
pickle.dump(messageToSend, f)
So it will close the file automatically when the `with` block ends even if
there was an error during processing.
The other answer given will work only in some Python implementations because
it relies on the garbage collector closing the file. This is quite unreliable
and error prone. Always use `with` when handling anything that requires to be
closed.
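Applied to the example from the question, a sketch of the full round-trip with both files handled by `with`:

import pickle

messageToSend = ["Pickle", "this!"]
with open("pickletest.pickle", "wb") as f:
    pickle.dump(messageToSend, f)   # closed automatically when the block ends

with open("pickletest.pickle", "rb") as g:
    messageReceived = pickle.load(g)
print(messageReceived)              # ['Pickle', 'this!']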
|
Submitting offline forms using python
Question: I am trying to extract information from my college website using python. Here
is the link of the website.
<http://studzone.psgtech.edu/CommonPage.aspx>
I have the exam results page locally saved. I want to know how to submit data
using the local file and get the resulting page using python. I've looked into
the urllib2, requests, and mechanize frameworks, but haven't found any useful
information on submitting data from a local HTML file. Thanks in advance.
Side note: the reason I saved it locally is that the website uses a specific token
for each detail such as Time-Table, Results, etc., so requests need to be sent
from that specific page. That is, a student can access
only one detail at a time and there is no separate page for it.
Edit:
I've used the following code to obtain the web page and store it in an object. Is
there any way to post the form in the webpage stored in the page variable? I
can edit a field such that value="something".
import urllib2
import requests
page = urllib2.urlopen("http://google.com/").read()
c = requests.post(page, data) //is something like this possible?
Answer: I have 3 files. `login.php`, `auth.php`, `index.php`. Direct access to
`index.php` is disabled. If user is not authenticated, request for `index.php`
is redirected to `login.php`.
>>> url = 'http://127.0.0.1/adminpanel/admin/index.php'
>>> import requests
>>> r = requests.get(url)
>>> print r.text
<!DOCTYPE html>
<html>
<head>
<title> Login </title>
</head>
<body>
<form method="POST" action="auth.php">
<input type="text" name="username">
<input type="password" name="password">
<input type="submit" name="submit" value="submit">
</form>
</body>
</html>
>>> print r.url
http://127.0.0.1/adminpanel/admin/login.php
There is no authentication logic in login.php; it is just a form. The
authentication is handled in auth.php. Let's send a POST request with
some data to that URL.
>>> auth_url = "http://127.0.0.1/adminpanel/admin/auth.php"
>>> params = {'username':'superadmin', 'password':'geometry123', 'submit':'submit'}
>>> r = requests.post(auth_url, data=params)
>>> print r.text[:200]
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>
Admin Panel
</title>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="style
>>> print r.url
http://127.0.0.1/adminpanel/admin/index.php
So, you have to determine the names of the form inputs to send a POST request. If
authentication is successful, you'll be redirected to the protected page.
Then, with other libraries (`beautifulsoup`, `urllib3`), you can extract the
data you want.
|
error while running any python-dependent commands/programs in terminal
Question: I recently set up Arch on my machine and installed Python. `/usr/bin/python` was
symlinked to `/usr/bin/python3` which itself is a symlink to
`/usr/bin/python3.4`.
Because I use python2.7, I went ahead and linked `python` to `python2.7`.
Now when I run any python-dependent program, I get the following error.
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3084, in <module>
@_call_aside
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3070, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3097, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 651, in _build_master
ws.require(__requires__)
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 952, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python2.7/site-packages/pkg_resources/__init__.py", line 839, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'pip==7.1.2' distribution was not found and is required by the application
I wish to know what's gone wrong.
Answer: The `pip` script in `/usr/bin` is tied to Python 3.4. The small script is just
a bootstrapping script to load the actual code from a module. That module is
missing in Python 2.7 because you did not install `pip` for it.
Either fix the script to replace `/usr/bin/python` in the first line with
`/usr/bin/python3`, or [install `pip` for Python
2.7](https://pip.pypa.io/en/stable/installing/).
Alternatively, only link `/usr/bin/python2` to Python 2.7 and leave
`/usr/bin/python` to point to Python 3. It is quite likely other Arch programs
rely on that being Python 3, anyway. Also see ["Proper way" to manage multiple
versions of Python on
archlinux](http://stackoverflow.com/questions/7297094/proper-way-to-manage-
multiple-versions-of-python-on-archlinux).
|
Python syntax. Beautiful, right, short code
Question:
for item in listOfModels:
if item[0] in perms:
perms[item[0]][item[1]] = True
else:
perms[item[0]] = {item[1]: True}
I often use code like this. Please tell me a beautiful, short, correct way to do
the same. (Libraries, books, samples, etc. are all welcome.)
E.g. I have
E.G. i have
[
['animal', 'rabbit'],
['animal', 'cow'],
['plant', 'tree'],
['animal', 'elephant'],
['fruit', 'strawberry'],
['fruit', 'apple'],
]
and need
{
'animal': ['rabbit', 'cow', 'elephant'],
'plant': ['tree'],
'fruit': ['strawberry', 'apple'],
}
OR
{
'animal': {
'rabbit': True,
'cow': True,
'elephant': True
},
'plant': {
'tree': True
},
'fruit': {
'strawberry': True,
'apple': True
},
Answer: Two options: use
[`dict.setdefault()`](https://docs.python.org/2/library/stdtypes.html#dict.setdefault)
or use a [`collections.defaultdict()`
object](https://docs.python.org/2/library/collections.html#collections.defaultdict).
Using `dict.setdefault()`:
for category, name in listOfModels:
perms.setdefault(category, {})[name] = True
or using a `defaultdict`:
from collections import defaultdict
perms = defaultdict(dict)
for category, name in listOfModels:
perms[category][name] = True
`dict.setdefault()` looks up the key for you, and if it is missing uses the
second argument to set the value. That way you _always_ get a dictionary back
(even an empty one), on which you can then set the `name` key.
A `defaultdict` takes a factory argument, and each time a key you try to
access is missing, the factory is called to produce a default value. So
accessing `perms['missing_key']` has the same effect as using
`perms.setdefault('missing_key', default)`; a new value is produced as
needed.
Either method is trivially adapted to produce lists or sets instead of a
dictionary with `True` values:
# producing a list
for category, name in listOfModels:
perms.setdefault(category, []).append(name)
# or a set
for category, name in listOfModels:
perms.setdefault(category, set()).add(name)
# same with defaultdict
perms = defaultdict(list)
for category, name in listOfModels:
perms[category].append(name)
perms = defaultdict(set)
for category, name in listOfModels:
perms[category].add(name)
Sets are probably the best option here, being the direct equivalent of the
dictionary with values set to `True`.
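For instance, applying the `defaultdict(set)` version to the sample data from the question:

from collections import defaultdict

listOfModels = [
    ['animal', 'rabbit'], ['animal', 'cow'], ['plant', 'tree'],
    ['animal', 'elephant'], ['fruit', 'strawberry'], ['fruit', 'apple'],
]
perms = defaultdict(set)
for category, name in listOfModels:
    perms[category].add(name)
# perms == {'animal': {'rabbit', 'cow', 'elephant'},
#           'plant': {'tree'}, 'fruit': {'strawberry', 'apple'}}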
|
Python Tkinter Progressbar indeterminate on Toplevel while function running
Question: I need a progressbar which should show that the program is still running
while a loop in a certain function is working, so all in all the issue is
simple.
I found some useful threads here but none helped me. I think I am missing a
detail.
Here is the function, which needs up to a minute to finish depending on
how many blogs are used:
def bildinhalt_execute():
tumblr_progress.start()
taglist = tagliste_area.get("1.0", "end-1c")
taglist = taglist.split(",")
tumblr_alt_wert = tumblr_alt_wert_area.get("1.0", END)
""" Resized das Bild proportional """
with open('tumblr_credentials.json', 'r') as daten:
data_for_login_tumblr_all = json.load(daten)
for blog in data_for_login_tumblr_all:
tumblr_zugangsdaten(data_for_login_tumblr_all[blog]["consumer_key"],data_for_login_tumblr_all[blog]["consumer_secret"],data_for_login_tumblr_all[blog]["oauth_token"],data_for_login_tumblr_all[blog]["oauth_token_secret"])
im = Image.open(pfad_tumblr_1)
basewidth = (im.size[0] - int(breitepx_area.get("1.0", END)))
wpercent = (basewidth / float(im.size[0]))
height = int((float(im.size[1]) * float(wpercent)))
im = im.resize((basewidth, height), PIL.Image.ANTIALIAS)
im.save(pfad_tumblr_1)
""" Postet das Bild """
pfad_tumblr_1_bild = pfad_tumblr_1
pfad_tumblr_1_bild = str(pfad_tumblr_1_bild)
tumblr_bild(blog, taglist, pfad_tumblr_1_bild, tumblr_alt_wert)
tumblr_progress.stop()
I start the progress at the beginning and stop it at the end. The progressbar
itself is created on the toplevel root:
tumblr_progress = ttk.Progressbar(tumblr_blog_root, orient='horizontal', mode='indeterminate')
tumblr_progress.place(x = 300, y = 615)
The function is executed when this Button is clicked, and that's the moment
where the progressbar should start to show progress:
wordpress_button_bild = Button(tumblr_blog_root, text = "Bild", width=7, bg = "powder blue", command=bildinhalt_execute)
wordpress_button_bild.place(x = 10, y = 10)
Am I on the right track? Or do I probably have to use multithreading for this?
I have never worked with multiple threads and it could be very hard, so if
multithreading is needed, a hint on where to start would be nice.
Thanks in advance!
Answer: Assuming you just want a moving progressbar, use the 'determinate' mode and
pass the interval to the start() function. The label and counter below are
just there to do something for a while. If you want the progressbar to show
something like percent completion, then you would use "after" to schedule a
function call to update the progressbar with percent done, similar to the
label update's use of "after" in the code below.
import Tkinter as tk
from ttk import Progressbar
class ProgressBar_Label(object):
def __init__(self, parent):
self.parent=parent
self.ctr=0
self.p = Progressbar(self.parent, orient=tk.HORIZONTAL, length=200,
mode='determinate')
self.p.grid()
self.p.start(75) ## 75 millisecond interval
self.label=tk.Label(self.parent, text="Start", bg="lightblue", width=10)
self.label.grid(row=1)
tk.Button(self.parent, text="Quit", bg="orange",
command=self.parent.quit).grid(row=10)
self.update_label()
def update_label(self):
self.ctr +=1
self.label["text"]=str(self.ctr)
if self.ctr < 100:
self.parent.after(100, self.update_label)
else:
self.p.destroy()
self.label["text"]="Finished"
parent=tk.Tk()
PL=ProgressBar_Label(parent)
parent.mainloop()
|
Python pyqt4 access QTextEdit from function
Question: I'm trying to write a notepad application; so far I have a GUI without
functionality. Every element of my GUI is in a separate function, which is then
called in the `__init__` method. For example, in the create_new_file(self) function I was
trying to get text with QTextEdit's .toPlainText() method, but how can I access
the field created in the text_edit_area(self) function?
import sys
from PyQt4 import QtGui, QtCore
class Editor(QtGui.QMainWindow):
def __init__(self):
super(Editor, self).__init__()
self.setGeometry(100, 100, 500, 500)
self.setWindowTitle('Text Editor')
self.setWindowIcon(QtGui.QIcon('editor.png'))
self.statusBar()
self.main_menu()
self.text_edit_area()
self.toolbar()
self.show()
def main_menu(self):
# CREATE MAIN MENU
menu = self.menuBar()
# MENU ACTIONS
file_exit_action = QtGui.QAction('&Exit', self)
file_exit_action.setShortcut('Ctrl+Q')
file_exit_action.setStatusTip('Close application')
file_exit_action.triggered.connect(self.close_application)
file_new_file_action = QtGui.QAction('&New File', self)
file_new_file_action.setShortcut('Ctrl+N')
file_new_file_action.setStatusTip('Create a new file')
file_new_file_action.triggered.connect(self.create_new_file)
file_open_file_action = QtGui.QAction('&Open File', self)
file_open_file_action.setShortcut('Ctrl+O')
file_open_file_action.setStatusTip('Open file')
file_open_file_action.triggered.connect(self.open_file)
file_save_file_action = QtGui.QAction('&Save File', self)
file_save_file_action.setShortcut('Ctrl+S')
file_save_file_action.setStatusTip('Save opened file')
file_save_file_action.triggered.connect(self.save_file)
edit_undo_action = QtGui.QAction('&Undo', self)
edit_undo_action.triggered.connect(self.undo)
format_change_font_action = QtGui.QAction('&Change Font', self)
format_change_font_action.triggered.connect(self.change_font)
view_maximize_action = QtGui.QAction('&Maximize', self)
view_maximize_action.triggered.connect(self.maximize)
help_about_action = QtGui.QAction('&About', self)
help_about_action.triggered.connect(self.about)
# FILE MENU
file_menu = menu.addMenu('&File')
file_menu.addAction(file_exit_action)
file_menu.addAction(file_new_file_action)
file_menu.addAction(file_open_file_action)
file_menu.addAction(file_save_file_action)
# EDIT MENU
edit_menu = menu.addMenu('&Edit')
edit_menu.addAction(edit_undo_action)
# FORMAT MENU
format_menu = menu.addMenu('&Format')
format_menu.addAction(format_change_font_action)
# VIEW MENU
view_menu = menu.addMenu('&View')
view_menu.addAction(view_maximize_action)
# HELP MENU
help_menu = menu.addMenu('&Help')
help_menu.addAction(help_about_action)
def toolbar(self):
# CREATE MAIN TOOLBAR
tool_bar = self.addToolBar('main toolbar')
# TOOLBAR ACTION
toolbar_new_file_action = QtGui.QAction(QtGui.QIcon('new_file.png'),
'&New File', self)
toolbar_new_file_action.triggered.connect(self.create_new_file)
toolbar_open_file_action = QtGui.QAction(QtGui.QIcon('open_file.png'),
'&Open File', self)
toolbar_open_file_action.triggered.connect(self.open_file)
# ADD TOOLBAR ACTIONS
tool_bar.addAction(toolbar_new_file_action)
tool_bar.addAction(toolbar_open_file_action)
def text_edit_area(self):
text_edit = QtGui.QTextEdit()
self.setCentralWidget(text_edit)
def close_application(self):
choice = QtGui.QMessageBox.question(self,
'Confirmation',
'Do you really want to quit?',
QtGui.QMessageBox.Yes |
QtGui.QMessageBox.No)
if choice == QtGui.QMessageBox.Yes:
sys.exit()
else:
pass
def create_new_file(self):
print('create new file')
def open_file(self):
print('open file')
def save_file(self):
print('saving file')
def undo(self):
print('undo')
def maximize(self):
print('maximize')
def change_font(self):
print('change font')
def about(self):
print('about')
def main():
app = QtGui.QApplication(sys.argv)
gui = Editor()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
Answer: You either create an instance attribute that stores a reference to the
QTextEdit
`self.text_edit = QtGui.QTextEdit()`
or you retrieve a reference by calling the `centralWidget()` method of
QMainWindow
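A minimal sketch of both options applied to the code from the question (method bodies abbreviated):

def text_edit_area(self):
    self.text_edit = QtGui.QTextEdit()   # keep a reference on the instance
    self.setCentralWidget(self.text_edit)

def create_new_file(self):
    # via the stored attribute...
    print(self.text_edit.toPlainText())
    # ...or via the central-widget lookup, without storing anything
    print(self.centralWidget().toPlainText())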
|
Plain HTTP API call to Google Geocoding API fails with Python requests module
Question: I'm inside a corporate proxy, so often I have SSL issues and have to fall back
to plain HTTP (when it's not an issue involving sensitive data). Thus, I'm
trying to geotag with [Google's Geocoding
API](https://developers.google.com/maps/documentation/geocoding) over plain
HTTP. When I craft a call and execute it with `curl` on the terminal, I get my
JSON response as expected. But, when I put the same URL inside a Python script
and hit the URL with `requests.get`, I get an SSL error:
{u'status': u'REQUEST_DENIED', u'error_message': u'Requests to this API must be over SSL.', u'results': []}
The Python is dead-simple, but here it is for posterity:
import json
import requests
response = requests.get('http://maps.googleapis.com/maps/api/geocode/json?address=some+address+here&key=my-key')
json_data = json.loads(response.text)
print json_data
And, of course, if I try the call with HTTPS I run in to proxy cert errors.
Any ideas?
* * *
UPDATE:
I know I can add the `verify=False` argument to the `requests` method call to
overcome the SSL cert issue, but that doesn't help me understand why the call
is bouncing off the API even though it ostensibly accepts plain HTTP calls.
Answer: You should use the http**s** protocol for your request
import json
import requests
response = requests.get('https://maps.googleapis.com/maps/api/geocode/json?address=some+address+here&key=my-key')
json_data = json.loads(response.text)
print json_data
The example url from the Google
[documentation](https://developers.google.com/maps/documentation/geolocation/intro):
[https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&key=my_key](https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&key=my_key)
gives
> {u'status': u'OK', u'results': [{u'geometry': {u'location': {u'lat':
> 37.422245, u'lng': -122.0840084}, u'viewport': {u'northeast': {u'lat':
> 37.42359398029149, u'lng': -122.0826594197085}, u'southwest': {u'lat':
> 37.4208960197085, u'lng': -122.0853573802915}}, u'location_type':
> u'ROOFTOP'}, u'address_components': [{u'long_name': u'1600', u'types':
> [u'street_number'], u'short_name': u'1600'}, {u'long_name': u'Amphitheatre
> Parkway', u'types': [u'route'], u'short_name': u'Amphitheatre Pkwy'},
> {u'long_name': u'Mountain View', u'types': [u'locality', u'political'],
> u'short_name': u'Mountain View'}, {u'long_name': u'Santa Clara County',
> u'types': [u'administrative_area_level_2', u'political'], u'short_name':
> u'Santa Clara County'}, {u'long_name': u'California', u'types':
> [u'administrative_area_level_1', u'political'], u'short_name': u'CA'},
> {u'long_name': u'United States', u'types': [u'country', u'political'],
> u'short_name': u'US'}, {u'long_name': u'94043', u'types': [u'postal_code'],
> u'short_name': u'94043'}], u'place_id': u'ChIJ2eUgeAK6j4ARbn5u_wAGqWA',
> u'formatted_address': u'1600 Amphitheatre Pkwy, Mountain View, CA 94043,
> USA', u'types': [u'street_address']}]}
|
Celery beat not starting EOFError('Ran out of input')
Question: Everything worked perfectly fine until:
celery beat v3.1.18 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://user:**@staging-api.user-app.com:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> /tmp/beat.db
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2015-09-25 17:29:24,453: INFO/MainProcess] beat: Starting...
[2015-09-25 17:29:24,457: CRITICAL/MainProcess] beat raised exception <class 'EOFError'>: EOFError('Ran out of input',)
Traceback (most recent call last):
File "/home/user/staging/venv/lib/python3.4/site-packages/kombu/utils/__init__.py", line 320, in __get__
return obj.__dict__[self.__name__]
KeyError: 'scheduler'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/shelve.py", line 111, in __getitem__
value = self.cache[key]
KeyError: 'entries'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/apps/beat.py", line 112, in start_scheduler
beat.start()
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/beat.py", line 454, in start
humanize_seconds(self.scheduler.max_interval))
File "/home/user/staging/venv/lib/python3.4/site-packages/kombu/utils/__init__.py", line 322, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/beat.py", line 494, in scheduler
return self.get_scheduler()
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/beat.py", line 489, in get_scheduler
lazy=lazy)
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/utils/imports.py", line 53, in instantiate
return symbol_by_name(name)(*args, **kwargs)
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/beat.py", line 358, in __init__
Scheduler.__init__(self, *args, **kwargs)
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/beat.py", line 185, in __init__
self.setup_schedule()
File "/home/user/staging/venv/lib/python3.4/site-packages/celery/beat.py", line 377, in setup_schedule
self._store['entries']
File "/usr/local/lib/python3.4/shelve.py", line 114, in __getitem__
value = Unpickler(f).load()
EOFError: Ran out of input
What is this?
Answer: I've deleted my `celerybeat-schedule` files and it solved my problem.
<https://github.com/celery/kombu/issues/516>
|
python SyntaxError: invalid syntax on def
Question: Can you help me with my code? Here is a small part of it:
# -*- coding: utf-8 -*-
from sys import exit
def inside():
print "Text."
print "Text."
ladder = raw_input("Text.\n> ")
if "3" in ladder:
print 'Text.', \
'Text.'
fl()
else:
print "Text. %d Text." % ladder
room_number_wrong = raw_input("Text.\n> ")
dead("Text.")
def fl():
print "Text."
room_number_right = raw_input("Text.\n> ")
if "12" in room_number_right:
print "Text."
open()
else:
dead('Text.', \
'Text.!')
def dead(why):
print why, "Good job!"
exit(0)
def open():
print "Text."
print "Text."
welcome = raw_input.lower("Text.\n> ")
if "Text." in welcome:
dead("Text.")
elif "Text." in welcome or "Text." in welcome:
print 'Text.', \
'Text.'
else:
dead('Text.', \
'Text.'
def enter():
print 'Text.', \
'Text.'
print "Text."
name = raw_input("Text.\n> ")
if "Text." in name or "Text." in name:
print "Text."
inside()
else:
dead("Text.")
enter()
When I try to run the code, on the first line of this part (`def fl():`) I
receive an error:

> SyntaxError: invalid syntax

Maybe I forgot something. I'm using Python 2.7.10.
Answer: I rewrote the code and it worked. Thanks for the help! (For the record, the posted snippet is missing a closing parenthesis at the end of the `dead('Text.', 'Text.'` call in `open()`, which is the kind of mistake that produces this SyntaxError.)
|
Importing module item of subpackage from another subpackage
Question: I have this project structure:
root_package/
root_package/packA/
root_package/packA/__init__.py (empty)
root_package/packA/moduleA.py
root_package/packB/__init__.py (empty)
root_package/packB/moduleB.py
root_package/rootModule.py
In `rootModule.py` I have `from packA.moduleA import ModuleAClass`.
In `packA/moduleA.py` I have `from root_package.packB.moduleB import
ModuleBItem`.
When running rootModule either via PyCharm or from the terminal with `python
./rootModule.py` I am getting the error below. Was this the right way of
importing?
Traceback (most recent call last):
File "/project_dir/rootPackage/rootModule.py", line 7, in <module>
from packA.moduleA import ModuleAClass
File "/project_dir/rootPackage/packA/moduleA.py", line 8, in <module>
from rootPackage.packB.moduleB import module_b_method
File "/project_dir/rootPackage/rootModule.py", line 7, in <module>
from packA.wavelet_compression import WaveletCompression
ImportError: cannot import name WaveletCompression
How to solve this?
**Update 1**
I've added a test file at the _project_folder_ (**not** the root_package
folder).
So the current directory structure is this:
project_folder/
project_folder/root_package/
project_folder/root_package/packA/
project_folder/root_package/packA/__init__.py (empty)
project_folder/root_package/packA/moduleA.py
project_folder/root_package/packB/__init__.py (empty)
project_folder/root_package/packB/moduleB.py
project_folder/root_package/rootModule.py
project_folder/test_rootModule.py
I haven't made the `project_folder` a package (no `__init__.py` file) since
the `test_rootModule` is simply a script to help me run the experiments.
So, in `root_package/packA/moduleA.py`, after changing `from
root_package.packB.moduleB import ModuleBitem` to `from packB.moduleB import
ModuleBitem`, as the answer suggests, it works.
But now there are **two** problems:
1. PyCharm doesn't agree with the change:
[](http://i.stack.imgur.com/7D9Zj.png)
2. I cannot run my experiments from the `project_folder/test_rootModule.py` script. I got this error:
Traceback (most recent call last):
File "project_folder/test_rootModule.py", line 8, in
from root_package.rootModule import rootModuleClass
File "project_folder/root_package/rootModule.py", line 7, in
from packA.moduleA import ModuleAClass
File "project_folder/root_package/packA/moduleA.py", line 8, in
from packB.moduleB import module_b_item
ImportError: No module named packB.moduleB
_I cannot seem to get the 2nd Traceback to look like a code segment._
**Update 2**
What solved the problem was going to the `Project: project_name > Project
Structure` dialog in PyCharm, selecting the `root_package` and then setting it
as a `Sources` folder.
Now, I can run via the IDE both the `rootModule` and the `test_rootModule`.
**However**, I cannot get `test_rootModule` to run from the terminal.
The `test_rootModule` has these imports:
from root_package.rootModule import RootModuleClass
from root_package.packB.moduleB import module_b_item
I am at the `project_folder` dir, and run `python ./test_rootModule.py` and
get this error:
Traceback (most recent call last):
File "./test_rootModule.py", line 8, in <module>
from root_package.rootModule import RootModuleClass
File "project_folder/root_package/rootModule.py", line 7, in <module>
from packA.moduleA import ModuleAClass
File "project_folder/root_package/packA/moduleA.py", line 8, in <module>
from packB.moduleB import module_b_item
ImportError: No module named packB.moduleB
Answer: If you are running all your code from within this path:
`project_folder`
then you should ensure that all imports of modules that reside in `root_package`
are prefixed by it. So for example:
`from root_package.modA import foo`
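A minimal sketch of applying that consistently to the layout in the question: in `root_package/packA/moduleA.py` use

from root_package.packB.moduleB import module_b_item

instead of `from packB.moduleB import module_b_item`, and likewise change `from packA.moduleA import ModuleAClass` in `rootModule.py` to `from root_package.packA.moduleA import ModuleAClass`; then run the scripts from inside `project_folder`, so that the directory containing `root_package` is on `sys.path`.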
|
Maya - How to create python scripts with more than one file?
Question: It's my first post here, please understand that I'm a beginner and that I'm
learning "on-the-job".
Can someone explain how I can import a file from a different module in a Maya
Python script? I'm getting the following error:
Error: ImportError: file E:/.../bin/mainScript.py line 17: No module named tools
Here are my directories and codes:
Main_folder\
|-- install.mel
|-- ReadMeFirst.txt
`-- bin\
|-- __init__.py
|-- mainScript.py
|-- tools.py
`-- presets\
|-- bipedPreset.txt
|-- quadrupedPreset.txt
`-- [...] .txt
I'm trying to import `tools.py` in `mainScript.py`
### EDIT:
Ok, as it won't fit in a comment, I'm editing this post to add details. I moved
'Main_folder' onto my Desktop and ran the script once again in Maya. It
still doesn't work but I have a more complete error traceback. Here it is:
# Error: Error in maya.utils._guiExceptHook:
# File "C:\Program Files\Autodesk\Maya2014\Python\lib\site-packages\maya\utils.py", line 332, in formatGuiException
# result = u'%s: file %s line %s: %s' % (exceptionType.__name__, file, line, exceptionMsg)
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xfc in position 11: ordinal not in range(128)
#
# Original exception was:
# Traceback (most recent call last):
# File "<maya console>", line 3, in <module>
# File "C:/Users/UKDP/Desktop/Main_folder/bin/mainScript.py", line 17, in <module>
# from tools import ClassTest
# ImportError: No module named tools #
Answer: Try importing like this:

>>> import san.libs.stringops

where `san` is a directory (create an `__init__.py` in it), `libs` is a
directory (create an `__init__.py` in it), and `stringops.py` is the module
being imported.
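The structure that import describes would look like this (with the parent of `san` on Maya's Python path):

san/
|-- __init__.py
`-- libs/
    |-- __init__.py
    `-- stringops.py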
|
mySQL within Python 2.7.9
Question: Arrghh... I am trying to use mySQL with Python. I have installed all the
libraries for using mySQL, but keep getting "ImportError: No module named
mysql.connector" for "import mysql.connector", "mysql", etc.
Here is my config:
I have a RHEL server:
> Red Hat Enterprise Linux Server release 6.7 (Santiago)
with Python 2.7.9
> Python 2.7.9 (default, Dec 16 2014, 10:42:10) [GCC 4.4.7 20120313 (Red Hat
> 4.4.7-11)] on linux2
with mySQL 5.1
> mysql Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline
> 5.1
* * *
I have all the appropriate libraries/modules installed, I think!
yum install MySQL-python
> Package MySQL-python-1.2.3-0.3.c1.1.el6.x86_64 already installed and latest
> version
yum install mysql-connector-python.noarch
> Installed: mysql-connector-python.noarch 0:1.1.6-1.el6
> yum install MySQL-python.x86_64 Package MySQL-
> python-1.2.3-0.3.c1.1.el6.x86_64 already installed and latest version
yum install mysql-connector-python.noarch
> Package mysql-connector-python-1.1.6-1.el6.noarch already installed and
> latest version
* * *
What am I doing wrong? HELP!?
Answer: You should use `virtualenv` in order to isolate the environment. That way your
project libs won't clash with other projects' libs. Also, you probably should
install the `Mysql` driver/connector from `pip`.
`Virtualenv` is a CLI tool for managing your environment. It is really easy to
use and helps a lot. What it does is to create all the folders Python needs on
a custom location (usually your specific project folder) and it also sets all
the shell variables so that Python can find the folders. Your system (/usr and
so on) folders are not removed from the shell; rather, they just get a low
priority. That is done by correctly setting your `PATH` variable, and
`virtualenv` does that when you load a determined environment.
It is common practice to use an environment for each project you work on. That
way, Python and `pip` won't install libs on the global folders. Instead, `pip`
installs the libs on the current environment you are using. That avoids
version conflicts and even Python version conflicts.
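For instance, a minimal sequence; the connector package name here is an
assumption about what PyPI provides for your MySQL version:

    $ pip install virtualenv
    $ virtualenv myproject-env
    $ source myproject-env/bin/activate
    (myproject-env)$ pip install mysql-connector-python

After activation, `import mysql.connector` resolves against the environment's
own site-packages rather than the system-wide RPM installs.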
|
Install Azure Python api on linux: importError: No module named storage.blob
Question: I'm trying to use the Azure Python API. I followed these installation
instructions <https://azure.microsoft.com/en-us/documentation/articles/python-
how-to-install/> using
pip install azure
It had no issues. (I ran it again below just to show the message stating that
it is installed. )
I want to upload to Storage as described here:
<https://azure.microsoft.com/en-us/documentation/articles/storage-python-how-
to-use-blob-storage/>
$ pip install azure
Requirement already satisfied (use --upgrade to upgrade): azure in ./lib/python2.7/azure-1.0.1-py2.7.egg
...
Requirement already satisfied (use --upgrade to upgrade): azure-storage==0.20.1 in ./lib/python2.7/azure_storage-0.20.1-py2.7.egg (from azure)
...
$ pip install azure-storage
Requirement already satisfied (use --upgrade to upgrade): azure-storage in ./lib/python2.7/azure_storage-0.20.1-py2.7.egg
...
$ python2.7
>>> import azure
/home/path/lib/python2.7/azure_nspkg-1.0.0-py2.7.egg/azure/__init__.py:1: UserWarning: Module azure was already imported from
...
/home/path/lib/python2.7/azure_nspkg-1.0.0-py2.7.egg/azure/__init__.pyc, but /home/path/lib/python2.7/azure_storage-0.20.1-py2.7.egg is being added to sys.path
__import__('pkg_resources').declare_namespace(__name__)
...
>>> import azure # a second time just to try it. This time no msg.
>>> from azure.storage.blob import BlobService
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named storage.blob
Answer: If you only need azure-storage you should be able to install just that
package. If you need storage and other aspects of Azure, then you can just
install azure and that will grab everything including storage. No need for
both installs.
Particularly if you had an older version of Azure installed before, there can
be issues with how the dependencies link up. Give `pip uninstall azure` and
`pip uninstall azure-storage` a try, and if you're feeling particularly
thorough, delete anything prefixed with azure in your Python lib folder. Then
install just what you need per the first paragraph.
|
GSUTIL traceback-Linux Mint
Question: I'm trying to install gsutil; after installation, every command
gives the following output:
Traceback (most recent call last):
File "/usr/local/bin/gsutil", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2749, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 446, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 459, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 632, in resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (httplib2 0.8 (/usr/lib/python2.7/dist-packages), Requirement.parse('httplib2>=0.9.1'))
Answer: That means you need to update the version of httplib2 installed on your system
to at least v 0.9.1.
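For instance, assuming pip manages the Python installation that gsutil runs
under, one way is:

    $ sudo pip install --upgrade 'httplib2>=0.9.1'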
|
Django Related model cannot be resolved
Question: I have two django/python applications one is running on Django 1.8 and Python
3.4 and the other is running on Django 1.8 and Python 2.7. These applications
share a database and use a python package that houses several of the models
that are shared between the two applications in a few different apps.
The application running on 3.4 works fine, but the application running on 2.7
throws `ValueError: Related model 'model_reference' cannot be resolved`.
In this pseudo-example the package is core_app; the two models live in
separate apps called foobar and barfoo contained in core_app.
**foobar/models.py**
class Model_A(models.Model):
name = TextField()
**barfoo/models.py**
class Model_B(models.Model):
model_a = ForeignKey('core_app_foobar.Model_A')
Here is the full stack trace.
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/manager.pyc in manager_method(self, *args, **kwargs)
125 def create_method(name, method):
126 def manager_method(self, *args, **kwargs):
--> 127 return getattr(self.get_queryset(), name)(*args, **kwargs)
128 manager_method.__name__ = method.__name__
129 manager_method.__doc__ = method.__doc__
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/query.pyc in get(self, *args, **kwargs)
326 if self.query.can_filter():
327 clone = clone.order_by()
--> 328 num = len(clone)
329 if num == 1:
330 return clone._result_cache[0]
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/query.pyc in __len__(self)
142
143 def __len__(self):
--> 144 self._fetch_all()
145 return len(self._result_cache)
146
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/query.pyc in _fetch_all(self)
963 def _fetch_all(self):
964 if self._result_cache is None:
--> 965 self._result_cache = list(self.iterator())
966 if self._prefetch_related_lookups and not self._prefetch_done:
967 self._prefetch_related_objects()
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/query.pyc in iterator(self)
236 # Execute the query. This will also fill compiler.select, klass_info,
237 # and annotations.
--> 238 results = compiler.execute_sql()
239 select, klass_info, annotation_col_map = (compiler.select, compiler.klass_info,
240 compiler.annotation_col_map)
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/sql/compiler.pyc in execute_sql(self, result_type)
827 result_type = NO_RESULTS
828 try:
--> 829 sql, params = self.as_sql()
830 if not sql:
831 raise EmptyResultSet
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/sql/compiler.pyc in as_sql(self, with_limits, with_col_aliases, subquery)
376 refcounts_before = self.query.alias_refcount.copy()
377 try:
--> 378 extra_select, order_by, group_by = self.pre_sql_setup()
379 if with_limits and self.query.low_mark == self.query.high_mark:
380 return '', ()
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/sql/compiler.pyc in pre_sql_setup(self)
46 might not have all the pieces in place at that time.
47 """
---> 48 self.setup_query()
49 order_by = self.get_order_by()
50 extra_select = self.get_extra_select(order_by, self.select)
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/sql/compiler.pyc in setup_query(self)
37 if all(self.query.alias_refcount[a] == 0 for a in self.query.tables):
38 self.query.get_initial_alias()
---> 39 self.select, self.klass_info, self.annotation_col_map = self.get_select()
40 self.col_count = len(self.select)
41
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/sql/compiler.pyc in get_select(self)
185 if self.query.default_cols:
186 select_list = []
--> 187 for c in self.get_default_columns():
188 select_list.append(select_idx)
189 select.append((c, None))
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/sql/compiler.pyc in get_default_columns(self, start_alias, opts, from_parent)
522 alias = self.query.join_parent_model(opts, model, start_alias,
523 seen_models)
--> 524 column = field.get_col(alias)
525 result.append(column)
526 return result
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/fields/related.pyc in get_col(self, alias, output_field)
2015
2016 def get_col(self, alias, output_field=None):
-> 2017 return super(ForeignKey, self).get_col(alias, output_field or self.related_field)
2018
2019
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/fields/related.pyc in related_field(self)
1895 @property
1896 def related_field(self):
-> 1897 return self.foreign_related_fields[0]
1898
1899 def get_reverse_path_info(self):
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/fields/related.pyc in foreign_related_fields(self)
1629 @property
1630 def foreign_related_fields(self):
-> 1631 return tuple(rhs_field for lhs_field, rhs_field in self.related_fields)
1632
1633 def get_local_related_value(self, instance):
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/fields/related.pyc in related_fields(self)
1616 def related_fields(self):
1617 if not hasattr(self, '_related_fields'):
-> 1618 self._related_fields = self.resolve_related_fields()
1619 return self._related_fields
1620
/home/ubuntu/moi/local/lib/python2.7/site-packages/django/db/models/fields/related.pyc in resolve_related_fields(self)
1601 raise ValueError('Foreign Object from and to fields must be the same non-zero length')
1602 if isinstance(self.rel.to, six.string_types):
-> 1603 raise ValueError('Related model %r cannot be resolved' % self.rel.to)
1604 related_fields = []
1605 for index in range(len(self.from_fields)):
ValueError: Related model 'core_app_foobar.Model_A' cannot be resolved
Answer: Reference the model by its app label, not the package path:

    class Model_B(models.Model):
        model_a = models.ForeignKey('foobar.Model_A')

Also, don't forget that both models must inherit from Django's model base
class:

    from django.db import models
> To refer to models defined in another application, you can explicitly
> specify a model with the full application label. For example, if the
> Manufacturer model above is defined in another application called
> production, you’d need to use:
[Doc
ForeignKey](https://docs.djangoproject.com/en/1.8/ref/models/fields/#django.db.models.ForeignKey)
|
Calculate abs() value of input -- Python
Question: I just started studying programming, and for a school assignment we
have to make a program in Python that asks the user to input an integer and
then calculates its absolute value. I know about the `abs` function. What I
can't figure out is how to pass the user's input to the `abs` function.
I would write:
a = int (raw_input ("give your number"))
int = abs()
The school assignment:
Write a program that calculates the absolute value of a number. That is, the
absolute value of -15 is 15, the absolute value of 10 is 10, etc.
Answer: Are you trying to do something like `a = abs(a)`? That assigns the
absolute value of `a` back to `a`. Note that `abs` is a built-in, so no import
is needed (the `math` module has no `abs`; its float-returning variant is
`math.fabs`).
Full code:
    a = int(input("Input a number"))
    a = abs(a)
    print(a)
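If the assignment expects you to compute the absolute value yourself rather
than call `abs`, a simple conditional does it; this sketch uses `raw_input` to
match the Python 2 snippet in the question:

    a = int(raw_input("give your number"))
    if a < 0:
        a = -a
    print a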
|
Python: Choose random line from file, then delete that line
Question: I'm new to Python (in that I learned it through a CodeAcademy course) and
could use some help with figuring this out.
I have a file, 'TestingDeleteLines.txt', that's about 300 lines of text. Right
now, I'm trying to get it to print me 10 random lines from that file, then
delete those lines.
So if my file has 10 lines:
Carrot
Banana
Strawberry
Canteloupe
Blueberry
Snacks
Apple
Raspberry
Papaya
Watermelon
I need it to randomly pick out from those lines, tell me it's randomly picked
blueberry, carrot, watermelon, and banana, and then delete those lines.
The issue is, when Python reads a file, it reads that file and once it gets to
the end, it won't go back and delete the lines. My current thinking was that I
could write the lines to a list, then reopen the file, match the list to the
text file, and if it finds a match, delete the lines.
My current problem is twofold:
1. It's duplicating the random elements. If it picks a line, I need it to not pick that same line again. However, using random.sample doesn't seem to work, as I need those lines separated out when I later use each line to append to a URL.
2. I don't feel like my logic (write to array->find matches in text file->delete) is the most ideal logic. Is there a better way to write this?
import webbrowser
import random
"""url= 'http://www.google.com'
webbrowser.open_new_tab(url+myline)""" Eventually, I need a base URL + my 10 random lines opening in each new tab
def ShowMeTheRandoms():
x=1
DeleteList= []
lines=open('TestingDeleteLines.txt').read().splitlines()
for x in range(0,10):
myline=random.choice(lines)
print(myline) """debugging, remove later"""
DeleteList.append(myline)
x=x+1
print DeleteList """debugging, remove later"""
ShowMeTheRandoms()
Answer: The point is: you don't "delete" from a file; you rewrite the whole
file (or another one) with new content. The canonical way is to read the
original file line by line, write the lines you want to keep to a temporary
file, then replace the old file with the new one.
with open("/path/to/source.txt") as src, open("/path/to/temp.txt", "w") as dest:
for line in src:
if should_we_keep_this_line(line):
dest.write(line)
os.rename("/path/to/temp.txt", "/path/to/source.txt")
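To address the duplicate picks as well, here is a minimal sketch combining
`random.sample` (which returns distinct lines, unlike repeated
`random.choice`) with the rewrite approach; the filename comes from the
question:

    import os
    import random

    with open('TestingDeleteLines.txt') as f:
        lines = f.read().splitlines()

    chosen = random.sample(lines, 10)  # 10 distinct lines, no repeats
    for line in chosen:
        print(line)  # or append each line to your base URL here

    # Note: this drops every copy of a chosen line if the file has duplicates.
    remaining = [line for line in lines if line not in chosen]
    with open('TestingDeleteLines.txt.tmp', 'w') as f:
        f.write('\n'.join(remaining) + '\n')
    os.rename('TestingDeleteLines.txt.tmp', 'TestingDeleteLines.txt')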
|
Why does isinstance([1, 2, 3], List[str]) evaluate to true?
Question: I was playing around a bit with the new type hinting / typing module with
python3.5 trying to find a way to confirm if the hinted type is equal to the
actual type of the variable and came across something that rather surprised
me.
>>> from typing import List
>>> someList = [1, 2, 3]
>>> isinstance(someList, List[str])
True
Continuing my search for finding a way to compare a variable to it's hinted
type I've also tried this:
>>> anotherList = ["foo", "bar"]
>>> type(anotherList) is List[str]
False
Would anyone be able to explain why exactly the former evaluates to True?
And continuing onwards, is there a sound way to check if a variable's type is
equal to a type coming from the typing module?
Answer: [`isinstance`](https://docs.python.org/3/library/functions.html#isinstance)
does not do real [PEP 484](https://www.python.org/dev/peps/pep-0484/) type
checking. [The
documentation](https://docs.python.org/3/library/typing.html#typing.TypeVar)
notes this in passing:
> In general, `isinstance()` and `issubclass()` should not be used with types.
The `typing` module, as well as the `collections.abc` and `abc` modules it’s
based on, use extensive [`__instancecheck__` and
`__subclasscheck__`](https://docs.python.org/3/reference/datamodel.html#customizing-
instance-and-subclass-checks) magic to make `isinstance` and `issubclass`
behave reasonably. But they’re not doing enough to support your case. Nor is
it their goal to support it.
> is there a sound way to check if a variable's type is equal to a type coming
> from the typing module?
You’re not looking for type _equality_. As you have noted yourself, the type
of `[1, 2, 3]` is `list`, which is not _equal_ to `List[str]`, nor to
`List[int]`. You’re looking for type _checking_ , which is much more
complicated.
Consider this:
def my_function():
# ... 1000 lines of very complicated code ...
print(isinstance(my_function, Callable[[], int]))
What would you expect this program to print? You can’t expect `isinstance` to
dig into `my_function` at runtime and infer that it always returns `int`. This
is not feasible in Python. You need either a “compile” time type checker that
has access to the structure of `my_function`, or explicit type annotations,
or—most likely—both.
|
Global variables in multiples functions in Python
Question: I'll try to explain my situation with examples.
I'm using `global` to declare a variable, but this only works within one
function; when I try to use it from another sub-function it doesn't work.
register.py
def main():
alprint = input("Enter something: ")
if alprint == "a":
def alCheck():
global CheckDot
CheckDot = input("Enter your opinion: ")
def alTest():
global CheckTest
CheckTest = input("Hope it works: ")
alCheck()
alTest()
main()
and content.py
from register import CheckTest
if CheckTest == "ad":
print("You are welcome!")
When I declare the variable `CheckTest` in a sub-function of `main` (namely
`alTest()`) using `global` and then import it from another file, it doesn't
work. I've tried a lot of things, but nothing helps.
Answer: It _would_ work, except that if the user enters something other than `a` for
the first `input`, `CheckTest` is not defined, so it gives an `ImportError`.
You might want to try something like this instead:
def main():
global CheckTest, CheckDot
def alCheck():
global CheckDot
CheckDot = input("Enter your opinion: ")
def alTest():
global CheckTest
CheckTest = input("Hope it works: ")
alprint = input("Enter something: ")
if alprint == "a":
alCheck()
alTest()
else:
CheckTest = None
CheckDot = None
main()
This way, `CheckTest`, and `CheckDot` are always defined.
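As a design note, here is a sketch that avoids module-level globals entirely
by returning the value instead; it restructures the original code rather than
reproducing it:

    # register.py
    def main():
        alprint = input("Enter something: ")
        if alprint == "a":
            input("Enter your opinion: ")
            return input("Hope it works: ")
        return None

    # content.py
    import register

    if register.main() == "ad":
        print("You are welcome!")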
|
org-babel: want all noweb block references to appear verbatim on export
Question: Consider the following MVE in org-mode -- it contains my full question in
detail. But, in summary, with some code blocks, some noweb references to other
code blocks are substituted inline when I export the document, and, with other
code blocks, the noweb references, in double broket quotes, are copied
verbatim into the exported PDF. I do not know what causes this difference in
behavior and I don't know how to control it, but I'd like to. I'd like to be
able to specify that some blocks have behavior 1 (references substituted) and
other blocks have behavior 2 (references verbatim).
The PDF that results from `org-export` is [at this
link](https://drive.google.com/file/d/0B4v0MzmZfdm8TzRLeFlreUFnbEk/view?usp=sharing)
#+BEGIN_COMMENT
The emacs lisp block must export results, even though the results are none,
otherwise the block will not be eval'ed on export, and we will get
unacceptable confirmation requests for all the subsequent python blocks.
#+END_COMMENT
#+BEGIN_SRC emacs-lisp :exports results :results none
(setq org-confirm-babel-evaluate nil)
#+END_SRC
** PyTests
Define the test and cases. This code must be tangled out to an external file
so =py.test= can see it.
When I /export/ this to PDF, the noweb references, namely =<<imports>>= and
=<<definitions>>=, are substituted inline, so the typeset version of this
block in the PDF shows ALL the code. This is not what I want.
#+NAME: test-block
#+BEGIN_SRC python :noweb yes :tangle test_foo.py
<<imports>>
<<definitions>>
def test_smoke ():
np.testing.assert_approx_equal (foo_func (), foo_constant)
#+END_SRC
#+RESULTS: test-block
: None
The following blocks import prerequisites and do a quick smoke test:
** Do Some Imports
#+NAME: imports
#+BEGIN_SRC python
import numpy as np
#+END_SRC
#+RESULTS: imports
: None
** Define Some Variables
However, in the typeset PDF, the noweb reference =<<foo-func>>= in the block
below is /not/ substituted in-line, but rather appears verbatim. I want /all/
noweb references to appear verbatim in the exported, typeset, PDF document,
just like this one.
#+NAME: definitions
#+BEGIN_SRC python
foo_constant = 42.0
<<foo-func>>
#+END_SRC
#+RESULTS: definitions
** Define Some Functions
*** Foo Function is Really Interesting
#+NAME: foo-func
#+BEGIN_SRC python
def foo_func () :
return 42.000
#+END_SRC
#+RESULTS: foo-func
: None
We want results from pytest whether it succeeds or fails, hence the /OR/ with
=true= in the shell
#+BEGIN_SRC sh :results output replace :exports both
py.test || true
#+END_SRC
#+RESULTS:
: ============================= test session starts ==============================
: platform darwin -- Python 2.7.10, pytest-2.8.0, py-1.4.30, pluggy-0.3.1
: rootdir: /Users/bbeckman/foo, inifile:
: collected 1 items
:
: test_foo.py .
:
: =========================== 1 passed in 0.06 seconds ===========================
Answer: Found the appropriate [references
here](http://orgmode.org/manual/noweb.html#noweb)
[Here is a corrected PDF exported from the following .org
file](https://drive.google.com/file/d/0B4v0MzmZfdm8QmwybXRrYVFVdEk/view?usp=sharing).
And here is the corrected MVE (it, itself, explains the correction):
#+BEGIN_COMMENT
The emacs lisp block must export results, even though the exports are none,
otherwise the block will not be eval'ed on export, and we will get unacceptable
confirmation requests for all the subsequent python blocks.
#+END_COMMENT
#+BEGIN_SRC emacs-lisp :exports results :results none
(setq org-confirm-babel-evaluate nil)
#+END_SRC
** PyTests
Define the test and cases. This code must be tangled out to an external file
so =py.test= can see it.
When I /export/ this to PDF, the noweb references, namely =<<imports>>= and
=<<definitions>>=, are *NOT* substituted inline, but typeset verbatim. This
is what I want. You get this behavior by saying =:noweb no-export= in the
header.
#+NAME: test-block
#+BEGIN_SRC python :tangle test_foo.py :noweb no-export :exports code :results none
dummy_for_org_mode = True
<<imports>>
<<definitions>>
def test_smoke ():
np.testing.assert_approx_equal (foo_func (), foo_constant)
#+END_SRC
The following blocks import prerequisites and do a quick smoke test:
** Do Some Imports
#+NAME: imports
#+BEGIN_SRC python :exports code :results none
import numpy as np
#+END_SRC
** Define Some Variables and Functions
In this block, I want the noweb reference =<<foo-func>>= in the block to be
substituted in-line and not to appear verbatim. Do that by saying
=:noweb yes= in the header.
#+NAME: definitions
#+BEGIN_SRC python :noweb yes :exports code :results none
foo_constant = 42.0
<<foo-func>>
#+END_SRC
** Define Some Functions
*** Foo Function is Really Interesting
Here, I want to talk about the implementation of foo function in detail, but I
don't want its code to be exported again, just to appear in the original
=.org= file as I reminder or note to me.
#+NAME: foo-func
#+BEGIN_SRC python :exports none :results none
def foo_func () :
return 42.000
#+END_SRC
** Run the Tests
We want results from pytest whether it succeeds or fails, hence the /OR/ with
=true= in the shell
#+BEGIN_SRC sh :results output replace :exports both
py.test || true
#+END_SRC
#+RESULTS:
: ============================= test session starts ==============================
: platform darwin -- Python 2.7.10, pytest-2.8.0, py-1.4.30, pluggy-0.3.1
: rootdir: /Users/bbeckman/foo, inifile:
: collected 1 items
:
: test_foo.py .
:
: =========================== 1 passed in 0.08 seconds ===========================
|
Python - How to find an item exists in a list (or sublist)?
Question: For example
my_list = [1, 2, 3, 4, [5, 6], [7, 8]]
I want to find if 7 is in `my_list`.? The answer should be True, because it is
part of the last sublist. Any ideas?
Answer: If `mylist` was a list of only lists you could use
`itertools.chain.from_iterable()`
import itertools
mylist = [[1,2,3,4],[5,6],[7,8]]
merged = list(itertools.chain.from_iterable(mylist))
7 in merged
Since we have a list of ints mixed with lists, we define a custom flattening
step. This can also be done in a single expression; see the sketch at the end
of this answer.
mylist = [1, 2, 3, 4, [5, 6], [7, 8]]
merged = []
for item in mylist:
    if isinstance(item, list):
        merged.extend(item)
    else:
        merged.append(item)
7 in merged
You could also use the deprecated `compiler.ast.flatten` function:
from compiler.ast import flatten
merged = flatten(mylist)
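For completeness, the single-expression check alluded to above can be written
with `any()`; like the rest of this answer, the sketch only handles one level
of nesting:

    mylist = [1, 2, 3, 4, [5, 6], [7, 8]]
    found = any(x == 7 or (isinstance(x, list) and 7 in x) for x in mylist)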
|
Python crawler: downloading HTML page
Question: I want to crawl (gently) a website and download each HTML page that I crawl.
To accomplish that I use the library requests. I already did my crawl-listing
and I try to crawl them using urllib.open but without user-agent, I get an
error message. So I choose to use requests, but I don't really know how to use
it.
headers = {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1'
}
page = requests.get('http://www.xf.com/ranking/get/?Amount=1&From=left&To=right', headers=headers)
with open('pages/test.html', 'w') as outfile:
outfile.write(page.text)
The problem is when the script try to write the response in my file I get some
encoding error:
UnicodeEncodeError: 'ascii' codec can't encode characters in position 6673-6675: ordinal not in range(128)
How can we write in a file without having those encoding problem?
Answer: In Python 2, text files don't accept Unicode strings. Use the
`response.content` to access the original binary, undecoded content:
with open('pages/test.html', 'w') as outfile:
outfile.write(page.content)
This will write the downloaded HTML in the original encoding as served by the
website.
Alternatively, if you want to re-encode all responses to a specific encoding,
use `io.open()` to produce a file object that does accept Unicode:
import io
with io.open('pages/test.html', 'w', encoding='utf8') as outfile:
outfile.write(page.text)
Note that many websites rely on signalling the correct codec in the _HTML
tags_ , and the content can be served without a characterset parameter
altogether.
In that case `requests` uses the _default_ codec for the `text/*` mimetype,
Latin-1, to decode HTML to Unicode text. **This is often the wrong codec** and
relying on this behaviour can lead to
[Mojibake](https://en.wikipedia.org/wiki/Mojibake) output later on. I
recommend you stick to writing the binary content and rely on tools like
BeautifulSoup to detect the correct encoding later on.
Alternatively, test explicitly for the `charset` parameter being present and
only re-encode (via `response.text` and `io.open()` or otherwise) if
`requests` did not fall back to the Latin-1 default. See _[retrieve links from
web page using python and
BeautifulSoup](http://stackoverflow.com/questions/1080411/retrieve-links-from-
web-page-using-python-and-beautifulsoup/22583436#22583436)_ for an answer
where I use such a method to tell BeautifulSoup what codec to use.
|
Loading JSON object in Python using urllib.request and json modules
Question: I am having problems making the modules 'json' and 'urllib.request' work
together in a simple Python script test. Using Python 3.5 and here is the
code:
import json
import urllib.request
urlData = "http://api.openweathermap.org/data/2.5/weather?q=Boras,SE"
webURL = urllib.request.urlopen(urlData)
print(webURL.read())
JSON_object = json.loads(webURL.read()) #this is the line that doesn't work
When running the script from the command line, the error I get is
"**TypeError: the JSON object must be str, not 'bytes'**". I am new to Python,
so there is most likely a very easy solution to this. I appreciate any help.
Answer: Apart from forgetting to decode, you can only read the response _once_. Having
called `.read()` already, the second call returns an empty string.
Call `.read()` just once, and _decode_ the data to a string:
data = webURL.read()
print(data)
encoding = webURL.info().get_content_charset('utf-8')
JSON_object = json.loads(data.decode(encoding))
The `response.info().get_content_charset()` call tells you what characterset
the server thinks is used.
Demo:
>>> import json
>>> import urllib.request
>>> urlData = "http://api.openweathermap.org/data/2.5/weather?q=Boras,SE"
>>> webURL = urllib.request.urlopen(urlData)
>>> data = webURL.read()
>>> encoding = webURL.info().get_content_charset('utf-8')
>>> json.loads(data.decode(encoding))
{'coord': {'lat': 57.72, 'lon': 12.94}, 'visibility': 10000, 'name': 'Boras', 'main': {'pressure': 1021, 'humidity': 71, 'temp_min': 285.15, 'temp': 286.39, 'temp_max': 288.15}, 'id': 2720501, 'weather': [{'id': 802, 'description': 'scattered clouds', 'icon': '03d', 'main': 'Clouds'}], 'wind': {'speed': 5.1, 'deg': 260}, 'sys': {'type': 1, 'country': 'SE', 'sunrise': 1443243685, 'id': 5384, 'message': 0.0132, 'sunset': 1443286590}, 'dt': 1443257400, 'cod': 200, 'base': 'stations', 'clouds': {'all': 40}}
|
Animating the path of a projectile in python
Question: I am trying to animate the path of a projectile launched with an initial
velocity at an initial angle. I attempted to modify the code found here:
<http://matplotlib.org/examples/animation/simple_anim.html>
My code looks like this:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig, ax = plt.subplots()
g = 9.8 #value of gravity
v = 20 #initial velocity
theta = 20*np.pi/180 #initial angle of launch in radians
tt = 2*v*np.sin(theta)/g #total time of flight
t = np.linspace(0, tt, 0.01) #time of flight into an array
x = v*np.cos(theta)*t #x position as function of time
line, = ax.plot(x, v*np.sin(theta)*t-(0.5)*g*t**2) #plot of x and y in time
def animate(i):
line.set_xdata(v*np.cos(theta)*(t+i/10.0))
line.set_ydata(v*np.sin(theta)*(t+i/10.0)-(0.5)*g*(t+i/10.0)**2)
return line,
#Init only required for blitting to give a clean slate.
def init():
line.set_xdata(np.ma.array(t, mask=True))
line.set_ydata(np.ma.array(t, mask=True))
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(1, 200),
init_func=init, interval=25, blit=True)
plt.show()
The code, as shown, gives me a plot window, but no trajectory and no
animation. I've searched here to see if this has been asked elsewhere and have
yet to find it. If it has been asked, just link to the already answered
question. Any help is greatly appreciated. Thanks, all.
Answer: I was able to get the following working with python 3.4.3:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig, ax = plt.subplots()
g = 9.8 #value of gravity
v = 10.0 #initial velocity
theta = 40.0 * np.pi / 180.0 #initial angle of launch in radians
t = 2 * v * np.sin(theta) / g
t = np.arange(0, 0.1, 0.01) #time of flight into an array
x = np.arange(0, 0.1, 0.01)
line, = ax.plot(x, v * np.sin(theta) * x - (0.5) * g * x**2) # plot of x and y in time
def animate(i):
"""change the divisor of i to get a faster (but less precise) animation """
line.set_xdata(v * np.cos(theta) * (t + i /100.0))
line.set_ydata(v * np.sin(theta) * (x + i /100.0) - (0.5) * g * (x + i / 100.0)**2)
return line,
plt.axis([0.0, 10.0, 0.0, 5.0])
ax.set_autoscale_on(False)
ani = animation.FuncAnimation(fig, animate, np.arange(1, 200))
plt.show()
The axes needed to be re-scaled, and several other things changed. In
particular, the original `t = np.linspace(0, tt, 0.01)` is why nothing was
drawn: `linspace`'s third argument is the _number of samples_, not a step
size, so it yields an (effectively) empty array.
It is not perfect, but it shows a projectile following the required
trajectory. You can play with it now, have a look at the code, and fiddle with
it to learn. I also suggest investing a little time in the basics of
numpy/pyplot; that will pay hefty dividends down the road.
|
Python regex optional capture group or lastindex
Question: I'm searching a file line by line for sections and sub sections using python.
*** Section with no sub section
*** Section with sub section ***
*** Sub Section ***
*** Another section
Sections start with 0-2 spaces followed by three asterisk, sub sections have
2+ white spaces then asterisks.
I write out the sections / sub sections without "***'s"; currently (using
re.sub).
Section: Section with no sub section
Section: Section with sub section
Sub-Section: Sub Section
Section: Another Section
**QUESTION 1** : Is there a python regexp with capture groups that would let
me access the section/sub section names as a capture group?
**QUESTION 2** : How would the regexp groups allow me to ID section or sub
section (possibly based on the number of /content in a match.group)?
**EXAMPLE (NON WORKING):**
match=re.compile('(group0 *** )(group1 section title)(group2 ***)')
sectionTitle = match.group(1)
if match.lastindex = 0: sectionType = section with no subs
if match.lastindex = 1: sectionType = section with subs
if match.lastindex = 2: sectionTpe = sub section
**PREVIOUS ATTEMPTS** I have been able to capture sections or sub sections
with separate regexps and if statements, but I want to do it all at once.
Something like the line below; has trouble with the second groups greediness.
'(^\*{3}\s)(.*)(\s\*{3}$)'
I can't seem to get the greedyness or optional groups to work together.
<http://pythex.org/> has been very helpful to this point.
Also, I tried capturing the asterisks '(*{3})' and then determining if section
or sub section based on the number of groups found.
    sectionRegex = re.compile('(\*{3})')
    m = re.search(sectionRegex, line)
    if m.lastindex == 0:
        sectionName = re.sub(sectionRegex, '', line)
        #Set a section flag
    if m.lastindex == 1:
        sectionName = re.sub(sectionRegex, '', line)
        #Set a sub section flag.
**THANKS** Maybe I'm going at this totally wrong. Any help is appreciated.
**Latest Update** I've been playing with Pythex, answers, and other research.
I'm now spending more time capturing the words:
^[a-zA-Z]+$
and counting the number of asterisk matches to determine "level". I am still
searching for a single regexp to match the two - three "groups". May not
exist.
Thanks.
Answer: > **QUESTION 1** : Is there a python regexp with capture groups that would let
> me access the section/sub section names as a capture group?
>
>> a single regexp to match the two - three "groups". May not exist
Yes, it can be done. We can decomposs the conditions as the following tree:
* `Start of line` **+** `0 to 2 spaces`
* Any of the 2 alternations:
1. `***` **+** `Any text`[group 1]
2. `1+ spaces` **+** `***` **+** `Any text`[group 2]
* `***`(optional) **+** `End of line`
And the above tree can be expressed with the pattern:
^[ ]{0,2}(?:[*]{3}(.*?)|[ ]+[*]{3}(.*?))(?:[*]{3})?$
* [regex101 DEMO](https://regex101.com/r/mV0gN4/1)
Notice the _Section_ and _Sub-Section_ are being captured by different groups
([group 1] and [group 2] respectively). They both use the same syntax `.*?`,
both with a [lazy quantifier (the extra "?")](http://www.regular-
expressions.info/repeat.html#lazy) to allow the optional `"***"` at the end to
match.
* * *
> **QUESTION 2** : How would the regexp groups allow me to ID section or sub
> section (possibly based on the number of /content in a match.group)?
The above regex captures _Sections_ only in group 1, and _Sub-Sections_ only
in group 2. And to make it easier to identify in the code, I'll use
[`(?P<named> groups)`](http://www.regular-expressions.info/named.html) and
retrieve the captures with
**[`.groupdict()`](https://docs.python.org/2/library/re.html#re.MatchObject.groupdict)**.
### Code:
import re
data = """ *** Section with no sub section
*** Section with sub section ***
*** Sub Section ***
*** Another section"""
pattern = r'^[ ]{0,2}(?:[*]{3}[ ]?(?P<Section>.*?)|[ ]+[*]{3}[ ]?(?P<SubSection>.*?))(?:[ ]?[*]{3})?$'
regex = re.compile(pattern, re.M)
for match in regex.finditer(data):
print(match.groupdict())
''' OUTPUT:
{'Section': 'Section with no sub section', 'SubSection': None}
{'Section': 'Section with sub section', 'SubSection': None}
{'Section': None, 'SubSection': 'Sub Section'}
{'Section': 'Another section', 'SubSection': None}
'''
* [ideone DEMO](http://ideone.com/9fRpY6)
Instead of printing the dict, to reference each _Section_ /_Subsection_ , you
can use one of the following:
match.group("Section")
match.group(1)
match.group("SubSection")
match.group(2)
|
NoneType in python
Question: I was trying to get some rating data from
[Tripadvisor](http://www.tripadvisor.in/Hotels-g186338-London_England-Hotels.html),
but while fetching the data I kept getting
> 'NoneType' object is not subscriptable
Can anybody help me figure out where I am going wrong? Sorry, I am very new
to Python.
Here is my sample code
import requests
import re
from bs4 import BeautifulSoup
r = requests.get('http://www.tripadvisor.in/Hotels-g186338-London_England-Hotels.html')
data = r.text
soup = BeautifulSoup(data)
for rate in soup.find_all('div',{"class":"rating"}):
print (rate.img['alt'])
The output to this looks like:
4.5 of 5 stars
4.5 of 5 stars 4 of 5 stars
4.5 of 5 stars
4.5 of 5 stars 4 of 5 stars
4.5 of 5 stars
4.5 of 5 stars
4.5 of 5 stars Traceback (most recent call last):
File "<ipython-input-52-7460e8bfcb82>", line 3, in <module>
print (rate.img['alt'])
TypeError: 'NoneType' object is not subscriptable
Answer: Not all your `<div class="rating">` tags have an `<img />` tag, so `rate.img`
is `None`.
Those divs look like this instead:
<div class="rating">
<span class="rate">4.5 out of 5, </span>
<em>2,294 Reviews</em>
<br/>
<div class="posted">Last reviewed 25 Sep 2015</div>
</div>
You can either test for this:
if rate.img is not None:
# ...
or select only images under the `div.rating` tags with a [CSS
selector](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-
selectors):
    for img in soup.select('div.rating img[alt]'):
        print(img['alt'])
The selector here picks out `<img/>` tags with an `alt` attribute, nested
inside a `<div class="rating">` tag.
|
What exactly does "iterable" mean in Python?
Question: First I want to clarify, I'm NOT asking what is "iterator".
This is how the term "iterable" is defined in Python's
[doc](https://docs.python.org/3/glossary.html#term-iterable):
> **iterable**
> **An object capable of returning its members one at a time.** Examples of
> iterables include all sequence types (such as list, str, and tuple) and some
> non-sequence types like dict, file objects, and objects of any classes you
> define with an **__iter__() or __getitem__()** method. Iterables can be used
> in a for loop and in many other places where a sequence is needed (zip(),
> map(), ...). When an iterable object is passed as an argument to the built-
> in function iter(), it returns an iterator for the object. This iterator is
> good for one pass over the set of values. When using iterables, it is
> usually not necessary to call iter() or deal with iterator objects yourself.
> The for statement does that automatically for you, creating a temporary
> unnamed variable to hold the iterator for the duration of the loop. See also
> iterator, sequence, and generator.
As [other people suggested](http://stackoverflow.com/questions/1952464/in-
python-how-do-i-determine-if-an-object-is-iterable?lq=1), using `isinstance(e,
collections.Iterable)` is the most pythonic way to check if an object is
iterable.
So I did some test with Python 3.4.3:
from collections.abc import Iterable
class MyTrain:
def __getitem__(self, index):
if index > 3:
raise IndexError("that's enough!")
return index
for name in MyTrain():
print(name) # 0, 1, 2, 3
print(isinstance(MyTrain(), Iterable)) # False
The result is quite strange: `MyTrain` has defined `__getitem__` method, but
it is not considered as an iterable object, not to mention it's capable of
returning one number at a time.
Then I removed `__getitem__` and added the `__iter__` method:
from collections.abc import Iterable
class MyTrain:
def __iter__(self):
print("__iter__ called")
pass
print(isinstance(MyTrain(), Iterable)) # True
for name in MyTrain():
print(name) # TypeError: iter() returned non-iterator of type 'NoneType'
It is now considered as a "true" iterable object in spite of it cannot produce
anything while iterating.
So did I misunderstand something or is the documentation incorrect?
Answer: I think the point of confusion here is that, although implementing
`__getitem__` _does_ allow you to iterate over an object, it _isn't part of
the interface defined
by[`Iterable`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable)_.
The [abstract base classes](https://docs.python.org/3/glossary.html#term-
abstract-base-class) allow a form of virtual subclassing, where classes that
implement the specified methods (in the case of `Iterable`, only `__iter__`)
are considered by `isinstance` and `issubclass` to be subclasses of the ABCs
_even if they don't explicitly inherit from them_. It doesn't check whether
the method implementation _actually works_ , though, just whether or not it's
provided.
For more information, see
[PEP-3119](https://www.python.org/dev/peps/pep-3119/), which introduced ABCs.
* * *
> using `isinstance(e, collections.Iterable)` is the most pythonic way to
> check if an object is iterable
I disagree; I would use [duck-
typing](https://docs.python.org/3/glossary.html#term-duck-typing) and just
**attempt to iterate over the object**. If the object isn't iterable a
`TypeError` will be raised, which you can catch in your function if you want
to deal with non-iterable inputs, or allow to percolate up to the caller if
not. This completely side-steps how the object has decided to implement
iteration, and just finds out whether or not it does at the most appropriate
time.
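For example, a minimal duck-typing sketch (the function name is illustrative):

    def process(thing):
        try:
            iterator = iter(thing)  # raises TypeError for non-iterables
        except TypeError:
            raise TypeError('expected an iterable, got %r' % type(thing))
        for item in iterator:
            print(item)

Note that `iter()` also accepts objects that only implement `__getitem__`, so
the `MyTrain` class from the question would pass this check.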
* * *
To add a little more, I think the docs you've quoted are _slightly_
misleading. To quote the [`iter`
docs](https://docs.python.org/3/library/functions.html#iter), which perhaps
clear this up:
> _object_ must be a collection object which supports the iteration protocol
> (the `__iter__()` method), or it must support the sequence protocol (the
> `__getitem__()` method with integer arguments starting at `0`).
This makes it clear that, although both protocols make the object iterable,
only one is the actual _"iteration protocol"_ , and it is this that
`isinstance(thing, Iterable)` tests for. Therefore we could conclude that one
way to check for _"things you can iterate over"_ in the most general case
would be:
isinstance(thing, (Iterable, Sequence))
although this does also require you to implement `__len__` along with
`__getitem__` to _"virtually sub-class"_ `Sequence`.
|
Python Django Mezzanine Models Import
Question:
from django.db import models
from mezzanine.pages.models import Page
# The members of Page will be inherited by the Author model, such
# as title, slug, etc. For authors we can use the title field to
# store the author's name. For our model definition, we just add
# any extra fields that aren't part of the Page model, in this
# case, date of birth.
class Author(Page):
dob = models.DateField("Date of birth")
class Book(models.Model):
author = models.ForeignKey("Author")
cover = models.ImageField(upload_to="authors")
So does Book also inherits properties of Page? So it means any property or
method of Page is accessible from the above code?
Answer: No: `Book` subclasses `models.Model` directly, so it inherits nothing
from `Page`; only `Author` does. `Book` can still reach `Page`'s fields
through its `author` foreign key. Also, if you want to extend a model, it is
often better to use a foreign-key relationship than inheritance. You can use a
one-to-one relationship to a model containing the fields for additional
information. For example:
class Author(models.Model):
page = models.OneToOneField(Page)
dob = models.DateField("Date of birth")
you can access the related information using Django’s standard related model
conventions:
a = Author.objects.get(...)
name = a.page.title # author's name is stored in Page.title field
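And, to answer the original question directly, a quick sketch of reaching
`Page` fields from a `Book` (the lookup itself is hypothetical):

    b = Book.objects.get(pk=1)  # hypothetical primary key
    print(b.author.title)       # Page field, inherited by Author
    print(b.author.dob)         # field defined on Author itself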
|
Python replacement for ez-ipupdate?
Question: I want to update my dynamic dns entry from behind a NAT, which ez-ipupdate
doesn't support. It uses the locally bound ip instead of the external ip
address.
My provider, easydns, only explicitly supports the ez-ipupdate solution on my
platform, Linux.
Instead of writing a python-based deamon to get the external IP address and
put it into the ez-ipupdate config file regularly, I wondered if there was a
way to replace the whole thing with a python script. Maybe it would simplify
things.
(I could not find any information about this on google, so I'm asking and
answering this question here in order to help others.)
Answer: It would simplify things indeed. At least for easydns, ez-ipupdate only really
performs a simple GET request with Basic HTTP Authentication.
The below code is a starting point. It's been tested, it works. It needs the
`requests` and `ipgetter` modules from pypi.
import time
import ipgetter
import requests
import datetime
from requests.auth import HTTPBasicAuth
def update(user, auth_token, hostname, partner="easydns", cache_fn=None):
if cache_fn is None:
cache_fn = "/var/cache/ez-ipupdate/default-cache"
my_ip = ipgetter.myip()
with open(cache_fn) as fobj:
secs, ip = fobj.read().strip().split(",")
if ip == my_ip:
return "IP doesn't need updating"
last_update = datetime.datetime.fromtimestamp(int(secs))
diff = datetime.datetime.now() - last_update
minutes_since_last_update = diff.total_seconds() / 60.0
if minutes_since_last_update < 4.99:
return "Too short time since last update..."
with open(cache_fn, "wb") as fobj:
fobj.write("{},{}\n".format(int(time.time()), my_ip))
url = (
'https://api.cp.easydns.com/dyn/ez-ipupdate.php?action=edit'
'&myip={address}&partner={partner}&wildcard=OFF&hostname={host}'
).format(address=my_ip, partner=partner, host=hostname)
r = requests.get(url, auth=HTTPBasicAuth(user, auth_token))
return "{} {}".format(r.status_code, r.reason)
Now just run a script calling the update functions regularly, e.g. using
`crontab -e` and adding this line:
*/5 * * * * /path/to/script.py
|
Different results of CPU and GPU with Theano
Question: I have the following piece of code:
import theano
import theano.tensor as T
import numpy as np
x = theano.shared(np.asarray([1, 2, 3], dtype=theano.config.floatX), borrow=True)
y = T.cast(x, 'int32')
print 'type of y: ', type(y)
print 'type of y.owner.inputs[0]: ', type(y.owner.inputs[0])
print 'value of y: ', y.owner.inputs[0].get_value(borrow=True)
**Run with CPU**
$ THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python test_share.py
type of y: <class 'theano.tensor.var.TensorVariable'>
type of y.owner.inputs[0]: <class 'theano.tensor.sharedvar.TensorSharedVariable'>
value of y: [ 1. 2. 3.]
**Run with GPU**
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python test_share.py
Using gpu device 0: GeForce 310M
type of y: <class 'theano.tensor.var.TensorVariable'>
type of y.owner.inputs[0]: <class 'theano.tensor.var.TensorVariable'>
value of y:
Traceback (most recent call last):
File "test_share.py", line 10, in <module>
print 'value of y: ', y.owner.inputs[0].get_value(borrow=True)
AttributeError: 'TensorVariable' object has no attribute 'get_value'
How can I get the same results as CPU?
Answer: The method you are using to access child nodes in the computation graph,
`.owner.inputs[0]`, is not appropriate for cross-platform code in general.
You call the shared variable `x` so the correct way to access `x`'s value is
to use `x.get_value()`.
This code should run the same on CPU and GPU:
import theano
import theano.tensor as T
import numpy as np
x = theano.shared(np.asarray([1, 2, 3], dtype=theano.config.floatX), borrow=True)
y = T.cast(x, 'int32')
print 'type of y: ', type(y)
print 'type of x: ', type(x)
print 'value of x: ', x.get_value(borrow=True)
If you want to see the result of applying the symbolic cast operation to `x`,
the symbolic result of which you call `y`, then you could do this:
print 'value of y: ', y.eval()
|
Activiti vs Spring batch
Question: I have got a use case to implement. It's basically a workflow kind of use
case. Below is the requirements
1. Extract and import data from an external db to an internal db
2. Make this imported data into different formats and supply it to multiple external systems and invoke some script there. The external interfaces are SFTP, SOAP, JDBC, Python over CORBA. There are around 14 external systems with one of these interfaces.
3. Interface transactions are executed in around 15 steps, with the ability to run some steps in parallel
4. These steps should be configurable. ie, a particular flow may execute 10 of these 15 steps and another flow executes 15 of 15 steps
5. Should have the ability to restart each step individually or restart from a particular step
6. There are some steps that are manual and completion of manual step should trigger next step
The data volume is not that large: around 400k records in total, processed
about 30k records at a time. Development time is short, and we are looking for
a lightweight solution that is easy to learn and implement.
We are looking for Spring-based or Spring-integrable solutions.
The solutions we considered are
**For workflow:** **Activiti, Spring Batch**
**For interfaces:** **Spring Integration**
My questions are: 1) Can Spring Batch be considered for managing a
workflow-style use case? I don't think it's the best-fit use case for Spring
Batch, but since it's simple and easy to implement I looked into its scope. We
considered modeling each interface interaction as a step in a batch job and,
inside the tasklet, using Spring Integration for the external interfaces. The
few issues, as far as I understand, are:
a) Dynamic step configuration can be done with Java configuration, but how
flexible it is and is it recommended?
b) Manual step processing is not possible in Spring Batch
Is there any work around for this? Is there any other issues or performance
impacts on doing this?
2) Activiti seems to a solution. Can you please provide some feedback on
Activiti with Spring and Spring integration for this use case and ease of
implementing it? And support for Activiti
3) Can Activiti workflows restarted from a particular task? Is a task can be
rollbacked?
Welcoming any suggestions !!
Answer: 1) For managing workflows, Activiti would be a great choice. Its
process engine is really good and should cover your needs for delegating
tasks as well as calling your custom logic. Moreover, it is based entirely on
the Spring Framework, so integration with your logic would be easy.
2) The same points in the first answer apply to Spring and Spring Integration
support.
3) No, you will have to create a new workflow for that; and yes, a task can be
rolled back.
|
Python - The fastest way to generate string of zeros
Question: I need to generate some string of zeros for example:
import sys
MB = 1024 * 1024
cache = ''
while sys.getsizeof(cache) <= 10 * MB:
cache = cache + "0"
and save it to a file, but I have the impression that this method is too slow
and wastes a lot of system resources.
What is the best way to do this as fast as possible?
Answer: You can "multiply" a string:

    cache = '0' * (1024**2)

to get the character `0` repeated over a million times. Any other string, or
any other integer factor, works as well. Unlike concatenating in a loop (which
copies the ever-growing string on every iteration, making it quadratic), this
allocates the result in a single step.
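For instance, to produce the 10 MB file from the question in one go (the
filename is illustrative):

    MB = 1024 * 1024
    with open('zeros.txt', 'w') as f:
        f.write('0' * (10 * MB))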
|
How do I get in python the maximum filesystem path length in unix?
Question: In the code I maintain I run across:
from ctypes.wintypes import MAX_PATH
I would like to change it to something like:
try:
from ctypes.wintypes import MAX_PATH
except ValueError: # raises on linux
MAX_PATH = 4096 # see comments
but I can't find any way to get the value of max filesystem path from python
(`os, os.path, sys...`) - is there a standard way or do I need an external lib
?
Or there is no analogous as MAX_PATH in linux, at least not a standard among
distributions ?
* * *
[**Answer**](http://stackoverflow.com/a/32812228/281545)
try:
MAX_PATH = int(subprocess.check_output(['getconf', 'PATH_MAX', '/']))
except (ValueError, subprocess.CalledProcessError, OSError):
deprint('calling getconf failed - error:', traceback=True)
MAX_PATH = 4096
Answer: These values are defined in the C headers:
  * PATH_MAX (defined in limits.h)
  * FILENAME_MAX (defined in stdio.h)
From Python you can query them with subprocess.check_output() and the
[getconf](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/getconf.html)
utility:
$ getconf NAME_MAX /
$ getconf PATH_MAX /
as in the following example:
name_max = subprocess.check_output("getconf NAME_MAX /", shell=True)
path_max = subprocess.check_output("getconf PATH_MAX /", shell=True)
to get the values, and
[fpathconf](http://pubs.opengroup.org/onlinepubs/009695399/functions/fpathconf.html)
to query the limit that applies to a particular open file.
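A pure-Python alternative worth noting: `os.pathconf` exposes the same limits
without spawning a process (Unix only):

    import os

    path_max = os.pathconf('/', 'PC_PATH_MAX')
    name_max = os.pathconf('/', 'PC_NAME_MAX')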
|
How to prevent vertical sizers from expanding their children all the way downwards in wxformbuilder
Question: I'm trying to design a dash board in WX form builder for Python. I'm having
trouble trying to figure out how to keep two horizontal sizers that are
children to a vertical sizer from expanding far apart from each other. Below
is a screen shot describing what I am referring to:
[](http://i.stack.imgur.com/b6q0h.png)
_The blue arrow represents that I want the textbox control and the label to
move up closer to the first label._
My first instinct is that is had something to do with the wx.EXPAND etc flags
however, I was not able to change those in a manner that made the sizers come
closer to each other. It's almost like every time I place a sizer it
automatically tries to fit everything in the entire window...which makes it
difficult to place items in precise locations on the form. _Any suggestions on
how to stop sizers from expanding to the entire frame window size?_
My next course of action was to try and use a gridsizer or a flex grid sizer
however, I've only used them through direct code, where you can select the
exact location in the grid where you want to add a widget or object. With form
builder, I'm finding that they are more difficult to use mainly because I
can't insert objects at certain indices in the grid sizer. It inserts them
sequentially:
> 1,1 -> 1,2 -> 1,3 -> 1,i -> 2,1 -> 2,2 -> 2,3 -> 2,i -> j,i
Which means that if I need to change something in the grid...Its very
difficult. _Is there something I'm missing that makes it easier to insert
objects into the grid, other than sequentially?_
Below is one of example of 5x5 grid sizer (not completely filled) where I have
specified both the horizontal and vertical gaps = 0:
[](http://i.stack.imgur.com/B1Ifj.png)
[](http://i.stack.imgur.com/bLiPE.png)
Besides being able to replace the button with a textbox at 1,4, I notice that
the horizontal gap is clearly not zero even though I specified 0 for vertical
and horizontal gaps. This also makes it very hard to design the form. _Why is
there a horizontal gap between buttons?_
Answer: As you have noticed "stuff" expands to fit when using sizers and the
proportion flag can wreak havoc with your layout if not used properly.
Play with this simple bit of code:
import wx
class Summary(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, wx.ID_ANY, "Playing with Simple Sizers", size=(430,260))
self.panel = wx.Panel(self, wx.ID_ANY)
self.log = wx.TextCtrl(self.panel, wx.ID_ANY, value="input1:",size=(428,25))
self.log2 = wx.TextCtrl(self.panel, wx.ID_ANY, value="input2:", size=(428,25))
self.quit_button = wx.Button(self.panel, label="Quit",size=(60,25))
self.button1= wx.Button(self.panel, label="1",)
self.button2 = wx.Button(self.panel, label="2",size=(60,25))
self.button3 = wx.Button(self.panel, label="3",size=(60,25))
self.button4 = wx.Button(self.panel, label="4",size=(60,25))
self.quit_button.Bind(wx.EVT_BUTTON, self.OnQuit)
vbox = wx.BoxSizer(wx.VERTICAL)
hbox1 = wx.BoxSizer(wx.HORIZONTAL)
hbox2 = wx.BoxSizer(wx.HORIZONTAL)
hbox3 = wx.BoxSizer(wx.HORIZONTAL)
hbox4 = wx.BoxSizer(wx.HORIZONTAL)
vbox.Add(self.quit_button, 0, wx.ALL|wx.EXPAND, 1)
hbox1.Add(self.log, 0, wx.ALL|wx.EXPAND, 1)
hbox2.Add(self.log2, 0, wx.ALL|wx.EXPAND, 1)
hbox3.Add(self.button1, 1, wx.ALL|wx.EXPAND, 1)
hbox3.Add(self.button2, 2, wx.ALL|wx.EXPAND, 1)
hbox4.Add(self.button3, 0, wx.ALL|wx.EXPAND, 1)
hbox4.Add(self.button4, 1, wx.ALL|wx.EXPAND, 1)
vbox.Add(hbox1, 0, wx.ALIGN_RIGHT|wx.EXPAND, 1)
vbox.Add(hbox3, 0, wx.ALIGN_RIGHT|wx.EXPAND, 1)
vbox.Add(hbox4, 0, wx.ALIGN_RIGHT|wx.EXPAND, 1)
vbox.Add(hbox2, 0, wx.ALIGN_RIGHT|wx.EXPAND, 1)
self.panel.SetSizer(vbox)
self.Show()
def OnQuit(self, event):
self.Close()
# Run the program
if __name__ == "__main__":
app = wx.App()
frame = Summary()
app.MainLoop()
The proportion flag is the 2nd parameter when adding into the sizer.
You will note that it has been set at 0, 1, and 2 for the buttons, change them
and see what happens to the buttons in relation to each other and play with
the size parameter of a button.
Change the proportion flag on self.log or self.log2 to 1 and watch that one
expand.
Finally make the frame size wider and see the reaction.
Sizers can be extremely frustrating at first but once you "get" it, they are
powerful tools.
|
Switching two elements in Python
Question: Suppose I have a bunch of elements in a long list and I want to
switch them around based on the output of the `random.randint(lo, hi)`
function. For instance, my list is
L=[(1,hi), (1, bye), (1,nope), (1,yup), (2,hi), (2, bye), (2,nope), (2,yup), (3,hi), (3, bye), (3,nope), (3,yup), (4,hi), (4, bye), (4,nope), (4,yup), (5,hi), (5, bye), (6,nope), (7,yup)]
If I import `from random import randint` and then call `randint(0, len(L)-1)`,
I want to take the item at the returned index and swap it with the last
element in the list; then call `randint(lo, len(L)-2)`, take the item at that
index, and swap it with the second-to-last element, and so on until I reach
the beginning. In other words, I want to rearrange the list completely at
random without using the shuffle function.
I understand that I can do `randint(0, len(L)-1)` and, if I get 5 as the
output, do `A = L[19]` (because I know the last element in my list is indexed
as 19), then `L[19] = L[5]` so that the 5th element takes the 19th element's
place, and then `L[5] = A` so that what WAS the last element becomes the 5th
element. But I do not know how to write that as a loop.
Answer: Shrink the upper bound of `randint` by one on each pass, exactly as
you described; this is essentially the Fisher-Yates shuffle:

    from random import randint

    L = [(1,'hi'), (1, 'bye'), (1,'nope'), (1,'yup'), (2,'hi'), (2, 'bye'), (2,'nope'), (2,'yup'), (3,'hi'), (3, 'bye'),
         (3,'nope'), (3,'yup'), (4,'hi'), (4, 'bye'), (4,'nope'), (4,'yup'), (5,'hi'), (5, 'bye'), (6,'nope'), (7,'yup')]

    for i in range(-1, -len(L) - 1, -1):
        # i is negative, so len(L) + i is the index of the slot being
        # filled this pass; slots already filled at the tail are excluded
        n = randint(0, len(L) + i)
        L[n], L[i] = L[i], L[n]
    print(L)
**output**
[(2, 'nope'), (6, 'nope'), (3, 'hi'), (3, 'yup'), (2, 'bye'), (7, 'yup'), (1, 'hi'), (3, 'bye'), (3, 'nope'), (1, 'yup'), (1, 'bye'), (5, 'bye'), (4, 'yup'), (5, 'hi'), (4, 'bye'), (2, 'hi'), (2, 'yup'), (4, 'hi'), (1, 'nope'), (4, 'nope')]
|
Selenium won't click a button with python?
Question: please can someone help me with this,
I can't get selenium to click a button with python. I'm on python 3.4 and
using Firefox 42
the browser opens but that's all
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.speedyshare.com/")
elem = find_element_by_id_name("selectfilebox")
elem.click()
The browser opens but I get the following error:
Traceback (most recent call last):
File "/home/ro/sele.py", line 6, in <module>
elem = find_element_by_id_name("selectfilebox")
NameError: name 'find_element_by_id_name' is not defined
Answer: It helps to inspect `driver.page_source` to see the HTML _as the driver sees
it_.
driver.get("http://www.speedyshare.com/")
content = driver.page_source
with open('/tmp/out', 'w', encoding='utf-8') as f:
f.write(content)
You'll see in /tmp/out:
<frameset rows="*"><frame src="http://www30.speedyshare.com/upload_page.php" name="index31" />
</frameset>
Aha. The tag you wish to click is inside a frame. So switch to that frame
first:
driver.switch_to.frame("index31")
and then you'll be able to find the element by id:
elem = driver.find_element_by_id("selectfilebox")
elem.click()
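(If you later need to interact with elements outside the frame, you can switch
back to the top-level document with driver.switch_to.default_content().)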
This question is essentially the same as [Selenium Unable to locate element
(Python) WebScraping](http://stackoverflow.com/q/32636453/190597); it's just
hard to know that without first knowing the solution.
|
How to get the same result in book "Web Scraping with Python: Collecting Data from the Modern Web" Chapter 7 Data Normalization section
Question: **Python version:** 2.7.10
**My code:**
# -*- coding: utf-8 -*-
from urllib2 import urlopen
from bs4 import BeautifulSoup
from collections import OrderedDict
import re
import string
def cleanInput(input):
input = re.sub('\n+', " ", input)
input = re.sub('\[[0-9]*\]', "", input)
input = re.sub(' +', " ", input)
# input = bytes(input, "UTF-8")
input = bytearray(input, "UTF-8")
input = input.decode("ascii", "ignore")
cleanInput = []
input = input.split(' ')
for item in input:
item = item.strip(string.punctuation)
if len(item) > 1 or (item.lower() == 'a' or item.lower() == 'i'):
cleanInput.append(item)
return cleanInput
def ngrams(input, n):
input = cleanInput(input)
output = []
for i in range(len(input)-n+1):
output.append(input[i:i+n])
return output
url = 'https://en.wikipedia.org/wiki/Python_(programming_language)'
html = urlopen(url)
bsObj = BeautifulSoup(html, 'lxml')
content = bsObj.find("div", {"id": "mw-content-text"}).get_text()
ngrams = ngrams(content, 2)
keys = range(len(ngrams))
ngramsDic = {}
for i in range(len(keys)):
ngramsDic[keys[i]] = ngrams[i]
# ngrams = OrderedDict(sorted(ngrams.items(), key=lambda t: t[1], reverse=True))
ngrams = OrderedDict(sorted(ngramsDic.items(), key=lambda t: t[1], reverse=True))
print ngrams
print "2-grams count is: " + str(len(ngrams))
I have recently been learning how to do web scraping by following the book [**Web
Scraping with Python: Collecting Data from the Modern
Web**](http://shop.oreilly.com/product/0636920034391.do). In the **_Chapter
7 Data Normalization_** section I first wrote the code exactly as the book
shows and got an error in the terminal:
Traceback (most recent call last):
File "2grams.py", line 40, in <module>
ngrams = OrderedDict(sorted(ngrams.items(), key=lambda t: t[1], reverse=True))
AttributeError: 'list' object has no attribute 'items'
Therefore I've changed the code by creating a new dictionary whose entries are
the lists from `ngrams`. But I've got quite a different result:
[](http://i.stack.imgur.com/1JEvh.png)
**Question:**
1. If I wanna have the result as the book shows ([where sorted by values and the frequency](http://i.stack.imgur.com/AEIlw.png)), should I write my own lines to count the occurrence of each 2-grams, or the code in the book already had that function (codes in the book were python 3 code) ? [book sample code on github](https://github.com/REMitchell/python-scraping/tree/master/chapter7)
2. The frequency in my output was quite different with the author's, for example `[u'Software', u'Foundation']` were occurred 37 times but not 40. What kinds of reasons causing that difference (could it be my code errors)?
**Book screenshot:**
[](http://i.stack.imgur.com/ay7LC.png)[](http://i.stack.imgur.com/AEIlw.png)
Answer: I got an error in this chapter too, because `ngrams` was a list. I converted
it to a dict and it worked:
ngrams1 = OrderedDict(sorted(dict(ngrams1).items(), key=lambda t: t[1], reverse=True))
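To get output like the book's (2-grams sorted by how often they occur), you
can count the pairs instead of just indexing them. Here is a minimal sketch,
assuming ngrams is the list of 2-gram lists produced by the ngrams() function
above:

    from collections import Counter, OrderedDict

    # Counter keys must be hashable, so turn each 2-gram list into a tuple
    counts = Counter(tuple(gram) for gram in ngrams)
    sortedNGrams = OrderedDict(sorted(counts.items(), key=lambda t: t[1], reverse=True))
    print sortedNGrams

As for the different counts (37 vs. 40 for [u'Software', u'Foundation']), the
Wikipedia article has almost certainly been edited since the book's snapshot
was taken, so small frequency differences are expected rather than a sign of a
code error.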
|
Python Multithreaded Messenger Simulation. Stuck on timerThread update. What do?
Question: I have a piece of code that simulates a system of messengers (think post
office or courier service) delivering letters in a multithreaded way. I want
to add a way to manage my messengers "in the field" to increase the efficiency
of my system.
tl;dr: How do I update my tens-to-hundreds of timerthreads so they wait longer
before calling their function?
Here's what the code I've written so far is supposed to do in steps.
1. Someone asks for a letter
2. We check to see if there are any available messengers. If none, we say "oops, sorry. can't help you with that"
3. If at least one is available, we send the messenger to deliver the letter (new timer thread with its wait param as the time it takes to get there and back)
4. When the messenger gets back, we put him in the back of the line of available messengers to wait for the next delivery
I do this by removing Messenger objects from a double ended queue, and then
adding them back in after a timerthread is done waiting. This is because my
Messengers are all unique and eventually I want to track how many deliveries
each has had, how far they have traveled, and other stuff.
Here's a pseudo-ish code snippet of the larger program I wrote for this:
# (deque, numpy, threading, distance and speed are defined elsewhere
# in the real program)
numMessengers = 5
messengerDeque = deque()
pOrder = 0.0001

class Messenger:
    def __init__(self):
        pass  # per-messenger stats (deliveries, distance, ...) will live here

for i in range(numMessengers):
    messenger = Messenger()
    messengerDeque.append(messenger)

def popDeque():
    messenger = messengerDeque.popleft()
    print 'messenger #?, sent'
    return messenger

def appendDeque(messenger):
    print 'messenger #?, returned'
    messengerDeque.append(messenger)

def randomDelivery():
    if numpy.random.randint(0, 10000) <= (pOrder * 10000):
        if len(messengerDeque) != 0:
            messenger = popDeque()
            tripTime = distance / speed * 120
            t = threading.Timer(tripTime, appendDeque, args=[messenger])
            t.start()
        else:
            print "oops, sorry. can't help you with that"
The above works in my program.
What I would like to add is some way to 'reroute' my messengers with new
orders.
Let's say you have to deliver a letter within an hour of when you get it. You
have five messengers and five orders, so they're all busy. You then get a
sixth order. Messenger 2 will be back in 20 minutes, and order six will take
30 minutes to get to the delivery destination. So instead of saying "oops, we
can't help you", we would say: OK, Messenger 2, when you get back, immediately
go deliver letter six.
With the code I've written, I think this could be done by checking the active
threads to see how long each has until it calls its function, picking the
first one where that remaining time plus the new delivery time is < 1 hr,
cancelling it, and starting a new thread with the time left plus the new time
to wait.
I just don't know how to do that.
How do you check how much time is left in a timer thread, and update it,
without making a huge mess of your threads?
I'm also open to other, smarter ways of doing what I described.
YAY PYTHON MULTITHREADING!!!!! Thanks for the help
Answer: Using the class threading.Timer won't fulfill your needs. Although there is an
"interval" member in Timer instances, once the Timer (thread) has started
running, any change to interval (the time-out) is not considered. Furthermore,
you need to know how much time is left before the timer fires, and as far as I
know there is no method for that. You will probably also need a way to
identify which Timer instance to update with the new timeout value, but that
part is up to you.
You should implement your own Timer class, perhaps something along the lines
of:
import threading
import time

class MyTimer(threading.Thread):
    def __init__(self, timeout, event):
        super(MyTimer, self).__init__()
        self.to = timeout
        self.evt = event
        # initialise here so getRemaining() is safe even before run() begins
        self.end = time.time() + timeout

    def setTimeout(self, v):
        self.end = time.time() + v

    def run(self):
        self.end = time.time() + self.to
        # NB: don't assign to self.start in here -- that would shadow
        # threading.Thread.start() with a float
        while self.end > time.time():
            time.sleep(0)  # give other threads a chance to run
        self.evt()

    def getRemaining(self):
        return self.end - time.time()

def hi(): print "hi"

T = MyTimer(20, hi)
T.start()
for i in range(10):
    time.sleep(1)
    # isAlive() returns True while the thread is running
    print T.getRemaining(), T.isAlive()
T.setTimeout(1)
for i in range(3):
    time.sleep(1)
    print T.getRemaining(), T.isAlive()
|
parsing XML with namespace in python 3 gives no data
Question: I have an XML document with 3 namespaces.
<?xml version="1.0" encoding="UTF-8"?>
<cus:Customizations xmlns:cus="http://www.bea.com/wli/config/customizations" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xt="http://www.bea.com/wli/config/xmltypes">
<cus:customization xsi:type="cus:EnvValueCustomizationType">
<cus:description/>
<cus:envValueAssignments>
<xt:envValueType>working manager</xt:envValueType>
<xt:location xsi:nil="true"/>
<xt:owner>
<xt:type>FLOW</xt:type>
<xt:path>/somedir/dir/somepath3</xt:path>
</xt:owner>
<xt:value xsi:type="xs:string" xmlns:xs="http://www.w3.org/2001/XMLSchema"/>
</cus:envValueAssignments>
</cus:customization>
<cus:customization xsi:type="cus:FindAndReplaceCustomizationType">
<cus:description/>
<cus:query>
<xt:resourceTypes>ProxyService</xt:resourceTypes>
<xt:resourceTypes>SMTPServer</xt:resourceTypes>
<xt:resourceTypes>SSconection</xt:resourceTypes>
<xt:refsToSearch xsi:type="xt:ResourceRefType">
<xt:type>FLOW</xt:type>
<xt:path>/somedir/dir/somepath2</xt:path>
</xt:refsToSearch>
<xt:includeOnlyModifiedResources>false</xt:includeOnlyModifiedResources>
<xt:searchString>Search String</xt:searchString>
<xt:isCompleteMatch>false</xt:isCompleteMatch>
</cus:query>
<cus:replacement>Replacement String</cus:replacement>
</cus:customization>
<cus:customization xsi:type="cus:ReferenceCustomizationType">
<cus:description/>
<cus:refsToBeConsidered xsi:type="xt:ResourceRefType">
<xt:type>FLOW</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</cus:refsToBeConsidered>
<cus:refsToBeConsidered xsi:type="xt:ResourceRefType">
<xt:type>WSDL</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</cus:refsToBeConsidered>
<cus:refsToBeConsidered xsi:type="xt:ResourceRefType">
<xt:type>ProxyService</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</cus:refsToBeConsidered>
<cus:externalReferenceMap>
<xt:oldRef>
<xt:type>FLOW</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</xt:oldRef>
<xt:newRef>
<xt:type>FLOW</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</xt:newRef>
</cus:externalReferenceMap>
<cus:externalReferenceMap>
<xt:oldRef>
<xt:type>XMLSchema</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</xt:oldRef>
<xt:newRef>
<xt:type>XMLSchema</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</xt:newRef>
</cus:externalReferenceMap>
<cus:externalReferenceMap>
<xt:oldRef>
<xt:type>XMLSchema</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</xt:oldRef>
<xt:newRef>
<xt:type>XMLSchema</xt:type>
<xt:path>/somedir/dir/somepath</xt:path>
</xt:newRef>
</cus:externalReferenceMap>
</cus:customization>
</cus:Customizations>
I am using lxml in Python 3 but I am getting empty data. When I print the
root, it gives me the root tag. Here is my code.
#!/usr/bin/python3
import sys
import os
import os.path
import csv
import xml.etree.ElementTree as etree
import lxml.etree
times = []
keys = []
tree2 = lxml.etree.parse('/home/vagrant/dev_dir/ALSBCustomizationFile.xml')
NSMAP = {'cus': 'http://www.bea.com/wli/config/customizations',
'xsi': 'http://www.w3.org/2001/XMLSchema-instance',
'xt': 'http://www.bea.com/wli/config/xmltypes'}
root22 = tree2.getroot()
print(root22)
namespace = root22.findall('cus:Customizations', NSMAP)
namespace2 = root22.findall('xsi:customization', NSMAP)
namespace3 = root22.findall('xt:envValueType', NSMAP)
print(namespace3)
When I run this script I get the output below:
<Element {http://www.bea.com/wli/config/customizations}Customizations at 0x7faadb3a0508>
[]
I am able to get the root tag, but not able to access the inner namespace
tags.
Can you please help me find where I am going wrong? How do I read the data in
all the inner namespace tags?
Answer: That's because the target element you're trying to get is not a direct child
of the root element. You need to either specify the full path from the root to
the target element:
namespace3 = root22.findall('cus:customization/cus:envValueAssignments/xt:envValueType', NSMAP)
or use the relative descendant-or-self axis (`.//`) at the beginning of the
XPath:
namespace3 = root22.findall('.//xt:envValueType', NSMAP)
For executing more complex XPath expressions later, you are better off using
`lxml`'s `xpath()` method, which provides better XPath support:
namespace3 = root22.xpath('.//xt:envValueType', namespaces=NSMAP)
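For example, to pull the text out of every matched element (a small sketch
using the result above):

    for el in namespace3:
        print(el.text)  # e.g. 'working manager'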
|
Parallelize loop over numpy rows
Question: I need to apply the same function onto every row in a numpy array and store
the result again in a numpy array.
# states will contain results of function applied to a row in array
states = np.empty_like(array)
for i, ar in enumerate(array):
states[i] = function(ar, *args)
# do some other stuff on states
`function` does some **non-trivial** filtering of my data and returns an array
indicating where the conditions are True and where they are False. `function`
can either be pure Python or compiled Cython. The filtering operations on the
rows are complicated and can depend on previous values in the row, which means
I can't operate on the whole array in an element-by-element fashion.
Is there a way to do something like this in dask for example?
Answer: ### Dask solution
You could do this with dask.array by chunking the array by row, calling
`map_blocks`, and then computing the result:
import dask.array as da

ar = ...
x = da.from_array(ar, chunks=(1, ar.shape[1]))  # one chunk per row
y = x.map_blocks(function, *args)               # map_blocks returns a new lazy array
states = y.compute()
By default this will use threads, you can use processes in the following way
from dask.multiprocessing import get
states = y.compute(get=get)
### Pool solution
However dask is probably overkill for embarrassingly parallel computations
like this, you could get by with a threadpool
from multiprocessing.pool import ThreadPool
pool = ThreadPool()

ar = ...
states = np.empty_like(ar)

def f(i):
    states[i] = function(ar[i], *args)

pool.map(f, range(len(ar)))
Switching to processes is less straightforward, though: worker processes don't
share memory with the parent, so in-place writes to states from the workers
would be lost, and multiprocessing can only call functions it can pickle
(i.e. ones defined at module level). You would collect return values instead:

from multiprocessing import Pool

def g(i):  # must live at module level so it can be pickled
    return function(ar[i], *args)

pool = Pool()
states = np.array(pool.map(g, range(len(ar))))
|
replacing "with" statement in Python code
Question:
import json
with open("login_data.txt", "r") as login_file:
try:
users = json.load(login_file)
except:
users = {}
Recently, I've been preparing a presentation of my code. However, my lecturer
requires me to break the code down into pseudocode.
I can't find any pseudocode construct that fits the `with` statement. I need
to find an alternative solution that can replace the `with` statement above.
#i suppose it should look like this:...
def dummyname(login_file):
login_file = process open("login_data.txt","r")
while
users != {}
do
users = process json.load(login_file)
process dummyname(login_file)
#is it something like this?
Answer: If you don't mind writing less safe pseudo-code (and making it safe
afterwards), you could use an explicit open/close pair:
login_file = open "login_data.txt" in text reading mode
users = load_json( login_file )
if load_json failed,
users = {}
close( login_file )
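For reference, a minimal Python equivalent of that pseudo-code, using an
explicit try/finally so the file is always closed (in Python 2, json.load
raises ValueError on malformed input):

    import json

    login_file = open("login_data.txt", "r")
    try:
        users = json.load(login_file)
    except ValueError:
        users = {}
    finally:
        login_file.close()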
|
Regex in Python - Substring with single "re.sub" call
Question: I am looking into the regex functions in Python. As part of this, I am trying
to extract a substring from a string.
For instance, assume I have the string:
<place of birth="Stockholm">
Is there a way to extract Stockholm with a **single** regex call?
So far, I have:
location_info = "<place of birth="Stockholm">"
#Remove before
location_name1 = re.sub(r"<place of birth=\"", r"", location_info)
#location_name1 --> Stockholm">
#Remove after
location_name2 = re.sub(r"\">", r"", location_name1)
#location_name2 --> Stockholm
Any advice on how to extract the string Stockholm without using two "re.sub"
calls is highly appreciated.
Answer: Sure, you can match the beginning up to the double quotes, and match and
capture all the characters other than double quotes after that:
import re
p = re.compile(r'<place of birth="([^"]*)')
location_info = "<place of birth=\"Stockholm\">"
match = p.search(location_info)
if match:
print(match.group(1))
See [IDEONE demo](http://ideone.com/fIYLY0)
The `<place of birth="` is matched as a literal, and `([^"]*)` is a capture
group 1 matching 0 or more characters other than `"`. The value is accessed
with `.group(1)`.
Here is a [**REGEX demo**](https://regex101.com/r/pP4fD6/2).
|