title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
Message Box in Python | 257,398 | <p>Is there a UI library to create a message box or input box in python?</p>
| 6 | 2008-11-02T21:22:30Z | 257,853 | <p>Simple message boxes and input boxes can be created using EasyGui, a small library using Tkinter, which Python comes with.</p>
<p>You can get EasyGui here: <a href="http://easygui.sourceforge.net/" rel="nofollow">http://easygui.sourceforge.net/</a></p>
| 3 | 2008-11-03T03:16:40Z | [
"python",
"user-controls",
"user-interface"
] |
Message Box in Python | 257,398 | <p>Is there a UI library to create a message box or input box in python?</p>
| 6 | 2008-11-02T21:22:30Z | 8,678,916 | <pre><code>from Tkinter import *
import os

class Dialog(Toplevel):

    def __init__(self, parent, title=None):
        Toplevel.__init__(self, parent)
        self.transient(parent)

        if title:
            self.title(title)

        self.parent = parent
        self.result = None

        body = Frame(self)
        self.initial_focus = self.body(body)
        body.pack(padx=5, pady=5)

        self.buttonbox()
        self.grab_set()

        if not self.initial_focus:
            self.initial_focus = self

        self.protocol("WM_DELETE_WINDOW", self.cancel)

        self.geometry("+%d+%d" % (parent.winfo_rootx() + 50,
                                  parent.winfo_rooty() + 50))

        self.initial_focus.focus_set()
        self.wait_window(self)

    #
    # construction hooks

    def body(self, master):
        # create dialog body. return widget that should have
        # initial focus. this method should be overridden
        pass

    def buttonbox(self):
        # add standard button box. override if you don't want the
        # standard buttons
        box = Frame(self)

        w = Button(box, text="OK", width=10, command=self.ok, default=ACTIVE)
        w.pack(side=LEFT, padx=5, pady=5)
        w = Button(box, text="Cancel", width=10, command=self.cancel)
        w.pack(side=LEFT, padx=5, pady=5)

        self.bind("<Return>", self.ok)
        self.bind("<Escape>", self.cancel)

        box.pack()

    #
    # standard button semantics

    def ok(self, event=None):
        if not self.validate():
            self.initial_focus.focus_set()  # put focus back
            return

        self.withdraw()
        self.update_idletasks()

        self.apply()
        self.cancel()

    def cancel(self, event=None):
        # put focus back to the parent window
        self.parent.focus_set()
        self.destroy()

    #
    # command hooks

    def validate(self):
        return 1  # override

    def apply(self):
        pass  # override
</code></pre>
| 1 | 2011-12-30T11:21:03Z | [
"python",
"user-controls",
"user-interface"
] |
Download image file from the HTML page source using python? | 257,409 | <p>I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.</p>
| 33 | 2008-11-02T21:31:00Z | 257,412 | <p>You have to download the page and parse html document, find your image with regex and download it.. You can use urllib2 for downloading and Beautiful Soup for parsing html file.</p>
| 8 | 2008-11-02T21:33:53Z | [
"python",
"screen-scraping"
] |
Download image file from the HTML page source using python? | 257,409 | <p>I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.</p>
| 33 | 2008-11-02T21:31:00Z | 257,413 | <p>Use htmllib to extract all img tags (override do_img), then use urllib2 to download all the images.</p>
| 2 | 2008-11-02T21:34:28Z | [
"python",
"screen-scraping"
] |
Download image file from the HTML page source using python? | 257,409 | <p>I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.</p>
| 33 | 2008-11-02T21:31:00Z | 258,511 | <p>Here is some code to download all the images from the supplied URL, and save them in the specified output folder. You can modify it to your own needs.</p>
<pre><code>"""
dumpimages.py
    Downloads all the images on the supplied URL, and saves them to the
    specified output file ("/test/" by default)

Usage:
    python dumpimages.py http://example.com/ [output]
"""
from BeautifulSoup import BeautifulSoup as bs
import urlparse
from urllib2 import urlopen
from urllib import urlretrieve
import os
import sys

def main(url, out_folder="/test/"):
    """Downloads all the images at 'url' to /test/"""
    soup = bs(urlopen(url))
    parsed = list(urlparse.urlparse(url))

    for image in soup.findAll("img"):
        print "Image: %(src)s" % image
        filename = image["src"].split("/")[-1]
        parsed[2] = image["src"]
        outpath = os.path.join(out_folder, filename)
        if image["src"].lower().startswith("http"):
            urlretrieve(image["src"], outpath)
        else:
            urlretrieve(urlparse.urlunparse(parsed), outpath)

def _usage():
    print "usage: python dumpimages.py http://example.com [outpath]"

if __name__ == "__main__":
    url = sys.argv[-1]
    out_folder = "/test/"
    if not url.lower().startswith("http"):
        out_folder = sys.argv[-1]
        url = sys.argv[-2]
        if not url.lower().startswith("http"):
            _usage()
            sys.exit(-1)
    main(url, out_folder)
</code></pre>
<p><strong>Edit:</strong> You can specify the output folder now.</p>
| 69 | 2008-11-03T12:40:27Z | [
"python",
"screen-scraping"
] |
Download image file from the HTML page source using python? | 257,409 | <p>I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.</p>
| 33 | 2008-11-02T21:31:00Z | 2,448,326 | <p>And this is function for download one image:</p>
<pre><code>def download_photo(self, img_url, filename):
    file_path = "%s%s" % (DOWNLOADED_IMAGE_PATH, filename)
    downloaded_image = file(file_path, "wb")
    image_on_web = urllib.urlopen(img_url)
    while True:
        buf = image_on_web.read(65536)
        if len(buf) == 0:
            break
        downloaded_image.write(buf)
    downloaded_image.close()
    image_on_web.close()

    return file_path
</code></pre>
| 8 | 2010-03-15T15:35:20Z | [
"python",
"screen-scraping"
] |
Download image file from the HTML page source using python? | 257,409 | <p>I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.</p>
| 33 | 2008-11-02T21:31:00Z | 4,200,547 | <p>Ryan's solution is good, but fails if the image source URLs are absolute URLs or anything that doesn't give a good result when simply concatenated to the main page URL. urljoin recognizes absolute vs. relative URLs, so replace the loop in the middle with:</p>
<pre><code>for image in soup.findAll("img"):
    print "Image: %(src)s" % image
    image_url = urlparse.urljoin(url, image['src'])
    filename = image["src"].split("/")[-1]
    outpath = os.path.join(out_folder, filename)
    urlretrieve(image_url, outpath)
</code></pre>
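The `urljoin` behaviour this answer relies on can be shown in a few lines (a sketch using Python 3's `urllib.parse`; the Python 2 spelling used above is `urlparse.urljoin`, and the URLs here are made-up examples):

```python
from urllib.parse import urljoin

page_url = "http://example.com/gallery/index.html"  # made-up page URL

# A relative src is resolved against the page URL:
relative = urljoin(page_url, "thumbs/cat.png")
print(relative)  # http://example.com/gallery/thumbs/cat.png

# An absolute src is returned unchanged -- the case plain concatenation gets wrong:
absolute = urljoin(page_url, "http://cdn.example.com/cat.png")
print(absolute)  # http://cdn.example.com/cat.png
```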
| 9 | 2010-11-17T00:49:24Z | [
"python",
"screen-scraping"
] |
Download image file from the HTML page source using python? | 257,409 | <p>I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.</p>
| 33 | 2008-11-02T21:31:00Z | 24,837,891 | <p>If the request needs authorization, refer to this one:</p>
<pre><code>import requests

r_img = requests.get(img_url, auth=(username, password))
f = open('000000.jpg','wb')
f.write(r_img.content)
f.close()
</code></pre>
| 1 | 2014-07-19T07:29:33Z | [
"python",
"screen-scraping"
] |
What's the difference between scgi and wsgi? | 257,481 | <p>What's the difference between these two?
Which is better/faster/reliable?</p>
| 10 | 2008-11-02T22:22:25Z | 257,511 | <p>They are both specifications for plugging a web application into a web server. One glaring difference is that WSGI comes from the Python world, and I believe there are no non-python implementations.</p>
<p><strong>Specifications are generally not comparable based on better/faster/reliable.</strong> </p>
<p>Only their implementations are comparable, and I am sure you will find good implementations of both specifications.</p>
<p>Perhaps <a href="http://en.wikipedia.org/wiki/SCGI">read</a> and <a href="http://en.wikipedia.org/wiki/WSGI">read</a>.</p>
| 7 | 2008-11-02T22:39:23Z | [
"python",
"wsgi",
"scgi"
] |
What's the difference between scgi and wsgi? | 257,481 | <p>What's the difference between these two?
Which is better/faster/reliable?</p>
| 10 | 2008-11-02T22:22:25Z | 257,642 | <p>SCGI is a language-neutral means of connecting a front-end web server and a web application. WSGI is a Python-specific interface standard for web applications.</p>
<p>Though they both have roots in CGI, they're rather different in scope and you could indeed quite reasonably use both at once, for example having a mod_scgi on the webserver talk to a WSGI app run as an SCGI server. There are multiple library implementations that will run WSGI applications as SCGI servers for you (eg. wsgitools, cherrypy).</p>
<p>They are both 'reliable', in as much as you can consider a specification reliable as opposed to a particular implementation. These days you would probably write your application as a WSGI callable, and consider the question of deployment separately.</p>
<p>Maybe an Apache+mod_wsgi (embedded) interface might be a <em>bit</em> faster than an Apache+mod_scgi+(SCGI wrapper lib), but in all likelihood it's not going to be <em>hugely</em> different. More valuable is the ability to run the application on a variety of servers, platforms and connection standards.</p>
| 21 | 2008-11-03T00:28:52Z | [
"python",
"wsgi",
"scgi"
] |
What's the difference between scgi and wsgi? | 257,481 | <p>What's the difference between these two?
Which is better/faster/reliable?</p>
| 10 | 2008-11-02T22:22:25Z | 778,530 | <p>SCGI (like FastCGI) is a (serialized) protocol suitable for inter-process communication between a web-server and a web-application.</p>
<p>WSGI is a Python API, connecting two (or more) Python WSGI-compatible modules inside the same process (Python interpreter). One module represents the web-server (being either a Python in-process web-server implementation or a gateway to a web-server in another process via e.g. SCGI). The other module is or represents the web application. Additionally, zero or more modules between these two may serve as WSGI "middleware" modules, doing things like session/cookie management, content caching, authentication, etc. The WSGI API uses Python language features like iteration/generators and passing of callable objects between the cooperating WSGI-compatible modules.</p>
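The WSGI side of this can be made concrete in a few lines. A minimal sketch (function names are illustrative; the bytes-valued body follows the Python 3 reading of the spec):

```python
# A WSGI application is just a callable taking (environ, start_response).
def simple_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, WSGI\n"]

# Drive it by hand, the way a server-side gateway (e.g. an SCGI wrapper) would:
collected = {}
def fake_start_response(status, headers):
    collected["status"] = status
    collected["headers"] = headers

body = b"".join(simple_app({"REQUEST_METHOD": "GET"}, fake_start_response))
print(collected["status"], body)  # 200 OK b'Hello, WSGI\n'
```

A middleware module is simply another callable that wraps `simple_app` and forwards the same pair of arguments.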
| 6 | 2009-04-22T18:21:54Z | [
"python",
"wsgi",
"scgi"
] |
Python MySQL Statement returning Error | 257,563 | <p>hey, I'm very new to all this so please excuse stupidity :)</p>
<pre><code>import os
import MySQLdb
import time

db = MySQLdb.connect(host="localhost", user="root", passwd="********", db="workspace")
cursor = db.cursor()

tailoutputfile = os.popen('tail -f syslog.log')
while 1:
    x = tailoutputfile.readline()
    if len(x)==0:
        break
    y = x.split()
    if y[2] == 'BAD':
        timestring = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
        cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
    if y[2] == 'GOOD':
        print y[4] + '\t' + y[7]
</code></pre>
<p>so i run the program and this is the error message I am getting</p>
<pre><code>user@machine:~/$ python reader.py
Traceback (most recent call last):
File "reader.py", line 17, in ?
cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 163, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[4], y[7]' at line 1")
user@machine:~/$
</code></pre>
<p>So i'm assuming that the error is obviously coming from the SQL Statement </p>
<pre><code>cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
</code></pre>
<p>Here is an example of what y[4] and y[7] will look like. </p>
<pre><code>YES Mail.Sent.To.User:[email protected]:23.17
</code></pre>
<p>Is this error happening because I should be escaping those values before I try and Insert them into the Database?
Or am I completely missing the point??</p>
<p>Any help would be appreciated!
thanks in advance. </p>
| 2 | 2008-11-02T23:26:35Z | 257,570 | <pre><code> cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
</code></pre>
<p>should be</p>
<pre><code> cursor.execute("INSERT INTO releases (date, cat, name) values ('%s', '%s', '%s')" % (timestring, y[4], y[7]))
</code></pre>
<p>Your best bet to debug things like this is to put the query into a variable and use that:</p>
<pre><code>query = "INSERT INTO releases (date, cat, name) values ('%s', '%s', '%s')" % (timestring, y[4], y[7])
print query
cursor.execute(query)
</code></pre>
<p>That print statement would make it very obvious what the problem is.</p>
<p>If you're going to be using list variables a lot like this it can get very confusing, consider using the list just once and putting the variables into a dictionary. It's a bit longer to type, but is much, much easier to keep track of what's going on.</p>
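The dictionary suggestion can be sketched like this (field names and the sample line are illustrative; shown with `sqlite3` from the standard library, whose named-parameter style is `:name`, whereas MySQLdb spells it `%(name)s`):

```python
import sqlite3

line = "ts BAD x y cat-val a b name-val"   # stand-in for one syslog line
fields = line.split()
# Use the positional indexes exactly once, then work with names:
row = {"date": "2008-11-02 23:26:35", "cat": fields[4], "name": fields[7]}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE releases (date TEXT, cat TEXT, name TEXT)")
db.execute("INSERT INTO releases (date, cat, name) VALUES (:date, :cat, :name)", row)
stored = db.execute("SELECT cat, name FROM releases").fetchall()
print(stored)  # [('cat-val', 'name-val')]
```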
| 4 | 2008-11-02T23:29:44Z | [
"python",
"sql"
] |
Python MySQL Statement returning Error | 257,563 | <p>hey, I'm very new to all this so please excuse stupidity :)</p>
<pre><code>import os
import MySQLdb
import time

db = MySQLdb.connect(host="localhost", user="root", passwd="********", db="workspace")
cursor = db.cursor()

tailoutputfile = os.popen('tail -f syslog.log')
while 1:
    x = tailoutputfile.readline()
    if len(x)==0:
        break
    y = x.split()
    if y[2] == 'BAD':
        timestring = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
        cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
    if y[2] == 'GOOD':
        print y[4] + '\t' + y[7]
</code></pre>
<p>so i run the program and this is the error message I am getting</p>
<pre><code>user@machine:~/$ python reader.py
Traceback (most recent call last):
File "reader.py", line 17, in ?
cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 163, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[4], y[7]' at line 1")
user@machine:~/$
</code></pre>
<p>So i'm assuming that the error is obviously coming from the SQL Statement </p>
<pre><code>cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
</code></pre>
<p>Here is an example of what y[4] and y[7] will look like. </p>
<pre><code>YES Mail.Sent.To.User:[email protected]:23.17
</code></pre>
<p>Is this error happening because I should be escaping those values before I try and Insert them into the Database?
Or am I completely missing the point??</p>
<p>Any help would be appreciated!
thanks in advance. </p>
| 2 | 2008-11-02T23:26:35Z | 257,614 | <p>As pointed out, you're failing to copy the Python variable values into the query, only their names, which mean nothing to MySQL.</p>
<p>However the direct string concatenation option:</p>
<pre><code>cursor.execute("INSERT INTO releases (date, cat, name) VALUES ('%s', '%s', '%s')" % (timestring, y[4], y[7]))
</code></pre>
<p>is dangerous and should never be used. If those strings have out-of-bounds characters like ' or \ in, you've got an SQL injection leading to possible security compromise. Maybe in your particular app that can never happen, but it's still a very bad practice, which beginners' SQL tutorials really need to stop using.</p>
<p>The solution using MySQLdb is to let the DBAPI layer take care of inserting and escaping parameter values into SQL for you, instead of trying to % it yourself:</p>
<pre><code>cursor.execute('INSERT INTO releases (date, cat, name) VALUES (%s, %s, %s)', (timestring, y[4], y[7]))
</code></pre>
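The parameterised form can be illustrated with a runnable example using `sqlite3` from the standard library (its placeholder is `?`; MySQLdb spells it `%s`, but the principle of letting the DBAPI layer do the quoting is identical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE releases (date TEXT, cat TEXT, name TEXT)")

# A value full of characters that would break string-concatenated SQL:
evil = "Mail.Sent.To.User:it's -- full of quotes"
db.execute("INSERT INTO releases (date, cat, name) VALUES (?, ?, ?)",
           ("2008-11-02 23:26:35", "YES", evil))

stored = db.execute("SELECT name FROM releases").fetchone()[0]
print(stored == evil)  # the driver quoted the value; it round-trips intact
```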
| 9 | 2008-11-03T00:02:17Z | [
"python",
"sql"
] |
Python MySQL Statement returning Error | 257,563 | <p>hey, I'm very new to all this so please excuse stupidity :)</p>
<pre><code>import os
import MySQLdb
import time

db = MySQLdb.connect(host="localhost", user="root", passwd="********", db="workspace")
cursor = db.cursor()

tailoutputfile = os.popen('tail -f syslog.log')
while 1:
    x = tailoutputfile.readline()
    if len(x)==0:
        break
    y = x.split()
    if y[2] == 'BAD':
        timestring = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(time.time()))
        cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
    if y[2] == 'GOOD':
        print y[4] + '\t' + y[7]
</code></pre>
<p>so i run the program and this is the error message I am getting</p>
<pre><code>user@machine:~/$ python reader.py
Traceback (most recent call last):
File "reader.py", line 17, in ?
cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 163, in execute
self.errorhandler(self, exc, value)
File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[4], y[7]' at line 1")
user@machine:~/$
</code></pre>
<p>So i'm assuming that the error is obviously coming from the SQL Statement </p>
<pre><code>cursor.execute("INSERT INTO releases (date, cat, name) values (timestring, y[4], y[7]")
</code></pre>
<p>Here is an example of what y[4] and y[7] will look like. </p>
<pre><code>YES Mail.Sent.To.User:[email protected]:23.17
</code></pre>
<p>Is this error happening because I should be escaping those values before I try and Insert them into the Database?
Or am I completely missing the point??</p>
<p>Any help would be appreciated!
thanks in advance. </p>
| 2 | 2008-11-02T23:26:35Z | 298,414 | <p>Never use "direct string concatenation" with SQL, because it's not secure. A more correct variant:</p>
<pre><code>cursor.execute('INSERT INTO releases (date, cat, name) VALUES (%s, %s, %s)', (timestring, y[4], y[7]))
</code></pre>
<p>It automatically escapes forbidden symbols in values (such as ", ', etc.)</p>
| 1 | 2008-11-18T10:46:53Z | [
"python",
"sql"
] |
'Snippit' based django semi-CMS | 257,655 | <p>I remember reading somewhere on the internets about a half-assed tiny django CMS app, which was basically built on 'snippets' of text.</p>
<p>The idea was, that in the admin, you make a snippet (say a description of a product), give it a name (such as 'google_desc') and call it in a template with something like {% snippet google_desc %} and bam!</p>
<p>I <em>think</em> it was <a href="http://www.b-list.org" rel="nofollow">this</a> guy that made it, but im not quite sure.</p>
<p>Would anyone know where i could find this piece of awesomeness?</p>
<p><strong>Edit:</strong> I was after an app or something to plug into my project. Not, an existing website/service.</p>
<p><strong>Edit 2:</strong> insin got it. I was after <a href="http://code.google.com/p/django-chunks/" rel="nofollow">django-chunks</a></p>
| 0 | 2008-11-03T00:42:46Z | 257,695 | <p>Are you talking about <a href="http://www.punteney.com/writes/django-simplepages-basic-page-cms-system/" rel="nofollow">Django Simplepages</a>? Official site <a href="http://code.google.com/p/django-simplepages/" rel="nofollow">here</a>.</p>
<p>Another project that sounds similar to what you're after is <a href="http://code.google.com/p/django-page-cms/" rel="nofollow">django-page-cms</a>.</p>
| 1 | 2008-11-03T01:09:02Z | [
"python",
"django",
"content-management-system"
] |
'Snippit' based django semi-CMS | 257,655 | <p>I remember reading somewhere on the internets about a half-assed tiny django CMS app, which was basically built on 'snippets' of text.</p>
<p>The idea was, that in the admin, you make a snippet (say a description of a product), give it a name (such as 'google_desc') and call it in a template with something like {% snippet google_desc %} and bam!</p>
<p>I <em>think</em> it was <a href="http://www.b-list.org" rel="nofollow">this</a> guy that made it, but im not quite sure.</p>
<p>Would anyone know where i could find this piece of awesomeness?</p>
<p><strong>Edit:</strong> I was after an app or something to plug into my project. Not, an existing website/service.</p>
<p><strong>Edit 2:</strong> insin got it. I was after <a href="http://code.google.com/p/django-chunks/" rel="nofollow">django-chunks</a></p>
| 0 | 2008-11-03T00:42:46Z | 258,282 | <p>Sounds like <a href="https://github.com/clintecker/django-chunks" rel="nofollow">django-chunks</a> to me.</p>
| 2 | 2008-11-03T10:25:56Z | [
"python",
"django",
"content-management-system"
] |
'Snippit' based django semi-CMS | 257,655 | <p>I remember reading somewhere on the internets about a half-assed tiny django CMS app, which was basically built on 'snippets' of text.</p>
<p>The idea was, that in the admin, you make a snippet (say a description of a product), give it a name (such as 'google_desc') and call it in a template with something like {% snippet google_desc %} and bam!</p>
<p>I <em>think</em> it was <a href="http://www.b-list.org" rel="nofollow">this</a> guy that made it, but im not quite sure.</p>
<p>Would anyone know where i could find this piece of awesomeness?</p>
<p><strong>Edit:</strong> I was after an app or something to plug into my project. Not, an existing website/service.</p>
<p><strong>Edit 2:</strong> insin got it. I was after <a href="http://code.google.com/p/django-chunks/" rel="nofollow">django-chunks</a></p>
| 0 | 2008-11-03T00:42:46Z | 1,392,946 | <p>If you need some more features just checkout django-blocks (<a href="http://code.google.com/p/django-blocks/" rel="nofollow">http://code.google.com/p/django-blocks/</a>). Has multi-language Menu, Flatpages and even has a simple Shopping Cart!!</p>
| 1 | 2009-09-08T09:25:55Z | [
"python",
"django",
"content-management-system"
] |
How can I make a fake "active session" for gconf? | 257,658 | <p>I've automated my Ubuntu installation - I've got Python code that runs automatically (after a clean install, but before the first user login - it's in a temporary /etc/init.d/ script) that sets up everything from Apache & its configuration to my personal Gnome preferences. It's the latter that's giving me trouble.</p>
<p>This worked fine in Ubuntu 8.04 (Hardy), but when I use this with 8.10 (Intrepid), the first time I try to access gconf, I get this exception:</p>
<p>Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See <a href="http://www.gnome.org/projects/gconf/" rel="nofollow">http://www.gnome.org/projects/gconf/</a> for information. (Details - 1: <strong>Not running within active session</strong>)</p>
<p>Yes, right, there's no Gnome session when this is running, because the user hasn't logged in yet - however, this worked before; this appears to be new with Intrepid's Gnome (2.24?).</p>
<p>Short of modifying the gconf's XML files directly, is there a way to make some sort of proxy Gnome session? Or, any other suggestions?</p>
<p>(More details: this is python code that runs as root, but setuid's & setgid's to be me before setting my preferences using the "gconf" module from the python-gconf package.)</p>
| 9 | 2008-11-03T00:45:10Z | 257,833 | <p>Well, I think I understand the question. Looks like your script just needs to start the dbus daemon, or make sure its started. I believe "session" here refers to a dbus session. <a href="http://mail.gnome.org/archives/svn-commits-list/2008-May/msg01997.html" rel="nofollow">(here is some evidence)</a>, not a Gnome session. Dbus and gconf both run fine without Gnome.</p>
<p>Either way, faking an "active session" sounds like a pretty bad idea. It would only look for it if it needed it.</p>
<p>Perhaps we could see the script in a pastebin? I should have really seen it before making any comment.</p>
| 1 | 2008-11-03T02:52:42Z | [
"python",
"ubuntu",
"gconf",
"intrepid"
] |
How can I make a fake "active session" for gconf? | 257,658 | <p>I've automated my Ubuntu installation - I've got Python code that runs automatically (after a clean install, but before the first user login - it's in a temporary /etc/init.d/ script) that sets up everything from Apache & its configuration to my personal Gnome preferences. It's the latter that's giving me trouble.</p>
<p>This worked fine in Ubuntu 8.04 (Hardy), but when I use this with 8.10 (Intrepid), the first time I try to access gconf, I get this exception:</p>
<p>Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See <a href="http://www.gnome.org/projects/gconf/" rel="nofollow">http://www.gnome.org/projects/gconf/</a> for information. (Details - 1: <strong>Not running within active session</strong>)</p>
<p>Yes, right, there's no Gnome session when this is running, because the user hasn't logged in yet - however, this worked before; this appears to be new with Intrepid's Gnome (2.24?).</p>
<p>Short of modifying the gconf's XML files directly, is there a way to make some sort of proxy Gnome session? Or, any other suggestions?</p>
<p>(More details: this is python code that runs as root, but setuid's & setgid's to be me before setting my preferences using the "gconf" module from the python-gconf package.)</p>
| 9 | 2008-11-03T00:45:10Z | 260,731 | <p>I can reproduce this by installing GConf 2.24 on my machine. GConf 2.22 works fine, but 2.24 breaks it.</p>
<p>GConf is failing to launch because D-Bus is not running. Manually spawning D-Bus and the GConf daemon makes this work again.</p>
<p>I tried to spawn the D-Bus session bus by doing the following:</p>
<pre><code>import dbus
dummy_bus = dbus.SessionBus()
</code></pre>
<p>...but got this:</p>
<pre><code>dbus.exceptions.DBusException: org.freedesktop.DBus.Error.Spawn.ExecFailed: dbus-launch failed to autolaunch D-Bus session: Autolaunch error: X11 initialization failed.
</code></pre>
<p>Weird. Looks like it doesn't like to come up if X isn't running. To work around that, start dbus-launch manually (IIRC use the <a href="http://www.python.org/doc/2.5.2/lib/os-process.html">os.system()</a> call):</p>
<pre><code>$ dbus-launch
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-eAmT3q94u0,guid=c250f62d3c4739dcc9a12d48490fc268
DBUS_SESSION_BUS_PID=15836
</code></pre>
<p>You'll need to parse the output somehow and inject them into environment variables (you'll probably want to use <a href="http://www.python.org/doc/2.5.2/lib/os-procinfo.html">os.putenv</a>). For my testing, I just used the shell, and set the environment vars manually with <code>export DBUS_SESSION_BUS_ADDRESS=blahblah...</code>, etc.</p>
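The parsing step might look like this (a sketch with a canned sample string matching the format above; in the real script you would capture the output of <code>dbus-launch</code> via <code>subprocess</code>):

```python
import os

# Made-up sample of dbus-launch output in the format shown above:
sample = ("DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-eAmT3q94u0,guid=c2\n"
          "DBUS_SESSION_BUS_PID=15836\n")

for pair in sample.splitlines():
    # Split on the FIRST '=' only -- the bus address itself contains '=':
    name, _, value = pair.partition("=")
    os.environ[name] = value   # assigning to os.environ calls putenv for you

print(os.environ["DBUS_SESSION_BUS_PID"])  # 15836
```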
<p>Next, you need to launch <code>gconftool-2 --spawn</code> with those environment variables you received from <code>dbus-launch</code>. This will launch the GConf daemon. If the D-Bus environment vars are not set, the daemon will not launch.</p>
<p>Then, run your GConf code. Provided you set the D-Bus session bus environment variables for your own script, you will now be able to communicate with the GConf daemon.</p>
<p>I know it's complicated.</p>
<p><code>gconftool-2</code> provides a <code>--direct</code> option that enables you to set GConf variables without needing to communicate with the server, but I haven't been able to find an equivalent option for the Python bindings (short of outputting XML manually).</p>
<p><em>Edit:</em> For future reference, if anybody wants to run <code>dbus-launch</code> from within a normal <code>bash</code> script (as opposed to a Python script, as this thread is discussing), it is quite easy to retrieve the session bus address for use within the script:</p>
<pre><code>#!/bin/bash
eval `dbus-launch --sh-syntax`
export DBUS_SESSION_BUS_ADDRESS
export DBUS_SESSION_BUS_PID
do_other_stuff_here
</code></pre>
| 8 | 2008-11-04T03:07:40Z | [
"python",
"ubuntu",
"gconf",
"intrepid"
] |
How can I make a fake "active session" for gconf? | 257,658 | <p>I've automated my Ubuntu installation - I've got Python code that runs automatically (after a clean install, but before the first user login - it's in a temporary /etc/init.d/ script) that sets up everything from Apache & its configuration to my personal Gnome preferences. It's the latter that's giving me trouble.</p>
<p>This worked fine in Ubuntu 8.04 (Hardy), but when I use this with 8.10 (Intrepid), the first time I try to access gconf, I get this exception:</p>
<p>Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See <a href="http://www.gnome.org/projects/gconf/" rel="nofollow">http://www.gnome.org/projects/gconf/</a> for information. (Details - 1: <strong>Not running within active session</strong>)</p>
<p>Yes, right, there's no Gnome session when this is running, because the user hasn't logged in yet - however, this worked before; this appears to be new with Intrepid's Gnome (2.24?).</p>
<p>Short of modifying the gconf's XML files directly, is there a way to make some sort of proxy Gnome session? Or, any other suggestions?</p>
<p>(More details: this is python code that runs as root, but setuid's & setgid's to be me before setting my preferences using the "gconf" module from the python-gconf package.)</p>
| 9 | 2008-11-03T00:45:10Z | 261,180 | <p>Thanks, Ali & Jeremy - both your answers were a big help. I'm still working on this (though I've stopped for the evening).</p>
<p>First, I took the hint from Ali and was trying part of Jeremy's suggestion: I was using dbus-launch to run "gconftool-2 --spawn". It didn't work for me; I now understand why (thx, Jeremy) -- I was trying to use gconf from within the same python program that was launching dbus & gconftool, but its environment didn't have the environment variables - duh.</p>
<p>I set that strategy aside when I noticed gconftool-2's --direct option; internally, gconftool-2 is using API that isn't exposed by the gconf python bindings. So, I modified python-gconf to expose the extra method, and once that builds (I had some unrelated problems getting this to work), we'll see if that fixes things - if it doesn't (and maybe if it does, because building those bindings seems to build all of gnome!), I'll find a better way to manage the environment variables in that first strategy.</p>
<p>(I'll add another answer here tomorrow either way)</p>
<p>And it's the next day: I ran into a little trouble with my modified python-gconf, which inspired me to try Jeremy's simpler idea, which worked fine - before doing the first gconf operation, I simply ran "dbus-launch", parsed the resulting name-value pairs, and added them directly to python's environment. Having done that, I ran "gconftool-2 --spawn". Problem solved.</p>
| 1 | 2008-11-04T07:39:34Z | [
"python",
"ubuntu",
"gconf",
"intrepid"
] |
Java equivalent to pyftpdlib? | 257,956 | <p>Is there a good Java alternative to pyftpdlib? I am looking for an easy to setup and run embedded ftp server.</p>
| 1 | 2008-11-03T04:52:36Z | 257,973 | <p>Check out Apache's <a href="http://mina.apache.org/ftpserver/" rel="nofollow">FTPServer</a>.</p>
<p>They have an <a href="http://mina.apache.org/ftpserver/embedding-ftpserver-in-5-minutes.html" rel="nofollow">example</a> of how to embed it in a Java application.</p>
| 0 | 2008-11-03T05:11:20Z | [
"java",
"python",
"ftp"
] |
Python: wrapping method invocations with pre and post methods | 258,119 | <p>I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.</p>
<p>Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call</p>
<pre><code>x.a.p1()
</code></pre>
<p>and get the output</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
<p>Many TIA!</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
| 6 | 2008-11-03T08:23:10Z | 258,125 | <p>The no-whistles-or-bells solution would be to write a wrapper class for class A that does just that.</p>
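<p>Such a wrapper might look like the sketch below (the class name is made up, and it records calls in a list rather than printing so the behaviour is easy to verify):</p>

```python
class A:
    # stand-in for the imported class that cannot be modified
    def p1(self):
        return 'A.p1'

class WrappedA:
    """Hand-written wrapper: forwards p1 to the real A instance,
    surrounding the call with the pre/post behaviour."""
    def __init__(self, a):
        self._a = a
        self.log = []

    def p1(self):
        self.log.append('X.pre')
        result = self._a.p1()
        self.log.append(result)
        self.log.append('X.post')
        return result

w = WrappedA(A())
w.p1()
print(w.log)  # -> ['X.pre', 'A.p1', 'X.post']
```

<p>The obvious cost is that every method of A needs a matching forwarding method, which is why the decorator-based answers below scale better.</p>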
| 1 | 2008-11-03T08:26:27Z | [
"python",
"metaprogramming"
] |
Python: wrapping method invocations with pre and post methods | 258,119 | <p>I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.</p>
<p>Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call</p>
<pre><code>x.a.p1()
</code></pre>
<p>and get the output</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
<p>Many TIA!</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
| 6 | 2008-11-03T08:23:10Z | 258,179 | <p>You could just modify the A instance and replace the p1 function with a wrapper function:</p>
<pre><code>def wrapped(pre, post, f):
def wrapper(*args, **kwargs):
pre()
retval = f(*args, **kwargs)
post()
return retval
return wrapper
class Y:
def __init__(self):
self.a=A()
self.a.p1 = wrapped(self.pre, self.post, self.a.p1)
def pre(self): print 'X.pre'
def post(self): print 'X.post'
</code></pre>
| 1 | 2008-11-03T09:06:35Z | [
"python",
"metaprogramming"
] |
Python: wrapping method invocations with pre and post methods | 258,119 | <p>I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.</p>
<p>Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call</p>
<pre><code>x.a.p1()
</code></pre>
<p>and get the output</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
<p>Many TIA!</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
| 6 | 2008-11-03T08:23:10Z | 258,253 | <p>I've just recently read about decorators in Python. I don't fully understand them yet, but it seems to me that they can be a solution to your problem; see Bruce Eckel's intro to decorators at:
<a href="http://www.artima.com/weblogs/viewpost.jsp?thread=240808" rel="nofollow">http://www.artima.com/weblogs/viewpost.jsp?thread=240808</a></p>
<p>He has a few more posts on that topic there.</p>
<p>Edit: Three days later I stumbled upon this article, which shows how to do a similar task without decorators, explains the problems with that approach, and then introduces decorators and develops a fairly complete solution:
<a href="http://wordaligned.org/articles/echo" rel="nofollow">http://wordaligned.org/articles/echo</a></p>
| 1 | 2008-11-03T10:00:30Z | [
"python",
"metaprogramming"
] |
Python: wrapping method invocations with pre and post methods | 258,119 | <p>I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.</p>
<p>Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call</p>
<pre><code>x.a.p1()
</code></pre>
<p>and get the output</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
<p>Many TIA!</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
| 6 | 2008-11-03T08:23:10Z | 258,259 | <p>Here's what I've received from Steven D'Aprano on comp.lang.python.</p>
<pre><code># Define two decorator factories.
def precall(pre):
def decorator(f):
def newf(*args, **kwargs):
pre()
return f(*args, **kwargs)
return newf
return decorator
def postcall(post):
def decorator(f):
def newf(*args, **kwargs):
x = f(*args, **kwargs)
post()
return x
return newf
return decorator
</code></pre>
<p>Now you can monkey patch class A if you want. It's probably not a great
idea to do this in production code, as it will affect class A everywhere.
[this is ok for my application, as it is basically a protocol converter and there's exactly one instance of each class being processed.]</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
A.p1 = precall(self.pre)(postcall(self.post)(A.p1))
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
<p>Gives the desired result.</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
| 0 | 2008-11-03T10:07:25Z | [
"python",
"metaprogramming"
] |
Python: wrapping method invocations with pre and post methods | 258,119 | <p>I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.</p>
<p>Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call</p>
<pre><code>x.a.p1()
</code></pre>
<p>and get the output</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
<p>Many TIA!</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
| 6 | 2008-11-03T08:23:10Z | 258,274 | <p>Here is the solution I and my colleagues came up with:</p>
<pre><code>from types import MethodType
class PrePostCaller:
def __init__(self, other):
self.other = other
def pre(self): print 'pre'
def post(self): print 'post'
def __getattr__(self, name):
if hasattr(self.other, name):
func = getattr(self.other, name)
return lambda *args, **kwargs: self._wrap(func, args, kwargs)
raise AttributeError(name)
def _wrap(self, func, args, kwargs):
self.pre()
if type(func) == MethodType:
result = func( *args, **kwargs)
else:
result = func(self.other, *args, **kwargs)
self.post()
return result
#Examples of use
class Foo:
def stuff(self):
print 'stuff'
a = PrePostCaller(Foo())
a.stuff()
a = PrePostCaller([1,2,3])
print a.count()
</code></pre>
<p>Gives:</p>
<pre><code>pre
stuff
post
pre
post
0
</code></pre>
<p>So when creating an instance of your object, wrap it with the PrePostCaller object. After that you continue using the object as if it was an instance of the wrapped object. With this solution you can do the wrapping on a per instance basis.</p>
| 4 | 2008-11-03T10:18:21Z | [
"python",
"metaprogramming"
] |
Python: wrapping method invocations with pre and post methods | 258,119 | <p>I am instantiating a class A (which I am importing from somebody
else, so I can't modify it) into my class X.</p>
<p>Is there a way I can intercept or wrap calls to methods in A?
I.e., in the code below can I call</p>
<pre><code>x.a.p1()
</code></pre>
<p>and get the output</p>
<pre><code>X.pre
A.p1
X.post
</code></pre>
<p>Many TIA!</p>
<pre><code>class A:
# in my real application, this is an imported class
# that I cannot modify
def p1(self): print 'A.p1'
class X:
def __init__(self):
self.a=A()
def pre(self): print 'X.pre'
def post(self): print 'X.post'
x=X()
x.a.p1()
</code></pre>
| 6 | 2008-11-03T08:23:10Z | 258,283 | <p>As others have mentioned, the wrapper/decorator solution is probably be the easiest one. I don't recommend modifyng the wrapped class itself, for the same reasons that you point out.</p>
<p>If you have many external classes you can write a code generator to generate the wrapper classes for you. Since you are doing this in Python you can probably even implement the generator as a part of the program, generating the wrappers at startup, or something.</p>
| 1 | 2008-11-03T10:27:04Z | [
"python",
"metaprogramming"
] |
Why are Exceptions iterable? | 258,228 | <p>I have been bitten by something unexpected recently. I wanted to make something like that:</p>
<pre><code>try :
thing.merge(iterable) # this is an iterable so I add it to the list
except TypeError :
thing.append(iterable) # this is not iterable, so I add it
</code></pre>
<p>Well, it was working fine until I passed an object inheriting from Exception which was supposed to be added.</p>
<p>Unfortunately, an Exception is iterable. The following code does not raise any <code>TypeError</code>:</p>
<pre><code>for x in Exception() :
print 1
</code></pre>
<p>Does anybody know why?</p>
| 11 | 2008-11-03T09:46:03Z | 258,234 | <p>NOT VALID. Check Brian's answer.</p>
<p>Ok, I just got it :</p>
<pre><code>>>> for x in Exception("test"):
...     print x
...
test
</code></pre>
<p>Don't bother ;-)</p>
<p>Anyway, it's good to know.</p>
<p>EDIT : looking to the comments, I feel like adding some explanations.</p>
<p>An exception contains the message you passed to it during instantiation:</p>
<pre><code>raise Exception("test")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception: test
</code></pre>
<p>It's fair to say that the message is what defines the Exception the best, so str() returns it :</p>
<pre><code>print Exception("test")
test
</code></pre>
<p>Now, it happens that Exceptions are implicitly converted to a string when used in something other than an Exception context.</p>
<p>So when I do :</p>
<pre><code>for x in Exception("test") :
print x
</code></pre>
<p>I am iterating over the string "test".</p>
<p>And when I do : </p>
<pre><code>for x in Exception() :
print x
</code></pre>
<p>I do iterate over an empty string. Tricky. Because when it comes to my issue :</p>
<pre><code>try :
thing.merge(ExceptionLikeObject)
except TypeError :
...
</code></pre>
<p>This won't raise anything since ExceptionLikeObject is considered as a string.</p>
<p>Well now, we know the HOW, but I still not the WHY. Maybe the built-in Exception inherit from the built-in String ? Because as far as I know :</p>
<ul>
<li>adding <strong>str</strong> does not make any object iterable.</li>
<li>I bypassed the problem by overloding <strong>iter</strong>, making it raising TypeError !</li>
</ul>
<p>Not a problem anymore, but still a mystery.</p>
| 3 | 2008-11-03T09:47:35Z | [
"python",
"exception"
] |
Why are Exceptions iterable? | 258,228 | <p>I have been bitten by something unexpected recently. I wanted to make something like that:</p>
<pre><code>try :
thing.merge(iterable) # this is an iterable so I add it to the list
except TypeError :
thing.append(iterable) # this is not iterable, so I add it
</code></pre>
<p>Well, it was working fine until I passed an object inheriting from Exception which was supposed to be added.</p>
<p>Unfortunately, an Exception is iterable. The following code does not raise any <code>TypeError</code>:</p>
<pre><code>for x in Exception() :
print 1
</code></pre>
<p>Does anybody know why?</p>
| 11 | 2008-11-03T09:46:03Z | 258,530 | <p>Actually, I still don't quite get it. I can see that iterating over an Exception gives you the original args to the exception; I'm just not sure why anyone would want that. Implicit iteration is, I think, one of the few gotchas in Python that still trips me up.</p>
| 2 | 2008-11-03T12:48:33Z | [
"python",
"exception"
] |
Why are Exceptions iterable? | 258,228 | <p>I have been bitten by something unexpected recently. I wanted to make something like that:</p>
<pre><code>try :
thing.merge(iterable) # this is an iterable so I add it to the list
except TypeError :
thing.append(iterable) # this is not iterable, so I add it
</code></pre>
<p>Well, it was working fine until I passed an object inheriting from Exception which was supposed to be added.</p>
<p>Unfortunately, an Exception is iterable. The following code does not raise any <code>TypeError</code>:</p>
<pre><code>for x in Exception() :
print 1
</code></pre>
<p>Does anybody know why?</p>
| 11 | 2008-11-03T09:46:03Z | 258,930 | <p>Note that what is happening is not related to any kind of implicit string conversion etc, but because the Exception class implements __getitem__(), and uses it to return the values in the args tuple (ex.args). You can see this by the fact that you get the whole string as your first and only item in the iteration, rather than the character-by-character result you'd get if you iterate over the string.</p>
<p>This surprised me too, but thinking about it, I'm guessing it is for backwards compatibility reasons. Python used to (<a href="http://www.python.org/doc/essays/stdexceptions.html">pre-1.5</a>) lack the current class hierarchy of exceptions. Instead, strings were thrown, with (usually) a tuple argument for any details that should be passed to the handling block, i.e.:</p>
<pre><code>try:
raise "something failed", (42, "some other details")
except "something failed", args:
errCode, msg = args
print "something failed. error code %d: %s" % (errCode, msg)
</code></pre>
<p>It looks like this behavior was put in to avoid breaking pre-1.5 code expecting a tuple of arguments, rather than a non-iterable exception object. There are a couple of examples of this with IOError in the Fatal Breakage section of the above <a href="http://www.python.org/doc/essays/stdexceptions.html">link</a></p>
<p>String exceptions have been deprecated for a while, and are going away in Python 3. I've now checked how Python 3 handles exception objects, and it looks like they are no longer iterable there:</p>
<pre><code>>>> list(Exception("test"))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'Exception' object is not iterable
</code></pre>
<p>[Edit] Checked python3's behaviour</p>
| 11 | 2008-11-03T15:08:02Z | [
"python",
"exception"
] |
Django models - how to filter number of ForeignKey objects | 258,296 | <p>I have a models <code>A</code> and <code>B</code>, that are like this:</p>
<pre><code>class A(models.Model):
title = models.CharField(max_length=20)
(...)
class B(models.Model):
date = models.DateTimeField(auto_now_add=True)
(...)
a = models.ForeignKey(A)
</code></pre>
<p>Now I have some <code>A</code> and <code>B</code> objects, and I'd like to get a query that selects all <code>A</code> objects that have less than 2 <code>B</code> objects pointing at them.</p>
<p>A is something like a pool thing, and users (the B) join the pool. If there's only 1 or 0 joined, the pool shouldn't be displayed at all.</p>
<p>Is it possible with such model design? Or should I modify that a bit?</p>
| 33 | 2008-11-03T10:38:54Z | 258,310 | <p>Sounds like a job for <a href="http://docs.djangoproject.com/en/dev/ref/models/querysets/#extra-select-none-where-none-params-none-tables-none-order-by-none-select-params-none"><code>extra</code></a>.</p>
<pre><code>A.objects.extra(
select={
'b_count': 'SELECT COUNT(*) FROM yourapp_b WHERE yourapp_b.a_id = yourapp_a.id',
},
where=['b_count < 2']
)
</code></pre>
<p>If the B count is something you often need as a filtering or ordering criterion, or needs to be displayed on list views, you could consider denormalisation by adding a b_count field to your A model and using signals to update it when a B is added or deleted:</p>
<pre><code>from django.db import connection, transaction
from django.db.models.signals import post_delete, post_save
def update_b_count(instance, **kwargs):
"""
Updates the B count for the A related to the given B.
"""
if not kwargs.get('created', True) or kwargs.get('raw', False):
return
cursor = connection.cursor()
cursor.execute(
'UPDATE yourapp_a SET b_count = ('
'SELECT COUNT(*) FROM yourapp_b '
'WHERE yourapp_b.a_id = yourapp_a.id'
') '
'WHERE id = %s', [instance.a_id])
transaction.commit_unless_managed()
post_save.connect(update_b_count, sender=B)
post_delete.connect(update_b_count, sender=B)
</code></pre>
<p>Another solution would be to manage a status flag on the A object when you're adding or removing a related B.</p>
<pre><code>B.objects.create(a=some_a)
if some_a.hidden and some_a.b_set.count() > 1:
A.objects.filter(id=some_a.id).update(hidden=False)
...
some_a = b.a
some_b.delete()
if not some_a.hidden and some_a.b_set.count() < 2:
A.objects.filter(id=some_a.id).update(hidden=True)
</code></pre>
| 6 | 2008-11-03T10:47:39Z | [
"python",
"django",
"database-design"
] |
Django models - how to filter number of ForeignKey objects | 258,296 | <p>I have a models <code>A</code> and <code>B</code>, that are like this:</p>
<pre><code>class A(models.Model):
title = models.CharField(max_length=20)
(...)
class B(models.Model):
date = models.DateTimeField(auto_now_add=True)
(...)
a = models.ForeignKey(A)
</code></pre>
<p>Now I have some <code>A</code> and <code>B</code> objects, and I'd like to get a query that selects all <code>A</code> objects that have less than 2 <code>B</code> objects pointing at them.</p>
<p>A is something like a pool thing, and users (the B) join the pool. If there's only 1 or 0 joined, the pool shouldn't be displayed at all.</p>
<p>Is it possible with such model design? Or should I modify that a bit?</p>
| 33 | 2008-11-03T10:38:54Z | 258,329 | <p>I'd recommend modifying your design to include some status field on A.</p>
<p>The issue is one of "why?" Why does A have < 2 B's and why does A have >= 2 B's. Is it because users didn't enter something? Or is it because they tried and their input had errors? Or is it because the < 2 rule doesn't apply in this case?</p>
<p>Using presence or absence of a Foreign Key limits the meaning to -- well -- present or absent. You don't have any way to represent "why?"</p>
<p>Also, you have the following option</p>
<pre><code>[ a for a in A.objects.all() if a.b_set.count() < 2 ]
</code></pre>
<p>This can be pricey because it does fetch all the A's rather than force the database to do the work.</p>
<p><hr /></p>
<p>Edit: From the comment "would require me to watch for user join / user leaving the pool events".</p>
<p>You don't "watch" anything -- you provide an API which does what you need. That's the central benefit of the Django model. Here's one way, with explict methods in the <code>A</code> class.</p>
<pre><code>class A( models.Model ):
....
def addB( self, b ):
self.b_set.add( b )
self.changeFlags()
def removeB( self, b ):
self.b_set.remove( b )
self.changeFlags()
def changeFlags( self ):
if self.b_set.count() < 2: self.show= NotYet
else: self.show= ShowNow
</code></pre>
<p>You can also define a special <code>Manager</code> for this, and replace the default <code>b_set</code> Manager with your manager that counts references and updates <code>A</code>.</p>
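<p>The same idea sketched in plain Python (no Django here — a hypothetical stand-in collection that keeps its owner's visibility flag in sync as members come and go; names and the threshold are assumptions):</p>

```python
class TrackedSet:
    """Stand-in for a custom related manager: updates the owner's
    `show` flag whenever a member is added or removed."""
    def __init__(self, owner, threshold=2):
        self.owner = owner
        self.threshold = threshold
        self._items = []

    def add(self, item):
        self._items.append(item)
        self._refresh()

    def remove(self, item):
        self._items.remove(item)
        self._refresh()

    def _refresh(self):
        # visible only once enough members have joined
        self.owner.show = len(self._items) >= self.threshold

class Pool:
    def __init__(self):
        self.show = False
        self.members = TrackedSet(self)

pool = Pool()
pool.members.add('user1')
print(pool.show)  # -> False (only one member)
pool.members.add('user2')
print(pool.show)  # -> True
```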
| 3 | 2008-11-03T11:00:13Z | [
"python",
"django",
"database-design"
] |
Django models - how to filter number of ForeignKey objects | 258,296 | <p>I have a models <code>A</code> and <code>B</code>, that are like this:</p>
<pre><code>class A(models.Model):
title = models.CharField(max_length=20)
(...)
class B(models.Model):
date = models.DateTimeField(auto_now_add=True)
(...)
a = models.ForeignKey(A)
</code></pre>
<p>Now I have some <code>A</code> and <code>B</code> objects, and I'd like to get a query that selects all <code>A</code> objects that have less then 2 <code>B</code> pointing at them.</p>
<p>A is something like a pool thing, and users (the B) join pool. if there's only 1 or 0 joined, the pool shouldn't be displayed at all.</p>
<p>Is it possible with such model design? Or should I modify that a bit?</p>
| 33 | 2008-11-03T10:38:54Z | 845,814 | <p>I assume that joining or leaving the pool may not happen as often as listing (showing) the pools. I also believe that it would be more efficient for the users join/leave actions to update the pool display status. This way, listing & showing the pools would require less time as you would just run a single query for SHOW_STATUS of the pool objects.</p>
| 1 | 2009-05-10T18:26:19Z | [
"python",
"django",
"database-design"
] |
Django models - how to filter number of ForeignKey objects | 258,296 | <p>I have a models <code>A</code> and <code>B</code>, that are like this:</p>
<pre><code>class A(models.Model):
title = models.CharField(max_length=20)
(...)
class B(models.Model):
date = models.DateTimeField(auto_now_add=True)
(...)
a = models.ForeignKey(A)
</code></pre>
<p>Now I have some <code>A</code> and <code>B</code> objects, and I'd like to get a query that selects all <code>A</code> objects that have less than 2 <code>B</code> objects pointing at them.</p>
<p>A is something like a pool thing, and users (the B) join the pool. If there's only 1 or 0 joined, the pool shouldn't be displayed at all.</p>
<p>Is it possible with such model design? Or should I modify that a bit?</p>
| 33 | 2008-11-03T10:38:54Z | 6,205,303 | <p>The question and selected answer are from 2008 and since then this functionality has been integrated into the django framework. Since this is a top google hit for "django filter foreign key count" I'd like to add an easier solution with a recent django version using <a href="https://docs.djangoproject.com/en/dev/topics/db/aggregation/" title="Aggregation">Aggregation</a>.</p>
<pre><code>from django.db.models import Count
cats = A.objects.annotate(num_b=Count('b')).filter(num_b__lt=2)
</code></pre>
<p>In my case I had to take this concept a step further. My "B" object had a boolean field called is_available, and I only wanted to return A objects that had more than 0 B objects with is_available set to True.</p>
<pre><code>A.objects.filter(b__is_available=True).annotate(num_b=Count('b')).filter(num_b__gt=0).order_by('-num_b')
</code></pre>
| 90 | 2011-06-01T17:30:34Z | [
"python",
"django",
"database-design"
] |
Python filter/remove URLs from a list | 258,390 | <p>I have a text file of URLs, about 14000. Below is a couple of examples:</p>
<p><a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=10" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=10</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=11" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=11</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123</a><br /></p>
<p>I have loaded the text file into a Python list and I am trying to get all the URLs with CONTENT_ITEM_ID separated off into a list of their own. What would be the best way to do this in Python?</p>
<p>Cheers</p>
| 5 | 2008-11-03T11:34:18Z | 258,396 | <pre><code>list2 = filter( lambda x: x.find( 'CONTENT_ITEM_ID ') != -1, list1 )
</code></pre>
<p>The filter calls the function (first parameter) on each element of list1 (second parameter). If the function returns true (non-zero), the element is copied to the output list.</p>
<p>The lambda basically creates a temporary unnamed function. This is just to avoid having to create a function and then pass it, like this:</p>
<pre><code>def look_for_content_item_id( elem ):
    if elem.find( 'CONTENT_ITEM_ID' ) == -1:
        return 0
    return 1

list2 = filter( look_for_content_item_id, list1 )
</code></pre>
| 5 | 2008-11-03T11:37:13Z | [
"python",
"url",
"list",
"filter"
] |
Python filter/remove URLs from a list | 258,390 | <p>I have a text file of URLs, about 14000. Below is a couple of examples:</p>
<p><a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=10" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=10</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=11" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=11</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123</a><br /></p>
<p>I have loaded the text file into a Python list and I am trying to get all the URLs with CONTENT_ITEM_ID separated off into a list of their own. What would be the best way to do this in Python?</p>
<p>Cheers</p>
| 5 | 2008-11-03T11:34:18Z | 258,415 | <p>Here's another alternative to Graeme's, using the newer list comprehension syntax:</p>
<pre><code>list2= [line for line in file if 'CONTENT_ITEM_ID' in line]
</code></pre>
<p>Which you prefer is a matter of taste!</p>
| 21 | 2008-11-03T11:45:47Z | [
"python",
"url",
"list",
"filter"
] |
Python filter/remove URLs from a list | 258,390 | <p>I have a text file of URLs, about 14000. Below is a couple of examples:</p>
<p><a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=10" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=10</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=11" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=11</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123</a><br /></p>
<p>I have loaded the text file into a Python list and I am trying to get all the URLs with CONTENT_ITEM_ID separated off into a list of their own. What would be the best way to do this in Python?</p>
<p>Cheers</p>
| 5 | 2008-11-03T11:34:18Z | 258,491 | <p>I liked @<a href="#258415">bobince</a>'s answer (+1), but will up the ante.</p>
<p>Since you have a rather large starting set, you may wish to avoid loading the entire list into memory. Unless you need the whole list for something else, you could use a <a href="http://www.python.org/doc/2.5.2/ref/genexpr.html">Python generator expression</a> to perform the same task by building up the filtered list item by item as they're requested:</p>
<pre><code>for filtered_url in (line for line in file if 'CONTENT_ITEM_ID' in line):
do_something_with_filtered_url(filtered_url)
</code></pre>
| 6 | 2008-11-03T12:32:06Z | [
"python",
"url",
"list",
"filter"
] |
Python filter/remove URLs from a list | 258,390 | <p>I have a text file of URLs, about 14000. Below is a couple of examples:</p>
<p><a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=100&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=10" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=10</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=101&param2=123</a><br />
<a href="http://www.domainname.com/images?IMAGE_ID=11" rel="nofollow">http://www.domainname.com/images?IMAGE_ID=11</a><br />
<a href="http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123" rel="nofollow">http://www.domainname.com/pagename?CONTENT_ITEM_ID=102&param2=123</a><br /></p>
<p>I have loaded the text file into a Python list and I am trying to get all the URLs with CONTENT_ITEM_ID separated off into a list of their own. What would be the best way to do this in Python?</p>
<p>Cheers</p>
| 5 | 2008-11-03T11:34:18Z | 258,512 | <p>For completeness; You can also use <code>ifilter</code>. It is like filter, but doesn't build up a list.</p>
<pre><code>from itertools import ifilter
for line in ifilter(lambda line: 'CONTENT_ITEM_ID' in line, urls):
do_something(line)
</code></pre>
| 5 | 2008-11-03T12:40:37Z | [
"python",
"url",
"list",
"filter"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 258,797 | <p>I figured it out below is what I needed to do:</p>
<pre><code>url = "http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3"
url = url[: url.find("&")]
print url
'http://www.domainname.com/page?CONTENT_ITEM_ID=1234'
</code></pre>
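<p>One caveat with the <code>find</code> approach: if the URL has no <code>&</code> at all, <code>find</code> returns -1 and the slice chops off the last character. A more defensive sketch using the standard library (Python 3 names shown; the equivalent functions live in <code>urlparse</code>/<code>urllib</code> on Python 2):</p>

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def keep_param(url, name):
    """Rebuild the URL, keeping only the query parameter `name`."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k == name]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = "http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3"
print(keep_param(url, "CONTENT_ITEM_ID"))
# -> http://www.domainname.com/page?CONTENT_ITEM_ID=1234
```

<p>This also behaves sensibly when CONTENT_ITEM_ID is the only parameter, since nothing is sliced off by position.</p>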
| 1 | 2008-11-03T14:33:32Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 258,798 | <pre><code>import re
url = 'http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3'
m = re.search('(.*?)&', url)
print m.group(1)
</code></pre>
| 0 | 2008-11-03T14:34:17Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 258,800 | <p>The quick and dirty solution is this:</p>
<pre><code>>>> "http://something.com/page?CONTENT_ITEM_ID=1234&param3".split("&")[0]
'http://something.com/page?CONTENT_ITEM_ID=1234'
</code></pre>
| 4 | 2008-11-03T14:34:34Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 258,810 | <p>Another option would be to use the split function, with & as a parameter. That way, you'd extract both the base url and both parameters.</p>
<pre><code> url.split("&")
</code></pre>
<p>returns a list with</p>
<pre><code> ['http://www.domainname.com/page?CONTENT_ITEM_ID=1234', 'param2', 'param3']
</code></pre>
| 3 | 2008-11-03T14:36:06Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 258,832 | <p>Look at the <a href="http://stackoverflow.com/questions/163009/urllib2-file-name">urllib2 file name</a> question for some discussion of this topic.</p>
<p>Also see the "<a href="http://stackoverflow.com/questions/229352/python-find-question">Python Find Question</a>" question.</p>
| 0 | 2008-11-03T14:41:39Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 258,993 | <p>This method isn't dependent on the position of the parameter within the url string. This could be refined, I'm sure, but it gets the point across.</p>
<pre><code>url = 'http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3'
parts = url.split('?')
id = dict(i.split('=') for i in parts[1].split('&') if '=' in i)['CONTENT_ITEM_ID']
new_url = parts[0] + '?CONTENT_ITEM_ID=' + id
</code></pre>
| 0 | 2008-11-03T15:31:04Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 259,054 | <p>Parsing URLs is never as simple as it seems to be, that's why there are the urlparse and urllib modules.</p>
<p>E.g.:</p>
<pre><code>import urllib
url ="http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3"
query = urllib.splitquery(url)
result = "?".join((query[0], query[1].split("&")[0]))
print result
'http://www.domainname.com/page?CONTENT_ITEM_ID=1234'
</code></pre>
<p>This is still not 100 % reliable, but much more so than splitting it yourself, because there are a lot of valid URL formats that you and I don't know about and will discover one day in error logs.</p>
| 1 | 2008-11-03T15:52:06Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 259,159 | <p>Use the <a href="http://docs.python.org/2/library/urlparse.html#urlparse.urlsplit" rel="nofollow">urlparse</a> module. Check this function:</p>
<pre><code>import urlparse
def process_url(url, keep_params=('CONTENT_ITEM_ID=',)):
parsed= urlparse.urlsplit(url)
filtered_query= '&'.join(
qry_item
for qry_item in parsed.query.split('&')
if qry_item.startswith(keep_params))
return urlparse.urlunsplit(parsed[:3] + (filtered_query,) + parsed[4:])
</code></pre>
<p>In your example:</p>
<pre><code>>>> process_url(a)
'http://www.domainname.com/page?CONTENT_ITEM_ID=1234'
</code></pre>
<p>This function has the added bonus that it's easier to use if you decide that you also want some more query parameters, or if the order of the parameters is not fixed, as in:</p>
<pre><code>>>> url='http://www.domainname.com/page?other_value=xx&param3&CONTENT_ITEM_ID=1234&param1'
>>> process_url(url, ('CONTENT_ITEM_ID', 'other_value'))
'http://www.domainname.com/page?other_value=xx&CONTENT_ITEM_ID=1234'
</code></pre>
| 14 | 2008-11-03T16:25:13Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 2,326,780 | <p>An ancient question, but still, I'd like to remark that query string parameters can also be separated by ';', not only '&'.</p>
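<p>A hand-rolled splitter that honours both separators might look like this (a sketch only; modern parsers have largely standardised on '&' alone):</p>

```python
import re

def split_params(query):
    # Both '&' and ';' were historically valid query-string separators.
    return re.split('[&;]', query)

print(split_params('CONTENT_ITEM_ID=1234;param2&param3'))  # ['CONTENT_ITEM_ID=1234', 'param2', 'param3']
```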
| 0 | 2010-02-24T14:43:26Z | [
"python",
"url",
"string"
] |
Slicing URL with Python | 258,746 | <p>I am working with a huge list of URL's. Just a quick question I have trying to slice a part of the URL out, see below:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234&param2&param3
</code></pre>
<p>How could I slice out:</p>
<pre><code>http://www.domainname.com/page?CONTENT_ITEM_ID=1234
</code></pre>
<p>Sometimes there is more than two parameters after the CONTENT_ITEM_ID and the ID is different each time, I am thinking it can be done by finding the first & and then slicing off the chars before that &, not quite sure how to do this tho.</p>
<p>Cheers</p>
| 8 | 2008-11-03T14:22:13Z | 11,576,768 | <p>Besides <em>urlparse</em> there is also <a href="https://github.com/gruns/furl/" rel="nofollow">furl</a>, which IMHO has a better API.</p>
| 0 | 2012-07-20T09:39:32Z | [
"python",
"url",
"string"
] |
Django: Overriding verbose_name for AutoField without dropping the model | 258,767 | <p>I am using <strong>0.97-pre-SVN-unknown</strong> release of Django.</p>
<p>I have a model for which I have not given any primary_key. Django, consequently, automatically provides an AutoField that is called "id". Everything's fine with that. But now, I have to change the "verbose_name" of that AutoField to something other than "id". I cannot override the "id" field the usual way, because that would require dropping/resetting the entire model and its data (which is strictly not an option). I cannot find another way around it. Does what I want even possible to achieve? If you may suggest any alternatives that would get me away with what I want without having to drop the model/table, I'd be happy.</p>
<p>Many Thanks.</p>
| 3 | 2008-11-03T14:27:08Z | 259,027 | <p>Look into the command-line options for <code>manage.py</code>; there's a command to dump all of the model data to JSON, and another command to load it back in from JSON. You can export all of your model data, add your new field to the model, then import your data back in. Just make sure that you set the <code>db_column</code> option to <code>'id'</code> so you don't break your existing data.</p>
<p><strong>Edit</strong>: Specifically, you want the commands <a href="http://docs.djangoproject.com/en/dev/ref/django-admin/#dumpdata" rel="nofollow"><code>dumpdata</code></a> and <a href="http://docs.djangoproject.com/en/dev/ref/django-admin/#loaddata-fixture-fixture" rel="nofollow"><code>loaddata</code></a>.</p>
| 2 | 2008-11-03T15:42:03Z | [
"python",
"django"
] |
Django: Overriding verbose_name for AutoField without dropping the model | 258,767 | <p>I am using <strong>0.97-pre-SVN-unknown</strong> release of Django.</p>
<p>I have a model for which I have not given any primary_key. Django, consequently, automatically provides an AutoField that is called "id". Everything's fine with that. But now, I have to change the "verbose_name" of that AutoField to something other than "id". I cannot override the "id" field the usual way, because that would require dropping/resetting the entire model and its data (which is strictly not an option). I cannot find another way around it. Does what I want even possible to achieve? If you may suggest any alternatives that would get me away with what I want without having to drop the model/table, I'd be happy.</p>
<p>Many Thanks.</p>
| 3 | 2008-11-03T14:27:08Z | 259,077 | <p>Hmm... and what about explicitly writing the <em>id</em> field in the model definition? Like this, for example:</p>
<pre><code>class Entry(models.Model):
    id = models.AutoField(verbose_name="custom name", primary_key=True)
# and other fields...
</code></pre>
<p>It doesn't require any underlying database changes.</p>
| 4 | 2008-11-03T15:57:31Z | [
"python",
"django"
] |
How to find out if a lazy relation isn't loaded yet, with SQLAlchemy? | 258,775 | <p>With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded?<br />
For example, given a lazy parent->children relation and an instance X of "parent", I'd like to know if "X.children" is already loaded, without triggering the query.</p>
| 8 | 2008-11-03T14:28:03Z | 261,191 | <p>I think you could look at the child's <code>__dict__</code> attribute dictionary to check if the data is already there or not.</p>
| 2 | 2008-11-04T07:48:26Z | [
"python",
"sqlalchemy"
] |
How to find out if a lazy relation isn't loaded yet, with SQLAlchemy? | 258,775 | <p>With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded?<br />
For example, given a lazy parent->children relation and an instance X of "parent", I'd like to know if "X.children" is already loaded, without triggering the query.</p>
| 8 | 2008-11-03T14:28:03Z | 14,335,831 | <p>Slightly neater than Haes answer (though it effectively does the same thing) is to use hasattr(), as in:</p>
<pre><code>>>> hasattr(X, 'children')
False
</code></pre>
| 3 | 2013-01-15T10:34:28Z | [
"python",
"sqlalchemy"
] |
How to find out if a lazy relation isn't loaded yet, with SQLAlchemy? | 258,775 | <p>With SQLAlchemy, is there a way to know beforehand whether a relation would be lazy-loaded?<br />
For example, given a lazy parent->children relation and an instance X of "parent", I'd like to know if "X.children" is already loaded, without triggering the query.</p>
| 8 | 2008-11-03T14:28:03Z | 25,011,704 | <p>You can get a list of all unloaded properties (both relations and columns) from <code>sqlalchemy.orm.attributes.instance_state(obj).unloaded</code>.</p>
<p>See: <a href="http://stackoverflow.com/questions/5795492/completing-object-with-its-relations-and-avoiding-unnecessary-queries-in-sqlalch">Completing object with its relations and avoiding unnecessary queries in sqlalchemy</a></p>
<p>An easier way is to use <code>inspect()</code>, which gives the same results:</p>
<pre><code>from sqlalchemy import inspect
from sqlalchemy.orm import lazyload
user = session.query(User).options(lazyload(User.articles)).first()
ins = inspect(user)
ins.unloaded  # <- set of properties that are not yet loaded
</code></pre>
| 6 | 2014-07-29T09:05:24Z | [
"python",
"sqlalchemy"
] |
python, functions running from a list and adding to a list through functions | 259,234 | <p>How do I run a function on a loop so all the results go straight into a list and is there a way to run a function which acts on all the values in a list?</p>
| 0 | 2008-11-03T16:46:57Z | 259,259 | <p>This example shows how to do it (run it in an interpreter)</p>
<pre><code>>>> def square(x):
...     return x*x
...
>>> a = [1,2,3,4,5,6,7,8,9]
>>> map(square,a)
[1, 4, 9, 16, 25, 36, 49, 64, 81]
</code></pre>
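<p>The same result with a list comprehension, the other common idiom (and in Python 3, where <code>map</code> returns an iterator rather than a list, often the more direct one):</p>

```python
def square(x):
    return x * x

a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
# The results go straight into a new list, one per input value.
squares = [square(x) for x in a]
print(squares)  # [1, 4, 9, 16, 25, 36, 49, 64, 81]
```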
| 0 | 2008-11-03T16:54:04Z | [
"python",
"function",
"list"
] |
python, functions running from a list and adding to a list through functions | 259,234 | <p>How do I run a function on a loop so all the results go straight into a list and is there a way to run a function which acts on all the values in a list?</p>
| 0 | 2008-11-03T16:46:57Z | 259,260 | <p>Your question needs clarification.</p>
<h3>run a function on a loop</h3>
<pre><code>new_list= [yourfunction(item) for item in a_sequence]
</code></pre>
<h3>run a function acting on all values in a list</h3>
<p>Your function should have some form of iteration in its code to process all items of a sequence, something like:</p>
<pre><code>def yourfunction(sequence):
for item in sequence:
        …
</code></pre>
<p>Then you just call it with a sequence (i.e. a list, a string, an iterator etc)</p>
<pre><code>yourfunction(range(10))
yourfunction("a string")
</code></pre>
<p>YMMV.</p>
| 1 | 2008-11-03T16:54:38Z | [
"python",
"function",
"list"
] |
python, functions running from a list and adding to a list through functions | 259,234 | <p>How do I run a function on a loop so all the results go straight into a list and is there a way to run a function which acts on all the values in a list?</p>
| 0 | 2008-11-03T16:46:57Z | 259,266 | <p>There's a couple of ways to run a function on a loop like that - you can either use a list comprehension</p>
<pre><code>test = list('asdf')
[function(x) for x in test]
</code></pre>
<p>and use that result</p>
<p>Or you could use the map function</p>
<pre><code>test = list('asdf')
map(function, test)
</code></pre>
<p>The first answer is more "pythonic", while the second is more functional. </p>
<p>EDIT: The second way is also a lot faster, as it's not running arbitrary code to call a function, but directly calling a function using <code>map</code>, which is implemented in C.</p>
| 6 | 2008-11-03T16:55:55Z | [
"python",
"function",
"list"
] |
python smtplib | 259,314 | <p>Hey I have a windows server running python CGI scripts and I'm having a little trouble with smtplib. The server is running python 2.1 (unfortunately and I can not upgrade it). Anyway I have the following code:</p>
<p><code><pre>
session = smtplib.SMTP("smtp-auth.ourhosting.com", 587)
session.login(smtpuser, smtppass)
</pre></code></p>
<p>and it's giving me this error:</p>
<pre>exceptions.AttributeError : SMTP instance has no attribute 'login' : </pre>
<p>I'm assuming this is because the login() method was added after python 2.1. so how do I fix this? </p>
<p>I have to either add the module by uploading the files to the same directory as the cgi script (though I believe smtplib is written in C and needs to be compiled which we can't do on this server)</p>
<p>OR</p>
<p>Do it whatever way is expected by the libsmtp in python 2.1</p>
<p>Thanks so much!</p>
| -1 | 2008-11-03T17:09:05Z | 259,324 | <blockquote>
<p>Do it whatever way is expected by the libsmtp in python 2.1</p>
</blockquote>
| 0 | 2008-11-03T17:10:47Z | [
"python",
"smtp",
"smtplib",
"python-2.1"
] |
python smtplib | 259,314 | <p>Hey I have a windows server running python CGI scripts and I'm having a little trouble with smtplib. The server is running python 2.1 (unfortunately and I can not upgrade it). Anyway I have the following code:</p>
<p><code><pre>
session = smtplib.SMTP("smtp-auth.ourhosting.com", 587)
session.login(smtpuser, smtppass)
</pre></code></p>
<p>and it's giving me this error:</p>
<pre>exceptions.AttributeError : SMTP instance has no attribute 'login' : </pre>
<p>I'm assuming this is because the login() method was added after python 2.1. so how do I fix this? </p>
<p>I have to either add the module by uploading the files to the same directory as the cgi script (though I believe smtplib is written in C and needs to be compiled which we can't do on this server)</p>
<p>OR</p>
<p>Do it whatever way is expected by the libsmtp in python 2.1</p>
<p>Thanks so much!</p>
| -1 | 2008-11-03T17:09:05Z | 259,432 | <p>login() was introduced in Python 2.2, unluckily for you! The only way to do it in Python 2.1's own smtplib would be to issue the AUTH commands manually, which wouldn't be much fun.</p>
<p>I haven't tested it fully but it seems Python 2.2's smtplib should more or less work on 2.1 if you copy it across as you describe (perhaps call it smtplib2.py). It's only a Python module, no C compilation should be necessary. However you will at least need to copy the hmac.py library it relies on from 2.2's lib as well. If you use a later Python version to steal from it starts requiring the email package too which might be more work.</p>
| 4 | 2008-11-03T18:01:51Z | [
"python",
"smtp",
"smtplib",
"python-2.1"
] |
How to extract frequency information from an input audio stream (using PortAudio)? | 259,451 | <p>I want to record sound (voice) using PortAudio (PyAudio) and output the corresponding sound wave on the screen. Hopeless as I am, I am unable to extract the frequency information from the audio stream so that I can draw it in Hz/time form.</p>
<p><hr /></p>
<p>Here's an example code snippet that records and plays recorded audio for five seconds, in case it helps any:</p>
<pre><code>p = pyaudio.PyAudio()
chunk = 1024
seconds = 5
stream = p.open(format=pyaudio.paInt16,
channels=1,
rate=44100,
input=True,
output=True)
for i in range(0, 44100 / chunk * seconds):
data = stream.read(chunk)
stream.write(data, chunk)
</code></pre>
<p>I wish to extract the needed information from the above variable "data". (Or use some other high-level approach with PortAudio or another library with Python bindings.)</p>
<p><hr /></p>
<p>I'd be very grateful for any help! Even vaguely related tidbits of audio-analyzing wisdom are appreciated. :)</p>
| 2 | 2008-11-03T18:07:45Z | 259,521 | <p>What you want is probably the Fourier transform of the audio data. There are several packages that can calculate it for you; <code>scipy</code> and <code>numpy</code> are two of them. It is often called the "Fast Fourier Transform" (FFT), but that is just the name of the algorithm.</p>
<p>Here is an example of its usage: <a href="https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py" rel="nofollow">https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/advanced/spectrum.py</a></p>
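<p>For instance, the dominant frequency of a chunk of samples can be located with numpy's FFT helpers. A sketch, using a synthetic 440 Hz test tone in place of real microphone data:</p>

```python
import numpy as np

rate = 44100                           # samples per second
t = np.arange(0, 0.25, 1.0 / rate)    # a quarter-second window
signal = np.sin(2 * np.pi * 440 * t)  # a pure 440 Hz tone

# Magnitude spectrum of the real-valued signal, plus the frequency
# (in Hz) that each FFT bin corresponds to.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)

print(freqs[np.argmax(spectrum)])  # ~440.0
```

<p>With the stream in the question, each chunk's bytes would first need converting to samples (for paInt16, something like <code>numpy.frombuffer(data, numpy.int16)</code>) before transforming.</p>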
| 4 | 2008-11-03T18:31:46Z | [
"python",
"voice",
"frequency",
"portaudio"
] |
How to extract frequency information from an input audio stream (using PortAudio)? | 259,451 | <p>I want to record sound (voice) using PortAudio (PyAudio) and output the corresponding sound wave on the screen. Hopeless as I am, I am unable to extract the frequency information from the audio stream so that I can draw it in Hz/time form.</p>
<p><hr /></p>
<p>Here's an example code snippet that records and plays recorded audio for five seconds, in case it helps any:</p>
<pre><code>p = pyaudio.PyAudio()
chunk = 1024
seconds = 5
stream = p.open(format=pyaudio.paInt16,
channels=1,
rate=44100,
input=True,
output=True)
for i in range(0, 44100 / chunk * seconds):
data = stream.read(chunk)
stream.write(data, chunk)
</code></pre>
<p>I wish to extract the needed information from the above variable "data". (Or use some other high-level approach with PortAudio or another library with Python bindings.)</p>
<p><hr /></p>
<p>I'd be very grateful for any help! Even vaguely related tidbits of audio-analyzing wisdom are appreciated. :)</p>
| 2 | 2008-11-03T18:07:45Z | 569,314 | <p>The Fourier Transform will not help you a lot if you want the analysis to be conducted in both the frequency and time domain. You might want to have a look at "Wavelet Transforms". There is a package called pywavelets...
<a href="http://www.pybytes.com/pywavelets/#discrete-wavelet-transform-dwt" rel="nofollow">http://www.pybytes.com/pywavelets/#discrete-wavelet-transform-dwt</a></p>
| 1 | 2009-02-20T11:52:06Z | [
"python",
"voice",
"frequency",
"portaudio"
] |
Credit card payments and notifications on the Google App Engine | 259,491 | <p>I ported gchecky to the google app engine. you can <a href="http://web2py.appspot.com/plugin_checkout" rel="nofollow">try it here</a></p>
<p>It implements both level 1 (cart submission) and level 2 (notifications from google checkout).</p>
<p>Is there any other payment option that works on the google app engine (paypal for example) and supports level 2 (notifications)?</p>
| 13 | 2008-11-03T18:22:17Z | 431,474 | <p>Paypal has a <a href="https://cms.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_api_soap_PayPalSOAPAPIArchitecture" rel="nofollow">SOAP interface</a>. You can certainly access that from within GAE--though you might run into timeout issues while waiting for the response.</p>
| 4 | 2009-01-10T18:17:17Z | [
"python",
"google-app-engine",
"web2py"
] |
Credit card payments and notifications on the Google App Engine | 259,491 | <p>I ported gchecky to the google app engine. you can <a href="http://web2py.appspot.com/plugin_checkout" rel="nofollow">try it here</a></p>
<p>It implements both level 1 (cart submission) and level 2 (notifications from google checkout).</p>
<p>Is there any other payment option that works on the google app engine (paypal for example) and supports level 2 (notifications)?</p>
| 13 | 2008-11-03T18:22:17Z | 1,136,350 | <p>Here's a <a href="http://blog.awarelabs.com/2008/paypal-ipn-python-code/" rel="nofollow">link</a> containing information about using PayPal IPN from AppEngine using Django.</p>
| 2 | 2009-07-16T09:09:14Z | [
"python",
"google-app-engine",
"web2py"
] |
Credit card payments and notifications on the Google App Engine | 259,491 | <p>I ported gchecky to the google app engine. you can <a href="http://web2py.appspot.com/plugin_checkout" rel="nofollow">try it here</a></p>
<p>It implements both level 1 (cart submission) and level 2 (notifications from google checkout).</p>
<p>Is there any other payment option that works on the google app engine (paypal for example) and supports level 2 (notifications)?</p>
| 13 | 2008-11-03T18:22:17Z | 3,628,409 | <p>I think you can have a look into the official toolkit from PayPal's X Platform <a href="http://code.google.com/p/paypalx-gae-toolkit/">http://code.google.com/p/paypalx-gae-toolkit/</a></p>
| 5 | 2010-09-02T15:03:41Z | [
"python",
"google-app-engine",
"web2py"
] |
Is it safe to make an old-style class into a new-style class using Multiple Inheritance? | 259,578 | <p>In a program that I'm writing, I wanted to make a ConfigParser that's read only so that it can safely be used globally. I didn't realize this, but apparently the SafeConfigParser is an old-style class, thus I had to subclass it like this:</p>
<pre><code>class ConstParser(SafeConfigParser, object):
"""This is a implementation of the SafeConfigParser that can't
write any values. This is to ensure that it can only be filled
once and won't get messy with multiple modules writing to it."""
def __init__(self, files, defaults={}):
super(ConstParser, self).__init__(defaults)
self.read(files)
def set(self, *args, **argd):
raise NotImplementedError()
def write(self, *args, **argd):
raise NotImplementedError()
def remove_option(self, *args, **argd):
raise NotImplementedError()
def remove_section(self, *args, **argd):
raise NotImplementedError()
</code></pre>
<p>If I didn't use object as a mixin, the call to SafeConfigParser's <code>__init__</code> method didn't work. Now, I'm sure that there are better ways to do what I want to do, but now I'm curious: is this ok to do in general?</p>
<p>I mean, I can't think of any reason why this would be a bad thing, but it still gives me bad feelings. Are they justified, or am I just being paranoid?</p>
| 7 | 2008-11-03T18:54:04Z | 259,729 | <p>Replace the call:</p>
<pre><code>super(ConstParser, self).__init__(defaults)
</code></pre>
<p>with:</p>
<pre><code>SafeConfigParser.__init__(self, defaults)
</code></pre>
<p>and it works just fine without multiple inheritance.</p>
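<p>The pattern in general terms (here <code>Base</code> is a stand-in for <code>SafeConfigParser</code>; the explicit base-class call works even when the base is an old-style class, where <code>super()</code> is unavailable):</p>

```python
class Base:
    def __init__(self, defaults):
        self.defaults = defaults

class ConstBase(Base):
    def __init__(self, defaults):
        # Call the base class directly instead of using super().
        Base.__init__(self, defaults)

    def set(self, *args, **kwargs):
        raise NotImplementedError()

c = ConstBase({'answer': '42'})
print(c.defaults)  # {'answer': '42'}
```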
| 2 | 2008-11-03T19:57:22Z | [
"python",
"oop"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,638 | <p>Like this:</p>
<pre><code>print name.split()[-1]
</code></pre>
| 1 | 2008-11-03T19:20:27Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,639 | <p>you'd probably want to use rsplit for this:</p>
<p>rsplit([sep [,maxsplit]])</p>
<p>Return a list of the words in the string, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done, the rightmost ones. If sep is not specified or None, any whitespace string is a separator. Except for splitting from the right, rsplit() behaves like split() which is described in detail below. New in version 2.4. </p>
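<p>Applied to this question that looks like the following (note, as other answers point out, that a multi-word surname still gets cut):</p>

```python
name = "Thomas Winter"
# Split on whitespace, at most once, from the right.
print(name.rsplit(None, 1))      # ['Thomas', 'Winter']
print(name.rsplit(None, 1)[-1])  # Winter

# The caveat: multi-word surnames are still split on the last space.
print("Oscar de la Hoya".rsplit(None, 1))  # ['Oscar de la', 'Hoya']
```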
| 0 | 2008-11-03T19:21:27Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,643 | <p>The problem with trying to split the names from a single input is that you won't get the full surname for people with spaces in their surname, and I don't believe you'll be able to write code to manage that completely.</p>
<p>I would recommend that you ask for the names separately if it is at all possible.</p>
| 15 | 2008-11-03T19:23:04Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,659 | <p>Splitting names is harder than it looks. Some names have two-word last names; some people will enter a first, middle, and last name; some names have two-word first names. The more reliable (or least unreliable) way to handle names is to always capture first and last name in separate fields. Of course this raises its own issues, like how to handle people with only one name, making sure it works for users that have a different ordering of name parts.</p>
<p>Names are hard, handle with care.</p>
| 2 | 2008-11-03T19:30:06Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,662 | <p><a href="http://stackoverflow.com/questions/159567/how-can-i-parse-the-first-middle-and-last-name-from-a-full-name-field-in-sql#159760">Here's how to do it in SQL</a>. But data normalization with this kind of thing is really a bear. I agree with Dave DuPlantis about asking for separate inputs.</p>
| 0 | 2008-11-03T19:30:41Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,694 | <p>You'll find that your key problem with this approach isn't a technical one, but a human one - different people write their names in different ways.</p>
<p>In fact, the terminology of "forename" and "surname" is itself flawed.</p>
<p>While many blended families use a hyphenated family name, such as Smith-Jones, there are some who just use both names separately, "Smith Jones" where both names are the family name.</p>
<p>Many European family names have multiple parts, such as "de Vere" and "van den Neiulaar". Sometimes these extras have important family history - for example, a prefix awarded by a king hundreds of years ago.</p>
<p>Side issue: I've capitalised these correctly for the people I'm referencing - "de" and "van den" don't get capital letters for some families, but do for others. </p>
<p>Conversely, many Asian cultures put the family name first, because the family is considered more important than the individual.</p>
<p>Last point - some people place great store in being "Junior" or "Senior" or "III" - and your code shouldn't treat those as the family name.</p>
<p>Also noting that there are a fair number of people who use a name that isn't the one bestowed by their parents, I've used the following scheme with some success:</p>
<p>Full Name (as normally written for addressing mail);
Family Name;
Known As (the name commonly used in conversation).</p>
<p>e.g:</p>
<p>Full Name: William Gates III; Family Name: Gates; Known As: Bill</p>
<p>Full Name: Soong Li; Family Name: Soong; Known As: Lisa</p>
| 57 | 2008-11-03T19:42:02Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,735 | <p>I would specify a standard format (some forms use them), such as "Please write your name in <em>First name, Surname</em> form".</p>
<p>It makes it easier for you, as names don't usually contain a comma. It also verifies that your users actually enter both first name and surname.</p>
| 0 | 2008-11-03T19:59:30Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>ok so basically I am asking the question of their name
I want this to be one input rather than Forename and Surname.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,804 | <p>Golden rule of data - don't aggregate too early - it is much easier to glue fields together than separate them. Most people also have a middle name which should be an optional field. Some people have a plethora of middle names. Some people only have <a href="https://stilgherrian.com/category/only-one-name/" rel="nofollow">one name</a>, one word. Some cultures commonly have a dictionary of middle names, paying homage to the family tree back to the Golgafrincham Ark landing.</p>
<p>You don't need a code solution here - you need a business rule.</p>
| 5 | 2008-11-03T20:18:25Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>OK, so basically I am asking the user for their name,
and I want this to be one input rather than separate Forename and Surname fields.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 259,809 | <p>An easy way to do exactly what you asked in python is </p>
<pre><code>name = "Thomas Winter"
LastName = name.split()[1]
</code></pre>
<p>(Note the parentheses on the call to split.)</p>
<p>split() creates a list where each element is from your original string, delimited by whitespace. You can now grab the second element using name.split()[1] or the last element using name.split()[-1]</p>
<p>However, as others said, unless you're SURE you're just getting a string like "First_Name Last_Name", there are a lot more issues involved. </p>
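To make that caveat concrete, compare <code>[1]</code> and <code>[-1]</code> on a two-word name versus a multi-word one:

```python
name = "Thomas Winter"
parts = name.split()
print(parts[1])     # Winter
print(parts[-1])    # Winter -- identical for a two-word name

name2 = "Jim Van Loon"
print(name2.split()[1])    # Van  -- probably not what you want
print(name2.split()[-1])   # Loon -- last word only, still not "Van Loon"
```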
| 3 | 2008-11-03T20:20:52Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>OK, so basically I am asking the user for their name,
and I want this to be one input rather than separate Forename and Surname fields.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 263,331 | <p>There are so many different variations in how people write their names, but here's a basic way to get the first/last name via regex.</p>
<pre><code>import re
p = re.compile(r'^(\s+)?(Mr(\.)?|Mrs(\.)?)?(?P<FIRST_NAME>.+)(\s+)(?P<LAST_NAME>.+)$', re.IGNORECASE)
m = p.match('Mr. Dingo Bat')
if m is not None:
first_name = m.group('FIRST_NAME')
last_name = m.group('LAST_NAME')
</code></pre>
| 1 | 2008-11-04T20:35:29Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>OK, so basically I am asking the user for their name,
and I want this to be one input rather than separate Forename and Surname fields.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 1,656,251 | <p>If you're trying to parse apart a human name in PHP, I recommend <a href="http://jonathonhill.net/2009-10-31/human-name-parsing-in-php/" rel="nofollow">Keith Beckman's nameparse.php script</a>.</p>
| 3 | 2009-11-01T02:45:02Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>OK, so basically I am asking the user for their name,
and I want this to be one input rather than separate Forename and Surname fields.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 2,862,586 | <p>This is a pretty old issue, but I found it while searching for a way to parse the pieces out of a name that had been globbed together.</p>
<p><a href="http://code.google.com/p/python-nameparser/" rel="nofollow">http://code.google.com/p/python-nameparser/</a></p>
| 4 | 2010-05-19T03:05:27Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>OK, so basically I am asking the user for their name,
and I want this to be one input rather than separate Forename and Surname fields.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 3,688,828 | <p>It's definitely a more complicated task than it appears on the surface. I wrote up some of the challenges as well as my algorithm for solving it on my blog. Be sure to check out my Google Code project for it if you want the latest version in PHP:</p>
<p><a href="http://www.onlineaspect.com/2009/08/17/splitting-names/" rel="nofollow">http://www.onlineaspect.com/2009/08/17/splitting-names/</a></p>
| 1 | 2010-09-10T22:28:26Z | [
"python",
"split"
] |
Splitting a person's name into forename and surname | 259,634 | <p>OK, so basically I am asking the user for their name,
and I want this to be one input rather than separate Forename and Surname fields.</p>
<p>Now is there any way of splitting this name? and taking just the last word from the "Sentence" e.g.</p>
<pre><code>name = "Thomas Winter"
print name.split()
</code></pre>
<p>and what would be output is just "Winter"</p>
| 17 | 2008-11-03T19:19:09Z | 9,305,240 | <p>This is how I do it in my application:</p>
<pre><code>def get_first_name(fullname):
firstname = ''
try:
firstname = fullname.split()[0]
except Exception as e:
print str(e)
return firstname
def get_last_name(fullname):
    lastname = ''
    try:
        # everything after the first word, so multi-word
        # surnames like "Van Loon" stay together
        lastname = ' '.join(fullname.split()[1:])
    except Exception as e:
        print str(e)
    return lastname
def get_last_word(string):
return string.split()[-1]
print get_first_name('Jim Van Loon')
print get_last_name('Jim Van Loon')
print get_last_word('Jim Van Loon')
</code></pre>
| 2 | 2012-02-16T03:45:20Z | [
"python",
"split"
] |
Does Python 2.5 include a package to natively transform an XML document? | 259,782 | <p>In my Python app, I have an XML document that I'd like to transform using my XSL file. I'm currently using xml.etree to generate the XML document, but I haven't found anything within Python 2.5 that will allow me to natively transform my XML document.</p>
<p>I've already found one library (<a href="http://xmlsoft.org/XSLT/" rel="nofollow">libxslt</a>) which can execute the transformation, but I figured Python would have a native library that achieves the desired result.</p>
<p>Any thoughts?</p>
| 3 | 2008-11-03T20:14:00Z | 259,818 | <p>There is no XSLT processor in Python 2.5.</p>
| 4 | 2008-11-03T20:22:57Z | [
"python",
"xml",
"xslt",
"elementtree"
] |
Does Python 2.5 include a package to natively transform an XML document? | 259,782 | <p>In my Python app, I have an XML document that I'd like to transform using my XSL file. I'm currently using xml.etree to generate the XML document, but I haven't found anything within Python 2.5 that will allow me to natively transform my XML document.</p>
<p>I've already found one library (<a href="http://xmlsoft.org/XSLT/" rel="nofollow">libxslt</a>) which can execute the transformation, but I figured Python would have a native library that achieves the desired result.</p>
<p>Any thoughts?</p>
| 3 | 2008-11-03T20:14:00Z | 260,148 | <p>If you have libxml2 and libxslt installed, then also install <a href="http://codespeak.net/lxml/" rel="nofollow">lxml</a>. It provides a nice, easy-to-use binding for libxml2 and libxslt, and it also implements the ElementTree API.</p>
<p>libxml2/libxslt also come with their own, much lower-level Python bindings, but lxml is much more straightforward and pythonic, and it seems to have great performance as well.</p>
| 5 | 2008-11-03T22:03:08Z | [
"python",
"xml",
"xslt",
"elementtree"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,075 | <p>The fundamental assumption is flawed, I think. You can't map hashes to regular expressions.</p>
| 0 | 2008-11-03T21:44:21Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,077 | <p>This is not possible to do with a regular hash table in any language. You'll either have to iterate through the entire keyset, attempting to match the key to your regex, or use a different data structure.</p>
<p>You should choose a data structure that is appropriate to the problem you're trying to solve. If you have to match against any arbitrary regular expression, I don't know of a good solution. If the class of regular expressions you'll be using is more restrictive, you might be able to use a data structure such as a <a href="http://en.wikipedia.org/wiki/Trie" rel="nofollow">trie</a> or <a href="http://en.wikipedia.org/wiki/Suffix_tree" rel="nofollow">suffix tree</a>.</p>
| 4 | 2008-11-03T21:44:41Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,079 | <p>I don't think it's even theoretically possible. What happens if someone passes in a string that matches more than one regular expression?</p>
<p>For example, what would happen if someone did:</p>
<pre><code>>>> regex_dict['FileNfoo']
</code></pre>
<p>How can something like that possibly be O(1)?</p>
| 0 | 2008-11-03T21:44:49Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,085 | <p>What happens if you have a dictionary such as</p>
<pre><code>regex_dict = { re.compile("foo.*"): 5, re.compile("f.*"): 6 }
</code></pre>
<p>In this case <code>regex_dict["food"]</code> could legitimately return either 5 or 6.</p>
<p>Even ignoring that problem, there's probably no way to do this efficiently with the regex module. Instead, what you'd need is an internal directed graph or tree structure.</p>
| 2 | 2008-11-03T21:46:56Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,114 | <p>In the general case, what you need is a lexer generator. It takes a bunch of regular expressions and compiles them into a recognizer. "lex" will work if you are using C. I have never used a lexer generator in Python, but there seem to be a few to choose from. Google shows <a href="http://www.dabeaz.com/ply/" rel="nofollow">PLY</a>, <a href="http://www.lava.net/~newsham/pyggy/" rel="nofollow">PyGgy</a> and <a href="http://margolis-yateley.org.uk/python/various/index.php" rel="nofollow">PyLexer</a>.</p>
<p>If the regular expressions all resemble each other in some way, then you may be able to take some shortcuts. We would need to know more about the ultimate problem that you are trying to solve in order to come up with any suggestions. Can you share some sample regular expressions and some sample data?</p>
<p>Also, how many regular expressions are you dealing with here? Are you sure that the naive approach <em>won't</em> work? As Rob Pike <a href="http://www.lysator.liu.se/c/pikestyle.html" rel="nofollow">once said</a>, "Fancy algorithms are slow when n is small, and n is usually small." Unless you have thousands of regular expressions, and thousands of things to match against them, and this is an interactive application where a user is waiting for you, you may be best off just doing it the easy way and looping through the regular expressions.</p>
| 4 | 2008-11-03T21:53:03Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,120 | <p>As other respondents have pointed out, it's not possible to do this with a hash table in constant time.</p>
<p>One approximation that might help is to use a technique called <a href="http://en.wikipedia.org/wiki/Ngram#n-grams_for_approximate_matching" rel="nofollow">"n-grams"</a>. Create an inverted index from n-character chunks of a word to the entire word. When given a pattern, split it into n-character chunks, and use the index to compute a scored list of matching words.</p>
<p>Even if you can't accept an approximation, in most cases this would still provide an accurate filtering mechanism so that you don't have to apply the regex to every key.</p>
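A rough Python sketch of that filtering idea, using trigrams and treating any shared n-gram as a candidate (the n-gram size and the all-or-nothing "scoring" here are simplifications of the scored-list approach described above):

```python
from collections import defaultdict

def ngrams(s, n=3):
    # all n-character chunks of s
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def build_index(keys, n=3):
    # inverted index: n-character chunk -> keys containing that chunk
    inv = defaultdict(set)
    for key in keys:
        for gram in ngrams(key, n):
            inv[gram].add(key)
    return inv

def candidates(inv, text, n=3):
    # any key sharing at least one n-gram with the text is a candidate;
    # only these need the expensive full regex/string comparison
    found = set()
    for gram in ngrams(text, n):
        found |= inv.get(gram, set())
    return found

index = build_index(["FileNotFound", "foo"])
print(candidates(index, "FileNotFoundException: file.x"))
```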
| 1 | 2008-11-03T21:54:01Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,421 | <p>It <em>may</em> be possible to get the regex compiler to do most of the work for you by concatenating the search expressions into one big regexp, separated by "|". A clever regex compiler might search for commonalities in the alternatives in such a case, and devise a more efficient search strategy than simply checking each one in turn. But I have no idea whether there are compilers which will do that.</p>
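Python's re module can be pushed in this direction: wrap each alternative in a uniquely named group, join with "|", and use <code>lastgroup</code> to see which alternative matched, so the engine makes one scan instead of n separate tries. A sketch (the group names <code>g0</code>, <code>g1</code>, ... are invented here, and the patterns must not define clashing group names of their own):

```python
import re

patterns = {r"foo.": 12, r"^FileN.*$": 35}

# one big alternation; each branch gets a unique group name
combined = re.compile("|".join(
    "(?P<g%d>%s)" % (i, p) for i, p in enumerate(patterns)
))
values = {"g%d" % i: v for i, v in enumerate(patterns.values())}

def lookup(s):
    # a single scan; m.lastgroup names the branch that matched
    m = combined.search(s)
    return values[m.lastgroup] if m else None

print(lookup("food"))                                          # 12
print(lookup("FileNotFoundException: file.x does not exist"))  # 35
```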
| 0 | 2008-11-04T00:13:40Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,591 | <p>It really depends on what these regexes look like. If you don't have a lot of regexes that will match almost anything, like '<code>.*</code>' or '<code>\d+</code>', and instead have regexes that <em>contain</em> mostly words and phrases or fixed patterns longer than 4 characters (e.g. '<code>a*b*c</code>' in <code>^\d+a\*b\*c:\s+\w+</code>), as in your examples, you can use this common trick that scales well to millions of regexes:</p>
<p>Build an inverted index for the regexes (rabin-karp-hash('fixed pattern') -> list of regexes containing 'fixed pattern'). Then at matching time, use Rabin-Karp hashing to compute sliding hashes and look up the inverted index, advancing one character at a time. You now have O(1) look-up for inverted-index non-matches and a reasonable O(k) time for matches, where k is the average length of the lists of regexes in the inverted index. k can be quite small (less than 10) for many applications. The quality of the inverted index (a false positive means a bigger k; a false negative means a missed match) depends on how well the indexer understands the regex syntax. If the regexes are generated by human experts, they can provide hints about the contained fixed patterns as well.</p>
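A simplified Python sketch of that scheme, using plain substring containment in place of a real Rabin-Karp rolling hash, and with the fixed patterns picked by hand rather than extracted automatically by an indexer:

```python
import re

# inverted index: required fixed substring -> regexes containing it
index = {
    "FileN": [re.compile(r"^FileN.*$")],
    "foo":   [re.compile(r"foo.")],
}

def match(line):
    hits = []
    for fixed, regexes in index.items():
        if fixed in line:              # cheap substring filter first
            for rx in regexes:         # full regex only on candidates
                if rx.search(line):
                    hits.append(rx.pattern)
    return hits

print(match("FileNotFoundException: file.x does not exist"))
```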
| 0 | 2008-11-04T01:57:30Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,886 | <p>There is a Perl module that does just this <a href="http://search.cpan.org/~davecross/Tie-Hash-Regex-1.02/lib/Tie/Hash/Regex.pm" rel="nofollow">Tie::Hash::Regex</a>.</p>
<pre><code>use Tie::Hash::Regex;
my %h;
tie %h, 'Tie::Hash::Regex';
$h{key} = 'value';
$h{key2} = 'another value';
$h{stuff} = 'something else';
print $h{key}; # prints 'value'
print $h{2}; # prints 'another value'
print $h{'^s'}; # prints 'something else'
print tied(%h)->FETCH(k); # prints 'value' and 'another value'
delete $h{k}; # deletes $h{key} and $h{key2};
</code></pre>
| 2 | 2008-11-04T04:31:23Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 260,942 | <p>A special case of this problem came up in the 70s AI languages oriented around deductive databases. The keys in these databases could be patterns with variables -- like regular expressions without the * or | operators. They tended to use fancy extensions of trie structures for indexes. See krep*.lisp in Norvig's <a href="http://norvig.com/paip/" rel="nofollow">Paradigms of AI Programming</a> for the general idea.</p>
| 1 | 2008-11-04T05:02:57Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 261,070 | <p>This is definitely possible, as long as you're using 'real' regular expressions. A textbook regular expression is something that can be recognized by a <a href="http://en.wikipedia.org/wiki/Deterministic_finite_state_machine" rel="nofollow">deterministic finite state machine</a>, which primarily means you can't have back-references in there.</p>
<p>There's a property of regular languages that "the union of two regular languages is regular", meaning that you can recognize an arbitrary number of regular expressions at once with a single state machine. The state machine runs in O(1) time with respect to the number of expressions (it runs in O(n) time with respect to the length of the input string, but hash tables do too).</p>
<p>Once the state machine completes you'll know which expressions matched, and from there it's easy to look up values in O(1) time.</p>
| 4 | 2008-11-04T06:30:01Z | [
"python",
"regex",
"dictionary",
"hash"
] |
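Python's `re` module is a backtracking engine rather than a true DFA, so it does not literally give the O(1)-in-number-of-expressions behavior described (a DFA-based engine such as RE2 would). Still, the union idea can be approximated by compiling all keys into one alternation with named groups and dispatching on which group matched, so a single scan decides among all keys. The key/value table below is hypothetical:

```python
import re

# Hypothetical key/value table; each pattern gets a synthetic group name.
# Note: the patterns themselves contain no capturing groups, which keeps
# m.lastgroup pointing at the alternative that matched.
rules = {'k0': (r'foo.', 12), 'k1': (r'^FileN.*$', 35)}
combined = re.compile('|'.join(
    '(?P<%s>%s)' % (name, pattern) for name, (pattern, _) in rules.items()))

def lookup(s):
    # One scan of the input decides among all keys at once.
    m = combined.search(s)
    if m is None:
        raise KeyError(s)
    return rules[m.lastgroup][1]

print(lookup('food'))                                          # 12
print(lookup('FileNotFoundException: file.x does not exist'))  # 35
```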
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 261,755 | <p>If you have a small set of possible inputs, you can cache the matches as they appear in a second dict and get O(1) for the cached values.</p>
<p>If the set of possible inputs is too big to cache but not infinite, either, you can just keep the last N matches in the cache (check Google for "LRU maps" - least recently used).</p>
<p>If you can't do this, you can try to chop down the number of regexps you have to try by checking a prefix or somesuch.</p>
| 1 | 2008-11-04T12:39:42Z | [
"python",
"regex",
"dictionary",
"hash"
] |
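A minimal version of the caching idea above, using the standard library's LRU cache on top of the naive linear scan (the table contents are hypothetical):

```python
import re
from functools import lru_cache

# The regex table itself; a cache miss falls back to the O(n) linear
# scan, but the most recent 1024 distinct inputs are answered in O(1).
TABLE = [(re.compile(r'foo.'), 12),
         (re.compile(r'^FileN.*$'), 35)]

@lru_cache(maxsize=1024)
def lookup(line):
    for rx, value in TABLE:
        if rx.search(line):
            return value
    return None

lookup('food')                    # miss: linear scan over TABLE
print(lookup('food'))             # 12, hit: served from the LRU cache
print(lookup.cache_info().hits)   # 1
```

This pays off exactly when the input lines repeat, as the answer notes; for unbounded distinct inputs the cache only bounds memory, not the scan cost.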
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 266,620 | <p>What you want to do is very similar to what is supported by xrdb. They only support a fairly minimal notion of globbing however.</p>
<p>Internally you can implement a larger family of regular languages than theirs by storing your regular expressions as a character trie. </p>
<ul>
<li>single characters just become trie nodes. </li>
<li>.'s become wildcard insertions covering all children of the current trie node. </li>
<li>*'s become back links in the trie to the node at the start of the previous item. </li>
<li>[a-z] ranges insert the same subsequent child nodes repeatedly under each of the characters in the range. With care, while inserts/updates may be somewhat expensive the search can be linear in the size of the string. With some placeholder stuff the common combinatorial explosion cases can be kept under control. </li>
<li>(foo)|(bar) nodes become multiple insertions</li>
</ul>
<p>This doesn't handle regexes that occur at arbitrary points in the string, but that can be modeled by wrapping your regex with .* on either side.</p>
<p>Perl has a couple of Text::Trie -like modules you can raid for ideas. (Heck I think I even wrote one of them way back when)</p>
| 4 | 2008-11-05T20:52:08Z | [
"python",
"regex",
"dictionary",
"hash"
] |
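A toy version of such a pattern trie, supporting only literal characters and the `.` wildcard (the `*`, ranges, and alternation handling the answer describes are left out for brevity):

```python
class PatternTrie:
    """Toy pattern trie: literal characters plus the '.' wildcard only."""

    def __init__(self):
        self.children = {}   # char or '.' -> PatternTrie
        self.value = None    # set on nodes where a pattern ends

    def insert(self, pattern, value):
        node = self
        for ch in pattern:
            node = node.children.setdefault(ch, PatternTrie())
        node.value = value

    def match(self, s, i=0):
        """Collect values of all patterns matching a prefix of s[i:]."""
        hits = [] if self.value is None else [self.value]
        if i < len(s):
            # Try the literal edge first, then the wildcard edge
            # (dict.fromkeys dedupes in case s[i] is itself '.').
            for key in dict.fromkeys((s[i], '.')):
                child = self.children.get(key)
                if child is not None:
                    hits.extend(child.match(s, i + 1))
        return hits

trie = PatternTrie()
trie.insert('foo.', 12)
trie.insert('f..d', 7)
print(sorted(trie.match('food')))   # [7, 12]
print(trie.match('bar'))            # []
```

Shared prefixes are walked once, so each input character is examined once per live trie branch rather than once per stored pattern.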
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 267,747 | <p>I created this exact data structure for a project once. I implemented it naively, as you suggested. I did make two immensely helpful optimizations, which may or may not be feasible for you, depending on the size of your data:</p>
<ul>
<li>Memoizing the hash lookups</li>
<li>Pre-seeding the memoization table (not sure what to call this... warming up the cache?)</li>
</ul>
<p>To avoid the problem of multiple keys matching the input, I gave each regex key a priority and the highest priority was used.</p>
| 1 | 2008-11-06T05:55:42Z | [
"python",
"regex",
"dictionary",
"hash"
] |
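A small sketch of that design: regexes carry an explicit priority, lookups are memoized in a plain dict, and the dict can be pre-seeded ("warmed") with lines you expect to see often. The rules below are hypothetical.

```python
import re

# Rules ordered by explicit priority (highest wins on multiple matches).
RULES = sorted([
    (10, re.compile(r'^FileN.*$'), 35),
    (5,  re.compile(r'foo.'), 12),
], key=lambda rule: -rule[0])

memo = {}  # line -> value of the highest-priority matching regex

def lookup(line):
    if line not in memo:
        memo[line] = next(
            (value for _, rx, value in RULES if rx.search(line)), None)
    return memo[line]

# Pre-seed the memo table with lines we expect to be frequent:
for common in ('food', 'FileNotFoundException: file.x does not exist'):
    lookup(common)

print(lookup('food'))   # 12, answered straight from the memo dict
```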
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 816,047 | <p>What about the following:</p>
<pre><code>import re

class redict(dict):
def __init__(self, d):
dict.__init__(self, d)
def __getitem__(self, regex):
r = re.compile(regex)
mkeys = filter(r.match, self.keys())
for i in mkeys:
yield dict.__getitem__(self, i)
</code></pre>
<p>It's basically a subclass of the dict type in Python. With this you can supply a regular expression as a key, and the values of all keys that match this regex are returned in an iterable fashion using yield.</p>
<p>With this you can do the following:</p>
<pre><code>>>> keys = ["a", "b", "c", "ab", "ce", "de"]
>>> vals = range(0,len(keys))
>>> red = redict(zip(keys, vals))
>>> for i in red[r"^.e$"]:
... print i
...
5
4
>>>
</code></pre>
| 3 | 2009-05-03T01:18:14Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 5,835,497 | <p>OK, I had a very similar requirement: many lines of different syntax, basically remark lines, lines with codes used in a smart-card formatting process, and descriptor lines for keys and secret codes. In every case, I think the "model" pattern/action approach is the best way to recognize and process many such lines.<br>
I'm using <code>C++/CLI</code> to develop my assembly, named <code>LanguageProcessor.dll</code>; the core of this library is a lex_rule class that basically contains:</p>
<ul>
<li>a Regex member</li>
<li>an event member </li>
</ul>
<p>The constructor loads the regex string and calls the code needed to build the event on the fly using <code>DynamicMethod</code>, <code>Emit</code>, and reflection. The assembly also contains other classes, such as meta and object, that construct and instantiate objects from the simple names of the publisher and receiver classes; the receiver class provides the action handlers for each matched rule.</p>
<p>Later, I have a class named <code>fasterlex_engine</code> that builds a Dictionary<code>&lt;Regex, action_delegate&gt;</code>
and loads the definitions from an array in order to run.</p>
<p>The project is at an advanced point, but I'm still building it today. I will try to improve the running performance by replacing the sequential pass over every pair for each input line with some mechanism that looks up the dictionary directly by regexp, like:</p>
<pre><code>map_rule[gcnew Regex("[a-zA-Z]")];
</code></pre>
<p>Here are some segments of my code:</p>
<pre><code>public ref class lex_rule: ILexRule
{
private:
Exception ^m_exception;
Regex ^m_pattern;
//BACKSTORAGE delegates (learned this while tracing through .NET)
yy_lexical_action ^m_yy_lexical_action;
yy_user_action ^m_yy_user_action;
public:
virtual property String ^short_id;
private:
void init(String ^_short_id, String ^well_formed_regex);
public:
lex_rule();
lex_rule(String ^_short_id,String ^well_formed_regex);
virtual event yy_lexical_action ^YY_RULE_MATCHED
{
virtual void add(yy_lexical_action ^_delegateHandle)
{
if(nullptr==m_yy_lexical_action)
m_yy_lexical_action=_delegateHandle;
}
virtual void remove(yy_lexical_action ^)
{
m_yy_lexical_action=nullptr;
}
virtual long raise(String ^id_rule, String ^input_string, String ^match_string, int index)
{
long lReturn=-1L;
if(m_yy_lexical_action)
lReturn=m_yy_lexical_action(id_rule,input_string, match_string, index);
return lReturn;
}
}
};
</code></pre>
<p>Now the fasterlex_engine class that execute a lot of pattern/action pair:</p>
<pre><code>public ref class fasterlex_engine
{
private:
Dictionary<String^,ILexRule^> ^m_map_rules;
public:
fasterlex_engine();
fasterlex_engine(array<String ^,2>^defs);
Dictionary<String ^,Exception ^> ^load_definitions(array<String ^,2> ^defs);
void run();
};
</code></pre>
<p>And to round out this topic, some code from my .cpp file:</p>
<p>This code creates a constructor invoker matching a parameter signature:</p>
<pre><code>inline Exception ^object::builder(ConstructorInfo ^target, array<Type^> ^args)
{
try
{
DynamicMethod ^dm=gcnew DynamicMethod(
"dyna_method_by_totem_motorist",
Object::typeid,
args,
target->DeclaringType);
ILGenerator ^il=dm->GetILGenerator();
il->Emit(OpCodes::Ldarg_0);
il->Emit(OpCodes::Call,Object::typeid->GetConstructor(Type::EmptyTypes)); // invokes the base constructor
il->Emit(OpCodes::Ldarg_0);
il->Emit(OpCodes::Ldarg_1);
il->Emit(OpCodes::Newobj, target); // Newobj creates the object and invokes the constructor defined in target
il->Emit(OpCodes::Ret);
method_handler=(method_invoker ^) dm->CreateDelegate(method_invoker::typeid);
}
catch (Exception ^e)
{
return e;
}
return nullptr;
}
</code></pre>
<p>This code attaches any handler function (static or not) to deal with the callback raised when an input string matches:</p>
<pre><code>Delegate ^connection_point::hook(String ^receiver_namespace,String ^receiver_class_name, String ^handler_name)
{
Delegate ^d=nullptr;
    if(connection_point::waitfor_hook<=m_state) // if the state is 0, 1, 2 or more => try to hook
{
try
{
Type ^tmp=meta::_class(receiver_namespace+"."+receiver_class_name);
m_handler=tmp->GetMethod(handler_name);
m_receiver_object=Activator::CreateInstance(tmp,false);
d=m_handler->IsStatic?
Delegate::CreateDelegate(m_tdelegate,m_handler):
Delegate::CreateDelegate(m_tdelegate,m_receiver_object,m_handler);
m_add_handler=m_connection_point->GetAddMethod();
array<Object^> ^add_handler_args={d};
m_add_handler->Invoke(m_publisher_object, add_handler_args);
++m_state;
m_exception_flag=false;
}
catch(Exception ^e)
{
m_exception_flag=true;
throw gcnew Exception(e->ToString()) ;
}
}
return d;
}
</code></pre>
<p>Finally, the code that calls the lexer engine:</p>
<pre><code>array<String ^,2> ^defs=gcnew array<String^,2> {/* shortID pattern namespace class handler */
{"LETRAS", "[A-Za-z]+" ,"prueba", "manejador", "procesa_directriz"},
{"INTS", "[0-9]+" ,"prueba", "manejador", "procesa_comentario"},
{"REM", "--[^\\n]*" ,"prueba", "manejador", "nullptr"}
}; //[3,5]
//Use the special identifier "nullptr" so the system assigns the event a default handler that does nothing
fasterlex_engine ^lex=gcnew fasterlex_engine();
Dictionary<String ^,Exception ^> ^map_error_list=lex->load_definitions(defs);
lex->run();
</code></pre>
| 0 | 2011-04-29T17:50:33Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 10,190,682 | <p>The problem has nothing to do with regular expressions - you'd have the same problem with a dictionary with keys as functions of lambdas. So the problem you face is figuring is there a way of classifying your functions to figure which will return true or not and that isn't a search problem because f(x) is not known in general before hand.</p>
<p>Distributed programming, or caching answer sets on the assumption that there are common values of x, may help.</p>
<p>-- DM</p>
| 0 | 2012-04-17T11:51:38Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 16,875,839 | <p>Here's an efficient way to do it by combining the keys into a single compiled regexp, and so not requiring any looping over key patterns. It abuses the <code>lastindex</code> to find out which key matched. (It's a shame regexp libraries don't let you tag the terminal state of the DFA that a regexp is compiled to, or this would be less of a hack.)</p>
<p>The expression is compiled once, and will produce a fast matcher that doesn't have to search sequentially. Common prefixes are compiled together in the DFA, so each character in the key is matched once, not many times, unlike some of the other suggested solutions. You're effectively compiling a mini lexer for your keyspace.</p>
<p>This map isn't extensible (can't define new keys) without recompiling the regexp, but it can be handy for some situations.</p>
<pre><code># Regular expression map
# Abuses match.lastindex to figure out which key was matched
# (i.e. to emulate extracting the terminal state of the DFA of the regexp engine)
# Mostly for amusement.
# Richard Brooksby, Ravenbrook Limited, 2013-06-01
import re
class ReMap(object):

    def __init__(self, items):
        if not items:
            items = [(r'epsilon^', None)] # Match nothing
        key_patterns = []
        self.lookup = {}
        index = 1
        for key, value in items:
            # Ensure there are no capturing parens in the key, because
            # that would mess up match.lastindex
            key_patterns.append('(' + re.sub(r'\((?!\?:)', '(?:', key) + ')')
            self.lookup[index] = value
            index += 1
        self.keys_re = re.compile('|'.join(key_patterns))

    def __getitem__(self, key):
        m = self.keys_re.match(key)
        if m:
            return self.lookup[m.lastindex]
        raise KeyError(key)

if __name__ == '__main__':
    remap = ReMap([(r'foo.', 12), (r'FileN.*', 35)])
    print remap['food']
    print remap['foot in my mouth']
    print remap['FileNotFoundException: file.x does not exist']
</code></pre>
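<p>Since the map above can't be extended without recompiling, one possible workaround is a wrapper that rebuilds the alternation on every insert. This is a hypothetical sketch under that assumption, not part of the original answer, and each insert pays the full recompilation cost, so it only makes sense when inserts are rare relative to lookups:</p>

```python
import re

class GrowableReMap(object):
    """Like ReMap above, but rebuilds the combined regexp on each insert."""

    def __init__(self):
        self._items = []
        self._re = None

    def add(self, pattern, value):
        self._items.append((pattern, value))
        # Recompile the whole alternation; capturing parens in keys are
        # rewritten to non-capturing so lastindex stays one group per key.
        self._re = re.compile('|'.join(
            '(' + re.sub(r'\((?!\?:)', '(?:', k) + ')' for k, _ in self._items))

    def __getitem__(self, key):
        m = self._re.match(key) if self._re is not None else None
        if m:
            return self._items[m.lastindex - 1][1]
        raise KeyError(key)
```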
| 3 | 2013-06-01T18:20:58Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Hashtable/dictionary/map lookup with regular expressions | 260,056 | <p>I'm trying to figure out if there's a reasonably efficient way to perform a lookup in a dictionary (or a hash, or a map, or whatever your favorite language calls it) where the keys are regular expressions and strings are looked up against the set of keys. For example (in Python syntax):</p>
<pre><code>>>> regex_dict = { re.compile(r'foo.') : 12, re.compile(r'^FileN.*$') : 35 }
>>> regex_dict['food']
12
>>> regex_dict['foot in my mouth']
12
>>> regex_dict['FileNotFoundException: file.x does not exist']
35
</code></pre>
<p>(Obviously the above example won't work as written in Python, but that's the sort of thing I'd like to be able to do.)</p>
<p>I can think of a naive way to implement this, in which I iterate over all of the keys in the dictionary and try to match the passed in string against them, but then I lose the O(1) lookup time of a hash map and instead have O(n), where n is the number of keys in my dictionary. This is potentially a big deal, as I expect this dictionary to grow very large, and I will need to search it over and over again (actually I'll need to iterate over it for every line I read in a text file, and the files can be hundreds of megabytes in size).</p>
<p>Is there a way to accomplish this, without resorting to O(n) efficiency?</p>
<p>Alternatively, if you know of a way to accomplish this sort of a lookup in a database, that would be great, too.</p>
<p>(Any programming language is fine -- I'm using Python, but I'm more interested in the data structures and algorithms here.)</p>
<p>Someone pointed out that more than one match is possible, and that's absolutely correct. Ideally in this situation I'd like to return a list or tuple containing all of the matches. I'd settle for the first match, though.</p>
<p>I can't see O(1) being possible in that scenario; I'd settle for anything less than O(n), though. Also, the underlying data structure could be anything, but the basic behavior I'd like is what I've written above: lookup a string, and return the value(s) that match the regular expression keys.</p>
| 18 | 2008-11-03T21:39:38Z | 16,878,309 | <p>@rptb1 you don't have to avoid capturing groups, because you can count them with the compiled pattern's <code>groups</code> attribute. Like this:</p>
<pre><code># Regular expression map
# Abuses match.lastindex to figure out which key was matched
# (i.e. to emulate extracting the terminal state of the DFA of the regexp engine)
# Mostly for amusement.
# Richard Brooksby, Ravenbrook Limited, 2013-06-01
import re
class ReMap(object):

    def __init__(self, items):
        if not items:
            items = [(r'epsilon^', None)] # Match nothing
        self.re = re.compile('|'.join('('+k+')' for (k,v) in items))
        self.lookup = {}
        index = 1
        for key, value in items:
            self.lookup[index] = value
            index += re.compile(key).groups + 1

    def __getitem__(self, key):
        m = self.re.match(key)
        if m:
            return self.lookup[m.lastindex]
        raise KeyError(key)

def test():
    remap = ReMap([(r'foo.', 12),
                   (r'.*([0-9]+)', 99),
                   (r'FileN.*', 35),
                   ])
    print remap['food']
    print remap['foot in my mouth']
    print remap['FileNotFoundException: file.x does not exist']
    print remap['there were 99 trombones']
    print remap['food costs $18']
    print remap['bar']

if __name__ == '__main__':
    test()
</code></pre>
<p>Sadly very few RE engines actually compile the regexps down to machine code, although it's not especially hard to do. I suspect there's an order of magnitude performance improvement waiting for someone to make a really good RE JIT library.</p>
| 2 | 2013-06-01T23:32:46Z | [
"python",
"regex",
"dictionary",
"hash"
] |
Most efficient way to search the last x lines of a file in python | 260,273 | <p>I have a file and I don't know how big it's going to be (it could be quite large, but the size will vary greatly). I want to search the last 10 lines or so to see if any of them match a string. I need to do this as quickly and efficiently as possible and was wondering if there's anything better than:</p>
<pre><code>s = "foo"
last_bit = fileObj.readlines()[-10:]
for line in last_bit:
    if line == s:
        print "FOUND"
</code></pre>
| 29 | 2008-11-03T23:01:50Z | 260,282 | <p>read the last few KB of the file, and split that into lines to return only the last 10.</p>
<p>it's quite unlikely that the start of that chunk will fall on a line boundary, but you'll discard the first, partial line anyway.</p>
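<p>A sketch of that idea (an illustrative implementation of this answer's suggestion with an invented helper name; it assumes the last 10 lines fit inside the tail chunk):</p>

```python
import os

def last_lines(path, n=10, chunk_size=4096):
    # Read only a tail chunk instead of the whole file.
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        f.seek(max(0, size - chunk_size))
        chunk = f.read()
    # The chunk probably starts mid-line; splitlines() keeps that partial
    # first line, but taking only the last n discards it anyway.
    return [line.decode('utf-8', 'replace') for line in chunk.splitlines()[-n:]]
```

If individual lines can be longer than <code>chunk_size</code>, you would grow the chunk and re-read until it contains at least n+1 newlines.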
| 0 | 2008-11-03T23:05:34Z | [
"python",
"file",
"search"
] |