jQuery Image Upload - CodeIgniter
Question: I use the jQuery file upload plugin from <http://blueimp.github.com/jQuery-File-Upload/>.
I get the thumbnail, and after clicking the upload button I get the progress
bar, but then I get 'SyntaxError: JSON.parse: unexpected character'.
View page:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>jQuery File Upload Demo</title>
<link rel="stylesheet" href="http://twitter.github.com/bootstrap/1.4.0/bootstrap.min.css">
<link rel="stylesheet" href="http://blueimp.github.com/Bootstrap-Image-Gallery/bootstrap-image-gallery.min.css">
<!--[if lt IE 7]><link rel="stylesheet" href="http://blueimp.github.com/Bootstrap-Image-Gallery/bootstrap-ie6.min.css"><![endif]-->
<link rel="stylesheet" href="http://blueimp.github.com/jQuery-File-Upload/jquery.fileupload-ui.css">
<style type="text/css">body {padding-top: 80px;}</style>
<meta name="description" content="File Upload widget with multiple file selection, drag&drop support, progress bar and preview images for jQuery. Supports cross-domain, chunked and resumable file uploads. Works with any server-side platform (Google App Engine, PHP, Python, Ruby on Rails, Java, etc.) that supports standard HTML form file uploads.">
</head>
<body>
<div class="container">
<?php echo form_open_multipart('upload/do_upload', array('id'=>'fileupload')); ?>
<div class="row">
<div class="span16 fileupload-buttonbar">
<div class="progressbar fileupload-progressbar fade"><div style="width:0%;"></div></div>
<span class="btn success fileinput-button">
<span>Add files...</span>
<input type="file" name="userfile[]" multiple>
</span>
<button type="submit" class="btn primary start">Start upload</button>
<button type="reset" class="btn info cancel">Cancel upload</button>
<button type="button" class="btn danger delete">Delete selected</button>
<input type="checkbox" class="toggle">
</div>
</div>
<br>
<div class="row">
<div class="span16">
<table class="zebra-striped"><tbody class="files"></tbody></table>
</div>
</div>
</form>
</div>
<!-- gallery-loader is the loading animation container -->
<div id="gallery-loader"></div>
<!-- gallery-modal is the modal dialog used for the image gallery -->
<div id="gallery-modal" class="modal hide fade">
<div class="modal-header">
<a href="#" class="close">×</a>
<h3 class="title"></h3>
</div>
<div class="modal-body"></div>
<div class="modal-footer">
<a class="btn primary next">Next</a>
<a class="btn info prev">Previous</a>
<a class="btn success download" target="_blank">Download</a>
</div>
</div>
<script>
var fileUploadErrors = {
maxFileSize: 'File is too big',
minFileSize: 'File is too small',
acceptFileTypes: 'Filetype not allowed',
maxNumberOfFiles: 'Max number of files exceeded',
uploadedBytes: 'Uploaded bytes exceed file size',
emptyResult: 'Empty file upload result'
};
</script>
<script id="template-upload" type="text/html">
{% for (var i=0, files=o.files, l=files.length, file=files[0]; i<l; file=files[++i]) { %}
<tr class="template-upload fade">
<td class="preview"><span class="fade"></span></td>
<td class="name">{%=file.name%}</td>
<td class="size">{%=o.formatFileSize(file.size)%}</td>
{% if (file.error) { %}
<td class="error" colspan="2"><span class="label important">Error</span> {%=fileUploadErrors[file.error] || file.error%}</td>
{% } else if (o.files.valid && !i) { %}
<td class="progress"><div class="progressbar"><div style="width:0%;"></div></div></td>
<td class="start">{% if (!o.options.autoUpload) { %}<button class="btn primary">Start</button>{% } %}</td>
{% } else { %}
<td colspan="2"></td>
{% } %}
<td class="cancel">{% if (!i) { %}<button class="btn info">Cancel</button>{% } %}</td>
</tr>
{% } %}
</script>
<script id="template-download" type="text/html">
{% for (var i=0, files=o.files, l=files.length, file=files[0]; i<l; file=files[++i]) { %}
<tr class="template-download fade">
{% if (file.error) { %}
<td></td>
<td class="name">{%=file.name%}</td>
<td class="size">{%=o.formatFileSize(file.size)%}</td>
<td class="error" colspan="2"><span class="label important">Error</span> {%=fileUploadErrors[file.error] || file.error%}</td>
{% } else { %}
<td class="preview">{% if (file.thumbnail_url) { %}
<a href="{%=file.url%}" title="{%=file.name%}" rel="gallery"><img src="{%=file.thumbnail_url%}"></a>
{% } %}</td>
<td class="name">
<a href="{%=file.url%}" title="{%=file.name%}" rel="{%=file.thumbnail_url&&'gallery'%}">{%=file.name%}</a>
</td>
<td class="size">{%=o.formatFileSize(file.size)%}</td>
<td colspan="2"></td>
{% } %}
<td class="delete">
<button class="btn danger" data-type="{%=file.delete_type%}" data-url="{%=file.delete_url%}">Delete</button>
<input type="checkbox" name="delete" value="1">
</td>
</tr>
{% } %}
</script>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<!-- The jQuery UI widget factory, can be omitted if jQuery UI is already included -->
<script src="http://blueimp.github.com/jQuery-File-Upload/vendor/jquery.ui.widget.js"></script>
<!-- The Templates and Load Image plugins are included for the FileUpload user interface -->
<script src="http://blueimp.github.com/JavaScript-Templates/tmpl.min.js"></script>
<script src="http://blueimp.github.com/JavaScript-Load-Image/load-image.min.js"></script>
<!-- Bootstrap Modal and Image Gallery are not required, but included for the demo -->
<script src="http://twitter.github.com/bootstrap/1.4.0/bootstrap-modal.min.js"></script>
<script src="http://blueimp.github.com/Bootstrap-Image-Gallery/bootstrap-image-gallery.min.js"></script>
<!-- The Iframe Transport is required for browsers without support for XHR file uploads -->
<script src="http://blueimp.github.com/jQuery-File-Upload/jquery.iframe-transport.js"></script>
<script src="http://blueimp.github.com/jQuery-File-Upload/jquery.fileupload.js"></script>
<script src="http://blueimp.github.com/jQuery-File-Upload/jquery.fileupload-ui.js"></script>
<script src="http://blueimp.github.com/jQuery-File-Upload/application.js"></script>
<!-- The XDomainRequest Transport is included for cross-domain file deletion for IE8+ -->
<!--[if gte IE 8]><script src="http://blueimp.github.com/jQuery-File-Upload/cors/jquery.xdr-transport.js"></script><![endif]-->
</body>
</html>
Controller page:
class upload extends CI_Controller {

    function __construct()
    {
        parent::__construct();
        $this->load->helper(array('form', 'url'));
    }

    function index()
    {
        $this->load->view('index');
    }

    function do_upload()
    {
        $config['upload_path'] = './uploads/'; // server directory
        $config['allowed_types'] = 'gif|jpg|png'; // by extension, will check whether it is an image
        $config['max_size'] = '1000'; // in KB
        $config['max_width'] = '1024';
        $config['max_height'] = '768';
        $this->load->library('upload', $config);
        $this->load->library('Multi_upload');
        $files = $this->multi_upload->go_upload();
        if ( ! $files )
        {
            $error = array('error' => $this->upload->display_errors());
            $this->load->view('index', $error);
        }
        else
        {
            $data = array('upload_data' => $files);
            $this->load->view('upload_success', $data);
        }
    }
}
Any help?
Answer: I also got this error, even when the files were uploading successfully. The
issue in my case was what I was returning. More info can be found here
<https://github.com/blueimp/jQuery-File-Upload/wiki/Setup> in the section
labelled **Using jQuery File Upload (UI version) with a custom server-side
upload handler**.
It basically says: _"Note that the response should always be a JSON object
containing a files array even if only one file is uploaded."_ You should still
pass a `files` array back even if you get a server-side error.
Here is an example of my upload function (the key part is the `echo
json_encode` calls):
function upload_file($component_files_id = null, $thumb_width = 400)
{
    $config['upload_path'] = ($component_files_id) ? './documents/component_files/'.$component_files_id : './documents/image_uploads';
    if (!file_exists($config['upload_path']))
        @mkdir($config['upload_path'], 0777, true);
    $config['allowed_types'] = 'gif|jpg|png|pdf|doc|docx|docm|odt|xls|xlsx|xlsm|ods|csv';
    $config['max_size'] = '10000'; # in KB
    $config['max_width'] = '5000';
    $config['max_height'] = '5000';
    $this->load->library('upload', $config);
    if (!$this->upload->do_upload())
    {
        $error = $this->upload->display_errors('','');
        if (is_ajax()) {
            $file['error'] = $error;
            echo json_encode(array('files' => array($file)));
        }
        else
            set_message($error, 'error', true);
    }
    else
    {
        $file = $this->upload->data();
        $file['is_image'] = ($file['is_image'] == '1') ? 'TRUE' : 'FALSE';
        $file['updated_by'] = get_user('user_id');
        $file['created_by'] = get_user('user_id');
        if ($component_files_id)
            $file['component_files_id'] = $component_files_id;
        // save the file details to the database
        $file_id = $this->page_model->save_file($file);
        if ($file['is_image'] == 'TRUE') {
            $thumb_width = ($component_files_id) ? 290 : $thumb_width;
            $this->_create_thumbnail($file_id, $thumb_width);
        }
        // set the data for the JSON array
        $info = new stdClass(); // explicit object; the original relied on implicit stdClass creation
        $info->id = $file_id;
        $info->name = $file['file_name'];
        $info->size = $file['file_size'];
        $info->type = $file['file_type'];
        if ($file['is_image'] == 'TRUE') {
            $info->url = base_url().'files/image/'.$file_id;
            $info->thumbnail_url = base_url().'files/image/'.$file_id.'/thumb';
        }
        else {
            $info->url = base_url().'files/download/'.$file_id;
            $info->thumbnail_url = base_url().'images/document-icon.png';
        }
        $info->delete_url = base_url().'files/delete_document/'.$file_id;
        $info->delete_type = 'DELETE';
        $files['files'] = array($info);
        header('Content-type: text/html');
        echo json_encode($files);
    }
}
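For reference, the wiki requirement quoted above means a success response and an error response should both carry a top-level `files` array. The field names below follow the blueimp UI templates shown in the question; the values and URLs are illustrative only:

```json
{
  "files": [
    {
      "name": "picture.jpg",
      "size": 841,
      "url": "/uploads/picture.jpg",
      "thumbnail_url": "/uploads/thumbs/picture.jpg",
      "delete_url": "/upload/delete/picture.jpg",
      "delete_type": "DELETE"
    }
  ]
}
```

On failure, the same shape with an `error` key per file, e.g. `{"files": [{"name": "picture.jpg", "error": "Filetype not allowed"}]}`. Returning anything else (a bare string, raw PHP error output, or HTML) is what triggers the 'SyntaxError: JSON.parse: unexpected character' on the client.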
|
AttributeError: 'list' object has no attribute 'has_key' in App Engine
Question: I am having some issues with App Engine's bulkloader. Below I have
inserted the bulkloader.yaml, hs_transformers.py, and the error log. Any idea
as to what is generating this error? My `hs_transformer` function works if I
return a single entity (just an entity, not a list with one entity in it, that
also throws the error), but when I try to return a list of entities this error
occurs. According to app engine's documents I should be able to return a list
of entities.
.yaml file:
python_preamble:
- import: re
- import: base64
- import: google.appengine.ext.bulkload.transform
- import: google.appengine.ext.bulkload.bulkloader_wizard
- import: google.appengine.ext.db
- import: google.appengine.api.datastore
- import: hs_transformers
- import: datetime

transformers:
- kind: HBO
  connector: csv
  property_map:
    - property: __key__
      external_name: swfServerID
      import_transform: hs_transformers.string_null_converter
    - property: IP_address
      external_name: IP
      import_transform: hs_transformers.string_null_converter
    - property: name
      external_name: swfServer
      import_transform: hs_transformers.swf_server_converter
    - property: last_checkin_date
      external_name: clockStampOfLastCheckin
      import_transform: hs_transformers.clock_stamp_of_last_checkin_converter
    # - property: last_update
    #   external_name: clockStampOfLastUpdate
    #   import_transform: transform
    - property: form_factor
      external_name: formFactor
      import_transform: hs_transformers.string_null_converter
    - property: serial_number
      external_name: serialNumber
      import_transform: hs_transformers.string_null_converter
    - property: allow_reverse_SSH
      external_name: allowReverseSSH
      import_transform: hs_transformers.boolean_converter
    - property: insight_account
      external_name: FK_insightAccountID
      import_transform: hs_transformers.integer_converter
    - property: version
      external_name: ver
      import_transform: hs_transformers.string_null_converter
  post_import_function: hs_transformers.post_hbo
hs_transformers.py:
import logging

from google.appengine.ext import db
# Contact and HBOContact are the model classes from shared.datastore (per the log output)
from shared.datastore import Contact, HBOContact

def post_hbo(input_dict, entity_instance, bulkload_state):
    return_entities = []
    model_key = db.Key.from_path("Contact", 1)
    logging.error("MODEL KEY " + str(model_key))
    logging.error("MODEL KEY TYPE " + str(type(model_key)))
    keys = db.allocate_ids(model_key, 1)
    logging.error("KEYS " + str(keys))
    logging.error("KEYS TYPE " + str(type(keys)))
    id = keys[0]
    logging.error("ID " + str(id))
    logging.error("ID TYPE " + str(type(id)))
    contact_key = db.Key.from_path("Contact", id)
    logging.error("CONTACT KEY " + str(contact_key))
    logging.error("CONTACT KEY TYPE " + str(type(contact_key)))
    hbo_key = db.Key.from_path("HBO", input_dict["swfServerID"])
    logging.error("HBO KEY " + str(hbo_key))
    logging.error("HBO KEY TYPE " + str(type(hbo_key)))
    contact = Contact(key=contact_key)
    map = HBOContact()
    map.hbo = hbo_key
    map.contact = contact_key
    return_entities.append(contact)
    return_entities.append(map)
    logging.error("CONTACT KEY AGAIN? " + str(contact.key()))
    logging.error("CONTACT TYPE " + str(type(contact)))
    logging.error("MAP TYPE " + str(type(map)))
    logging.error("RETURN LIST " + str(return_entities))
    return return_entities
And lastly, the error log:
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\Jack Frost>cd..
C:\Users>cd..
C:\>cd "Program Files (x86)"
C:\Program Files (x86)>cd "Google App Engine SDK"
C:\Program Files (x86)\Google App Engine SDK>python appcfg.py upload_data --url=http://bulkloader-testing.appspot.com/remote_api --config_file="C:\Users\Jack Frost\Eclipse Workspace\Headsprout\GAE 1.27.2012\src\utilities\bulkloader\bulkloader.yaml" --filename="C:\Users\Jack Frost\Eclipse Workspace\Headsprout\GAE 1.27.2012\src\utilities\bulkloader\csv_files\smallhbos.csv" --kind=HBO
Uploading data records.
[INFO ] Logging to bulkloader-log-20120131.160426
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20120131.160426.sql3
[INFO ] Connecting to bulkloader-testing.appspot.com/remote_api
[INFO ] Starting import; maximum 10 entities per post
2012-01-31 16:04:27,135 ERROR hs_transformers.py:66 type object 'datetime.datetime' has no attribute 'datetime'
2012-01-31 16:04:27,137 ERROR hs_transformers.py:17 MODEL KEY ahRzfmJ1bGtsb2FkZXItdGVzdGluZ3INCxIHQ29udGFjdBgBDA
2012-01-31 16:04:27,138 ERROR hs_transformers.py:18 MODEL KEY TYPE <class 'google.appengine.api.datastore_types.Key'>
2012-01-31 16:04:27,461 ERROR hs_transformers.py:20 KEYS (16031L, 16031L)
2012-01-31 16:04:27,463 ERROR hs_transformers.py:21 KEYS TYPE <type 'tuple'>
2012-01-31 16:04:27,463 ERROR hs_transformers.py:23 ID 16031
2012-01-31 16:04:27,464 ERROR hs_transformers.py:24 ID TYPE <type 'long'>
2012-01-31 16:04:27,466 ERROR hs_transformers.py:27 CONTACT KEY ahRzfmJ1bGtsb2FkZXItdGVzdGluZ3IOCxIHQ29udGFjdBiffQw
2012-01-31 16:04:27,466 ERROR hs_transformers.py:28 CONTACT KEY TYPE <class 'google.appengine.api.datastore_types.Key'>
2012-01-31 16:04:27,467 ERROR hs_transformers.py:30 HBO KEY ahRzfmJ1bGtsb2FkZXItdGVzdGluZ3IKCxIDSEJPIgEzDA
2012-01-31 16:04:27,467 ERROR hs_transformers.py:31 HBO KEY TYPE <class 'google.appengine.api.datastore_types.Key'>
2012-01-31 16:04:27,469 ERROR hs_transformers.py:42 CONTACT KEY AGAIN? ahRzfmJ1bGtsb2FkZXItdGVzdGluZ3IOCxIHQ29udGFjdBiffQw
2012-01-31 16:04:27,469 ERROR hs_transformers.py:43 CONTACT TYPE <class 'shared.datastore.Contact'>
2012-01-31 16:04:27,470 ERROR hs_transformers.py:44 MAP TYPE <class 'shared.datastore.HBOContact'>
2012-01-31 16:04:27,470 ERROR hs_transformers.py:46 RETURN LIST [<shared.datastore.Contact object at 0x0000000003DBBB00>, <shared.datastore.HBOContact object at 0x0000000003DBBC18>]
[ERROR ] [WorkerThread-0] WorkerThread:
Traceback (most recent call last):
File "C:\Program Files (x86)\Google App Engine SDK\google\appengine\tools\adaptive_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "C:\Program Files (x86)\Google App Engine SDK\google\appengine\tools\bulkloader.py", line 764, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "C:\Program Files (x86)\Google App Engine SDK\google\appengine\tools\bulkloader.py", line 933, in _TransferItem
self.content = self.request_manager.EncodeContent(self.rows)
File "C:\Program Files (x86)\Google App Engine SDK\google\appengine\tools\bulkloader.py", line 1394, in EncodeContent
entity = loader.create_entity(values, key_name=key, parent=parent)
File "C:\Program Files (x86)\Google App Engine SDK\google\appengine\ext\bulkload\bulkloader_config.py", line 446, in create_entity
self.__track_max_id(entity)
File "C:\Program Files (x86)\Google App Engine SDK\google\appengine\ext\bulkload\bulkloader_config.py", line 420, in __track_max_id
elif not entity.has_key():
AttributeError: 'list' object has no attribute 'has_key'
[INFO ] [WorkerThread-1] Backing off due to errors: 1.0 seconds
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in WorkerThread-0: 'list' object has no attribute 'has_key'
[INFO ] 9 entities total, 0 previously transferred
[INFO ] 0 entities (2364 bytes) transferred in 1.5 seconds
[INFO ] Some entities not successfully transferred
> Also thought I would paste what code.google.com has to say about the
> post_import_function:
>
> > post_import_function(input_dict, instance, bulkload_state_copy)
> >
> > Your function must return one of the following: None, which means to skip
> > importing this record; a single entity (usually the instance argument that
> > was passed in); or a list of multiple entities to be imported.
>
> When I comment out all code in my post_import_transform function and just
> write return None I still get the same error; however, this is contradictory
> to code.google.com.
<http://code.google.com/appengine/docs/python/tools/uploadingdata.html>
Answer: Glancing through the relevant code, this looks like a bug.
Your post-import function is run within
[`dict_to_entity`](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/ext/bulkload/bulkloader_config.py#131),
which simply returns whatever your function returns.
[`create_entity`](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/ext/bulkload/bulkloader_config.py#430)
feeds whatever `dict_to_entity` returns into
[`__track_max_id`](http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/ext/bulkload/bulkloader_config.py#406),
which doesn't seem to properly account for a list or None.
I'd suggest you file this as a bug in the [App Engine Issue
tracker](http://code.google.com/p/googleappengine/issues/entry).
Note that you could fix this pretty easily in your local SDK. Basically change
`__track_max_id` to look something like:
def __track_max_id(self, entity):
    """Check the entity to see if it has a numeric ID higher than any seen so far.

    High IDs are stored in self.high_ids[path-to-entity-kind].
    They are not tracked if self.increment_id is None.

    Args:
      entity: An entity with a key, a list of entities, or None.
    """
    if not self.increment_id:
        return
    if not entity:
        return
    # Normalize: a post_import_function may return a single entity or a list.
    if isinstance(entity, datastore.Entity):
        entities = [entity]
    else:
        entities = entity
    for entity in entities:
        if not entity.key():
            continue
        key = entity.key()
        key_id = key.id()
        if not key_id:
            continue
        path = tuple(key.to_path()[:-1])
        if self.high_ids.get(path, 0) < key_id:
            self.high_ids[path] = key_id
|
Using a grammar parser for Python and constructing files from the tree
Question: I have a custom made grammar for an interpreted language and I am looking for
advice on a parser which will create a tree which I can query. From the
structure I would like to be able to generate code in the interpreted
language. Most grammar parsers that I have seen validate already existing
code. The second part of my question is should the grammar be abstracted to
the point that the Python code will substitute symbols in the tree for actual
code terminology? Ideally, I would love to be able to query a root symbol and
have returned all the symbols which fall under that root, and so forth, all
the way down to the terminal symbols.
Any advice on this process or my vocabulary regarding it would be very
helpful. Thank you.
Answer: I'd use ANTLR. Version 3 (current) supports [generating Python
code](http://www.antlr.org/wiki/display/ANTLR3/Antlr3PythonTarget). It will
generate an Abstract Syntax Tree (AST) automatically during parsing, which you
can then traverse. An important part of this will be annotating your grammar
with which tokens are to be treated as subtrees (e.g. operators).
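Independent of the parser generator you pick, the "query a root symbol and get everything beneath it" operation the question asks for is a plain depth-first traversal of the tree. A minimal sketch with a hypothetical `Node` class (this is not ANTLR's API, just the shape of the idea):

```python
class Node:
    """A minimal parse-tree node: a symbol name plus child nodes."""
    def __init__(self, symbol, children=None):
        self.symbol = symbol
        self.children = children or []

def symbols_under(node):
    """Yield every symbol in the subtree rooted at `node`, depth-first,
    starting with the root itself and ending at the terminal symbols."""
    yield node.symbol
    for child in node.children:
        for sym in symbols_under(child):
            yield sym

# expr -> (term -> NUMBER), op
tree = Node('expr', [Node('term', [Node('NUMBER')]), Node('op')])
print(list(symbols_under(tree)))  # ['expr', 'term', 'NUMBER', 'op']
```

An ANTLR-generated AST exposes the same structure (each tree node has a token type/text and children), so the same traversal pattern applies; code generation is then a matter of emitting text at each node during the walk.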
|
Task Manager Module
Question: I was wondering if there is a module that allows a program to see what tasks
are running. For example, if I am running Google Chrome, Python IDLE, and the
program, it should see all 3. (It is most important that it can see itself.)
Answer: [**psutil**](http://code.google.com/p/psutil/)
> psutil is a module providing an interface for retrieving information on all
> running processes and system utilization (CPU, disk, memory, network) in a
> portable way by using Python.
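A small sketch of what that looks like in practice (psutil is third-party, `pip install psutil`; the exact `Process` accessors have changed between psutil versions, so treat this as illustrative):

```python
import os
import psutil  # third-party: pip install psutil

def running_process_names():
    """Return a dict mapping pid -> process name for all visible processes."""
    procs = {}
    for p in psutil.process_iter():
        try:
            procs[p.pid] = p.name()
        except psutil.NoSuchProcess:
            continue  # process exited between enumeration and inspection
    return procs

procs = running_process_names()
# The listing includes the current interpreter itself:
print(os.getpid() in procs)
```

Since the listing includes every visible process, "seeing itself" falls out for free: the interpreter's own `os.getpid()` is always one of the keys.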
|
Emacs Python-inferior shell not showing prompt after matplotlib show() command
Question: I've been experimenting with numpy and matplotlib and have stumbled across
a bug when running Python from the Emacs inferior shell.
When I send the .py file to the shell interpreter, I can run commands after the
code has executed; the ">>>" prompt appears fine. However, after I invoke
matplotlib's show command on a plot, the shell just hangs and the prompt never
reappears.
>>> plt.plot(x,u_k[1,:]);
[<matplotlib.lines.Line2D object at 0x0000000004A9A358>]
>>> plt.show();
I am running the traditional CPython implementation under Emacs 23.3 with
Fabian Gallina's python.el v0.23.1 on Windows 7.
A similar question has been raised here under the i-python platform: [running
matplotlib or enthought.mayavi.mlab from a py-shell inside emacs on
windows](http://stackoverflow.com/questions/4701607/running-matplotlib-or-
enthought-mayavi-mlab-from-a-py-shell-inside-emacs-on-wind)
**UPDATE: I have duplicated the problem on a fresh installation of Win 7 x64
with the stock Python 2.7.2 binaries from the Python website, numpy 1.6.1 and
matplotlib 1.1.0, on Emacs 23.3 and 23.4 for Windows.** There must be a bug
somewhere in the Emacs shell.
Answer: I think there are two ways to do it.
1. Use IPython. Then you can use the `-pylab` option. I don't use Fabian Gallina's python.el, but I guess you will need something like this:
(setq python-shell-interpreter-args "-pylab")
Please read the documentation of python.el.
2. You can manually activate interactive mode with [ion](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.ion):
>>> from matplotlib import pyplot as plt
>>> plt.ion()
>>> plt.plot([1,2,3])
[<matplotlib.lines.Line2D object at 0x20711d0>]
>>>
|
Python - What is the fastest way i can copy files from a source folder to multiple destinations
Question: The source `/tmp/src` contains files a, b, c, d, and the destinations are
'/one' and '/two'. I want to copy files a, b, c, d to both destinations, with
something like:
source = '/tmp/src'
destinations = ['/one', '/two']
def copy_files_multiple_dest(source, destinations):
Right? Now, how would I loop through all the destinations?
Answer: how about something like:
import os
import shutil

source = '/tmp/src/'
destinations = []

def copy_files_multiple_dest(source, destinations):
    sfiles = os.listdir(source)  # list of all files in source
    for f in sfiles:
        for dest in destinations:
            shutil.copy(os.path.join(source, f), dest)
I'm not sure it's the **fastest**, but it should do the job.
|
join with a separator added at the end when not empty
Question: I need to append all elements in `list_` to a `string`; at the end I need to
add a suffix. A dot '.' has to separate all elements:
list_ = args[f(n) : f(n+1)]
if list_:
    string += '.' + '.'.join(list_) + '.' + suffix  # works except when list_ is empty
else:
    string += '.' + suffix
# list_ isn't used after this
Can I rewrite this in a simpler way, in one line? If `join` added a separator
after each element, it would be just this:
string += '.' + '.'.join(args[f(n) : f(n+1)]) + '.' + suffix
Edit
I just learned that:
Slices are copies even if they are never assigned to: [Does Python do slice-
by-reference on strings?](http://stackoverflow.com/questions/5722006/does-
python-do-slice-by-reference-on-strings)
But islice may be even worse since it iterates through the start of the list:
[itertools.islice compared to list
slice](http://stackoverflow.com/questions/2738096/itertools-islice-compared-
to-list-slice)
Some alternatives are discussed here: [Avoiding unnecessary slice copying in
Python](http://stackoverflow.com/questions/2328171/avoiding-unnecessary-slice-
copying-in-python)
Answer: I would go with this (updated to reflect edits to the question):
'.'.join([''] + args[f(n):f(n+1)] + [suffix])
_EDIT_ : taking inspiration from sblom's and unutbu's answers, I might
_actually_ do this:
from itertools import chain, islice
string = '.'.join(chain([string], islice(args, f(n), f(n+1)), [suffix]))
if I were concerned about the memory cost of slicing `args`. You can use a
tuple `(string,)` or a list `[string]` for the string and the suffix; given
that they're one element each, there's no significant difference in memory
usage or execution time, and since they're not being stored, you don't have to
worry about mutability. I find the list syntax a little cleaner.
However: I'm not sure if Python actually creates a new list object for a slice
that is only going to be used, not assigned to. If it doesn't, then given that
`args` is a proper list, using `islice` over `[f(n):f(n+1)]` doesn't save you
much of anything, and in that case I'd just go with the simple approach (up
top). If `args` were a generator or other lazily evaluated iterable with a
very large number of elements, _then_ `islice` might be worth it.
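To make the two variants concrete, here is how both behave, including the empty-slice case (the names `args`, `string`, and `suffix` follow the question; the slice bounds stand in for `f(n)` and `f(n+1)`):

```python
from itertools import chain, islice

args = ['a', 'b', 'c', 'd']
string = 'base'
suffix = 'txt'

# List-concatenation form: the leading '' supplies the first dot.
result = string + '.'.join([''] + args[1:3] + [suffix])
print(result)  # base.b.c.txt

# islice form avoids materializing a copy of the slice of args.
result2 = '.'.join(chain([string], islice(args, 1, 3), [suffix]))
print(result2)  # base.b.c.txt

# An empty slice degenerates correctly to just the suffix,
# matching the question's else-branch.
print(string + '.'.join([''] + args[2:2] + [suffix]))  # base.txt
```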
|
Constantly looking for user input in Python
Question: How would I write a Python program that would always be looking for user
input. I think I would want to have a variable equal to the input and then
something different would happen based on what that variable equaled. So if
the variable were "w" then it would execute a certain command and keep doing
that until it received another input like "d" Then something different would
happen but it wouldn't stop until you hit enter.
Answer: If you want to **constantly** look for user input, you'll need
[**multithreading**](http://docs.python.org/py3k/library/threading.html).
Example:
import threading
import queue

def console(q):
    while 1:
        cmd = input('> ')
        q.put(cmd)
        if cmd == 'quit':
            break

def action_foo():
    print('--> action foo')

def action_bar():
    print('--> action bar')

def invalid_input():
    print('---> Unknown command')

def main():
    cmd_actions = {'foo': action_foo, 'bar': action_bar}
    cmd_queue = queue.Queue()
    dj = threading.Thread(target=console, args=(cmd_queue,))
    dj.start()
    while 1:
        cmd = cmd_queue.get()
        if cmd == 'quit':
            break
        action = cmd_actions.get(cmd, invalid_input)
        action()

main()
As you'll see, this will get your messages a little mixed up, something like:
> foo
> --> action foo
bar
> --> action bar
cat
> --> Unknown command
quit
That's because two threads are writing to standard output at the same time. To
sync them, you'll need a
[`lock`](http://docs.python.org/py3k/library/threading.html#lock-objects):
import threading
import queue

def console(q, lock):
    while 1:
        input()   # After pressing Enter you'll be in "input mode"
        with lock:
            cmd = input('> ')
        q.put(cmd)
        if cmd == 'quit':
            break

def action_foo(lock):
    with lock:
        print('--> action foo')
        # other actions

def action_bar(lock):
    with lock:
        print('--> action bar')

def invalid_input(lock):
    with lock:
        print('--> Unknown command')

def main():
    cmd_actions = {'foo': action_foo, 'bar': action_bar}
    cmd_queue = queue.Queue()
    stdout_lock = threading.Lock()
    dj = threading.Thread(target=console, args=(cmd_queue, stdout_lock))
    dj.start()
    while 1:
        cmd = cmd_queue.get()
        if cmd == 'quit':
            break
        action = cmd_actions.get(cmd, invalid_input)
        action(stdout_lock)

main()
Ok, now it's better:
# press Enter
> foo
--> action foo
# press Enter
> bar
--> action bar
# press Enter
> cat
--> Unknown command
# press Enter
> quit
Notice that you'll need to press `Enter` before typing a command to enter in
"input mode".
|
PyQt - how to replace QString with sip API 2
Question: Please show me how to replace this code:
import sip
sip.setapi("QString", 2)
...
text = QString.fromLatin1("<p>Character: <span style=\"font-size: 16pt; font-family: %1\">").arg(self.displayFont.family()) + \
QChar(key) + \
QString.fromLatin1("</span><p>Value: 0x") + \
QString.number(key, 16)
and
if QChar(self.lastKey).category() != QChar.NoCategory:
    self.characterSelected.emit(QString(QChar(self.lastKey)))
with sip API 2 Python equivalent. It says "NameError: global name 'QString' is
not defined" because I use Python strings instead. Thank you.
[SOLVED]
text = ('<p>Character: <span style="font-size: 16pt; font-family: %s">%s</span>
<p>Value: %#x' % (self.displayFont.family(), unichr(key), key))
and
if unicodedata.category(unichr(self.lastKey)) != 'Cn':
    self.characterSelected.emit(unichr(self.lastKey))
Answer: Switching to the v2 API for `QString` removes the string-related Qt classes so
that Python strings can be used everywhere instead.
The "sip API 2 Python equivalent" is therefore just normal Python string
handling:
>>> text = ('<p>Character: <span style="font-size: 16pt; '
... 'font-family: %s">%s</span><p>Value: %#x' %
... (font.family(), unichr(key), key))
>>>
>>> print text
<p>Character: <span style="font-size: 16pt; font-family: Sans Serif">A</span><p>Value: 0x41
|
How to cast object in Python
Question: I have two classes (let's call them Working and ReturnStatement) which I can't
modify, but I want to extend both of them with logging. The trick is that
Working's method returns a ReturnStatement object, so the new MutantWorking
object also returns a ReturnStatement unless I can cast it to
MutantReturnStatement. In code:
# these classes can't be changed
class ReturnStatement(object):
    def act(self):
        print "I'm a ReturnStatement."

class Working(object):
    def do(self):
        print "I am Working."
        return ReturnStatement()

# these classes should wrap the original ones
class MutantReturnStatement(ReturnStatement):
    def act(self):
        print "I'm wrapping ReturnStatement."
        return ReturnStatement().act()

class MutantWorking(Working):
    def do(self):
        print "I am wrapping Working."
        # !!! this is not working, I'd need that casting working !!!
        return (MutantReturnStatement) Working().do()

rs = MutantWorking().do()  # I can use MutantWorking just like Working
print "--"  # just to separate output
rs.act()  # this must be MutantReturnStatement.act(), I need the overloaded method
The expected result:
I am wrapping Working.
I am Working.
--
I'm wrapping ReturnStatement.
I'm a ReturnStatement.
Is it possible to solve the problem? I'm also curious if the problem can be
solved in PHP, too. Unless I get a working solution I can't accept the answer,
so please write working code to get accepted.
Answer: There is no casting as the other answers already explained. You can make
subclasses or make modified new types with the extra functionality using
**decorators**.
Here's a complete example (credit to [Understanding Python
decorators](http://stackoverflow.com/questions/739654/understanding-python-
decorators)). You do not need to modify your original classes. In my example
the original class is called Working.
# decorator for logging
def logging(func):
    def wrapper(*args, **kwargs):
        print func.__name__, args, kwargs
        res = func(*args, **kwargs)
        return res
    return wrapper

# this is some example class you do not want to/can not modify
class Working:
    def Do(c):
        print("I am working")
    def pr(c, printit):  # other example method
        print(printit)
    def bla(c):  # other example method
        c.pr("saybla")

# this is how to make a new class with some methods logged:
class MutantWorking(Working):
    pr = logging(Working.pr)
    bla = logging(Working.bla)
    Do = logging(Working.Do)

h = MutantWorking()
h.bla()
h.pr("Working")
h.Do()
h=MutantWorking()
h.bla()
h.pr("Working")
h.Do()
this will print
h.bla()
bla (<__main__.MutantWorking instance at 0xb776b78c>,) {}
pr (<__main__.MutantWorking instance at 0xb776b78c>, 'saybla') {}
saybla
pr (<__main__.MutantWorking instance at 0xb776b78c>, 'Working') {}
Working
Do (<__main__.MutantWorking instance at 0xb776b78c>,) {}
I am working
In addition, I would like to understand why you can not modify a class. Did
you try? Because, as an **alternative** to making a subclass, if you feel
dynamic you _can_ almost always modify an old class in place:
Working.Do=logging(Working.Do)
ReturnStatement.Act=logging(ReturnStatement.Act)
Update: **Apply logging to all methods of a class**
Since you now specifically asked for this: you _can_ loop over all members and
apply logging to them all. But you need to define a rule for which kind of
members to modify. The example below excludes any method with __ in its name.
import types
def hasmethod(obj, name):
return hasattr(obj, name) and type(getattr(obj, name)) == types.MethodType
def loggify(theclass):
for x in filter(lambda x:"__" not in x, dir(theclass)):
if hasmethod(theclass,x):
print(x)
setattr(theclass,x,logging(getattr(theclass,x)))
return theclass
With this all you have to do to make a new logged version of a class is:
@loggify
class loggedWorker(Working): pass
Or modify an existing class in place:
loggify(Working)
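On newer Pythons, unbound methods are plain functions, so the `types.MethodType` check above won't match anything. A sketch of the same idea that works there (the class name `Worker` is just for illustration):

```python
def logging(func):
    # same decorator as above, with print as a function
    def wrapper(*args, **kwargs):
        print(func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

def loggify(theclass):
    # wrap every non-dunder callable attribute of the class
    for name in dir(theclass):
        if "__" in name:
            continue
        attr = getattr(theclass, name)
        if callable(attr):
            setattr(theclass, name, logging(attr))
    return theclass

@loggify
class Worker:
    def do(self):
        return "I am working"
```

Calling `Worker().do()` then logs the call before delegating to the original method.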
|
python won't create thread?
Question: I might be missing something silly, but I've run my code in PythonWin and it
works; when I run it from the command line, though, it freaks out:
import time, thread
def print_t(name, delay):
while 1:
time.sleep(delay)
print name
try:
thread.start_new_thread(print_t,("First Message",1,))
thread.start_new_thread(print_t,("Second Message",2,))
except Exception as e:
print e
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
Answer: The exception happens when the main thread (the one that starts the other
threads) finishes. In your code the main thread quits before any of your sub-threads
(created by `start_new_thread`) finish. The solution is to make the main
thread wait until the child threads end.
See the discussion [Simple threading in Python 2.6 using
thread.start_new_thread()](http://stackoverflow.com/questions/849674/simple-
threading-in-python-2-6-using-thread-start-new-thread)
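A sketch of the same program using the higher-level `threading` module, with a bounded loop so it can terminate and a `join()` so the main thread waits (a collecting list stands in for the prints here):

```python
import threading
import time

def print_t(name, delay, count, out):
    # bounded loop instead of `while 1`, so the demo can finish
    for _ in range(count):
        time.sleep(delay)
        out.append(name)

messages = []
t1 = threading.Thread(target=print_t, args=("First Message", 0.01, 3, messages))
t2 = threading.Thread(target=print_t, args=("Second Message", 0.02, 3, messages))
t1.start()
t2.start()
# join() keeps the main thread alive until both workers are done,
# which is exactly what the raw `thread` version was missing
t1.join()
t2.join()
```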
|
How to make Botan RSA signature verification match that of PyCrypto
Question: I'm developing a key generator that generates RSA signatures that are to be
downloaded to the client's computer.
On the client's computer I would like to use an RSA signature and a public key
to validate the string. What I would like to know, if you can help, is which
algorithm I should use to get the signature validated, or what is
wrong with my code.
[edit updated the code with the suggestion, but still no success.]
The Python code:
from Crypto.Signature import PKCS1_PSS
from Crypto.Hash import SHA
from Crypto.PublicKey import RSA
from Crypto import Random
key_priv = RSA.generate(1024)#, random_generator)
#key_priv = RSA.importKey(open('key.priv.pem.rsa').read())
key_pub = key_priv.publickey()
n, e = key_pub.n, key_pub.e
p,q,d,u = key_priv.p, key_priv.q, key_priv.d, key_priv.u
print "char n[] = \"",n,"\";"
print "char e[] = \"",e,"\";"
#print "key_pub.exportKey(): ",key_pub.exportKey()
mac = '192.168.0.106'
plugin = 'Bluetooth'
text = plugin + mac
hash = SHA.new()
hash.update(text)
#signature = key_priv.sign(hash, None, 'PKCS1')[0]
#print "signature: ", signature
#random_generator = Random.new().read
#signature = key_priv.sign(hash, '')
signer = PKCS1_PSS.new(key_priv)
# signature = signer.sign(hash)
signature = open('plugin_example.signature').read()
print "type(signature)", type(signature) #str
print "signature: ", signature
verifier = PKCS1_PSS.new(key_pub)
if verifier.verify(hash, signature):
print "The signature is authentic."
else:
print "The signature is not authentic."
fd = open("plugin_example.signature", "w")
fd.write(signature)
fd.close()
fd = open("key.pub.pem.rsa", "w")
fd.write(key_pub.exportKey())
fd.close()
fd = open("key.priv.pem.rsa", "w")
fd.write(key_priv.exportKey())
fd.close()
And the C++ Code:
#include <string.h>
#include <assert.h>
#include <iostream>
#include <fstream>
#include <string>
#include <memory>
#include <vector>
#include <botan/botan.h>
#include <botan/look_pk.h>
#include <botan/rsa.h>
#include <QtCore>
#include "lib/debug.hpp"
#include <QDebug>
#define Q(AAA) qDebug() << #AAA <<" " << AAA << endl;
#define P(X) std::cout << #X <<" = " << X << " "<< std::endl;
using namespace Botan;
static BigInt to_bigint(const std::string& h)
{
return BigInt::decode((const byte*)h.data(),
h.length(), BigInt::Hexadecimal);
}
int main(int argc, char ** argv) {
Botan::LibraryInitializer init;
QByteArray mac = "192.168.0.106";
QByteArray plugin = "Bluetooth";
QByteArray mac_and_plugin = plugin+mac;
QByteArray mac_and_plugin_hex = QCryptographicHash::hash ( mac_and_plugin, QCryptographicHash::Sha1 ).toHex();
QByteArray qByteArray_sig;
QFile file ( argv[1] );
file.open ( QIODevice::ReadOnly );
if ( file.isReadable() )
{
qByteArray_sig = file.readAll();
}
file.close();
QByteArray qByteArray_sig_hex = qByteArray_sig.toHex();
char n[] = "137758869720100695031597743484335597584728606037599895664824678915370363634933922524373276431650126408515526550739072301333537631796375930381713667037665579467940926539847824669399430790335904629465572107797677521774814742987023253982675971904413266030976887012380999213491205226382726115118193377641942499979";
char e[] = "65537";
BigInt big_n = to_bigint(n);// mod
BigInt big_e = to_bigint(e);// exp
RSA_PublicKey pub_key(big_n,big_e);
PK_Verifier* verifier = 0;
QStringList l;
l.push_back("EMSA1(SHA-1)");
l.push_back("EMSA3(SHA-1)");
l.push_back("EMSA4(SHA-1)");
l.push_back("EMSA1(SHA-256)");
l.push_back("EMSA3(SHA-256)");
l.push_back("EMSA4(SHA-256)");
l.push_back("EMSA3(MD5)");
P(qByteArray_sig.length());
for (int i = 0 ; i < l.size(); i++) {
if (verifier)
delete verifier;
verifier = get_pk_verifier(pub_key, l[i].toStdString() );
bool is_valid = verifier->verify_message(
mac_and_plugin_hex.data(),mac_and_plugin_hex.length(),
qByteArray_sig_hex.data(), qByteArray_sig_hex.length()
);
P(is_valid);
is_valid = verifier->verify_message(
mac_and_plugin_hex.data(),mac_and_plugin_hex.length(),
qByteArray_sig.data(), qByteArray_sig.length()
);
P(is_valid);
is_valid = verifier->verify_message(
mac_and_plugin.data(),mac_and_plugin.length(),
qByteArray_sig.data(), qByteArray_sig.length()
);
P(is_valid);
}
Q(qByteArray_sig);
Q(qByteArray_sig_hex);;
}
Answer: In the piece of Python code, you are creating an RSA signature that does not
follow any real protocol or standard. In other words, it's _raw_ and in most
cases it is not secure.
Instead, you should use something like PSS (use the pycrypto module [PKCS1_PSS
here](https://www.dlitz.net/software/pycrypto/api/current/Crypto.Signature.PKCS1_PSS-
module.html)). In the Botan code, that can be verified with the EMSA4
encoding.
Alternatively you could use [PKCS#1
v1.5](https://www.dlitz.net/software/pycrypto/api/current/Crypto.Signature.PKCS1_v1_5-module.html).
In Botan, that is EMSA3.
In either case, the hash algorithm must be the same at both sides.
|
clean C++ extension build directory in setup.py
Question: I have a python package which contains a C++ extension.
The C++ extension is built within `setup.py` using its own Makefile, and the
.so files are created in its own subfolder and then copied into the build
folder.
When I call `python setup.py clean`, only the build directory is removed, but
the `cxxextension/build` is not removed, so if I build it again, it is just
copied and not recompiled.
How can I instruct `setup.py clean` to also remove my `cxxextension/build`
folder?
Answer: The easiest way would probably be to simply process it manually in setup.py.
You could add something like this to it before calling distutils.setup:
import sys, shutil
if 'clean' in sys.argv:
    # ignore_errors avoids a crash when the directory is already gone
    shutil.rmtree('cxxextension/build', ignore_errors=True)
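A slightly more defensive variant of the same idea (the path is the one from the question; adjust as needed), factored into a helper that also reports whether anything was removed:

```python
import os
import shutil
import sys

def clean_extension_build(path="cxxextension/build"):
    """Remove the extension's private build directory, if it exists."""
    if os.path.isdir(path):
        shutil.rmtree(path)
        return True
    return False

# hook it into `python setup.py clean` by running it before distutils.setup
if 'clean' in sys.argv:
    clean_extension_build()
```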
|
Find N largest lines from a file: how would you make this better?
Question: I was recently rejected by a potential employer after submitting this code.
They suggested I wasn't technically capable enough. I'm wondering if someone
could shed light on how to make this better/more efficient.
The question was to find the N longest lines from a file of multiple lines.
This ultimately boiled down to a sorting problem, so I built an algorithm to
find the N largest numbers from a list of numbers as so:
def selection(numbers, n):
maximum = []
for x in range (0, n):
maximum.append(numbers[x])
ind = x
for y in range ( x, len(numbers) ):
if numbers[y] > maximum[len(maximum)-1]:
maximum[len(maximum)-1] = numbers[y]
numbers[ind], numbers[y] = numbers[y], numbers[ind]
return maximum
This runs in _O(n)_, unless N = n, in which case it runs in _O(n^2)_. I was
surprised to hear them doubt my technical abilities, so I thought I'd bring it
to you SO. How do I make this better?
**EDIT: Thanks for the feedback. To clarify: I populated a list with the line-
by-line word-counts from the file, and ran it through this function.**
**EDIT2: Some people mentioned syntax. I've only been doing Python for about a
day or two. My employer suggested I write it in Python (and I mentioned that I
didn't know Python), so I assumed small syntax errors and methods wouldn't be
such an issue.**
**EDIT3: Turns out my initial reasoning was flawed with the selection sort. I
had it in my head that a min-heap would be nlogn, but I forgot that the
average complexity for my code is n^2. Thanks for the help everyone.**
Answer:
from heapq import nlargest
def longest_lines(n, filename):
with open(filename) as input:
return nlargest(n, input, key=len)
Alright, addressing the comments below:
def longest_lines(n, filename):
heap = []
with open(filename) as input:
        for ln in input:
push(heap, ln)
if len(heap) > n:
pop(heap)
return heap
where `push` and `pop` are the good old min-heap insert and delete-min
algorithms that can be found in any textbook (and that I never get right in
one go, so I'm not posting them now), comparing lines by their length. This
runs in O(_N_ ×lg(_n_)) time where _N_ is the number of lines in the file,
consuming O(_n_) temporary space.
Note that the resulting list is not sorted by length, but adding that can be
done by popping the elements until the heap is empty and reversing the result
of that.
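For a quick sanity check of the `nlargest` version, against a throwaway temp file (the file contents here are just for the demo):

```python
import heapq
import os
import tempfile

def longest_lines(n, filename):
    # nlargest streams the file and keeps only the n longest lines,
    # returned longest-first
    with open(filename) as input:
        return heapq.nlargest(n, input, key=len)

# build a small demo file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("short\n")
    tmp.write("a much longer line of text\n")
    tmp.write("medium length line\n")
    name = tmp.name

top2 = longest_lines(2, name)
os.unlink(name)
```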
|
how is a dictionary sorted?
Question: Ladies and Gents,
I have a question about dictionaries in python. While playing around I noticed
something that _to me_ seems strange.
I define a dict like this
stuff={'age':26,'name':'Freddie Mercury', 'city':'Vladivostok'}
I then add the word 'first' to stuff like this:
> stuff[1]='first'
When I print it out, it's fine
> stuff
>
> {1: 'first', 'age': 26, 'name': 'Freddie Mercury', 'city': 'Vladivostok'}
Then I add the word second:
> stuff[2]='second'
and that's fine, but when I display the content I get:
> stuff
>
> {1: 'first', 'age': 26, 2: 'second', 'name': 'Freddie Mercury', 'city':
> 'Vladivostok'}
**Notice** that 2 is now the third element, not the second (in order) or
the first (if elements were added at the beginning).
And when I add in the third element 'wtf', now all of a sudden everything is
back in order and I'm quite confused as to what's going on.
> stuff[3]='wtf'
>
> stuff
>
> {1: 'first', 2: 'second', 3: 'wtf', 'name': 'Freddie Mercury', 'age': 26,
> 'city': 'Vladivostok'}
Could someone please explain to me what's going on here?
Answer: The order you get from a dictionary is undefined. You should not rely on it.
In this case, it happens to depend on the hash values of the underlying keys,
but you shouldn't assume that's always the case.
If order matters to you, you should use an
[OrderedDict](http://docs.python.org/library/collections.html#collections.OrderedDict)
(available since Python 2.7):
>>> from collections import OrderedDict
>>> stuff=OrderedDict({'age':26,'name':'Freddie Mercury', 'city':'Vladivostok'})
>>> stuff[1]='first'
>>> print stuff
OrderedDict([('city', 'Vladivostok'), ('age', 26), ('name', 'Freddie Mercury'), (1, 'first')])
>>> stuff[2]='second'
>>> print stuff
OrderedDict([('city', 'Vladivostok'), ('age', 26), ('name', 'Freddie Mercury'), (1, 'first'), (2, 'second')])
>>> stuff[3]='wtf'
>>> print stuff
OrderedDict([('city', 'Vladivostok'), ('age', 26), ('name', 'Freddie Mercury'), (1, 'first'), (2, 'second'), (3, 'wtf')])
|
ImportError: No module named bottle
Question:
$ sudo pip install bottle
Downloading/unpacking bottle
Downloading bottle-0.10.7.tar.gz (55Kb): 55Kb downloaded
Running setup.py egg_info for package bottle
Installing collected packages: bottle
Found existing installation: bottle 0.10.7
Uninstalling bottle:
Successfully uninstalled bottle
Running setup.py install for bottle
changing mode of build/scripts-2.6/bottle.py from 640 to 755
changing mode of /usr/local/bin/bottle.py to 755
Successfully installed bottle
>>> help('modules')
blahblah
bottle
blahblah
$ ls /usr/local/lib/python2.6/dist-packages/
bottle-0.10.7.egg-info bottle.py bottle.pyc
but
$ python
Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import bottle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named bottle
wtf??? Ubuntu 10.10
**SOLUTION: running `chmod -R 775 /usr/local/lib/python2.6/dist-packages/` as
root fixed it for me. Thanks to all.**
Answer: What ended up working for me was:
chmod -R 775 /usr/local/lib/python2.6/dist-packages/
|
Clear Clipboard?
Question: Is it possible for python to clear the clipboard? If so ... how can I do it?
I need this so in my quiz program, students can't copy paste answers from the
internet and other files.
**EDIT:** I'm using WinXP and Python 2.6
Answer:
from ctypes import windll
if windll.user32.OpenClipboard(None):
windll.user32.EmptyClipboard()
windll.user32.CloseClipboard()
No external libraries needed.
|
Neo4j slow? I must be doing something wrong, please tell me what it is
Question: I'm seeing some rather improbable performance results from the embedded Neo4j,
on the surface it's orders of magnitude slower than expected so I'm assuming
I'm "doing it wrong", although I'm not doing anything complicated.
I'm using the latest embedded python bindings for Neo4j
(https://github.com/neo4j/python-embedded)
from neo4j import GraphDatabase
db = GraphDatabase('/tmp/neo4j')
I've created 1500 fake products with simple attributes:
fake_products = [{'name':str(x)} for x in range(0,1500)]
... and created nodes out of them that I connected to a subreference node:
with db.transaction:
products = db.node()
db.reference_node.PRODUCTS(products)
for prod_def in fake_products:
product = db.node(name=prod_def['name'])
product.INSTANCE_OF(products)
Now, with what looks to me like almost exactly the same kind of code I've seen
in the documentation:
PRODUCTS = db.getNodeById(1)
for x in PRODUCTS.INSTANCE_OF.incoming:
pass
... iterating through these 1500 nodes takes >0.2s on my Macbook Pro. WHAT.
(EDIT: I of course ran this query a bunch of times so at least in the python
bindings it's not a matter of cold caches)
I amped it up to 15k, it took 2s. I downloaded Gremlin and issued an
equivalent query to investigate if it's neo4j or the python bindings:
g.v(1).in("INSTANCE_OF")
... it seems it took about 2s on the first try; on the second run it seemed to
complete almost immediately.
Any idea why it's so slow? The results I'm getting have got to be some kind of
a mistake on my part.
Answer: This is Neo4j loading data lazily and not doing any prefetching. On the first
run, you are hitting the disk, on the second, the caches are warm, which is
your real production scenario.
|
Asynchronously retrieving information from a process
Question: Here is the pseudo code for what I want to do.
import time
def run():
while x < 10000000:
x += 1
if __name__ == "__main__":
p = Process(run)
p.start()
time.sleep(3)
#some code that I don't know that will give me the current value of x
Python's `threading` module seems to be the way to go; however, I have yet to
successfully implement this example.
Answer: Everything you need is in the `multiprocessing`
[module](http://docs.python.org/library/multiprocessing.html). Perhaps a
[shared memory](http://docs.python.org/library/multiprocessing.html#sharing-
state-between-processes) object would help here?
Note that threading in Python is affected by the [Global Interpreter
Lock](http://wiki.python.org/moin/GlobalInterpreterLock), which essentially
prevents Python threads from executing bytecode in parallel.
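A minimal sketch of the pseudocode using a shared `multiprocessing.Value` (the bound is lowered from 10000000 so the demo finishes quickly):

```python
import time
from multiprocessing import Process, Value

def run(x, limit):
    while x.value < limit:
        with x.get_lock():  # serialize the read-modify-write
            x.value += 1

def main(limit=100000):
    x = Value('i', 0)  # 'i' = shared C int, visible to both processes
    p = Process(target=run, args=(x, limit))
    p.start()
    time.sleep(0.05)
    snapshot = x.value  # the current value, read while run() is still working
    p.join()
    return snapshot, x.value

if __name__ == "__main__":
    print(main())
```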
|
python loading c lib with CDLL, doesn't see libraries in the python path
Question: I'm trying to get some open source academic code working (the project home is
[here](http://lammps.sandia.gov/)). It is a big C++ codebase with a (very)
thin python wrapper which uses `CDLL` to load the C++ and call some C
functions that are available to allow primitive python scripting of the code.
However, the initial import code crashes because it can't find the .so files
sitting next to it in site-packages:
in the installed file:
from ctypes import *
try:
self.lib = CDLL("_lammps.so")
except:
try:
self.lib = CDLL("_lammps_serial.so")
except:
raise OSError,"Could not load LAMMPS dynamic library"
and in a script or the interpreter:
from lammps import lammps
l = lammps()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lammps.py", line 42, in __init__
raise OSError,"Could not load LAMMPS dynamic library"
OSError: Could not load LAMMPS dynamic library
Other answers [might seem to have this
covered](http://stackoverflow.com/questions/2980479/python-ctypes-loading-dll-
from-from-a-relative-path), but this only works if `CDLL()` is called within
the script actually invoked (or the working directory of the prompt that ran
the interpreter) - i.e. if the 'relative path' is in user-space, rather than
python-library-space.
How do we reliably install for import a C/C++ library that we built ourselves?
Short of polluting the system library locations like `/usr/lib`, which isn't
very pythonic, I can't see an easy solution.
(EDIT: corrected function names, unclear refactoring unhelpful! sorry!)
Answer: Run it under `strace -eopen` and you will see something like this:
open("tls/x86_64/_lammps.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("tls/_lammps.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("x86_64/_lammps.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("_lammps.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 6
open("/lib/_lammps.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/usr/lib/_lammps.so", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
This shows you all the locations where Python's ctypes looks for your library.
So far I have been unable to find a runtime environment variable tweak that
adds search locations on my system; perhaps you have to use absolute paths.
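One common workaround along those lines is to build the absolute path from the wrapper module's own location before handing it to `CDLL` (a sketch; `_lammps.so` is the library from the question):

```python
import os
from ctypes import CDLL

def sibling_path(name, anchor=__file__):
    # absolute path of `name` in the same directory as `anchor`
    return os.path.join(os.path.dirname(os.path.abspath(anchor)), name)

def load_sibling_library(name, anchor=__file__):
    # CDLL with an absolute path bypasses the loader's search path entirely
    return CDLL(sibling_path(name, anchor))
```

Inside the installed `lammps.py`, `CDLL(sibling_path("_lammps.so"))` would then find the .so sitting next to it in site-packages, regardless of the working directory.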
|
Python module not working (not able to import) even after including it in PATH variable
Question: I have a testing module that I want to use for Android testing. I have the
files, but there is no installation file for it, so I added the module to the PATH
variable; even then it doesn't work when I try to import it.
Is there any way to make it work? Do I have to paste the files into the Python
folder (and where is that folder located)? On Windows, I used to paste all the
files into the Python folder and everything worked perfectly fine. Here in
Ubuntu I'm not able to find the location, so I added it to PATH.
Any way out? Any help is appreciated.
Cheers
Some details: Python version 2.7.2, Ubuntu 11.10, the Python module is in
file/folder format with no "setup.py" to install, the location of the module is
already in the PATH variable, everything else in Python works besides that
module, and the same files worked in Windows XP with Python 2.7.2 after copy-pasting.
Answer: `PATH` is for executables, `PYTHONPATH` is for Python modules.
You can also start your script with:
import sys
sys.path.append('/path/to/directory')
import your_module
Where `/path/to/directory/your_module.py` is the file you're importing.
The normal location for Python modules is in `/usr/lib/pythonX.X/site-packages`.
For installing stuff as a user,
[virtualenv](http://pypi.python.org/pypi/virtualenv) is great.
|
list manipulation in python
Question: I have a list with sublists in it, e.g. `[[1, 2], [1, 56], [2, 787], [2, 98],
[3, 90]]`, which is created by appending values to it while running a for loop.
I am working in Python, and I want to add the 2nd element of each sublist
where the 1st elements are the same. In my example I want to add 2+56 (both
have 1st element 1) and 787+98 (both have 1st element 2), and keep 90 as it is
because there is just one sublist with 1st element 3.
I'm not sure how to do this.
Here is my code:
import urllib, re
from itertools import groupby
import collections
import itertools, operator
text = urllib.urlopen("some html page").read()
data = re.compile(r'.*?<BODY>(.*?)<HR>', re.DOTALL).match(text).group(1)  # store the contents of the BODY tag
values = [line.split() for line in data.splitlines()]  # list with the BODY data
# values contains elements like [[65, 67], [112, 123, 12], [387, 198, 09]];
# it contains elements of length 2 and length 3.
# I am just concerned with elements of length 3:
# the for loop below checks this and passes them to 2 functions.
def function1 (docid, doclen, tf):
    new=[];
    avgdoclen = 288;
    tf = float(x[2]);
    doclen = float(x[1]);
    answer1 = tf / (tf + 0.5 + (1.5*doclen/avgdoclen));
    q = function2(docid, doclen, tf)
    production = answer1 * q  # the product of answer1 and q
    new.append(docid)  # I want to add all the production values where docid is the same
    new.append(production)
    return answer1
def function2 (docid, doclen, tf):
avgdoclen = 288;
querylen = 12;
tf= float(x[2]);
answer2 = tf/(tf + 0.5 + (1.5*querylen/avgdoclen));
return answer2
for x in values:
    if len(x)==3:
        function1(x[0], x[1], x[2])
        function2(x[0], x[1], x[2])
I want to add all the production values where the docid is the same. Now when I
print new, I get the following output:
['112', 0.3559469323909391]
['150', 0.31715060007742935]
['158', 0.122025819265144]
['176', 0.3862207694241891]
['188', 0.5057900225015092]
['236', 0.12628982528263102]
['251', 0.12166336633663369]
['334', 0.5851519557155408]
This is not a list. When I print new[0][0] I get 1. I want to get 112 when I
print new[0][0]. Is there something wrong with append?
Answer: This might be a job for itertools:
>>> import itertools, operator
>>> l = sorted([[1, 2], [1, 56], [2, 787], [2, 98], [3, 90]])
>>> keys_groups = itertools.groupby(l, key=operator.itemgetter(0))
>>> sums = [[key, sum(i[1] for i in group)] for key, group in keys_groups]
>>> sums
[[1, 58], [2, 885], [3, 90]]
Note that for `groupby` to work as expected, the items have to be sorted by
the key given. In this case, since the key is the first item in the pair, I
didn't have to do this, but for a more general solution, you should use a
`key` parameter to sort the list.
>>> l2 = [[787, 2], [98, 2], [90, 3], [2, 1], [56, 1]]
>>> l2.sort(key=operator.itemgetter(1))
>>> l2
[[2, 1], [56, 1], [787, 2], [98, 2], [90, 3]]
>>> keys_groups = itertools.groupby(l2, key=operator.itemgetter(1))
>>> sums = [[key, sum(i[0] for i in group)] for key, group in keys_groups]
>>> sums
[[1, 58], [2, 885], [3, 90]]
Works fine with the data you posted. I edited it a bit to make the example
more realistic.
>>> l = [['112', 0.3559469323909391], ['150', 0.31715060007742935],
['158',0.122025819265144], ['176', 0.3862207694241891],
['188', 0.5057900225015092], ['377', 0.12628982528263102],
['251', 0.12166336633663369], ['334', 0.5851519557155408],
['334', 0.14663484486873507], ['112', 0.2345038167938931],
['377', 0.10694516971279373], ['112', 0.28981132075471694]]
>>> l.sort(key=operator.itemgetter(0))
>>> keys_groups = itertools.groupby(l, key=operator.itemgetter(0))
>>> sums = [[key, sum(i[1] for i in group)] for key, group in keys_groups]
>>> sums
[['112', 0.88026206993954914], ['150', 0.31715060007742935],
['158', 0.122025819265144], ['176', 0.38622076942418909],
['188', 0.50579002250150917], ['251', 0.12166336633663369],
['334', 0.73178680058427581], ['377', 0.23323499499542477]]
Note that as WolframH points out, sorting will generally increase the time
complexity; but Python's sort algorithm is smart enough to make use of runs in
data, so it might not -- it all depends on the data. Still, if your data is
highly anti-sorted, [Winston
Ewert](http://stackoverflow.com/a/9145587/577088)'s `defaultdict`-based
solution may be better. (But ignore that first `Counter` snippet -- I have no
idea what's going on there.)
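For comparison, the `defaultdict` approach mentioned above needs no pre-sorting of the data at all (a minimal sketch):

```python
from collections import defaultdict

def sum_by_key(pairs):
    # accumulate second elements under their first-element key
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    # sort only the (unique) keys for a stable, readable result
    return sorted(totals.items())
```

This is O(N) over the input plus a sort over the unique keys only, which is why it can beat the sort-then-groupby version on heavily shuffled data.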
A couple of notes on how to create a list -- there are lots of ways, but the
two basic ways in Python are as follows -- first a list comprehension:
>>> def simple_function(x):
... return [x, x ** 2]
...
>>> in_data = range(10)
>>> out_data = [simple_function(x) for x in in_data]
>>> out_data
[[0, 0], [1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36], [7, 49], [8, 64], [9, 81]]
And second, a for loop:
>>> out_data = []
>>> for x in in_data:
... out_data.append(simple_function(x))
...
>>> out_data
[[0, 0], [1, 1], [2, 4], [3, 9], [4, 16], [5, 25], [6, 36], [7, 49], [8, 64], [9, 81]]
|
Enqueue and play a folder of .mp3s in Windows Media Player with Python
Question: Is it possible to play all the .mp3s within a folder in Windows Media Player?
I am using Python 3.2, and so far I have code that returns the absolute
location of a random album in my music folder. I would like to take that
string and somehow open WMP and play the music within that folder.
Any suggestions?
For reference, here is my code:
import random
import os
path = ["Q:\\#User\\Music\\", "Q:\\#user\\What CDs\\"]
print("You shall play " + random.sample(list(filter(lambda f: len([i for i in f if i in "."]) == 0, sum(map(lambda d: list(map(lambda e: d + "\\" + e,os.listdir(d))),list(filter(lambda c: len([i for i in c if i in "."]) == 0, sum(map(lambda a: list(map(lambda b: a + b ,os.listdir(a))), path), [])))), []) )), 1)[0])
input()
And yes, ideally that wouldn't all be in one line. I was learning how to use
`map` and `lambda` and thought I'd challenge myself. I'd now like to take this
one step further, and play the random album.
Thanks!
Answer: Hmmmm, interesting idea.
I would probably create a .m3u file on the fly and then pass it to WMP as a
command line argument (which, according to [WMP Command
Line](http://msdn.microsoft.com/en-
us/library/windows/desktop/dd562624%28v=vs.85%29.aspx) is certainly doable).
A .m3u file is simply a text file. Here is an example .m3u for the Tool album
Undertow:
#EXTM3U
#EXTINF:295,Tool - Intolerance
01 - Intolerance.mp3
#EXTINF:296,Tool - Prison Sex
02 - Prison Sex.mp3
#EXTINF:307,Tool - Sober
03 - Sober.mp3
#EXTINF:434,Tool - Bottom
04 - Bottom.mp3
#EXTINF:330,Tool - Crawl Away
05 - Crawl Away.mp3
#EXTINF:332,Tool - Swamp Song
06 - Swamp Song.mp3
#EXTINF:322,Tool - Undertow
07 - Undertow.mp3
#EXTINF:363,Tool - 4°
08 - 4°.mp3
#EXTINF:466,Tool - Flood
09 - Flood.mp3
#EXTINF:947,Tool - Disgustipated
69 - Disgustipated.mp3
Good luck!
PS - You can invoke the command line argument by importing the `os` module and
using `os.system("YOUR DOS COMMAND")`
Oh, and the format used in an m3u file:
#EXTINF:<song-time-in-seconds>, <Artist> - <Song>
<Track_Num> - <File name>
If it wasn't clear.
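Putting that together in Python (a sketch; the folder path and the `wmplayer` invocation in the final comment are assumptions you'd adapt to your setup):

```python
import os

def write_playlist(folder):
    # collect every .mp3 in the folder and write a bare-bones .m3u for it
    tracks = [f for f in sorted(os.listdir(folder))
              if f.lower().endswith(".mp3")]
    playlist = os.path.join(folder, "playlist.m3u")
    with open(playlist, "w") as m3u:
        m3u.write("#EXTM3U\n")
        for track in tracks:
            m3u.write(os.path.join(folder, track) + "\n")
    return playlist

# then hand it to Windows Media Player, e.g.:
# os.system('start wmplayer "%s"' % write_playlist(album_folder))
```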
|
Open COM application in NET using IronPython
Question: I have been trying to perform what initially seemed a trivial task: opening up
a WordPerfect program using python and .NET. After 2 weeks of near-success and
miserable failure, I am starting to wonder if all .NET paths are designed to
lead the programmer toward the inevitable purchase of a real version of Visual
Studio...
Here are my basic tools: .NET version 4.030... ; IronPython 2.7.1; Eclipse
Indigo IDE with PyDev; WordPerfect x4 (I have tried using x5 also, with the
same results).
I converted the wpwin14.tlb into a .NET assembly using the tlbimp.exe program
from the Windows command line:
> tlbimp wpwin14.tlb /out: NETwpwin14.dll
The program converted the .tlb file, but renamed it as "WordPerfect.dll" for
some reason.
Then I registered the assembly by using the RegAsm command, acting as computer
administrator:
> regasm WordPerfect.dll
I got a message that said the assembly was registered (where it was registered
is unknown).
Then I attempted to connect to the program using the following code:
import clr
clr.AddReference ('WordPerfect')
import WordPerfect
WP = WordPerfect.PerfectScript
WP.AppMaximize () # AppMaximize is a PerfectScript call to open the program
The clr reference to WordPerfect and the import statement were recognized by
Eclipse, and the entire set of PerfectScript commands was made available
inside the editor (only, however, after putting a copy of the WordPerfect.dll
inside the IronPython2.7\Lib\site-packages folder).
Running the script from Eclipse produced the following error:
> TypeError: AppMaximize() takes exactly 1 argument (0 given)
Please note that AppMaximize() does not need any arguments to run correctly.
Trying other PerfectScript commands such as WPActivate() and RevealCodes(1)
gave similar errors, except that the RevealCodes(1) command somehow managed to
turn that feature on the next time I opened WordPerfect from the GUI manually.
I got the same error when running the script inside the IronPython
interpreter, and with the same ability to import the class and inspect it.
I worded the python code this way, based on previous successful experience
using VB.NET inside Visual Basic 2010 Express. That code was essentially this:
Imports WordPerfect
Module Module1
Dim wpwin As WordPerfect.PerfectScript = New WordPerfect.PerfectScript
wpwin.AppMaximize()
...
In order to create the reference to "WordPerfect" in VB Express, I merely
browsed to the very same wpwin14.tlb file inside the WordPerfect program
directory and dropped it into the COM box. VB Express then converted that file
into a usable dll (apparently, not using the same methods I used).
I then tried an approach similar to what the IronPython tutorial page suggests
(<http://www.ironpython.info/index.php/Interacting_with_Excel>), which is to
call the "ApplicationClass" within the object:
import clr
clr.AddReference ('Microsoft.Office.Interop.Excel')
from Microsoft.Office.Interop import Excel
ex = Excel.ApplicationClass()
Running that code opened up Excel with no complaints.
A quick look into the structure of my imported WordPerfect class using dir()
revealed an ApplicationClass method.
Then I tried the following commands (using the IronPython interpreter):
import clr
clr.AddReference ('WordPerfect')
import WordPerfect as WP
WP.ApplicationClass()
I got another error, with the pertinent part here:
> EnvironmentError: System.Runtime.InteropServices.COMException (0x80040154):
> Retrieving the COM class factory for component with CLSID
> {40852D4E-0076-47CD-8C70-92E42B35E5EC} failed due to the following error:
> 80040154 Class not registered (Exception from HRESULT: >0x80040154
> (REGDB_E_CLASSNOTREG)).
Thinking I perhaps needed to put my dll into the GAC, I tried this:
gacutil /i WordPerfect.dll
That threw me this error:
> Failure adding assembly to the cache: Attempt to install an assembly without
> a strong name.
Now I am stuck.
Do I need to make a strongly named assembly for this third party dll? (I am
inclined to think that is not needed.)
Am I missing something important on the python side regarding how class
objects are called?
Have I missed some sort of instantiation/invoking/incantation spell?
Why does VB Express have no trouble with the dll, and I am unable to use it
effectively in Eclipse?
Am I doomed to spend big $ on more Microsoft products?
I apologize for the length of this post. And, yes, there still are good
reasons to use WordPerfect!
Answer: Maybe you need to call the constructor for PerfectScript. As written, `WP` is the class itself, so `AppMaximize` is an unbound method that expects an instance as its one argument:
WP = WordPerfect.PerfectScript()
WP.AppMaximize ()
instead of the current
WP = WordPerfect.PerfectScript
WP.AppMaximize ()
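The difference matters in plain Python too, quite apart from COM: without the parentheses you bind the class object itself rather than an instance. A stand-in illustration (the `PerfectScript` class here is a dummy, not the real WordPerfect interop class):

```python
# Dummy stand-in for the interop class, just to show the mechanics.
class PerfectScript:
    def AppMaximize(self):
        return "maximized"

WP = PerfectScript      # binds the class object; no instance is created
try:
    WP.AppMaximize()    # fails: there is no instance to act on
except TypeError as e:
    print(e)

WP = PerfectScript()    # calling the class instantiates it
print(WP.AppMaximize())  # -> maximized
```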
|
Adding the digits of an int and Python
Question:
import sys
def keepsumming(number):
numberlist = []
for digit in str(number):
numberlist.append(int(digit))
total = reduce(add, numberlist)
if total > 9:
keepsumming(total)
if total <= 9:
return total
def add(x,y):
return x+y
keepsumming(sys.argv[1])
I want to create a function that adds the individual digits of any number, and
to keep summing digits until the result is only one digit. (e.g. 1048576 =
1+0+4+8+5+7+6 = 31 = 3+1 = 4). The function seems to work in some places but
not in others. For example:
$python csp39.py 29
returns None, but:
$python csp39.py 30
returns 3, as it should...
Any help would be appreciated!
Answer: As others have mentioned, the problem seems to be with the part
if total > 9:
keepsumming(total) # you need return here!
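With that `return` added, the function works; a minimal sketch (using the built-in `sum` instead of `reduce`):

```python
def keepsumming(number):
    """Sum the digits of number repeatedly until one digit remains."""
    total = sum(int(digit) for digit in str(number))
    if total > 9:
        return keepsumming(total)  # the missing return from the question
    return total

print(keepsumming(29))       # -> 2  (2+9 = 11, 1+1 = 2)
print(keepsumming(30))       # -> 3
print(keepsumming(1048576))  # -> 4
```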
Just for completeness, I want to present you some examples how this task could
be solved a bit more elegantly (if you are interested). The first also uses
strings:
while number >= 10:
number = sum(int(c) for c in str(number))
The second uses modulo so that no string operations are needed at all (which
should be quite a lot faster):
while number >= 10:
total = 0
while number:
number, digit = divmod(number, 10)
total += digit
number = total
You can also use an iterator if you want to do different things with the
digits:
def digits(number, base = 10):
while number:
yield number % base
number //= base
number = 12345
# sum digits
print sum(digits(number))
# multiply digits
from operator import mul
print reduce(mul, digits(number), 1)
This last one is very nice and idiomatic Python, IMHO. You can use it to
implement your original function:
def keepsumming(number, base = 10):
if number < base:
return number
return keepsumming(sum(digits(number, base)), base)
Or iteratively:
def keepsumming(number, base = 10):
while number >= base:
number = sum(digits(number, base))
**UPDATE:** Thanks to Karl Knechtel for the hint that this actually is a very
trivial problem. It can be solved in one line if the underlying mathematics
are exploited properly:
def keepsumming(number, base = 10):
return 1 + (number - 1) % (base - 1)
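A quick sanity check of that closed-form digital root against an iterative version (base 10 only, for positive integers):

```python
def digital_root(number, base=10):
    """Closed-form digital root: 1 + (n - 1) mod (base - 1)."""
    return 1 + (number - 1) % (base - 1)

def digital_root_iterative(number):
    # String-based reference implementation, base 10 only.
    while number >= 10:
        number = sum(int(c) for c in str(number))
    return number

assert all(digital_root(n) == digital_root_iterative(n)
           for n in range(1, 1000))
print(digital_root(1048576))  # -> 4
```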
|
get unix time from yesterday 0000 hrs to today 0000 hours using python
Question: I need to get the following values
Today = 6th Feb
time 1 = 5th Feb 0000 hrs
time 2 = 6th Feb 0000 hrs.
So I have 24 hours in epoch time. The reference is today but not now()
So far I have this.
yesterday = datetime.date.today() - datetime.timedelta(days=1)
epoch_yesterday = time.mktime(yesterday.timetuple())
epoch_today = time.mktime(datetime.date.today().timetuple())
Both epoch_ values are actually returning seconds from now(), like 1600 hrs
(depends on when I run the script), not from 0000/2400 hrs. I understand it
would be better to get yesterday's epoch time and then add 24 hours to it to get
the end date. But I need to get the first part right :) , maybe I need sleep?
p.s. Sorry, code styling isn't working; SO won't let me post with code styling
and it was very frustrating to get SO to post this.
Answer: The simpler way might be to construct the date explicitly, like so:
from datetime import datetime
now = datetime.now()
previous_midnight = datetime(now.year, now.month, now.day)
That gets you the timestamp for the just-passed midnight. As you already know,
`time.mktime` will get you the epoch value. Just add or subtract 24 * 60 * 60
to get the previous or next midnight from there.
Oh! And be aware that the epochal value will be seconds from midnight 1970,
Jan 1st **UTC**. If you need seconds from midnight in your timezone, remember
to adjust accordingly!
_Update_ : test code, executed from the shell at just before 1 pm, PST:
>>> from datetime import datetime
>>> import time
>>> time_now = datetime.now()
>>> time_at_start_of_today_local = datetime( time_now.year, time_now.month, time_now.day )
>>> epochal_time_now = time.mktime( time_now.timetuple() )
>>> epochal_time_at_start_of_today_local = time.mktime( time_at_start_of_today_local.timetuple() )
>>> hours_since_start_of_day_today = (epochal_time_now - epochal_time_at_start_of_today_local) / 60 / 60
12.975833333333332
Note: epochal_time_at_start_of_today_local is the number of seconds between 1970 Jan
1st, midnight, and the start of the current day _in your timezone_. If you
want the number of seconds to the start of the current day in UTC, then
subtract your time zone offset from epochal_time_at_start_of_today_local:
>>> epochal_time_at_start_of_today_utc = epochal_time_at_start_of_today_local - time.timezone
Be aware that you get into odd territory here; specifically, whose day do you
care about? If it's currently Tuesday in your timezone but still Monday in
UTC, which midnight do you consider today's midnight for your purposes?
|
Configuring the SubLime Linter Plugin to use Ruby 1.9 Syntax
Question: I'd like to get the SubLime Linter plugin
(<https://github.com/Kronuz/SublimeLinter>) to recognize Ruby 1.9 syntax. Has
anybody been able to get this to work in SublimeText 2?
Here is my current default settings file:
/*
SublimeLinter default settings
*/
{
/*
Sets the mode in which SublimeLinter runs:
true - Linting occurs in the background as you type (the default).
false - Linting only occurs when you initiate it.
"load-save" - Linting occurs only when a file is loaded and saved.
*/
"sublimelinter": true,
/*
Maps linters to executables for non-built in linters. If the executable
is not in the default system path, or on posix systems in /usr/local/bin
or ~/bin, then you must specify the full path to the executable.
Linter names should be lowercase.
This is the effective default map; your mappings may override these.
"sublimelinter_executable_map":
{
"perl": "perl",
"php": "php",
"ruby": "ruby"
},
*/
"sublimelinter_executable_map":
{
},
/*
Maps syntax names to linters. This allows variations on a syntax
(for example "Python (Django)") to be linted. The key is
the base filename of the .tmLanguage syntax files, and the value
is the linter name (lowercase) the syntax maps to.
*/
"sublimelinter_syntax_map":
{
"Python Django": "python"
},
// An array of linter names to disable. Names should be lowercase.
"sublimelinter_disable":
[
],
/*
The minimum delay in seconds (fractional seconds are okay) before
a linter is run when the "sublimelinter" setting is true. This allows
you to have background linting active, but defer the actual linting
until you are idle. When this value is greater than the built in linting delay,
errors are erased when the file is modified, since the assumption is
you don't want to see errors while you type.
*/
"sublimelinter_delay": 0,
// If true, lines with errors or warnings will be filled in with the outline color.
"sublimelinter_fill_outlines": false,
// If true, lines with errors or warnings will have a gutter mark.
"sublimelinter_gutter_marks": false,
// If true, the find next/previous error commands will wrap.
"sublimelinter_wrap_find": true,
// If true, when the file is saved any errors will appear in a popup list
"sublimelinter_popup_errors_on_save": false,
// jshint: options for linting JavaScript. See http://jshint.com/#docs for more info.
// By default, eval is allowed.
"jshint_options":
{
"evil": true,
"regexdash": true,
"browser": true,
"wsh": true,
"trailing": true,
"sub": true,
"strict": false
},
// A list of pep8 error numbers to ignore. By default "line too long" errors are ignored.
// The list of error codes is in this file: https://github.com/jcrocholl/pep8/blob/master/pep8.py.
// Search for "Ennn:", where nnn is a 3-digit number.
"pep8_ignore":
[
"E501"
],
/*
If you use SublimeLinter for pyflakes checks, you can ignore some of the "undefined name xxx"
errors (comes in handy if you work with post-processors, globals/builtins available only at runtime, etc.).
You can control what names will be ignored with the user setting "pyflakes_ignore".
Example:
"pyflakes_ignore":
[
"some_custom_builtin_o_mine",
"A_GLOBAL_CONSTANT"
],
*/
"pyflakes_ignore":
[
],
/*
Ordinarily pyflakes will issue a warning when 'from foo import *' is used,
but it is ignored since the warning is not that helpful. If you want to see this warning,
set this option to false.
*/
"pyflakes_ignore_import_*": true,
// Objective-J: if true, non-ascii characters are flagged as an error.
"sublimelinter_objj_check_ascii": false
}
Answer: I was able to get it to work using the absolute path to my ruby 1.9
executable. I'm using rbenv, so to get the path I ran `rbenv which ruby`; you
might need to put in `/usr/local/bin/ruby` or `/usr/local/bin/ruby19`.
This is what my SublimeLinter user settings look like (you can put this into
a project-specific file too if you prefer):
`Preferences -> Package Settings -> SublimeLinter -> Settings - User`
"sublimelinter_executable_map":
{
"ruby": "~/.rbenv/versions/1.9.3-p0/bin/ruby"
},
|
Flipping a picture across the vertical axis in python
Question: I am trying to flip a picture across its vertical axis in python but have no
idea how to start. Any suggestion would be appreciated. Thanks.
Answer: For something as simple as this, PIL is not really needed - you can do it with
numpy `fliplr`.
import matplotlib.pyplot as plt
import numpy as np
im = np.flipud(plt.imread('so.jpg'))
plt.subplot(2, 1, 1)
plt.imshow(im)
plt.subplot(2, 1, 2)
plt.imshow(np.fliplr(im))
plt.show()

wolf revok cats !
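The flip itself is just an array reversal along the column axis, so it is easy to check without plotting anything:

```python
import numpy as np

im = np.array([[1, 2, 3],
               [4, 5, 6]])

print(np.fliplr(im))
# [[3 2 1]
#  [6 5 4]]

# fliplr is equivalent to reversing the column slice:
print((np.fliplr(im) == im[:, ::-1]).all())  # -> True
```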
|
Efficiently processing all data in a Cassandra Column Family with a MapReduce job
Question: I want to process all of the data in a column family in a MapReduce job.
Ordering is not important.
An approach is to iterate over all the row keys of the column family to use as
the input. This could potentially be a bottleneck and could be replaced with a
parallel method.
I'm open to other suggestions, or for someone to tell me I'm wasting my time
with this idea. I'm currently investigating the following:
A potentially more efficient way is to assign ranges to the input instead of
iterating over all row keys (before the mapper starts). Since I am using
`RandomPartitioner`, is there a way to specify a range to query based on the
MD5?
For example, I want to split the task into 16 jobs. Since the
`RandomPartitioner` is MD5 based (from what I have read), I'd like to query
everything starting with `a` for the first range. In other words, how would I
do a get_range on the MD5 that starts at `a` and ends before `b`,
e.g. `a0000000000000000000000000000000 - afffffffffffffffffffffffffffffff`?
I'm using the pycassa API (Python) but I'm happy to see Java examples.
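For illustration, the 16 equal hex ranges described above can be computed with plain Python; whether your client version accepts such raw token bounds in get_range is a separate question, so treat this purely as the range arithmetic:

```python
def md5_ranges(splits=16, digits=32):
    """Split the 128-bit (32 hex digit) MD5 keyspace into equal ranges."""
    span = 16 ** digits // splits
    return [('%0*x' % (digits, i * span),
             '%0*x' % (digits, (i + 1) * span - 1))
            for i in range(splits)]

for start, end in md5_ranges():
    print(start, '-', end)
# first range:  00000000000000000000000000000000 - 0fffffffffffffffffffffffffffffff
# 11th range:   a0000000000000000000000000000000 - afffffffffffffffffffffffffffffff
```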
Answer: I'd cheat a little:
1. Create new rows job_(n) with each column representing each row key in the range you want
2. Pull all columns from that specific row to indicate which rows you should pull from the CF
I do this with users. Users from a particular country get a column in the
country specific row. Users with a particular age are also added to a specific
row.
Allows me to quickly pull the rows I need based on the criteria I want, and it is
a little more efficient compared to pulling everything.
This is how the Mahout CassandraDataModel example functions:
* <https://github.com/apache/mahout/blob/trunk/integration/src/main/java/org/apache/mahout/cf/taste/impl/model/cassandra/CassandraDataModel.java>
Once you have the data and can pull the rows you are interested in, you can
hand it off to your MR job(s).
Alternately, if speed isn't an issue, look into using PIG: [How to use
Cassandra's Map Reduce with or w/o
Pig?](http://stackoverflow.com/questions/2734005/how-to-use-cassandras-map-
reduce-with-or-w-o-pig)
|
Django, Virtualenv, nginx + uwsgi import module wsgi error
Question: I'm trying to set up my django project on a staging server with nginx,
virtualenv, and uwsgi, but I keep getting an import module wsgi error.
If there's a community where I can find an answer, it's here... Thank you all in
advance.
These are my configuration files:
uwsgi.py on my django project:
import os
import sys
import site
site.addsitedir(os.path.join(os.environ['WORKON_HOME'],'project/lib/python2.6/site-packages'))
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
sys.path.append(os.path.join(os.path.realpath(os.path.dirname(__file__)), '../../../'))
sys.path.append(os.path.join(os.path.realpath(os.path.dirname(__file__)), '../../'))
os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.staging.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
Nginx Configuration
# nginx configuration for project.maumercado.com
server {
server_name project.maumercado.com;
access_log /home/ubuntu/logs/project/nginx/access.log;
error_log /home/ubuntu/logs/project/nginx/error.log;
location / {
uwsgi_pass unix:/tmp/uwsgi.sock;
include /etc/nginx/uwsgi_params;
}
location /static {
root /home/ubuntu/django-projects/project/project/media;
}
location /media {
root /home/ubuntu/django-projects/project/project/media;
}
}
and, my uwsgi.conf
# file: /etc/init/uwsgi.conf
description "uWSGI starter"
start on (local-filesystems and runlevel [2345])
stop on runlevel [016]
respawn
# home - is the path to our virtualenv directory
# pythonpath - the path to our django application
# module - the wsgi handler python script
exec /home/ubuntu/ve/project/bin/uwsgi \
--uid www-data \
--pythonpath /home/ubuntu/django-projects/project/project/configs/staging/ \
--socket /tmp/uwsgi.sock \
--chmod-socket \
--module wsgi \
--logdate \
--optimize 2 \
--processes 2 \
--master \
--logto /home/ubuntu/logs/project/uwsgi.log
Nginx's logs don't show anything besides a 500 in access.log, so here's the
uwsgi.log:
Mon Feb 6 13:58:23 2012 - *** Starting uWSGI 1.0.2.1 (32bit) on [Mon Feb 6 13:58:23 2012] ***
Mon Feb 6 13:58:23 2012 - compiled with version: 4.4.5 on 06 February 2012 12:32:36
Mon Feb 6 13:58:23 2012 - current working directory: /
Mon Feb 6 13:58:23 2012 - detected binary path: /home/ubuntu/ve/project/bin/uwsgi
Mon Feb 6 13:58:23 2012 - setuid() to 1000
Mon Feb 6 13:58:23 2012 - your memory page size is 4096 bytes
Mon Feb 6 13:58:23 2012 - chmod() socket to 666 for lazy and brave users
Mon Feb 6 13:58:23 2012 - uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
Mon Feb 6 13:58:23 2012 - Python version: 2.6.6 (r266:84292, Sep 15 2010, 16:02:57) [GCC 4.4.5]
Mon Feb 6 13:58:23 2012 - Set PythonHome to /home/ubuntu/ve/project
Mon Feb 6 13:58:23 2012 - Python main interpreter initialized at 0x9a9d740
Mon Feb 6 13:58:23 2012 - your server socket listen backlog is limited to 100 connections
Mon Feb 6 13:58:23 2012 - *** Operational MODE: preforking ***
Mon Feb 6 13:58:23 2012 - added /home/ubuntu/django-projects/project/ to pythonpath.
ImportError: No module named wsgi
Mon Feb 6 13:58:23 2012 - unable to load app 0 (mountpoint='') (callable not found or import error)
Mon Feb 6 13:58:23 2012 - *** no app loaded. going in full dynamic mode ***
Mon Feb 6 13:58:23 2012 - *** uWSGI is running in multiple interpreter mode ***
Mon Feb 6 13:58:23 2012 - spawned uWSGI master process (pid: 551)
Mon Feb 6 13:58:23 2012 - spawned uWSGI worker 1 (pid: 588, cores: 1)
Mon Feb 6 13:58:23 2012 - spawned uWSGI worker 2 (pid: 589, cores: 1)
I don't know if the way I set up my project has anything to do with it, but
anyway, here's the manage file that I use to redirect django utilities:
manage.sh
#!/bin/bash
python ./project/configs/${DEPLOYMENT_TARGET:="common"}/manage.py $*
and just in case this is how I have set up a django project:
project
|-manage.sh -> this fellow is redirected to settings.py (production, common or staging)
|-requirements.txt
|-README
|-dashbard.py
|-project.sqlite
|- project/
|- apps
|- accounts
|-other internal apps
|- configs
|- common -> for local development
|-settings.py
|-manage.py
|-urls
|-staging
|-manage.py
|-settings.py
|-wsgi.py
|-logging.conf
|-production
|-manage.py
|-settings.py
|-wsgi.py
|-logging.conf
|-media
|-templates
Answer: I updated wsgi.py to look like this:
import os
import sys
import site
site.addsitedir(os.path.join('/home/ubuntu/ve','project/lib/python2.6/site-packages'))
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
sys.path.append(os.path.join(os.path.realpath(os.path.dirname(__file__)), '../../../'))
sys.path.append(os.path.join(os.path.realpath(os.path.dirname(__file__)), '../../'))
os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.staging.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
My uwsgi.conf file now looks like this:
# file: /etc/init/uwsgi.conf
description "uWSGI starter"
start on (local-filesystems and runlevel [2345])
stop on runlevel [016]
respawn
# home - is the path to our virtualenv directory
# pythonpath - the path to our django application
# module - the wsgi handler python script
exec /home/ubuntu/ve/project/bin/uwsgi \
--uid ubuntu \
--pythonpath /home/ubuntu/django-projects/project/project/configs/staging \
-H /home/ubuntu/ve/project \
--socket /tmp/uwsgi.sock \
--chmod-socket 644 \
--module wsgi \
--logdate \
--optimize 2 \
--processes 2 \
--master \
--logto /home/ubuntu/logs/project/uwsgi.log
And my nginx site-available file looks like this:
# file: /etc/nginx/sites-available/yourdomain.com
# nginx configuration for project.maumercado.com
server {
listen 80;
charset utf-8;
server_name project.maumercado.com;
access_log /home/ubuntu/logs/project/nginx/access.log;
error_log /home/ubuntu/logs/project/nginx/error.log;
location ^~ /cache/ {
root /home/ubuntu/django-projects/project/project/media;
expires max;
}
location / {
uwsgi_pass unix:/tmp/uwsgi.sock;
include /etc/nginx/uwsgi_params;
}
}
And it's working perfectly now; I had some problems with the styles because of
strange characters being used, like ñ in the CSS files.
Now I would like to know what should I do when I need to run more projects in
the same server with uwsgi?
|
Python matplotlib : plot3D with a color for 4D
Question: I am trying to make a 3D plot from x, y, z points list, and I want to plot
color depending on the values of a fourth variable rho.
Currently I have ;
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(cell_x, cell_y, cell_z, linestyle='None', marker='o', markersize = 5, antialiased=True)
ax.set_xlim3d(0.45, 0.55)
ax.set_ylim3d(0.45, 0.55)
ax.set_zlim3d(0.45, 0.55)
How to add cell_rho (my fourth array) as the color of my x, y, z points ? (for
example for a jet colormap).
Thank you very much.
EDIT : I can't use scatter plots because for my 18000 points scatter plots are
very slow compared to plot3d with markers only.
Answer: If you want to display a simple 3D scatterplot, can't you just use `scatter`?
E.g.,
x, y, z = randn(100), randn(100), randn(100)
fig = plt.figure()
from mpl_toolkits.mplot3d import Axes3D
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=randn(100))
plt.show()
(I'm running the above code under `ipython -pylab`.)

It seems, on the contrary, that with `plot3D` you must convert your fourth
dimension to RGB tuples.
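A self-contained variant of the scatter approach (using the non-interactive Agg backend so it runs headless; the jet colormap from the question is an arbitrary choice here):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection

x, y, z = np.random.randn(100), np.random.randn(100), np.random.randn(100)
rho = np.random.rand(100)  # fourth variable, mapped to color

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
sc = ax.scatter(x, y, z, c=rho, cmap='jet')
fig.colorbar(sc)  # legend for the rho -> color mapping
fig.savefig('scatter3d.png')
```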
|
Convert RGBA PNG to RGB with PIL
Question: I'm using PIL to convert a transparent PNG image uploaded with Django to a JPG
file. The output looks broken.
### Source file

### Code
Image.open(object.logo.path).save('/tmp/output.jpg', 'JPEG')
or
Image.open(object.logo.path).convert('RGB').save('/tmp/output.png')
### Result
Both ways, the resulting image looks like this:

Is there a way to fix this? I'd like to have white background where the
transparent background used to be.
* * *
# Solution
Thanks to the great answers, I've come up with the following function
collection:
import Image
import numpy as np
def alpha_to_color(image, color=(255, 255, 255)):
"""Set all fully transparent pixels of an RGBA image to the specified color.
This is a very simple solution that might leave over some ugly edges, due
to semi-transparent areas. You should use alpha_composite_with_color instead.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
x = np.array(image)
r, g, b, a = np.rollaxis(x, axis=-1)
r[a == 0] = color[0]
g[a == 0] = color[1]
b[a == 0] = color[2]
x = np.dstack([r, g, b, a])
return Image.fromarray(x, 'RGBA')
def alpha_composite(front, back):
"""Alpha composite two RGBA images.
Source: http://stackoverflow.com/a/9166671/284318
Keyword Arguments:
front -- PIL RGBA Image object
back -- PIL RGBA Image object
"""
front = np.asarray(front)
back = np.asarray(back)
result = np.empty(front.shape, dtype='float')
alpha = np.index_exp[:, :, 3:]
rgb = np.index_exp[:, :, :3]
falpha = front[alpha] / 255.0
balpha = back[alpha] / 255.0
result[alpha] = falpha + balpha * (1 - falpha)
old_setting = np.seterr(invalid='ignore')
result[rgb] = (front[rgb] * falpha + back[rgb] * balpha * (1 - falpha)) / result[alpha]
np.seterr(**old_setting)
result[alpha] *= 255
np.clip(result, 0, 255, out=result)  # clip in place; without out= the result is discarded
# astype('uint8') maps np.nan and np.inf to 0
result = result.astype('uint8')
result = Image.fromarray(result, 'RGBA')
return result
def alpha_composite_with_color(image, color=(255, 255, 255)):
"""Alpha composite an RGBA image with a single color image of the
specified color and the same size as the original image.
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
back = Image.new('RGBA', size=image.size, color=color + (255,))
return alpha_composite(image, back)
def pure_pil_alpha_to_color_v1(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
NOTE: This version is much slower than the
alpha_composite_with_color solution. Use it only if
numpy is not available.
Source: http://stackoverflow.com/a/9168169/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
def blend_value(back, front, a):
return (front * a + back * (255 - a)) / 255
def blend_rgba(back, front):
result = [blend_value(back[i], front[i], front[3]) for i in (0, 1, 2)]
return tuple(result + [255])
im = image.copy() # don't edit the reference directly
p = im.load() # load pixel array
for y in range(im.size[1]):
for x in range(im.size[0]):
p[x, y] = blend_rgba(color + (255,), p[x, y])
return im
def pure_pil_alpha_to_color_v2(image, color=(255, 255, 255)):
"""Alpha composite an RGBA Image with a specified color.
Simpler, faster version than the solutions above.
Source: http://stackoverflow.com/a/9459208/284318
Keyword Arguments:
image -- PIL RGBA Image object
color -- Tuple r, g, b (default 255, 255, 255)
"""
image.load() # needed for split()
background = Image.new('RGB', image.size, color)
background.paste(image, mask=image.split()[3]) # 3 is the alpha channel
return background
## Performance
The simple non-compositing `alpha_to_color` function is the fastest solution,
but leaves behind ugly borders because it does not handle semi-transparent
areas.
Both the pure PIL and the numpy compositing solutions give great results, but
`alpha_composite_with_color` is much faster (8.93 msec) than
`pure_pil_alpha_to_color` (79.6 msec). ~~If numpy is available on your system,
that's the way to go.~~ (Update: The new pure PIL version is the fastest of
all mentioned solutions.)
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_to_color(i)"
10 loops, best of 3: 4.67 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.alpha_composite_with_color(i)"
10 loops, best of 3: 8.93 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color(i)"
10 loops, best of 3: 79.6 msec per loop
$ python -m timeit "import Image; from apps.front import utils; i = Image.open(u'logo.png'); i2 = utils.pure_pil_alpha_to_color_v2(i)"
10 loops, best of 3: 1.1 msec per loop
Answer: Here's a version that's much simpler - not sure how performant it is. Heavily
based on some django snippet I found while building `RGBA -> JPG + BG` support
for sorl thumbnails.
from PIL import Image
png = Image.open(object.logo.path)
png.load() # required for png.split()
background = Image.new("RGB", png.size, (255, 255, 255))
background.paste(png, mask=png.split()[3]) # 3 is the alpha channel
background.save('foo.jpg', 'JPEG', quality=80)
Result @80%

Result @ 50%

|
What is the most Pythonic way to modify the function of a function?
Question: I have a function I am using to read in files of a particular format. My
function looks like this:
import csv
from collections import namedtuple
def read_file(f, name, header=True):
with open(f, mode="r") as infile:
reader = csv.reader(infile, delimiter="\t")
if header is True:
next(reader)
gene_data = namedtuple("Data", 'id, name, q, start, end, sym')
for row in reader:
row = gene_data(*row)
yield row
I also have another type of file that I would like to read in with this
function. However, the other file type needs a few slight parsing steps before
I can use the `read_file` function. For example, trailing periods need to be
stripped from column `q` and the characters `atr` need to be appended to the
`id` column. Obviously, I could create a new function, or add some optional
arguments to the existing function, but is there a simple way to modify this
function so that it can be used to read in an additional file type(s)? I was
thinking of something along the lines of a decorator?
Answer: IMHO, the most Pythonic way would be converting the function to a base class,
split file operations into methods and overriding these methods in new classes
based on your base class.
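A minimal sketch of that idea (the class and method names are made up for illustration): the base class does the generic TSV parsing and yields named tuples, and a subclass overrides a single `clean_row` hook to do the extra per-format parsing:

```python
import csv
from collections import namedtuple

Data = namedtuple('Data', 'id, name, q, start, end, sym')

class FileReader(object):
    def clean_row(self, row):
        return row  # default: no extra parsing

    def read(self, path, header=True):
        with open(path) as infile:
            reader = csv.reader(infile, delimiter='\t')
            if header:
                next(reader)
            for row in reader:
                yield Data(*self.clean_row(row))

class OtherFormatReader(FileReader):
    """Handles the second file type described in the question."""
    def clean_row(self, row):
        row = list(row)
        row[2] = row[2].rstrip('.')  # strip trailing periods from q
        row[0] = row[0] + 'atr'      # append 'atr' to id
        return row
```

Each new file type then only needs its own small `clean_row` override rather than a copy of the whole reader.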
|
Python subprocess issue on Windows
Question: I'm trying to write a Python script to convert a bunch of images.
It's something like the following code (after a huge minimization):
#!/usr/bin/python
# -*- coding: utf-8 -*-
from subprocess import call
cmd = ' '.join(['convert.exe', '-resize', '110', 'foo\\a.jpg', 'bar\\a.jpg'])
print cmd
call(cmd)
I get the error below after executing it:
Parameter not valid - 110
While when I copy / paste the string emitted from the script on the command
line, the image conversion works perfectly. What did I miss?
Answer: I would say that the error is most likely the result of one of two things:
1. In the call command, try using `shell=True` as the second argument.
2. You're using relative paths, and the working directory might not be the same for python as it is from your cmd shell.
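A third option worth noting: pass the arguments as a list instead of joining them into one string, which sidesteps shell quoting entirely. A portable sketch (`sys.executable` stands in here for `convert.exe`, which only exists on the asker's machine):

```python
import subprocess
import sys

# With a list, each element reaches the child process as one argument,
# exactly as written -- no shell parsing is involved.
ret = subprocess.call([sys.executable, '-c', 'print("resized")'])
print(ret)  # -> 0 on success
```

For the original command that would be `call(['convert.exe', '-resize', '110', 'foo\\a.jpg', 'bar\\a.jpg'])`.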
|
M-x breakpoints in python-mode in Emacs
Question: I have read in a several places (e.g.
[here](http://stackoverflow.com/questions/283294/django-debugging-with-emacs))
that I can use `M-x space` to add/remove breakpoints in my python files when
debugging with Emacs.
However, when I type `M-x space` once I have opened `pdb` on the file, Emacs
prompts me with a large autocompletion list, as if I were to type any other
`M-x command`
### Q1:
How can I use `M-x space` to add or remove breakpoints once I have started a
pdb session in Emacs?
### Q2:
The top answer in the thread above says I should invoke the following two
lines from pdb to use `M-x space` to add/remove breakpoints:
import sys
sys.path.append('/path/to/directory/containing/views.py')
but what is `views.py`? Is that something specific to Django?
Answer: I think you are mistaken. `gud-break` is set to `C-x SPC` (`SPC` means space).
I guess this is just a typo in the linked post.
|
Python OpenCV cv.WaitKey spits back weird output on Ubuntu modulo 256 maps correctly
Question: I am running Ubuntu 11.10 (Lenovo T400) with OpenCV 2.2 (I believe, as imports
are done as `import cv2.cv as cv`). This problem also happens if I just
`import cv` instead.
I recently started having this problem, and it's kind of a weird one. I don't
know of anything significant I did, and I have restarted since it started happening.
I installed a couple of programs, but I don't think those would affect this.
When I run with an artificial image showing (just a black image), I try to
poll cv.WaitKey(10). It spits back garbage.
Here's my OpenCV code:
import cv2.cv as cv
import time
cv.NamedWindow("camera", 1)
img = cv.CreateImage((400,400), 8, 3)
valkeys = range(1,255)
f = open('/home/andrew/webuploads/keyboardtest', 'wb')
while True:
cv.ShowImage("camera", img)
k = cv.WaitKey(10)
if k == -1:
pass
else:
print 'writing %s' %str(k)
f.write((str(k)+' '))
f.close()
Here's the output I get from the program:
1048678 1048676 1048673 1048691 1048676 1048678 1048689 1048695 1048677
1048688 1048687 1048681 1048677 1048677 1048695 1048624 1048633 1048690
1048633 1048624 1048695 1048677 1048690 1048624 1048633 1048681 1048677
1048681 1048688 1048687 1048677 1048681 1048692 1048688 1048681 1048688
1048687 1048681 1048681 1048688 1048687 1048585 1048687 1048681 1048688
1048687 1048681 1114085 1179728 1179727 1179721 1179728 1179721 1245153
1245289 1179727 1179721 1179727 1179721 1179728 1179727 1245155 1441865
1179728 1179727 1179721 1179728 1179727 1179721 1179728 1179727 1179718
1179721 1179716 1179728 1179727 1179731 1179721 1179713 1179728 1179727
1179687 1179723 1179716 1179736 1179724 1179715 1179734 1179725 1179692
1179736 1179738 1179725 1179715 1179734 1179692 1245155 1441859
Now I can modulo these numbers by 256 and get sensible results (I just tried
it, and it correctly identified all my keys). However, why would I need to do
this? It previously worked without any extra step (`print chr(k)` would give
me a letter). Anyone have any ideas?
Answer: The modulus works because the information about the key is stored in the
**last 8 bits** of the return value. A `k & 255` will also pick the last 8
bits:
>>> k = 1048678
>>> chr(k & 255)
'f'
In Python, `chr(n)` will return the character corresponding to _n_.
Unfortunately, OpenCV documentation [presents no information about this
issue](http://opencv.itseez.com/modules/highgui/doc/user_interface.html?highlight=waitkey#cv2.waitKey).
|
Python, networkx
Question: I need help since I'm not an expert in programming.
How can I draw a planar graph (a graph is said to be planar if it can be drawn
in the plane such that there are no edge crossings) for a given graph with n
nodes and E edges, and then flip the edges to get another planar graph (looping
until we get all the possibilities)?
Thanks in advance, and I appreciate your help.
PY
* * *
>>>#visualize with pygraphviz
A=pgv.AGraph()
File "<stdin>", line 6
A=pgv.AGraph()
^
SyntaxError: invalid syntax
>>> A.add_edges_from(G.edges())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'A' is not defined
>>> A.layout(prog='dot')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'A' is not defined
>>> A.draw('planar.png')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'A' is not defined
Answer: There are several hard computational problems involved in your question.
**First, some theory**. If the graph G is planar, then every subgraph of G is
planar. Flipping edges from G (which has `e` edges) would give `2^e-1` planar
subgraphs (if we don't care about connectivity), which is exponential (i.e.
huge and "bad"). Probably, you'd like to find "maximal" planar subgraphs.
Drawing planar graphs so that they also look planar is
[computationally
hard](http://en.wikipedia.org/wiki/Crossing_number_%28graph_theory%29#Complexity):
it's one thing to know that there exists a graphical representation where
the edges don't cross, and another to find such a representation.
**To the implementation**. It seems that networkx doesn't have a function that
checks whether a graph is planar. Some other packages that work with graphs do
(for instance, [sage](http://www.sagemath.org/) has a `g.is_planar()` function,
where `g` is a graph object). Below, I wrote a "naive" planarity check with
networkx (there must be more efficient methods), based on [Kuratowski's
theorem](http://en.wikipedia.org/wiki/Kuratowski%27s_theorem#Kuratowski.27s_and_Wagner.27s_theorems).
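(A note for later readers: newer networkx releases, 2.2 and up, do ship a built-in planarity test, `nx.check_planarity`, which is far more efficient than a subgraph search. A quick sketch, assuming a recent networkx is installed:)

```python
import networkx as nx

# K5 is the classic non-planar graph; a cycle is trivially planar.
is_planar, cert = nx.check_planarity(nx.complete_graph(5))
print(is_planar)  # -> False

is_planar, cert = nx.check_planarity(nx.cycle_graph(8))
print(is_planar)  # -> True; cert is a planar embedding
```

With that caveat, the hand-rolled Kuratowski-based check below still illustrates the idea.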
import pygraphviz as pgv
import networkx as nx
import itertools as it
from networkx.algorithms import bipartite
def is_planar(G):
    """
    Checks if graph G contains K(5) or K(3,3) as a subgraph;
    returns (True/False for planarity, nodes of the "bad" subgraph).
    """
    result = True
    bad_minor = []
    n = len(G.nodes())
    if n > 5:
        for subnodes in it.combinations(G.nodes(), 6):
            subG = G.subgraph(subnodes)
            # check if subG is the complete bipartite graph K(3,3):
            # 9 edges, connected, bipartite, with parts of size 3
            if len(subG.edges()) == 9 and nx.is_connected(subG) and bipartite.is_bipartite(subG):
                X, Y = bipartite.sets(subG)
                if len(X) == 3:
                    result = False
                    bad_minor = subnodes
    if n > 4 and result:
        for subnodes in it.combinations(G.nodes(), 5):
            subG = G.subgraph(subnodes)
            if len(subG.edges()) == 10:  # subG is the complete graph K(5)
                result = False
                bad_minor = subnodes
    return result, bad_minor
# create a random graph with n nodes and edge probability p; retry until planar
n = 8
p = 0.6
while True:
    G = nx.gnp_random_graph(n, p)
    if is_planar(G)[0]:
        break

# visualize with pygraphviz
A = pgv.AGraph()
A.add_edges_from(G.edges())
A.layout(prog='dot')
A.draw('planar.png')
**Edit2**. _If you face trouble with pygraphviz, try drawing with networkx;
maybe you'll find the results OK. So, instead of the "visualize with pygraphviz"
block, try the following_ :
import matplotlib.pyplot as plt
nx.draw(G)
# comment the line above and uncomment one of the 3 lines below (try each of them):
#nx.draw_random(G)
#nx.draw_circular(G)
#nx.draw_spectral(G)
plt.show()
**End of edit2**.
The result looks like this.
You see there's one crossing in the picture (although the graph is planar); it's
actually a good result (don't forget the problem is computationally hard).
pygraphviz is a wrapper for [Graphviz](http://www.graphviz.org/), which uses
heuristic algorithms. In the line `A.layout(prog='dot')` you may try replacing
'dot' with 'twopi', 'neato', 'circo', etc. to see if you achieve a better
visualization.
**Edit**. Let's also consider your question on planar subgraphs. Let's
generate a non-planar graph:
while True:
    J = nx.gnp_random_graph(n, p)
    if is_planar(J)[0] == False:
        break
I think the most efficient way of finding a planar subgraph is to eliminate
nodes from the "bad minor" (i.e. K(5) or K(3,3)). Here is my implementation:
def find_planar_subgraph(G):
    if len(G) < 3:
        return G
    else:
        is_planar_boolean, bad_minor = is_planar(G)
        if is_planar_boolean:
            return G
        else:
            G.remove_node(bad_minor[0])
            return find_planar_subgraph(G)
Action:
L=find_planar_subgraph(J)
is_planar(L)[0]
>> True
Now you have a planar subgraph L (a networkx graph object) of the non-planar
graph J.
|
Updating XML Elements and Attribute values using Python etree
Question: I'm very new to Python scripting, I'm trying to use 2.7 ElementTree to parse
an XML file then update/replace specific element attributes with values
sourced from a test data file. The idea is to be able to use a base XML file
to then load and populate fields with specific test data etc. then save out as
a unique XML file
My idea for a solution was to source new data from a CSV file by reading a
file to a string then slicing the string at the delimiter marks and append to
a List (done that OK). Then use etree to somehow update or delete/replace the
attribute with a specifc value from the List.
I've looked in the Python ElementTree documentation and seen the clear() and
remove() element objects, but I've no idea of the syntax to use them adequately.
An example of the XML to modify is as below, attributes with XXXXX are to be
replaced/updated:
<TrdCaptRpt RptID="10000001" TransTyp="0">
<RptSide Side="1" Txt1="XXXXX">
<Pty ID="XXXXX" R="1"/>
</RptSide>
</TrdCaptRpt>
The intended result will be for example:
<TrdCaptRpt RptID="10000001" TransTyp="0">
<RptSide Side="1" Txt1="12345">
<Pty ID="ABCDE" R="1"/>
</RptSide>
</TrdCaptRpt>
How do I use the etree commands to change the base XML to update with an item
from the list[]?
Answer: For this kind of work, I always recommend
[`BeautifulSoup`](http://www.crummy.com/software/BeautifulSoup/) because it
has a really easy to learn API:
from BeautifulSoup import BeautifulStoneSoup as Soup
xml = """
<TrdCaptRpt RptID="10000001" TransTyp="0">
<RptSide Side="1" Txt1="XXXXX">
<Pty ID="XXXXX" R="1"/>
</RptSide>
</TrdCaptRpt>
"""
soup = Soup(xml)
rpt_side = soup.trdcaptrpt.rptside
rpt_side['txt1'] = 'Updated'
rpt_side.pty['id'] = 'Updated'
print soup
Example output:
<trdcaptrpt rptid="10000001" transtyp="0">
<rptside side="1" txt1="Updated">
<pty id="Updated" r="1">
</pty></rptside>
</trdcaptrpt>
Edit: With `xml.etree.ElementTree` you could use the following script:
from xml.etree import ElementTree as etree
xml = """
<TrdCaptRpt RptID="10000001" TransTyp="0">
<RptSide Side="1" Txt1="XXXXX">
<Pty ID="XXXXX" R="1"/>
</RptSide>
</TrdCaptRpt>
"""
root = etree.fromstring(xml)
rpt_side = root.find('RptSide')
rpt_side.set('Txt1', 'Updated')
pty = rpt_side.find('Pty')
pty.set('ID', 'Updated')
print etree.tostring(root)
Example output:
<TrdCaptRpt RptID="10000001" TransTyp="0">
<RptSide Side="1" Txt1="Updated">
<Pty ID="Updated" R="1" />
</RptSide>
</TrdCaptRpt>
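To tie this back to the original goal of filling the attributes from the CSV-derived list, here is a minimal sketch; the list layout `[Txt1_value, Pty_ID]` is an assumption for illustration, not something stated in the question:

```python
from xml.etree import ElementTree as etree

xml = """
<TrdCaptRpt RptID="10000001" TransTyp="0">
    <RptSide Side="1" Txt1="XXXXX">
        <Pty ID="XXXXX" R="1"/>
    </RptSide>
</TrdCaptRpt>
"""

values = ['12345', 'ABCDE']  # e.g. one sliced row from the CSV test data

root = etree.fromstring(xml)
rpt_side = root.find('RptSide')
rpt_side.set('Txt1', values[0])            # update RptSide/@Txt1 from the list
rpt_side.find('Pty').set('ID', values[1])  # update Pty/@ID from the list
print(etree.tostring(root))
```

Writing the result out as a unique file is then just `etree.ElementTree(root).write(filename)`.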
|
Python UTF-16 CSV reader
Question: I have a UTF-16 CSV file which I have to read. Python csv module does not seem
to support UTF-16.
I am using Python 2.7.2. The CSV files I need to parse are huge, running into
several GB of data.
Answers to John Machin's questions below
print repr(open('test.csv', 'rb').read(100))
Output with test.csv having just abc as content
'\xff\xfea\x00b\x00c\x00'
I think csv file got created on windows machine in USA. I am using Mac OSX
Lion.
If I use the code provided by phihag with test.csv containing one record
(sample test.csv content used), below is the output of print
repr(open('test.csv', 'rb').read(1000)):
'\xff\xfe1\x00,\x002\x00,\x00G\x00,\x00S\x00,\x00H\x00 \x00f\x00\xfc\x00r\x00 \x00e\x00 \x00\x96\x00 \x00m\x00 \x00\x85\x00,\x00,\x00I\x00\r\x00\n\x00'
Code by phihag
import codecs
import csv
with open('test.csv','rb') as f:
    sr = codecs.StreamRecoder(f, codecs.getencoder('utf-8'), codecs.getdecoder('utf-8'), codecs.getreader('utf-16'), codecs.getwriter('utf-16'))
    for row in csv.reader(sr):
        print row
Output of the above code
['1', '2', 'G', 'S', 'H f\xc3\xbcr e \xc2\x96 m \xc2\x85']
['', '', 'I']
expected output is
['1', '2', 'G', 'S', 'H f\xc3\xbcr e \xc2\x96 m \xc2\x85','','I']
Answer: At the moment, the csv module does not support UTF-16.
In Python 3.x, csv expects a text-mode file and you can simply use the
encoding parameter of
[`open`](http://docs.python.org/release/3.2/library/functions.html#open) to
force another encoding:
# Python 3.x only
import csv
with open('utf16.csv', 'r', encoding='utf16') as csvf:
    for line in csv.reader(csvf):
        print(line)  # do something with the line
In Python 2.x, you can recode the input:
# Python 2.x only
import codecs
import csv
class Recoder(object):
    def __init__(self, stream, decoder, encoder, eol='\r\n'):
        self._stream = stream
        self._decoder = decoder if isinstance(decoder, codecs.IncrementalDecoder) else codecs.getincrementaldecoder(decoder)()
        self._encoder = encoder if isinstance(encoder, codecs.IncrementalEncoder) else codecs.getincrementalencoder(encoder)()
        self._buf = ''
        self._eol = eol
        self._reachedEof = False

    def read(self, size=None):
        r = self._stream.read(size)
        raw = self._decoder.decode(r, size is None)
        return self._encoder.encode(raw)

    def __iter__(self):
        return self

    def __next__(self):
        if self._reachedEof:
            raise StopIteration()
        while True:
            line, eol, rest = self._buf.partition(self._eol)
            if eol == self._eol:
                self._buf = rest
                return self._encoder.encode(line + eol)
            raw = self._stream.read(1024)
            if raw == '':
                self._decoder.decode(b'', True)
                self._reachedEof = True
                return self._encoder.encode(self._buf)
            self._buf += self._decoder.decode(raw)

    next = __next__

    def close(self):
        return self._stream.close()

with open('test.csv','rb') as f:
    sr = Recoder(f, 'utf-16', 'utf-8')
    for row in csv.reader(sr):
        print(row)
`open` and `codecs.open` require the file to start with a BOM. If it doesn't
(or you're on Python 2.x), you can still convert it in memory, like this:
try:
from io import BytesIO
except ImportError: # Python < 2.6
from StringIO import StringIO as BytesIO
import csv
with open('utf16.csv', 'rb') as binf:
    c = binf.read().decode('utf-16').encode('utf-8')
for line in csv.reader(BytesIO(c)):
    print(line)  # do something with the line
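As a quick end-to-end sanity check of the Python 3.x route, here is a sketch that writes its own tiny UTF-16 file (BOM included) and reads it back; the temp file and row content are illustrative only:

```python
import csv
import io
import os
import tempfile

# write a minimal UTF-16 CSV; the 'utf-16' codec emits the BOM automatically
fd, path = tempfile.mkstemp(suffix='.csv')
os.close(fd)
with io.open(path, 'w', encoding='utf-16', newline='') as f:
    f.write(u'1,2,H f\u00fcr e\r\n')

with io.open(path, 'r', encoding='utf-16', newline='') as f:
    rows = list(csv.reader(f))
os.remove(path)
print(rows)
```

Note the `newline=''` arguments: the csv module documentation asks for them so that embedded line endings survive intact.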
|
python program for accessing values from csv files and storing it into a database using wamp
Question: I am new to Python and [WampServer](http://www.wampserver.com/en/). I want to
retrieve values from csv files (more than 10 GB) and, after some processing,
store them in a MySQL database using Wamp efficiently.
I have installed Python and Django on Wamp server, and have checked the
previous posts, but since I am a complete novice in this field, I am not
getting much from them.
Can somebody please suggest appropriate resources for a beginner like me? I
have already looked into [Python Power! The Comprehensive
Guide](http://rads.stackoverflow.com/amzn/click/1598631586), but I did not get
much from it. Any help would be appreciated.
Answer: You can read data from CSV files using the built-in [`csv`
module](http://docs.python.org/library/csv.html). For example:
>>> import csv
>>> open('test.csv','w').write('1,one\n2,two\n3,three\n')
>>> list(csv.reader(open('test.csv')))
[['1', 'one'], ['2', 'two'], ['3', 'three']]
You can write data to MySQL using the [`MySQL-python`
package](http://pypi.python.org/pypi/MySQL-python), which conforms to the
[Python database API](http://www.python.org/dev/peps/pep-0249/) and is
[documented here](http://mysql-python.sourceforge.net/MySQLdb.html). For
example, after you've installed this package, you can do this kind of thing:
>>> import MySQLdb
>>> conn = MySQLdb.connect(host, user, passwd, db)
>>> cursor = conn.cursor()
>>> cursor.executemany('INSERT INTO numbers (value, name) VALUES (%s, %s)',
...                    [['1', 'one'], ['2', 'two'], ['3', 'three']])
...
>>> conn.commit()
However, if you're using Django to manage the database, then you may be better
off using Django's own database interface, for example
[`Model.save`](https://docs.djangoproject.com/en/dev/ref/models/instances/#saving-
objects) to save an instance of a model to the database.
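Since the CSV files run into several gigabytes, avoid reading everything into memory; stream rows from `csv.reader` and insert them in batches with `executemany`. A runnable sketch using the stdlib `sqlite3` module as a stand-in for `MySQL-python` (the DB-API calls have the same shape; MySQL uses `%s` as the parameter marker instead of `?`, and the batch size of 2 is only for the demo — use something like 10000 in practice):

```python
import csv
import io
import itertools
import sqlite3

def chunks(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            break
        yield batch

# sqlite3 stands in for MySQLdb here so the sketch is runnable anywhere
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE numbers (value TEXT, name TEXT)')

data = io.StringIO(u'1,one\n2,two\n3,three\n')  # stand-in for the huge CSV file
for batch in chunks(csv.reader(data), 2):
    conn.executemany('INSERT INTO numbers (value, name) VALUES (?, ?)', batch)
conn.commit()

count = conn.execute('SELECT COUNT(*) FROM numbers').fetchone()[0]
print(count)  # 3
```

This keeps memory usage flat no matter how large the input file is, because only one batch of rows is held at a time.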
|
Error in simple python module
Question: I tried to write a little python script today but have failed horribly. Why is
it that the code below gives me the following error after being called from
the shell?
Error
File "./testmod.py", line 15, in <module>
printdnsfile(sys.argv[1])
File "./testmod.py", line 10, in printdnsfile
print(socket.gethostbyname(str(line)))
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
Code
#!/usr/bin/python
def printdnsfile(file):
    file = open(file, "r")
    import socket
    dest = open("/dnsfile.txt", 'w')
    for line in file:
        print(socket.gethostbyname(str(line)))
        print>>dest, str(",".join([line, socket.gethostbyname(line)]) + '\n')

if __name__ == "__main__":
    import sys
    printdnsfile(sys.argv[1])
I tested the socket module in the python-console and it worked as expected. Is
there an error with my code or is this a problem with my configuration?
Thanks.
Answer: You might have an empty line in your input file. Try checking your line before
you call gethostbyname.
def printdnsfile(file):
    file = open(file, "r")
    import socket
    dest = open("/dnsfile.txt", 'w')
    for line in file:
        line = line.strip()
        if line:
            print(socket.gethostbyname(str(line)))
            print>>dest, str(",".join([line, socket.gethostbyname(line)]) + '\n')
|
How to multiplex multiple blocking Python generators into one?
Question: Consider the following pseudo code:
def g_user():
    while True:
        yield read_user_input()

def g_socket():
    while True:
        yield read_socket_input()

def g_combined(gu, gs):
    # should read user or socket input, whichever is available
    while True:
        sel = select(gu, gs)
        if sel.contains(gu):
            yield gu.next()
        if sel.contains(gs):
            yield gs.next()

gc = g_combined(g_user(), g_socket())
What is the easiest way to implement this?
Answer: Looks like someone already implemented this:
<http://www.dabeaz.com/generators/genmulti.py>
Mirrored here:
import Queue, threading

def gen_multiplex(genlist):
    item_q = Queue.Queue()
    def run_one(source):
        for item in source:
            item_q.put(item)
    def run_all():
        thrlist = []
        for source in genlist:
            t = threading.Thread(target=run_one, args=(source,))
            t.start()
            thrlist.append(t)
        for t in thrlist:
            t.join()
        item_q.put(StopIteration)
    threading.Thread(target=run_all).start()
    while True:
        item = item_q.get()
        if item is StopIteration:
            return
        yield item
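Under Python 3 the module is `queue` rather than `Queue`; here is a self-contained usage sketch with two finite generators (the arrival order across threads is nondeterministic, hence the sort in the demo):

```python
import queue
import threading

def gen_multiplex(genlist):
    # same idea as above, ported to Python 3 module names
    item_q = queue.Queue()
    def run_one(source):
        for item in source:
            item_q.put(item)
    def run_all():
        thrlist = []
        for source in genlist:
            t = threading.Thread(target=run_one, args=(source,))
            t.start()
            thrlist.append(t)
        for t in thrlist:
            t.join()
        item_q.put(StopIteration)  # sentinel: all sources are exhausted
    threading.Thread(target=run_all).start()
    while True:
        item = item_q.get()
        if item is StopIteration:
            return
        yield item

merged = sorted(gen_multiplex([iter([1, 3]), iter([2, 4])]))
print(merged)  # [1, 2, 3, 4]
```

For truly blocking sources (user input, sockets) the threads simply block inside `run_one` until their source yields, which is exactly the multiplexing the question asks for.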
|
KeyError being thrown after logging in to Django admin
Question: I'm getting a pretty nasty KeyError after logging into the Django admin. It is
unable to find the key "user". If it helps, I have a model named "User".
How do I fix this? Could something be wrong with my configuration? I am using
the default admin configuration mentioned in the [Django
tutorials](https://docs.djangoproject.com/en/dev/intro/tutorial02/).
[07/Feb/2012 19:04:52] "GET /web/admin/ HTTP/1.1" 500 1865
http://localhost:8000/web/admin/
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/contrib/admin/sites.py", line 214, in wrapper
return self.admin_view(view, cacheable)(*args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/utils/decorators.py", line 93, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/contrib/admin/sites.py", line 197, in inner
return view(request, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/contrib/admin/sites.py", line 382, in index
context_instance=context_instance
File "/usr/local/lib/python2.6/dist-packages/django/shortcuts/__init__.py", line 20, in render_to_response
return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
File "/usr/local/lib/python2.6/dist-packages/django/template/loader.py", line 188, in render_to_string
return t.render(context_instance)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 123, in render
return self._render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 117, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 744, in render
bits.append(self.render_node(node, context))
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 757, in render_node
return node.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/loader_tags.py", line 127, in render
return compiled_parent._render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 117, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 744, in render
bits.append(self.render_node(node, context))
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 757, in render_node
return node.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/loader_tags.py", line 127, in render
return compiled_parent._render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 117, in _render
return self.nodelist.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 744, in render
bits.append(self.render_node(node, context))
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 757, in render_node
return node.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/loader_tags.py", line 64, in render
result = block.nodelist.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 744, in render
bits.append(self.render_node(node, context))
File "/usr/local/lib/python2.6/dist-packages/django/template/base.py", line 757, in render_node
return node.render(context)
File "/usr/local/lib/python2.6/dist-packages/django/contrib/admin/templatetags/log.py", line 19, in render
user_id = context[self.user].id
File "/usr/local/lib/python2.6/dist-packages/django/template/context.py", line 55, in __getitem__
raise KeyError(key)
KeyError: u'user'
[07/Feb/2012 19:06:28] "GET /web/admin/ HTTP/1.1" 500 1865
Here is part of my settings.py:
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
TEMPLATE_CONTEXT_PROCESSORS = (
'django.core.context_processors.request',
'django.contrib.messages.context_processors.messages',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'web.exception.Middleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# 'django.middleware.csrf.CsrfViewMiddleware',
)
TEMPLATE_DIRS = (
BASE('web/templates')
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sites',
'django.contrib.sessions',
'django.contrib.messages',
...
'django_nose',
# Uncomment the next line to enable the admin:
'django.contrib.admin',
# Uncomment the next line to enable admin documentation:
# 'django.contrib.admindocs',
)
Answer: Try adding this to your **TEMPLATE_CONTEXT_PROCESSORS** :
`django.contrib.auth.context_processors.auth`
It makes the `user` object available to templates through the RequestContext:
<https://docs.djangoproject.com/en/dev/ref/templates/api/#django-contrib-auth-
context-processors-auth>
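With the settings shown in the question, the resulting tuple would look something like this (a sketch; only the first entry is new):

```python
TEMPLATE_CONTEXT_PROCESSORS = (
    'django.contrib.auth.context_processors.auth',
    'django.core.context_processors.request',
    'django.contrib.messages.context_processors.messages',
)
```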
|
Django on apache2/mod_wsgi and collectstatic
Question: I'm configuring Django on Apache under Ubuntu 11.04. My media files are not
available.
httpd.conf
Alias /robots.txt /home/i159/workspace/prod-shivablog/shivablog/robots.txt
Alias /favicon.ico /home/i159/workspace/prod-shivablog/shivalog/favicon.ico
AliasMatch ^/([^/]*\.css) /home/i159/workspace/prod-shivablog/shivablog/site_media/static/css/$1
Alias /media/ /home/i159/workspace/prod-shivablog/shivablog/site_media/static/
Alias /static/ /home/i159/workspace/prod-shivablog/shivablog/site_media/static/
<Directory /home/i159/workspace/prod-shivablog/shivablog/static>
Order deny,allow
Allow from all
</Directory>
<Directory /home/i159/workspace/prod-shivablog/shivablog/media>
Order deny,allow
Allow from all
</Directory>
WSGIScriptAlias / /home/i159/workspace/prod-shivablog/shivablog/deploy/wsgi.py
WSGIDaemonProcess local-shivablog.com python-path=/home/i159/workspace/prod-shivablog/shivablog/:/home/i159/.envs/shivablog/python2.7/site-packages
<Directory /home/i159/workspace/prod-shivablog/shivablog>
<Files wsgi.py>
Order allow,deny
Allow from all
</Files>
</Directory>
urls
# Static files url.
(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT}),
(r'^site_media/static/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.STATIC_ROOT}),
wsgi
import os, sys
sys.path.insert(0, os.path.abspath(os.path.join(os.path.abspath(os.path.dirname(__file__)), os.pardir, os.pardir)))
sys.path.insert(0, os.path.abspath(os.path.join(os.path.abspath(os.path.dirname(__file__)), os.pardir)))
from django.core.handlers.wsgi import WSGIHandler
os.environ["DJANGO_SETTINGS_MODULE"] = "shivablog.settings"
application = WSGIHandler()
settings
MEDIA_ROOT = ''
MEDIA_URL = "/media/"
STATIC_ROOT = ''
STATIC_URL = "/site_media/static/"
How do I make my media files available? Which configuration settings are correct?
After `collectstatic`, all the static and media files are collected into
`site_media/static`. Should I serve my media files from this directory
(`site_media/static`)?
Answer: You should not have an entry in your urls.py file for media or static files.
If you do, at least wrap them in a clause that only serves them when `DEBUG =
True`.
<https://docs.djangoproject.com/en/1.2/howto/static-files/>
(r'^media/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT}),
(r'^site_media/static/(?P<path>.*)$', 'django.views.static.serve',
{'document_root': settings.STATIC_ROOT}),
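A sketch of that wrapping, using Django 1.3-era urlconf syntax to match the question (`urlpatterns` and the settings import are assumed to already exist in the project's urls.py):

```python
from django.conf import settings
from django.conf.urls.defaults import patterns

if settings.DEBUG:
    urlpatterns += patterns('',
        (r'^media/(?P<path>.*)$', 'django.views.static.serve',
         {'document_root': settings.MEDIA_ROOT}),
        (r'^site_media/static/(?P<path>.*)$', 'django.views.static.serve',
         {'document_root': settings.STATIC_ROOT}),
    )
```

In production Apache then serves these paths directly via the Alias directives, and Django never sees the requests.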
Also, your Apache config has the following lines, which point to the same
folder; I think they should be different, as your static files are not in your
media folder.
Alias /media/ /home/i159/workspace/prod-shivablog/shivablog/site_media/static/
Alias /static/ /home/i159/workspace/prod-shivablog/shivablog/site_media/static/
|
Split a string "aabbcc" -> ["aa", "bb", "cc"] without re.split
Question: I would like to split a string as in the title, in a single call. I'm
looking for a simple syntax using a list comprehension, but I haven't got it yet:
s = "123456"
And the result would be:
["12", "34", "56"]
What i don't want:
re.split('(?i)([0-9a-f]{2})', s)
s[0:2], s[2:4], s[4:6]
[s[i*2:i*2+2] for i in len(s) / 2]
**Edit** :
OK, I wanted to parse a hex RGB[A] color (and possibly other color/component
formats) to extract all the components. It seems that the fastest approach
would be the one from sven-marnach:
1. sven-marnach xrange: 0.883 usec per loop
python -m timeit -s 's="aabbcc";' '[int(s[i:i+2], 16) / 255. for i in xrange(0, len(s), 2)]'
2. pair/iter: 1.38 usec per loop
python -m timeit -s 's="aabbcc"' '["%c%c" % pair for pair in zip(* 2 * [iter(s)])]'
3. Regex: 2.55 usec per loop
python -m timeit -s 'import re; s="aabbcc"; c=re.compile("(?i)([0-9a-f]{2})");
split=re.split' '[int(x, 16) / 255. for x in split(c, s) if x != ""]'
Answer: Reading through the comments, it turns out the actual question is: What is the
fastest way to parse a color definition string in hexadecimal `RRGGBBAA`
format. Here are some options:
import struct

def rgba1(s, unpack=struct.unpack):
    return unpack("BBBB", s.decode("hex"))

def rgba2(s, int=int, xrange=xrange):
    return [int(s[i:i+2], 16) for i in xrange(0, 8, 2)]

def rgba3(s, int=int, xrange=xrange):
    x = int(s, 16)
    return [(x >> i) & 255 for i in xrange(0, 32, 8)]
As I expected, the first version turns out to be fastest:
In [6]: timeit rgba1("aabbccdd")
1000000 loops, best of 3: 1.44 us per loop
In [7]: timeit rgba2("aabbccdd")
100000 loops, best of 3: 2.43 us per loop
In [8]: timeit rgba3("aabbccdd")
100000 loops, best of 3: 2.44 us per loop
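One caveat if this ever moves to Python 3: the `str.decode('hex')` codec is gone there, and `bytes.fromhex` is the replacement, so the fastest variant becomes (a hedged sketch, not from the original answer):

```python
import struct

def rgba1_py3(s):
    # bytes.fromhex replaces the removed str.decode('hex') codec
    return struct.unpack("BBBB", bytes.fromhex(s))

print(rgba1_py3("aabbccdd"))  # (170, 187, 204, 221)
```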
|
How to convert a python string(include '\b') to a raw string?
Question: I use Maya to render my cartoon, and I ran into a problem when programming in
Python. Maya gives me a string variable which is a path, and I want to
convert it to a normalized path. However, when it comes to '\b', I totally
failed to handle it. Here's the problem:
I want to convert the string in **paths** whose value looks like
'D:\notebook\bot' to 'D/notebook/bot':
paths = ['D:\notebook\bot', 'D:\newworld']
for p in paths:
    p = '{0!r}'.format(p).replace('\\', '/').replace(':','')
    print p
The output is :
D/notebook/x08ot
D/newworld
See, the \n is correctly printed out as /n, however, \b is being escaped as
/x08. How can I achieve what I want? Note I can't format the input paths as:
paths = [r'D:\notebook\bot', r'D:\newworld']
because it's passed from Maya. Python version is 2.6.
Answer: The answer I came up with was to use the re module and make all the strings
"raw" strings. For more info about raw strings check out
<http://docs.python.org/reference/lexical_analysis.html>
import re
paths = [r'D:\notebook\bot', r'D:\newworld']
for p in paths:
    result = re.sub(r':?\\', "/", p)
    print(result)
|
Python random answering program
Question: I wrote this program:
#!/usr/bin/env python
"""
"""
import random
def CollectStrings():
    string_list = []
    while True:
        string = raw_input("What is your question: (enter to quit): ")
        if not string:
            return string_list
        string_list.append(string)

def ChooseStrings(how_many):
    string_list = CollectStrings()
    chosen = random.sample(string_list, how_many)
    chosen.sort()
    return ', '.join(chosen)

print ChooseStrings(3)
But I need to make this program randomly answer questions, like an 8ball. How
would I do that?
Answer: Add all answers to a list and then use
[random.choice](http://docs.python.org/library/random.html#random.choice) to
get a random answer:
import random

answers = [
    "The answer lies in your heart",
    "I do not know",
    "Almost certainly",
    "No",
    "Yes",
    "Why do you need to ask?",
    "Go away. I do not wish to answer at this time.",
    "Time will only tell",
]

print random.choice(answers)  # print a random answer
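Wiring that into the question's flow, here is a small sketch of an 8-ball helper; the canned answers are placeholders, and the `rng` parameter is only there to make the sketch testable:

```python
import random

answers = [
    "Yes",
    "No",
    "Ask again later",
]

def magic_8ball(question, rng=random):
    """Return a random canned answer for a non-empty question, else None."""
    if not question:
        return None
    return rng.choice(answers)

print(magic_8ball("Will it rain?"))
```

The existing `CollectStrings` loop can simply call `magic_8ball(string)` after each question instead of accumulating the questions.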
|
Running fselect using Python
Question: I am trying to run this python script called fselect in windows 7. It can be
downloaded from this website: <http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/>
under the name "Feature selection tool". I am running it on Python
2.7.2, and I'm facing a bit of a problem running it.
Typed this first in IDLE:
>>> import pprint
>>> import sys
>>> print pprint.pprint(sys.path)
>>> sys.path.append("C:\Users\HP\Documents\MATLAB\libsvm-3.11\tools")
>>> import fselect
Usage: training_file [testing_file]
Then the problem is when i type the next part:
Tried this:
>>> ./fselect.py TrainVec
SyntaxError: invalid syntax
Next tried this:
>>> fselect.py TrainVec
SyntaxError: invalid syntax
Next tried this:
>>> TrainVec
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
TrainVec
NameError: name 'TrainVec' is not defined
Tried this also:
>>> TrainVec.mat
Traceback (most recent call last):
File "<pyshell#8>", line 1, in <module>
TrainVec.mat
NameError: name 'TrainVec' is not defined
What is the correct way of typing it? Need some guidance on it...
tried running using cmd but there is an error...

Answer: If you are trying to run the `fselect.py` directly from the command prompt,
make sure that python is set into the path variable. For guidance with that,
please read <http://people.cis.ksu.edu/~schmidt/200f07/setpath.html>.
The script will also invoke `grid.py`. `grid.py` requires `gnuplot` to be
there. So ensure that `grid.py` is running properly and if necessary check the
paths of the `svm_train`, `svm_test` in the script along with that of
`grid.py`. Hope it will work now.
|
How to correctly structure a SQLAlchemy (declarative style) Python project and its unittests
Question: I am developing a large backend for some web apps. This is my first Python and
SQLAlchemy project, so I am confused by some things, and I was kind of spoiled
by Java programming tools and IDEs compared to Python's (I use PyDev in
Eclipse anyway). I need help with how to structure the project and write
tests. I'll describe the situation first.
In PyDev I named my project for example "ProjectName", and below I have shown
my current folder/package and file structure.
* ProjectName
  * projectname
    * __init__.py
    * some_package
      * __init__.py
      * Foo.py
      * Bar.py
  * tests
    * unit_tests
      * __init__.py
      * some_package
        * __init__.py
        * TestFoo.py
        * TestBar.py
    * load_tests
    * integration_tests
      * __init__.py
I use the declarative style in SQLAlchemy. Foo and Bar are classes, such that
Foo extends the SQLAlchemy declarative Base and Bar extends Foo. Under
`projectname.some_package`, in its __init__.py, I have this code:
engine = create_engine('mysql+mysqldb://user:pass@localhost:3306/SomeDataBase', pool_recycle=3600)
Session = sessionmaker(bind=engine)
Base = declarative_base()
So, Foo imports this Base and extends it, and Bar imports Foo and extends it.
My first question is: should I store Base in that __init__.py and use it like
I started with these two classes? The create_engine call is just temporary there; I
would like to have a config file and load its settings from there. How do I do
that? Where should I call Base.metadata.create_all() so that it creates all
database tables at once?
Next, in testing classes, for example in TestFoo I have this code :
def setUp(self):
    # create database tables and session object
    self.engine = create_engine('mysql+mysqldb://user:pass@localhost:3306/SomeDatabase', pool_recycle=3600)
    Session = sessionmaker(bind=self.engine)
    Foo.metadata.create_all(bind=self.engine)
    self.session = Session()

def tearDown(self):
    # drop all tables and close session object
    self.session.close()
    meta = MetaData(self.engine)
    meta.reflect()
    meta.drop_all()
And then I have some test methods in that test class, and it runs fine. In the
TestBar class, the only difference is that
Foo.metadata.create_all(bind=self.engine)
is:
Bar.metadata.create_all(bind=self.engine)
When I run TestBar, it also runs fine. But, when I select both test classes
and run them, I get errors :
/usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative.py:1336: SAWarning: The classname 'Foo' is already in the registry of this declarative base, mapped to <class 'projectname.some_package.Foo.Foo'>
_as_declarative(cls, classname, cls.__dict__)
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py:330: Warning: Field 'id' doesn't have a default value
cursor.execute(statement, parameters)
What is the problem here? I tried to run the tests with the nose and PyDev runners and
got the same errors. Then I tried to move the database table creation into the
__init__.py of some_package under unit_tests, but I could not get it working. Also, I'm
confused about how Python imports work. For example, if I add a Foo import in
the TestBar class, I also get errors similar to those I have shown already. How
can I run many unit tests that test SQLAlchemy classes, all at once?
So, to extract the most important questions again:
1. How to structure python project that uses SQLAlchemy declarative style and unittests correctly. BTW I have many class methods in Foo and Bar that interact with database, in context of their respective classes, I hope that is OK?
2. Where to store Base declarative class and how to properly use it in whole project, and how to extract all database schema (that I defined declaratively in my classes) anywhere in project and use it?
3. How to best use unit tests with SQLAlchemy and run multiple unittests at once?
4. If you have any other suggestions, feel free to add them.
Thank you very much for the help.
Answer: Quick answer (lack of time, sorry): use a single MetaData instance instead of
having one for both Foo and Bar. In general multiple MetaData instances is an
advanced trick that you almost never need.
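A sketch of that advice: one shared `Base` (and hence one `MetaData`) that every model imports. This assumes a recent SQLAlchemy; `sqlite://` stands in for the MySQL URL, and `Bar` is kept independent of `Foo` for brevity (real inheritance would need a ForeignKey for joined-table mapping):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()  # ONE Base -> ONE MetaData for the whole project

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')  # stand-in for the MySQL URL
Base.metadata.create_all(engine)     # creates every table registered on Base
Session = sessionmaker(bind=engine)
print(sorted(Base.metadata.tables))  # ['bar', 'foo']
```

With this layout, `Base.metadata.create_all()` / `drop_all()` in a test's `setUp`/`tearDown` handles all models at once, and re-importing models across test modules no longer re-registers class names on separate registries.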
|
Python WindowsError: [Error 3] The system cannot find the file specified when trying to rename
Question: I can't figure out what's wrong. I've used rename before without any problems,
and can't find a solution in other similar questions.
import os
import random

directory = "C:\\whatever"
string = ""
alphabet = "abcdefghijklmnopqrstuvwxyz"
listDir = os.listdir(directory)
for item in listDir:
    path = os.path.join(directory, item)
    for x in random.sample(alphabet, random.randint(5,15)):
        string += x
    string += path[-4:]  # adds file extension
    os.rename(path, string)
    string = ""
Answer: If you want to save back to the same directory you will need to add a path to
your 'string' variable. Currently it is just creating a filename and os.rename
requires a path.
for item in listDir:
    path = os.path.join(directory, item)
    for x in random.sample(alphabet, random.randint(5,15)):
        string += x
    string += path[-4:]  # adds file extension
    string = os.path.join(directory, string)
    os.rename(path, string)
    string = ""
|
Using BCP to import data to SQL Server while preserving accents, Asian characters, etc
Question: I'm trying to import a PostgreSQL dump of data into SQL Server using bcp. I've
written a Python script to switches delimiters into '^' and eliminate other
bad formatting, but I cannot find the correct switches to preserve unicode
formatting for the strings when importing into SQL Server.
In Python, if I print out the lines that are causing me trouble, the row looks
like this with the csv module:
['12', '\xe4\xb8\x89\xe5\x8e\x9f \xe3\x81\x95\xe3\x81\xa8\xe5\xbf\x97']
The database table only has 2 columns: one `integer`, one `varchar`.
My statement (simplified) for creating the table is only:
CREATE TABLE [dbo].[example](
[ID] [int] NOT NULL,
[Comment] [nvarchar](max)
)
And to run bcp, I'm using this line:
c:\>bcp dbo.example in fileinput -S servername -T -t^^ -c
It successfully imports about a million rows, but all of my accented
characters are broken.
For example, "Böhm, Rüdiger" is turned into "B+¦hm, R++diger". Does anyone
have experience with how to properly set switches or other hints with bcp?
**Edit** : `varchar` switched to `nvarchar`, but this does not fix the issue.
This output in Python (reading with CSV module):
['62', 'B\xc3\xb6hm, R\xc3\xbcdiger']
is displayed as this in SSMS from the destination DB (delimiters matched for
consistency):
select * from dbo.example where id = 62
62;"B├╢hm, R├╝diger"
where in pgAdmin, using the original DB, I have this:
62;"Böhm, Rüdiger"
Answer: You may need to modify your BCP command to support wide character sets (note
the use of -w instead of -c switch)
bcp dbo.example in fileinput -S servername -T -t^^ -w
[BCP documentation reference](http://msdn.microsoft.com/en-
us/library/ms162802.aspx)
See also <http://msdn.microsoft.com/en-us/library/ms188289.aspx>
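Note that `-w` tells bcp to read the data file in Unicode character format (UTF-16LE), while the bytes shown in the question (`'\xe4\xb8\x89...'`) are UTF-8, so the intermediate file probably needs re-encoding first. A sketch (filenames are hypothetical):

```python
import io

# Stand-in for the real PostgreSQL dump (hypothetical name/content).
with io.open('dump_utf8.txt', 'w', encoding='utf-8') as f:
    f.write(u'62^B\u00f6hm, R\u00fcdiger\n')

# Re-encode from UTF-8 to UTF-16LE, which is what `bcp -w` expects.
with io.open('dump_utf8.txt', 'r', encoding='utf-8') as src:
    data = src.read()
with io.open('dump_utf16.txt', 'w', encoding='utf-16-le') as dst:
    dst.write(data)
```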
|
How to make a directory structure from list
Question: I've been looking into this but I'm not having much luck.
The idea is that Python should generate 10 separate 6-digit random codes; these
codes can then be used as folder names. This seems like such a simple task and
I have been using makedirs to attempt it, but so far no luck. Can someone
please give a quick example of how this would be done?
Answer: Don't know why I did this for you. Feeling generous.
from random import randint
import os
nums = 10
digits = 6
for i in range(nums):
value = "".join([str(randint(0,9)) for _ in range(digits)])
os.mkdir(value)
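One caveat with the snippet above: random 6-digit names can collide, and `os.mkdir` would then raise an error. A defensive variant (the set for uniqueness and the tempfile scratch directory are my additions):

```python
import os
import tempfile
from random import randint

base = tempfile.mkdtemp()  # scratch dir so we don't litter the CWD

# Keep generating until we have 10 distinct codes.
names = set()
while len(names) < 10:
    names.add("".join(str(randint(0, 9)) for _ in range(6)))

for name in names:
    os.mkdir(os.path.join(base, name))

created = sorted(os.listdir(base))
```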
|
Find all CSV files in a directory using Python
Question: How can I find all files in directory with the extension .csv in python?
Answer:
import glob
for files in glob.glob("*.csv"):
print files
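Note that `glob.glob("*.csv")` only matches the current directory. If you also need CSV files in subdirectories, one option is `os.walk` (the `data/` tree below is created just for illustration):

```python
import os

# Build a small sample tree so the walk has something to find.
os.makedirs('data/sub')
open('data/a.csv', 'w').close()
open('data/sub/b.csv', 'w').close()
open('data/notes.txt', 'w').close()

# Recursively collect *.csv paths.
csv_files = []
for root, dirs, files in os.walk('data'):
    for name in files:
        if name.endswith('.csv'):
            csv_files.append(os.path.join(root, name))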
|
RPython sys methods don't work
Question: I have the following code:
import sys
def entry_point(argv):
sys.exit(1)
return 0
def target(*args):
return entry_point, None
However, when I run `python ./pypy/pypy/translator/goal/translate.py t.py` I
get the following error:
...
[translation:ERROR] Exception: unexpected prebuilt constant: <built-in function exit>
[translation:ERROR] Processing block:
[translation:ERROR] block@9 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'>
[translation:ERROR] in (t:3)entry_point
[translation:ERROR] containing the following operations:
[translation:ERROR] v0 = simple_call((builtin_function_or_method exit), (1))
[translation:ERROR] --end--
There was actually more to the error but I thought only this last part was
relevant. If you think more of it might be helpful, please comment and I will
edit.
In fact, I get another error when I replace sys.exit with something even
simpler like sys.stdout.write.
import sys
def entry_point(argv):
sys.stdout.write('some mesg\n')
return 0
def target(*args):
return entry_point, None
gives me:
...
[translation:ERROR] AnnotatorError: annotation of v0 degenerated to SomeObject()
[translation:ERROR] v0 = getattr((module sys), ('stdout'))
[translation:ERROR]
[translation:ERROR] In <FunctionGraph of (t:3)entry_point at 0x10d03de10>:
[translation:ERROR] Happened at file t.py line 4
[translation:ERROR]
[translation:ERROR] ==> sys.stdout.write('some mesg\n')
[translation:ERROR]
[translation:ERROR] Previous annotation:
[translation:ERROR] (none)
[translation:ERROR] Processing block:
[translation:ERROR] block@3 is a <class 'pypy.objspace.flow.flowcontext.SpamBlock'>
[translation:ERROR] in (t:3)entry_point
[translation:ERROR] containing the following operations:
[translation:ERROR] v0 = getattr((module sys), ('stdout'))
[translation:ERROR] v1 = getattr(v0, ('write'))
[translation:ERROR] v2 = simple_call(v1, ('some mesg\n'))
[translation:ERROR] --end--
Are sys methods simply off limits for RPython? It seems kind of weird to me
because exit and stdout are so readily available in C. However, the error
messages kind of look like they might be about different things, so I might
just be barking up the wrong tree.
Currently I am using [this](http://doc.pypy.org/en/latest/coding-guide.html)
guide to figure out roughly what is allowed and not allowed in RPython. Are
there other rather accessible references I could use for more information?
Answer: The sys module isn't RPython, you can't use it in an RPython program. To
return a status code you must return it directly from the entry_point
function.
You also can't use sys.stdout/sys.stdin/sys.stderr, you'll need to read/write
using the os.read/os.write functions combined with a file descriptor.
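A sketch of that RPython-friendly pattern (write to file descriptor 1 for stdout, and return the status code from `entry_point` instead of calling `sys.exit`):

```python
import os

def entry_point(argv):
    os.write(1, b"some mesg\n")  # fd 1 == stdout; raw bytes, no sys.stdout
    return 1                     # exit status is returned, not sys.exit(1)

def target(*args):
    return entry_point, None

# The same code also runs under plain CPython:
status = entry_point([])
```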
|
Importing SciPy does not work
Question: I am trying to use SciPy to solve a very simple equation (Kepler's equation)
using Newton-Raphson. However, executing the program fails with the following
error message:
return sc.optimize.newton(f, meanAnomaly, f_prime, args=(),
AttributeError: 'module' object has no attribute 'newton'
Clearly though, I have SciPy installed under Ubuntu 12.04. From scipy.test():
NumPy version 1.5.1
NumPy is installed in /usr/lib/python2.7/dist-packages/numpy
SciPy version 0.9.0
SciPy is installed in /usr/lib/python2.7/dist-packages/scipy
Python version 2.7.2+ (default, Jan 21 2012, 23:31:34) [GCC 4.6.2]
nose version 1.1.2
What is wrong? Here is my code:
# File a
from b import *
print calculate_eccentric_anomaly(1,2)
# File b
def calculate_eccentric_anomaly(meanAnomaly, eccentricity):
import scipy.optimize as sc
def f(eccentricAnomaly):
return (eccentricAnomaly - eccentricity *
sc.sin(eccentricAnomaly) - meanAnomaly)
def f_prime(eccentricAnomaly):
return 1 - eccentricity * sc.cos(eccentricAnomaly)
return sc.optimize.newton(f, meanAnomaly, f_prime, args=(),
tol=1e-10, maxiter=50)
Answer: You're importing `scipy.optimize` as `sc`, then you're trying to call
`sc.optimize.newton`, which would effectively be
`scipy.optimize.optimize.newton`. I would do
import scipy.optimize as opt
or
import scipy.optimize as scopt
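Put together, a corrected version of file b might look like this (a sketch: I've also switched the trig calls to `math.sin`/`math.cos`, since `sc.sin` would have been the next failure -- `scipy.optimize` doesn't export `sin` either):

```python
import math
import scipy.optimize as opt

def calculate_eccentric_anomaly(mean_anomaly, eccentricity):
    # Kepler's equation: E - e*sin(E) = M, solved for E by Newton-Raphson.
    def f(E):
        return E - eccentricity * math.sin(E) - mean_anomaly
    def f_prime(E):
        return 1 - eccentricity * math.cos(E)
    return opt.newton(f, mean_anomaly, f_prime, tol=1e-10, maxiter=50)

E = calculate_eccentric_anomaly(1.0, 0.5)  # e < 1: an elliptical orbit
```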
|
Borda Count using python?
Question: I have a list of ballots that look like A>B>C>D>E and some of them that look
like A>B>C=D=E. The ballots are in a text file and each ballot is on its own
line. I want to assign point values to each candidate. For A>B>C>D>E, A should
get 4 points for being in first, B should get 3, C 2, D 1, and E 0. For
A>B>C=D=E, A should get 4 points, B should get 3, and because C, D, and E are
tied they should split the remaining 3 points and so they each get 1. I want
all the ballots in the text file to be counted and the votes be added up. What
do you think is the easiest way to go about doing this?
Answer:
import itertools
import collections
def borda(ballot):
n = len([c for c in ballot if c.isalpha()]) - 1
score = itertools.count(n, step = -1)
result = {}
for group in [item.split('=') for item in ballot.split('>')]:
s = sum(next(score) for item in group)/float(len(group))
for pref in group:
result[pref] = s
return result
def tally(ballots):
result = collections.defaultdict(int)
for ballot in ballots:
for pref,score in borda(ballot).iteritems():
result[pref]+=score
result = dict(result)
return result
ballots = ['A>B>C>D>E',
'A>B>C=D=E',
'A>B=C>D>E',
]
print(tally(ballots))
yields
{'A': 12.0, 'C': 5.5, 'B': 8.5, 'E': 1.0, 'D': 3.0}
|
Python 2.6 decorator with parameter
Question: I'm writing a filesystem cache decorator for Django. The problem is that when I
decorate my function with this decorator and @register.simple_tag, I get a
"my_decorated_func takes 0 arguments" error (when the page is loaded with this
template tag).
from functools import wraps
from django.conf import settings
from django.core.cache import get_cache
from django.utils.http import urlquote
from django.utils.hashcompat import md5_constructor
FILESYSTEM_CACHE_NAME = 'filesystem'
def filesystem_cache(key_prefix, cache_time=None):
"""
Caches function based on key_prefix and function args/kwargs.
Stores function result in filesystem cache for a certain cache_time.
"""
if cache_time:
FILESYSTEM_CACHE_TIME = cache_time
else:
FILESYSTEM_CACHE_TIME = settings.CACHES['filesystem'].get('TIMEOUT')
def wrapfunc(func):
@wraps(func)
def wrapper(*args, **kwargs):
full_args = list()
full_args.extend(args)
for k, v in kwargs.items():
full_args.append('%s:%s' % (str(k), str(v)))
md5_args = md5_constructor(u':'.join([urlquote(var) for var in full_args]))
cache_key = 'template.cache.%s.%s' % (key_prefix, md5_args.hexdigest())
filesystem_cache = get_cache(FILESYSTEM_CACHE_NAME)
cached_value = filesystem_cache.get(cache_key)
if cached_value:
# if cached value exists - return it
return cached_value
result = func(*args, **kwargs)
filesystem_cache.set(cache_key, result, FILESYSTEM_CACHE_TIME)
return result
return wrapper
return wrapfunc
@register.simple_tag
@filesystem_cache('countrypages')
def my_decorated_func(country, date, locale=None):
Calling my_decorated_func with parameters from console is ok, but rendering a
template :
{% my_decorated_func country todays_date 'en' %}
gives me an error.
Any suggestions ? Thanks !
Answer: Here is a solution: `pip install decorator`, then use it as below (remember `from decorator import decorator`). Unlike functools.wraps, the decorator module preserves the wrapped function's argument signature, which Django's simple_tag inspects:
def filesystem_cache(key_prefix, cache_time=None):
"""
Caches function based on key_prefix and function args/kwargs.
Stores function result in filesystem cache for a certain cache_time.
"""
if cache_time:
FILESYSTEM_CACHE_TIME = cache_time
else:
FILESYSTEM_CACHE_TIME = settings.CACHES['filesystem'].get('TIMEOUT')
@decorator
def wrapfunc(func, *args, **kwargs):
full_args = list()
full_args.extend(args)
for k, v in kwargs.items():
full_args.append('%s:%s' % (str(k), str(v)))
md5_args = md5_constructor(u':'.join([urlquote(var) for var in full_args]))
cache_key = 'template.cache.%s.%s' % (key_prefix, md5_args.hexdigest())
filesystem_cache = get_cache(FILESYSTEM_CACHE_NAME)
cached_value = filesystem_cache.get(cache_key)
if cached_value:
# if cached value exists - return it
return cached_value
result = func(*args, **kwargs)
filesystem_cache.set(cache_key, result, FILESYSTEM_CACHE_TIME)
return result
return wrapfunc
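The root cause can be demonstrated with the standard library alone: `functools.wraps` copies the name, docstring, and attributes, but not the argument signature, so a generic `*args, **kwargs` wrapper reports zero positional arguments to signature-inspecting code like `simple_tag` (the names below are my illustration, not Django code):

```python
import functools

def my_decorated_func(country, date, locale=None):
    return (country, date, locale)

@functools.wraps(my_decorated_func)
def wrapper(*args, **kwargs):
    return my_decorated_func(*args, **kwargs)

# The wrapper's own code object accepts 0 positional arguments --
# this is what argspec-based introspection sees.
n_wrapped = my_decorated_func.__code__.co_argcount  # 3
n_wrapper = wrapper.__code__.co_argcount            # 0
```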
|
SVG -> Google Drawing
Question: I want to create SVG drawings for Google Drawings using Python, generating
them in the browser and then uploading them into Google Docs. I see the Google
Docs List API (3.0) supports importing files and then uploading them into
entries with the ResumableUploader, but I was thinking of creating images in
the browser with the svg-edit library. Does anyone know how to upload SVG
drawings directly from the browser to Google Docs? Thanks in
advance!
Answer: You can. You just have to generate the right POST request. This can probably
be done with Javascript. See: [How do you create a document in Google Docs
programmatically?](http://stackoverflow.com/questions/2704902/how-do-you-
create-a-document-in-google-docs-programmatically)
|
Make a GROUP BY with MapReduce in App Engine
Question: I'm looking for a way to make a GROUP BY operation in a query in datastore
using MapReduce. AFAIK App Engine doesn't support GROUP BY itself in GQL and a
good approach suggested by other developers is use
[MapReduce](http://code.google.com/p/appengine-mapreduce/).
I downloaded the [source code](http://code.google.com/p/appengine-
mapreduce/source/browse/#svn/trunk/python/src) and I'm studying the [demo
code](http://code.google.com/p/appengine-
mapreduce/source/browse/#svn/trunk/python/demo), and I tried to implement it in
my case, but I had no success. Here is how I tried to do it. Maybe everything
I did is wrong, so if anyone could help me with that, I would be thankful.
* * *
What I want to do is: I have a bunch of contacts in the datastore, and each
contact has a date. There are many repeated contacts with the same date. What
I want is simply the GROUP BY: gather the contacts that share the same name
and date.
E.g:
Let's say I have this contacts:
1. CONTACT_NAME: Foo1 | DATE: 01-10-2012
2. CONTACT_NAME: Foo2 | DATE: 02-05-2012
3. CONTACT_NAME: Foo1 | DATE: 01-10-2012
So after the MapReduce operation it would be something like this:
1. CONTACT_NAME: Foo1 | DATE: 01-10-2012
2. CONTACT_NAME: Foo2 | DATE: 02-05-2012
For a GROUP BY functionality I think word count does the work.
* * *
**EDIT**
The only thing that is shown in the log is:
> /mapreduce/pipeline/run 200
>
> Running GetContactData.WordCountPipeline(_(u'2012-02-02',),
> *_{})#da26a9b555e311e19b1e6d324d450c1a
**END EDIT**
If I'm doing something wrong, and if I'm using a wrong approach to do a GROUP
BY with MapReduce, help me in how to do that with MapReduce.
* * *
Here is my code:
from Contacts import Contacts
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.api import mail
from google.appengine.ext.db import GqlQuery
from google.appengine.ext import db
from google.appengine.api import taskqueue
from google.appengine.api import users
from mapreduce.lib import files
from mapreduce import base_handler
from mapreduce import mapreduce_pipeline
from mapreduce import operation as op
from mapreduce import shuffler
import simplejson, logging, re
class GetContactData(webapp.RequestHandler):
# Get the calls based on the user id
def get(self):
contactId = self.request.get('contactId')
query_contacts = Contact.all()
query_contacts.filter('contact_id =', int(contactId))
query_contacts.order('-timestamp_')
contact_data = []
if query_contacts != None:
for contact in query_contacts:
pipeline = WordCountPipeline(contact.date)
pipeline.start()
record = { "contact_id":contact.contact_id,
"contact_name":contact.contact_name,
"contact_number":contact.contact_number,
"timestamp":contact.timestamp_,
"current_time":contact.current_time_,
"type":contact.type_,
"current_date":contact.date }
contact_data.append(record)
self.response.headers['Content-Type'] = 'application/json'
self.response.out.write(simplejson.dumps(contact_data))
class WordCountPipeline(base_handler.PipelineBase):
"""A pipeline to run Word count demo.
Args:
blobkey: blobkey to process as string. Should be a zip archive with
text files inside.
"""
def run(self, date):
output = yield mapreduce_pipeline.MapreducePipeline(
"word_count",
"main.word_count_map",
"main.word_count_reduce",
"mapreduce.input_readers.DatastoreInputReader",
"mapreduce.output_writers.BlobstoreOutputWriter",
mapper_params={
"date": date,
},
reducer_params={
"mime_type": "text/plain",
},
shards=16)
yield StoreOutput("WordCount", output)
class StoreOutput(base_handler.PipelineBase):
"""A pipeline to store the result of the MapReduce job in the database.
Args:
mr_type: the type of mapreduce job run (e.g., WordCount, Index)
encoded_key: the DB key corresponding to the metadata of this job
output: the blobstore location where the output of the job is stored
"""
def run(self, mr_type, output):
logging.info(output) # here I should append the grouped duration in JSON
Answer: I based my solution on the code @autumngard provided in this
[question](http://stackoverflow.com/questions/9251040/memory-limit-hit-with-
appengine-mapreduce) and modified it to fit my purpose, and it worked.
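For reference, the GROUP BY shape of a word-count-style MapReduce -- map each entity to a (key, value) pair, shuffle values together by key, reduce each group to one row -- can be sketched in plain Python, with no App Engine dependencies (names are mine):

```python
from collections import defaultdict

contacts = [
    ('Foo1', '01-10-2012'),
    ('Foo2', '02-05-2012'),
    ('Foo1', '01-10-2012'),  # duplicate to be grouped away
]

# map phase: emit (key, value) pairs keyed on the GROUP BY columns
mapped = [((name, date), 1) for name, date in contacts]

# shuffle phase: collect values by key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# reduce phase: one output row per distinct key
grouped = sorted(groups)
```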
|
Stacking astronomy images with Python
Question: I thought this was going to be easier, but after a while I'm finally giving up
on this, at least for a couple of hours...
I wanted to reproduce a star-trails image from a timelapse set of
pictures. Inspired by this:
[The original
author](http://www.reddit.com/r/pics/comments/pkycp/star_trails_above_earthly_light_trails_as_viewed/c3qa141)
used low resolution video frames taken with VirtualDub and combined with
imageJ. I imagined I could easily reproduce this process but with a more
memory-conscious approach with Python, so I could use [the original high-
resolution
images](http://eol.jsc.nasa.gov/Videos/CrewEarthObservationsVideos/) for a
better output.
My algorithm's idea is simple: merge two images at a time, then iterate by
merging the resulting image with the next image. This is done some hundreds of
times, properly weighted so that every image has the same contribution to the
final result.
I'm fairly new to python (and I'm no professional programmer, that'll be
evident), but looking around it appears to me the Python Imaging Library is
very standard, so I decided to use it (correct me if you think something else
would be better).
Here's what I have so far:
#program to blend many images into one
import os,Image
files = os.listdir("./")
finalimage=Image.open("./"+files[0]) #add the first image
for i in range(1,len(files)): #note that this will skip files[0] but go all the way to the last file
currentimage=Image.open("./"+files[i])
finalimage=Image.blend(finalimage,currentimage,1/float(i+1))#alpha is 1/i+1 so when the image is a combination of i images any adition only contributes 1/i+1.
print "\r" + str(i+1) + "/" + str(len(files)) #lousy progress indicator
finalimage.save("allblended.jpg","JPEG")
This does what it's supposed to, but the resulting image is dark, and if I
simply try to enhance it, it's evident that information was lost due to a lack
of depth in the pixel values. (I'm not sure what the proper term here is:
color depth, color precision, pixel size.) Here's the final result using low
resolution images:

or one I was trying with the full 4k by 2k resolution (from another set of
photos):

So, I tried to fix it by setting the image mode:
firstimage=Image.open("./"+files[0])
size = firstimage.size
finalimage=Image.new("I",size)
but apparently Image.blend does not accept that image mode.
> ValueError: image has wrong mode
Any ideas?
(I also tried making the images "less dark" by multiplying it before combining
them with im.point(lambda i: i * 2) but results were just as bad)
Answer: The problem here is that you are averaging the brightness at each pixel. This
may seem sensible but it is actually not what you want at all -- the bright
stars will get "averaged away" because they move across the image. Take the
following four frames:
1000 0000 0000 0000
0000 0100 0000 0000
0000 0000 0010 0000
0000 0000 0000 0001
If you average those, you will get:
0.25 0 0 0
0 0.25 0 0
0 0 0.25 0
0 0 0 0.25
When you want:
1000
0100
0010
0001
Instead of blending the images you can try taking the maximum seen in any
image for each pixel. If you have PIL you can try the lighter function in
ImageChops.
from PIL import ImageChops
import os, Image
files = os.listdir("./")
finalimage=Image.open("./"+files[0])
for i in range(1,len(files)):
currentimage=Image.open("./"+files[i])
finalimage=ImageChops.lighter(finalimage, currentimage)
finalimage.save("allblended.jpg","JPEG")
Here is what I got: 
**EDIT:** I read the Reddit post and see that he actually combined two
approaches -- one for the star trails and a different one for the Earth. Here
is a better implementation of the averaging you tried, with proper weighting.
I used a numpy array for the intermediate storage instead of the uint8 Image
array.
import os, Image
import numpy as np
files = os.listdir("./")
image=Image.open("./"+files[0])
im=np.array(image,dtype=np.float32)
for i in range(1,len(files)):
currentimage=Image.open("./"+files[i])
im += np.array(currentimage, dtype=np.float32)
im /= len(files) * 0.25 # lowered brightness, with magic factor
# clip, convert back to uint8:
final_image = Image.fromarray(np.uint8(im.clip(0,255)))
final_image.save('all_averaged.jpg', 'JPEG')
Here is the image, which you could then combine with the star trails from the
previous one. 
|
Python w/ Pygame - The diffrence between these two sets?
Question: I'm using Python 2.7 and Pygame 1.9.1. The fact that pygame and IDLE do not
get along in a few ways is irrelevant, as I attempted to run it as a .py
file as well.
_This works:_
import pygame
y = 0
dir = 1
running = 1
width = 800
height = 600
screen = pygame.display.set_mode((width, height))
linecolor = 255, 0, 0
bgcolor = 0, 0, 0
while running:
event = pygame.event.poll()
if event.type == pygame.QUIT:
running = 0
screen.fill(bgcolor)
pygame.draw.line(screen, linecolor, (0, y), (width-1, y))
y += dir
if y == 0 or y == height-1: dir *= -1
pygame.display.flip()
_But this does not work:_
import pygame
y = 0
dir = 1
running = 1
width = 800
height = 600
linecolor = 255, 0, 0
bgcolor = 0, 0, 0
screen = pygame.display.set_mode((640, 400))
while running:
event = pygame.event.poll()
if event.type == pygame.QUIT:
running = 0
screen.fill(bgcolor)
pygame.draw.aaline(screen, linecolor, (0, y), (width-1, y)
y += dir
if y == 0 or y == height-1: dir *= -1
pygame.display.flip()
Could anybody explain the differences and why one works over the other?
The only difference appears to be the location of these two lines:
linecolor = 255, 0, 0
and
bgcolor = 0, 0, 0
Answer: But that's not the only difference. This is a syntax error:
pygame.draw.aaline(screen, linecolor, (0, y), (width-1, y)
It also differs from the above in that it calls `aaline` instead of `line` and
is indented 8 spaces instead of 4. Any of these differences could be causing a
problem (since the 8-space indentation, to me, suggests a possible mix of tabs
and spaces).
Also, in general, it's a good idea to post a stack trace when you get an error
from a piece of code. They contain useful information.
|
How to sign a file and then verify?
Question: I am writing an application in which I sync a file to a server where I want to
sign the file then send it back to the client where it can be verified.
There seem to be multiple gnupg modules for Python for Ubuntu/Debian:
python-gnupginterface - Python interface to GnuPG (GPG)
python-gpgme - python wrapper for the GPGME library
python-gpgme-dbg - python wrapper for the GPGME library (debug extension)
python-pyme - Python interface to the GPGME GnuPG encryption library
python-pyme-doc - Python interface to the GPGME GnuPG encryption library
Can someone recommend which I one I should use that gets me up and running
quickly?
Should I just shell out to gpg instead of using a Python module?
Thanks!
Answer: Use [`python-gpgme`](https://launchpad.net/pygpgme) (as a bonus, you probably
know the maintainer).
Here's how you can use it to sign something (check with jamesh if there's a
better way; I haven't used this extensively):
import gpgme
from io import BytesIO
ctx = gpgme.Context()
plain = BytesIO("Hello")
sign = BytesIO("")
ctx.sign(plain, sign, gpgme.SIG_MODE_CLEAR)
print sign.getvalue()
in case it's not clear, BytesIO is a file-like thing. You could give it
`file("/etc/passwd")` as plain and `sys.stdout` as sign and it'd do what you want.
|
How to write to a file using non blocking IO?
Question: I want to write to a file using a non-blocking method in Python. On some
googling, I found that the language supports `fcntl` in order to do so, but
the method to implement the same is not very clear to me.
This is the code snippet (I don't know where I am going wrong):
import os, fcntl
nf = fcntl.fcntl(0,fcntl.F_UNCLK)
fcntl.fcntl(0,fcntl.F_SETFL , nf | os.O_NONBLOCK )
nf = open ("test.txt", 'a')
nf.write ( " sample text \n")
Is this the correct way to perform a non-blocking IO operation on a file? I
doubt it. Also, could you suggest any other modules in Python which allow me
to do so?
Answer: This is how you turn non-blocking mode on for a file in UNIX:
fd = os.open("filename", os.O_CREAT | os.O_WRONLY | os.O_NONBLOCK)
os.write(fd, "data")
os.close(fd)
On UNIX, however, [turning on non-blocking mode has no visible effect for
regular files](http://www.remlab.net/op/nonblock.shtml)! Even though the file
is in non-blocking mode, the `os.write` call won't return immediately, it will
sleep until the write is complete. To prove it to yourself experimentally, try
this:
import os
import datetime
data = "\n".join("testing\n" * 10 for x in xrange(10000000))
print("Size of data is %d bytes" % len(data))
print("open at %s" % str(datetime.datetime.now()))
fd = os.open("filename", os.O_CREAT | os.O_WRONLY | os.O_NONBLOCK)
print("write at %s" % str(datetime.datetime.now()))
os.write(fd, data)
print("close at %s" % str(datetime.datetime.now()))
os.close(fd)
print("end at %s" % str(datetime.datetime.now()))
You'll notice that the `os.write` call does take several seconds. Even though
the call is non-blocking (technically, it's not blocking, it's sleeping), the
call is _not_ asynchronous.
* * *
AFAIK, there is no way to write to a file asynchronously on Linux or on
Windows. You can simulate it, however, using threads. Twisted has a method
named `deferToThread` for this purpose. Here's how you use it:
from twisted.internet import threads, reactor
data = "\n".join("testing\n" * 10 for x in xrange(10000000))
print("Size of data is %d bytes" % len(data))
def blocking_write():
print("Starting blocking_write")
f = open("testing", "w")
f.write(data)
f.close()
print("End of blocking_write")
def test_callback():
print("Running test_callback, just for kicks")
d = threads.deferToThread(blocking_write)
reactor.callWhenRunning(test_callback)
reactor.run()
|
default python not finding same modules that other pythons can. Any suggestions?
Question: Not sure how to explain it and can't seem to figure out why this is happening
but I tried to install 'MySQLdb' from macports.
When I type 'python' and `import MySQLdb`, it fails, but when I launch
python from `/opt/local/bin/python2.7` and do the same, it works
perfectly. I thought I'd be smart and see where the default python is pointing
to and then point it to this one, but that's when it starts getting strange.
$ which python
/opt/local/bin/python
$ ls -l /opt/local/bin/python
lrwxr-xr-x 1 root admin 24 Feb 13 13:55 /opt/local/bin/python -> /opt/local/bin/python2.7
I don't understand, it seems to be pointing the the one that works but I can't
load the same module as I can when I open python with the full path.
I'm sure I'm doing something dumb, so any help to explain why this is
happening would be great.
Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import MySQLdb
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "MySQLdb/__init__.py", line 19, in <module>
import _mysql
ImportError: No module named _mysql
Answer: The Python version banner you show is from yet another Python, that of the
python.org 2.7.2 64-bit/32-bit installer downloaded from python.org. It is not
a MacPorts Python. There should be a symlink to it at
`/usr/local/bin/python2.7`. Apparently, the PATH environment variable in the
terminal session you are using is not what you think it is or possibly you
have a shell alias defined. Try:
echo $PATH
You need to ensure that the MacPorts Python directory (`/opt/local/bin`) comes
before that of the python.org Python
(`/Library/Frameworks/Python.framework/Versions/2.7/bin` or `/usr/local/bin`).
UPDATE: Also check the current value of the MacPorts `port select` command. It
may be pointing at a non MacPorts Python.
$ sudo port select --list python
Available versions for python:
none
python25-apple
python26-apple
python27 (active)
python27-apple
python32
|
SQLAlchemy Execute with raw SQL containing @DECLARE local tables
Question: I'm stuck -- I have the following python script with SQL alchemy which I've
been using quite successfully for several other purposes.
import sqlalchemy
from sqlalchemy import MetaData
from sqlalchemy.orm import *
engine = sqlalchemy.create_engine("this line of code would provide credentials to the database")
connection = engine.connect()
session = sessionmaker(bind=engine)
result = connection.execute(sqlquery)
for row in result: print row
Recently though I discovered that if my 'sqlquery' contains an @Declare
MyTable statement I get the error:
"This result object does not return rows. "
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
Here is my SQL query which works fine in SSMS but will not execute when I try
to execute it using SQLAlchemy
DECLARE @USER TABLE
(
UserID INT
, StatsVals INT
)
INSERT INTO @USER (UserID, StatsVals)
SELECT TOP 10 u.UserID
, u.StatsVals
FROM UserProfile u
SELECT * FROM @USER
**Does anyone know why SQLAlchemy would be giving me this error? What should I
do to fix this?**
Answer: When the DBAPI executes on a cursor, if results are present, there's an
attribute called `cursor.description` that's required to be present. If it's
not, SQLAlchemy knows there's no results to return.
In this case, this is probably an issue with the DBAPI, unless this usage
falls under the realm of "multiple result sets" on the cursor. SQLAlchemy
doesn't have direct support for multiple result sets as of yet. If this is the
case, you'd need to use the DBAPI cursor directly and call `.nextset()` to get
at the results. You can get this via:
connection = engine.raw_connection()
cursor = connection.cursor()
(docs on how cursor.nextset() works at
<http://www.python.org/dev/peps/pep-0249/>)
Otherwise, you'd really need to contact the DBAPI author and see if what
you're doing here is really possible. I'm guessing this is pyodbc, even though
you haven't specified what backend you're on. If so, you can contact them at
<http://code.google.com/p/pyodbc/>.
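The `cursor.description` behavior is easy to see with any DBAPI driver, e.g. the stdlib sqlite3 module (illustrative only -- your backend is presumably pyodbc):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()

cur.execute('CREATE TABLE t (x INTEGER)')
no_rows = cur.description   # None: this statement returns no result set

cur.execute('SELECT x FROM t')
has_rows = cur.description  # column metadata: rows are available
```

SQLAlchemy makes the same check after execution, which is why your multi-statement batch ending in a SELECT still raises ResourceClosedError if the driver reports no description for the first result set.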
|
pulling a dictionary out of the function that created it (python)
Question: I am importing a list from an outside document and putting that list into a
dictionary. This question applies to pretty much all values assigned to a
variable inside a function. Once the function is complete, how do I pull that
information out of the function without having to declare the variables as
global? Sorry if this question isn't very clear; I'm having trouble vocalizing
it.
Here is the program so far. The dictionary 'result' has values while inside the
function, but when I try to call it from outside the function it's empty.
fin = open('words.txt')
def dictionary(words):
result = {}
for line in words:
result[line] = 'yupp!' # dont care about value at the moment
return result
word_dict = dictionary(fin)
'the' in word_dict# checking to see if a word is in the dictionary
Answer: Use:
result = dictionary(fin)
to assign the value returned by `dictionary` to the variable `result`.
Note that `result` is a global variable, so I'm not sure what you mean by
"with out having to assign the variables as global".
* * *
def dictionary(words):
result = {}
for word in words:
word = word.strip()
result[word] = 'yupp!'
return result
with open('words.txt') as fin:
result = dictionary(fin)
print('the' in result)
* * *
Alternatively,
def dictionary(words):
return dict.fromkeys((word.strip() for word in words), 'yupp')
|
trouble getting text from xpath entry in python
Question: I am on the website
[http://www.baseball-
reference.com/players/event_hr.cgi?id=bondsba01&t=b](http://www.baseball-
reference.com/players/event_hr.cgi?id=bondsba01&t=b)
and trying to scrape the data from the tables. When I pull the xpath from one
entry, say the pitcher "Terry Mulholland," I retrieve this:
pitchers = site.xpath("/html/body/div[2]/div[2]/div[6]/table/tbody/tr/td[3]/table/tbody/tr[2]/td/a")
When I try to print `pitcher.text` for each pitcher in `pitchers`, I get `[]`
rather than the text. Any idea why?
Answer: The problem is, last `tbody` doesn't exist in the original source. If you get
that xpath via some browser, keep in mind that browsers can guess and add
missing elements to make html valid.
Removing the last `tbody` resolves the problem.
In : import lxml.html as html
In : site = html.parse("http://www.baseball-reference.com/players/event_hr.cgi?id=bondsba01&t=b")
In : pitchers = site.xpath("/html/body/div[2]/div[2]/div[6]/table/tbody/tr/td[3]/table/tr[2]/td/a")
In : pitchers[0].text
Out: 'Terry Mulholland'
But I need to add that, the xpath expression you are using is pretty fragile.
One `div` added in some convenient place and now you have a broken script. If
possible, try to find better references like `id` or `class` that points to
your expected location.
|
python fabric logging
Question: I came across the fabric module - it's really cool, and it works well for me.
Now I have an issue: how do I collect output from a fabric script?
from fabric.api import *
from fabric.contrib.console import confirm
env.hosts = ['localhost' , '172.16.10.112','172.16.10.106']
env.user='testuser'
env.password = 'testuser'
@parallel
def uptime():
run('uname -a')
I would like to use logging modules with fabric and use them inside the code
itself .- Don't want to use normal redirection like "fab uptime &> log.out "
Answer: It looks like `fabric` itself doesn't use `logging`.
[Issue#57](https://github.com/fabric/fabric/issues/57) is already opened
regarding that, but I'm afraid that until it's fixed you'll need to stick to
redirection or have a look at some of the branches in github with changes to
do that:
* [tswicegood/fabric](https://github.com/tswicegood/fabric/commit/89c3cec8a8aa10d583a35a6201c1b4fd424f795f)
* ~~[ampledata/fabric](https://github.com/ampledata/fabric/commit/cd691df3285c67cc9ab135984b9cb5c2e0f857be)~~
* [pmuller's streamlogger gist](https://gist.github.com/pmuller/2376336)
|
Using a dash in a xpath does not work in py-dom-xpath
Question: I'm curreny using [py-dom-xpath](http://code.google.com/p/py-dom-
xpath/downloads/list) with python 2.7.2 under Debian 4.1.1-21.
Everything works pretty well, except for one XML element.
Whenever I try to check an XML document for an XPath like `//AAA/BBB/CCC-DDD`,
the path is not found. It's the only node with a dash `-` in it. I already
tried to escape the dash, but that didn't work.
I also tried `//*[name()='CCC-DDD']` and the `starts-with` and `contains`
functions. The element is definitely in the XML and the spelling is also
correct.
I tried an [online xpath validation
site](http://chris.photobooks.com/xml/default.htm), and it works flawlessly
there, even with the dash.
Any help is appreciated.
Answer: Is using [lxml](http://codespeak.net/lxml/) an option? Dashes in the XPath
work fine there:
import lxml.etree as ET
content = '''<root><AAA><BBB><CCC-DDD>xyz</CCC-DDD></BBB></AAA></root>'''
doc = ET.fromstring(content)
print(doc.xpath('//AAA/BBB/CCC-DDD'))
yields
[<Element CCC-DDD at 0xb746f504>]
|
No data with QProcess.waitForReadyRead(): non consistent behavior
Question: I experienced this bug in a more complex application I am developing. I
execute a python server program and want to read the first data available. And
then close it. I do that to validate some settings from the server.
My problem boils down to:
* QProcess.waitForReadyRead() doesn't return and times out; it's supposed to return True very quickly
* It used to work. I rolled back to an older revision to try to find what caused this to break, but it's always there now. I really tried everything I could think of, so I want to know if it's a known problem, or maybe something that only affects me and is caused by my environment.
This is the test I wrote to show the problem. When I execute it, the first 3
checks return the data immediately, but the last one times out and I get no
data.
This is certainly not logical. In the test I used wait; in my server it's just
a select-like function, implemented with base modules in Python.
from PyQt4 import QtCore
#FILE: 1.py
#print 'TEST'
#FILE: 2.py
# import time
#print 'TEST'
#time.sleep(100)
#FILE: 1.sh
# echo 'TEST'
#FILE: 2.sh
# echo 'TEST'
# sleep 100
proc0= QtCore.QProcess()
proc0.start('sh', ['./1.sh'])
proc0.waitForStarted()
proc0.waitForReadyRead(10000)
output0 = proc0.readAll()
proc1= QtCore.QProcess()
proc1.start('sh', ['./2.sh'])
proc1.waitForStarted()
proc1.waitForReadyRead(10000)
output1 = proc1.readAll()
proc2= QtCore.QProcess()
proc2.start('python', ['./1.py'])
proc2.waitForStarted()
proc2.waitForReadyRead(10000)
output2 = proc2.readAll()
proc3= QtCore.QProcess()
proc3.start('python', ['./2.py'])
proc3.waitForStarted()
proc3.waitForReadyRead(10000)
output3 = proc3.readAll()
print "0"
print output0.size()
print repr(output0.data())
print "1"
print output1.size()
print repr(output1.data())
print "2"
print output2.size()
print repr(output2.data())
print "3"
print output3.size()
print repr(output3.data())
proc0.close()
proc1.close()
proc2.close()
proc3.close()
Is the last test (proc3) supposed to behave like I described? Is there a
workaround or a fix that would let me read the data from stdout in my python
server...? What is it?
Answer: It's a comment but...
I found the solution: Python's `print` doesn't flush stdout; it waits for a
certain amount of data before actually pushing the data to stdout.
`sys.stdout.flush()` fixed it.
Hope it helps.
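The buffering effect can be reproduced without Qt at all; here is a minimal sketch using `subprocess`, with the child script inlined:

```python
import subprocess
import sys

# child prints a line, flushes, then keeps running; the flush is what
# makes the line readable immediately instead of after the buffer fills
child = "import sys, time\nsys.stdout.write('TEST\\n')\nsys.stdout.flush()\ntime.sleep(0.2)\n"
p = subprocess.Popen([sys.executable, '-c', child], stdout=subprocess.PIPE)
line = p.stdout.readline()  # returns right away thanks to the flush
p.wait()
print(line.strip())
```

Without the `sys.stdout.flush()` in the child, `readline` would block until the buffer fills or the process exits, which is exactly the proc3 symptom.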
|
M2Crypto BIO.readlines hangs, python 2.7
Question: We're in the process of converting a C++ openssl based project to python w/
M2Crypto, and we've run into a somewhat unusual issue w/ the BIO routines from
M2Crypto. Specifically, any call to BIO.readlines() hangs forever on a file
object.
Here's a quick sample of what we tried:
f = open('test.txt','w')
f.write('hello world\n')
f.close()
import M2Crypto.BIO
bio = M2Crypto.BIO.openfile('test.txt','r')
lines = bio.readlines()
# the above call hangs forever
To ensure we didn't have something horribly wrong with our OpenSSL
installation, we created a small test program to read the test.txt file we just
created:
    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/err.h>
    int main() {
        const int maxrd = 4096;
        char line[maxrd];
        int rd;
        BIO* bio = BIO_new_file("test.txt","r");
        while((rd = BIO_gets(bio, line, maxrd)) > 0) {
            printf("%s",line);
        }
        if (rd == -1) {
            printf("BIO error %ld\n", ERR_get_error());
        }
        BIO_free(bio);
        return 0;
    }
No problem.
We've been studying the M2Crypto-0.21.1/SWIG/_bio.i wrapper file, and think we
might have an idea of the source of the issue. Line 109 tests the return value
from BIO_gets()
if (r < 0) {
// return Py_None
}
BUT, the man page for BIO_gets() suggests it could return either 0 or -1 to
indicate end-of-stream.
I believe it should be
if (r < 1) {
// return Py_None
}
But we wanted to see if others had encountered this -- or whether we are mistaken in
our understanding of the BIO_gets() system.
\--- Details --- Python 2.7, M2Crypto 0.21.1, OpenSSL 0.9.8q-fips 2 Dec 2010,
FreeBSD 8.2-RELEASE-p4
Answer: In the event others stumble across this in the future, I wanted to share our
patch.
--- M2Crypto-0.21.1.orig/SWIG/_bio.i 2011-01-15 14:10:06.000000000 -0500
+++ M2Crypto-0.21.1/SWIG/_bio.i 2012-02-14 11:34:15.000000000 -0500
@@ -106,7 +106,7 @@
Py_BEGIN_ALLOW_THREADS
r = BIO_gets(bio, buf, num);
Py_END_ALLOW_THREADS
- if (r < 0) {
+ if (r < 1) {
PyMem_Free(buf);
if (ERR_peek_error()) {
PyErr_SetString(_bio_err, ERR_reason_error_string(ERR_get_error()));
NOTE: For those familiar with the internals of M2Crypto, there were
essentially three solutions to this problem. The first is the patch posted
above. Since we believe this matches the intention of the man page for
BIO_gets(), it's the solution we opted for.
The second solution was to patch M2Crypto/BIO.py. Specifically, to patch the
code that implements BIO.readlines() to test the return value from
m2.bio.gets() for either None or len(buf) == 0, and treat both as end-of-
stream.
The third solution was simply to avoid calling BIO.readlines(), and restrict
yourself to calling BIO.readline() (note -- singular readline vs readlines),
and to test the return value from BIO.readline() for either None or len(buf)
== 0.
The third solution may not seem like much of an option -- more like avoidance.
But if you are concerned about deploying an application into an environment
where M2Crypto may not be patched, this approach is guaranteed to be the most
compatible.
We've submitted our patch to the developer of record Heikki, but he hasn't had
a chance to review our suggestion yet. Until an official answer one way or the
other develops, I wanted to share our thoughts.
|
Run a python script in windows
Question: I have always used a mac to write and run python scripts. However, I have a
bunch of files on a PC that I need to run a script on. I have never really
used a PC and don't know how to run the script.
If I go to the `command` program on the PC and type in `python`, nothing
happens. How do I find where the python path is in order to get into the
`python prompt`? Also, once I am in the prompt, is the importing of `modules`
the same as in a `Unix` system?
Answer: Python isn't added to the system environment's PATH variable by default on
Windows. You have to either add the path to the directory containing the
python.exe file to the PATH variable, or call Python explicitly.
This issue has been addressed in the Python documentation: [Python
Documentation: # How to run a Python program under
windows](http://docs.python.org/faq/windows.html#how-do-i-run-a-python-
program-under-windows)
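Once you can start the interpreter (for example by running `C:\Python27\python.exe` directly -- the exact install path varies), importing modules works just as on Unix, and you can ask Python where it lives:

```python
import sys

# full path of the running interpreter,
# e.g. C:\Python27\python.exe on a typical Windows install
print(sys.executable)
```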
|
Logging Process' STDIN and STDOUT
Question: I would like to run a process with `subprocess.Popen()` and communicate with
it through the python shell, like the `subprocess.Popen` usual behavior.
Beside that, I would like to discursively log the STDIN and STDOUT to a
logfile.
How can I do it?
Answer: Assuming discursively means rambling and rambling means all in the same file,
then the following snippet is what you requested.
**Discursive logging with discrimination of the source and interaction**
Override its communicate method like similar question
[here](http://stackoverflow.com/questions/3456692/python-subprocess-popen-
communicate-through-a-pipeline)
import subprocess
def logcommunicate(self, s):
self.logfilehandle.write("Input "+s)
std = self.oldcommunicate(s)
self.logfilehandle.write("Output "+std[0])
return std
subprocess.Popen.oldcommunicate = subprocess.Popen.communicate
subprocess.Popen.communicate = logcommunicate
logfh = open("/tmp/communicate.log", "a")
proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.logfilehandle = logfh
result = proc.communicate("hello there\n")
print result
**Discursive logging with discrimination of the source**
First use StringIO instead of files, then subclass StringIO to override its
write method to open that appends timestamp and source. Then write a custom
compare function that sorts based on timestamp and source, timestamp first and
then source Input and then output
    with open("file.log", "wb") as logfile:
        out = MyOutPutStringIO.StringIO()
        inp = MyInputStringIO.StringIO()
        subprocess.Popen(cmd, shell=True, universal_newlines=True, stdin=inp, stdout=out)
        # Then after you are done
        linestotal = []
        for line in inp.readlines():
            linestotal.append(line)
        for line in out.readlines():
            linestotal.append(line)
        linestotal.sort(customsortbasedontimestampandinput)
        for line in linestotal:
            logfile.write(line)
**Discursive logging**
    with open("file.log", "wb") as logfile:
        subprocess.Popen(cmd, shell=True, universal_newlines=True, stdin=logfile, stdout=logfile)
The opposite is shown below
**Cursive logging**
    with open("stdout.txt", "wb") as out:
        with open("stderr.txt", "wb") as err:
            with open("stdin.txt", "rb") as inp:
                subprocess.Popen(cmd, shell=True, universal_newlines=True, stdin=inp, stdout=out, stderr=err)
|
Cannot properly display unicode string after parsing a file with lxml, works fine with simple file read
Question: I'm attempting to use the lxml module to parse HTML files, but am struggling
to get it to work with some UTF-8 encoded data. I'm using Python 2.7 on
Windows. For example, consider a UTF-8 encoded file without byte order mark
that contains nothing but the text string `Québec`. If I just read the
contents of the file using a regular file handler and decode the resulting
string object, I get a length 6 unicode string that looks good when written
back to a file. But if I parse the file with lxml, I get a length 7
unicode string that looks odd when written back to a file. Can someone explain
what is happening differently with lxml and how to get the original, pretty
string?
For example:
import lxml.html as html
from lxml import etree
f = open("output.txt", "w")
text = open("input.txt").read().decode("utf-8")
f.write("String of type '%s' with length %d: %s\n" % (type(text), len(text), text.encode("utf-8")))
root = html.parse("input.txt")
text = root.xpath(".//p")[0].text.strip()
f.write("String of type '%s' with length %d: %s\n" % (type(text), len(text), text.encode("utf-8")))
Produces output in `output.txt` of:
String of type '<type 'unicode'>' with length 6: Québec
String of type '<type 'unicode'>' with length 7: Québec
**EDIT**
A partial workaround here seems to be to parse the file using:
etree.parse("input.txt", etree.HTMLParser(encoding="utf-8"))
or
html.parse("input.txt", etree.HTMLParser(encoding="utf-8"))
However, as far as I know the base etree library lacks some convenience
classes for things like selectors, so a solution that allows me to use
lxml.html without etree.HTMLParser() would still be useful.
Answer: The function `lxml.html.parse` **already** uses an instance of
lxml.html.HTMLParser, so you shouldn't really be averse to using
html.parse("input.txt", html.HTMLParser(encoding="utf-8"))
to handle the UTF-8 data.
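The length-7 result is the classic symptom of the file's UTF-8 bytes being decoded as Latin-1 somewhere along the way; a short sketch reproducing it:

```python
data = u'Qu\xe9bec'.encode('utf-8')   # the file's raw bytes -- 7 bytes
print(len(data.decode('utf-8')))      # 6 -- correct decoding
print(len(data.decode('latin-1')))    # 7 -- the mojibake seen in output.txt
```

Passing an explicit `HTMLParser(encoding="utf-8")`, as in the question's edit, prevents the parser from guessing the wrong encoding.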
|
file not accessible: 'templates/test.html' using jinja2 templates in certain directories
Question: A simple GAE application threw the following error on
self.jinja2.render_template() on only one computer, but not on any others
(both macs and pcs):
ERROR 2012-02-14 21:54:04,987 webapp2.py:1528] [Errno 13] file not accessible: 'templates/test.html'
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/Users/scott/svn/GAE_branches/sample_broken_app/handlers.py", line 21, in get
self.render_response('test.html', **context)
File "/Users/scott/svn/GAE_branches/sample_broken_app/handlers.py", line 14, in render_response
rv = self.jinja2.render_template(_template, **context)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2_extras/jinja2.py", line 158, in render_template
return self.environment.get_template(_filename).render(**context)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/environment.py", line 719, in get_template
return self._load_template(name, self.make_globals(globals))
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/environment.py", line 693, in _load_template
template = self.loader.load(self, name, globals)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/loaders.py", line 115, in load
source, filename, uptodate = self.get_source(environment, name)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/loaders.py", line 165, in get_source
f = open_if_exists(filename)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/utils.py", line 224, in open_if_exists
return open(filename, mode)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver_import_hook.py", line 592, in __init__
raise IOError(errno.EACCES, 'file not accessible', filename)
IOError: [Errno 13] file not accessible: 'templates/test.html'
ERROR 2012-02-14 21:54:04,991 wsgi.py:205]
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 193, in Handle
result = handler(self._environ, self._StartResponse)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1519, in __call__
response = self._internal_error(e)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/Users/scott/svn/GAE_branches/sample_broken_app/handlers.py", line 21, in get
self.render_response('test.html', **context)
File "/Users/scott/svn/GAE_branches/sample_broken_app/handlers.py", line 14, in render_response
rv = self.jinja2.render_template(_template, **context)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/webapp2/webapp2_extras/jinja2.py", line 158, in render_template
return self.environment.get_template(_filename).render(**context)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/environment.py", line 719, in get_template
return self._load_template(name, self.make_globals(globals))
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/environment.py", line 693, in _load_template
template = self.loader.load(self, name, globals)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/loaders.py", line 115, in load
source, filename, uptodate = self.get_source(environment, name)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/loaders.py", line 165, in get_source
f = open_if_exists(filename)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/jinja2/jinja2/utils.py", line 224, in open_if_exists
return open(filename, mode)
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver_import_hook.py", line 592, in __init__
raise IOError(errno.EACCES, 'file not accessible', filename)
IOError: [Errno 13] file not accessible: 'templates/test.html'
INFO 2012-02-14 21:54:05,006 dev_appserver.py:2884] "GET /test HTTP/1.1" 500 -
The app is just:
handlers.py:
import webapp2
from webapp2_extras import jinja2
class BaseHandler(webapp2.RequestHandler):
@webapp2.cached_property
def jinja2(self):
# Returns a Jinja2 renderer cached in the app registry.
return jinja2.get_jinja2(app=self.app)
def render_response(self, _template, **context):
# Renders a template and writes the result to the response.
rv = self.jinja2.render_template(_template, **context)
self.response.write(rv)
class MyHandler(BaseHandler):
def get(self):
context = {'message': 'Hello, world!'}
self.render_response('test.html', **context)
webapp2_config = {}
webapp2_config['webapp2_extras.sessions'] = {
'secret_key': 'ef23fsdawe444',
}
application = webapp2.WSGIApplication([
webapp2.Route(r'/test', handler=MyHandler, name='test'),
], debug=True, config=webapp2_config)
app.yaml:
application: sampleapp
version: 0-01
api_version: 1
runtime: python27
threadsafe: false
builtins:
- remote_api: on
handlers:
- url: .*
script: handlers.application
libraries:
- name: jinja2
version: 2.6
- name: webapp2
version: 2.3
There's also a templates directory with test.html in it.
Now when I run the app from a different directory, it works fine.
This [google python group](https://groups.google.com/forum/#!searchin/google-
appengine-python/file%2420not%2420accessible/google-appengine-
python/AvDCQwoexCQ/pq5Sa4I6N4YJ) post gave me a hint to try a different
directory, but I have no idea what's wrong with the original, which ran
versions of the code without jinja2 from webapp2_extras fine.
Version info: OS X 10.6.8, GoogleAppEngineLauncher: 1.6.2, Python: 2.7.2
I reinstalled everything, and set my PYTHONPATH to "" in my .bash_profile, but
that didn't change anything.
Note: I did strip out a few of the non public directory names from the debug
output, but they didn't have spaces or anything.
A few others had a similar
[error](http://stackoverflow.com/questions/8799304/ioerror-errno-13-file-not-
accessible-with-google-appengine-1-6-1), but their fixes were for older
versions. There seems to be something about os.path
Answer: I was getting this error of yours, but without using the jinja2 from
webapp2_extras (I was creating a jinja_environment myself, like
jinja_environment = jinja2.Environment(loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))
). I solved it by ignoring that `os.path.dirname(__file__)` that is in the
docs and just putting
jinja_environment = jinja2.Environment(loader=jinja2.FileSystemLoader('templates'))
So I thought I could help, but I tried to solve your problem and nothing came of it.
But when you said
> A few others had a similar error, but their fixes were for older versions.
> There seems to be something about os.path
what did you mean? Are you using `os.path` in some part of the code? Because
os.path is problematic and `webapp2_extras.jinja2` doesn't use it; it just
creates the environment with 'templates' handwritten as the template
directory. Check [the source](http://code.google.com/p/webapp-
improved/source/browse/webapp2_extras/jinja2.py#120).
So, maybe there's a hint somewhere in this comment. Good luck.
|
Python: How to find n-gram patterns in the text?
Question: I have a string which could be arbitrarily long, say
s = 'Choose from millions of possibilities on Shaadi.com. Create your profile, search&contact; your special one.RegisterFree\xa0\xa0\xa0unsubscribing reply to this mail\xa0\n and 09times and this is limited time offer! and this is For free so you are saving cash'
I have a list of spam words which could be like
p_words = ['cash', 'for free', 'limited time offer']
All I want to know is whether these patterns exist in the input text, and how many
times.
It becomes simple when it has just one word
import re
p = re.compile(''.join[p_words]) # correct me if I am wrong here
m = p.match(s)
but it could be a `bi-gram, tri-gram or n-gram`
How do we approach this?
Answer:
p = re.compile('|'.join(re.escape(w) for w in p_words))
`p` will then match any of the strings in `p_words`.
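To also answer the "how many times" part, `findall` plus `collections.Counter` gives per-pattern counts; lowercasing the text first so that `For free` still matches:

```python
import re
from collections import Counter

p_words = ['cash', 'for free', 'limited time offer']
# escape each phrase so regex metacharacters in spam words can't break the pattern
p = re.compile('|'.join(re.escape(w) for w in p_words))

s = 'this is limited time offer! and this is For free so you are saving cash'
counts = Counter(p.findall(s.lower()))
print(counts)  # each spam phrase found once in this sample
```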
|
Doxygen and add a value of an attribute to the output documentation
Question: [ServiceStack](http://servicestack.net) marks REST paths for web services
using C# attributes.
For example
[RestService("/hello1")]
[RestService("/hello2")]
public class Hello
I would like to make Doxygen include the values of the RestService attribute in
the Doxygen output for the Hello class. I'm not concerned too much with pretty
formatting; it's fine if the full line with brackets is included in the output document.
Any suggestions?
A quick and dirty trick would be a preferable to writing a Doxygen extension
;)
Cheers
Tymek
====EDIT
The Python version (so will work on Windows easily) of _doxygen_ user's answer
would be:
    #!/usr/bin/env python
    import sys
    import re

    # compile the pattern once, outside the loop
    re1 = re.compile(r'\[RestService\("(.*)",.*"(.*)"\)]')

    if len(sys.argv) < 2:
        print "No input file"
    else:
        f = open(sys.argv[1])
        for line in f:
            sys.stdout.write(re1.sub(r"/** \\b RestService: \2 \1\\n */\n", line))
        f.close()
and the DOXYFILE would have:
INPUT_FILTER = "doxygenFilter.py"
Answer: You could make an input filter that converts a line with
[RestService("/hello1")]
to
/** \b RestService: "/hello1"\n */
like for instance by putting following piece of perl magic in a file called
`filter.pl`:
open(F, "<", $ARGV[0]);
while(<F>) { /^\s*\[RestService\((.*)\)\]\s*$/ ?
print "/** \\b RestService: $1\\n */\n" : print $_; }
and use that with the `INPUT_FILTER` tag in the Doxyfile:
INPUT_FILTER = "perl filter.pl"
|
Return at least X results from split
Question: [split](http://docs.python.org/library/stdtypes.html#str.split) has a maxsplit
parameter, which is useful when you want _at most_ X results. Is there
something similar to return _at least_ X results and populate the rest with
`None`s? I'd like to be able to write
a, b, c = 'foo,bar'.magic_split(',', 3)
and have `a=foo`, `b=bar` and `c=None`.
Any ideas how to write such a function?
Upd. I ended up with a solution which is a combination of
[this](http://stackoverflow.com/a/9294793/989121) and
[this](http://stackoverflow.com/a/9294073/989121) answers:
>>> def just(n, iterable, fill=None):
... return (list(iterable) + [fill] * n)[:n]
...
>>> just(3, 'foo,bar'.split(','))
['foo', 'bar', None]
Answer: One way would be:
from itertools import chain
from itertools import repeat
from itertools import islice
def magic_split(seq, sep, n, def_value=None):
return list(islice(chain(seq.split(sep), repeat(def_value)), n))
You could just return the return value of `islice` if you don't need the list.
If you don't want the values to be cut off when n is less than number of split
elements in seq, the modification is trivial:
def magic_split(seq, sep, n, def_value=None):
elems = seq.split(sep)
if len(elems) >= n:
return elems
return list(islice(chain(elems, repeat(def_value)), n))
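For reference, the first variant in action:

```python
from itertools import chain, islice, repeat

def magic_split(seq, sep, n, def_value=None):
    # pad the split results with def_value, then take exactly n of them
    return list(islice(chain(seq.split(sep), repeat(def_value)), n))

a, b, c = magic_split('foo,bar', ',', 3)
print(a, b, c)  # foo bar None
```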
|
Python: How do I code to make a button do something else if pressed a second time?
Question: I have this music program, so when I click a button it starts playing music.
When I press it again, I want it to stop the music, but instead it plays the
same beat on top of the first one. How do I get around doing this?
EDIT: Oops, silly me, forgot about the details. I use Python 2.7,
wxFormBuilder, and Easy Eclipse. Here's the code:
import gui
import wx
import wx.media
import pygame
import tkFileDialog
class MainFrame( gui.GUI_MainFrame):
    def __init__( self, parent ): # Defines the KunddatabasMainFrame
pygame.init()
        gui.GUI_MainFrame.__init__( self, parent ) # Initializes the MainFrame window
self.sound1=pygame.mixer.Sound('beat1.wav')
self.sound2=pygame.mixer.Sound('beat2.wav')
self.recording = False
def evtBrowse1(self, evt):
tkFileDialog.askopenfilename()
def evtSoOne(self, evt):
self.sound1.play(loops=-1)
Answer: It would be easier to help if you posted some code. But think about adding an
`AlreadyPlaying` variable and check it in your function using an `if`
statement.
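A minimal, pygame-free sketch of that idea: keep a boolean flag and flip it on every press, so the second click stops what the first one started.

```python
class PlayToggle(object):
    def __init__(self, start, stop):
        self.playing = False
        self._start, self._stop = start, stop

    def press(self):
        # first press starts, second stops, third starts again...
        if self.playing:
            self._stop()
        else:
            self._start()
        self.playing = not self.playing

# in the real handler, start/stop would be sound1.play and sound1.stop
events = []
t = PlayToggle(lambda: events.append('play'), lambda: events.append('stop'))
t.press(); t.press(); t.press()
print(events)  # ['play', 'stop', 'play']
```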
|
read conf file and make contents available to all files in package
Question:
myMain.py
/myPackage
file1.py
file2.py
confFile.cfg
myMain.py imports file1 and file2, etc., and it reads the conf file. What would
be the best way for the conf options to be visible in the whole package, i.e.
for file1 and file2 to be able to read the conf options? I do not want to pass
a lot of variables around in functions.
What I have right now is a confFile.py which i import into the files and read
the variables defined from it as they are required.
I am new to python and any suggestions are appreciated.
/Zaar
Answer: I think the best way is to just have a python module `config.py` or
`settings.py` and store those settings directly in those python files instead
of the `confFile.cfg`.
Then you could just get anything you want by:
import config
config.my_specific_option
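A self-contained sketch of the pattern -- here the `config` module is fabricated at runtime so the snippet runs on its own; in a real package you would simply create a `config.py` file with the same attribute:

```python
import sys
import types

# stand-in for a real config.py containing: my_specific_option = 42
config = types.ModuleType('config')
config.my_specific_option = 42
sys.modules['config'] = config

# any module in the package can now do:
import config
print(config.my_specific_option)  # 42
```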
|
ImportError: cannot import name reverse_lazy
Question: I'm very new to python and trying to run a piece of Django code on my system,
but I'm running into this problem.
$ python manage.py runserver
Running in development mode.
Traceback (most recent call last):
File "manage.py", line 11, in <module>
import settings
File "/Users/Kinnovate/Desktop/fsdjango/platformsite/settings.py", line 321, in <module>
from django.core.urlresolvers import reverse_lazy
ImportError: cannot import name reverse_lazy
I'm using python 2.7. How do I fix this?
Answer: `reverse_lazy` is newer than any released version of Django. Are you sure you
have a trunk version of Django?
|
Read numbers from formatted file in Python
Question: I have a file with k columns of numbers (the same number of elements in
each column). What is the fastest way to read it and save the numbers from each
column in a separate numpy.array?
Answer: Try using
[`genfromtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html).
This has the benefit of you being able to specify column names if you like, or
even read into a `recarray`.
I made a file 'tmp':
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
Then from numpy:
import numpy as np
data = np.genfromtxt('tmp')
#array([[ 1., 2., 3., 4., 5.],
# [ 6., 7., 8., 9., 10.],
# [ 11., 12., 13., 14., 15.]])
If you look at `help(np.genfromtxt)` you'll see there are various options like
specifying custom `dtype`s (so you can make a recarray if you want), setting
options for missing values, reading in column names, etc.
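In particular, `unpack=True` transposes the result so each column lands in its own 1-D array, which is exactly what the question asks for:

```python
import numpy as np

# recreate the 'tmp' file from above
with open('tmp', 'w') as f:
    f.write('1 2 3 4 5\n6 7 8 9 10\n11 12 13 14 15\n')

# unpack=True yields one array per column
c1, c2, c3, c4, c5 = np.genfromtxt('tmp', unpack=True)
print(c1)  # first column: 1, 6, 11
```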
|
QR Code decoder library for python
Question: I am trying to build an application in python that would encode and decode QR
codes. I am successful with the encoder, but I can't find any libraries (other than
[zbar](http://pypi.python.org/pypi/zbar)) for decoding in Python. I am using
Python 2.7 on a Windows 7 system.
I am not able to install [zbar](http://pypi.python.org/pypi/zbar) on my
system. I installed the dependency module required by the library, and even
then I end up with so many errors whenever I try to install it -- so many
syntax errors in zbar.h and in zbarmodule.c. I don't understand why and am
clueless about what the problem is.
I get the following errors while installing zbar
C:\Users\vijay>easy_install zbar
Searching for zbar
Reading http://pypi.python.org/simple/zbar/
Reading http://zbar.sourceforge.net
Best match: zbar 0.10
Downloading http://pypi.python.org/packages/source/z/zbar/zbar-0.10.zip#md5=9e99
ef2f6b471131120982a0dcacd64b
Processing zbar-0.10.zip
Running zbar-0.10\setup.py -q bdist_egg --dist-dir c:\users\vijay\appdata\local\
temp\easy_install-hv_kag\zbar-0.10\egg-dist-tmp-sxhz3s
zbarmodule.c
C:\Python27\include\zbar.h(685) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(687) : error C2085: 'zbar_processor_parse_config' : n
ot in formal parameter list
C:\Python27\include\zbar.h(687) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(761) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(763) : error C2085: 'zbar_processor_error_spew' : not
in formal parameter list
C:\Python27\include\zbar.h(763) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(768) : error C2143: syntax error : missing '{' before
'const'
C:\Python27\include\zbar.h(777) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(778) : error C2085: 'zbar_processor_get_error_code' :
not in formal parameter list
C:\Python27\include\zbar.h(778) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(882) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(884) : error C2085: 'zbar_video_error_spew' : not in
formal parameter list
C:\Python27\include\zbar.h(884) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(889) : error C2143: syntax error : missing '{' before
'const'
C:\Python27\include\zbar.h(897) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(898) : error C2085: 'zbar_video_get_error_code' : not
in formal parameter list
C:\Python27\include\zbar.h(898) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(968) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(970) : error C2085: 'zbar_window_error_spew' : not in
formal parameter list
C:\Python27\include\zbar.h(970) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(975) : error C2143: syntax error : missing '{' before
'const'
C:\Python27\include\zbar.h(984) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(985) : error C2085: 'zbar_window_get_error_code' : no
t in formal parameter list
C:\Python27\include\zbar.h(985) : error C2143: syntax error : missing ';' before
'{'
C:\Python27\include\zbar.h(1050) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(1052) : error C2085: 'zbar_image_scanner_parse_config
' : not in formal parameter list
C:\Python27\include\zbar.h(1052) : error C2143: syntax error : missing ';' befor
e '{'
C:\Python27\include\zbar.h(1141) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(1143) : error C2085: 'zbar_decoder_parse_config' : no
t in formal parameter list
C:\Python27\include\zbar.h(1143) : error C2143: syntax error : missing ';' befor
e '{'
C:\Python27\include\zbar.h(1276) : error C2054: expected '(' to follow 'inline'
C:\Python27\include\zbar.h(1278) : error C2085: 'zbar_scan_rgb24' : not in forma
l parameter list
C:\Python27\include\zbar.h(1278) : error C2143: syntax error : missing ';' befor
e '{'
zbarmodule.c(65) : error C2143: syntax error : missing ';' before 'type'
zbarmodule.c(66) : error C2065: 'major' : undeclared identifier
zbarmodule.c(66) : error C2065: 'minor' : undeclared identifier
zbarmodule.c(68) : error C2065: 'major' : undeclared identifier
zbarmodule.c(68) : error C2065: 'minor' : undeclared identifier
zbarmodule.c(133) : error C2275: 'zbar_error_t' : illegal use of this type as an
expression
C:\Python27\include\zbar.h(121) : see declaration of 'zbar_error_t'
zbarmodule.c(133) : error C2146: syntax error : missing ';' before identifier 'e
i'
zbarmodule.c(133) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(134) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(134) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(134) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(135) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(135) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(136) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(146) : error C2275: 'PyObject' : illegal use of this type as an exp
ression
c:\python27\include\object.h(108) : see declaration of 'PyObject'
zbarmodule.c(146) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(147) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(151) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(151) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(151) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(152) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(152) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(152) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(153) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(153) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(153) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(154) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(154) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(154) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(155) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(155) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(155) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(156) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(156) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(156) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(157) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(157) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(157) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(158) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(158) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(158) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(159) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(159) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(159) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(160) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(160) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(160) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(162) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(162) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(162) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(163) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(164) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(164) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(164) : warning C4024: 'PyModule_AddObject' : different types for fo
rmal and actual parameter 1
zbarmodule.c(164) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(164) : error C2065: 'ei' : undeclared identifier
zbarmodule.c(167) : error C2275: 'PyObject' : illegal use of this type as an exp
ression
c:\python27\include\object.h(108) : see declaration of 'PyObject'
zbarmodule.c(167) : error C2065: 'dict' : undeclared identifier
zbarmodule.c(167) : error C2065: 'mod' : undeclared identifier
zbarmodule.c(167) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(167) : warning C4024: 'PyModule_GetDict' : different types for form
al and actual parameter 1
zbarmodule.c(169) : error C2065: 'dict' : undeclared identifier
zbarmodule.c(169) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(169) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(171) : error C2065: 'dict' : undeclared identifier
zbarmodule.c(171) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(171) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(183) : error C2275: 'PyObject' : illegal use of this type as an exp
ression
c:\python27\include\object.h(108) : see declaration of 'PyObject'
zbarmodule.c(183) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(185) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(185) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(185) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(186) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(186) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(186) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(187) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(187) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(187) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(188) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(188) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(188) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(189) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(189) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(189) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(190) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(190) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(190) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(191) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(191) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(191) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(192) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(192) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(192) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(193) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(193) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(193) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(194) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(194) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(194) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(195) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(195) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(195) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(196) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(196) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(196) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
zbarmodule.c(197) : error C2065: 'tp_dict' : undeclared identifier
zbarmodule.c(197) : warning C4047: 'function' : 'PyObject *' differs in levels o
f indirection from 'int'
zbarmodule.c(197) : warning C4024: 'zbarEnumItem_New' : different types for form
al and actual parameter 1
error: Setup script exited with error: command '"C:\Program Files (x86)\Microsof
t Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2
Can anyone help me with zbar installation or get me a library with which I can
decode QR codes?
Answer: [pyqrcode](http://pyqrcode.sourceforge.net/) supports encoding and decoding QR
codes.
Regarding zbar, as others have commented, it is difficult to help you without
seeing the error messages.
Did you install zbar from the Windows binary packages or from source?
zbar has a prebuilt Windows binary package available
[here](http://sourceforge.net/projects/zbar/files/zbar/0.10/zbar-0.10-setup.exe/download);
it also has binaries of the Python module for Python 2.5 and 2.6 available
[here](http://pypi.python.org/pypi/zbar).
For installing zbar from source, see these [zbar Installation
Instructions](http://code.google.com/p/qrdecoder/wiki/zbarInstallation).
A summary of the steps you need to take based on the above link to install the
zbar Python module from source is shown below.
1. Install zbar (preferably from the binary [here](http://sourceforge.net/projects/zbar/files/zbar/0.10/zbar-0.10-setup.exe/download))
2. Install [MinGW](http://www.mingw.org/)
3. Add the Zbar\bin and MinGw\bin (binary installation directories) to your Windows Path Variable
4. Download the Zbar Python module source from [here](http://pypi.python.org/packages/source/z/zbar/zbar-0.10.zip#md5=9e99ef2f6b471131120982a0dcacd64b) and unzip it to a temporary folder
5. Modify the setup.py script to use custom zbar include and library path.
Add `from distutils.sysconfig import get_config_vars` to line 3 and add the
following parameters to the Extension call:
library_dirs=["""zbarlibdirectory"""],
include_dirs=[get_config_vars('INCLUDEDIR'),
get_config_vars('INCLUDEPY'),
"""zbarincludedirectory"""]
where zbarlibdirectory is something like `C:\zbar\lib` and
zbarincludedirectory is something like `C:\zbar\include`
6. Install zbar Python module using modified setup.py
`python setup.py build --compiler=mingw32`
`python setup.py install`
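Putting step 5 together, the patched `setup.py` would look roughly like the sketch below. The `C:\zbar` paths are placeholders for your actual install locations, and the real `setup.py` lists more source files than shown here:

```python
from distutils.core import setup, Extension
from distutils.sysconfig import get_config_vars  # the import added in step 5

zbar = Extension(
    "zbar",
    sources=["zbarmodule.c"],  # simplified; the real setup.py lists several .c files
    library_dirs=[r"C:\zbar\lib"],  # zbarlibdirectory
    # get_config_vars called with arguments returns a list of values, so the
    # Python header directories concatenate cleanly with the zbar headers:
    include_dirs=get_config_vars("INCLUDEDIR", "INCLUDEPY") + [r"C:\zbar\include"],
)

# setup(name="zbar", version="0.10", ext_modules=[zbar])  # rest of the script unchanged
```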
As for installing pyqrcode on Windows 7, I haven't done so myself, but I
believe you just follow the source instructions under the relevant heading:
first install all the dependencies, then run make and make install, using
either nmake or make from MinGW.
|
Django Testing: no data in temporary database file
Question: I'm using a sqlite3 database set up as follows in `settings.py`:
DATABASES = {
'default': {
'ENGINE': 'django.contrib.gis.db.backends.spatialite',
'NAME': 'path/to/config.sqlite',
'TEST_NAME': 'path/to/test-config.sqlite',
# ... USER, PASSWORD and PORT left out for brevity
}
}
During a test run started with:
python manage.py test myapp.mytest
this temporarily creates a database file `path/to/test-config.sqlite` which I
need in another application loaded with the required fixtures.
The database file however is empty, which I asserted during a pause in one
test:
sqlite> select * from someapp_somemodel;
... no results here :(
For other test cases, which do not require a sqlite file and for which the
in-memory database suffices, no errors occur.
My questions:
* Why doesn't django flush its data to the database file if it creates it anyway? and
* How can I convince django to do it, as I require the data to be dumped in the temporary database file?
**EDIT**
I'm using Django 1.3.1, if that's of any interest.
**EDIT2**
I'm familiar with fixtures, and I use them to populate the database, but my
problem is that the data from the fixtures is not written to the database file
during the test. Sorry if I wasn't clear enough on that fact.
**EDIT3**
As my question needs some clarification, please consider the following test
setup (which is close to what I'm actually doing):
class SomeTestCase(django.test.TestCase):
fixtures = ["some_fixture.json", "some_other_fixture.json"]
def testSomething(self):
import pdb; pdb.set_trace()
When the `testSomething` method hits the breakpoint, I start up the `sqlite3`
program and connect to the temporary database file created by Django. The
fixtures are loaded (which I know because other tests work as well), but the
data is not written to the temporary database file.
Answer: Have you run the initial syncdb to create the database tables?
(**python yourproject/manage.py syncdb**)
In your settings.py, which apps do you have installed under INSTALLED_APPS?
In your project, which models have you built?
The apps you have installed in INSTALLED_APPS and the custom models you have
added to your project dictate which tables syncdb will create.
|
Python runs file from specific folder but can't run from others
Question: I have a VBScript which I want to run with Python 2.7. When I place
the VBS on my desktop it runs successfully, but when I copy the same file to
another folder and try to run it from there, it doesn't run! This is driving
me crazy; I can't understand the logic. Can someone please help me?
I have 32-bit Python installed on my 64-bit Windows 7 because I need some
Python modules that are only available for the 32-bit version; my VBS is also
based on a 32-bit DLL.
Runs correctly:
import os
os.system("C:\Users\OFFICE2\Desktop\TSATResultSaver.vbs")
Doesn't run:
import os
os.system("C:\My Modules\TSATResultSaver.vbs")
Answer: Make sure the permissions on those two folders are the same. Also, I
would try a folder without a space in the path (there are no spaces in the
first path, but "My Modules" contains a space).
|
Rails & NLTK deploying to Heroku
Question: I may be wrong, but I have a Rails app which uses a Python script
with some NLTK dependencies (it was easier to achieve what I wanted with it).
It works fine locally (the Python script runs in the background), of course,
but not everything is as smooth when I try to deploy it.
Is there any way I can deploy NLTK to Heroku?
I know I can put it in vendor/plugins or something like that, but that will
hardly make
import nltk
in my Python script recognize it.
I hope the question is clear enough.
Answer: You should be able to do this, though it will take some customization. You
could create a buildpack which both installs your ruby dependencies as well as
your python dependencies. It could become some composite of:
<https://github.com/heroku/heroku-buildpack-ruby/>
<https://github.com/heroku/heroku-buildpack-python/>
Once it installs the dependencies you just need to ensure your PATH is set to
load the appropriate locations.
|
Search a list of list of strings for a list of strings in python efficiently
Question: I have a list of lists of strings and a list of strings. For example:
L1=[["cat","dog","apple"],["orange","green","red"]]
L2=["cat","red"]
If L1[i] contains any item from L2, I need to output the pairs (for creating
edges in a graph); in my example, I need the pairs
`("cat","dog"), ("cat","apple"), ("red","orange"), ("red","green")`
What approach should I use to make this most efficient? (My list L1 is huge.)
Answer: Supposing that you might have, more than one "control" item in your L1
sublists.
I'd do it using
[`set()`](http://docs.python.org/py3k/library/stdtypes.html#set-types-set-
frozenset) and
[`itertools.product()`](http://docs.python.org/py3k/library/itertools.html#itertools.product):
from itertools import product
def generate_edges(iterable, control):
edges = []
control_set = set(control)
for e in iterable:
e_set = set(e)
common = e_set & control_set
to_pair = e_set - common
edges.extend(product(to_pair, common))
return edges
Example:
>>> L1 = [["cat","dog","apple"],
... ["orange","green","red"],
... ["hand","cat","red"]]
>>> L2 = ["cat","red"]
>>> generate_edges(L1, L2)
[('apple', 'cat'),
('dog', 'cat'),
('orange', 'red'),
('green', 'red'),
('hand', 'red'),
('hand', 'cat')]
|
Python Logging module: custom loggers
Question: I was trying to create a custom attribute for logging (caller's class name,
module name, etc.) and got stuck with a strange exception telling me that the
LogRecord instance created in the process did not have the necessary
attributes. After a bit of testing I ended up with this:
import logging
class MyLogger(logging.getLoggerClass()):
value = None
logging.setLoggerClass(MyLogger)
loggers = [
logging.getLogger(),
logging.getLogger(""),
logging.getLogger("Name")
]
for logger in loggers:
print(isinstance(logger, MyLogger), hasattr(logger, "value"))
This seemingly correct piece of code yields:
False False
False False
True True
Bug or feature?
Answer: Looking at the source code we can see the following:
root = RootLogger(WARNING)
def getLogger(name=None):
if name:
return Logger.manager.getLogger(name)
else:
return root
That is, a root logger is created by default when the module is imported.
Hence, every time you ask for the root logger (by passing a falsy value such
as the empty string), you're going to get a `logging.RootLogger` object
regardless of any call to `logging.setLoggerClass`.
Regarding the logger class being used we can see:
_loggerClass = None
def setLoggerClass(klass):
...
_loggerClass = klass
This means that a global variable holds the logger class to be used in the
future.
In addition to this, looking at `logging.Manager` (used by
`logging.getLogger`), we can see this:
def getLogger(self, name):
...
rv = (self.loggerClass or _loggerClass)(name)
That is, if `self.loggerClass` isn't set (which won't be unless you've
explicitly set it), the class from the global variable is used.
Hence, it's a feature. The root logger is always a `logging.RootLogger` object
and the other logger objects are created based on the configuration at that
time.
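The upshot can be demonstrated directly: `setLoggerClass` only affects loggers created after the call, and any falsy name hands back the pre-built root (the logger name below is illustrative):

```python
import logging

class MyLogger(logging.Logger):
    value = None

logging.setLoggerClass(MyLogger)  # affects only loggers created from now on

named = logging.getLogger("SomeName")  # freshly created -> uses MyLogger
root = logging.getLogger()             # module-level RootLogger from import time
root_too = logging.getLogger("")       # empty string is falsy -> same root object

print(isinstance(named, MyLogger))  # True
print(isinstance(root, MyLogger))   # False
print(root is root_too)             # True
```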
|
Python: ImportError with package
Question: I develop in Eclipse using the PyDev plugin. When I run the project
in Eclipse everything works fine, but when I try to run it from the command
line I get an ImportError. I have a directory structure like this:
TGRParser
|----tgr
|--graph
|--main
| |-- main.py
| |-- __init__.py
|--parser
|--__init__.py
|--parserClass.py
Now I have a class Main in the module main (main.py) in the package main
(TGRParser/tgr/main). In class Main I try to call
from tgr.parser.parserClass import Parser
It works fine from within Eclipse but doesn't work at all from the command
line. I checked sys.path; it is the same in the command line and in Eclipse.
It says:
File "main.py", line 6, in <module>
from tgr.parser.parserClass import Parser
ImportError: No module named tgr.parser.parserClass
Answer: Add the `TGRParser` directory to your `PYTHONPATH` environment variable.
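Alternatively, the path can be fixed inside main.py itself; the project root below is a placeholder for wherever TGRParser actually lives:

```python
import sys

# Hypothetical absolute path of the directory that contains the 'tgr' package.
PROJECT_ROOT = "/path/to/TGRParser"

# Prepending it makes 'from tgr.parser.parserClass import Parser' resolvable
# no matter which directory the script is launched from.
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)
```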
|
markdown to html using a specified css
Question: First, let me say - I love markdown. Truly love it. It's simple, it's elegant,
it's sexy, it's everything I wish for in a markup language. If I could, I'd
propose to it :)
So far I've been using it in a very nice and simple way, Vim + [python-
markdown](http://www.freewisdom.org/projects/python-markdown/) = fast preview
in my browser of choice.
But, it has one drawback ... the css sheet is hardcoded somewhere inside the
plugin, and I can't change it. _Note: I know zero python, or something very
close to it._
**Is there a markdown to -various formats- plugin that lets you specify a css
page to use, so that I could have several and create several versions of the
same document using the one I wish at that time?**
It would go something like
markdown my-document-in.markdown css-sheet.css cool-looking-document.html
Answer: Using <https://github.com/trentm/python-markdown2/> (specifically
<https://raw.github.com/trentm/python-markdown2/master/lib/markdown2.py>), I
wrote a small script which when called as `generator.py input.markdown
styles.css pretty.html` (assuming you saved it as generator.py) uses the
python-markdown2 library to convert the markdown to HTML, embeds the css file
in the top and writes it to pretty.html.
import markdown2
import os, sys
output = """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<style type="text/css">
"""
cssin = open(sys.argv[2])
output += cssin.read()
output += """
</style>
</head>
<body>
"""
mkin = open(sys.argv[1])
output += markdown2.markdown(mkin.read())
output += """</body>
</html>
"""
outfile = open(sys.argv[3], "w")
outfile.write(output)
outfile.close()
Copy the linked file from github and the code above into a folder together and
it should run fine. I've tested it locally and it works. Hope it can help you
too.
|
Unable to append to clipboard
Question: Whenever I try the following in my Python interpreter, I am able to
paste the word hello at the command line, even after I close the interpreter:
from Tkinter import Tk
r = Tk()
r.clipboard_append(" hello ")
However if I put this in a file called test.py and then try
python test.py
This will not work; I can't append anything to the system clipboard.
Does anyone know why not, or what difference between running it in a script
and in the interpreter would cause this?
Answer: Apparently it won't work until Tkinter enters its mainloop. This works
on my system:
from Tkinter import *
r = Tk()
r.clipboard_append("hello")
r.mainloop()
|
Set encoding in Python 3 CGI scripts
Question: When writing a **Python 3.1** CGI script, I run into horrible
UnicodeDecodeErrors. However, when running the script on the command line,
everything works.
It seems that `open()` and `print()` use the return value of
`locale.getpreferredencoding()` to know what encoding to use by default. When
running on the command line, that value is 'UTF-8', as it should be. But when
running the script through a browser, the encoding mysteriously gets redefined
to 'ANSI_X3.4-1968', which appears to be a just a fancy name for plain ASCII.
I now need to know how to make the cgi script run with 'utf-8' as the default
encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux.
The system-wide locale is en_GB.utf-8.
Answer: Answering this for late-comers because I don't think that the posted answers
get to the root of the problem, which is the lack of locale environment
variables in a CGI context. I'm using Python 3.2.
1. open() opens file objects in text (string) or binary (bytes) mode for reading and/or writing; in text mode the encoding used to encode strings written to the file, and decode bytes read from the file, may be specified in the call; if it isn't then it is determined by locale.getpreferredencoding(), which on linux uses the encoding from your locale environment settings, which is normally utf-8 (from e.g. LANG=en_US.UTF-8)
>>> f = open('foo', 'w') # open file for writing in text mode
>>> f.encoding
'UTF-8' # encoding is from the environment
>>> f.write('€') # write a Unicode string
1
>>> f.close()
>>> exit()
user@host:~$ hd foo
00000000 e2 82 ac |...| # data is UTF-8 encoded
2. sys.stdout is in fact a file opened for writing in text mode with an encoding based on locale.getpreferredencoding(); you can write strings to it just fine and they'll be encoded to bytes based on sys.stdout's encoding; print() by default writes to sys.stdout - print() itself has no encoding, rather it's the file it writes to that has an encoding;
>>> sys.stdout.encoding
'UTF-8' # encoding is from the environment
>>> exit()
user@host:~$ python3 -c 'print("€")' > foo
user@host:~$ hd foo
00000000 e2 82 ac 0a |....| # data is UTF-8 encoded; \n is from print()
; you cannot write bytes to sys.stdout - use sys.stdout.buffer.write() for
that; if you try to write bytes to sys.stdout using sys.stdout.write() then it
will return an error, and if you try using print() then print() will simply
turn the bytes object into a string object and an escape sequence like `\xff`
will be treated as the four characters \, x, f, f
user@host:~$ python3 -c 'print(b"\xe2\xf82\xac")' > foo
user@host:~$ hd foo
00000000 62 27 5c 78 65 32 5c 78 66 38 32 5c 78 61 63 27 |b'\xe2\xf82\xac'|
00000010 0a |.|
3. in a CGI script you need to write to sys.stdout and you can use print() to do it; but a CGI script process in Apache has no locale environment settings - they are not part of the CGI specification; therefore the sys.stdout encoding defaults to ANSI_X3.4-1968 - in other words, ASCII; if you try to print() a string that contain non-ASCII characters to sys.stdout you'll get "UnicodeEncodeError: 'ascii' codec can't encode character...: ordinal not in range(128)"
4. a simple solution is to pass the Apache process's LANG environment variable through to the CGI script using Apache's mod_env PassEnv command in the server or virtual host configuration: PassEnv LANG; on Debian/Ubuntu make sure that in /etc/apache2/envars you have uncommented the line ". /etc/default/locale" so that Apache runs with the system default locale and not the C (Posix) locale (which is also ASCII encoding); the following CGI script should run without errors in Python 3.2:
#!/usr/bin/env python3
import sys
print('Content-Type: text/html; charset=utf-8')
print()
print('<html><body><pre>' + sys.stdout.encoding + '</pre>h€lló wörld<body></html>')
|
ImportError: cannot import name get_hexdigest
Question: I'm running into this problem as I try to run a piece of Django code
on my OS X 10.7, Python 2.7, Django 1.4 system. How do I obtain get_hexdigest?
Do I download it from somewhere?
Kinnovates-MacBook-Pro:platformsite Kinnovate$ sudo python manage.py runserver
Running in development mode.
Running in development mode.
Running in development mode.
Running in development mode.
Validating models...
HACKUING USER MODEL
Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x1016bc050>>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 91, in inner_run
self.validate(display_num_errors=True)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 266, in validate
num_errors = get_validation_errors(s, app)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/validation.py", line 30, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/loading.py", line 158, in get_app_errors
self._populate()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/loading.py", line 67, in _populate
self.load_app(app_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/loading.py", line 88, in load_app
models = import_module('.models', app_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django_sha2-0.4-py2.7.egg/django_sha2/models.py", line 2, in <module>
from django_sha2 import auth
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django_sha2-0.4-py2.7.egg/django_sha2/auth.py", line 96, in <module>
monkeypatch()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django_sha2-0.4-py2.7.egg/django_sha2/auth.py", line 42, in monkeypatch
from django_sha2 import bcrypt_auth
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django_sha2-0.4-py2.7.egg/django_sha2/bcrypt_auth.py", line 10, in <module>
from django.contrib.auth.models import get_hexdigest
ImportError: cannot import name get_hexdigest
Answer: You are using the development version of Django (1.4), and there is no
`get_hexdigest` function in the corresponding module anymore.
Solutions:
* use version 1.3 (the latest stable release at the moment)
* implement `get_hexdigest` yourself (it can be copy-pasted from [here](https://github.com/django/django/blob/1.3.1/django/contrib/auth/models.py#L18))
* use another tool (one that doesn't have compatibility problems) to solve your task
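If you go the reimplementation route, a minimal sketch of the helper removed in Django 1.4, covering only the `md5` and `sha1` branches (the 1.3 original also handles `crypt` and coerces its inputs to byte strings):

```python
import hashlib

def get_hexdigest(algorithm, salt, raw_password):
    """Minimal stand-in for django.contrib.auth.models.get_hexdigest.

    Only the md5 and sha1 branches are sketched here; Django 1.3's
    original also supports 'crypt'.
    """
    data = (salt + raw_password).encode('utf-8')
    if algorithm == 'md5':
        return hashlib.md5(data).hexdigest()
    elif algorithm == 'sha1':
        return hashlib.sha1(data).hexdigest()
    raise ValueError("Got unknown password algorithm type in password.")
```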
|
Using jcifs in jython in order to access site using NTLM security
Question: For a while now I have been trying to find a way for Jython to access a site
using NTLM. I have just basic knowledge of Python and next to none in Java, so
I could use some help (or an example) with making the request use NTLM in this
script part I have found. I am using this with the open source application
Grinder.
First I start by importing jcifs in the script, along with other modules used
by Grinder:
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPPluginControl, HTTPRequest
from HTTPClient import NVPair
from jcifs.ntlmssp import Type1Message
from jcifs.ntlmssp import Type2Message, Type3Message
from jcifs.util import Base64
This code part was provided in an example I found. It was the closest thing I
could find that fits my needs, since I just need to get the full response to
the request.
def NTLMAuthentication1(url, request, info, NTLMfield):
    token_type1 = info.token_type1()
    params = (NVPair("Authorization", "NTLM " + token_type1), )
    result = request.GET(url, None, params)
    NTLMfield = result.getHeader("WWW-Authenticate")
    return NTLMAuthentication2(url, request, info, NTLMfield)
def NTLMAuthentication2(url, request, info, NTLMfield):
    if NTLMfield.startswith("Negotiate"):
        token_type2 = NTLMfield[len("Negotiate "):]
    else:
        token_type2 = NTLMfield[5:]
    token_type3 = info.token_type3(token_type2)
    params = (NVPair("Cookie", "WSS_KeepSessionAuthenticated=80"),
              NVPair("Authorization", "NTLM " + token_type3), )
    result = request.GET(url, None, params)
    return result
# this function validates the request and its result to see if NTLM authentication is required
def NTLMAuthentication(lastResult, request, info):
    # get the last http request's url
    url = lastResult.getEffectiveURI().toString()[len(request.getUrl()):]
    # the result asks for authentication
    if lastResult.statusCode != 401 and lastResult.statusCode != 407:
        return lastResult
    NTLMfield = lastResult.getHeader("WWW-Authenticate")
    if NTLMfield == None:
        return lastResult
    # check whether it is the first handshake
    if NTLMfield == "Negotiate, NTLM" or NTLMfield == "NTLM":
        return NTLMAuthentication1(url, request, info, NTLMfield)
    # check whether it is the second handshake
    elif len(NTLMfield) > 4 and NTLMfield[:4] == "NTLM":
        return NTLMAuthentication2(url, request, info, NTLMfield)
    else:
        return lastResult
class NTLMAuthenticationInfo:
    def __init__(self, domain, host, user, passwd):
        # store the constructor arguments (the original assigned the
        # string literals 'domain', 'host', etc., which is a bug)
        self.domain = domain
        self.host = host
        self.user = user
        self.passwd = passwd

    def token_type1(self):
        msg = Type1Message(Type1Message.getDefaultFlags(), self.domain, self.host)
        return Base64.encode(msg.toByteArray())

    def token_type3(self, token_type2):
        msg2 = Type2Message(Base64.decode(token_type2))
        # if jcifs 1.3.7: msg3 = Type3Message(msg2, self.passwd, self.domain, self.user, self.host)
        msg3 = Type3Message(msg2, self.passwd, self.domain, self.user, self.host)
        return Base64.encode(msg3.toByteArray())
In the main part the request looks something like this:
result = request101.GET('/')
where request101 has been predefined with a URL and headers. So, basically, I
don't have a clue how to call the functions above.
I have tried this
result = request101.GET('/')
print str(NTLMAuthentication(result, request101, NTLMAuthenticationInfo))
as well as just this
NTLMAuthentication(request101.GET('/'), request101, NTLMAuthenticationInfo)
but neither of these worked. Any tips on how to run this?
Answer: Try this, creating an instance of `NTLMAuthenticationInfo` first (you
were passing the class itself):
ai = NTLMAuthenticationInfo("domain", "your host", "user", "password")
result = request101.GET('/')
result = NTLMAuthentication(result, request101, ai)
|
Removing blocks of text with python
Question: I'm trying to remove large blocks of text from a file using python. Each block
of text begins with
/translation="SOMETEXT"
and ends with the second quote.
Can anyone give me some advice on how to accomplish this?
Thank you
Answer: You can use re.sub like this:
import re
s = re.sub(r'/translation=".*?"', '', s)
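One caveat: by default `.` does not match newlines, so if the quoted block can span several lines, pass `re.DOTALL`. A quick sketch with made-up input:

```python
import re

record = '''     /gene="abc"
     /translation="MSTNPKPQRKTKRNTNRR
     PQDVKFPGG"
     /note="keep me"
'''

# Non-greedy .*? stops at the first closing quote; re.DOTALL lets it
# cross the line breaks inside the quoted block.
cleaned = re.sub(r'/translation=".*?"', '', record, flags=re.DOTALL)
print(cleaned)
```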
|
Testing Memcached connection
Question: I want to run a basic service which shows the status of various other services
on the system (i.e. Mongo, Redis, Memcached, etc).
For Memcached, I thought that I might do something like this:
from django.core.cache import cache
host, name = cache._cache._get_server('test')
This seems to return a host and an arbitrary string. Does the host object
confirm that I'm connecting to Memcached successfully?
I know that the returned host object has a `connect()` method. I'm slightly
scared to open a new connection in production environment and I don't have an
easy Dev setup to test that method. I assume that it's in one of the Python
Memcached libraries, but I'm not sure which one is relevant here.
Can I just use the `_get_server` method to test Memcached connection success,
or should I use the connect method?
Answer: There are various things you could monitor, like memcache process up, memcache
logs moving etc. that don't require network connectivity. Next level of test
would be to see that you can open a socket at the memcache port. But the real
test is of course to set and get a value from memcache. For that kind of
testing I would probably just use the python-memcached package directly to
make the connection and set and get values.
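For the socket-level check, a sketch that needs only the standard library (host and port defaults are assumptions; memcached's text protocol answers the `version` command with a line starting with `VERSION`):

```python
import socket

def memcached_alive(host='127.0.0.1', port=11211, timeout=2.0):
    """Return True if a memcached server answers the 'version' command."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except (socket.error, OSError):
        # nothing listening, refused, or timed out
        return False
    try:
        sock.sendall(b'version\r\n')
        return sock.recv(64).startswith(b'VERSION')
    except (socket.error, OSError):
        return False
    finally:
        sock.close()
```

A set/get round-trip through python-memcached remains the most conclusive test; this probe only confirms the daemon is up and answering.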
|
selenium python stuck on this line
Question: Why does my code:
self.driver.find_element_by_xpath("//*[text()[contains(.,'%s')]]" % ('Sorry'))
get stuck and won't pass this line? Even if I do something like:
driver.implicitly_wait(30)
self.driver.find_element_by_xpath("//*[text()[contains(.,'%s')]]" % ('Sorry'))
Complete code:
# gets stuck here
if self.is_text_present('Hello'):
    print 'Hello'
# rest of code

def is_text_present(self, text):
    try:
        self.driver.find_element_by_xpath('//*[contains(text(), "%s")]' % (text))
    except NoSuchElementException, e:
        return False
    return True
Answer: Your XPath can be simplified to
"//*[contains(text(),'%s')]" % ('Sorry')
Perhaps try something like:
import contextlib
import selenium.webdriver as webdriver
import selenium.webdriver.support.ui as ui

with contextlib.closing(webdriver.Firefox()) as driver:
    ...
    # Set up a WebDriverWait instance that will poll for up to 10 seconds
    wait = ui.WebDriverWait(driver, 10)
    # wait.until returns the value of the callback
    elt = wait.until(
        lambda driver: driver.find_element_by_xpath(
            "//*[contains(text(),'%s')]" % ('Sorry')
        ))
From [the docs](http://seleniumhq.org/docs/04_webdriver_advanced.html):
> This waits up to 10 seconds before throwing a TimeoutException or if it
> finds the element will return it in 0 - 10 seconds.
* * *
To debug the problem you might try saving the HTML source to a file right
before calling `find_element_by_xpath` so you can see what the driver is
seeing. Is the XPath valid for that HTML?
def is_text_present(self, text):
    with open('/tmp/debug.html', 'w') as f:
        time.sleep(5)  # crude, but perhaps effective enough for debugging here.
        f.write(driver.page_source)
    try:
        self.driver.find_element_by_xpath('//*[contains(text(), "%s")]' % (text))
    except NoSuchElementException, e:
        return False
    return True
|
Can't import pty module even though it's installed
Question: I have Python 2.7 installed on OpenSUSE. I'm using the `pty` module to spawn
some ptys:
import pty
But Python can't seem to find it.
ImportError: No module named pty
Running `help('modules')` in the interpreter shows that `pty` is installed.
Answer: Reposting as an answer:
The problem is likely that your IDE is not setting up `sys.path` correctly.
Find where `pty` is imported from, and make sure that's on your PYTHONPATH in
your IDE's settings.
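To compare the two environments, run this in the terminal interpreter where the import succeeds, then print the same two values from inside the IDE; a differing `sys.path` is the usual culprit:

```python
import sys
import pty

# Directory the module was actually loaded from,
# e.g. /usr/lib/python2.7/pty.py (yours will differ)
print(pty.__file__)

# The search path the interpreter used for the import
print(sys.path)
```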
|
Force feedparser to sanitize on all content types
Question: For a project, I want to use
[feedparser](http://code.google.com/p/feedparser/). Basically I got it working.
The documentation [section about
sanitization](http://packages.python.org/feedparser/html-sanitization.html)
describes that not all content types are sanitized. How can I force
feedparser to do this for all content types?
Answer: I think the feedparser doc page you referenced gives good advice:
*It is recommended that you check the content type in e.g. entries[i].summary_detail.type. If it is text/plain then it has not been sanitized (and you should perform HTML escaping before rendering the content).*
import cgi
import feedparser
d = feedparser.parse('http://rss.slashdot.org/Slashdot/slashdot')
# iterate through entries; if the type is not text/html, HTML-escape it
for entry in d.entries:
    if entry.summary_detail.type != 'text/html':
        print cgi.escape(entry.summary)
    else:
        print entry.summary
Of course, there are dozens of ways you can iterate through the entries
depending on what you want to do with them once they are clean.
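The escaping decision itself is plain stdlib and can be checked without fetching a feed. A sketch (the function name and the dict-free signature are made up for illustration; `cgi.escape` was removed in Python 3.8, hence the fallback import):

```python
try:
    from html import escape  # Python 3
except ImportError:
    from cgi import escape   # Python 2

def clean_summary(summary, mime_type):
    """Escape the summary unless feedparser already sanitized it as text/html."""
    if mime_type != 'text/html':
        return escape(summary)
    return summary
```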
|
what's the difference between `groups` and `group` in Python's `re` module
Question: Here it is:
>>> import re
>>> s = 'abc -j k -l m'
>>> m = re.search('-\w+ \w+', s)
>>> m.groups()
()
>>> m.group(0)
'-j k'
Why does `groups()` give me nothing, while `group(0)` yields something? What is
the difference?
**Follow Up**
Code is as follows
>>>re.findall('(-\w+ \w+)', s)
['-j k', '-l m', '-n o']
`findall` can get me all the `-\w+ \w+` substrings, but look at this:
>>>m = re.search('(-\w+ \w+)+', s)
>>>m.groups()
('-j k',)
Why can't `search` give me all the substrings?
**Follow Up Again**
If `s = 'abc -j k -l m -k o`, and
>>>m = re.search(r'(-\w+ \w+ )+', s)
>>>m.groups()
('-l m ',) # why just one substring?
>>>m.group(0)
'-j k -l m ' # as I expected
Answer: `groups()` only returns the explicitly-captured groups in your regex (denoted
by round brackets `( )` in your regex), whereas `group(0)` returns the
entire substring matched by your regex, regardless of whether your
expression has any capture groups.
The first explicit capture in your regex is accessed with `group(1)` instead.
Re follow-up edit:
> Why can't `search` give me all the substrings?
`search()` only returns the first match of the pattern in your input string;
use `findall()` to get all of them. Note also that when a capture group is
repeated, as in `(-\w+ \w+ )+`, the group retains only the text of the *last*
repetition, which is why `m.groups()` shows a single substring even though
`m.group(0)` spans all of them.
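These behaviors can be checked with plain assertions:

```python
import re

s = 'abc -j k -l m'

m = re.search(r'-\w+ \w+', s)          # no capture groups in the pattern
assert m.group(0) == '-j k'            # whole text matched
assert m.groups() == ()                # nothing was explicitly captured

# findall returns every non-overlapping match:
assert re.findall(r'-\w+ \w+', s) == ['-j k', '-l m']

# A repeated capture group keeps only its LAST repetition:
m = re.search(r'(-\w+ \w+ )+', 'abc -j k -l m -k o')
assert m.group(0) == '-j k -l m '      # full span of the repeated match
assert m.groups() == ('-l m ',)        # only the final repetition is kept
```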
|