python django soaplib response with classmodel issue Question: I run a soap server in django. Is it possible to create a soap method that returns a soaplib classmodel instance without <{method name}Response><{method name}Result> tags? For example, here is a part of my soap server code: # -*- coding: cp1254 -*- from soaplib.core.service import rpc, DefinitionBase, soap from soaplib.core.model.primitive import String, Integer, Boolean from soaplib.core.model.clazz import Array, ClassModel from soaplib.core import Application from soaplib.core.server.wsgi import Application as WSGIApplication from soaplib.core.model.binary import Attachment class documentResponse(ClassModel): __namespace__ = "" msg = String hash = String class MyService(DefinitionBase): __service_interface__ = "MyService" __port_types__ = ["MyServicePortType"] @soap(String, Attachment, String ,_returns=documentResponse,_faults=(MyServiceFaultMessage,) , _port_type="MyServicePortType" ) def sendDocument(self, fileName, binaryData, hash ): binaryData.file_name = fileName binaryData.save_to_file() resp = documentResponse() resp.msg = "Saved" resp.hash = hash return resp and it responses like that: <senv:Body> <tns:sendDocumentResponse> <tns:sendDocumentResult> <hash>14a95636ddcf022fa2593c69af1a02f6</hash> <msg>Saved</msg> </tns:sendDocumentResult> </tns:sendDocumentResponse> </senv:Body> But i need a response like this: <senv:Body> <ns3:documentResponse> <hash>A694EFB083E81568A66B96FC90EEBACE</hash> <msg>Saved</msg> </ns3:documentResponse> </senv:Body> What kind of configurations should i make in order to get that second response i mentioned above ? Thanks in advance. Answer: I haven't used Python's SoapLib yet, but had the same problem while using .NET soap libs. Just for reference, in .NET this is done using the following decorator: [SoapDocumentMethod(ParameterStyle=SoapParameterStyle.Bare)] I've looked in the soaplib source, but it seems it doesn't have a similar decorator. The closest thing I've found is the `_style` property. As seen from the code <https://github.com/soaplib/soaplib/blob/master/src/soaplib/core/service.py#L124> \- when using @soap(..., _style='document') it doesn't append the `%sResult` tag, but I haven't tested this. Just try it and see if this works in the way you want it. If it doesn't work, but you still want to get this kind of response, look at Spyne: <http://spyne.io/docs/2.10/reference/decorator.html> It is a fork from soaplib(I think) and has the `_soap_body_style='bare'` decorator, which I believe is what you want.
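For illustration, here is a rough, untested sketch of what the asker's own method would look like with that `_style='document'` flag added; everything else is taken verbatim from the question, and whether soaplib actually drops the `Result` wrapper this way is exactly what the answer suggests trying.

```
@soap(String, Attachment, String,
      _returns=documentResponse,
      _faults=(MyServiceFaultMessage,),
      _port_type="MyServicePortType",
      _style='document')  # untested: the flag the answer points to in service.py
def sendDocument(self, fileName, binaryData, hash):
    binaryData.file_name = fileName
    binaryData.save_to_file()
    resp = documentResponse()
    resp.msg = "Saved"
    resp.hash = hash
    return resp
```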
regex does not work if I read a string from a file Question: I have a file named `foo` with the following text <ca> -----BEGIN CERTIFICATE----- MIIB6DCCAVECBCMBFpQwDQYJKoZIhvcNAQEFBQAwOzEPMA0GA1UEAxMGbGZ0Lmpw MRswGQYDVQQKExJ1N2FoMzZpN24wYSBsejFpZzUxCzAJBgNVBAYTAlVTMB4XDTEz MTIwNzE5MjkxNVoXDTIxMDMyMTE5MjkxNVowOzEPMA0GA1UEAxMGbGZ0LmpwMRsw GQYDVQQKExJ1N2FoMzZpN24wYSBsejFpZzUxCzAJBgNVBAYTAlVTMIGfMA0GCSqG SIb3DQEBAQUAA4GNADCBiQKBgQDKEcE9hTtJk/XmOpISG33ADHGpS+fzxjun7N3/ Nqj43JC9EIHazLE2UKVHaajgcGYUDGkTTcGCATWRtKuWJKmE57msEp0qCHv8WxI/ HV5OhW2LT5BD48ImZRnlPqtnclcgmYbvdeg7oPBcgXZ14mIqTVOA/bkoxc8ZI7/W 4TXU9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAAa7HCk24EXNjjEAKTr5/MysFSZd DVJbVc+QpThDrEAj6OzCteLXGiSYhtDi4EXeyJORifau+UYLihuy2BU3TooWDKIZ 4+grA2XGe7+N+d02mbLbMnloVyrqslweMy9muQUzjbH7gvtQj9X0ZvIWcTJCvhwX y+sh9N42+sqJTLu3 -----END CERTIFICATE----- </ca> <cert> -----BEGIN CERTIFICATE----- MIICxjCCAa4CAQAwDQYJKoZIhvcNAQEFBQAwKTEaMBgGA1UEAxMRVlBOR2F0ZUNs aWVudENlcnQxCzAJBgNVBAYTAkpQMB4XDTEzMDIxMTAzNDk0OVoXDTM3MDExOTAz MTQwN1owKTEaMBgGA1UEAxMRVlBOR2F0ZUNsaWVudENlcnQxCzAJBgNVBAYTAkpQ MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5h2lgQQYUjwoKYJbzVZA 5VcIGd5otPc/qZRMt0KItCFA0s9RwReNVa9fDRFLRBhcITOlv3FBcW3E8h1Us7RD 4W8GmJe8zapJnLsD39OSMRCzZJnczW4OCH1PZRZWKqDtjlNca9AF8a65jTmlDxCQ CjntLIWk5OLLVkFt9/tScc1GDtci55ofhaNAYMPiH7V8+1g66pGHXAoWK6AQVH67 XCKJnGB5nlQ+HsMYPV/O49Ld91ZN/2tHkcaLLyNtywxVPRSsRh480jju0fcCsv6h p/0yXnTB//mWutBGpdUlIbwiITbAmrsbYnjigRvnPqX1RNJUbi9Fp6C2c/HIFJGD ywIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQChO5hgcw/4oWfoEFLu9kBa1B//kxH8 hQkChVNn8BRC7Y0URQitPl3DKEed9URBDdg2KOAz77bb6ENPiliD+a38UJHIRMqe UBHhllOHIzvDhHFbaovALBQceeBzdkQxsKQESKmQmR832950UCovoyRB61UyAV7h +mZhYPGRKXKSJI6s0Egg/Cri+Cwk4bjJfrb5hVse11yh4D9MHhwSfCOH+0z4hPUT Fku7dGavURO5SVxMn/sL6En5D+oSeXkadHpDs+Airym2YHh15h0+jPSOoR6yiVp/ 6zZeZkrN43kuS73KpKDFjfFPh8t4r1gOIjttkNcQqBccusnplQ7HJpsk -----END CERTIFICATE----- </cert> <key> -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA5h2lgQQYUjwoKYJbzVZA5VcIGd5otPc/qZRMt0KItCFA0s9R wReNVa9fDRFLRBhcITOlv3FBcW3E8h1Us7RD4W8GmJe8zapJnLsD39OSMRCzZJnc zW4OCH1PZRZWKqDtjlNca9AF8a65jTmlDxCQCjntLIWk5OLLVkFt9/tScc1GDtci 55ofhaNAYMPiH7V8+1g66pGHXAoWK6AQVH67XCKJnGB5nlQ+HsMYPV/O49Ld91ZN /2tHkcaLLyNtywxVPRSsRh480jju0fcCsv6hp/0yXnTB//mWutBGpdUlIbwiITbA mrsbYnjigRvnPqX1RNJUbi9Fp6C2c/HIFJGDywIDAQABAoIBAERV7X5AvxA8uRiK k8SIpsD0dX1pJOMIwakUVyvc4EfN0DhKRNb4rYoSiEGTLyzLpyBc/A28Dlkm5eOY fjzXfYkGtYi/Ftxkg3O9vcrMQ4+6i+uGHaIL2rL+s4MrfO8v1xv6+Wky33EEGCou QiwVGRFQXnRoQ62NBCFbUNLhmXwdj1akZzLU4p5R4zA3QhdxwEIatVLt0+7owLQ3 lP8sfXhppPOXjTqMD4QkYwzPAa8/zF7acn4kryrUP7Q6PAfd0zEVqNy9ZCZ9ffho zXedFj486IFoc5gnTp2N6jsnVj4LCGIhlVHlYGozKKFqJcQVGsHCqq1oz2zjW6LS oRYIHgECgYEA8zZrkCwNYSXJuODJ3m/hOLVxcxgJuwXoiErWd0E42vPanjjVMhnt KY5l8qGMJ6FhK9LYx2qCrf/E0XtUAZ2wVq3ORTyGnsMWre9tLYs55X+ZN10Tc75z 4hacbU0hqKN1HiDmsMRY3/2NaZHoy7MKnwJJBaG48l9CCTlVwMHocIECgYEA8jby dGjxTH+6XHWNizb5SRbZxAnyEeJeRwTMh0gGzwGPpH/sZYGzyu0SySXWCnZh3Rgq 5uLlNxtrXrljZlyi2nQdQgsq2YrWUs0+zgU+22uQsZpSAftmhVrtvet6MjVjbByY DADciEVUdJYIXk+qnFUJyeroLIkTj7WYKZ6RjksCgYBoCFIwRDeg42oK89RFmnOr LymNAq4+2oMhsWlVb4ejWIWeAk9nc+GXUfrXszRhS01mUnU5r5ygUvRcarV/T3U7 TnMZ+I7Y4DgWRIDd51znhxIBtYV5j/C/t85HjqOkH+8b6RTkbchaX3mau7fpUfds Fq0nhIq42fhEO8srfYYwgQKBgQCyhi1N/8taRwpk+3/IDEzQwjbfdzUkWWSDk9Xs H/pkuRHWfTMP3flWqEYgW/LW40peW2HDq5imdV8+AgZxe/XMbaji9Lgwf1RY005n KxaZQz7yqHupWlLGF68DPHxkZVVSagDnV/sztWX6SFsCqFVnxIXifXGC4cW5Nm9g va8q4QKBgQCEhLVeUfdwKvkZ94g/GFz731Z2hrdVhgMZaU/u6t0V95+YezPNCQZB wmE9Mmlbq1emDeROivjCfoGhR3kZXW1pTKlLh6ZMUQUOpptdXva8XxfoqQwa3enA M7muBbF0XN7VO80iJPv+PmIZdEIAkpwKfi201YB+BafCIuGxIF50Vg== -----END RSA PRIVATE 
KEY----- </key> My goal is to capture all the text between the `<ca> .. </ca>` tag. I had try with this code: #! /usr/bin/env python #-*- coding: utf-8 -*- import re def read_file(name): result = "" with open(name, 'r') as lines: for line in lines: result = result + line return result f = read_file('foo') m = re.search('(^<ca>.+</ca>$)', f, re.MULTILINE|re.DOTALL) print m.group(0) But this not work. If I try to put the content of the `foo` file inside a variable, and pass it to the search() function, the code works well. #! /usr/bin/env python #-*- coding: utf-8 -*- import re f = """ <ca> -----BEGIN CERTIFICATE----- MIIB6DCCAVECBCMBFpQwDQYJKoZIhvcNAQEFBQAwOzEPMA0GA1UEAxMGbGZ0Lmpw MRswGQYDVQQKExJ1N2FoMzZpN24wYSBsejFpZzUxCzAJBgNVBAYTAlVTMB4XDTEz MTIwNzE5MjkxNVoXDTIxMDMyMTE5MjkxNVowOzEPMA0GA1UEAxMGbGZ0LmpwMRsw GQYDVQQKExJ1N2FoMzZpN24wYSBsejFpZzUxCzAJBgNVBAYTAlVTMIGfMA0GCSqG SIb3DQEBAQUAA4GNADCBiQKBgQDKEcE9hTtJk/XmOpISG33ADHGpS+fzxjun7N3/ Nqj43JC9EIHazLE2UKVHaajgcGYUDGkTTcGCATWRtKuWJKmE57msEp0qCHv8WxI/ HV5OhW2LT5BD48ImZRnlPqtnclcgmYbvdeg7oPBcgXZ14mIqTVOA/bkoxc8ZI7/W 4TXU9wIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAAa7HCk24EXNjjEAKTr5/MysFSZd DVJbVc+QpThDrEAj6OzCteLXGiSYhtDi4EXeyJORifau+UYLihuy2BU3TooWDKIZ 4+grA2XGe7+N+d02mbLbMnloVyrqslweMy9muQUzjbH7gvtQj9X0ZvIWcTJCvhwX y+sh9N42+sqJTLu3 -----END CERTIFICATE----- </ca> <cert> -----BEGIN CERTIFICATE----- MIICxjCCAa4CAQAwDQYJKoZIhvcNAQEFBQAwKTEaMBgGA1UEAxMRVlBOR2F0ZUNs aWVudENlcnQxCzAJBgNVBAYTAkpQMB4XDTEzMDIxMTAzNDk0OVoXDTM3MDExOTAz MTQwN1owKTEaMBgGA1UEAxMRVlBOR2F0ZUNsaWVudENlcnQxCzAJBgNVBAYTAkpQ MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA5h2lgQQYUjwoKYJbzVZA 5VcIGd5otPc/qZRMt0KItCFA0s9RwReNVa9fDRFLRBhcITOlv3FBcW3E8h1Us7RD 4W8GmJe8zapJnLsD39OSMRCzZJnczW4OCH1PZRZWKqDtjlNca9AF8a65jTmlDxCQ CjntLIWk5OLLVkFt9/tScc1GDtci55ofhaNAYMPiH7V8+1g66pGHXAoWK6AQVH67 XCKJnGB5nlQ+HsMYPV/O49Ld91ZN/2tHkcaLLyNtywxVPRSsRh480jju0fcCsv6h p/0yXnTB//mWutBGpdUlIbwiITbAmrsbYnjigRvnPqX1RNJUbi9Fp6C2c/HIFJGD ywIDAQABMA0GCSqGSIb3DQEBBQUAA4IBAQChO5hgcw/4oWfoEFLu9kBa1B//kxH8 hQkChVNn8BRC7Y0URQitPl3DKEed9URBDdg2KOAz77bb6ENPiliD+a38UJHIRMqe UBHhllOHIzvDhHFbaovALBQceeBzdkQxsKQESKmQmR832950UCovoyRB61UyAV7h +mZhYPGRKXKSJI6s0Egg/Cri+Cwk4bjJfrb5hVse11yh4D9MHhwSfCOH+0z4hPUT Fku7dGavURO5SVxMn/sL6En5D+oSeXkadHpDs+Airym2YHh15h0+jPSOoR6yiVp/ 6zZeZkrN43kuS73KpKDFjfFPh8t4r1gOIjttkNcQqBccusnplQ7HJpsk -----END CERTIFICATE----- </cert> <key> -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA5h2lgQQYUjwoKYJbzVZA5VcIGd5otPc/qZRMt0KItCFA0s9R wReNVa9fDRFLRBhcITOlv3FBcW3E8h1Us7RD4W8GmJe8zapJnLsD39OSMRCzZJnc zW4OCH1PZRZWKqDtjlNca9AF8a65jTmlDxCQCjntLIWk5OLLVkFt9/tScc1GDtci 55ofhaNAYMPiH7V8+1g66pGHXAoWK6AQVH67XCKJnGB5nlQ+HsMYPV/O49Ld91ZN /2tHkcaLLyNtywxVPRSsRh480jju0fcCsv6hp/0yXnTB//mWutBGpdUlIbwiITbA mrsbYnjigRvnPqX1RNJUbi9Fp6C2c/HIFJGDywIDAQABAoIBAERV7X5AvxA8uRiK k8SIpsD0dX1pJOMIwakUVyvc4EfN0DhKRNb4rYoSiEGTLyzLpyBc/A28Dlkm5eOY fjzXfYkGtYi/Ftxkg3O9vcrMQ4+6i+uGHaIL2rL+s4MrfO8v1xv6+Wky33EEGCou QiwVGRFQXnRoQ62NBCFbUNLhmXwdj1akZzLU4p5R4zA3QhdxwEIatVLt0+7owLQ3 lP8sfXhppPOXjTqMD4QkYwzPAa8/zF7acn4kryrUP7Q6PAfd0zEVqNy9ZCZ9ffho zXedFj486IFoc5gnTp2N6jsnVj4LCGIhlVHlYGozKKFqJcQVGsHCqq1oz2zjW6LS oRYIHgECgYEA8zZrkCwNYSXJuODJ3m/hOLVxcxgJuwXoiErWd0E42vPanjjVMhnt KY5l8qGMJ6FhK9LYx2qCrf/E0XtUAZ2wVq3ORTyGnsMWre9tLYs55X+ZN10Tc75z 4hacbU0hqKN1HiDmsMRY3/2NaZHoy7MKnwJJBaG48l9CCTlVwMHocIECgYEA8jby dGjxTH+6XHWNizb5SRbZxAnyEeJeRwTMh0gGzwGPpH/sZYGzyu0SySXWCnZh3Rgq 5uLlNxtrXrljZlyi2nQdQgsq2YrWUs0+zgU+22uQsZpSAftmhVrtvet6MjVjbByY DADciEVUdJYIXk+qnFUJyeroLIkTj7WYKZ6RjksCgYBoCFIwRDeg42oK89RFmnOr 
LymNAq4+2oMhsWlVb4ejWIWeAk9nc+GXUfrXszRhS01mUnU5r5ygUvRcarV/T3U7 TnMZ+I7Y4DgWRIDd51znhxIBtYV5j/C/t85HjqOkH+8b6RTkbchaX3mau7fpUfds Fq0nhIq42fhEO8srfYYwgQKBgQCyhi1N/8taRwpk+3/IDEzQwjbfdzUkWWSDk9Xs H/pkuRHWfTMP3flWqEYgW/LW40peW2HDq5imdV8+AgZxe/XMbaji9Lgwf1RY005n KxaZQz7yqHupWlLGF68DPHxkZVVSagDnV/sztWX6SFsCqFVnxIXifXGC4cW5Nm9g va8q4QKBgQCEhLVeUfdwKvkZ94g/GFz731Z2hrdVhgMZaU/u6t0V95+YezPNCQZB wmE9Mmlbq1emDeROivjCfoGhR3kZXW1pTKlLh6ZMUQUOpptdXva8XxfoqQwa3enA M7muBbF0XN7VO80iJPv+PmIZdEIAkpwKfi201YB+BafCIuGxIF50Vg== -----END RSA PRIVATE KEY----- </key> """ m = re.search('(^<ca>.+</ca>$)', f, re.MULTILINE|re.DOTALL) print m.group(0) Why does it wont work reading the files? Answer: I think your function reads the file wrong. Note the 'rb' flag. It should be something like this: def read_file(name): f = open(name, 'rb') return f.read() f = read_file('foo')
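Putting the answer's suggestion together with the original regex, a minimal sketch of the whole script could look like this (assuming the `foo` file from the question is in the working directory):

```
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import re

def read_file(name):
    # Read the whole file in one go instead of concatenating line by line.
    with open(name, 'rb') as f:
        return f.read()

f = read_file('foo')
m = re.search('(^<ca>.+</ca>$)', f, re.MULTILINE | re.DOTALL)
if m:
    print m.group(0)
else:
    print 'no <ca> block found'
```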
Set day, month and year of a python time_struct without changing the time Question: How can I set day, month, and year of a python time_struct without changing the time that is already in it? The time is specified by the user and assumed to be a time of the current day. This code converts the given string to a time_struct, but sets the date to 1/1/1900: from time import strptime self._alarmTime = strptime(self._frame.txtAlarmTime.text(), '%H:%M:%S') I now want to replace this 1/1/1900 with some kind of `today()`. It sounds simple, but I have no idea at the moment. Answer: You could also use the datetime module: from datetime import datetime datetime.combine( datetime.today().date(), datetime.strptime('23:46:00', '%H:%M:%S').time() )
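If the result still needs to be a `struct_time` like the one `strptime` produced, the combined `datetime` can be converted back with `timetuple()`; a small sketch, with a hard-coded string standing in for the widget value from the question:

```
from datetime import datetime

user_time = '23:46:00'  # stand-in for self._frame.txtAlarmTime.text()
combined = datetime.combine(datetime.today().date(),
                            datetime.strptime(user_time, '%H:%M:%S').time())
alarm_time = combined.timetuple()  # a time.struct_time with today's date and the given time
print alarm_time
```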
Flattening list in python Question: I have seen many posts regarding how to flatten a list in Python. But I was never able to understand how this is working: `reduce(lambda x,y:x+y,*myList)` Could someone please explain, how this is working: >>> myList = [[[1,2,3],[4,5],[6,7,8,9]]] >>> reduce(lambda x,y:x+y,*myList) [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> Linked already posted : [How to print list of list into one single list in python without using any for or while loop?](http://stackoverflow.com/questions/20402558/how-to-print- list-of-list-into-one-single-list-in-python-without-using-any-for-o) [Flattening a shallow list in Python](http://stackoverflow.com/questions/406121/flattening-a-shallow-list- in-python) [Flatten (an irregular) list of lists in Python](http://stackoverflow.com/questions/2158395/flatten-an-irregular-list- of-lists-in-python) If anybody thinks this is duplicate to other post, I'll remove it once I understood how it works. Thanks. Answer: It is equivalent to : def my_reduce(func, seq, default=None): it = iter(seq) # assign either the first item from the iterable to x or the default value # passed to my_reduce x = next(it) if default is None else default #For each item in iterable, update x by appying the function on x and y for y in it: x = func(x, y) return x ... >>> my_reduce(lambda a, b: a+b, *myList, default=[]) [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> my_reduce(lambda a, b: a+b, *myList) [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> from operator import add >>> my_reduce(add, *myList) [1, 2, 3, 4, 5, 6, 7, 8, 9] >>> my_reduce(lambda a, b: a+b, ['a', 'b', 'c', 'd']) 'abcd' Docstring of `reduce` has a very good explanation: reduce(...) reduce(function, sequence[, initial]) -> value Apply a function of two arguments cumulatively to the items of a sequence, from left to right, so as to reduce the sequence to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). If initial is present, it is placed before the items of the sequence in the calculation, and serves as a default when the sequence is empty.
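One detail worth spelling out is the `*myList` in the call: because `myList` contains a single inner element, the star unpacks it so `reduce` receives the list of lists directly. A short demonstration of the equivalence:

```
myList = [[[1, 2, 3], [4, 5], [6, 7, 8, 9]]]

# *myList unpacks the one outer element, so these two calls are identical:
print reduce(lambda x, y: x + y, *myList)    # [1, 2, 3, 4, 5, 6, 7, 8, 9]
print reduce(lambda x, y: x + y, myList[0])  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
# Step by step: [1,2,3] + [4,5] -> [1,2,3,4,5], then + [6,7,8,9] -> the flat list.
```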
TypeError: 'int' object is not callable In Python Question: Bit of a python noob and I'm required to add a regular expression in my code but can not get it to work :/ i have searched the error message on google and tried to figure out whats wrong but no luck so i figured next best thing to do would be to ask directly. Here is the code so far: # allows us to access a random 'key' in the dictionary import random import re # Contains the question and it's correct answer my_dict = { "A form of Protectionism that imposes a tax on imports" : "tariff", "....is quantity of a good or service that consumers are willing and able to buy at a given price in a given time period" : "Demand", "....is the quantity of a good or service that a producer is willing and able to supply onto the market at a given price in a given time period" : "Supply", "By using ..... businesses can bring down their average costs by producing on a larger scale" : "Economies of scale", "The cost of the next best alternative" : "Opportunity Cost", ".... is the transfer of assets from the public (government) sector to the private sector." : "Privatisation" } # welcome message print("Economics Revision App") print("=======================") # the quiz will end when this variable becomes 'False' playing = True # While the game is running while playing == True: # set the score to 0 score = 0 # gets the number of questions the player wants to answer num = int(input("\nHow many questions would you like: ")) num = re.match("r\d[0-9]{2}$", num()) if match: print ('foo') # loop the correct number of times for i in range(num): # the question is one of the dictionary keys, picked at random question = (random.choice( list(my_dict.keys()))) # the answer is the string mapped to the question key answer = my_dict[question] # print the question, along with the question number print("\nQuestion " + str(i+1) ) print(question + "?") # get the user's answer attempt guess = input("> ") # if their guess is the same as the answer if guess.lower() == answer.lower(): # add 1 to the score and print a message print("Correct!") score += 1 else: print("Incorrect guess again!") # after the quiz, print their final score print("\nYour final score was " + str(score)) # store the user's input... again = input("Enter any key to play again, or 'q' to quit.") #... and quit if they types 'q' if again.lower() == 'q': playing = False The code I'm struggling with in questions Answer: `num = re.match("r\d[0-9]{2}$", num())` `num` is simply an interger, so `num()` is invalid. Should be something like `match = re.match("r\d[0-9]{2}$", str(num))` 1) it should be `match` right? 2) `re` is working on `str`, so the argument passed should be `str(num)` Then the code should be fine and fun as well. :)
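To make the fix concrete, here is a minimal sketch of just the input-validation step; note the pattern is adjusted to `r'\d{1,3}$'` (one to three digits), because the literal `r` inside the original pattern would never match a plain number — treat the exact pattern as an assumption about what the assignment requires.

```
import re

raw = input("\nHow many questions would you like: ")
# Validate the raw string first, since re.match() works on strings, not ints.
if re.match(r'\d{1,3}$', raw):
    num = int(raw)
    print('foo')  # valid input, carry on with the quiz loop using num
else:
    print('Please enter a whole number (up to three digits).')
```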
Is there a way to enable autocompletion for PyQt5 with Eclipse Kepler and PyDev 2.8? Question: it seems I can't find a way to enable autocompletion for PyQt5 in Eclipse using PyDev 2.8. I'm running Mac OS 10.9 Mavericks. While installing PyQt5 I noticed that there weren't '.py' modules installed in the default library paths for PyQt5 but only '.sip' files. Can this be the cause of the problem? If yes, does exist a workaround? Example: from PyQt5.QtCore import * QA #here I'd expect QApplication appear as suggestion but it doesn't Note: this is just an example. Autocompletion does not show any of the modules, classes, method, functions or whatever from PyQt5. The following is the value of the PYTHONPATH variable: macbookpro:~ giovanni$ echo $PYTHONPATH :/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages And here the listing of files and dirs there: macbookpro:site-packages giovanni$ ls -haltR total 0 drwxr-xr-x 26 root wheel 884B 6 Dic 12:51 PyQt5 drwxr-xr-x 3 root wheel 102B 6 Dic 12:51 . drwxr-xr-x 649 root wheel 22K 6 Dic 12:51 .. ./PyQt5: total 36408 drwxr-xr-x 26 root wheel 884B 6 Dic 12:51 . -rwxr-xr-x 1 root wheel 12K 6 Dic 12:51 Qt.so -rwxr-xr-x 1 root wheel 440K 6 Dic 12:51 QtDesigner.so -rwxr-xr-x 1 root wheel 207K 6 Dic 12:51 QtXmlPatterns.so -rwxr-xr-x 1 root wheel 280K 6 Dic 12:51 _QOpenGLFunctions_2_0.so -rw-r--r--@ 1 root wheel 826B 6 Dic 12:51 __init__.py drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 uic -rwxr-xr-x 1 root wheel 95K 6 Dic 12:51 QtSerialPort.so -rwxr-xr-x 1 root wheel 379K 6 Dic 12:51 QtSql.so -rwxr-xr-x 1 root wheel 128K 6 Dic 12:51 QtSvg.so -rwxr-xr-x 1 root wheel 116K 6 Dic 12:51 QtTest.so -rwxr-xr-x 1 root wheel 211K 6 Dic 12:51 QtWebKit.so -rwxr-xr-x 1 root wheel 358K 6 Dic 12:51 QtWebKitWidgets.so -rwxr-xr-x 1 root wheel 5,8M 6 Dic 12:51 QtWidgets.so -rwxr-xr-x 1 root wheel 729K 6 Dic 12:51 QtMultimedia.so -rwxr-xr-x 1 root wheel 137K 6 Dic 12:51 QtMultimediaWidgets.so -rwxr-xr-x 1 root wheel 861K 6 Dic 12:51 QtNetwork.so -rwxr-xr-x 1 root wheel 153K 6 Dic 12:51 QtOpenGL.so -rwxr-xr-x 1 root wheel 266K 6 Dic 12:51 QtPrintSupport.so -rwxr-xr-x 1 root wheel 595K 6 Dic 12:51 QtQml.so -rwxr-xr-x 1 root wheel 920K 6 Dic 12:51 QtQuick.so -rwxr-xr-x 1 root wheel 327K 6 Dic 12:51 QtSensors.so drwxr-xr-x 3 root wheel 102B 6 Dic 12:51 .. -rwxr-xr-x 1 root wheel 2,7M 6 Dic 12:51 QtCore.so -rwxr-xr-x 1 root wheel 3,0M 6 Dic 12:51 QtGui.so -rwxr-xr-x 1 root wheel 148K 6 Dic 12:51 QtHelp.so ./PyQt5/uic: total 200 drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 . drwxr-xr-x 26 root wheel 884B 6 Dic 12:51 .. drwxr-xr-x@ 9 root wheel 306B 6 Dic 12:51 Compiler drwxr-xr-x@ 5 root wheel 170B 6 Dic 12:51 Loader -rw-r--r--@ 1 root wheel 8,5K 6 Dic 12:51 __init__.py -rw-r--r--@ 1 root wheel 4,0K 6 Dic 12:51 driver.py -rw-r--r--@ 1 root wheel 2,1K 6 Dic 12:51 exceptions.py -rw-r--r--@ 1 root wheel 5,0K 6 Dic 12:51 icon_cache.py -rw-r--r--@ 1 root wheel 5,3K 6 Dic 12:51 objcreator.py drwxr-xr-x@ 9 root wheel 306B 6 Dic 12:51 port_v2 drwxr-xr-x@ 9 root wheel 306B 6 Dic 12:51 port_v3 -rw-r--r--@ 1 root wheel 18K 6 Dic 12:51 properties.py -rw-r--r--@ 1 root wheel 2,7K 6 Dic 12:51 pyuic.py -rw-r--r--@ 1 root wheel 35K 6 Dic 12:51 uiparser.py drwxr-xr-x@ 7 root wheel 238B 6 Dic 12:51 widget-plugins ./PyQt5/uic/Compiler: total 104 drwxr-xr-x@ 9 root wheel 306B 6 Dic 12:51 . drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 .. 
-rw-r--r--@ 1 root wheel 1,0K 6 Dic 12:51 __init__.py -rw-r--r--@ 1 root wheel 4,4K 6 Dic 12:51 compiler.py -rw-r--r--@ 1 root wheel 2,7K 6 Dic 12:51 indenter.py -rw-r--r--@ 1 root wheel 2,5K 6 Dic 12:51 misc.py -rw-r--r--@ 1 root wheel 4,2K 6 Dic 12:51 proxy_metaclass.py -rw-r--r--@ 1 root wheel 5,5K 6 Dic 12:51 qobjectcreator.py -rw-r--r--@ 1 root wheel 16K 6 Dic 12:51 qtproxies.py ./PyQt5/uic/Loader: total 32 drwxr-xr-x@ 5 root wheel 170B 6 Dic 12:51 . drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 .. -rw-r--r--@ 1 root wheel 1,0K 6 Dic 12:51 __init__.py -rw-r--r--@ 1 root wheel 3,0K 6 Dic 12:51 loader.py -rw-r--r--@ 1 root wheel 4,9K 6 Dic 12:51 qobjectcreator.py ./PyQt5/uic/port_v2: total 56 drwxr-xr-x@ 9 root wheel 306B 6 Dic 12:51 . drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 .. -rw-r--r--@ 1 root wheel 1,0K 6 Dic 12:51 __init__.py -rw-r--r--@ 1 root wheel 1,4K 6 Dic 12:51 as_string.py -rw-r--r--@ 1 root wheel 1,3K 6 Dic 12:51 ascii_upper.py -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 invoke.py -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 load_plugin.py -rw-r--r--@ 1 root wheel 1,2K 6 Dic 12:51 proxy_base.py -rw-r--r--@ 1 root wheel 1,1K 6 Dic 12:51 string_io.py ./PyQt5/uic/port_v3: total 56 drwxr-xr-x@ 9 root wheel 306B 6 Dic 12:51 . drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 .. -rw-r--r--@ 1 root wheel 1,0K 6 Dic 12:51 __init__.py -rw-r--r--@ 1 root wheel 1,4K 6 Dic 12:51 as_string.py -rw-r--r--@ 1 root wheel 1,3K 6 Dic 12:51 ascii_upper.py -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 invoke.py -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 load_plugin.py -rw-r--r--@ 1 root wheel 1,2K 6 Dic 12:51 proxy_base.py -rw-r--r--@ 1 root wheel 1,0K 6 Dic 12:51 string_io.py ./PyQt5/uic/widget-plugins: total 40 drwxr-xr-x@ 7 root wheel 238B 6 Dic 12:51 . drwxr-xr-x@ 15 root wheel 510B 6 Dic 12:51 .. -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 qaxcontainer.py -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 qscintilla.py -rw-r--r--@ 1 root wheel 1,5K 6 Dic 12:51 qtdeclarative.py -rw-r--r--@ 1 root wheel 1,6K 6 Dic 12:51 qtprintsupport.py -rw-r--r--@ 1 root wheel 2,4K 6 Dic 12:51 qtwebkit.py Answer: It seems things are in place... PyDev 2.8.x did have some issues on setting the PYTHONPATH when things changed, so, ideally, please try the nightly build (see: <http://pydev.org/download.html> for details on getting it) and see if it fixes things for you. Note that on PyDev 3.x you need to point Eclipse to use a Java 7 JVM (some users seem to have issues on making Eclipse use it the proper java vm, especially on Mac OS -- if you have this problem, maybe you can check LiClipse 0.9.0 -- which is mostly a distribution of PyDev standalone + some other niceties + a way to directly support PyDev -- and it has PyDev 3 builtin -- otherwise, take a look at <http://stackoverflow.com/a/20477000/110451> for instructions on how to configure it).
When calling a function from a module can you say what is returned? (Python) Question: It may sound a bit noobish but say I call a function from the module psutil, is there a way to say which value I want back? For example: psutil.swap_memory() returns swap(total=A, used=B, free=C, percent=D, sin=0, sout=0) Is there a way to make it only return B and C? Answer: There are a few ways, the most obvious being: info = psutil.swap_memory() used, free = info.used, info.free The returned object is actually a tuple-like object, so you could also slice it and then unpack it: used, free = psutil.swap_memory()[1:3] There's also the more convoluted approach, which has the advantage of ignoring order: from operator import attrgetter used, free = attrgetter('used', 'free')(psutil.swap_memory())
Python glmnet "No module named _glmnet" Question: **UPDATE** Getting close. Now I'm running `f2py` on the `.pyf` file that should generate the `_glmnet` module. I build the package [python-glmnet](https://github.com/dwf/glmnet-python) packet with the following command. python setup.py config_fc --fcompiler=gnu95 --f77flags='-fdefault-real-8' --f90flags='-fdefault-real-8' build But when I import the module I get this error: > File "/Users/rose/221/tagger/tagger/glmnet/glmnet.py", line 2, in import > _glmnet ImportError: No module named _glmnet How can I import that module? The glmnet directory also contains a `glmnet.pyf` file that begins with the following: ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module _glmnet ! in interface ! in :_glmnet subroutine elnet(ka,parm,no,ni,x,y,w,jd,vp,ne,nx,nlam,flmin,ulam,thr,isd,lmu,a0,ca,ia,nin,rsq,alm,nlp,jerr) ! in :glmnet:glmnet.f integer optional :: ka=1 ! Use covariance updates over naive by default real*8 :: parm integer intent(hide),check(shape(x,0)==no),depend(x) :: no=shape(x,0) integer intent(hide),check(shape(x,1)==ni),depend(x) :: ni=shape(x,1) real*8 dimension(no,ni) :: x real*8 dimension(no),depend(no) :: y real*8 dimension(no),depend(no) :: w integer dimension(*) :: jd real*8 dimension(ni),depend(ni) :: vp integer optional,depend(x) :: ne=min(shape(x,1), nx) integer :: nx integer optional,check((flmin < 1.0 || len(ulam)==nlam)),depend(flmin,ulam) :: nlam=len(ulam) real*8 :: flmin real*8 dimension(nlam) :: ulam real*8 :: thr integer optional :: isd=1 ! Standardize predictors by default **UPDATE** Where can I find this `_glmnet` module? Is it contained in the glmnet.pyf file, as shown below? I tried adding this glment folder to my `PYTHONPATH`, but that didn't change anything. ~/221/tagger/tagger/glmnet master ls __init__.py example_lasso_elastic_net.py glmnet.pyc __init__.pyc glmnet.f glmnet.pyf elastic_net.py glmnet.py ~/221/tagger/tagger/glmnet master head -10 glmnet.pyf ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module _glmnet ! in interface ! in :_glmnet subroutine elnet(ka,parm,no,ni,x,y,w,jd,vp,ne,nx,nlam,flmin,ulam,thr,isd,lmu,a0,ca,ia,nin,rsq,alm,nlp,jerr) ! in :glmnet:glmnet.f integer optional :: ka=1 ! Use covariance updates over naive by default real*8 :: parm integer intent(hide),check(shape(x,0)==no),depend(x) :: no=shape(x,0) integer intent(hide),check(shape(x,1)==ni),depend(x) :: ni=shape(x,1) ~/221/tagger/tagger/glmnet master echo $PYTHONPATH /Users/rose/221/tagger/tagger/glmnet: ~/221/tagger/tagger/glmnet master cd .. ~/221/tagger/tagger master python main.py Traceback (most recent call last): File "main.py", line 14, in <module> from glmnet import glmnet File "/Users/rose/221/tagger/tagger/glmnet/glmnet.py", line 2, in <module> import _glmnet ImportError: No module named _glmnet Answer: As far as I can tell from the source, it's looking for the `_gmlnet` module, which is defined in `gmlnet.pyf`. `gmlnet.pyf` is not a python module, it's a set of additional instructions for a program called `f2py`, and python will ignore the `.pyf` file. You need to compile the `.pyf` file along with a fortran file using `f2py`. Use a command like this: f2py -c --fcompiler=gnu95 gmlnet.pyf gmlnet.f Try installing `f2py` and then reinstalling the `gmlnet` package.
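Once f2py has produced the compiled extension (a `_glmnet.so`, or `_glmnet.pyd` on Windows), the `import _glmnet` in `glmnet.py` should resolve as long as that file sits in the same package directory. A quick sanity check, assuming the build output was copied next to `glmnet.py`:

```
import sys
# Assumption: the compiled _glmnet extension was placed in this package directory.
sys.path.insert(0, '/Users/rose/221/tagger/tagger/glmnet')

import _glmnet           # should now succeed instead of raising ImportError
print _glmnet.__file__   # shows which compiled file was actually picked up
```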
matplotlib showing empty plot in Python Question: I am using Python 2.7 and matplotlib to create a simple plot window on Ubuntu 12.10. When `plt.plot()` is called, an empty window comes up, but closes instantaneously, without displaying any figure inside it, only a grey background. However, if I try with `plt.savefig('name')` the correct figure is saved. Also, I guess it is important to mention that I do have `plt.show()` at the end. A plot window does appear, it's just blank. Some code: def plot(self): # Plotting the track coordinates = np.genfromtxt(self.data, delimiter=self.coord_delimiter) plt.plot(coordinates[:, 0:1], coordinates[:, 1:2], linestyle='-', color=self.track_color, linewidth='2') plt.savefig('m1') # This works plt.show() What could cause the plot window to be blank, but the saved image to be correct? Answer: It's been a while now, but if I remember correctly, I think I used: plt.show(block=True) in order to get it fixed.
lxml install on windows 7 using pip and python 2.7 Question: When I try to upgrade lxml using pip on my windows 7 machine I get the log printed below. When I uninstall and try to install from scratch I get the same errors. Any ideas? > Downloading/unpacking lxml from > <https://pypi.python.org/packages/source/l/lxml/l> > xml-3.2.4.tar.gz#md5=cc363499060f615aca1ec8dcc04df331 Downloading > lxml-3.2.4.tar.gz (3.3MB): 3.3MB downloaded Running setup.py egg_info for > package lxml Building lxml version 3.2.4. Building without Cython. ERROR: > Nazwa 'xslt-config' nie jest rozpoznawana jako polecenie wewnętrzne l ub > zewnętrzne, program wykonywalny lub plik wsadowy. > > > ** make sure the development packages of libxml2 and libxslt are > installed * > > > * > > > Using build configuration of libxslt > D:\software\Python27\lib\distutils\dist.py:267: UserWarning: Unknown > distrib ution option: 'bugtrack_url' > warnings.warn(msg) > > warning: no files found matching 'lxml.etree.c' under directory > 'src\lxml' > warning: no files found matching 'lxml.objectify.c' under directory > 'src\lxm l' > warning: no files found matching 'lxml.etree.h' under directory > 'src\lxml' > warning: no files found matching 'lxml.etree_api.h' under directory > 'src\lxm l' > warning: no files found matching 'etree_defs.h' under directory > 'src\lxml' > warning: no files found matching '*.txt' under directory > 'src\lxml\tests' > warning: no files found matching 'pubkey.asc' under directory 'doc' > warning: no files found matching 'tagpython*.png' under directory 'doc' > warning: no files found matching 'Makefile' under directory 'doc' > Installing collected packages: lxml Found existing installation: > > > lxml 2.3 Uninstalling lxml: Successfully uninstalled lxml Running setup.py > install for lxml Building lxml version 3.2.4. Building without Cython. > ERROR: Nazwa 'xslt-config' nie jest rozpoznawana jako polecenie wewnętrzne l > ub zewnętrzne, program wykonywalny lub plik wsadowy. > > > ** make sure the development packages of libxml2 and libxslt are > installed * > > > * > > > Using build configuration of libxslt > building 'lxml.etree' extension > D:\software\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD > /W3 /GS- /DNDEBUG > > > -Ic:\users\x\appdata\local\temp\pip_build_x\lxml\src\lxml\inc ludes > -ID:\software\Python27\include -ID:\software\Python27\PC /Tcsrc\lxml\lxml. > etree.c /Fobuild\temp.win32-2.7\Release\src\lxml\lxml.etree.obj lxml.etree.c > c:\users\x\appdata\local\temp\pip_build_x\lxml\src\lxml\includes\etree_d > efs.h(9) : fatal error C1083: Cannot open include file: > 'libxml/xmlversion.h': N o such file or directory > D:\software\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distrib > ution option: 'bugtrack_url' warnings.warn(msg) error: command > '"D:\software\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' fai led with exit > status 2 Complete output from command D:\software\Python27\python.exe -c > "import setu ptools;**file** > ='c:\users\x\appdata\local\temp\pip_build_x\lxml\setu > p.py';exec(compile(open(**file**).read().replace('\r\n', '\n'), **file** , > 'exec' ))" install --record c:\users\x\appdata\local\temp\pip-pyyuss- > record\install-r ecord.txt \--single-version-externally-managed: Building > lxml version 3.2.4. > > Building without Cython. > > ERROR: Nazwa 'xslt-config' nie jest rozpoznawana jako polecenie wewnętrzne > lub z ewnętrzne, > > program wykonywalny lub plik wsadowy. 
> > ** make sure the development packages of libxml2 and libxslt are installed > ** > > Using build configuration of libxslt > > running install > > running build > > running build_py > > creating build > > creating build\lib.win32-2.7 > > creating build\lib.win32-2.7\lxml > > copying src\lxml\builder.py -> build\lib.win32-2.7\lxml > > copying src\lxml\cssselect.py -> build\lib.win32-2.7\lxml > > copying src\lxml\doctestcompare.py -> build\lib.win32-2.7\lxml > > copying src\lxml\ElementInclude.py -> build\lib.win32-2.7\lxml > > copying src\lxml\pyclasslookup.py -> build\lib.win32-2.7\lxml > > copying src\lxml\sax.py -> build\lib.win32-2.7\lxml > > copying src\lxml\usedoctest.py -> build\lib.win32-2.7\lxml > > copying src\lxml_elementpath.py -> build\lib.win32-2.7\lxml > > copying src\lxml__init__.py -> build\lib.win32-2.7\lxml > > creating build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes__init__.py -> build\lib.win32-2.7\lxml\includes > > creating build\lib.win32-2.7\lxml\html > > copying src\lxml\html\builder.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\clean.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\defs.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\diff.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\ElementSoup.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\formfill.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\html5parser.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\soupparser.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html\usedoctest.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html_diffcommand.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html_html5builder.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html_setmixin.py -> build\lib.win32-2.7\lxml\html > > copying src\lxml\html__init__.py -> build\lib.win32-2.7\lxml\html > > creating build\lib.win32-2.7\lxml\isoschematron > > copying src\lxml\isoschematron__init__.py -> > build\lib.win32-2.7\lxml\isoschema tron > > copying src\lxml\lxml.etree.h -> build\lib.win32-2.7\lxml > > copying src\lxml\lxml.etree_api.h -> build\lib.win32-2.7\lxml > > copying src\lxml\includes\c14n.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\config.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\dtdvalid.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\etreepublic.pxd -> > build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\htmlparser.pxd -> > build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\relaxng.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\schematron.pxd -> > build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\tree.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\uri.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\xinclude.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\xmlerror.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\xmlparser.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\xmlschema.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\xpath.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\xslt.pxd -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\etree_defs.h -> build\lib.win32-2.7\lxml\includes > > copying src\lxml\includes\lxml-version.h -> > 
build\lib.win32-2.7\lxml\includes > > creating build\lib.win32-2.7\lxml\isoschematron\resources > > creating build\lib.win32-2.7\lxml\isoschematron\resources\rng > > copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> > build\lib.win 32-2.7\lxml\isoschematron\resources\rng > > creating build\lib.win32-2.7\lxml\isoschematron\resources\xsl > > copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> > build\lib.win32-2 .7\lxml\isoschematron\resources\xsl > > copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> > build\lib.win32-2 .7\lxml\isoschematron\resources\xsl > > creating build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso- > schematron-xsl t1 > > copying src\lxml\isoschematron\resources\xsl\iso-schematron- > xslt1\iso_abstract_e xpand.xsl -> > build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron -xslt1 > > copying src\lxml\isoschematron\resources\xsl\iso-schematron- > xslt1\iso_dsdl_inclu de.xsl -> > build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xs lt1 > > copying src\lxml\isoschematron\resources\xsl\iso-schematron- > xslt1\iso_schematron _message.xsl -> > build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schemat ron-xslt1 > > copying src\lxml\isoschematron\resources\xsl\iso-schematron- > xslt1\iso_schematron _skeleton_for_xslt1.xsl -> > build\lib.win32-2.7\lxml\isoschematron\resources\xsl\ iso-schematron-xslt1 > > copying src\lxml\isoschematron\resources\xsl\iso-schematron- > xslt1\iso_svrl_for_x slt1.xsl -> > build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron- xslt1 > > copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt > -> build\lib.win32-2.7\lxml\isoschematron\resources\xsl\iso-schematron-xslt1 > > running build_ext > > building 'lxml.etree' extension > > creating build\temp.win32-2.7 > > creating build\temp.win32-2.7\Release > > creating build\temp.win32-2.7\Release\src > > creating build\temp.win32-2.7\Release\src\lxml > > D:\software\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 > /GS \- /DNDEBUG > -Ic:\users\x\appdata\local\temp\pip_build_x\lxml\src\lxml\include s > -ID:\software\Python27\include -ID:\software\Python27\PC > /Tcsrc\lxml\lxml.etre e.c > /Fobuild\temp.win32-2.7\Release\src\lxml\lxml.etree.obj > > lxml.etree.c > > c:\users\x\appdata\local\temp\pip_build_x\lxml\src\lxml\includes\etree_defs. > h(9) : fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': > No su ch file or directory > > D:\software\Python27\lib\distutils\dist.py:267: UserWarning: Unknown > distributio n option: 'bugtrack_url' > > warnings.warn(msg) > > error: command '"D:\software\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' > failed with exit status 2 > > \---------------------------------------- Rolling back uninstall of lxml > Cleaning up... 
Command D:\software\Python27\python.exe -c "import > setuptools;**file** ='c:\user > s\x\appdata\local\temp\pip_build_x\lxml\setup.py';exec(compile(open(_ _file_ > _).read().replace('\r\n', '\n'), **file** , 'exec'))" install --record c:\u > sers\x\appdata\local\temp\pip-pyyuss-record\install-record.txt \--single- > versio n-externally-managed failed with error code 1 in > c:\users\x\appdata\local\temp \pip_build_x\lxml Traceback (most recent call > last): File "D:\software\Python27\Scripts\pip-script.py", line 9, in > load_entry_point('pip==1.4.1', 'console_scripts', 'pip')() File > "D:\software\Python27\lib\site-packages\pip__init__.py", line 148, in ma in > return command.main(args[1:], options) File "D:\software\Python27\lib\site- > packages\pip\basecommand.py", line 169, in main text = > '\n'.join(complete_log) UnicodeDecodeError: 'ascii' codec can't decode byte > 0xa9 in position 72: ordinal not in range(128) Answer: If you have a compiler installed (tested with VS C++ 2008 Express), you can simply run: `set STATICBUILD=true && pip install lxml ` As pointed out [on documentation](http://lxml.de/build.html#static-linking-on- windows), setting `STATICBUILD` will tell to lxml's installer to automatically download all its binary dependencies before build. These `lxml` binary dependencies, that should be present when installing from source, will be downloaded and build together by the installer: * libxslt * iconv * zlib * libxml2 **Bonus** : It also works inside a virtualenv.
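After the static build finishes, a quick sanity check from Python confirms that the compiled extension (the part that was failing on `libxml/xmlversion.h`) is importable and which library versions were linked in:

```
from lxml import etree

# If these print version tuples, the binary dependencies were built in correctly.
print etree.LXML_VERSION
print etree.LIBXML_VERSION, etree.LIBXSLT_VERSION
```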
Building an egg of my python project Question: Can somebody please guide me with a step-by-step procedure on how to eggify my existing python project? The documentation keeps mentioning something about setup.py within a package but I cannot find it in my project... thank you, Answer: You can use [setuptools](https://pypi.python.org/pypi/setuptools) to achieve that. In two steps, you just need to: ### Create the setup.py script This file calls the setuptools API that takes care of building your package. A very simple `setup.py` would look like this: from setuptools import setup, find_packages setup( name='mypackage', version='0.1.0', description='A short description', long_description='I would just open("README.md").read() here', author='Author of the Project', author_email='[email protected]', url='https://github.com/user/proj', packages=find_packages(exclude=['*tests*']), ) ### Generate the .egg file That's definitely the easier part. You just have to call $ python setup.py bdist_egg Just look at the `dist` directory created and you'll find the `.egg` file. I'd suggest you take a look at a very good tutorial about setuptools: <http://pythonhosted.org/an_example_pypi_project/setuptools.html>
can not import the custome apps in django 1.6 Question: i am creating a simple project to test the environment, the project structrue look like the following root/ manage.py mysite/ apps/ __init__.py app1/ urls.py views.py settings.py urls.py __init__.py in the settings.py file, i install app1 in the INSTALLED_APPS, when i try to modify the root urls.py by url(r'^app1/', include('apps.app1.urls')), i got the following error Traceback (most recent call last): File "D:\Python33\lib\site-packages\django\core\urlresolvers.py", line 339, in urlconf_module return self._urlconf_module AttributeError: 'RegexURLResolver' object has no attribute '_urlconf_module' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Python33\lib\site-packages\django\core\handlers\base.py", line 101, in get_response resolver_match = resolver.resolve(request.path_info) File "D:\Python33\lib\site-packages\django\core\urlresolvers.py", line 318, in resolve for pattern in self.url_patterns: File "D:\Python33\lib\site-packages\django\core\urlresolvers.py", line 346, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "D:\Python33\lib\site-packages\django\core\urlresolvers.py", line 341, in urlconf_module self._urlconf_module = import_module(self.urlconf_name) File "D:\Python33\lib\importlib\__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1584, in _gcd_import File "<frozen importlib._bootstrap>", line 1565, in _find_and_load File "<frozen importlib._bootstrap>", line 1532, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 584, in _check_name_wrapper File "<frozen importlib._bootstrap>", line 1022, in load_module File "<frozen importlib._bootstrap>", line 1003, in load_module File "<frozen importlib._bootstrap>", line 560, in module_for_loader_wrapper File "<frozen importlib._bootstrap>", line 868, in _load_module File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed File "D:/project/worldcup\worldcup\urls.py", line 4, in <module> url(r'^app1/', include('apps.app1.urls')), File "D:\Python33\lib\site-packages\django\conf\urls\__init__.py", line 26, in include urlconf_module = import_module(urlconf_module) File "D:\Python33\lib\importlib\__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1584, in _gcd_import File "<frozen importlib._bootstrap>", line 1565, in _find_and_load File "<frozen importlib._bootstrap>", line 1512, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1584, in _gcd_import File "<frozen importlib._bootstrap>", line 1565, in _find_and_load File "<frozen importlib._bootstrap>", line 1512, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1584, in _gcd_import File "<frozen importlib._bootstrap>", line 1565, in _find_and_load File "<frozen importlib._bootstrap>", line 1529, in _find_and_load_unlocked ImportError: No module named 'apps' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\Python33\lib\wsgiref\handlers.py", line 137, in run self.result = application(self.environ, self.start_response) File 
"D:\Python33\lib\site-packages\django\contrib\staticfiles\handlers.py", line 67, in __call__ return self.application(environ, start_response) File "D:\Python33\lib\site-packages\django\core\handlers\wsgi.py", line 206, in __call__ response = self.get_response(request) File "D:\Python33\lib\site-packages\django\core\handlers\base.py", line 196, in get_response response = self.handle_uncaught_exception(request, resolver, sys.exc_info()) File "D:\Python33\lib\site-packages\django\core\handlers\base.py", line 231, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "D:\Python33\lib\site-packages\django\views\debug.py", line 69, in technical_500_response html = reporter.get_traceback_html() File "D:\Python33\lib\site-packages\django\views\debug.py", line 323, in get_traceback_html c = Context(self.get_traceback_data()) File "D:\Python33\lib\site-packages\django\views\debug.py", line 281, in get_traceback_data frames = self.get_traceback_frames() File "D:\Python33\lib\site-packages\django\views\debug.py", line 428, in get_traceback_frames pre_context_lineno, pre_context, context_line, post_context = self._get_lines_from_file(filename, lineno, 7, loader, module_name) File "D:\Python33\lib\site-packages\django\views\debug.py", line 379, in _get_lines_from_file source = loader.get_source(module_name) File "<frozen importlib._bootstrap>", line 605, in _requires_frozen_wrapper ImportError: importlib._bootstrap is not a frozen module [08/Dec/2013 22:25:59] "GET /user/hello/ HTTP/1.1" 500 59 my develop environment is python 3.3, django 1.6, win 7 can anybody help me out of this trouble, thanks in previous! Answer: Quick fix modify url(r'^app1/', include('apps.app1.urls')), to url(r'^app1/', include('s20462310mysite.apps.app1.urls')), A better approach would be to change folder structure to: root/ manage.py app1/ __init__.py urls.py views.py mysite/ __init__.py settings.py urls.py __init__.py and then use url(r'^app1_new/', include('app1.urls')), in the main urls.py Other notes: You don't actually need models.py in an application folder In Django 1.6 you don't need "**__init_ _**.py" either But it is a good practice to have those files
Recommendation for validating Python class data? Question: I'm looking for recommendations for automated validation of instance variables in Python objects. The validation should occur immediately upon instantiation or setting. For example, imagine a class like: import recordtype class TemperatureReading(recordtype.recordtype('TemperatureReading', ['lat','long','alt','temp'])): pass I'd like to be able to constrain lat to +/- 90, long to +/- 180, alt >=0, and temp >=0 . Writing getters and setters seems too un-Pythonic and tedious. Is there a better way? Answer: Write getters and setters, but use Python [properties](http://docs.python.org/2/library/functions.html#property) so that it's less tedious for the user. You can do some simple metaprogramming, possibly with a [metaclass](http://stackoverflow.com/a/6581949/344821), to make it less tedious for the class writer as well.
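A minimal sketch of the property-based approach, using a plain class rather than the `recordtype` base from the question and showing only the latitude bound; the other three fields would follow the same pattern:

```
class TemperatureReading(object):
    def __init__(self, lat, long, alt, temp):
        self.lat = lat   # goes through the property setter below
        self.long = long
        self.alt = alt
        self.temp = temp

    @property
    def lat(self):
        return self._lat

    @lat.setter
    def lat(self, value):
        # Validation runs both on instantiation and on every later assignment.
        if not -90 <= value <= 90:
            raise ValueError("lat must be within +/- 90, got %r" % (value,))
        self._lat = value

reading = TemperatureReading(45.0, -122.0, 15.0, 280.0)
print reading.lat       # 45.0
# reading.lat = 91      # would raise ValueError
```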
How to add a library in python Question: I am trying to authenticate the `jawbone` API in Python. In the code there is a line: import requests How can I add this? I have very little knowledge of Python; I am just manipulating the code. Can anyone please help? The library is already present in Python 3.3. Answer: **This is only for users running Python 3.3 on the Windows platform: download the Requests package from any site, copy the Requests folder from the downloaded package, and place it in the C://python33/LIB/ folder. Now you are able to import the Requests package in your program.**
MTV newbie issues with Python Django Question: I am kinda new with the MVC and MTV concept and i am trying to learn Python Django. I want to make catalog with books, must have add and delete functionality. I am trying to follow best practices and working with Generic views, but kinda stuck at the end, i feel that i am missing something very small but i cant figure it out - to be honest i am very exhausted at this moment and i dont have much time. So this is my code: Models: class Books(models.Model): title = models.CharField(max_length=200) author = models.CharField(max_length=200) isbn = models.CharField(max_length=200) pages = models.IntegerField(default=0) def __unicode__(self): return self.title class BooksForm(ModelForm): class Meta: model = Books Views: # coding: utf-8 from django.core.urlresolvers import reverse_lazy from django.views.generic import ListView, UpdateView, CreateView, DetailView from models import Book class BooksDetailView(DetailView): model = Book template_name = "books_portal/details.html" class BooksCreateView(CreateView): model = Book template_name = "books_portal/add.html" success_url = reverse_lazy('books_portal') class BooksUpdateView(UpdateView): model = Book template_name = "books_portal/add.html" success_url = reverse_lazy('books_portal') class BooksListView(ListView): model = Book context_object_name = 'books_list' template_name = "books_portal/index.html" def get_queryset(self): return Book.objects.order_by('author')[:5] Templates: add.html {% extends "books_portal/base.html" %} {% block title %}Add books{% endblock %} {% block extracss %} <style> .top-buffer { margin-top:20px; } .bs-docs-nav { background-color: #563d7c; } </style> {% endblock extracss %} {% block content %} <form action="" method="post" class="form-horizontal" role="form">{% csrf_token %} <div class="row top-buffer"> <div class="col-md-1"> {{form.title.label_tag}} <input type="text" value="" class=""/> </div> </div> <div class="row top-buffer"> <div class="col-md-1"> {{form.author.label_tag}} <input type="text" value="" class=""/> </div> </div> <div class="row top-buffer"> <div class="col-md-2 col-md-offset-1"> <input type="submit" value="Save" class="btn btn-primary btn-lg"/> </div> </div> </form> {% endblock %} base.html <!DOCTYPE html> <html> <head> <title>{% block title %}{{title|default:"Book Library"}}{% endblock %}</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> {% block extracss %}{% endblock extracss %} </head> <body> <div class="container"> <div class="navbar-header"> <a href="{% url 'books_portal' %}" class="navbar-brand">Books Portal</a> </div> {% block content %} {% endblock %} </div> {% block extrajs %}{% endblock extrajs %} </body> </html> details.html {% extends "books_portal/base.html" %} {% block title %}Details{% endblock %} {% block extracss %} <style> .top-buffer { margin-top:20px; } .bs-docs-nav { background-color: #4CD085; } </style> {% endblock extracss %} {% block content %} <div class="row top-buffer"> <div class="col-md-1"> <strong>Title:</strong> </div> <div class="col-md-2"> {{book.title}} </div> </div> <div class="row top-buffer"> <div class="col-md-1"> <strong>Author:</strong> </div> <div class="col-md-2"> {{book.author}} </div> </div> <div class="row top-buffer"> </div> <div class="row"> <div class="col-md-1 col-md-offset-1 text-center"><a href="{% url 'books_portal' %}" class="btn btn-primary btn-lg">OK</a></div> </div> {% endblock %} index.html {% extends "books_portal/base.html" %} {% block title %}Collection of books{% 
endblock %} {% block extracss %} <style> .top-buffer { margin-top:20px; } .bs-docs-nav { background-color: #563d7c; } </style> {% endblock extracss %} {% block content %} <table class="table table table-hover"> <tr> <th class="text-center">Title</th> <th class="text-center">Author</th> <th class="text-center">Edit</th> </tr> {% for book in books_list %} <tr> <td class="text-center"><a href="{% url 'books_details' pk=book.id%}">{{ book.title }}</a></td> <td class="text-center">{{ book.author }}</td> <td class="text-center"><a href="{% url 'books_edit' pk=book.id%}" class="btn btn-default">Delete</a></td> </tr> {% endfor %} </table> <div class="row"> <div class="col-md-2 col-md-offset-5 text-center"><a href="{% url 'books_add' %}" class="btn btn-primary btn-lg">Add</a></div> </div> {% endblock %} Currently i cant add or delete any books, any help will be appreciated. Thank you. Answer: I've cut and pasted all your code, plus a relevant urls.py, into a new project. The immediate problem is that you're not showing any form errors in your add.html. You can just add `{{ form.errors }}` at the top of the template. Once you've done that, you'll see the actual issue: you're not providing all relevant fields when creating your book. In fact, Django can't see _any_ fields, because you haven't given any of the inputs `name` attributes. Really, you shouldn't create the `input` elements manually: you should get Django to do that, because then it also takes responsibility for prepopulating the field with the existing value either when re-showing the form with errors, or when editing an existing Book. It should look like this: {{ form.title.label_tag }} {{ form.title }} {{ form.title.errors }} Then the outstanding issue is that you're not including the `isbn` or `pages` fields, which your model is treating as required. You can either set them as `blank=False` in the model, or use an `exclude` list in the ModelForm. Either way, you'll need `null=False` in the model field for `pages`. Or, of course, you can add them to the form. To be honest, the easiest thing for you to do now, while you're learning, is to replace all the HTML inside your `<form>` tag with just `{{ form.as_p }}`, so that Django outputs everything for you.
split up classmethods in python Question: I am often using classmethods instead of the default constructor in python for example: class Data(object): def __init__(self, x, y, z): self.x = x etc.. @classmethod def from_xml(cls, xml_file): x, y, z = import_something_from_xml(xml_file) return cls(x,y,z) this approach works good, but since i often have large classmethod- constructors I want to split them up in smaller functions. My problem with that is, that these smaller functions can be seen in the Class namespace, Is there any way to avoid this ? Answer: You can mark the smaller helper functions as private: @classmethod def __import_something_from_xml(cls, data): #logic return a, b, c and you would run: @classmethod def from_xml(cls, xml_file): x, y, z = cls.__import_something_from_xml(xml_file) return cls(x,y,z) Keep in mind this is only naming convention and this method can be accessed from Data namespace. Or you can designate a helper class: class XMLDataHelper: @staticmethod def import_something_from_xml(data): #logic return a, b, c And the code would look like this @classmethod def from_xml(cls, xml_file): x, y, z = XMLDataHelper.import_something_from_xml(xml_file) return cls(x,y,z)
Can PyCharm's optimize imports also alphabetize them? Question: I am enjoying PyCharm's optimizing of Python imports - as well as removing unused imports, following PEP8 gives them a sensible layout and makes them easier to read. Is there any way to get PyCharm to additionally alphabetize them (which would make scanning them faster, for me at least)? Answer: PyCharm sorts imports only according to groups specified in PEP-8, not alphabetically.
How to implement Circular Permutation (left and right shift) of CSR_Matrix in Scipy Python Sparse Matrices? Question: I am using Scipy sparse matrix `csr_matrix` to be used as context vectors in word-context vectors. My `csr_matrix` is a `(1, 300)` shape so it is a 1-dimensional vector. I need to use permutation (circular right shift or circular left shift) on the sparse vector (for showing left context and right context). example: i have `[1, 2, 3, 4]` and i want to create right and left permutations as follow: right permutation: `[4, 1, 2, 3]` left permutation: `[2, 3, 4, 1]` In csr matrices i can't access to column indices so i can not just change the column indices. Is there any efficient high performance solution for row permutations in `csr_matrix` or am i missing something? runnable code: from scipy.sparse import csr_matrix rows = [0, 0, 0] columns = [100, 47, 150] data = [-1, +1, -1] contextMatrix = csr_matrix( (data,(rows, columns)), shape=(1, 300) ) it means that i have a 300-column vector whose columns 100, 47, 150 all from row 0 are non-zero valued and their value is in data list respectively. now what i want is a permutation which means i want the columns array be changed into [101, 48, 151] for right permutation and [99, 46, 149] for left permutation. It should be noted that permutations are circular which means if column 299 has non-zero data, using a right permutation the data will be moved to column 0. Answer: You can access and alter the `data` and `indices` attributes of your CSR matrix, which are stored as NumPy arrays. <http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html#scipy.sparse.csr_matrix> So using your code and following the suggestion in the comments you could do this: from scipy.sparse import csr_matrix rows = [0, 0, 0] columns = [100, 47, 150] data = [-1, +1, -1] m = csr_matrix( (data,(rows, columns)), shape=(1, 300) ) indices = m.indices # right permutation m.indices = (indices + 1) % m.shape[1] # left permutation m.indices = (indices - 1) % m.shape[1]
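A small check that applies this to the matrix from the question and prints the non-zero column positions before and after; the modulo is what makes the shift circular, wrapping column 299 back to 0 on a right shift:

```
from scipy.sparse import csr_matrix

m = csr_matrix(([-1, +1, -1], ([0, 0, 0], [100, 47, 150])), shape=(1, 300))
print sorted(m.indices)        # [47, 100, 150]

right = m.copy()
right.indices = (right.indices + 1) % right.shape[1]
print sorted(right.indices)    # [48, 101, 151]

left = m.copy()
left.indices = (left.indices - 1) % left.shape[1]
print sorted(left.indices)     # [46, 99, 149]
```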
Deploying Flask app on EC2 for localhost access Question: I have finished up a simple Flask app that I am trying to host on an AWS EC2 instance with Apache2. I've been following [this tutorial](http://blog.garethdwyer.co.za/2013/07/getting-simple-flask-app- running-on.html). The only changes I've made from the development process (in which the app runs totally fine when I run it and then try to access it via localhost) are: 1) Moved all the code in to /var/www 2) Changed it so that if __name__=='__main__': app.run(debug = False) #Now False instead of True 3) Added a app.wsgi file 4) Added file my_app to /etc/apache2/sites-available 5) Ran these commands: $ sudo a2dissite default $ sudo a2ensite sitename.com $ sudo /etc/init.d/apache2 restart Here is the app.wsgi file: import sys sys.path.insert(0, '/var/www/my_app') from app import app as application Here is the my_app file in `/etc/apache2/sites-available`: <VirtualHost *:5000> WSGIDaemonProcess app WSGIScriptAlias / /var/www/my_app/app.wsgi <Directory /var/www/my_app> WSGIProcessGroup app WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from 127.0.0.1 </Directory> </VirtualHost> As you can see from the above file, I only want the flask app to be available on localhost. When I run apache and try to access the website at `my_site.com:5000` I get an "Unable to connect error". I can't really figure out why. Any help would be appreciated. Also, here is my directory structure for the Flask app itself if that's needed: /var/www/my_app/ app/ __init__.py static/ css/ bootstrap.css favicon.ico js/ bootstrap.js templates/ base.html index.html search.html views.py app.wsgi flask/ #my virtualenv #Your typical virutalenv structure flask_util_js.py #File that is a FLask entension for client-side generation of URLs requirements.txt run.py virtualenv.py #Creates virutalenv flask * * * **UPDATE:** So, I got the feeling that the way I had my code set up was being problematic. So I took everything in `run.py`, `__init__.py`, and `views.py` and made one big `main.py`. I've updated my `app.wsgi` to look like this: **app.wsgi** import sys activate_this = '/home/achumbley/flask/bin/activate_this.py' execfile(activate_this, dict(__file__=activate_this)) sys.path.insert(0, '/home/achumbley/new_flask_app') from main import app as application And now, my `/etc/apache2/sites-available/new_flask_app` looks like: <VirtualHost *> ServerName dev.east.appliedminds.com WSGIDaemonProcess app WSGIScriptAlias / /var/www/app.wsgi <Directory /home/achumbley/new_flask_app> WSGIProcessGroup main WSGIScriptReloading On WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> Finally, he is my newest directory structure: /home/my_username/new_flask_app/ logging.log flask_util_js.py main.py static/ templates/ It still does not work. But, it's possible I don't understand how to run the whole thing. I should have to run a `python main.py` right? It should just be automatic, at least that is what I assumed. Answer: > Moved all the code in to /var/www This is wrong. You need to post your code in a non-web accessible directory. Only post your static files to `/var/www` Please see the [official deployment guide](http://flask.pocoo.org/docs/deploying/mod_wsgi/) on how to set it up with Apache and mod_wsgi. If you are having problems, you might consider [this AMI image](http://thecloudmarket.com/image/ami-ad29fec4--flask-nginx-uwsgi- amazon-linux-32) which has flask, nginx and uwsgi installed. 
The nginx+uwsgi stack is also [detailed in the
documentation](http://flask.pocoo.org/docs/deploying/uwsgi/). Here are the
steps (simplified) that you need to follow: Assume that your application is:
    my_app/
        static/
            logo.gif
            style.css
        templates/
            index.html
        main.py
Then follow these instructions:
  1. Upload all your code into `/home/youruser/`
  2. Upload the `app.wsgi` file to `/var/www/`
  3. Upload the contents of the static directory to `/var/www/static`
  4. In the `app.wsgi` file:

        import sys
        sys.path.insert(0, '/home/youruser/my_app')
        from main import app as application
python ascii to unicode conversion Question: I have a file with data like this:
    \r\n\tSoci\u00e9t\u00e9 implant\u00e9 dans l'internet recrute des t\u00e9l\u00e9conseillers en b to b pour effectuer de la prise de rendez-vous qualifi\u00e9 pour de la conception de site internet et du r\u00e9f\u00e9rencement google.
How can I print it as unicode, like this:
    Société implanté dans l'internet recrute des téléconseillers en b to b pour effectuer de la prise de rendez-vous qualifié pour de la conception de site internet et du référencement google.
I know I have to use some unicode function, but which one? Answer: That looks like a
python unicode string literal; decode this from `unicode_escape`. Demo:
    >>> data = "\r\n\tSoci\u00e9t\u00e9 implant\u00e9 dans l'internet recrute des t\u00e9l\u00e9conseillers en b to b pour effectuer de la prise de rendez-vous qualifi\u00e9 pour de la conception de site internet et du r\u00e9f\u00e9rencement google."
    >>> data.decode('unicode_escape')
    u"\r\n\tSoci\xe9t\xe9 implant\xe9 dans l'internet recrute des t\xe9l\xe9conseillers en b to b pour effectuer de la prise de rendez-vous qualifi\xe9 pour de la conception de site internet et du r\xe9f\xe9rencement google."
    >>> print data.decode('unicode_escape')
    Société implanté dans l'internet recrute des téléconseillers en b to b pour effectuer de la prise de rendez-vous qualifié pour de la conception de site internet et du référencement google.
You can either decode the data as you read it from the file (using a binary mode), or
you can use `io.open()` in Python 2, or regular `open()` in Python 3 to have data
decoded on the fly:
    from io import open
    with open(filename, 'r', encoding="unicode_escape") as inputfile:
        for line in inputfile:
            print(line)
Note that JSON strings use the same escape syntax; `\uhhhh` denotes a Unicode
codepoint using just ASCII characters.
I'm getting slightly different hmac signatures out of clojure and python Question: The HMAC SHA1 signatures I'm getting from my python implementation and my clojure implementation are slightly different. I'm stumped as to what would cause that. Python implementation: import hashlib import hmac print hmac.new("my-key", "my-data", hashlib.sha1).hexdigest() # 8bcd5631480093f0b00bd072ead42c032eb31059 Clojure implementation: (ns my-project.hmac (:import (javax.crypto Mac) (javax.crypto.spec SecretKeySpec))) (def algorithm "HmacSHA1") (defn return-signing-key [key mac] "Get an hmac key from the raw key bytes given some 'mac' algorithm. Known 'mac' options: HmacSHA1" (SecretKeySpec. (.getBytes key) (.getAlgorithm mac))) (defn sign-to-bytes [key string] "Returns the byte signature of a string with a given key, using a SHA1 HMAC." (let [mac (Mac/getInstance algorithm) secretKey (return-signing-key key mac)] (-> (doto mac (.init secretKey) (.update (.getBytes string))) .doFinal))) ; Formatting (defn bytes-to-hexstring [bytes] "Convert bytes to a String." (apply str (map #(format "%x" %) bytes))) ; Public functions (defn sign-to-hexstring [key string] "Returns the HMAC SHA1 hex string signature from a key-string pair." (bytes-to-hexstring (sign-to-bytes key string))) (sign-to-hexstring "my-key" "my-data") ; 8bcd563148093f0b0bd072ead42c32eb31059 Answer: The part of your Clojure code that translates bytes to hex strings drops leading zeros. You could use a format string that maintains a leading zero (`"%02x"`), or use a proper hex encoding library, such as [Guava](http://code.google.com/p/guava- libraries/) or [Commons Codec](http://commons.apache.org/proper/commons- codec/).
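To see the effect concretely, here is a small Python illustration of the same formatting mistake (the three byte values are made up; any byte below 0x10 loses a character with a plain `%x`):

    sample = [0x8b, 0x05, 0x63]                  # three digest bytes, one below 0x10
    print "".join("%x" % b for b in sample)      # 8b563  : the leading zero is gone
    print "".join("%02x" % b for b in sample)    # 8b0563 : padded to two hex digits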
TypeError while representing arbitrary element type in multiprocessing.Array Question: >>> from multiprocessing import Array, Value >>> import numpy as np >>> a = [(i,[]) for i in range(3)] >>> a [(0, []), (1, []), (2, [])] >>> a[0][1].extend(np.array([1,2,3])) >>> a[1][1].extend(np.array([4,5])) >>> a[2][1].extend(np.array([6,7,8])) >>> a [(0, [1, 2, 3]), (1, [4, 5]), (2, [6, 7, 8])] Following the python multiprocessing [example: def test_sharedvalues():](http://docs.python.org/2/library/multiprocessing.html) I am trying to create a shared Proxy object using the below code: shared_a = [multiprocessing.Array(id, e) for id, e in a] but it is giving me an error Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python2.6/multiprocessing/__init__.py", line 255, in Array return Array(typecode_or_type, size_or_initializer, **kwds) File "/usr/lib64/python2.6/multiprocessing/sharedctypes.py", line 87, in Array obj = RawArray(typecode_or_type, size_or_initializer) File "/usr/lib64/python2.6/multiprocessing/sharedctypes.py", line 60, in RawArray result = _new_value(type_) File "/usr/lib64/python2.6/multiprocessing/sharedctypes.py", line 36, in _new_value size = ctypes.sizeof(type_) TypeError: this type has no size Answer: Ok. The problem is solved I changed >>> a = [(i,[]) for i in range(3)] to >>> a = [('i',[]) for i in range(3)] and this solved the TypeError. Actually, I also found out that I did not necessarily had to use the i as count within range(3) (since Array automatically allows indexing), The 'i' is for c_int typecode under multiprocessing.sharedctypes Hope this helps.
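For anyone hitting the same traceback, a minimal sketch of the difference; the first argument to `Array` must be a ctypes type or a typecode string such as `'i'`, not an integer:

    from multiprocessing import Array

    ok = Array('i', [1, 2, 3])        # typecode string: works
    print ok[:]                       # [1, 2, 3]

    try:
        bad = Array(0, [1, 2, 3])     # an int is not a type: raises TypeError
    except TypeError as e:
        print "TypeError:", e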
Py2app: saving files on hard drive with easygui Question: I usually save xls files created within Python scripts on my hard drive. That 's usually a pretty straight forward thing to do with pandas, for instance. My problem is that I'm trying to accomplish this from a py2app-compiled script. I tried using easygui to ask where (which folder) to save the file, but I'm not sure how to go about doing it, and once compiled into an app, it crashes at the end. Here's what I attempted: path = easygui.diropenbox() #Easygui is used to get a path in order to save the file to the right place dfA = pd.DataFrame(A) #the pandas dataframe C = ['Gen','Density','ASPL','Modularity'] # pandas' excel file header name = str(n) + "_" + str(NGEN) + "_" + str(nbrhof) + ".xls" # the name of the file (should I add the path here somewhere?) dfA.to_excel(name, path, header=C,index=False) # exporting the dataframe to excel Can I modify this script to save the excel file named "name" to the folder chosen with "easygui.diropenbox()", from a py2app-compiled app? The Traceback is as follow: Traceback (most recent call last): File "/Users/myself/Dropbox/Python/Tests/test2/myscript.py", line 135, in <module> nx.write_gexf(G, path, name+".gexf") File "<string>", line 2, in write_gexf File "/Library/Python/2.7/site-packages/networkx-1.8.1-py2.7.egg/networkx/utils/decorators.py", line 241, in _open_file fobj = _dispatch_dict[ext](path, mode=mode) IOError: [Errno 21] Is a directory: '/Users/Rodolphe/Desktop/chosenfolder' [Finished in 65.5s with exit code 1] [shell_cmd: python -u "/Users/myself/Dropbox/Python/Tests/test2/myscript.py"] [dir: /Users/Rodolphe/Dropbox/Python/Tests/test2] [path: /usr/bin:/bin:/usr/sbin:/sbin] Answer: Here is a working version: import pandas as pd import easygui import os A = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']), 'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])} dfA = pd.DataFrame(A) #the pandas dataframe path = easygui.diropenbox() # easygui is used to get a path name = "tmp.xlsx" # the name of the file path = os.path.join(path, name) # this is the full pathname dfA.to_excel(path, 'tmp_sheet') # exporting the dataframe to excel Note that the `to_excel()` method has an initial parameter that is the full path name to the Excel sheet you are writing and the second parameter is the name of the worksheet. Also it seems your stack trace is referring to another part of your script and may be pointing to another bug.
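One more guard that may be worth adding to the sketch above: `diropenbox()` returns `None` when the user cancels the dialog, which would otherwise crash the bundled app when the path is joined (this reuses the `dfA` and `name` variables from the example):

    path = easygui.diropenbox()
    if path is None:                                  # user pressed Cancel
        easygui.msgbox("No folder selected, nothing was saved.")
    else:
        dfA.to_excel(os.path.join(path, name), 'tmp_sheet')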
Is it necessarily safe to delete unused imports in borrowed code? Question: I'm studying a demo script from the [paramiko](https://github.com/paramiko/paramiko) package ([forward.py](https://github.com/paramiko/paramiko/blob/master/demos/forward.py)) to familiarize myself with SSH tunneling in Python. When I opened the demo script in my IDE, I noticed some unused import warnings: import getpass import os # <--- IDE warning: "'os' imported but unused" import socket # <--- IDE warning: "'socket' imported but unused" import select import SocketServer import sys from optparse import OptionParser And sure enough, neither module is explicitly referenced in the script. I'm a neat freak, so I want to delete this clutter - but I'm also inexperienced and can't tell for certain if they're extraneous. The most straightforward thing for me to do would be to delete the imports and then try to run the script without them, but I don't know that I would be able to tell if something wasn't working correctly since I'm still trying to understand how the script works. I found [a conversation](https://mail.python.org/pipermail/python- list/2013-May/646311.html) that made me think there might be a reason to import a module without using it. But in [investigating further](https://docs.djangoproject.com/en/1.3/topics/http/views/) it seems to me that the example in that conversation has nothing to do with the functionality of the code, and the user is just trying to follow a stylistic convention. How can an import be justified if there is never a reference to whatever was imported? I guess there might be code that runs on import, e.g.: """ silly.py: this is very silly """ class ManBearPig(Man, Bear, Pig): def __init__(self, name): self.name = name @classmethod def summon(cls): """summon Gary""" global Gary Gary = cls('Gary') ManBearPig.summon() That seems like a terrible idea, but I'm not confident enough in my python-fu to be sure. `socket` is already imported within `SocketServer` so what's the point of importing it here if you're not using it explicitly? Answer: Yes, it is safe enough to remove imports that your code doesn't use in the current module, or that do not themselves have side effects. Python imports modules _once_ and stores them in `sys.modules` for future reference; subsequent imports will re-use the already imported module object. Modules that have side effects make use of that, but they still have to be imported that one time. For 'regular' modules, all you have is an extra reference to the module object or to objects within that module. That takes a modicum of extra memory (apart from the module object itself), so the cost isn't that high. But you then do have to track what modules you have used, and what global names therefore can be in use in your code that could lead to bugs.
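You can see the caching described above directly; `SocketServer` imports `socket` itself, and both names end up pointing at the single module object stored in `sys.modules`:

    import sys
    import socket
    import SocketServer

    print SocketServer.socket is socket         # True: the same module object
    print socket is sys.modules['socket']       # True: later imports reuse this entry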
Move files from one directory to another if they appear in a text file in Python (take 2) Question: Ok I'm going to try this again, apologies for my poor effort in my pervious question. I am writing a program in Java and I wanted to move some files from one directory to another based on whether they appear in a list. I could do it manually but there a thousands of files in the directory so it would be an arduous task and I need to repeat it several times! I tried to do it in Java but because I am using Java it appears I cannot use java.nio, and I am not allowed to use external libraries. So I have tried to write something in python. import os import shutil with open('files.txt', 'r') as f: myNames = [line.strip() for line in f] print myNames dir_src = "trainfile" dir_dst = "train" for file in os.listdir(dir_src): print file # testing src_file = os.path.join(dir_src, file) dst_file = os.path.join(dir_dst, file) shutil.move(src_file, dst_file) "files.txt" is in the format: a.txt edfs.txt fdgdsf.txt and so on. So at the moment it is moving everything from train to `trainfile`, but I need to only move files if the are in the `myNames` list. Does anyone have any suggestions? Answer: > so at the moment it is moving everything from train to trainfile, but ii > need to only move files if the are in the myName list You can translate that "if they are in the myName list" directly from English to Python: if file in myNames: shutil.move(src_file, dst_file) And that's it. However, you probably want a _set_ of names, rather than a _list_. It makes more conceptual sense. It's also more efficient, although the speed of looking things up in a list will probably be negligible compared to the cost of copying files around. Anyway, to do that, you need to change one more line: myNames = {line.strip() for line in f} And then you're done.
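For reference, here is the whole loop with both changes applied, a short sketch using the same file and directory names as in the question:

    import os
    import shutil

    with open('files.txt', 'r') as f:
        myNames = {line.strip() for line in f}

    dir_src = "trainfile"
    dir_dst = "train"

    for name in os.listdir(dir_src):
        if name in myNames:
            shutil.move(os.path.join(dir_src, name), os.path.join(dir_dst, name))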
How to create multiple sprites which are similar? Question: Let me show you the code which may help make this question make more sense. import pygame, classes, random pygame.init() screen = pygame.display.set_mode((640, 480)) def game(): pygame.display.set_caption("GOLOGO") laser = classes.Laser(screen) player = classes.Player(laser, screen) bad = classes.BadGuy(screen, 320, 0) bad1 = classes.BadGuy(screen, 260, 0) bad2 = classes.BadGuy(screen, 200, 0) bad3 = classes.BadGuy(screen, 140, 0) bad4 = classes.BadGuy(screen, 80, 0) bad5 = classes.BadGuy(screen, 20, 0) bad6 = classes.BadGuy(screen, 380, 0) bad7 = classes.BadGuy(screen, 440, 0) bad8 = classes.BadGuy(screen, 440, 60) bad9 = classes.BadGuy(screen, 320, 60) bad10 = classes.BadGuy(screen, 260, 60) bad11 = classes.BadGuy(screen, 200, 60) bad12 = classes.BadGuy(screen, 140, 60) bad13 = classes.BadGuy(screen, 80, 60) bad14 = classes.BadGuy(screen, 20, 60) bad15 = classes.BadGuy(screen, 380, 60) space = classes.Space(screen) scoreboard = classes.Scoreboard() allSprites = pygame.sprite.OrderedUpdates(space, laser, player) badSprites = pygame.sprite.Group(bad, bad1, bad2, bad3, bad4, \ bad5, bad6, bad7, bad8, bad9, bad10, bad11, bad12, \ bad13, bad14, bad15) scoreSprite = pygame.sprite.Group(scoreboard) As you can see, there are a lot of BadGuy sprites. They are just in different locations. I don't mind continuing to do this method, but I was hoping that there would be a much easier way to do this. I am probably going to add a bunch more of them while increasing the size of the screen. I am using **python 3 and pygame** Answer: This is a classic case where a list will help. For example: bad_guys = [] bad_guys.append(classes.BadGuy(screen, 320, 0)) bad_guys.append(classes.BadGuy(screen, 260, 0)) bad_guys.append(classes.BadGuy(screen, 200, 0)) # ... and so on ... Now you can refer to a specific bad guy with indexing: bad_guys[0] # looks up the first bad guy, the one at (320, 0) But notice that we have a lot of repetition here. We are always using the `BadGuy` class to make a bad guy, and we always pass `screen` as an argument. We can pull out just those coordinate values, make a list of them, and then loop to make the bad guys: coords = [ (320, 0), (260, 0), (200, 0), (140, 0), (80, 0), (20, 0), (380, 0), (440, 0), (440, 0), (320, 0), (260, 0), (200, 0), (140, 0), (80, 0), (20, 0), (380, 0), ] bad_guys = [] for x, y in coords: bad_guys.append(classes.BadGuy(screen, x, y)) The list of coordinates we have to just build, but now the list of bad guys can be built by a `for` loop. But Python provides a short-cut we can use to build the `bad_guys` list. We can use a "list comprehension" to build the list of bad guys without an explicit `for` loop: coords = [ (320, 0), (260, 0), (200, 0), (140, 0), (80, 0), (20, 0), (380, 0), (440, 0), (440, 0), (320, 0), (260, 0), (200, 0), (140, 0), (80, 0), (20, 0), (380, 0), ] bad_guys = [classes.BadGuy(screen, x, y) for x, y in coords] That handles creating a list of bad guys. Now that we have the list, we can need to call `pygame.sprite.Group()` with all the bad guys. We could do it by hand: badSprites = pygame.sprite.Group(bad_guys[0], bad_guys[1], ... and so on ...) but there is a much better way. In Python we can use the `*` operator to "paste" the values from a list or tuple: badSprites = pygame.sprite.Group(*bad_guys) Now it doesn't matter how many bad guys are in the `bad_guys` list; however many there are, they will be passed in as the arguments to `pygame.sprite.Group()`. 
EDIT: changed the code to store values as coordinates (both `x` and `y`). I wasn't sure what the "bad guy numbers" were but I am pretty sure that @alKid is correct and they are coordinates. Currently all the `y` values are 0 but that could change, so store both `x` and `y` values.
parsing with libclang : getting CXX_BASE_SPECIFIER cursors when base types unknown Question: I'm writing a **documentation generator** and getting the include paths right is hell so **I just skip entirely every includes** when I parse a file. I also tune by hands all problematic defines or #ifdef blocks that would get skipped because of the missing includes (and different command line versus production build). the problem I noticed is that: struct ComplexBuffer : IAnimatable { }; With `IAnimatable` is not declared (or is forward declared). I'm using the python binding of clang.cindex so I use get_children for iteration: this result comes out: Found grammar element "IAnimatable" {CursorKind.CLASS_DECL} [line=37, col=8] Found grammar element "ComplexBuffer" {CursorKind.STRUCT_DECL} [line=39, col=9] if I complete the base type: class IAnimatable {}; struct ComplexBuffer : IAnimatable I get a correct output: Found grammar element "IAnimatable" {CursorKind.CLASS_DECL} [line=37, col=8] Found grammar element "ComplexBuffer" {CursorKind.STRUCT_DECL} [line=39, col=9] Found grammar element "class IAnimatable" {CursorKind.CXX_BASE_SPECIFIER} [line=39, col=25] Found grammar element "class IAnimatable" {CursorKind.TYPE_REF} [line=39, col=25] Exactly what I want because I can detect the inheritance list to put in the documentation. This problem only arises because I skip all the includes. Maybe I can workaround this by reparsing the declaration line by hand ? EDIT PS : my parsing python script for the sake of completion: import clang.cindex index = clang.cindex.Index.create() tu = index.parse(sys.argv[1], args=["-std=c++98"], options=clang.cindex.TranslationUnit.PARSE_SKIP_FUNCTION_BODIES) def printall_visitor(node): print 'Found grammar element "%s" {%s} [line=%s, col=%s]' % (node.displayname, node.kind, node.location.line, node.location.column) def visit(node, func): func(node) for c in node.get_children(): visit(c, func) visit(tu.cursor, printall_visitor) Answer: I'm going to answer this one myself, because the code I came up with can be useful to future googlers. In the end, I have coded the both methods that are supposed to work to retreive the list of base classes in the inheritance list on a class declaration line. one using the AST cursor and one fully manual, coping as much as it can with C++ complexity. 
here is the whole result: #!/usr/bin/env python # -*- coding: utf-8 -*- ''' Created on 2013/12/09 @author: voddou ''' import sys import re import clang.cindex import os import string class bcolors: HEADER = '\033[95m' OKBLUE = '\033[94m' CYAN = '\033[96m' OKGREEN = '\033[92m' WARNING = '\033[93m' FAIL = '\033[91m' ENDC = '\033[0m' MAGENTA = '\033[95m' GREY = '\033[90m' def disable(self): self.HEADER = '' self.OKBLUE = '' self.OKGREEN = '' self.WARNING = '' self.FAIL = '' self.ENDC = '' self.CYAN = '' self.MAGENTA = '' self.GREY = '' from contextlib import contextmanager @contextmanager def scopedColorizer(color): sys.stdout.write(color) yield sys.stdout.write(bcolors.ENDC) #clang.cindex.Config.set_library_file("C:/python27/DLLs/libclang.dll") src_filepath = sys.argv[1] src_basename = os.path.basename(src_filepath) parseeLines = file(src_filepath).readlines() def trim_all(astring): return "".join(astring.split()) def has_token(line, token): trimed = trim_all(line) pos = string.find(trimed, token) return pos != -1 def has_any_token(line, token_list): results = [has_token(line, t) for t in token_list] return any(results) def is_any(astring, some_strings): return any([x == astring for x in some_strings]) def comment_out(line): return "//" + line # alter the original file to remove #inlude directives and protective ifdef blocks for i, l in enumerate(parseeLines): if has_token(l, "#include"): parseeLines[i] = comment_out(l) elif has_any_token(l, ["#ifdef", "#ifdefined", "#ifndef", "#if!defined", "#endif", "#elif", "#else"]): parseeLines[i] = comment_out(l) index = clang.cindex.Index.create() tu = index.parse(src_basename, args=["-std=c++98"], unsaved_files=[(src_basename, "".join(parseeLines))], options=clang.cindex.TranslationUnit.PARSE_SKIP_FUNCTION_BODIES) print 'Translation unit:', tu.spelling, "\n" def gather_until(strlist, ifrom, endtokens): """make one string out of a list of strings, starting from a given index, until one token in endtokens is found. 
ex: gather_until(["foo", "toto", "bar", "kaz"], 1, ["r", "z"]) will yield "totoba" """ result = strlist[ifrom] nextline = ifrom + 1 while not any([string.find(result, token) != -1 for token in endtokens]): result = result + strlist[nextline] nextline = nextline + 1 nearest = result for t in endtokens: nearest = nearest.partition(t)[0] return nearest def strip_templates_parameters(declline): """remove any content between < > """ res = "" nested = 0 for c in declline: if c == '>': nested = nested - 1 if nested == 0: res = res + c if c == '<': nested = nested + 1 return res # thanks Markus Jarderot from Stackoverflow.com def comment_remover(text): def replacer(match): s = match.group(0) if s.startswith('/'): return "" else: return s pattern = re.compile( r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"', re.DOTALL | re.MULTILINE ) return re.sub(pattern, replacer, text) def replace_any_of(haystack, list_of_candidates, by_what): for cand in list_of_candidates: haystack = string.replace(haystack, cand, by_what) return haystack cxx_keywords = ["class", "struct", "public", "private", "protected"] def clean_name(displayname): """remove namespace and type tags """ r = displayname.rpartition("::")[2] r = replace_any_of(r, cxx_keywords, "") return r def find_parents_using_clang(node): l = [] for c in node.get_children(): if c.kind == clang.cindex.CursorKind.CXX_BASE_SPECIFIER: l.append(clean_name(c.displayname)) return None if len(l) == 0 else l # syntax based custom parsing def find_parents_list(node): ideclline = node.location.line - 1 declline = parseeLines[ideclline] with scopedColorizer(bcolors.WARNING): print "class decl line:", declline.strip() fulldecl = gather_until(parseeLines, ideclline, ["{", ";"]) fulldecl = clean_name(fulldecl) fulldecl = trim_all(fulldecl) if string.find(fulldecl, ":") != -1: # if inheritance exists on the declaration line baselist = fulldecl.partition(":")[2] res = strip_templates_parameters(baselist) # because they are separated by commas, they would break the split(",") res = comment_remover(res) res = res.split(",") return res return None # documentation generator def make_htll_visitor(node): if (node.kind == clang.cindex.CursorKind.CLASS_DECL or node.kind == clang.cindex.CursorKind.STRUCT_DECL or node.kind == clang.cindex.CursorKind.CLASS_TEMPLATE): bases2 = find_parents_list(node) bases = find_parents_using_clang(node) if bases is not None: with scopedColorizer(bcolors.CYAN): print "class clang list of bases:", str(bases) if bases2 is not None: with scopedColorizer(bcolors.MAGENTA): print "class manual list of bases:", str(bases2) def visit(node, func): func(node) for c in node.get_children(): visit(c, func) visit(tu.cursor, make_htll_visitor) with scopedColorizer(bcolors.OKGREEN): print "all over" this code has allowed me to accept incomplete C++ translation units, correctly parsing declarations such as this: struct ComplexBuffer : IAnimatable , Bugger, Mozafoka { }; coping also with these: struct AnimHandler : NonCopyable, IHandlerPrivateGetter< AnimHandler, AafHandler > // CRTP { ... }; giving me this output: class manual list of bases: ['NonCopyable', 'IHandlerPrivateGetter<>'] which is nice, the `clang` function version didn't return a single class in the base list. Now it is forseeable to merge the result of both these functions using a `set` to be on the safe side in case the manual parser would miss something. However I'm thinking this could cause subtle duplications because of the difference between `displayname` and my own parser. 
But there you go, googlers: a nice clang Python documentation generator template that doesn't need fully correct build options and is pretty fast because it ignores the `include` statements entirely. Nice day to all.
Django application override & import path? Question: Let's have a django project using a 3rd party application. I'd like to override some of its modules without touching original files. Simple subclassing is not possible here, need to override code transparently as many other apps rely on original class names and functions. Project's structure looks like: django_project/ __init__.py settings.py overrides/ <-- here is a subdir with apps overrides __init__.py payment/ <-- here is an example of app to override __init__.py admin.py forms.py <-- this file is ignored, original is imported models.py tests.py views.py `settings.py` was modified with INSTALLED_APPS=( 'satchmo_store.shop' #'payment' # original values 'overrides.payment' # modified app ... ) The above solution however does not work, because Django does not insert path of added app into modules search path (`sys.path`). Django just loads `admin.py`, `models.py`, `tests.py` and `views.py`, other files like `forms.py` are ignored. **Is this behaviour documented somewhere ? What exactly placing a module name in INSTALLED_APPS does behind scenes ?** I hacked the situation with hardcoding new modules search path in `manage.py` and Apache's setting of WSGIPythonPath. import os.path import sys DIRNAME = os.path.dirname(__file__) APPS_OVERRIDE = os.path.join(DIRNAME, 'overrides') if not APPS_OVERRIDE in sys.path: sys.path.insert(1, APPS_OVERRIDE) I doubt this is the right way. Cann't find a guide describing apps overriding. **So, how can I properly override external Django application in my project ?** The bonus question: **Do I need to copy whole application directory tree, not just particular files which are really modified ?** As far as I know, Python stops at first matching module path, so it won't import other modules available in following parts of the search path. Answer: Example of how to override your form: **overrides/payment/forms.py** from django import forms class YourNewFormThingy(forms.Form): pass **overrides/payment/models.py** from satchmo.payment import forms as satchmo_payment_forms from . import forms satchmo_payment_forms.SomeForm = forms.YourNewFormThingy
Python compare bombs if files not sorted Question: I have written some code to compare two files via a search string. The file = master data file The checkfile = list of states & regions When I have more than 1 state in the file that is not in sorted order it bombs out. How can i get this to work without having to sort my "file" The Error message: Traceback (most recent call last): File "./gangnamstyle.py", line 27, in csvLineList_2 = csv2[lineCount].split(",") IndexError: list index out of range My code: #!/usr/bin/python import csv file = raw_input("Please enter the file name to search: ") #File name checkfile = raw_input("Please enter the file with the search data: ") #datafile save_file = raw_input("Please enter the file name to save: ") #Save Name search_string = raw_input("Please type string to search for: ") #search string #row = raw_input("Please enter column text is in: ") #column number - starts at 0 #ID_INDEX = row #ID_INDEX = int(ID_INDEX) f = open(file) f1 = open(save_file, 'a') csv1 = open(file, "r").readlines() csv2 = open(checkfile, "r").readlines() #what looks for the string in the file copyline=False for line in f.readlines(): if search_string in line: copyline=True if copyline: f1.write(line) for lineCount in range( len( csv1) ): csvLineList_1 = csv1[lineCount].split(",") csvLineList_2 = csv2[lineCount].split(",") if search_string == csvLineList_2[0]: f1.write(csvLineList_2[2]) f1.close() #close saved file f.close() #close source file #csv1.close() #csv2.close() Answer: OK, so that error message is an `IndexError: list index out of range` in the line `csvLineList_2 = csv2[lineCount].split(",")`. There's only one indexing happening there, so apparently `lineCount` is too big for csv2. lineCount is one of the values of range(len(csv1)). That makes it automatically in range for csv1. Apparently csv1 and csv2 are not the same length, causing the IndexError. Now that's quite possible, because they contain lines from different files. Apparently the files don't have equal number of lines. To be honest I have no clue why you are reading the lines into csv1 at all. You loop over those lines and split them (into the variable `csvLineList_1`), but you never use that variable. I think your loop should just be: for line in csv2: parts = line.strip().split(",") # line.strip() removes whitespace and the newline # at the end of the line if search_string == parts[0]: f1.write(parts[2] + "\n") # Add a newline, you probably want it I hope this helps.
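If some lines in the check file can have fewer than three comma-separated fields, a slightly more defensive version of that loop avoids a second IndexError:

    for line in csv2:
        parts = line.strip().split(",")
        if len(parts) >= 3 and parts[0] == search_string:
            f1.write(parts[2] + "\n")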
String Compression: Output Alphabet Restricted to Alphanumeric Characters Question: I have a long string and I would like to compress it to a new string with the restriction that the **output** alphabet only contains `[a-z]` `[A-Z]` and `[0-9]` characters. How can I do this, specifically in Python? Answer: While many encoding algorithms _can_ take an arbitrary output range, most implementations can't, and many algorithms are much less efficient if the output range isn't a power of 2/16/256. So, you want to split this into two parts: First compress one byte stream to another. Then encode the output byte stream into alphanumeric characters. (If you're starting with something that isn't a byte stream, like a Python 3 string or a Python 2 `unicode`, then there's a zeroth step of encoding it into a byte stream.) For example, if you wanted base64, you could do this: import base64, zlib compressed_bytes = zlib.compress(plain_bytes) compressed_text = base64.b64encode(compressed_bytes) * * * Unfortunately, you don't want base-64, because that includes a few non- alphanumeric characters. You can use [base32](http://en.wikipedia.org/wiki/Base32), which has just the capital letters and 6 digits, and the only change to your code is `b32encode` instead of `encode`. But that's a bit wasteful, because it's only using 5 out of every 8 bits, when you could in theory use ~5.594 of each 8 bits. If you want to do this optimally, and you can't bend the requirement for alphanumeric characters only, base62 is very complicated, because you can't do it byte by byte, but only in chunks of 7936 bytes at a time. That's not going to be fun, or efficient. You can get reasonably close to optimal by chunking, say, 32 bytes at a time and wasting the leftover bits. But you might be better off using base64 plus an escaping mechanism to handle the two characters that don't fit into your scheme. For example: def b62encode(plain): b64 = base64.b64encode(plain) return b64.replace('0', '00').replace('+', '01').replace('/', '02') def b62decode(data): b64 = '0'.join(part.replace('01', '+').replace('02', '/') for part in data.split('00')) return base64.b64decode(b64) For comparison, here's how much each algorithm expands your binary data: * base32: 60.0% * fake base62: 39.2% * realistic base62: ~38% * optimal base62: 34.4% * base64: 33% The point of partial-byte transfer encodings like base64 is that they're dead- simple and run fast. While you _can_ extend it to partial-bit encodings like base62, you lose all of the advantages… so if the fake base62 isn't good enough, I'd suggest using something completely different instead. * * * To reverse this, reverse all the same steps in reverse order. Putting it all together, using the fake base62, and using `unicode`/Python 3 strings: plain_bytes = plain_text.encode('utf-8') compressed_bytes = zlib.compress(plain_bytes) b62_bytes = b62encode(compressed_bytes) b62_text = b62_bytes.decode('ascii') b62_bytes = b62_text.encode('ascii') compressed_bytes = b62decode(b62_bytes) plain_bytes = zlib.decompress(compressed_bytes) plain_text = plain_bytes.decode('utf-8') And that's about as complicated as it can get.
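A quick round-trip check of the escaping scheme, using the `b62encode`/`b62decode` functions defined above on some arbitrary test bytes (Python 2 byte strings assumed):

    import zlib

    plain_bytes = b"hello hello hello compression" * 10
    compressed = zlib.compress(plain_bytes)

    assert b62decode(b62encode(compressed)) == compressed
    assert zlib.decompress(b62decode(b62encode(compressed))) == plain_bytes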
Python: literally "printing" a function Question: For reasons I won't get into, I need some way to literally print a function. I know when you run print on a function object you get some complicated <010101045 function> output. So to my understanding perhaps the only way to do this is run some kind of python notepad module. Then I could devise an algorithm to write a line of text to match each function line so as to get a printed function. If anyone knows some other way to achieve this, please do tell because I am sure doing it this way is incredibly complicated. If however this is the only way, if someone could walk me through the basic steps to doing so that would also be greatly appreciated. It occurred to me also that it would be interesting to be able to have the syntax of the printed function be saved in python format so that I could literally just import it directly into a program if I wanted. Yes, I realize this is getting really complicated, but again, a general walk through would be really cool if anyone knows how to do this. If anyone has gotten as far as automating the aforementioned process to the point where a program could modify an imported module from inside of the program importing it, now that would really cool. (In an ideal case of course the modified version would be saved under a different name and be imported as well as the previous version and handled at that point within the program doing the importing) I guess at this point I'm just curious about the basic commands for importing, saving, and modifying of files from within a python program. All insight is appreciated, Thanks, PS: if you are wondering why I want to do all of this, it should be obvious, if not it would take a while to explain. Answer: Use [`inspect.getsource`](http://docs.python.org/2/library/inspect.html?highlight=inspect#inspect.getsource): def a(): pass import inspect print inspect.getsource(a) Of course, this only works if you have access to the original source. If you don't have the original source, the most you could get is the byte code (see `dis.dis`). EDIT: The Python interactive shell is one example where `inspect` does not have access to the source.
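When the source really isn't available (an interactive shell, or only a `.pyc` on disk), the most you can recover is the bytecode; a minimal sketch with the `dis` module mentioned above:

    import dis

    def a():
        return 1 + 2

    dis.dis(a)   # prints the compiled bytecode instructions for a()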
CSV file writing error. python Question: I made code working previously pasted . import serial import csv import os import time def main(): pass if __name__ == '__main__': main() COUNT=0 ser=serial.Serial() ser.port=2 ser.baudrate=9600 foo=open("new.csv","ab"); result=csv.writer(foo,delimiter=',') result_statement=("date","time","Zenith","Azimuth","Elevation","conv_elevation") result.writerow(result_statement) foo.close() while(COUNT<500): ser.open() str=ser.read(110) val=str.split(":") print "value is",val lines=str.split("\r\n") wst=[] for line in lines[:-1]: parts=line.split(":") #print parts for p in parts[1:]: wst.append(p) #print "wst is", wst foo=open("new.csv","a+"); result=csv.writer(foo,delimiter=',') result_statement=wst result.writerow(result_statement) COUNT=COUNT+1 #print COUNT foo.close() ser.close() Here i am getting proper output as below. value is ['date is', ' 1/1/14\r\ntime_is', '9-15-0\r\nZenith', '52.53\r\nAzimuth', '226.85\r\nElevation', '37.47\r\nConverted Elevation', '-46.42\r\nlo'] value is ['date is', ' 1/1/14\r\ntime_is', '9-30-0\r\nZenith', '55.25\r\nAzimuth', '229.47\r\nElevation', '34.75\r\nConverted Elevation', '-42.39\r\nlo'] value is ['date is', ' 1/1/14\r\ntime_is', '9-45-0\r\nZenith', '58.08\r\nAzimuth', '231.84\r\nElevation', '31.92\r\nConverted Elevation', '-38.39\r\nlo'] value is ['date is', ' 1/1/14\r\ntime_is', '10-0-0\r\nZenith', '60.99\r\nAzimuth', '233.98\r\nElevation', '29.01\r\nConverted Elevation', '-34.43\r\nlo'] value is ['date is', ' 1/1/14\r\ntime_is', '10-15-0\r\nZenith', '63.98\r\nAzimuth', '235.92\r\nElevation', '26.02\r\nConverted Elevation', '-30.51\r\nl'] value is ['date is', ' 1/1/14\r\ntime_is', '10-30-0\r\nZenith', '67.03\r\nAzimuth', '237.68\r\nElevation', '22.97\r\nConverted Elevation', '-26.63\r\nl'] value is ['date is', ' 1/1/14\r\ntime_is', '10-45-0\r\nZenith', '70.15\r\nAzimuth', '239.28\r\nElevation', '19.85\r\nConverted Elevation', '-22.78\r\nl'] value is ['date is', ' 1/1/14\r\ntime_is', '11-0-0\r\nZenith', '73.31\r\nAzimuth', '240.73\r\nElevation', '16.69\r\nConverted Elevation', '-18.97\r\nlo'] value is ['date is', ' 1/1/14\r\ntime_is', '11-15-0\r\nZenith', '76.52\r\nAzimuth', '242.05\r\nElevation', '13.48\r\nConverted Elevation', '-15.19\r\nl'] value is ['date is', ' 1/1/14\r\ntime_is', '11-30-0\r\nZenith', '79.76\r\nAzimuth', '243.25\r\nElevation', '10.24\r\nConverted Elevation', '-11.44\r\nl'] value is ['date is', ' 1/1/14\r\ntime_is', '11-45-0\r\nZenith', '83.04\r\nAzimuth', '244.34\r\nElevation', '6.96\r\nConverted Elevation', '-7.72\r\nlon'] value is ['date is', ' 1/1/14\r\ntime_is', '12-0-0\r\nZenith', '86.34\r\nAzimuth', '245.33\r\nElevation', '3.66\r\nConverted Elevation', '-4.02\r\nlong'] value is ['date is', ' 1/1/14\r\ntime_is', '12-15-0\r\nZenith', '89.67\r\nAzimuth', '246.23\r\nElevation', '0.33\r\nConverted Elevation', '-0.36\r\nlon'] value is ['date is', ' 1/1/14\r\ntime_is', '12-30-0\r\nZenith', '93.03\r\nAzimuth', '247.04\r\nElevation', '-3.03\r\nConverted Elevation', '-3.29\r\nlo'] value is ['date is', ' 1/1/14\r\ntime_is', '12-45-0\r\nZenith', '96.40\r\nAzimuth', '247.77\r\nElevation', '-6.40\r\nConverted Elevation', '-6.91\r\nlo'] But While writing in to CSV file it is adding 2000 in time cloumn please let me know reason why it is doing it. 
     date    time        Zenith   Azimuth  Elevation  conv_elevation
    1/1/14   9/15/2000    52.53    226.85      37.47          -46.42
    1/1/14   9/30/2000    55.25    229.47      34.75          -42.39
    1/1/14   9-45-0       58.08    231.84      31.92          -38.39
    1/1/14   10-0-0       60.99    233.98      29.01          -34.43
    1/1/14   10/15/2000   63.98    235.92      26.02          -30.51
    1/1/14   10/30/2000   67.03    237.68      22.97          -26.63
    1/1/14   10-45-0      70.15    239.28      19.85          -22.78
    1/1/14   11-0-0       73.31    240.73      16.69          -18.97
    1/1/14   11/15/2000   76.52    242.05      13.48          -15.19
    1/1/14   11/30/2000   79.76    243.25      10.24          -11.44

Answer: The CSV file itself does not contain the 2000; Excel adds it when it
displays the file. Times such as 9-15-0 and 10-30-0 happen to parse as valid
month-day-year dates, so Excel converts them and shows 9/15/2000 and 10/30/2000
(a year of 0 becomes 2000), while values like 9-45-0 cannot be a date and are
left as plain text, which is why only some rows change. Either select the date
and time columns and apply an explicit date/time format for consistent display,
or import the CSV with the time column type set to Text so Excel does not
convert the values at all (a plain text editor will also show them exactly as
written).
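To confirm the file itself is unchanged, read it back with the csv module; the time column still holds the strings exactly as they were written, it is only Excel's rendering that differs:

    import csv

    with open('new.csv', 'rb') as f:
        for row in csv.reader(f):
            print row[1]     # 'time', '9-15-0', '9-30-0', ... as originally written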
how to store a SQL-database in a python object, and perform queries in the object? Question: I have a big postgrSQL database. I would like to somehow store the full database in a python object, which form/structure would reflect the one of the database. Namely I imagine something like * An object database, with an attribute .tables which a kind of list of object "table", and a table object has an attribute "list_of_keys" (list of the column names) and an attribute "rows", which reflects all the rows of the corresponding table in the database. Now, the **main point** i need is: i want to be able to perform a search in the database object, with exactely the same SQL synthax that i would use in the corresponding SQL database. Thus something like database.execute("SELECT * FROM .....") where, i repeat, "database" is a purely python object (which was filled with data coming from an SQL database, but which is now independent of it). My aim is: i want to be able to apply the same algorithm either on a SQL- Database, or on a python-object, such as described above, without changing my code. So, i imagine, let "database" be either a usual database- connector/cursor (like with psycopg, for example), or a python object as i described, and the same piece of code database.execute("SELECT BLABLABLA") would work in both cases. Is there any known module which allows that ? thanks. Answer: It might get a bit complicated, but take a look at [SQLite's](http://docs.python.org/2/library/sqlite3.html) in-memory storage: import sqlite3 cnx = sqlite3.connect(':memory:') cnx.execute('CREATE TABLE ...') There are some differences in the SQL syntax, but the basic stuff works fine. This might also take a good amount of RAM, depending on your data.
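A slightly fuller sketch of the idea: the in-memory connection accepts the same kind of `execute()` calls you would send to PostgreSQL, so your algorithm can be handed either object (table and column names below are made up, and note the `?` placeholder style differs from psycopg2's `%s`):

    import sqlite3

    cnx = sqlite3.connect(':memory:')
    cnx.execute('CREATE TABLE people (name TEXT, age INTEGER)')
    cnx.executemany('INSERT INTO people VALUES (?, ?)',
                    [('alice', 30), ('bob', 25)])

    for row in cnx.execute('SELECT name, age FROM people WHERE age > 26'):
        print row          # only the row for alice matches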
Access a variable in imported module in Python Question: I have three Python scripts: aaa.py, bbb.py and ccc.py.
bbb.py:
    import aaa as a
ccc.py:
    import bbb as b
Can I use variable `a` directly in ccc.py? Like `a.hello`? Or can anyone
please tell me how to access it? Answer: The answer should be yes. Any name bound at
the top level of bbb.py (including the imported module `a`) becomes an attribute of
the `bbb` module, so from ccc.py you can reach it through `b`:
    import bbb as b
    hello = b.a.hello
using threading in pygame Question: I have a raspberry pi and on one of the gpio
pins I am sending a pulse, thus I have python code to detect interrupts on that pin
and there are at most 2 interrupts every second. Now I want to pass this value
(total no. of interrupts) to a pygame application. Currently the python code for
detecting interrupts writes the total no. of interrupts to a file as each interrupt
is detected, and then the pygame application reads that number from the file. Thus
my question: how can I integrate the interrupt detecting code in pygame using
threads, since I want both the pygame application and the interrupt detecting code
to run in parallel? I read somewhere that pygame is not thread safe. My code for
interrupt detection

    GPIO.setmode(GPIO.BCM)
    count = 0
    GPIO.setup(2, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    def my_callback(channel):
        file = open('hello.txt','w')
        global count
        count += 1
        file.write(str(count))

    GPIO.add_event_detect(2,GPIO.BOTH, callback=my_callback)

    while True:
        print "Waiting for input."
        sleep(60)

    GPIO.cleanup()

pygame application code

    pygame.init()
    size=[640,640]
    screen=pygame.display.set_mode(size)
    pygame.display.set_caption("Test")
    done=False
    clock=pygame.time.Clock()
    font = pygame.font.SysFont("consolas", 25, True)
    frame_rate = 20
    frame_count = 0
    count = 0
    while done==False:
        for event in pygame.event.get(): # User did something
            if event.type == pygame.QUIT: # If user clicked close
                done=True # Flag that we are done so we exit this loop
                pygame.quit()
                sys.exit()

        f = open("hello.txt", "r")
        count = int(f.read())
        output_string = "ACTUAL %s" %count

        text = font.render(output_string,True,red)
        screen.blit(text, [250,420])

        frame_count += 1
        clock.tick(frame_rate)
        pygame.display.flip()

    pygame.quit ()

Answer: You can use e.g. the threadsafe
[`Queue`](http://docs.python.org/2/library/queue.html) class to let your threads
communicate with each other. Use the non-blocking `get_nowait()` inside the pygame
loop, so the display keeps updating even while no interrupt has arrived yet.
quick'n'dirty example:

    import pygame
    from pygame.color import Color
    from Queue import Queue, Empty
    from threading import Thread

    q = Queue()

    def worker():
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(2, GPIO.IN, pull_up_down=GPIO.PUD_UP)

        def my_callback(channel):
            q.put(True)

        GPIO.add_event_detect(2,GPIO.BOTH, callback=my_callback)
        while True:
            print "Waiting for input."
            sleep(60)
        GPIO.cleanup()

    t = Thread(target=worker)
    t.daemon = True
    t.start()

    pygame.init()
    screen = pygame.display.set_mode([640,640])
    clock = pygame.time.Clock()
    font = pygame.font.SysFont("consolas", 25, True)
    count = 0

    pygame.display.set_caption("Test")

    done = False
    while not done:
        screen.fill(Color('black'))
        for event in pygame.event.get(): # User did something
            if event.type == pygame.QUIT: # If user clicked close
                done = True

        try:
            q.get_nowait()   # non-blocking: raises Empty if no interrupt is queued
            count += 1
        except Empty:
            pass

        output_string = "ACTUAL %s" % count
        text = font.render(output_string, True, Color('red'))
        screen.blit(text, [250,420])

        clock.tick(20)
        pygame.display.flip()
Authenticating via curl with Tasty Pie, using Session Authentication Question: So, it seems that I can perform this action just fine from the browser, but I can't seem to replicate it via CURL. Any pointers on how this is supposed to work are greatly, greatly appreciated. I perform this request to log in a user: curl -X POST -H "Content-Type: application/json" \ -d '{"username":"tester", "password":"password"}' --verbose \ http://localhost:8000/api/user/login/ And the response seems to indicate that the request was successful: * About to connect() to localhost port 8000 (#0) * Trying 127.0.0.1... connected > POST /api/user/login/ HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Host: localhost:8000 > Accept: */* > Content-Type: application/json > Content-Length: 44 > * upload completely sent off: 44out of 44 bytes < HTTP/1.1 200 OK < Server: nginx/1.1.19 < Date: Wed, 11 Dec 2013 12:31:34 GMT < Content-Type: application/json < Transfer-Encoding: chunked < Connection: keep-alive < Vary: Accept, Cookie < Set-Cookie: csrftoken=h4tjM6o3QyelsAvUhdqNJPinZRdJyrBz; Path=/ < Set-Cookie: sessionid=4tsny8kcl7j9x7icr6vptnq1ims89tzr; expires=Wed, 25-Dec-2013 12:31:34 GMT; httponly; Max-Age=1209600; Path=/ < * Connection #0 to host localhost left intact * Closing connection #0 {"success": true, "username": "tester"} If I include only the CSRF token in my authenticated request, I get a 401. However, if I include both the CSRF token and the session ID, I get some kind of Python error. For example: curl -X GET -H "Content-Type: application/json" -H \ "X-CSRFToken: h4tjM6o3QyelsAvUhdqNJPinZRdJyrBz" --cookie \ "sessionid=4tsny8kcl7j9x7icr6vptnq1ims89tzr" --verbose \ http://localhost:8000/api/user/ | python -mjson.tool \ I get back from the server: { "error_message": "getattr(): attribute name must be string", "traceback": "Traceback (most recent call last): File \"/opt/phaidra/env/local/lib/python2.7/site-packages/tastypie/resources.py\", line 195, in wrapper\n response = callback(request, *args, **kwargs)\n\n File \"/opt/phaidra/env/local/lib/python2.7/site-packages/tastypie/resources.py\", line 426, in dispatch_list\n return self.dispatch('list', request, **kwargs)\n\n File \"/opt/phaidra/env/local/lib/python2.7/site-packages/tastypie/resources.py\", line 454, in dispatch\n self.throttle_check(request)\n\n File \"/opt/phaidra/env/local/lib/python2.7/site-packages/tastypie/resources.py\", line 551, in throttle_check\n identifier = self._meta.authentication.get_identifier(request)\n\n File \"/opt/phaidra/env/local/lib/python2.7/site-packages/tastypie/authentication.py\", line 515, in get_identifier\n return request._authentication_backend.get_identifier(request)\n\n File \"/opt/phaidra/env/local/lib/python2.7/site-packages/tastypie/authentication.py\", line 283, in get_identifier\n return getattr(request.user, username_field)\n\n TypeError: getattr(): attribute name must be string\n" } Looking at the lines of the errors is not particularly illuminating. Since this error doesn't occur unless --cookie is used, I'm presuming it's trying incorrectly to parse the cookie parameter. It should also be noted that I am using Neo4django, which I believe precludes me from being able to use API Key Authentication. 
The code for my user is as such: class UserResource(ModelResource): class Meta: queryset = AppUser.objects.all() resource_name = 'user' fields = ['first_name', 'last_name', 'username', 'email', 'is_staff'] allowed_methods = ['get', 'post', 'patch'] always_return_data = True authentication = MultiAuthentication(SessionAuthentication(), BasicAuthentication()) authorization = Authorization() def prepend_urls(self): params = (self._meta.resource_name, trailing_slash()) return [ url(r"^(?P<resource_name>%s)/login%s$" % params, self.wrap_view('login'), name="api_login"), url(r"^(?P<resource_name>%s)/logout%s$" % params, self.wrap_view('logout'), name="api_logout") ] def login(self, request, **kwargs): """ Authenticate a user, create a CSRF token for them, and return the user object as JSON. """ self.method_check(request, allowed=['post']) data = self.deserialize(request, request.raw_post_data, format=request.META.get('CONTENT_TYPE', 'application/json')) username = data.get('username', '') password = data.get('password', '') if username == '' or password == '': return self.create_response(request, { 'success': False, 'error_message': 'Missing username or password' }) user = authenticate(username=username, password=password) if user: if user.is_active: login(request, user) response = self.create_response(request, { 'success': True, 'username': user.username }) response.set_cookie("csrftoken", get_new_csrf_key()) return response else: return self.create_response(request, { 'success': False, 'reason': 'disabled', }, HttpForbidden) else: return self.create_response(request, { 'success': False, 'error_message': 'Incorrect username or password' }) def read_list(self, object_list, bundle): """ Allow the endpoint for the User Resource to display only the logged in user's information """ self.is_authenticated(request) return object_list.filter(pk=bundle.request.user.id) (You can view the entire contents of the file, if you need, at <https://github.com/OpenPhilology/phaidra/blob/master/api/api.py>) So, in summary, the main questions/points of confusion for me: 1. Which data must be sent via the curl request to send an authenticated GET/POST/etc.? 2. Is the Authentication value correct for the User Resource? 3. Am I supposed to be able to authenticate with only the CSRF token, or is the session ID also necessary? Thanks in advance for any insight on this! EDIT: Here is the custom user model we have. from django.contrib.auth import authenticate from django.db import models as django_models from neo4django.db import models from neo4django.graph_auth.models import User, UserManager class AppUser(User): objects = UserManager() USERNAME_FIELD = 'username' def __unicode__(self): return unicode(self.username) or u'' Answer: The issue here ended up being two fold: I discovered the Django function get_user_model() was failing -- which is used in several places -- but _not_ because the USERNAME_FIELD was blank. If I hardcoded the values into this file, everything worked fine. The issue instead is that it was failing because Django requires a very specific naming scheme for custom user models. From the Django docs: > This dotted pair describes the name of the Django app (which must be in your > INSTALLED_APPS), and the name of the Django model that you wish to use as > your User model. <https://docs.djangoproject.com/en/dev/topics/auth/customizing/#substituting- a-custom-user-model> HOWEVER, this is not the entire story. 
Django presumes that your AUTH_USER_MODEL can be split by the period in the middle, and this will give it two variables, "app_label" and "model_name". See: def get_user_model(): "Return the User model that is active in this project" from django.conf import settings from django.db.models import get_model try: app_label, model_name = settings.AUTH_USER_MODEL.split('.') except ValueError: raise ImproperlyConfigured("AUTH_USER_MODEL must be of the form 'app_label.model_name'") user_model = get_model(app_label, model_name) if user_model is None: raise ImproperlyConfigured("AUTH_USER_MODEL refers to model '%s' that has not been installed" % settings.AUTH_USER_MODEL) return user_model (in file: django/contrib/auth/__init__.py) However, mine had been accessible via 'from core.models.user import AppUser'. I had to flatten my project structure so I had an app called "app", all my models in a file called "models.py", and then in settings.py I was able to set my AUTH_USER_MODEL to 'app.AppUser'. **The weird part about this: In many other situations, I had been able to log in via the API, even while my APP_USER_MODEL was set to 'core.models.user.AppUser'.** It was only when I tried to use SessionAuth that I had issues. Furthermore, there were recent changes to Neo4Django that also had to be upgraded, as they dealt directly with graph auth. Previously, backends.py hadn't be property importing and trying to use my custom model. Now it does. Specifically, this file: <https://github.com/scholrly/neo4django/blob/9058c0b6f4eb9d23c2a87044f0661f8178b80b12/neo4django/graph_auth/backends.py>
how to get tbody from table from python beautiful soup ? Question: I'm trying to scrap Year & Winners ( first & second columns ) from "List of finals matches" table (second table) from <http://en.wikipedia.org/wiki/List_of_FIFA_World_Cup_finals>: I'm using the code below: import urllib2 from BeautifulSoup import BeautifulSoup url = "http://www.samhsa.gov/data/NSDUH/2k10State/NSDUHsae2010/NSDUHsaeAppC2010.htm" soup = BeautifulSoup(urllib2.urlopen(url).read()) soup.findAll('table')[0].tbody.findAll('tr') for row in soup.findAll('table')[0].tbody.findAll('tr'): first_column = row.findAll('th')[0].contents third_column = row.findAll('td')[2].contents print first_column, third_column With the above code, I was able to get first & thrid column just fine. But when I use the same code with `http://en.wikipedia.org/wiki/List_of_FIFA_World_Cup_finals`, It could not find tbody as its element, but I can see the tbody when I inspect the element. url = "http://en.wikipedia.org/wiki/List_of_FIFA_World_Cup_finals" soup = BeautifulSoup(urllib2.urlopen(url).read()) print soup.findAll('table')[2] soup.findAll('table')[2].tbody.findAll('tr') for row in soup.findAll('table')[0].tbody.findAll('tr'): first_column = row.findAll('th')[0].contents third_column = row.findAll('td')[2].contents print first_column, third_column Here's what I got from comment error: ' --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-150-fedd08c6da16> in <module>() 7 # print soup.findAll('table')[2] 8 ----> 9 soup.findAll('table')[2].tbody.findAll('tr') 10 for row in soup.findAll('table')[0].tbody.findAll('tr'): 11 first_column = row.findAll('th')[0].contents AttributeError: 'NoneType' object has no attribute 'findAll' ' Answer: If you are inspecting through the inspect tool in the browser it will insert the `tbody` tags. The source code, may, or may not contain them. I suggest looking at the source view if you really want to know. Either way, you do not need to traverse to the tbody, simply: `soup.findAll('table')[0].findAll('tr')` should work.
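For the year/winner extraction itself, a sketch along the following lines should work once the right table is located; the table index and the assumption that year and winner are the first two cells of each row are guesses that need checking against the page's actual markup:

    import urllib2
    from BeautifulSoup import BeautifulSoup

    url = "http://en.wikipedia.org/wiki/List_of_FIFA_World_Cup_finals"
    soup = BeautifulSoup(urllib2.urlopen(url).read())

    table = soup.findAll('table')[1]              # assumed index of "List of finals matches"
    for row in table.findAll('tr')[1:]:           # skip the header row
        cells = row.findAll(['th', 'td'])
        if len(cells) >= 2:
            year = cells[0].find(text=True)
            winner = cells[1].find(text=True)
            print year, winner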
Use python slice objects when reading HDF5 file with h5py? Question: I am trying to use python slice objects to access data from an HDF5 file using the `h5py` module. I put together this example to show that it works with `numpy` arrays, but not with `h5py`. import h5py import numpy as np slice_obj = [slice(None,3,None), slice(2,5,None)] test_array = np.ones((3,5)) print test_array[0:3,2:5] print test_array[slice_obj] f = h5py.File("testing.hdf5","w") f['data'] = test_array f.close() f = h5py.File("testing.hdf5","r") test2 = f['data'][0:3,2:5] print test2 test2 = f['data'][slice_obj] print test2 f.close() This gives the following output: [[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] [[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] [[ 1. 1. 1.] [ 1. 1. 1.] [ 1. 1. 1.]] Traceback (most recent call last): File "slice.py", line 17, in <module> test2 = f['data'][slice_obj] File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/h5py/_hl/dataset.py", line 439, in __getitem__ self.id.read(mspace, fspace, arr, mtype) File "h5d.pyx", line 179, in h5py.h5d.DatasetID.read (h5py/h5d.c:2479) File "_proxy.pyx", line 118, in h5py._proxy.dset_rw (h5py/_proxy.c:1300) File "_proxy.pyx", line 84, in h5py._proxy.H5PY_H5Dread (h5py/_proxy.c:1051) IOError: can't read data (Dataset: Read failed) Does anyone know if this is just not possible with `h5py`? If it is not, is there an alternative way to slice in `h5py`, using objects or variables, instead of explicitly typing the slice like `f['data'][0:3,2:5]` as in my example? Answer: Playing around with your example: test2 = f['data'] print test2 print test2.shape print test2[0:3,2:5] print test2[slice(None,3,None),slice(2,5,None)] # ok print test2[slice_obj[0],slice_obj[1]] # ok print test2[tuple(slice_obj)] # ok print test2[[slice(None,3,None),slice(2,5,None)]] # fail print f['data'][tuple(slice_obj)] 3 ok So it looks like `h5py` arrays can use slices, but cannot split a list into its elements. But it does take a tuple. My guess is that there is minor difference in how `getitem` is implemented. You are doing advanced indexing. `numpy` doc says: > Advanced indexing is triggered when the selection object, obj,... a tuple > with at least one sequence object.... when the selection object is not a > tuple, it will be referred to as if it had been promoted to a 1-tuple, which > will be called the selection tuple. `h5py` may not be doing this promoting to tuple. Otherwise it appears to do advance indexing just fine.
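So in practice the workaround is just a tuple conversion; a tiny helper makes that explicit (reusing the `testing.hdf5` file and `slice_obj` list from the example above):

    def read_slices(dataset, slice_list):
        """Index an h5py dataset with a list of slice objects."""
        return dataset[tuple(slice_list)]

    f = h5py.File("testing.hdf5", "r")
    print read_slices(f['data'], slice_obj)   # same 3x3 block of ones
    f.close()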
opencv: image with grid and HIGHGUI error Question: Hi I'm new to opencv(version 2.4.7) and using it in python 2.7.4. I always get this error > HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP whenever I use the command cam = cv2.VideoCapture(cam_id) The code works fine otherwise even with the error. I'm trying to use [this wireless camera](http://www.pollin.de/shop/dt/MTU4OTE0OTk-/Haustechnik/Sicherheitstechnik/Kameras/Zusatzkamera_Kanal_4.html) and it shows an image which has a magenta and green colored grid structure. My question is why am I getting the error and this weird image. The code gives nice image on other system also on my system itself. gstreamer-properties also have clear picture. The code: from cv2 import cv import cv2 import sys def main(): cam_id = 0 # parameter for i, arg in enumerate( sys.argv ): if i == 0: continue else: cam_id = arg cam = cv2.VideoCapture(cam_id) cv2.namedWindow("window", cv.CV_WINDOW_AUTOSIZE) running = True while running: try: flag, img = cam.read() if flag: cv2.imshow("window", img) cv2.waitKey(30) except KeyboardInterrupt: running = False cv2.destroyWindow("window") main() Answer: Sorry to update so late, I had figured out solution of the issue long ago but forgot to answer it here. It required loading a library before running the code. Use of following commands should do the trick. For 32bit system: > `$ LD_PRELOAD=/usr/lib/i386-linux-gnu/libv4l/v4l2convert.so python > filename.py` For 64bit system: > `$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l2convert.so python > filename.py` If this doesn't work then try locating the file v4l2convert.so by using command, > `$ locate v4l2convert.so` As the output you'll see different paths, now try LD_PRELOAD with different paths.
Python - Generate a list of IP addresses from user input Question: I am trying to create a script that generates a list of IP addresses based on a users input for a start and end IP range. For example, they could enter 192.168.1.25 & 192.168.1.50. The list would then be fed into scapy to test for open ports. I think I have the list generated, but I am stuck on getting the individual IP's out and into the rest of my code. I think I am trying to use the whole list vs. an item in the list. If there is a better way of doing this, that is fine. I am doing this mainly to improve my understanding of Python. Thanks! from scapy.all import * ip_start = raw_input('Please provide the starting IP address for your scan --> ') start_list = ip_start.split(".") ip_end = raw_input('Please provide the ending IP address for your scan --> ') end_list = ip_end.split(".") top = int(start_list[3]) bot = int(end_list[3]) octet_range = range(top,bot) #print octet_range for i in octet_range: #new_ip_start = ip_start.replace(str(top),str(i)) start_list[3] = i #print top #print i print type(start_list) src_port = RandShort() dst_port = 80 scan = sr1(IP(dst=str(start_list))/TCP(sport=src_port,dport=dst_port,flags="S"),timeout=10) Answer: It'd be easier to use a format like `nmap`'s: 192.168.1.1-255 As now, you can do: octets = '192.168.1.1-255'.split('.') parsed_ranges = [map(int, octet.split('-')) for octet in octets] `parsed_ranges` will look like `[[192], [168], [1], [1, 255]]`. From there, generating the addresses is simple with `itertools`: import itertools ranges = [range(r[0], r[1] + 1) if len(r) == 2 else r for r in parsed_ranges] addresses = itertools.product(*ranges) Here's a simple implementation: import itertools def ip_range(input_string): octets = input_string.split('.') chunks = [map(int, octet.split('-')) for octet in octets] ranges = [range(c[0], c[1] + 1) if len(c) == 2 else c for c in chunks] for address in itertools.product(*ranges): yield '.'.join(map(str, address)) And the result: >>> for address in ip_range('192.168.1-2.1-12'): print(address) 192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.4 192.168.1.5 192.168.1.6 192.168.1.7 192.168.1.8 192.168.1.9 192.168.1.10 192.168.1.11 192.168.1.12 192.168.2.1 192.168.2.2 192.168.2.3 192.168.2.4 192.168.2.5 192.168.2.6 192.168.2.7 192.168.2.8 192.168.2.9 192.168.2.10 192.168.2.11 192.168.2.12
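Plugging that generator into the scan from the question is then straightforward — a sketch reusing the original scapy call (so it still needs to run with the appropriate privileges), with the range written in the nmap-like form the parser above expects:

    from scapy.all import IP, TCP, sr1, RandShort

    src_port = RandShort()
    for address in ip_range('192.168.1.25-50'):
        reply = sr1(IP(dst=address)/TCP(sport=src_port, dport=80, flags="S"), timeout=10)
        print address, reply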
cElementTree.interparse() doesn't accept custom parser Question: I parse XML file on Python with ElementTree. I found out that C implementation of cElementTree work very fast comparing to regular one. But I have also discovered construction: xml.etree.cElementTree.iterparse(filename, parser=MyCystomParser()) wont work. You will see something like: __init__() got an unexpected keyword argument 'parser' Meanwhile same construction with 'xml.etree.ElementTree.iterparse' does work. I use custom parser to keep comments while parsing XML file (default parser ignores/removes it). Does anyone know why in C implementation it doesn't work? The 'parser' argument was already in ElementTree when cElementTree released. Answer: > Does anyone know why in C implementation it doesn't work? Well, yeah, because it's [_documented_](http://docs.python.org/2.7/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse) to not work: > _parser_ is not supported by `cElementTree`. But why didn't they make it work? The version of ElementTree that was incorporated into Python 2.5 did not have a `parser` argument on `iterparse`. This feature was only added in Python 3.2. It was then backported to Python 2.7.* (Note that it's not there in [2.6](http://docs.python.org/2.6/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse).) Python 3.x does not have `cElementTree`—instead, it just has a single `ElementTree` implementation, with C accelerator code that it uses wherever possible. So, it would have been much more work to backport the new feature to `cElementTree` than to `ElementTree`. And presumably it wasn't important enough for anyone to bother doing. Also, note that ElementTree is developed and maintained outside of Python's stdlib, by Frederik Lundh, [here](http://effbot.org/zone/element-index.htm). I believe Gregory P. Smith drives the integration-into-the-stdlib work, but I could be wrong. So, you could ask either of them, or ask on the python-dev list, if you want any more details. But I'm pretty sure the answer will just be "it wasn't important enough to bother doing". * * * * Technically, it was added in ElementTree 1.3, the version incorporated into Python 3.2 and 2.7. See the What's New docs for [2.7](http://docs.python.org/3.3/whatsnew/2.7.html#updated-module-elementtree-1-3) and [3.2](http://docs.python.org/3.3/whatsnew/3.2.html#elementtree).
Scipy.cluster.vq.kmeans "list has no attribute shape" Question: So this is a really weird problem I've been getting. I'm basically trying to create a practice codebook which uses SIFT features of images that are clustered by the kmeans algorithm in Python. However whenever I run the code I get the following error Traceback (most recent call last): File "C:\Users\Administrator\Desktop\Python\assignment2\SIFT_Dectection.py", line 34, in <module> codebook, dis = cluster.vq.kmeans(codebook_construction(files[:20]),3) File "C:\Python27\lib\site-packages\scipy\cluster\vq.py", line 513, in kmeans No = obs.shape[0] AttributeError: 'list' object has no attribute 'shape' I assume that this is an error within the vq script for the Scipy library. However, I have other friends who are working on this as well and I am using the exact same code as them with the scipy library but I'm still getting this problem. I've also tried to completely uninstall Python reinstalling everything. I'm running the thing on Windows 7 btw. The code I'm using looks something like this: import cv2 import glob from scipy import cluster files = glob.glob('101_ObjectCategories/*/*.jpg') def codebook_construction(im): codebook = [] for image in im: img = cv2.imread(image) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) sift = cv2.SIFT() kp, desc = sift.detectAndCompute(gray, None) if codebook == []: codebook = desc else: codebook = np.vstack((codebook, desc)) return codebook codebook, dis = cluster.vq.kmeans(codebook_construction(files[:20]),3) The glob function there calls for a library of images I've downloaded from Caltech. I've searched high and low for an answer but it seems that no one has been having similar problems. Hopefully I can get some guidance here Answer: The issue looks to be that `kmeans` is expecting an array, and you're feeding it a list. Try changing the last line of your `codebook_construction()` function to: return scipy.array(codebook)
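An equivalent way to apply that fix without touching the function is to convert at the call site, so kmeans always receives a floating-point ndarray:

    import numpy as np
    from scipy.cluster import vq

    obs = np.asarray(codebook_construction(files[:20]), dtype=float)
    codebook, dis = vq.kmeans(obs, 3)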
reading data from arduino with python Question: im trying to do something which arduino sends bunch of data frequently, and my objective is: 1. every 100 data, make a new file. (lets call it a1, a2, ...) 2. in one generic file, take the average of each of these a files and write it inside of that file as a new line for experiment i coded my arduino like that: void setup(){ Serial.begin(9600); } void loop(){ Serial.println(random(100,500)); delay(20); } and here is the python code: import serial, struct initialfreq = 0 a = 0 interval = 0 fileName = 'general_list' general_list = open(fileName, 'wb') ser = serial.Serial(port = 'COM3', baudrate = 9600) def mean(numberList): return sum(numberList) / len(numberList) while(1): for i in '100' : temparray=[] fileName = 'interval' + str(initialfreq) + '.data' temp_file = open(fileName, 'wb') readoff = ser.readline() temparray.append(readoff) temp_file.write(readoff) ## temp_file.flush() print("bitti") general_list.write(str(interval)+" "+str(mean(temparray))) general_list.write(str(mean(temparray))) initialfreq= initialfreq + 1 a=0` my problem is, 1. for loop is not working properly, even when i sad 100, its not taking 100 values. 2. arduino sending codes with \n. i cant see them in files but in temparray i see that there is \n 's so its not calculating the average. thanks a lot guys. Answer: Here is the solution for your first problem. 1) for i in range(100):
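Putting both fixes together — an integer loop instead of iterating over the string '100', and stripping/converting each reading before averaging — a rough, untested sketch (no hardware here to verify against; file names and formats are kept close to the original):

    import serial

    ser = serial.Serial(port='COM3', baudrate=9600)
    general_list = open('general_list', 'w')

    interval = 0
    while True:
        values = []
        temp_file = open('interval%d.data' % interval, 'w')
        for i in range(100):                  # 100 readings, not the characters of '100'
            line = ser.readline().strip()     # drop the trailing '\r\n' the Arduino sends
            temp_file.write(line + '\n')
            values.append(float(line))        # convert so the average is numeric
        temp_file.close()
        general_list.write('%d %f\n' % (interval, sum(values) / len(values)))
        general_list.flush()
        interval += 1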
Python turtle classes and methods: TypeError: 'method' takes exactly 2 arguments (1 given)? Question: I'm writing a word prediction program using turtle as a GUI and I can't get the code to work. My problem concerns the WordList class and the readlist method in that class (near the end of the code). Here is the error message I get: Traceback (most recent call last): File "C:\Python32\Projects\Practice 3.py", line 118, in <module> main() File "C:\Python32\Projects\Practice 3.py", line 115, in main wordlist = WordList(alpha,'greatexpectationschapter1.txt') File "C:\Python32\Projects\Practice 3.py", line 89, in __init__ self.list = self.stripchar() TypeError: stripchar() takes exactly 2 arguments (1 given) and here is the code itself: import turtle as trt class Key(object): def __init__(self,letter,left,right,top,bottom): self.letter = letter self.left = left self.right = right self.top = top self.bottom = bottom class Board(object): def __init__(self): self.newline1 = [] self.newline2 = [] self.newline3 = [] self.newline4 = [] def makeboard(self): line1 = ['1','2','3','4','5','6','7','8','9','0'] line2 = ['q','w','e','r','t','y','u','i','o','p'] line3 = ['a','s','d','f','g','h','j','k','l'] line4 = ['z','x','c','v','b','n','m',"'","_"] for i in range(len(line1)): self.newline1.append(Key(line1[i],0+(40*i),10+(40*i),50,0)) for i in range(len(line2)): self.newline2.append(Key(line2[i],0+(40*i),10+(40*i),0,-50)) for i in range(len(line3)): self.newline3.append(Key(line3[i],0+(40*i),10+(40*i),-50,-100)) for i in range(len(line4)): self.newline4.append(Key(line4[i],0+(40*i),10+(40*i),-100,-150)) def drawboard(self): trt.penup() for i in self.newline1: trt.goto(i.left,i.bottom) trt.write(i.letter, font = ("arial",25)) for i in self.newline2: trt.goto(i.left,i.bottom) trt.write(i.letter, font = ("arial",25)) for i in self.newline3: trt.goto(i.left,i.bottom) trt.write(i.letter, font = ("arial",25)) for i in self.newline4: trt.goto(i.left,i.bottom) trt.write(i.letter, font = ("arial",25)) def getletter(self): count = 0 if self.newline1[0].bottom < self.y < self.newline1[0].top: for i in range(len(self.newline1)): if self.newline1[i].left < self.x < self.newline1[i].right: count+=1 return(self.newline1[i].letter) if self.newline2[0].bottom < self.y < self.newline2[0].top: for i in range(len(self.newline2)): if self.newline2[i].left < self.x < self.newline2[i].right: count+=1 return(self.newline2[i].letter) if self.newline3[0].bottom < self.y < self.newline3[0].top: for i in range(len(self.newline3)): if self.newline3[i].left < self.x < self.newline3[i].right: count+=1 return(self.newline3[i].letter) if self.newline4[0].bottom < self.y < self.newline4[0].top: for i in range(len(self.newline3)): if self.newline4[i].left < self.x < self.newline4[i].right: count+=1 return(self.newline4[i].letter) class Word(object): def __init__(self,word,ct): self.word = word self.ct = ct self.x = 0 self.y = 0 def __str__(self): return(self.word+": "+str(self.ct)) class WordPredict(object): def __init__(self,board,wordlist): self.board = board self.wordlist = wordlist self.currentword = "" self.predictions = [] self.sentence = "" self.enterword() def enterword(self): trt.onscreenclick(self.findletter) def findletter(self,x,y): print(x) print(y) class WordList(object): def __init__(self,alpha,doc): self.alpha = alpha self.list = self.readlist(doc) self.list = self.stripchar() self.wordlist = self.makewordlist() def stripchar(self,ls): newlist = [] for x in ls: z = "" for y in x: if y.lower() in self.alpha: if y != "": z 
+= y.lower() elif len(z) > 0: newlist.append(z) z = "" return(newlist) def readlist(self,doc): f = open(doc,'r') ls = [] for line in f: ls.append(line.strip()) f.close() ls = self.stripchar(ls) return(ls) def main(): alpha = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'] board1 = Board() board1.makeboard() wordlist = WordList(alpha,'greatexpectationschapter1.txt') board1.drawboard() word = WordPredict(board1,wordlist) main() Is there anything I can do to fix it? Thank you for your help. Answer: In this line self.list = self.stripchar() you have to explicitly tell which string has to be stripped. It would be most likely self.list = self.stripchar(self.list) Those two lines can be simply written like this self.list = self.stripchar(self.readlist(doc))
How can I best isolate 2 different unlabeled pieces of html using beautiful soup to be printed to a CSV? Question: To preface, I'm a python beginner and this is my first time using BeautifulSoup. Any input is greatly appreciated. I'm attempting to scrape all the company names and email addresses from [this site](http://www.indiainfoline.com/Markets/Company/A.aspx). There are 3 layers of links to crawl through (Alphabetized pagination list -> Company list by letter -> Company detail page) and I'd subsequently print them to a csv. So far, I've been able to isolate the alphabetized list of links with the code below, but I'm stuck when attempting to isolate the different company pages and then extracting the name/email from unlabeled html. import re import urllib2 from bs4 import BeautifulSoup page = urllib2.urlopen('http://www.indiainfoline.com/Markets/Company/A.aspx').read() soup = BeautifulSoup(page) soup.prettify() pattern = re.compile(r'^\/Markets\/Company\/\D\.aspx$') all_links = [] navigation_links = [] root = "http://www.indiainfoline.com/" # Finding all links for anchor in soup.findAll('a', href=True): all_links.append(anchor['href']) # Isolate links matching regex for link in all_links: if re.match(pattern, link): navigation_links.append(root + re.match(pattern, link).group(0)) navigation_links = list(set(navigation_links)) company_pages = [] for page in navigation_links: for anchor in soup.findAll('table', id='AlphaQuotes1_Rep_quote') [0].findAll('a',href=True): company_pages.append(root + anchor['href']) Answer: By pieces. Getting the links to each individual company is easy: from bs4 import BeautifulSoup import requests html = requests.get('http://www.indiainfoline.com/Markets/Company/A.aspx').text bs = BeautifulSoup(html) # find the links to companies company_menu = bs.find("div",{'style':'padding-left:5px'}) # print all companies links companies = company_menu.find_all('a') for company in companies: print company['href'] Second, get the companies names: for company in companies: print company.getText().strip() Third, emails is a little more complicated, but you can use regex here, so in a independent company page, do the following: import re # example company page html = requests.get('http://www.indiainfoline.com/Markets/Company/Adani-Power-Ltd/533096').text EMAIL_REGEX = re.compile("mailto:([A-Za-z0-9.\-+]+@[A-Za-z0-9_\-]+[.][a-zA-Z]{2,4})") re.findall(EMAIL_REGEX, html) # and there you got a list of found emails ... Cheers,
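To get from there to the CSV mentioned in the question, one rough sketch — it reuses companies, requests, re and EMAIL_REGEX from the snippets above; the URL join and the two-column layout are assumptions:

    import csv

    with open('companies.csv', 'wb') as out:        # 'wb' for the csv module on Python 2
        writer = csv.writer(out)
        writer.writerow(['name', 'emails'])
        for company in companies:
            name = company.getText().strip()
            page = requests.get('http://www.indiainfoline.com' + company['href']).text
            emails = re.findall(EMAIL_REGEX, page)
            writer.writerow([name.encode('utf-8'), ';'.join(emails)])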
Trying to parse Word docx file as a zip document using Python's xml.elementtree Question: I'm trying to parse a Windows docx file as a zip file using Python's xml.elementtree module. I saved the docx file as a zip. Below is what the document looks like: <?xml version="1.0" encoding="UTF-8" standalone="true"?> <?mso-application progid="Word.Document"?> -<pkg:package xmlns:pkg="http://schemas.microsoft.com/office/2006/xmlPackage"> -<pkg:part pkg:padding="512" pkg:contentType="application/vnd.openxmlformats- package.relationships+xml" pkg:name="/_rels/.rels"> +<pkg:xmlData> </pkg:part> +<pkg:part pkg:padding="256" pkg:contentType="application/vnd.openxmlformats-package.relationships+xml" pkg:name="/word/_rels/document.xml.rels"> -<pkg:part pkg:contentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml" pkg:name="/word/document.xml"> -<pkg:xmlData> -<w:document xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:mv="urn:schemas-microsoft-com:mac:vml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:mo="http://schemas.microsoft.com/office/mac/office/2008/main" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" mc:Ignorable="w14 wp14"> -<w:body> -<w:p w:rsidP="00E65A71" w:rsidRDefault="00E65A71" w:rsidR="00E65A71"> -<w:r> <w:t>Gloss:</w:t> </w:r> -<w:r> <w:tab/> </w:r> -<w:r w:rsidRPr="00EC6528"> -<w:rPr> <w:noProof/> </w:rPr> <w:t>the door</w:t> </w:r> </w:p> -<w:p w:rsidP="00E65A71" w:rsidRDefault="00E65A71" w:rsidR="00E65A71"> -<w:r> <w:t xml:space="preserve">Base: </w:t> </w:r> -<w:r> <w:tab/> </w:r> -<w:r w:rsidRPr="00EC6528"> -<w:rPr> <w:noProof/> </w:rPr> <w:t>words</w:t> </w:r> -<w:r> As you can see I've minimized a few of the elements to save space. I'm interested in the stuff in the <w:document><w:body> elements specifically: <w:r><w:t> that's where the data is that I want to parse. I can't seem to get past the first element. Below is what's tried to get at that stuff: import xml.etree.ElementTree as ET tree = ET.parse('document.xml') body = tree.getroot().findall("w") #body = tree.getroot().findall(w:t) #body = tree.getroot() and also: for child in root: print child.tag, child.attrib I've tried that just to see if I could see any of the elements I could then drill into but that returns nothing. I've also tried other code but I can't seem to get to the stuff I want. I've programmed a lot in Python put never used this module to parse XML. I'm using VS studio 2012 with pytools and when I set a breakpoint and look at the "root" structure I can't seem to drill into the element I want to get. I can't seem to navigate past the "pkg:package" stuff. 
My end goal is to set up a for loop to work through the "" stuff that will be repeated throughout the document. I've been researching this for a little while and trying to work through a few of the tutorials so any help is greatly appreciated! Thanks. Answer: In Open Office Xml (which is the standard that Microsoft uses for its newer Office software), the letter in front of the colon in the tag is a prefix and requires a particular namespace mapping to be processed correctly. For instance, the tag <w:t> actually requires you to search for the tag string "{<http://schemas.openxmlformats.org/wordprocessingml/2006/main>}t". The prefix/namespace is surrounded by curly brackets and the actual tag name follows at the end. Fortunately, most of what you're probably looking for uses the namespace that I mentioned above. Here's some sample code that should get you started in the right direction: import xml.etree.ElementTree as ET # I find that using a dictionary to map prefixes to namespaces keeps # things easier to understand. You can also use the namespaces directly # though if you prefer NAMESPACE_PREFIXES = { 'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main' } tree = ET.parse('document.xml') root = tree.getroot() text_elements = [element for element in root.iter() if element.tag == '{' + NAMESPACE_PREFIXES['w'] + '}t'] # Equivalent to: # text_elements = [element for element in root.iter() if element.tag == # '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}t'] for text_element in text_elements: if text_element.text == 'Hello world!': text_element.text = 'Goodbye world!' [Here](http://www.schemacentral.com/sc/ooxml/) are some additional namespaces for OOXML in case you need them as well.
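If manually filtering root.iter() feels clumsy, findall() with the fully qualified (Clark-notation) tag does the same search in one call — a small sketch built on the same root as above:

    W_NS = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'

    for t in root.findall('.//' + W_NS + 't'):
        print t.text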
wxpython with matplotlib doesn't initialize Question: I tried to use `matplotlib` in `wxpython`, but there is some problem.

    import wx
    from matplotlib.figure import Figure
    import numpy as np
    from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas

    class MplCanvasFrame(wx.Frame):
        def _init_(self, parent):
            wx.Frame._init_(self, parent, size=(600, 400), title='Matplotlib Figure with Navigation Toolbar')
            self.figure = Figure()
            self.axes = self.figure.add_subplot(111)
            x = np.arange(0, 6, .01)
            y = np.sin(x**2)*np.exp(-x)
            self.axes.plot(x, y)
            self.canvas = FigureCanvas(self, -1, self.figure)

    app = wx.App(redirect = False)
    frame = MplCanvasFrame(None)
    frame.Show()
    app.MainLoop()

It doesn't draw the curve at all, but when I try this:

    import wx
    from matplotlib.figure import Figure
    import numpy as np
    from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas

    app = wx.App()
    frame = wx.Frame(None, title = 'dasf', size = (600, 400))
    figure = Figure()
    axes = figure.add_subplot(111)
    x = np.arange(0, 6, .01)
    y = np.sin(x**2)*np.exp(-x)
    axes.plot(x, y)
    canvas = FigureCanvas(frame, -1, figure)
    frame.Show()
    app.MainLoop()

the curve is drawn. Why? Answer: I believe the problem is the spelling of `__init__`: there should be two underscores on each side (note that this occurs in two places in the code).
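With the spelling corrected in both places, the first version of the code would look like this (otherwise unchanged):

    import wx
    from matplotlib.figure import Figure
    import numpy as np
    from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas

    class MplCanvasFrame(wx.Frame):
        def __init__(self, parent):   # two underscores on each side
            wx.Frame.__init__(self, parent, size=(600, 400),
                              title='Matplotlib Figure with Navigation Toolbar')
            self.figure = Figure()
            self.axes = self.figure.add_subplot(111)
            x = np.arange(0, 6, .01)
            y = np.sin(x**2) * np.exp(-x)
            self.axes.plot(x, y)
            self.canvas = FigureCanvas(self, -1, self.figure)

    app = wx.App(redirect=False)
    frame = MplCanvasFrame(None)
    frame.Show()
    app.MainLoop()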
Downloading embedded Iframe videos from vimeo using python Question: I have been looking all over and I see how to download vimeo videos using python. I so far have this code.I can get to the parent page but I cannot do anything to hit that iframe. I was thinking the best way to do this would be login and hit the iframe and download the video from there but I am missing something. do any of you have any ideas? let me know if you need more info and as always thank you for your time. import spynner import os, sys, urllib os.system("dir") browser = spynner.Browser() #browser.show() url = 'https://somelink.php' browser.load("https://somelink2.php") browser.wk_fill("input[name=log]", "loginname") browser.wk_fill("input[name=pwd]", "password") browser.click("#wp-submit") print browser.url, len(browser.html) browser.load("http://somelink3-00000333/") browser.click("//player.vimeo.com/video/747474749") print browser.html Here is the embedded video that I would like to download. <iframe src="//player.vimeo.com/video/747474749" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe> Answer: The Site allowed for Javascript to be run from the client side. so in short a simply running javascript to access the link is enough. javascript:window.location.href="%s"; '%uls is really all that was needed to have this happen. I hope it helps others in the future and perhaps there is a better way to do this please let me know. def getvideourl(htmldoc): downloadurls = re.findall("//player.+video.\d+", htmldoc) for uls in downloadurls: uls.encode('ascii','ignore') javasinject = 'javascript:window.location.href="%s"; '%uls return javasinject def jsinject(link): str(link) browser.runjs(link) jsinject(str(getvideourl(browser.html))) browser._wait_load()
Inconsistent lxml.etree.xpath element counting between Mac OS and Linux hosts Question: This problem has puzzled me for a while now. I am trying to scrape tables from financial websites. For example, from barchart.com ( finviz.com same problem ) import lxml.html as lh import lxml, urllib2 gList = ['gapup','gapdown'] count = {} doc = {} for g in gList: doc[g] = lh.parse(urllib2.urlopen('http://www.barchart.com/stocks/{0}.php'.format(g))) count[g] = len(doc[g].xpath('/html/body//table[4]//tr')) Basically, I am counting how many rows (stocks) there are in the table. On my macbook (Python 2.7.5+ lxml 3.2.4). Works perfect: In [14]: count Out[14]: {'gapdown': 101, 'gapup': 34} However, when I test the same code remotely on my dreamhost site. Won't work: In [7]: count Out[7]: {'gapdown': 0, 'gapup': 0} Initially, I thought it's because Python 2.6 on dreamhost, and 2.6 is known to have less xpath support. So I installed pyenv to use 2.7.5 today. Problem is still there. I tried to get around using this code (I call it xpath relay :) ): In [11]: len(doc['gapup'].xpath('/html/body//table')[3].xpath('//tr')) Out[11]: 43 As the number suggests (43 != 34), didn't work. Using `lh.tostring()` , I realize it basically picked up all tr-s in the html file. No relay. I am new to xpath. But why the same code won't work on same python 2.7 + lxml 3.2.4 setup on different machines? Thanks a lot! Answer: I was able to reproduce the issue on an Ubuntu 12.04 host with Python 2.7.3. For some reason, the table is the fifth table sometimes. Anyway, when parsing HTML documents, better do not rely on counting elements, but use `@id` attributes if given. Using the XPath expression `//table[@id="dt1"]//tr` I could get reproducible results, also on my Ubuntu host.
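If the quote table on the barchart pages really does carry id="dt1" (as the answer observed), the original loop only needs its XPath expression swapped — a sketch:

    for g in gList:
        doc[g] = lh.parse(urllib2.urlopen('http://www.barchart.com/stocks/{0}.php'.format(g)))
        count[g] = len(doc[g].xpath('//table[@id="dt1"]//tr'))   # id taken from the answer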
DRY code for Google Cloud Endpoints APIs Question: I want to avoid boilerplate code for creating Google Cloud Endpoints APIs for different models of my Google App Engine application. Suppose I have a `Post`, `User` and `Category` model. The data is stored in the datastore. I want to create a REST API with the resources `posts`, `users` and `categories`. I have written the following code for the `posts` resource: import endpoints from protorpc import messages from protorpc import message_types from protorpc import remote from blog.models import Post from cloud_endpoints import WEB_CLIENT_ID, ANDROID_CLIENT_ID, IOS_CLIENT_ID, ANDROID_AUDIENCE class PostMessage(messages.Message): id = messages.StringField(1) title = messages.StringField(2) body = messages.StringField(3) class PostMessageCollection(messages.Message): post_messages = messages.MessageField(PostMessage, 1, repeated=True) def post_to_message(post): return PostMessage( id=str(post.key()), title=post.title, body=post.body) ID_RESOURCE = endpoints.ResourceContainer( message_types.VoidMessage, id=messages.StringField(1, variant=messages.Variant.STRING)) PUT_RESOURCE = endpoints.ResourceContainer( PostMessage, id=messages.StringField(1, variant=messages.Variant.STRING)) POST_RESOURCE = endpoints.ResourceContainer(Post) @endpoints.api(name='posts', version='v1', allowed_client_ids=[WEB_CLIENT_ID, ANDROID_CLIENT_ID, IOS_CLIENT_ID], audiences=[ANDROID_AUDIENCE]) class PostsApi(remote.Service): """List""" @endpoints.method(message_types.VoidMessage, PostMessageCollection, path='/posts', http_method='GET', name='posts.listPosts') def list(self, unused_request): post_messages = [] for post in Post.all(): post_messages.append(post_to_message(post)) return PostCollection(post_messages=post_messages) """Get""" @endpoints.method(ID_RESOURCE, PostMessage, path='/posts/{id}', http_method='GET', name='posts.getPost') def get(self, request): try: return post_to_message(Post.get(request.id)) except (IndexError, TypeError): raise endpoints.NotFoundException('Post %s not found.' % (request.id,)) """Create""" @endpoints.method(POST_RESOURCE, message_types.VoidMessage, path='/posts', http_method='POST', name='posts.createPost') def create(self, request): post = Post(title=request.title, body=request.body)\ post.put() return message_types.VoidMessage() """Update""" @endpoints.method(PUT_RESOURCE, message_types.VoidMessage, path='/posts/{id}', http_method='POST', name='posts.updatePost') def update(self, request): try: post = Post.get(request.id) post.title = request.title post.body = request.body return message_types.VoidMessage() except (IndexError, TypeError): raise endpoints.NotFoundException('Post %s not found.' % (request.id,)) """Delete""" @endpoints.method(ID_RESOURCE, message_types.VoidMessage, path='/posts/{id}', http_method='DELETE', name='posts.deletePost') def delete(self, request): try: post = Post.get(request.id) post.delete() return message_types.VoidMessage() except (IndexError, TypeError): raise endpoints.NotFoundException('Post %s not found.' % (request.id,)) I could copy/paste this code and change "Post" to "Category" everywhere, and edit `PostMessage`, `PostMessageCollection` and `post_to_message`, but that seems bad practise. I would like not to repeat myself. Is it possible to create an abstract API class and make subclasses for `PostAPI`, `CategoryAPI` and `UserAPI`? 
Or is there a better way to parameterize `Post`, `PostMessage`, `PostMessageCollection`, `post_to_message` and the path to the resource ("/posts", "/categories" and "/users") so that I don't have to copy/paste the class for every resource? The classes would have the same methods with the same decorators, and I would like not to repeat that for every resource. I use Python 2.7. Answer: I also have stumbled upon the same problem, and unfortunately this is not possible with the `google cloud endpoints`. The method decorator needs the request description (`PostMessageCollection` here). The request description which subclasses the `message.Message` doesn't allow reuse through inheritance, so all the message classes have to be completely defined without any inheritance. You can however achieve this to some extent (though I haven't tested it, thought of it right now :) ) extent in the following way: # All the message and response definitions have to be here, complete. class PostMessage(messages.Message): id = messages.StringField(1) title = messages.StringField(2) body = messages.StringField(3) class PostMessageCollection(messages.Message): post_messages = messages.MessageField(PostMessage, 1, repeated=True) def post_to_message(post): return PostMessage( id=str(post.key()), title=post.title, body=post.body) ID_RESOURCE = endpoints.ResourceContainer( message_types.VoidMessage, id=messages.StringField(1, variant=messages.Variant.STRING)) PUT_RESOURCE = endpoints.ResourceContainer( PostMessage, id=messages.StringField(1, variant=messages.Variant.STRING)) POST_RESOURCE = endpoints.ResourceContainer(Post) # Now define all the 'Category' related messages here. @endpoints.api(name='posts_n_categories', # The name can be a common one. version='v1', allowed_client_ids=[WEB_CLIENT_ID, ANDROID_CLIENT_ID, IOS_CLIENT_ID], audiences=[ANDROID_AUDIENCE]) class BaseAPI(remote.Service): """List""" # Common defs go here. MessageCollection = messages.Message PATH = '/' NAME = '' @staticmethod def converter(x): raise NotImplemented iterator = [] collection = messages.Message @endpoints.method(message_types.VoidMessage, MessageCollection, path=PATH, http_method='GET', name=NAME) def list(self, unused_request): # Do the common work here. You can _messages = [] for post in self.__class__.iterator.all(): _messages.append(self.__class__.converter(post)) return self.__class__.collection(post_messages=_messages) @endpoints.api(name='posts', # The name can be different. version='v1', allowed_client_ids=[WEB_CLIENT_ID, ANDROID_CLIENT_ID, IOS_CLIENT_ID], audiences=[ANDROID_AUDIENCE]) class PostAPI(Base): # Post specific defs go here. MessageCollection = PostMessageCollection PATH = '/posts' NAME = 'posts.listPosts' converter = post_to_message iterator = Post collection = PostCollection # Define the category class here. Clearly, it doesn't save much time.
Ending the GTK+ main loop in an Python MDI application Question: I am trying to code an application that consists of various windows (e.g., generic message dialog, login dialog, main interface, etc.) and am having trouble getting the `gtk.main_quit` function to be called: either I get a complaint about the call being outside the main loop, or the function doesn't get called at all. I am a newbie to both Python and GTK+, but my best guess as to how to get this to work is to have a "root" window, which is just a placeholder that is never seen, but controls the application's GTK+ loop. My code, so far, is as follows: import pygtk pygtk.require("2.0") import gtk class App(gtk.Window): _exitStatus = 0 # Generic message box def msg(self, title, text, type = gtk.MESSAGE_INFO, buttons = gtk.BUTTONS_OK): # Must always have a button if buttons == gtk.BUTTONS_NONE: buttons = gtk.BUTTONS_OK dialog = gtk.MessageDialog(None, 0, type, buttons, title) dialog.set_title(title) dialog.set_geometry_hints(min_width = 300) dialog.set_resizable(False) dialog.set_deletable(False) dialog.set_position(gtk.WIN_POS_CENTER) dialog.set_modal(True) dialog.format_secondary_text(text) response = dialog.run() dialog.destroy() return response def nuke(self, widget, data): gtk.main_quit() exit(self._exitStatus) def __init__(self): super(App, self).__init__() self.connect('destroy', self.nuke) try: raise Exception() except: self.msg('OMFG!', 'WTF just happened!?', gtk.MESSAGE_ERROR, gtk.BUTTONS_CLOSE) self._exitStatus = 1 self.destroy() if self.msg('OK', 'Everything worked fine') == gtk.RESPONSE_OK: self.destroy() # Let's go! App() gtk.main() The `nuke` function never gets called, despite the explicit calls to `destroy`. * * * **DIFF** On @DonQuestion's advice: - self.destroy() + self.emit('destroy') - App() + app = App() This didn't solve the problem... * * * **UPDATE** Accepted @jku's answer, but also see my own answer for extra information... Answer: First, there is a bit of a test problem with the code: You call Gtk.main_quit() from the App initialization: this happens before main loop is even running so signals probably won't work. Second, you'll probably get a warning on destroy(): 'destroy' handler only takes two arguments (self plus one) but yours has three... Also with regards to your comment about control flow: You don't need a Window to get signals as they're a GObject feature. And for your testing needs you could write a App.test_except() function and use `glib.idle_add (self.test_except)` in the object initialization -- this way test_except() is called when main loop is running.
python imaging library: Can I simply fill my image with one color? Question: I googled, checked the documentation of PIL library and much more, but I couldn't find the answer to my simple question: how can I fill an existing image with a desired color? (I am using `from PIL import Image` and `from PIL import ImageDraw`) This command creates a new image filled with a desired color

    image = Image.new("RGB", (self.width, self.height), (200, 200, 200))

But I would like to reuse the same image without the need of calling "new" every time **Edit:** One possibility is, after `draw = ImageDraw.Draw(image)`, to use

    draw.rectangle([(0,0), image.size], fill=(200,200,200))

but I am astonished there is no simpler method to fill a whole image with one background color, like `setTo` for opencv. Answer: Have you tried:

    image.paste(color, box)

where `box` can be a 2-tuple giving the upper left corner, a 4-tuple defining the left, upper, right, and lower pixel coordinate, or None (same as (0, 0))
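Note that when the thing being pasted is a solid colour rather than an image, PIL generally needs a full 4-tuple box so it knows how large a region to fill — a small sketch of reusing the existing image in place:

    from PIL import Image

    image = Image.new("RGB", (640, 480), (255, 255, 255))
    image.paste((200, 200, 200), (0, 0) + image.size)   # fill the whole existing image in place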
Regex matching certain character in string pattern Question: I have a example of many of string pattern like this but I want to show some example. from: [name: Illianney Amada id: 674176087] from: [name: Natalia Morel-Gibbs id: 100003799207624] from: [name: Jules Kaneyge Pand id: 100000110811550] And, I would like to illustrate the parameter type like this: (Watch **String** and **SequenceOfNumber**) from: [name: String id: SequenceOfNumber] but actually, it was represented from this from:\t[name:\tString\nid:\tSequenceofNumber] So, I would like to replace the **"\n"** that is between **"String"** and **"id:"** with **",\t"** or tab character. The result should be like this from:\t[name:\tString,\tid:\tSequenceofNumber]\n from:\t[name:\tString,\tid:\tSequenceofNumber]\n from:\t[name:\tString,\tid:\tSequenceofNumber]\n Or in other way like this from: [name: String, id: SequenceOfNumber] from: [name: String, id: SequenceOfNumber] from: [name: String, id: SequenceOfNumber] Note: I implement the regex replacing with Python module **re** Answer: **Update:** import re fixed = re.sub(r"(\[name:.*?)\n", r"\1,\t", originalString, re.M) Results in: from: [name: Illianney Amada, id: 674176087] from: [name: Natalia Morel-Gibbs, id: 100003799207624] from: [name: Jules Kaneyge Pand, id: 100000110811550] Working example: <http://regex101.com/r/wN7aT0> * * * **Old:** If you've only got the one `\n` there, you could do: originalString = "from:\t[name:\tString\nid:\tSequenceofNumber]" fixedString = ",\t".join(originalString.split("\n")) This will split the string on `\n` and join it back together with `,\t`, resulting in: from:\t[name:\tString,\tid:\tSequenceofNumber] **Caveat:** in your original example, you didn't actually set a string in the variable. Are you perhaps opening this from a text file? If so, that changes the answer dramatically, because you might be looping one line at a time.
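One detail about the re.sub call in the first snippet above: the fourth positional argument of re.sub is count, not flags, so passing re.M there is silently treated as a limit of 8 replacements. It makes no difference for this pattern (there is no ^ or $ anchor), but if a flag is wanted it should be passed by keyword:

    fixed = re.sub(r"(\[name:.*?)\n", r"\1,\t", originalString, flags=re.M)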
Django Error loading MySqlDB module Question: I'm new with Django and I follow a tuto. The problem is that the tuto uses Sqlite but I want to use MySql server instead. I changed the parameters following documentation but I have the following error when I try to run the server. I already found some resolve but it didn't work... For your information, I installed MySql-Python and reinstall Django with PIP. Without any success. I hope you will be able to help me. Traceback : Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved. C:\Users\adescamp>cd C:\Users\adescamp\agregmail C:\Users\adescamp\agregmail>python manage.py runserver 8000 Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line utility.execute() File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Python27\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv self.execute(*args, **options.__dict__) File "C:\Python27\lib\site-packages\django\core\management\base.py", line 280, in execute translation.activate('en-us') File "C:\Python27\lib\site-packages\django\utils\translation\__init__.py", line 130, in activate return _trans.activate(language) File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 188, in activate _active.value = translation(language) File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 177, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "C:\Python27\lib\site-packages\django\utils\translation\trans_real.py", line 159, in _fetch app = import_module(appname) File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module __import__(name) File "C:\Python27\lib\site-packages\django\contrib\admin\__init__.py", line 6, in <module> from django.contrib.admin.sites import AdminSite, site File "C:\Python27\lib\site-packages\django\contrib\admin\sites.py", line 4, in <module> from django.contrib.admin.forms import AdminAuthenticationForm File "C:\Python27\lib\site-packages\django\contrib\admin\forms.py", line 6, in <module> from django.contrib.auth.forms import AuthenticationForm File "C:\Python27\lib\site-packages\django\contrib\auth\forms.py", line 17, in <module> from django.contrib.auth.models import User File "C:\Python27\lib\site-packages\django\contrib\auth\models.py", line 48, in <module> class Permission(models.Model): File "C:\Python27\lib\site-packages\django\db\models\base.py", line 96, in __new__ new_class.add_to_class('_meta', Options(meta, **kwargs)) File "C:\Python27\lib\site-packages\django\db\models\base.py", line 264, in add_to_class value.contribute_to_class(cls, name) File "C:\Python27\lib\site-packages\django\db\models\options.py", line 124, in contribute_to_class self.db_table = truncate_name(self.db_table, connection.ops.max_name_length()) File "C:\Python27\lib\site-packages\django\db\__init__.py", line 34, in __getattr__ return getattr(connections[DEFAULT_DB_ALIAS], item) File "C:\Python27\lib\site-packages\django\db\utils.py", line 198, in __getitem__ backend = load_backend(db['ENGINE']) File "C:\Python27\lib\site-packages\django\db\utils.py", line 113, in load_backend return import_module('%s.base' % backend_name) File 
"C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module __import__(name) File "C:\Python27\lib\site-packages\django\db\backends\mysql\base.py", line 17, in <module> raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb Thank you! Answer: I found the solution! I had to install mysql-python. Easy but: don't do that with PIP because it won't work. You have to install mysql-p
Multiply all numbers in a generated list by an increasing number to reach a value python Question: I want to multiply an integer with all the numbers in a list I generate until a value of 65 is reached. I am starting off at 2(first prime) and multiply up to 63(again all primes) until I reach 65. If a value is not reached with 2 I then want it to try with 3, and so on until the correct value us reached. I am doing this in python, which I am new too aswell so I apologise if this is basic. I then want to print out the numbers which multiplied together to give me that value, i.e. I know 5 and 13 would give me 65. Here is some of my code below: from __future__ import division import fractions ml = [] nl = [] p = 5 q = 17 d = 0 x = 2 y = 2 z = (p-1)*(q-1) print z n = p*q print n for x in range(z): if (fractions.gcd(x, z) == 1): ml.append(x) ##print ml s = 1 for x in ml: t = s * x if t == 65: print s print x break else: s = s + 1 Answer: You're incrementing s and moving to the next element in ml in each stage of the loop. So you only try 1 * ml[0], 2 * ml[1], etc. I think you want two nested for loops, so that you try every element of ml with every possible value of s. You can get this behaviour a bit more cleanly with itertools: for s,x in itertools.product(range(65),ml): t = s * x if t == 65: print s print x break
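If the end goal is really just to find which two entries of ml multiply to 65 (as in the 5 and 13 example), itertools can check every pair directly — a sketch, assuming ml has been built as in the question:

    from itertools import combinations

    factors = [(a, b) for a, b in combinations(ml, 2) if a * b == 65]
    print factors   # expected to include (5, 13)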
best way to implement Apriori in python pandas Question: What is the best way to implement the Apriori algorithm in pandas? So far I got stuck on transforming extracting out the patterns using for loops. Everything from the for loop onward does not work. Is there a vectorized way to do this in pandas? import pandas as pd import numpy as np trans=pd.read_table('output.txt', header=None,index_col=0) def apriori(trans, support=4): ts=pd.get_dummies(trans.unstack().dropna()).groupby(level=1).sum() #user input collen, rowlen =ts.shape #max length of items tssum=ts.sum(axis=1) maxlen=tssum.loc[tssum.idxmax()] items=list(ts.columns) results=[] #loop through items for c in range(1, maxlen): #generate patterns pattern=[] for n in len(pattern): #calculate support pattern=['supp']=pattern.sum/rowlen #filter by support level Condit=pattern['supp']> support pattern=pattern[Condit] results.append(pattern) return results results =apriori(trans) print results When I insert this with support 3 a b c d e 0 11 1 1 1 0 0 666 1 0 0 1 1 10101 0 1 1 1 0 1010 1 1 1 1 0 414147 0 1 1 0 0 10101 1 1 0 1 0 1242 0 0 0 1 1 101 1 1 1 1 0 411 0 0 1 1 1 444 1 1 1 0 0 it should output something like Pattern support a 6 b 7 c 7 d 7 e 3 a,b 5 a,c 4 a,d 4 Answer: Assuming I understand what you're after, maybe from itertools import combinations def get_support(df): pp = [] for cnum in range(1, len(df.columns)+1): for cols in combinations(df, cnum): s = df[list(cols)].all(axis=1).sum() pp.append([",".join(cols), s]) sdf = pd.DataFrame(pp, columns=["Pattern", "Support"]) return sdf would get you started: >>> s = get_support(df) >>> s[s.Support >= 3] Pattern Support 0 a 6 1 b 7 2 c 7 3 d 7 4 e 3 5 a,b 5 6 a,c 4 7 a,d 4 9 b,c 6 10 b,d 4 12 c,d 4 14 d,e 3 15 a,b,c 4 16 a,b,d 3 21 b,c,d 3 [15 rows x 2 columns]
python to node.js confusion Question: So I have this python code that I'm trying to convert to node.js, but I am not sure how. import urllib.request, re def getDef(word): link = "http://www.merriam-webster.com/dictionary/%s" % word data = urllib.request.urlopen(link).read().decode() try: return re.search("<p>: (.*?)</p><p>", data).group(1) except: return "No match" class newDefinition: def __init__(self, word): self.definition = getDef(word); >>> definition = newDefintion("color") >>> print(definition.definition) a quality such as red, blue, green, yellow, etc., that you see when you look at something In node.js however though it I can seem to return it like in python because of it's callback way of doing things, or at least I can't seem to return it which is why I'm asking how would I do the node.js equivalent or is their no equivalent? Here is what I have so far maybe you can spot what I'm doing wrong and how to fix it var urllib = require("urllib"); // installed with npm var getDef = function(word){ var link = "http://www.merriam-webster.com/dictionary/" + word; var urlData = urllib.request(link, {}, function(err, data, res){ var re = new RegExp("<p>: (.*?)</p><p>"); var results = data.toString(); var match = re.exec(results)[1]; return match; // Expected it to give urlData the definition }); return urlData; } var Definition = function(word){ this.definition = getDef(word); } definition = new Definition("color"); console.log(definition.definition); // this won't give the definition but the information of the urllib itself rather. So in general trying to figure out is how to use asynchronous code so I can return things that I need, but I am not use to this concept either so is there an equivalent to this in python? Also if you can point me to some good documentation on asynchronous code that would be great also. Answer: Since `return` will actually just exit your function instead of returning a value, you need to use a callback. It would look like this: var urllib = require("urllib"); var getDef = function(word, callback){ var link = 'http://www.merriam-webster.com/dictionary/' + word; urllib.request(link, {}, function(err, data, res) { var re = new RegExp('<p>: (.*?)</p><p>'); var results = data.toString(); var match = re.exec(results)[1]; callback(match); }); }; Then you would pass a callback while calling the function: getDef('color', function(definition) { console.log(definition); }); * * * **Edit:** Setting an object's property has the same idea. It might look like this instead: var Definition = function(word) { var self = this; getDef(world, function(definition, callback) { self.definition = definition; callback.call(self); }); }; And would be called like so: var definition = new Definition('color', function() { console.log(definition.definition); });
SQLite AUTOINCREMENT column counter behavior with ON CONFLICT IGNORE clause Question: When creating a small app using SQLite I've noticed a strange thing (for me) with autoincrement columns counter algorithm behavior. For instance, let's try to create database with the following schema CREATE TABLE numbers (num INTEGER UNIQUE ON CONFLICT IGNORE); and a small Python script import sqlite3 con = sqlite3.connect('db.sqlite3') cur = con.cursor() def values(): for i in xrange(1, 3): yield (i,) try: cur.executemany('INSERT INTO numbers (num) VALUES (?)', values()) except sqlite3.DatabaseError, err: print u'Error: ', err else: con.commit() print u'Number of added rows: %d' % cur.rowcount cur.close() con.close() Let's run script for three times. Last time with different values() output, xrange(3,5) for example. So, we get output Number of added rows: 2 Number of added rows: 0 Number of added rows: 2 Ok, then let's check our database and everything seems as it should $ sqlite3 db.sqlite3 'select rowid, * from numbers' 1|1 2|2 3|3 4|4 Then try to create database with adding an autoincrement alias to system rowid column. CREATE TABLE numbers (id INTEGER PRIMARY KEY AUTOINCREMENT, num INTEGER UNIQUE ON CONFLICT IGNORE); Then do the same thing as above and check the rows. $ sqlite3 db.sqlite3 'select rowid, * from numbers' 1|1|1 2|2|2 5|5|3 <- sqlite jumped over intermediate counter values for rowid and id column 6|6|4 SQLite saves intermediate counter values for rowid and alias autoincrement column when using IGNORE constraint voilation resolution algorithm for a table and skips them when assigning a new autoincrement value. Why? Answer: Because: 1. sqlite sees that a `ROWID` has not been specified and generates one. For regular `INTEGER PRIMARY KEY` cases it's basically `MAX(ROWID)+1`. For `INTEGER PRIMARY KEY AUTOINCREMENT` cases a new number that has never before been used is picked from the special `sqlite_sequence` table. This table holds the table `name` and `seq` value for autoincrement row ids. 2. An insert is attempted. The unique constraint is violated so the `ON CONFLICT IGNORE` resolution takes place. Effectively nothing is done and no error is raised. However, altering `sqlite_sequence` was still successful and will be committed once the transaction completes. Reference: <http://www.sqlite.org/autoinc.html>
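You can watch this happening by reading the sqlite_sequence table the answer describes, for example:

    import sqlite3

    con = sqlite3.connect('db.sqlite3')
    for name, seq in con.execute('SELECT name, seq FROM sqlite_sequence'):
        print name, seq   # 'numbers' with a seq that advanced even for the ignored inserts
    con.close()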
pyqt auto connect signal Question: I want use the autoconnection feature. I am using this example: <http://www.eurion.net/python- snippets/snippet/Connecting%20signals%20and%20slots.html> it works, but I want to create my own signals and own slots, the example using built in signals. for example, here are a custom signal with a custom slot, but don't works: import sys from PyQt4 import QtGui, QtCore class SignalsAndSlots(QtGui.QWidget): testSignal = QtCore.pyqtSignal(str,name='testSignal') def __init__(self): QtGui.QMainWindow.__init__(self) self.setObjectName('testObject') self.label = QtGui.QLabel(self) QtCore.QMetaObject.connectSlotsByName(self) self.emitSignal() def emitSignal(self): self.testSignal.emit('message') @QtCore.pyqtSlot(str,name='on_testObject_testSignal') def autoSlot(self,msg): self.label.setText(msg) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) gui = SignalsAndSlots() gui.show() app.exec_() Thanks a lot Answer: Ber is right. This what the [pyqt documentation](http://pyqt.sourceforge.net/Docs/PyQt4/qmetaobject.html#connectSlotsByName) says: **QMetaObject.connectSlotsByName** searches recursively for all **child objects** of the given object [...] Here is a simple example with custom signals : import sys from PyQt4 import QtGui, QtCore class CustomButton(QtGui.QPushButton): custom_clicked = QtCore.pyqtSignal(str, name='customClicked') def mousePressEvent(self, event): self.custom_clicked.emit("Clicked!") class SignalsAndSlots(QtGui.QWidget): def __init__(self): QtGui.QMainWindow.__init__(self) layout = QtGui.QHBoxLayout(self) self.custom_button = CustomButton("Press Me", self) self.custom_button.setObjectName('customButton') self.label = QtGui.QLabel("Nothing...", parent=self) layout.addWidget(self.custom_button) layout.addWidget(self.label) QtCore.QMetaObject.connectSlotsByName(self) @QtCore.pyqtSlot(str, name='on_customButton_customClicked') def autoSlot(self, msg): self.label.setText(msg) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) gui = SignalsAndSlots() gui.show() app.exec_() But I think you should consider not using the object names. New-style signal connection is way neater. Here is the same application : import sys from PyQt4 import QtGui, QtCore class CustomButton(QtGui.QPushButton): custom_clicked = QtCore.pyqtSignal(str) def mousePressEvent(self, event): self.custom_clicked.emit("Clicked!") class SignalsAndSlots(QtGui.QWidget): def __init__(self): QtGui.QMainWindow.__init__(self) layout = QtGui.QHBoxLayout(self) self.custom_button = CustomButton("Press Me", self) self.custom_button.setObjectName('customButton') self.label = QtGui.QLabel("Nothing...", parent=self) layout.addWidget(self.custom_button) layout.addWidget(self.label) self.custom_button.custom_clicked.connect(self.on_clicked) def on_clicked(self, msg): self.label.setText(msg) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) gui = SignalsAndSlots() gui.show() app.exec_()
passing array as command line argument to python script Question: I am calling a python script from a ruby program as: sku = ["VLJAI20225", "VJLS1234"] qty = ["3", "7"] system "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py #{sku} #{qtys}" But I'd like to access the array elements in the python script. int the python script: print sys.argv[1] #gives [VLJAI20225, expected ["VLJAI20225", "VJLS1234"] print sys.argv[2] #gives VJLS1234] expected ["3", "7"] I feel that the space between the array elements is treating the array elements as separate arguments. I may be wrong. How can I pass the array correctly. Thanks Answer: You need to find a suitable protocol to encode your data (your array, list, whatever) over the interface you've chosen (this much is true pretty general). In your case, you've chosen as interface the Unix process call mechanism which allows only passing of a list of strings during calling. This list also is rather limited (you cannot pass, say, a gigabyte that way), so you might want to consider passing data via a pipe between the two processes. Anyway, all you can do is pass bytes, so I propose to encode your data accordingly, e. g. as a JSON string. That means encode your Ruby array in JSON, transfer that JSON string, then decode on the Python side the received JSON string to get a proper Python list again. This only sketches the approach and reasoning behind it. Since I'm not fluent in Ruby, I will leave that part to you (see probably @apneadiving's answer on that), but on the Python side you can use import json myList = json.loads(s) to convert a JSON string `s` to a Python list `myList`.
Getting invalid json in RoR Question: I have an array sku = `["MAPNQ20673"]`. On converting this to JSON using sku.to_json I'm getting `"[\"MAPNQ20673\"]"`. This is invalid JSON. I need to pass this array to a python script:

    system "python2 /home/nish/stuff/repos/Untitled/voylla_staging_changes/app/models/ReviseItem.py #{sku.to_json} #{qtys.to_json}"

Then, I am using `json.loads(sys.argv[1])` to decode. But since the JSON is invalid, I am getting the following error on decoding the JSON in the python script:

    raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded

How can I convert the array to valid JSON? EDIT: It works fine for me when I call the python script from a standalone Ruby program. But when I try to do the same from the controller of an RoR app, it fails. Answer: This works perfectly for me with **Python 2.7**:

    >>> from json import dumps, loads
    >>> sku = ["MAPNQ20673"]
    >>> s = dumps(sku)
    >>> x = loads(s)
    >>> x == sku
    True
    >>> x
    [u'MAPNQ20673']

(The `\"` you see in `"[\"MAPNQ20673\"]"` is only Ruby's escaping when the string is inspected; the underlying string is valid JSON.)
Trouble opening page with Python requests Question: I am having some trouble opening a page with Python requests package. The page opens fine in a browser, but the program will just hang when attempting to get the site. import bs4, requests link = "http://s6.mediastreaming.it:8080/" r = requests.get(link) data=r.text soup = bs4.BeautifulSoup(data) soup.prettify() print soup Any help understanding why this site hangs would be greatly appreciated. The code works fine with <http://google.com> as the link. Edit: Added info to help. This is a site that is streaming music. I just want to scrape the part of the page that says what the current song is. That is all. But maybe the fact that the page is a music streaming site is causing the issue? I just want the text of the source. Nothing else. Edit 2: Tried the following to see if I was indeed getting a stream rather than the page I wanted. import bs4, requests link = "http://s6.mediastreaming.it:8080/" r = requests.get(link,stream=True) filename = "testfile.txt" with open(filename,'wb') as fd: for chunk in r.iter_content(100): fd.write(chunk) Here's what I got as the output: ICY 200 OK icy-notice1:<BR>This stream requires <a href="http://www.winamp.com/">Winamp</a><BR> icy-notice2:SHOUTcast Distributed Network Audio Server/Linux v1.9.8<BR> icy-name:Pig Radio - The Best Electronic & Indie Pop/Rock 24/7 icy-genre:Eclectic icy-url:http://www.pigradio.com content-type:audio/mpeg icy-pub:1 icy-br:128 &Oç)goiYQŠ < 6Ê !‡À¡ö³ ‡/OFÌ)8…¨ÐU!ðiÁP¡¢ãÅ.......... When I open the page in a browser and view source I get the following: <HTML><HEAD><meta http-equiv="Content-Language" content="en-us"><meta http-equiv="Content-Type" content="text/html; charset=windows-1252"><meta http-equiv="Pragma" content="no-cache"><meta http-equiv="Expires" content="Mon, 01 Jan 1990 12:00:00 GMT"><title>SHOUTcast Administrator</title><style type="text/css"><!--a:link {color: blue; font-family:Arial, Helvetica; font-size:9pt;}a:visited {color: blue; font-family:Arial, Helvetica; font-size:9pt;}a:hover {color: red; font-family:Arial, Helvetica; font-size:9pt; }.default {color: White; font-family:Arial, Helvetica; font-size:9pt; font-weight: normal}.ST {color: White; font-family:Arial, Helvetica; font-size:8pt; font-weight: normal}.logoText {color: red; font-family: Arial Black, Helvetica, sans-serif; font-size: 25pt; font-weight: normal; letter-spacing : -2.5px;}.flagText {color: blue; font-family: webdings; font-size: 36pt; font-weight: normal; }.ltv {color: blue; font-family: Arial, Helvetica, sans-serif; font-size: 9pt; font-weight: normal;}.tnl {color: black; font-family: Arial, Helvetica, sans-serif; font-size: 10pt; font-weight: bold; text-decoration: none;}--></style></HEAD><BODY topmargin=0 leftmargin=0 marginheight=0 marginwidth=0 bgcolor=#000000 text=#EEEEEE link=#001155 vlink=#001155 alink=#FF0000><font class=default><table width=100% border=0 cellpadding=0 cellspacing=0><tr><td height=50><font class=flagText>U</font><font class=logoText>&nbsp;SHOUTcast D.N.A.S. 
Status</font></td></tr><tr><td height=14 align=right><font class=ltv><a id=ltv href="http://www.shoutcast.com/">SHOUTcast Server Version 1.9.8/Linux</a></font></td></tr><tr><td bgcolor=#DDDDDD height=20 align=center><table width=100% border=0 cellpadding=0 cellspacing=0><tr><td align=center><font class=tnl><a id=tnl href="index.html">Status</a></font></td><td align=center><font class=tnl>&nbsp;|&nbsp;</font></td><td align=center><font class=tnl><a id=tnl href="played.html">Song History</a></font></td><td align=center><font class=tnl>&nbsp;|&nbsp;</font></td><td align=center><font class=tnl><a id=tnl href="listen.pls">Listen</a></font></td><td align=center><font class=tnl>&nbsp;|&nbsp;</font></td><td align=center><font class=tnl><a id=tnl href="home.html">Stream URL</a></font></td><td align=center><font class=tnl>&nbsp;|&nbsp;</font></td><td align=center><font class=tnl><a id=tnl href="admin.cgi">Admin Login</a></font></td></tr></table></td></tr></table><br><table cellpadding=5 cellspacing=0 border=0 width=100%><tr><td bgcolor=#000025 colspan=2 align=center><font class=ST>Current Stream Information</font></td></tr></table><table cellpadding=2 cellspacing=0 border=0 align=center><tr><td width=100 nowrap><font class=default>Server Status: </font></td><td><font class=default><b>Server is currently up and public.</b></td></tr><tr><td width=100 nowrap><font class=default>Stream Status: </font></td><td><font class=default><b>Stream is up at 128 kbps with <B>45 of 300 listeners (45 unique)</b></b></td></tr><tr><td width=100 nowrap><font class=default>Listener Peak: </font></td><td><font class=default><b>200</b></td></tr><tr><td width=100 nowrap><font class=default>Average Listen Time: </font></td><td><font class=default><b>7h&nbsp;13m&nbsp;12s</b></td></tr><tr><td width=100 nowrap><font class=default>Stream Title: </font></td><td><font class=default><b>Pig Radio - The Best Electronic & Indie Pop/Rock 24/7</b></td></tr><tr><td width=100 nowrap><font class=default>Content Type: </font></td><td><font class=default><b>audio/mpeg</b></td></tr><tr><td width=100 nowrap><font class=default>Stream Genre: </font></td><td><font class=default><b>Eclectic</b></td></tr><tr><td width=100 nowrap><font class=default>Stream URL: </font></td><td><font class=default><b><a href="http://www.pigradio.com">http://www.pigradio.com</a></b></td></tr><tr><td width=100 nowrap><font class=default>Stream AIM: </font></td><td><font class=default><b><a href="aim:goim?screenname=N/A">N/A</a></b></td></tr><tr><td width=100 nowrap><font class=default>Stream IRC: </font></td><td><font class=default><b><a href="http://www.shoutcast.com/chat.phtml?dc=N%2FA">N/A</a></b></td></tr><tr><td width=100 nowrap><font class=default>**Current Song: </font></td><td><font class=default><b>Midlake - Young Bride (Cassettes Won't Listen Remix)**</b></td></tr></table><br><table cellpadding=0 cellspacing=0 border=0 width=100%> <tr><td bgcolor=#DDDDDD nowrap colspan=5 align=center><table cellspacing=0 cellpadding=0 border=0><tr><td><font class=ltv>Written by Stephen 'Tag Loomis, Tom Pepper and Justin Frankel</font></td></tr></table></td></tr><tr><td nowrap colspan=5 align=center><font class=ST><b><a href="http://www.shoutcast.com/disclaimer.phtml">Copyright Nullsoft Inc</a><a href="/llamacookie">.</a> 1998-2004</b></font></td></tr></table></font></body></html> Obviously it's this last bit of HTML that I want to scrape for the "Current Song" How do I get just this HTML?? Edit3: Solved it! 
I used Wireshark to capture the GET that the browser was sending and added all the parameters I saw there to the headers of my Python GET. It looks like this:

    import bs4, requests, urllib2

    link = "http://173.255.137.244:8080/"
    filename = "testfile.txt"
    payload = {'Host': '173.255.137.244:8080',
               'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:26.0) Gecko/20100101 Firefox/26.0',
               'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
               'Accept-Language': 'en-US,en;q=0.5',
               'Accept-Encoding': 'gzip, deflate'}
    r = requests.get(link, headers=payload)
    data = r.text
    soup = bs4.BeautifulSoup(data)
    soup.prettify()
    print soup

Moral of the story: Wireshark is cool. Answer: It seems the site has some measures in place to deter crawlers. If no authentication is needed to reach a page but a scraper still has trouble fetching it, there are usually two things to try:

* Add HTTP request headers so the request looks like it came from a web browser; sites commonly check the `User-Agent` field to detect scrapers.
* Add any cookies the site expects along with the request; some sites need information from cookies before they will respond.

Apart from Wireshark, which is a little complex, the Chrome developer tools are a convenient way to inspect the network requests and their headers.
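Two small additions to the scraping side, both untested sketches: `requests` accepts a `timeout` so a misbehaving endpoint fails quickly instead of hanging, and once the status page is fetched the "Current Song" cell can be pulled out directly rather than printing the whole soup. The header value is reused from the question, and the parsing assumes the table layout shown in the page source above:

    import re
    import bs4, requests

    link = "http://173.255.137.244:8080/"
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; rv:26.0) Gecko/20100101 Firefox/26.0'}

    # timeout makes the call raise an exception instead of hanging forever if the
    # server decides to stream audio rather than return the status page
    r = requests.get(link, headers=headers, timeout=10)

    soup = bs4.BeautifulSoup(r.text)
    label = soup.find(text=re.compile("Current Song"))
    if label is not None:
        song = label.find_parent("td").find_next_sibling("td").get_text()
        print song.strip()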
how can I copy a string to the windows clipboard? python 3 Question: If I have a variable

    var = 'this is a variable'

how can I copy this string to the Windows clipboard so I can simply Ctrl+V and it's transferred elsewhere? I don't want to use anything that isn't built in; I hope it's possible. Thanks! Answer: You can do this:

    >>> import subprocess
    >>> def copy2clip(txt):
    ...     cmd = 'echo ' + txt.strip() + '|clip'
    ...     return subprocess.check_call(cmd, shell=True)
    ...
    >>> copy2clip('now this is on my clipboard')
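The `echo ... | clip` approach works for simple text, but the shell will trip over quotes, `&`, `|` and similar characters, and `echo` adds its own trailing newline. A slightly more robust sketch, still using only the standard library plus the built-in `clip.exe`, writes the text to `clip` through stdin instead (assuming plain ASCII text; non-ASCII needs extra care with console encodings):

    import subprocess

    def copy2clip(txt):
        # feed the text straight into clip.exe's stdin, bypassing the shell
        p = subprocess.Popen(['clip'], stdin=subprocess.PIPE)
        p.communicate(input=txt.encode('ascii'))
        return p.returncode

    copy2clip('now this is on my clipboard, even with "quotes" & pipes |')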
multiprocessing - reading big input data - program hangs Question: I want to run parallel computation on some input data which is loaded from a file. (The file can be really big, so I use a generator for this.) On a certain number of items, my code runs OK but above this threshold the program hangs (some of the worker processes do not end). Any suggestions? (I am running this with python2.7, 8 CPUs; 5,000 lines still OK, 7,500 does not work.) Firstly, you need an input file. Generate it in bash: for i in {0..10000}; do echo -e "$i"'\r' >> counter.txt; done Then, run this: python2.7 main.py 100 counter.txt > run_log.txt **main.py:** #!/usr/bin/python2.7 import os, sys, signal, time import Queue import multiprocessing as mp def eat_queue(job_queue, result_queue): """Eats input queue, feeds output queue """ proc_name = mp.current_process().name while True: try: job = job_queue.get(block=False) if job == None: print(proc_name + " DONE") return result_queue.put(execute(job)) except Queue.Empty: pass def execute(x): """Does the computation on the input data """ return x*x def save_result(result): """Saves results in a list """ result_list.append(result) def load(ifilename): """Generator reading the input file and yielding it row by row """ ifile = open(ifilename, "r") for line in ifile: line = line.strip() num = int(line) yield (num) ifile.close() print("file closed".upper()) def put_tasks(job_queue, ifilename): """Feeds the job queue """ for item in load(ifilename): job_queue.put(item) for _ in range(get_max_workers()): job_queue.put(None) def get_max_workers(): """Returns optimal number of processes to run """ max_workers = mp.cpu_count() - 2 if max_workers < 1: return 1 return max_workers def run(workers_num, ifilename): job_queue = mp.Queue() result_queue = mp.Queue() # decide how many processes are to be created max_workers = get_max_workers() print "processes available: %d" % max_workers if workers_num < 1 or workers_num > max_workers: workers_num = max_workers workers_list = [] # a process for feeding job queue with the input file task_gen = mp.Process(target=put_tasks, name="task_gen", args=(job_queue, ifilename)) workers_list.append(task_gen) for i in range(workers_num): tmp = mp.Process(target=eat_queue, name="w%d" % (i+1), args=(job_queue, result_queue)) workers_list.append(tmp) for worker in workers_list: worker.start() for worker in workers_list: worker.join() print "worker %s finished!" % worker.name if __name__ == '__main__': result_list = [] args = sys.argv workers_num = int(args[1]) ifilename = args[2] run(workers_num, ifilename) Answer: This is because nothing in your code takes anything _off_ `result_queue`. The behavior then depends on internal queue buffering details: if "not a lot" of data is waiting, everything appears fine, but if "a lot" of data is waiting, everything freezes. Not much more can be said, because it involves layers of internal magic ;-) But the docs do warn about it: > Warning > > As mentioned above, if a child process has put items on a queue (and it has > not used JoinableQueue.cancel_join_thread), then that process will not > terminate until all buffered items have been flushed to the pipe. > > This means that if you try joining that process you may get a deadlock > unless you are sure that all items which have been put on the queue have > been consumed. Similarly, if the child process is non-daemonic then the > parent process may hang on exit when it tries to join all its non-daemonic > children. 
> > Note that a queue created using a manager does not have this issue. See > Programming guidelines. One easy way to repair that: First add result_queue.put(None) before `eat_queue()` returns. Then add: count = 0 while count < workers_num: if result_queue.get() is None: count += 1 before the main program `.join()`s the workers. That drains the result queue, and everything shuts down cleanly then. BTW, this code is pretty bizarre: while True: try: job = job_queue.get(block=False) if job == None: print(proc_name + " DONE") return result_queue.put(execute(job)) except Queue.Empty: pass Why are you doing non-blocking `get()`? This turns into a CPU-hog "busy loop" so long as the queue is empty. The primary point of `.get()` is to supply an _efficient_ way to wait for work to show up. So: while True: job = job_queue.get() if job is None: print(proc_name + " DONE") break else: result_queue.put(execute(job)) result_queue.put(None) does the same thing, but far more efficiently. **Queue size caution** You didn't ask about this, but let's cover it before it bites you ;-) By default, there is no bound on a `Queue`'s size. If, e.g., you add a billion items to the `Queue`, it will demand enough RAM to hold a billion items. So if your producer(s) _can_ generate work items faster than your consumer(s) can process them, memory use can get out of hand quickly. Fortunately, that's easy to repair: specify a maximum queue size. For example, job_queue = mp.Queue(maxsize=10*workers_num) ^^^^^^^^^^^^^^^^^^^^^^^ Then `job_queue.put(some_work_item)` will block until consumers reduce the size of the queue to less than the maximum. This way you can process enormous problems with a queue that requires trivial RAM.
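Putting those pieces together, here is a sketch of how `run()` from the question could look with the sentinel draining and the bounded queue applied (same names as the original; the `workers_num` validation is trimmed for brevity):

    def run(workers_num, ifilename):
        job_queue = mp.Queue(maxsize=10 * workers_num)  # bounded, so the feeder can't run away
        result_queue = mp.Queue()

        task_gen = mp.Process(target=put_tasks, name="task_gen",
                              args=(job_queue, ifilename))
        workers = [mp.Process(target=eat_queue, name="w%d" % (i + 1),
                              args=(job_queue, result_queue))
                   for i in range(workers_num)]

        for proc in [task_gen] + workers:
            proc.start()

        # Drain results until every worker has sent its None sentinel.
        results, finished = [], 0
        while finished < workers_num:
            item = result_queue.get()
            if item is None:
                finished += 1
            else:
                results.append(item)

        for proc in [task_gen] + workers:
            proc.join()
            print "worker %s finished!" % proc.name
        return results

This assumes `eat_queue()` has been changed as shown above, i.e. it puts a final `None` on `result_queue` before returning.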
Python - Infinite while loop, break on user input Question: I have an infinite while loop that I want to break out of when the user presses a key. Usually I use `raw_input` to get the user's response; however, I need `raw_input` to not wait for the response. I want something like this:

    print 'Press enter to continue.'
    while True:
        # Do stuff
        #
        # User pressed enter, break out of loop

This should be simple, but I can't seem to figure it out. I'm leaning towards a solution using threading, but I would rather not have to do that. How can I accomplish this? Answer: You can use a non-blocking read from stdin (note that this relies on `fcntl`, so it works on Unix-like systems but not on Windows):

    import sys
    import os
    import fcntl
    import time

    fl = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, fl | os.O_NONBLOCK)

    while True:
        print("Waiting for user input")
        try:
            stdin = sys.stdin.read()
            if "\n" in stdin or "\r" in stdin:
                break
        except IOError:
            pass
        time.sleep(1)
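If a background thread turns out to be acceptable after all, a small sketch like the following gives the same behaviour and also works on Windows; the thread simply blocks on `raw_input` and sets a flag that the main loop checks:

    import threading

    user_done = threading.Event()

    def wait_for_enter():
        raw_input('Press enter to continue.\n')
        user_done.set()

    t = threading.Thread(target=wait_for_enter)
    t.daemon = True          # don't keep the program alive just for this thread
    t.start()

    while not user_done.is_set():
        pass  # Do stuff

    print 'User pressed enter, breaking out of the loop.'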
Generate multi-page PDF without duplicating the template on each page? Question: I'm working on an application which generates multi-page (sometimes hundreds or thousands of pages) PDF documents for printing. Each page consists of a generic template with some page-specific content superimposed (think: automatically filling in the "name" field of a paper form). The problem, though, is that the template is fairly large (about 100kb/page), and duplicating it across every page yields _very_ large PDF files (currently the PDF is generated by using `rsvg-convert` to convert a directory full of SVG files into a PDF). Is it possible to reduce the duplication by referencing the static template so that each PDF page only contains the custom content? Ideally I'd like to know how to do this with Python or Ghostscript, but any starting points would be appreciated. Answer: What you want are `Form XObjects` inside PDF files. From **PDF Reference:** > A form XObject is a PDF content stream that is a self-contained description > of any sequence of graphics objects (including path objects, text objects, > and sampled images). A form XObject may be painted multiple times—either on > several pages or at several locations on the same page—and produces the same > results each time, subject only to the graphics state at the time it is > invoked. Not only is this shared definition economical to represent in the > PDF file, but under suitable circumstances the PDF consumer application can > optimize execution by caching the results of rendering the form XObject for > repeated reuse. Many applications that add e.g. watermarks to PDF pages, add them as `Form XObjects` automatically. As an example, you can add template content as background to existing multipage PDF that already has page-specific content, using pdftk: pdftk multipage.pdf background template.pdf output multipage+.pdf With Ghostscript, you should have template as EPS, then create multi-page PDF with `Form XObjects` added, then you add page-specific content with some other methods. But, maybe something smart can be implemented to super-impose specific pages to PDF with background using _"Ghostscript only"_. To create "ready to be filled" multipage PDF with template as `Form XObject` on each page, do something like this: gs -sDEVICE=pdfwrite -o 100_pages_template.pdf \ -c '[/_objdef {background} /BBox [0 0 595 841] /BP pdfmark save /showpage {} def 0 0 translate % adjust according to EPS BBox (template.eps) run restore [/EP pdfmark 1 1 100 { [{background} /SP pdfmark showpage } for' Don't know about Python, I think it's as easy as next example using Perl. Here, too, I create 100 pages PDF with template on each page: use strict; use warnings; use PDF::API2; my $pdf = PDF::API2->new(); my $tmpl = PDF::API2->open('template.pdf'); my $xo = $pdf->importPageIntoForm($tmpl, 1); for (1..100) { my $page = $pdf->page(); my $gfx = $page->gfx(); $gfx->formimage($xo, 0, 0); # add page specific content } $pdf->saveas('out.pdf');
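On the Python side, the `pdfrw` package can do the same trick as the Perl sketch: it turns the template page into a single Form XObject and references it from every output page. The following is only a sketch modelled on pdfrw's documented watermarking example, with placeholder file names:

    from pdfrw import PdfReader, PdfWriter, PageMerge
    from pdfrw.buildxobj import pagexobj

    # The template page becomes one shared Form XObject.
    template = pagexobj(PdfReader('template.pdf').pages[0])

    writer = PdfWriter()
    for page in PdfReader('content_only.pdf').pages:
        merge = PageMerge(page)
        merge.add(template, prepend=True)   # put the template underneath the page content
        writer.addpage(merge.render())
    writer.write('combined.pdf')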
python dictionary sorting in descending order based on values Question: I want to sort this dictionary d based on the value of the sub-key key3 in descending order. See below:

    d = { '123': { 'key1': 3, 'key2': 11, 'key3': 3 },
          '124': { 'key1': 6, 'key2': 56, 'key3': 6 },
          '125': { 'key1': 7, 'key2': 44, 'key3': 9 },
        }

So the final dictionary would look like this:

    d = { '125': { 'key1': 7, 'key2': 44, 'key3': 9 },
          '124': { 'key1': 6, 'key2': 56, 'key3': 6 },
          '123': { 'key1': 3, 'key2': 11, 'key3': 3 },
        }

My approach was to form another dictionary e from d, whose key would be the value of key3, and then use reversed(sorted(e)), but since the value of key3 can be the same for several entries, dictionary e lost some of the keys and their values. Makes sense? How can I accomplish this? This is not tested code; I am just trying to understand the logic. Answer: [Dictionaries do not have any inherent order](http://docs.python.org/3/library/stdtypes.html#dictionary-view-objects). Or, rather, their inherent order is "arbitrary but not random", so it doesn't do you any good. In different terms, your `d` and your `e` would be exactly equivalent dictionaries. What you can do here is to use an [`OrderedDict`](http://docs.python.org/3/library/collections.html#ordereddict-objects):

    from collections import OrderedDict

    d = { '123': { 'key1': 3, 'key2': 11, 'key3': 3 },
          '124': { 'key1': 6, 'key2': 56, 'key3': 6 },
          '125': { 'key1': 7, 'key2': 44, 'key3': 9 },
        }

    d_ascending = OrderedDict(sorted(d.items(), key=lambda kv: kv[1]['key3']))
    d_descending = OrderedDict(sorted(d.items(), key=lambda kv: kv[1]['key3'], reverse=True))

The original `d` has some arbitrary order. `d_ascending` has the order you _thought_ you had in your original `d` but didn't. And `d_descending` has the order you want for your `e`.

* * *

If you don't really need to use `e` as a dictionary, and you just want to be able to iterate over the elements of `d` in a particular order, you can simplify this:

    for key, value in sorted(d.items(), key=lambda kv: kv[1]['key3'], reverse=True):
        do_something_with(key, value)

* * *

If you want to maintain a dictionary in sorted order across any changes, instead of an `OrderedDict` you want some kind of sorted dictionary. There are a number of options available that you can find on PyPI, some implemented on top of trees, others on top of an `OrderedDict` that re-sorts itself as necessary, etc.
UnicodeDecodeError in Python while reading UTF-8 sql file from English Wikipedia Question: **Update:** I have changed the encoding to with open("../data/enwiki-20131202-pagelinks.sql", encoding="ISO-8859-1") ...and the program is now chewing through the file without complaint. Maybe the SQL dumps aren't UTF-8 and don't contain such literals, a false assumption on my part. **Original:** I'm trying to process one of Wikipedia's humongous data sets, namely the _pagelinks.sql_ file. Unfortunately I get the following error while reading the file: (...) File "c:\Program Files\Python 3.3\lib\codecs.py", line 301, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf8 in position 5095: invalid start byte My code is as follows: import re reg1 = re.compile(",0,") ref_count = 0 with open("../data/enwiki-20131202-pagelinks.sql", encoding="utf8") as infile: for line in infile: matches = re.findall(reg1, line) ref_count += len(matches) print ("found", ref_count, "references.") Answer: An excerpt from a comment under the "Unicode" heading here <http://meta.wikimedia.org/wiki/Data_dumps/Dump_format> may be helpful: > "The dumps may contain non-Unicode (UTF8) characters in older text revisions > due to lenient charset validation in the earlier MediaWiki releases..." Ignoring for the moment the conflation of Unicode and UTF8 what you can do to avoid the error is pass the `errors` keyword argument to [`open()`](http://docs.python.org/3/library/functions.html#open), e.g.: filepath = "../data/enwiki-20131202-pagelinks.sql" with open(filepath, encoding="utf8", errors='replace') as infile: ... That "causes a replacement marker (such as `?`) to be inserted where there is malformed data." <http://docs.python.org/3/library/functions.html#open> If you'd rather ignore the non-UTF8 characters you can use `errors='ignore'`.
Python: Getting value from Gevent Greenlet Question: I'm learning Gevent, but can't get the value returned by the function called in a greenlet. The following code: import gevent.monkey gevent.monkey.patch_socket() import gevent from gevent import Greenlet import urllib2 import simplejson as json def fetch(pid): response = urllib2.urlopen('http://time.jsontest.com') result = response.read() json_result = json.loads(result) datetime = json_result['time'] print('Process %s: %s' % (pid, datetime)) return json_result['time'] def synchronous(): for i in range(1,10): fetch(i) def asynchronous(): threads = [Greenlet.spawn(fetch, i) for i in range(10)] result = gevent.joinall(threads) print [Greenlet.value(thread) for thread in threads] print('Synchronous:') synchronous() print('Asynchronous:') asynchronous() gives me the error: print [Greenlet.value(thread) for thread in threads] AttributeError: type object 'Greenlet' has no attribute 'value' What am I doing wrong, and how do I get the value from each greenlet? Answer: According to <http://www.gevent.org/intro.html> you want def asynchronous(): threads = [Greenlet.spawn(fetch, i) for i in range(10)] result = gevent.joinall(threads) print [thread.value for thread in threads]
detect emoticon in a sentence using regex python Question: Here is the list of emoticons: <http://en.wikipedia.org/wiki/List_of_emoticons> I want to form a regex which checks if any of these emoticons exist in the sentence. For example, "hey there I am good :)" or "I am angry and sad :(" but there are a lot of emoticons in the list on wikipedia so wondering how I can achieve this task. I am new to regex. & python. >>> s = "hey there I am good :)" >>> import re >>> q = re.findall(":",s) >>> q [':'] Answer: I see two approaches to your problem: 1. Either, you can create a regular expression for a "generic smiley" and try to match as many as possible without making it overly complicated and insane. For example, you could say that each smiley has some sort of eyes, a nose (optional), and a mouth. 2. Or, if you want to match each and every smiley from that list (and none else) you can just take those smileys, escape any regular-expression specific special characters, and build a huge disjunction from those. Here is some code that should get you started for both approaches: # approach 1: pattern for "generic smiley" eyes, noses, mouths = r":;8BX=", r"-~'^", r")(/\|DP" pattern1 = "[%s][%s]?[%s]" % tuple(map(re.escape, [eyes, noses, mouths])) # approach 2: disjunction of a list of smileys smileys = """:-) :) :o) :] :3 :c) :> =] 8) =) :} :^) :D 8-D 8D x-D xD X-D XD =-D =D =-3 =3 B^D""".split() pattern2 = "|".join(map(re.escape, smileys)) text = "bla bla bla :-/ more text 8^P and another smiley =-D even more text" print re.findall(pattern1, text) Both approaches have pros, cons, and some general limitations. You will always have false positives, like in a mathematical term like `18^P`. It might help to put spaces around the expression, but then you can't match smileys followed by punctuation. The first approach is more powerful and catches smileys the second approach won't match, but only as long as they follow a certain schema. You could use the same approach for "eastern" smileys, but it won't work for strictly symmetric ones, like `=^_^=`, as this is not a regular language. The second approach, on the other hand, is easier to extend with new smileys, as you just have to add them to the list.
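For comparison, running the explicit-list pattern on the same `text` only picks up smileys that literally appear in the list:

    print re.findall(pattern2, text)   # ['=-D']  -- ':-/' and '8^P' are not in the list
    print re.findall(pattern1, text)   # [':-/', '8^P', '=-D']  -- the generic pattern catches all three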
Distribution independent libpython path Question: Under newer Ubuntu/Debian versions, `libpython2.7.so` is under `/usr/lib/i386-linux-gnu/libpython2.7.so` or `/usr/lib/x86_64-linux- gnu/libpython2.7.so`, etc. Earlier, they could be found in `/usr/lib/libpython2.7.so`, no matter the architecture. I haven't checked for other distributions. How do I find the path of `libpython2.7.so` with python? Answer: Using `pkg-config` is not the best option - it will not distinguish between different installations of Python, returning only the system installation. You are better off using the Python executable to discover the location of `libpythonX.Y.so`. From inside Python: from distutils import sysconfig; print sysconfig.get_config_var("LIBDIR") Or inside a Makefile: PYTHON_LIBDIR:=$(shell python -c 'from distutils import sysconfig; print sysconfig.get_config_var("LIBDIR")') This will discover the location from whatever Python executable is first in `$PATH` and thus will work if there are multiple Python installations on the system. Credit to [Niall Fitzgerald](https://github.com/potentialventures/cocotb/issues/128) for pointing this out.
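If the full path to the shared library is needed rather than just the directory, the `LDLIBRARY` config variable can be combined with `LIBDIR`; a small sketch (note that on static builds `LDLIBRARY` may name a `.a` archive instead of a `.so`):

    from distutils import sysconfig
    import os.path

    libdir = sysconfig.get_config_var('LIBDIR')
    libname = sysconfig.get_config_var('LDLIBRARY')   # e.g. libpython2.7.so
    print os.path.join(libdir, libname)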
Parse english textual date expression into datetime Question: How do I convert these strings:

    one hour ago
    three days ago
    two weeks ago
    yesterday
    next month

into Python datetime objects? Answer: Just found [parsedatetime](https://pypi.python.org/pypi/parsedatetime/1.1.2) for parsing human readable date/time text, via the [link](http://stackoverflow.com/questions/11340963/natural-language-time-parser) provided by Jon Clements. Here is a solution in case you are interested:

    from time import mktime
    from datetime import datetime
    import parsedatetime as pdt

    time_str = '1 hour ago'
    cal = pdt.Calendar()
    dt = datetime.fromtimestamp(mktime(cal.parse(time_str)[0]))
    time_formatted = dt.strftime('%b %d, %Y %H:%M')
    print(time_formatted)  # will print something like: Dec 15, 2013 02:10

Also see this question: [python 'x days ago' to datetime](http://stackoverflow.com/q/12566152/1396314)
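Applying the same idea to all of the strings from the question gives one datetime per phrase; this is just the answer's snippet wrapped in a loop:

    from time import mktime
    from datetime import datetime
    import parsedatetime as pdt

    cal = pdt.Calendar()
    for phrase in ('one hour ago', 'three days ago', 'two weeks ago',
                   'yesterday', 'next month'):
        dt = datetime.fromtimestamp(mktime(cal.parse(phrase)[0]))
        print('%-15s -> %s' % (phrase, dt.strftime('%b %d, %Y %H:%M')))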
Pygame: Variable not updating each iteration Question: I'm brand new to Python, so this is probably a simple problem. I want the code to display "rotation: " followed by the value of the variable player_rotation. It does this, but the value displayed does not increase by 1 every iteration (as I would expect it to).

    import pygame
    from pygame.locals import *

    pygame.init()
    screen = pygame.display.set_mode((480, 480))
    myfont = pygame.font.SysFont("monospace", 15)

    player_rotation = 0
    rotation_label = myfont.render("rotation: " + str(player_rotation), 1, (255,255,0))

    while 1:
        screen.blit(rotation_label, (100,100))
        player_rotation += 1
        pygame.display.flip()

    for event in pygame.event.get():
        if event.type==pygame.QUIT:
            pygame.quit()
            exit(0)

Answer: You set the label only once, and this happens before the loop. Try moving rotation_label into your loop:

    while 1:
        rotation_label = myfont.render("rotation: " + str(player_rotation), 1, (255,255,0))
        screen.blit(rotation_label, (100,100))
        player_rotation += 1
        pygame.display.flip()

Also, the code in your `for` loop is never executed, because it appears after your `while 1` loop, which never exits.
Do things for a period of time program in python Question: I made a program in python that looks like this:

    import time

    y = "a"
    x = 0
    while x != 10 and y == "a":
        y = input("What is your name? ")
        time.sleep(1)
        x = x + 1
    if y != "a":
        print("Hi " + y)
    else:
        print("You took too long to answer...")

I know that there is a way of accomplishing the same thing in this question: [Keyboard input with timeout in Python](http://stackoverflow.com/questions/1335507/keyboard-input-with-timeout-in-python), but I would like to know why this is not working. No matter how long I wait it doesn't time out; it just sits there waiting for me to type something in. What have I done wrong? I am using python 3.3 on Win 7. Answer: Input is blocking in Python, meaning the `time.sleep(1)` line and all the lines after it only get executed once the input has been received.
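Since this is Python 3.3 on Windows, one way to get the intended timeout behaviour without threads is to poll the keyboard with `msvcrt` instead of blocking on `input()`. This is only a rough sketch (console programs only, and the name is collected one character at a time):

    import msvcrt
    import time

    print("What is your name? ", end="", flush=True)

    name = ""
    answered = False
    for _ in range(10):                  # roughly 10 seconds, as in the original loop
        time.sleep(1)
        while msvcrt.kbhit():
            ch = msvcrt.getwche()        # read and echo one character
            if ch in ("\r", "\n"):
                answered = True
                break
            name += ch
        if answered:
            break

    if answered and name:
        print("\nHi " + name)
    else:
        print("\nYou took too long to answer...")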
UnicodeDecodeError Class Based Views, urls Question: I have this in a app url, I cannot figure out what is wrong. and the "encoding Error" I am learning from 2 scoops of django. The problem is this It is this because when I comment out it's use everything works fine #!/usr/bin/env python # -*- coding: utf-8 -*- # ofertoj/urls.py from django.conf.urls import patterns, url from .views import * urlpatterns = patterns("", url( regex=r"ˆ(?P<pk>\d+)/$", view=OfertoDetailView.as_view(), name="oferto_detail" ), url( regex=r"ˆ(?P<pk>\d+)/results/$", view=OfertoResultsView.as_view(), name="oferto_results" ), url( regex=r"ˆ(?P<pk>\d+)/listview/$", view=OfertoListView.as_view(), name="oferto_listview" ), url( regex=r"^(?P<pk>\d+)/createview/$", view=OfertoCreateView.as_view(), name="oferto_createview" ), url( regex=r"ˆ(?P<pk>\d+)/updateview/$", view=OfertoUpdateView.as_view(), name="oferto_updateview" ), ) This is my Stacktrace Environment: Request Method: GET Request URL: http://127.0.0.1:8000/ Django Version: 1.5.4 Python Version: 2.7.4 Installed Applications: ('django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.admin', 'django.contrib.admindocs', 'django.contrib.comments', 'django.contrib.sitemaps', 'zinnia', 'tagging', 'mptt', 'south', 'registration', 'blogs', 'turtle', 'ofertoj', 'petoj') Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Template error: In template /home/talisman/projects/tempilo/templates/skeleton.html, error at line 20 ascii 10 : <meta name="robots" content="follow, all" /> 11 : <meta name="language" content="{{ LANGUAGE_CODE }}" /> 12 : <meta name="viewport" content="width=device-width; initial-scale=1.0;" /> 13 : <meta name="description" content="{% block meta-description %}Browse through a few featured Isla Vista Restaurants and we are here to take your order. 14 : Call night or day (805)689-6969 or order online{% endblock %}" /> 15 : <meta name="keywords" content="{% block meta-keywords %}delivery,food,take-out {{ entry_tags|join:", "}}{% endblock %}" /> 16 : <meta name="author" content="Brian Scott Carpenter" /> 17 : {% block meta %}{% endblock %} 18 : <link rel="pingback" href="/xmlrpc/" /> 19 : <link rel="shortcut icon" href="{{ STATIC_URL }}img/favicon.ico" /> 20 : <link rel="home" href=" {% url 'home' %} " /> 21 : <link rel="stylesheet" type="text/css" media="screen, projection" href="{{ STATIC_URL }}css/screen.css" /> 22 : <link rel="stylesheet" type="text/css" media="print" href="{{ STATIC_URL }}css/print.css" /> 23 : 24 : <link href='http://fonts.googleapis.com/css?family=Jolly+Lodger' rel='stylesheet' type='text/css'> 25 : 26 : <!--[if lt IE 8]> 27 : <link rel="stylesheet" type="text/css" media="screen, projection" href="{{ STATIC_URL }}/css/ie.css" /> 28 : <![endif]--> 29 : {% block link %}{% endblock %} 30 : {% block script %}{% endblock %} Traceback: File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/handlers/base.py" in get_response 115. response = callback(request, *callback_args, **callback_kwargs) File "/home/talisman/projects/tempilo/tempilo/views.py" in home 19. 
return render_to_response(('index.html'),context_instance=RequestContext(request)) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/shortcuts/__init__.py" in render_to_response 29. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/loader.py" in render_to_string 177. return t.render(context_instance) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in render 140. return self._render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in render 830. bit = self.render_node(node, context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/debug.py" in render_node 74. return node.render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/loader_tags.py" in render 124. return compiled_parent._render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in render 830. bit = self.render_node(node, context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/debug.py" in render_node 74. return node.render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/loader_tags.py" in render 124. return compiled_parent._render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in _render 134. return self.nodelist.render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/base.py" in render 830. bit = self.render_node(node, context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/debug.py" in render_node 74. return node.render(context) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/template/defaulttags.py" in render 413. url = reverse(view_name, args=args, kwargs=kwargs, current_app=context.current_app) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/urlresolvers.py" in reverse 496. return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/urlresolvers.py" in _reverse_with_prefix 382. possibilities = self.reverse_dict.getlist(lookup_view) File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/urlresolvers.py" in reverse_dict 297. self._populate() File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/urlresolvers.py" in _populate 274. 
for name in pattern.reverse_dict: File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/urlresolvers.py" in reverse_dict 297. self._populate() File "/home/talisman/virt_env/tempilo/local/lib/python2.7/site-packages/Django-1.5.4-py2.7.egg/django/core/urlresolvers.py" in _populate 265. if p_pattern.startswith('^'): Exception Type: UnicodeDecodeError at / Exception Value: 'ascii' codec can't decode byte 0xcb in position 0: ordinal not in range(128) Answer: It appears that you have non-ASCII characters in your URL pattern regexes. By looking at the hex you can see the raw bytes:

    280a 7265 6765 783d 7222 cb86 283f 503c    (.regex=r"..(?P<

The part right after the `r"` is supposed to be a `^`, but it was typed as a non-ASCII look-alike character (the bytes 0xcb 0x86 are the UTF-8 encoding of U+02C6, a modifier-letter circumflex); it shows up as `..` in the dump. So your regex contains non-ASCII where it should only have ASCII, and you should change this:

    regex=r"ˆ(?P<pk>\d+)/$",

into this:

    regex=r"^(?P<pk>\d+)/$",

If you change all of those, it probably fixes it.
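A quick, throwaway way to locate every offending character (there are several of those look-alike circumflexes in the urls.py shown above) is to scan the file for bytes outside the ASCII range; the path here is just the one from the question:

    # Print every line of ofertoj/urls.py that contains non-ASCII bytes.
    with open('ofertoj/urls.py', 'rb') as f:
        for lineno, line in enumerate(f, 1):
            try:
                line.decode('ascii')
            except UnicodeDecodeError:
                print lineno, repr(line)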
Python - str() outputs non unicode Question: I'm using the [py](https://pypi.python.org/pypi/py) library to rename .flac files with information from its metadata. I'm having a problem with getting a `LocalPath` object to a normal utf-8 string to run it into `subprocess`. Here's the bit of code that goes wrong: #!/usr/bin/python # -*- coding: utf-8 -*- # This program takes the information from FLAC metadata to rename the files # according to various naming paterns. import subprocess import sys from py.path import local # Defining the function that fetches metadata and formats it def metadata(filename): filename = str(filename).decode("utf-8") pipe = subprocess.Popen( ["metaflac", "--show-tag=tracknumber", filename], stdout=subprocess.PIPE) tracknumber, error = pipe.communicate() tracknumber = tracknumber.decode("utf-8") tracknumber = tracknumber.replace("tracknumber=", "") tracknumber = tracknumber.replace("TRACKNUMBER=", "") tracknumber = tracknumber.rstrip() # Remove whitespaces if int(tracknumber) < 10: if "0" in tracknumber: pass else: tracknumber = "0" + tracknumber else: pass pipe = subprocess.Popen( ["metaflac", "--show-tag=title", filename], stdout=subprocess.PIPE) title, error = pipe.communicate() title = title.decode("utf-8") title = title.replace("TITLE=", "") title = title.replace("title=", "") title = title.rstrip() pipe = subprocess.Popen( ["metaflac", "--show-tag=artist", filename], stdout=subprocess.PIPE) artist, error = pipe.communicate() artist = artist.decode("utf-8") artist = artist.replace("ARTIST=", "") artist = artist.replace("artist=", "") artist = artist.rstrip() return tracknumber, title, artist # Defining function that renames the files def rename(root): if output == filename: print "%s is already named correctly\n" % (title) else: filename.rename(filename.new(basename=output)) # Importing command line arguments try: root = local(sys.argv[2]) option = sys.argv[1] print option, root except ValueError: print "WTF!!!" # Running if option == "--s": for filename in root.visit(fil="*.flac", rec=True): tracknumber, title, artist = metadata(filename) output = "%s - %s" % (tracknumber, filename) rename(root) print "Files renamed" I'm using `filename = str(filename)` because `filename` is not a string, but a `LocalPath` and `subprocess` crashes when not given string. The problem is that with utf-8 file name, `str(filename)` gives back something like this: `/media/Main/Musique/2 Bal 2 Neg'/3X plus efficace/09 - Po\xc3\xa8tes de la mort.flac` Adding `filename = filename.decode("utf-8")` does not seem to change anything I'm still very new to python :D Answer: The `UnboundLocalError` you are getting at the end is because in this line: filename = str(filename) you are trying to treat `filename` as both a local and a global variable. You are (hopefully) not wanting to reassign the global (which would happen if you did `global filename` before this line). But when you assign it as a local, Python will not let you look up the global version of it anywhere in the function - if you do this before it is assigned, you get the error you are seeing. Choose a different variable name, or pass `filename` in as an argument - eg, def metadata(filename): filename = str(filename) ... return tracknumber, title, artist tracknumber, title, artist = metadata(filename) will work. To fix your unicode errors, you are mostly on the right track. You need to maintain the distiction at all times between bytes and (unicode) characters - in Python 2, use `str` for the former and `unicode` for the latter. 
If you upgrade to Python 3, you would use `bytes` for the former and `str` for the latter (and the distiction would be easier to maintain in a lot of cases). Use the `decode` method to go from bytes to unicode, and the `encode` method to go the other way. You generally want to use bytes only when you are directly doing IO, and unicode everywhere else. The `subprocess` module accepts _either_ , but for filenames it is usually easier to use unicode. The `Py` library helpfully doesn't maintain, or acknowledge, this distinction at all. It just uses `str` internally, regardless of which Python version it is running on (it uses a single codebase for both Python 2.x and Python 3.x). This means in Python 2, you _do_ want to decode it to `unicode`: def metadata(filename): filename = str(filename).decode('utf-8') but in Python 3, you want to leave it as a `str`: def metadata(filename): filename = str(filename) in both cases, you then want to only use `filename` in all your calls to `subprocess.Popen`, and _not_ `str(filename)`. So, pipe = subprocess.Popen( ["metaflac", "--show-tag=title", str(filename)], stdout=subprocess.PIPE) should be: pipe = subprocess.Popen( ["metaflac", "--show-tag=title", filename], stdout=subprocess.PIPE) You want to do something similar with the metadata you do get back. `communicate` counts as direct IO, and so it returns bytes, not characters. You already try to convert two of your three bits of metadata like this: title.decode("utf-8") but that needs to be: title = title.decode("utf-8") you need to do the same for `artist`, and add the equivalent line for `tracknumber`.
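Since the three metaflac calls in `metadata()` only differ by the tag name, the whole pattern can be factored into one helper once the decoding is sorted out; here is a sketch for the Python 2 case, using the same assumptions as above (decode the path once, decode metaflac's output as UTF-8):

    import subprocess

    def read_tag(filename, tag):
        # filename is already a unicode string at this point
        pipe = subprocess.Popen(["metaflac", "--show-tag=%s" % tag, filename],
                                stdout=subprocess.PIPE)
        out, _ = pipe.communicate()
        out = out.decode("utf-8")
        # metaflac may echo the tag name back in upper or lower case
        return out.replace("%s=" % tag.upper(), "").replace("%s=" % tag.lower(), "").strip()

    def metadata(filename):
        filename = str(filename).decode("utf-8")
        tracknumber = read_tag(filename, "tracknumber").zfill(2)  # zero-pad single digits
        title = read_tag(filename, "title")
        artist = read_tag(filename, "artist")
        return tracknumber, title, artist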
Does the Python framework nose support testing a function which initializes a daemon thread Question: I am new on Stack Overflow and to Python. I wrote a simple Python program as follows:

    from threading import Thread
    from Queue import Queue
    import time

    g_log_queue = Queue()

    def print_log():
        while True:
            record = g_log_queue.get()
            if record == 'a':
                print 'a is out'
            elif record == 'b':
                print 'b is out'
            else:
                print 'other is out'

    def run():
        g_log_thread = Thread(target=print_log, name='logthread', args=())
        g_log_thread.daemon = True
        g_log_thread.start()
        g_log_queue.put('a')
        g_log_queue.put('b')
        g_log_queue.put('c')

    run()

Now, I write a demo test case for the program as follows:

    from nose.tools import *
    from pdaemon import *

    def test_run():
        run()

When I run the nosetest command, I get this message:

    .
    ----------------------------------------------------------------------
    Ran 1 test in 0.003s

    OK
    Exception in thread logthread (most likely raised during interpreter shutdown):
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
      File "/usr/lib/python2.7/threading.py", line 505, in run
      File "/root/python/daemon/pdaemon.py", line 7, in print_log
      File "/usr/lib/python2.7/Queue.py", line 179, in get
      File "/usr/lib/python2.7/threading.py", line 279, in notify
    <type 'exceptions.TypeError'>: 'NoneType' object is not callable

Does nose support testing a function which initializes a daemon thread? Answer: It's this Python bug: http://bugs.python.org/issue14623 The workaround is to add a delay such as time.sleep(1) at the end of the script, which gives the daemon thread time to finish its work before the interpreter starts shutting down.
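An alternative that avoids the shutdown race altogether, rather than papering over it with a sleep, is to give the logger thread an explicit stop signal and join it before `run()` returns; this is a sketch of how the program above could be restructured (the `None` sentinel is my choice, not part of the original code):

    def print_log():
        while True:
            record = g_log_queue.get()
            if record is None:          # sentinel: shut the logger down
                break
            if record == 'a':
                print 'a is out'
            elif record == 'b':
                print 'b is out'
            else:
                print 'other is out'

    def run():
        g_log_thread = Thread(target=print_log, name='logthread')
        g_log_thread.start()
        for record in ('a', 'b', 'c'):
            g_log_queue.put(record)
        g_log_queue.put(None)           # tell the thread to exit
        g_log_thread.join()             # nothing is left running at interpreter exit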
More efficient way to skip columns in csv to namedtuple function? Question: I'm downloading a master data table from Bloomberg's Open Symbology. The csv has columns that I'm not interested in. **Question** Is there an efficient/Pythonic way to produce namedtuple instances from a subset of the columns found within a csv file? **What I've tried** My current process (Python 3.3 code below) is as follows:

1. Create a TempRecord namedtuple with all columns in the csv.
2. Create a TempRecord instance for each record in the csv file.
3. Create a BSYMRecord (with fewer and renamed attributes) from a given TempRecord.
4. Yield BSYMRecord.

This smells really inefficient.

    from csv import reader
    from collections import namedtuple
    from datetime import date
    from io import BytesIO
    from urllib.request import urlopen
    from urllib.error import HTTPError
    from zipfile import ZipFile

    def bsym_records(sector, security_type, file_date):
        """Yield BSYMRecord for given sector and security type."""
        template = 'http://bdn-ak.bloomberg.com/precanned/{s}_{t}_{d}.txt.zip'
        url = template.format(s=sector, t=security_type, d=file_date)
        response = urlopen(url)
        zipfile = ZipFile(BytesIO(response.read()))
        for filename in zipfile.namelist():
            with zipfile.open(filename) as f:
                line = f.readline().decode('utf-8')
                headers = line.strip().replace(' ', '_').split('|')
                TempRecord = namedtuple('BSYMRecord', headers)
                while True:
                    line = f.readline().decode('utf-8')
                    if line[0] == '#':
                        break
                    t = TempRecord._make(line.strip().split('|'))
                    yield reduce_bsym_record(t)

    BSYMRecord = namedtuple('BSYMRecord', ['name',
                                           'ticker',
                                           'pricing_source',
                                           'security_type',
                                           'market_sector',
                                           'BBGID',
                                           'BBGID_composite',
                                           'BSID',
                                           'unique_id'])

    def reduce_bsym_record(record):
        """Eliminate non-essential fields."""
        return BSYMRecord._make((record.NAME,
                                 record.ID_BB_SEC_NUM_DES,
                                 record.FEED_SOURCE,
                                 record.SECURITY_TYP,
                                 record.MARKET_SECTOR_DES,
                                 record.ID_BB_GLOBAL,
                                 record.COMPOSITE_ID_BB_GLOBAL,
                                 record.ID_BB_SEC_NUM_SRC,
                                 record.ID_BB_UNIQUE))

Answer: You're currently importing the `csv` module but not using it. If you _were_ using it, you could use the [`csv.DictReader`](http://docs.python.org/3.3/library/csv.html#csv.DictReader) class to create a dictionary instead of a list for each line in the file. You can construct a `namedtuple` using keyword arguments, but it doesn't ignore spurious ones. So you'll still need to filter them manually - but you can now do this with a dict comprehension, rather than a different namedtuple (this assumes the `BSYMRecord` fields are named after the csv columns you want to keep):

    for line in csvfile:
        yield BSYMRecord(**{k: v for k, v in line.items() if k in BSYMRecord._fields})

The trick is getting the DictReader set up in the first place. It needs a file-like object that yields strings; `ZipFile.open` gives a file-like object that yields _bytes_, and can't take an encoding. The [codecs](http://docs.python.org/3.3/library/codecs.html) module comes to the rescue here - you can get a StreamReader that transparently decodes utf8 bytes to strings for you like this:

    import codecs
    utf8 = codecs.lookup('utf8').streamreader

And use it like so (the file is pipe-delimited, so tell the reader that too):

    for filename in zipfile.namelist():
        with zipfile.open(filename) as f:
            csvfile = csv.DictReader(utf8(f), delimiter='|')
            for line in csvfile:
                yield BSYMRecord(**{k: v for k, v in line.items() if k in BSYMRecord._fields})
How suitable is opting for RethinkDB instead of traditional SQL for a JSON API? Question: I am building the back-end for my web app; it would act as an API for the front-end and it will be written in Python (Flask, to be precise). After taking some decisions regarding design and implementation, I got to the database part. And I started thinking whether NoSQL data storage may be more appropriate for my project than traditional SQL databases. Following is a basic functionality description which should be handled by the database and then a list of pros and cons I could come up with regarding to which type of storage should I opt for. Finally some words about why I have considered RethinkDB over other NoSQL data storages. **Basic functionality of the API** The API consists of only a few models: `Artist`, `Song`, `Suggestion`, `User` and `UserArtists`. I would like to be able to add a `User` with some associated data and link some `Artist`s to it. I would like to add `Song`s to `Artist`s on request, and also generate a `Suggestion` for a `User`, which will contain an `Artist` and a `Song`. Maybe one of the most important parts is that `Artist`s will be periodically linked to `User`s (and also `Artist`s can be removed from the system -- hence from `User`s too -- if they don't satisfy some criteria). `Song`s will also be dynamically added to `Artist`s. All this means is that `User`s don't have a fixed set of `Artist`s and nor do `Artist`s have a fixed set of `Song`s -- they will be continuously updating. **Pros** _for NoSQL_ : * Flexible schema, since not every `Artist` will have a FacebookID or `Song` a SoundcloudID; * While a JSON API, I believe I would benefit from the fact that records are stored as JSON; * I believe the number of `Song`s, but especially `Suggestion`s will raise quite a bit, hence NoSQL will do a better job here; _for SQL_ : * It's fixed schema may come in handy with relations between models; * Flask has support for SQLAlchemy which is very helpful in defining models; **Cons** _for NoSQL_ : * Relations are harder to implement and updating models transaction-like involves a bit of code; * Flask doesn't have any wrapper or module to ease things, hence I will need to implement some kind of wrapper to help me make the code more readable while doing database operations; * I don't have any certainty on how should I store my records, especially `UserArtist`s _for SQL_ : * Operations are bulky, I have to define schemas, check whether columns have defaults, assign defaults, validate data, begin/commit transactions -- I believe it's too much of a hassle for something simple like an API; **Why RethinkDB?** I've considered RehinkDB for a possible implementation of NoSQL for my API because of the following: * It looks simpler and more lightweight than other solutions; * It has native Python support which is a big plus; * It implements table joins and other things which could come in handy in my API, which has some relations between models; * It is rather new, and I see a lot of implication and love from the community. There's also the will to continuously add new things that leverage database interaction. All these being considered, I would be glad to hear any advice on whether NoSQL or SQL is more appropiate for my needs, as well as any other pro/con on the two, and of course, some corrections on things I haven't stated properly. Answer: I'm working at RethinkDB, but that's my unbiased answer as a web developer (at least as unbiased as I can). 
* Flexible schemas are nice from a developer's point of view (and in your case). Like you said, with something like PostgreSQL you would have to format all the data you pull from third parties (SoundCloud, Facebook etc.). And while it's not something really hard to do, it's not something enjoyable.
* Being able to join tables is, for me, the natural way of doing things (like for user/userArtist/artist). While you could have a structure where a user document contains its artists, it is going to be unpleasant to work with when you need to retrieve artists and, for each of them, a list of users (see the ReQL sketch after this answer).

The first point is something common in NoSQL databases, while JOIN operations are more of a SQL-database thing. You can see RethinkDB as something providing the best of both worlds. I believe that developing with RethinkDB is easy, fast and enjoyable, and that's what I am looking for as a web developer. There is however one thing that you may need and that RethinkDB does not deliver, which is transactions. If you need **atomic** updates on multiple tables (or documents - like if you have to transfer money between users), you are definitely better off with something like PostgreSQL. If you just need updates on multiple tables, RethinkDB can handle that. And like you said, while RethinkDB is new, the community is amazing, and we - at RethinkDB - care a lot about our users. If you have more questions, I would be happy to answer them : )
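To illustrate the join point above, here is a rough sketch of what the user/userArtist/artist lookup could look like in ReQL with the Python driver; the table names, field names and the `some_user_id` value are made up for the example and are not from the question:

    import rethinkdb as r

    conn = r.connect('localhost', 28015, db='musicapp')
    some_user_id = 'e2b1c2f0-...'   # placeholder

    # All artists linked to one user, going through the join table.
    artists = (r.table('user_artists')
                .filter({'user_id': some_user_id})
                .eq_join('artist_id', r.table('artists'))  # joins on the artists table's primary key
                .zip()                                      # merge the joined documents
                .run(conn))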
Python which built in exception to use Question: As I am sure you are well aware, Python, as do most programming languages, comes with [built-in exceptions](http://docs.python.org/2/library/exceptions.html). I have gone through the list, and cannot deduce which would be appropriate. Of course I can make my own exception, but this is a fairly standard error that would be excepted. This is an error based on instance relations. Instances of a class are related to **only some** of the other instances. Computations can be made depending on the different connections. This error will be `raise`d if a computation is attempted on an unrelated instance.

# example

    class Foo:
        def __init__(self, related=None):
            '''related is a list of Foo instances'''
            if related is not None:
                self.related = related
            else:
                self.related = []

        def compute(self, instance):
            if instance in self.related:
                # execute code
            else:
                raise SomeError("You cannot make a computation with an unrelated instance")

# my thoughts

To be honest, it seems like `ValueError` would make the most sense because the value is not allowed, but for some reason this does not fully sit well with me. The fact that there is no relation is the important part of this error, not simply the fact that the value attempted is not allowed. **Is there a better exception than ValueError for my logic?** note: I understand that `ValueError` may just be the right answer, but I am curious if there is something more precise that I may not have been able to see the connection with when I went through the documentation. Answer: To me, `unrelated instance` sounds like you want to do something with an instance of the wrong type. What about choosing `TypeError`?

> Raised when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch.

[Source](http://docs.python.org/2.7/library/exceptions.html#exceptions.TypeError) EDIT based on your comment: [Documentation](http://docs.python.org/2.7/library/exceptions.html#exceptions.ValueError) says:

> ValueError Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.

It is the right type - as you stated in your comment - but it has an inappropriate value. The situation can't be described by `IndexError`, so I'd go for `ValueError`.
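If neither built-in feels precise enough on its own, a small subclass keeps `except ValueError` working for callers while naming the actual problem; this is just one option, not something the standard library prescribes:

    class UnrelatedInstanceError(ValueError):
        """Raised when a computation is attempted with an unrelated instance."""

    # inside Foo.compute:
    #     raise UnrelatedInstanceError(
    #         "You cannot make a computation with an unrelated instance")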
Appengine endpoints-proto-datastore issue with user_required: No records returned Question: I'm facing an issue on a deployed appengine app: It does not return me my locations using Google's API explorer. I did define a ./datastore_models/location.py file: #!/usr/bin/env python # -*- coding: utf-8 -*- from google.appengine.ext import ndb from google.appengine.api import search from endpoints_proto_datastore.ndb import EndpointsModel class Location(EndpointsModel): name = ndb.StringProperty(required=True, verbose_name="Name") description = ndb.TextProperty(required=True, verbose_name="Description") address = ndb.StringProperty(required=True, verbose_name="Address") coordinate = ndb.GeoPtProperty(required=False, verbose_name="Coordinate") enable_geocode = ndb.BooleanProperty(default=False, verbose_name="Enable geocode") active = ndb.BooleanProperty(default=True, verbose_name="Active") owner = ndb.UserProperty(required=False, verbose_name="Owner") And I do have got a simple ./main.py file: #!/usr/bin/env python # -*- coding: utf-8 -*- import endpoints from protorpc import remote from datastore_models.location import Location WEB_CLIENT_ID = 'ID.apps.googleusercontent.com' @endpoints.api(name='uemd', version='v1', description='API for locations, objects and events', audiences=[WEB_CLIENT_ID, endpoints.API_EXPLORER_CLIENT_ID]) class UemDAPI(remote.Service): @Location.method(user_required=True, request_fields=('name', 'description', 'address', 'enable_geocode'), path='location', http_method='POST', name='location.insert') def LocationInsert(self, location): location.owner = endpoints.get_current_user() location.put() return location @Location.method(user_required=True, request_fields=('id',), path='location/{id}', http_method='GET', name='location.get') def LocationGet(self, location): if not location.from_datastore: raise endpoints.NotFoundException('Location not found.') return location @Location.query_method(user_required=True, query_fields=('active', 'limit', 'order', 'pageToken'), path='locations', name='location.list') def LocationList(self, query): return query.filter(Location.owner == endpoints.get_current_user()) app = endpoints.api_server([UemDAPI], restricted=False) When I now do call "GET <https://uemd- core.appspot.com/_ah/api/uemd/v1/locations>" I do not receive the locations I did create before using the POST method. I do just receive: { "kind": "uemd#locationItem", "etag": "\"llW4_dZC50NEF69z_hZurfpZb1s/wnbopBN8xedxeOulX5Nry_3uwCw\"" } Executing "GET <https://uemd- core.appspot.com/_ah/api/uemd/v1/location/5634387206995968>" does return one location as expected. In the Appengine Logs I see the following debug message: id_token verification failed: Can't parse header:ɭ� But I also see this message for the method which does return a single location... Running the same query on the dev_appserver.py raises: RuntimeError: UnicodeDecodeError('utf8', "id_token verification failed: Can't parse header: \xc9\xad\xbd", 52, 53, 'invalid start byte') So far I did follow the examples of endpoints-proto-datastore, but I did want to store my datastore models in an extra directory (datastore_models). The location.insert and location.get methods are working, but no the location.list. All of those methods do show the "Can't parse header" message... How can I fix this? Cheers I do still have got the same issue. Retrieving one location does work, multiple doesn't. 
Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate req.respond() File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond self.server.gateway(self).respond() File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond response = self.req.server.wsgi_app(self.env, self.start_response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 269, in __call__ return app(environ, start_response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware response_body = iter(application(environ, wrapped_start_response)) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 148, in __call__ self._flush_logs(response.get('logs', [])) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 284, in _flush_logs apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall return stubmap.MakeSyncCall(service, call, request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall rpc.CheckSuccess() File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl self.request, self.response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall self._MakeRealSyncCall(service, call, request, response) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 234, in _MakeRealSyncCall raise pickle.loads(response_pb.exception()) RuntimeError: UnicodeDecodeError('utf8', "id_token verification failed: Can't parse header: \xc9\xad\xbd", 52, 53, 'invalid start byte') Answer: This is not related to endpoints-proto-datastore. It's an issue with googleappengine and it seems like it's going to be fixed in the next release (1.8.9): <https://code.google.com/p/googleappengine/issues/detail?id=10285> Cheers
Building Pillow for PyPy - unresolved externals Question: I've been struggling with building Pillow for PyPy (2.1) on my Windows (XP) installation, using easy_install. Problems seems to be related to it missing needed python/pypy header(?). I've been searching Google both up and down for days, hoping to find a few who could elaborate on the problems I am having, but not finding anything helpful. The error log I am getting is: ... C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IlibImaging -IC:\pypy-21\include -IC:\pypy-21\include /Tc libImaging\UnsharpMask.c /Fobuild\temp.win32-2.7\Release\libImaging\UnsharpMask.obj ... decode.obj : error LNK2001: unresolved external symbol __imp__PyType_Type encode.obj : error LNK2001: unresolved external symbol __imp__PyType_Type map.obj : error LNK2001: unresolved external symbol __imp__PyType_Type encode.obj : error LNK2019: unresolved external symbol __imp___PyString_Resize r eferenced in function __encode encode.obj : error LNK2019: unresolved external symbol __imp__PyErr_SetFromErrno referenced in function __encode_to_file encode.obj : error LNK2019: unresolved external symbol __imp__PyExc_SystemError referenced in function __setimage display.obj : error LNK2019: unresolved external symbol __imp__PyList_Append ref erenced in function _list_windows_callback@8 display.obj : error LNK2019: unresolved external symbol __imp__PyErr_Print refer enced in function _callback_error display.obj : error LNK2019: unresolved external symbol __imp__PyFile_WriteStrin g referenced in function _callback_error display.obj : error LNK2019: unresolved external symbol __imp__PySys_GetObject r eferenced in function _callback_error display.obj : error LNK2019: unresolved external symbol _PyObject_CallFunction r eferenced in function _windowCallback@16 path.obj : error LNK2001: unresolved external symbol _PyObject_CallFunction display.obj : error LNK2019: unresolved external symbol __imp__PyThreadState_Swa p referenced in function _windowCallback@16 display.obj : error LNK2019: unresolved external symbol __imp__PyThreadState_Get referenced in function _PyImaging_CreateWindowWin32 path.obj : error LNK2019: unresolved external symbol __imp__PyErr_ExceptionMatch es referenced in function _PyPath_Flatten path.obj : error LNK2019: unresolved external symbol __imp__PyNumber_Check refer enced in function _PyPath_Flatten path.obj : error LNK2019: unresolved external symbol __imp__PyExc_RuntimeError r eferenced in function _path_clip_polygon build\lib.win32-2.7\_imaging.pypy-21.pyd : fatal error LNK1120: 61 unresolved ex ternals error: command 'C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\link.e xe' failed with exit status 1120 I really struggle to see why this is happening, I actually have no clue. I am easily able to build packages for CPython. As far as I see dependencies folder is included as well: "-IC:\pypy-21\include"(?). For more of the log you can look at <http://pastebin.com/raw.php?i=kKnNpRgr> **Edit - The solution** After grabbing PyPy 2.2.1 I found that I was indeed missing some files in my "PyPy-21/Include"-path. And so.. Pillow installed just fine with the newfound header, and it's running flawlessly! =) Answer: PyPy doesn't implement the entire C extension module API found in CPython. Or rather, PyPy implements a reasonable subset of the symbols a Cext module is most likely to use, but PyPY couldn't realistically provide every symbol CPython provides without inhaling most of CPython. 
I wish CPython were more careful about what symbols it exposes to external code, but I don't believe it is. PyPy is best with Pure Python modules. It can run (rather slowly) a subset of C extension modules too. You may be fighting a losing battle trying to get PyPy to import this particular CPython C extension module. Bear in mind that you cannot JIT C code in PyPy.
In Python, importing twice with class instantiation? Question: In `models.py` I have:

    ...
    db = SQLAlchemy(app)

    class User(db.Document):
        ...

In my app both `serve.py` and `models.py` call:

    from models import User

Is that double import going to instantiate the db twice and potentially cause a problem? Answer: > Is that double import going to instantiate the db twice and potentially
> cause a problem?

No it will not. Once a module is imported it remains available regardless of any further imports via the `import` statement. The module is stored in `sys.modules` once imported. If you want to **reload the module** you have to use `reload(module)`.

**Example:** `bar.py`

    xs = [1, 2, 3]

**Importing twice:**

    >>> from bar import xs
    >>> id(xs)
    140211778767256
    >>> import bar
    >>> id(bar.xs)
    140211778767256

Notice that the **identities** are identical?

**Caching:**

    >>> import sys
    >>> sys.modules["bar"] is bar
    True
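To make the caching behaviour concrete, here is a small hedged sketch reusing the `bar` module from the answer above (Python 2 style, matching the rest of this code; the comment notes the Python 3 equivalent):

    import sys

    import bar                    # first import: executes bar.py and caches the module
    first = sys.modules["bar"]

    import bar                    # second import: served from the cache, no re-execution
    assert sys.modules["bar"] is first

    reload(bar)                   # Python 2 built-in; on Python 3 use importlib.reload(bar)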
Deploying Flask app to Heroku - "Connection in Use" Question: I'm deploying a simple Flask app to Heroku - (first time using Flask and Heroku both). When I try to deploy, I get an "Application Error" and the page tells me to try again in a few minutes. The logs state - the "connection [is] in use", retries a few times and then the worker exits (I can post the logs if that is helpful). My demo.py file:

    import flask, flask.views
    import os
    import urllib2
    from bs4 import BeautifulSoup

    opener = urllib2.build_opener()

    app = flask.Flask(__name__)
    app.secret_key = "bacon"

    class View(flask.views.MethodView):
        def get(self):
            return flask.render_template('index.html')

        def post(self):
            url = (flask.request.form['url'])
            ourUrl = opener.open(url).read()
            soup = BeautifulSoup(ourUrl)
            title = soup.title.text
            recipe = soup.find("div", {"id": "recipe"}).getText()
            flask.flash(title)
            flask.flash(recipe)
            return self.get()

    app.add_url_rule('/', view_func=View.as_view('main'), methods=['GET', 'POST'])
    app.debug = True
    app.run()

My procfile is:

    web: gunicorn demo:app

If I change the procfile to web: python demo.py, I am able to run the app locally using Foreman but still cannot deploy to Heroku. Any help is very much appreciated. This is my first time doing this!! Thank you. Answer: I figured it out. Need to add the following before app.run()

    if __name__ == "__main__":

Now it runs fine on Heroku.
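For clarity, a minimal sketch of what the end of demo.py looks like with that guard in place (the rest of the file is unchanged); the point is that gunicorn imports the module and must not trigger a second, conflicting `app.run()`:

    app.add_url_rule('/', view_func=View.as_view('main'), methods=['GET', 'POST'])

    if __name__ == "__main__":
        # Only start Flask's built-in server when run directly (e.g. via Foreman
        # with "web: python demo.py"); under gunicorn this block is skipped.
        app.debug = True
        app.run()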
Monitoring output of parallel subprocess in python 2.7 - pocketsphinx Question: I'm new to python and my problem might be rather obvious and simple. My goal is to run pocketsphinx_continuous in parallel with my own program, and when needed I would like to poll for output created by pocketsphinx. I found different questions and answers related to parallel processing etc. in python, but I need to keep the sub-process running as long as my own program is running. I'm using the following command:

    command ='pocketsphinx_continuous -hmm ~/speech/et/models/hmm/est16k.cd_ptm_1000-mapadapt -jsgf ~/speech/et/models/lm/robot2.jsgf -dict ~/speech/et/robot.dict 2>/dev/null | grep "^000"'

Output of pocketsphinx will be something like:

    0000001:some text;
    0000002: some more text.

Any references or hints are most welcome! Thank you!

Edit: Thanks for your response, but I solved it as follows:

    def start(self):
        print "start"
        command ='pocketsphinx_continuous -hmm ~/speech/et/models/hmm/est16k.cd_ptm_1000-mapadapt -jsgf ~/speech/et/models/lm/robot2.jsgf -dict ~/speech/et/robot.dict'
        pocketsphinx = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
        thread.start_new_thread(self.listen_to_pocketsphinx, (pocketsphinx,))

    def listen_to_pocketsphinx(self,pocketsphinx):
        while True:
            line = pocketsphinx.stdout.readline()
            if line.startswith("000"):
                sentence = line.partition(": ")[2].strip()
                if sentence:
                    self.direction = sentence
                else:
                    self.direction = ""

I'm not sure that it's the most elegant way but it worked for me. So for me the case is closed. Thank you! Answer: My personal recommendation is that you use an **Asynchronous Application Framework**. There are many out there with varying features and levels of simplicity and complexity. I have a personal bias towards [circuits](http://circuitsframework.com/) and very generically you would implement something like this (_using circuits-dev from <https://bitbucket.org/circuits/circuits-dev>_):

    #!/usr/bin/env python

    import sys

    from circuits import Component
    from circuits.io import stdout, Process, write

    class Ping(Component):

        stdout = stdout

        def init(self, host):
            self.p = Process(["/bin/ping", host]).register(self)
            self.p.start()

        def read(self, data):
            self.fire(write(data), stdout)

    app = Ping(sys.argv[1])
    app.run()

Something like this could also be running a simple web server for example:

    from circuits.web import Controller, Server

    class Root(Controller):

        def index(self):
            return "Hello World!"

    Server(("0.0.0.0", 8000)).register(app)
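If pulling in a full framework feels heavy, a minimal standard-library sketch of the same polling idea is shown below (Python 2 names; the pocketsphinx command is abbreviated, and the queue is an assumed helper so the main program can poll without blocking):

    import subprocess
    import threading
    import Queue

    output = Queue.Queue()
    proc = subprocess.Popen("pocketsphinx_continuous ... | grep '^000'",
                            shell=True, stdout=subprocess.PIPE)

    def reader():
        # Push every recognised line into the queue as it arrives.
        for line in iter(proc.stdout.readline, ''):
            output.put(line.strip())

    threading.Thread(target=reader).start()

    # Elsewhere, whenever the main program wants to poll:
    try:
        latest = output.get_nowait()
    except Queue.Empty:
        latest = None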
netCDF to *.csv without Loops(!) Question: I'm having some performance and 'ugly code' problem and maybe some of you can help. I have to export data from **netCDF-files** to ***.csv**. For this I wrote some python code. Let's take a 3-dim netcdf-File: def to3dim_csv(): var = ncf.variables['H2O'] #e.g. data for 'H2O' values one,two,three = var.shape #variable dimension shape e.g. (551,42,94) dim1,dim2,dim3 = var.dimensions #dimensions e.g. (time,lat,lon) if crit is not None: bool1 = foo(dim1,crit,ncf) #boolean table: ("value important?",TRUE,FALSE) bool2 = foo(dim2,crit,ncf) bool3 = foo(dim3,crit,ncf) writer.writerow([dim1,dim2,dim3,varn]) for i in range(one): for k in range(two): for l in range(three): if bool1[i] and bool2[k] and bool3[l]: writer.writerow([ ncf.variables[dim1][i], ncf.variables[dim2][k], ncf.variables[dim3][l], var[i,k,l], ]) ofile.close() # Sample csv output is like: # time,lat,lon,H2O # 1,90,10,100 # 1,90,11,90 # 1,91,10,101 I want to remove the `for val in range(d):` blocks. Perhaps using a **recursiv function** , like: var = ncf.variables['H2O'] dims = [d for d in var.dimensions] shapes = [var.variables[d].shape for d in dims] bools = [bool_table(d,crit,ncf) for d in dims] dims.append('H2O') writer.writerow(dims) magic_function(data) def magic_function(data): [enter code] writer.writerow(data) magic_function(left_data) **Update:** For anyone who is interested. This works instantaneous ... def data_to_table(dataset, var): assert isinstance(dataset,xr.Dataset), 'Dataset must be xarray.Dataset' obj = getattr(dataset, var) table = np.zeros((obj.data.size, obj.data.ndim+1), dtype=np.object_) table[:,0] = obj.data.flat for i,d in enumerate(obj.dims): repeat = np.prod(obj.data.shape[i+1:]) tile = np.prod(obj.data.shape[:i]) dim = getattr(dataset, d) dimdata = dim.data dimdata = np.repeat(dimdata, repeat) dimdata = np.tile(dimdata, tile) table[:,i+1] = dimdata.flat return table def export_to_csv(dataset, var, filename, size=None): obj = getattr(dataset, var) header = [var] + [x for x in obj.dims] tabular = data_to_table(dataset, var) size = slice(None,size,None) if size else slice(None,None,None) with open(filename, 'w') as f: writer = csv.writer(f,dialect=csv.excel) writer.writerow(header) writer.writerows(tabular[size]) Answer: Something like this. Get the indexes of bol1\2\3 and combine them while fetching the relevant values. with open('numpy.csv', 'wb') as f: out_csv = csv.writer(f) header = ['dim1','dim2','dim3','varn'] out_csv.writerow(header) bol1_indices = np.nonzero(bol1)[0] bol2_indices = np.nonzero(bol2)[0] bol3_indices = np.nonzero(bol3)[0] out_csv.writerows(([a[i, k, l], dim1[i], dim2[k], dim3[l]] for i in bol1_indices for k in bol2_indices for l in bol3_indices))
Matplotlib: use 2 datasets different length datasets to plot x and y Question: I've been trying to use file gages.txt for x axis values and file data.rei for y axis values. I've run into an error because the two file are not the same length. I want to plot a separate graph for each time the `pestID` in `gages.txt` matches the Name in `data.rei`. Here is an excerpt of gages.txt gage date pestID Measurement(cfd) weight group 06459175 10/1/1993 devfl1 12788474.59 1.40309E-06 devflux 06459175 11/1/1993 devfl2 12208086.39 1.40309E-06 devflux 06459175 12/1/1993 devfl3 13559062.49 1.40309E-06 devflux 06459175 1/1/1994 devfl4 12419465.45 1.40309E-06 devflux 06459175 2/1/1994 devfl5 12070242.32 1.40309E-06 devflux 06459175 3/1/1994 devfl6 14298632.14 1.40309E-06 devflux 06459175 4/1/1994 devfl7 13348094.29 1.40309E-06 devflux 06459175 5/1/1994 devfl8 13164766.46 1.40309E-06 devflux 06459175 6/1/1994 devfl9 12737079.24 1.40309E-06 devflux 06459175 7/1/1994 devfl10 12663994.86 1.40309E-06 devflux 06459175 8/1/1994 devfl11 13164849.87 1.40309E-06 devflux 06459200 10/1/1966 devfl253 17304667.25 1.20897E-06 devflux 06459200 11/1/1966 devfl254 16790039.95 1.20897E-06 devflux 06459200 12/1/1966 devfl255 13414046.27 1.20897E-06 devflux 06459200 1/1/1967 devfl256 13146007.51 1.20897E-06 devflux 06459200 2/1/1967 devfl257 15104020.28 1.20897E-06 devflux 06459200 3/1/1967 devfl258 16573573.51 1.20897E-06 devflux 06459200 4/1/1967 devfl259 18090091.13 1.20897E-06 devflux 06459200 5/1/1967 devfl260 18112268.35 1.20897E-06 devflux 06459200 6/1/1967 devfl261 16365348.96 1.20897E-06 devflux 06459200 7/1/1967 devfl262 16490349.44 1.20897E-06 devflux 06459200 8/1/1967 devfl263 16167208.44 1.20897E-06 devflux 06459200 9/1/1967 devfl264 15875425.16 1.20897E-06 devflux 06776500 7/1/1961 devfl6725 27784610.2 7.33613E-07 devflux 06776500 8/1/1961 devfl6726 27008782.61 7.33613E-07 devflux 06776500 9/1/1961 devfl6727 27727258.45 7.33613E-07 devflux 06776500 10/1/1961 devfl6728 30051668.13 7.33613E-07 devflux 06776500 11/1/1961 devfl6729 28593805.65 7.33613E-07 devflux 06776500 12/1/1961 devfl6730 20188155.91 7.33613E-07 devflux 06776500 1/1/1962 devfl6731 18106275.83 7.33613E-07 devflux 06776500 2/1/1962 devfl6732 19852941.78 7.33613E-07 devflux 06776500 3/1/1962 devfl6733 26060013.78 7.33613E-07 devflux Here is excerpt of data.rei: Name Group Measured Modelled Residual Weight pdwl1 pdwls 2083.620 2089.673 -6.052805 9.4067000E-04 pdwl2 pdwls 2186.748 2199.771 -13.02284 8.9630800E-04 pdwl3 pdwls 2150.983 2160.259 -9.275730 9.1121100E-04 pdwl4 pdwls 2133.283 2142.970 -9.686504 9.1877100E-04 pdwl5 pdwls 2241.741 1769.331 472.4097 8.7432100E-04 pst_1 devwls 2191.200 2094.658 96.54200 1.000000 pst_2 devwls 2194.160 2094.070 100.0900 1.000000 pst_3 devwls 2190.790 2093.375 97.41500 1.000000 pst_4 devwls 2191.700 2092.671 99.02900 1.000000 pst_5 devwls 2188.260 2092.739 95.52100 1.000000 devfl1 devflux 1.2788475E+07 1.2199410E+07 589064.6 1.4030900E-06 devfl2 devflux 1.2208086E+07 1.2044727E+07 163359.4 1.4030900E-06 devfl3 devflux 1.3559062E+07 1.1423958E+07 2135104. 1.4030900E-06 devfl4 devflux 1.2419465E+07 1.1141419E+07 1278046. 1.4030900E-06 devfl5 devflux 1.2070242E+07 1.0925833E+07 1144409. 
1.4030900E-06 devfl10673 devflux 1.5491064E+07 1.0987895E+08 -9.4387886E+07 3.3832500E-07 devfl10674 devflux 1.4034349E+07 1.0585104E+08 -9.1816691E+07 3.3832500E-07 devfl10675 devflux 1.8542658E+07 1.0808722E+08 -8.9544562E+07 3.3832500E-07 devfl10676 devflux 2.6080914E+07 1.1146742E+08 -8.5386506E+07 3.3832500E-07 devfl10677 devflux 2.7600680E+07 1.1286638E+08 -8.5265700E+07 3.3832500E-07 devfl10678 devflux 5.7568459E+07 1.2289897E+08 -6.5330511E+07 3.3832500E-07 devfl10679 devflux 7.9939784E+07 1.2019735E+08 -4.0257566E+07 3.3832500E-07 devfl10772 devflux 5.8896718E+07 1.3656509E+08 -7.7668372E+07 3.3832500E-07 devfl10773 devflux 9.1145662E+07 1.3911792E+08 -4.7972258E+07 3.3832500E-07 devfl10774 devflux 7.6386027E+07 1.3618379E+08 -5.9797763E+07 3.3832500E-07 devfl10775 devflux 8.6729650E+07 1.5717141E+08 -7.0441760E+07 3.3832500E-07 devfl10776 devflux 1.3065667E+08 1.5286262E+08 -2.2205948E+07 3.3832500E-07 The basic loop i want to do is "if data['Name'] == gages['pestID'], then plot gages['date'] on x-axis and data['Measured'] on y axis]" Here is my script: import numpy as np import matplotlib.pyplot as plt data = np.genfromtxt('data.rei', dtype=None, names=True, skip_header=6) gages = np.genfromtxt('gages.txt', dtype=None, names=True, delimiter=('\t'), autostrip=True, usecols=(0, 1, 2)) font = {'size' : 10,} #-----Dev BFs__________________________________ #plt.rc('axes', color_cycle=['r']) #for gages['gage'] in gages: if gages['pestID'] == data['Name']: plt.scatter(gages['date'], data['Measured'],gages['date'], data['Modelled']) plt.legend('Measured','Modelled') #plt.plot([0,4000],[0,4000]) plt.xlabel('date', fontdict=font) plt.ylabel('flux (cfd)', fontdict=font) plt.title.gages(['gage'], fontdict=font) #plt.xlim(1000,4000) #plt.ylim(-2000,4000) plt.show() else: print 'no match' Here is the error: ValueError Traceback (most recent call last) C:\Program Files\Enthought\Canopy\App\appdata\canopy-1.1.0.1371.win-x86_64\lib\site-packages\IPython\utils\py3compat.pyc in execfile(fname, glob, loc) 174 else: 175 filename = fname --> 176 exec compile(scripttext, filename, 'exec') in glob, loc 177 else: 178 def execfile(fname, *where): C:\From_LT017_old D drive\Projects\ELM\FY14\python\elm3_1-4 devFlux plot from rei.py in <module>() 13 #plt.rc('axes', color_cycle=['r']) 14 #for gages['gage'] in gages: ---> 15 if gages['pestID'] == data['Name']: 16 plt.scatter(gages['date'], data['Measured'],gages['date'], data['Modelled']) 17 plt.legend('Measured','Modelled') ValueError: shape mismatch: objects cannot be broadcast to a single shape I am perplexed because i thought the loop would take care of this problem. Answer: I had to guess that for every row of `gages`, `pestID` is unqiue, and that for every row of `data`, `Name` is unique. 
Using your data: gages_txt = """gage date pestID Measurement(cfd) weight group 06459175 10/1/1993 devfl1 12788474.59 1.40309E-06 devflux 06459175 11/1/1993 devfl2 12208086.39 1.40309E-06 devflux 06459175 12/1/1993 devfl3 13559062.49 1.40309E-06 devflux 06459175 1/1/1994 devfl4 12419465.45 1.40309E-06 devflux 06459175 2/1/1994 devfl5 12070242.32 1.40309E-06 devflux 06459175 3/1/1994 devfl6 14298632.14 1.40309E-06 devflux 06459175 4/1/1994 devfl7 13348094.29 1.40309E-06 devflux 06459175 5/1/1994 devfl8 13164766.46 1.40309E-06 devflux 06459175 6/1/1994 devfl9 12737079.24 1.40309E-06 devflux 06459175 7/1/1994 devfl10 12663994.86 1.40309E-06 devflux 06459175 8/1/1994 devfl11 13164849.87 1.40309E-06 devflux 06459200 10/1/1966 devfl253 17304667.25 1.20897E-06 devflux 06459200 11/1/1966 devfl254 16790039.95 1.20897E-06 devflux 06459200 12/1/1966 devfl255 13414046.27 1.20897E-06 devflux 06459200 1/1/1967 devfl256 13146007.51 1.20897E-06 devflux 06459200 2/1/1967 devfl257 15104020.28 1.20897E-06 devflux 06459200 3/1/1967 devfl258 16573573.51 1.20897E-06 devflux 06459200 4/1/1967 devfl259 18090091.13 1.20897E-06 devflux 06459200 5/1/1967 devfl260 18112268.35 1.20897E-06 devflux 06459200 6/1/1967 devfl261 16365348.96 1.20897E-06 devflux 06459200 7/1/1967 devfl262 16490349.44 1.20897E-06 devflux 06459200 8/1/1967 devfl263 16167208.44 1.20897E-06 devflux 06459200 9/1/1967 devfl264 15875425.16 1.20897E-06 devflux 06776500 7/1/1961 devfl6725 27784610.2 7.33613E-07 devflux 06776500 8/1/1961 devfl6726 27008782.61 7.33613E-07 devflux 06776500 9/1/1961 devfl6727 27727258.45 7.33613E-07 devflux 06776500 10/1/1961 devfl6728 30051668.13 7.33613E-07 devflux 06776500 11/1/1961 devfl6729 28593805.65 7.33613E-07 devflux 06776500 12/1/1961 devfl6730 20188155.91 7.33613E-07 devflux 06776500 1/1/1962 devfl6731 18106275.83 7.33613E-07 devflux 06776500 2/1/1962 devfl6732 19852941.78 7.33613E-07 devflux 06776500 3/1/1962 devfl6733 26060013.78 7.33613E-07 devflux """ data_rei_txt = """Name Group Measured Modelled Residual Weight pdwl1 pdwls 2083.620 2089.673 -6.052805 9.4067000E-04 pdwl2 pdwls 2186.748 2199.771 -13.02284 8.9630800E-04 pdwl3 pdwls 2150.983 2160.259 -9.275730 9.1121100E-04 pdwl4 pdwls 2133.283 2142.970 -9.686504 9.1877100E-04 pdwl5 pdwls 2241.741 1769.331 472.4097 8.7432100E-04 pst_1 devwls 2191.200 2094.658 96.54200 1.000000 pst_2 devwls 2194.160 2094.070 100.0900 1.000000 pst_3 devwls 2190.790 2093.375 97.41500 1.000000 pst_4 devwls 2191.700 2092.671 99.02900 1.000000 pst_5 devwls 2188.260 2092.739 95.52100 1.000000 devfl1 devflux 1.2788475E+07 1.2199410E+07 589064.6 1.4030900E-06 devfl2 devflux 1.2208086E+07 1.2044727E+07 163359.4 1.4030900E-06 devfl3 devflux 1.3559062E+07 1.1423958E+07 2135104. 1.4030900E-06 devfl4 devflux 1.2419465E+07 1.1141419E+07 1278046. 1.4030900E-06 devfl5 devflux 1.2070242E+07 1.0925833E+07 1144409. 
1.4030900E-06 devfl10673 devflux 1.5491064E+07 1.0987895E+08 -9.4387886E+07 3.3832500E-07 devfl10674 devflux 1.4034349E+07 1.0585104E+08 -9.1816691E+07 3.3832500E-07 devfl10675 devflux 1.8542658E+07 1.0808722E+08 -8.9544562E+07 3.3832500E-07 devfl10676 devflux 2.6080914E+07 1.1146742E+08 -8.5386506E+07 3.3832500E-07 devfl10677 devflux 2.7600680E+07 1.1286638E+08 -8.5265700E+07 3.3832500E-07 devfl10678 devflux 5.7568459E+07 1.2289897E+08 -6.5330511E+07 3.3832500E-07 devfl10679 devflux 7.9939784E+07 1.2019735E+08 -4.0257566E+07 3.3832500E-07 devfl10772 devflux 5.8896718E+07 1.3656509E+08 -7.7668372E+07 3.3832500E-07 devfl10773 devflux 9.1145662E+07 1.3911792E+08 -4.7972258E+07 3.3832500E-07 devfl10774 devflux 7.6386027E+07 1.3618379E+08 -5.9797763E+07 3.3832500E-07 devfl10775 devflux 8.6729650E+07 1.5717141E+08 -7.0441760E+07 3.3832500E-07 devfl10776 devflux 1.3065667E+08 1.5286262E+08 -2.2205948E+07 3.3832500E-07 """ And this code: import StringIO import pandas as pd import matplotlib.pyplot as plt gages = pd.read_csv(StringIO.StringIO(gages_txt), sep=None, parse_dates=[1]) data = pd.read_csv(StringIO.StringIO(data_rei_txt), sep=' ', skipinitialspace=True) gages.index=gages['pestID'] data.index=data['Name'] joined = gages.join(data) joined.index = joined['date'] joined['Measured'].dropna().plot(style='o') plt.show() You get: ![enter image description here](http://i.stack.imgur.com/5GHDL.png)
Simple pygame program to fix Question: I am new to Python and I have modified dodger (here's the link <http://inventwithpython.com/dodger.py>) to add ''goodies'', sprites similar to the baddies, but that give you score when you touch them; instead of killing you as the baddies do. (I have made a change at the start with easygui too, but it works fine). I am really confused as this code works (I mean this code **starts**) but the goodies don't appear, like if I didn't put them in at all. I have tried to figure out by myself what the problem is but I haven't found it. The source code is long but there are some comments to make it more readable. I think that the multimedia files are right because it doesn't give me error messages. Here you have the not working program: import pygame, random, sys from pygame.locals import * import easygui #Message to make the user decide the hardness of the game msg = 'Inserisci un numero da 1 a 20\n per la difficoltà: \n1 = Semplice\n 20 = Impossibile' title = 'Difficoltà' #Message to make the user decide the colour of the background of the game Difficoltà = easygui.enterbox(msg,title) msg = "Quale colore preferisci fra questi come sfondo?" choices = ["Nero","Blu","Verde"] COLORESCELTODALLUTENTE = easygui.buttonbox(msg,choices=choices) #Unused Values as it runs in fullscreen mode WINDOWWIDTH = 800 WINDOWHEIGHT = 600 #The text is white TEXTCOLOR = (255, 255, 255) #Changes the colour of the background according to the choice of the user if COLORESCELTODALLUTENTE == 'Nero': BACKGROUNDCOLOR = (0, 0, 0) elif COLORESCELTODALLUTENTE == 'Blu': BACKGROUNDCOLOR = (36, 68, 212) elif COLORESCELTODALLUTENTE == 'Verde': BACKGROUNDCOLOR = (36, 237, 52) #Frames per second the game will run at FPS = 40 #Description of the baddies baddie_type_1MINSIZE = 20 baddie_type_1MAXSIZE = 40 baddie_type_1MINSPEED = 4 baddie_type_1MAXSPEED = 5 ADDNEWbaddie_type_1RATE = 21 - int(Difficoltà) #Description of the goddies goddie_type_1MINSIZE = 20 goddie_type_1MAXSIZE = 40 goddie_type_1MINSPEED = 4 goddie_type_1MAXSPEED = 5 ADDNEWgoddie_type_1RATE = 10 #How fast you move with the arrows PLAYERMOVERATE = 5 def terminate(): pygame.quit() sys.exit() def waitForPlayerToPressKey(): while True: for event in pygame.event.get(): if event.type == QUIT: terminate() if event.type == KEYDOWN: if event.key == K_ESCAPE: # pressing escape quits terminate() return def playerHasHitbaddie_type_1(playerRect, baddies_type_1): for b in baddies_type_1: if playerRect.colliderect(b['rect_b']): return True return False def playerHasHitgoddie_type_1(playerRect, goddies_type_1): for g in goddies_type_1: if playerRect.colliderect(g['rect_g']): return True return False def drawText(text, font, surface, x, y): textobj = font.render(text, 1, TEXTCOLOR) textrect = textobj.get_rect() textrect.topleft = (x, y) surface.blit(textobj, textrect) # set up pygame, the window, and the mouse cursor pygame.init() mainClock = pygame.time.Clock() #This down here is windowed mode #windowSurface = pygame.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT)) #This down here is fullscreen mode windowSurface = pygame.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT), pygame.FULLSCREEN) pygame.display.set_caption('Dodger') pygame.mouse.set_visible(False) # set up fonts font = pygame.font.SysFont(None, 48) # set up sounds gameOverSound = pygame.mixer.Sound('Gameover.wav') pygame.mixer.music.load('Background.mp3') # set up images playerImage = pygame.image.load('Player.png') playerRect = playerImage.get_rect() baddie_type_1Image = 
pygame.image.load('Baddie_type_1.png') goddie_type_1Image = pygame.image.load('Goddie_type_1.png') # show the "Start" screen drawText('Dodger', font, windowSurface, (WINDOWWIDTH / 3), (WINDOWHEIGHT / 3)) drawText('Press a key to start.', font, windowSurface, (WINDOWWIDTH / 3) - 30, (WINDOWHEIGHT / 3) + 50) pygame.display.update() waitForPlayerToPressKey() topScore = 0 while True: # set up the start of the game baddies_type_1 = [] goddies_type_1 = [] score = 0 playerRect.topleft = (WINDOWWIDTH / 2, WINDOWHEIGHT - 50) moveLeft = moveRight = moveUp = moveDown = False reverseCheat = slowCheat = False baddie_type_1AddCounter = 0 goddie_type_1AddCounter = 0 pygame.mixer.music.play(-1, 0.0) while True: # the game loop runs while the game part is playing score += 1 # increase score for event in pygame.event.get(): if event.type == QUIT: terminate() if event.type == KEYDOWN: if event.key == ord('z'): reverseCheat = True if event.key == ord('x'): slowCheat = True if event.key == K_LEFT or event.key == ord('a'): moveRight = False moveLeft = True if event.key == K_RIGHT or event.key == ord('d'): moveLeft = False moveRight = True if event.key == K_UP or event.key == ord('w'): moveDown = False moveUp = True if event.key == K_DOWN or event.key == ord('s'): moveUp = False moveDown = True if event.type == KEYUP: if event.key == ord('z'): reverseCheat = False score = 0 if event.key == ord('x'): slowCheat = False score = 0 if event.key == K_ESCAPE: terminate() if event.key == K_LEFT or event.key == ord('a'): moveLeft = False if event.key == K_RIGHT or event.key == ord('d'): moveRight = False if event.key == K_UP or event.key == ord('w'): moveUp = False if event.key == K_DOWN or event.key == ord('s'): moveDown = False if event.type == MOUSEMOTION: # If the mouse moves, move the player where the cursor is. playerRect.move_ip(event.pos[0] - playerRect.centerx, event.pos[1] - playerRect.centery) # ize is for size # Add new baddies_type_1 at the top of the screen, if needed. if not reverseCheat and not slowCheat: baddie_type_1AddCounter += 1 if baddie_type_1AddCounter == ADDNEWbaddie_type_1RATE: baddie_type_1AddCounter = 0 baddies_type_1ize = random.randint(baddie_type_1MINSIZE, baddie_type_1MAXSIZE) newbaddie_type_1 = {'rect_b': pygame.Rect(random.randint(0, WINDOWWIDTH-baddies_type_1ize), 0 - baddies_type_1ize, baddies_type_1ize, baddies_type_1ize), 'speed_b': random.randint(baddie_type_1MINSPEED, baddie_type_1MAXSPEED), 'surface_b':pygame.transform.scale(baddie_type_1Image, (baddies_type_1ize, baddies_type_1ize)), } baddies_type_1.append(newbaddie_type_1) # ize is for size # Add new goddies_type_1 at the top of the screen, if needed. if not reverseCheat and not slowCheat: goddie_type_1AddCounter += 1 if goddie_type_1AddCounter == ADDNEWgoddie_type_1RATE: goddie_type_1AddCounter = 0 goddies_type_1ize = random.randint(goddie_type_1MINSIZE, goddie_type_1MAXSIZE) newgoddie_type_1 = {'rect_g': pygame.Rect(random.randint(0, WINDOWWIDTH-goddies_type_1ize), 0 - goddies_type_1ize, goddies_type_1ize, goddies_type_1ize), 'speed_g': random.randint(goddie_type_1MINSPEED, goddie_type_1MAXSPEED), 'surface_g':pygame.transform.scale(goddie_type_1Image, (goddies_type_1ize, goddies_type_1ize)), } # Move the player around. 
if moveLeft and playerRect.left > 0: playerRect.move_ip(-1 * PLAYERMOVERATE, 0) if moveRight and playerRect.right < WINDOWWIDTH: playerRect.move_ip(PLAYERMOVERATE, 0) if moveUp and playerRect.top > 0: playerRect.move_ip(0, -1 * PLAYERMOVERATE) if moveDown and playerRect.bottom < WINDOWHEIGHT: playerRect.move_ip(0, PLAYERMOVERATE) # Move the mouse cursor to match the player. pygame.mouse.set_pos(playerRect.centerx, playerRect.centery) # Move the baddies_type_1 down. for b in baddies_type_1: if not reverseCheat and not slowCheat: b['rect_b'].move_ip(0, b['speed_b']) elif reverseCheat: b['rect_b'].move_ip(0, -5) elif slowCheat: b['rect_b'].move_ip(0, 1) # Move the goddies_type_1 down. for g in goddies_type_1: if not reverseCheat and not slowCheat: g['rect_g'].move_ip(0, g['speed_g']) elif reverseCheat: g['rect_g'].move_ip(0, -5) elif slowCheat: g['rect_g'].move_ip(0, 1) # Delete baddies_type_1 that have fallen past the bottom. for b in baddies_type_1[:]: if b['rect_b'].top > WINDOWHEIGHT: baddies_type_1.remove(b) # Delete goddies_type_1 that have fallen past the bottom. for g in goddies_type_1[:]: if g['rect_g'].top > WINDOWHEIGHT: goddies_type_1.remove(g) # Draw the game world on the window. windowSurface.fill(BACKGROUNDCOLOR) # Draw the score and top score. drawText('Score: %s' % (score), font, windowSurface, 10, 0) drawText('Top Score: %s' % (topScore), font, windowSurface, 10, 40) # Draw the player's rectangle windowSurface.blit(playerImage, playerRect) # Draw each baddie_type_1 for b in baddies_type_1: windowSurface.blit(b['surface_b'], b['rect_b']) # Draw each goddie_type_1 for g in goddies_type_1: windowSurface.blit(g['surface_g'], g['rect_g']) pygame.display.update() # Check if any of the baddies_type_1 have hit the player. if playerHasHitbaddie_type_1(playerRect, baddies_type_1): if score > topScore: topScore = score # set new top score break # Check if any of the goddies_type_1 have hit the player. if playerHasHitgoddie_type_1(playerRect, goddies_type_1): score = score + 200 #Frapes of the game mainClock.tick(FPS) # Stop the game and show the "Game Over" screen. pygame.mixer.music.stop() gameOverSound.play() drawText('GAME OVER', font, windowSurface, (WINDOWWIDTH / 3), (WINDOWHEIGHT / 3)) drawText('Press a key to play again.', font, windowSurface, (WINDOWWIDTH / 3) - 80, (WINDOWHEIGHT / 3) + 50) pygame.display.update() waitForPlayerToPressKey() gameOverSound.stop() Answer: After your `# Add new baddies_type_1 at the top of the screen, if needed.` code, it looks like you actually add the baddie with this line: `baddies_type_1.append(newbaddie_type_1)` You don't appear to be doing that with your goodies code. Try adding: `goddies_type_1.append(newgoddie_type_1)` after your `# Add new goddies_type_1 at the top of the screen, if needed.` if statements. Also, you spelled `goodies` as `goddies` throughout your code.
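To make the fix concrete, a short sketch of the single line to add, keeping the (misspelled) names from the question:

    # After building newgoddie_type_1 inside the spawning block, register it so it
    # gets moved, drawn and collision-checked just like the baddies:
    goddies_type_1.append(newgoddie_type_1)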
Can't communicate with MySQL from django, but can directly in python Question: I'm setting up a django web site locally on Windows with Apache, django 1.6 and MySQL 5.5.8. I've created a test database and populated it with a couple of sample records in the MyPHPAdmin interface. It has a user specifically for django, with full permissions (for that database only). In a python session I can interact with the database using this function def TestConnection(): import MySQLdb db=MySQLdb.connect(host="localhost", user="django", passwd="nu_django", db="nutana_django") cursor=db.cursor() cursor.execute("select * from test") for x in range(0,cursor.rowcount): row=cursor.fetchone() print row[0], ' --> ', row[1] and it outputs the records of the db, so I seem to have MySQL, MySQLdb and python working together properly. Next, in django, I've created an app called Courses using "python manage.py startapp Courses", and edited my settings.py file to include Courses in the Installed_Apps and defined the database connection like this: ROOT_URLCONF = 'Nutana.urls' WSGI_APPLICATION = 'Nutana.wsgi.application' DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'nutana_django', 'USER':'django', 'PASSWORD': 'nu_django', 'HOST':'127.0.0.1', 'PORT': '80' } } Last step is Apache. I've added the WSGI module to apache and appended the following to the end of the file: LoadModule wsgi_module modules/mod_wsgi.so WSGIScriptAlias /Nutana F:/Web_Django/Nutana/Nutana/wsgi.py WSGIPythonPath F:/Web_Django/Nutana <Directory F:/Web_Django/Nutana/Nutana/> <Files wsgi.py> Order deny,allow Allow from all </Files> </Directory> where the paths above are all correct, but I'm not convinced are all necessary. Within f:\web_django\Nutana\Courses I've edited models.py to the following: from django.db import models # Create your models here. class Course(models.Model): Name = models.CharField(max_length=200) start_date = models.DateTimeField('Start Date') Quota = models.IntegerField() class Course_Venue(models.Model): Course = models.ForeignKey(Course) Name = models.CharField(max_length=200) Max_Size = models.IntegerField() Ok... now when I run "python manage.py sql Courses" from the command prompt in "f:\web_django\Nutana", where manage.py lives, I get an error: django.db.utils.OperationalError: (2013, "Lost connection to MySQL server at 'waiting for initial communication packet', system error: 0") That's a lot of configuration, and I don't know where I've gone wrong! I don't think Apache is the issue here, but I've included it to be thorough. If I call up a non existent web page the 404 error includes reference to wsgi, so maybe thats all good? localhost 16/12/2013 10:14:34 AM Apache/2.2.17 (Win32) mod_ssl/2.2.17 OpenSSL/0.9.8o mod_wsgi/3.3 Python/2.7.6 PHP/5.3.4 mod_perl/2.0.4 Perl/v5.10.1 Answer: MySQL server has port 3306 insted 80!
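A hedged sketch of the corrected `DATABASES` setting, keeping the credentials from the question; 3306 is MySQL's default port (the answer's point), whereas port 80 is where Apache listens:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'nutana_django',
            'USER': 'django',
            'PASSWORD': 'nu_django',
            'HOST': '127.0.0.1',
            'PORT': '3306',   # MySQL's default port, not Apache's port 80
        }
    }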
Python unittest, skip tests when using a base-test-class Question: Whilst testing one of our web-apps for clarity I have created a `BaseTestClass` which inherits `unittest.TestCase`. The `BaseTestClass` includes my `setUp()` and `tearDown()` methods, which each of my `<Page>Test` classes then inherit from. Due to different devices under test having similar pages with some differences I wanted to use the `@unittest.skipIf()` decorator but its proving difficult. Instead of 'inheriting' the decorator from `BaseTestClass`, if I try to use that decorator Eclipse tries to auto-import `unittest.TestCase` into `<Page>Test`, which doesn't seem right to me. Is there a way to use the `skip` decorators when using a `Base`? class BaseTestClass(unittest.TestCase): def setUp(): #do setup stuff device = "Type that blocks" def tearDown(): #clean up One of the test classes in a separate module: class ConfigPageTest(BaseTestClass): def test_one(self): #do test def test_two(self): #do test @unittest.skipIf(condition, reason) <<<What I want to include def test_three(self): #do test IF not of the device type that blocks Answer: Obviously this requires unittest2 (or Python 3, I assume), but other than that, your example was pretty close. Make sure the name of your real test code gets discovered by your unit test discovery mechanism (`test_*.py` for nose). #base.py import sys import unittest2 as unittest class BaseTestClass(unittest.TestCase): def setUp(self): device = "Type that blocks" def tearDown(self): pass And in the actual code: # test_configpage.py from base import * class ConfigPageTest(BaseTestClass): def test_one(self): pass def test_two(self): pass @unittest.skipIf(True, 'msg') def test_three(self): pass Which gives the output .S. ---------------------------------------------------------------------- Ran 3 tests in 0.016s OK (SKIP=1)
Add url parameters in python Question: This is my code to access a webpage but I need to add parameters to it: 1\. First parameter is added by reading a line from file 2\. Second parameter is a counter to continuously access pages import urllib2 import json,os f = open('codes','r') for line in f.readlines(): id = line.strip('\n') url = 'http://api.opencorporates.com/v0.2/companies/search?q=&jurisdiction_code=%s&per_page=26&current_status=Active&page=%d' i = 0 directory = id os.makedirs(directory) while True: i += 5 req = urllib2.Request('%s%s%d' % (url,id, i)) print req try: response = urllib2.urlopen('%s%s%d' % (url, id, i)) except urllib2.HTTPError, e: break content = response.read() fo = str(i) + '.json' OUTFILE = os.path.join(directory, fo) with open(OUTFILE, 'w') as f: f.write(content) This keeps creating empty directories. I know something is wrong with the url parameters. How to rectify this? Answer: It looks like what you want to do is to insert `id` and `i` into `url`, but the string formatting you're using here concatenates `url`, `id`, and `i`. Try changing this: req = urllib2.Request('%s%s%d' % (url,id, i)) Into this: req = urllib2.Request(url % (id, i)) Does that give you the result you want? Also, the string formatting syntax you are using is deprecated; the currently preferred syntax is detailed in [PEP 3101 -- Advanced String Formatting](http://www.python.org/dev/peps/pep-3101/). So even better would be to do: url = 'http://api.opencorporates.com/v0.2/companies/search?q=&jurisdiction_code={0}&per_page=26&current_status=Active&page={1}' ... req = urllib2.Request(url.format(id, i)) Instead of `%s` and `%d` you use curly braces (`{}`) as placeholders for your parameters. Inside the curly braces, you can put a tuple index: >>> 'I like to {0}, {0}, {0}, {1} and {2}'.format('eat', 'apples', 'bananas') 'I like to eat, eat, eat, apples and bananas' If you just use bare curly braces, each placeholder consumes one parameter, and extras are ignored; e.g.: >>> '{} and {} and {}'.format(1, 2, 3) '1 and 2 and 3' >>> '{} and {} and {}'.format(1, 2, 3, 4) '1 and 2 and 3' >>> '{} and {} and {}'.format(1, 2) Traceback (most recent call last): File "<pyshell#18>", line 1, in <module> '{} and {} and {}'.format(1, 2) IndexError: tuple index out of range You can also use keyword arguments, and therefore dictionary unpacking: >>> d = {'adj':'funky', 'noun':'cheese', 'pronoun':'him'} >>> 'The {adj} {noun} intrigued {pronoun}.'.format(**d) 'The funky cheese intrigued him.' There are more features, detailed in the PEP, if you're interested.
Python: Request handler in Flask Question: I'm learning Flask, and the request handling seems to be like:

    @app.route("/")
    def hello():
        return "Hello World!"

So I end up defining the functions for all my routes in a single file. I'd much rather have functions for a model in its own file, e.g. get_user, create_user in user.py. I've used Express (node.js) in the past, and I can do:

    user = require('./models/user')
    app.get('/user', user.list)

where user.coffee (or .js) has a list function. How do I do the same in Flask? Answer: From the [docs](http://flask.pocoo.org/docs/api/#flask.Flask.route):

> A decorator that is used to register a view function for a given URL rule.
> This does the same thing as
> [add_url_rule()](http://flask.pocoo.org/docs/api/#flask.Flask.add_url_rule)
> but is intended for decorator usage

The `add_url_rule` docs elaborate:

    @app.route('/')
    def index():
        pass

Is equivalent to the following:

    def index():
        pass
    app.add_url_rule('/', 'index', index)

You can just as easily import your view functions into a `urls.py` file and call `add_url_rule` once for each view function there instead of defining the rules alongside the functions, or use the [lazy loading pattern](http://flask.pocoo.org/docs/patterns/lazyloading/).
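As a hedged illustration of that last suggestion (module and endpoint names here are made up, not from the Flask docs), the Express-style layout from the question could look roughly like this:

    # models/user.py -- plain view functions, no routing here
    def list():
        return "user list"

    # app.py -- central place where the URL rules are registered
    import flask
    from models import user

    app = flask.Flask(__name__)
    app.add_url_rule('/user', 'user_list', user.list, methods=['GET'])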
Travis special requirements for each python version Question: I need unittest2 and importlib for python 2.6, which are not required for the other python versions that travis tests against. Is there a way to tell Travis-CI to have different requirements.txt files for each python version? Answer: Travis CI adds an environment variable called `$TRAVIS_PYTHON_VERSION` that can be referenced in your .travis.yml:

    python:
      - 2.6
      - 2.7
      - 3.2
      - 3.3
      - pypy
    install:
      - if [[ $TRAVIS_PYTHON_VERSION == 2.6 ]]; then pip install importlib unittest2; fi
      - pip install -r requirements.txt

This would cause `unittest2` and `importlib` to be installed only for Python 2.6, with requirements.txt being installed for all versions listed. You can do as many of these checks as necessary. Tornado's [.travis.yml](https://github.com/facebook/tornado/blob/master/.travis.yml) file uses it quite a bit.
numpy function IOError Question: On my macbook air running OSX Mavericks (I'm almost certain this wasn't happening the other day on a PC running Windows 7 running virtually identical code) the following code gives me the following error.

    import numpy as np
    massFile='Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'
    print massFile
    sampleInfo=np.genfromtxt(fname=massFile,skip_header=2,usecols=(2,3,4),dtype=float)

massFile is printed out as expected as `'Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'` but I get the error

    Traceback (most recent call last):
      File "test.py", line 7, in <module>
        sampleInfo=np.genfromtxt(fname=massFile,skip_header=2,usecols=(2,3,4),dtype=float)
      File "//anaconda/lib/python2.7/site-packages/numpy/lib/npyio.py", line 1317, in genfromtxt
        fhd = iter(np.lib._datasource.open(fname, 'rbU'))
      File "//anaconda/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 145, in open
        return ds.open(path, mode)
      File "//anaconda/lib/python2.7/site-packages/numpy/lib/_datasource.py", line 477, in open
        return _file_openers[ext](found, mode=mode)
    IOError: [Errno 2] No such file or directory: '/Users/BigD/Dropbox/PhD/PPMS/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'

It appears to be using half of the path and then adding the full path to the end of it. Does anyone know why this is happening or can suggest a work around? Answer: The path you're supplying in `massFile` is relative to the directory you're executing the script in. To see where you are, just type `pwd` in your shell. In your case, it will return `/Users/BigD/Dropbox/PhD/PPMS/`. So this value is silently prepended to your path:

    massFile='/Users/BigD/Dropbox/PhD/PPMS/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'

This is also the value you're seeing in your traceback. There are two ways to fix this: To mark a path as **absolute** just prefix the path with a `/`:

    massFile='/Users/BigD/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt'

or to keep it relative you have to remove the unneeded bits:

    massFile='DATA/DB/HeatCap/HeatCapMass.txt'

I would suggest picking the latter, that way you can move the project around without breaking all your paths.
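A small hedged sketch of making the path independent of the working directory (assuming the file lives under the user's home directory, as the question's path suggests):

    import os
    import numpy as np

    # Expand "~" to the user's home directory so the script works from anywhere.
    massFile = os.path.expanduser('~/Dropbox/PhD/PPMS/DATA/DB/HeatCap/HeatCapMass.txt')
    sampleInfo = np.genfromtxt(fname=massFile, skip_header=2, usecols=(2, 3, 4), dtype=float)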
Can I use one import and expose keywords from multiple python libraries? Question: I am fairly new to Python and Robot, so please bare with me, but here is what I am trying to do. I am doing some automation on 2 mobile devices, and I have a library created that allows me to communicate with these devices, provided to me by one of our dev teams. In my test suite I have to import this library twice, one instance for each device, and I am using the robot WITH NAME to call keywords specific to each device. For example I import the library like this: Library keyword_module.DEVICE.DEVICE ${USB_GUID_1} WITH NAME d1 Library keyword_module.DEVICE.DEVICE ${USB_GUID_2} WITH NAME d2 The library has a set of keywords that I can call from Robot to perform basic operations on the device. Such as: d1.turn_on_wifi d2.turn_on_cellular I then created my own python module, in this module I created keywords that take some keywords from this library to create higher level functions. I used Robots BuiltIn().get_library_instance to bring in the instances above so i can work with the keywords in my module. For example: class Common(object): def __init__(self, libName): self.d = BuiltIn().get_library_instance(libName) def some_keyword(self): self.d.turn_on_wifi self.d.turn_off_cellular I then import my module for both devices as follows: Library keyword_module.DEVICE.DEVICE $[USB_GUID_1} WITH NAME d1 Library keyword_module.DEVICE.DEVICE $[USB_GUID_2} WITH NAME d2 Library Common.Common d1 WITH NAME D1 Library Common.Common d2 WITH NAME D2 So now I can call my keywords from Common with D1 and D2, and use d1 and d2 for the DEVICE library. The problem is that my library is growing larger and larger, and I want to break it down into sub modules that are grouped based on similar traits, ie I want one file/module for my wifi keywords that I create, another for cellular, another for something else. I could just do what I did above for each file, however then I would end up with a large list of imports in Robot all with different names, which is what I wanted to avoid. Is there a way for me to move my keywords into multiple files but still only have one import for each device in Robot. I basically want to have multiple py files, but have them all linked somehow so that I can still only import the one Common library for each device and this will expose all of the keywords for me to call from all the sub-files? Does that make sense? Answer: You can see 'Selenium2Library'! class Selenium2Library( _LoggingKeywords, _RunOnFailureKeywords, _BrowserManagementKeywords, _ElementKeywords, _TableElementKeywords, _FormElementKeywords, _SelectElementKeywords, _JavaScriptKeywords, _CookieKeywords, _ScreenshotKeywords, _WaitingKeywords ): You just to need to import 'Selenium2Library',but expose all of the keywords! For example: a module named '_ALibrary.py' class _ALibrary(object): def __init__(self): pass def fun1(self): print 'fun1'` a module named '_BLibrary.py' class _BLibrary(object): def __init__(self): pass def fun2(self): print 'fun2' def fun3(self): print 'fun3' a common module named'CommonLibrary.py' import _ALibrary import _BLibrary class CommonLibrary(_BLibrary._BLibrary,_ALibrary._ALibrary): def __init__(self): for base in CommonLibrary.__bases__: base.__init__(self) so you just to need import 'CommonLibrary.py'
How do I disable a window? Question: I am creating a ten question multiple choice quiz in Tkinter / Python. Essentially in the parent window there are 13 buttons - help button, user details, questions 1-10 and an 'End' button. Each button opens up a new window with the question, the options as checkbuttons and radiobuttons and an 'Enter' button which links to a piece of code that will calculate the correct answer and add 1 to the score if condition is true. Once the user has selected the answer and pressed 'Enter' the button will be disabled. However, once the user exits this window they are able to re answer the same question which will obviously result in multiple points being added to the global variable score. How do I disable the question button/window once the user has answered the question? And how do I make the 'Reveal Score' button only activated when all questions have been answered? I have used a class to define each button and individual classes thereafter for each button so I am not sure if this will cause issues (I am new to Object Orientated). Thanks I AM AWARE THAT INDENTATION IS NOT CORRECT, IHAD TO FORMAT IT FOR THIS WEBSITE from Tkinter import * #Copied from online examples import tkMessageBox #Copied from online examples import Tkinter, Tkconstants, tkFileDialog #Copied from online examples import Tkinter as tk #Copied from online examples class Example(tk.Frame): def __init__(self, *args, **kwargs): tk.Frame.__init__(self, *args, **kwargs) self.question_1_window = None self.question_1 = tk.Button(self, text="1", foreground="blue", command=self.show_question_1) self.question_1.pack(side="left") def show_question_1(self): '''show the question window; create it if it doesn't exist''' if self.question_1_window is None or not self.question_1_window.winfo_exists(): self.question_1_window = Question_1_Window(self) else: self.question_1_window.flash() class Question_1_Window(tk.Toplevel): '''A simple instruction window''' def __init__(self, parent): tk.Toplevel.__init__(self, parent) self.text = tk.Label(self, width=75, height=4, text = "1) Do you have the time to do at least twenty minutes of prefect duty each week?") self.text.pack(side="top", fill="both", expand=True) question_1_Var = IntVar() #creating a variable to be assigned to the radiobutton Yes_1 = Radiobutton(self, text = "Yes", variable = question_1_Var, value=1, height=5, width = 20) Yes_1.pack() #creating 'yes' option #Here we are assigning values to each option which will be used in the validation No_1 = Radiobutton(self, text = "No", variable = question_1_Var, value=2, height=5, width = 20) No_1.pack() #creating 'no' option def calculate_score_1(): Enter_1.config(state="disabled") if (question_1_Var.get() == 1) and not (question_1_Var.get() == 2): print("calculate score has worked") #test lines parent.score_per_question[1] = 1 else: print("not worked") #testlines Enter_1 = Button(self, text= "Enter", width=10, command = calculate_score_1) Enter_1.pack() def flash(self): '''make the window visible, and make it flash temporarily''' # make sure the window is visible, in case it got hidden self.lift() self.deiconify() # blink the colors self.after(100, lambda: self.text.configure(bg="black", fg="white")) self.after(500, lambda: self.text.configure(bg="white", fg="black")) if __name__ == "__main__": root = tk.Tk() root.geometry("800x500") #defining the size of the root root.title("Prefect Quiz") #defining root title Example(root).pack(side="top", fill="both", expand=True) root.mainloop() Answer: If it creates a new 
window after that, you can destroy it:

    window.destroy()

"window" changes depending on what you call your `tkinter.Tk` instance:

    window = tkinter.Tk()

Whatever is before the `=` is what you replace "window" with. Hope that helped :)
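For the original goal of locking a question once it has been answered, one hedged possibility (attribute names follow the question's code; this is a sketch, not the only approach) is to disable the parent window's question button from inside the question window, so the question cannot be reopened and scored twice:

    def calculate_score_1():
        Enter_1.config(state="disabled")
        # ... scoring exactly as in the question ...
        # Also grey out the question button in the parent window so this
        # question cannot be answered a second time.
        parent.question_1.config(state="disabled")

A "Reveal Score" button can likewise be created with `state="disabled"` and switched back to `"normal"` once a counter of answered questions reaches ten.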
Send http post from android Question: I have a server made in python that reads the querystring-message and stores it in a sqlite database, and then displays the content. Now I want to send the message from a android application. This is my code so far. import java.io.BufferedReader; import java.io.InputStreamReader; import java.io.OutputStreamWriter; import java.io.UnsupportedEncodingException; import java.net.URL; import java.net.URLConnection; import java.net.URLEncoder; import android.R.string; import android.os.Bundle; import android.app.Activity; import android.view.Menu; import android.view.View; import android.widget.Button; import android.widget.TextView; public class MainActivity extends Activity { Button send; TextView display; String message; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); send = (Button)findViewById(R.id.button1); display = (TextView)findViewById(R.id.editText1); send.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { try{ post(); } catch(Exception e) { display.setText("Det sket sig"); } } public void post() throws UnsupportedEncodingException { message = display.getText().toString(); String data = URLEncoder.encode("?message", "UTF-8") + "=" + URLEncoder.encode(message, "UTF-8"); String text = ""; BufferedReader reader=null; try { URL url = new URL("http:homepage.net"); URLConnection conn = url.openConnection(); conn.setDoOutput(true); OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream()); wr.write(data); wr.flush(); reader = new BufferedReader(new InputStreamReader(conn.getInputStream())); StringBuilder sb = new StringBuilder(); String line = null; while((line = reader.readLine()) != null) { sb.append(line + "\n"); } text = sb.toString(); } catch(Exception e) { } finally { try { reader.close(); } catch(Exception ex) {} } display.setText(text); } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. getMenuInflater().inflate(R.menu.main, menu); return true; } } This is not functioning as per the expectations. What am i doing wrong here? Answer: for network related operation you have to use asynctask or thread other wise you wil get NetworkOnMainThread Exception. refer [here](http://developer.android.com/reference/android/os/AsyncTask.html) private class MyTask extends AsyncTask<Void, Void, Void> { ... }
Drawing obtuse circular arc with arrowhead in matplotlib Question: I am new to Python, though I do have previous programming experience. My current interest is to generate good quality schematics (and maybe later on also data graphs). I need to draw a circular arc with an arrowhead at the end point. I have sieved through many posts and tried my hand at a few examples, but it seems I still miss some basic knowledge. So I need something like this: <http://i.stack.imgur.com/Z5V86.png> I was experimenting with the example below, trying to initially plot a circular arc, but even this does not work. I am using matplotlib.patches but I am open to other suggestions for what I need.

    # -*- coding: utf-8 -*-
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Ellipse, Arc
    from matplotlib import cm, colors, patches
    from math import pi

    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)

    ellipse = Arc([2.5,2.5],1,1,0,0,pi,color='green', linewidth='0.5')
    ax.add_patch(ellipse)
    plt.show()

Answer: Well,

  * if you want an angle of `pi`, pass `180`.
  * linewidth must be a scalar. pass `.5` and not `'.5'` as the linewidth parameter
  * you need to change the limits of the axes, or you won't see the arc. Add something like `ax.set_xlim(1.5,3.1)` and `ax.set_ylim(2.4,3.7)`.

You may also want to opt for another strategy, something like `ax.plot([.5],[.5],marker=r'$\circlearrowleft$',ms=100)`
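For the arrowhead part of the original goal, one hedged possibility (not the only one, and the coordinates and curvature below are illustrative) is to skip `Arc` and let `FancyArrowPatch` draw a curved connection with an arrowhead between two points:

    import matplotlib.pyplot as plt
    from matplotlib.patches import FancyArrowPatch

    fig, ax = plt.subplots()
    # Curved arrow from (2.0, 2.5) to (3.0, 2.5); rad controls how strongly it bows.
    arrow = FancyArrowPatch((2.0, 2.5), (3.0, 2.5),
                            connectionstyle="arc3,rad=0.6",
                            arrowstyle="-|>", mutation_scale=20,
                            color="green", linewidth=1.5)
    ax.add_patch(arrow)
    ax.set_xlim(1.5, 3.5)
    ax.set_ylim(2.0, 3.5)
    plt.show()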
Python convert tuple to array Question: How can I convert a 3-dimensional tuple into an array

    a = []
    a.append((1,2,4))
    a.append((2,3,4))

into an array like:

    b = [1,2,4,2,3,4]

Answer: Using [list comprehension](http://docs.python.org/2/tutorial/datastructures.html#list-comprehensions):

    >>> a = []
    >>> a.append((1,2,4))
    >>> a.append((2,3,4))
    >>> [x for xs in a for x in xs]
    [1, 2, 4, 2, 3, 4]

Using [`itertools.chain.from_iterable`](http://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable):

    >>> import itertools
    >>> list(itertools.chain.from_iterable(a))
    [1, 2, 4, 2, 3, 4]
wxPython TextCtrl assertion error: wx.wxEVT_COMMAND_TEXT_ENTER not a PyEventBinder instance Question: Trying to make a wxPython `TextCtrl` react on ENTER, I get an assertion error:

    self.fileNameInput = wx.TextCtrl (self, style=wx.TE_PROCESS_ENTER)
    self.fileNameInput.Bind (wx.wxEVT_COMMAND_TEXT_ENTER, self.onRename)

terminates with an assertion error in `Bind`:

    assert isinstance(event, wx.PyEventBinder)
    AssertionError

No wonder that `wx.wxEVT_COMMAND_TEXT_ENTER` is not an instance, it's a number. I read a remark about changes to the events between Python 2 and 3 - did I mix versions of libraries? Answer: Do you mean `wx.EVT_TEXT_ENTER` ?

    >>> import wx
    >>> wx.wxEVT_COMMAND_TEXT_ENTER
    10165
    >>> wx.EVT_TEXT_ENTER
    <wx._core.PyEventBinder object at 0x000000000321C8D0>

Example:

    import wx

    class MyWindow(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None)
            self.fileNameInput = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER)
            self.fileNameInput.Bind(wx.EVT_TEXT_ENTER, self.onRename)

        def onRename(self, e):
            print('ENTER')

    app = wx.PySimpleApp()
    win = MyWindow()
    win.Show()
    app.MainLoop()