Check Contents of Python Package without Running it?

Question: I would like a function that, given a `name` which caused a `NameError`, can identify Python packages which could be `import`ed to resolve it. That part is fairly easy, and I've done it, but now I have an additional problem: I'd like to do it without causing side-effects. Here's the code I'm using right now:

    def necessaryImportFor(name):
        from pkgutil import walk_packages
        for package in walk_packages():
            if package[1] == name:
                return name
            try:
                if hasattr(__import__(package[1]), name):
                    return package[1]
            except Exception as e:
                print("Can't check " + package[1] + " on account of a " + e.__class__.__name__ + ": " + str(e))
        print("No possible import satisfies " + name)

The problem is that this code actually `__import__`s every module. This means that every side-effect of importing every module occurs. When testing my code I found that side-effects that can be caused by importing all modules include:

* Launching tkinter applications
* Requesting passwords with `getpass`
* Requesting other `input` or `raw_input`
* Printing messages (`import this`)
* Opening websites (`import antigravity`)

A possible solution that I considered would be finding the path to every module (how? It seems to me that the only way to do this is by `import`ing the module then using some methods from `inspect` on it), then parsing it to find every `class`, `def`, and `=` that isn't itself within a `class` or `def`, but that seems like a huge PITA and I don't think it would work for modules which are implemented in C/C++ instead of pure Python.

Another possibility is launching a child Python instance which has its output redirected to `devnull` and performing its checks there, killing it if it takes too long. That would solve the first four bullets, and the fifth one is such a special case that I could just skip `antigravity`. But having to start up thousands of instances of Python in this single function seems a bit... heavy and inefficient.

Does anyone have a better solution I haven't considered? Is there a simple way of just telling Python to generate an AST or something without actually importing a module, for example?

Answer: So I ended up writing a few methods which can list everything from a source file, without importing the source file. The `ast` module doesn't seem particularly well documented, so this was a bit of a PITA trying to figure out how to extract everything of interest. Still, after ~6 hours of trial and error today, I was able to get this together and run it on the 3000+ Python source files on my computer without any exceptions being raised.

    def listImportablesFromAST(ast_):
        from ast import (Assign, ClassDef, FunctionDef, Import, ImportFrom,
                         Name, For, Tuple, TryExcept, TryFinally, With)

        if isinstance(ast_, (ClassDef, FunctionDef)):
            return [ast_.name]
        elif isinstance(ast_, (Import, ImportFrom)):
            return [name.asname if name.asname else name.name for name in ast_.names]

        ret = []

        if isinstance(ast_, Assign):
            for target in ast_.targets:
                if isinstance(target, Tuple):
                    ret.extend([elt.id for elt in target.elts])
                elif isinstance(target, Name):
                    ret.append(target.id)
            return ret

        # These two attributes cover everything of interest from If, Module,
        # and While. They also cover parts of For, TryExcept, TryFinally, and With.
        if hasattr(ast_, 'body') and isinstance(ast_.body, list):
            for innerAST in ast_.body:
                ret.extend(listImportablesFromAST(innerAST))
        if hasattr(ast_, 'orelse'):
            for innerAST in ast_.orelse:
                ret.extend(listImportablesFromAST(innerAST))

        if isinstance(ast_, For):
            target = ast_.target
            if isinstance(target, Tuple):
                ret.extend([elt.id for elt in target.elts])
            else:
                ret.append(target.id)
        elif isinstance(ast_, TryExcept):
            for innerAST in ast_.handlers:
                ret.extend(listImportablesFromAST(innerAST))
        elif isinstance(ast_, TryFinally):
            for innerAST in ast_.finalbody:
                ret.extend(listImportablesFromAST(innerAST))
        elif isinstance(ast_, With):
            if ast_.optional_vars:
                ret.append(ast_.optional_vars.id)
        return ret

    def listImportablesFromSource(source, filename = '<Unknown>'):
        from ast import parse
        return listImportablesFromAST(parse(source, filename))

    def listImportablesFromSourceFile(filename):
        with open(filename) as f:
            source = f.read()
        return listImportablesFromSource(source, filename)

The above code covers the titular question: How do I check the contents of a Python package without running it? But it leaves you with another question: How do I get the path to a Python package from just its name?

Here's what I wrote to handle that:

    class PathToSourceFileException(Exception):
        pass

    class PackageMissingChildException(PathToSourceFileException):
        pass

    class PackageMissingInitException(PathToSourceFileException):
        pass

    class NotASourceFileException(PathToSourceFileException):
        pass

    def pathToSourceFile(name):
        '''
        Given a name, returns the path to the source file, if possible.
        Otherwise raises an ImportError or subclass of PathToSourceFileException.
        '''
        from os.path import dirname, isdir, isfile, join

        if '.' in name:
            parentSource = pathToSourceFile('.'.join(name.split('.')[:-1]))
            path = join(dirname(parentSource), name.split('.')[-1])
            if isdir(path):
                path = join(path, '__init__.py')
                if isfile(path):
                    return path
                raise PackageMissingInitException()
            path += '.py'
            if isfile(path):
                return path
            raise PackageMissingChildException()

        from imp import find_module, PKG_DIRECTORY, PY_SOURCE
        f, path, (suffix, mode, type_) = find_module(name)
        if f:
            f.close()
        if type_ == PY_SOURCE:
            return path
        elif type_ == PKG_DIRECTORY:
            path = join(path, '__init__.py')
            if isfile(path):
                return path
            raise PackageMissingInitException()
        raise NotASourceFileException('Name ' + name + ' refers to the file at path ' + path + ' which is not that of a source file.')

Trying the two bits of code together, I have this function:

    def listImportablesFromName(name, allowImport = False):
        try:
            return listImportablesFromSourceFile(pathToSourceFile(name))
        except PathToSourceFileException:
            if not allowImport:
                raise
            return dir(__import__(name))

Finally, here's the implementation for the function that I mentioned I wanted in my question:

    def necessaryImportFor(name):
        packageNames = []
        def nameHandler(name):
            packageNames.append(name)

        from pkgutil import walk_packages
        for package in walk_packages(onerror=nameHandler):
            nameHandler(package[1])

        # Suggestion: Sort package names by count of '.', so shallower packages are searched first.
        for package in packageNames:
            # Suggestion: just skip any package that starts with 'test.'
            try:
                if name in listImportablesFromName(package):
                    return package
            except ImportError:
                pass
            except PathToSourceFileException:
                pass

        return None

And that's how I spent my Sunday.

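As a quick sanity check of `listImportablesFromSource`, a minimal illustration (the sample source string is my own, not from the original post) and the names it should collect:

    sample = '\n'.join([
        'import os, sys as system',
        'CONSTANT = 42',
        'def helper(): pass',
        'class Widget: pass',
    ])
    # Collects imports (honoring aliases), top-level assignments,
    # function definitions and class definitions:
    print(listImportablesFromSource(sample))
    # ['os', 'system', 'CONSTANT', 'helper', 'Widget']
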
Button binding in Kivy Python

Question: I am wondering how to get my code to work. I have a class which creates a popup window with buttons. Each button should be bound to a subclass. But it doesn't work. What's wrong with my code?

    class chooser:
        def __init__(self):
            None

        def show(self,title,options=["NOTHING"],size=(.5,.5)):
            self.bts = {}
            self.response = False
            self.content = FloatLayout()
            self.content.pos_hint = {"y":0,"x":0}

            # create buttons
            pos_cntr = 0
            for opt in options:
                self.bts[pos_cntr] = Button(text=opt)
                self.bts[pos_cntr].size_hint = 1,float(1)/float(len(options))
                self.bts[pos_cntr].pos_hint = {"x":0,"y":pos_cntr}
                self.bts[pos_cntr].bind(on_press=self.canceldia)
                self.content.add_widget(self.bts[pos_cntr])
                print "bound"
                pos_cntr += float(1)/float(len(options))

            self.pop = Popup(title=title,content=self.content,auto_dismiss=False)
            self.pop.size_hint = size
            self.pop.open()

        def canceldia(self,instance):
            print "closing"
            self.response = instance.text
            self.pop.dismiss()

        def getresponse(self):
            return self.response

I have imported all needed modules. I execute it like this:

    c = chooser()
    c.show("hello","world",["welcome","close","nothing","example"])

I have created a root widget. The popup works fine and everything is created nicely, but the buttons are not bound. Please help me!

Answer: In your loop, you always reference `self.bts[pos_cntr]`, so you override it in every iteration. How about this?

    for idx, opt in enumerate(options):
        self.bts[idx] = Button(text=opt)
        self.bts[idx].size_hint = 1,float(1)/float(len(options))
        self.bts[idx].pos_hint = {"x":0,"y":pos_cntr}
        self.bts[idx].bind(on_press=self.canceldia)
        self.content.add_widget(self.bts[idx])

Open and read certain files in a directory; write all text in those files to cells in a .csv

Question: I've read a number of posts that get close to my problem, but I still haven't been able to figure it out, so hopefully you all can help me get there! I have a directory with thousands of subfolders, each with 1-4 files. I need to find all the .txt files (there are thousands), open them, and write the text to individual cells in a .csv. In each .txt is a single chunk of text (sometimes multiple pages long). I need each chunk of text to occupy a single cell in my .csv.

When I run my code on a test directory (only 10 folders and about 15 .txt files) I get no errors, a .csv is created, but all the cells are empty. I'm running python 2.7 in Aptana Studio and I'm a novice, so I'm excited for some SO brilliance to bail me out :) Here is my code so far:

    import csv
    import os

    def get_text():
        with open('out.csv','w') as out_file:
            csv_out = csv.writer(out_file, delimiter=',')
            for root, dirs, files in os.walk('/Users//Desktop/TEXT-TEST'):
                for file in files:
                    if file.lower().endswith('.txt'):
                        with open(file) as f:
                            csv_out.writerows(f.read())

    get_text()

Answer: Maybe this will work:

    import csv
    import os

    def get_text():
        with open('out.csv','wb') as out_file:  # use 'wb' to prevent an extra blank line
            csv_out = csv.writer(out_file, delimiter=',')
            for root, dirs, files in os.walk(r'/Users/Desktop/TEXT-TEST'):
                for file in files:
                    if file.lower().endswith('.txt'):
                        filepath = os.path.join(root, file)  # without this line you may get an error
                        with open(filepath, 'r') as f:
                            content = f.read()
                            csv_out.writerow([content])  # changed to writerow

    get_text()

Equivalent of gmtime in Julia?

Question: Julia has `strftime` as a built-in but not `gmtime`.

    julia> strftime
    strftime (generic function with 3 methods)

    julia> gmtime
    ERROR: gmtime not defined

What is the preferred Julia way to do the equivalent of `gmtime`? The idea is to turn seconds since the epoch into a time structure in the Z (+00:00) time zone. Here in Los Angeles, I see:

    julia> strftime("%H:%M:%S", 0)
    "16:00:00"

I would like to see `"00:00:00"`. I can do it in Python:

    >>> from time import strftime, gmtime
    >>> strftime("%H:%M:%S", gmtime(0))
    '00:00:00'

I tried to use `ccall` in Julia but it did not work:

    julia> ccall( (:gmtime, "libc"), TmStruct, (Int64,), 0)
    TmStruct(1590498096,32767,16041550,1,-1924564896,32744,1,0,0,0,0,0,1590498144,32767)

    julia> strftime("%H:%M:%S", ans)
    "16041550:32767:1590498096"

What went wrong with my `ccall`? And better, is there just a nicer all-Julia way to get the effect of `gmtime`?

Answer: Looks like `gmtime` is on Julia's [**TODO**](https://github.com/JuliaLang/julia/blob/master/base/libc.jl#L30) list. Until it gets included, will something like this work for you?

    julia> function gmtime(t::Real)
               t = floor(t)
               tm = TmStruct()
               ccall(:gmtime_r, Ptr{TmStruct}, (Ptr{Int}, Ptr{TmStruct}), &t, &tm)
               return tm
           end
    gmtime (generic function with 1 method)

    julia> strftime("%H:%M:%S", gmtime(0))
    "00:00:00"

Python initialise Struct

Question: I have a block of memory with binary data. The block was created with `ctypes.create_string_buffer`, so the data is mutable, and accessible as an array. Each 32 bits are made up of a pair: a 20 bit unsigned integer, and a 12 bit unsigned integer. I want to access the nth element pair, and change the values in the memory block. I have a structure:

    from ctypes import *

    class Int(Structure):
        _fields_ = [("first", c_uint, 20),
                    ("second", c_uint, 12)]

How do I populate the Structure from my data? Is there a C-like pointer I can set to point the structure at my data?

Answer: You can create an array type of your `Int` by multiplying your structure type by its length; here I'm using `5` as an example, you can use whatever length you need:

    IntArray5 = Int * 5
    arr = IntArray5()

If you need to use different lengths each time, there's no need to name the type, you can just use:

    arr = (Int * 5)()

Then rather than passing your string buffer to the C function, reading into that, and needing to do a conversion, it's probably best if you just pass your array into the function that will populate it:

    myfunc = libfoo.myfunc
    myfunc(arr, len(arr))

Now you should be able to access the elements of `arr`:

    print arr[0].first
    print arr[0].second

However, if for some reason you can't pass that array in to the function directly, and instead need to take an existing string buffer and produce an array of `Int` that shares the buffer, you can use `from_buffer` as eryksun suggests:

    arr = (Int * 5).from_buffer(buff)

Or if you wanted to copy into a new array rather than sharing the underlying buffer:

    arr = (Int * 5).from_buffer_copy(buff)

json_decode produces a string

Question: I have a small PHP script below that tries to read a .json file, extract the content and then convert it to an array. Instead I get a string. The json file is created in a Python script that is also below.

**python script**

    dict_test= {'Subcellular': ['Ribosome', 'Plasma Membrane'], 'CAS': ['56-85-9', '50-99-7'], 'Bio_target': ['RNA 18S', 'Insuline receptor'], 'strain': ['', ''], 'Unity': ['J', 'nM'], 'value': ['-80', '0.01'], 'InchI': ['1S/C5H10N2O3/c6-3(5(9)10)1-2-4(7)8/h3H,1-2,6H2,(H2,7,8)(H,9,10)\\xa0', '1S/C6H12O6/c7-1-2-3(8)4(9)5(10)6(11)12-2/h2-11H,1H2/t2-,3-,4+,5-,6?/m1/s1\\xa0'], 'Conditions': ['Temperature of 25\\xbaC and normal pressure', 'Computational simulation in R module canislup'], 'Link\\n': ['www.soulink.pt\\n', 'www.sououtro.pt\\n'], 'Asay_parameter': ['enthalpy', 'concentration'], 'Smiles': ['O=C(N)CCC(N)C(=O)O', 'OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O'], 'Journal': ['Nature', 'Science'], 'Experimental_error': ['0.01 J', '0.05 uM'], 'Title': ['Glutamine is a novel compound in Brainstem studies', 'Glucose concetration is essential to calcium inducted waves'], 'Assay_ID': ['12345', '123456'], 'Cell_type': ['Neuron', 'Myocite'], 'Comparisons': ['enthalpy>70 J', 'Concentration< 10 uM'], 'Mol_name': ['Glutamine', 'Glucose'], 'target_type': ['Peptide', 'Protein'], 'LAB': ['IMED', 'Lasige'], 'Tissue': ['Brainstem', 'Pericardium'], 'Species': ['Homo sapiens', 'Canis lupus'], 'Observations': ['Outliers were not found', 'Python modules were also used']}

    utf=unicode(dict_test)
    output_file= 'aaa.txt'+'.json'
    import json
    with open(output_file, 'wb') as fp:
        json.dump(utf, fp)

**.json file**

    "{'Subcellular': ['Ribosome', 'Plasma Membrane'], 'CAS': ['56-85-9', '50-99-7'], 'Bio_target': ['RNA 18S', 'Insuline receptor'], 'strain': ['', ''], 'Unity': ['J', 'nM'], 'value': ['-80', '0.01'], 'InchI': ['1S/C5H10N2O3/c6-3(5(9)10)1-2-4(7)8/h3H,1-2,6H2,(H2,7,8)(H,9,10)\\xa0', '1S/C6H12O6/c7-1-2-3(8)4(9)5(10)6(11)12-2/h2-11H,1H2/t2-,3-,4+,5-,6?/m1/s1\\xa0'], 'Conditions': ['Temperature of 25\\xbaC and normal pressure', 'Computational simulation in R module canislup'], 'Link\\n': ['www.soulink.pt\\n', 'www.sououtro.pt\\n'], 'Asay_parameter': ['enthalpy', 'concentration'], 'Smiles': ['O=C(N)CCC(N)C(=O)O', 'OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O'], 'Journal': ['Nature', 'Science'], 'Experimental_error': ['0.01 J', '0.05 uM'], 'Title': ['Glutamine is a novel compound in Brainstem studies', 'Glucose concetration is essential to calcium inducted waves'], 'Assay_ID': ['12345', '123456'], 'Cell_type': ['Neuron', 'Myocite'], 'Comparisons': ['enthalpy>70 J', 'Concentration< 10 uM'], 'Mol_name': ['Glutamine', 'Glucose'], 'target_type': ['Peptide', 'Protein'], 'LAB': ['IMED', 'Lasige'], 'Tissue': ['Brainstem', 'Pericardium'], 'Species': ['Homo sapiens', 'Canis lupus'], 'Observations': ['Outliers were not found', 'Python modules were also used']}"

**php**

    <?php
    $string = file_get_contents("aaa.txt.json");
    $json = json_decode($string, true);
    var_dump($json);
    ?>

The output of php is a string.

Answer: From the php documentation:

> // the name and value must be enclosed in double quotes
> // single quotes are not valid

The JSON created from your Python script is invalid. You need to use double quotes for names and strip out the `\n` inside the links:

    {"Subcellular": ["Ribosome", "Plasma Membrane"], "CAS": ["56-85-9", "50-99-7"], "Link": ["www.soulink.pt", "www.sououtro.pt"], "Bio_target": ["RNA 18S", "Insuline receptor"], "strain": ["", ""], "Unity": ["J", "nM"], "LAB": ["IMED", "Lasige"], "InchI": ["1S/C5H10N2O3/c6-3(5(9)10)1-2-4(7)8/h3H,1-2,6H2,(H2,7,8)(H,9,10)\\\\xa0", "1S/C6H12O6/c7-1-2-3(8)4(9)5(10)6(11)12-2/h2-11H,1H2/t2-,3-,4+,5-,6?/m1/s1\\\\xa0"], "Conditions": ["Temperature of 25\\\\xbaC and normal pressure", "Computational simulation in R module canislup"], "Asay_parameter": ["enthalpy", "concentration"], "Smiles": ["O=C(N)CCC(N)C(=O)O", "OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O"], "Journal": ["Nature", "Science"], "Title": ["Glutamine is a novel compound in Brainstem studies", "Glucose concetration is essential to calcium inducted waves"], "Experimental_error": ["0.01 J", "0.05 uM"], "Assay_ID": ["12345", "123456"], "Cell_type": ["Neuron", "Myocite"], "Comparisons": ["enthalpy>70 J", "Concentration< 10 uM"], "Mol_name": ["Glutamine", "Glucose"], "target_type": ["Peptide", "Protein"], "value": ["-80", "0.01"], "Tissue": ["Brainstem", "Pericardium"], "Species": ["Homo sapiens", "Canis lupus"], "Observations": ["Outliers were not found", "Python modules were also used"]}

After changing your JSON as above your PHP will work:

    $string = file_get_contents("aaa.txt.json");
    $json = json_decode($string, true);
    echo "<pre>";
    print_r($json);
    echo "</pre>";

![enter image description here](http://i.stack.imgur.com/Stvyw.png)

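The cleaner fix, though, is on the Python side: `unicode(dict_test)` turns the dictionary into its single-quoted Python `repr`, and `json.dump` then encodes that whole string as one JSON string. A minimal sketch of the suggested change (my addition, not part of the original answer) is to dump the dictionary itself:

    import json

    # Pass the dict directly; json.dump emits valid JSON with double quotes.
    with open('aaa.txt.json', 'w') as fp:
        json.dump(dict_test, fp)
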
Funcparserlib.lexer.Spec ImportError: cannot import name 'Spec'

Question: For learning purposes, I'm trying to convert a Chef interpreter project to python 3.4 and trying to wrangle the libraries involved into their newest versions, but when it comes to funcparserlib I'm a little in over my head. Here's the Chef script:

    from pprint import pprint
    from collections import namedtuple
    import re
    import logging

    import funcparserlib.parser as p
    from funcparserlib.lexer import make_tokenizer
    from funcparserlib.lexer import Spec
    from funcparserlib.contrib.lexer import space, newline
    from funcparserlib.contrib.common import sometok, unarg

    from common import *

    log = logging.getLogger('preserve.chefparser')
    #log.addHandler(logging.StreamHandler())
    #log.setLevel(logging.DEBUG)

    pos = 0

    # order matters
    instruction_spec = [
        Spec(x.lower().split()[0], x) for x in [
            'Take', 'Put', 'Fold', 'Add', 'Remove', 'Combine', 'Divide', 'Stir',
            'Mix', 'Clean', 'Pour', 'Set aside', 'Refrigerate', 'from', 'the',
            'for', 'contents of the', 'until', 'refrigerator', 'minute',
            'minutes', 'hour', 'hours', 'well'
        ]
    ]
    instruction_spec.insert(0, Spec('to', r'to'))
    instruction_spec.insert(0, Spec('into', r'into'))
    instruction_spec.insert(0, Spec('add_dry', 'Add dry ingredients'))
    instruction_spec.insert(0, Spec('liquefy', 'Liquefy|Liquify'))
    instruction_spec.append(Spec('serve_with', r'Serve with'))
    instruction_spec.append(Spec('bowl', 'mixing bowl'))
    instruction_spec.append(Spec('dish', 'baking dish'))
    instruction_spec.append(space)
    instruction_spec.append(Spec('string', '[A-Za-z]+'))
    instruction_spec.append(Spec('ordinal', '[0-9]+(st|nd|rd|th)'))
    instruction_spec.append(Spec('number', '[0-9]+'))

    tokens = [
        Spec('ingredients_start', 'Ingredients'),
        Spec('method_start', r'^Method', re.MULTILINE),
        Spec('dry_measure', r' g | kg | pinch[es]? '),
        Spec('liquid_measure', r' ml | l | dash[es]? '),
        Spec('mix_measure', r'cup[s]?|teaspoon[s]?|tablespoon[s]?'),
        Spec('measure_type', 'heaped|level'),
        # TODO hours minutes
        Spec('cooking_time', r'Cooking time:'),
        # TODO gas mark
        Spec('oven', r'Pre\-heat oven to'),
        Spec('oven_temp', 'degrees Celcius'),
        # serve is treated separate here as it is
        # not necessary for it to appear
        # following 'Method.'
        # But it is treated as just another
        # instruction by the interpreter
        Spec('serve', r'^Serves', re.MULTILINE),
        Spec('number', '[0-9]+'),
        space,
        Spec('period', r'\.'),
        Spec('string', r'[^\.\r\n]+'),
    ]

    def tokenize_minus_whitespace(token_list, input):
        return [x for x in make_tokenizer(token_list)(input) if x.type not in ['space']]

    def tokenize_instruction(spec):
        return tokenize_minus_whitespace(instruction_spec, spec)

    def tokenize(input):
        return tokenize_minus_whitespace(tokens, input)

    def parse_instruction(spec):
        string = p.oneplus(sometok('string')) >> (lambda x: ' '.join(x))
        ordinal = sometok('ordinal')
        bowl = sometok('bowl')
        the = sometok('the')
        dish = sometok('dish')
        to = sometok('to')
        into = sometok('into')

        concat = lambda list: ' '.join(list)

        take_i = sometok('take') + (p.oneplus(string) >> concat) + sometok('from') + sometok('refrigerator')

        put_i = sometok('put') + p.skip(p.maybe(the)) + (p.oneplus(string) >> concat) + p.skip(into) + p.maybe(ordinal|the) + bowl

        liquefy_1 = sometok('liquefy') + sometok('contents') + p.maybe(ordinal) + bowl
        liquefy_2 = sometok('liquefy') + (p.oneplus(string) >> concat)
        liquefy_i = liquefy_1 | liquefy_2

        pour_i = sometok('pour') + sometok('contents') + p.maybe(ordinal) + bowl + sometok('into') + the + p.maybe(ordinal) + dish

        fold_i = sometok('fold') + p.skip(p.maybe(the)) + (p.oneplus(string) >> concat) + into + p.maybe(ordinal|the) + bowl

        # cleanup repitition
        add_i = sometok('add') + (p.oneplus(string) >> concat) + p.maybe(to + p.maybe(ordinal|the) + bowl)

        remove_i = sometok('remove') + (p.oneplus(string) >> concat) + p.maybe(sometok('from') + p.maybe(ordinal|the) + bowl)

        combine_i = sometok('combine') + (p.oneplus(string) >> concat) + p.maybe(into + p.maybe(ordinal|the) + bowl)

        divide_i = sometok('divide') + (p.oneplus(string) >> concat) + p.maybe(into + p.maybe(ordinal|the) + bowl)

        add_dry_i = sometok('add_dry') + p.maybe(to + p.maybe(ordinal|the) + bowl)

        stir_1 = sometok('stir') + p.maybe(the + p.maybe(ordinal|the) + bowl) + sometok('for') + sometok('number') + (sometok('minute')|sometok('minutes'))
        stir_2 = sometok('stir') + (p.oneplus(string) >> concat) + into + the + p.maybe(ordinal) + bowl
        stir_i = stir_1 | stir_2

        mix_i = sometok('mix') + p.maybe(the + p.maybe(ordinal) + bowl) + sometok('well')

        clean_i = sometok('clean') + p.maybe(ordinal|the) + bowl

        loop_start_i = (sometok('string') + p.maybe(the) + (p.oneplus(string) >> concat)) >> (lambda x: ('loop_start', x))
        loop_end_i = (sometok('string') + p.maybe(p.maybe(the) + (p.oneplus(string) >> concat)) + sometok('until') + string) >> (lambda x: ('loop_end', x))

        set_aside_i = sometok('set') >> (lambda x: (x, None))

        serve_with_i = sometok('serve_with') + (p.oneplus(string) >> concat)

        refrigerate_i = sometok('refrigerate') + p.maybe(sometok('for') + sometok('number') + (sometok('hour')|sometok('hours')))

        instruction = ( take_i
                      | put_i
                      | liquefy_i
                      | pour_i
                      | add_i
                      | fold_i
                      | remove_i
                      | combine_i
                      | divide_i
                      | add_dry_i
                      | stir_i
                      | mix_i
                      | clean_i
                      | loop_end_i   # -| ORDER matters
                      | loop_start_i # -|
                      | set_aside_i
                      | serve_with_i
                      | refrigerate_i
                      ) >> (lambda x: Instruction(x[0].lower().replace(' ', '_'), x[1:]))

        return instruction.parse(tokenize_instruction(spec))

    def parse(input):
        period = sometok('period')
        string = p.oneplus(sometok('string')) >> (lambda x: ' '.join(x))
        number = sometok('number')

        title = string + p.skip(period) >> RecipeTitle

        ingredients_start = sometok('ingredients_start') + p.skip(period) >> IngredientStart

        dry_measure = p.maybe(sometok('measure_type')) + sometok('dry_measure')
        liquid_measure = sometok('liquid_measure')
        mix_measure = sometok('mix_measure')

        # is this valid ? 'g of butter', unit w/o initial_value
        ingredient = (p.maybe(number)
                      + p.maybe(dry_measure | liquid_measure | mix_measure)
                      + string
                      >> unarg(Ingredient)
                      )

        ingredients = p.many(ingredient)

        cooking_time = (p.skip(sometok('cooking_time'))
                        + (number >> unarg(CookingTime))
                        + p.skip(sometok('period'))
                        )

        oven_temp = (p.skip(sometok('oven'))
                     + p.many(number)
                     + p.skip(sometok('oven_temp'))
                     >> unarg(Oven)
                     )

        method_start = sometok('method_start') + p.skip(period)

        comment = p.skip(p.many(string|period))
        header = title + p.maybe(comment)

        instruction = (string
                       + p.skip(period)
                       ) >> parse_instruction

        instructions = p.many(instruction)

        program = (method_start + instructions) >> unarg(MethodStart)

        serves = (sometok('serve') + number >> (lambda x: Serve('serve', x[1]))) + p.skip(period)

        ingredients_section = (ingredients_start + ingredients) >> unarg(IngredientSection)

        recipe = ( header
                 + p.maybe(ingredients_section)
                 + p.maybe(cooking_time)
                 + p.maybe(oven_temp)
                 + p.maybe(program)
                 + p.maybe(serves)
                 ) >> RecipeNode

        main_parser = p.oneplus(recipe)
        return main_parser.parse(tokenize(input))

Running the script fails:

    ImportError: cannot import name 'Spec'

The version of funcparserlib.lexer that I have is:

    # Snipped some licence. Hint: it's MIT.

    __all__ = ['make_tokenizer', 'Token', 'LexerError']

    import re

    class LexerError(Exception):
        def __init__(self, place, msg):
            self.place = place
            self.msg = msg

        def __str__(self):
            s = 'cannot tokenize data'
            line, pos = self.place
            return '%s: %d,%d: "%s"' % (s, line, pos, self.msg)

    class Token(object):
        def __init__(self, type, value, start=None, end=None):
            self.type = type
            self.value = value
            self.start = start
            self.end = end

        def __repr__(self):
            return 'Token(%r, %r)' % (self.type, self.value)

        def __eq__(self, other):
            # FIXME: Case sensitivity is assumed here
            return self.type == other.type and self.value == other.value

        def _pos_str(self):
            if self.start is None or self.end is None:
                return ''
            else:
                sl, sp = self.start
                el, ep = self.end
                return '%d,%d-%d,%d:' % (sl, sp, el, ep)

        def __str__(self):
            s = "%s %s '%s'" % (self._pos_str(), self.type, self.value)
            return s.strip()

        @property
        def name(self):
            return self.value

        def pformat(self):
            return "%s %s '%s'" % (self._pos_str().ljust(20), self.type.ljust(14), self.value)

    def make_tokenizer(specs):
        """[(str, (str, int?))] -> (str -> Iterable(Token))"""

        def compile_spec(spec):
            name, args = spec
            return name, re.compile(*args)

        compiled = [compile_spec(s) for s in specs]

        def match_specs(specs, str, i, position):
            line, pos = position
            for type, regexp in specs:
                m = regexp.match(str, i)
                if m is not None:
                    value = m.group()
                    nls = value.count('\n')
                    n_line = line + nls
                    if nls == 0:
                        n_pos = pos + len(value)
                    else:
                        n_pos = len(value) - value.rfind('\n') - 1
                    return Token(type, value, (line, pos + 1), (n_line, n_pos))
            else:
                errline = str.splitlines()[line - 1]
                raise LexerError((line, pos + 1), errline)

        def f(str):
            length = len(str)
            line, pos = 1, 0
            i = 0
            while i < length:
                t = match_specs(compiled, str, i, (line, pos))
                yield t
                line, pos = t.end
                i += len(t.value)

        return f

    # This is an example of a token spec. See also [this article][1] for a
    # discussion of searching for multiline comments using regexps
    # (including `*?`).
    #
    # [1]: http://ostermiller.org/findcomment.html
    _example_token_specs = [
        ('COMMENT', (r'\(\*(.|[\r\n])*?\*\)', re.MULTILINE)),
        ('COMMENT', (r'\{(.|[\r\n])*?\}', re.MULTILINE)),
        ('COMMENT', (r'//.*',)),
        ('NL', (r'[\r\n]+',)),
        ('SPACE', (r'[ \t\r\n]+',)),
        ('NAME', (r'[A-Za-z_][A-Za-z_0-9]*',)),
        ('REAL', (r'[0-9]+\.[0-9]*([Ee][+\-]?[0-9]+)*',)),
        ('INT', (r'[0-9]+',)),
        ('INT', (r'\$[0-9A-Fa-f]+',)),
        ('OP', (r'(\.\.)|(<>)|(<=)|(>=)|(:=)|[;,=\(\):\[\]\.+\-<>\*/@\^]',)),
        ('STRING', (r"'([^']|(''))*'",)),
        ('CHAR', (r'#[0-9]+',)),
        ('CHAR', (r'#\$[0-9A-Fa-f]+',)),
    ]

    #tokenize = make_tokenizer(_example_token_specs)

And I can sure see why it can't import Spec! There's no Spec there! What's the best way to go about this, guys? Is there a simple "find-replace" that I can do to move forward with this project? Drudging through the repos I could find online (and there are confusingly several) wasn't much help to me, but maybe I missed something.

Answer: You don't need the `Spec` class; in the current version of `funcparserlib` you just declare a list of tuples if you need to set up a tokenizer. See the example in the lexer module:

    _example_token_specs = [
        ('COMMENT', (r'\(\*(.|[\r\n])*?\*\)', re.MULTILINE)),
        ('COMMENT', (r'\{(.|[\r\n])*?\}', re.MULTILINE)),
        ('COMMENT', (r'//.*',)),
        ('NL', (r'[\r\n]+',)),
        ('SPACE', (r'[ \t\r\n]+',)),
        ('NAME', (r'[A-Za-z_][A-Za-z_0-9]*',)),
        ('REAL', (r'[0-9]+\.[0-9]*([Ee][+\-]?[0-9]+)*',)),
        ('INT', (r'[0-9]+',)),
        ('INT', (r'\$[0-9A-Fa-f]+',)),
        ('OP', (r'(\.\.)|(<>)|(<=)|(>=)|(:=)|[;,=\(\):\[\]\.+\-<>\*/@\^]',)),
        ('STRING', (r"'([^']|(''))*'",)),
        ('CHAR', (r'#[0-9]+',)),
        ('CHAR', (r'#\$[0-9A-Fa-f]+',)),
    ]

The `Spec` class is out of date, according to the source of `funcparserlib`.

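To make the migration concrete, here is a sketch of how a few `Spec` calls from the script above map onto the `(name, (regex, [flags]))` tuple form that `make_tokenizer` expects; treat the entries as illustrative rather than a drop-in replacement:

    import re

    # Old style: Spec('method_start', r'^Method', re.MULTILINE)
    # New style: a (name, (regex, [flags])) tuple
    tokens = [
        ('ingredients_start', ('Ingredients',)),
        ('method_start', (r'^Method', re.MULTILINE)),
        ('number', ('[0-9]+',)),
        ('space', (r'[ \t\r\n]+',)),  # replaces the imported `space` Spec
    ]
    tokenizer = make_tokenizer(tokens)
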
python: find all Latin Squares of a set (or partial square with fewer columns)

Question: EDIT: Thanks to commenter Douglas Zare, I have renamed the title of this post with more appropriate terminology for anybody else who may be looking for something similar. The code from David Eisenstat below was very helpful.

* * *

ORIGINAL POST: I apologize for the lack of appropriate set theory terminology... but I'm a bit out of my depth here (though I suspect it's an easy problem). I'm trying to develop an algorithm that will accept a set, and a number K, and return all possible "complete" partitions with K-sized subsets and with set coverage=K such that:

1. **any given subset will not have duplicates**
2. **combining the nth term across all subsets gives a complete partition of the set**
3. **the nth term of all subsets is unique across the rest of the subsets**

for example: function({A, B, C, D}, 2) should return all possible sets like this:

    [AB, BC, CD, DA]
    [AB, BD, CA, DC]
    [AC, BD, CA, DB]
    [AD, DA, CB, BC]
    [BC, CA, DB, AD]
    ...

also, for what it's worth

4. **the order of the elements in an individual subset doesn't matter (as long as the other three rules are obeyed)**

so the following are equivalent:

    [AB, BA, CD, DC] = [BA, AB, DC, CD] = [AB, BA, DC, CD] = [BA, AB, CD, DC]

and

5. **order of the various subsets _does_ matter:**

so the following are different:

    [BC, CA, DB, AD] ≠ [CA, BC, AD, DB]

Put another way, in matrix terms: I'm looking for all matrices with `rows=len(set)` and `columns=K` such that every column has exact cover and no item appears in any row more than once. function({A, B, C, D}, 3) would return all matrices like the following...

    ABC        ADB
    BCD        DCA
    CDA        CBD
    DAB        BAC

I'm hoping for an answer in python, and using libraries like numpy is fine... but just a general algorithmic strategy would be appreciated as well. I thought something like [Algorithm X](http://www.cs.mcgill.ca/~aassaf9/python/algorithm_x.html) might come in handy... but I've been unable to make the leap from that to the problem outlined here...

Answer: Doable with a backtracking search. Python 3:

    import itertools
    import operator

    def valid(cols_so_far):
        for i, col1 in enumerate(cols_so_far):
            for col2 in cols_so_far[i + 1:]:
                if any(map(operator.eq, col1, col2)):
                    return False
        return True

    def enum(letters, k, cols_so_far=None):
        if cols_so_far is None:
            cols_so_far = (tuple(letters),)
        if not valid(cols_so_far):
            pass
        elif len(cols_so_far) == k:
            yield tuple(zip(*cols_so_far))  # transpose
        else:
            for perm in itertools.permutations(letters):
                yield from enum(letters, k, cols_so_far + (perm,))

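A quick way to exercise the generator (my own invocation, not from the original answer):

    for square in enum('ABCD', 2):
        print([''.join(row) for row in square])
    # yields rows such as ['AB', 'BC', 'CD', 'DA'] among the results
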
Calling an R script with command line arguments from Python rpy2

Question: I want to be able to call R files from python using the rpy2 modules. I would like to be able to pass arguments to these scripts that can be interpreted by R's commandArgs function. So if my R script (`trivial_script.r`) looks like:

    print(commandArgs(TRUE))

and my Python 2.7 script looks like:

    >>> import rpy2.robjects as robjects
    >>> script = robjects.r.source('trivial_script.r')

How can I call it from rpy2.robjects with the arguments "arg1", "arg2", "arg3"?

Answer: I see the two approaches (RPy vs command line script) as two separate parts. If you use RPy there is no need to pass command line arguments. You simply create a function that contains the functionality you want to run in R, and call that function from Python with `arg1`, `arg2`, `arg3`.

If you want to use an R command line script from within Python, have a look at the [`subprocess` library](https://docs.python.org/2/library/subprocess.html) in Python. There you can call an R script from the command line using:

    import subprocess
    subprocess.call(['Rscript', 'script.R', arg1, arg2, arg3])

where `arg1`, `arg2`, `arg3` are python objects (e.g. strings) that contain the information you want to pass to the R script.

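If you prefer to stay inside rpy2, a minimal sketch of the first approach might look like this; the R function name `myfunc` and its signature are hypothetical:

    import rpy2.robjects as robjects

    # script.R defines: myfunc <- function(a, b, c) { ... }
    robjects.r.source('script.R')
    result = robjects.r['myfunc']('arg1', 'arg2', 'arg3')
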
How to convert json response into Python list

Question: I get the JSON response by `requests.get`:

    req = requests.get(SAMPLE_SCHEDULE_API)

and convert it into a dictionary:

    data = json.loads(req.text)["data"]

When I tried to convert the string into a Python dict, I got `ValueError: malformed node or string:` from `ast.literal_eval(data)`. I have no idea how to do this task.

# code snippets

    def schedules(cls, start_date=None, end_date=None):
        import ast
        req = requests.get(SAMPLE_SCHEDULE_API)
        data = json.loads(req.text)["data"]
        ast.literal_eval(data)
        return pd.DataFrame(json.loads(req.text)["data"])

# JSON response

    {
      status: "ok",
      version: "v1",
      data: "[
        {"_id":"2015-01-28","end_date":"2015-01-28","estimated_release":1422453600000,"is_projection":false,"is_statement":true,"material_link":null,"start_date":"2015-01-27"},
        {"_id":"2015-03-18","end_date":"2015-03-18","estimated_release":1426687200000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-03-17"},
        {"_id":"2015-04-29","end_date":"2015-04-29","estimated_release":1430316000000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-04-28"},
        {"_id":"2015-06-17","end_date":"2015-06-17","estimated_release":1434549600000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-06-16"},
        {"_id":"2015-07-29","end_date":"2015-07-29","estimated_release":1438178400000,"is_projection":false,"is_statement":false,"material_link":null,"start_date":"2015-07-28"}]"
    }

# Detail error message

    Traceback (most recent call last):
      File "fomc.py", line 25, in <module>
        schedules = FOMC.schedules()
      File "fomc.py", line 21, in schedules
        ast.literal_eval(data)
      File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 86, in literal_eval
        return _convert(node_or_string)
      File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 58, in _convert
        return list(map(_convert, node.elts))
      File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 63, in _convert
        in zip(node.keys, node.values))
      File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 62, in <genexpr>
        return dict((_convert(k), _convert(v)) for k, v
      File "/usr/local/Cellar/python3/3.3.2/Frameworks/Python.framework/Versions/3.3/lib/python3.3/ast.py", line 85, in _convert
        raise ValueError('malformed node or string: ' + repr(node))
    ValueError: malformed node or string: <_ast.Name object at 0x10a19c990>

Answer: You have encoded the `data` twice (which would strictly not be necessary). You just need to decode the `data` again with `json.loads`:

    def schedules(cls, start_date=None, end_date=None):
        req = requests.get(SAMPLE_SCHEDULE_API)
        data_json = json.loads(req.text)["data"]
        data = json.loads(data_json)
        return pd.DataFrame(data)

* * *

Do note that `ast.literal_eval` is for Python code, whereas `json.loads` is for JSON that closely follows JavaScript code; the differences are for example `true`, `false` and `null` vs `True`, `False` and `None`. The former are the JavaScript syntax as used in JSON (and thus you would need `json.loads`); the latter is Python code, for which you would use `ast.literal_eval`.

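For illustration, here is a small self-contained sketch (with made-up values) of how the double encoding arises and why two `json.loads` calls undo it:

    import json

    inner = json.dumps([{"_id": "2015-01-28"}])            # server encodes the list...
    outer = json.dumps({"status": "ok", "data": inner})    # ...then embeds it as a string

    data = json.loads(json.loads(outer)["data"])           # so the client must decode twice
    print(data[0]["_id"])  # 2015-01-28
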
Bypass proxy and capture webpage data from server using HTTP GET request using mitmproxy in Python

Question: I need to bypass a proxy using mitmproxy and capture web data using a GET request. I am using Windows 7, python 2.7, and the mitmproxy Python library. I tried the following code:

    #!/usr/bin/env python
    """
    This example shows how to build a proxy based on mitmproxy's Flow
    primitives.

    Note that request and response messages are not automatically replied to,
    so we need to implement handlers to do this.
    """
    import sys,os
    import os
    import cStringIO
    import threading
    import thread
    import exceptions
    import gc

    from libmproxy import proxy, flow
    from libmproxy.proxy import config
    from libmproxy.proxy import server
    from libmproxy.proxy.server import ProxyServer
    from libmproxy.platform.windows import TransparentProxy

    class MyMaster(flow.FlowMaster):
        def run(self):
            try:
                flow.FlowMaster.run(self)
            except KeyboardInterrupt:
                self.shutdown()

        def responseheaders(context, flow):
            ct = flow.response.headers["Content-Type"]
            if ct and len(ct) > 0 and ct[0].startswith("application/"):
                flow.stream = True
                print "streaming"

        def handle_request(self, r):
            print r
            f = flow.FlowMaster.handle_request(self, r)
            if f:
                def run():
                    r.reply()
                    return f
                threading.Thread(target=run).start()
            else:
                return null

        def handle_response(self, r):
            f = flow.FlowMaster.handle_response(self, r)
            if f:
                def run():
                    r.reply()
                    return f
                threading.Thread(target=run).start()
            else:
                return null

    config = proxy.config.ProxyConfig(
        #ca_file=os.path.expanduser("~\.mitmproxy\mitmproxy-ca.pem")
        confdir=os.path.realpath('.\\conf')#, mode="transparent"
    )

    gc.enable()
    gc.set_threshold(250, 10, 10)
    print gc.get_threshold()

    state = flow.State()
    server = ProxyServer(config, 8080)
    #server = TransparentProxy()
    m = MyMaster(server, state)
    TransparentProxy().setup()
    #proxifier = TransparentProxy()
    #proxifier.start()
    print "got here"
    m.run();

But got some errors:

    confdir=os.path.realpath(r'C:\Users\rnive\Documents\certificates')#, mode="transparent"
    TypeError: __init__() got an unexpected keyword argument 'confdir'

Any help on rectifying this error? I imported the mitmproxy CA by double-clicking the mitmproxy-ca-cert.p12 file, loaded mitm.it in the Chrome browser, and got something like: "If you can see this, traffic is not passing through mitmproxy." ![enter image description here](http://i.stack.imgur.com/oQtth.png)

Any help on how to configure this and how to rectify the TypeError!

Answer: Check the proxy settings in the browser. The error message '... traffic is not passing through mitmproxy' should disappear. In Firefox go to Options|Advanced|Network|Settings. See also [here](http://www.wikihow.com/Enter-Proxy-Settings-in-Firefox).

how to post in facebook using selenium webdriver and python

Question: I have written this code to post to a Facebook group from a desktop program, but it did not work. I'm using python and selenium webdriver in this script. Can someone help me?

    from selenium import webdriver
    from selenium.webdriver.support.ui import WebDriverWait
    import unittest

    class post(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Firefox()
            self.driver.get("https://www.facebook.com/<grouplink>")

        def test_shar(self):
            driver = self.driver
            grouppost = "test"
            textid = "xhpc_message_text"
            buttonid = ".//*[@id='u_jsonp_25_s']/div/div[5]/div/ul/li[2]/button"
            fblogopath = "(//a[contains(@href, 'logo')])[1]"
            textfiledelemnt = WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_id(textid))
            sharebotton = WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_xpath(buttonid))
            textfiledelemnt.clear()
            textfiledelemnt.send_keys(grouppost)
            sharebotton.click()
            WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_xpath(fblogopath))

        def tearDown(self):
            self.driver.quit()

    if __name__ == '__main__':
        unittest.main()

Answer: I would suggest you not to rely on WebDriver for purposes such as the one you mention. Use the [Graph API](https://developers.facebook.com/docs/graph-api); here are two good examples:

1) [How do I update FB Status using Python & GraphAPI](http://stackoverflow.com/questions/13372528/how-do-i-update-fb-status-using-python-graphapi)

2) [Posting to Facebook wall](http://stackoverflow.com/questions/16668498/posting-to-facebook-wall)

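For reference, a minimal sketch of the Graph API approach using the `facebook-sdk` package; the access token and group id are placeholders you would need to obtain yourself, and the exact client API may differ between package versions:

    import facebook  # pip install facebook-sdk

    graph = facebook.GraphAPI(access_token="YOUR_ACCESS_TOKEN")
    # Post a message to a group's feed (requires publish permissions).
    graph.put_object("YOUR_GROUP_ID", "feed", message="test")
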
How do I change the font size of ticks of matplotlib.pyplot.colorbar.ColorbarBase?

Question: I would like to know how to change the font size of ticks of `ColorbarBase` of `matplotlib`. The following lines are a relevant part of my analysis script, in which `ColorbarBase` is used:

    import matplotlib.pyplot as plt
    from matplotlib.colors import LogNorm
    import matplotlib as mpl

    axcb = fig.add_axes([0.9, 0.135, 0.02, 0.73])
    cb = mpl.colorbar.ColorbarBase(axcb, norm=LogNorm(vmin=7e-5, vmax=1), cmap=plt.cm.CMRmap)
    cb.set_label("Relative Photon Intensity", labelpad=-1, size=14)

I am using `matplotlib` ver 1.4.3 with Python 2.7 on OS X.

Answer: You can change the tick size using:

    font_size = 14  # Adjust as appropriate.
    cb.ax.tick_params(labelsize=font_size)

See the [docs](http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params) for `ax.tick_params` here for more parameters that can be modified.

What is my mistake?

Question: This is my rexster.xml file, configured as below:

    <?xml version="1.0" encoding="UTF-8"?>
    <rexster>
        <http>
            <server-port>8182</server-port>
            <server-host>0.0.0.0</server-host>
            <base-uri>http://localhost</base-uri>
            <web-root>public</web-root>
            <character-set>UTF-8</character-set>
            <enable-jmx>false</enable-jmx>
            <enable-doghouse>true</enable-doghouse>
            <max-post-size>2097152</max-post-size>
            <max-header-size>8192</max-header-size>
            <upload-timeout-millis>30000</upload-timeout-millis>
            <thread-pool>
                <worker>
                    <core-size>8</core-size>
                    <max-size>8</max-size>
                </worker>
                <kernal>
                    <core-size>4</core-size>
                    <max-size>4</max-size>
                </kernal>
            </thread-pool>
            <io-strategy>leader-follower</io-strategy>
        </http>
        <rexpro>
            <server-port>8184</server-port>
            <server-host>0.0.0.0</server-host>
            <session-max-idle>1790000</session-max-idle>
            <session-check-interval>3000000</session-check-interval>
            <connection-max-idle>180000</connection-max-idle>
            <connection-check-interval>3000000</connection-check-interval>
            <enable-jmx>false</enable-jmx>
            <thread-pool>
                <worker>
                    <core-size>8</core-size>
                    <max-size>8</max-size>
                </worker>
                <kernal>
                    <core-size>4</core-size>
                    <max-size>4</max-size>
                </kernal>
            </thread-pool>
            <io-strategy>leader-follower</io-strategy>
        </rexpro>
        <shutdown-port>8183</shutdown-port>
        <shutdown-host>127.0.0.1</shutdown-host>
        <script-engine-reset-threshold>-1</script-engine-reset-threshold>
        <script-engine-init>data/init.groovy</script-engine-init>
        <script-engines>gremlin-groovy</script-engines>
        <security>
            <authentication>
                <type>none</type>
                <configuration>
                    <users>
                        <user>
                            <username>rexster</username>
                            <password>rexster</password>
                        </user>
                    </users>
                </configuration>
            </authentication>
        </security>
        <graphs>
            <graph>
                <graph-name>ramgraph</graph-name>
                <graph-type>tinkergraph</graph-type>
                <graph-mock-tx>false</graph-mock-tx>
                <properties>
                    <storage.backend>cassandra</storage.backend>
                    <storage.hostname>localhost</storage.hostname>
                    <storage.buffer-size>100</storage.buffer-size>
                </properties>
                <extensions>
                    <allows>
                        <allow>tp:gremlin</allow>
                    </allows>
                </extensions>
            </graph>
            <graph>
                <graph-name>titanexample</graph-name>
                <graph-type>com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration</graph-type>
                <graph-location>/tmp/titan</graph-location>
                <graph-read-only>false</graph-read-only>
                <properties>
                    <storage.backend>local</storage.backend>
                    <storage.buffer-size>100</storage.buffer-size>
                </properties>
                <extensions>
                    <allows>
                        <allow>tp:gremlin</allow>
                    </allows>
                </extensions>
            </graph>
        </graphs>
    </rexster>

and I wrote a Python client with Bulbs, as below:

    __author__ = 'rponnapureddy'

    from bulbs.config import Config, DEBUG
    from bulbs.rexster import Graph
    from bulbs.rexster import Graph

    # config = Config('http://localhost:8182/graphs/empgraph')
    config = Config('http://localhost:8182/graphs/ramgraph')
    g = Graph(config)

    class inser_class():
        ponnapu = g.vertices.create(name="ramnath")
        pr = g.vertices.create(name="reddy")
        tanu = g.vertices.create(name="brothers")
        g.edges.create(pr, "knows", tanu)
        # z = g.get_graphml()
        # print z
        # g.clear()
        # print z

I got the error below. What do I have to do to correct it?

    /usr/bin/python2.7 "/home/rpo/Desktop/ramnathreddy/addverices to rexsterdefault graph.py"
    Traceback (most recent call last):
      File "/home/rpo/Desktop/ramnath/addverices to rexsterdefault graph.py", line 10, in <module>
        g = Graph(config)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rexster/graph.py", line 56, in __init__
        super(Graph, self).__init__(config)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/base/graph.py", line 58, in __init__
        self.vertices = self.build_proxy(Vertex)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/base/graph.py", line 124, in build_proxy
        return self.factory.build_element_proxy(element_class, index_class)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/factory.py", line 19, in build_element_proxy
        primary_index = self.get_index(element_class, index_class, index_name)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/factory.py", line 27, in get_index
        index = index_proxy.get_or_create(index_name)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rexster/index.py", line 80, in get_or_create
        resp = self.client.get_or_create_vertex_index(index_name, index_params)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rexster/client.py", line 668, in get_or_create_vertex_index
        resp = self.gremlin(script, params)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rexster/client.py", line 356, in gremlin
        return self.request.post(gremlin_path, params)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rest.py", line 131, in post
        return self.request(POST, path, params)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rest.py", line 186, in request
        return self.response_class(http_resp, self.config)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rexster/client.py", line 198, in __init__
        self.handle_response(response)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rexster/client.py", line 222, in handle_response
        response_handler(http_resp)
      File "/usr/local/lib/python2.7/dist-packages/bulbs/rest.py", line 50, in server_error
        raise SystemError(http_resp)
    SystemError: ({'status': '500', 'transfer-encoding': 'chunked', 'server': 'grizzly/2.2.16', 'connection': 'close', 'date': 'Mon, 16 Mar 2015 11:32:19 GMT', 'access-control-allow-origin': '*', 'content-type': 'application/json'}, '{"message":"","error":"javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: groovy.lang.MissingMethodException.rollback() is applicable for argument types: () values: []\nPossible solutions: collect(), collect(groovy.lang.Closure), collect(java.util.Collection, groovy.lang.Closure)","api":{"description":"evaluate an ad-hoc Gremlin script for a graph.","parameters":{"rexster.returnKeys":"an array of element property keys to return (default is to return all element properties)","rexster.showTypes":"displays the properties of the elements with their native data type (default is false)","load":"a list of \'stored procedures\' to execute prior to the \'script\' (if \'script\' is not specified then the last script in this argument will return the values","rexster.offset.end":"end index for a paged set of data to be returned","rexster.offset.start":"start index for a paged set of data to be returned","params":"a map of parameters to bind to the script engine","language":"the gremlin language flavor to use (default to groovy)","script":"the Gremlin script to be evaluated"}},"success":false}')

    Process finished with exit code 1

Answer:

    from bulbs.config import Config, DEBUG
    from bulbs.rexster import Graph
    from bulbs.titan import Graph

    config = Config('http://localhost:8182/graphs/ramgraph')
    g = Graph(config)

    class inser_class():
        ponnapu = g.vertices.create(name="reddy", age="26", state="TELNGANA", mobn="111111111")
        pr = g.vertices.create(name="ramnath", age="25", state="TELNGANA", mobn="1111111")
        tanu = g.vertices.create(name="ponnapu", age="27", state="AP", mobn="11111111111111")

        g.edges.create(pr, "knows", tanu)
        g.edges.create(pr, "friends", ponnapu)
        g.edges.create(ponnapu, "dontknow", tanu)

debugging errors in python multiprocessing

Question: I'm using the `Pool` function of the `multiprocessing` module in order to run the same code in parallel on different data. It turns out that on some data my code raises an exception, but the precise line in which this happens is not given:

    Traceback (most recent call last):
      File "my_wrapper_script.py", line 366, in <module>
        main()
      File "my_wrapper_script.py", line 343, in main
        results = pool.map(process_function, folders)
      File "/usr/lib64/python2.6/multiprocessing/pool.py", line 148, in map
        return self.map_async(func, iterable, chunksize).get()
      File "/usr/lib64/python2.6/multiprocessing/pool.py", line 422, in get
        raise self._value
    KeyError: 'some_key'

I am aware of `multiprocessing.log_to_stderr()`, but it seems that it is useful when concurrency issues arise, which is not my case. Any ideas?

Answer: If you're using a new enough version of Python, you'll actually see the real exception get printed prior to that one. For example, here's a sample that fails:

    import multiprocessing

    def inner():
        raise Exception("FAIL")

    def f():
        print("HI")
        inner()

    p = multiprocessing.Pool()
    p.apply(f)
    p.close()
    p.join()

Here's the exception when running this with python 3.4:

    multiprocessing.pool.RemoteTraceback:
    """
    Traceback (most recent call last):
      File "/usr/local/lib/python3.4/multiprocessing/pool.py", line 119, in worker
        result = (True, func(*args, **kwds))
      File "test.py", line 9, in f
        inner()
      File "test.py", line 4, in inner
        raise Exception("FAIL")
    Exception: FAIL
    """

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "test.py", line 13, in <module>
        p.apply(f)
      File "/usr/local/lib/python3.4/multiprocessing/pool.py", line 253, in apply
        return self.apply_async(func, args, kwds).get()
      File "/usr/local/lib/python3.4/multiprocessing/pool.py", line 599, in get
        raise self._value
    Exception: FAIL

If using a newer version isn't an option, the easiest thing to do is to wrap your worker function in a try/except block that will print the exception prior to re-raising it:

    import multiprocessing
    import traceback

    def inner():
        raise Exception("FAIL")

    def f():
        try:
            print("HI")
            inner()
        except Exception:
            print("Exception in worker:")
            traceback.print_exc()
            raise

    p = multiprocessing.Pool()
    p.apply(f)
    p.close()
    p.join()

Output:

    HI
    Exception in worker:
    Traceback (most recent call last):
      File "test.py", line 11, in f
        inner()
      File "test.py", line 5, in inner
        raise Exception("FAIL")
    Exception: FAIL
    Traceback (most recent call last):
      File "test.py", line 18, in <module>
        p.apply(f)
      File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 244, in apply
        return self.apply_async(func, args, kwds).get()
      File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 558, in get
        raise self._value
    Exception: FAIL

How to print variable length lists as columns in python?

Question: I need a way to print several lists of varying lengths as columns next to each other, tab delimited, and with the empty cells remaining empty or containing some fill character (e.g. "-"). The methods attempted so far have not worked for lists of varying lengths, and numpy has not been working as I expected. To summarize:

    listname = [[1,2,3],[4,5,6,7,8],[9,10,11,12]]

printed as such in a .txt file:

    1    4    9
    2    5    10
    3    6    11
    -    7    12
    -    8    -

Answer: You can use [`itertools.izip_longest`](https://docs.python.org/2/library/itertools.html#itertools.izip_longest). To fill the `None` spaces in the shorter sequences you can use `fillvalue` (thanks @szxk):

    >>> import itertools
    >>> listname = [[1,2,3],[4,5,6,7,8],[9,10,11,12]]
    >>> for x in itertools.izip_longest(*listname, fillvalue="-"):
    ...     print '\t'.join([str(e) for e in x])
    ...
    1    4    9
    2    5    10
    3    6    11
    -    7    12
    -    8    -

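On Python 3, `izip_longest` was renamed to `zip_longest`; an equivalent sketch:

    import itertools

    listname = [[1, 2, 3], [4, 5, 6, 7, 8], [9, 10, 11, 12]]
    for row in itertools.zip_longest(*listname, fillvalue="-"):
        print('\t'.join(str(e) for e in row))
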
Scrapy: 'str' object has no attribute 'iter'

Question: I added `restrict_xpaths` rules to my scrapy spider and now it immediately fails with:

    2015-03-16 15:46:53+0000 [tsr] ERROR: Spider error processing <GET http://www.thestudentroom.co.uk/forumdisplay.php?f=143>
        Traceback (most recent call last):
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py", line 800, in runUntilCurrent
            call.func(*call.args, **call.kw)
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/task.py", line 602, in _tick
            taskObj._oneWorkUnit()
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/task.py", line 479, in _oneWorkUnit
            result = self._iterator.next()
          File "/Library/Python/2.7/site-packages/scrapy/utils/defer.py", line 57, in <genexpr>
            work = (callable(elem, *args, **named) for elem in iterable)
        --- <exception caught here> ---
          File "/Library/Python/2.7/site-packages/scrapy/utils/defer.py", line 96, in iter_errback
            yield next(it)
          File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/offsite.py", line 26, in process_spider_output
            for x in result:
          File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr>
            return (_set_referer(r) for r in result or ())
          File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr>
            return (r for r in result or () if _filter(r))
          File "/Library/Python/2.7/site-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr>
            return (r for r in result or () if _filter(r))
          File "/Library/Python/2.7/site-packages/scrapy/contrib/spiders/crawl.py", line 73, in _parse_response
            for request_or_item in self._requests_to_follow(response):
          File "/Library/Python/2.7/site-packages/scrapy/contrib/spiders/crawl.py", line 52, in _requests_to_follow
            links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
          File "/Library/Python/2.7/site-packages/scrapy/contrib/linkextractors/lxmlhtml.py", line 107, in extract_links
            links = self._extract_links(doc, response.url, response.encoding, base_url)
          File "/Library/Python/2.7/site-packages/scrapy/linkextractor.py", line 94, in _extract_links
            return self.link_extractor._extract_links(*args, **kwargs)
          File "/Library/Python/2.7/site-packages/scrapy/contrib/linkextractors/lxmlhtml.py", line 50, in _extract_links
            for el, attr, attr_val in self._iter_links(selector._root):
          File "/Library/Python/2.7/site-packages/scrapy/contrib/linkextractors/lxmlhtml.py", line 38, in _iter_links
            for el in document.iter(etree.Element):
        exceptions.AttributeError: 'str' object has no attribute 'iter'

I cannot understand why this error is happening. Here is my short `Spider`:

    import scrapy

    from tutorial.items import DmozItem
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors import LinkExtractor

    class TsrSpider(CrawlSpider):
        name = 'tsr'
        allowed_domains = ['thestudentroom.co.uk']
        start_urls = ['http://www.thestudentroom.co.uk/forumdisplay.php?f=143']

        download_delay = 4
        user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:35.0) Gecko/20100101 Firefox/35.0'

        rules = (
            Rule(
                LinkExtractor(
                    allow=('forumdisplay\.php\?f=143\&page=\d',),
                    restrict_xpaths=("//li[@class='pager-page_numbers']/a/@href",))),
            Rule(
                LinkExtractor(
                    allow=('showthread\.php\?t=\d+\&page=\d+',),
                    restrict_xpaths=("//li[@class='pager-page_numbers']/a/@href",)),
                callback='parse_link'),
            Rule(
                LinkExtractor(
                    allow=('showthread\.php\?t=\d+',),
                    restrict_xpaths=("//tr[@class='thread unread ']",)),
                callback='parse_link'),
        )

        def parse_link(self, response):
            # Iterate over posts.
            for sel in response.xpath("//li[@class='post threadpost old ']"):
                rating = sel.xpath(
                    "div[@class='post-footer']//span[@class='score']/text()").extract()
                if not rating:
                    rating = 0
                else:
                    rating = rating[0]
                item = DmozItem()
                item['post'] = sel.xpath(
                    "div[@class='post-content']/blockquote[@class='postcontent restore']/text()").extract()
                item['link'] = response.url
                item['topic'] = response.xpath(
                    "//div[@class='forum-header section-header']/h1/span/text()").extract()
                item['rating'] = rating
                yield item

**source**: <http://pastebin.com/YXdWvPgX>

Can someone help me out? Where is the mistake? I've been searching for days!?

Answer: The problem is that `restrict_xpaths` should **point to elements** - either the links directly or containers containing links, not attributes:

    rules = [
        Rule(LinkExtractor(allow='forumdisplay\.php\?f=143\&page=\d',
                           restrict_xpaths="//li[@class='pager-page_numbers']/a")),
        Rule(LinkExtractor(allow='showthread\.php\?t=\d+\&page=\d+',
                           restrict_xpaths="//li[@class='pager-page_numbers']/a"),
             callback='parse_link'),
        Rule(LinkExtractor(allow='showthread\.php\?t=\d+',
                           restrict_xpaths="//tr[@class='thread unread ']"),
             callback='parse_link'),
    ]

Tested (worked for me).

FYI, Scrapy defines [`restrict_xpaths`](http://doc.scrapy.org/en/0.22/topics/link-extractors.html#scrapy.contrib.linkextractors.sgml.SgmlLinkExtractor) as "expressions pointing to regions":

> `restrict_xpaths` (str or list) – is a XPath (or list of XPath’s) which
> defines **regions** inside the response where links should be extracted
> from. If given, only the text selected by those XPath will be scanned for
> links. See examples below.

Python Challenge level 17 in Python 3 Question: I recently started playing with [The Python Challenge](http://www.pythonchallenge.com/). While fairly convoluted, the required coding isn't very hard, which makes learning many useful modules quite interesting. My question is about level 17. I understand the idea of following the clues as was needed in level 4, while collecting the cookies, which is what I did. However, I cannot BZ2-decompress the string that I get. I tried Googling, and I found a nice blog with the solutions in Python 2. Specifically, the one for level 17 is [here](http://intelligentgeek.blogspot.co.uk/2007/03/python-challenge-17-uggggg.html). Analysing that one, I realized that I indeed get the compressed string (from the cookies) right and it decompresses properly in Python 2: bz2.decompress(urllib.unquote_plus(compressed)) However, `bz2.decompress` in Python 3 requires a byte array instead of a string, but the obvious Python 3 counterpart of the above line: bz2.decompress(urllib.parse.unquote_plus(message).encode("utf8")) fails with `OSError: Invalid data stream`. I tried all the [standard encodings](https://docs.python.org/3/library/codecs.html#standard-encodings) and some variants of the above, but to no avail. Here is my (non-working) solution so far: #!/usr/bin/env python3 """ The Python Challenge #17: http://www.pythonchallenge.com/pc/return/romance.html This is similar to #4 and it actually uses its solution. However, the key is in the cookies. The page's cookie says: "you+should+have+followed+busynothing..." So, we follow the chain from #4, using the word "busynothing" and reading the cookies. """ import urllib.request, urllib.parse import re import bz2 nothing = "12345" last_cookie = None message = "" while True: headers = dict() if last_cookie: headers["Cookie"] = last_cookie r = urllib.request.Request("http://www.pythonchallenge.com/pc/def/linkedlist.php?busynothing=" + nothing, headers=headers) with urllib.request.urlopen(r) as u: last_cookie = u.getheader("Set-Cookie") m = re.match(r"info=(.*?);", last_cookie) if m: message += m.group(1) text = u.read().decode("utf8") print("{} >>> {}".format(nothing, text)) m = re.search(r"\d+$", text) try: nothing = str(int(m.group(0))) except Exception as e: print(e) break print("Cookies message:", message) print("Decoded:", bz2.decompress(urllib.parse.unquote_plus(message).encode("utf8"))) So, my question is: what would a Python 3 solution to the above problem look like and why doesn't mine work as expected? I am well aware that parts of this can be done more nicely. I was going for a quick and dirty solution, so my interest here is only that it works (and why not the way I did it above). Answer: You need to use the [`urllib.parse.unquote_to_bytes()` function](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.unquote_to_bytes) here. It does not support the `+` to space mapping, but that is trivially worked around with `str.replace()`: urllib.parse.unquote_to_bytes(message.replace('+', '%20')) This then decompresses nicely. 
You can decode the resulting uncompressed string as ASCII: print("Decoded:", bz2.decompress(urllib.parse.unquote_to_bytes(message.replace('+', '%20'))).decode('ascii')) Demo using a _different_ message I prepared to not give away the puzzle: >>> import bz2 >>> import urllib.parse >>> another_message = 'BZh91AY%26SY%80%F4C%E8%00%00%02%13%80%40%00%04%00%22%E3%8C%00+%00%22%004%D0%40%D04%0C%B7%3B%E6h%B1AIM%3D%5E.%E4%8Ap%A1%21%01%E8%87%D0' >>> bz2.decompress(urllib.parse.unquote_to_bytes(another_message.replace('+', '%20'))).decode('ascii') 'This is not the message'
python how to include a file of lists in a script Question: I have a file that gets generated by: excerpt: group0 = ['ParentPom'] group1 = ['Commons','http', 'availability','ingestPom','abcCommons','solrIndex','123Service'] ... group10=['totalCommons','Generator'] How can I include this in my Python script? I tried import, but no luck: >>> import dependencies_custom >>> print (group2[0]) Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'group2' is not defined Answer: In the `import` form you're using, you should be able to access the groups with dependencies_custom.group2[0] type notation. If you want to use just the `group2[0]` notation, try using: from dependencies_custom import *
advanced array of bytes searching Question: I have a binary file and I have to track a "dynamic array of bytes" in this file; this array is something like: d0 30 60 XX 5d 48 Where XX can be any HEX value. I need to find all the occurrences of this array in the binary file, i.e. all the arrays of bytes that start with D0 30 60 (hex), followed by "XX" (a random hex byte), followed by 5D 48 (hex). Is there any tool or Python script that can do that? Answer: You can use this: import re fil = open("myfile.txt") txt = fil.read() mo = re.match(r'd0 30 60 ([0-9a-f][0-9a-f]) 5d 48',txt,re.M) I have not used [regex in python](http://www.tutorialspoint.com/python/python_reg_expressions.htm) much before. I used regex in PHP etc., so have a look at that link. Edit: This might be better, since `re.match` only matches at the beginning of the string while `re.findall` returns all occurrences: f = open('test.txt', 'r') s = re.findall(r'd0 30 60 ([0-9a-f][0-9a-f]) 5d 48',f.read())
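If the file is actually raw binary rather than a text file of space-separated hex, the same idea works directly on the byte string returned by reading the file in `'rb'` mode; a minimal sketch (the filename is just a placeholder):

    import re

    with open('myfile.bin', 'rb') as f:
        data = f.read()

    # \xd0 \x30 \x60, one wildcard byte, then \x5d \x48;
    # re.DOTALL lets '.' match a newline byte (0x0a) as well
    pattern = re.compile(br'\xd0\x30\x60(.)\x5d\x48', re.DOTALL)
    for m in pattern.finditer(data):
        print('match at offset %d, wildcard byte %02x' % (m.start(), ord(m.group(1))))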
Scheduling a Python program to sleep within a given time period Question: while True: now = datetime.datetime.now(); if now.hour >= 22 and now.hour < 3: print "sleep" sleep_at = datetime.datetime.combine(datetime.date.today(),datetime.time(3)) sleep_til = (sleep_at - now).seconds print sleep_til time.sleep(sleep_til) print "wake" else: print "break" break This code should make my entire program go to sleep at 10 PM and wake up at 3 AM. My question is: will this work? I tried running it, but I cannot change my system/computer time, so I cannot check. The reason I am posting this question is that my code uses datetime.date.today and datetime.datetime, which fetch the current date. Once again: I want my program to run before 10 PM, sleep between 10 PM and 3 AM, and rerun after 3 AM. Can someone check if this is the right way to do it? Answer: Consider (extra-verbose for clarity): import time, datetime # Create time bounds -- program should run between RUN_LB and RUN_UB RUN_LB = datetime.time(hour=22) # 10pm RUN_UB = datetime.time(hour=3) # 3am # Helper function to determine whether we should be currently running def should_run(): # Get the current time ct = datetime.datetime.now().time() # Compare current time to run bounds lbok = RUN_LB <= ct ubok = RUN_UB >= ct # If the bounds wrap the 24-hour day, use a different check logic if RUN_LB > RUN_UB: return lbok or ubok else: return lbok and ubok # Helper function to determine how far from now RUN_LB is def get_wait_secs(): # Get the current datetime cd = datetime.datetime.now() # Create a datetime with *today's* RUN_LB ld = datetime.datetime.combine(datetime.date.today(), RUN_LB) # Create a timedelta for the time until *today's* RUN_LB td = ld - cd # Ignore td days (may be negative), return td.seconds (always positive) return td.seconds while True: if should_run(): print("--do something--") else: wait_secs = get_wait_secs() print("Sleeping for %d seconds..." % wait_secs) time.sleep(wait_secs) But I do agree that sleeping is not the best way to delay your program starting. You may look into the task scheduler on Windows or `cron` on Linux.
Error when executing TwitterSearch in Python Question: I am currently attempting to use TwitterSearch (<https://github.com/ckoepp/TwitterSearch>) to import tweets into a csv for analysis. However, I am getting the following error message when executing the python script: from .TwitterSearchException import TwitterSearchException ValueError: Attempted relative import in non-package Here is the code: from TwitterSearch import * from TwitterSearchException import * import csv def get_tweets(query, max_tweets): query = raw_input ("Search for:") max_tweets = 2000 # takes a search term (query) and a max number of tweets to find # gets content from twitter and writes it to a csv bearing the name of your query i = 0 search = query with open(search+'.csv', 'wb') as outf: writer = csv.writer(outf) writer.writerow(['user','time','tweet','latitude','longitude']) try: tso = TwitterSearchOrder() tso.set_keywords([search]) tso.set_language('en') # English tweets only ts = TwitterSearch( consumer_key = '', consumer_secret = '', access_token = '', access_token_secret = '' ) for tweet in ts.search_tweets_iterable(tso): lat = None long = None time = tweet['created_at'] # UTC time when Tweet was created. user = tweet['user']['screen_name'] tweet_text = tweet['text'].strip().encode('ascii', 'ignore') tweet_text = ''.join(tweet_text.splitlines()) print i,time, if tweet['geo'] != None and tweet['geo']['coordinates'][0] != 0.0: # avoiding bad values lat = tweet['geo']['coordinates'][0] long = tweet['geo']['coordinates'][1] print('@%s: %s' % (user, tweet_text)), lat, long else: print('@%s: %s' % (user, tweet_text)) writer.writerow([user, time, tweet_text, lat, long]) i += 1 if i > max: return() except TwitterSearchException as e: print(e) Thanks for your help! Answer: Try a more specific import for the `TwitterSearchException` line, instead of the `*`.
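For example, assuming the pip-installed TwitterSearch package (which exposes its exception class at the package's top level), the second import line could become:

    # instead of: from TwitterSearchException import *
    from TwitterSearch import TwitterSearchException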
How to parse code after it has been stripped of styles and elements in python Question: This is a very basic question regarding HTML parsing: I am new to Python (coding, computer science, etc.) and am teaching myself to parse HTML. I have imported both the pattern and Beautiful Soup modules to parse with. I found this code on the internet to cut out all formatting. import requests import json import urllib from lxml import etree from pattern import web from bs4 import BeautifulSoup url = "http://webrates.truefx.com/rates/connect.html?f=html" html = urllib.urlopen(url).read() soup = BeautifulSoup(html) # kill all script and style elements for script in soup(["script", "style"]): script.extract() # rip it out # get text text = soup.get_text() # break into lines and remove leading and trailing space on each lines = (line.strip() for line in text.splitlines()) # break multi-headlines into a line each chunks = (phrase.strip() for line in lines for phrase in line.split(" ")) # drop blank lines text = '\n'.join(chunk for chunk in chunks if chunk) print(text) This produces the following output: EUR/USD14265522866931.056661.056751.056081.057911.05686USD/JPY1426552286419121.405121.409121.313121.448121.382GBP/USD14265522866821.482291.482361.481941.483471.48281EUR/GBP14265522865290.712790.712900.712300.713460.71273USD/CHF14265522866361.008041.008291.006551.008791.00682EUR/JPY1426552286635128.284128.296128.203128.401128.280EUR/CHF14265522866551.065121.065441.063491.066281.06418USD/CAD14265522864891.278211.278321.276831.278531.27746AUD/USD14265522864960.762610.762690.761150.764690.76412GBP/JPY1426552286682179.957179.976179.854180.077179.988 Now, from this point, how can I parse further if, say, I only want the string 'USD/CHF' or a particular data point? Is there an easier method to webscrape and parse with? Any suggestions would be great! System Specs: windows 7 64bit IDE: idle python: 2.7.5 Thank you all in advance, Rusty Answer: [Keep it simple](http://en.wikipedia.org/wiki/KISS_principle). Find the cell _by text_ (`USD/CHF`, for example) and get the [_following siblings_](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#next-siblings-and-previous-siblings): text = 'USD/CHF' cell = soup.find('td', text=text) for td in cell.next_siblings: print td.text Prints: 1426561775912 1.00 768 1.00 782 1.00655 1.00879 1.00682
Passing file to function to parse Question: I have an upload form that takes a file and sends it to a function to parse. It is a CSV file and im using a DataField type to store it. views.py def upload(request): # Handle file upload if request.method == 'POST': form = UploadForm(request.POST, request.FILES) if form.is_valid(): newdoc = CSV(file=request.FILES['csvfile']) newdoc.save() # Send file to parser import fanduel.load_data fanduel.load_data.parse(newdoc.file, request.user) # Redirect to the document list after POST return HttpResponseRedirect(reverse('app.views.upload')) else: form = UploadForm() # A empty, unbound form load_data.py def parse(file, username): import csv dataReader = csv.reader(open(file), delimiter=',', quotechar='"') forms.py # In forms.py... from django import forms class UploadForm(forms.Form): csvfile = forms.FileField( label='Select a CSV file', ) models.py class CSV(models.Model): file = models.FileField(upload_to='csv/') traceback: Environment: Request Method: POST Request URL: http://127.0.0.1:8000/app/ Django Version: 1.7.6 Python Version: 3.4.2 Installed Applications: ('django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'import_export', 'app') Installed Middleware: ('django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Traceback: File "C:\Users\Wilson\AppData\Roaming\Python\Python34\site-packages\django\core\handlers\base.py" in get_response 111. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "C:\Users\Wilson\PycharmProjects\bankroll2\app\views.py" in upload 20. app.load_data.parse(newdoc.file, request.user) File "C:\Users\Wilson\PycharmProjects\bankroll2\app\load_data.py" in parse 13. dataReader = csv.reader(open(file), delimiter=',', quotechar='"') Exception Type: TypeError at /app/ Exception Value: invalid file: <FieldFile: csv/app_entry_history_20150316_w62ruKt.csv> I'm not sure how to get it to pass it in as the correct file type. If I enter a filepath manually into load_data.py it works fine, so it's something to do with the way it's passed in. Answer: I _think_ your problem is that you are trying to open the wrong filename. When a file is uploaded to a webserver, it is either stored in memory, or written to a temporary location - Django does this as well (<https://docs.djangoproject.com/en/1.7/topics/http/file-uploads/#where- uploaded-data-is-stored>). So the value in `request.FILES['csv']` is the location of the temporary file. What you want is the file that gets stored in your forms. You will need to paste your forms and models, but I am guessing the following might work, so instead of this newdoc = CSV(file=request.FILES['csvfile']) you need newdoc = CSV(file=form.cleaned_data['csvfile'])
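As an aside, if you do want to keep passing the saved `FieldFile` itself, it knows where it was stored, so the parser can open the real path; a sketch, assuming the default filesystem storage (where `FieldFile.path` is available):

    def parse(field_file, username):
        import csv
        # FieldFile.path points at the stored copy on disk (FileSystemStorage only)
        dataReader = csv.reader(open(field_file.path), delimiter=',', quotechar='"')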
How do I grab all the links within an element in HTML using python? Question: First, please check the image below so I can better explain my question: ![enter image description here](http://i.stack.imgur.com/7XcGs.png) I am trying to take a user input to select one of the links below "Course Search By Term".... (ie. Winter 2015). The HTML opened shows the part of the code for this webpage. I would like to grab all the href links in the element , which consists of five term links I want. I am following the instructions from this website (www.gregreda.com/2013/03/03/web-scraping-101-with-python/), but it doesn't explain this part. Here is some code I have been trying. from bs4 import BeautifulSoup from urllib2 import urlopen BASE_URL = "http://classes.uoregon.edu/" def get_category_links(section_url): html = urlopen(section_url).read() soup = BeautifulSoup(html, "lxml") pldefault = soup.find("td", "pldefault") ul_links = pldefault.find("ul") category_links = [BASE_URL + ul.a["href"] for i in ul_links.findAll("ul")] return category_links Any help is appreciated! Thanks. Or if you would like to see the website, its classes.uoregon.edu/ Answer: I would keep it simple and locate all links containing `2015` in the text and `term` in `href`: for link in soup.find_all("a", href=lambda href: href and "term" in href, text=lambda text: text and "2015" in text): print link["href"] Prints: /pls/prod/hwskdhnt.p_search?term=201402 /pls/prod/hwskdhnt.p_search?term=201403 /pls/prod/hwskdhnt.p_search?term=201404 /pls/prod/hwskdhnt.p_search?term=201406 /pls/prod/hwskdhnt.p_search?term=201407 If you want full URLs, use [`urlparse.urljoin()`](https://docs.python.org/2/library/urlparse.html#urlparse.urljoin) to join the links with a base url: from urlparse import urljoin ... for link in soup.find_all("a", href=lambda href: href and "term" in href, text=lambda text: text and "2015" in text): print urljoin(url, link["href"]) This would print: http://classes.uoregon.edu/pls/prod/hwskdhnt.p_search?term=201402 http://classes.uoregon.edu/pls/prod/hwskdhnt.p_search?term=201403 http://classes.uoregon.edu/pls/prod/hwskdhnt.p_search?term=201404 http://classes.uoregon.edu/pls/prod/hwskdhnt.p_search?term=201406 http://classes.uoregon.edu/pls/prod/hwskdhnt.p_search?term=201407
how to get data from 'ImmutableMultiDict' in flask Question: I am learning how to use ajax and Flask, so what I do is send an ajax request and receive the data as a `post` request in my Python file. My HTML file contains this code: var data = {"name":"John Doe","age":"21"}; $.ajax({ url:'/post/data', datatype : "json", contentType: "application/json; charset=utf-8", data : JSON.stringify(data), success : function(result) { jQuery("#clash").html(result); },error : function(result){ console.log(result); } }); And my Python file contains: @app.route('/post/data',methods=['GET','POST']) def postdata(): #do some data = str(request.args) json_dumps = json.dumps(data) return json_dumps This gives me the following data on the page "ImmutableMultiDict([('{\"name\":\"John Doe\",\"age\":\"21\"}', u'')])" And this is what my `request.query_string` looks like: `{%22name%22:%22John%20Doe%22,%22age%22:%2221%22}` So how do I get the `name` and `age`? Please correct me if I am wrong anywhere. Thanks in advance. Answer: You don't actually need to get data from an `ImmutableMultiDict`. There are a couple of errors in what you have that are preventing you from just pulling the response as json data. First off, you have to slightly tweak the parameters of your ajax call. You should add in the call type as a `POST`. Furthermore, `datatype` should be spelt as `dataType`. Your new call should be: var data = {"name":"John Doe","age":"21"}; $.ajax({ type: 'POST', contentType: 'application/json', url: '/post/data', dataType : 'json', data : JSON.stringify(data), success : function(result) { jQuery("#clash").html(result); },error : function(result){ console.log(result); } }); The data is now actually being sent as a post request with the `json` type. On the Flask server, we can now read the data as json data as follows: @app.route('/post/data',methods=['GET','POST']) def postdata(): jsonData = request.get_json() print jsonData['name'] print jsonData['age'] return "hello world" #or whatever you want to return This will print `John Doe` and `21` successfully. Let me know if this works for you or if you have any additional questions! Edit: You can return success to the ajax call from flask as follows: # include this import at the top from flask import jsonify @app.route('/post/data',methods=['GET','POST']) def postdata(): ... return jsonify(success=True, data=jsonData)
ImportError: cannot import name 'webdriver' Question: I am a newbie to Selenium with Python. I have installed Python, pip, etc. I am trying to run the code below, but it is showing the error `ImportError: cannot import name 'webdriver'`: from selenium import webdriver from selenium.webdriver.common.keys import Keys driver = webdriver.Firefox() driver.get("http://www.python.org") Could anyone please solve this issue? Answer: If your file name is **selenium**, change it to something else, and delete the .pyc files too.
Uniformly scaled axes in 3d plot with python matplotlib Question: I'm plotting a set of 3d coordinates (x,y,z) using Axes3D. My code reads import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D x,y,z=data[::1,0],data[::1,1],data[::1,2] fig=plt.figure(figsize=(10,10)) ax=fig.add_subplot(111,projection='3d') ax.scatter(x,y,z,s=0.1,c=z,cmap='hot',marker='+') plt.show() I want all three axes to have the same scaling. Now, the problem is that the data has a large aspect ratio, that is, the variation in the x and y coordinates is about 10 times larger than the variation in the z coordinate. If I put mi=np.min(data) ma=np.max(data) ax.set_xlim(mi,ma) ax.set_ylim(mi,ma) ax.set_zlim(mi,ma) this will result in uniformly scaled axes but will waste a lot of space in the z direction. How can I avoid this and get uniformly scaled axes nevertheless? Answer: There is a question in SO that is similar... but I cannot find it. So I am attaching some code for what I understand is your problem (the part that scales is not mine, but from that other post I mentioned) import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D data =np.random.rand(100,3) x,y,z= 10*data[:,0], 20*data[:,1], 5*data[:,2] fig=plt.figure(figsize=(10,10)) ax=fig.add_subplot(111,projection='3d') ax.scatter(x,y,z) # Fix aspect ratio max_range = np.array([x.max()-x.min(), y.max()-y.min(), z.max()-z.min()]).max() / 2.0 mean_x = x.mean() mean_y = y.mean() mean_z = z.mean() ax.set_xlim(mean_x - max_range, mean_x + max_range) ax.set_ylim(mean_y - max_range, mean_y + max_range) ax.set_zlim(mean_z - max_range, mean_z + max_range) plt.show() And, this is the result: ![enter image description here](http://i.stack.imgur.com/u7gPp.png)
ImportError: libexslt.so.0: cannot open shared object file: No such file or directory Question: I am trying to use the Python 2.7.8 that came with Splunk 6 for some XML parsing using lxml. from lxml import etree I see the below error: [root@**** bin]# ./python some.py Traceback (most recent call last): File "some.py", line 1, in <module> from lxml import etree ImportError: libexslt.so.0: cannot open shared object file: No such file or directory Also I have this in splunk/lib/: lrwxrwxrwx libexslt.so -> libexslt.so.0.8.17 lrwxrwxrwx libexslt.so.0 -> libexslt.so.0.8.17 -r-xr-xr-x libexslt.so.0.8.17 Thanks for the help. Answer: Assuming you're running an operating system with manageable library paths through the ldconfig utility (like GNU/Linux), you can try the following: # echo "/your/path/to/splunk/lib" >> /etc/ld.so.conf.d/splunk.conf # ldconfig The problem here is that the search path for the shared object isn't set, so the application has no way to know where to find it. If you're running an operating system that doesn't have ldconfig, I suggest you search a bit for the utility that manages the library paths for that particular case (or just update your question with more information). If you don't have privileges on that system to perform such operations, you can use environment variables to set such search paths, as suggested in the link referred to in the comments to your question (by @jedwards).
How should I encrypt API tokens in Python? Question: I've written a basic Python application that uses Twitter's API. I need to be able to encode my API secret, as it should never be human-readable within my program (Twitter's words). How should I do this in Python? Is it possible? Answer: Store the API key in an external file and load it at runtime. Just be sure to never commit the external file or publish it. Obfuscating the key once loaded doesn't really prevent them from stealing it (especially if they have the source to unwind your obfuscation, as [jonrsharpe](http://stackoverflow.com/questions/29100900/how-should-i-encrypt-api-tokens-in-python/29101478#comment46432400_29100900) pointed out). Here's a crude example; I'm sure you could refine it to suit your needs: secret_keys file: { "TWITTER_SECRET" : "somebase64encodedkey" } python: import json secrets_filename = 'secret_keys' api_keys = {} with open(secrets_filename, 'r') as f: api_keys = json.loads(f.read()) print api_keys['TWITTER_SECRET'] # somebase64encodedkey
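Another common alternative to the external file is an environment variable, which also keeps the secret out of the repository; a minimal sketch (the variable name is just an example):

    import os

    twitter_secret = os.environ.get('TWITTER_SECRET')
    if twitter_secret is None:
        raise RuntimeError('Set the TWITTER_SECRET environment variable first')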
Directly calculating conditional averages in a Python Dictionary Question: I'm guessing there is a better method for averaging a dict in Python, but I'm unsure how to go about it. At the moment I have a dict of dicts, and I am trying to find a better method of finding, say, the average age of company car owners in a company. At the moment I am getting a correct result, but I think my method is inefficient: I search each NAME key in the Company dict, check for Company Car == 'Yes', and then drop the age of the employee into a list; I then do an average calculation on the list at the end of the Company dict. I am sure there must be a better method than creating lists and dropping values into them? Here's an example of my dict of dicts… Company{ 'NAME1': {''M_or_F': 'Male', 'AGE’: '24', ‘DEPT’: ‘Finance', ‘Company Car’:’No’'} 'NAME2': {''M_or_F': 'Male', 'AGE’: '52', ‘DEPT’: ‘Marketing', ‘Company Car’:’Yes’'} 'NAME3': {''M_or_F': 'Female', 'AGE’: '36', ‘DEPT’: ‘Finance', ‘Company Car’:'Yes''} 'NAME4': {''M_or_F': 'Male', 'AGE’: '28', ‘DEPT’: ‘Finance', ‘Company Car’:’No’'} 'NAME5': {''M_or_F': 'Female', 'AGE’: '23', ‘DEPT’: ‘HR', ‘Company Car’:’Yes’'} } Any hints on how I could do away with lists and calculate directly from the dictionary? My current inefficient method is… CC_agelist = [] for NAME in Company: if (Company[NAME][‘Company Car'] == 'Yes'): CC_agelist.append(int(Company[NAME]['AGE’])) #followed by an average calculation on CC_agelist Answer: First, clean up the syntax of your data as follows: Company = { 'NAME1': {'M_or_F': 'Male', 'AGE': '24', 'DEPT': 'Finance', 'Company Car': 'No'}, 'NAME2': {'M_or_F': 'Male', 'AGE': '52', 'DEPT': 'Marketing', 'Company Car': 'Yes'}, 'NAME3': {'M_or_F': 'Female', 'AGE': '36', 'DEPT': 'Finance', 'Company Car': 'Yes'}, 'NAME4': {'M_or_F': 'Male', 'AGE': '28', 'DEPT': 'Finance', 'Company Car': 'No'}, 'NAME5': {'M_or_F': 'Female', 'AGE': '23', 'DEPT': 'HR', 'Company Car': 'Yes'} } Now you can use a list comprehension to get your list, followed by a simple formula for calculating the mean: CC_agelist = [int(D['AGE']) for D in Company.itervalues() if D['Company Car'] == 'Yes'] mean_CC_age = float(sum(CC_agelist)) / len(CC_agelist) Or you can import numpy and do everything on one line: import numpy as np mean_CC_age = np.mean([int(D['AGE']) for D in Company.itervalues() if D['Company Car'] == 'Yes'])
Getting 'Missing required field: member' when trying to add a member to a google group via API Question: Trying to use Google admin directory API in order to read members of a google group (organization) - it works fine. When I try to add a member I get: { errors: [ { domain: 'global', reason: 'required', message: 'Missing required field: member' } ], code: 400, message: 'Missing required field: member' } I've googled the error and found questions like [this](http://stackoverflow.com/questions/22765599/missing-required-field- member), [this](http://stackoverflow.com/questions/27057265/google-groups- directory-api-add-user-to-group-raises-error-php) and a few other unhelpful results. I checked and it's definitely _not_ a missing scope nor permissions. #!/usr/bin/python import httplib2 import json from oauth2client.client import SignedJwtAssertionCredentials from urllib import urlencode def get_group_members(group): url = 'https://www.googleapis.com/admin/directory/v1/groups/{}/members'.format(group['email']) return call_google_api("GET", url) def add_group_member(group, payload=False): url = 'https://www.googleapis.com/admin/directory/v1/groups/{}/members'.format(group) return call_google_api("POST", url, payload) def call_google_api(method, url, payload=False): content = {} try: http = get_conn() if payload: (resp, content) = http.request(uri=url, method=method, body=urlencode(payload)) else: (resp, content) = http.request(uri=url, method=method) except Exception as e: print "Failed to post request to [{}] due to: {}".format(url, e) return json.loads(content) def get_conn(): client_email = get_client_email_from_db() with open(get_private_key_filename()) as f: private_key = f.read() oauth_scope = ['https://www.googleapis.com/auth/admin.directory.group.member', 'https://www.googleapis.com/auth/admin.directory.group', ] credentials = SignedJwtAssertionCredentials(client_email, private_key, oauth_scope, sub='[email protected]') http = httplib2.Http() return credentials.authorize(http) if __name__ == '__main__': payload = { "email": "[email protected]", "role": "MEMBER", } print "\n ---------------------------------- \n" print "calling add_group_member('[email protected]', '[email protected]')" res = add_group_member("[email protected]", payload) print "\n ---------------------------------- \n" **Comment** : I managed to achieve what I wanted by using the sdk `apiclient.discovery.build`, but still - I'm curious, what's the issue and if it can be solved. Debugging the request: connect: (www.googleapis.com, 443) send: 'POST /admin/directory/v1/groups/[email protected]/members HTTP/1.1\r\nHost: www.googleapis.com\r\nContent-Length: 38\r\ncontent-type: application/json\r\naccept-encoding: gzip, deflate\r\nauthorization: Bearer ya29.RAFzf3hyxvP0LuR4VdpqKr_dD0WzOcvXjn4eWV5Em6xJDissi4ieOZ2ZBRMOP-WLhvTrecBxgF_6sznc1GKSWHanvgYTh_EzcilsAN0f5jOiiMahOadG2v5ixBPL9GcqebRdz_kQc1y2iQ\r\nuser-agent: Python-httplib2/0.9 (gzip)\r\n\r\nrole=MEMBER&email=alfasi%40xxxx.com' reply: 'HTTP/1.1 400 Bad Request\r\n' header: Vary: Origin header: Vary: X-Origin header: Content-Type: application/json; charset=UTF-8 header: Content-Encoding: gzip header: Date: Sat, 28 Mar 2015 23:14:47 GMT header: Expires: Sat, 28 Mar 2015 23:14:47 GMT header: Cache-Control: private, max-age=0 header: X-Content-Type-Options: nosniff header: X-Frame-Options: SAMEORIGIN header: X-XSS-Protection: 1; mode=block header: Server: GSE header: Alternate-Protocol: 443:quic,p=0.5 header: Transfer-Encoding: chunked Answer: Since google APIs use (only?) 
JSON encoding, your post data is not being parsed into the needed member object. You are already loading json for the response, so you should just need to change the encoding, and optionally indicate it explicitly: if payload: (resp, content) = http.request(uri=url, method=method, body=urlencode(payload)) # becomes: if payload: (resp, content) = http.request(uri=url, method=method, body=json.dumps(payload), headers={'Content-type':'application/json'})
DJANGO celery task is executed from shell but it's not executed from view Question: I am trying to create some asynchronous tasks with celery in my django application settings.py BROKER_URL = 'django://localhost:6379/0' CELERY_ACCEPT_CONTENT = ['json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' celery.py: from __future__ import absolute_import import os from celery import Celery from django.conf import settings os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'provcon.settings') app = Celery('provcon') app.config_from_object('django.conf:settings') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) project __init__.py: from __future__ import absolute_import from .celery import app as celery_app tasks.py: from __future__ import absolute_import from celery import shared_task from celery import task from .models import Proc_Carga @task() def carga_ftp(): tabla = Proc_Carga() sp = tabla.carga() return None I call the asynchronous task from my view like this: from .tasks import carga_ftp @login_required(login_url='/login/') def archivoview(request): usuario= request.user if request.method == 'POST': form = ProcFTPForm(usuario,request.POST) if form.is_valid(): form.save() proc = Lista_Final() lista = proc.archivos() # call asynchronous task carga_ftp.delay() return HttpResponseRedirect('/resumen/') else: form = ProcFTPForm(usuario) return render_to_response('archivo.html',{'form':form},context_instance=RequestContext(request)) When I run the task from the python manage.py shell, the worker executes and creates the database objects without any problem, but when I try to execute the task from the view, it doesn't work; the task is never executed. Any idea why the task runs from the manage.py shell but not from the view? Thanks in advance Answer: Check if redis is running $redis-cli ping Check if the celery worker is running in the django admin interface. If it is not running, execute this command: celery -A provcon worker -l info Test your task from your django application. If it works, run the celery worker in the background like a [daemon](http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html)
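One more thing worth checking from the settings shown in the question: `BROKER_URL = 'django://localhost:6379/0'` selects the Django-database transport even though the host and port look like Redis. If the web process publishes to one transport while the worker listens on another, `.delay()` will appear to do nothing. A Redis broker URL would look like this (a sketch, assuming Redis on its default port):

    # settings.py
    BROKER_URL = 'redis://localhost:6379/0'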
Generate date ranges broken by month for a given period Question: I'm struggling with writing a pythonic, clean generator method that, given a date period, like `['2014-01-15', '2015-02-03]`, will give me this: ['2014-01-15', '2014-01-31'] ['2014-02-01', '2014-02-28'] ... ['2015-02-01', '2015-02-03'] This is what I came up with: from datetime import datetime import calendar def genDatePeriods(startDate, endDate, format='%Y-%m-%d'): dt1 = datetime.strptime(startDate, format) dt2 = datetime.strptime(endDate, format) for year in range(dt1.year, dt2.year + 1): for month in range(1, 13): day0 = dt1.day if month == dt1.month and year == dt1.year else 1 day1 = dt2.day if month == dt2.month and year == dt2.year else calendar.monthrange(year, month)[1] if (year == dt1.year and month < dt1.month) or (year == dt2.year and month > dt2.month): continue else: d0 = (year, month, day0) d1 = (year, month, day1) yield [datetime(*d).strftime(format) for d in [d0, d1]] It works, however I feel like there is a more pythonic/tidy/efficient way to do this. Any ideas? Answer: The following is much more concise, using `datetime.date()` objects to find the first day of the next month each time, until you reach the end date: from datetime import datetime, timedelta def genDatePeriods(startDate, endDate, format='%Y-%m-%d'): curr = datetime.strptime(startDate, format).date() end = datetime.strptime(endDate, format).date() while curr <= end: # first day of the next month, using modular arithmetic next_month = curr.replace( month=curr.month % 12 + 1, year=curr.year + curr.month // 12, day=1) curr_formatted = curr.strftime(format) # end date is next month's first day, minus one day, # or the given endDate, whichever comes first end_formatted = min(next_month - timedelta(days=1), end).strftime(format) yield [curr_formatted, end_formatted] curr = next_month Demo: >>> for res in genDatePeriods('2014-01-15', '2015-02-03'): ... print res ... ['2014-01-15', '2014-01-31'] ['2014-02-01', '2014-02-28'] ['2014-03-01', '2014-03-31'] ['2014-04-01', '2014-04-30'] ['2014-05-01', '2014-05-31'] ['2014-06-01', '2014-06-30'] ['2014-07-01', '2014-07-31'] ['2014-08-01', '2014-08-31'] ['2014-09-01', '2014-09-30'] ['2014-10-01', '2014-10-31'] ['2014-11-01', '2014-11-30'] ['2014-12-01', '2014-12-31'] ['2015-01-01', '2015-01-31'] ['2015-02-01', '2015-02-03']
why simple led python program not working Question: I am testing my Pi for the first time and I can't get the first program to light the LED to run. Below is my code, from the Raspberry Pi cookbook: import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) GPIO.setup(18,GPIO.OUT) while(True): GPIO.output(18, True) time.sleep(1) GPIO.output(18,False) time.sleep(1) When I run the script with sudo python led.py it shows no error, but the LED gives no output. I tried connecting a 360 ohm resistor between pin 18 and the LED, and also tried connecting a 540 ohm resistor between pin 18 and the LED, but no result. Can anyone suggest how to debug the problem? Answer: You should probably (and I don't know if either main stack or the EE-centric or the RPi spinoffs has a good way to do this inline) include some sort of diagram showing which pins you're connecting. At a minimum, you need to go GPIO 18 to LED to GND, and as LED is Light Emitting Diode or something to that effect, you need to make sure your LED is pointing in the correct direction. I'd suggest, as a way to debug this partially, taking python out of the loop and just configuring the LED to be always on by writing a 1 to the appropriate /sys/blah/path. Also, <http://elinux.org/RPi_Low-level_peripherals#sysfs> (which has the path you need) points out... GPIO 24 is wired to P1_18 so you might want to double check that the pin you think is 18 is called 18 on both sides of the system.
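Here is a rough Python version of that sysfs sanity check, assuming the stock `/sys/class/gpio` interface and root privileges; it forces the pin high with no loop involved (using GPIO 24, i.e. physical pin P1_18, per the note above):

    # Drive BCM GPIO 24 (physical pin P1_18) high via sysfs -- run as root
    def force_high(gpio=24):
        try:
            with open('/sys/class/gpio/export', 'w') as f:
                f.write(str(gpio))
        except IOError:
            pass  # pin was already exported
        with open('/sys/class/gpio/gpio%d/direction' % gpio, 'w') as f:
            f.write('out')
        with open('/sys/class/gpio/gpio%d/value' % gpio, 'w') as f:
            f.write('1')

    force_high()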
imported modules become None when replacing current module in sys.modules using a class object Question: an unpopular but "supported" python hack (see Guido: <https://mail.python.org/pipermail/python-ideas/2012-May/014969.html>) that enables `__getattr__` usage on module attributes involves the following: import os, sys class MyClass(object): def check_os(self): print os sys.modules[__name__] = MyClass() On import, the imported module becomes a class instance: >>> import myModule >>> myModule <myModule.MyClass object at 0xf76def2c> However, in Python 2.7, all other modules imported within the original module are set to None. >>> repro.check_os() None In Python 3.4, everything works: >>> repro.check_os() <module 'os' from '/python/3.4.1/lib/python3.4/os.py'> This feels like something to do with [Imported modules become None when running a function](http://stackoverflow.com/questions/17084260/imported-modules-become-none-when-running-a-function), but does anyone know why this happens internally? It seems that if you store the original module (without fully replacing it in Python 2) then everything continues to work: sys.modules[__name__+'_bak'] = sys.modules[__name__] Answer: The problem you are running into is that in Pythons prior to 3.4, when a module is destroyed (as yours is, because you replace it with a class and there are no further references to it), the items in that module's `__dict__` are forcibly set to `None`. The workaround, if you need to support Pythons prior to 3.4, is to have an `import` statement in the class that will replace the module: class MyClass(object): import os def check_os(self): print(os) For more info, see [this answer about interpreter shutdown](http://stackoverflow.com/a/25649713/208880).
Sorting data in alphabetic, highest to lowest order using CSV Question: How do I sort data alphabetically and from highest to lowest in a notebook (text) document created by Python, using CSV? I have a maths quiz which saves its results in the notebook file, but they are in random order. How do I sort all this data using Python? Answer: import numpy as np text_column_index = 0 csv_has_header = True data = ... df = np.genfromtxt(data, delimiter=',', skip_header=csv_has_header) print df[np.argsort(df[:, text_column_index])[::-1]] Fill out the parameters accordingly.
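If the results file is a simple name,score CSV, the standard csv module plus `sorted()` is arguably simpler than numpy; a sketch assuming rows like `alice,7` in a file called results.csv (both hypothetical):

    import csv

    with open('results.csv') as f:
        rows = list(csv.reader(f))

    by_name = sorted(rows, key=lambda r: r[0].lower())              # alphabetic by name
    by_score = sorted(rows, key=lambda r: int(r[1]), reverse=True)  # highest score first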
Python packaging for hive/hadoop streaming Question: I have a hive query with custom mapper and reducer written in python. The mapper and reducer modules depend on some 3rd party modules/packages which are not installed on my cluster (installing them on the cluster is not an option). I realized this problem only after running the hive query when it failed saying that the xyz module was not found. How do I package the whole thing so that I have all the dependencies (including transitive dependencies) available in my streaming job? How do I use such a packaging and import modules in my mapper and reducer? The question is rather naive but I could not find an answer even after an hour of googling. Also, it's not just specific to hive but holds for hadoop streaming jobs in general when mapper/reducer is written in python. Answer: This may be done by packaging the dependencies and the reducer script in a zip, and adding this zip as a resource in Hive. Let's say the Python reducer script depends on package D1, which in turn depends on D2 (thus resolving OP's query on transitive dependencies), and both D1 and D2 are not installed on any machine in the cluster. * Package D1, D2, and the Python reducer script (let's call it reducer.py) in, say, dep.zip * Use this zip like in the following sample query: ADD ARCHIVE dep.zip; FROM (some_table) t1 INSERT OVERWRITE TABLE t2 REDUCE t1.col1, t1.col2 USING 'python dep.zip/dep/reducer.py' AS output; Notice the first and the last line. Hive unzips the archive and creates these directories. The _dep_ directory will hold the script and dependencies.
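A sketch of building that archive from Python, assuming the pure-Python dependencies are checked out as directories D1/ and D2/ next to reducer.py (names follow the example above):

    import os, zipfile

    # Pack D1, D2 and reducer.py under a top-level dep/ directory inside dep.zip
    with zipfile.ZipFile('dep.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
        for root in ('D1', 'D2'):
            for dirpath, _, filenames in os.walk(root):
                for fn in filenames:
                    path = os.path.join(dirpath, fn)
                    zf.write(path, os.path.join('dep', path))
        zf.write('reducer.py', 'dep/reducer.py')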
How does pySerial implement the "with" statement without __enter__ and __exit__? Question: pySerial can be used with Python's `with` statement like this: with serial.Serial("/dev/ttyS1") as ser: ser.write("AAAA") ser.read(8) I tried to see exactly what it is pySerial does to make this work, but I couldn't find an `__enter__` or `__exit__` function anywhere! I ran `grep -e '\(__enter__\)\|\(__exit__\)' *.py` from the pySerial directory, and grep didn't find either function. How does pySerial do it? **Edit:** I also tried the following to see what was called next: import serial, pdb with serial.Serial("/dev/ttyUSB0") as ser: pdb.set_trace() This brought me directly to line 433 of `serialposix.py`, the `close()` function. Does the `with` statement also look for `open()` and `close()` functions? Answer: `Serial` class doesn't implement these methods, it inherits them from [`io.IOBase`](https://docs.python.org/3/library/io.html?highlight=io.rawiobase#io.IOBase). The relevant part from [/serial/serialposix.py](http://svn.code.sf.net/p/pyserial/code/trunk/pyserial/serial/serialposix.py): try: import io except ImportError: # classic version with our own file-like emulation class Serial(PosixSerial, FileLike): pass else: # io library present class Serial(PosixSerial, io.RawIOBase): pass
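This also answers the follow-up about `open()`/`close()`: the `with` statement only ever looks for `__enter__`/`__exit__`, and `io.IOBase.__exit__` simply calls `close()`, which is why the debugger landed in `serialposix.py`'s `close()`. You can confirm the inheritance at the interpreter (a quick sketch):

    import io
    import serial

    print(io.IOBase in serial.Serial.__mro__)  # True -- inherited via io.RawIOBase
    print('__enter__' in dir(io.IOBase))       # True -- this is what `with` uses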
How to download file from local server in Python Question: The scenario is: 1. The client will enter a file name, e.g. **xyz** 2. The server will show all the files that it has in different folders. The client will select 1, 2 or 3 (if there), and the file will be downloaded. **I have done the searching part. I want help with downloading and saving the file to another directory.** My code so far is for searching the file. import socket tcp_ip="127.0.0.1" tcp_port=1024 buffer_size= 200 filename=raw_input("Enter file name\n") s= socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((tcp_ip,tcp_port)) data=s.recv(buffer_size) s.close() Server code (this code is for one file for now): **The help required is how to download and save the file that is found on the server.** import socket import os tcp_ip='127.0.0.1' tcp_port=1024 buffer_size=100 s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind((tcp_ip, tcp_port)) s.listen(1) conn, addr = s.accept() print 'Connection Address:',addr while 1: data=conn.recv(buffer_size) if not data: break else: print "received server side data:", data conn.send(data) conn.close() Answer: The following example shows how to download a file from a server over TCP. Client Code: import socket import os HOST = 'localhost' PORT = 1024 downloadDir = "/tmp" filename = raw_input('Enter your filename: ') socket1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) socket1.connect((HOST, PORT)) socket1.send(filename) with open(os.path.join(downloadDir, filename), 'wb') as file_to_write: while True: data = socket1.recv(1024) if not data: break file_to_write.write(data) file_to_write.close() socket1.close() Server Code: import socket HOST = 'localhost' PORT = 1024 socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) socket.bind((HOST, PORT)) socket.listen(1) while (1): conn, addr = socket.accept() reqFile = conn.recv(1024) with open(reqFile, 'rb') as file_to_send: for data in file_to_send: conn.sendall(data) conn.close() socket.close() Note: the server code is not robust and will crash when the file doesn't exist. You should modify the above example according to your needs.
Python concordance command in NLTK Question: I have a question regarding Python concordance command in NLTK. First, I came through an easy example: from nltk.book import * text1.concordance("monstrous") which worked just fine. Now, I have my own .txt file and I would like to perform the same command. I have a list called "textList" and want to find the word "CNA" so I put command textList.concordance('CNA') Yet, I got the error AttributeError: 'list' object has no attribute 'concordance'. In the example, is the text1 NOT a list? I wonder what is going on here. Answer: `.concordance()` is a special nltk function. So you can't just call it on any python object (like your list). More specifically: `.concordance()` is a method in the [`Text` class of nltk](http://www.nltk.org/api/nltk.html#nltk.text.Text) Basically, if you want to use the `.concordance()`, you have to instantiate a Text object first, and then call it on that object. [Text](http://www.nltk.org/api/nltk.html#nltk.text.Text) > A Text is typically initialized from a given document or corpus. E.g.: > > > import nltk.corpus > from nltk.text import Text > moby = Text(nltk.corpus.gutenberg.words('melville-moby_dick.txt')) > [.concordance()](http://www.nltk.org/_modules/nltk/text.html#Text.concordance) > concordance(word, width=79, lines=25) > > Print a concordance for word with the specified context window. Word > matching is not case-sensitive. So I imagine something like this would work (not tested) import nltk.corpus from nltk.text import Text textList = Text(nltk.corpus.gutenberg.words('YOUR FILE NAME HERE.txt')) textList.concordance('CNA')
Comparing File Dates in a Directory Question: I am trying to write a script in Python to upload a series of photos depending on the dates they were created. I am having an issue of comparing the dates of each of the files to a date before and after the dates I want so that I can create an array to loop through for my uploading. Here is what I have: from stat import S_ISREG, ST_CTIME, ST_MODE import os, sys, time, datetime array = [] area = "/home/user/blah" # Edit the path to match your desired folder between the "" os.chdir(area) retval = os.getcwd() # Puts you in the desired directory dirpath = sys.argv[1] if len(sys.argv) == 2 else r'.' entries = (os.path.join(dirpath, fn) for fn in os.listdir(dirpath)) entries = ((os.stat(path), path) for path in entries) entries = ((stat[ST_CTIME], path) for stat, path in entries if S_ISREG(stat[ST_MODE])) for cdate, path in sorted(entries): filedate = time.ctime(cdate) if filedate < datetime.date(2015,03,13) and filedate > datetime.date(2015,02,17): print time.ctime(cdate) print os.path.basename(path) Is there a way to do this with ctime or is there a better way? Answer: There's no real need to `os.chdir()` here. Dealing with absolute filenames is fine. You can simplify the selection criteria using a list-comp, `datetime`, `os.path.isfile` and `os.path.getctime`, eg: import os from datetime import datetime files = [ fname for fname in sorted(os.listdir(dirpath)) if os.path.isfile(fname) and datetime(2015, 2, 17) <= datetime.fromtimestamp(os.path.getctime(fname)) <= datetime(2015, 3, 13) ] This returns a list of all files between two dates... I'm guessing you're using Python 2.x because otherwise `datetime.date(2015,03,13)` would be giving you a `SyntaxError` in 3.x. Be wary of that as `03` is an octal literal and just happens to work in your case - but `08`/`09` will break as they're invalid for octal.
How to fix the int error in a matching game using python? Question: Okay, I've been trying this forever now. Keep getting stuck on an 'int' error. Description: A common memory matching game played by young children is to start with a deck of cards that contains identical pairs. For example, given six cards in the deck, two might be labeled 1, two labeled 2 and two labeled 3. The cards are shuffled and placed face down on a board. A player then selects two cards that are face down, turns them face up, and if the cards match they are left face up. If the two cards do not match, they are returned to their original face down position. The game continues until all cards are face up. Sample input/output: >>> main() You input number of rows and columns... Enter number of rows: 3 Enter number of columns: 2 * * * * * * Then you input coords like so... Enter coordinates for first card: 1 1 Enter coordinates for second card: 3 1 Not an identical pair. Found 2 at (1,1) and 1 at (3,1) * * * * * * Enter coordinates for first card: 1 2 Enter coordinates for second card: 2 2 Not an identical pair. Found 2 at (1,2) and 3 at (2,2) * * * * * * Enter coordinates for first card: 1 1 Enter coordinates for second card: 1 2 2 2 * * * * Enter coordinates for first card: 3 1 Enter coordinates for second card: 3 2 Not an identical pair. Found 1 at (3,1) and 3 at (3,2) 2 2 * * * * Enter coordinates for first card: 2 1 Enter coordinates for second card: 3 1 2 2 1 * 1 * Enter coordinates for first card: 3 2 Enter coordinates for second card: 2 2 2 2 1 3 1 3 Requirements: Design Requirements: You need to use three classes: Card, Deck and Game. Card stores both the card's value and face (a string or Boolean variable to indicate whether the card is facing up or down). Deck contains the cards needed for the game. It will contain among its methods a method to deal a card, another for shuffling the deck, and a method that returns the number of cards left in the deck. These two classes are not identical to the classes Card and Deck discussed in the book but have many things in common. The class Game simulates playing a single game and represents the interaction between the user and the other classes. Its instance members store a 2D list (of card objects) representing the game board where the cards are placed, the number of rows and the number of columns of the game board. Among the instance methods of the Game class: play(), which simulates playing the game; isGameOver(), which checks whether or not the game is over; displayBoard(), which displays the board; populateBoard(), which creates the initial 2D list of identical pairs of cards with all the cards facing down. Most probably, you will need to write other instance methods as you see appropriate. 
My code so far: import random class Card(object): '''A card object with a suit and face''' def __init__(self, value): '''Stores both the card's value and face''' self._value = value self._face = False def getValue(self): '''Get the value of the card''' return self._value def getFace(self): '''Get the face of the card''' return self._face def setFace(self): self._face = True class Deck(object): def __init__(self, pairs): self._pairs = pairs self._cards = [] for cards in range(self._pairs): c1 = Card(cards) self._cards.append(c1) c2 = Card(cards) self._cards.append(c2) def deal(self): if len(self) == 0: return None else: return self._cards.pop(0) def shuffle(self): '''Shuffels the cards.''' random.shuffle(self._cards) def __len__(self): '''Returns the number of cards in the deck''' return len(self._cards) class Game(object): def __init__(self, rows, columns): self._deck = Deck((rows * columns)//2) self._rows = rows self._columns = columns self._board = [] for row in range(self._rows): self._board.append([0] * columns) def populateBoard(self): self._deck_shuffle() for columns in self._columns: for rows in self._rows: self._board[rows][columns] = self._deck_deal() def displayBoard(self): for rows in self._rows: for columns in self._columns: if self._board[rows][columns]._getFace() == False: print('*') else: print(self._board[rows][columns._getValue()]) print() def play(self): while True: if self.isGameOver() == False: break self.displayBoard() coord1 = input('Enter coordinates for the first card: ') coord2 = input('Enter coordinates for the second card: ') newCoord1 = coord1.split(" ") newCard1 = self._board[int(newCoord1[0])][int(newCoord1[1])].getValue() newCoord2 = coord2.split(" ") newCard2 = self._board[int(newCoord2[0])][int(newCoord2[1])].getValue() if newCard1 != newCard2: print("Not an identical pair. Found", newCard1, "at", newCoord1, "and", newcard2, "at", newCoord2) else: self._board[int(newCoord1[0])][int(newCoord1[1])].setFace() self._board[int(newCoord2[0])][int(newCoord2[1])].setFace() def isGameOver(self): face = False for rows in self._rows: for columns in self._columns: if self._board[row][column] == False: face = True return face def main(): while True: # Force user to enter valid value for number of rows while True: rows = input("Enter number of rows ") if rows.isdigit() and ( 1 <= int(rows) <= 9): rows = int(rows) break else: print (" ***Number of rows must be between 1 and 9! Try again.***") # Adding *** and indenting error message makes it easier for the user to see # Force user to enter valid value for number of columns while True: columns = input("Enter number of columns ") if columns.isdigit() and ( 1 <= int(columns) <= 9): columns = int(columns) break else: print (" ***Number of columns must be between 1 and 9! Try again.***") if rows * columns % 2 == 0: break else: print (" ***The value of rows X columns must be even. Try again.***") game = Game(rows, columns) game.play() if __name__ == "__main__": main() Sorry for the format Problems. This site confuses me with it. Lol. If anyone can help me out, that'd be great. 
EXACT TRACEBACKS: Traceback (most recent call last): File "C:\Users\River\Desktop\Hw-3.py", line 135, in <module> main() File "C:\Users\River\Desktop\Hw-3.py", line 132, in main game.play() File "C:\Users\River\Desktop\Hw-3.py", line 84, in play self.displayBoard() File "C:\Users\River\Desktop\Hw-3.py", line 74, in displayBoard if self._board[rows][columns].getFace() == False: AttributeError: 'int' object has no attribute 'getFace' and sometimes this Traceback (most recent call last): File "C:\Users\River\Desktop\Hw-3.py", line 130, in <module> main() File "C:\Users\River\Desktop\Hw-3.py", line 127, in main game.play() File "C:\Users\River\Desktop\Hw-3.py", line 79, in play self.displayBoard() File "C:\Users\River\Desktop\Hw-3.py", line 67, in displayBoard for rows in self._rows: TypeError: 'int' object is not iterable Maybe a `__str__` method? Not sure. If it is, I'm not sure how to write one for this... Answer: I fixed most of your problems. See comments within. Your code is not considered "Pythonic", but I've left most of it as it is. Your choice of variable and method names can be improved. For example, when iterating over the rows and columns of the board, use `row` and `col` for the iterating variables. Also, the `getFace()` method in the `Card` class can perhaps be better renamed as `isFaceUp()`, or something to that effect. In this case, doing away with comparison with `False` will also read more naturally. Another issue is your use of setters/getters. They are not considered "Pythonic", as others have already suggested in the comments. With errors in different methods, some elementary, it seems that you were trying to write everything in one shot. It is often a good idea to write/test/debug small pieces of code at a time. #!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import print_function # I'm using Python 2.7 so need this import random class Card(object): '''A card object with a suit and face''' def __init__(self, value): '''Stores both the card's value and face''' self._value = value self._face = False # Getters and setters are not Pythonic # Consider using property def getValue(self): '''Get the value of the card''' return self._value def getFace(self): '''Get the face of the card''' return self._face def setFace(self): self._face = True def __str__(self): '''Override str to return printable result. Useful for debugging.''' return ", ".join(("Value: ", str(self._value), "Face: ", str(self._face))) class Deck(object): def __init__(self, pairs): self._pairs = pairs self._cards = [] for cards in range(self._pairs): c1 = Card(cards) self._cards.append(c1) c2 = Card(cards) self._cards.append(c2) def deal(self): if len(self) == 0: return None else: return self._cards.pop(0) def shuffle(self): '''Shuffles the cards.''' random.shuffle(self._cards) def __len__(self): '''Returns the number of cards in the deck''' return len(self._cards) class Game(object): def __init__(self, rows, columns): self._deck = Deck((rows * columns)//2) self._rows = rows self._columns = columns self._board = [] for row in range(self._rows): self._board.append([0] * self._columns) self.populateBoard() # self.revealBoard() # For debugging def populateBoard(self): self._deck.shuffle() # For debugging # print("Deck: {}".format(map(str, self._deck._cards))) # self._board = [[self._deck.deal() for _ in range(self._columns)] # for _ in range(self._rows)] # Consider renaming variables as col and row. 
Similar for others # elsewhere for columns in range(self._columns): for rows in range(self._rows): self._board[rows][columns] = self._deck.deal() def revealBoard(self): """For debugging. Reveal the cards for the board""" for rows in range(self._rows): for columns in range(self._columns): print(str(self._board[rows][columns].getValue()) + " ", end="") print("") def displayBoard(self): for rows in range(self._rows): for columns in range(self._columns): if self._board[rows][columns].getFace() == False: print('* ', end="") else: # print(self._board[rows][columns._getValue()]) print(str(self._board[rows][columns].getValue()) + " ", end="") print("") # Print newline after each row def play(self): while not self.isGameOver(): self.displayBoard() coord1 = raw_input('Enter coordinates for the first card: ') coord2 = raw_input('Enter coordinates for the second card: ') newCoord1 = map(int, coord1.strip().split()) newCard1 = self._board[newCoord1[0]][newCoord1[1]].getValue() newCoord2 = map(int, coord2.strip().split()) newCard2 = self._board[newCoord2[0]][newCoord2[1]].getValue() # Need to check that user has entered valid data here # newCoord1 = coord1.split(" ") # newCard1 = self._board[int(newCoord1[0])][int(newCoord1[1])].getValue() # newCoord2 = coord2.split(" ") # newCard2 = self._board[int(newCoord2[0])][int(newCoord2[1])].getValue() if newCard1 != newCard2: print("Not an identical pair. Found", newCard1, "at", newCoord1, "and", newCard2, "at", newCoord2) else: self._board[newCoord1[0]][newCoord1[1]].setFace() self._board[newCoord2[0]][newCoord2[1]].setFace() # self._board[int(newCoord1[0])][int(newCoord1[1])].setFace() # self._board[int(newCoord2[0])][int(newCoord2[1])].setFace() print("Game Over") self.displayBoard() def isGameOver(self): for rows in range(self._rows): if not all(card.getFace() for card in self._board[rows]): return False # for columns in range(self._columns): # # If there is still one card facing down then # # game is not over yet # if self._board[rows][columns].getFace() == False: # return False return True def getUserInput(input_type): while True: try: val = int(raw_input("Enter number of {}: ".format(input_type))) if 1 <= val <= 9: return val else: raise ValueError except ValueError: print (" ***Number of {} must be between 1 and 9! Try again.***".format(input_type)) def main(): while True: # Force user to enter valid value for number of rows and columns rows = getUserInput("rows") columns = getUserInput("cols") if rows * columns % 2 == 0: break else: print (" ***The value of rows X columns must be even. Try again.***") game = Game(rows, columns) game.play() if __name__ == "__main__": main()
python mailchimp api 2.0 json response error Question: Hi I'm trying to set up a batch job taking data from the mysql database and send those users to mailchimp to be used for direct email campaigns. I'm having an issue with the my code running on python 2.6 Red Hat linux (Red Hat 4.4.7-4) and python 2.7.3 Debian 4.6.3-14. I know this code works on windows 7 python 2.7.2 anaconda ipython notebook but I get a mailchimp json response error at the end I don't fully understand. import mysql.connector import mailchimp #prepare mysql connection def connection(query): cnx = mysql.connector.connect(user='x', password='x', host='x', port=xxxx, connection_timeout=60, database='database') cursor = cnx.cursor() cursor.execute(query) data=cursor.fetchall () cursor.close() cnx.close() return data #execute query email_data=None query_con = ("SELECT * FROM email") email_data=connection(query_con) #set mailchimp api key apikey='key' #create mailchimp oauth instance m=mailchimp.Mailchimp(apikey) #set list id to target mailchimp list list_id='xxxx' #method to format table data to mailchimp structure def batch_load(data): b_load= [{"email":{"email":x[3]},"merge_vars":{"FNAME":x[1],"LNAME":x[2], "MMERGE3":x[0], "MMERGE4":x[6], "MMERGE5":x[7], "MMERGE6":x[8], "MMERGE7":x[10], "MMERGE8":x[11], "MMERGE9":x[12], "MMERGE10":x[13],"MMERGE11":x[14], "MMERGE12":x[16], "MMERGE13":x[17]}} for x in data] return b_load #use method b_load=batch_load(email_data) #upload users to mailchimp with parameters: list_id, formated_data, no double op in, allow updates, do not update interests. m.lists.batch_subscribe(list_id,b_load,False,True,False) Here's a json upload sample: { 'merge_vars': { 'LNAME': u'Client', 'MMERGE8': datetime.datetime(2014, 6, 20, 7, 33, 43), 'MMERGE9': 0, 'FNAME': u'Dev', 'MMERGE3': 74, 'MMERGE6': 2, 'MMERGE7': 99, 'MMERGE4': u'(111)111-1111', 'MMERGE5': u'90210', 'MMERGE10': datetime.datetime(2015, 3, 9, 14, 4, 50), 'MMERGE11': None, 'MMERGE12': None, 'MMERGE13': 0 }, 'email': { 'email': u'[email protected]' } } Here's the error it seems like there's an issue with the date time format: File "/foo/bar/batch_job.py", line 113, in <module> m.lists.batch_subscribe(list_id,b_load,False,True,False) File "/usr/lib/python2.6/site-packages/mailchimp.py", line 1393, in batch_subscribe return self.master.call('lists/batch-subscribe', _params) File "/usr/lib/python2.6/site-packages/mailchimp.py", line 351, in call params = json.dumps(params) File "/usr/lib64/python2.6/site-packages/simplejson/__init__.py", line 370, in dumps return _default_encoder.encode(obj) File "/usr/lib64/python2.6/site-packages/simplejson/encoder.py", line 269, in encode chunks = self.iterencode(o, _one_shot=True) File "/usr/lib64/python2.6/site-packages/simplejson/encoder.py", line 348, in iterencode return _iterencode(o, 0) File "/usr/lib64/python2.6/site-packages/simplejson/encoder.py", line 246, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: datetime.datetime(2014, 6, 20, 7, 33, 43) is not JSON serializable Any help is appreciated. Thanks Answer: This is actually a Python error, not a MailChimp error. the JSON module doesn't know how to turn a datetime object into a JSON Value. You'll need to convert it into a string yourself. MailChimp should be able to accept the isoformat() string or, if you're working with a Date Merge Field (where time isn't needed) you can call date().isoformat(). 
Here's some code that gets converted into JSON properly:

    json.dumps({
        'merge_vars': {
            'LNAME': u'Client',
            'MMERGE8': datetime.datetime(2014, 6, 20, 7, 33, 43).date().isoformat(),
            'MMERGE9': 0,
            'FNAME': u'Dev',
            'MMERGE3': 74,
            'MMERGE6': 2,
            'MMERGE7': 99,
            'MMERGE4': u'(111)111-1111',
            'MMERGE5': u'90210',
            'MMERGE10': datetime.datetime(2015, 3, 9, 14, 4, 50).date().isoformat(),
            'MMERGE11': None,
            'MMERGE12': None,
            'MMERGE13': 0
        },
        'email': {
            'email': u'[email protected]'
        }
    })

You'll notice if you remove the `.date().isoformat()` from either `MMERGE8` or `MMERGE10` that your error comes back.
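Because the MailChimp wrapper calls `json.dumps` on your payload itself, the cleanest place to convert is while building `b_load`. A small helper sketch (the name `to_json_safe` is mine, not from the library):

    import datetime

    def to_json_safe(value):
        # ISO-format datetimes/dates; pass every other value through unchanged.
        if isinstance(value, (datetime.datetime, datetime.date)):
            return value.isoformat()
        return value

    # inside batch_load, wrap each field that may hold a datetime, e.g.:
    #   "MMERGE8": to_json_safe(x[11]),
    #   "MMERGE10": to_json_safe(x[13]),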
Wrong field type in osgeo for ogr.FieldDefn('field', ogr.OFTInteger) Question: I have a problem with the osgeo package for Python, using these versions:

    python version 2.7
    osgeo version 1.3.39

I want to use osgeo to convert a `MapInfo File` from MongoDB. With

    from osgeo import ogr, osr, gdal

    driver = ogr.GetDriverByName("MapInfo File")
    number_of_rooms = ogr.FieldDefn('number_of_rooms', ogr.OFTInteger)
    feature.SetField("number_of_rooms", num)
    layer.CreateFeature(feature)

the `MapInfo File` is built, but the field `number_of_rooms`'s type is `Integer(12)` whereas I want it to be `Integer`, and I can't figure out the problem. Is there any way to solve this issue? The mif file is the following:

    Version 300
    Charset "Neutral"
    Delimiter ","
    CoordSys Earth Projection 1, 104
    Columns 19
    [...]
    number_of_rooms Integer(12)  // I want: number_of_rooms Integer
    Data
    [...]

Answer: Not sure if I get you right, but if it is the precision or width of the field you want to change, you can use:

    number_of_rooms = ogr.FieldDefn('number_of_rooms', ogr.OFTInteger)
    number_of_rooms.SetPrecision(int_new_precision)

Read more at: <http://gdal.org/python/>
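The `Integer(12)` in the .mif is the field *width* rather than its type, so `SetWidth()` may be closer to what is wanted here. A sketch (my suggestion on top of the answer; whether the MapInfo driver writes a bare `Integer` for a width of 0 depends on the GDAL build, and `layer` is assumed to exist as in the question):

    from osgeo import ogr

    number_of_rooms = ogr.FieldDefn('number_of_rooms', ogr.OFTInteger)
    number_of_rooms.SetWidth(0)       # 0 asks the driver to use its default width
    number_of_rooms.SetPrecision(0)   # integer fields carry no decimal places
    layer.CreateField(number_of_rooms)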
Using python scrapy to extract links from a webpage Question: I am a beginner with python and using scrapy to extract links from the following webpage <http://www.basketball-reference.com/leagues/NBA_2015_games.html>. The code that I have written is

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors import LinkExtractor
    from basketball.items import BasketballItem

    class BasketballSpider(CrawlSpider):
        name = 'basketball'
        allowed_domains = ['basketball-reference.com/']
        start_urls = ['http://www.basketball-reference.com/leagues/NBA_2015_games.html']
        rules = [Rule(LinkExtractor(allow=['http://www.basketball-reference.com/boxscores/^\w+$']), 'parse_item')]

        def parse_item(self, response):
            item = BasketballItem()
            item['url'] = response.url
            return item

I run this code through the command prompt, but the file created does not have any links. Could someone please help? Answer: It cannot find the links; fix your regular expression in the rule:

    rules = [
        Rule(LinkExtractor(allow='boxscores/\w+'))
    ]

Also, you don't have to set the `callback` when it is called `parse_item` - it is a default. And `allow` can be set as a string also.
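One extra detail worth checking (my observation, not part of the original answer): `allowed_domains` expects bare domain names, and the trailing slash in `'basketball-reference.com/'` can make the offsite middleware filter out every link. A minimal corrected spider might look like this:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors import LinkExtractor
    from basketball.items import BasketballItem

    class BasketballSpider(CrawlSpider):
        name = 'basketball'
        allowed_domains = ['basketball-reference.com']  # no trailing slash
        start_urls = ['http://www.basketball-reference.com/leagues/NBA_2015_games.html']
        # match the relative part of box-score URLs instead of anchoring a full URL
        rules = [Rule(LinkExtractor(allow=r'boxscores/\w+'), callback='parse_item')]

        def parse_item(self, response):
            item = BasketballItem()
            item['url'] = response.url
            return item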
How to calculate inverse using cramer's rule in python? Question: I'm trying to generate the inverse matrix using the numpy package in python. Unfortunately, I'm not getting the answers I expected. Original matrix:

    ([17 17  5]
     [21 18 21]
     [ 2  2 19])

Inverting the original matrix by Cramer's rule gives:

    ([ 4  9 15]
     [15 17  6]
     [24  0 17])

Apparently using `numpy.linalg.inv()` gives

    -3.19488818e-01,  3.80191693e-01, -6.38977636e-03,
     3.33333333e-01, -3.33333333e-01,  2.26123699e-18,
    -2.84345048e-01,  2.68370607e-01,  5.43130990e-02

I expected that multiplying the original matrix and the inverse would have given an identity matrix, but as you can see I get a matrix filled with floating points. Where can the issue be? Answer: I think you may have made a mistake when inverting the matrix by hand. When I perform the following

    import numpy as np

    a = np.array([[17, 17, 5], [21, 18, 21], [2, 2, 19]], dtype=np.float)
    inv = np.linalg.inv(a)
    print np.dot(inv, a)

I get

    array([[  1.00000000e+00,   0.00000000e+00,   1.05471187e-15],
           [  1.11022302e-16,   1.00000000e+00,  -7.21644966e-16],
           [  1.38235777e-17,   5.65818009e-18,   1.00000000e+00]])

which is fine; note that all the off-diagonal elements are approximately zero to machine precision, so it looks like numpy is doing an ok job with this one! Remember that floating point numbers don't work like real numbers and you can expect small rounding errors to creep into your calculations unless you are careful.

* * *

If you want to do this exactly, have a look at [sympy](http://www.sympy.org/en/index.html) which will be able to do the calculation with exact maths (at the expense of it being a little slower).

    import sympy as sp

    a = sp.Matrix([[17, 17, 5], [21, 18, 21], [2, 2, 19]])
    inv = a.inv()
    print inv
    print a * inv

yielding the exact inverse

    Matrix([
    [-100/313,  1/3, -89/313],
    [ 119/313, -1/3,  84/313],
    [  -2/313,    0,  17/313]])

which when multiplied by the original matrix gives the exact identity as you would expect

    Matrix([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]])
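As a quick programmatic check that the float result really is the identity up to rounding, `np.allclose` compares element-wise within a small tolerance, which is the right test for floating point output (a minimal sketch):

    import numpy as np

    a = np.array([[17, 17, 5], [21, 18, 21], [2, 2, 19]], dtype=float)
    inv = np.linalg.inv(a)

    # True: the product matches the identity to within floating point tolerance
    print np.allclose(np.dot(a, inv), np.eye(3))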
How can I filter statistics from facebook ads api using the python sdk? Question: I would like to filter by date ranges, just to see the performance on a daily basis, or for the last day, last week, last month...etc. How can I add a date parameter different from start_date or end_date? I guess those parameters are only for the start and end date of the campaign, so they might not give me the result I want. My current code is this one:

    from facebookads.api import FacebookAdsApi
    from facebookads import objects

    my_app_id = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    my_app_secret = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    my_access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token)

    me = objects.AdUser(fbid='10153166606850429')
    my_accounts = list(me.get_ad_accounts())
    my_account = objects.AdAccount('act_XXXXXXXXXXXX')

    params = {
        'start_date': '2015-02-01',
    }

    fields = {
        'impressions',
        'clicks',
        'spent',
    }

    stats = my_account.get_ad_campaign_stats(fields=fields, params=params)

    # print stats
    for stat in stats:
        print stat

Answer: According to the docs [here](https://developers.facebook.com/docs/marketing-api/adstatistics/v2.2#filtering), it looks like you have the field name wrong. The API lists `start_time` and `end_time`, and also mentions you should have both a start and an end.

    params = {
        'start_time': '2015-02-01',
        'end_time': '2015-02-02',
    }

Hope this helps!
Python - Fork Modules Question: My requirement is to do something like below -

    def task_a():
        ...
        ...
        ret a1

    def task_b():
        ...
        ...
        ret b1
    .
    .
    def task_z():
        ...
        ...
        ret z1

Now in my main code I want to Execute Tasks a..z in parallel and then wait for the return values of all of the above..

    a = task_a()
    b = task_b()
    z = task_z()

Is there a way to call the above modules in parallel in Python? Thanks, Manish Answer: > Reference: [Python: How can I run python functions in parallel?](http://stackoverflow.com/questions/7207309/python-how-can-i-run-python-functions-in-parallel) Import:

    from multiprocessing import Process

Add new function:

    def runInParallel(*fns):
        proc = []
        for fn in fns:
            p = Process(target=fn)
            p.start()
            proc.append(p)
        for p in proc:
            p.join()

Input existing functions into the new function:

    runInParallel(task_a, task_b, task_c...task_z)
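The snippet above runs the tasks in parallel but does not collect their return values, which the question also asks for. A minimal sketch using a `multiprocessing.Pool` (my addition, not part of the original answer; the task functions must be defined at module top level so they can be pickled):

    from multiprocessing import Pool

    def runInParallelWithResults(*fns):
        # apply_async schedules each function; get() blocks until it finishes.
        pool = Pool(processes=len(fns))
        results = [pool.apply_async(fn) for fn in fns]
        pool.close()
        pool.join()
        return [r.get() for r in results]

    # a, b, ..., z = runInParallelWithResults(task_a, task_b, ..., task_z)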
sorting a list in python Question: I am trying to sort lists on a list: each list contains `[seq1, seq2, score]`. I want to sort the list `L` according to the score of (seq1, seq2) from the max score to the minimum score, then give a rank to each (seq1, seq2) according to the score.

    L = [['AA', 'CG', 0], ['AA', 'AA', 4], ['CG', '--', -1]]

the sorted list must be:

    L = [['AA', 'AA', 4], ['AA', 'CG', 0], ['CG', '--', -1]]

for the ranks:

    ['AA', 'AA', 4] has rank 1
    ['AA', 'CG', 0] has rank 2
    ['CG', '--', -1] has rank 3

how can I do it? I tried:

    def getKey():  # to get the score from each list
        scorelist = score()  # this is the list L
        return scorelist[][2]

    def sort_list():
        scorelist = score()
        p = sorted(s, key=getKey)
        return p

Answer: You can use [`itemgetter`](https://docs.python.org/3.4/library/operator.html?highlight=itemgetter#operator.itemgetter) as the function that extracts the value to sort the list by. Then to get the rank, you can use [`enumerate`](https://docs.python.org/3.4/library/functions.html#enumerate).

    from operator import itemgetter

    score_list = score()
    score_list.sort(key=itemgetter(2), reverse=True)  # sort list descending by third element
    for i, values in enumerate(score_list):
        print(values, i + 1)  # print values and rank

If several items can share the same score, you can use [groupby](https://docs.python.org/3.4/library/itertools.html?highlight=groupby#itertools.groupby) to assign them the same rank:

    from operator import itemgetter
    from itertools import groupby

    score_list = score()
    score_list.sort(key=itemgetter(2), reverse=True)
    for i, (key, group) in enumerate(groupby(score_list, key=itemgetter(2))):
        for item in group:
            print(item, i + 1)

With this code, if you have 2 items ranked first, the next item will be ranked second. If you want to rank it third (as expected), you can increment the rank in the inner loop (and save the current rank for display):

    from operator import itemgetter
    from itertools import groupby

    score_list = score()
    score_list.sort(key=itemgetter(2), reverse=True)
    rank = 1  # counter for the rank
    for key, group in groupby(score_list, key=itemgetter(2)):
        current_rank = rank  # save the rank
        for item in group:
            print(item, current_rank)
            rank += 1  # rank increment in inner loop
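Applied to the sample list from the question, the first snippet behaves as asked (a worked check, not part of the original answer):

    from operator import itemgetter

    L = [['AA', 'CG', 0], ['AA', 'AA', 4], ['CG', '--', -1]]
    L.sort(key=itemgetter(2), reverse=True)
    for i, values in enumerate(L):
        print(values, i + 1)
    # ['AA', 'AA', 4] 1
    # ['AA', 'CG', 0] 2
    # ['CG', '--', -1] 3
    # (output shown for Python 3's print; Python 2 prints each line as a tuple)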
Prediction in Caffe - Exception: Input blob arguments do not match net inputs Question: I'm using Caffe for classifying non-image data using a quite simple CNN structure. I've had no problems training my network on my HDF5-data with dimensions n x 1 x 156 x 12. However, I'm having difficulties classifying new data. How do I do a simple forward pass without any preprocessing? My data has been normalized and have correct dimensions for Caffe (it's already been used to train the net). Below is my code and the CNN structure. **EDIT:** I've isolated the problem to the function '_Net_forward' in pycaffe.py and found that the issue arises as the self.input dict is empty. Can anyone explain why that is? The set is supposed to be equal to the set coming from the new test data: if set(kwargs.keys()) != set(self.inputs): raise Exception('Input blob arguments do not match net inputs.') My code has changed a bit as I now use the IO methods for converting the data into datum (see below). In that way I've filled the kwargs variable with the correct data. Even small hints would be greatly appreciated! import numpy as np import matplotlib import matplotlib.pyplot as plt # Make sure that caffe is on the python path: caffe_root = '' # this file is expected to be run from {caffe_root} import sys sys.path.insert(0, caffe_root + 'python') import caffe import os import subprocess import h5py import shutil import tempfile import sklearn import sklearn.datasets import sklearn.linear_model import skimage.io def LoadFromHDF5(dataset='test_reduced.h5', path='Bjarke/hdf5_classification/data/'): f = h5py.File(path + dataset, 'r') dat = f['data'][:] f.close() return dat; def runModelPython(): model_file = 'Bjarke/hdf5_classification/conv_v2_simple.prototxt' pretrained = 'Bjarke/hdf5_classification/data/train_iter_10000.caffemodel' test_data = LoadFromHDF5() net = caffe.Net(model_file, pretrained) caffe.set_mode_cpu() caffe.set_phase_test() user = test_data[0,:,:,:] datum = caffe.io.array_to_datum(user.astype(np.uint8)) user_dat = caffe.io.datum_to_array(datum) user_dat = user_dat.astype(np.uint8) out = net.forward_all(data=np.asarray([user_dat])) if __name__ == '__main__': runModelPython() **CNN Prototext** name: "CDR-CNN" layers { name: "data" type: HDF5_DATA top: "data" top: "label" hdf5_data_param { source: "Bjarke/hdf5_classification/data/train.txt" batch_size: 10 } include: { phase: TRAIN } } layers { name: "data" type: HDF5_DATA top: "data" top: "label" hdf5_data_param { source: "Bjarke/hdf5_classification/data/test.txt" batch_size: 10 } include: { phase: TEST } } layers { name: "feature_conv" type: CONVOLUTION bottom: "data" top: "feature_conv" blobs_lr: 1 blobs_lr: 2 convolution_param { num_output: 10 kernel_w: 12 kernel_h: 1 stride_w: 1 stride_h: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" } } } layers { name: "conv1" type: CONVOLUTION bottom: "feature_conv" top: "conv1" blobs_lr: 1 blobs_lr: 2 convolution_param { num_output: 14 kernel_w: 1 kernel_h: 4 stride_w: 1 stride_h: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" } } } layers { name: "pool1" type: POOLING bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_w: 1 kernel_h: 3 stride_w: 1 stride_h: 3 } } layers { name: "conv2" type: CONVOLUTION bottom: "pool1" top: "conv2" blobs_lr: 1 blobs_lr: 2 convolution_param { num_output: 120 kernel_w: 1 kernel_h: 5 stride_w: 1 stride_h: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" } } } layers { name: "fc1" 
type: INNER_PRODUCT bottom: "conv2" top: "fc1" blobs_lr: 1 blobs_lr: 2 weight_decay: 1 weight_decay: 0 inner_product_param { num_output: 84 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layers { name: "accuracy" type: ACCURACY bottom: "fc1" bottom: "label" top: "accuracy" include: { phase: TEST } } layers { name: "loss" type: SOFTMAX_LOSS bottom: "fc1" bottom: "label" top: "loss" } Answer: Here is [the answer from Evan Shelhamer I got on the Caffe Google Groups](https://groups.google.com/forum/#!topic/caffe-users/aojN_bmbg74): > `self._inputs` is indeed for the manual or "deploy" inputs as defined by the > input fields in a prototxt. To run a net with data layers in through > pycaffe, just call `net.forward()` without arguments. No need to change the > definition of your train or test nets. > > See for instance code cell [10] of the [Python LeNet > example](http://nbviewer.ipython.org/github/BVLC/caffe/blob/tutorial/examples/01-learning- > lenet.ipynb). In fact I think it's clearer in the [Instant Recognition with Caffe tutorial](https://github.com/BVLC/caffe/blob/master/examples/00-classification.ipynb), cell 6: # Feed in the image (with some preprocessing) and classify with a forward pass. net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(caffe_root + 'examples/images/cat.jpg')) out = net.forward() print("Predicted class is #{}.".format(out['prob'].argmax())) In other words, to generate the predicted outputs as well as their probabilities using pycaffe, once you have trained your model, you have to first feed the data layer with your input, then perform a forward pass with `net.forward()`. * * * Alternatively, as pointed out in other answers, you can use a deploy prototxt that is similar to the one you use to define the trained network but removing the input and output layers, and add the following at the beginning (obviously adapting according to your input dimension): name: "your_net" input: "data" input_dim: 1 input_dim: 1 input_dim: 1 input_dim: 250 That's what they use in the [CIFAR10 tutorial](https://github.com/BVLC/caffe/blob/master/examples/cifar10/cifar10_quick.prototxt). (pycaffe really ought to be better documented…)
Use python to connect to sqlplus in a remote host and execute sql commands Question: Here is my situation: We have sqlplus set up in a remote machine and I want to connect to that remote machine and then run sqlplus to execute sql queries. I am trying to write a python script to do that. Here is my code:

    import sys
    import getpass
    import paramiko
    import time

    user = raw_input('Enter User Name :')
    #host_name = raw_input('Enter Host Name:')
    psswd = getpass.getpass()
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('xxx.hostname.xxx', port=22, username=user, password=psswd)
    command = 'export ORACLE_HOME=/opt/app/oracle/product/10.2.0.2/client export LD_LIBRARY_PATH=$ORACLE_HOME/lib \
    sudo -S -H /apollo/env/envImprovement/bin/sqlplus'
    print 'running remote command'
    print(command)
    stdin, stdout, stderr = ssh.exec_command(command)
    stdin.write(psswd + '\n')
    stdin.flush()
    for out in stdout.readlines():
        print out
    ssh.close()

I have two issues here. First: if I pass a command like this

    'export ORACLE_HOME=/opt/app/oracle/product/10.2.0.2/client export LD_LIBRARY_PATH=$ORACLE_HOME/lib \
    sudo -S -H /apollo/env/envImprovement/bin/sqlplus' + ' echo $ORACLE_HOME'

I get an empty response even though I have added echo, which means that the variable is not set correctly. Secondly, I can't figure out what to do next. How do I provide `username/password` to sqlplus to allow executing sql queries, and then how do I supply sql statements? Answer: Why don't you split your command into a function, and use subprocess.Popen() to execute it in a subprocess?

    from subprocess import *

    def run_sql_query(sql_command, connection_string):
        session = Popen(['sqlplus', '-S', connection_string], stdin=PIPE, stdout=PIPE, stderr=PIPE)
        session.stdin.write(sql_command)
        return session.communicate()

Then you can pass your connection string and command as arguments to your function:

    con_str = 'user/passwd@xxx.hostname.xxx'
    cmd = 'select 1 from dual;'
    print(run_sql_query(cmd, con_str))
Combine two large dictionary by key - Fastest approach Question: I have two large dictionaries. This is an example to demonstrate, but you can imagine each dictionary having close to 100k records.

    d1 = {'0001': [('skiing',0.789),('snow',0.65),('winter',0.56)],'0002': [('drama', 0.89),('comedy', 0.678),('action',-0.42), ('winter',-0.12),('kids',0.12)]}
    d2 = {'0001': [('action', 0.89),('funny', 0.58),('sports',0.12)],'0002': [('dark', 0.89),('Mystery', 0.678),('crime',0.12), ('adult',-0.423)]}

As the final goal I want to have a dictionary that has combined values by key from each dictionary:

    {'0001': [('skiing', 0.789), ('snow', 0.65), ('winter', 0.56), [('action', 0.89), ('funny', 0.58), ('sports', 0.12)]],
     '0002': [('drama', 0.89), ('comedy', 0.678), ('action', -0.42), ('winter', -0.12), ('kids', 0.12), [('dark', 0.89), ('Mystery', 0.678), ('crime', 0.12), ('adult', -0.423)]]}

The way I would achieve this is:

    for key, value in d1.iteritems():
        if key in d2:
            d1[key].append(d2[key])

But after reading in many places I found out that `iteritems()` is really slow and doesn't actually use C data structures to do it but uses Python functions. How can I do this combine/merge process fast and efficiently? Answer: I think you need to merge the `dicts`:

    from collections import Counter

    res = Counter(d1) + Counter(d2)
    >>> res
    Counter({'0001': [('skiing', 0.789), ('snow', 0.65), ('winter', 0.56), ...

**For example**

    from collections import Counter

    d1 = {"a": [1, 2], "b": []}
    d2 = {"a": [1, 3], "b": [5, 6]}
    res = Counter(d1) + Counter(d2)
    >>> res
    Counter({'b': [5, 6], 'a': [1, 2, 1, 3]})

Even this approach supports an unequal number of keys in the `dicts`, like

    d1 = {"a": [1, 2], "b": []}
    d2 = {"a": [1, 3], "b": [5, 6], "c": ["ff"]}
    >>> res
    Counter({'c': ['ff'], 'b': [5, 6], 'a': [1, 2, 1, 3]})
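Note that the `Counter` trick leans on list concatenation and comparison behaviour that only holds in Python 2, and it only works when every key of `d1` also appears in `d2`. A plain-dict sketch (my addition, not from the original answer) that avoids those edge cases:

    def merge_flat(d1, d2):
        # Start with a shallow copy of d1's lists, then extend with d2's values;
        # keys that exist only in d2 are copied over as-is.
        res = {k: list(v) for k, v in d1.iteritems()}
        for k, v in d2.iteritems():
            if k in res:
                res[k].extend(v)
            else:
                res[k] = list(v)
        return res

This produces one flat list per key; use `res[k].append(v)` instead of `extend` if you want the nested-list shape shown in the question.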
completely connected subgraphs from a larger graph in networkx Question: I have tried not to repost here, but I think my request is very simple and I am just inexperienced with network graphs. When using the networkx module in python, I would like to recover, from a connected graph, the subgraphs where all nodes are connected to each other (where the number of nodes is greater than 2). Is there a simple way to do this? Here is my example: A simple graph with seven nodes. Nodes 1,2,3 are shared connections, nodes 1,2,4 all share connections, and nodes 5,6,7 all share connections.

    import networkx as nx

    G = nx.Graph()  # Make the graph
    G.add_nodes_from([1,2,3,4,5,6,7])  # Add nodes, although redundant because of the line below
    G.add_edges_from([(1,2),(1,3),(2,3),(1,4),(2,4),(1,5),(5,6),(5,7),(6,7)])  # Adding the edges

My desired output would be:

    ([1,2,3], [1,2,4], [5,6,7])

I can think of slightly laborious methods for writing this but was wondering if there was a simple inbuilt function for it. Answer: It sounds like you want to discover the cliques in your graph. For this you could use [`nx.clique.find_cliques()`](http://networkx.lanl.gov/reference/generated/networkx.algorithms.clique.find_cliques.html#networkx.algorithms.clique.find_cliques):

    >>> list(nx.clique.find_cliques(G))
    [[1, 2, 3], [1, 2, 4], [1, 5], [6, 5, 7]]

`nx.clique.find_cliques()` returns a generator which will yield all cliques in the graph. You can filter out the cliques with fewer than three nodes using a list comprehension:

    >>> [g for g in nx.clique.find_cliques(G) if len(g) > 2]
    [[1, 2, 3], [1, 2, 4], [6, 5, 7]]
asigning ids in kivy on the python side Question: im using kivy. the what im trying to do is have and 'idea',a slider and a label containing the slider's current value in a row in a grid layout now getting the layout is fine but getting the label to have a text value the same as the slider's current value is tricky. I'm trying to use string concation to refer to the label with the same number suffix as the slider that it is paired with. I think the problem im having is that im trying to assign ids on the python side when they normally have to be done on the kv side. It's either that or the fact the ids i'm assigning are strings when kv would normally expect plain text. any help would be appreciated class ScatterTextWidget(FloatLayout): def run_me(self): r=1 main_list=self.ids.main_list main_list.clear_widgets() main_list.height=0 for idea in imported_ideas: main_list.add_widget(Label(text=idea,color=(0,0,0,1),id='idea_label_'+str(r))) main_list.add_widget(Slider(id='Slider_'+str(r),min=0,max=10,value=5, step=1,on_value_pos=self.slider_slid(self))) main_list.add_widget(Label(color=(0,0,0,1),id='value_label_'+str(r))) value_label=self.ids['value_label_'+str(r)] # get this working and then apply the method into slider slid value_label.text='xxx' main_list.height+=35 r +=1 button_1=self.ids.button_1 button_1.text='Begin' button_1.bind(on_press=self.begin) def slider_slid(self,sender): s=str(sender.id) value_label=self.ids['value_label_'+str(s[12:])] value_label.text=str(sender.value) value_label=self.ids['value_label_'+str(s[12:])] KeyError: 'value_label_' Answer: `self.ids` only collects ids from children in the kv language rule of the widget. It doesn't know about widgets you added via python. You don't need to use the id though. In this case you could keep e.g. a dictionary of id -> widget keys. self.keys_dict = {} for idea in imported_ideas: new_widget = Label(color=(0,0,0,1),id='value_label_'+str(r))) main_list.add_widget(new_widget) self.keys_dict['value_label_' + str(r)] = new_widget Then later you can access it with `self.keys_dict['value_label_' + str(s[12:])]` or whatever you like. I suppose in practice you could also modify the actual ids dictionary in the same way, though I subjectively feel it is preferable to maintain your own dictionary with a name that represents its more specific contents.
Why does Apache PySpark top() fail when the RDD contains a user defined class? Question: I'm prototyping some code using Apache Spark's PySpark on my local machine, via iPython Notebook. I've written some code that seems to work fine, but when I make a simple change to it, it breaks. The first code block below works. The second block fails with the given error. Really appreciate any help. I suspect the error is something to do with serializing Python objects. The error says it cant Pickle TestClass. I cant find information on how to make my class pickle-able. The documentation says "Generally you can pickle any object if you can pickle every attribute of that object. Classes, functions, and methods cannot be pickled -- if you pickle an object, the object's class is not pickled, just a string that identifies what class it belongs to. This works fine for most pickles (but note the discussion about long-term storage of pickles).". I don't understand this, as I've tried replacing my TestClass with a datetime class and things seem to work just fine. Anyway, the code: # ----------- This code works ----------------------------- class TestClass(object): def __init__(self): self.teststr = 'Hello' def __str__(self): return self.teststr def __repr__(self): return self.teststr def test(self): return 'test: {0}'.format(self.teststr) #load multiple text files into list of RDDs, concatenate them, then remove headers trip_rdd = trip_rdds[0] for rdd in trip_rdds[1:]: trip_rdd = trip_rdd.union(rdd) #filter out header rows from result trip_rdd = trip_rdd.filter(lambda r: r != header) #split the line, then convert each element to a dictionary trip_rdd = trip_rdd.map(lambda r: r.split(',')) trip_rdd = trip_rdd.map(lambda r, k = header_keys: dict(zip(k, r))) trip_rdd = trip_rdd.map(convert_trip_dict) #trip_rdd = trip_rdd.map(lambda d, ps = g_nyproj_str: Trip(d, ps)) #originally I map the given dictionaries to a 'Trip' class I defined with various bells and whistles. #I've simplified to using TestClass above and still seem to get the same error trip_rdd = trip_rdd.map(lambda t: TestClass()) trip_rdd = trip_rdd.map(lambda t: t.test()) #(1) Watch this row print trip_rdd.count() temp = trip_rdd.top(3) print temp print '...done' The above code returns the following: _347098_ _['test: Hello', 'test: Hello', 'test: Hello']_ _...done_ But when I delete the row marked "(1) watch this row" - the last map line - and re-run I get the following error instead. Its long, so I'm going to wrap up my question here, before posting the output. Again, I'd really appreciate help with this. Thanks in advance! 
# ----------- This code FAILS ----------------------------- class TestClass(object): def __init__(self): self.teststr = 'Hello' def __str__(self): return self.teststr def __repr__(self): return self.teststr def test(self): return 'test: {0}'.format(self.teststr) #load multiple text files into list of RDDs, concatenate them, then remove headers trip_rdds = [sc.textFile(f) for f in trip_files] trip_rdd = trip_rdds[0] for rdd in trip_rdds[1:]: trip_rdd = trip_rdd.union(rdd) #filter out header rows from result trip_rdd = trip_rdd.filter(lambda r: r != header) #split the line, then convert each element to a dictionary trip_rdd = trip_rdd.map(lambda r: r.split(',')) trip_rdd = trip_rdd.map(lambda r, k = header_keys: dict(zip(k, r))) trip_rdd = trip_rdd.map(convert_trip_dict) #trip_rdd = trip_rdd.map(lambda d, ps = g_nyproj_str: Trip(d, ps)) #originally I map the given dictionaries to a 'Trip' class I defined with various bells and whistles. #I've simplified to using TestClass above and still seem to get the same error trip_rdd = trip_rdd.map(lambda t: TestClass()) trip_rdd = trip_rdd.map(lambda t: t.test()) #(1) Watch this row print trip_rdd.count() temp = trip_rdd.top(3) print temp print '...done' Output: _347098_ *--------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call last) <ipython-input-76-6550318a5d5b> in <module>() 29 #count them 30 print trip_rdd.count() ---> 31 temp = trip_rdd.top(3) 32 print temp 33 print '...done' C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\rdd.pyc in top(self, num, key) 1043 return heapq.nlargest(num, a + b, key=key) 1044 -> 1045 return self.mapPartitions(topIterator).reduce(merge) 1046 1047 def takeOrdered(self, num, key=None): C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\rdd.pyc in reduce(self, f) 713 yield reduce(f, iterator, initial) 714 --> 715 vals = self.mapPartitions(func).collect() 716 if vals: 717 return reduce(f, vals) C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\rdd.pyc in collect(self) 674 """ 675 with SCCallSiteSync(self.context) as css: --> 676 bytesInJava = self._jrdd.collect().iterator() 677 return list(self._collect_iterator_through_file(bytesInJava)) 678 C:\Programs\Coding\Languages\Python\Anaconda_32bit\Conda\lib\site-packages\py4j-0.8.2.1-py2.7.egg\py4j\java_gateway.pyc in __call__(self, *args) 536 answer = self.gateway_client.send_command(command) 537 return_value = get_return_value(answer, self.gateway_client, --> 538 self.target_id, self.name) 539 540 for temp_arg in temp_args: C:\Programs\Coding\Languages\Python\Anaconda_32bit\Conda\lib\site-packages\py4j-0.8.2.1-py2.7.egg\py4j\protocol.pyc in get_return_value(answer, gateway_client, target_id, name) 298 raise Py4JJavaError( 299 'An error occurred while calling {0}{1}{2}.\n'. --> 300 format(target_id, '.', name), value) 301 else: 302 raise Py4JError( Py4JJavaError: An error occurred while calling o463.collect. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 49.0 failed 1 times, most recent failure: Lost task 1.0 in stage 49.0 (TID 99, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\worker.py", line 107, in main process() File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\worker.py", line 98, in process serializer.dump_stream(func(split_index, iterator), outfile) File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\serializers.py", line 231, in dump_stream bytes = self.serializer.dumps(vs) File "C:\Programs\Apache\Spark\spark-1.2.0-bin-hadoop2.4\python\pyspark\serializers.py", line 393, in dumps return cPickle.dumps(obj, 2) PicklingError: Can't pickle <class '__main__.TestClass'>: attribute lookup __main__.TestClass failed at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:137) at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:174) at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263) at org.apache.spark.rdd.RDD.iterator(RDD.scala:230) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61) at org.apache.spark.scheduler.Task.run(Task.scala:56) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696) at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420) at akka.actor.Actor$class.aroundReceive(Actor.scala:465) at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375) at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516) at akka.actor.ActorCell.invoke(ActorCell.scala:487) at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238) at akka.dispatch.Mailbox.run(Mailbox.scala:220) at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393) at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)* Answer: Turns out you have to define your class in its own module, 
not in the main body of the code. If you do that and then import the module, pickle is able to pickle and unpickle the object successfully. The class then works with Spark as you'd expect.
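A minimal sketch of what that looks like in practice (file name and use of `sc.addPyFile` are my assumptions; `addPyFile` ships the module to the executors so they can unpickle instances too):

    # testclass.py -- the class lives in its own module, not in __main__
    class TestClass(object):
        def __init__(self):
            self.teststr = 'Hello'

        def test(self):
            return 'test: {0}'.format(self.teststr)

    # driver script / notebook cell
    from testclass import TestClass

    sc.addPyFile('testclass.py')  # make the module importable on the workers
    trip_rdd = trip_rdd.map(lambda t: TestClass())
    trip_rdd = trip_rdd.map(lambda t: t.test())
    print trip_rdd.top(3)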
Django ReverseSingleRelatedObjectDescriptor.__set__ ValueError Question: I am creating a custom data migration to automatically create GenericRelation entries in the database, based on existing entries across two different models. **Example models.py:** ... class Place content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') class Restaurant name = models.CharField(max_length=60) location = models.CharField(max_length=60) class House location = models.CharField(max_length=60) **Example 0011_place_data.py:** # -*- coding: utf-8 -*- from django.contrib.contenttypes.models import ContentType from django.db import models, migrations def forwards_func(apps, schema_editor): Restaurant = apps.get_model("simpleapp", "Restaurant") House = apps.get_model("simpleapp", "House") Place = apps.get_model("simpleapp", "Place") db_alias = schema_editor.connection.alias content_type = ContentType.objects.using(db_alias).get( app_label="simpleapp", model="restaurant" ) for restaurant in Restaurant.objects.using(db_alias).all(): Place.objects.using(db_alias).create( content_type=content_type, object_id=restaurant.id) content_type = ContentType.objects.using(db_alias).get( app_label="simpleapp", model="house" ) for house in House.objects.using(db_alias).all(): Place.objects.using(db_alias).create( content_type=content_type, object_id=house.id) class Migration(migrations.Migration): dependencies = [ ('simpleapp', '0010_place') ] operations = [ migrations.RunPython( forwards_func, ), ] When I run this (Django 1.7.4) I get Operations to perform: Apply all migrations: simpleapp, admin, sessions, auth, contenttypes Synchronizing apps without migrations: Creating tables... Installing custom SQL... Installing indexes... 
Running migrations: Applying projects.0011_place_data...passing Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File ".../lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line utility.execute() File ".../lib/python3.4/site-packages/django/core/management/__init__.py", line 377, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File ".../lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv self.execute(*args, **options.__dict__) File ".../lib/python3.4/site-packages/django/core/management/base.py", line 338, in execute output = self.handle(*args, **options) File ".../lib/python3.4/site-packages/django/core/management/commands/migrate.py", line 161, in handle executor.migrate(targets, plan, fake=options.get("fake", False)) File ".../lib/python3.4/site-packages/django/db/migrations/executor.py", line 68, in migrate self.apply_migration(migration, fake=fake) File ".../lib/python3.4/site-packages/django/db/migrations/executor.py", line 102, in apply_migration migration.apply(project_state, schema_editor) File ".../lib/python3.4/site-packages/django/db/migrations/migration.py", line 108, in apply operation.database_forwards(self.app_label, schema_editor, project_state, new_state) File ".../lib/python3.4/site-packages/django/db/migrations/operations/special.py", line 117, in database_forwards self.code(from_state.render(), schema_editor) File ".../simpleapp/migrations/0011_place_data.py", line 19, in forwards_func object_id=restaurant.id) File ".../lib/python3.4/site-packages/django/db/models/query.py", line 370, in create obj = self.model(**kwargs) File ".../lib/python3.4/site-packages/django/db/models/base.py", line 440, in __init__ setattr(self, field.name, rel_obj) File ".../lib/python3.4/site-packages/django/db/models/fields/related.py", line 598, in __set__ self.field.rel.to._meta.object_name, ValueError: Cannot assign "<ContentType: restaurant>": "Place.content_type" must be a "ContentType" instance. If I comment out the stanza raising the value error in the Django module (django.db.models.fields.related.ReverseSingleRelatedObjectDescriptor.**set**) it works as expected: ... elif value is not None and not isinstance(value, self.field.rel.to): print('skipping') #raise ValueError( # 'Cannot assign "%r": "%s.%s" must be a "%s" instance.' % ( # value, # instance._meta.object_name, # self.field.name, # self.field.rel.to._meta.object_name, # ) #) ... Should this exception be raised in the first place? Is this a bug in Django or am I doing it wrong? Answer: **Edit** : The solution below does not actually work; it just ends up not running the forwards_func so there are no errors. New solutions welcome: * * * I was able to resolve this using [Django's post_migrate signal](https://docs.djangoproject.com/en/1.7/ref/signals/#post-migrate). This also fixes some other issues that arrive from these types of migrations (data migrations which reference the ContentType table). My understanding is that essentially, the issue is the ContentType table doesn't get created until the very end of a migration for performance reasons. This means I'm not actually retrieving the same type ContentType object that the relation module is checking for. 
The solution is to run this type of data migration as a callback: # -*- coding: utf-8 -*- from django.db.models.signals import post_migrate from django.contrib.contenttypes.models import ContentType from django.db import models, migrations def forwards_func(apps, schema_editor): Restaurant = apps.get_model("simpleapp", "Restaurant") House = apps.get_model("simpleapp", "House") Place = apps.get_model("simpleapp", "Place") db_alias = schema_editor.connection.alias def add_stuffs(*args, **kwargs) content_type = ContentType.objects.using(db_alias).get( app_label="simpleapp", model="restaurant" ) for restaurant in Restaurant.objects.using(db_alias).all(): Place.objects.using(db_alias).create( content_type=content_type, object_id=restaurant.id) content_type = ContentType.objects.using(db_alias).get( app_label="simpleapp", model="house" ) for house in House.objects.using(db_alias).all(): Place.objects.using(db_alias).create( content_type=content_type, object_id=house.id) post_migrate.connect(add_stuffs) class Migration(migrations.Migration): dependencies = [ ('simpleapp', '0010_place') ] operations = [ migrations.RunPython( forwards_func, ), ]
Python Socket Programming - Messages Getting Truncated Question: I have a GPS modem (Sixnet BT-5800) that attempts to broadcast GPS NMEA messages over ethernet to my Linux client on a timed interval. On the client I have a python script running. I was hoping if someone could identify if I'm doing something wrong here. ''' The main program waits on the TCP socket to receive data, parses the data into GPS NMEA sentences and writes them to the MySQL database ''' # python library import configparser import select import socket import sys from time import sleep # custom library #import gpsnmeapacketparser # open the configuration files config = configparser.ConfigParser() files = ['.config.host', '.config', '.config.mysql'] dataset = config.read(files) if (len(files) != len(dataset)): print("Error: Failed to open/find configuration files. Has this package been installed?") exit() def main(): host_address = config['HOST']['IPAddress'] host_gps_port = config['HOST']['GPSPort'] # create a tcp/ip socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) server_address = (host_address, int(host_gps_port, 10)) print("Binding socket:", server_address) sock.bind(server_address) # listen for incoming connections sock.listen(5) while True: block = "" # wait for a connection print('waiting for a connection') connection, client_address = sock.accept() print('connection from', client_address) data = bytearray([]) buf = bytearray([]) while True: buf = connection.recv(10) if buf != b'': data += buf else: break if data != b'': block = data.decode("utf-8") print(block) print() else: connection.close() if __name__ == "__main__": sys.exit(main()) The modem seems to max out at 512 byte messages. Running the python script I will see output like the following: waiting for a connection connection from ('192.168.0.1', 4433) waiting for a connection connection from ('192.168.0.1', 4434) $GPRMC,210458.00,A,4437.35460,N,07545.93616,W,000.0,000.0,180315,13.4,W,A*0F $GPGGA,210458.00,4437.35460,N,07545.93616,W,1,08,0.91,00121,M,-034,M,,*54 $GPGLL,4437.35460,N,07545.93616,W,210458.00,A,A*79 $GPVTG,000.0,T,013.4,M,000.0,N,000.0,K,A*25 $GPGSV,3,1,09,31,32,089,36,03,19,236,29,16,77,229,36,23,57,292,35*75 $GPGSV,3,2,09,10,07,326,23,29,08,032,17,08,58,067,41,09,29,312,36*73 $GPGSV,3,3,09,27,26,164,37,,,,,,,,,,,,*46 $GPGSA,A,3,31,03,16,23,10,29,09,27,,,,,1.61,0.91,1.33*0C $GPZDA,210458.00,18,0 waiting for a connection connection from ('192.168.0.1', 4435) waiting for a connection connection from ('192.168.0.1', 4436) $GPRMC,210528.00,A,4437.35458,N,07545.93617,W,000.0,000.0,180315,13.4,W,A*03 $GPGGA,210528.00,4437.35458,N,07545.93617,W,1,07,1.05,00121,M,-034,M,,*5B $GPGLL,4437.35458,N,07545.93617,W,210528.00,A,A*75 $GPVTG,000.0,T,013.4,M,000.0,N,000.0,K,A*25 $GPGSV,3,1,09,31,32,089,36,03,19,236,30,16,77,229,36,23,57,292,35*7D $GPGSV,3,2,09,10,07,326,22,29,08,032,06,08,58,067,42,09,29,312,35*72 $GPGSV,3,3,09,27,26,164,35,,,,,,,,,,,,*44 $GPGSA,A,3,31,03,16,23,10,09,27,,,,,,1.77,1.05,1.42*0A $GPZDA,210528.00,18,03, waiting for a connection connection from ('192.168.0.1', 4437) During execution the `connection.recv(10)` runs to gather data until it returns the empty array, and then it runs once more and times out. 
(**This is a secondary problem, how can I make sure I've received all data without having to wait on this timeout?**) Here is the output of a tcpdump 17:09:27.907495 IP 192.168.0.1.4621 > 192.168.0.5.8763: Flags [F.], seq 1, ack 1, win 2920, options [nop,nop,TS val 5166242 ecr 6220255], length 0 17:09:27.907667 IP 192.168.0.5.8763 > 192.168.0.1.4621: Flags [F.], seq 1, ack 2, win 202, options [nop,nop,TS val 6224000 ecr 5166242], length 0 17:09:27.908091 IP 192.168.0.1.4621 > 192.168.0.5.8763: Flags [.], ack 2, win 2920, options [nop,nop,TS val 5166242 ecr 6224000], length 0 17:09:27.910329 IP 192.168.0.1.4622 > 192.168.0.5.8763: Flags [S], seq 2455146170, win 5840, options [mss 1460,sackOK,TS val 5166244 ecr 0,nop,wscale 1], length 0 17:09:27.910390 IP 192.168.0.5.8763 > 192.168.0.1.4622: Flags [S.], seq 3558179681, ack 2455146171, win 25760, options [mss 1300,sackOK,TS val 6224000 ecr 5166244,nop,wscale 7], length 0 17:09:27.910796 IP 192.168.0.1.4622 > 192.168.0.5.8763: Flags [.], ack 1, win 2920, options [nop,nop,TS val 5166245 ecr 6224000], length 0 17:09:27.914219 IP 192.168.0.1.4622 > 192.168.0.5.8763: Flags [P.], seq 1:513, ack 1, win 2920, options [nop,nop,TS val 5166248 ecr 6224000], length 512 17:09:27.914309 IP 192.168.0.5.8763 > 192.168.0.1.4622: Flags [.], ack 513, win 210, options [nop,nop,TS val 6224001 ecr 5166248], length 0 17:09:42.895197 IP 192.168.0.1.4622 > 192.168.0.5.8763: Flags [F.], seq 513, ack 1, win 2920, options [nop,nop,TS val 5181229 ecr 6224001], length 0 17:09:42.897588 IP 192.168.0.1.4623 > 192.168.0.5.8763: Flags [S], seq 2470830214, win 5840, options [mss 1460,sackOK,TS val 5181231 ecr 0,nop,wscale 1], length 0 17:09:42.897643 IP 192.168.0.5.8763 > 192.168.0.1.4623: Flags [S.], seq 2665688556, ack 2470830215, win 25760, options [mss 1300,sackOK,TS val 6227747 ecr 5181231,nop,wscale 7], length 0 17:09:42.898114 IP 192.168.0.1.4623 > 192.168.0.5.8763: Flags [.], ack 1, win 2920, options [nop,nop,TS val 5181232 ecr 6227747], length 0 17:09:42.898383 IP 192.168.0.5.8763 > 192.168.0.1.4622: Flags [F.], seq 1, ack 514, win 210, options [nop,nop,TS val 6227747 ecr 5181229], length 0 17:09:42.898773 IP 192.168.0.1.4622 > 192.168.0.5.8763: Flags [.], ack 2, win 2920, options [nop,nop,TS val 5181232 ecr 6227747], length 0 It looks as though the modem never gets properly signalled to send the remaining characters and those enormous sequence numbers could be error code? **I can't figure out if the error is in my code, or if the modem is using some non-standard TCP?** Answer: TCP is a streaming protocol, you are not receiving the message packet based. Increase your buffersize drastically and be ready to receive messages that might be broken and spread over several receive actions. So yes this is a TCP issue, but it is normal. Because as I said TCP is a streaming protocol. If a client sender does this without any pause : * send 10 bytes * send 10 bytes the receiver is very likely to get this result : * receive 20 bytes now if the receiver's buffer is too small, chance is very high that he will loose part of the information. Not what you want of course. The way the receiver is going to receive the data is not predictable, you should not rely on it. In theory you should be prepared to even receive the message byte per byte. But because TCP is implemented to make use of the MTU in the most intelligent way, it is not very likely that you will receive byte per byte. But the thing is you just don't know which message is going to be spread over different receive calls. 
Bottom line: make your receive buffer a few kilobytes in size.
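For the asker's secondary question: NMEA sentences are newline-terminated, so instead of waiting for `recv` to time out you can treat line endings as message boundaries. A minimal sketch (my addition, with an assumed 4 KB buffer):

    def recv_nmea_lines(connection):
        # Accumulate bytes and yield complete newline-terminated sentences;
        # whatever remains in the buffer is a partial sentence still in flight.
        buf = b''
        while True:
            chunk = connection.recv(4096)
            if not chunk:  # peer closed the connection
                break
            buf += chunk
            while b'\n' in buf:
                line, buf = buf.split(b'\n', 1)
                yield line.decode('utf-8').rstrip('\r')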
OpenCV in Python - Manipulating pixels Question: I am using python 2.7 and OpenCV to set an image to all white pixels, but it is not working. Here is my code:

    import cv2
    import numpy as np

    image = cv2.imread("strawberry.jpg")  # Load image

    imageWidth = image.shape[1]  # Get image width
    imageHeight = image.shape[0]  # Get image height

    xPos = 0
    yPos = 0

    while xPos < imageWidth:  # Loop through rows
        while yPos < imageHeight:  # Loop through columns
            image.itemset((xPos, yPos, 0), 255)  # Set B to 255
            image.itemset((xPos, yPos, 1), 255)  # Set G to 255
            image.itemset((xPos, yPos, 2), 255)  # Set R to 255
            yPos = yPos + 1  # Increment Y position by 1
        xPos = xPos + 1  # Increment X position by 1

    cv2.imwrite("result.bmp", image)  # Write image to file
    print "Done"

I use numpy to set the pixels of the image - but the result.bmp is an exact copy of the original image. What am I doing wrong? EDIT: I know it is a bad idea to iterate over pixels, but what is the non-functioning part of my code? Answer: Rule one with opencv/python: **never** iterate over pixels if you can avoid it! If you wanted to set all of the pixels to (1,2,3), it's as easy as:

    image[::] = (1, 2, 3)

For 'all white':

    image[::] = (255, 255, 255)
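To answer the "what is the non-functioning part" edit directly (my observation, not in the original answer): `yPos` is never reset to 0 inside the outer loop, so the inner loop only runs for the first column; the indices are also swapped, since numpy indexes `(row, col)`. A for-loop version avoids both problems, if you really must iterate:

    for xPos in range(imageWidth):
        for yPos in range(imageHeight):  # restarts at 0 for every column
            image.itemset((yPos, xPos, 0), 255)  # (row, col, channel) ordering
            image.itemset((yPos, xPos, 1), 255)
            image.itemset((yPos, xPos, 2), 255)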
Basic python-while loop prints out extra space in random card generator Question: #!usr/bin/python import random seg1='''_________''' seg2='''| |''' seg3a="| Ace |" seg32="| 2 |" seg33="| 3 |" seg34="| 4 |" seg35="| 5 |" seg36="| 6 |" seg37="| 7 |" seg38="| 8 |" seg39="| 9 |" seg310="| 10 |" seg3jack="| Jack |" seg3queen="|Queen |" seg3king="| King |" seg4='''| of |''' seg5s="| Spade |" seg5h="| Heart |" seg5c="| Clubs |" seg5d="|Diamond|" seg6='''| |''' seg7='''|_______|''' a=[seg3a,seg32,seg33,seg34,seg35,seg36,seg37,seg38,seg39,seg310,seg3jack,seg3queen,seg3king] b=[seg5s,seg5h,seg5c,seg5d] count=0 count1=0 print seg1*13,'\n',seg2*13#,#'\n' while count<=12: c=random.choice(a) print c, count+=1 print '\n',seg4*13,'\n' while count1<=12: d=random.choice(b) print d, count1+=1 print '\n',seg6*13,'\n',seg7*13 Hello. I am trying to make this program which is supposed to print out 13 random cards side by side but it formats really strangely. I know it is caused by the while loop which adds an extra space after it prints something out. Thanks. Answer: More information can be found here: [How do I keep Python print from adding newlines or spaces?](http://stackoverflow.com/questions/255147/how-do-i-keep-python-print- from-adding-newlines-or-spaces) This can be applied to your code by `import`ing `sys` and then using: while count<=12: c=random.choice(a) sys.stdout.write(c) count+=1 sys.stdout.flush() and the equivalent for the other while loop. Side note: consider reading some python code by other people. Python users like to keep it short and sweet, and I greatly benefited—at least in terms of coding shortcuts—by doing that. In fact, you could just as well replace that entire loop with: print "".join([random.choice(a) for i in range(13)])
Problems to get element.tagName. Parsing an XML with Python and xml.dom.minidom Question: I'm parsing an XML with Python (xml.dom.minidom) and I can't get the tagName of a node. The interpreter is returning:

    AttributeError: Text instance has no attribute 'tagName'

when I try to extract (for example) the string 'format' from the node:

    <format>DVD</format>

I have found a couple of very similar posts here in Stackoverflow, but I still can't find the solution. I'm aware that there might be alternative modules to deal with this issue, but my intention here is to understand WHY it is failing. Thanks a lot in advance and best regards. Here is my code:

    from xml.dom.minidom import parse
    import xml.dom.minidom

    # Open XML document
    xml = xml.dom.minidom.parse("movies.xml")

    # collection Node
    collection_node = xml.firstChild

    # movie Nodes
    movie_nodes = collection_node.childNodes

    for m in movie_nodes:
        if len(m.childNodes) > 0:
            print '\nMovie:', m.getAttribute('title')
            for tag in m.childNodes:
                print tag.tagName  # AttributeError: Text instance has no attribute 'tagName'
                for text in tag.childNodes:
                    print text.data

And here the XML:

    <collection shelf="New Arrivals">
        <movie title="Enemy Behind">
            <type>War, Thriller</type>
            <format>DVD</format>
            <year>2003</year>
            <rating>PG</rating>
            <stars>10</stars>
            <description>Talk about a US-Japan war</description>
        </movie>
        <movie title="Transformers">
            <type>Anime, Science Fiction</type>
            <format>DVD</format>
            <year>1989</year>
            <rating>R</rating>
            <stars>8</stars>
            <description>A schientific fiction</description>
        </movie>
    </collection>

Similar posts: [Get node name with minidom](http://stackoverflow.com/questions/2795462/get-node-name-with-minidom) [Element.tagName for python not working](http://stackoverflow.com/questions/8274739/element-tagname-for-python-not-working) Answer: The error is due to the new lines between element nodes being considered nodes themselves, of type _TEXT_NODE_ (see [Node.nodeType](https://docs.python.org/2/library/xml.dom.html#xml.dom.Node.nodeType)), and a _TEXT_NODE_ doesn't have a `tagName` attribute. You can add a node type check to avoid printing `tagName` from text nodes:

    if tag.nodeType != tag.TEXT_NODE:
        print tag.tagName
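Alternatively (a sketch of mine, not from the original answer), you can sidestep the whitespace text nodes entirely by asking for element nodes by name:

    from xml.dom.minidom import parse

    dom = parse("movies.xml")
    for movie in dom.getElementsByTagName('movie'):
        print '\nMovie:', movie.getAttribute('title')
        for child in movie.childNodes:
            if child.nodeType == child.ELEMENT_NODE:  # skip the whitespace text nodes
                print child.tagName, ':', child.firstChild.data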
Script that converts html tables to CSV (preferably python) Question: I have a large number of html tables that I'd like to convert into CSV. Pasting individual tables into excel and saving them as .csv works, as does pasting the html tables into simple online converters. But I have thousands of individual tables, so I need a script that can automate the conversion process. I was wondering if anyone has any suggestions as to how I could go about doing this? Python is the only language I have a decent knowledge of, so some sort of python script would be ideal. I've searched for similar questions, but all the python examples I've found are quite complicated to me, and go beyond my basic level of understanding. Any advice would be much appreciated :) Answer: Use `pandas`. It has a function to read html tables into a data structure, and then a function that will write that data structure to a csv file.

    import pandas as pd

    url = 'http://myurl.com/mypage/'
    for i, df in enumerate(pd.read_html(url)):
        df.to_csv('myfile_%s.csv' % i)

Note that since an html page may have more than one table, the function to get the table always returns a list of tables (even if there is only one table). That is why I use a loop here.
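Since the question mentions thousands of tables, a small extension of the answer (my sketch, with a hypothetical list of URLs) that loops over many pages and keeps going past pages without tables:

    import pandas as pd

    urls = ['http://myurl.com/page1/', 'http://myurl.com/page2/']  # hypothetical list

    for n, url in enumerate(urls):
        try:
            tables = pd.read_html(url)
        except ValueError:  # pandas raises ValueError when a page has no tables
            continue
        for i, df in enumerate(tables):
            df.to_csv('page%s_table%s.csv' % (n, i), index=False)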
How to import an Access table into another Access table using Python Question: Good morning. I'm new to Python and I'm doing an internship at the moment. One part of the script that they want me to make is to import a table from Access database 1 to Access database 2. I was trying to do something with the two following libraries: pyodbc and prettytable. I wanted to make a temporary table with prettytable from database 1, get the values from that, and put them in database 2, hoping that I could put a variable in the SQL. But obviously, that never worked. So I'm stuck right now. Does anybody have an idea? You can read my beautiful code that I use below: import pyodbc DBfile = 'C:/Users/stage1/Documents/test.accdb' conn = pyodbc.connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/Users/stage1/Documents/test.accdb;") cursor = conn.cursor() DBfile = 'C:/Users/stage1/Documents/test.accdb' conn2 = pyodbc.connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/Users/stage1/Documents/test.accdb;") cursor2 = conn2.cursor() SQL = """select naam FROM naam;""" for row in cursor.execute(SQL): k = row.naam print k cursor2.execute("""insert into testtable(naam) values (?)""", (k)) conn2.commit() cursor.close() conn.close() cursor2.close() conn2.close() THE PROBLEM IS SOLVED. Thanks to mhawke Answer: No comment on what you are doing and why. I don't grok the pretty table stuff, but you need to pass the value of `k` to the execute statement. Change: cursor.execute("""insert into Tableimport(name) values (k)""") to cursor.execute("""insert into Tableimport(name) values (?)""", (k,)) The latter is a parameterised query where the `?` placeholder will be replaced with the _value_ of `k`. * * * Too much back and forth with comments, and the information provided in the question is not clear enough. So, you can try the following code, which checks that the required tables and column exist before attempting any queries. It will display a list of tables and/or columns if anything is not as expected. This is intended to help diagnose the problem, it's not production code. import pyodbc DBfile = 'C:/Users/stage1/Documents/test.accdb' conn = pyodbc.connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:/Users/stage1/Documents/test.accdb;") cursor = conn.cursor() source_table = 'naam' dest_table = 'testtable' column_name = 'naam' if (cursor.columns(table=source_table, column=column_name).fetchone() and cursor.columns(table=dest_table, column=column_name).fetchone()): result = cursor.execute("SELECT naam FROM naam") for row in result: k = row.naam print 'Got k: {!r}'.format(k) cursor.execute("insert into testtable (naam) values (?)", (k,)) conn.commit() else: # tables and/or columns missing show_tables = show_source_columns = show_dest_columns = False if not cursor.tables(table=source_table).fetchone(): print 'No source table named "{}"'.format(source_table) show_tables = True elif not cursor.columns(table=source_table, column=column_name).fetchone(): show_source_columns = True if not cursor.tables(table=dest_table).fetchone(): print 'No destination table named "{}"'.format(dest_table) show_tables = True elif not cursor.columns(table=dest_table, column=column_name).fetchone(): show_dest_columns = True if show_tables: print '\n\nAvailable tables are:' for row in cursor.tables(): print row if show_source_columns: print '\n\nSource table "{}" missing column "{}". Available columns:'.format(source_table, column_name) for row in cursor.columns(table=source_table): print '{} : {}'.format(row.column_name, row.data_type) if show_dest_columns: print '\n\nDestination "{}" table missing column "{}". Available columns:'.format(dest_table, column_name) for row in cursor.columns(table=dest_table): print '{} : {}'.format(row.column_name, row.data_type) cursor.close() conn.close()
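As an aside, once the diagnostic above confirms that both tables and the column exist, the row-by-row insert can be collapsed into a single `executemany` call, which is usually faster; a sketch under the same table and column assumptions as above:

    rows = cursor.execute("SELECT naam FROM naam").fetchall()
    # one parameter tuple per row to insert
    cursor.executemany("insert into testtable (naam) values (?)",
                       [(row.naam,) for row in rows])
    conn.commit()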
Python proxy - Need help to send HTTP header to the browser Question: I'm making a proxy for my project and I'm trying to send the browser (Firefox) an HTTP header to continue the "conversation" between me (the proxy server) and the browser. The issue is: when I refresh any page, the page keeps loading. I use socket and select for the proxy: import socket, select #Sending a message for the waiting list def send_Waiting_Messages(wlist): for msg in messages_to_send: clientSocket, data = msg if clientSocket in wlist: clientSocket.send('HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n') messages_to_send.remove(msg) serverSocket = socket.socket() serverSocket.bind(('0.0.0.0',8080)) serverSocket.listen(10) open_client_sockets = [] messages_to_send = [] while True: rlist, wlist, xlist = select.select([serverSocket] + open_client_sockets, open_client_sockets, []) for currentSocket in rlist: if currentSocket is serverSocket: newSocket, addr = serverSocket.accept() open_client_sockets.append(newSocket) else: data = currentSocket.recv(1024) if data == "": open_client_sockets.remove(currentSocket) print 'Conn is closed' else: print data messages_to_send.append((currentSocket, 'Hello, ' + data)) send_Waiting_Messages(wlist) These lines: clientSocket.send('HTTP/1.1 200 OK\r\n') clientSocket.send('Content-Type: text/html\r\n\r\n') are for sending the header. Thanks to anyone who helps!! Answer: Notice that the `send_Waiting_Messages` function sends the same response to all of your connected clients. I recommend replacing it with a plain `send` call once the socket is writable, i.e. in `wlist`. * * * **This will probably work** import socket, select serverSocket = socket.socket() serverSocket.bind(('0.0.0.0',8080)) serverSocket.listen(10) open_client_sockets = [] while True: rlist, wlist, xlist = select.select([serverSocket] + open_client_sockets, open_client_sockets, []) for currentSocket in rlist: if currentSocket is serverSocket: newSocket, addr = serverSocket.accept() open_client_sockets.append(newSocket) else: data = currentSocket.recv(2048) if data == "": open_client_sockets.remove(currentSocket) print 'Conn is closed' else: print data content_to_send = "The content that you want to send" currentSocket.send("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length:"+str(len(content_to_send))+"\r\n\r\n"+content_to_send)
Python script chokes on a downloaded file because of unicode encode error Question: I run a script 4 times a day that uses the requests module to download a file, which I then throw into a database. 9 times out of 10, the script works flawlessly. But the times it does not work is because of a character in the downloaded file that my script, as it is, does not like. For example, here's the error I got today: `UnicodeEncodeError: 'ascii' codec can't encode characters in position 379-381: ordinal not in range(128)`. I downloaded the file another way and here's the character at position 380 which I believe is responsible for stopping my script, "∞". And, here's the place in my script where it chokes: ##### request file r = requests.get('https://resources.example.com/requested_file.csv') ##### create the database importable csv file ld = open('/requested_file.csv', 'w') print(r.text, file=ld) I know this probably has to do with encoding the file somehow before printing it to the .csv file, and is probably a simple thing for someone who knows what they are doing but, after many hours of research, I'm about to cry. Thanks for your help in advance! Answer: You need to provide an _encoding_ for your file; currently it defaults to ASCII, which is a very limited codec. You could use UTF-8 instead, for example: with open('/requested_file.csv', 'w', encoding='utf8') as ld: print(r.text, file=ld) However, since you are loading from a URL you are now decoding then encoding again. A better idea is to just copy the data straight to disk as bytes. Make a _streaming_ request and have `shutil.copyfileobj()` copy the data in chunks. That way you can handle any size of response without loading everything into memory: import requests import shutil r = requests.get('https://resources.example.com/requested_file.csv', stream=True) with open('/requested_file.csv', 'wb') as ld: r.raw.decode_content = True # decompress gzip or deflate responses shutil.copyfileobj(r.raw, ld)
How to use gst along with pyqt to stream video on pyqt widget Question: I am using gst along with PyQt. I want to display the video stream in my widget, but when I do so, my application starts streaming the video and then crashes. What am I doing wrong? Camera Code from PyQt4 import QtCore import gst class camThread(QtCore.QThread): updateImage = QtCore.pyqtSignal(str) flag = None def __init__(self,windowId): QtCore.QThread.__init__(self) self.windowId =windowId self.player = gst.parse_launch("udpsrc port=5000 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay ! h264parse ! ffdec_h264 ! autovideosink") bus = self.player.get_bus() bus.add_signal_watch() bus.enable_sync_message_emission() bus.connect("sync-message::element", self.on_sync_message) self.bus = bus def on_sync_message(self, bus, message): print "akash 123" if message.structure is None: return message_name = message.structure.get_name() if message_name == "prepare-xwindow-id": win_id = self.windowId assert win_id imagesink = message.src imagesink.set_property("force-aspect-ratio", True) imagesink.set_xwindow_id(win_id) def run(self): print "akash" self.player.set_state(gst.STATE_PLAYING) msg = self.bus.timed_pop_filtered(gst.CLOCK_TIME_NONE, gst.MESSAGE_ERROR | gst.MESSAGE_EOS) self.flag = True while(True): if(self.flag==False): break def quit(self): self.flag = false self.player.set_state(gst.STATE_NULL) #self.cap.release() Calling code def stopCam(self): if(self.cam!=None): self.cam.quit() self.cam = None def startCam(self): if(self.cam==None): self.cam = camThread(self.pic.winId()) self.cam.start() elif(self.cam.isRunning()): pass Here is the entire code on Pastebin: [PasteBin file 1](http://pastebin.com/DGQRUHF9) [PasteBin file 2](http://pastebin.com/CdCFueSb) Edit: I ran the Python code in a debugger. The application becomes unresponsive/fails when I start gst playing, i.e. it fails after the gst bus timed pop. One possible reason I could see was that a thread related to the video streaming stops or exits after it is started in the application, after which the application goes black/unresponsive/crashes. Answer: I think you forgot to initialise the threading system. import gobject gobject.threads_init() import gst Your `run()` function also does not need the while loop and the flag `self.flag`. According to the documentation [here](https://people.gnome.org/~gcampagna/docs/Gst-1.0/Gst.Bus.timed_pop_filtered.html), the call to `timed_pop_filtered(gst.CLOCK_TIME_NONE, ...)` will block until it receives the specified message. When you click on the "Stop Cam" button, you terminate the playback. The flag is never used. I also had to change `self.cam = camThread(self.pic.winId())` to `self.cam = camThread(int(self.pic.winId()))` and comment out `imagesink.set_property("force-aspect-ratio", True)` -- I am using Mac OS X. Hope this helps.
matplotlib interactive plot with slices of image Question: How would I make an interactive plot like the one displayed here? I'd like to show an image with x and y slices of the image taken at a point that can be adjusted by clicking on the image. ![desired interactive plot where the image can be clicked to adjust the position of the slices](http://i.stack.imgur.com/U0RWo.jpg) I know this one was made in Chaco, but since Chaco isn't compatible with python3, just matplotlib or bokeh would be preferable. Answer: Using tacaswell's suggestion, I found that the cross_section_2d in bubblegum was just what I was looking for. First I installed bubblegum from GitHub: <https://github.com/Nikea/bubblegum.git> Then the following sets up a cross-section image: import matplotlib.pyplot as plt import numpy as np from bubblegum.backend.mpl.cross_section_2d import CrossSection fig= plt.figure() cs= CrossSection(fig) img= np.random.rand(100,100) cs.update_image(img) plt.show() Thanks!
Read multiple csv files and write multiple netCDF files Question: I have the following Python code that works perfectly fine to convert a single .csv file into a netCDF file. But I have multiple files (365), named 'TRMM_1998_01_02_newntcl.csv', 'TRMM_1998_01_03_newntcl.csv', ... up to 'TRMM_1998_12_31_newntcl.csv'. Can somebody help me write a loop through all the csv files that creates 365 netCDF files using this code? Any help is appreciated. Thanks in advance. import numpy as np def convert_file(filename): data = np.loadtxt(fname=filename, delimiter=',') # filename = "TRMM_{}_{}_{}_newntcl.csv".format(d.year,d.month,d.day) Lat_data = np.loadtxt('Latitude.csv', delimiter=',') Lon_data = np.loadtxt('Longitude.csv', delimiter=',') # create a netcdf Data object with netCDF4.Dataset('TEST_file.nc', mode="w", format='NETCDF4') as ds: # some file-level meta-data attributes: ds.Conventions = "CF-1.6" ds.title = 'precipitation' ds.institution = 'Institute' ds.author = 'Author' lat_arr = data[:,0] # the first column lon_arr = data[:,1] # the second column precip_arr = data[:,2] # the third column nlat = lat_arr.reshape( (161, 321) ) nlon = lon_arr.reshape( (161, 321) ) # ds.createDimension('time', 0) ds.createDimension('latitude', 161) ds.createDimension('longitude', 321) precip = ds.createVariable('precip', 'f4', ('latitude', 'longitude')) precip[:] = data[:,2] ## adds some attributes precip.units = 'mm' precip.long_name = 'Precipitation' lat = ds.createVariable('lat', 'f4', ('latitude')) lat[:] = Lat_data[:] ## adds some attributes lat.units = 'degrees_South' lat.long_name = 'Latitude' lon = ds.createVariable('lon', 'f4', ('longitude')) lon[:] = Lon_data[:] ## adds some attributes lon.units = 'degrees_East' lon.long_name = 'Longitude' print ds # print filename # load the data path='C:\Users\.spyder2' os.chdir(path) d=datetime.date(1998,01,01) while d.year==1998: d+=datetime.timedelta(days=1) convert_file("TRMM_{}_{}_{}_newntcl.csv".format(d.year,d.month,d.day)) Answer: It looks like you can use a [`datetime.date`](https://docs.python.org/2/library/datetime.html#date- objects) object to loop through all of the days in a year. First, you should put the code you have in a function that takes a filename. Then, you can just make a `date` object and call the function in a loop: import datetime d = datetime.date(1998, 1, 1) while d.year == 1998: convert_file("TRMM_{:04d}_{:02d}_{:02d}_newntcl.csv".format(d.year, d.month, d.day)) d += datetime.timedelta(days=1) Note two details here: the conversion has to happen before the date is incremented, otherwise January 1st is skipped and the loop runs one day into 1999, and the month and day need zero-padding (`{:02d}`) to match your filenames. Inside `convert_file` you will also want to derive the output filename from the input one instead of the hard-coded 'TEST_file.nc', or every run will overwrite the same file.
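Alternatively, if you would rather not reconstruct the filenames from dates at all, you could glob for the files that actually exist; this sketch (assuming the csv files sit in the current working directory) also quietly skips any missing days:

    import glob

    for filename in sorted(glob.glob('TRMM_1998_*_newntcl.csv')):
        convert_file(filename)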
Python - taking most frequent element from array/converting numpy array to std array Question: I'm in the process of implementing a K-nearest neighbour algorithm in Python (for those of you not familiar with machine learning, it's an algorithm used to classify objects based on data that is already classified, using Euclidean distance). I've got my distances computed, and I can take the k nearest distances, and find the classes of those objects. My problem is, if K is greater than 1, say 3 or 5, I'm not sure how I can get the most frequent element in the list. For example, my output is: [10, 9, 7, 10] 10 occurs the most, so I'd like to return this number. In case of a tie (2 or more elements occurring with the same frequency), it returns an error (I can deal with this myself). I'd just like some opinion on how to return the maximum of the above list. (**Using python 2.6.6** so I can't use the collections imports). Second question: I'm attempting to convert a numpy array to a normal array. My code looks like this: def getClassesOfIndexes(l): tmp1 = [] for i in l: tmp1.append(classes[i]) return tmp1 print(getClassesOfIndexes([1024, 9128, 394, 39])) This prints something like: `[array([10], dtype=uint8), array([7], dtype=uint8), array([10], dtype=uint8), array([9], dtype=uint8)]` What could I do for it to simply return `[10, 7, 10, 9]`? Thanks for any help. Answer: **Question 2** is the easier one (though in the future, please post unrelated questions as two separate questions on SO). The `tolist` function automatically converts numpy arrays to regular lists: <http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html> **Question 1** is also pretty straightforward. You say you want "the most frequent element in the list". Here's a complete discussion: [Python most common element in a list](http://stackoverflow.com/questions/1518522/python- most-common-element-in-a-list). One solution is to build a dictionary that maps each element's frequency to the element itself, and then look up the entry stored under the largest frequency. Note that if two elements are tied for most frequent, one silently overwrites the other in the dict, so add your own tie check before relying on the result. Written with `dict()` and a generator expression, because dict comprehensions only arrived in Python 2.7, this might look like ... freq_map = dict((my_list.count(val), val) for val in set(my_list)) return freq_map[max(freq_map.keys())]
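For completeness, here is what the Question 2 fix looks like on the example data; a minimal sketch, assuming `classes` holds one-element uint8 arrays as in the output shown above:

    def getClassesOfIndexes(l):
        # .tolist() turns each one-element array into a plain Python list;
        # indexing [0] then yields the bare integer
        return [classes[i].tolist()[0] for i in l]

    print(getClassesOfIndexes([1024, 9128, 394, 39]))  # e.g. [10, 7, 10, 9]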
Tkinter understanding mainloop Question: Until now, I used to end my Tkinter programs with `tk.mainloop()`, or nothing would show up! See example: from Tkinter import * import random import time tk = Tk() tk.title = "Game" tk.resizable(0,0) tk.wm_attributes("-topmost", 1) canvas = Canvas(tk, width=500, height=400, bd=0, highlightthickness=0) canvas.pack() class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) def draw(self): pass ball = Ball(canvas, "red") tk.mainloop() However, when I tried the next step in this program (making the ball move over time), the book I am reading from says to do the following. Change the draw function to: def draw(self): self.canvas.move(self.id, 0, -1) and add the following code to my program: while 1: ball.draw() tk.update_idletasks() tk.update() time.sleep(0.01) But I noticed that adding this block of code made the use of `tk.mainloop()` useless, since everything would show up even without it!!! _At this moment I should mention that my book never talks about `tk.mainloop()` (maybe because it uses Python 3), but I learned about it searching the web, since my programs didn't work by just copying the book's code!_ So I tried doing the following, which would not work!!! while 1: ball.draw() tk.mainloop() time.sleep(0.01) What's going on? What does `tk.mainloop()` do? What do `tk.update_idletasks()` and `tk.update()` do, and how do they differ from `tk.mainloop()`? Should I use the above loop, `tk.mainloop()`, or both in my programs? Thanks! Answer: `tk.mainloop()` _blocks_. What that means is that execution of your _python_ program halts there. You can see that by writing: while 1: ball.draw() tk.mainloop() print "hello" #NEW CODE time.sleep(0.01) You will never see the output from the print statement. Because there is no loop, the ball doesn't move. On the other hand, the methods `update_idletasks()` and `update()` here: while True: ball.draw() tk.update_idletasks() tk.update() ...do not block; execution continues on after those methods finish, so the while loop executes over and over, which makes the ball move. An infinite loop containing the method calls `update_idletasks()` and `update()` can act as a substitute for calling `tk.mainloop()`. Note that the whole while loop can be said to _block_ just like `tk.mainloop()` because nothing after the while loop will execute. However, `tk.mainloop()` is not a substitute for just the lines: tk.update_idletasks() tk.update() Rather, `tk.mainloop()` is a substitute for the whole while loop: while True: tk.update_idletasks() tk.update() **Response to comment:** Here is what the [tcl docs](http://wiki.tcl.tk/13859) say: > update idletasks > > This subcommand of update flushes all currently-scheduled idle events from > Tcl's event queue. Idle events are used to postpone processing until "there > is nothing else to do", with the typical use case for them being Tk's > redrawing and geometry recalculations. By postponing these until Tk is idle, > expensive redraw operations are not done until everything from a cluster of > events (e.g., button release, change of current window, etc.) are processed > at the script level. This makes Tk seem much faster, but if you're in the > middle of doing some long running processing, it can also mean that no idle > events are processed for a long time. By calling update idletasks, redraws > due to internal changes of state are processed immediately. 
(Redraws due to > system events, e.g., being deiconified by the user, need a full update to be > processed.) > > APN As described in Update considered harmful, use of update to handle > redraws not handled by update idletasks has many issues. Joe English in a > comp.lang.tcl posting describes an alternative: So `update_idletasks()` causes some subset of events to be processed that `update()` causes to be processed. From the [update docs](http://wiki.tcl.tk/1252): > update ?idletasks? > > The update command is used to bring the application “up to date” by entering > the Tcl event loop repeatedly until all pending events (including idle > callbacks) have been processed. > > If the idletasks keyword is specified as an argument to the command, then no > new events or errors are processed; only idle callbacks are invoked. This > causes operations that are normally deferred, such as display updates and > window layout calculations, to be performed immediately. > > KBK (12 February 2000) -- My personal opinion is that the [update] command > is not one of the best practices, and a programmer is well advised to avoid > it. I have seldom if ever seen a use of [update] that could not be more > effectively programmed by another means, generally appropriate use of event > callbacks. By the way, this caution applies to all the Tcl commands (vwait > and tkwait are the other common culprits) that enter the event loop > recursively, with the exception of using a single [vwait] at global level to > launch the event loop inside a shell that doesn't launch it automatically. > > The commonest purposes for which I've seen [update] recommended are: 1) > Keeping the GUI alive while some long-running calculation is executing. See > Countdown program for an alternative. 2) Waiting for a window to be > configured before doing things like geometry management on it. The > alternative is to bind on events such as that notify the process of a > window's geometry. See Centering a window for an alternative. > > What's wrong with update? There are several answers. First, it tends to > complicate the code of the surrounding GUI. If you work the exercises in the > Countdown program, you'll get a feel for how much easier it can be when each > event is processed on its own callback. Second, it's a source of insidious > bugs. The general problem is that executing [update] has nearly > unconstrained side effects; on return from [update], a script can easily > discover that the rug has been pulled out from under it. There's further > discussion of this phenomenon over at Update considered harmful. ..... > Is there any chance I can make my program work without the while loop? Yes, but things get a little tricky. You might think something like the following would work: class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) def draw(self): while True: self.canvas.move(self.id, 0, -1) ball = Ball(canvas, "red") ball.draw() tk.mainloop() The problem is that ball.draw() will cause execution to enter an infinite loop in the draw() method, so tk.mainloop() will never execute, and your widgets will never display. In gui programming, infinite loops have to be avoided at all costs in order to keep the widgets responsive to user input, e.g. mouse clicks. So, the question is: how do you execute something over and over again without actually creating an infinite loop? 
Tkinter has an answer for that problem: a widget's `after()` method: from Tkinter import * import random import time tk = Tk() tk.title = "Game" tk.resizable(0,0) tk.wm_attributes("-topmost", 1) canvas = Canvas(tk, width=500, height=400, bd=0, highlightthickness=0) canvas.pack() class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) def draw(self): self.canvas.move(self.id, 0, -1) self.canvas.after(1, self.draw) #(time_delay, method_to_execute) ball = Ball(canvas, "red") ball.draw() #Changed per Bryan Oakley's comment tk.mainloop() The after() method doesn't _block_ (it simply schedules `self.draw` to run on the event loop after the given delay; no extra thread is created), so execution continues on in your python program after after() is called, which means tk.mainloop() executes next, so your widgets get configured and displayed. The after() method also keeps your widgets responsive to other user input, because between the scheduled calls the event loop is free to process events. Try running the following program, and then click your mouse on different spots on the canvas: from Tkinter import * import random import time root = Tk() root.title = "Game" root.resizable(0,0) root.wm_attributes("-topmost", 1) canvas = Canvas(root, width=500, height=400, bd=0, highlightthickness=0) canvas.pack() class Ball: def __init__(self, canvas, color): self.canvas = canvas self.id = canvas.create_oval(10, 10, 25, 25, fill=color) self.canvas.move(self.id, 245, 100) self.canvas.bind("<Button-1>", self.canvas_onclick) self.text_id = self.canvas.create_text(300, 200, anchor='se') self.canvas.itemconfig(self.text_id, text='hello') def canvas_onclick(self, event): self.canvas.itemconfig( self.text_id, text="You clicked at ({}, {})".format(event.x, event.y) ) def draw(self): self.canvas.move(self.id, 0, -1) self.canvas.after(50, self.draw) ball = Ball(canvas, "red") ball.draw() #Changed per Bryan Oakley's comment. root.mainloop()
image stack population is slow in numpy Question: I am reading a stack of separate TIFFs into a single 3D array via numpy/python. When files are just read and plugged into some variable, speed scales linearly with the number of files; for example, loading 100 files takes 0.2s, loading 1000 files takes 2.46s, and so on. However, when I try to create a 3D stack out of those files using _dstack()_, time starts to scale non-linearly, e.g. 0.21 sec for 10 files, 5.39 sec for 100 files, and so on. I realize that the slow-down is caused by some magic behind dstack(). What is the proper and fastest way to create 3D stacks from a set of image files? If I don't use dstack(), but pre-create the 3D array and then populate it, the script runs faster, but still scales non-linearly (0.2s for 10, 2.2s for 100, 40s for 1000 images). Code for this case: import numpy as np from PIL import Image import time import random def toc(t): return time.time() - t i_max = 1000 t = time.time() for i in range(0,i_max): fname = r"..\Pos0\img_"+("%09d"%i)+"_Default_000.tif" im = np.array(Image.open(fname)) if i>0: stack[:,:,i] = im else: s = im.shape stack = np.empty((s[0],s[1],i_max)) stack[:,:,0] = im print toc(t) PS: Python 2.7.8 Anaconda 2.1, Intel i5 @ 32GB RAM, reading from 4-striped HDDs Answer: You can solve this problem by changing the order of your indexing, making the image index first. Like this: i_max = 1000 sx, sy = 1000,1000 t = time.time() for i in range(0,i_max): im = np.ones((sx,sy)) if i>0: #stack[:,:,i] = im stack[i,:,:] = im else: #stack = np.empty((sx,sy,i_max)) #stack[:,:,0] = im stack = np.empty((i_max, sx, sy)) stack[0,:,:] = im print toc(t) The times I get are: 4.44851183891 # original order, i_max = 100 118.510767937 # original order, i_max = 1000 1.78239989281 # modified order, i_max = 100 23.4904351234 # modified order, i_max = 1000 I think the reason for this is that numpy arrays are laid out in C order by default, so the last axis varies fastest in memory. With the original `stack[:,:,i]` assignment, each image is scattered through the whole array (consecutive pixels of one image land `i_max` elements apart), so nearly every write touches a different cache line, and most of each chunk the CPU fetches goes unused; with `stack[i,:,:]`, each image fills one contiguous block. For example, see the first few minutes of this video: <https://vimeo.com/97337258> That is, it's not about RAM but CPU cache.
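If later code really needs the `(height, width, n_images)` axis order, you can still fill the array in the fast order shown above and take a transposed view at the end; a small sketch (note the result shares memory with `stack`, no copy is made):

    # after the loop, stack has shape (i_max, sx, sy)
    stack_t = stack.transpose(1, 2, 0)  # shape (sx, sy, i_max), just a view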
Why can't I import modules from my PYTHONPATH in Python 3.4? Question: I have a package installed in `/u/home/j/joelfred/python-dev-modules`. It looks like: /a __init__.py b.py The source for `b.py` is simply: def hello(): print('hi yourself') And for `__init__.py`: import b First, I make sure I'm in my home directory, and set my PYTHONPATH: $ cd $ export PYTHONPATH=/u/home/j/joelfred/python-dev-modules/ Then I run `python3`: $ python3 Python 3.4.3 (default, Mar 18 2015, 17:28:34) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import a Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/u/home/j/joelfred/python-dev-modules/a/__init__.py", line 1, in <module> import b ImportError: No module named 'b' Okay, that's weird. But if I change `__init__.py` to be blank: $ python3 Python 3.4.3 (default, Mar 18 2015, 17:28:34) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import a.b as b >>> b.hello() hi yourself >>> What on earth is going on? Answer: In Python 3 all imports are absolute. You can't do `import b` unless `b` itself is a top-level module/package available on `sys.path`. If you want to import `b` from inside `a`, use an explicit relative import: from . import b
Compare two lists in python where the elements are in different order Question: I have multiple lists, e.g. list1=[1,4,5] list2=[4,1,5] list3=[1,5,4] Two lists are considered the same if they have the same elements. Also, the lists can be nested lists: list1=[[1,4],5,4] list2=[5,4,[1,4]] How do I compare them? Answer: You can `flatten` your lists, then use `set` to keep the unique elements, and compare (note that the `compiler` module is Python 2 only; it was removed in Python 3): >>> from compiler.ast import flatten >>> list1=[[1,4],5,4] >>> list2=[5,4,[1,4]] >>> set(flatten(list1))==set(flatten(list2)) True
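If you are on Python 3, where `compiler.ast` no longer exists, a small recursive flatten of your own does the job; a minimal sketch:

    def flatten(lst):
        for item in lst:
            if isinstance(item, list):
                # recurse into nested lists
                for sub in flatten(item):
                    yield sub
            else:
                yield item

    list1 = [[1, 4], 5, 4]
    list2 = [5, 4, [1, 4]]
    print(set(flatten(list1)) == set(flatten(list2)))  # True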
py2neo raised finished(self) error Question: Working with py2neo and I'm getting the error below when trying to append a transaction: statement ="MERGE (a:Person {name:\""+actorName+"\"}) "\ "\n"\ "MERGE (b:Series {title:\""+actorsFields[3]+"\", year:\""+actorsFields[5]+"\"}) "\ "\n"\ "CREATE UNIQUE (a)-[:ACTED_IN]->(b)"\ "RETURN a,b" print(statement) tx.append(statement) The traceback is: Traceback (most recent call last): File "/Volumes/PyCharm CE/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 2222, in <module> globals = debugger.run(setup['file'], None, None) File "/Volumes/PyCharm CE/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1648, in run pydev_imports.execfile(file, globals, locals) # execute the script File "/Users/Thibault/PycharmProjects/movieGraph/src/mainCypher.py", line 110, in <module> tx.append(statement) File "/Library/Python/2.7/site-packages/py2neo/cypher/core.py", line 220, in append self.__assert_unfinished() File "/Library/Python/2.7/site-packages/py2neo/cypher/core.py", line 192, in __assert_unfinished raise Finished(self) py2neo.error.Finished any ideas? Answer: You will get this error if you call tx.commit() twice without a tx = graph.cypher.begin() in between. This is an easy mistake to make if you are trying to chunk your commits. To be more explicit: #This will give the above error tx = graph.cypher.begin() for i in range(0,10): tx.append(statement="foo",parameters=bar) tx.commit() #This will work fine for i in range(0,10): tx = graph.cypher.begin() tx.append(statement="foo",parameters=bar) tx.commit()
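For reference, the batched-commit version then looks like this sketch; the statement, parameters, and batch size are placeholders, not part of the original question:

    batch_size = 1000
    tx = graph.cypher.begin()
    for i, bar in enumerate(parameter_sets, start=1):
        tx.append(statement="foo", parameters=bar)
        if i % batch_size == 0:
            tx.commit()
            tx = graph.cypher.begin()  # start a fresh transaction after each commit
    tx.commit()  # commit whatever is left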
Django development server seems to be using old version of python source file Question: I'm re-writing the code of my website, and testing with Django's built-in web server using the `manage.py runserver` command. Now I've come across a very strange problem: The server seems to use the current version of `views.py` on the very first page load, but all subsequent refreshes give me a server error because the server is apparently using an older version of `views.py`, but the current versions of all other files, which leads to errors – specifically URL resolver errors, because I changed some code from using hard-coded paths in `views.py` to using the URL resolver, which of course doesn't work if the URL resolver receives a _path_ (from the old `views.py`) when it's expecting a view name (which I put in the new `views.py`). I have already deleted all the `.pyc` files in my django project directory and rebooted the machine, to no avail. The problem persists. I'm using Django 1.7.6 on Python 3.4.2. Here's the current `views.py` (it doesn't really make sense, it's just for testing): from mezgrman.utils import NavigationTemplateResponse NAV_DATA = { 'app_root': 'index', 'app_title': "Item Manager", 'navbar': [ ("Add Item", 'index'), ], 'page_title': "Item Manager", } def index(request): return NavigationTemplateResponse(request, "design_test/index.html", NAV_DATA) The `NavigationTemplateResponse` is a subclass of `TemplateResponse`: from django.template.response import TemplateResponse from django.core.urlresolvers import resolve, reverse class NavigationTemplateResponse(TemplateResponse): def __init__(self, request, template, nav_data, context = None, content_type = None, status = None, current_app = None): if context is None: context = {} url_name = resolve(request.path).url_name app_name = url_name.split(".")[0] view_prefix = app_name + ".views." nav_data['app_root'] = reverse(view_prefix + nav_data.get('app_root', "")) for index, entry in enumerate(nav_data.get('navbar', [])): title, view_name = entry nav_data['navbar'][index] = (title, reverse(view_prefix + view_name)) context.update(nav_data) return super().__init__(request, template, context, content_type, status, current_app) The Django server traceback explicitly proves that it's using an old version of `views.py`; these are the local variables (sans the `WSGIRequest`) at the time of the error, where `nav_data` is the same as in the old `views.py`: content_type None template 'design_test/index.html' url_name 'design_test.views.index' status None self <mezgrman.utils.NavigationTemplateResponse object at 0x7f395f8d15f8> app_name 'design_test' __class__ <class 'mezgrman.utils.NavigationTemplateResponse'> context {} view_prefix 'design_test.views.' current_app None nav_data { 'app_root': '/', 'app_title': 'Item Manager', 'navbar': [('Add Item', '/')], 'page_title': 'Item Manager' } This seems to me like a bug in Django, but I'd like to know if there's another reason for this strange behaviour. Any help would be appreciated. Answer: It's not a bug, and it's not old code of `views.py` either. You're simply overwriting your data in NAV_DATA inside your response class: NAV_DATA is a module-level dict, so it is shared between requests. On the first request after starting the server, NAV_DATA has its initial values, but during that request you overwrite some of them with reversed URLs. That change persists across requests until the dev server is reloaded. 
**Solution 1:** work on a copy of your dict. Note that `dict.copy()` is shallow, so the nested `navbar` list would still be shared and mutated in place; `copy.deepcopy` avoids that: import copy class NavigationTemplateResponse(TemplateResponse): def __init__(self, request, template, nav_data, context = None, content_type = None, status = None, current_app = None): nav_data = copy.deepcopy(nav_data) **Solution 2:** change your logic to store the reversed URLs in other variables **Solution 3:** change your logic to behave differently when the URLs are already reversed. That solution is **not** thread safe!
Convert a String into python list that is already in list format Question: I have users' emails stored in a database like this: ['[email protected]','[email protected]','[email protected]'] I have to get each email of all users one by one. After querying, I wrote the following code. cur.execute("sql query") rows = cur.fetchall() for row in rows: print row[2] print type(row[2]) emails = json.loads(json.dumps(row[2])) print type(emails) <type 'str'> <type 'unicode'> It stays a Unicode string instead of becoming a list. Answer: Your `row[2]` is a string. To convert it to a list you could use [`ast.literal_eval`](https://docs.python.org/2/library/ast.html): In [29]: text = "['[email protected]','[email protected]','[email protected]']" In [30]: import ast In [31]: ast.literal_eval(text) Out[31]: ['[email protected]', '[email protected]', '[email protected]'] In [32]: type(ast.literal_eval(text)) Out[32]: list
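Wired into the original loop, that looks like this sketch (the query and cursor are unchanged from the question):

    import ast

    cur.execute("sql query")
    rows = cur.fetchall()
    for row in rows:
        emails = ast.literal_eval(row[2])  # now a real Python list
        for email in emails:
            print email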
mock patch not work with nosetests Question: I just tried to learn [mock](https://pypi.python.org/pypi/mock) and [nosetests](https://nose.readthedocs.org/en/latest/) by running simple examples, but got no luck: john$ nosetests test_mylib.py E ====================================================================== ERROR: test_mylib.test_mylib_foo ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/wjq/py-virtenv-2.7.5/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/Users/wjq/py-virtenv-2.7.5/lib/python2.7/site-packages/mock.py", line 1201, in patched return func(*args, **keywargs) TypeError: test_mylib_foo() takes exactly 2 arguments (1 given) However, if I run the test directly, it's ok: john$ python test_mylib.py john$ I think I must be missing some key understanding of the two libraries, since I'm new to them. I'd really appreciate it if someone could point it out. The following is my example code. test_mylib.py import mock import mylib @mock.patch('mylib.incr') def test_mylib_foo(aa, incr): incr.return_value=5 assert mylib.foo(1) == 6 if __name__ == '__main__': test_mylib_foo(123) mylib.py from depen import incr def foo(aa): return incr(aa) +1 depen.py def incr(aa): return aa+1 Answer: Remove the `aa` argument and it'll work just fine. When nose collects `test_mylib_foo`, it calls it with no arguments of its own, and `@mock.patch` then supplies exactly one extra argument (the mock for `mylib.incr`); with two parameters in the signature, the second one can never be filled, hence the `TypeError`. Running the file directly only worked because you supplied the extra argument yourself (`test_mylib_foo(123)`). @mock.patch('mylib.incr') def test_mylib_foo(incr): incr.return_value=5 assert mylib.foo(1) == 6 if __name__ == '__main__': test_mylib_foo() A better `__main__` execution would call [nose.runmodule](http://nose.readthedocs.org/en/latest/api/core.html#nose.core.runmodule): if __name__ == '__main__': import nose nose.runmodule()
pickling and unpickling user-defined class Question: I have a user-defined class 'myclass' that I store on file with the `pickle` module, but I am having problem unpickling it. I have about 20 distinct instances of the same structure, that I save in distinct files. When I read each file, the code works on some files and not on others, when I get the error: 'module' object has no attribute 'myclass' I have generated some files today and some other yesterday, and my code only works on the files generated today (I have NOT changed class definition between yesterday and today). I was wondering if maybe my method is not robust, if I am not doing things as I should do, for example maybe I cannot pickled user-defined class, and if this is introducing some randomness in the process. Another issue could be that the files that I generated yesterday were generated on a different machine --- because I work on an academic cluster, I have some login nodes and some computing nodes, that differs by architecture. So I generated yesterday files on the computing nodes, and today files on the login nodes, and I am reading everything on the login nodes. * * * As suggested in some of the comments, I have installed `dill` and loaded it with `import dill as pickle`. Now I can read the files from computing nodes to login nodes of the same cluster. But if I try to read the files generated on the computing node of one cluster, on the login node of another cluster I cannot. I get `KeyError: 'ClassType'` in _load_type(name) in dill.py Can it be because the python version is different? I have generated the files with python2.7 and I read them with python3.3. * * * EDIT: I can read the pickled files, if I use everywhere python 2.7. Sadly, part of my code, written in python 3, is not automatically compatible with python 2.7 :( Answer: Can you `from mymodule import myclass`? Pickling does not pickle the class, just a reference to it. To load a pickled object python must be able to find the class that was to be used to create the object. eg. import pickle class A(object): pass obj = A() pickled = pickle.dumps(obj) _A = A; del A # hide class try: pickle.loads(pickled) except AttributeError as e: print(e) A = _A # unhide class print(pickle.loads(pickled))
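Since the edit mentions writing with Python 2.7 and reading with 3.3, the pickle protocol and the bytes/str split also matter. A sketch of the compatible round trip, assuming both sides open the file in binary mode:

    import pickle

    # writing side (works on 2.7 and 3.x): protocol 2 is the highest
    # protocol that Python 2 can read back
    with open('data.pkl', 'wb') as f:
        pickle.dump(obj, f, protocol=2)

    # reading side on Python 3: 2.x str objects may need an explicit encoding
    with open('data.pkl', 'rb') as f:
        obj = pickle.load(f, encoding='latin-1')

Note that the `encoding` argument of `pickle.load` only exists on Python 3.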
How to output a multi-index DataFrame in latex with pandas? Question: I am trying to output a multi-index DataFrame in a latex output using python and pandas. So far, I have this: import pandas as pd l = [] for a in ['1', '2', '3']: for b in ['first', 'second']: for c in ['one', 'two']: x = 3.2 s = pd.Series({'a':a, 'b':b, 'c':c, 'x':x}) l.append(pd.DataFrame(s).transpose()) df = pd.concat(l, ignore_index=True) df = df.set_index(['a', 'b', 'c']) print(df) print(df.to_latex()) However, I do not have the expected output. x a b c 1 first one 3.2 two 3.2 second one 3.2 two 3.2 2 first one 3.2 two 3.2 second one 3.2 two 3.2 3 first one 3.2 two 3.2 second one 3.2 two 3.2 \begin{tabular}{llll} \toprule & & & x \\ \midrule 1 & first & one & \\ 2 & second & two & 3.2 \\ \bottomrule \end{tabular} Is this a bug or is there something I missed? Answer: I think this may be a bug in `to_latex`. See [this question](http://stackoverflow.com/questions/25734454/nbconvert-multiindex- dataframes-to-latex). Based on @PaulH's comment, you could do the following: df.reset_index().to_latex(index=False) That will output repeated row labels, so it's more cluttered than would be ideal, but at least it outputs the whole table in LaTeX.
Python LinkedIn Search API 403 error Question: I am trying to get public profiles of people who work at company X to get their title, id, and connections. How do I properly use the Search API so I do not get a 403 Forbidden error? from linkedin import linkedin CONSUMER_KEY = 'XXX' CONSUMER_SECRET = 'XXX' USER_TOKEN = 'XXX' USER_SECRET = 'XXX' RETURN_URL = '' auth = linkedin.LinkedInDeveloperAuthentication(CONSUMER_KEY, CONSUMER_SECRET, USER_TOKEN, USER_SECRET, RETURN_URL, permissions=linkedin.PERMISSIONS.enums.values()) app = linkedin.LinkedInApplication(auth) mm=app.search_profile(selectors=[{'people': ['headline','id','num-connections',]}], params={'keywords': 'microsoft'}) print mm So this code gives me an error Message File Name Line Position Traceback <module> <module1> 30 search_profile C:\Python27\lib\site-packages\linkedin\linkedin.py 194 raise_for_error C:\Python27\lib\site-packages\linkedin\utils.py 65 LinkedInForbiddenError: 403 Client Error: Forbidden: Access to people search denied. Was the search API updated so that search cannot be accessed? I do not want to use the normal search and copy-paste everything. Answer: So it appears LinkedIn has a vetted process for developer access <http://developer-programs.linkedin.com/forum/error-403-client-error- forbidden-unknown-error> and you might have to call customer service and agree to a TOS <http://community.linkedin.com/questions/116784/how-can-i-get-vetted-api- access-to-use-the-people.html> Another option would be to try using selenium to get the data: <http://www.seleniumhq.org/> Python bindings can be found here: <https://selenium-python.readthedocs.org/>
Python sys.stdout.write() strange behavior in Mac terminal Question: I am trying to make a progress bar for a process, but to keep things short consider the following snippet: import sys import time for i in range(10): time.sleep(0.5) sys.stdout.write('*') sys.stdout.write('\n') What I am expecting it to do is to print a star every half a second on the same line (which is exactly what it does when I run the code with an IDE like Sublime Text for example). However, when I run this in the Mac OS Terminal it just waits for 5 seconds then prints the whole string `**********` at once. Any ideas why this might be happening and how to get the desired behavior? Answer: It does that because output is buffered. Try this instead: import sys import time for i in range(10): time.sleep(0.5) sys.stdout.write('*') sys.stdout.flush() # <-- this is the key sys.stdout.write('\n') By flushing output each time you do a write, `sys.stdout` won't wait for the next newline; instead it will write whatever characters are in its buffers immediately.
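Related shortcuts, depending on your Python version: on Python 3.3+ the `print` function takes a `flush` argument, which saves the explicit `sys.stdout.flush()` call:

    # Python 3.3+ only
    print('*', end='', flush=True)

Running the interpreter with the `-u` flag also reduces buffering process-wide, though its exact effect on the text layer varies between Python versions.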
How do you kill Futures once they have started? Question: I am using the new [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html) module (which also has a Python 2 backport) to do some simple multithreaded I/O. I am having trouble understanding how to cleanly kill tasks started using this module. Check out the following Python 2/3 script, which reproduces the behavior I'm seeing: #!/usr/bin/env python from __future__ import print_function import concurrent.futures import time def control_c_this(): with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: future1 = executor.submit(wait_a_bit, name="Jack") future2 = executor.submit(wait_a_bit, name="Jill") for future in concurrent.futures.as_completed([future1, future2]): future.result() print("All done!") def wait_a_bit(name): print("{n} is waiting...".format(n=name)) time.sleep(100) if __name__ == "__main__": control_c_this() While this script is running it appears impossible to kill cleanly using the regular Control-C keyboard interrupt. I am running on OS X. * On Python 2.7 I have to resort to `kill` from the command line to kill the script. Control-C is just ignored. * On Python 3.4, Control-C works if you hit it twice, but then a lot of strange stack traces are dumped. Most documentation I've found online talks about how to cleanly kill threads with the old `threading` module. None of it seems to apply here. And all the methods provided within the `concurrent.futures` module to stop stuff (like `Executor.shutdown()` and `Future.cancel()`) only work when the Futures haven't started yet or are complete, which is pointless in this case. I want to interrupt the Future immediately. My use case is simple: When the user hits Control-C, the script should exit immediately like any well-behaved script does. That's all I want. So what's the proper way to get this behavior when using `concurrent.futures`? Answer: It's kind of painful. Essentially, your worker threads have to be finished before your main thread can exit. You cannot exit unless they do. The typical workaround is to have some global state, that each thread can check to determine if they should do more work or not. Here's the [quote](https://hg.python.org/cpython/file/2805b0dca798/Lib/concurrent/futures/thread.py#l15) explaining why. In essence, if threads exited when the interpreter does, bad things could happen. Here's a working example. Note that C-c takes at most 1 sec to propagate because the sleep duration of the child thread. #!/usr/bin/env python from __future__ import print_function import concurrent.futures import time import sys quit = False def wait_a_bit(name): while not quit: print("{n} is doing work...".format(n=name)) time.sleep(1) def setup(): executor = concurrent.futures.ThreadPoolExecutor(max_workers=5) future1 = executor.submit(wait_a_bit, "Jack") future2 = executor.submit(wait_a_bit, "Jill") # main thread must be doing "work" to be able to catch a Ctrl+C # http://www.luke.maurits.id.au/blog/post/threads-and-signals-in-python.html while (not (future1.done() and future2.done())): time.sleep(1) if __name__ == "__main__": try: setup() except KeyboardInterrupt: quit = True
Sending binary data multiple times using Sockets in Java/Android Question: I need to send binary data multiple times with Java Sockets on Android devices. This is a simple object that exposes run() and send() methods. public class GpcSocket { private Socket socket; private static final int SERVERPORT = 9999; private static final String SERVER_IP = "10.0.1.4"; public void run() { new Thread(new ClientThread()).start(); } public int send(byte[] str) { try { final BufferedOutputStream outStream = new BufferedOutputStream(socket.getOutputStream()); outStream.write(str); outStream.flush(); outStream.close(); } catch (UnknownHostException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } return str.length; } class ClientThread implements Runnable { @Override public void run() { try { InetAddress serverAddr = InetAddress.getByName(SERVER_IP); socket = new Socket(serverAddr, SERVERPORT); } catch (UnknownHostException e1) { e1.printStackTrace(); } catch (IOException e1) { e1.printStackTrace(); } } } } For Android programming, I use Scaloid, and this is the code to send the binary data multiple times. `count` is given as a parameter. `result` is Byte[Array]-typed data, and `gpc` is initialized in the `onCreate()` method as `gpc = new GpcSocket()`. gpc.run() for (i <- 1 to count) { val length = gpc.send(result) toast(s"Sent: $i $length") } The issue is that even when I try to send data multiple times, the receiver receives only one packet. This is what is shown on the server (receiver) side when I send the information 5 times: 55:2015-03-21 03:46:51 86: <RECEIVED DATA> 10.0.1.27 wrote: 56:2015-03-21 03:46:51 0: 10.0.1.27 wrote: 57:2015-03-21 03:46:51 0: 10.0.1.27 wrote: 58:2015-03-21 03:46:51 0: 10.0.1.27 wrote: 59:2015-03-21 03:46:51 0: My questions are * Why is this? Why does the receiver receive only once, while the sender shows that it sends the information five times? * Do I need to call run() only one time before sending multiple times, or do I have to call run() whenever I use send()? This is the server side code in Python: import SocketServer from time import gmtime, strftime count = 1 class MyTCPHandler(SocketServer.BaseRequestHandler): """ The RequestHandler class for our server. It is instantiated once per connection to the server, and must override the handle() method to implement communication to the client. """ def handle(self): # self.request is the TCP socket connected to the client self.data = self.request.recv(1024).strip() print "{} wrote:".format(self.client_address[0]) global count count += 1 print "%d:%s %d:%s" % (count, strftime("%Y-%m-%d %H:%M:%S", gmtime()), len(self.data), self.data) # just send back the same data, but upper-cased self.request.sendall(self.data.upper()) if __name__ == "__main__": HOST, PORT = "10.0.1.4", 9999 # Create the server, binding to localhost on port 9999 server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler) # Activate the server; this will keep running until you # interrupt the program with Ctrl-C server.serve_forever() Answer: From this post: [Sending ByteArray using Java Sockets (Android programming)](http://stackoverflow.com/questions/29178724/sending-bytearray- using-java-sockets-android-programming) and [How to fix java.net.SocketException: Broken pipe?](http://stackoverflow.com/questions/2309561/how-to-fix-java-net- socketexception-broken-pipe), it seems that the send() method is wrong. 
### Close the resources when I'm done using them I modified send() to be simpler: it creates the socket and closes it when done. public int send(byte[] str) throws IOException { InetAddress serverAddr = InetAddress.getByName(SERVER_IP); socket = new Socket(serverAddr, SERVERPORT); out = socket.getOutputStream(); out.write(str); out.flush(); out.close(); socket.close(); return str.length; } ### Modification in the caller code The Scala code that calls it is modified to invoke the send() method. // gpc.run() for (i <- 1 to count) { ... val length = gpc.send(result) println(s"Sent: $i $length") } ... in initializer gpc = new GpcSocket() ### Change policy mode in Android As hinted in this post: [android.os.NetworkOnMainThreadException at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1145)](http://stackoverflow.com/questions/25093546/android- os-networkonmainthreadexception-at-android-os-strictmodeandroidblockgua), this code should be added for Android devices to prevent the `E/AndroidRuntime: FATAL EXCEPTION: main Process: scaloid.example, PID: 1791 android.os.NetworkOnMainThreadException` error. val policy = new StrictMode.ThreadPolicy.Builder().permitAll().build(); StrictMode.setThreadPolicy(policy);
How to implement recursive all possible combination of any set (JAVA) Question: Can someone give me a few clues or help with writing a combination function that outputs all possible combinations of a set? I have an idea, but I find it hard. Something like this in Java: String set[] = {"Java","C++","Python"}; ArrayList<String> list = new ArrayList<String>(); public void combination(String[] set) { if (set.length == 0) { do nothing } else if (this a combination from the set) { add to the list } } Answer: Here's a simple example of combinations of all letters in a String, just to give you an algorithm. See if you can transfer the algorithm to your example. public class StringLib { public static void combine(String str){ int length=str.length(); StringBuffer output=new StringBuffer(); combination(str,length,output,0); } static void combination(String str, int length, StringBuffer output, int level){ /* show arms-length recursion style with better performance */ if (level==length) return; else{ for(int i=level;i<length;i++){ output.append(str.charAt(i)); System.out.println(output.toString()); combination(str,length,output,i+1); output.deleteCharAt(output.length()-1); } } } public static void main(String[] args){ combine("abc"); } } /*Output: a ab abc ac b bc c */ Just in case you actually need permutations: import java.util.*; public class permutations { public static void main(String[] args) { String str = new String("abc"); StringBuffer output = new StringBuffer(); boolean used[] = {false, false, false}; permute.printPermutations(0, output, used, str); } } public class permute{ static void printPermutations(int level, StringBuffer output, boolean used[], String orig){ int len = orig.length(); if (level == len){ System.out.println(output); } else{ for (int i = 0; i < len; i++){ //recurse if not used already if (!used[i]){ output.append(orig.charAt(i)); used[i] = true; printPermutations(level+1, output, used, orig); output.deleteCharAt(output.length() - 1); used[i] = false; } } } } } /*Output: abc acb bac bca cab cba */
Flask Debugger not working under Windows Question: I am a newbie to Python attempting to experiment with sample code under Windows 8.1. On <http://flask.pocoo.org/docs/0.10/quickstart/> it says "if you enable debug support the server will reload itself on code changes, and it will also provide you with a helpful debugger". I have added `app.run(debug=True)` to the sample code on the above page. The server now reloads itself on code changes (as promised), but when I create a syntax error the "helpful debugger" does not show. Instead I get an error message in my command prompt. Any ideas why? I suspect the answer might be here [Can't enable debug mode in Flask](http://stackoverflow.com/questions/23552231/cant-enable-debug-mode-in- flask) but it is largely unintelligible to me. So far I have tried restarting my machine and putting the code in different locations. I am not in a forked environment (as best I know). For those that are curious, my source code is shown below: from flask import Flask app = Flask(__name__) from werkzeug.debug import DebuggedApplication app.wsgi_app = DebuggedApplication(app.wsgi_app, True) @app.route('/') def hello_world(): return 'Hello World! #note the deliberate syntax error if __name__ == '__main__': app.debug = True app.run() Answer: The debugger is for debugging exceptions in a syntactically valid program. Try raising an exception in your code, visit the URL in a browser window, and you will see the very helpful debugger. Syntax errors will be shown on the console, as you have seen. That's how it works. See also <http://flask.pocoo.org/snippets/21/>
Connect to socket in infinite loop with python Question: I have an ssh daemon running on my local machine. I want to connect to the ssh daemon in an infinite loop and echo the received data. Here is my script. [azatuni@noc python-tests]$ cat test.py #!/usr/bin/python import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) PORT = 22 DESTIP = '127.0.0.1' while 1 == 1: s.connect((DESTIP, PORT)) data = s.recv(1024) s.close() print data But when I run the script, I receive the following error. Without the loop, everything works fine. [azatuni@noc python-tests]$ ./test.py SSH-2.0-OpenSSH_5.3 Traceback (most recent call last): File "./test.py", line 9, in <module> s.connect((DESTIP, PORT)) File "<string>", line 1, in connect File "/usr/lib64/python2.6/socket.py", line 167, in _dummy raise error(EBADF, 'Bad file descriptor') socket.error: [Errno 9] Bad file descriptor Thanks beforehand. Answer: You need to create a new socket before connecting again; once a socket has been closed, it cannot be reused. This will work for you: #!/usr/bin/python import socket PORT = 22 DESTIP = '127.0.0.1' while 1 == 1: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((DESTIP, PORT)) data = s.recv(1024) s.close() print data Be aware that the code you've posted has errors in it: data = s.recv(102SSH-2.0-OpenSSH_5.3
scraping data using python Question: Hello, I'm new to this, but I wrote the following script to scrape the following standings <http://i.stack.imgur.com/98FPr.png> website: <http://www.bbc.com/sport/football/spanish-la-liga/table> I'm trying to print the position and team name. The team name prints fine, but for the position I keep getting None. Can anyone help me get over that please? import urllib2 from bs4 import BeautifulSoup url = "http://www.bbc.com/sport/football/spanish-la-liga/table" soup = BeautifulSoup(urllib2.urlopen(url).read()) for row in soup ("table" , {"class" : "table-stats"})[0].tbody("tr"): tds = row("td") print tds[1].string, tds[2].string Answer: You could have easily found out the problem with an interactive Python shell. The problem is that `tds[1]` (or in full, `soup("table", {"class":"table- stats"})[0].tbody("tr")[0]("td")[1]`, just to name one) has the form <td class="position"><span class="no-movement">No movement</span> <span class="position-number">1</span></td> and its `.string` attribute is `None`, because BeautifulSoup only sets `.string` when a tag has a single child; this `<td>` contains two spans. You can extract the actual number with `tds[1].contents[2].string`. Fully corrected script: #!/usr/bin/env python import urllib2 from bs4 import BeautifulSoup url = "http://www.bbc.com/sport/football/spanish-la-liga/table" soup = BeautifulSoup(urllib2.urlopen(url).read()) for row in soup ("table" , {"class" : "table-stats"})[0].tbody("tr"): tds = row("td") print tds[1].contents[2].string, tds[2].string Output: 1 Barcelona 2 Real Madrid 3 Valencia 4 Atl Madrid 5 Sevilla 6 Villarreal 7 Málaga 8 Ath Bilbao 9 Espanyol 10 Real Sociedad 11 Celta de Vigo 12 Rayo Vallecano 13 Getafe 14 Eibar 15 Elche 16 Almería 17 Deportivo de La Coruña 18 Levante 19 Granada CF 20 Córdoba
Python/Tkinter doesn't update label Question: I am developing a password program that uses Tkinter to make it nicer and I am having some issues. The Tkinter label does not update, yet it displays the new password in the IDLE. I am still learning so please dumb it down a little bit. If you need my source code, here it is: """Importations""" import os import pygame as pygame from pygame import mixer import random import string import sys from sys import platform as _platform import tkinter as tk from tkinter import ttk """Program Definitions""" #Color Definitions backgroundColor = ("#001F33") #Random password = ("") #Font Sizes LARGE_FONT = ("Times", 16) LARGE_FONT = ("Times", 14) NORMAL_FONT = ("Times", 12) SMALL_FONT = ("Times", 10) XXS_FONT = ("Times", 8) """Various Functions""" #Quit def quitprog(): sys.exit() """Class Setup""" #Window Setup class passwordGeneratorApp(tk.Tk): #Initialize def __init__(self, *args, **kwargs): #Container Setup tk.Tk.__init__(self, *args, **kwargs) tk.Tk.iconbitmap(self, default = "C:/Program Files/passGenerator/assets/images/programIcon.ico") tk.Tk.wm_title(self, "Random Password Generator") container = tk.Frame(self) container.pack(side = "top", fill = "both", expand = True) container.grid_rowconfigure(0, weight = 1) container.grid_columnconfigure(0, weight = 1) #MenuBar menubar = tk.Menu(container) #FileMenuBar filemenu = tk.Menu(menubar, tearoff = 1) filemenu.add_command(label = "Help", command = lambda: forhelp("For Help Contact the Following!")) filemenu.add_separator() filemenu.add_command(label = "Exit", command = quitprog) menubar.add_cascade(label = "File", state = "disabled", menu = filemenu) tk.Tk.config(self, menu = menubar) self.frames = {} #Pages to be Displayed for F in (welcomeScreen, generator): frame = F(container, self) self.frames[F] = frame frame.grid(row = 0, column = 0, sticky = "nsew") self.show_frame(welcomeScreen) #Show Frame def show_frame(self, cont): frame = self.frames[cont] frame.tkraise() """Pages""" #Welcome Screen class welcomeScreen(tk.Frame): #Initialize def __init__(self, parent, controller): tk.Frame.__init__(self, parent) welcomeScreen.configure(self, background = backgroundColor) #Generator def generatePass(): char_set = string.ascii_uppercase + string.digits password = (''.join(random.sample(char_set*6, 6))) print(password) #Openning text_file = open("C:/Program Files/passGenerator/assets/descriptions/welcomemsg.txt", "r") file = text_file.read() text_file.close() #Setups titleLabel = ttk.Label(self, text = "Random Password Generator", font = LARGE_FONT) msgWelcome = ttk.Label(self, text = file, font = NORMAL_FONT) generateButton = ttk.Button(self, text = "Generate", command = generatePass) viewButton = ttk.Button(self, text = "View Passcode", command = lambda: controller.show_frame(generator)) #Placement titleLabel.pack(pady = 10) msgWelcome.pack() generateButton.pack(pady = 5) viewButton.pack() #Generator class generator(tk.Frame): #Initialize def __init__(self, parent, controller): tk.Frame.__init__(self, parent) generator.configure(self, background = backgroundColor) char_set = string.ascii_uppercase + string.digits password = (''.join(random.sample(char_set*6, 6))) print(password) passwordcode = ("You're password is: %s" % password) #Setup titleLabel = ttk.Label(self, text = "Password Generator", font = LARGE_FONT) displayButton = ttk.Button(self, text = "Display Password", command = lambda: controller.show_frame(welcomeScreen)) passwordLabel = ttk.Label(self, text = passwordcode, font = NORMAL_FONT) #Placement 
        titleLabel.pack(pady = 5)
        displayButton.pack()
        passwordLabel.pack(pady = 5)

    pygame.mixer.init()
    app = passwordGeneratorApp()
    app.geometry("1280x720")
    app.mainloop()

NOTE: I am running Python 3.4 on Windows. Answer: The problem is that the new `generator` frame (you should capitalize class names, by the way) is only constructed once, in this line:

    frame = F(container, self)

where `F` is the `generator` class. This line runs once in the entire run of your script, so the label's value is never updated after the first password generation. You have two options:

1. You can destroy the two frames instead of hiding them and construct them again each time they need to be displayed.
2. You can keep your current system of raising frames and update the text of the `passwordLabel` widget whenever a new password is generated.
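To illustrate option 2, here is a minimal, self-contained sketch (not the question's full app; the widget names are illustrative) that binds the label to a `tkinter.StringVar`, so the label redraws itself whenever the variable is written and no frame ever has to be reconstructed:

    import random
    import string
    import tkinter as tk
    from tkinter import ttk

    def new_password():
        char_set = string.ascii_uppercase + string.digits
        return ''.join(random.sample(char_set * 6, 6))

    root = tk.Tk()
    password_text = tk.StringVar(value="No password yet")

    # A label bound to a StringVar refreshes automatically when the
    # variable changes; there is nothing else to update by hand.
    ttk.Label(root, textvariable=password_text).pack(pady=5)
    ttk.Button(root, text="Generate",
               command=lambda: password_text.set(
                   "Your password is: %s" % new_password())).pack()

    root.mainloop()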
Creating a form with Elm Question: I would like to create a form in Elm that takes 4 required inputs:

* 3 floating point values
* 1 input which can take the values of "long" or "short" (presumably this would be a drop-down)

When the values are entered, a computation occurs that yields a single line of output based on these values. I have this working as a command-line Python program:

    #!/usr/bin/env python

    from __future__ import print_function

    # core
    import logging

    # pypi
    import argh

    # local

    logging.basicConfig(
        format='%(lineno)s %(message)s',
        level=logging.WARN
    )

    def main(direction, entry, swing, atr):
        entry = float(entry)
        swing = float(swing)
        atr = float(atr)

        if direction == 'long':
            sl = swing - atr
            tp = entry + (2 * atr)
        elif direction == 'short':
            sl = swing + atr
            tp = entry - (2 * atr)

        print("For {0} on an entry of {1}, SL={2} and TP={3}".format(
            direction, entry, sl, tp))

    if __name__ == '__main__':
        argh.dispatch_command(main)

but want to use Elm to create a web UI for it. Answer: This isn't an ideal first introduction to Elm, since it takes you through a few of Elm's more difficult areas: mailboxes and the Graphics.Input library. There is also not an easy-to-deploy number input, so I've only implemented the dropdown; you'd use [Graphics.Input.Field](http://package.elm-lang.org/packages/elm-lang/core/2.1.0/Graphics-Input-Field) (which can get particularly dicey). A mailbox is basically a signal that the dropdown input can send a message to. I've chosen for that message to be a function, specifically one that produces a new state from the old one. We define state to be a record type (like a Python named tuple), so saving the direction involves storing the new direction in the record. The text is rendered using a convenience function that makes it `monospace`. There's a whole [text library](http://package.elm-lang.org/packages/elm-lang/core/2.1.0/Text) you can use, but Elm 0.15 does not have string interpolation, so you're stuck with the quite ugly appends. Finally, there's not much of an Elm community on SO, but you're welcome to join the [mailing list](https://groups.google.com/forum/#!forum/elm-discuss) where questions like this are welcomed. That being said, you really should try to dive into Elm and learn the basics so you can ask a more specific question. This is true of almost any language, library or framework - you have to do some basic exercises before you can write "useful" code.

    import Graphics.Input as Input
    import Graphics.Element as Element exposing (Element)

    mailbox = Signal.mailbox identity

    type Direction = Short | Long

    type alias State = {entry : Float, swing : Float, atr : Float, dir : Direction}

    initialState : State
    initialState = State 0 0 0 Short

    dropdown : Element
    dropdown =
      Input.dropDown
        (\dir -> Signal.message mailbox.address (\state -> {state| dir <- dir}))
        [("Short", Short), ("Long", Long)]

    state : Signal State
    state = Signal.foldp (<|) initialState mailbox.signal

    render : State -> Element
    render state =
      dropdown `Element.above`
      Element.show ("For "++(toString state.dir)++" on an entry of "++(toString state.entry) ++
        ", SL="++(toString (sl state))++" and TP="++(toString (tp state)))

    main = Signal.map render state

    -- the good news: these are separate, pure functions
    sl {entry, swing, atr, dir} = case dir of
      Long -> swing - atr
      Short -> swing + atr

    tp {entry, swing, atr, dir} = case dir of
      Long -> entry + (2*atr)
      Short -> entry - (2*atr)

Update: I wrote [an essay](https://gist.github.com/mgold/461dbf37d4d34767e5da) on mailboxes and how to use them.
Convert Levenshtein ratio to C++ Question: Is there a library for doing the following in C or C++? I don't mean a Python library that uses C or C++, but an actual C/C++ library:

    >>> import Levenshtein
    >>> ratio = Levenshtein.ratio('StackOver', 'Stackoverflow')
    0.7272727272727273

Answer: What about writing your own, based on the Wikibooks implementation? Here: <http://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Levenshtein_distance#C.2B.2B>

    #include <algorithm>
    #include <string>
    #include <vector>

    template <class T>
    unsigned int edit_distance(const T& s1, const T& s2)
    {
        const size_t len1 = s1.size(), len2 = s2.size();
        std::vector<std::vector<unsigned int> > d(len1 + 1, std::vector<unsigned int>(len2 + 1));

        d[0][0] = 0;
        for (unsigned int i = 1; i <= len1; ++i) d[i][0] = i;
        for (unsigned int i = 1; i <= len2; ++i) d[0][i] = i;

        for (unsigned int i = 1; i <= len1; ++i)
            for (unsigned int j = 1; j <= len2; ++j)
                d[i][j] = std::min(
                    std::min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                    d[i - 1][j - 1] + (s1[i - 1] == s2[j - 1] ? 0 : 1)
                );
        return d[len1][len2];
    }

Usage:

    unsigned int distance = edit_distance<std::string>("StackOver", "Stackoverflow");
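Note that `edit_distance` returns the raw distance, not the `ratio` from the question. Here is a sketch of the conversion, on the assumption (consistent with python-Levenshtein's documentation) that `ratio` is derived from an edit distance in which substitutions cost 2, via `(len1 + len2 - dist) / (len1 + len2)`:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Edit distance variant where substitutions cost 2 and insertions/
    // deletions cost 1 -- assumed to match what python-Levenshtein
    // uses internally when computing ratio().
    unsigned int edit_distance2(const std::string& s1, const std::string& s2)
    {
        const size_t len1 = s1.size(), len2 = s2.size();
        std::vector<std::vector<unsigned int> > d(len1 + 1, std::vector<unsigned int>(len2 + 1));
        for (unsigned int i = 0; i <= len1; ++i) d[i][0] = i;
        for (unsigned int j = 0; j <= len2; ++j) d[0][j] = j;
        for (unsigned int i = 1; i <= len1; ++i)
            for (unsigned int j = 1; j <= len2; ++j)
                d[i][j] = std::min(
                    std::min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                    d[i - 1][j - 1] + (s1[i - 1] == s2[j - 1] ? 0u : 2u));
        return d[len1][len2];
    }

    double ratio(const std::string& s1, const std::string& s2)
    {
        const double lensum = s1.size() + s2.size();
        return (lensum - edit_distance2(s1, s2)) / lensum;
    }

With these, `ratio("StackOver", "Stackoverflow")` evaluates to 16/22 ≈ 0.727, matching the Python output above.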
Selenium using python Question: I am trying to post something on <http://indianrailforums.in> using a Selenium script. I am able to log in and reach this page: <http://indiarailinfo.com/blog> using the Selenium script, but after I click the post button I am not able to send text to the text area. This is my code:

    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Firefox()
    driver.get("http://indiarailinfo.com")
    element = driver.find_element_by_name("e")
    element.send_keys("[email protected]")
    element = driver.find_element_by_name("p")
    element.send_keys("suvrit")
    element.submit()
    driver.get("http://indiarailinfo.com/blog")
    element = driver.find_element_by_link_text('Post')
    element.click()
    element = driver.find_element_by_xpath("/html/body/div[1]/div[5]/table/tbody/tr/td[1]/div[1]/div[1]/div/div/form/textarea")
    #element.sendKeys(Keys.HOME + "abc");
    #element = driver.find_element_by_name("TrainBlog")
    element.send_keys("suvrit")
    #driver.quit()

EDIT: Issue solved by using the submit button and calling driver.implicitly_wait(10) just before looking up the xpath. Answer: I was able to post with the PhantomJS driver. It should work with Firefox as well. My code is:

    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.PhantomJS()
    driver.get("http://indiarailinfo.com")
    element = driver.find_element_by_name("e")
    element.send_keys("[email protected]")
    element = driver.find_element_by_name("p")
    element.send_keys("suvrit")
    element.submit()
    driver.get("http://indiarailinfo.com/blog")
    element = driver.find_element_by_link_text('Post')
    element.click()
    element = driver.find_element_by_xpath("/html/body/div[1]/div[5]/table/tbody/tr/td[1]/div[1]/div[1]/div/div/form/textarea")
    print element.tag_name
    #element.sendKeys(Keys.HOME + "abc");
    #element = driver.find_element_by_name("TrainBlog")
    element.send_keys("chuh-pook")
    element.submit()
    #driver.quit()
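Since the question's imports already pull in `WebDriverWait` and `expected_conditions`, an explicit wait is a cleaner alternative to `driver.implicitly_wait(10)`. A sketch, reusing the question's `driver` and XPath:

    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    wait = WebDriverWait(driver, 10)
    # Blocks for up to 10 seconds until the textarea exists in the DOM,
    # then returns it; raises TimeoutException otherwise.
    element = wait.until(EC.presence_of_element_located(
        (By.XPATH, "/html/body/div[1]/div[5]/table/tbody/tr/td[1]/div[1]/div[1]/div/div/form/textarea")))
    element.send_keys("suvrit")
    element.submit()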
how to gfxdraw in a subclass of pygame.Surface [python/pygame] Question: Am I subclassing Surface the wrong way? The error says that gfxdraw.aacircle() requires a Surface as its 1st argument, but I can't figure out how to satisfy that. Runnable code and exception below:

    import pygame, sys, os
    from pygame.locals import *
    from pygame import gfxdraw

    BLACK = (0,0,0)
    WHITE = (255,255,255)

    class basicButton(pygame.Surface): # basic surface for buttons
        def __init__(self, size, radius):
            pygame.Surface.__init__(self, size=(size,size))
            self.fill(BLACK)
            # anti-aliased outline
            pygame.gfxdraw.aacircle(self, radius, radius, radius, WHITE)
            # color filled circle
            pygame.gfxdraw.filled_circle(self, radius, radius, radius, WHITE)
            # set black to be transparent
            self.set_colorkey(BLACK)

    # make an instance
    quitButtonSurf = basicButton(25, 12)

    pygame.init()
    screen = pygame.display.set_mode((800, 600))

    while True:
        screen.blit(quitButtonSurf)
        pygame.display.flip()

throws exception:

    pygame.gfxdraw.aacircle(self, radius, radius, radius, WHITE)
    TypeError: surface must be a Surface

Answer: Unfortunately this isn't EXACTLY what you were looking for, but it should work. The likely root cause is that `pygame.Surface` is implemented in C and sets up its underlying surface in its own constructor, so a subclass initialized this way may not end up as an object that pygame's drawing functions accept; holding the Surface as a plain attribute (composition) sidesteps the issue entirely.

    import pygame, sys, os
    from pygame.locals import *
    from pygame import gfxdraw

    pygame.init()

    BLACK = (0,0,0)
    WHITE = (255,255,255)

    class basicButton:
        def __init__(self, size, radius):
            self.image = pygame.Surface((size,size)) # as per your example, assuming a square
            self.image.fill(BLACK)
            pygame.gfxdraw.aacircle(self.image, radius, radius, radius, WHITE)
            pygame.gfxdraw.filled_circle(self.image, radius, radius, radius, WHITE)
            self.image.set_colorkey(BLACK)

    quitButtonSurf = basicButton(25, 12)

    screen = pygame.display.set_mode((800, 600))

    while True:
        screen.blit(quitButtonSurf.image,(0,0)) # you'll have to assign a location for this object, I set it to 0,0 simply for testing
        pygame.display.flip()

I hope this helped
Delete results of a grep search across files Question: I have a script which ultimately should grep a chunk of text from a file and delete it, then repeat this for every file in a directory. But the code I've written below does not include the delete step, because I'm not sure how to do that. [This post](http://stackoverflow.com/questions/4357499/python-delete-in-a-string) makes me think I should instead grep everything I want to keep and write that to a new file. I tried inserting `del p` after the `p = re.compile(ur'(.)(\D\D\D\D\D..........)')` line and that seemed to delete one of my input files. Suggestions?

    import sys
    import re
    import os

    PSGDirectory = "~/TestFolder"

    for path, dirs, files in os.walk(PSGDirectory):
        print path
        print dirs
        print files

    def process (files):
        p = re.compile(ur'(.)(\D\D\D\D\D..........)')
        return re.search(p, files)

    for root, dirs, files in os.walk(PSGDirectory):
        for filename in files:
            fullpath = os.path.join(root,filename)
            print process (fullpath)

Answer: Use `re.sub()` from the [docs](https://docs.python.org/2/library/re.html):

    import os
    import re

    def process(file):
        # Read the file first; opening with 'w' right away would truncate
        # it before it could be read.
        with open(file) as f:
            text = f.read()
        # Use your regex to grab all matching text.
        # For example, to remove all non-digits from a string:
        filtered = re.sub(r'\D', "", text)
        with open(file, 'w') as f:
            f.write(filtered)
        return filtered

    def main():
        # os.walk does not expand "~", so expand it explicitly
        PSGDirectory = os.path.expanduser("~/TestFolder")
        for root, dirs, files in os.walk(PSGDirectory):
            for filename in files:
                fullpath = os.path.join(root, filename)
                print process(fullpath)

    if __name__ == '__main__':
        main()
Creating an element with 'class' attribute throws a syntax error Question: When I try to do this with the `lxml` module:

    div = etree.SubElement(body, "div", class="hmi")

I get a:

    user@localhost:metk $ sudo python mbscan.py -r 192.168.0.0/24 --hmi
      File "mbscan.py", line 481
        div = etree.SubElement(body, "div", class="hmi")
                                                ^
    SyntaxError: invalid syntax

That's because `class` is a reserved keyword in Python. This, for instance:

    div = etree.SubElement(body, "div", hello="hmi")

works fine. I know there is another engine made specifically for `html` in `lxml`, but for the sake of keeping my `xml` and `html` report generators as similar as possible, I'm wondering if there isn't another workaround here. **How can I create an `xml` tag with a `class` attribute without upsetting Python?**

Answer: Define a dictionary of attributes and pass it as the `attrib` argument. For example:

    >>> from lxml.html import Element, tostring
    >>>
    >>> div = Element("div", attrib={"class": "hmi"})
    >>>
    >>> print(tostring(div))
    <div class="hmi"></div>

Note that I'm using `lxml.html` here, but the same idea applies for `etree`.
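If you'd rather keep using `SubElement` as in the question, the same `attrib` argument works there too. A quick sketch (the `body` element here is created from scratch for the example):

    from lxml import etree

    body = etree.Element("body")
    # attrib takes a plain dict, so reserved words like "class" are fine
    div = etree.SubElement(body, "div", attrib={"class": "hmi"})
    print(etree.tostring(body))  # b'<body><div class="hmi"/></body>'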
ImportError: No module named pygame and how to change the path of pygame? Question: Okay, so I am brand new at this and I really need for this to be dumbed down for me. My Python version is 2.7.9, I downloaded pygame-1.9.1.win32-py2.7.msi, and I am on a Windows computer. I really need someone to explain why this is not working. I was reading in some of the other questions that you have to change the path of Python, and I have no idea how to do that. I am literally screwed because I have to turn something in for a major grade and I have been trying to figure this out for about four weeks now. When I try to import pygame, I get this error message and I have absolutely no idea what I am doing wrong:

> Traceback (most recent call last): File
> "C:\Users\Tiffany\Documents\NetBeansProjects\NewPythonProject\src\newpythonproject.py",
> line 3, in import pygame ImportError: No module named pygame

Answer: There is really only one way to do this:

* Create a folder
* Put the Python Interpreter you are using in that folder
* Put the PyGame module you are using in that same folder

And your problem is now solved. I hope this helps you!
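If the error persists, it helps to confirm which interpreter is actually running the script and where it looks for modules. A small diagnostic, using only the standard library:

    import sys

    print(sys.executable)   # path of the Python interpreter running this script
    for p in sys.path:
        print(p)            # directories searched when resolving "import pygame"

pygame must be installed under one of those directories for that same interpreter; the .msi installer installs into one specific Python 2.7 installation, so if your IDE (NetBeans here) is configured to use a different interpreter, the import will fail.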
Simple animation with Tkinter Python Question: I've searched for simple animation code with Tkinter but I've found very different examples and I can't understand the correct way to write an animation. Here is my working code to display a simple moving circle:

    import tkinter as tk
    import time

    root=tk.Tk()
    canvas=tk.Canvas(root,width=400,height=400)
    canvas.pack()
    circle=canvas.create_oval(50,50,80,80,outline="white",fill="blue")

    def redraw():
        canvas.after(100,redraw)
        canvas.move(circle,5,5)
        canvas.update()

    canvas.after(100,redraw)
    root.mainloop()

In this code I can't quite understand: how does the after method work? Where should the update and move methods go (before the after method?)? Is there another way to write animation code? Could you post another example with commented code, please? Thanks :) Answer: **Calling `update`** You should not call `canvas.update()`. As a general rule of thumb you should _never_ call `update`. If you take out the call to `canvas.update()`, you have the proper way to do animation in a tkinter program. **Calling `after` to start the animation** You don't need to call `after` immediately before calling `root.mainloop()`. This works just as well:

    ...
    redraw()
    root.mainloop()

The choice to use or not use `after` in this specific case depends on whether you want the animation to start immediately (possibly even before the widget is visible) or after a short delay (possibly after the widget is made visible). **How `after` works** `mainloop` is nothing more than an infinite loop that checks the event queue for events. When it finds an event, it pops it off of the list and processes it. `after` is nothing more than making a request that says "in 100 ms, please add a new event to the queue". When the time limit expires, an event is added to the queue that says, in effect, "run this command". The next time the loop checks for events, it sees this event, pulls it off of the queue, and runs the command. When you call `after` from within a method that itself was called by `after`, you're saying in effect "wait 100 ms and do it again", creating an infinite loop. If you put the call to `after` before moving the object, you're saying "every 100 ms run this function". If you put it after, you're saying "run this function 100 ms after the last time it was run". The difference is very subtle and usually not perceptible unless your function takes a long time to run.
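Putting that advice together, a minimal corrected version of the script from the question (the only changes: the `update()` call is gone, and `redraw` is called once directly to start the loop):

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=400)
    canvas.pack()
    circle = canvas.create_oval(50, 50, 80, 80, outline="white", fill="blue")

    def redraw():
        canvas.move(circle, 5, 5)
        # re-schedule ourselves; mainloop takes care of the actual redraw
        canvas.after(100, redraw)

    redraw()
    root.mainloop()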
Node.js Python-shell: while true loop not working Question: I have this simple Python script that prints out a message every second:

    #!/usr/bin/python
    import time

    while True:
        print u"Message"
        time.sleep(1)

I'm trying to integrate a 3rd-party Python script with the above structure into Node.js using [python-shell](https://www.npmjs.com/package/python-shell). I'm using this JS script to get all messages from the Python script:

    var PythonShell = require('python-shell');

    var options = {
        scriptPath: './'
    };

    var pyshell = new PythonShell('test.py',options);

    pyshell.on('message', function (message) {
        // received a message sent from the Python script (a simple "print" statement)
        console.log(message);
    });

    // end the input stream and allow the process to exit
    pyshell.end(function (err) {
        if (err) throw err;
        console.log('finished');
    });

But it seems that the `while True` in Python causes the on-event never to be called. How can I solve this? Can I change the loop inside the Python script to something compatible with python-shell? Answer: You need to flush `sys.stdout`, as the output is buffered because it is piped:

    import time
    import sys

    while True:
        print u"Message"
        sys.stdout.flush()
        time.sleep(1)

You will receive the output immediately once flushed:

    $ nodejs n.js
    Message
    Message
    Message
    .....

You may be able to set the buffering to line-buffered or unbuffered when you start the shell, but I am not overly familiar with nodejs. There is actually a way: passing the `-u` flag through the `pythonOptions` option gets you unbuffered output:

    var PythonShell = require('python-shell');

    var pyshell = new PythonShell('test.py',{scriptPath:"./", pythonOptions: ['-u']});

    pyshell.on('message', function (message) {
        // received a message sent from the Python script (a simple "print" statement)
        console.log(message);
    });

    // end the input stream and allow the process to exit
    pyshell.end(function (err) {
        if (err) throw err;
        console.log('finished');
    });

The output will be unbuffered, so there will be no need to flush stdout.
Exception in thread "main" java.lang.NoClassDefFoundError launch error Question: I have the typical error in Java. I have the next structure: bin/ lib/ src/ junior/ databases/ homework/Main.java My **Main.java** code is: package junior.databases.homework; import java.sql.*; public class Main { private static Connection connection = null; public static void main(String[] args) throws SQLException, ClassNotFoundException { initDatabase(); System.out.println("Done"); } private static void initDatabase() throws SQLException, ClassNotFoundException { Class.forName("org.postgresql.Driver"); connection = DriverManager.getConnection( "jdbc:postgresql://192.168.136.129:5432/postgres", "postgres", "xxxx"); } } When I launch it like this: root@debian:/python_codes/Junior/Level1/DB1/ORM/java/java/src/junior/databases/homework# javac Main.java root@debian:/python_codes/Junior/Level1/DB1/ORM/java/java/src/junior/databases/homework#java Main I got error: Exception in thread "main" java.lang.NoClassDefFoundError: Main (wrong name: junior/databases/homework/Main) I found in this post the solution **[Exception in thread"main" java.lang.NoClassDefFoundError: Hello](http://stackoverflow.com/questions/22431904/exception-in-thread-main- java-lang-noclassdeffounderror-hello)** And my try: root@debian:/python_codes/Junior/Level1/DB1/ORM/java/java/src# javac junior/databases/homework/Main.java root@debian:/python_codes/Junior/Level1/DB1/ORM/java/java/src# java junior.databases.homework.Main works perfectly. The problem is I can launch this code only from src/ folder((( Is there any way I can launch it from **/src/junior/databases/homework** folder? I need to go back to src each time I want to launch the code. Answer: Here you are, assuming you are in `src/junior/databases/homework` directory: javac ../../../junior/databases/homework/Main.java java -cp ../../../ junior.databases.homework.Main
How can I make a multithreading queue-system for incoming lines from a socket in Python? Question: I'm fairly new to Python and I have code that receives information from a socket and processes each line one after another in a queue system. The problem is that if a line ever takes an unusual amount of time to process, all the other lines in the queue have to wait. How can I make it so that once a new line is received, it is processed in its own thread, with a new thread for every other line regardless of whether the first thread has finished? I have looked at many other examples with multithreading and queues but I just don't know how I can implement something like this into my code. Any suggestions? Here is how my queue system currently looks:

    while 1:
        readbuffer = readbuffer + s.recv(2048)
        temp = string.split(readbuffer, "\n")
        readbuffer = temp.pop()

        for line in temp:
            *code processing line*

Answer: For your requirement, the low-level threading API suffices. (For more ambitious tasks, see the [threading](https://docs.python.org/3.2/library/threading.html#module-threading) module.)

    try:
        from _thread import start_new_thread
    except ImportError:
        from thread import start_new_thread

    def process(line):
        #*code processing line*
        …

    for line in temp:
        start_new_thread(process, (line,))
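For completeness, the same pattern with the higher-level `threading` module looks like this (a sketch; the body of `process` stands in for the question's per-line work, and `temp` is the list from the question's loop):

    import threading

    def process(line):
        pass  # *code processing line*

    for line in temp:
        worker = threading.Thread(target=process, args=(line,))
        worker.daemon = True   # don't block interpreter exit on stuck lines
        worker.start()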
Turning an Access database into a web-based platform Question: This might be a newbie question, but I have been thinking about it for a while and here it is: Our company is interested in creating a web-based program to facilitate the reporting of its field activities. We have been using Microsoft Access as our reporting tool, but it has been struggling to keep up with our increasing operations. We currently have a standard form that our chief field engineers fill out and email to our technical office, where the data are entered into the Access database. The form basically has information regarding our daily operations. After the data are fed in, we get reports on how much work we have done and how much material is needed, etc. We are investigating ways to make this system more efficient. We want to:

1. Use SQL Server 2012 as our relational database, as we have invested in SQL for our ERP program.
2. Have our field engineers input data directly into the system. We are wasting a lot of time right now because the data is entered twice, once by the field engineers into the forms and once again by the people in our technical office into the database. By saving the time that is lost by entering the forms twice, the technical people can have more time to develop the web-based interface.
3. Be able to use road maps and maps in general, because we want to see where our operations are geographically. We are a road marking company, and being able to work with road maps would be convenient. I have no experience in this field, but if I can integrate something like Google Maps into the app, it would be great.
4. Have our web-based program run on multiple platforms. All our field engineers have PCs and they use Windows. Moreover, it would be a great plus if the application could work on iPhones and Android phones, as the field engineers would not need to take out their laptops in that case.
5. The system might not be comprehensive in the beginning, but if successful, we want something that we can build upon.
6. Because we are not a software company, we will only have 1-2 people developing the interface. So we want a stable, reliable, easy-to-develop system, where a field engineer can log in, enter data, see what he entered and make changes if necessary, and log out. We don't have demanding graphics requirements.

I have been given the task of starting such an endeavor and I am planning to start learning and writing the code myself. Our plan is to adopt a web-based system in a year's time. I will then either train the technical staff to develop the system, as they do with the Access interface right now, or we will hire a new software engineer to guide our people. My experience with coding is:

1. Introductory Java
2. Introductory C
3. Introductory Python
4. Intermediate SQL and algorithm knowledge due to messing around with the ERP software
5. Intermediate VBA knowledge due to reports I have created using ERP data

Based on our demands and my skill set, which programming language do you think would be the best fit? I might not have the right skills for the task, but I am an avid learner and will put a lot of effort into this. Thanks a lot for your guidance! Answer: There are many routes to go in deploying web applications, and you must choose the one that best fits your team's situation (training, skill set, resources). While I cannot enumerate the choices here, I can describe your migration needs and a few options.
As you may know, MS Access is a file-server database like SQLite, compared to its client-server counterparts such as MySQL, SQL Server, PostgreSQL, and Oracle, which scale to enterprise levels. Hence, Access is designed to serve small business solutions. But you can use its data storage (tables), data entry (forms), data processing (queries), and data output (reports) as a working model for your web app. The essence to understand is that Access is both a **FrontEnd** (forms, reports) and **BackEnd** (tables, queries) management system. Therefore, in transitioning to a web solution, you must substitute these components accordingly: 1) developing a frontend GUI system and 2) developing a backend database system. And the beauty of Access is that it can connect to all the aforementioned systems, which helps the migration process (import/export, linked tables, ODBC/OLEDB, etc.). Popular tandems to consider include:

1. **[PHP/MySQL](http://www.w3schools.com/php/php_mysql_connect.asp)** - a very cost-effective, open-source route, popular among webhosting providers such as BlueHost, HostGator, GoDaddy, etc. You can replicate your forms with HTML/CSS and run your update/append queries, form processing, and report output with PHP/MySQL. The Google Maps APIs support PHP among other languages (JavaScript, Python, etc.).
2. **[ASP.Net](http://www.asp.net/)/SQL Server** - another pair for web applications. ASP.Net is a web application framework built on the C#/VB.Net languages. Using the available consoles and IDEs, your team can recreate the user interface and write the backend programming. As you know, SQL Server and Access work great together, which facilitates the integration.
3. **WordPress/Joomla/Drupal ([Content Management Systems](http://code.tutsplus.com/articles/top-10-most-usable-content-management-systems--net-6493))** - an effective, streamlined deployment route: you manage your website via a control-panel dashboard, templates, plugins, and modules maintained by a community of developers on pre-built platforms. You can save design time with templates and save processing time with popular plugins, especially for form-to-database work.

Still other, less extensive routes you can consider:

1. **MS Access/Office365** - the newer [Microsoft Office 2010 and 2013](http://blogs.office.com/2012/07/30/get-started-with-access-2013-web-apps/) releases have been pushing the publishing of Access desktop databases as web databases. By providing a URL via an Office 365 subscription, your engineers and technical team can interact with the database through web browsers, so any user on a Mac, Linux, or a smartphone can work on the database. Some intricate VBA/macros would have to be made web-compatible. Consider this if a full-fledged website is not quite necessary but a simple web tool is.
2. **PDF forms/MS Access** - Adobe PDF Reader allows users to enter data in fillable forms. The Reader comes free as a downloadable desktop program and as a mobile tablet/phone app. You can design your forms in any format and even build trigger events and validation with the [Adobe PDF SDK JavaScript API](http://www.adobe.com/content/dam/Adobe/en/devnet/acrobat/pdfs/VBJavaScript.pdf). Your engineers can work in the field and submit their forms via email from any device. Once a form is received, you can use the API to upload its data into the database with Access VBA. Do note: designers need [Adobe Acrobat](https://helpx.adobe.com/acrobat/kb/create-fillable-pdf-forms-acrobat.html) to build fillable forms.
Finally, do understand that in moving to a web solution you must account for security, user access, responsive design, new technologies/best practices, and regular maintenance, over and above what a standalone Access database requires while it resides on your private, secured company network.