Mininet - Need custom tree topology script

Question: Can someone show me a Python script that creates a simple custom topology in Mininet, one that uses a tree topology with a depth and fanout of 2? It would be greatly appreciated.

Answer: Example:

    from mininet.topo import Topo


    class CustomTopo(Topo):
        def __init__(self, linkopts1=None, linkopts2=None, linkopts3=None,
                     fanout=2, **opts):
            Topo.__init__(self, **opts)
            self.fanout = fanout
            # default to empty dicts so the `topos` lambda below can call
            # CustomTopo() without arguments
            self.linkopts1 = linkopts1 or {}
            self.linkopts2 = linkopts2 or {}
            self.linkopts3 = linkopts3 or {}
            self.coreNumbering = 1
            self.aggNumbering = 1
            self.edgNumbering = 1
            self.hosNumbering = 1
            depth = 3
            self.createTreeTopo(depth, fanout)

        def createTreeTopo(self, depth, fanout):
            thisCore = depth == 3
            thisAggregation = depth == 2
            thisEdge = depth == 1
            if depth > 0:
                linkopts = dict()
                if thisCore:
                    node = self.addSwitch('c%s' % self.coreNumbering)
                    self.coreNumbering += 1
                    linkopts = self.linkopts1
                if thisAggregation:
                    node = self.addSwitch('a%s' % self.aggNumbering)
                    self.aggNumbering += 1
                    linkopts = self.linkopts2
                if thisEdge:
                    node = self.addSwitch('e%s' % self.edgNumbering)
                    self.edgNumbering += 1
                    linkopts = self.linkopts3
                for _ in range(fanout):
                    child = self.createTreeTopo(depth - 1, fanout)
                    self.addLink(node, child, **linkopts)
            else:
                node = self.addHost('h%s' % self.hosNumbering)
                self.hosNumbering += 1
            return node


    topos = {'custom': (lambda: CustomTopo())}
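To run the topology from the Mininet command line, point the `--custom` flag at the script and select the key defined in the `topos` dictionary (the file name `custom_topo.py` is just an assumed name for the script above):

    sudo mn --custom custom_topo.py --topo custom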
How to develop/include a Django custom reusable app in a new project? Are there some guidelines?

Question: Following the [tutorial on Django reusable apps](https://docs.djangoproject.com/en/1.8/intro/reusable-apps/), things work fine. But I have some questions about the process of developing and packaging a Django app.

1 - In the tutorial, the app is developed first within a project, then copy-pasted out into another folder for packaging, and then included again in the project via pip. Is this the way to develop Django apps? For example, if I have to include new features or fix bugs, should I make the changes in the project and then copy-paste them into the package folder outside the project?

2 - Assuming that 1 is not the only way to develop an app, I started creating a package folder for my app with this structure:

    django-myApp
    |--myApp
    |  |--models
    |     |--file1.py
    |     |--file2.py
    |--setup.py
    |--README.rst

After running `python3 setup.py sdist` and installing it with `pip3 install --user myApp.tar.gz`, I can successfully import my app from a new Django project shell. But when I run `python3 manage.py migrate`, tables for the models of myApp are not created. I guess it is because there is no migrations folder in the myApp package, and as far as I know the only way to create migrations is running `makemigrations` within a project. Or maybe I am missing something? Can I generate the initial migration module without having the app in a project?

3 - Finally, the question is: when developing an app, should I start a project, copy the app folder out for packaging, re-include it by installing it, and then continue developing in the package folder?

Thanks in advance for any comment or guidance.

P.S.: Sorry for my English; comments about it are also well-received.

**EDIT 1:** An example to highlight my doubt: after finishing the tutorial, the app source code is outside the project, and suppose I need to change the models. I can change them in the app folder, release a new version (e.g. 0.2) and install it. Now, how can I generate migrations for these changes? Should I always have a test project?

Answer: Additionally, a good workflow during development is to link the reusable app into your Django project. To easily achieve this, `pip install` has the [`-e, --editable`](https://pip.pypa.io/en/latest/reference/pip_install.html#install-editable) option, which in turn derives from the `setuptools` [Development mode](https://pythonhosted.org/setuptools/setuptools.html#development-mode).

**Reusable app:**

    django-myApp
    |--myApp
    |  |--models
    |     |--file1.py
    |     |--file2.py
    |--setup.py
    |--README.rst

**Django setup:**

    my-django-project
    |--my_django_project
    |  |--settings.py
    |  |--urls.py
    |  |--wsgi.py
    |  |--...
    |--manage.py

With your virtualenv activated you can now link your reusable app to the project by running:

    (myvenv) $ pip install --editable /path/to/django-myApp

Now, every change that you make in `django-myApp` is automatically reflected in `my-django-project` without the need to build/package the reusable app first. This becomes convenient in many use cases. E.g. imagine developing the app to be compatible with `Python 2.x` and `Python 3.x`. With linking, you can install the app into two (or more) different Django setups and run your tests.
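As for the migrations question: you can generate them without a full test project by configuring settings in a small standalone script. A minimal sketch, assuming it is run from the `django-myApp` folder so that `myApp` is importable (the in-memory sqlite database is an arbitrary placeholder):

    # make_migrations.py -- run with: python make_migrations.py
    import django
    from django.conf import settings
    from django.core.management import call_command

    # the bare minimum of settings makemigrations needs
    settings.configure(
        INSTALLED_APPS=["myApp"],
        DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3",
                               "NAME": ":memory:"}},
    )
    django.setup()

    call_command("makemigrations", "myApp")

The generated `migrations/` folder then ships inside the package, so `manage.py migrate` in any consuming project can create the tables.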
Portfolio rebalancing with bandwidth method in python

Question: We need to calculate a continuously rebalanced portfolio of 2 stocks. Let's call them A and B. They shall both have an equal part of the portfolio. So if I have $100 in my portfolio, $50 gets invested in A and $50 in B. As both stocks perform very differently, they will not keep their equal weights (after 3 months A may already be worth $70 while B dropped to $45). The problem is that they have to keep their share of the portfolio within a certain bandwidth of tolerance. This bandwidth is 5%. So I need a function that does: if `A > B*1.05` or `A*1.05 < B`, then rebalance.

This first part serves only to get some data as fast as possible so we have a common basis for discussion and comparable results; you can just copy and paste this whole code and it works for you:

    import pandas as pd
    from datetime import datetime
    import numpy as np

    df1 = pd.io.data.get_data_yahoo("IBM",
                                    start=datetime(1970, 1, 1),
                                    end=datetime.today())
    df1.rename(columns={'Adj Close': 'ibm'}, inplace=True)

    df2 = pd.io.data.get_data_yahoo("F",
                                    start=datetime(1970, 1, 1),
                                    end=datetime.today())
    df2.rename(columns={'Adj Close': 'ford'}, inplace=True)

    df = df1.join(df2.ford, how='inner')
    del df["Open"]
    del df["High"]
    del df["Low"]
    del df["Close"]
    del df["Volume"]

Now we start to calculate the relative performance of each stock with the formula `df.ibm/df.ibm[0]`. The problem is that as soon as we break the first bandwidth, we need to reset the 0 in our formula `df.ibm/df.ibm[0]`, since we rebalance and need to start calculating from that point on. So we use `df.d` as a placeholder and set it equal to `df.t` as soon as a bandwidth gets broken. `df.t` basically just counts the length of the dataframe and can therefore always tell us "where we are". So here the actual calculation starts:

    tol = 0.05  # setting the bandwidth tolerance
    df["d"] = 0
    df["t"] = np.arange(len(df))
    tol = 0.3

    def flex_relative(x):
        if df.ibm/df.ibm.iloc[df.d].values < df.ford/df.ford.iloc[df.d].values * (1+tol):
            return df.iloc[df.index.get_loc(x.name) - 1]['d'] == df.t
        elif df.ibm/df.ibm.iloc[df.d].values > df.ford/df.ford.iloc[df.d].values * (1+tol):
            return df.iloc[df.index.get_loc(x.name) - 1]['d'] == df.t
        else:
            return df.ibm/df.ibm.iloc[df.d].values, df.ford/df.ford.iloc[df.d].values

    df["ibm_performance"], df["ford_performance"], = df.apply(flex_relative, axis=1)

The problem is that I am getting this error from the last line of code, where I try to apply the function with `df.apply(flex_relative, axis=1)`:

    ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', u'occurred at index 1972-06-01 00:00:00')

The problem is that none of the options given in the error message solves my problem, so I really don't know what to do... The only thing I found so far is the link below, but calling an R function won't work for me because I need to apply this to quite big datasets, and I may also implement an optimization in this function, so it definitely needs to be built in Python.
Here is the link anyway: [Finance Lib with portfolio optimization method in python](http://stackoverflow.com/questions/4119054/finance-lib-with-portfolio-optimization-method-in-python)

Manually (which is not a good way to handle big data), I calculated that the first date for a rebalancing would be `03.11.1972 00:00:00`. The output of the dataframe at the first rebalancing should look like this:

                         ibm       ford      d    t    ibm_performance  ford_performance
    1972-11-01 00:00:00  6,505655  0,387415  0    107  1,021009107      0,959552418
    1972-11-02 00:00:00  6,530709  0,398136  0    108  1,017092172      0,933713605
    1972-11-03 00:00:00  6,478513  0,411718  0    109  1,025286667      0,902911702  # the day the rebalancing was detected
    1972-11-06 00:00:00  6,363683  0,416007  109  110  1,043787536      0,893602752  # the day the rebalancing is implemented, so df.d is set = df.t = 109
    1972-11-08 00:00:00  6,310883  0,413861  109  111  1,052520384      0,898236364
    1972-11-09 00:00:00  6,227073  0,422439  109  112  1,066686226      0,879996875

Thanks a lot for your support!

@Alexander: Yes, the rebalancing will take place the following day.

@maxymoo: If you implement this code after yours, you get the portfolio weights of each stock, and they don't stay between 45% and 55%. They rather range between 25% and 75%:

    df["ford_weight"] = df.ford_prop*df.ford/(df.ford_prop*df.ford+df.ibm_prop*df.ibm)  # calculating the actual portfolio weights
    df["ibm_weight"] = df.ibm_prop*df.ibm/(df.ford_prop*df.ford+df.ibm_prop*df.ibm)

    print df
    print df.ibm_weight.min()
    print df.ibm_weight.max()
    print df.ford_weight.min()
    print df.ford_weight.max()

I tried for an hour or so to fix it, but didn't manage. Can I do anything to make this question clearer?

Answer: The main idea here is to work in terms of dollars instead of ratios. If you keep track of the number of shares and the relative dollar values of the ibm and ford shares, then you can express the criterion for rebalancing as

    mask = (df['ratio'] >= 1+tol) | (df['ratio'] <= 1-tol)

where the ratio equals

    df['ratio'] = df['ibm value'] / df['ford value']

and `df['ibm value']` and `df['ford value']` represent actual dollar values.

* * *

    import datetime as DT
    import numpy as np
    import pandas as pd
    import pandas.io.data as PID

    def setup_df():
        df1 = PID.get_data_yahoo("IBM",
                                 start=DT.datetime(1970, 1, 1),
                                 end=DT.datetime.today())
        df1.rename(columns={'Adj Close': 'ibm'}, inplace=True)

        df2 = PID.get_data_yahoo("F",
                                 start=DT.datetime(1970, 1, 1),
                                 end=DT.datetime.today())
        df2.rename(columns={'Adj Close': 'ford'}, inplace=True)

        df = df1.join(df2.ford, how='inner')
        df = df[['ibm', 'ford']]
        df['sh ibm'] = 0
        df['sh ford'] = 0
        df['ibm value'] = 0
        df['ford value'] = 0
        df['ratio'] = 0
        return df

    def invest(df, i, amount):
        """
        Invest amount dollars evenly between ibm and ford
        starting at ordinal index i.
        This modifies df.
        """
        c = dict([(col, j) for j, col in enumerate(df.columns)])
        halfvalue = amount/2
        df.iloc[i:, c['sh ibm']] = halfvalue / df.iloc[i, c['ibm']]
        df.iloc[i:, c['sh ford']] = halfvalue / df.iloc[i, c['ford']]
        df.iloc[i:, c['ibm value']] = (
            df.iloc[i:, c['ibm']] * df.iloc[i:, c['sh ibm']])
        df.iloc[i:, c['ford value']] = (
            df.iloc[i:, c['ford']] * df.iloc[i:, c['sh ford']])
        df.iloc[i:, c['ratio']] = (
            df.iloc[i:, c['ibm value']] / df.iloc[i:, c['ford value']])

    def rebalance(df, tol, i=0):
        """
        Rebalance df whenever the ratio falls outside the tolerance range.
        This modifies df.
""" c = dict([(col, j) for j, col in enumerate(df.columns)]) while True: mask = (df['ratio'] >= 1+tol) | (df['ratio'] <= 1-tol) # ignore prior locations where the ratio falls outside tol range mask[:i] = False try: # Move i one index past the first index where mask is True # Note that this means the ratio at i will remain outside tol range i = np.where(mask)[0][0] + 1 except IndexError: break amount = (df.iloc[i, c['ibm value']] + df.iloc[i, c['ford value']]) invest(df, i, amount) return df df = setup_df() tol = 0.05 invest(df, i=0, amount=100) rebalance(df, tol) df['portfolio value'] = df['ibm value'] + df['ford value'] df['ibm weight'] = df['ibm value'] / df['portfolio value'] df['ford weight'] = df['ford value'] / df['portfolio value'] print df['ibm weight'].min() print df['ibm weight'].max() print df['ford weight'].min() print df['ford weight'].max() # This shows the rows which trigger rebalancing mask = (df['ratio'] >= 1+tol) | (df['ratio'] <= 1-tol) print(df.loc[mask])
python scrapy login redirecting problems

Question: I'm trying to use scrapy to crawl a website, but I'm not able to log in to my account through scrapy. Here is the spider code:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    from images.items import ImagesItem
    from scrapy.http import Request
    from scrapy.http import FormRequest
    from loginform import fill_login_form
    import requests
    import os
    import scrapy
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.shell import inspect_response


    class ImageSpider(BaseSpider):
        counter = 0
        name = "images"
        start_urls = ['https://poshmark.com/login']
        f = open('poshmark.txt', 'wb')
        if not os.path.exists('./Image'):
            os.mkdir('./Image')

        def parse(self, response):
            return [FormRequest("https://www.poshmark.com/login",
                                formdata={
                                    'login_form[username_email]': 'Oliver1234',
                                    'login_form[password]': 'password'},
                                callback=self.real_parse)]

        def real_parse(self, response):
            print 'you are here'
            rq = []
            mainsites = response.xpath("//body[@class='two-col feed one-col']/div[@class='body-con']/div[@class='main-con clear-fix']/div[@class='right-col']/div[@id='tiles']/div[@class='listing-con shopping-tile masonry-brick']/a/@href").extract()
            for mainsite in mainsites:
                r = Request(mainsite, callback=self.get_image)
                rq.append(r)
            return rq

        def get_image(self, response):
            req = []
            sites = response.xpath("//body[@class='two-col small fixed']/div[@class='body-con']/div[@class='main-con']/div[@class='right-col']/div[@class='listing-wrapper']/div[@class='listing']/div[@class='img-con']/img/@src").extract()
            for site in sites:
                r = Request('http:' + site, callback=self.DownLload)
                req.append(r)
            return req

        def DownLload(self, response):
            str = response.url[0:-3]
            self.counter = self.counter + 1
            str = str.split('/')
            print '----------------Image Get----------------', self.counter, str[-1], 'jpg'
            imgfile = open('./Image/' + str[-1] + "jpg", 'wb')
            imgfile.write(response.body)
            imgfile.close()

And I get the command window output below:

    C:\Python27\Scripts\tutorial\images>scrapy crawl images
    C:\Python27\Scripts\tutorial\images\images\spiders\images_spider.py:14: ScrapyDeprecationWarning: images.spiders.images_spider.ImageSpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider.
    (warning only on first subclass, there may be others)
      class ImageSpider(BaseSpider):
    2015-06-09 23:43:29-0400 [scrapy] INFO: Scrapy 0.24.6 started (bot: images)
    2015-06-09 23:43:29-0400 [scrapy] INFO: Optional features available: ssl, http11
    2015-06-09 23:43:29-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'images.spiders', 'SPIDER_MODULES': ['images.spiders'], 'BOT_NAME': 'images'}
    2015-06-09 23:43:29-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled item pipelines:
    2015-06-09 23:43:30-0400 [images] INFO: Spider opened
    2015-06-09 23:43:30-0400 [images] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2015-06-09 23:43:30-0400 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
    2015-06-09 23:43:30-0400 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
    2015-06-09 23:43:33-0400 [images] DEBUG: Crawled (200) <GET https://poshmark.com/login> (referer: None)
    2015-06-09 23:43:35-0400 [images] DEBUG: Redirecting (302) to <GET https://www.poshmark.com/feed> from <POST https://www.poshmark.com/login>
    2015-06-09 23:43:35-0400 [images] DEBUG: Redirecting (301) to <GET https://poshmark.com/feed> from <GET https://www.poshmark.com/feed>
    2015-06-09 23:43:36-0400 [images] DEBUG: Redirecting (302) to <GET https://poshmark.com/login?pmrd%5Burl%5D=%2Ffeed> from <GET https://poshmark.com/feed>
    2015-06-09 23:43:36-0400 [images] DEBUG: Redirecting (301) to <GET https://poshmark.com/login> from <GET https://poshmark.com/login?pmrd%5Burl%5D=%2Ffeed>
    2015-06-09 23:43:37-0400 [images] DEBUG: Crawled (200) <GET https://poshmark.com/login> (referer: https://poshmark.com/login)
    you are here
    2015-06-09 23:43:37-0400 [images] INFO: Closing spider (finished)
    2015-06-09 23:43:37-0400 [images] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 4213,
         'downloader/request_count': 6,
         'downloader/request_method_count/GET': 5,
         'downloader/request_method_count/POST': 1,
         'downloader/response_bytes': 9535,
         'downloader/response_count': 6,
         'downloader/response_status_count/200': 2,
         'downloader/response_status_count/301': 2,
         'downloader/response_status_count/302': 2,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 6, 10, 3, 43, 37, 213000),
         'log_count/DEBUG': 8,
         'log_count/INFO': 7,
         'request_depth_max': 1,
         'response_received_count': 2,
         'scheduler/dequeued': 6,
         'scheduler/dequeued/memory': 6,
         'scheduler/enqueued': 6,
         'scheduler/enqueued/memory': 6,
         'start_time': datetime.datetime(2015, 6, 10, 3, 43, 30, 788000)}
    2015-06-09 23:43:37-0400 [images] INFO: Spider closed (finished)

You can see that it is redirected to ./feed from ./login at the beginning, which seems like a successful login, but then it is redirected back to ./login in the end. Any ideas about what might be causing it?

Answer: When you log onto a website, it stores (depending on the method of authentication it could be different) some sort of token in the user's session.
The problem you are having is that while you are getting authenticated properly, your session data (the way the browser is able to tell the server you are logged in and you are who you say you are) isn't being saved.

The person in this thread seems to have managed to do what you are seeking to do here: [Crawling with an authenticated session in Scrapy](http://stackoverflow.com/questions/5851213/crawling-with-an-authenticated-session-in-scrapy) and here: [Using Scrapy with authenticated (logged in) user session](http://stackoverflow.com/questions/5850755/using-scrapy-with-authenticated-logged-in-user-session)
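A common pattern that helps here is to build the login request from the login page itself, so that any hidden form fields (CSRF tokens and the like) are carried along. A minimal sketch, assuming the field names from the question:

    from scrapy.http import FormRequest

    def parse(self, response):
        # from_response copies hidden <input> values from the page's form,
        # which a hand-built POST to /login misses
        return FormRequest.from_response(
            response,
            formdata={'login_form[username_email]': 'Oliver1234',
                      'login_form[password]': 'password'},
            callback=self.real_parse)

`FormRequest.from_response` is part of the standard Scrapy API; whether the site needs such a token is something you'd confirm by inspecting the login form's HTML.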
Reusing connections in Django with Python Requests

Question: What's the correct way of reusing [Python Requests](http://docs.python-requests.org/en/latest/api/) connections in Django across multiple HTTP requests? This is what I'm doing currently:

    import requests

    def do_request(data):
        return requests.get('http://foo.bar/', data=data, timeout=4)

    def my_view1(request):
        req = do_request({x: 1})
        ...

    def my_view2(request):
        req = do_request({y: 2})
        ...

So, I have one function that makes the request. This function is called in various Django views. The views get called in separate HTTP requests by users. My question is: does Python Requests automatically [reuse the same connections](http://docs.python-requests.org/en/latest/user/advanced/#keep-alive) (via urllib3 connection pooling)?

Or do I have to first create a Requests session object to work with?

    s = requests.Session()

    def do_request(data):
        return s.get('http://foo.bar/', data=data, auth=('user', 'pass'), timeout=4).text

And if so, does the session object have to be created in global scope, or should it be inside the function?

    def do_request(data):
        s = requests.Session()
        return s.get('http://foo.bar/', data=data, auth=('user', 'pass'), timeout=4).text

I can have multiple HTTP requests at the same time, so the solution needs to be thread safe... I'm a newbie to connection pooling, so I really am not sure, and the Requests docs aren't that extensive here.

Answer: Create a session, keep it alive by passing it through functions and returning it, or create the session object at global or class level so the latest state is maintained whenever it is referenced. It will work like a charm.
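For the Django case, a minimal sketch of the global-scope variant (hedged: the Requests docs don't formally guarantee `Session` thread safety, though its urllib3 connection pool is built for concurrent use, so test under your own load):

    import requests

    # created once at import time, shared by all views in this worker process
    session = requests.Session()
    session.auth = ('user', 'pass')

    def do_request(data):
        # note: query parameters on a GET belong in `params`;
        # `data` becomes the request body
        return session.get('http://foo.bar/', params=data, timeout=4).text

Creating the `Session` inside the function, by contrast, gives you a fresh connection pool per call and defeats the reuse you're after.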
Extract names around each word regex

Question: There is a string:

    Mary loves Mike,Jack loves Lily,Ethan loves Lydia

I want to extract the names around each `loves` with Python, but the code below doesn't work:

    names = re.search(r'(\S+) loves (\S+)', str)
    while names:
        print names.group(1)
        print names.group(2)

Answer:

    >>> import re
    >>> re.findall(r"([a-zA-Z]+) loves ([a-zA-Z]+)", "Mary loves Mike,Jack loves Lily,Ethan loves Lydia")
    [('Mary', 'Mike'), ('Jack', 'Lily'), ('Ethan', 'Lydia')]
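If you prefer match objects (for example, to keep the `group()` calls from the question), `re.finditer` walks the matches one by one instead of looping on a single `re.search` result; a small usage sketch:

    >>> s = "Mary loves Mike,Jack loves Lily,Ethan loves Lydia"
    >>> for m in re.finditer(r"([a-zA-Z]+) loves ([a-zA-Z]+)", s):
    ...     print m.group(1), m.group(2)
    Mary Mike
    Jack Lily
    Ethan Lydia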
How can the variable be used by other functions in TkFileDialog?

Question: I intended to write a GUI to import URL data and then process that data, so I have 2 buttons. Below is my code:

    from Tkinter import *
    import tkFileDialog

    root = Tk()
    root.title('Videos Episodes')
    root.geometry('500x300')

    def OpenFile():
        # import URLs data from local machine
        paths = tkFileDialog.askopenfilename()
        return paths

    def read_files(paths):
        # read data from the file chosen in OpenFile
        with open(paths) as myfile:
            data = myfile.read()
            return data

    Button(root, text='Input', command=OpenFile).pack()
    Button(root, text='Process', command=read_files).pack()

    root.mainloop()

My problem is that when the 'Process' button is clicked, this error happens:

    Exception in Tkinter callback
    Traceback (most recent call last):
      File "C:\Python27\lib\lib-tk\Tkinter.py", line 1532, in __call__
        return self.func(*args)
    TypeError: read_files() takes exactly 1 argument (0 given)

How can I fix the bug?

Answer: If you want to pass an argument (you didn't specify what), use a [lambda](http://www.secnetix.de/olli/Python/lambda_functions.hawk):

    Button(root, text='Process', command=lambda: read_files('whatever')).pack()

Perhaps this is what you wanted to do (?):

    Button(root, text='Process', command=lambda: read_files(OpenFile())).pack()

or alternatively, you meant to store the result of `OpenFile` (from clicking the other button) in a global variable, and pass that as the argument of `read_files`...?
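A sketch of that last idea, storing the chosen path so the two buttons cooperate (the mutable dict avoids a `global` statement; the names are illustrative):

    selected = {'path': None}

    def OpenFile():
        # remember the path chosen with the Input button
        selected['path'] = tkFileDialog.askopenfilename()

    def read_files():
        # use the remembered path; both callbacks now take no arguments
        if selected['path']:
            with open(selected['path']) as myfile:
                print myfile.read()

    Button(root, text='Input', command=OpenFile).pack()
    Button(root, text='Process', command=read_files).pack()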
How can I duplicate python 2 chr exactly in python 3

Question: Hi, I am trying to migrate some code to Python 3 but am having the following problem.

Python 2:

    >>> a = chr(217)
    >>> print a, type(a)
    � <type 'str'>

Python 3:

    >>> a = chr(217)
    >>> print(a, type(a))
    Ù <class 'str'>

The reason is that I have existing data created with the following code:

    from Crypto.Cipher import AES

    fle = open("data.dat", "wt")
    key = "password000*****"
    pwd = AES.new(key, AES.MODE_ECB)
    for dat in ("dataname001*****", "dataname002*****"):
        txt = ""
        hsh = pwd.encrypt(dat)
        for c in hsh:
            txt = txt + str(ord(c)) + ","
        fle.write("%s\n" % txt[:-1])
    fle.close()

I need to read the data, which I can do using Python 2 but not Python 3:

    fle = open("data.dat", "rt")
    for dat in fle:
        txt = ""
        for c in dat.split(","):
            txt = txt + chr(int(c))
        print(pwd.decrypt(txt))

Answer: [`chr` in Python 3.x](https://docs.python.org/3/library/functions.html#chr) is similar to [`unichr` in Python 2.x](https://docs.python.org/2/library/functions.html#unichr). If you want a byte string, use [`bytes`](https://docs.python.org/3/library/functions.html#bytes):

    >>> bytes([217])
    b'\xd9'

or [`bytearray`](https://docs.python.org/3/library/functions.html#bytearray):

    >>> bytearray([217])
    bytearray(b'\xd9')
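Applied to the read loop from the question, a Python 3 sketch (assuming the same `data.dat` format and key) builds a bytes object directly from the stored ordinals:

    fle = open("data.dat", "rt")
    for dat in fle:
        # bytes(iterable_of_ints) reproduces what the Python 2
        # chr() concatenation produced
        txt = bytes(int(c) for c in dat.split(","))
        print(pwd.decrypt(txt))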
Python thread memory layout (in combination with boost::python)

Question: I have a boost::python application written in C++. This code is compiled into a binary that also includes the Python interpreter. The binary is then called with a Python script that imports the C++ module:

    ./c++executable script.py

Now I would like to parallelize the code using Python threads: in the Python code I want to create threads which then (among other things) call functions written in C++. I cannot, however, find information about the memory layout used by the Python threads:

* Will each thread have its own defined memory section to use, or will different threads try to allocate memory in the same memory section?
* Provided each thread gets its own (deep copies of) C++ objects, will there be any interference between the threads?

This runs on a Linux OS. The application is compiled with the `-lpthread` flag, if that makes a difference. I would be grateful if anyone can shed some light on these questions.

Answer:

> Will each thread have its own defined memory section to use or will different threads try to allocate memory in the same memory section?

The whole point of threads is to share all memory.

> Provided each thread gets its own (deep copies of) C++ objects, will there be any interference between the threads?

That's your responsibility as the programmer -- to ensure that there isn't. Where threads are going to be working on the same data, and particularly where one thread might access something while another thread is modifying it, you have to use proper synchronization.

A piece of advice: if you're not familiar with threading in both C++ and Python, learn each of those first, before you try to use both languages, and threads, together.
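A minimal Python-side sketch of the kind of synchronization meant here, guarding a shared object with a lock:

    import threading

    lock = threading.Lock()
    shared = []

    def worker(item):
        with lock:              # only one thread mutates `shared` at a time
            shared.append(item)

On the C++ side the same idea applies with `std::mutex`. Also bear in mind that C++ functions called from Python threads hold the GIL for their duration unless they explicitly release it, which limits how much actual parallelism you get.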
Retrieve task result by id in Celery

Question: I am trying to retrieve the result of a task which has completed.

**This works**

    from proj.tasks import add

    res = add.delay(3, 4)
    res.get()       # 7
    res.status      # 'SUCCESS'
    res.id          # '0d4b36e3-a503-45e4-9125-cfec0a7dca30'

But I want to run this from another application. So I rerun the Python shell and try:

    from proj.tasks import add

    res = add.AsyncResult('0d4b36e3-a503-45e4-9125-cfec0a7dca30')
    res.status      # 'PENDING'
    res.get()       # Error

How can I retrieve the result?

Answer: This is due to RabbitMQ [not actually storing the results](http://celery.readthedocs.org/en/latest/userguide/tasks.html#rabbitmq-result-backend). If you need the ability to get the results later on, use redis or SQL as the result backend.
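A minimal configuration sketch (the redis URL is an assumption; adjust to your setup):

    # proj/celery.py
    from celery import Celery

    app = Celery('proj',
                 broker='amqp://guest@localhost//',
                 backend='redis://localhost:6379/0')

With a persistent result backend in place, `add.AsyncResult(task_id)` called from a second process finds the stored state instead of reporting `PENDING`.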
Difficulty installing python modules after installing anaconda

Question: I just started working with Anaconda. Earlier I was working with Python 2.7 on my system. I was writing a script for devices connected to my laptop via USB, and for this I needed the usb module/package. I initially tried doing it in Python 2.7. I installed using:

    easy_install libusb1

The output with this is (a screenshot is also there):

    Searching for libusb1
    Best match: libusb1 1.4.0
    Processing libusb1-1.4.0-py3.4.egg
    libusb1 1.4.0 is already the active version in easy-install.pth
    Using c:\users\eku\anaconda3\lib\site-packages\libusb1-1.4.0-py3.4.egg
    Processing dependencies for libusb1
    Finished processing dependencies for libusb1

`C:\users\eku\anaconda3\` is the path according to my system, whose name is eku. Installing with pip shows an error: `unknown command libusb1`.

Because I have installed the package before, the screenshot shows the correct result that the package is already installed. But the location is where my Anaconda site-packages are.

![enter image description here](http://i.stack.imgur.com/uVhP6.png)

Why is this occurring and how should I correct it? I want to keep both Anaconda and the other 2.7 version separate. (If this has something to do with the path variable, then yes, I am confused about that.)

As can be seen from the above output, libusb gets installed into Anaconda, so I tried running the same code in Spyder (in Anaconda). When I write

    import usb1

I get the error:

    ImportError: No module named 'usb1'

Why is this happening? My Spyder got installed with Anaconda itself. I simply click on its icon and the workspace launches. Nothing more I had to do; it started working and even my other files are working fine. Thanks!

Answer: I know it's a bit after the fact, but I had the same issue. I ended up searching my registry (I used a program called RegistryFinder; there are likely others) and finding that there was a registry value pointing to the Anaconda install directory. I deleted it and I was able to install things to my normal Python directory. I had previously uninstalled Anaconda and wasn't worried about having references to it, so you may want to save off those values.
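When two Python installs fight like this, it also helps to confirm which interpreter and search path each tool is actually using; a quick check you can run in both environments:

    import sys
    print(sys.executable)   # which python binary is running
    print(sys.prefix)       # which installation it belongs to
    for p in sys.path:
        print(p)            # where imports will be searched

If Spyder prints an Anaconda prefix while the package landed elsewhere (or vice versa), that mismatch is the import failure.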
Trying to download data from URL with CSV File

Question: I'm slightly new to Python and have a question as to why the following code doesn't produce any output in the `csv` file. The code is as follows:

    import csv
    import urllib2

    url = 'http://www.rba.gov.au/statistics/tables/csv/f17-yields.csv'
    response = urllib2.urlopen(url)
    cr = csv.reader(response)

    for row in cr:
        with open("AusCentralbank.csv", "wb") as f:
            writer = csv.writer(f)
            writer.writerows(row)

Cheers.

**Edit**: Brien and Albert solved the initial issue I had. However, I now have one further question. The CSV file I listed above, found on "<http://www.rba.gov.au/statistics/tables/#interest-rates>" under Zero-coupon "Interest Rates - Analytical Series - 2009 to Current - F17" (the F-17 Yields CSV), has 5 workbooks, and I actually just want to gather the data in the 5th workbook. Is there a way I could do this? Cheers.

Answer: I could only test my code using Python 3. However, the only difference should be [`urllib2`](https://docs.python.org/2/library/urllib2.html#urllib2.urlopen); hence I am using [`urllib.request`](https://docs.python.org/3.4/library/urllib.request.html#urllib.request.urlopen) for opening the desired URL. The variable `html` is of type `bytes` and can generally be written to a file in binary mode. Additionally, your source is a csv file already, so there should be no need to convert it somehow:

    #!/usr/bin/env python3
    # coding: utf-8

    import urllib.request

    url = 'http://www.rba.gov.au/statistics/tables/csv/f17-yields.csv'
    response = urllib.request.urlopen(url)
    html = response.read()

    with open('output.csv', 'wb') as f:
        f.write(html)
Need Clarification on Using OOP and DRY Method in Python

Question: I'm trying to keep my code clean by applying OOP and the DRY method; however, I found myself stuck with the following questions.

1) Since the **checkremote** and **backup** methods are dependent on the **sshlogin** method, is there another way to write this so that my object is fully initialized?

2) If there isn't a better way, where do I write the procedure for the PhoneBook class to execute in the following manner (1 - checklocal, 2 - sshlogin, 3 - checkremote, 4 - backup)? In main?

    class PhoneBook:
        def __init__(self, name, phone_number, birthdate, location):
            self.name = name
            self.phone_number = phone_number
            self.birthdate = birthdate
            self.location = location
            self.ssh = None

        def checklocal(self):
            # Check local phonebook for existing names
            pass

        def sshlogin(self):
            # SSH into remote server
            pass

        def checkremote(self):
            # Check remote phonebook for existing names
            pass

        def backup(self):
            # Backup remote phonebook
            pass

Answer: In this case, you probably want to use a context manager and the `with` keyword. Since using a `Phonebook` object requires a "set up" phase beforehand, you want to make sure that is handled correctly every time you use it. So you would be able to write code like this:

    with Phonebook(name, phone_number, birthdate, location) as phbk:
        # do stuff with the phonebook
        phbk.add(name, phone_number, birthdate, location)

All your setup-phase steps - checking the local copy, connecting (and disconnecting) the ssh session, checking the remote copy, backup, etc. - would happen "behind the scenes" as part of the setup/tear-down of the context manager (in other words, the `with` statement takes care of all that). This is similar to how you are supposed to use `open()`:

    with open('myfile') as file:
        lines = file.readlines()

Cleanly opening and closing the file happens "behind the scenes", automagically. This is likely what you want to happen with your phonebook. To get the context manager working, you use the Python `__enter__` and `__exit__` magic methods. It might look something like this:

    class PhoneBook:
        def __init__(self, name, phone_number, birthdate, location):
            self.name = name
            self.phone_number = phone_number
            self.birthdate = birthdate
            self.location = location
            self.ssh = None

        def sshlogin(self):
            # SSH into remote server
            pass

        def sshlogout(self):
            # SSH out of remote server
            pass

        def checklocal(self):
            # Check local phonebook for existing names
            pass

        def checkremote(self):
            # Check remote phonebook for existing names
            pass

        def backup(self):
            # Backup remote phonebook
            pass

        def __enter__(self):
            self.checklocal()
            self.sshlogin()
            self.checkremote()
            self.backup()
            return self

        def __exit__(self, cls, value, traceback):
            self.sshlogout()
            return False

As for your `Add` (and `Delete`) class, it really shouldn't be a class. It should probably either be:

* a method of the `Phonebook` class, like so:

        class Phonebook:
            def __init__(self):
                self.ssh = None

            def add(self, name, phone_number, birthdate, location):
                self.name = name
                self.phone_number = phone_number
                self.birthdate = birthdate
                self.location = location

or

* a generic function, like so:

        def Add(phbk, name, phone_number, birthdate, location):
            # add to remote phonebook
            pass

A further point: your `Phonebook` class isn't named very well. Based on the `__init__` method, it's more of a phonebook entry.
You may want to consider implementing your phonebook (as a context manager, as explained above) separately as a container of some kind to hold your entries (including the `add` and `delete` methods), and focus the `Phonebook` class you have (renamed to `Entry` or something) on being more of a phonebook `Entry` object that can be added to your new `Phonebook` object, in which case the add method might be more like this:

    def add(self, entry):
        try:
            self.entries.append(entry)
        except AttributeError:
            self.entries = [entry]

One other thing: I noticed you're using tab characters to define your classes, functions, and whatnot. This is [against the recommended practice](http://stackoverflow.com/questions/120926/why-does-python-pep-8-strongly-recommend-spaces-over-tabs-for-indentation) - use 4 spaces instead.

A further suggestion: for something as simple as my suggested `Entry` class, you might just want to use a `collections.namedtuple`, like so:

    from collections import namedtuple as nt

    Entry = nt('Entry', 'name phone_number birthdate location')

Now you can do stuff like this:

    e = Entry('rick', '8675309', '1/1/2001', '125 Sesame St')
    print(e.name)
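Pulling those threads together, a compact sketch of the container idea (the method bodies are placeholders for the SSH/backup steps described above):

    from collections import namedtuple

    Entry = namedtuple('Entry', 'name phone_number birthdate location')

    class Phonebook:
        def __init__(self):
            self.entries = []

        def __enter__(self):
            # checklocal / sshlogin / checkremote / backup would run here
            return self

        def __exit__(self, cls, value, traceback):
            # sshlogout would run here
            return False

        def add(self, entry):
            self.entries.append(entry)

    with Phonebook() as phbk:
        phbk.add(Entry('rick', '8675309', '1/1/2001', '125 Sesame St'))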
how to write the output of iostream to buffer, python3

Question: I have a program that reads data from the CLI via `sys.argv[]` and then writes it to a file. I would also like to display the output in a buffer. The [Python3 manual](https://docs.python.org/3/library/io.html) says to use `getvalue()`, but all I get are errors.

    import io
    import sys

    label = sys.argv[1]
    domain = sys.argv[2]
    ipv4 = sys.argv[3]
    ipv6 = sys.argv[4]

    fd = open(domain + ".external", 'w+')
    fd.write(label + "." + domain + ". IN AAAA " + ipv6 + "\n")

    output = io.StringIO()
    output.write('First line.\n')
    print('Second line.', file=output)

    # Retrieve file contents -- this will be
    # 'First line.\nSecond line.\n'
    contents = output.getvalue()

    # Close object and discard memory buffer --
    # .getvalue() will now raise an exception.
    output.close()

    print(fd)
    fd.getvalue()

error:

    # python3.4 makecustdomain.py bubba domain.com 1.2.3.4 '2001::1'
    <_io.TextIOWrapper name='domain.com.external' mode='w' encoding='US-ASCII'>
    Traceback (most recent call last):
      File "makecustdomain.py", line 84, in <module>
        fd.getvalue()
    AttributeError: '_io.TextIOWrapper' object has no attribute 'getvalue'

How do I output the data from the iostream write function to a buffer as well as to the file?

Answer: You used `open()` to open the file, so it isn't a StringIO object but a file-like object. To get the contents of the file after you write to it, open the file with `mode='w+'` and, instead of `fd.getvalue()`, do:

    fd.seek(0)
    var = fd.read()

This will put the contents of the file into `var`. It will also leave you at the beginning of the file, though, so be careful doing further writes.
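Applied to the question's `fd`, a short sketch:

    fd = open(domain + ".external", 'w+')
    fd.write(label + "." + domain + ". IN AAAA " + ipv6 + "\n")
    fd.seek(0)           # rewind to the start of the file
    var = fd.read()      # the full contents, also still on disk
    print(var)
    fd.close()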
How do I get flask-cors to return Access-Control-Allow-Origin on Google App Engine?

Question: My Python app does not throw any errors on Google App Engine, but the `Access-Control-Allow-Origin` header is never sent. How can I ensure that I am sending it with flask-cors?

    import MySQLdb
    import os
    import webapp2
    import json
    import flask
    from flask import Flask, jsonify, render_template, request
    from flask.ext.cors import CORS, cross_origin

    app = Flask(__name__)

    @app.route('/')
    @cross_origin()
    def do_search():
        if (os.getenv('SERVER_SOFTWARE') and
                os.getenv('SERVER_SOFTWARE').startswith('Google App Engine/')):
            db = MySQLdb.connect(unix_socket='/cloudsql/my-instance-name:storeaddressdb',
                                 db='store_locator', user='myuser', passwd='password')
        else:
            db = MySQLdb.connect(host='localhost', user='root')

        cursor = db.cursor()
        query = 'SELECT * FROM stores WHERE 1 LIMIT 5'
        cursor.execute(query)
        resp = jsonify(data=cursor.fetchall())
        db.close()  # close the connection before returning the response
        return resp

Answer: It appears that this code actually does work, although I do not see the `Access-Control-Allow-Origin` header when I use cURL to test the URL. The logs in Google App Engine show CORS selectively adding the header unless the client specifically skipped it. Also, the JavaScript client is no longer showing an error, and receives the data. It may not be the prettiest Python code, but it is functionally correct for my purposes.
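This likely also explains the cURL observation: CORS middleware generally keys off the `Origin` request header, which browsers send and plain cURL does not. To reproduce the browser behaviour from the command line (the URL below is a placeholder):

    curl -i -H "Origin: http://example.com" https://your-app.appspot.com/

and check the printed response headers for `Access-Control-Allow-Origin`.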
python pandas read_excel returns UnicodeDecodeError on describe()

Question: I love pandas, but I am having real problems with Unicode errors. read_excel() returns the dreaded Unicode error:

    import pandas as pd
    df = pd.read_excel('tmp.xlsx', encoding='utf-8')
    df.describe()

    ---------------------------------------------------------------------------
    UnicodeDecodeError                        Traceback (most recent call last)
    ...
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 259: ordinal not in range(128)

I figured out that the original Excel had `\xc2\xa0` (a non-breaking space) at the end of many cells, probably to avoid conversion of long digit strings to float. One way around this is to strip the cells, but there must be something better:

    for col in df.columns:
        df[col] = df[col].str.strip()

I am using anaconda 2.2.0 win64, with pandas 0.16.

Answer: Try

    df = pd.read_excel('tmp.xlsx', encoding='iso-8859-1')

If it still doesn't work, maybe try saving your Excel file as a csv and using `pd.read_csv`.
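As a one-pass alternative to the per-column stripping loop, `applymap` visits every cell; a sketch that guards against non-string cells (use `str` instead of `basestring` on Python 3):

    df = df.applymap(lambda x: x.strip() if isinstance(x, basestring) else x)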
How can I import my own modules on webserver?

Question: I am developing a website and want to put some files on the server. The problem is that I can import standard Python modules (e.g. `import os`) but, for some reason, cannot import my own modules:

test1.py:

    import test2

test2.py:

    print ("Content-type:text/html\n\n")
    print ("<html>")
    print ("<head>")
    print ("<title>Error</title>")
    print ("</head>")
    print ("<body>")
    print (" hello world 2")
    print ("</body>")
    print ("</html>")

If I open www.mywebsite.com/test2.py, I receive "hello world" on the screen. However, if I open www.mywebsite.com/test1.py, I get a "500 internal server error". I found out that it is some problem with not being able to import my modules.

P.S.: Since I am on a shared server, I cannot change the sys path etc....

Here is a trace I got in errors.log:

    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] mod_python (pid=17816, interpreter='delekulator.co.il', phase='PythonHandler', handler='mod_python.cgihandler'): Application error
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] ServerName: 'delekulator.co.il'
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] DocumentRoot: '/var/www/vhosts/mywebsite.com/httpdocs'
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] URI: '/test1.py'
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] Location: None
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] Directory: '/var/www/vhosts/mywebsite.com/httpdocs/'
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] Filename: '/var/www/vhosts/mywebsite.com/httpdocs/test1.py'
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] PathInfo: ''
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] Traceback (most recent call last):
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249]   File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch\n    default=default_handler, arg=req, silent=hlist.silent)
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249]   File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1229, in _process_target\n    result = _execute_target(config, req, object, arg)
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249]   File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1128, in _execute_target\n    result = object(arg)
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249]   File "/usr/lib/python2.6/dist-packages/mod_python/cgihandler.py", line 96, in handler\n    imp.load_module(module_name, fd, path, desc)
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249]   File "/var/www/vhosts/mywebsite.com/httpdocs/test1.py", line 1, in <module>\n    import test2
    [Thu Jun 11 00:18:09 2015] [error] [client 85.65.174.249] ImportError: No module named test2

**In the last line, you can see that the server writes "ImportError: No module named test2"**

How can I fix it?

Answer: Modules should be in the Python path or in the same directory as the executable. Then you import them as usual:

    import test2
    test2.some_function()  # or whatever
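Even on a shared host, you can extend the search path from inside the script itself, since modifying `sys.path` at runtime needs no server configuration rights; a minimal sketch:

    import os
    import sys
    # make the directory containing test1.py importable
    sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

    import test2

This only changes `sys.path` for the running process, so it sidesteps the "can't change the sys path" constraint, which presumably concerns the server-wide configuration.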
How do I make a decaying oscillating function in python?

Question: I have code in Python to represent the energy decay of a damped oscillator; it reads like this:

    def E(wt, Q):
        return (np.e**(-x/Q))*(1-(1/2*Q)*np.sin(2*x))

    x = np.linspace(0, 20, 1000)

    y0 = E(x, 2)
    y1 = E(x, 4)
    y2 = E(x, 8)
    y3 = E(x, 16)

    plt.plot(x, y0, 'p', label=r'$Q=2$')
    plt.plot(x, y1, 'r', label=r'$Q=4$')
    plt.plot(x, y2, 'g', label=r'$Q=8$')
    plt.plot(x, y3, 'b', label=r'$Q=16$')
    plt.xlabel(r'$wt$')
    plt.ylabel(r'$E$')
    plt.title(r"$E(t) -vs.- wt$")
    plt.show()

Yet it should look like this: <https://www.dropbox.com/s/o2mmmi8v6kdnn2v/good_graph.png?dl=0>

What am I doing wrong? I have the right function.

Answer:

**Fixed Equation**

    def E(wt, Q):
        return np.exp(-wt/float(Q)) * (1. - (1./2./float(Q))*np.sin(2.*wt))

**Your original equation**

    def E(wt, Q):
        return (np.e**(-x/Q))*(1-(1/2*Q)*np.sin(2*x))

**Errors**

1. **Unused Variables** -- You never use `wt`; the function reads the global `x` instead.

2. [**BODMAS**](http://en.wikipedia.org/wiki/Order_of_operations) -- You don't have the decay parameter set up correctly, so it will oscillate too much: `(1/2*Q)` when you mean `(1/2/Q)`.

3. **Integer Division** _(only for Python < 3)_ -- You divide by integers, which will floor your division. You need to convert the integer values to floats, e.g. `(1./2./float(Q))`.

4. **Color Parameter** -- You want to make a purple line by passing `p` in `plt.plot(x, y0, 'p', label=r'$Q=2$')`, but `p` creates the strange point-plot behaviour (it is a marker style). To fix this, pass in the color names explicitly, e.g. `plt.plot(x, y0, color='purple', label=r'$Q=2$')`.

![Edited Layout - see below](http://i.stack.imgur.com/h8z6d.png)

**Off Topic**

To make your title nicer:

    from matplotlib import rc
    rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    rc('text', usetex=True)

    plt.title(r"$E(t)$ vs. $w_t$")
    plt.xlabel(r'$w_t$')
    plt.ylabel(r'$E(t)$')

**Full Code**

    from matplotlib import rc
    import numpy as np
    import matplotlib.pyplot as plt

    def E(wt, Q):
        return np.exp(-wt/float(Q)) * (1. - (1./2./float(Q))*np.sin(2.*wt))

    x = np.linspace(0, 20, 1000)

    # set up fonts
    rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
    rc('text', usetex=True)

    # set up labels
    plt.title(r"$E(t)$ vs. $w_t$")
    plt.xlabel(r'$w_t$')
    plt.ylabel(r'$E(t)$')

    # plot and store plots in yList, zip color labels into the loop
    yList = []
    for i, color in zip(xrange(1, 5), ['purple', 'r', 'g', 'b']):
        y = E(x, 2**i)
        yList.append(y)
        plt.plot(x, y, color=color, label=r'$Q=%s$' % 2**i)

    plt.legend()
    plt.show()
how use ctypes with msvc*.dll from within matlab on windows

Question: I'm using WinPython (2.7) on Windows 7/64, MATLAB 2015a, with MATLAB's new [python bridge](http://www.mathworks.com/help/matlab/call-python-libraries.html).

    >> py.ctypes.util.find_library('c')
    ans =
      Python str with no properties.
        msvcr90.dll

    >> py.ctypes.util.find_msvcrt()
    ans =
      Python str with no properties.
        msvcr90.dll

    >> py.ctypes.CDLL(py.ctypes.util.find_library('c'))
    Python Error: [Error 1114] A dynamic link library (DLL) initialization routine failed

    >> x=CDLL('C:\Users\nlab\Downloads\WinPython-64bit-2.7.9.5\python-2.7.9.amd64\msvcr90.dll')
    Python Error: [Error 1114] A dynamic link library (DLL) initialization routine failed

A popup also comes up:

    Microsoft Visual C++ Runtime Library
    R6034 "an application has made an attempt to load the c runtime library incorrectly"

A [couple other](http://stackoverflow.com/a/14680947/1441998) [SO answers](http://stackoverflow.com/questions/27273594/run-time-error-r6034-when-compiling-from-the-command-prompt#comment43021210_27273594) suggest it's MATLAB putting its incompatible copy of msvc*.dll somewhere on the path, so I removed everything not from WinPython in sys.path (just MATLAB's `\bin\win64\` and the `\Python27\site-packages\` from another Python install I have):

    >> py.pprint.PrettyPrinter().pprint(py.sys.path)
    ['',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\python27.zip',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\DLLs',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\plat-win',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\lib-tk',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\site-packages',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\site-packages\\FontTools',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\site-packages\\win32',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\site-packages\\win32\\lib',
     'C:\\Users\\nlab\\Downloads\\WinPython-64bit-2.7.9.5\\python-2.7.9.amd64\\lib\\site-packages\\Pythonwin']

There are still tons of msvc*.dll sprinkled everywhere on the system, and surely some on the PATH:

    >> x = py.os.environ
    x =
      Python _Environ with properties:
        data: [1x1 py.dict]
        {'TMP': 'C:\\Users\\nlab\\AppData\\Local\\Temp', <<snip>>, 'USERPROFILE': 'C:\\Users\\nlab'}

    >> cellfun(@(s)fprintf('%s\n',s),strsplit(char(x{'PATH'}),';'))
    C:\Program Files\Haskell\bin
    C:\Program Files\Haskell Platform\2014.2.0.0\lib\extralibs\bin
    C:\Program Files\Haskell Platform\2014.2.0.0\bin
    C:\Users\nlab\Downloads\WinPython-64bit-2.7.9.5\python-2.7.9.amd64
    C:\Users\nlab\Downloads\opencv\build\x64\vc12\bin
    C:\ProgramData\Oracle\Java\javapath
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\bin\
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\libnvvp\
    C:\Program Files\ImageMagick-6.8.3-Q16
    C:\Program Files (x86)\OSSBuild\GStreamer\v0.10.7\bin
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.1\\bin
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.1\libnvvp\
    C:\Program Files (x86)\PHP\
    C:\Windows\system32
    C:\Windows
    C:\Windows\System32\Wbem
    C:\Program Files\Intel\DMIX
    C:\Program Files\TortoiseSVN\bin
    C:\Program Files\SlikSvn\bin\
    C:\Program Files\MySQL\MySQL Server 5.5\bin
    C:\Program Files (x86)\Common Files\Acronis\SnapAPI\
    C:\Program Files (x86)\PostgreSQL\9.2\bin
    C:\Python27
    C:\Python27\Scripts
    C:\Program Files\Microsoft SQL Server\110\Tools\Binn\
    C:\Windows\System32\WindowsPowerShell\v1.0\
    C:\Program Files (x86)\LilyPond\usr\bin
    C:\Program Files (x86)\Git\cmd
    C:\Program Files\Microsoft Windows Performance Toolkit\
    C:\Program Files\TortoiseGit\bin
    C:\Program Files (x86)\QuickTime\QTSystem\
    C:\Program Files (x86)\Skype\Phone\
    C:\Program Files\Mosek\7\tools\platform\win64x86\bin
    C:\Program Files\Haskell Platform\2014.2.0.0\mingw\bin
    C:\Users\nlab\AppData\Roaming\cabal\bin
    C:\Program Files (x86)\SSH Communications Security\SSH Secure Shell
    C:\Gtk+\bin

I notice that in `C:\Program Files\MATLAB\R2015a\bin\win64\` we only have `msvc[r|p][100|110].dll` -- does that mean it won't work with a distribution of Python based on msvcr90 like WinPython 2.7.9.5?

Answer: Define this as file `msvc.py`:

    import ctypes
    print ctypes.cdll.msvcrt

then in MATLAB:

    >> py.importlib.import_module('msvc')
    <CDLL 'msvcrt', handle fe190000 at 771c2b38>
    ans =
      Python module with properties:
        ctypes: [1x1 py.module]
        <module 'msvc' from 'msvc.pyc'>

    >> x = py.ctypes.cdll
    x =
      Python LibraryLoader with properties:
        msvcrt: [1x1 py.ctypes.CDLL]
        <ctypes.LibraryLoader object at 0x00000000771D6278>

    >> x.msvcrt
    ans =
      Python CDLL with no properties.
        <CDLL 'msvcrt', handle fe190000 at 771c2b38>

I don't know why it's necessary to use `msvc.py` -- just importing `ctypes` does not give you a `cdll` with an `msvcrt` property.
Processes don't terminate when I call Python script using crontab

Question: I have a script called yarn_monitor.py. When I run it, the program executes correctly, and when I look at the running processes using `ps -u myname`, everything is clear. But when I run yarn_monitor.py using cron:

    * * * * * /home/me/projects/yarn_monitor/yarn_monitor.py

I see several processes that don't quit. Here they are, repeated twice:

    19337 ?        00:00:00 yarn_monitor.py
    19338 ?        00:00:03 java
    19418 ?        00:00:00 sendmail
    19419 ?        00:00:00 postdrop
    20043 ?        00:00:00 yarn_monitor.py
    20046 ?        00:00:02 java
    20199 ?        00:00:00 sendmail
    20200 ?        00:00:00 postdrop

Eventually, as I let cron keep running the job, I get a Java GC out-of-memory error. As far as I know, I'm not using any Java in my process. Here are my imports:

    from __future__ import print_function
    import os
    import sys
    import re
    import time
    from pysnmp.entity.rfc3413.oneliner import ntforg
    from pysnmp.proto import rfc1902

Any idea what could be happening? Or ways to keep running this job while killing previously created processes?

Answer:

> It looks like the close isn't working correctly when run through cron, but it works fine when called manually.

Sounds to me like yarn is expecting some environment variables to be set which aren't set when run as a cron job. Try the following to debug this:

    * * * * * yarn application -list > /home/me/error_log 2>&1

Now wait 1 minute and look into `/home/me/error_log` to see what it's complaining about. This will give you a hint on what you need to do to fix your environment.
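If missing environment turns out to be the cause, you can set variables directly at the top of the crontab; a hedged sketch (the `PATH` and `JAVA_HOME` values are assumptions for your machine):

    PATH=/usr/local/bin:/usr/bin:/bin
    JAVA_HOME=/usr/lib/jvm/default-java
    * * * * * /home/me/projects/yarn_monitor/yarn_monitor.py >> /home/me/yarn_monitor.log 2>&1

Redirecting stdout/stderr to a log file also stops cron from mailing the output of every run, which is what the `sendmail`/`postdrop` processes in your `ps` listing come from.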
How to compare list of arrays in python?

Question: I want to compare two variables `input_items` and `temp` for equality. To give you an idea of their datatype, `print input_items` prints:

    [array([ 50.,   1.,   0., ...,   0.,   0.,   0.], dtype=float32), array([ 50.,  -2.,   0., ...,   0.,   0.,   0.], dtype=float32)]

What's the best way to do that in Python?

Answer: I suppose that [allclose](http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html) is good for your case, because you need to compare floats:

    import numpy as np

    a = np.arange(10)
    print a
    #array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    b = np.arange(10)
    print b
    #array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    print np.allclose(a, b)
    #True
    b[1] = 10
    print b
    #array([ 0, 10,  2,  3,  4,  5,  6,  7,  8,  9])
    print np.allclose(a, b)
    #False

To compare lists of arrays you can combine it with [all](https://docs.python.org/2/library/functions.html#all):

    a = [np.array([1, 2, 3]), np.array([1, 2, 3])]
    b = [np.array([1, 2, 3]), np.array([1, 2, 3])]
    all(np.allclose(x, y) for x, y in zip(a, b))  # True

    b = [np.array([1, 2, 3]), np.array([1, 2, 4])]
    all(np.allclose(x, y) for x, y in zip(a, b))  # False

P.S. Sorry for my poor English
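If you need exact (rather than approximate) equality, `np.array_equal` is the drop-in alternative:

    a = [np.array([1, 2, 3]), np.array([1, 2, 3])]
    b = [np.array([1, 2, 3]), np.array([1, 2, 3])]
    all(np.array_equal(x, y) for x, y in zip(a, b))  # True

For float32 data like `input_items`, though, the `allclose` version above is usually the safer choice.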
Substitute all xml text with beautifulsoup library

Question: I need to replace all the text in an XML file using the BeautifulSoup library in Python. For example, I have this piece of XML:

    <Paragraph>
      Procedure general informations
      <IntLink Target="il_0_mob_411" Type="MediaObject"/>
      <Strong>DIFFICULTY: </Strong>
      <IntLink Target="il_0_mob_231" Type="MediaObject"/>
      <IntLink Target="il_0_mob_231" Type="MediaObject"/>
      - <Strong>DURATION:</Strong> 15 min.<br/>
      <Strong>TOOLS REQUIRED:</Strong> 4mm Allen Key, Pin driver
    </Paragraph>

And I need it to become like this:

    <Paragraph>
      0
      <IntLink Target="il_0_mob_411" Type="MediaObject"/>
      <Strong>1</Strong>
      <IntLink Target="il_0_mob_231" Type="MediaObject"/>
      <IntLink Target="il_0_mob_231" Type="MediaObject"/>
      - <Strong>2</Strong>3<br/>
      <Strong>4</Strong>5
    </Paragraph>

Thank you!

Answer: Here's the code:

    # -*- coding: utf-8 -*-
    import HTMLParser
    import codecs
    import os
    import sys
    from bs4 import BeautifulSoup

    xml_doc = open("export_2.xml")
    soup = BeautifulSoup(xml_doc)
    pars = HTMLParser.HTMLParser()

    # truncate the output files before appending to them
    open('export.txt', 'w').close()
    open('export_ph.xml', 'w').close()

    counter = 0
    all_texts = soup.find_all(text=True)
    print "Starting export:"
    for text in all_texts:
        s = pars.unescape(text)
        s = str(counter) + ";" + s + "\n"
        if not (s == "" or s.isspace()):
            with codecs.open("export.txt", "a", encoding="utf-8") as file_text:
                file_text.write(s)
            counter = counter + 1
            print ".",

    ## put placeholders in the xml
    all_xml = soup.find_all()
    for text in all_xml:
        s = pars.unescape(text.get_text())
        with codecs.open("export_ph.xml", "a", encoding="utf-8") as file_xml:
            file_xml.write(s)

    file_xml_info = os.path.getsize('export_ph.xml')
    file_txt_info = os.path.getsize('export.txt')
    if (file_txt_info > 0 and file_xml_info > 0):
        print "\nExport complete: \nXML file: " + str(file_xml_info) + "B" + "\n3-column text file: " + str(file_txt_info) + "B"
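Note that the script above extracts the text; to actually substitute the running numbers back into the tree (as in the desired output), BeautifulSoup's `replace_with` does the in-place edit. A minimal sketch:

    counter = 0
    for node in soup.find_all(text=True):
        if node.strip():                    # skip whitespace-only text nodes
            node.replace_with(str(counter))
            counter += 1
    print(soup)

Each non-blank text node is replaced by its running index, leaving the tags untouched.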
Executing stored procedures in sqlalchemy

Question: I am using a raw_connection in sqlalchemy to execute some SQL that runs a stored procedure. The stored proc selects a parameter ID at the end. How can I catch this ID? The Python code is:

    import sqlalchemy as sa

    SQLCommand = """
    DECLARE @Msg varbinary(max) = CAST(%s AS varbinary(max))
    EXECUTE dbo.usp_procedure @Type = 32, @header = %s, @store = @Msg
    """ % (Datacontent, var)

    Engine = sa.create_engine(connection_string)
    Connection = Engine.raw_connection()
    cursor = Connection.cursor()
    cursor.execute(SQLCommand)
    return_key = list(cursor.fetchall())
    Connection.commit()
    Connection.close()

I thought `return_key` would contain the return code from usp_procedure, but it errors out and I get:

    No results. Previous SQL was not a query.

The procedure has as a final step:

    SELECT @ParamID

I want to be able to read this code back in my Python code.

Answer: I don't have `sql-server`, and I tested this only for `oracle`; however, this is too long for a comment.

I created this simple stored procedure:

    CREATE OR REPLACE PROCEDURE PROCEDURE1(inp IN VARCHAR2, outp OUT VARCHAR2) AS
    BEGIN
        IF (inp = 'true') THEN
            outp := '1';
            RETURN;
        END IF;
        IF (inp = 'false') THEN
            outp := '0';
            RETURN;
        END IF;
        outp := NULL;
        RETURN;
    END PROCEDURE1;

and tested it with the following code:

    Connection = c.manager.engine.raw_connection()
    cursor = Connection.cursor()

    result = cursor.callproc("PROCEDURE1", ['true', ''])
    print(result[1])
    result = cursor.callproc("PROCEDURE1", ['false', ''])
    print(result[1])

The results are 1 and 0 as expected. I've been browsing around and I'd expect that `callproc` is available for `sql-server`, e.g. [here](http://pymssql.org/en/latest/pymssql_examples.html#calling-stored-procedures), but honestly I'm not sure what `sqlalchemy` will be using. Give it a try; hope it helps.
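For the SQL Server case specifically, one frequently needed tweak is worth trying (an assumption to test, not a guaranteed fix): prepend `SET NOCOUNT ON;` to the batch, so the row-count messages the procedure emits don't arrive as extra result sets before the final `SELECT @ParamID` inside it:

    SQLCommand = """
    SET NOCOUNT ON;
    DECLARE @Msg varbinary(max) = CAST(%s AS varbinary(max))
    EXECUTE dbo.usp_procedure @Type = 32, @header = %s, @store = @Msg
    """ % (Datacontent, var)

    cursor.execute(SQLCommand)
    return_key = list(cursor.fetchall())  # the procedure's final SELECT @ParamID

Without `SET NOCOUNT ON`, drivers such as pyodbc often report "Previous SQL was not a query" because the first result set they see is a row count rather than your `SELECT`.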
Python - How to add a space string (' ') to the end of every line in a text/conf file

Question: I have a config file, and I would like to add a space (' ') to the end of every line in the file.

**File example:**

    #xxx configuration
    IPaddr = 1.1.1.1<add a space here>
    host = a.b.c.d<add a space here>
    usrname = aaa<add a space here>
    dbSID = xxx<add a space here>

I already have the number of lines in the file (using len), so I know how many times to repeat the loop that adds the space string. I also know how to open a file for reading and writing. Thank you all.

Answer: You can use [fileinput](https://docs.python.org/2/library/fileinput.html) with `inplace=True` to update the original file:

    import fileinput
    import sys

    for line in fileinput.input("in.txt", inplace=True):
        sys.stdout.write("{} \n".format(line.rstrip()))

Or use a [tempfile.NamedTemporaryFile](https://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile) with [shutil.move](https://docs.python.org/2/library/shutil.html#shutil.move):

    from tempfile import NamedTemporaryFile
    from shutil import move

    with open("in.txt") as f, NamedTemporaryFile("w", dir=".", delete=False) as temp:
        for line in f:
            temp.write("{} \n".format(line.rstrip()))

    move(temp.name, "in.txt")
UnicodeDecodeError when reading a text file

Question: I am a beginner to Python (I am using 3.4). This is the relevant part of my code:

    fileObject = open("countable nouns raw.txt", "rt")
    bigString = fileObject.read()
    fileObject.close()

Whenever I try to read this file I get:

    UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 82273: character maps to <undefined>

I have been reading around, and it seems to be something to do with my default encoding not matching the text file's encoding. I've read in another post that you can use this method to read a file with a specific encoding:

    import codecs
    f = codecs.open("file.txt", "r", "utf-8")

But you have to know the encoding in advance. The thing is, I don't know how the text file is encoded. A few posts suggested using Chardet. I've installed it, but I have no idea how to get it to read a text file.

Any ideas on how to get around this?

Answer: There is no need to use `codecs.open()`; that's advice for Python 2. In Python 3 `open()` takes an `encoding` argument:

    fileObject = open("countable nouns raw.txt", "rt", encoding='utf8')

This does require that you know what codec was used for the file, of course. Generally speaking there is no easy way for Python to figure that out; individual file formats may include codec information or have standardised on a given codec, but if all you have is a generic text file you'll have to figure out what created it and what codec that used to write the data.
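Since the question mentions Chardet: feeding it the raw bytes of the file gives a best-guess codec, which you can then pass to `open()`. A short sketch:

    import chardet

    with open("countable nouns raw.txt", "rb") as f:
        guess = chardet.detect(f.read())
    print(guess)  # e.g. {'encoding': 'UTF-8', 'confidence': 0.99, ...}

    fileObject = open("countable nouns raw.txt", "rt",
                      encoding=guess['encoding'])

Detection is heuristic, so treat the result as a hint rather than a guarantee.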
python3: convert string to type Question: I have a set of classes `A`, `B`, `C` in a "front end" package `p1`. They all inherit from `p1.X`. I have another set of classes `A`, `B`, `C` in a "back end" package `p2`. They all inherit from `p2.Y`. In `p1.X`, I set one backend, so that `p1.A` uses `p2.A` as backend, `p1.B` uses `p2.B`, etc. This mapping is done based on the class name in an inherited method. Now, I succeed in having, for example, `backend = "p2.A"` (string), but when I try to eval this, python doesn't know about `p2`, even though it is imported earlier. What did I do wrong? Should I import inside the eval? Sounds like spaghetti code... Do you have a better idea? Thanks. P.S.: I currently have something like this in the "parent" `p1.X` class, which is awful, but good to clarify what I want:

    def getBackendClass(self):
        myClass = ... # (class without package name)
        if myClass == "A":
            return p2.A
        elif myClass == "B":
            return p2.B
        ...

Answer: A slightly hacky solution, but it should work, and is not hardcoded. p2.py:

    class Y(object):
        @classmethod
        def fromString(cls, s):
            cls_name = s.split(".")[1]
            for c in cls.__subclasses__():
                if c.__name__ == cls_name:
                    return c()
            raise ValueError("%s not found in subclasses" % s)
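For completeness, a sketch of how the lookup would be used, with hypothetical backend classes added to `p2`:

    class A(Y):
        pass

    class B(Y):
        pass

    # Resolve the backend from the string built in p1.X:
    backend = Y.fromString("p2.A")   # returns an A() instance

Note that `cls.__subclasses__()` only lists direct subclasses, so a deeper inheritance tree would need a recursive walk.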
Running Psychopy window from a thread segfaults Question: I'm trying to run my Psychopy window from a separate thread and control what's shown on it from another one, but all I get is Fatal Python error. Here's a small example script that produces the same results as my larger one

    from threading import Thread
    from psychopy import visual, core
    import time

    class ThreadTest(Thread):

        def __init__(self):
            Thread.__init__(self)
            self.text='Test'
            self.running = 1
            self.start()
            print 'doing stuff'

        def run(self):
            win = visual.Window()
            msg = visual.TextStim(win, text=self.text)

            while self.running:
                msg.setText(self.text)
                msg.draw()
                win.flip()
                print 'Drawing...'
                core.wait(2)

            win.close()
            print 'Stopping thread'

        def setText(self, text):
            self.text=text

        def stopTest(self):
            self.running = 0

    def main():
        tt = ThreadTest()
        time.sleep(3)
        tt.setText('Test2')
        time.sleep(3)
        tt.stopTest()
        print 'Stopping main thread'

    if __name__ == '__main__':
        main()

and the output

    python testy.py
    doing stuff
    Fatal Python error: (pygame parachute) Segmentation Fault
    Aborted (core dumped)

This creates the Psychopy window but fails to show any text on it and then just crashes. I've also tried creating the window in `__init__()` but that didn't work either. Answer: This seems to be an issue with the text object, which comes from pyglet and might interfere with pyglet threading. The code below actually works (to my amazement)! BUT you're doing something that's strongly discouraged and not supported. OpenGL calls (that handle all the rendering) are not thread safe and shouldn't be called from anything other than the main thread. Basically, you're on your own from here! ;-)

    from threading import Thread
    from psychopy import visual, core
    import time

    class ThreadTest(Thread):

        def __init__(self):
            Thread.__init__(self)
            self.ori=0
            self.running = 1
            print 'initialised'

        def run(self):
            win = visual.Window()
            print 'created window'
            stim = visual.PatchStim(win)
            print 'created stim'
            while self.running:
                stim.ori = self.ori
                stim.draw()
                win.flip()
                print '.',
                core.wait(0.01)
            print 'Stopping auxil thread'

        def setOri(self, ori):
            self.ori=ori

        def stopTest(self):
            self.running = 0

    def main():
        tt = ThreadTest()
        tt.start()
        for frameN in range(180):
            tt.setOri(frameN)
            time.sleep(0.01)
        tt.stopTest()
        print 'Stopping main thread'

    if __name__ == '__main__':
        main()
How to install jsonrpc on macos for python27 Question: I want to check the operation of the code:

    from txjsonrpc.web import jsonrpc
    from twisted.web import server
    from twisted.internet import reactor

    class Math(jsonrpc.JSONRPC):
        """
        An example object to be published.
        """
        def jsonrpc_add(self, a, b):
            """
            Return sum of arguments.
            """
            return a + b

    reactor.listenTCP(7080, server.Site(Math()))
    reactor.run()

but I get an error message:

    /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7/Users/___/server.py
    Traceback (most recent call last):
      File "/Users/___/server.py", line 1, in <module>
        from txjsonrpc.web import jsonrpc
    ImportError: No module named txjsonrpc.web

    Process finished with exit code 1

run: pip install jsonprc

    Downloading/unpacking jsonprc
      Could not find any downloads that satisfy the requirement jsonprc
    Cleaning up...
    No distributions at all found for jsonprc

Please help a beginning programmer ;) Answer: Installation instructions are just one google search away: <https://pypi.python.org/pypi/txJSON-RPC/0.3> Happy hacking!
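In other words, the package name on PyPI is `txJSON-RPC`, not `jsonprc`, which is why pip found nothing. Assuming a standard pip setup, installing it should look like:

    pip install txJSON-RPC

After that, `from txjsonrpc.web import jsonrpc` should resolve.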
Why do the Frechet distributions differ in scipy.stats vs R Question: I've fitted a frechet distribution in R and would like to use this in a python script. However inputting the same distribution parameters in scipy.stats.frechet_r gives me a very different curve. Is this a mistake in my implementation or a fault in scipy? R distribution: ![R frechet](http://i.stack.imgur.com/p1NP1.png) vs Scipy distribution: ![scipy frechet](http://i.stack.imgur.com/tOmdJ.png) R frechet parameters: loc=17.440, shape=0.198, scale=8.153 python code:

    from scipy.stats import frechet_r
    import matplotlib.pyplot as plt
    import numpy as np

    fig, ax = plt.subplots(1, 1)
    F=frechet_r(loc=17.440 ,scale= 8.153, c= 0.198)

    x=np.arange(0.01,120,0.01)
    ax.plot(x, F.pdf(x), 'k-', lw=2)
    plt.show()

**edit** - relevant documentation. The Frechet parameters were calculated in R using the fgev function in the 'evd' package <http://cran.r-project.org/web/packages/evd/evd.pdf> (page 40) Link to the scipy documentation: <http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.frechet_r.html#scipy.stats.frechet_r> Answer: I haven't used the frechet_r function from scipy.stats (when just quickly testing it I got the same plot out as you) but you can get the required behaviour from genextreme in scipy.stats. It is worth noting that for genextreme the Frechet and Weibull shape parameter have the 'opposite' sign to usual. That is, in your case you would need to use a shape parameter of -0.198:

    from scipy.stats import genextreme as gev
    import matplotlib.pyplot as plt
    import numpy as np

    fig, ax = plt.subplots(1, 1)
    x=np.arange(0.01,120,0.01)

    # The order for this is array, shape, loc, scale
    F=gev.pdf(x,-0.198,loc=17.44,scale=8.153)

    plt.plot(x,F,'g',lw=2)
    plt.show()

![Python Frechet distribution](http://i66.tinypic.com/29sg46.png)
Python 3.4 : match from csv and return new csv with matched values Question: # QUESTION How can I scan the reader csv for any items in the reader2 csv and return a new csv with the matched information. # Reader2 csv format

    66740,1800,1001463,1467373,896159

# reader csv format

    1001385|NORTHWEST PIPE CO|10-Q|2015-05-06|edgar/data/1001385/0001193125-15-174814.txt
    1001426|PERICOM SEMICONDUCTOR CORP|10-Q|2015-05-05|edgar/data/1001426/0001145443-15-000628.txt
    1001463|Acacia Diversified Holdings, Inc.|10-K|2015-05-20|edgar/data/1001463/0001185185-15-001386.txt
    1001463|Acacia Diversified Holdings, Inc.|10-K|2015-05-20|edgar/data/1001463/0001185185-15-001394.txt
    1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001388.txt
    1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001390.txt
    1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001392.txt
    1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001396.txt

# Current Code

    with open('newCIK.csv') as reader2:
        reader2 = csv.reader(reader2)
        with open('search.file') as f_in, open('SP500_10K.csv', 'w') as f_out:
            reader = csv.reader(f_in, delimiter='|')
            writer = csv.writer(f_out, delimiter='|')
            for line in reader:
                for cik in reader2:
                    if cik in line:
                        writer.writerow(line)

Answer: You are trying to treat a file object as a list, looping over it more than once. That won't work without doing extra work. Moreover, you are not looping over the columns of the one row; you are trying to test if the _whole row_ is in the other CSV file rows. You'd want to test each value, and then only against the last column of the rows in the `search.file` CSV data. File objects have a _file position_; as you read from the file the position moves from start to end. Once at the end _it won't move to the start again automatically_. You could seek the file object to the start again:

    with open('newCIK.csv') as reader2_file:
        reader2 = csv.reader(reader2_file)

        with open('search.file') as f_in, open('SP500_10K.csv', 'w') as f_out:
            reader = csv.reader(f_in, delimiter='|')
            writer = csv.writer(f_out, delimiter='|')
            for line in reader:
                reader2_file.seek(0)  # rewind to the start
                for cik in reader2:
                    if cik in line:
                        writer.writerow(line)

However, reading a file over and over is _slow_. You'd be better off reading the whole thing into memory at the start. And the above doesn't address the other problem, namely that you are testing each row, and not each column, from `newCIK.csv`. Read the _one_ row into memory, then loop over that:

    with open('newCIK.csv', newline='') as reader2:
        reader2 = csv.reader(reader2)
        cik_values = next(reader2)  # first row

    with open('search.file', newline='') as f_in, open('SP500_10K.csv', 'w', newline='') as f_out:
        reader = csv.reader(f_in, delimiter='|')
        writer = csv.writer(f_out, delimiter='|')
        for line in reader:
            for cik in cik_values:
                if cik in line[-1]:  # test only the last column
                    writer.writerow(line)

Note that I added in `newline=''` arguments to the `open()` calls; the `csv` module needs more control over newlines; not doing so could cause problems on Windows and when handling values containing newlines. Demo:

    >>> from io import StringIO
    >>> import csv, sys
    >>> newcik = '''\
    ... 66740,1800,1001463,1467373,896159
    ... '''
    >>> search_file = '''\
    ... 1001385|NORTHWEST PIPE CO|10-Q|2015-05-06|edgar/data/1001385/0001193125-15-174814.txt
    ... 
1001426|PERICOM SEMICONDUCTOR CORP|10-Q|2015-05-05|edgar/data/1001426/0001145443-15-000628.txt ... 1001463|Acacia Diversified Holdings, Inc.|10-K|2015-05-20|edgar/data/1001463/0001185185-15-001386.txt ... 1001463|Acacia Diversified Holdings, Inc.|10-K|2015-05-20|edgar/data/1001463/0001185185-15-001394.txt ... 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001388.txt ... 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001390.txt ... 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001392.txt ... 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001396.txt ... ''' >>> with StringIO(newcik) as reader2: ... reader2 = csv.reader(reader2) ... cik_values = next(reader2) # first row ... >>> with StringIO(search_file) as f_in: ... reader = csv.reader(f_in, delimiter='|') ... writer = csv.writer(sys.stdout, delimiter='|') ... for line in reader: ... for cik in cik_values: ... if cik in line[-1]: # test only the last column ... writer.writerow(line) ... 1001463|Acacia Diversified Holdings, Inc.|10-K|2015-05-20|edgar/data/1001463/0001185185-15-001386.txt 103 1001463|Acacia Diversified Holdings, Inc.|10-K|2015-05-20|edgar/data/1001463/0001185185-15-001394.txt 103 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001388.txt 103 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001390.txt 103 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001392.txt 103 1001463|Acacia Diversified Holdings, Inc.|10-Q|2015-05-20|edgar/data/1001463/0001185185-15-001396.txt 103 The `103` numbers are the number of bytes written in each `writer.writerow()` call, echoed by the REPL.
Opening a Word document that has a password using docx library Question: I'm trying to open a word document that has a password. I'm using the docx package - a bit old

    from docx import opendocx, getdocumenttext

and further on

    document = opendocx(filename)

I was wondering if there were options on the opendocx to allow it to open password protected word documents - I do know the password. I checked the github repo here: <https://github.com/mikemaccana/python-docx> but didn't see an option. I'm trying to avoid rewriting the code to use a newer package but that may be inevitable. Answer: [python-docx](https://github.com/mikemaccana/python-docx) doesn't support passwords at the moment. I didn't find it in the code either, but to be sure, I asked on the [python-docx mailing list](https://groups.google.com/forum/#!forum/python-docx) and [received the following reply](https://groups.google.com/forum/#!topic/python-docx/k5dLjcDdDS0):

> Sorry, no. At least there's no built-in feature for it. I'm not sure how all
> that works with Word, it might be worth some research.
>
> If it uses the Zip archive's password protection, you could open the .docx
> file (which is a Zip at the top level), and then do something I'm sure to
> feed it in. Worst case you could save it as another zip without a password
> and use that. And of course the interim zip could be a StringIO in-memory
> file.
>
> If they use their own encryption I expect it would be quite a bit harder :)

Docx uses its own encryption, not zip encryption. This way only the internal contents need to be encrypted. Some info on decrypting docx files is available here:

* [Super User: How can I unlock a Microsoft .docx document?](http://superuser.com/questions/486844/how-can-i-unlock-a-microsoft-docx-document)

One approach that you can use if you don't want to change your code is to fork the docx package and add code to decrypt the docx file. If you had another program to decrypt the document, you could also shell out to do the decryption.
Using Z3 with python in Visual studio 2013 Question: I installed the Python 2.7.10 64 bit. I downloaded the latest Z3 sources from <https://github.com/Z3Prover/z3>. I copied the folder z3-master into the Python27 folder. Then, I opened the Visual Studio 2013 command prompt and built z3 using the instructions provided on the same github page. The build was successful. I added 'PYTHONPATH = c:\Python27\z3-master\build\z3lib.dll'. Now, when I run any example from visual studio python, it gives me an error on the first line, i.e., from z3 import * : The error is 'no module named z3' If I run the example from the python shell, it gives the error "init(Z3_LIBRARY_PATH) must be invoked before using z3-python" I don't see any bin folder inside z3-master or in the build folder. How to use Z3py from within visual studio? Thanks Answer: I have used the instructions given on the following link: [Using Z3Py With Python 3.3](http://stackoverflow.com/questions/15598669/using-z3py-with-python-3-3) Actually, only the last two comments worked for me. Those comments are as follows:

> Another option is to use the precompiled DLL in the nightly builds for Windows 64-bit. This link has additional information: research.microsoft.com/en-us/um/people/leonardo/blog/2013/02/15/… – Leonardo de Moura Mar 25 '13 at 23:35

> Thanks - Now I'm using my self-compiled unstable build with the libz3.dll file replaced by the one from the nightly channel and I didn't encounter any problems so far. – fdj815

So, I simply went to the link, clicked on the "Nightly builds of Z3". Then clicked on the "Go to the Z3 Downloads page". Then, I downloaded the 64 bit pre-compiled zip file, unzipped it, and added the path of the bin folder to both the 'PATH' & 'PYTHONPATH' environment variables. But in the Visual studio Python project, I still had to add 'PYTHONPATH' to the search paths of the project. It seems it doesn't automatically look into these environment variables. Link to the nightly builds is: <http://leodemoura.github.io/blog.html>
Python multiprocessing: name of the main process Question: I'm using the multiprocessing module to run a piece of code on different processes. At some point in the code, I need to know whether the code is being executed by the main process or one of the created child processes. In all cases I've tried, the name of the current process is always "MainProcess":

    >>> import multiprocessing
    >>> multiprocessing.current_process().name
    'MainProcess'

Is this a python convention I can rely on to be sure that my piece of code is run by the main process (assuming that no other process is named that way)? Otherwise, is there any other way I should use to know which process is executing a piece of code? Thanks! Answer: It appears that the main process has a different `type` than child processes. The main process is `multiprocessing.process._MainProcess` while child processes are `multiprocessing.process.Process`. This might be a better way to test for it. Now, since the name of the `_MainProcess` type has a leading underscore, it's meant to be "private," meaning it's an implementation detail that could change. That doesn't seem likely, but you could check to see if the current process is _not_ of type `Process` rather than checking to see if it is of type `_MainProcess`.
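A minimal sketch of that check; keep in mind it relies on the implementation detail described above, so it's a best-effort test rather than a guaranteed API:

    import multiprocessing

    def is_main_process():
        # Children are plain Process instances; the main process is a
        # private subclass, so an exact type check tells them apart.
        return type(multiprocessing.current_process()) is not multiprocessing.Process

Note this heuristic breaks if you subclass `Process` yourself, since those children would not be of the exact type `Process` either.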
Python 3.x AttributeError: 'NoneType' object has no attribute 'groupdict' Question: Being a beginner in python I might be missing out on some kind of basics. But I was going through one of the codes from a project and happened to face this:

> AttributeError: 'NoneType' object has no attribute 'groupdict'

Following is the section of code, rephrased, though it still results in the same problem.

    import re
    fmt = (r"\+((?P<day>\d+)d)?((?P<hrs>\d+)h)?((?P<min>\d+)m)?"
           r"((?P<sec>\d+)s)?((?P<ms>\d+)ms)?$")
    p = re.compile(fmt)
    match = p.search('Total run time: 9h 34m 9s 901ms realtime, 7h 6m 29s 699ms uptime')
    try:
        d = match.groupdict()
    except IndexError:
        print("exception here")

Answer: Regular expression functions in Python return `None` when the regular expression does not match the given string. So in your case, `match` is `None`, so calling `match.groupdict()` is trying to call a method on nothing. You should check for `match` first, and then you also don't need to catch any exception when accessing `groupdict()`:

    match = p.search('Total run time: 9h 34m 9s 901ms realtime, 7h 6m 29s 699ms uptime')
    if match:
        d = match.groupdict()

In your particular case, the expression cannot match because at the very beginning, it is looking for a `+` sign. And there is not a single plus sign in your string, so the matching is bound to fail. Also, in the expression, there is no separator between the various time values. Try this:

    >>> expr = re.compile(r"((?P<day>\d+)d)?\s*((?P<hrs>\d+)h)?\s*((?P<min>\d+)m)?\s*((?P<sec>\d+)s)?\s*(?P<ms>\d+)ms")
    >>> match = expr.search('Total run time: 9h 34m 9s 901ms realtime, 7h 6m 29s 699ms uptime')
    >>> match.groupdict()
    {'sec': '9', 'ms': '901', 'hrs': '9', 'day': None, 'min': '34'}
Getting time difference in python 2.7 Question: I am importing some data from a REST API into a list. One of the columns contains just date/time information. Column A format/example: `2015-06-11 07:59:10.000 GMT` I need to be able to check the time difference between two rows. I tried using

    datetime.strptime(variable_name, "%Y-%m-%d %H:%M:%S")

however, I got the following error:

    ValueError: unconverted data remains: .000 GMT

Is there any way to fix this without having to modify and remove the offending part out of every entry in my list? Any help would be greatly appreciated. Answer: You need to use `%f` for the decimal part and `%Z` for the timezone:

    >>> variable_name='2015-06-11 07:59:10.000 GMT'
    >>> datetime.strptime(variable_name, "%Y-%m-%d %H:%M:%S.%f %Z")
    datetime.datetime(2015, 6, 11, 7, 59, 10)
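Once both rows parse cleanly, subtracting the two `datetime` objects gives the time difference directly as a `timedelta`; a small sketch with a made-up second timestamp:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f %Z"
    t1 = datetime.strptime('2015-06-11 07:59:10.000 GMT', fmt)
    t2 = datetime.strptime('2015-06-11 08:04:25.500 GMT', fmt)

    delta = t2 - t1                  # a datetime.timedelta
    print delta.total_seconds()      # 315.5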
Django + DjangoCMS signal handlers not being called Question: So, in Django 1.7.7 a new way to handle signals was introduced. I'm using `1.7.7` with `django_cms`, running on Python 2. I'm trying to implement this new way, and even though the documentation is scarce but straightforward enough, it **just won't work**. I think Django-CMS or one of its plugins has something to do with it. What I'm trying to do is increase the counter of my `CounterModel` by 1 for each `MyModel` that is saved, on the `pre_save` signal of the `MyModel` model. I've concluded it just doesn't work because a `raise Exception('it runs')` in the `increase_counter` function does not get raised. I have the following:

_myapp/models/mymodel.py_

    from .counter_model import CounterModel
    # Imports here

    class MyModel(models.Model):
        name = models.CharField(max_length=128)
        categories = models.ManyToManyField(CounterModel)

_myapp/models/counter_model.py_

    # Imports here

    class CounterModel(models.Model):
        amount_of_mymodels = models.PositiveIntegerField(default=0)

_myapp/signals.py_

    from .models.mymodel import MyModel
    # Other imports here

    @receiver(pre_save, sender=MyModel)
    def increase_counter(sender, **kwargs):
        instance = kwargs.get('instance')
        for category in instance.categories.all():
            category.amount_of_mymodels += 1

_myapp/apps.py_

    from django.apps import AppConfig
    # Other imports here

    class MyAppConfig(AppConfig):
        name = "myapp"

        def ready(self):
            import myapp.signals

_myapp/__init__.py_

    default_app_config = 'myapp.apps.MyAppConfig'

The `import signals` in my `apps.py` is executed, because when I raise an Exception there it's raised in my console (in the `ready()` function). Hope someone can clarify this issue I'm having! **BTW:** I've also added `myapp` to my `INSTALLED_APPS` **UPDATE:** I've tried the new signal approach in another project (Python 3, also Django 1.7) and it works fine. If anyone has any clue as to what could be causing the failure of signals in my other project please let me know! I'm going to try to debug this for now, every form of help is appreciated. **NOTE:** For everyone thinking '_the for loop might be empty, print something_', please note the following in the beginning of my question: _I've concluded it just doesn't work because a `raise Exception('it runs')` in the `increase_counter` function does not get raised._ Thanks! Answer: ## Fix 1 I tested your code by putting the _signals.py_ code into _models.py_ and it works. **Why's that?** From [Django docs](https://docs.djangoproject.com/en/1.8/topics/signals/#connecting-receiver-functions):

> **Where should this code live?**
>
> Strictly speaking, signal handling and registration code can live anywhere
> you like, although it's recommended to avoid the application's root module
> and its models module to minimize side-effects of importing code.
>
> In practice, signal handlers are usually defined in a signals submodule of
> the application they relate to. Signal receivers are connected in the
> ready() method of your application configuration class. If you're using the
> receiver() decorator, simply import the signals submodule inside ready().

## Fix 2 If you don't want to put the _signals.py_ code into your _models.py_, you can do the following:

    class MyAppConfig(...):
        ...
        def ready(self):
            # you have to import your app's signals.py,
            # not Django's signals module
            import myapp.signals

**EDIT** There are some awesome answers and alternatives on this question: [the right place to keep my signals.py files in django](http://stackoverflow.com/questions/7115097/the-right-place-to-keep-my-signals-py-files-in-django)
Identify the column that the highest row value belongs to, python or R Question: So I'm working with a table that will be 20 columns by `x` rows, and I need to find, for each row, which column the highest value belongs to. For example, the table would be something like this (but larger)

    A    B    C    D    E    F   G
    1    2    3    4    5    6   7
    9    8    7    6    5    4   3
    7    6    5    8    4    3   2
    0.9  0.01 0.02 0.2  0.04 0.3 ...

And I'd like it to spit out: `G,A,D,A.` And I'll need to put this into another file. It doesn't even have to be with the letters. I'll be doing something with it later. I've been trying to figure out the best way of doing this and I've been looking into trying to do it with R; this is the script I have so far:

    #!/usr/bin/env Rscript

    a=read.table(get(TEST.csv),header=T,sep="",dec=".")
    apply(a, 1, which.max)

It doesn't want to read my Test file. And for python I have the following:

    import numpy as np
    import csv

    a=np.genfromtxt('./TEST.csv',delimiter='\t',skip_header=1)
    print(a)
    amax=np.amax(a,axis=1)
    print(amax)

This one properly extracts the highest value of each row but it doesn't extract the column number like I'd like it to do. Any and all suggestions would be greatly appreciated. Answer: You can try `max.col` in `R`

    names(a)[max.col(a, 'first')]
    #[1] "G" "A" "D" "A"
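On the Python side of the question, `numpy.argmax` (rather than `amax`) returns the index of the maximum along the chosen axis; a sketch, assuming the columns keep the order shown above:

    import numpy as np

    a = np.genfromtxt('./TEST.csv', delimiter='\t', skip_header=1)
    names = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

    idx = np.argmax(a, axis=1)          # column index of each row's maximum
    winners = [names[i] for i in idx]   # e.g. ['G', 'A', 'D', 'A']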
How to unzip an iterator? Question: Given a list of pairs `xys`, the Python idiom to unzip it into two lists is:

    xs, ys = zip(*xys)

If `xys` is an iterator, how can I unzip it into two iterators, without storing everything in memory? Answer: If you want to consume one iterator independently from the other, there's no way to avoid pulling stuff into memory, since one of the iterators will progress while the other does not (and hence has to buffer). Something like this allows you to iterate over both the 'left items' and the 'right items' of the pairs:

    import itertools
    import operator

    it1, it2 = itertools.tee(xys)
    xs = map(operator.itemgetter(0), it1)
    ys = map(operator.itemgetter(1), it2)

    print(next(xs))
    print(next(ys))

...but keep in mind that if you consume only one iterator, the other will buffer items in memory until you start consuming them. (Btw, assuming Python 3. In Python 2 you need to use `itertools.imap()`, not `map()`.)
Using Py2app with a GUI from Qt Creator Question: I created a GUI in Qt Creator and stored this as a *.ui file. Using PyQt I made a GUI that works fine when it is launched as

    $ python pyapp.py

In order to build this app into something that can be executed by double-clicking on it, I used Py2app. However, upon double-clicking the icon, I get the following error in the dialog that appears:

    pygui Error

After opening the console, it seems that the following gives rise to this error:

    12/06/2015 15:58:30.084 pygui[29757]: IOError: [Errno 2] No such file or directory: 'mainwindow.ui'

It seems that the gui that I created with Qt Creator is not found by the app when it opens. Any idea why this is happening? Thanks in advance. Answer: This is happening because py2app would not be able to find files specified through string paths in the code. It will not include those files in the binary. You can do one of two things to solve your problem. 1) You have to convert your .ui file to a .py file using `pyuic4` (included in the PyQt4 installation). After this step you will have a .py file. Then instead of using .ui, import the .py file and inherit your class from the _class generated in the .py file_. This will allow py2app to include the ui from a python module instead of searching for a .ui file. 2) You can simply manually place the .ui file in the same directory where py2app created the binary. It should work without errors. **UPDATE** If you want to try the 2nd solution, you need to specify the complete path to the .ui file. You can do this by using the `__file__` attribute in the python module. Instead of `uic.loadUiType("mainwindow.ui")`, use `uic.loadUiType(os.path.join(os.path.dirname(__file__), "mainwindow.ui"))`
How to increase the performance for estimating `Pi` in Python Question: I have written the following code in Python, in order to estimate the value of `Pi`. It is called the [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method) method. Obviously by increasing the number of samples the code becomes slower and I assume that the slowest part of the code is in the sampling part. How can I make it faster?

    from __future__ import division
    import numpy as np

    a = 1
    n = 1000000

    s1 = np.random.uniform(0,a,n)
    s2 = np.random.uniform(0,a,n)

    ii=0
    jj=0

    for item in range(n):
        if ((s1[item])**2 + (s2[item])**2) < 1:
            ii = ii + 1

    print float(ii*4/(n))

Do you suggest other (presumably faster) approaches? Answer: The bottleneck here is actually your `for` loop. Python `for` loops are relatively slow, so if you need to iterate over a million items, you can gain a lot of speed by avoiding them altogether. In this case, it's quite easy. Instead of this:

    for item in range(n):
        if ((s1[item])**2 + (s2[item])**2) < 1:
            ii = ii + 1

do this:

    ii = ((s1 ** 2 + s2 ** 2) < 1).sum()

This works because `numpy` has built-in support for optimized array operations. The looping occurs in `c` instead of python, so it's much faster. I did a quick test so you can see the difference:

    >>> def estimate_pi_loop(x, y):
    ...     total = 0
    ...     for i in xrange(len(x)):
    ...         if x[i] ** 2 + y[i] ** 2 < 1:
    ...             total += 1
    ...     return total * 4.0 / len(x)
    ...
    >>> def estimate_pi_numpy(x, y):
    ...     return ((x ** 2 + y ** 2) < 1).sum()
    ...
    >>> %timeit estimate_pi_loop(x, y)
    1 loops, best of 3: 3.33 s per loop
    >>> %timeit estimate_pi_numpy(x, y)
    100 loops, best of 3: 10.4 ms per loop

Here are a few examples of the kinds of operations that are possible, just so you have a sense of how this works. Squaring an array:

    >>> a = numpy.arange(5)
    >>> a ** 2
    array([ 0,  1,  4,  9, 16])

Adding arrays:

    >>> a + a
    array([0, 2, 4, 6, 8])

Comparing arrays:

    >>> a > 2
    array([False, False, False,  True,  True], dtype=bool)

Summing boolean values:

    >>> (a > 2).sum()
    2

As you probably realize, there are faster ways to estimate Pi, but I will admit that I've always admired the simplicity and effectiveness of this method.
How to select columns from dataframe by regex Question: I have a dataframe in python pandas. The structure of the dataframe is as follows:

    a b c d1 d2 d3
    10 14 12 44 45 78

I would like to select the columns which begin with d. Is there a simple way to achieve this in Python? Answer: You can use [`DataFrame.filter`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html) this way:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.array([[2,4,4],[4,3,3],[5,9,1]]),columns=['d','t','didi'])
    >>
       d  t  didi
    0  2  4     4
    1  4  3     3
    2  5  9     1

    df.filter(regex=("d.*"))
    >>
       d  didi
    0  2     4
    1  4     3
    2  5     1

The idea is to select columns by `regex`.
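For a plain prefix like this, a regex isn't strictly required either; recent pandas versions expose string methods on the column index, so the following sketch should work as well:

    import pandas as pd

    df = pd.DataFrame([[10, 14, 12, 44, 45, 78]],
                      columns=['a', 'b', 'c', 'd1', 'd2', 'd3'])

    # Boolean mask over the column index; keeps d1, d2, d3.
    df.loc[:, df.columns.str.startswith('d')]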
How to crawl latest articles in a specific site using specific set keyword? Question: I am trying Python code for crawling article links on specific sites based on a keyword, like the name of the article, but I don't get the appropriate links.

    import sys
    import requests
    from bs4 import BeautifulSoup
    import urllib.request
    from urlparse import urlparse

    def extract_article_links(url,data):
        req = urllib.request.Request(url,data)
        response = urllib.request.urlopen(req)
        responseData = response.read()
        #r = requests.get(url)
        soup = BeautifulSoup(responseData.content)
        links = soup.find_all('a')

        for link in links:
            try:
                #if 'http' in link:
                print ("<a href='%s'>%s</a>" % (link.get('href'),link.text))
            except Exception as e :
                print (e)
        responseData = soup.find_all("div",{"class:info"})
        print responseData
        for item in responseData:
            print (item.contents[0].text)
            print (item.contents[1].text)

    if __name__ == "__main__":
        from sys import argv
        if (len(argv)<2):
            print"Insufficient arguments..!!"
            sys.exit(1)
    url = sys.argv[1]
    values = {'s':'article','submit':'search'}
    data = urlparse.urlencode(values)
    data = data.encode('utf-8')
    extract_article_links(url,data)

Answer: Try [lxml](http://lxml.de/): analyze the html, locate the elements you are looking for, then you can do this easily with XPath:

    from lxml import html

    # `source` is the page's HTML text (e.g. the response body)
    print html.fromstring(source).xpath('//a/@href')

Of course you need to modify the xpath according to the attribute you are looking for.
How to handle a huge stream of JSON dictionaries? Question: I have a file that contains a stream of JSON dictionaries like this: {"menu": "a"}{"c": []}{"d": [3, 2]}{"e": "}"} It also includes nested dictionaries and it looks like I cannot rely on a newline being a separator. I need a parser that could be used like this: for d in getobjects(f): handle_dict(d) The point is that it would be perfect if the iteration only happened at the root level. Is there a Python parser that would handle all JSON's quirks? I am interested in a solution that would work on files that wouldn't fit into RAM. Answer: I think [JSONDecoder.raw_decode](https://docs.python.org/2/library/json.html#json.JSONDecoder.raw_decode) may be what you're looking for. You may have to do some string formatting to get it in the perfect format depending on newlines and such, but with a bit of work, you'll probably be able to get something working. See this example. import json jstring = '{"menu": "a"}{"c": []}{"d": [3, 2]}{"e": "}"}' substr = jstring decoder = json.JSONDecoder() while len(substr) > 0: data,index = decoder.raw_decode(substr) print data substr = substr[index:] Gives output: {u'menu': u'a'} {u'c': []} {u'd': [3, 2]} {u'e': u'}'}
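Since the question asks for files that won't fit into RAM, the same `raw_decode()` trick can be applied to a buffer that is refilled from the file in chunks. A sketch (the function shape and buffer size are illustrative assumptions):

    import json

    def getobjects(f, bufsize=65536):
        decoder = json.JSONDecoder()
        buf = ''
        while True:
            chunk = f.read(bufsize)
            if chunk:
                buf += chunk
            buf = buf.lstrip()  # raw_decode rejects leading whitespace
            while buf:
                try:
                    obj, idx = decoder.raw_decode(buf)
                except ValueError:
                    break  # object incomplete so far; read more data
                yield obj
                buf = buf[idx:].lstrip()
            if not chunk:
                return  # EOF; anything left in buf is malformed trailing data

Usage then matches the interface from the question:

    with open('stream.json') as f:
        for d in getobjects(f):
            handle_dict(d)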
How to display every key and value in a dictionary apart from one in python Question:

    dict1 = {"Bear":3,"Wine":3,"Spirits":7,"No":0}

Here I have a dictionary and I want to display every key and value from Bear, Wine and Spirits so it would look something like this:

    Bear 3
    Wine 3
    Spirits 7

Answer: Just this? Not sure if I understand your question completely

    for i in dict1:
        if i != 'No':
            print i, dict1[i]

output:

    >>> for i in dict1:
    ...     if i != 'No':
    ...             print i, dict1[i]
    ...
    Spirits 7
    Bear 3
    Wine 3

I am using the key "No" as the filter because it looks like you have a bunch of categories of drinks listed: `Spirits`, `Beer` (I think you have a typo, Beer instead of Bear), `Wine`. `No` doesn't seem to fit with these categories so I used it as the check. If you want to order the output, you can do this:

    from collections import OrderedDict

    ordered_dict1 = OrderedDict(sorted(dict1.items(), key=lambda x:x[1]))
    for i in ordered_dict1:
        if i != 'No':
            print i, ordered_dict1[i]

Dictionaries don't have an order to them; you can use an ordered dictionary to achieve these results, though. Your output looked like it was sorting by the number you have in stock in ascending order. If you want to completely remove the "No" key and its value from the dictionary you can just do this: `dict1.pop('No')`
Caffe net.predict() outputs random results (GoogleNet) Question: I used pretrained GoogleNet from <https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet> and finetuned it with my own data (~ 100k images, 101 classes). After one day training I achieved 62% in top-1 and 85% in top-5 classification and try to use this network to predict several images. I just followed example from <https://github.com/BVLC/caffe/blob/master/examples/classification.ipynb>, Here is my Python code: import caffe import numpy as np caffe_root = './caffe' MODEL_FILE = 'caffe/models/bvlc_googlenet/deploy.prototxt' PRETRAINED = 'caffe/models/bvlc_googlenet/bvlc_googlenet_iter_200000.caffemodel' caffe.set_mode_gpu() net = caffe.Classifier(MODEL_FILE, PRETRAINED, mean=np.load('ilsvrc_2012_mean.npy').mean(1).mean(1), channel_swap=(2,1,0), raw_scale=255, image_dims=(224, 224)) def caffe_predict(path): input_image = caffe.io.load_image(path) print path print input_image prediction = net.predict([input_image]) print prediction print "----------" print 'prediction shape:', prediction[0].shape print 'predicted class:', prediction[0].argmax() proba = prediction[0][prediction[0].argmax()] ind = prediction[0].argsort()[-5:][::-1] # top-5 predictions return prediction[0].argmax(), proba, ind In my deploy.prototxt I changed the last layer only to predict my 101 classes. layer { name: "loss3/classifier" type: "InnerProduct" bottom: "pool5/7x7_s1" top: "loss3/classifier" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 101 weight_filler { type: "xavier" } bias_filler { type: "constant" value: 0 } } } layer { name: "prob" type: "Softmax" bottom: "loss3/classifier" top: "prob" } Here is the distribution of softmax output: [[ 0.01106235 0.00343131 0.00807581 0.01530041 0.01077161 0.0081002 0.00989228 0.00972753 0.00429183 0.01377776 0.02028225 0.01209726 0.01318955 0.00669979 0.00720005 0.00838189 0.00335461 0.01461464 0.01485041 0.00543212 0.00400191 0.0084842 0.02134697 0.02500303 0.00561895 0.00776423 0.02176422 0.00752334 0.0116104 0.01328687 0.00517187 0.02234021 0.00727272 0.02380056 0.01210031 0.00582192 0.00729601 0.00832637 0.00819836 0.00520551 0.00625274 0.00426603 0.01210176 0.00571806 0.00646495 0.01589645 0.00642173 0.00805364 0.00364388 0.01553882 0.01549598 0.01824486 0.00483241 0.01231962 0.00545738 0.0101487 0.0040346 0.01066607 0.01328133 0.01027429 0.01581303 0.01199994 0.00371804 0.01241552 0.00831448 0.00789811 0.00456275 0.00504562 0.00424598 0.01309276 0.0079432 0.0140427 0.00487625 0.02614347 0.00603372 0.00892296 0.00924052 0.00712763 0.01101298 0.00716757 0.01019373 0.01234141 0.00905332 0.0040798 0.00846442 0.00924353 0.00709366 0.01535406 0.00653238 0.01083806 0.01168014 0.02076091 0.00542234 0.01246306 0.00704035 0.00529556 0.00751443 0.00797437 0.00408798 0.00891858 0.00444583]] It seems just like random distribution with no sense. Thank you for any help or hint and best regards, Alex Answer: The solution is really simple: I just forgot to rename the last layer in deploy file: layer { name: "loss3/classifier" type: "InnerProduct" bottom: "pool5/7x7_s1" top: "loss3/classifier" param { lr_mult: 1 decay_mult: 1 }
Python doesn't print with import scapy Question: When I enter this code:

    print "hhhh"
    from scapy.all import sniff
    print "bbbb"

this is the output:

    C:\Python27\python.exe C:/Users/Tamir/PycharmProjects/SIP/main.py
    hhhh
    WARNING: No route found for IPv6 destination :: (no default route?)

    Process finished with exit code 0

Why doesn't the second print (of "bbbb") work? When I put the import line in a comment, or import another library, it works. Answer: **sys.stdout** is redirected to a readline console, which doesn't seem to work well with PyCharm. Please check: "PYTHONPATH\Lib\site-packages\scapy\arch\windows\\__init__.py" [![scapy\arch\windows\\__init__.py](http://i.stack.imgur.com/Anzuo.png)](http://i.stack.imgur.com/Anzuo.png) **temporary solution**: save a reference to the original stdout and restore it after the import. Try this:

    import sys

    print "hhhh"
    orig_stdout = sys.stdout
    from scapy.all import sniff
    sys.stdout = orig_stdout
    print "bbbb"
Pass argument from command button on Access form to python script Question: Using Access 2010 and python 2.7.8. I have a command button on an Access 2010 form. I am trying to pull the value from the Field1 text box and pass it to a python script. I am struggling with passing the variable. Commented-out stuff is other things I tried. The value in the Field1 text box is: C:\\tests\\Project VBA behind the command button:

    Private Sub Command0_Click()
    arg1 = """" & Field1 & """"
    'arg1 = Field1
    Debug.Print arg1
    'Call Shell("C:\\Python27\\ArcGIS10.3\\python.exe " & "C:\\tests\\Test.py " & "C:\\tests\\Project", vbNormalFocus)
    Call Shell("C:\Python27\ArcGIS10.3\python.exe " & "C:\tests\Test.py " & arg1, vbNormalFocus)
    End Sub

Python code is:

    import os.path
    #fp = "C:\\tests\\Projects"
    fp = argv[1]
    os.makedirs(fp)

Answer: Try the following in Python; the argument lives in `sys.argv`, not a bare `argv`:

    # should be sys.argv[1]
    sys.argv[1]

You may want to utilise os.path.join to be certain where your makedirs are going:

    import os
    import sys

    fp = r'c:\test'
    os.makedirs(os.path.join(fp, sys.argv[1]))

**The VBA in Excel** - works for me...

    Private Sub test()
    arg1 = Range("A1").Value
    Debug.Print arg1
    Call Shell("""C:\Python27\python.exe"" " & """c:\test\test.py"" " & """arg1""", vbNormalFocus)
    End Sub
Trouble parsing html files (to csv) using ElementTree xpath in python Question: I am trying to parse a few thousand html files and dump the variables into a csv file (excel spreadsheet). I've come up against several roadblocks--the first one which was (thankfully) solved here, a few days ago. The (hopefully) final roadblock is this: I cannot get it to properly parse the file using xpath. Below is a brief explanation, the python code and an example of the html code. The trouble starts here:

    for node in tree.iter():
        name = node.attrib.get('/html/body/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/p/span')
        if category =='/html/body/center/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/font':
            category=node.text

It runs, but does not parse. I do not get any traceback errors. I think I am misunderstanding the logic of parsing with ElementTree. There are several headers that are the same--it is therefore difficult to find a unique id/header. Here is an example of the html:

    <span class="s1">Business: Give Back to the Community and Save Money on Equipment, Technology, Promotional Products, and Market<span class="Apple-converted-space">&nbsp;</span></span>

For which the xpath is:

    /html/body/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]
    /table/tbody/tr[1]/td[1]/p/span

I would like to scrape the text from this span (among others) and put it in the excel spreadsheet. You can see an example of a similar page [**HERE**](http://www.usprwire.com/Detailed/Banking_Finance_Investment/Confused.com_reveals_that_Life_Insurance_is_more_than_a_form_of_future_protection_284764.shtml) At any rate, because many spans/headers are not uniquely identified, I think I should use xpath. However, I have yet to be able to figure out how to successfully use xpath commands with ElementTree. In searching the documentation, the answer to this question (as well as the logic) eludes me. I have read up on <http://lxml.de/parsing.html> as well as on this site and have yet to find something that works. So far, the code iterates through all the files (in dropbox) nicely. It also creates the csv file and creates the headers (though not in separate columns, only as one line separated by semicolons--but that should be easy to fix). In sum, I would like it to parse the text from different lines in each file (webpage) and dump it into the excel file. Any input would be greatly appreciated. The python code:

    import xml.etree.ElementTree as ET
    import csv, codecs, os
    from cStringIO import StringIO
    # Note: you need to download and install this..
    import unicodecsv
    import lxml.html

    # TODO: make into command line params (instead of constant)
    CSV_FILE='output.csv'
    HTML_PATH='/Users/C/data/Folder_NS'

    f = open(CSV_FILE, 'wb')
    w = unicodecsv.writer(f, encoding='utf-8', delimiter=';')
    w.writerow(['file', 'category', 'about', 'title', 'subtitle', 'date', 'bodyarticle'])

    # redundant declarations:
    category=''
    about=''
    title=''
    subtitle=''
    date=''
    bodyarticle=''

    print "headers created"
    allFiles = os.listdir(HTML_PATH)
    #with open(CSV_FILE, 'wb') as csvfile:
    print "all defined"
    for file in allFiles:
        #print allFiles
        if '.html' in file:
            print "in html loop"
            tree = lxml.html.parse(HTML_PATH+"/"+file)
            print '===================='
            print 'Parsing file: '+file
            print '===================='
            for node in tree.iter():
                name = node.attrib.get('/html/body/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/p/span')
                if category =='/html/body/center/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/font':
                    print 'Category:'
                    category=node.text

    f.close()

14 June 2015 (most recent change); I have just changed this section

    for node in tree.iter():
        name = node.attrib.get('/html/body/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/p/span')
        if category =='/html/body/center/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/font':
            print 'Category:'
            category=node.text

to this:

    for node in tree.iter():
        row = dict.fromkeys(cols)
        Category_name = tree.xpath('/html/body/table/tbody/tr/td/table/tbody/tr[3]/td/table/tbody/tr[2]/td[2]/table/tbody/tr[1]/td[1]/p/span')
        row['category'] = Category_name[0].text_content().encode('utf-8')

It still runs, but does not parse. Answer: Try the following code:

    from lxml import etree
    import requests
    from StringIO import StringIO

    data = requests.get('http://www.usprwire.com/Detailed/Banking_Finance_Investment/Confused.com_reveals_that_Life_Insurance_is_more_than_a_form_of_future_protection_284764.shtml').content

    parser = etree.HTMLParser()
    root = etree.parse(StringIO(data), parser)

    category = root.xpath('//table/td/font/text()')
    print category[0]

It uses the `requests` library to download the `html` code of the page. You can choose whatever method fits your needs. The important part is the `xpath` that searches for any `<table>` followed by `<td>` followed by `<font>`; it returns a list with two elements. The second one holds only whitespace and the first one contains the text. Run it and it yields just the sentence you are looking for:

    Banking, Finance & Investment: Confused.com reveals that Life Insurance is more than a form of future protection
Finding the 5 smallest numbers from a list in Python Question: I have a list of names, x, and a list of scores, y, that correspond to the names.

    x = {a,b,c,d,e,f,g,h,i,j,k}
    y = {8,8,15,13,12,17,18,12,14,14}

So, a has score 8, b has score 8, c has score 15, ..., k has score 14. I want to find the 5 smallest scores from the list, y, get their names, and have a printout similar to the following:

    top5 lowest scores:
    a : 8
    b : 8
    e : 12
    h : 12
    d : 13

Currently, I am creating a copy of the list and then using pop to keep reducing the list, but it is giving me incorrect names for the scores. However, when I create my list for the max5 values, everything comes out fine using the same method. I am unsure of a function that lets me do this in Python. This is just a sample of my problem; my real problem involves store locations along with scores for those stores that I computed from a function, but I want to get the top 5 highest and 5 lowest scores. Does anyone have an efficient solution to this? Answer: Python has a data structure called a dictionary that you can use to store key/value pairs. In Python, a dictionary is defined as -

    scores = {'a': 8, 'b': 8, 'c': 15, 'd': 13}  # and so on

Then you can iterate over the key/value pairs in this dictionary to find the 5 smallest numbers. You can convert the dictionary to a list of tuples, sort it based on the second item, and take the first five entries -

    import operator
    scores = {'a': 8, 'b': 8, 'c': 15, 'd': 13}  # and so on
    sorted_scores = sorted(scores.items(), key=operator.itemgetter(1))
    lowest_five = sorted_scores[:5]

Instead of using a dictionary, you can also use a `list of tuples`, and use the `sorted` line in the above code to sort it based on the second element of each tuple. The list of tuples would look like -

    scores = [('a',8),('b',8),('c',15),('d',13)..]
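If the data is kept as such a list of `(name, score)` tuples, `heapq.nsmallest` also gives the five lowest directly, without sorting everything; a sketch:

    import heapq

    scores = [('a', 8), ('b', 8), ('c', 15), ('d', 13), ('e', 12),
              ('f', 17), ('g', 18), ('h', 12), ('i', 14), ('j', 14)]

    for name, score in heapq.nsmallest(5, scores, key=lambda pair: pair[1]):
        print name, ':', score

which prints the desired `a : 8`, `b : 8`, `e : 12`, `h : 12`, `d : 13`. For the five highest, `heapq.nlargest` works the same way.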
With @csrf_exempt still have Set-Cookie: csrftoken Question: With Django 1.8, I do not want to have a cookie set on the homepage of my site when the users are not logged in. So I decorate my view with @csrf_exempt like

    from django.views.decorators.csrf import csrf_exempt

    @csrf_exempt
    def mainhome(request):

When I look at the query I can see the cookie still set. Why?

    rodo@roz-desktop:~/(master)$ curl -I http://127.0.0.1:8000/
    HTTP/1.0 200 OK
    Date: Sat, 13 Jun 2015 08:59:27 GMT
    Server: WSGIServer/0.1 Python/2.7.8
    Content-Type: text/html; charset=utf-8
    Vary: Cookie
    X-QueryInspect-Duplicate-SQL-Queries: 2
    X-QueryInspect-Total-SQL-Time: 34 ms
    X-QueryInspect-Total-Request-Time: 283 ms
    X-QueryInspect-Num-SQL-Queries: 3
    Set-Cookie: csrftoken=sa5x0DyxgBamca0D84ZZnzl2WAL0evkv; expires=Sat, 11-Jun-2016 08:59:27 GMT; Max-Age=31449600; Path=/

Answer: As @Daniel Roseman indicated, `@csrf_exempt` will not help you with that. The middleware responsible for the session cookie is `SessionMiddleware`. You can read more about it in the [Django Docs: How to use sessions](https://docs.djangoproject.com/en/1.8/topics/http/sessions/). Unfortunately, there is no similar decorator to exempt some specific view. So in order to customize the middleware's behaviour, you would need to inherit from `SessionMiddleware`. There is a [nice answer](http://stackoverflow.com/questions/16069753/disable-anonymous-user-cookie-with-django) on the matter on SO.
Can not connect to an abstract unix socket in python Question: I have a server written in c++ which creates and binds to an abstract unix socket with a namespace address of `"\0hidden"`. I also have a client which is written in c++ also and this client can **successfully** connect to my server. BTW, I do not have the source code of this client. Now I am trying to connect to my server using a client I have written in python with no success. I do not understand why my python client is not working. I am posting the relevant parts of my server and client codes. Server #define UD_SOCKET_PATH "\0hidden" struct sockaddr_un addr; int fd,cl; if ( (fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) { syslog(LOG_CRIT, "Error creating socket!"); exit(1); } memset(&addr, 0, sizeof(addr)); addr.sun_family = AF_UNIX; strncpy(addr.sun_path, UD_SOCKET_PATH, sizeof(addr.sun_path)-1); unlink(UD_SOCKET_PATH); if (::bind(fd, (struct sockaddr*)&addr, sizeof(addr)) == -1) { syslog(LOG_CRIT, "Bind error"); exit(1); } if (listen(fd, MAX_CONN_PENDING) == -1) { syslog(LOG_CRIT, "Listen error"); exit(1); } syslog(LOG_INFO, "Start listening."); And my client code #! /opt/python/bin/python import os import socket import sys # Create a UDS socket sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) server_address = "\0hidden" print >>sys.stderr, 'connecting to %s' % server_address.decode("utf-8") try: sock.connect(server_address) except socket.error, msg: print >>sys.stderr, msg sys.exit(1) After running the client I get the following error output: connecting to hidden [Errno 111] Connection refused And for some extra information I am posting the relevant parts of the strace outputs of my working c++ client and non-working python client: Working c++ client: socket(PF_FILE, SOCK_STREAM, 0) = 3 connect(3, {sa_family=AF_FILE, path=@""}, 110) = 0 fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb77d7000 write(1, "Sent message is: 00014 www.googl"..., 38) = 38 write(3, "00014 www.google.com", 20) = 20 recv(3, "014 Search Engines", 99, 0) = 18 write(1, "014 Search Engines\n", 19) = 19 close(3) = 0 exit_group(0) = ? None working python client: socket(PF_FILE, SOCK_STREAM, 0) = 3 connect(3, {sa_family=AF_FILE, path=@"hidden"...}, 9) = -1 ECONNREFUSED (Connection refused) write(2, "Traceback (most recent call last"..., 35) = 35 write(2, " File \"./uds.py\", line 13, in <"..., 40) = 40 open("./uds.py", O_RDONLY|O_LARGEFILE) = 4 fstat64(4, {st_mode=S_IFREG|0755, st_size=839, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7792000 read(4, "#! 
/opt/python/bin/python\nimport"..., 4096) = 839 write(2, " ", 4) = 4 write(2, "sock.connect('\\0hidden')\n", 25) = 25 close(4) = 0 munmap(0xb7792000, 4096) = 0 write(2, " File \"/opt/python/lib/python2."..., 64) = 64 open("/opt/python/lib/python2.7/socket.py", O_RDONLY|O_LARGEFILE) = 4 fstat64(4, {st_mode=S_IFREG|0755, st_size=20234, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7792000 read(4, "# Wrapper module for _socket, pr"..., 4096) = 4096 read(4, "oo long.\"\n errorTab[10064] = "..., 4096) = 4096 write(2, " ", 4) = 4 write(2, "return getattr(self._sock,name)("..., 39) = 39 close(4) = 0 munmap(0xb7792000, 4096) = 0 write(2, "socket", 6) = 6 write(2, ".", 1) = 1 write(2, "error", 5) = 5 write(2, ": ", 2) = 2 write(2, "[Errno 111] Connection refused", 30) = 30 write(2, "\n", 1) = 1 rt_sigaction(SIGINT, {SIG_DFL, [], 0}, {0x810fbe0, [], 0}, 8) = 0 close(3) = 0 exit_group(1) = ? And also when I run my c++ client, I get this strace output from my server: 0, NULL) = 12 futex(0x80646a4, FUTEX_CMP_REQUEUE_PRIVATE, 1, 2147483647, 0x8064688, 360) = 10 futex(0x8064688, FUTEX_WAKE_PRIVATE, 1) = 1 accept(5, But when I run my python client, no output is shown on strace. So it seems like I am trying to connect to a wrong address, but my address is defined as `"\0hidden"` in both my server and my client. Answer: Your C++ doesn't do quite what you think it does. This line: strncpy(addr.sun_path, UD_SOCKET_PATH, sizeof(addr.sun_path)-1); Copies a single null character `'\0'` into `addr.sun_path`. Note this line in the manpage for `strncpy()`: > If the length of src is less than n, strncpy() writes additional null bytes > to dest to ensure that a total of n bytes are written. As a result your C++ actually connects to an abstract domain socket at `"\0"`. Python does the right thing here and connects to an abstract domain socket at `"\0hidden"`.
how to sort data according to date in python Question: I have an input file in the following format:

    457526373620277249	17644162	Sat Apr 19 14:29:22 +0000 2014	0	nc	nc	U are expressing a wish not a fact ;) @Manicdj99 @ANTIVICTORIA @Nupe117 @cspanwj
    457522541926842368	402127017	Sat Apr 19 14:14:09 +0000 2014	0	nc	nc	@dfwlibrarian You're a great one to call somebody else "educationally challenged!" I'd call that a name call. #YouLose #PJNET #TCOT #TGDNGO	YouLose,PJNET,TCOT,TGDNGO
    457519476511350786	65713724	Sat Apr 19 14:01:58 +0000 2014	0	nc	nc	@Manicdj99 @Nupe117 @cspanwj only some RW fringies are upset- &amp; they're ALWAYS angry at something-also too fat 2 get out of lazyboys

I need the data to be sorted according to time. I am using the `strptime` function but am not able to sort the entire data according to time.

    import datetime
    dt=[]
    for line in f:
        splits = line.split('\t')
        dt.append(datetime.datetime.strptime(splits[2], "%a %b %d %H:%M:%S +0000 %Y"))
        dt.sort()

Answer: You'd want to produce a list of the lines, and only then sort the whole list; you are only capturing the timestamps and are sorting that list each time you add a new timestamp, ignoring the rest of your data. You can more easily read the data using the [`csv` module](https://docs.python.org/2/library/csv.html):

    import csv
    from datetime import datetime
    from operator import itemgetter

    rows = []
    with open(yourfile, 'rb') as f:
        reader = csv.reader(f, delimiter='\t')
        for row in reader:
            row[2] = datetime.strptime(row[2], "%a %b %d %H:%M:%S +0000 %Y")
            rows.append(row)

    rows.sort(key=itemgetter(2))  # sort by the datetime column
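To write the sorted rows back out, with the datetime column re-serialised to the original format, a short follow-up sketch (the output filename is made up):

    with open('sorted_out.txt', 'wb') as out:
        writer = csv.writer(out, delimiter='\t')
        for row in rows:
            row[2] = row[2].strftime("%a %b %d %H:%M:%S +0000 %Y")
            writer.writerow(row)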
How to access fields with StrategyPattern in Python? Question: I'm trying to use the Strategy Pattern to include different behaviours for different sizes of a simulation. I came across [this implementation](http://codereview.stackexchange.com/questions/20718/strategy-design-pattern-with-various-duck-type-classes) from the first example of the book [Head First Design Patterns](http://rads.stackoverflow.com/amzn/click/0596007124). However, I don't understand where and how I should access my data initialised in my simulation.

    from abc import ABCMeta, abstractmethod

    ################################################################################
    # Abstract Simulation class and concrete Simulation type classes.
    ################################################################################

    class Simulation:
        def __init__(self, run, plot):
            self._run_behavior = run
            self._plot_behavior = plot

        def run(self):
            return self._run_behavior.run()

        def plot(self):
            return self._plot_behavior.plot()

    class SmallSimulation(Simulation):
        def __init__(self):
            Simulation.__init__(self, Run(), Plot())
            print "I'm a small simulation"
            self.data = 'Small Data'

    class BigSimulation(Simulation):
        def __init__(self):
            Simulation.__init__(self, Run(), Plot())
            print "I'm a big simulation"
            self.data = 'Big Data'

    class LargeSimulation(Simulation):
        def __init__(self):
            Simulation.__init__(self, RunLarge(), Plot())
            print "I'm a large simulation"
            self.data = 'Large Data'

    ################################################################################
    # Run behavior interface and behavior implementation classes.
    ################################################################################

    class RunBehavior:
        __metaclass__ = ABCMeta

        @abstractmethod
        def run(self):
            pass

    class Run(RunBehavior):
        def run(self):
            print "I'm running standard"
            print self.data

    class RunLarge(RunBehavior):
        def run(self):
            print "I'm running multilevel"

    ################################################################################
    # Plot behavior interface and behavior implementation classes.
    ################################################################################

    class PlotBehavior:
        __metaclass__ = ABCMeta

        @abstractmethod
        def plot(self):
            pass

    class Plot(PlotBehavior):
        def plot(self):
            print "I'm plotting results"

    ################################################################################
    # Test Code.
    ################################################################################

    if __name__ == '__main__':
        smallSimulation = SmallSimulation()
        bigSimulation = BigSimulation()
        largeSimulation = LargeSimulation()

        print('='*20)
        print('Execution')
        smallSimulation.run()
        bigSimulation.run()
        largeSimulation.run()

        print('='*20)
        print('Plotting')
        smallSimulation.plot()
        bigSimulation.plot()
        largeSimulation.plot()

The output is

    I'm a small simulation
    I'm a big simulation
    I'm a large simulation
    ====================
    Execution
    I'm running standard
    Traceback (most recent call last):
      File "strategy.py", line 84, in <module>
        smallSimulation.run()
      File "strategy.py", line 16, in run
        return self._run_behavior.run()
      File "strategy.py", line 52, in run
        print self.data
    AttributeError: 'Run' object has no attribute 'data'

How should I initialise and access my data? Answer: Your `Run` class doesn't have a data attribute, hence the exception.
class Run(RunBehavior): # This class does NOT have a data attribute def run(self): print "I'm running standard" print self.data To access the simulation's `data` within `Run`, you can pass it in via Run's `__init__()`: class Run(RunBehavior): def __init__(self, data): self.data = data def run(self): print "I'm running standard" print self.data
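Note that in the question's code `self.data` is only assigned *after* `Simulation.__init__` runs, so constructor injection as above requires the data to exist first. An alternative sketch (not from the original answer) passes the simulation itself, i.e. the context, to the behavior at call time:

    class Simulation:
        def run(self):
            # hand the context (and therefore its data) to the strategy
            return self._run_behavior.run(self)

    class Run(RunBehavior):
        def run(self, simulation):
            print "I'm running standard"
            print simulation.data

With this variant the abstract `run` in `RunBehavior` should take the same extra argument, and every concrete behavior can read whatever fields of the simulation it needs.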
Python internal error Handling Question: I'm having issues with my program just closing at random stages and am not sure why. At first I thought it was because it was getting an error, so I added an error handler, but for some reason it still just closes after a few days of running and no error is displayed. Code below: import requests import lxml.html as lh import sys import time from clint.textui import puts, colored API_URL = "http://urgmsg.net/livenosaas/ajax/update.php" class Scraper (object): id_stamp = 0 def __init__(self, timeout, recent_messages=True): self.timeout = timeout self.handlers = [] self.recent_messages = recent_messages def register_handler(self, handler): self.handlers.append(handler) return handler def scrape(self): try: resp = requests.get(API_URL, params={'f': self.id_stamp}).json() except requests.exceptions.ConnectionError as e: puts("Error encountered when connecting to urgmsg: ", newline=False) puts(colored.red(e.__class__.__name__), newline=False) puts(" " + e.message) return if not resp['updated']: return old_id_stamp = self.id_stamp self.id_stamp = resp['IDstamp'] # if old_id_stamp is 0, this is the first scrape # which will return a whole bunch of recent past messages if not self.recent_messages and old_id_stamp == 0: return # Pager messages are returned newest to oldest, we want to # process them oldest to newest frags = lh.fragments_fromstring(resp['data'])[::-1] for frag in frags: msg = PagerMessage(frag) for handler in self.handlers: handler(msg) def run(self): while True: self.scrape() time.sleep(self.timeout) class PagerMessage: def __init__(self, fragment): children = fragment.getchildren() self.datetime = children[0].text self.text = children[1].text # channel starts with `- ` self.channel = children[1].getchildren()[0].text[2:] self.response = 'CFSRES' in self.text def __str__(self): return "{} [{}]: {}".format(self.channel, self.datetime, self.text) if __name__ == "__main__": scraper = Scraper(5) @scraper.register_handler def handler(msg): puts(colored.yellow(msg.channel), newline=False) puts(" [", newline=False) puts(colored.green(msg.datetime), newline=False) puts("] ", newline=False) if msg.response: puts(colored.red(msg.text)) else: puts(msg.text) scraper.run() Have I set this part out wrong? except requests.exceptions.ConnectionError as e: puts("Error encountered when connecting to urgmsg: ", newline=False) puts(colored.red(e.__class__.__name__), newline=False) puts(" " + e.message) return Answer: As suggested by @sobolevn, broaden the handler so that *any* exception is caught and reported, not just `ConnectionError` (note that `except: as e:` is not valid Python; use `except Exception as e:`, and prefer `str(e)` over `e.message`, which not every exception defines): except Exception as e: puts("Error encountered", newline=False) puts(colored.red(e.__class__.__name__), newline=False) puts(" " + str(e)) return
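To find out why the process dies after days of running, it also helps to log the full traceback of any unexpected exception instead of letting it kill the loop. A sketch of such a `run` method (the log filename is made up):

    import traceback

    def run(self):
        while True:
            try:
                self.scrape()
            except Exception:
                # record the full traceback so the cause of a silent exit is kept
                with open('scraper_errors.log', 'a') as log:
                    log.write(traceback.format_exc())
            time.sleep(self.timeout)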
Traceback from a Python Script: invalid literal Question: In short, what the Python script is supposed to do is to load and calculate ASCII type files. With some previously pre-processed files, it works without errors, while with mine it throws an error. In any case, it looks as though my file is different from what it should be (input-wise). Traceback (most recent call last): File "D:\TER\Scripts python\PuissantPDI.py", line 124, in <module> for i in range (42, (42+(int(type_ncols)*int(type_nrows)))): ValueError: invalid literal for int() with base 10: 'nrows' *It is not run in any QGIS/ArcGIS software, just from the cmd or IDLE. **EDIT** Just a small part of the code: import sys print("\nPDI Processing...\n") ''' OPTION FILE ''' with open("options_PDI.txt") as f: content = f.readlines() content = [x.strip('\n') for x in content] option= [] for elem in content: option.extend(elem.strip().split(" ")) f.close() b_type_file=option[1] b_totalstage_file=option[3] b_function_file=option[5] b_state_file=option[7] b_age_file=option[9] b_material_file=option[11] b_occstage_file=option[13] landcover_file=option[15] landuse_file=option[17] transport_file=option[19] * * * print("Option file loaded...\n") ''' BUILDING TYPE FILE ''' with open(b_type_file) as f: content = f.readlines() content = [x.strip('\n') for x in content] b_type= [] for elem in content: b_type.extend(elem.strip().split(" ")) f.close() type_ncols=b_type[9] type_nrows=b_type[19] type_xll=b_type[25] type_yll=b_type[31] type_pixelsize=b_type[38] type_nodata=b_type[41] type_value=[] for i in range (42, (42+(int(type_ncols)*int(type_nrows)))): type_value.append(b_type[i]) print("Building type file loaded...") Answer: You are splitting on single spaces: option= [] for elem in content: option.extend(elem.strip().split(" ")) You have an extra space somewhere, so all your offsets are off-by-one. You could solve that by simply _removing_ the argument to `str.split()`. The text will then automatically be stripped, and split on _arbitrary width_ whitespace. It won't matter if there are 1 or 2 or 20 spaces in the file then: with open("options_PDI.txt") as f: option = f.read().split() Note that I don't even bother with splitting the file into lines or stripping away the newlines. Note that your treatment of the files is rather fragile still; you are expecting certain values to exist at certain positions. If your files contain `label value` style lines, you can just read the whole file into a dictionary: with open("options_PDI.txt") as f: options = dict(line.strip().split(None, 1) for line in f if ' ' in line) and use that dictionary to address various values: type_ncols = int(options['ncols'])
Calculate how a value differs from the average of values using the Gaussian Kernel Density (Python) Question: I use this code to calculate a Gaussian Kernel Density on these values from random import randint x_grid=[] for i in range(1000): x_grid.append(randint(0,4)) print (x_grid) This is the code to calculate the Gaussian Kernel Density from statsmodels.nonparametric.kde import KDEUnivariate import matplotlib.pyplot as plt def kde_statsmodels_u(x, x_grid, bandwidth=0.2, **kwargs): """Univariate Kernel Density Estimation with Statsmodels""" kde = KDEUnivariate(x) kde.fit(bw=bandwidth, **kwargs) return kde.evaluate(x_grid) import numpy as np from scipy.stats.distributions import norm # The grid we'll use for plotting from random import randint x_grid=[] for i in range(1000): x_grid.append(randint(0,4)) print (x_grid) # Draw points from a bimodal distribution in 1D np.random.seed(0) x = np.concatenate([norm(-1, 1.).rvs(400), norm(1, 0.3).rvs(100)]) pdf_true = (0.8 * norm(-1, 1).pdf(x_grid) + 0.2 * norm(1, 0.3).pdf(x_grid)) # Plot the three kernel density estimates fig, ax = plt.subplots(1, 2, sharey=True, figsize=(13, 8)) fig.subplots_adjust(wspace=0) pdf=kde_statsmodels_u(x, x_grid, bandwidth=0.2) ax[0].plot(x_grid, pdf, color='blue', alpha=0.5, lw=3) ax[0].fill(x_grid, pdf_true, ec='gray', fc='gray', alpha=0.4) ax[0].set_title("kde_statsmodels_u") ax[0].set_xlim(-4.5, 3.5) plt.show() All the values in the grid are between 0 and 4. If I receive a new value of 5, I want to calculate how that value differs from the average values and assign it a score between 0 and 1 (setting a threshold). > So if I receive 5 as a new value its score must be close to 0.90, while if I > receive 500 as a new value its score must be close to 0.0. How can I do that? Is my function to calculate the Gaussian Kernel Density correct, or is there a better way/library to do that? *** UPDATE *** I read an example in a paper. The weight of a washing machine is typically around 100 kg. Usually vendors use the kg unit to also refer to its capacity (for example 9 kg). For a human it is easy to understand that 9 kg is the capacity and not the total weight of the washing machine. We can "fake" this form of intelligence without deep language understanding by instead modeling a distribution of values over the training data for each attribute. For a given attribute a (the weight of a washing machine, for example), let Va = {va1, va2, . . . , van} (|Va| = n) be the set of values of attribute a corresponding to products in the training data. If a new value v is, intuitively, "close" to (the distribution estimated from) Va, then we should feel more confident assigning this value to a (e.g. the weight of a washing machine). One idea could be to measure the number of standard deviations by which the new value v differs from the average of the values in Va, but a better one could be to model a (Gaussian) kernel density on Va, and then express the support at the new value v as the density at that point: ![enter image description here](http://i.stack.imgur.com/wWbLh.png) where σ^(2)ak is the variance of the kth Gaussian, and Z is a constant to make sure S(c.s.v, Va) ∈ [0, 1]. How can I obtain it in Python using the statsmodels library? *** UPDATED 2 *** Example of data... but I think that is not very important... Generated by this code...
from random import randint x_grid=[] for i in range(1000): x_grid.append(randint(1,3)) print (x_grid) [2, 2, 1, 2, 2, 3, 1, 1, 1, 2, 2, 2, 1, 1, 3, 3, 1, 2, 1, 3, 2, 3, 3, 1, 2, 3, 1, 1, 3, 2, 2, 1, 1, 1, 2, 3, 2, 1, 2, 3, 3, 2, 2, 3, 3, 2, 2, 1, 2, 1, 2, 2, 3, 3, 1, 1, 2, 3, 3, 2, 1, 2, 3, 3, 3, 3, 2, 1, 3, 2, 2, 1, 3, 3, 1, 2, 1, 3, 2, 3, 3, 1, 2, 3, 3, 2, 1, 2, 3, 2, 1, 1, 2, 1, 1, 2, 3, 2, 1, 2, 2, 2, 3, 2, 3, 3, 1, 1, 3, 2, 1, 1, 3, 3, 3, 2, 1, 2, 2, 1, 3, 2, 3, 1, 3, 1, 2, 3, 1, 3, 2, 2, 1, 1, 2, 2, 3, 1, 1, 3, 2, 2, 1, 2, 1, 2, 3, 1, 3, 3, 1, 2, 1, 2, 1, 3, 1, 3, 3, 2, 1, 1, 3, 2, 2, 2, 3, 2, 1, 3, 2, 1, 1, 3, 3, 3, 2, 1, 1, 3, 2, 1, 2, 2, 2, 1, 3, 1, 3, 2, 3, 1, 2, 1, 1, 2, 2, 2, 3, 3, 3, 3, 2, 2, 2, 3, 1, 1, 2, 2, 1, 1, 1, 3, 3, 3, 3, 1, 3, 1, 3, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 3, 1, 2, 3, 1, 3, 2, 2, 2, 2, 2, 1, 1, 2, 3, 1, 1, 1, 3, 1, 3, 2, 2, 3, 1, 3, 3, 2, 2, 3, 2, 1, 2, 1, 1, 1, 2, 2, 3, 2, 1, 1, 3, 1, 2, 1, 3, 3, 3, 1, 2, 2, 2, 1, 1, 2, 2, 1, 2, 3, 1, 3, 2, 2, 2, 2, 2, 2, 1, 3, 1, 3, 3, 2, 3, 2, 1, 3, 3, 3, 3, 3, 1, 2, 2, 2, 1, 1, 3, 2, 3, 1, 2, 3, 2, 3, 2, 1, 1, 3, 3, 1, 1, 2, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 3, 3, 3, 3, 3, 2, 1, 1, 2, 3, 2, 3, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 1, 3, 1, 1, 2, 3, 1, 1, 2, 3, 1, 2, 3, 1, 2, 1, 3, 3, 2, 2, 3, 3, 3, 2, 1, 1, 2, 2, 3, 2, 3, 2, 1, 1, 1, 1, 2, 3, 1, 3, 3, 3, 2, 1, 2, 3, 1, 2, 1, 1, 2, 3, 3, 1, 1, 3, 2, 1, 3, 3, 2, 1, 1, 3, 1, 3, 1, 2, 2, 1, 3, 3, 2, 3, 1, 1, 3, 1, 2, 2, 1, 3, 2, 3, 1, 1, 3, 1, 3, 1, 2, 1, 3, 2, 2, 2, 2, 1, 3, 2, 1, 3, 3, 2, 3, 2, 1, 3, 1, 2, 1, 2, 3, 2, 3, 2, 3, 3, 2, 3, 3, 1, 1, 3, 2, 3, 2, 2, 2, 3, 1, 3, 2, 2, 3, 3, 2, 3, 2, 2, 2, 3, 3, 1, 3, 2, 3, 1, 1, 2, 1, 3, 1, 2, 2, 3, 3, 1, 3, 1, 1, 2, 2, 1, 3, 3, 3, 1, 2, 2, 2, 1, 3, 1, 2, 2, 2, 3, 3, 3, 1, 1, 2, 3, 3, 1, 1, 2, 3, 2, 3, 3, 2, 2, 1, 3, 3, 3, 3, 2, 3, 1, 3, 3, 2, 1, 3, 2, 1, 1, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 2, 3, 3, 3, 2, 1, 3, 1, 1, 1, 1, 3, 1, 2, 3, 3, 3, 2, 3, 1, 2, 2, 2, 3, 2, 1, 2, 3, 3, 2, 3, 3, 1, 2, 3, 3, 3, 3, 2, 3, 3, 2, 1, 1, 1, 2, 3, 1, 3, 3, 2, 1, 3, 3, 3, 2, 2, 1, 2, 3, 2, 3, 3, 3, 3, 2, 3, 2, 1, 2, 1, 1, 3, 3, 3, 2, 2, 3, 1, 3, 2, 1, 3, 1, 1, 3, 3, 1, 2, 2, 2, 3, 3, 1, 2, 1, 2, 1, 3, 2, 3, 3, 3, 3, 3, 3, 3, 1, 2, 3, 1, 3, 3, 2, 2, 1, 3, 1, 1, 3, 2, 1, 2, 3, 2, 1, 3, 3, 3, 2, 3, 1, 2, 3, 3, 1, 2, 2, 2, 3, 1, 2, 1, 1, 1, 3, 1, 3, 1, 3, 3, 2, 3, 1, 3, 2, 3, 3, 1, 2, 1, 3, 2, 2, 2, 2, 2, 2, 1, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 1, 1, 3, 3, 1, 3, 1, 2, 1, 2, 1, 3, 2, 2, 1, 3, 1, 3, 3, 1, 3, 1, 1, 1, 1, 3, 2, 1, 2, 3, 1, 1, 3, 1, 1, 3, 1, 3, 3, 3, 1, 1, 3, 1, 3, 2, 2, 2, 1, 1, 2, 3, 3, 2, 3, 3, 1, 2, 3, 2, 2, 3, 1, 2, 2, 2, 1, 1, 3, 1, 2, 2, 2, 1, 1, 2, 3, 1, 3, 1, 1, 3, 2, 2, 3, 2, 2, 3, 3, 1, 1, 2, 2, 3, 1, 1, 2, 3, 2, 2, 3, 1, 2, 2, 1, 1, 3, 2, 3, 1, 1, 3, 1, 3, 2, 3, 3, 3, 3, 3, 2, 2, 3, 2, 1, 1, 1, 3, 3, 1, 2, 1, 3, 2, 3, 2, 2, 1, 2, 3, 3, 1, 1, 1, 1, 3, 3, 1, 3, 3, 1, 1, 3, 1, 3, 1, 3, 2, 3, 1, 3, 3, 3, 1, 1, 2, 2, 3, 2, 3, 2, 2, 1, 2, 1, 2, 1, 2, 2, 3, 1, 1, 3, 2, 2, 3, 2, 3, 3, 2, 2, 2, 2, 2, 2, 3, 2, 3, 1, 2, 2, 1, 1, 2, 3, 3, 1, 3, 3, 1, 3, 3, 1, 3, 2, 2, 2, 1, 1, 2, 1, 3, 1, 1, 1, 2, 3, 3, 2, 3, 1, 3] This array represents the ram of new smartphones in the market... Usually they have 1,2,3 GB of ram. That's the kernel density ![enter image description here](http://i.stack.imgur.com/wWbLh.png) *** UPDATE I try the code with this values > [1024, 1, 1024, 1000, 1024, 128, 1536, 16, 192, 2048, 2000, 2048, 24, 250, > 256, 278, 288, 290, 3072, 3, 3000, 3072, 32, 384, 4096, 4, 4096, 448, 45, > 512, 576, 64, 768, 8, 96] The values are all in mb... 
do you think that is working well? I think that I must set a threshold 100% cdfv kdev 1 42 0.210097 0.499734 1024 96 0.479597 0.499983 5000 0 0.000359 0.498885 2048 36 0.181609 0.499700 3048 8 0.040299 0.499424 *** UPDATE 3 *** [256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 512, 512, 512, 256, 256, 256, 512, 512, 512, 128, 128, 128, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 128, 128, 128, 512, 512, 512, 256, 256, 256, 256, 256, 256, 1024, 1024, 1024, 512, 512, 512, 128, 128, 128, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 4, 4, 4, 3, 3, 3, 24, 24, 24, 8, 8, 8, 16, 16, 16, 16, 16, 16, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 256, 256, 256, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 2048, 2048, 2048, 2048, 2048, 2048, 4096, 4096, 4096, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 768, 768, 768, 768, 768, 768, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 256, 256, 256, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 3072, 3072, 3072, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 256, 256, 256, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 
1024, 1024, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 64, 64, 64, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 64, 64, 64, 64, 64, 64, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 128, 128, 128, 576, 576, 576, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 576, 576, 576, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 2048, 2048, 2048, 768, 768, 768, 768, 768, 768, 768, 768, 768, 512, 512, 512, 192, 192, 192, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 384, 384, 384, 448, 448, 448, 576, 576, 576, 384, 384, 384, 288, 288, 288, 768, 768, 768, 384, 384, 384, 288, 288, 288, 64, 64, 64, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 64, 64, 64, 128, 128, 128, 128, 128, 128, 128, 128, 128, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 256, 256, 256, 768, 768, 768, 768, 768, 768, 768, 768, 768, 256, 256, 256, 192, 192, 192, 256, 256, 256, 64, 64, 64, 256, 256, 256, 192, 192, 192, 128, 128, 128, 256, 256, 256, 192, 192, 192, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 288, 128, 128, 128, 128, 128, 128, 384, 384, 384, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 32, 32, 32, 768, 768, 768, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 128, 128, 128, 128, 128, 128, 1024, 1024, 1024, 1024, 1024, 1024, 128, 128, 128, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 3072, 3072, 3072, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 
3072, 3072, 2048, 2048, 2048, 384, 384, 384, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 128, 128, 128, 256, 256, 256, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 768, 768, 768, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 128, 128, 128, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 64, 64, 64, 64, 64, 64, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 16, 16, 16, 3072, 3072, 3072, 3072, 3072, 3072, 256, 256, 256, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 32, 32, 32, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 32, 32, 32, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 512, 512, 512, 1, 1, 1, 1024, 1024, 1024, 32, 32, 32, 32, 32, 32, 45, 45, 45, 8, 8, 8, 512, 512, 512, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 16, 16, 16, 4, 4, 4, 4, 4, 4, 4, 4, 4, 16, 16, 16, 16, 16, 16, 16, 16, 16, 64, 64, 64, 8, 8, 8, 8, 8, 8, 8, 8, 8, 64, 64, 64, 64, 64, 64, 256, 256, 256, 64, 64, 64, 64, 64, 64, 512, 512, 512, 512, 512, 512, 512, 512, 512, 32, 32, 32, 32, 32, 32, 32, 32, 32, 128, 128, 128, 128, 128, 128, 128, 128, 128, 32, 32, 32, 128, 128, 128, 64, 64, 64, 64, 64, 64, 16, 16, 16, 256, 256, 256, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 256, 256, 256, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 256, 256, 256, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 256, 256, 256, 256, 256, 256, 1024, 1024, 1024, 1024, 1024, 1024, 256, 256, 256, 3072, 3072, 3072, 3072, 3072, 3072, 128, 128, 128, 1024, 1024, 1024, 512, 512, 512, 512, 
512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 128, 128, 128, 128, 128, 128, 64, 64, 64, 256, 256, 256, 256, 256, 256, 512, 512, 512, 768, 768, 768, 768, 768, 768, 16, 16, 16, 32, 32, 32, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 512, 512, 512, 2048, 2048, 2048, 1024, 1024, 1024, 3072, 3072, 3072, 3072, 3072, 3072, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 3072, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 3072, 3072, 3072, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 64, 64, 64, 96, 96, 96, 512, 512, 512, 64, 64, 64, 64, 64, 64, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 3072, 3072, 3072, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 64, 64, 64, 64, 64, 64, 256, 256, 256, 1024, 1024, 1024, 512, 512, 512, 256, 256, 256, 512, 512, 512, 1024, 1024, 1024, 512, 512, 512, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 512, 512, 512, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 3072, 3072, 3072, 3072, 3072, 3072, 2048, 2048, 2048, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 1024, 2048, 2048, 2048, 2048, 2048, 2048, 1024, 1024, 1024, 2048, 2048, 2048, 3072, 3072, 3072, 2048, 2048, 2048] With this data if I try as new value this number # new values x = np.asarray([128,512,1024,2048,3072,2800]) Something goes wrong with the 3072 (all values are in MB). This is the result: 100% cdfv kdev 128 26 0.129688 0.499376 512 55 0.275874 0.499671 1024 91 0.454159 0.499936 2048 12 0.062298 0.499150 3072 0 0.001556 0.498364 2800 1 0.004954 0.498573 I can't understand why this happens... the 3072 value appears a lot of time in the data... 
This is the histogram of my data... it is very strange because there are several values at 3072 and also at 4096. ![enter image description here](http://i.stack.imgur.com/OEkwe.png) Answer: A few general comments without going into statsmodels details. statsmodels also has cdf kernels, but I don't remember how well they work, and I don't think it has automatic bandwidth selection for them. Related to the answer of glen_b that ali_m linked to in the comment: The cdf estimate converges much faster to the true distribution than the estimate of the density as the sample grows. To balance the bias-variance tradeoff we should use a smaller bandwidth for cdf kernels, that is, undersmooth relative to density estimation. The estimates should be more accurate than the corresponding density estimates. Number of tail observations: If your largest observation in the sample is 4 and you want to know the cdf at 5, then your data has no information about it. For tails where you only have very few observations, the variance of a nonparametric estimator like a kernel distribution estimator will be large in relative terms (is it 1e-5 or 1e-20?). As an alternative to kernel density or kernel distribution estimation, we can estimate a Pareto distribution for the tail parts. For example, take the largest 10 or 20 percent of observations, fit a Pareto distribution, and use this to extrapolate the tail density. There are several Python packages for powerlaw estimation that might be used for this. **update** The following shows how to calculate "outlyingness" using a parametric normal distribution assumption and a gaussian kernel density estimate with fixed bandwidth. This is only really correct if the sample comes from a continuous distribution or can be approximated by a continuous distribution. Here we **pretend** that a sample that has only 3 distinct values comes from a normal distribution. Essentially, the calculated cdf value is like a distance measure, not a probability for a discrete random variable. This uses kde from scipy.stats with fixed bandwidth instead of the statsmodels version. I'm not sure how the bandwidth is set in scipy's gaussian_kde, so my fixed bandwidth choice equal to `scale` is likely wrong. I don't know how I would choose a bandwidth if there are only three distinct values; there is not enough information in the data. The default bandwidth is intended for distributions that are approximately normal, or at least single peaked. import numpy as np from scipy import stats # data ram = np.array([2, <truncated from data in description>, 3]) loc = ram.mean() scale = ram.std() # new values x = np.asarray([-1, 0, 2, 3, 4, 5, 100]) # assume normal distribution cdf_val = stats.norm.cdf(x, loc=loc, scale=scale) cdfv = np.minimum(cdf_val, 1 - cdf_val) # use gaussian kde but fix bandwidth kde = stats.gaussian_kde(ram, bw_method=scale) kde_val = np.asarray([kde.integrate_box_1d(-np.inf, xx) for xx in x]) kdev = np.minimum(kde_val, 1 - kde_val) #print(np.column_stack((x, cdfv, kdev))) # use pandas for prettier table import pandas as pd print(pd.DataFrame({'cdfv': cdfv, 'kdev': kdev}, index=x)) ''' cdfv kdev -1 0.000096 0.000417 0 0.006171 0.021262 2 0.479955 0.482227 3 0.119854 0.199565 5 0.000143 0.000472 100 0.000000 0.000000 '''
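Since the question asked specifically about statsmodels: after fitting, `KDEUnivariate` exposes `support` and `cdf` arrays that can be interpolated at new points. A sketch, under the same fixed-bandwidth caveat as above (and casting to float, which `KDEUnivariate` expects):

    import numpy as np
    from statsmodels.nonparametric.kde import KDEUnivariate

    kde = KDEUnivariate(ram.astype(float))
    kde.fit(bw=scale)                        # bandwidth choice is still the hard part
    # interpolate the estimated cdf at the new values x
    cdf_val = np.interp(x, kde.support, kde.cdf)
    score = np.minimum(cdf_val, 1 - cdf_val) # small score = far out in a tail

Note that `np.interp` clamps points outside the support grid to the endpoint values, so extreme new values simply get the boundary cdf.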
python Matplotlib gtk - animate plot with FuncAnimation Question: I am trying to update a plot within my GTK window with FuncAnimation. I want to click a button to start updating the plot, which gets its data from a txt file. The txt file gets updated constantly. The intent is to plot a temperature profile. Here is the simplified code: import gtk from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg as FigureCanvas import matplotlib.animation as animation from matplotlib import pyplot as plt class MyProgram(): def __init__(self): some_gtk_stuff self.signals = { 'on_button_clicked': self.create_plot, } self.builder.connect_signals(self.signals) self.vbox = self.builder.get_object('vbox') self.figure = plt.figure() self.axis = self.figure.add_subplot(1,1,1) self.init = 0 def create_plot(self): def update_plot(i): #read SampleData from txt File x = [] y = [] readFile = open('SampleData.txt', 'r') sepFile = readFile.read().split('\n') readFile.close() for data in sepFile: xy = data.split(',') x.append(int(xy[0])) y.append(int(xy[1])) self.axis.plot(x, y) if (self.init == 0): self.canvas = FigureCanvas(self.figure) self.vbox.pack_start(self.canvas) self.canvas.show() ani = animation.FuncAnimation(self.figure, update_plot, interval = 1000) self.canvas.draw() return ani MyProgram() gtk.main() So I guess the problem is that the create_plot function is only called once. The plot window is created in the GUI, but it doesn't get updated. I couldn't find a solution for my problem. Adding `return ani` as suggested [here](http://stackoverflow.com/questions/21099121/python-matplotlib-unable- to-call-funcanimation-from-inside-a-function) didn't work. You can see a working example [here](http://pythonprogramming.net/python- matplotlib-live-updating-graphs/), the code is at the bottom of the page. I'm aware that I will have to implement something like threading for the logging and updating at a later time, but I can't even get it to work without. Any tips? :) Answer: I think this is what you want to achieve. Note that I changed the way you read your file. The `with open() as f:` takes care of the file closing operation which you forgot. It is also possible to write the name of the signal handler in the builder file so one can simply say `self.builder.connect(self)` and omit the `self.signals` dictionary. Storing the animation on `self.ani` (rather than a local variable) also keeps it from being garbage-collected, which would silently stop the updates. import gtk from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg as FigureCanvas import matplotlib.animation as animation from matplotlib import pyplot as plt class MyProgram(): def __init__(self): #some_gtk_stuff self.signals = { 'on_button_clicked': self.create_plot, } self.builder.connect_signals(self.signals) self.vbox = self.builder.get_object('vbox') self.figure = plt.figure() self.axis = self.figure.add_subplot(1,1,1) self.canvas = None def create_plot(self, button): self.ani = animation.FuncAnimation(self.figure, self.update_plot, interval = 1000) def update_plot(self, i): #read SampleData from txt File x = [] y = [] with open('SampleData.txt') as f: x_raw, y_raw = f.readline().strip().split(',') x.append(int(x_raw)) y.append(int(y_raw)) self.axis.plot(x, y) if not self.canvas: self.canvas = FigureCanvas(self.figure) self.vbox.pack_start(self.canvas) self.canvas.show() self.canvas.draw() MyProgram() gtk.main()
Python: Using re module to find string, then print values under string Question: I am attempting to use the re module to search a string of a fairly large file. The file I am searching has the following format: 220 BOX 1, STEP 1 C 15.1760586379 13.7666285127 4.1579861659 F 13.7752750995 13.3845518556 4.1992254467 F 15.1122807811 15.0753387163 3.8457966464 H 15.5298304628 13.5873563855 5.1615910859 H 15.6594416869 13.1246597008 3.3754112615 5 BOX 2, STEP 1 C 15.1760586379 13.7666285127 4.1579861659 F 13.7752750995 13.3845518556 4.1992254467 F 15.1122807811 15.0753387163 3.8457966464 H 15.5298304628 13.5873563855 5.1615910859 H 15.6594416869 13.1246597008 3.3754112615 240 BOX 1, STEP 2 C 12.6851133069 2.8636250164 1.1788963097 F 11.7935769268 1.7912366066 1.3042188034 F 13.7887138736 2.3739304018 0.4126088380 H 12.1153838312 3.7024696077 0.7164304431 H 13.0962656950 3.1549047758 2.1436863477 C 12.6745394723 3.6338848332 15.1374252921 F 11.8703828307 4.3473226569 16.0480492173 F 12.2304604843 2.3709059503 14.9433964493 H 12.6002811971 4.1968554204 14.1449118786 H 13.7469256153 3.6086212350 15.5204655285 This format continues on for Box 1 and Box 2 for ~30000 STEPS total, for each BOX. I have code that utilizes the re module to searches this file based on the keyword "STEP". Unfortunately, it does not yield any results when I run it. I need my code to search 1) for **ONLY** Box 1, then 2) print/output all the coordinates(preferably omitting the "C's, F's, H's"; so only the coordinates) beginning after STEP 1 to a file, 3) increment the "STEP" number by 48 and then repeat 2). I also want to ignore the "5" and the "240" in the file that I am searching; so the code should compensate so that this is not included in the output after we search this file. This is what I have thus far (it does not work): import re shakes = open("mc_coordinates", "r") i = 1 for line in shakes: if re.match("(.*)STEP i(.*)", line): print line i+=48 This is an example of what I what my code to do: STEP 1 15.1760586379 13.7666285127 4.1579861659 13.7752750995 13.3845518556 4.1992254467 15.1122807811 15.0753387163 3.8457966464 15.5298304628 13.5873563855 5.1615910859 15.6594416869 13.1246597008 3.3754112615 STEP 49 12.6851133069 2.8636250164 1.1788963097 11.7935769268 1.7912366066 1.3042188034 13.7887138736 2.3739304018 0.4126088380 12.1153838312 3.7024696077 0.7164304431 13.0962656950 3.1549047758 2.1436863477 12.6745394723 3.6338848332 15.1374252921 11.8703828307 4.3473226569 16.0480492173 12.2304604843 2.3709059503 14.9433964493 12.6002811971 4.1968554204 14.1449118786 13.7469256153 3.6086212350 15.5204655285 STEP 97 15.1760586379 13.7666285127 4.1579861659 13.7752750995 13.3845518556 4.1992254467 15.1122807811 15.0753387163 3.8457966464 15.5298304628 13.5873563855 5.1615910859 15.6594416869 13.1246597008 3.3754112615 It should be noted that this is a condensed version, typically there will be ~250 lines of coordinates in between "STEP" numbers. Any ideas or thought will be appreciated. Thanks!! Answer: A quick although maybe not efficent way is to just parse line by line and add some states. # untested code, but i think you get the idea import re shakes = open("mc_coordinates", "r") i = 1 output = False # are we in a block that should be output? 
for line in shakes: if re.match(r"(.*)BOX 1, STEP %d\b" % i, line): # match only the BOX 1 header for the wanted step print line output = True i += 48 elif re.match(r"(.*)STEP \d+", line): # some other box or step output = False elif output: print line # or strip the first column to get rid of the C, F or Hs
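And here is a sketch of the full variant the question describes — writing only the BOX 1 coordinates, with the element letters stripped, to an output file (the output filename is made up):

    import re

    with open("mc_coordinates") as src, open("box1_coords.txt", "w") as dst:
        i, output = 1, False
        for line in src:
            if re.search(r"BOX 1, STEP %d\b" % i, line):
                dst.write("STEP %d\n" % i)
                output = True
                i += 48
            elif "STEP" in line:            # header for some other box or step
                output = False
            elif output:
                parts = line.split()
                if len(parts) == 4:         # e.g. "C  x  y  z" -> keep only x y z
                    dst.write(" ".join(parts[1:]) + "\n")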
Ignoring samples in Gibbs sampling Question: import random,math def gibbs(N=50000,thin=1000): x=0 y=0 print "Iter x y" for i in range(N): for j in range(thin): x=random.gammavariate(3,1.0/(y*y+4)) y=random.gauss(1.0/(x+1),1.0/math.sqrt(2*x+2)) print i,x,y gibbs() The above python code is the Gibbs Sampling and the following line confuses me. for j in range(thin): What is the significance of that additional inner loop? Answer: The reason is the introduction of thinning into your Gibbs sampling. Thinning is used to reduce the effect of correlation between consecutive samples. Gibbs sampling generates a Markov chain of samples, and nearby samples are correlated, while typically the intention is to draw samples that are independent. To achieve that, you can use only every `M-th` value, while ignoring all intermediate values. In this case `M` is stored in the variable `thin`, and only every `thin`-th draw survives the inner loop and is printed below it.
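A slight rearrangement makes the effect explicit — only one draw per `thin` inner updates is kept (a sketch, not the original code):

    import random, math

    def gibbs_samples(N=500, thin=100):
        """Return N (x, y) draws, keeping only every `thin`-th inner update."""
        x = y = 0.0
        samples = []
        for _ in range(N):
            for _ in range(thin):  # burn through the correlated intermediate draws
                x = random.gammavariate(3, 1.0 / (y * y + 4))
                y = random.gauss(1.0 / (x + 1), 1.0 / math.sqrt(2 * x + 2))
            samples.append((x, y))  # only this draw is stored
        return samples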
How to print iterations per second? Question: I have a small Python script which sends POST requests to a server and gets their response. It iterates 10000 times, and I managed to print the current progress in the command prompt using: code=current_requestnumber print('{0}/{1}'.format(str(code),"10000"),end="\r") at the end of each loop. Because this involves interaction with a webserver, I would like to show the current average speed next to this too (updated like every 2 seconds). An example at the bottom of the command prompt would then be like this: (1245/10000), 6.3 requests/second How do I achieve this? Answer: You can get a total average number of events per second like this: #!/usr/bin/env python3 import time import datetime as dt start_time = dt.datetime.today().timestamp() i = 0 while(True): time.sleep(0.1) time_diff = dt.datetime.today().timestamp() - start_time i += 1 print(i / time_diff) which in this example would print approximately 10. Please note that I used the `timestamp` method of `datetime`, which is only available in Python 3. Now, if you would like to calculate the "current" number of events per second, say over the last 10 events, you can do it like this: #!/usr/bin/env python3 import time import datetime as dt last_time = dt.datetime.today().timestamp() diffs = [] while(True): time.sleep(0.1) # Add new time diff to list new_time = dt.datetime.today().timestamp() diffs.append(new_time - last_time) last_time = new_time # Clip the list if len(diffs) > 10: diffs = diffs[-10:] print(len(diffs) / sum(diffs)) Here I'm keeping a list of the durations of the last 10 iterations, which I can then use to get the average number of events per second.
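Combining this with the question's progress line gives the exact output asked for (a Python 3 sketch; the request call is a placeholder):

    import time

    N = 10000
    start = time.time()
    for i in range(1, N + 1):
        # ... send the POST request here ...
        elapsed = time.time() - start
        rate = i / elapsed if elapsed > 0 else 0.0
        # overwrite the same console line each iteration
        print('({0}/{1}), {2:.1f} requests/second'.format(i, N, rate), end='\r')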
Python: How to use an item from a python list which appears a specific number of times? Question: Suppose I have a python list num = [1,2,5,3,4,4] and I know that there is an item which appears 2 times in num. Now I want to use only that item. Is there a predefined function to choose that item? *There is no upper limit on value of items in num. Answer: There is no predefined function, no. You'll have to count all the items and figure out which one can be found twice in the list. You can use a [`collections.Counter()` object](https://docs.python.org/2/library/collections.html#collections.Counter) to get the counts, and from there it is reasonably easy to enumerate the values that appear twice: from collections import Counter counts = Counter(num) twice = [n for n, count in counts.iteritems() if count == 2]
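Applied to the question's data, a short usage sketch (since the question guarantees exactly one item appearing twice, taking the first match is safe):

    from collections import Counter

    num = [1, 2, 5, 3, 4, 4]
    counts = Counter(num)
    twice = [n for n, count in counts.iteritems() if count == 2]
    item = twice[0]   # -> 4
    print item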
Run 3 variables at once in a python for loop. Question: For loop with multiple variables in python 2.7. Hello, I am not certain how to go about this, I have a function that goes to a site and downloads a .csv file. It saves the .csv file in a particular format: name_uniqueID_dataType.csv. here is the code import requests name = "name1" id = "id1" dataType = "type1" def downloadData(): URL = "http://www.website.com/data/%s" %name #downloads the file from the website. The last part of the URL is the name r = requests.get(URL) with open("data/%s_%s_%s.csv" %(name, id, dataType), "wb") as code: #create the file in the format name_id_dataType code.write(r.content) downloadData() The code downloads the file and saves it perfectly fine. I want to run a for loop on the function that takes those three variables each time. The variables will be written as lists. name = ["name1", "name2"] id = ["id1", "id2"] dataType = ["type1", "type2"] There will be over 100 different items listed in each list with the same amount of items in each variable. Is there any way to accomplish this using a for loop in python 2.7. I have been doing research on this for the better part of a day but I can't find a way to do it. Please note that I am new to python and this is my first question. Any assistance or guidance would be greatly appreciated. Answer: [zip](https://docs.python.org/2/library/functions.html#zip) the lists and use a for loop: def downloadData(n,i,d): for name, id, data in zip(n,i,d): URL = "http://www.website.com/data/{}".format(name) #downloads the file from the website. The last part of the URL is the name r = requests.get(URL) with open("data/{}_{}_{}.csv".format(name, id, data), "wb") as code: #create the file in the format name_id_dataType code.write(r.content) Then pass the lists to your function when calling: names = ["name1", "name2"] ids = ["id1", "id2"] dtypes = ["type1", "type2"] downloadData(names, ids, dtypes) zip will group your elements by index: In [1]: names = ["name1", "name2"] In [2]: ids = ["id1", "id2"] In [3]: dtypes = ["type1", "type2"] In [4]: zip(names,ids,dtypes) Out[4]: [('name1', 'id1', 'type1'), ('name2', 'id2', 'type2')] So the first iteration name,id and data will be `('name1', 'id1', 'type1')` and so on..
Convert float to log space in python Question: I am implementing the Viterbi algorithm (a dynamic algorithm) in Python, and I notice that for large input files, the probabilities keep getting multiplied and shrinking beyond the floating point precision. I need to store the numbers in log space. Can anyone give a simple example Python code-snippet of how to convert, say, 0.0000003 to log-space? (I'm not sure if it needs to be natural log or some other log. I have only heard of "log-space" but don't really know about it.) Thanks! Answer: To move to log space, use `log`. To move back again use `exp`. [The rules in log space are different](https://en.wikipedia.org/wiki/Logarithm#Logarithmic_identities) \- eg. to perform multiplication is to add in logspace. >>> from math import log, exp >>> log(0.0000003) -15.01948336229021 >>> exp(-15.01948336229021) 3.0000000000000015e-07 >>> log(0.0000003) + log(0.0000003) -30.03896672458042 >>> exp(-30.03896672458042) 9.000000000000011e-14 # 0.0000003 * 0.0000003 Here is an example using some small probabilities >>> probabilities = [0.0000003, 0.0000004, 0.0000005] >>> exp(sum(log(p) for p in probabilities)) 5.999999999999992e-20
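For Viterbi itself, multiplication and `max` are all you need, and both survive the move to log space (multiply becomes add; `max` is order-preserving). If you ever need to *add* probabilities in log space (e.g. for the forward algorithm), the standard log-sum-exp trick avoids leaving log space — a small sketch:

    from math import log, exp

    def log_add(log_a, log_b):
        """Return log(exp(log_a) + exp(log_b)) without underflow."""
        if log_a < log_b:
            log_a, log_b = log_b, log_a  # ensure log_a is the larger term
        return log_a + log(1 + exp(log_b - log_a))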
UnicodeEncodeError Python 2.7 Question: I am using Tweepy for authentication and I am trying to print text, but I am unable to print it: I am getting a UnicodeEncodeError. I tried some methods but was unable to solve it. # -*- coding: utf-8 -*- import tweepy consumer_key = "" consumer_secret = "" access_token = '' access_token_secret = '' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) public_tweets = api.home_timeline() for tweet in public_tweets: print tweet.text.decode("utf-8")+'\n' Error: (venv) C:\Users\e2sn7cy\Documents\GitHub\Tweepy>python tweepyoauth.py Throwback to my favourite! Miss this cutie :) #AdityaRoyKapur https://t.co/sxm8g1qhEb/n Cristiano Ronaldo: 3 hat-tricks in his last 3 matches. Lionel Messi: 3 trophies in his last 3 matches. http://t.co/For1It4QxF/n How to Bring the Outdoors in With Indoor Gardens http://t.co/efQjwcszDo http://t.co/1NLxSzHxlI/n Traceback (most recent call last): File "tweepyoauth.py", line 17, in <module> print tweet.text.decode("utf-8")+'/n' File "C:\myPython\venv\lib\encodings\utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-7: ordinal not in range(128) Answer: The line `print tweet.text.decode("utf-8")+'/n'` is the cause. You decode `tweet.text` as UTF-8 into a unicode string. Fine until here. But you then try to concatenate it with a raw string '/n' (BTW, I think you really wanted `\n`), and Python tries to convert the unicode string to an ASCII raw string, giving the error. You should concatenate with a **unicode** string to obtain a unicode string without conversion: print tweet.text.decode("utf-8") + u'\n' If this is not enough, it could be because your environment cannot directly print unicode strings. Then you should explicitly encode it in the native charset of your system: print (tweet.text.decode("utf-8") + u'\n').encode('cp850') [here replace 'cp850' (_my_ charset) with the charset on _your_ system]
Problems while importing pyaudio Question: I recently bought a new laptop and moved my files over from my old one. I was working on a project in PyCharm which used the module pyaudio. I tried to run it and got an error saying there is no module called pyaudio. I ran "apt-get install python-pyaudio"; it was successful, but the error persisted. Then I downloaded pyaudio from its website and installed the package... the problem is still here. Any ideas? Also, I changed the interpreter from 2.7 to 3, if that's any help. Answer: Try to install PyAudio from [here](http://people.csail.mit.edu/hubert/pyaudio/#downloads) If you installed the package for a certain Python version, it will not be accessible after changing the interpreter in PyCharm. So install it for the other version as well. This should work.
Import on class instanciation Question: I'm creating a module with several classes in it. My problem is that some of these classes need to import very specific modules that needs to be manually compiled or need specific hardware to work. There is no interest in importing every specific module up front, and as some modules need specific hardware to work, it could even raise errors. I would like to know if it is possible to import these module only when needed, that is on instantiation of a precise class, like so : class SpecificClassThatNeedRandomModule(object): import randomModule Also I am not sure that this would be a good pythonic way of doing the trick, so I'm open to suggestion for a proper way. Answer: It is possible to import a module at instantiation: class SpecificClassThatNeedRandomModule(object): def __init__(self): import randomModule self.random = randomModule.Random() However, this is a bad practice because it makes it hard to know when the import is done. You might want to modify your module so that it doesn't raise an exception, or catch the `ImportError`: try: import randomModule except ImportError: randomModule = None
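With the module-level `try`/`except` above, the class can then fail loudly — and only when someone actually instantiates it — a sketch continuing the answer's example:

    class SpecificClassThatNeedRandomModule(object):
        def __init__(self):
            if randomModule is None:
                # the optional dependency was missing at import time
                raise ImportError("randomModule is required for this class")
            self.random = randomModule.Random()

This keeps the import cost and the hardware-dependent failure out of modules that never use the class, while still giving a clear error at the point of use.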
Django TemplateDoesNotExist Error on Windows machine Question: I have been following the tutorial for Django **[Tango with Django](http://www.tangowithdjango.com/book17/chapters/setup.html#django- basics)** I was trying to add a template as instructed in the [link](http://www.tangowithdjango.com/book17/chapters/templates_static.html). I am working with Python 2.7 and Django 1.8 on a Windows 7 machine. Below is the error that I get: TemplateDoesNotExist at /rango/ rango/index.html Request Method: GET Request URL: http://127.0.0.1:8000/rango/ Django Version: 1.8 Exception Type: TemplateDoesNotExist Exception Value: rango/index.html Exception Location: C:\Python27\lib\site-packages\django\template\loader.py in get_template, line 46 * * * Django tried loading these templates, in this order: Using loader django.template.loaders.filesystem.Loader: Using loader django.template.loaders.app_directories.Loader: C:\Python27\lib\site-packages\django\contrib\admin\templates\rango\index.html (File does not exist) C:\Python27\lib\site-packages\django\contrib\auth\templates\rango\index.html (File does not exist) Below is my file structure: tango_with_django_project +-- rango | +--views.py | +--other files +-- tango_with_django_project | +--templates | | +--rango | | | +--index.html | +--settings.py | +--other files +-- db.sqlite3 +-- manage.py I have given the template path as below in **`settings.py`** : BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) TEMPLATE_PATH = os.path.join(BASE_DIR,'templates') TEMPLATE_DIRS = ( TEMPLATE_PATH, ) And have set the view as in **`views.py`** : from django.shortcuts import render from django.http import HttpResponse def index(request): context_dict = {'boldmessage': "I am bold font from the context"} return render(request, 'rango/index.html', context_dict) Without using the template, it works fine and displays plain text. But when I make the changes required for the template, it does not work and throws the error. There are many links on SO with the same issue but none have worked for me. The [one](http://stackoverflow.com/questions/14150341/django-the- templatedoesnotexist-error) with almost the same error description and on a Windows machine has a vague answer. I have tried directly specifying the **absolute path** instead of getting it from `os.path` and have also tried placing the template folder in different paths. Any help would be appreciated. Answer: The solution that worked for me was removing the below piece of code from `settings.py`: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] And adding: TEMPLATE_DIRS = ('C:/Users/vaulstein/tango_with_django_project/templates',) **OR** changing the `'DIRS'` entry (the 4th line below): TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': ['C:/Users/vaulstein/tango_with_django_project/templates'], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ]
Python import statement in a loop: does import run every loop iteration? Question: For a code I am writing, I am running `scipy.curve_fit()` tens of thousands of times. I noticed in the [relevant `curve_fit()` source code](https://github.com/scipy/scipy/blob/v0.14.0/scipy/optimize/minpack.py), specifically on [lines 430](https://github.com/scipy/scipy/blob/v0.14.0/scipy/optimize/minpack.py#L430) and 431 in the source (in the `leastsq()` function), there are two `import` statements: from numpy.dual import inv from numpy.linalg import LinAlgError I call `curve_fit()` inside a loop. I am wondering if the modules loaded by these `import` statements are kept once an iteration of the loop is completed, or if the modules fall out of scope and need to be reloaded in every iteration of the loop. Note: the `import` statements are only called if the `if full_output:` statement on line 427 of the source code evaluates to true. `full_output=1` is what is passed to `leastsq()` by `curve_fit()`, so the `import` statements are indeed called. Additional note: I am **not asking about importing modules multiple times (so much)** , but rather whether a module imported in a loop is still accessible by the code after the loop completes (or after each iteration of the loop). More notes: >>> for x in range(0,1): ... import os ... >>> os <module 'os' from '/home/lars/env/common/lib64/python2.7/os.pyc'> this works, but if I instead define a function: def a(b): if a==True: import scipy then for i in range(10): a(True) scipy NameError: name 'scipy' is not defined What is up with that? Answer: This behavior has nothing to do with the loop; it's all about the function's scope. As the docs say, > The basic import statement (no from clause) is executed in two steps: (1) find a module, loading and initializing it if necessary; (2) define a name or names in the local namespace for the scope where the import statement occurs. And functions do have their own scope; that's why you can't see the imported name outside of it. <https://docs.python.org/3/reference/simple_stmts.html#the-import-statement>
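A sketch illustrating the caching side of this (assuming SciPy is installed): the module *object* is cached in `sys.modules` after the first import, so an `import` inside a function or loop is cheap on later calls — it only rebinds a local name.

    import sys

    def use_scipy():
        import scipy              # loads on the first call only; afterwards this
        return scipy.__version__  # just binds the cached module to a local name

    use_scipy()
    print('scipy' in sys.modules)   # True: the module object stays loaded
    print('scipy' in globals())     # False: the *name* was local to the function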
cx_Oracle imports wrong module Question: I am trying to connect to an Oracle DB v.9. I downloaded the latest Instant Client (12.1.0.2.0) + SDK, then cx_Oracle. When trying to connect to the DB it says cx_Oracle.DatabaseError: ORA-03134: Connections to this server version are no longer supported. so I tried installing an older version of Instant Client, 10.2.0.5: sudo yum remove oracle-instantclient12.1-basic sudo yum install /vagrant/oracle-instantclient-devel-10.2.0.5-1.x86_64.rpm and updated my env: export ORACLE_HOME=/usr/lib/oracle/10.2.0.5/client64/ export LD_LIBRARY_PATH=/usr/lib/oracle/10.2.0.5/client64/lib/ but when I try to run my Python script, cx_Oracle says: ImportError: libclntsh.so.12.1: cannot open shared object file: No such file or directory Of course there's no such file or directory: I have version 10.2.0.5, not 12.1. And my file is called `libclntsh.so.10.1`, not `libclntsh.so.12.1`. How do I make cx_Oracle import the right file? Or is there a way to make Instant Client v.12 work with server v.9? Answer: Oracle guarantees backwards compatibility for two versions. So version 10 can access versions 8, 9 and 10, while version 12 can only access version 10 and up. You need to do a clean install of the correct client. Note also that the cx_Oracle extension records a dependency on the client library it was compiled against — which is why it still looks for `libclntsh.so.12.1` — so after switching to the 10.2 client, rebuild/reinstall cx_Oracle as well, so that it links against `libclntsh.so.10.1`.
how to make a python script run repeatedly Question: I have a simple script which saves some values to a database. I also have a window built in Tkinter. Basically my problem is that I want the savebase() function to be called continuously for as long as the window remains open. How can this be done? So far I am able to run the function only once, when the window opens. I could also add a button to repeat it, but that doesn't solve the problem, as I want this to happen about 2 times per second. A simplified version of my attempt at this is: import Tkinter import saveDB def doing_it(): a = saveDB.save() a.savebase() window = Tkinter.Tk() window.title("Saving Database") window.geometry("300x300+100+100") first_button=Tkinter.Button(window, text='Save DB', command=doing_it, fg='white', bg='black').grid(row=3,column=2) window.mainloop() Is `if __name__ == '__main__':` the way to do it? During my search for the above-mentioned problem I came across it, though I know nothing about it. Answer: Use the `after` method: def doing_it(): a = saveDB.save() a.savebase() window.after(500, doing_it) window = Tkinter.Tk() window.title("Saving Database") window.geometry("300x300+100+100") window.after(500, doing_it) window.mainloop()
Compare 2 separate csv files and write difference to a new csv file - Python 2.7 Question: I am trying to compare two csv files and save the difference to a third csv file in Python 2.7. import csv f1 = open ("olddata/file1.csv") oldFile1 = csv.reader(f1) oldList1 = [] for row in oldFile1: oldList1.append(row) f2 = open ("newdata/file2.csv") oldFile2 = csv.reader(f2) oldList2 = [] for row in oldFile2: oldList2.append(row) f1.close() f2.close() set1 = tuple(oldList1) set2 = tuple(oldList2) print oldList2.difference(oldList1) I get the error message: Traceback (most recent call last): File "compare.py", line 21, in <module> print oldList2.difference(oldList1) AttributeError: 'list' object has no attribute 'difference' I am new to python, and coding in general, and I am not done with this code just yet (I still have to store the differences in a variable and write them to a new csv file). I have been trying to solve this all day and I simply can't. Your help would be greatly appreciated. Answer: What do you mean by difference? The answer to that gives you two distinct possibilities. If a row is considered the same when **all columns** are the same, then you can get your answer via the following code: import csv f1 = open ("olddata/file1.csv") oldFile1 = csv.reader(f1) oldList1 = [] for row in oldFile1: oldList1.append(row) f2 = open ("newdata/file2.csv") oldFile2 = csv.reader(f2) oldList2 = [] for row in oldFile2: oldList2.append(row) f1.close() f2.close() print [row for row in oldList1 if row not in oldList2] However, if two rows are the same when a **certain key field (i.e. column)** is the same, then the following code will give you your answer: import csv f1 = open ("olddata/file1.csv") oldFile1 = csv.reader(f1) oldList1 = [] for row in oldFile1: oldList1.append(row) f2 = open ("newdata/file2.csv") oldFile2 = csv.reader(f2) oldList2 = [] for row in oldFile2: oldList2.append(row) f1.close() f2.close() keyfield = 0 # Change this for choosing the column number oldList2keys = [row[keyfield] for row in oldList2] print [row for row in oldList1 if row[keyfield] not in oldList2keys] **Note:** The above code might run slowly for extremely large files. If you instead wish to speed up the code through hashing, you can use `set` after converting the `oldList`s using the following code: set1 = set(tuple(row) for row in oldList1) set2 = set(tuple(row) for row in oldList2) After this, you can use `set1.difference(set2)`
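And to complete the asker's stated goal — writing the difference to a third file — a sketch continuing from the lists built above (the output filename is an assumption; binary mode matters for the Python 2 csv module):

    import csv

    set2 = set(tuple(row) for row in oldList2)
    diff = [row for row in oldList1 if tuple(row) not in set2]

    with open("difference.csv", "wb") as out:
        writer = csv.writer(out)
        writer.writerows(diff)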
Trouble reading in Unicode strings from CSV file to DictReader in Python Question: I have a CSV file I'm trying to read in using DictReader. But doing just this: with open("BeerRatings.csv", "r") as f: reader = csv.DictReader(f) for line in reader: print line gives me escaped UTF-8 bytes as such: {'Rating': '4', 'Brewery': 'Tr\xc3\xb6egs Brewing Company', 'Beer name': 'Tr\xc3\xb6egs Hopback Amber Ale'} {'Rating': '4.59', 'Brewery': 'Brasserie Dieu Du Ciel', 'Beer name': 'P\xc3\xa9ch\xc3\xa9 Mortel - Bourbon Barrel Aged'} etc. So, reading on stackoverflow, I edited my code to this, using the codecs module: import codecs with codecs.open("BeerRatings.csv", "r", "utf-8") as f: reader = csv.DictReader(f) for line in reader: print line But this is giving me a `UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 9: ordinal not in range(128)`. Any tips on how to fix this? UPDATE aka more flailing around: def UnicodeDictReader(utf8_data, **kwargs): csv_reader = csv.DictReader(utf8_data, **kwargs) for row in csv_reader: yield {key: unicode(value, 'utf-8') for key, value in row.iteritems()} with open("BeerRatings.csv", "r") as f: reader = UnicodeDictReader(f) for line in reader: print line This still gives me less than ideal output... {'Rating': u'4', 'Brewery': u'Tr\xf6egs Brewing Company', 'Beer name': u'Tr\xf6egs Hopback Amber Ale'} {'Rating': u'4.59', 'Brewery': u'Brasserie Dieu Du Ciel', 'Beer name': u'P\xe9ch\xe9 Mortel - Bourbon Barrel Aged'} Answer: The `csv` module in Python 2.X expects the input file to be opened in binary, and does not support encodings. It is, however, compatible with UTF-8, but you have to decode to Unicode yourself: import csv with open('BeerRatings.csv','rb') as f: reader = csv.DictReader(f) for line in reader: for k,v in line.iteritems(): print k.decode('utf8'),':',v.decode('utf8') print Output: Rating : 4 Brewery : Tröegs Brewing Company Beer name : Tröegs Hopback Amber Ale Rating : 4.59 Brewery : Brasserie Dieu Du Ciel Beer name : Péché Mortel - Bourbon Barrel Aged ### Edit Per your `UnicodeDictReader`, you still need to print the key/value pairs as I did, or you get the default printing for a `dict`, which shows escaped data via the `repr()` of the string. Also open the file in binary mode. It matters on some OSes, particularly Windows. import csv def UnicodeDictReader(utf8_data, **kwargs): csv_reader = csv.DictReader(utf8_data, **kwargs) for row in csv_reader: yield {key.decode('utf8'):value.decode('utf8') for key, value in row.iteritems()} def prettydict(D): return u'{' + u', '.join(u"'{}': '{}'".format(k,v) for k,v in D.iteritems()) + u'}' with open("BeerRatings.csv", "rb") as f: reader = UnicodeDictReader(f) for line in reader: print prettydict(line) Output: {'Rating': '4', 'Brewery': 'Tröegs Brewing Company', 'Beer name': 'Tröegs Hopback Amber Ale'} {'Rating': '4.59', 'Brewery': 'Brasserie Dieu Du Ciel', 'Beer name': 'Péché Mortel - Bourbon Barrel Aged'}
Error in executing .jar file from a Python script called from another Python script, as a subprocess Question: This is an extension of a resolved post that I had posted [here](http://stackoverflow.com/questions/30766563/error-in-executing-a-jar- file-in-remote-machine). I have a `python script` which has the following `jar` execution code in it (along with some other code): **python_file2.py** import os cmd_txt = "ssh -i pem_file.pem user@" + host_name + " 'cd /user/folder1/ && java -cp jar-file.jar'" os.system(cmd_txt) Now this `python script` file (python_file2.py) is called as a `subprocess` from another `python script`. **Main_script.py** ret = subprocess.Popen([sys.executable,"python_file2.py"]) When I run **Main_script.py**, the execution of the `jar` file from `python_file2.py` seems to hang. When I run the first script alone it works fine, but when I try to execute it as a `subprocess`, the `jar` execution hangs and times out. What could be a possible reason for the `jar` execution hanging when executed from inside a `subprocess` script? All I want is to run a jar file which is present on a remote machine from inside a python script which is run as a `subprocess`. Answer: Use `subprocess.call` instead of `subprocess.Popen`. `Popen` starts the child process and returns immediately without waiting for it to finish, so `Main_script.py` can reach its end and exit while `python_file2.py` — and therefore the remote `java` command — is still running. `call` runs the command and waits for it to complete: ret = subprocess.call([sys.executable,"python_file2.py"]) assert not ret, ret
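If you would rather get an exception than an assertion on failure, `subprocess.check_call` combines the wait and the return-code check in one step: import subprocess import sys # Raises CalledProcessError if python_file2.py exits with a non-zero status. subprocess.check_call([sys.executable, "python_file2.py"])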
Python: Urllib proxyhandler error Question: So I'm fairly new to using the urllib.request library and I'm trying to run a proxy through proxy handler, however I keep getting this error message assert hasattr(proxies, 'keys'), "proxies must be a mapping" AssertionError: proxies must be a mapping My code is import urllib.request proxy = "https://107.170.206.225" handler = urllib.request.ProxyHandler(proxy) opener = urllib.request.build_opener(handler) urllib.request.install_opener(opener) response = urllib.request.urlopen('http://youtube.com/') I've tried looking through the documentation and it said to make sure to use the dictionary mapping protocol but I'm unsure of how to do that so any help would be appreciated. Answer: You are right [the documentation for ProxyHandler](https://docs.python.org/3.0/library/urllib.request.html#urllib.request.ProxyHandler) says that _"`proxies` must be a dictionary mapping protocol names to URLs of proxies"_. And it must be read as _"`proxies` must be a dictionary that maps protocol names to URLs of proxies"_. In your case `proxies` should be defined as follows. proxies = {'https': '107.170.206.225'} handler = urllib.request.ProxyHandler(proxies)
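Putting that fix back into the question's script, a corrected end-to-end version might look like this (the proxy address is the one from the question and may no longer be reachable): import urllib.request # ProxyHandler wants a dict mapping protocol names to proxy URLs. proxies = {'http': 'http://107.170.206.225', 'https': 'https://107.170.206.225'} handler = urllib.request.ProxyHandler(proxies) opener = urllib.request.build_opener(handler) urllib.request.install_opener(opener) response = urllib.request.urlopen('http://youtube.com/') print(response.getcode())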
How to create a module[qpython3] Question: Noob alert! I'm wondering how you go about importing a module (one made in QPython)? I've tried making a new folder and adding a setup.py, then importing, but I just get an error about the module not being found. Thanks in advance Answer: The folder containing your module must be on Python's module search path, which is held in `sys.path`. You can append your folder's path to it at runtime. Add the following lines to the top of your project's main file: import sys sys.path.append(yourPath) `yourPath` must be a string. For example: '/storage/sdcard0/myfolder'
How to execute Python code generated by Blockly right in the browser? Question: I was following the example [Blockly Code Generators](https://developers.google.com/blockly/installation/code- generators) and was able to generate Python code. But when I run the Python code, I get an error. It seems the error comes from 'eval(code)' below; what should I do if I want to execute the Python code right inside the browser? Thanks for any help! Blockly.JavaScript.addReservedWords('code'); var code = Blockly.JavaScript.workspaceToCode(workspace); try { eval(code); } catch (e) { alert(e); } [here is the snapshot](http://postimg.org/image/t4j6l6y2p/) Unfortunately I don't have enough points to post the image here Answer: Can you try this with a simple program, like `print('Hello World!')`? According to the image, the issue could be with indentation; indentation is significant in Python, and getting it wrong causes syntax errors. If you want Python output, you should also generate the code with the Python generator rather than the JavaScript one: Blockly.Python.addReservedWords('code'); var code = Blockly.Python.workspaceToCode(workspace); try { eval(code); } catch (e) { alert(e); } Note, however, that the browser's `eval()` executes JavaScript, not Python, so it cannot run the generated Python directly. Either generate JavaScript with `Blockly.JavaScript.workspaceToCode(workspace)` and `eval()` that, or run the generated Python through an in-browser Python interpreter such as Skulpt or Brython.
Create Popup in MapMarkerPopup on the python-side(not in kv file) in kivy-Garden-Mapview Question: I went to the [MapView- Doncumentation](http://mapview.readthedocs.org/en/latest/#) and also to the [Source code](https://github.com/kivy-garden/garden.mapview) but this doesn't seems to help much. I created this templete in kv file so that I could dynamically create a mapmarkerpopup in the Map, but when I try this it creates another widget (which is obvious as I did add_widget in the load_content method because I couldn't find any other way) This is the map_data.kv file #:import MapSource kivy.garden.mapview.MapSource #:import MapMarkerPopup kivy.garden.mapview.MapMarkerPopup [MakePopup@BoxLayout]: MapMarkerPopup: lat: ctx.lat lon: ctx.lon popup_size: 400,400 Bubble: Image: source: ctx.image mipmap: True Label: text: ctx.label markup: True halign: "center" <Toolbar@BoxLayout>: size_hint_y: None height: '48dp' padding: '4dp' spacing: '4dp' canvas: Color: rgba: .2, .2, .2, .6 Rectangle: pos: self.pos size: self.size <Map_Data>: Toolbar: top: root.top #Spinner created to select places. Spinner: text: "Sydney" values: root.map_values.keys() on_text: if (self.text == 'France'): root.load_content() else: pass MapView: id: mapview lat: 28.89335172 lon: 76.59449171 zoom: 24 This is the main.py file class Map_Data(BoxLayout): .... def load_content(self): self.add_widget(Builder.template('MakePopup', lat ='28.89335152', lon='76.59449153', image="goku.jpg",label='label')) This is the output that I get from the above code. **I want that marker on the map**. ![enter image description here](http://i.stack.imgur.com/YPT5d.png) Now we see that mapview has a function "add_marker" but via this method **I cannot add the image and label.** if (self.text == 'Sydney'): mapview.add_marker(MapMarkerPopup(lat=-33.8670512,lon=151.206)) else: pass It works totally fine and adds the marker on the map. But how to add Image and label ie. content??? mapview.add_marker(MapMarkerPopup(lat=-33.8670512,lon=151.206, content=???)) Now that expected result could be generated by manually creating, as in <https://github.com/kivy- garden/garden.mapview/blob/master/examples/map_with_marker_popup.py> But what about creating it dynamically??? Any help is appreciated. **EDIT 1:** I also tried to do this. if (self.text == 'Sydney'): mapview.add_marker(MapMarkerPopup(lat=-33.8670512, lon=151.206,popup_size=(400,400)).add_widget(Button(text = "stackoverflow"))) else: pass but it shows this error: marker._layer = self AttributeError: 'NoneType' object has no attribute '_layer' Answer: It's been a while since you asked this question but I faced the same problem recently and maybe there is some interest in an answer. You pointed out elsewhere how to add content dynamically (<https://github.com/kivy- garden/garden.mapview/issues/5>) but the problem that the popup showed up in the wrong place remained and you suggested that the `set_marker_position` method needs to be changed. Changing it to def set_marker_position(self, mapview, marker): x, y = mapview.get_window_xy_from(marker.lat, marker.lon, mapview.zoom) marker.x = int(x - marker.width * marker.anchor_x) marker.y = int(y - marker.height * marker.anchor_y) if isinstance(marker, MapMarkerPopup): marker.placeholder.x = marker.x - marker.width / 2 marker.placeholder.y = marker.y + marker.height i.e. adding the last three lines did the trick for me.
How to subtract time with Python Question: I'd like to get the time X seconds (or minutes) before `datetime.datetime.now()`. For example, if the time now is `12:59:00` and I subtract `59` minutes, I want to get `12:00:00`. How can I do that? Answer: [You can use a timedelta](https://docs.python.org/2/library/datetime.html#timedelta-objects) like this: import datetime print datetime.datetime.now() - datetime.timedelta(minutes=59)
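The same pattern works for any unit `timedelta` accepts, for example: import datetime now = datetime.datetime.now() print now - datetime.timedelta(seconds=59) # 59 seconds ago print now - datetime.timedelta(hours=1, minutes=30) # 1.5 hours ago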
Python Pandas to_sql, how to create a table with a primary key? Question: I would like to create a MySQL table with Pandas' to_sql function which has a primary key (it is usually kind of good to have a primary key in a mysql table) as so: group_export.to_sql(con = db, name = config.table_group_export, if_exists = 'replace', flavor = 'mysql', index = False) but this creates a table without any primary key (or even without any index). The documentation mentions the parameter index_label, which could be used to create an index, but doesn't mention any option for primary keys. [Documentation](http://pandas.pydata.org/pandas- docs/dev/generated/pandas.DataFrame.to_sql.html) Answer: Disclaimer: this answer is more experimental than practical, but maybe worth mentioning. I found that the class `pandas.io.sql.SQLTable` has a named argument `keys`, and if you assign it the name of a field then this field becomes the primary key. Unfortunately you can't just pass this argument through from the `DataFrame.to_sql()` function. To use it you should: 1. create a `pandas.io.sql.SQLDatabase` instance import pandas as pd import sqlalchemy as sa engine = sa.create_engine('postgresql:///somedb') pandas_sql = pd.io.sql.pandasSQL_builder(engine, schema=None, flavor=None) 2. define a function analogous to `pandas.io.sql.SQLDatabase.to_sql()` but with an additional `**kwargs` argument which is passed to the `pandas.io.sql.SQLTable` object created inside it (I've just copied the original `to_sql()` method and added `**kwargs`): def to_sql_k(self, frame, name, if_exists='fail', index=True, index_label=None, schema=None, chunksize=None, dtype=None, **kwargs): if dtype is not None: from sqlalchemy.types import to_instance, TypeEngine for col, my_type in dtype.items(): if not isinstance(to_instance(my_type), TypeEngine): raise ValueError('The type of %s is not a SQLAlchemy ' 'type ' % col) table = pd.io.sql.SQLTable(name, self, frame=frame, index=index, if_exists=if_exists, index_label=index_label, schema=schema, dtype=dtype, **kwargs) table.create() table.insert(chunksize) 3. call this function with your `SQLDatabase` instance and the dataframe you want to save to_sql_k(pandas_sql, df2save, 'tmp', index=True, index_label='id', keys='id', if_exists='replace') And we get something like CREATE TABLE public.tmp ( id bigint NOT NULL DEFAULT nextval('tmp_id_seq'::regclass), ... ) in the database. P.S. You can of course monkey-patch `DataFrame`, `io.SQLDatabase` and `io.to_sql()` functions to use this workaround for convenience.
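Alternatively, if you control the schema, a less invasive workaround is to create the table (with its primary key) yourself and let `to_sql` only append rows into it. A sketch reusing the question's `db` connection; the table name and column definitions are placeholders you would adapt to your frame: cur = db.cursor() cur.execute(""" CREATE TABLE IF NOT EXISTS group_export ( id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255), value DOUBLE ) """) db.commit() # if_exists='append' keeps the existing table (and its primary key) intact. group_export.to_sql(con=db, name='group_export', if_exists='append', flavor='mysql', index=False)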
Type error in python program Question: I always get a Type Error when I run the following python code (abc.py) as follows: ./abc.py activatelink alphabeta Type Error: ['alphabeta'] My code: #!/usr/bin/python import urllib2 from urllib2 import URLError from urllib2 import HTTPError import requests import urllib import json import time import os import sys import hashlib def activate_user(link): print invoke_rest('GET', link) def invoke_rest(request_type, rest_url, payload, headers): try: api_url = rest_url if request_type == 'GET': r = requests.get(api_url) to_ret = {'code':r.status_code, 'reply':r.text} return to_ret elif request_type == 'POST': r = requests.post(api_url, data=payload, headers=headers) to_ret = {'code':r.status_code, 'reply':r.text} return to_ret else: return "Invalid request type ", request_type except Exception, e: return "Exception:", e, " in getting the API call" def help(): print ('Usage: %s { activate | help }' % os.path.basename(sys.argv[0])) if __name__ == '__main__': actions = {'activatelink': activate_user, 'help': help} try: action = str(sys.argv[1]) except IndexError: print "IndexError: ", sys.argv[1] action = 'help' args = sys.argv[2:] try: actions[action](*args) except (KeyError): print "Key Error:", args help() except (TypeError): print "Type Error:", args help() Am I doing anything wrong? I added some other functions other than activatelink, which work fine; can anyone point out what's wrong here? Answer: Your `invoke_rest()` function takes _four_ arguments: def invoke_rest(request_type, rest_url, payload, headers): but you pass in just the two: print invoke_rest('GET', link) That raises a `TypeError` exception: >>> def invoke_rest(request_type, rest_url, payload, headers): ... pass ... >>> invoke_rest('GET', 'alphabeta') Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: invoke_rest() takes exactly 4 arguments (2 given) Perhaps you wanted those two extra arguments (`payload` and `headers`) to be optional. If so, make them keyword arguments and set their default value to `None`: def invoke_rest(request_type, rest_url, payload=None, headers=None): which is fine by the `requests` library.
Issue with large files in Python 2.7 Question: I am currently experiencing an issue while reading big files with Python 2.7 [GCC 4.9] on Ubuntu 14.04 LTS, 32-bit. I read other posts on the same topic, such as [Reading a large file in python](http://stackoverflow.com/questions/19796402/reading-a-large-file-in- python) , and tried to follow their advice, but I still obtain MemoryErrors. The file I am attempting to read is not that big (~425MB), so first I tried a naive block of code like: data = [] isFirstLine = True lineNumber = 0 print "Reading input file \"" + sys.argv[1] + "\"..." with open(sys.argv[1], 'r') as fp : for x in fp : print "Now reading line #" + str(lineNumber) + "..." if isFirstLine : keys = [ y.replace('\"', '') for y in x.rstrip().split(',') ] isFirstLine = False else : data.append( x.rstrip().split(',') ) lineNumber += 1 The code above crashes around line #3202 (of 3228), with output: Now reading line #3200... Now reading line #3201... Now reading line #3202... Segmentation fault (core dumped) I tried invoking `gc.collect()` after reading every line, but I got the same error (and the code became slower). Then, following some indications I found here on StackOverflow, I tried `numpy.loadtxt()`: data = numpy.loadtxt(sys.argv[1], skiprows=1, delimiter=',') This time, I got a slightly more verbose error: Traceback (most recent call last): File "plot-memory-efficient.py", line 360, in <module> if __name__ == "__main__" : main() File "plot-memory-efficient.py", line 40, in main data = numpy.loadtxt(sys.argv[1], skiprows=1, delimiter=',') File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 856, in loadtxt X = np.array(X, dtype) MemoryError So, I am under the impression that something is not right. What am I missing? Thanks in advance for your help! **UPDATE** Following hd1's answer below, I tried the `csv` module, and it worked. However, I think there's something important that I might have overlooked: I was parsing each line, and I was actually storing the values as strings. Using `csv` like this still causes some errors: with open(sys.argv[1], 'r') as fp : reader = csv.reader(fp) # get the header keys = reader.next() for line in reader: print "Now reading line #" + str(lineNumber) + "..." data.append( line ) lineNumber += 1 But storing the values as `float` solves the issue! with open(sys.argv[1], 'r') as fp : reader = csv.reader(fp) # get the header keys = reader.next() for line in reader: print "Now reading line #" + str(lineNumber) + "..." floatLine = [float(x) for x in line] data.append( floatLine ) lineNumber += 1 So, another issue might be connected with the data structures. Answer: numpy's loadtxt method is [known to be memory- inefficient](http://stackoverflow.com/a/26562070/783412). That may address your first problem. Per the second, why not use the [csv module](https://docs.python.org/2/library/csv.html): data = [] isFirstLine = True lineNumber = 0 print "Reading input file \"" + sys.argv[1] + "\"..." with open(sys.argv[1], 'r') as fp : reader = csv.reader(fp) reader.next() for line in reader: pass # line is an array of comma-delimited fields in the file
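As the question's update suggests, converting each row to floats while streaming can also keep memory down, since the parsed numbers replace many small per-cell strings. A sketch combining both ideas: import csv import sys data = [] with open(sys.argv[1], 'r') as fp: reader = csv.reader(fp) keys = reader.next() # header row for line in reader: data.append([float(x) for x in line])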
Importing Python libraries in Openshift cron jobs Question: I have a script in .openshift/cron/daily that looks like this #!/usr/bin/python import sys import os sys.path.append(os.environ['OPENSHIFT_REPO_DIR']) import EmilyBlogModel EmilyBlogModel.Poll() EmilyBlogModel.py is in $OPENSHIFT_REPO_DIR However, when the cron job runs, I get an ImportError No module named EmilyBlogModel Why isn't this working? Answer: Try printing `sys.path` (and `os.environ['OPENSHIFT_REPO_DIR']` itself) just before the import statement and checking the cron job's output. Cron jobs don't necessarily run with the same environment as your interactive shell or web process, so `OPENSHIFT_REPO_DIR` may be unset or may be pointing to a directory that doesn't actually contain EmilyBlogModel.py.
Update label's text when pressing a button in Kivy for Python Question: Here is my code: I want to make a game where the main_label changes text when you press a button but I've looked everywhere for a week and still don't understand how to do it. I looked on Kivy's website but I don't understand. As you can see I'm new to kivy and not very experienced from kivy.app import App from kivy.uix.button import Button from kivy.uix.label import Label from kivy.uix.floatlayout import FloatLayout from kivy.clock import Clock energy = 100 hours = 4 class app1(App): def build(self): self.f = FloatLayout() #Labels self.energy_label = Label(text = "Energy = " + str(energy), size_hint=(.1, .15),pos_hint={'x':.05, 'y':.9}) self.time_label = Label(text = "Hours = " + str(hours), size_hint=(.1, .15),pos_hint={'x':.9, 'y':.9}) self.name_label = Label(text = "Game", size_hint=(.1, .15),pos_hint={'x':.45, 'y':.9}) self.main_label = Label(text = "Default_text", size_hint=(1, .55),pos_hint={'x':0, 'y':.35}) #Main Buttons self.inventory_button = Button(text = "Inventory", size_hint=(.3, .1),pos_hint={'x':.65, 'y':.2}) self.help_button = Button(text = "Help", size_hint=(.3, .1),pos_hint={'x':.65, 'y':.1}) self.craft_button = Button(text = "Craft", size_hint=(.3, .1),pos_hint={'x':.05, 'y':.1}) self.food_button = Button(text = "Food", size_hint=(.3, .1),pos_hint={'x':.35, 'y':.2}) self.go_button = Button(text = "Go", size_hint=(.3, .1),pos_hint={'x':.35, 'y':.1}) self.walk_button = Button(text = "Walk", size_hint=(.3, .1),pos_hint={'x':.05, 'y':.2}) def update(self, *args): self.main_widget.text = str(self.current_text) self.f.add_widget(self.energy_label) self.f.add_widget(self.main_label) self.f.add_widget(self.time_label) self.f.add_widget(self.inventory_button) self.f.add_widget(self.help_button) self.f.add_widget(self.craft_button) self.f.add_widget(self.food_button) self.f.add_widget(self.go_button) self.f.add_widget(self.walk_button) self.f.add_widget(self.name_label) self.current_text = "Default" Clock.schedule_interval(update, 1) return self.f def update_label(input): input = self.current_text help_button.bind(on_press = update_label("success!")) if __name__=="__main__": app1().run() **How can I update my code so that by pressing the help_button, main_label changes its text ?** Thank you for your help. Answer: There are a few things in your code that can be improved. **Improvement: 1** Build the GUI in a root widget class and have build() simply return an instance of it (or set self.root), rather than assembling everything inside build() itself: def build(self): return Hello() #That's what is done here **Improvement: 2** Buttons accept on_press/on_release callbacks directly in the constructor: self.help_button = Button(text = "Help", size_hint=(.3, .1),pos_hint={'x':.65, 'y':.1},on_press = self.update) **Improvement: 3** When help_button is pressed, the update method is called, which changes the text of main_label: def update(self,event): self.main_label.text = "Changed to change" Here is your full improved code from kivy.app import App from kivy.uix.button import Button from kivy.uix.label import Label from kivy.uix.floatlayout import FloatLayout from kivy.clock import Clock energy = 100 hours = 4 class Hello(FloatLayout): def __init__(self,**kwargs): super(Hello,self).__init__(**kwargs) self.energy_label = Label(text = "Energy = " + str(energy), size_hint=(.1, .15),pos_hint={'x':.05, 'y':.9}) self.time_label = Label(text = "Hours = " + str(hours), size_hint=(.1, .15),pos_hint={'x':.9, 'y':.9}) self.name_label = Label(text = "Game", size_hint=(.1, .15),pos_hint={'x':.45, 'y':.9}) self.main_label = Label(text = "Default_text", size_hint=(1, .55),pos_hint={'x':0, 'y':.35}) #Main Buttons self.inventory_button = Button(text = "Inventory", size_hint=(.3, .1),pos_hint={'x':.65, 'y':.2}) self.help_button = Button(text = "Help", size_hint=(.3, .1),pos_hint={'x':.65, 'y':.1},on_press = self.update) self.craft_button = Button(text = "Craft", size_hint=(.3, .1),pos_hint={'x':.05, 'y':.1}) self.food_button = Button(text = "Food", size_hint=(.3, .1),pos_hint={'x':.35, 'y':.2}) self.go_button = Button(text = "Go", size_hint=(.3, .1),pos_hint={'x':.35, 'y':.1}) self.walk_button = Button(text = "Walk", size_hint=(.3, .1),pos_hint={'x':.05, 'y':.2}) self.add_widget(self.energy_label) self.add_widget(self.main_label) self.add_widget(self.time_label) self.add_widget(self.inventory_button) self.add_widget(self.help_button) self.add_widget(self.craft_button) self.add_widget(self.food_button) self.add_widget(self.go_button) self.add_widget(self.walk_button) self.add_widget(self.name_label) self.current_text = "Default" def update(self,event): self.main_label.text = "Changed to change" class app1(App): def build(self): return Hello() if __name__=="__main__": app1().run()
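If you later want several buttons to write different texts into main_label, you can bind one handler with `functools.partial` instead of writing a method per button. A small sketch of the idea (the `set_label` method is a hypothetical addition to the Hello class above): from functools import partial # Inside Hello.__init__, bind each button to the same handler # with its own text; the pressed button instance arrives last. self.help_button.bind(on_press=partial(self.set_label, 'You pressed Help')) self.go_button.bind(on_press=partial(self.set_label, 'You pressed Go')) # The handler itself: def set_label(self, new_text, instance): self.main_label.text = new_text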
Run python code on a folder of XMLs Question: I would like to be able to run the following code on a folder of XML files rather than a single one. I also do not want to change the `xmlfile = 'test.xml'` line 100+ times, once for each file. This is the example `elementTree` code that I found and am testing. from openpyxl import Workbook import xml.etree.ElementTree as ET xmlfile = 'test.xml' element_tree = ET.parse(xmlfile) root = element_tree.getroot() agreement = root.find(".//tag").text print (agreement) wb = Workbook() kevin = ["1", "2", "3"] # grab the active worksheet ws = wb.active # Data can be assigned directly to cells ws['A1'] = 42 # Rows can also be appended ws.append([agreement]) ws.append(kevin) # Save the file wb.save("sample.xlsx") Answer: Better to collect the file names first and then loop over them: import os import fnmatch import xml.etree.ElementTree as ET # os.getcwd() or any specific path you need path = os.path.join(os.getcwd(), 'open') xml_files = [os.path.join(dirpath, f) for dirpath, dirnames, files in os.walk(path) for f in fnmatch.filter(files, '*.xml')] for xml_file in xml_files: element_tree = ET.parse(xml_file) # do the per-file work here, e.g. extract values and ws.append(...) `os.walk` descends into subdirectories as well, and `fnmatch.filter` keeps only the `*.xml` files, so you never have to edit the file name by hand.
Map step that involves subprocess with pipe fails in PySpark Question: My goal is to read binary (gpg-encrypted) files on hdfs consisting of csv data. My approach -- following [this answer](http://stackoverflow.com/a/30765052/2708667) \-- has been to define a Python function to read and decrypt a gpg file, yielding each line, and to apply this function as a `flatMap` to a parallel list of files. Essentially, the Python function spawns a subprocess that reads the file using `hadoop` and pipes the result to `gpg` to decrypt. This works just fine when running Spark in local mode. However, running it distributed (`yarn-client`), a simple line count returns `0`, essentially because Python thinks the `stdout` pipe is always closed. The issue seems to be that the subprocess involves a pipe between two commands. When I remove the latter (just a line count of the encrypted file), the line count matches what I get on command line. I've tried this multiple ways, all with the same result. Here's the Python function: import subprocess as sp def read_gpg_file_on_hdfs(filename): # Method 1: p = sp.Popen('hadoop fs -cat {} | gpg -d'.format(filename), shell=True, stdout=sp.PIPE) # Method 2: p1 = sp.Popen(['hadoop', 'fs', '-cat', filename], stdout=sp.PIPE) p = sp.Popen(['gpg', '-d'], stdin=p1.stdout, stdout=sp.PIPE) p1.stdout.close() # Method 3: p = sp.Popen('gpg -d <(hadoop fs -cat {})'.format(filename), shell=True, stdout=sp.PIPE, stderr=sp.PIPE) for line in p.stdout: yield line.strip() And here's the Spark command: sc.parallelize(['/path/to/file.gpg']).flatMap(read_gpg_file_on_hdfs).count() Now I know that PySpark uses pipes to communicate with Spark, but I don't follow the details and I don't know if this would affect what I'm trying to do. My question is whether there is a way to accomplish what I'm trying to do. Note that I'm using Spark 1.2.1 distributed (the latest release from MapR). Also, I considered using `binaryFiles`, but this would fail for large gpg files, which I sometimes encounter. Thanks in advance! Answer: It turns out that the `gpg` command is actually the issue. Presumably it's related to the details of how subprocesses are launched in local mode versus distributed, but in local mode the `homedir` of `gpg` is set correctly. When launched in distributed mode, `homedir` was pointing to an incorrect directory and the second subprocess immediately failed. This error message didn't appear to be logged anywhere, so `stdout` was just returned as an empty string.
pcap file viewing library in python 3 Question: I'm looking at trying to read pcap files from various CTF events. Ideally, I would like something that can do the breakdown of information such as wireshark, but just being able to read the timestamp and return the packet as a bytestring of some kind would be welcome. The problem is that there is little or no python 3 support with all the commonly cited libraries: dpkt, pylibpcap, pcapy, etc. Does anyone know of a pcap library that works with python 3? Answer: to my knowledge, there are at least 2 packages that seem to work with Python 3: **`pure-pcapfile`** and **`dpkt`** : * `pure-pcapfile` is easy to install in python 3 using `pip`. It's very easy to use but still limited to decoding **Ethernet** and **IP** data. The rest is left to you. But it works right _out of the box_. * `dpkt` doesn't work right _out of the box_ and needs some manipulation first. However, it is way more complete than `pure-pcapfile` and can decode many protocols. If your packet embeds several layers of protocols, it will decode them automatically for you. The only problem is that you need to make a few corrections here and there to make it work (at the time of writing this comment). They are porting it to Python 3 and plan to have a Python 2 and 3 compatible version for version 2.0. Unfortunately, it's not there yet. ## pure-pcapfile the only one that I found working for Python 3 so far is pcapfile. You can find it at <https://pypi.python.org/pypi/pypcapfile/> or install it by doing `pip3 install pypcapfile`. There are just basic functionalities but it works very well for me and has been updated quite recently (at the time of writing this message): from pcapfile import savefile f = open('mypcapfile.pcap', 'rb') pcapfile = savefile.load_savefile(f, verbose=True) If everything goes well, you should see something like this: [+] attempting to load mypcapfile.pcap [+] found valid header [+] loaded 1234 packets [+] finished loading savefile. A few remarks now. I'm using Python 3.4.3. Note that `import pcapfile` by itself only gives you the package's top-level names, not the submodules, which is why the example imports `savefile` explicitly. Next, you have to explicitly open your file in binary mode by passing `'rb'` as the mode in the `open()` function; the documentation doesn't say this explicitly. The rest is like in the documentation: packet = pcapfile.packets[12] to access the packet number 12 (the 13th packet then, the first one being at 0). And you have basic functionalities like packet.timestamp to get a timestamp or packet.raw() to get raw data. The documentation mentions functions to do packet decoding of some standard formats like _Ethernet_ and _IP_. ## dpkt `dpkt` is not available for Python 3 so you need to do the following, assuming you have access to a command line. The code is available on <https://github.com/kbandla/dpkt.git> and you must download it first: git clone https://github.com/kbandla/dpkt.git cd dpkt git checkout --track origin/migrate_py3 git pull These 4 commands do the following: 1. clone (download) the code from its git repository on github 2. go into the newly created directory named `dpkt` 3. switch to the branch named `migrate_py3` which contains the Python 3 code. As you can see from the name of this branch, it's still experimental. So far it works for me. 4. (just in case) pull the latest code, then copy the directory named `dpkt` into your project or wherever Python 3 can find it.
Later on, in Python 3 here is what you have to do to get started: import dpkt f = open('mypcapfile.pcap', 'rb') will open your file. Don't forget the `'rb'` binary mode in Python 3 (the same thing as with `pure-pcapfile`). pcap = dpkt.pcap.Reader(f) will read and decode your file for ts, buf in pcap: eth = dpkt.ethernet.Ethernet(buf) print(eth) will, for example, decode Ethernet packets and print them. Then read the documentation on how to use `dpkt`. If your packets contain IP or TCP layers, then `dpkt.ethernet.Ethernet(buf)` will decode them as well. Also note that in the `for` loop, we have access to the timestamps in `ts`. You may want to iterate in a less constrained form, and doing as follows will help: (ts,buf) = next(pcap) eth = dpkt.ethernet.Ethernet(buf) where the first line gets the next tuple from the pcap file. When the file is exhausted, `next(pcap)` raises `StopIteration`.
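The `ts` values are POSIX timestamps, so they convert to readable datetimes easily — a short sketch following the same pattern as above: import datetime import dpkt with open('mypcapfile.pcap', 'rb') as f: pcap = dpkt.pcap.Reader(f) for ts, buf in pcap: # Print the capture time of each packet along with its size. print(datetime.datetime.utcfromtimestamp(ts), len(buf))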
Python and conflicting module names Question: It seems that if a file is called `io.py` and it imports `scipy.ndimage`, the latter somehow ends up failing to find its own submodule, also called `io`: $ echo "import scipy.ndimage" > io.py $ python io.py Traceback (most recent call last): File "io.py", line 1, in <module> import scipy.ndimage File "/usr/lib/python2.7/dist-packages/scipy/__init__.py", line 70, in <module> from numpy import show_config as show_numpy_config File "/usr/lib/python2.7/dist-packages/numpy/__init__.py", line 153, in <module> from . import add_newdocs File "/usr/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 13, in <module> from numpy.lib import add_newdoc File "/usr/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 22, in <module> from .npyio import * File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 4, in <module> from . import format File "/usr/lib/python2.7/dist-packages/numpy/lib/format.py", line 141, in <module> import io File "/tmp/rm_me/io.py", line 1, in <module> import scipy.ndimage File "/usr/lib/python2.7/dist-packages/scipy/ndimage/__init__.py", line 172, in <module> from .filters import * File "/usr/lib/python2.7/dist-packages/scipy/ndimage/filters.py", line 37, in <module> from scipy.misc import doccer File "/usr/lib/python2.7/dist-packages/scipy/misc/__init__.py", line 45, in <module> from .common import * File "/usr/lib/python2.7/dist-packages/scipy/misc/common.py", line 10, in <module> from numpy import exp, log, asarray, arange, newaxis, hstack, product, array, \ ImportError: cannot import name exp Is this a bug in `SciPy`, or am I using Python wrong? **Update** : I think less surprising behavior would be if `import mod2` in `mod1` resolved paths relative to `mod1` rather than relative to whomever imported `mod1`. Answer: The simple fix is to avoid naming your module `io`, because it's conflicting with a core library module name. It's not really a bug in numpy, but user error: just as we shouldn't use `list` as a variable name because it's shadowing the builtin `list` name, we shouldn't use `io` as a module name because it's shadowing the core library `io` module name. See this part: File "/usr/lib/python2.7/dist-packages/numpy/lib/format.py", line 141, in <module> import io Here `numpy` tries to import the [`io` module](https://docs.python.org/2/library/io.html). This is intended to find the stdlib `io` because of the absolute import - a numpy submodule would have been loaded with _relative_ import like you can see in the traceback where there is another line beginning `from .npyio import *`. Of course your own `io.py` module is found first, because in the current working directory is generally the first entry in `sys.path`. Whoops!
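A quick way to confirm this kind of shadowing is to check which file a module name actually resolves to: import io print(io.__file__) # stdlib: .../lib/python2.7/io.py(c) -- shadowed: /tmp/rm_me/io.py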
Abbreviation Reference for NLTK Parts of Speech Question: I'm using nltk to find the parts of speech for each word in a sentence. It returns abbreviations that I both can't fully intuit and can't find good documentation for. Running: import nltk sample = "There is no spoon." tokenized_words = nltk.word_tokenize(sample) tagged_words = nltk.pos_tag(tokenized_words) print tagged_words Returns: [('There', 'EX'), ('is', 'VBZ'), ('no', 'DT'), ('spoon', 'NN'), ('.', '.')] In the above example, I'm looking for what `DT`, `EX`, and the rest mean. The best I have so far is to search for mentions of the abbreviations of concern in [Natural Language Processing with Python](http://www.nltk.org/book/ch05.html), but there has to be something better. I also found a few literature-based resources, but I don't know which tagset nltk is using. Answer: The [link](http://www.nltk.org/book/ch05.html) that you have already mentioned has two different tagsets. > For tagset documentation, see `nltk.help.upenn_tagset()` and > `nltk.help.brown_tagset()`. In this particular example, these tags are from the [Penn Treebank tagset](http://ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html). You can also read about these tags by: nltk.help.upenn_tagset('DT') nltk.help.upenn_tagset('EX')
JSON in Python: encoding issue on OS X, no issue on Windows Question: I have a file `log.json` that contains one line: {"k":"caf\u00e9"} I run the following code on Windows 7 SP1 x64 Ultimate: import json a = json.load(open('log.json', 'r')) f = open('test.txt', 'w') f.write(a['k']) I don't have any issue. When I run the same code on Mac OS X 10.10 x64: Traceback (most recent call last): File "/Users/francky/Documents/workspace/test.py", line 4, in <module> f.write(a['k']) UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 3: ordinal not in range(128) How come it works fine on Windows but not on OS X? * * * The versions of the Python interpreter as well as of the JSON Python package are the same on the two OS: import json import sys print json.__version__ print(sys.version) returns on OS X: 2.0.9 2.7.6 (default, Sep 9 2014, 15:04:36) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] and on Windows: 2.0.9 2.7.6 (default, Nov 10 2013, 19:24:24) [MSC v.1500 64 bit (AMD64)] * * * The culprit was PyDev, which [used to use the Eclipse workspace or project setting "Text file encoding" and sets this as the Python default encoding](https://sw-brainwy.rhcloud.com/tracker/PyDev/315) (fixed in PyDev 3.4.0 and after), and to support some non-ASCII characters I had switched the Python file encoding to UTF-8: ![enter image description here](http://i.stack.imgur.com/qHKmw.png) which caused Python's `sys.getdefaultencoding()` to be `UTF-8`: ![enter image description here](http://i.stack.imgur.com/KHTNk.png) FYI: [Dangers of sys.setdefaultencoding('utf-8')](http://stackoverflow.com/q/28657010/395857) Answer: I get your OSX failure on Windows, and it _should_ fail because writing a Unicode string to a file requires an encoding. When you write Unicode strings to a file, Python 2 will implicitly convert them to byte strings using the default `ascii` codec, which fails for non-ASCII characters. Are you sure you are running Python 2.7? Python 3 gives no error. On Python 2, `io.open` is the equivalent of Python 3's built-in `open`: it accepts an explicit `encoding` argument and defaults to `locale.getpreferredencoding()`. Here's how to fix Python 2: import json import io data = r'{"k":"caf\u00e9"}' a = json.loads(data) with io.open('test.txt','w') as f: f.write(a['k']) You can optionally specify the exact encoding you want for the output as an additional parameter: with io.open('test.txt','w',encoding='utf8') as f:
How to index an array value in a MATLAB-Function in Simulink? Question: I'm using a matlab-function in simulink to call a python script that does some calculations from the input values. The python-script gives a string back to the matlab-function, which I split into an array. The split string always has to be a cell array with 6 variable strings: dataStringArray = '[[-5.01 0.09785429][-8.01 0.01284927]...' '10.0' '20.0' '80.0' '80.0' '50.0' To call functions like strsplit or the python-script itself within a specific m-file, I'm using the `coder.extrinsic('*')` method. Now I want to index a specific value, for example with `dataStringArray(3)` to get '20.0', and define it as an output value of the matlab-function, but this doesn't work! I tried to predefine the dataStringArray with `dataStringArray = cell(1,6);` but I always get the same 4 errors: Subscripting into an mxArray is not supported. Function 'MATLAB Function' (#23.1671.1689), line 42, column 24: "dataStringArray(3)" 2x Errors occurred during parsing of MATLAB function 'MATLAB Function' Error in port widths or dimensions. Output port 1 of 's_function_Matlab/MATLAB Function/constIn5' is a one dimensional vector with 1 elements. What am I doing wrong? **SAMPLE CODE** **The commented code behind the output definitions is what I need.** : function [dataArrayOutput, constOut1, constOut2, constOut3, constOut4, constOut5] = fcn(dataArrayInput, constIn1, constIn2, constIn3, constIn4, constIn5) coder.extrinsic('strsplit'); % Python-Script String Output pythonScriptOutputString = '[[-5.01 0.088068861]; [-4.96 0.0]]|10.0|20.0|80.0|80.0|50.0'; dataStringArray = strsplit(pythonScriptOutputString, '|'); % Outputs dataArrayOutput = dataArrayInput; % str2num(char((dataStringArray(1)))); constOut1 = constIn1; % str2double(dataStringArray(2)); constOut2 = constIn2; % str2double(dataStringArray(3)); constOut3 = constIn3; % str2double(dataStringArray(4)); constOut4 = constIn4; % str2double(dataStringArray(5)); constOut5 = constIn5; % str2double(dataStringArray(6)); **SOLUTION 1** Cell arrays are not supported in Matlab function blocks; only the native Simulink datatypes are possible. A workaround is to define the whole code as a normal function and execute it from the MATLAB-Function declared with extrinsic. It's important to initialize the output variables with a known type and size before executing the extrinsic function. **SOLUTION 2** Another solution is to use the `strfind` function, which gives you a double vector with the positions of the splitter character. With that, you can return just the range of character positions that you need. In this case, your whole code will be in the MATLAB-Function block. 
function [dataArrayOutput, constOut1, constOut2, constOut3, constOut4, constOut5] = fcn(dataArrayInput, constIn1, constIn2, constIn3, constIn4, constIn5) coder.extrinsic('strsplit', 'str2num'); % Python-Script String Output pythonScriptOutputString = '[[-5.01 0.088068861]; [-4.96 0.0]; [-1.01 7.088068861]]|10.0|20.0|80.0|80.0|50.0'; dataStringArray = strfind(pythonScriptOutputString,'|'); % preallocate dataArrayOutput = zeros(3, 2); constOut1 = 0; constOut2 = 0; constOut3 = 0; constOut4 = 0; constOut5 = 0; % Outputs dataArrayOutput = str2num(pythonScriptOutputString(1:dataStringArray(1)-1)); constOut1 = str2num(pythonScriptOutputString(dataStringArray(1)+1:dataStringArray(2)-1)); constOut2 = str2num(pythonScriptOutputString(dataStringArray(2)+1:dataStringArray(3)-1)); constOut3 = str2num(pythonScriptOutputString(dataStringArray(3)+1:dataStringArray(4)-1)); constOut4 = str2num(pythonScriptOutputString(dataStringArray(4)+1:dataStringArray(5)-1)); constOut5 = str2num(pythonScriptOutputString(dataStringArray(5)+1:end)); Answer: When using an extrinsic function, the data type returned is of `mxArray`, which you cannot index into as the error message suggests. To work around this problem, you first need to initialise the variable(s) of interest to cast them to the right data type (e.g. double). See [Working with mxArrays](http://uk.mathworks.com/help/simulink/ug/calling-matlab- functions.html#bq1h2z9-46) in the documentation for examples of how to do that. The second part of the error message is a dimension. Without seeing the code of the function, the Simulink model and how the inputs/outputs of the function are defined, it's difficult to tell what's going on, but you need to make sure you have the correct size and data type defined in the [Ports and Data manager](http://uk.mathworks.com/help/simulink/ug/matlab-function-block- editor.html#bqgwvsq-1).
Using sympy on strings Question: I have a file with some equations. I want to solve them using sympy. I can use open('problems.txt',mode='r') to open the file. But how to proceed with sympy? I'm getting following error > sympy.core.sympify.SympifyError: Sympify of expression 'could not parse > 'x+x+x-x = 18 + 4'' failed, because of exception being raised: SyntaxError: > invalid syntax (, line 1) I'm using Python 3.4.2 Answer: [parse_expr](http://docs.sympy.org/dev/modules/parsing.html#sympy.parsing.sympy_parser.parse_expr) should be able to get you going: import sympy from sympy.parsing.sympy_parser import ( parse_expr, standard_transformations, implicit_multiplication_application ) s = 'x+x+x-x = 18 + 4' def my_parse(s, transfm=None): lhs, rhs = s.split('=') if transfm is None: transfm = (standard_transformations + (implicit_multiplication_application,)) return sympy.Eq( parse_expr(lhs, transformations=transfm), parse_expr(rhs, transformations=transfm)) expr = my_parse(s) print(expr) output: 2*x == 22 using sympy version 0.7.6
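Once `parse_expr` has produced the `Eq` object, `sympy.solve` can finish the job, so the whole file can be processed line by line. A sketch assuming one equation per line in a file named problems.txt, reusing the `my_parse` helper defined above: import sympy with open('problems.txt', mode='r') as f: for line in f: line = line.strip() if not line: continue # skip blank lines eq = my_parse(line) print(line, '->', sympy.solve(eq)) # e.g. 2*x == 22 -> [11]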
Json to CSV using python and blender 2.74 Question: I have a project in which i have to convert a json file into a CSV file. The Json sample : { "P_Portfolio Group": { "depth": 1, "dataType": "PortfolioOverview", "levelId": "P_Portfolio Group", "path": [ { "label": "Portfolio Group", "levelId": "P_Portfolio Group" } ], "label": "Portfolio Group", "header": [ { "id": "Label", "label": "Security name", "type": "text", "contentType": "text" }, { "id": "SecurityValue", "label": "MioCHF", "type": "text", "contentType": "number" }, { "id": "SecurityValuePct", "label": "%", "type": "text", "contentType": "pct" } ], "data": [ { "dataValues": [ { "value": "Client1", "type": "text" }, { "value": 2068.73, "type": "number" }, { "value": 14.0584, "type": "pct" } ] }, { "dataValues": [ { "value": "Client2", "type": "text" }, { "value": 1511.9, "type": "number" }, { "value": 10.2744, "type": "pct" } ] }, { "dataValues": [ { "value": "Client3", "type": "text" }, { "value": 1354.74, "type": "number" }, { "value": 9.2064, "type": "pct" } ] }, { "dataValues": [ { "value": "Client4", "type": "text" }, { "value": 1225.78, "type": "number" }, { "value": 8.33, "type": "pct" } ] } ], "summary": [ { "value": "Total", "type": "text" }, { "value": 11954.07, "type": "number" }, { "value": 81.236, "type": "pct" } ] } } And i want o obtain something like: Client1,2068.73,14.0584 Client2,1511.9,10.2744 Client3,871.15,5.92 Client4,11954.07,81.236 Can you please give me a hint. import csv import json with open("C:\Users\SVC\Desktop\test.json") as file: x = json.load(file) f = csv.writer(open("C:\Users\SVC\Desktop\test.csv", "wb+")) for x in x: f.writerow(x["P_Portfolio Group"]["data"]["dataValues"]["value"]) but it doesn't work. Can you please give me a hint. Answer: import csv import json with open('C:\Users\SVC\Desktop\test.json') as json_file: portfolio_group = json.load(json_file) with open('C:\Users\SVC\Desktop\test.csv', 'w') as csv_file: csv_obj = csv.writer(csv_file) for data in portfolio_group['P_Portfolio Group']['data']: csv_obj.writerow([d['value'] for d in data['dataValues']]) This results in the following `C:\Users\SVC\Desktop\test.csv` content: Client1,2068.73,14.0584 Client2,1511.9,10.2744 Client3,1354.74,9.2064 Client4,1225.78,8.33
Python: [Errno 2] No such file or directory: Question: I have imported Python Project in Eclipse. When i run the script on mac machine i am getting error as below: `[Errno 2] No such file or directory: '/Users/noimac- mini4/Documents/workspace/MobileAutomationPy\\..\\..\\..\\..\\'` what could be possible cause of error here? Same project imported in windows machine and it is working fine but not in mac Answer: What I see here is a lot of _backslashes_ (`\`) in your path, which are used as path separators only on Windows platforms — that is why it works when the project is imported on a Windows machine. My guess would be that you have this path hard-coded inside a string somewhere. A better approach is to use [`os.path`](https://docs.python.org/2/library/os.path.html) (e.g. `os.path.join`) and `os.sep` when dealing with platform-independent paths.
How to show minor tick labels on log-scale with Matplotlib Question: Does anyone know how to show the labels of the minor ticks on a logarithmic scale with Python/Matplotlib? Thanks! Answer: You can use `plt.tick_params(axis='y', which='minor')` to set the minor ticks on and format them with the `matplotlib.ticker` `FormatStrFormatter`. For example, import numpy as np import matplotlib.pyplot as plt from matplotlib.ticker import FormatStrFormatter x = np.linspace(0,4,1000) y = np.exp(x) plt.plot(x, y) ax = plt.gca() ax.set_yscale('log') plt.tick_params(axis='y', which='minor') ax.yaxis.set_minor_formatter(FormatStrFormatter("%.1f")) plt.show() ![enter image description here](http://i.stack.imgur.com/Dkj8M.png)
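If labelling every minor tick is too crowded on a log axis, you can also restrict which subdivisions get minor ticks at all with a `LogLocator` — a sketch, keeping the formatter from above: from matplotlib.ticker import LogLocator, FormatStrFormatter # Only place minor ticks at 2x and 5x within each decade, then label them. ax.yaxis.set_minor_locator(LogLocator(base=10.0, subs=(2.0, 5.0))) ax.yaxis.set_minor_formatter(FormatStrFormatter("%.1f"))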
Python argv and cmd Question: I'm trying to make a Python program that can correct exams automatically — I have extra time and don't want to wait for my teacher to correct them manually... Anyway, when I use Python argv like this: import sys def hello(a): print(a) a = sys.argv[1:] hello(a) and I want to insert a list, I can no longer pass just one value because of the way argv works, and I can't know in advance how long the list will be because not all tasks are the same. I'm using subprocess.check_output to return the program output after my checker runs it in a cmd window... Does someone know a better way to approach correcting the programs without making the students replace their input with sys.argv (i.e., a better way to pass input to a separate Python program when you run it), or can you tell me how to fix the argv issue? Answer: You could use [Popen.communicate](https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate) instead of `check_output`: echo.py: print(input()) test.py: from subprocess import Popen, PIPE p = Popen(['python3', 'echo.py'], stdout=PIPE, stdin=PIPE, stderr=PIPE) out, err = p.communicate(input="hello!".encode()) assert out.decode().strip() == "hello!"
only length-1 arrays can be converted to Python scalars using int function Question: I currently have an array in Python which is an element of a dictionary and, in the Python console, looks like this: TUMEC['spans'][0] array([0. , 0.25, 0.5, 0.75, 1.]) I want to pass the whole array to a function, but it has to go through a simple math operation first, like: int(TUMEC['spans'][0]*100) but I got the error message saying: only length-1 arrays can be converted to Python scalars. So, I tried to `import numpy` and type: int(TUMEC['spans'][0]*100) but I still have the same issue. Does anybody know how I can get around this? Answer: You can convert it to a list and use a comprehension: [int(a*100) for a in list(TUMEC['spans'][0])]
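If the goal is an integer array rather than a Python list, NumPy can do this directly without a loop: import numpy as np arr = TUMEC['spans'][0] * 100 # element-wise multiply int_arr = arr.astype(int) # cast every element to int # array([ 0, 25, 50, 75, 100])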
IntelliJ/Webstorm not finding import reference Question: I have the following project structure: * root * src * scripts * main.js * foo.js Inside of my `main.js` file, I'm importing `foo.js` like so: import 'src/scripts/foo.js' When I click on the import statement above and go to `Navigate -> Declaration` I get a super helpful message that says `Cannot find declaration to go`. This makes it super frustrating to work with because the editor basically has no idea which files import other files. This means I can't use the helpful features of the IDE like _search for references_ when moving a file, find usages, etc If I change the import statement to be relative, it works altogether: import './foo.js' However, we are striving for absolute imports, [a habit we picked up from writing python apps](http://stackoverflow.com/questions/14132789/python- relative-imports-for-the-billionth-time). I came across [Webstorm: "Cannot Resolve Directory"](http://stackoverflow.com/questions/21987225/webstorm-cannot- resolve-directory) and that gave me the idea to `mark` my `src` directory as a `Sources Root`. After doing that, I could change my import statement in `main.js` to import '/scripts/foo.js' //notice the leading forward slash Well, that's a little better because now I can at least `Navigate -> Declaration` but it's not ideal because now I can't mark any of the directories underneath `src` as a test, resource, etc. So why is IntelliJ/webstorm making this so difficult to do? Answer: > Because now I can't mark any of the directories underneath src as a test, > resource, etc. Yes, you can. It is not possible to mark subfolders of already marked folders in the Project View. But you can do this in Project Structure (`Ctrl` \+ `Shift` \+ `Alt` \+ `S`). Go to `Modules`, select your module and switch to the `Sources` tab. Now you can mark your `src` folder as `Sources` (which you already did) and mark `src/test` as `Tests` etc. According to the [Web Help](https://www.jetbrains.com/webstorm/help/configuring-folders-within-a- content-root.html), in WebStorm this setting is hidden in `Settings > Directories` instead of the Project Structure. Here's another solution using only the Project View: unmark your source folder, mark your test/resource subfolders and then mark the (parent) source folder again. I'm not sure, why it doesn't work the other way around.
Issue with UDF on a column of Vectors in PySpark DataFrame Question: I am having trouble using a UDF on a column of Vectors in PySpark which can be illustrated here: from pyspark import SparkContext from pyspark.sql import Row from pyspark.sql.types import DoubleType from pyspark.sql.functions import udf from pyspark.mllib.linalg import Vectors FeatureRow = Row('id', 'features') data = sc.parallelize([(0, Vectors.dense([9.7, 1.0, -3.2])), (1, Vectors.dense([2.25, -11.1, 123.2])), (2, Vectors.dense([-7.2, 1.0, -3.2]))]) df = data.map(lambda r: FeatureRow(*r)).toDF() vector_udf = udf(lambda vector: sum(vector), DoubleType()) df.withColumn('feature_sums', vector_udf(df.features)).first() This fails with the following stack trace: Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 31.0 failed 1 times, most recent failure: Lost task 5.0 in stage 31.0 (TID 95, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/Users/colin/src/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main process() File "/Users/colin/src/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process serializer.dump_stream(func(split_index, iterator), outfile) File "/Users/colin/src/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream vs = list(itertools.islice(iterator, batch)) File "/Users/colin/src/spark/python/pyspark/sql/functions.py", line 469, in <lambda> func = lambda _, it: map(lambda x: f(*x), it) File "/Users/colin/pokitdok/spark_mapper/spark_mapper/filters.py", line 143, in <lambda> TypeError: unsupported operand type(s) for +: 'int' and 'NoneType' Looking at what gets passed to the UDF, there seems to be something strange. The argument passed should be a Vector, but instead it gets passed a Python tuple like this: (1, None, None, [9.7, 1.0, -3.2]) Is it not possible to use UDFs on DataFrame columns of Vectors? **EDIT** So it was pointed out on the mailing list that this is a [known issue](https://issues.apache.org/jira/browse/SPARK-7902). Going to accept the answer from @hyim since it does provide a temporary workaround for dense vectors. Answer: In spark-sql, vectors are treated as (type, size, indices, value) tuples. You can use UDFs on vectors with PySpark; just modify the code to work with the values element of the vector type. vector_udf = udf(lambda vector: sum(vector[3]), DoubleType()) df.withColumn('feature_sums', vector_udf(df.features)).first() <https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/linalg/Vectors.scala>
Converting list to array with NumPy asarray method Question: I am trying to get the mean of a CSV column. I read the data from the CSV into a list of strings, then convert it to an array with numpy. This works perfectly when I plot some graphs from it, but when I calculate the mean I get errors on my data. If I use NumPy I get: TypeError: cannot perform reduce with flexible type If I use the statistics library I get: TypeError: can't convert type 'string' to numerator/denominator If I check my array with the 'type' command in IPython, I see that it is of numpy.ndarray type. What's wrong with my array? Can you explain why converting with numpy.asarray works perfectly for matplotlib but gives the wrong type for other operations? import csv import numpy as np import statistics as stat life_exp=[] with open('country.csv') as csvfile: datareader = csv.reader(csvfile) for row in datareader: if datareader.line_num!=1: life_exp.append(row[1]) array_life_exp = np.asarray(life_exp) print(stat.mean(array_life_exp)) print(np.mean(array_life_exp)) Answer: Try this: from pandas import read_csv data = read_csv('country.csv') print(data.iloc[:,1].mean()) This code will convert your csv to pandas dataframe with automatic type conversion and print mean of the second column.
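The underlying problem is that `life_exp` contains strings, so `np.asarray` builds an array of string dtype; matplotlib manages to coerce those values when plotting (as you observed), but reductions like `mean` need a numeric dtype. You can also fix it at conversion time without pandas: import numpy as np # Tell NumPy to parse the strings as floats while building the array. array_life_exp = np.asarray(life_exp, dtype=float) print(array_life_exp.mean())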
Extract substrings in python Question: I want to parse a string to extract all the substrings in curly braces: 'The value of x is {x}, and the list is {y} of len {}' should produce: (x, y) Then I want to format the string to print the initial string with the values: str.format('The value of x is {x}, and the list is {y} of len {}', x, y, len(y)) How can I do that? Example usage: def somefunc(): x = 123 y = ['a', 'b'] MyFormat('The value of x is {x}, and the list is {y} of len {}',len(y)) output: The value of x is 123, and the list is ['a', 'b'] of len 2 Answer: You can use [string.Formatter.parse](https://docs.python.org/2/library/string.html#string.Formatter.parse): > Loop over the format_string and return an iterable of tuples (literal_text, > field_name, format_spec, conversion). This is used by vformat() to break the > string into either literal text, or replacement fields. > > The values in the tuple conceptually represent a span of literal text > followed by a single replacement field. If there is no literal text (which > can happen if two replacement fields occur consecutively), then literal_text > will be a zero-length string. If there is no replacement field, then the > values of field_name, format_spec and conversion will be None. from string import Formatter s = 'The value of x is {x}, and the list is {y} of len {}' print([t[1] for t in Formatter().parse(s) if t[1]]) ['x', 'y'] Not sure how that really helps what you are trying to do as you can just pass x and y to str.format in your function or use **locals: def somefunc(): x = 123 y = ['a', 'b'] print('The value of x is {x}, and the list is {y} of len {}'.format(len(y),**locals())) If you wanted to print the named args you could add the Formatter output: def somefunc(): x = 123 y = ['a', 'b'] print("The named args are {}".format( [t[1] for t in Formatter().parse(s) if t[1]])) print('The value of x is {x}, and the list is {y} of len {}'.format(len(y), **locals())) Which would output: The named args are ['x', 'y'] The value of x is 123, and the list is ['a', 'b'] of len 2
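For the `MyFormat` in the usage example — picking `x` and `y` up from the caller automatically — one fragile option is to read the caller's local variables via the `inspect` module. This is only a sketch; relying on frame introspection is generally discouraged outside debugging: import inspect def MyFormat(fmt, *args): # f_back is the caller's frame; its locals supply the named fields. caller_locals = inspect.currentframe().f_back.f_locals print(fmt.format(*args, **caller_locals))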