Scatter plots Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot. Generate random data using np.random.randn. Style the markers (color, size, shape, alpha) appropriately. Include an x and y label and title.
import numpy as np
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 8))
plt.scatter(np.random.randn(100), np.random.randn(100), s=50, c='b', marker='d', alpha=.7)
plt.xlabel('x-coordinate')
plt.ylabel('y-coordinate')
plt.title('100 Random Points')
assignments/assignment04/MatplotlibExercises.ipynb
joshnsolomon/phys202-2015-work
mit
Histogram Learn how to use Matplotlib's plt.hist function to make a 1d histogram. Generate random data using np.random.randn. Figure out how to set the number of histogram bins and other style options. Include an x and y label and title.
plt.figure(figsize=(10, 8))
p = plt.hist(np.random.randn(100000), bins=50, color='g')
plt.xlabel('value')
plt.ylabel('frequency')
plt.title('Distribution of 100000 Random Points with mean of 0 and variance of 1')
assignments/assignment04/MatplotlibExercises.ipynb
joshnsolomon/phys202-2015-work
mit
At first you might think there is no need for an else clause in try/except; after all, in the code below after_call() only runs if dangerous_call() does not raise an exception:
# try:
#     dangerous_call()
#     after_call()
# except OSError:
#     log('OSError...')
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
However, after_call() should not be in the try block. To be clear and correct, the try block should contain only the statements that may raise the expected exception, so it is better written like this:
# try:
#     dangerous_call()
# except OSError:
#     log('OSError...')
# else:
#     after_call()
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
Now it is clear that the try block exists to catch exceptions raised by dangerous_call(). In Python, try/except is used not only for error handling but also for control flow, and the official glossary defines two acronyms for this:

EAFP: easier to ask for forgiveness than permission. A common Python coding style: assume the key or attribute exists, and catch the exception if the assumption proves false. This simple, fast style is characterized by many try and except statements. Its opposite, common in many other languages such as C, is LBYL.

LBYL: look before you leap. This style explicitly tests preconditions before calling a function or looking up an attribute or key. It is the opposite of EAFP and is characterized by many if statements. In a multithreaded environment, LBYL can introduce a race condition between the "checking" and the "acting": for the code if key in mapping: return mapping[key], if another thread removes the key from the mapping after the test but before the lookup, the code fails. The problem can be solved with locks or by using EAFP. (A small sketch contrasting the two styles appears after the file-opening example below.)

If you choose the EAFP style, it pays to understand the else clause well and use it sensibly in try/except statements.

Context managers and with blocks. Context manager objects exist to manage the with statement, just as iterators exist to manage the for statement. The with statement was designed to simplify the try/finally pattern. The context manager protocol consists of the __enter__ and __exit__ methods: __enter__ is invoked when the with block starts and __exit__ when it ends. The most common example is opening a file:
with open('with.ipynb') as fp:
    src = fp.read(60)

len(src)
fp
fp.closed, fp.encoding
# fp is still usable here, but no I/O is possible,
# because TextIOWrapper.__exit__ closed the file at the end of the with block
fp.read(60)
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
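To make the EAFP/LBYL contrast above concrete, here is a minimal sketch; the dict, key and default value are hypothetical, not from the book's code:

mapping = {'spam': 42}

# LBYL: check first, then act. This is racy if another thread can mutate
# `mapping` between the membership test and the lookup.
if 'spam' in mapping:
    value = mapping['spam']
else:
    value = 0

# EAFP: just act, and handle the failure if the assumption was wrong.
try:
    value = mapping['spam']
except KeyError:
    value = 0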
The as clause of with is optional; for open it is needed, so that we get a reference to the file object. Some context managers, however, return None, because they have nothing useful to give the user. The following is a contrived context manager that highlights the difference between the context manager object itself and the object returned by its __enter__ method:
class LookingGlass:

    def __enter__(self):  # __enter__ takes only the self argument
        import sys
        self.original_write = sys.stdout.write  # save the original for later use
        sys.stdout.write = self.reverse_write   # monkey-patch stdout with our own method
        return 'JABBERWOCKY'  # this string is bound to the target of the with statement's as clause

    def reverse_write(self, text):  # replaces sys.stdout.write, reversing text
        self.original_write(text[::-1])

    # Normally the three arguments are None, None, None; if an exception occurred
    # they carry the exception information.
    def __exit__(self, exc_type, exc_value, traceback):
        import sys  # re-importing is cheap: Python caches imported modules
        sys.stdout.write = self.original_write  # restore the original sys.stdout.write
        if exc_type is ZeroDivisionError:  # on division by zero, print a message
            print('Please DO NOT divide by zero')
            return True  # returning True tells the interpreter the exception was handled
        # if __exit__ returns None, or anything other than True,
        # any exception raised in the with block propagates


with LookingGlass() as what:
    print('Alice, Kitty and Snowdrop')  # the printed output is reversed
    print(what)

# After the with block, the value returned by __enter__, stored in `what`, is 'JABBERWOCKY'
what
print('Back to normal')  # output is no longer reversed
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
In real-world code, a program that takes over standard output may want to replace sys.stdout with another file-like object for a while and then switch back to the original; the contextlib.redirect_stdout context manager does exactly that (a minimal sketch of it appears after the console transcript below). When the interpreter calls __enter__, it passes no arguments beyond the implicit self. The three arguments passed to __exit__ are: exc_type, the exception class (e.g. ZeroDivisionError); exc_value, the exception instance (arguments passed to the exception constructor, such as the error message, can sometimes be read from exc_value.args); and traceback, a traceback object. Here is how a context manager works when driven by hand:
# In [2]: manager = LookingGlass()
#    ...: manager
# Out[2]: <__main__.LookingGlass at 0x7f586d4aa1d0>

# In [3]: monster = manager.__enter__()

# In [4]: monster == 'JABBERWOCKY'
# Out[4]: eurT

# In [5]: monster
# Out[5]: 'YKCOWREBBAJ'

# In [6]: manager.__exit__(None, None, None)

# In [7]: monster
# Out[7]: 'JABBERWOCKY'
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
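As mentioned above, the standard-library contextlib.redirect_stdout does the stdout swap for you. A minimal sketch, assuming Python 3 and using io.StringIO as the capture target (not part of the book's code):

import contextlib
import io

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    print('captured, not shown on the console')

print(buffer.getvalue())  # the text that was printed inside the block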
The transcript above was run at the interactive prompt, because Jupyter notebook output occasionally misbehaves for this example.

Utilities in the contextlib module. Before writing your own context manager class, have a look at contextlib in the Python standard library documentation. Besides the redirect_stdout function mentioned earlier, the module contains classes and other functions with a wider range of uses: closing builds a context manager from an object that provides a close() method but does not implement the __enter__/__exit__ protocol; suppress builds a context manager that temporarily ignores the given exceptions; @contextmanager is a decorator that turns a simple generator function into a context manager, saving you from writing a class to implement the protocol; ContextDecorator is a base class for defining class-based context managers that can also be used as function decorators, running the whole decorated function inside a managed context; ExitStack is a context manager that can enter an arbitrary number of other context managers, and when the with block ends it calls their __exit__ methods in LIFO order. Use ExitStack when you don't know in advance how many context managers a with block must enter, for example to open every file in an arbitrary list at once. (A short sketch of closing and suppress appears after the looking_glass example below.)

The most widely used of these utilities is the @contextmanager decorator, so it deserves the most attention. It is also somewhat tricky because, although it has nothing to do with iteration, it uses a yield statement, which leads to coroutines.

Using @contextmanager. The @contextmanager decorator reduces the boilerplate of creating a context manager: instead of writing a whole class with __enter__ and __exit__ methods, you just implement a generator with a single yield that produces whatever you want __enter__ to return. In a generator decorated with @contextmanager, yield splits the body of the function in two: everything before the yield runs when the with block starts (when the interpreter calls __enter__), and everything after the yield runs when the with block ends (when __exit__ is called).
import contextlib


@contextlib.contextmanager
def looking_glass():
    import sys
    original_write = sys.stdout.write

    def reverse_write(text):
        original_write(text[::-1])

    sys.stdout.write = reverse_write
    # Yield the value that will be bound to the target of the with statement's as clause.
    # The function pauses here while the body of the with block runs.
    yield 'JABBERWOCKY'
    # As soon as control leaves the with block, execution resumes after the yield.
    sys.stdout.write = original_write


with looking_glass() as what:
    print('Alice, Kitty and Snowdrop')

print(what)
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
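For completeness, here is a minimal sketch of the closing and suppress helpers listed above; the Resource class and the file name are illustrative only:

import contextlib
import os


class Resource:
    """A toy object that has close() but no __enter__/__exit__."""
    def close(self):
        print('resource closed')


# closing() adapts anything with a close() method into a context manager
with contextlib.closing(Resource()) as res:
    print('using', res)

# suppress() swallows the listed exception types inside the block
with contextlib.suppress(FileNotFoundError):
    os.remove('no-such-file.tmp')  # a missing file is silently ignored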
In fact, the contextlib.contextmanager decorator wraps the function in a class that implements the __enter__ and __exit__ methods. The __enter__ method of that class: calls the generator function and saves the generator object (call it gen); calls next(gen), running the body up to the yield; and returns the value produced by next(gen), so that it can be bound to the target variable of the with/as statement. When the with block terminates, the __exit__ method: checks whether an exception was passed in as exc_type and, if so, calls gen.throw(exception), which raises the exception inside the generator body on the line containing the yield; otherwise it calls next(gen), resuming the generator body after the yield. The example above actually has a serious flaw: if an exception is raised inside the with block, the Python interpreter catches it and re-raises it at the yield expression inside looking_glass. Since there is no error handling there, looking_glass terminates without ever restoring the original sys.stdout.write, leaving the system in an invalid state. The version below adds handling for ZeroDivisionError, making it more robust:
import contextlib


@contextlib.contextmanager
def looking_glass():
    import sys
    original_write = sys.stdout.write

    def reverse_write(text):
        original_write(text[::-1])

    sys.stdout.write = reverse_write
    msg = ''
    try:
        yield 'JABBERWOCKY'
    except ZeroDivisionError:
        msg = 'Please DO NOT divide by zero'
    finally:
        sys.stdout.write = original_write
        if msg:
            print(msg)
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
As noted earlier, an __exit__ method returns True to tell the interpreter that the exception has been handled, in which case the interpreter suppresses it; if __exit__ returns nothing explicit, the interpreter gets None and the exception propagates. With the @contextmanager decorator the default is inverted: the __exit__ method provided by the decorator assumes that any exception sent into the generator has been handled and should therefore be suppressed. If you do not want @contextmanager to suppress an exception, you must explicitly re-raise it inside the decorated function (a tiny re-raise sketch appears after the in-place rewriting example below). Exceptions are sent into the generator with its throw method, covered in the next chapter. The reason for this convention is that when context managers were created, generators could not return values, only yield them; that limitation has since been lifted, also covered in the next chapter. When using @contextmanager, putting the yield inside a try/finally (or a with) block is unavoidable, because you never know what the users of your context manager will do inside their with block. Besides the examples in the standard library, Martijn Pieters' in-place file rewriting context manager is a nice use of @contextmanager:
# import csv

# with inplace(csvfilename, 'r', newline='') as (infh, outfh):
#     reader = csv.reader(infh)
#     writer = csv.writer(outfh)
#     for row in reader:
#         row += ['new', 'columns']
#         writer.writerow(row)
content/fluent_python/15_with.ipynb
ghvn7777/ghvn7777.github.io
apache-2.0
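A minimal sketch of explicitly re-raising, so that a caught exception is not silently swallowed by the @contextmanager machinery; the manager name and the print-based "logging" are illustrative only:

import contextlib


@contextlib.contextmanager
def logged_errors():
    try:
        yield
    except Exception as exc:
        print('error inside with block:', exc)  # stand-in for real logging
        raise  # re-raise; if we caught it and did not re-raise, it would count as handled and be suppressed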
Append the output of the print command above to the .bashrc (Linux) or .bash_profile (Mac) file in the default user library if Numba does not work out of the box. Faster computations in Python. Python's dynamically typed nature makes it easy to quickly write code that works; however, this comes at the cost of execution speed, since every time an operation is executed on a variable its type has to be checked by the interpreter in order to dispatch the appropriate subroutine for that combination of variable type and operation. The speed of computations can be greatly increased by utilizing NumPy, where the data are stored in homogeneous C arrays inside array objects. NumPy also provides specialized commands to do calculations quickly on these arrays. In this example, we compare different implementations of a truncated Fourier sum, evaluated at a number of different positions. In astronomical applications these positions can be, for example, the times at which the magnitude of a star was measured. The sum has the form: $m_i (t_i) = \sum_{j=1}^{n} A_j \cdot \sin( 2 \cdot \pi \cdot f_j \cdot t_i +\phi_j )$, where $m_i$ is the $i$th magnitude of the star at time $t_i$, $n$ is the number of Fourier terms, $A_j$ is the amplitude, $f_j$ is the frequency, and $\phi_j$ is the phase of the $j$th Fourier term. Preparation. First, we import the packages that we will be using and prepare the data. We store the $t_i$, $A_j$, $f_j$ and $\phi_j$ parameters in NumPy arrays. We also define two functions: one computes the above sum using two nested for loops, and the other exploits NumPy array operations. Furthermore, we prepare for the use of Cython within the notebook by loading the Cython magic commands.
import numpy as np
from numba import jit, autojit
%load_ext Cython

times = np.arange(0, 70, 0.01)
print "The size of the time array:", times.size

freq = np.arange(0.1, 6.0, 0.1) * 1.7763123
freq[-20:] = freq[:20] * 0.744
amp = 0.05 / (freq**2) + 0.01
phi = freq

def fourier_sum_naive(times, freq, amp, phi):
    mags = np.zeros_like(times)
    for i in xrange(times.size):
        for j in xrange(freq.size):
            mags[i] += amp[j] * np.sin(2 * np.pi * freq[j] * times[i] + phi[j])
    return mags

def fourier_sum_numpy(times, freq, amp, phi):
    return np.sum(amp.T.reshape(-1, 1) *
                  np.sin(2 * np.pi * freq.T.reshape(-1, 1) * times.reshape(1, -1) +
                         phi.T.reshape(-1, 1)), axis=0)
sessions/09-numba_cython/Faster_computations.ipynb
pucdata/pythonclub
gpl-3.0
Numba We use the autojit function from Numba to prepare the translation of the Python functions to machine code. By default the functions are translated at runtime (in a Just-In-Time, JIT, manner), when the first call is made to the function. Numba produces optimized machine code, taking into account the type of input the function receives when it is called for the first time. Alternatively, the function can be defined as normal but preceded by the @jit decorator, to notify Numba which functions to optimize, as shown in the commented-out code below. Note that Numba can also be called eagerly, telling it the types of the expected arguments as well as the return type of the function; this can be used to fine-tune the behavior of the function (a small eager-signature sketch appears after the code below). See more in the Numba documentation: http://numba.pydata.org/numba-doc/dev/user/jit.html Note that functions can also be compiled ahead of time. For more information, see: http://numba.pydata.org/numba-doc/0.32.0/reference/aot-compilation.html
fourier_sum_naive_numba = autojit(fourier_sum_naive)
fourier_sum_numpy_numba = autojit(fourier_sum_numpy)

#@jit
#def fourier_sum_naive_numba(times, freq, amp, phi):
#    mags = np.zeros_like(times)
#    for i in xrange(times.size):
#        for j in xrange(freq.size):
#            mags[i] += amp[j] * np.sin(2 * np.pi * freq[j] * times[i] + phi[j])
#    return mags

#@jit()
#def fourier_sum_numpy_numba(times, freq, amp, phi):
#    return np.sum(amp.T.reshape(-1, 1) * np.sin(2 * np.pi * freq.T.reshape(-1, 1) * times.reshape(1, -1) + phi.T.reshape(-1, 1)), axis=0)
sessions/09-numba_cython/Faster_computations.ipynb
pucdata/pythonclub
gpl-3.0
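As mentioned above, Numba can also be compiled eagerly with an explicit signature. A minimal sketch, assuming double-precision 1-D array inputs as prepared earlier; this variant is not part of the original notebook:

import numpy as np
from numba import jit, float64

# Eager (signature-based) compilation: the function is compiled at definition time
# for float64 1-D array arguments, returning a float64 1-D array.
@jit(float64[:](float64[:], float64[:], float64[:], float64[:]), nopython=True)
def fourier_sum_naive_eager(times, freq, amp, phi):
    mags = np.zeros_like(times)
    for i in range(times.size):
        for j in range(freq.size):
            mags[i] += amp[j] * np.sin(2 * np.pi * freq[j] * times[i] + phi[j])
    return mags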
Cython Cython works differently from Numba: it produces C code that gets compiled before the function is called. NumPy arrays store their data internally in simple C arrays, which can be accessed with the Typed Memoryview feature of Cython. This allows operations on these arrays that completely bypass Python. We can also import C functions, as we could when writing pure C, by importing the corresponding header files. In the example implementation of the Fourier sum below, we define two functions. The first one handles interaction with Python, while the second one does the actual calculation. Note that we also pass in a temp array to provide a reserved workspace for the function, eliminating the need to create a NumPy array inside it. Important note Normal usage of Cython involves creating a separate .pyx file with the corresponding Cython code, which then gets compiled into an extension module (a .so shared object on Unix-like systems, .pyd on Windows) that can be imported into Python like a normal .py file. See the Cython documentation for more information: http://docs.cython.org/en/latest/src/quickstart/build.html
%%cython -a
cimport cython
import numpy as np
from libc.math cimport sin, M_PI

def fourier_sum_cython(times, freq, amp, phi, temp):
    return np.asarray(fourier_sum_purec(times, freq, amp, phi, temp))

@cython.boundscheck(False)
@cython.wraparound(False)
cdef fourier_sum_purec(double[:] times, double[:] freq, double[:] amp,
                       double[:] phi, double[:] temp):
    cdef int i, j, irange, jrange
    irange = len(times)
    jrange = len(freq)
    for i in xrange(irange):
        temp[i] = 0
        for j in xrange(jrange):
            temp[i] += amp[j] * sin(2 * M_PI * freq[j] * times[i] + phi[j])
    return temp
sessions/09-numba_cython/Faster_computations.ipynb
pucdata/pythonclub
gpl-3.0
We called the cython magic with the -a argument, which makes it produce an HTML summary of the translated code. White parts show code that doesn't interact with Python at all. Optimizing Cython code therefore means minimizing the "yellow" lines, making the code "whiter", that is, executing more of it in pure C. Cython + OpenMP We can parallelize the execution of the code using OpenMP in the parts that run completely outside of Python, but we need to release the Global Interpreter Lock (GIL) first. The prange command replaces the range (or xrange) command in the for loop we would like to execute in parallel. We can also call OpenMP functions, for example, to get the number of processor cores of the system. Note that the number of threads, the scheduler and the chunksize can have a large effect on the performance of the code. While optimizing, you should try different chunksizes (the default is 1).
%%cython --compile-args=-fopenmp --link-args=-fopenmp --force -a
cimport cython
cimport openmp
import numpy as np
from libc.math cimport sin, M_PI
from cython.parallel import parallel, prange

def fourier_sum_cython_omp(times, freq, amp, phi, temp):
    return np.asarray(fourier_sum_purec_omp(times, freq, amp, phi, temp))

@cython.boundscheck(False)
@cython.wraparound(False)
cdef fourier_sum_purec_omp(double[:] times, double[:] freq, double[:] amp,
                           double[:] phi, double[:] temp):
    cdef int i, j, irange, jrange
    irange = len(times)
    jrange = len(freq)
    #print openmp.omp_get_num_procs()
    with nogil, parallel(num_threads=4):
        for i in prange(irange, schedule='dynamic', chunksize=10):
            temp[i] = 0
            for j in xrange(jrange):
                temp[i] += amp[j] * sin(2 * M_PI * freq[j] * times[i] + phi[j])
    return temp
sessions/09-numba_cython/Faster_computations.ipynb
pucdata/pythonclub
gpl-3.0
Comparison Finally, we compare the execution times of the different implementations of the functions.
temp = np.zeros_like(times)

amps_naive = fourier_sum_naive(times, freq, amp, phi)
amps_numpy = fourier_sum_numpy(times, freq, amp, phi)
amps_numba1 = fourier_sum_naive_numba(times, freq, amp, phi)
amps_numba2 = fourier_sum_numpy_numba(times, freq, amp, phi)
amps_cython = fourier_sum_cython(times, freq, amp, phi, temp)
amps_cython_omp = fourier_sum_cython_omp(times, freq, amp, phi, temp)

%timeit -n 5 -r 5 fourier_sum_naive(times, freq, amp, phi)
%timeit -n 10 -r 10 fourier_sum_numpy(times, freq, amp, phi)
%timeit -n 10 -r 10 fourier_sum_naive_numba(times, freq, amp, phi)
%timeit -n 10 -r 10 fourier_sum_numpy_numba(times, freq, amp, phi)
%timeit -n 10 -r 10 fourier_sum_cython(times, freq, amp, phi, temp)
%timeit -n 10 -r 10 fourier_sum_cython_omp(times, freq, amp, phi, temp)

import matplotlib.pylab as plt

print amps_numpy - amps_cython

fig = plt.figure()
fig.set_size_inches(16, 12)
plt.plot(times, amps_naive, '-', lw=2.0)
#plt.plot(times, amps_numpy - 2, '-', lw=2.0)
plt.plot(times, amps_numba1 - 4, '-', lw=2.0)
#plt.plot(times, amps_numba2 - 6, '-', lw=2.0)
plt.plot(times, amps_cython - 8, '-', lw=2.0)
#plt.plot(times, amps_cython_omp - 10, '-', lw=2.0)
plt.show()
sessions/09-numba_cython/Faster_computations.ipynb
pucdata/pythonclub
gpl-3.0
A long trace
V('[23 18] average')
docs/4. Replacing Functions in the Dictionary.ipynb
calroc/joypy
gpl-3.0
Replacing sum and size with "compiled" versions. Both sum and size are catamorphisms: each converts a sequence to a single value.
J('[sum] help') J('[size] help')
docs/4. Replacing Functions in the Dictionary.ipynb
calroc/joypy
gpl-3.0
We can use "compiled" versions (they're not really compiled in this case, they're hand-written in Python) to speed up evaluation and make the trace more readable. The sum function is already in the library. It gets shadowed by the definition version above during initialize().
from joy.library import SimpleFunctionWrapper, primitives
from joy.utils.stack import iter_stack


@SimpleFunctionWrapper
def size(stack):
    '''Return the size of the sequence on the stack.'''
    sequence, stack = stack
    n = 0
    for _ in iter_stack(sequence):
        n += 1
    return n, stack


sum_ = next(p for p in primitives if p.name == 'sum')
docs/4. Replacing Functions in the Dictionary.ipynb
calroc/joypy
gpl-3.0
Now we replace the old versions in the dictionary with the new versions and re-evaluate the expression.
old_sum, D['sum'] = D['sum'], sum_ old_size, D['size'] = D['size'], size
docs/4. Replacing Functions in the Dictionary.ipynb
calroc/joypy
gpl-3.0
You can see that size and sum now execute in a single step.
V('[23 18] average')
docs/4. Replacing Functions in the Dictionary.ipynb
calroc/joypy
gpl-3.0
The effect is equivalent to:
function("wheather", "Canada?") function(1, 3+5)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
So why use apply at all?
apply(function, (), {"a": "35cm", "b": "12cm"})
apply(function, ("v",), {"b": "love"})
# apply(function, ( ,"v"), {"a":"hello"})  # SyntaxError: a tuple cannot start with a bare comma
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
What are "keyword arguments"? apply passes keyword arguments as a dictionary: the dictionary keys are the function's parameter names and the dictionary values are the actual argument values (think formal vs. actual parameters). Judging from the examples above, when passing only some of the arguments, only the trailing parameters can be supplied as keywords; the leading ones cannot be skipped. ???? A common use of apply is to pass constructor arguments from a subclass up to a base class, especially when the constructor takes many arguments. What are subclasses and base classes?
class Rectangle:
    def __init__(self, color="white", width=10, height=10):
        print "Create a ", color, self, "sized", width, "X", height

class RoundRectangle:
    def __init__(self, **kw):
        apply(Rectangle.__init__, (self,), kw)

rect = Rectangle(color="green", width=200, height=156)
rect = RoundRectangle(color="brown", width=20, height=15)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
The second class did not work as expected ???? Fix: the subclass has to take the parent class as a parameter, i.e. inherit from it!!!
class RoundRectangle(Rectangle):
    def __init__(self, **kw):
        apply(Rectangle.__init__, (self,), kw)

rect2 = RoundRectangle(color="blue", width=23, height=10)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
Use * to mark a tuple and ** to mark a dictionary. apply's first argument is the function, its second a tuple (the positional arguments), and its third a dictionary (the keyword arguments), so the notation above expresses the call best.
args = ("er",) kwargs = {"b":"haha"} function(*args, **kwargs) apply(function, args, kwargs)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
The two forms above are equivalent. Extending the same idea:
kw = {"color":"brown", "width":123, "height": 34} rect3 = RoundRectangle(**kw) rect4 = Rectangle(**kw) arg=("yellow", 45, 23) rect5 = Rectangle(*arg)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
1.2 import
import glob, os

modules = []
for module_file in glob.glob("*-plugin.py"):
    try:
        module_name, ext = os.path.splitext(os.path.basename(module_file))
        module = __import__(module_name)
        modules.append(module)
    except ImportError:
        pass  # ignore broken modules

for module in modules:
    module.hello()
# output of the loop above:
# example-plugin says hello

# example-plugin.py (the plugin module being imported) contains:
# def hello():
#     print "example-plugin says hello"

def getfunctionname(module_name, function_name):
    module = __import__(module_name)
    return getattr(module, function_name)

print repr(getfunctionname("dumbdbm", "open"))
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
3. The os module
import os
import string

def replace(file, search_for, replace_with):
    back = os.path.splitext(file)[0] + ".bak"
    temp = os.path.splitext(file)[0] + ".tmp"
    try:
        os.remove(temp)
    except os.error:
        pass
    fi = open(file)
    fo = open(temp, "w")
    for s in fi.readlines():
        fo.write(string.replace(s, search_for, replace_with))
    fi.close()
    fo.close()
    try:
        os.remove(back)
    except os.error:
        pass
    os.rename(file, back)
    os.rename(temp, file)

file = "samples/sample.txt"
replace(file, "hello", "tjena")
replace(file, "tjena", "hello")
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
os.path.splitext: splits off the file extension. os.remove: removes a file. Why does the program above call remove? (Presumably to delete any stale .tmp or .bak file left over from a previous run before creating new ones.)
def replace1(file, search_for, replace_with):
    back = os.path.splitext(file)[0] + ".bak"
    temp = os.path.splitext(file)[0] + ".tmp"
    try:
        os.remove(temp)
    except os.error:
        pass
    fi = open(file)
    fo = open(temp, "w")
    for s in fi.readlines():
        fo.write(string.replace(s, search_for, replace_with))
    fi.close()
    fo.close()
    try:
        os.remove(back)
    except os.error:
        pass
    os.rename(file, back)
    os.rename(temp, file)

replace1(file, "hello", "tjena")
replace1(file, "tjena", "hello")

doc = os.path.splitext(file)[0] + ".doc"

for file in os.listdir("samples"):
    print file

cwd = os.getcwd()
print 1, cwd
os.chdir("samples")
print 2, os.getcwd()
os.chdir(os.pardir)
print 3, os.getcwd()
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
5. The stat module
import stat import os, time st = os.stat("samples/sample.txt")
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
os.stat reads a file's attributes, which can then be interpreted in various ways with the stat module, depending on what stat provides (for example st.st_size, time.ctime(st.st_mtime), or stat.S_ISDIR(st.st_mode)). 6. The string module
import string

text = "Monty Python's Flying Circus"

print "upper", "=>", string.upper(text)
print "lower", "=>", string.lower(text)
print "split", "=>", string.split(text)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
split turns the string into a list.
print "join", "=>", string.join(string.split(text))
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
join is the inverse of split.
print "replace", "=>", string.replace(text, "Python", "Cplus")
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
replace's arguments are: 1. the whole string, 2. the substring to be replaced, 3. the replacement string.
print "find", "=>", string.find(text, "Python") print "find", "=>", string.find(text, "Python"), string.find(text, "Cplus") print text
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
The replace above did not affect the original text. With find, a successful search prints the position, and a failed one prints -1.
print "count", "=>", string.count(text,"n")
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
Like arithmetic operations, these functions come in unary and multi-argument flavours: upper, lower, split and join are unary (join's operand is a list). replace, find and count need arguments besides the text: replace needs two more, specifying the replacement relation; find needs the substring to look for; count needs the character to count. Note in particular that replace does not affect the original string object. (This seemed odd at first, but it is because Python strings are immutable: replace always returns a new string.)
print string.atoi("23") type(string.atoi("23")) int("234") type(int("234")) type(float("334")) float("334") string.atof("456")
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
7. The re module
import re

text = "The Attila the Hun Show"
m = re.match(".", text)
if m:
    print repr("."), "=>", repr(m.group(0))
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
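The cell above only matches a single arbitrary character. As a slightly richer, purely illustrative sketch of re, here is a search with a capturing group on the same text:

import re

text = "The Attila the Hun Show"

# search() scans the whole string; the parentheses capture one word after "the"
m = re.search(r"the (\w+)", text, re.IGNORECASE)
if m:
    print(m.group(0))  # 'The Attila' -- the whole match
    print(m.group(1))  # 'Attila'     -- the captured group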
8. The math and cmath modules
import math math.pi math.e print math.hypot(3,4) math.sqrt(25) import cmath print cmath.sqrt(-1)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
10. The operator module
import operator operator.add(3,5) seq = 1,5,7,9 reduce(operator.add,seq) reduce(operator.sub, seq) reduce(operator.mul, seq) float(reduce(operator.div, seq))
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
The four arithmetic functions in operator each operate on exactly two numbers, i.e. they take exactly two arguments. To run them over a whole sequence you need reduce, which combines the first two elements, treats the result as one number, combines it with the next element, and so on until the sequence is exhausted; for example, reduce(operator.add, (1, 5, 7, 9)) computes ((1 + 5) + 7) + 9 = 22. It feels a little like apply: the first argument is a function and the second is a sequence (the concrete arguments).
operator.concat("ert", "erui") operator.getitem(seq,1) operator.indexOf(seq, 5)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
getitem and indexOf are a pair of inverse operations: the former returns the value at a given position, the latter the position of a given value. Note that the o in indexOf is a capital O.
operator.sequenceIncludes(seq, 5)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
Tests whether the sequence contains a given value; the result is a boolean.
import UserList

def dump(data):
    print data, ":"
    print type(data), "=>",
    if operator.isCallable(data):
        print "is a CALLABLE data."
    if operator.isMappingType(data):
        print "is a MAP data."
    if operator.isNumberType(data):
        print "is a NUMBER data."
    if operator.isSequenceType(data):
        print "is a SEQUENCE data."

dump(0)
dump([3, 4, 5, 6])
dump("weioiuernj")
dump({"a": "155cm", "b": "187cm"})
dump(len)
dump(UserList)
dump(UserList.UserList)
dump(UserList.UserList())
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
15. The types module
import types

def check(object):
    if type(object) is types.IntType:
        print "INTEGER",
    if type(object) is types.FloatType:
        print "FLOAT",
    if type(object) is types.StringType:
        print "STRING",
    if type(object) is types.ClassType:
        print "CLASS",
    if type(object) is types.InstanceType:
        print "INSTANCE",
    print

check(0)
check(0.0)
check("picklecai")

class A:
    pass

check(A)
a = A()
check(a)
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
The types module clobbers the current exception state the first time it is imported. In other words, do not import it (or any module that imports it) inside an exception-handling block. 16. The gc module The gc module provides an interface to the built-in cyclic garbage collector. (A small sketch that actually triggers a collection appears after the Node example below.)
import gc

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def addchild(self, node):
        node.parent = self
        self.children.append(node)

    def __repr__(self):
        return "<Node %s at %x>" % (repr(self.name), id(self))

root = Node("monty")
root.addchild(Node("eric"))
root.addchild(Node("john"))
root.addchild(Node("michael"))

root.__init__("eric")
root.__repr__()
_src/exercise/day1.ipynb
picklecai/OMOOC2py
mit
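The cell above builds parent/child reference cycles but never calls gc itself. Here is a minimal, self-contained sketch (not from the original notes) of asking the cyclic collector to reclaim an unreachable cycle:

import gc

# Build a simple reference cycle between two dicts, drop the external
# references, and ask the collector how many unreachable objects it found.
a = {}
b = {'other': a}
a['other'] = b

del a, b
print(gc.collect())  # number of unreachable objects collected (the two dicts, plus any other pending garbage)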
Step 2: Specify the boolean function of a 2-input XOR. The logic is applied to the on-board pushbuttons and LED: pushbuttons PB0 and PB3 are set as inputs and LED LD2 is set as an output.
function = ['LD2 = PB3 ^ PB0']
boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Step 3: Instantiate and set up the boolean generator object. The logic function defined in the previous step is configured using the setup() method.
boolean_generator = logictools_olay.boolean_generator boolean_generator.setup(function)
boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Find the on-board pushbuttons and LEDs. Step 4: Run the boolean generator and verify its operation.
boolean_generator.run()
boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Verify the operation of the XOR function:

| PB0 | PB3 | LD2 |
|:---:|:---:|:---:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

Step 5: Stop the boolean generator
boolean_generator.stop()
boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Step 6: Re-run the entire boolean function generation in a single cell. Note: the boolean expression format can be a list or a dict. We used a list in the example above; we will now use a dict. <font color="DodgerBlue">Alternative format:</font> `function = {'XOR_gate': 'LD2 = PB3 ^ PB0'}`
from pynq.overlays.logictools import LogicToolsOverlay

logictools_olay = LogicToolsOverlay('logictools.bit')
boolean_generator = logictools_olay.boolean_generator

function = {'XOR_gate': 'LD2 = PB3 ^ PB0'}

boolean_generator.setup(function)
boolean_generator.run()
boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Stop the boolean generator
boolean_generator.stop()
boards/Pynq-Z2/logictools/notebooks/boolean_generator.ipynb
cathalmccabe/PYNQ
bsd-3-clause
Then let's create a blank Device (essentially an empty GDS cell with some special features)
D = Device('mydevice')
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Next let's add a custom polygon using lists of x points and y points. You can also add polygons pair-wise like [(x1,y1), (x2,y2), (x3,y3), ... ]. We'll also image the shape using the handy quickplot() function (imported here as qp())
xpts = (0,10,10, 0) ypts = (0, 0, 5, 3) poly1 = D.add_polygon( [xpts, ypts], layer = 0) qp(D) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
You can also create new geometry using the built-in geometry library:
T = pg.text('Hello!', layer = 1) A = pg.arc(radius = 25, width = 5, theta = 90, layer = 3) qp(T) # quickplot it! qp(A) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
We can easily add these new geometries to D, which currently contains our custom polygon. (For more details about references see below, or the tutorial called "Understanding References".)
text1 = D.add_ref(T) # Add the text we created as a reference arc1 = D.add_ref(A) # Add the arc we created qp(D) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Now that the geometry has been added to D, we can move and rotate everything however we want:
text1.movey(5) text1.movex(-20) arc1.rotate(-90) arc1.move([10,22.5]) poly1.ymax = 0 qp(D) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
We can also connect shapes together using their Ports, allowing us to snap shapes together like Legos. Let's add another arc and snap it to the end of the first arc:
arc2 = D.add_ref(A) # Add a second reference the arc we created earlier arc2.connect(port = 1, destination = arc1.ports[2]) qp(D) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
That's it for the very basics! Keep reading for a more detailed explanation of each of these, or see the other tutorials for topics such as using Groups, creating smooth Paths, and more. The basics of PHIDL This is a longer tutorial meant to explain the basics of PHIDL in a little more depth. Further explanation can be found in the other tutorials as well. PHIDL allows you to create complex designs from simple shapes, and can output the result as GDSII files. The basic element of PHIDL is the Device, which can be thought of as a blank area to which you can add polygon shapes. The polygon shapes can also have Ports on them--these allow you to snap shapes together like Lego blocks. You can either hand-design your own polygon shapes, or there is a large library of pre-existing shapes you can use as well. Creating a custom shape Let's start by trying to make a rectangle shape with ports on either end.
import numpy as np
from phidl import quickplot as qp
from phidl import Device
import phidl.geometry as pg

# First we create a blank device `R` (R can be thought of as a blank
# GDS cell with some special features). Note that when we
# make a Device, we usually assign it a variable name with a capital letter
R = Device('rect')

# Next, let's make a list of points representing the points of the rectangle
# for a given width and height
width = 10
height = 3
points = [(0, 0), (width, 0), (width, height), (0, height)]

# Now we turn these points into a polygon shape using add_polygon()
R.add_polygon(points)

# Let's use the built-in "quickplot" function to display the polygon we put in R
qp(R)
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Next, let's add Ports to the rectangle which will allow us to connect it to other shapes easily
# Ports are defined by their width, midpoint, and the direction (orientation) they're facing # They also must have a name -- this is usually a string or an integer R.add_port(name = 'myport1', midpoint = [0,height/2], width = height, orientation = 180) R.add_port(name = 'myport2', midpoint = [width,height/2], width = height, orientation = 0) # The ports will show up when we quickplot() our shape qp(R) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
We can check to see that our Device has ports in it using the print command:
print(R)
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Looks good! Library & combining shapes Since this Device is finished, let's create a new (blank) Device and add several shapes to it. Specifically, we will add an arc from the built-in geometry library and two copies of our rectangle Device. We'll then connect the rectangles to both ends of the arc. The arc() function is contained in the phidl.geometry library which, as you can see at the top of this example, is imported with the name pg. This process involves adding "references". These references allow you to create a Device shape once, then reuse it many times in other Devices.
# Create a new blank Device
E = Device('arc_with_rectangles')

# Also create an arc from the built-in "pg" library
A = pg.arc(width = 3)

# Add a "reference" of the arc to our blank Device
arc_ref = E.add_ref(A)

# Also add two references to our rectangle Device
rect_ref1 = E.add_ref(R)
rect_ref2 = E.add_ref(R)

# Move the shapes around a little
rect_ref1.move([-10,0])
rect_ref2.move([-5,10])

qp(E) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Now we can see we have added 3 shapes to our Device "E": two references to our rectangle Device, and one reference to the arc Device. We can also see that all the references have Ports on them, shown as the labels "myport1", "myport2", "1" and "2". Next, let's snap everything together like Lego blocks using the connect() command.
# First, we recall that when we created the references above we saved
# each one its own variable: arc_ref, rect_ref1, and rect_ref2
# We'll use these variables to control/move the reference shapes.

# First, let's move the arc so that it connects to our first rectangle.
# In this command, we tell the arc reference 2 things: (1) what port
# on the arc we want to connect, and (2) where it should go
arc_ref.connect(port = 1, destination = rect_ref1.ports['myport2'])
qp(E) # quickplot it!

# Then we want to move the second rectangle reference so that
# it connects to port 2 of the arc
rect_ref2.connect('myport1', arc_ref.ports[2])
qp(E) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Looks great! Going a level higher Now we've made a (somewhat) complicated bend-shape from a few simple shapes. But say we're not done yet -- we actually want to combine together 3 of these bend-shapes to make an even-more complicated shape. We could recreate the geometry 3 times and manually connect all the pieces, but since we already put it together once it will be smarter to just reuse it multiple times. We will start by abstracting this bend-shape. As shown in the quickplot, there are ports associated with each reference in our bend-shape Device E: "myport1", "myport2", "1", and "2". But when working with this bend-shape, all we really care about is the 2 ports at either end -- "myport1" from rect_ref1 and "myport2" from rect_ref2. It would be simpler if we didn't have to keep track of all of the other ports. First, let's look at something: let's see if our bend-shape Device E has any ports in it:
print(E)
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
It has no ports apparently! Why is that, when we clearly see ports in the quickplots above? The answer is that Device E itself doesn't have ports -- the references inside E do have ports, but we never actually added ports to E. Let's fix that now, adding a port at either end, setting the names to the integers 1 and 2.
# Rather than specifying the midpoint/width/orientation, we can instead # copy ports directly from the references since they're already in the right place E.add_port(name = 1, port = rect_ref1.ports['myport1']) E.add_port(name = 2, port = rect_ref2.ports['myport2']) qp(E) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
If we look at the quickplot above, we can see that there are now red-colored ports on both ends. Ports that are colored red are owned by the Device, ports that are colored blue-green are owned by objects inside the Device. This is good! Now if we want to use this bend-shape, we can interact with its ports named 1 and 2. Let's go ahead and try to string 3 of these bend-shapes together:
# Create a blank Device
D = Device('triple-bend')

# Add 3 references to our bend-shape Device `E`:
bend_ref1 = D.add_ref(E) # Using the function add_ref()
bend_ref2 = D << E       # Using the << operator which is identical to add_ref()
bend_ref3 = D << E

# Let's mirror one of them so it turns right instead of left
bend_ref2.mirror()

# Connect each one in a series
bend_ref2.connect(1, bend_ref1.ports[2])
bend_ref3.connect(1, bend_ref2.ports[2])

# Add ports so we can use this shape at an even higher-level
D.add_port(name = 1, port = bend_ref1.ports[1])
D.add_port(name = 2, port = bend_ref3.ports[2])

qp(D) # quickplot it!
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Saving as a GDSII file Saving the design as a GDS file is simple -- just specify the Device you'd like to save and run the write_gds() function:
D.write_gds('triple-bend.gds')
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Some useful notes about writing GDS files: The default unit is 1e-6 (micrometers aka microns), with a precision of 1e-9 (nanometer resolution) PHIDL will automatically handle naming of all the GDS cells to avoid name-collisions. Unless otherwise specified, the top-level GDS cell will be named "toplevel" All of these parameters can be modified using the appropriate arguments of write_gds():
D.write_gds(filename = 'triple-bend.gds', # Output GDS file name unit = 1e-6, # Base unit (1e-6 = microns) precision = 1e-9, # Precision / resolution (1e-9 = nanometers) auto_rename = True, # Automatically rename cells to avoid collisions max_cellname_length = 28, # Max length of cell names cellname = 'toplevel' # Name of output top-level cell )
docs/tutorials/quickstart.ipynb
amccaugh/phidl
mit
Load in the training set of data
from sklearn.datasets import fetch_20newsgroups twenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42) twenty_train.target_names
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Note: the target names are not in the same order as in the categories array. Count of documents:
len(twenty_train.data)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Show the first 8 lines of text from one of the documents, formatted with line breaks.
print("\n".join(twenty_train.data[0].split("\n")[:8]))
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Path to file on your machine
twenty_train.filenames[0]
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Show the target categories of the first 10 documents, as a list, and then show their names.
print(twenty_train.target[:10]) for t in twenty_train.target[:10]: print(twenty_train.target_names[t])
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Let's look at a document in the training data.
print("\n".join(twenty_train.data[0].split("\n")))
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Extracting features from text files So for machine learning to be used, text must be turned into numerical feature vectors. What is a feature vector? Each word is assigned an integer identifier. Each document is assigned an integer identifier. The results are stored in scipy.sparse matrices. Tokenizing text with scikit-learn Using CountVectorizer we load the training data into a sparse matrix. What is a sparse matrix? (A toy sketch appears after the next cell.)
from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() X_train_counts = count_vect.fit_transform(twenty_train.data) X_train_counts.shape X_train_counts.__class__
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
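To answer "what is a sparse matrix" concretely: it stores only the non-zero entries, which matters because most words do not occur in most documents. A small illustrative sketch on a made-up toy corpus (not the 20 newsgroups data):

from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ['the cat sat', 'the dog sat on the mat']
vect = CountVectorizer()
X = vect.fit_transform(toy_docs)

print(type(X))           # a scipy.sparse CSR matrix
print(X.shape)           # (2 documents, 6 distinct words)
print(X.nnz)             # number of stored (non-zero) entries
print(X.toarray())       # dense view; for real corpora this is mostly zeros
print(vect.vocabulary_)  # word -> integer identifier mapping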
Using a CountVectorizer method we can get the integer identifier of a word.
count_vect.vocabulary_.get(u'application')
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
With this identifier we can get the count of the word in a given document.
print("Word count for application in first document: {0} and last document: {1} ").format( X_train_counts[0, 5285], X_train_counts[2256, 5285]) count_vect.vocabulary_.get(u'subject') print("Word count for email in first document: {0} and last document: {1} ").format( X_train_counts[0, 31077], X_train_counts[2256, 31077]) count_vect.vocabulary_.get(u'to') print("Word count for email in first document: {0} and last document: {1} ").format( X_train_counts[0, 32493], X_train_counts[2256, 32493])
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
What are two problems with using a raw word count in a document? From occurrences to frequencies: $\text{Term Frequency tf} = \text{occurrences of each word} / \text{total number of words}$. tf-idf is "Term Frequency times Inverse Document Frequency", where (up to smoothing details) $\text{idf} = \log(\text{number of documents} / \text{number of documents containing the word})$. Calculating tf-idf:
from sklearn.feature_extraction.text import TfidfTransformer tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts) X_train_tfidf_2stage = tf_transformer.transform(X_train_counts) X_train_tfidf_2stage.shape
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
.fit(..) fits the estimator to the data; .transform(..) transforms the count matrix to tf-idf. It is possible to merge the fit and transform stages using .fit_transform(..). Calculate tf-idf:
tfidf_transformer = TfidfTransformer() X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts) X_train_tfidf.shape print("In first document tf-idf for application: {0} subject: {1} to: {2}").format( X_train_tfidf[0, 5285], X_train_tfidf[0, 31077], X_train_tfidf[0, 32493])
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Training a classifier So we now have features. We can train a classifier to try to predict the category of a post. First we will try the naïve Bayes classifier.
from sklearn.naive_bayes import MultinomialNB clf = MultinomialNB().fit(X_train_tfidf, twenty_train.target)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Here the fitted count_vect and tfidf_transformer are used to transform new documents, which are then classified with clf.
docs_new = ['God is love',
            'Heart attacks are common',
            'Disbelief in a proposition',
            'Disbelief in a proposition means that one does not believe it to be true',
            'OpenGL on the GPU is fast']

X_new_counts = count_vect.transform(docs_new)
X_new_tfidf = tfidf_transformer.transform(X_new_counts)

predicted = clf.predict(X_new_tfidf)

for doc, category in zip(docs_new, predicted):
    print('%r => %s' % (doc, twenty_train.target_names[category]))
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
We can see it gets some right but not all. Building a pipeline Here we can put all the stages together in a pipeline. The names 'vect', 'tfidf' and 'clf' are arbitrary.
from sklearn.pipeline import Pipeline

text_clf_bayes = Pipeline([('vect', CountVectorizer()),
                           ('tfidf', TfidfTransformer()),
                           ('clf', MultinomialNB()),
                           ])
text_clf_bayes_fit = text_clf_bayes.fit(twenty_train.data, twenty_train.target)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Evaluation
import numpy as np twenty_test = fetch_20newsgroups(subset='test', categories=categories, shuffle=True, random_state=42) docs_test = twenty_test.data predicted_bayes = text_clf_bayes_fit.predict(docs_test) np.mean(predicted_bayes == twenty_test.target)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Try a support vector machine instead
from sklearn.linear_model import SGDClassifier

text_clf_svm = Pipeline([('vect', CountVectorizer()),
                         ('tfidf', TfidfTransformer()),
                         ('clf', SGDClassifier(loss='hinge', penalty='l2',
                                               alpha=1e-3, n_iter=5, random_state=42)),
                         ])
text_clf_svm_fit = text_clf_svm.fit(twenty_train.data, twenty_train.target)
predicted_svm = text_clf_svm_fit.predict(docs_test)
np.mean(predicted_svm == twenty_test.target)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
We can see the support vector machine got a higher score than naïve Bayes. What does it mean? We move on to metrics. Using metrics Classification report & confusion matrix Here we will use a simple example to show classification reports and confusion matrices. y_true is the test data; y_pred is the prediction.
from sklearn import metrics y_true = ["cat", "ant", "cat", "cat", "ant", "bird", "bird"] y_pred = ["ant", "ant", "cat", "cat", "ant", "cat", "bird"] print(metrics.classification_report(y_true, y_pred, target_names=["ant", "bird", "cat"]))
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Here we can see that the predictions:
- found ant 3 times when it should have been found only twice, hence a precision of 0.67 (precision = TP / (TP + FP));
- found both of the true ants, hence a recall of 1.0 (recall = TP / (TP + FN));
- the f1-score is the harmonic mean of precision and recall;
- a support of 2 means there were 2 ants in the true data set.
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html Confusion matrix
metrics.confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
In the confusion matrix the labels give the order of the rows: ant was correctly categorised twice and was never miscategorised; bird was correctly categorised once and was categorised as cat once; cat was correctly categorised twice and was categorised as ant once.
metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Back to '20 newsgroups dataset'
print(metrics.classification_report(twenty_test.target, predicted_svm, target_names=twenty_test.target_names))
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
We can see where the 91% score came from.
# We got the evaluation score this way before: print(np.mean(predicted_svm == twenty_test.target)) # We get the same results using metrics.accuracy_score print(metrics.accuracy_score(twenty_test.target, predicted_svm, normalize=True, sample_weight=None))
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Now let's see the confusion matrix.
print(twenty_train.target_names) metrics.confusion_matrix(twenty_test.target, predicted_bayes)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
So we can see the naïve Bayes classifier got more documents right in some categories, but it also put a higher proportion of documents into the last category.
metrics.confusion_matrix(twenty_test.target, predicted_svm)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
We can see that with the support vector machine, atheism is miscategorised as Christian, and science and medicine as computer graphics, a high proportion of the time. Parameter tuning Transformations and classifiers have various parameters. Rather than manually tweaking each parameter in the pipeline, it is possible to use a grid search instead. Here we try a couple of options for each stage; the more options, the longer the grid search will take.
from sklearn.grid_search import GridSearchCV

parameters = {'vect__ngram_range': [(1, 1), (1, 2)],
              'tfidf__use_idf': (True, False),
              'clf__alpha': (1e-3, 1e-4),
              }

gs_clf = GridSearchCV(text_clf_svm_fit, parameters, n_jobs=-1)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Running the search on all the data will take a little while, 10-30 seconds on a newish desktop with 8 cores. If you don't want to wait that long, uncomment the line with :400 and comment out the other.
#gs_clf_fit = gs_clf.fit(twenty_train.data[:400], twenty_train.target[:400])
gs_clf_fit = gs_clf.fit(twenty_train.data, twenty_train.target)

best_parameters, score, _ = max(gs_clf_fit.grid_scores_, key=lambda x: x[1])
for param_name in sorted(parameters.keys()):
    print("%s: %r" % (param_name, best_parameters[param_name]))

score
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Well, that is a significant improvement. Let's use these new parameters.
text_clf_svm_tuned = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))),
                               ('tfidf', TfidfTransformer(use_idf=True)),
                               ('clf', SGDClassifier(loss='hinge', penalty='l2',
                                                     alpha=0.0001, n_iter=5, random_state=42)),
                               ])
text_clf_svm_tuned_fit = text_clf_svm_tuned.fit(twenty_train.data, twenty_train.target)
predicted_tuned = text_clf_svm_tuned_fit.predict(docs_test)
metrics.accuracy_score(twenty_test.target, predicted_tuned, normalize=True, sample_weight=None)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Why has this only given 0.93 instead of 0.97? http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html
for x in gs_clf_fit.grid_scores_:
    print x[0], x[1], x[2]
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Moving on from that, let's see where the improvements were made.
print(metrics.classification_report(twenty_test.target, predicted_svm, target_names=twenty_test.target_names)) metrics.confusion_matrix(twenty_test.target, predicted_svm) print(metrics.classification_report(twenty_test.target, predicted_tuned, target_names=twenty_test.target_names)) metrics.confusion_matrix(twenty_test.target, predicted_tuned)
Session2/code/01 Working with text.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Upper Air Analysis using Declarative Syntax The MetPy declarative syntax allows for a simplified interface to creating common meteorological analyses including upper air observation plots.
from datetime import datetime import pandas as pd from metpy.cbook import get_test_data import metpy.plots as mpplots from metpy.units import units
v1.0/_downloads/f1c8c5b9729cd7164037ec8618030966/upperair_declarative.ipynb
metpy/MetPy
bsd-3-clause
Getting the data In this example, data is originally from the Iowa State Upper-air archive (https://mesonet.agron.iastate.edu/archive/raob/) available through a Siphon method. The data are pre-processed to attach latitude/longitude locations for each RAOB site.
data = pd.read_csv(get_test_data('UPA_obs.csv', as_file_obj=False))
v1.0/_downloads/f1c8c5b9729cd7164037ec8618030966/upperair_declarative.ipynb
metpy/MetPy
bsd-3-clause
Plotting the data Use the declarative plotting interface to create a CONUS upper-air map for 500 hPa
# Plotting the Observations
obs = mpplots.PlotObs()
obs.data = data
obs.time = datetime(1993, 3, 14, 0)
obs.level = 500 * units.hPa
obs.fields = ['temperature', 'dewpoint', 'height']
obs.locations = ['NW', 'SW', 'NE']
obs.formats = [None, None, lambda v: format(v, '.0f')[:3]]
obs.vector_field = ('u_wind', 'v_wind')
obs.reduce_points = 0

# Add map features for the particular panel
panel = mpplots.MapPanel()
panel.layout = (1, 1, 1)
panel.area = (-124, -72, 20, 53)
panel.projection = 'lcc'
panel.layers = ['coastline', 'borders', 'states', 'land', 'ocean']
panel.plots = [obs]

# Collecting panels for complete figure
pc = mpplots.PanelContainer()
pc.size = (15, 10)
pc.panels = [panel]

# Showing the results
pc.show()
v1.0/_downloads/f1c8c5b9729cd7164037ec8618030966/upperair_declarative.ipynb
metpy/MetPy
bsd-3-clause
variables
weight_kg = 55
print(weight_kg)
print('weight in pounds:', weight_kg * 2.2)

import numpy
numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
%whos
data = numpy.loadtxt(fname='data/weather-01.csv', delimiter=',')
%whos
%whos
print(data.dtype)
print(data.shape)
01-analysing-data.ipynb
drwalshaw/sc-python
mit
This is 60 by 40: 60 rows and 40 columns.
print ("first value in data:",data [0,0]) print ('A middle value:',data[30,20])
01-analysing-data.ipynb
drwalshaw/sc-python
mit
Let's get the first 10 columns for the first 4 rows with print(data[0:4, 0:10]); the slice starts at index 0 and goes up to but not including index 4.
print (data[0:4, 0:10])
01-analysing-data.ipynb
drwalshaw/sc-python
mit