Well, we are still not very happy with these results. We still have not defined the type of the input value. The %%cython magic function offers a number of options, among them -a or --annotate (besides the -n or --name we have already seen). If we pass it this parameter we get a representation of the code with colors marking the slowest parts (darker yellow) and the more optimized parts (lighter), or those running at C speed (white). Let's use it to find out where our bottlenecks are (applied to the latest version of our code):
%%cython --annotate import numpy as np cdef tuple cbusca_min_cython3(malla): cdef list minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = [] minimosy = [] for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy) def busca_min_cython3(malla): return cbusca_min_cython3(malla)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
The if looks like the slowest part. We are using the input value, which has no Cython type defined. The loops seem to be optimized (we declared the variables involved in the loops as unsigned int). But none of the parts the numpy array goes through look very optimized... Cythonizing, which is a gerund (take 4). Right now, by doing import numpy as np we have access to numpy's Python functionality. To access numpy's C functionality we have to cimport numpy. The cimport is used to import special information about the numpy module at compile time. This information lives in the numpy.pxd file, which is part of the Cython distribution. cimport is also used to import from the C stdlib. Let's use this to declare the type of the numpy array.
%%cython --name probandocython4 import numpy as np cimport numpy as np cpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla): cdef list minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = [] minimosy = [] for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy) %timeit busca_min_cython4(data)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
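As a brief aside (not part of the original notebook), here is a minimal sketch of the other use of cimport mentioned above, importing from the C stdlib, inside a %%cython cell; the function name c_hypot is just an illustrative example:

```
%%cython
# cimport pulls in the C declaration of sqrt at compile time,
# so the call below goes straight to the C math library with no Python overhead.
from libc.math cimport sqrt

cpdef double c_hypot(double a, double b):
    return sqrt(a * a + b * b)
```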
Wow!!! We have just obtained a speedup of around 25x to 30x. Let's check that the result is the same as that of the original function:
a, b = busca_min(data) print(a) print(b) aa, bb = busca_min_cython4(data) print(aa) print(bb) print(np.array_equal(a, aa)) print(np.array_equal(b, bb))
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
Well, it looks like it is :-) Let's see whether we have turned most of the previous code white, or at least lighter, using --annotate.
%%cython --annotate import numpy as np cimport numpy as np cpdef tuple busca_min_cython4(np.ndarray[double, ndim = 2] malla): cdef list minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = [] minimosy = [] for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
We can see that many of the dark parts are now lighter!!! But there still seems to be room for improvement. Cythonizing, which is a gerund (take 5). Let's see whether declaring the function's return type as a numpy array instead of a tuple brings any improvement:
%%cython --name probandocython5 import numpy as np cimport numpy as np cpdef np.ndarray[int, ndim = 2] busca_min_cython5(np.ndarray[double, ndim = 2] malla): cdef list minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = [] minimosy = [] for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array([minimosx, minimosy]) %timeit busca_min_cython5(data)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
Hmm, compared with the previous version we only get a gain of about 2%–4%. Cythonizing, which is a gerund (take 6). Let's stop using lists and instead use empty numpy arrays that we keep 'filling' with numpy.append. Let's see whether using numpy arrays everywhere gets us any kind of improvement:
%%cython --name probandocython6 import numpy as np cimport numpy as np cpdef tuple busca_min_cython6(np.ndarray[double, ndim = 2] malla): cdef np.ndarray[long, ndim = 1] minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = np.array([], dtype = np.int) minimosy = np.array([], dtype = np.int) for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): np.append(minimosx, i) np.append(minimosy, j) return minimosx, minimosy %timeit busca_min_cython6(data) np.append?
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
Actually, in the previous piece of code I am using something very inefficient. The numpy.append function does not behave like a list that you keep appending elements to. What we are really doing is creating copies of the existing array to turn it into a new array with one extra element. That is not what we intended!!!! Cythonizing, which is a gerund (take 7). Python provides efficient arrays for numeric values (so the documentation says) that can also be used the way I am using lists in my function (empty arrays to which we keep adding elements). Let's use them with Cython.
%%cython --name probandocython7 import numpy as np cimport numpy as np from cpython cimport array as c_array from array import array cpdef tuple busca_min_cython7(np.ndarray[double, ndim = 2] malla): cdef c_array.array minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = array('L', []) minimosy = array('L', []) for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy) %timeit busca_min_cython7(data)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
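To make it concrete why take 6 is slow, here is a tiny sketch (not from the original notebook) showing that numpy.append returns a brand-new array each time instead of growing the existing one, so appending inside a loop copies the whole array on every iteration:

```python
import numpy as np

a = np.array([], dtype=int)
b = np.append(a, 1)   # returns a NEW array; 'a' is left untouched
print(a)              # [] -> still empty (in take 6 the appended values were effectively discarded)
print(b)              # [1]
# Growing an n-element array this way copies roughly n elements per append,
# so filling it in a loop costs O(n^2) instead of a list's amortized O(n).
```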
It looks like we have gained another 25%–30% over the most efficient version we had so far. Compared with the initial pure Python implementation we have an improvement of 30x–35x over the initial speed. Let's check that we are still getting the same results.
a, b = busca_min(data) print(a) print(b) aa, bb = busca_min_cython7(data) print(aa) print(bb) print(np.array_equal(a, aa)) print(np.array_equal(b, bb))
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
What happens if the size of the array increases?
data2 = np.random.randn(5000, 5000) %timeit busca_min(data2) %timeit busca_min_cython7(data2) a, b = busca_min(data2) print(a) print(b) aa, bb = busca_min_cython7(data2) print(aa) print(bb) print(np.array_equal(a, aa)) print(np.array_equal(b, bb))
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
It seems that as the size of the input data grows the numbers stay consistent and the performance holds up. In this particular case it looks like we have already reached speedups of more than 35x over the initial implementation. Cythonizing, which is a gerund (take 8). We can use compiler directives that help the compiler decide what to do. Among them is the boundscheck option, which skips checking for possible IndexError, assuming the code is free of such indexing errors. We will use it together with wraparound. This last option skips checking for indexing relative to the end of the iterable (for example, mi_iterable[-1]). In this particular case the second option brings no performance improvement, but we leave it in since we have already tried it.
%%cython --name probandocython8 import numpy as np cimport numpy as np from cpython cimport array as c_array from array import array cimport cython @cython.boundscheck(False) @cython.wraparound(False) cpdef tuple busca_min_cython8(np.ndarray[double, ndim = 2] malla): cdef c_array.array minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 minimosx = array('L', []) minimosy = array('L', []) for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy) %timeit busca_min_cython8(data)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
It looks like we have scraped off another little bit of performance. Cythonizing, which is a gerund (take 9). Instead of numpy arrays we are going to use memoryviews. Memoryviews are fast-access arrays. If we only want to store things and do not need any of the features of a numpy array, they can be a good solution. If we need some extra functionality we can always convert one into a numpy array using numpy.asarray.
%%cython --name probandocython9 import numpy as np cimport numpy as np from cpython cimport array as c_array from array import array cimport cython @cython.boundscheck(False) @cython.wraparound(False) #cpdef tuple busca_min_cython9(np.ndarray[double, ndim = 2] malla): cpdef tuple busca_min_cython9(double [:,:] malla): cdef c_array.array minimosx, minimosy cdef unsigned int i, j cdef unsigned int ii = malla.shape[1]-1 cdef unsigned int jj = malla.shape[0]-1 cdef unsigned int start = 1 #cdef float [:, :] malla_view = malla minimosx = array('L', []) minimosy = array('L', []) for i in range(start, ii): for j in range(start, jj): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy) %timeit busca_min_cython9(data)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
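As a side note, here is a minimal sketch (not from the original notebook) of the conversion mentioned above: a typed memoryview only supports fast element access, but numpy.asarray wraps it back into a full numpy array when extra numpy functionality is needed; view_mean is just an illustrative name:

```
%%cython
import numpy as np

cpdef double view_mean(double [:, :] malla):
    # np.asarray wraps the typed memoryview in an ndarray sharing the same buffer,
    # so full numpy functionality (here .mean()) is available again.
    return np.asarray(malla).mean()
```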
It seems that, for all practical purposes, the performance is similar to what we already had, so we have stayed about the same. Bonus track. I am going to try using pypy (2.4 (CPython 2.7)) together with numpypy to see what we can get.
%%pypy import numpy as np import time np.random.seed(0) data = np.random.randn(2000,2000) def busca_min(malla): minimosx = [] minimosy = [] for i in range(1, malla.shape[1]-1): for j in range(1, malla.shape[0]-1): if (malla[j, i] < malla[j-1, i-1] and malla[j, i] < malla[j-1, i] and malla[j, i] < malla[j-1, i+1] and malla[j, i] < malla[j, i-1] and malla[j, i] < malla[j, i+1] and malla[j, i] < malla[j+1, i-1] and malla[j, i] < malla[j+1, i] and malla[j, i] < malla[j+1, i+1]): minimosx.append(i) minimosy.append(j) return np.array(minimosx), np.array(minimosy) resx, resy = busca_min(data) print(data) print(len(resx), len(resy)) print(resx) print(resy) t = [] for i in range(100): t0 = time.time() busca_min(data) t1 = time.time() - t0 t.append(t1) print(sum(t) / 100.)
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
The last value of the previous output is the average time after repeating the computation 100 times. Wow!! It seems that, without making any modifications, the result is 10x–15x faster than the one obtained with the initial function, and it ends up being only about 3.5x slower than what we achieved with Cython. Summary of results. Let's look at the complete results in a brief summary. First, the timings of the different versions of the busca_min_xxx function:
funcs = [busca_min, busca_min_numba, busca_min_cython1, busca_min_cython2, busca_min_cython3, busca_min_cython4, busca_min_cython5, busca_min_cython6, busca_min_cython7, busca_min_cython8, busca_min_cython9] t = [] for func in funcs: res = %timeit -o func(data) t.append(res.best) index = np.arange(len(t)) plt.figure(figsize = (12, 6)) plt.bar(index, t) plt.xticks(index + 0.4, [func.__name__[9:] for func in funcs]) plt.tight_layout()
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
In the chart above, the first bar corresponds to the starting function (busca_min). Remember that the pypy version took about 0.38 seconds. And now let's compare the timings of busca_min (the original version) and the last Cython version we created, busca_min_cython9, using different sizes of the input matrix:
tamanyos = [10, 100, 500, 1000, 2000, 5000] t_p = [] t_c = [] for i in tamanyos: data = np.random.randn(i, i) res = %timeit -o busca_min(data) t_p.append(res.best) res = %timeit -o busca_min_cython9(data) t_c.append(res.best) plt.figure(figsize = (10,6)) plt.plot(tamanyos, t_p, 'bo-') plt.plot(tamanyos, t_c, 'ro-') ratio = np.array(t_p) / np.array(t_c) plt.figure(figsize = (10,6)) plt.plot(tamanyos, ratio, 'bo-')
C elemental, querido Cython..ipynb
Ykharo/notebooks
bsd-2-clause
Given a 2D set of points spanned by the $x$ and $y$ axes, we will try to fit a line that best approximates the data. The equation of the line, in slope-intercept form, is: $y = mx + b$.
def generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise): # randomly select x x = np.random.uniform(-abs_value, abs_value, num_points) # y = mx + b + noise y = slope*x + intercept + np.random.uniform(-abs_noise, abs_noise, num_points) return x, y def plot_points(x,y): plt.scatter(x, y) plt.title('Scatter plot of x and y') plt.xlabel('x') plt.ylabel('y') slope = 4 intercept = -3 num_points = 20 abs_value = 4 abs_noise = 2 x, y = generate_random_points_along_a_line (slope, intercept, num_points, abs_value, abs_noise) plot_points(x, y)
src/linear_regression/linear_regression.ipynb
kaushikpavani/neural_networks_in_python
mit
If $N$ = num_points, then the error in fitting a line to the points (also defined as the cost, $C$) can be written as: $C = \sum_{i=1}^{N} (y_i-(mx_i+b))^2$. To perform gradient descent, we need the partial derivatives of the cost $C$ with respect to the slope $m$ and the intercept $b$: $\frac{\partial C}{\partial m} = \sum_{i=1}^{N} -2(y_i-(mx_i+b))\, x_i$ and $\frac{\partial C}{\partial b} = \sum_{i=1}^{N} -2(y_i-(mx_i+b))$.
# this function computes gradient with respect to slope m def grad_m (x, y, m, b): return np.sum(np.multiply(-2*(y - (m*x + b)), x)) # this function computes gradient with respect to intercept b def grad_b (x, y, m, b): return np.sum(-2*(y - (m*x + b))) # Performs gradient descent def gradient_descent (x, y, num_iterations, learning_rate): # Initialize m and b m = np.random.uniform(-1, 1, 1) b = np.random.uniform(-1, 1, 1) # Update m and b in direction opposite to that of the gradient to minimize loss for i in range(num_iterations): m = m - learning_rate * grad_m (x, y, m, b) b = b - learning_rate * grad_b (x, y, m, b) # Return final slope and intercept return m, b # Plot point along with the best fit line def plot_line (m, b, x, y): plot_points(x,y) plt.plot(x, x*m + b, 'r') plt.show() # In general, keep num_iterations high and learning_rate low. num_iterations = 1000 learning_rate = 0.0001 m, b = gradient_descent (x, y, num_iterations, learning_rate) plot_line (m, b, x, y) plt.show()
src/linear_regression/linear_regression.ipynb
kaushikpavani/neural_networks_in_python
mit
Now create the SparkContext. A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster. Note! You can only have one SparkContext at a time the way we are running things here.
sc = SparkContext()
udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb
AtmaMani/pyChakras
mit
Basic Operations We're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file. We'll write an example text file to read using some special jupyter notebook commands, but feel free to use any .txt file:
%%writefile example.txt first line second line third line fourth line
udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb
AtmaMani/pyChakras
mit
Creating the RDD Now we can take in the textfile using the textFile method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
textFile = sc.textFile('example.txt')
udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb
AtmaMani/pyChakras
mit
Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. Actions We have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows. RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions:
textFile.count() textFile.first()
udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb
AtmaMani/pyChakras
mit
Transformations Now we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In this case, there should be only one line that contains it.
secfind = textFile.filter(lambda line: 'second' in line) # RDD secfind # Perform action on transformation secfind.collect() # Perform action on transformation secfind.count()
udemy_ml_bootcamp/Big-Data-and-Spark/Introduction to Spark and Python.ipynb
AtmaMani/pyChakras
mit
Load gromacs trajectory/topology Gromacs was used to sample a dilute solution of sodium chloride in SPC/E water for 100 ns. The trajectory and .gro file loaded below have been stripped of hydrogens to reduce disk space.
traj = md.load('gmx/traj_noh.xtc', top='gmx/conf_noh.gro') traj
nacl-water/nacl.ipynb
mlund/kirkwood-buff
mit
Calculate average number densities for solute and solvent
volume=0 for vec in traj.unitcell_lengths: volume = volume + vec[0]*vec[1]*vec[2] / traj.n_frames N_c = len(traj.topology.select('name NA or name CL')) N_w = len(traj.topology.select('name O')) rho_c = N_c / volume rho_w = N_w / volume print "Simulation time = ", traj.time[-1]*1e-3, 'ns' print "Average volume = ", volume, 'nm-3' print "Average side-length = ", volume**(1/3.), 'nm' print "Number of solute molecules = ", N_c print "Number of water molecules = ", N_w print "Solute density = ", rho_c, 'nm-3' print "Water density = ", rho_w, 'nm-3' steps=range(traj.n_frames) plt.xlabel('steps') plt.ylabel('box sidelength, x (nm)') plt.plot(traj.unitcell_lengths[:,0])
nacl-water/nacl.ipynb
mlund/kirkwood-buff
mit
Compute and plot RDFs Note: The radial distribution function in mdtraj differs from e.g. Gromacs g_rdf in the way the data is normalized, and the $g(r)$ may need rescaling. It seems that densities are calculated from the number of selected pairs, which for the cc case excludes all the self terms. This is easily corrected, and is obviously not needed for the wc case.
rmax = (volume)**(1/3.)/2 select_cc = traj.topology.select_pairs('name NA or name CL', 'name NA or name CL') select_wc = traj.topology.select_pairs('name NA or name CL', 'name O') r, g_cc = md.compute_rdf(traj, select_cc, r_range=[0.0,rmax], bin_width=0.01, periodic=True) r, g_wc = md.compute_rdf(traj, select_wc, r_range=[0.0,rmax], bin_width=0.01, periodic=True) g_cc = g_cc * len(select_cc) / (0.5*N_c**2) # re-scale to account for diagonal in pair matrix np.savetxt('g_cc.dat', np.column_stack( (r,g_cc) )) np.savetxt('g_wc.dat', np.column_stack( (r,g_wc) )) plt.xlabel('$r$/nm') plt.ylabel('$g(r)$') plt.plot(r, g_cc, 'r-') plt.plot(r, g_wc, 'b-')
nacl-water/nacl.ipynb
mlund/kirkwood-buff
mit
Calculate KB integrals Here we calculate the number of solute molecules around other solute molecules (cc) and around water (wc). For example, $$ N_{cc} = 4\pi\rho_c\int_0^{\infty} \left ( g(r)_{cc} -1 \right ) r^2 dr$$ The preferential binding parameter is subsequently calculated as $\Gamma = N_{cc}-N_{wc}$.
dr = r[1]-r[0] N_cc = rho_c * 4*pi*np.cumsum( ( g_cc - 1 )*r**2*dr ) N_wc = rho_c * 4*pi*np.cumsum( ( g_wc - 1 )*r**2*dr ) Gamma = N_cc - N_wc plt.xlabel('$r$/nm') plt.ylabel('$\\Gamma = N_{cc}-N_{wc}$') plt.plot(r, Gamma, 'r-')
nacl-water/nacl.ipynb
mlund/kirkwood-buff
mit
Finite system size corrected KB integrals As can be seen in the above figure, the KB integrals do not converge, since in a finite-sized $NVT$ simulation $g(r)$ can never exactly go to unity at large separations. To correct for this, a simple scaling factor can be applied, as described in the link at the top of the page, $$ g_{jc}^{\prime} (r) = g_{jc}(r) \cdot \frac{N_j\left (1-V(r)/V\right )}{N_j\left (1-V(r)/V\right )-\Delta N_{jc}(r)-\delta_{jc}} $$ Lastly, we take a little extra care in producing a refined PDF file for the uncorrected and corrected integrals.
Vn = 4*pi/3*r**3 / volume g_ccc = g_cc * N_c * (1-Vn) / ( N_c*(1-Vn)-N_cc-1) g_wcc = g_wc * N_w * (1-Vn) / ( N_w*(1-Vn)-N_wc-0) N_ccc = rho_c * 4*pi*dr*np.cumsum( ( g_ccc - 1 )*r**2 ) N_wcc = rho_c * 4*pi*dr*np.cumsum( ( g_wcc - 1 )*r**2 ) Gammac = N_ccc - N_wcc plt.xlabel('$r$/nm') plt.ylabel('$\\Gamma = N_{cc}-N_{wc}$') plt.plot(r, Gamma, color='red', ls='-', lw=2, label='uncorrected') plt.plot(r, Gammac, color='green', lw=2, label='corrected') plt.legend(loc=0,frameon=False, fontsize=16) plt.yticks( np.arange(-0.4, 0.5, 0.1)) plt.ylim((-0.45,0.45)) plt.savefig('gamma.pdf', bbox_inches='tight')
nacl-water/nacl.ipynb
mlund/kirkwood-buff
mit
A caveat when the index has no labels: when no labels have been assigned, integer slicing is treated as label slicing, so the last value is included.
df = pd.DataFrame(np.random.randn(5, 3)) df df.columns = ["c1", "c2", "c3"] df.ix[0:2, 1:2]
통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/14.Pandas 고급 인덱싱.ipynb
kimkipyo/dss_git_kkp
mit
loc indexer: label-based indexing; even when a number is given, it is interpreted as a label; label lists allowed; label slicing allowed; boolean arrays allowed. iloc indexer: integer-based indexing; string labels not allowed; integer lists allowed; integer slicing allowed; boolean arrays allowed.
np.random.seed(1) df = pd.DataFrame(np.random.randint(1, 11, size=(4,3)), columns=["A", "B", "C"], index=["a", "b", "c", "d"]) df df.ix[["a", "c"], "B":"C"] df.ix[[0, 2], 1:3] df.loc[["a", "c"], "B":"C"] df.ix[2:4, 1:3] df.loc[2:4, 1:3] df.iloc[2:4, 1:3] df.iloc[["a", "c"], "B":"C"]
통계, 머신러닝 복습/160502월_1일차_분석 환경, 소개/14.Pandas 고급 인덱싱.ipynb
kimkipyo/dss_git_kkp
mit
A Bioinformatics Library for Data Scientists, Students, and Developers Jai Rideout and Evan Bolyen Caporaso Lab, Northern Arizona University What is scikit-bio? A Python bioinformatics library for: data scientists, students, developers. "The first step in developing a new genetic analysis algorithm is to decide how to make the input data file format different from all pre-existing analysis data file formats." - Law's First Law Axt BAM SAM BED bedGraph bigBed bigGenePred table bigWig Chain GenePred table GFF GTF HAL MAF Microarray Net Personal Genome SNP format PSL VCF WIG abi ace clustal embl fasta fastq genbank ig imgt nexus phred phylip pir seqxml sff stockholm swiss tab qual uniprot-xml emboss PhyloXML NexML newick CDAO MDL bcf caf gcproj scf SBML lsmat ordination qseq BIOM ASN.1 .2bit .nib ENCODE ... (of these, the highlighted formats in the original slides are the ones scikit-bio supports: clustal, fasta, fastq, phylip, newick, lsmat, ordination, qseq). I/O in bioinformatics is hard: format redundancy (many-to-many), format ambiguity, heterogeneous sources. How can we solve this? An I/O Registry! Format redundancy (many-to-many)
from skbio import DNA seq1 = DNA.read('data/seqs.fasta', qual='data/seqs.qual') seq2 = DNA.read('data/seqs.fastq', variant='illumina1.8') seq1 seq1 == seq2
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Format ambiguity
import skbio.io skbio.io.sniff('data/mystery_file.gz')
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Heterogeneous sources Read a gzip file from a URL:
from skbio import TreeNode tree1 = skbio.io.read('http://localhost:8888/files/data/newick.gz', into=TreeNode) print(tree1.ascii_art())
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Read a bz2 file from a file path:
import io with io.open('data/newick.bz2', mode='rb') as open_filehandle: tree2 = skbio.io.read(open_filehandle, into=TreeNode) print(tree2.ascii_art())
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Read a list of lines:
tree3 = skbio.io.read(['((a, b, c), d:15):0;'], into=TreeNode) print(tree3.ascii_art())
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Let's make a format! YASF (Yet Another Sequence Format)
!cat data/yasf-seq.yml import yaml yasf = skbio.io.create_format('yasf') @yasf.sniffer() def yasf_sniffer(fh): return fh.readline().rstrip() == "#YASF", {} @yasf.reader(DNA) def yasf_to_dna(fh): seq = yaml.load(fh.read()) return DNA(seq['Sequence'], metadata={ 'id': seq['ID'], 'location': seq['Location'], 'description': seq['Description'] }) seq = DNA.read("data/yasf-seq.yml") seq
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Convert YASF to FASTA
seq.write("data/not-yasf.fna", format='fasta') !cat data/not-yasf.fna
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
We are in beta - should you even use our software? YES! API Lifecycle
from skbio.util._decorator import stable @stable(as_of='0.4.0') def add(a, b): """add two numbers. Parameters ---------- a, b : int Numbers to add. Returns ------- int Sum of `a` and `b`. """ return a + b help(add)
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
What is stable: skbio.io, skbio.sequence. What is next: skbio.alignment, skbio.tree, skbio.diversity, skbio.stats, <your awesome subpackage!> Sequence API: putting the scikit in scikit-bio
seq = DNA("AacgtGTggA", lowercase='exon') seq
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Made with numpy
seq.values
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
And a pinch of pandas
seq.positional_metadata
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Slicing with positional metadata:
seq[seq.positional_metadata['exon']]
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Application: building a taxonomy classifier
aligned_seqs_fp = 'data/gg_13_8_otus/rep_set_aligned/82_otus.fasta' taxonomy_fp = 'data/gg_13_8_otus/taxonomy/82_otu_taxonomy.txt' from skbio import DNA fwd_primer = DNA("GTGCCAGCMGCCGCGGTAA", metadata={'label':'fwd-primer'}) rev_primer = DNA("GGACTACHVGGGTWTCTAAT", metadata={'label':'rev-primer'}).reverse_complement() def seq_to_regex(seq): result = [] for base in str(seq): if base in DNA.degenerate_chars: result.append('[{0}]'.format( ''.join(DNA.degenerate_map[base]))) else: result.append(base) return ''.join(result) regex = '({0}.*{1})'.format(seq_to_regex(fwd_primer), seq_to_regex(rev_primer)) import numpy as np import skbio starts = [] stops = [] for seq in skbio.io.read(aligned_seqs_fp, format='fasta', constructor=DNA): for match in seq.find_with_regex(regex, ignore=seq.gaps()): starts.append(match.start) stops.append(match.stop) locus = slice(int(np.median(starts)), int(np.median(stops))) locus kmer_counts = [] seq_ids = [] for seq in skbio.io.read(aligned_seqs_fp, format='fasta', constructor=DNA): seq_ids.append(seq.metadata['id']) sliced_seq = seq[locus].degap() kmer_counts.append(sliced_seq.kmer_frequencies(8)) from sklearn.feature_extraction import DictVectorizer X = DictVectorizer().fit_transform(kmer_counts) taxonomy_level = 3 # class id_to_taxon = {} with open(taxonomy_fp) as f: for line in f: id_, taxon = line.strip().split('\t') id_to_taxon[id_] = '; '.join(taxon.split('; ')[:taxonomy_level]) y = [id_to_taxon[seq_id] for seq_id in seq_ids] from sklearn.feature_selection import SelectPercentile X = SelectPercentile().fit_transform(X, y) from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) from sklearn.svm import SVC y_pred = SVC(C=10, kernel='linear', degree=3, gamma=0.001).fit(X_train, y_train).predict(X_test) from sklearn.metrics import confusion_matrix, f1_score cm = confusion_matrix(y_test, y_pred) cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis] plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix') print("F-score: %1.3f" % f1_score(y_test, y_pred, average='micro'))
scipy-2015/A Bioinformatics Library for Data Scientists, Students, and Developers.ipynb
biocore/scikit-bio-presentations
bsd-3-clause
Below we define a function to generate random intervals with various properties, returning a dataframe of intervals.
def make_random_intervals( n=1e5, n_chroms=1, max_coord=None, max_length=10, sort=False, categorical_chroms=False, ): n = int(n) n_chroms = int(n_chroms) max_coord = (n // n_chroms) if max_coord is None else int(max_coord) max_length = int(max_length) chroms = np.array(['chr'+str(i+1) for i in range(n_chroms)])[ np.random.randint(0, n_chroms, n)] starts = np.random.randint(0, max_coord, n) ends = starts + np.random.randint(0, max_length, n) df = pd.DataFrame({ 'chrom':chroms, 'start':starts, 'end':ends }) if categorical_chroms: df['chrom'] = df['chrom'].astype('category') if sort: df = df.sort_values(['chrom','start','end']).reset_index(drop=True) return df
docs/guide-performance.ipynb
open2c/bioframe
mit
Overlap In this chapter we characterize the performance of the key function, bioframe.overlap. We show that the speed depends on: the number of intervals, the number of intersections (or density of intervals), the type of overlap (inner, outer, left), and the dtype of chromosomes. vs number of intervals
timings = {} for n in [1e2, 1e3, 1e4, 1e5, 1e6]: df = make_random_intervals(n=n, n_chroms=1) df2 = make_random_intervals(n=n, n_chroms=1) timings[n] = %timeit -o -r 1 bioframe.overlap(df, df2) plt.loglog( list(timings.keys()), list([r.average for r in timings.values()]), 'o-', ) plt.xlabel('N intervals') plt.ylabel('time, seconds') plt.gca().set_aspect(1.0) plt.grid()
docs/guide-performance.ipynb
open2c/bioframe
mit
vs total number of intersections Note that not only the number of intervals, but also the density of intervals determines the performance of overlap.
timings = {} n_intersections = {} n = 1e4 for avg_interval_len in [3, 1e1, 3e1, 1e2, 3e2]: df = make_random_intervals(n=n, n_chroms=1, max_length=avg_interval_len*2) df2 = make_random_intervals(n=n, n_chroms=1, max_length=avg_interval_len*2) timings[avg_interval_len] = %timeit -o -r 1 bioframe.overlap(df, df2) n_intersections[avg_interval_len] = bioframe.overlap(df, df2).shape[0] plt.loglog( list(n_intersections.values()), list([r.average for r in timings.values()]), 'o-', ) plt.xlabel('N intersections') plt.ylabel('time, seconds') plt.gca().set_aspect(1.0) plt.grid()
docs/guide-performance.ipynb
open2c/bioframe
mit
vs number of chromosomes If we consider a genome of the same length, divided into more chromosomes, the timing is relatively unaffected.
timings = {} n_intersections = {} n = 1e5 for n_chroms in [1, 3, 10, 30, 100, 300, 1000]: df = make_random_intervals(n, n_chroms) df2 = make_random_intervals(n, n_chroms) timings[n_chroms] = %timeit -o -r 1 bioframe.overlap(df, df2) n_intersections[n_chroms] = bioframe.overlap(df, df2).shape[0]
docs/guide-performance.ipynb
open2c/bioframe
mit
Note this test preserves the number of intersections, which is likely why performance remains similar over the considered range.
n_intersections plt.loglog( list(timings.keys()), list([r.average for r in timings.values()]), 'o-', ) plt.ylim([1e-1, 10]) plt.xlabel('# chromosomes') plt.ylabel('time, seconds') # plt.gca().set_aspect(1.0) plt.grid()
docs/guide-performance.ipynb
open2c/bioframe
mit
vs other parameters: join type, sorted or categorical inputs Note that the defaults for overlap are how='left' and keep_order=True, and the returned dataframe is sorted after the overlaps have been ascertained. Also note that keep_order=True is only a valid argument for how='left', as the order is not well-defined for inner or outer overlaps.
df = make_random_intervals() df2 = make_random_intervals() %timeit -r 1 bioframe.overlap(df, df2) %timeit -r 1 bioframe.overlap(df, df2, how='left', keep_order=False) df = make_random_intervals() df2 = make_random_intervals() %timeit -r 1 bioframe.overlap(df, df2, how='outer') %timeit -r 1 bioframe.overlap(df, df2, how='inner') %timeit -r 1 bioframe.overlap(df, df2, how='left', keep_order=False)
docs/guide-performance.ipynb
open2c/bioframe
mit
Note below that the detection of overlaps takes a relatively small fraction of the execution time; the user-facing function spends the majority of its time formatting the output table.
df = make_random_intervals() df2 = make_random_intervals() %timeit -r 1 bioframe.overlap(df, df2) %timeit -r 1 bioframe.overlap(df, df2, how='inner') %timeit -r 1 bioframe.ops._overlap_intidxs(df, df2) %timeit -r 1 bioframe.ops._overlap_intidxs(df, df2, how='inner')
docs/guide-performance.ipynb
open2c/bioframe
mit
Note that sorting the inputs provides a moderate speedup, as does storing chromosomes as categoricals.
print('Default inputs (outer/inner joins):') df = make_random_intervals() df2 = make_random_intervals() %timeit -r 1 bioframe.overlap(df, df2) %timeit -r 1 bioframe.overlap(df, df2, how='inner') print('Sorted inputs (outer/inner joins):') df_sorted = make_random_intervals(sort=True) df2_sorted = make_random_intervals(sort=True) %timeit -r 1 bioframe.overlap(df_sorted, df2_sorted) %timeit -r 1 bioframe.overlap(df_sorted, df2_sorted, how='inner') print('Categorical chromosomes (outer/inner joins):') df_cat = make_random_intervals(categorical_chroms=True) df2_cat = make_random_intervals(categorical_chroms=True) %timeit -r 1 bioframe.overlap(df_cat, df2_cat) %timeit -r 1 bioframe.overlap(df_cat, df2_cat, how='inner')
docs/guide-performance.ipynb
open2c/bioframe
mit
Vs Pyranges Default arguments The core intersection function of PyRanges is faster, since a PyRanges object splits intervals by chromosome at object construction time.
def df2pr(df): return pyranges.PyRanges( chromosomes=df.chrom, starts=df.start, ends=df.end, ) timings_bf = {} timings_pr = {} for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]: df = make_random_intervals(n=n, n_chroms=1) df2 = make_random_intervals(n=n, n_chroms=1) pr = df2pr(df) pr2 = df2pr(df2) timings_bf[n] = %timeit -o -r 1 bioframe.overlap(df, df2,how='inner') timings_pr[n] = %timeit -o -r 1 pr.join(pr2) plt.loglog( list(timings_bf.keys()), list([r.average for r in timings_bf.values()]), 'o-', label='bioframe' ) plt.loglog( list(timings_pr.keys()), list([r.average for r in timings_pr.values()]), 'o-', label='pyranges' ) plt.gca().set( xlabel='N intervals', ylabel='time, seconds', aspect=1.0, xticks=10**np.arange(2,6.1) ) plt.grid() plt.legend()
docs/guide-performance.ipynb
open2c/bioframe
mit
With roundtrips to dataframes Note that pyranges performs useful calculations at the stage of creating a PyRanges object. Thus a direct comparison between bioframe and pyranges for one-off operations on pandas DataFrames should take this step into account. This roundtrip is handled by pyranges_intersect_dfs below.
def pyranges_intersect_dfs(df, df2): return df2pr(df).intersect(df2pr(df2)).as_df() timings_bf = {} timings_pr = {} for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]: df = make_random_intervals(n=n, n_chroms=1) df2 = make_random_intervals(n=n, n_chroms=1) timings_bf[n] = %timeit -o -r 1 bioframe.overlap(df, df2, how='inner') timings_pr[n] = %timeit -o -r 1 pyranges_intersect_dfs(df, df2) plt.loglog( list(timings_bf.keys()), list([r.average for r in timings_bf.values()]), 'o-', label='bioframe' ) plt.loglog( list(timings_pr.keys()), list([r.average for r in timings_pr.values()]), 'o-', label='pyranges' ) plt.gca().set( xlabel='N intervals', ylabel='time, seconds', aspect=1.0 ) plt.grid() plt.legend()
docs/guide-performance.ipynb
open2c/bioframe
mit
Memory usage
from memory_profiler import memory_usage import time def sleep_before_after(func, sleep_sec=0.5): def _f(*args, **kwargs): time.sleep(sleep_sec) func(*args, **kwargs) time.sleep(sleep_sec) return _f mem_usage_bf = {} mem_usage_pr = {} for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]: df = make_random_intervals(n=n, n_chroms=1) df2 = make_random_intervals(n=n, n_chroms=1) mem_usage_bf[n] = memory_usage( (sleep_before_after(bioframe.overlap), (df, df2), dict( how='inner')), backend='psutil_pss', include_children=True, interval=0.1) mem_usage_pr[n] = memory_usage( (sleep_before_after(pyranges_intersect_dfs), (df, df2), dict()), backend='psutil_pss', include_children=True, interval=0.1) plt.figure(figsize=(8,6)) plt.loglog( list(mem_usage_bf.keys()), list([max(r) - r[4] for r in mem_usage_bf.values()]), 'o-', label='bioframe' ) plt.loglog( list(mem_usage_pr.keys()), list([max(r) - r[4] for r in mem_usage_pr.values()]), 'o-', label='pyranges' ) plt.gca().set( xlabel='N intervals', ylabel='Memory usage, Mb', aspect=1.0 ) plt.grid() plt.legend()
docs/guide-performance.ipynb
open2c/bioframe
mit
The 2x memory consumption of bioframe is due to the fact that bioframe stores genomic coordinates as int64 by default, while pyranges uses int32:
print('Bioframe dtypes:') display(df.dtypes) print() print('Pyranges dtypes:') display(df2pr(df).dtypes) ### Combined performance figure. fig, axs = plt.subplot_mosaic( 'AAA.BBB', figsize=(9.0,4)) plt.sca(axs['A']) plt.text(-0.25, 1.0, 'A', horizontalalignment='center', verticalalignment='center', transform=plt.gca().transAxes, fontsize=19) plt.loglog( list(timings_bf.keys()), list([r.average for r in timings_bf.values()]), 'o-', color='k', label='bioframe' ) plt.loglog( list(timings_pr.keys()), list([r.average for r in timings_pr.values()]), 'o-', color='gray', label='pyranges' ) plt.gca().set( xlabel='N intervals', ylabel='time, s', aspect=1.0, xticks=10**np.arange(2,6.1), yticks=10**np.arange(-3,0.1), ) plt.grid() plt.legend() plt.sca(axs['B']) plt.text(-0.33, 1.0, 'B', horizontalalignment='center', verticalalignment='center', transform=plt.gca().transAxes, fontsize=19) plt.loglog( list(mem_usage_bf.keys()), list([max(r) - r[4] for r in mem_usage_bf.values()]), 'o-', color='k', label='bioframe' ) plt.loglog( list(mem_usage_pr.keys()), list([max(r) - r[4] for r in mem_usage_pr.values()]), 'o-', color='gray', label='pyranges' ) plt.gca().set( xlabel='N intervals', ylabel='Memory usage, Mb', aspect=1.0, xticks=10**np.arange(2,6.1), ) plt.grid() plt.legend()
docs/guide-performance.ipynb
open2c/bioframe
mit
Slicing
timings_slicing_bf = {} timings_slicing_pr = {} for n in [1e2, 1e3, 1e4, 1e5, 1e6, 3e6]: df = make_random_intervals(n=n, n_chroms=1) timings_slicing_bf[n] = %timeit -o -r 1 bioframe.select(df, ('chr1', n//2, n//4*3)) pr = df2pr(df) timings_slicing_pr[n] = %timeit -o -r 1 pr['chr1', n//2:n//4*3] plt.loglog( list(timings_slicing_bf.keys()), list([r.average for r in timings_bf.values()]), 'o-', label='bioframe' ) plt.loglog( list(timings_slicing_pr.keys()), list([r.average for r in timings_pr.values()]), 'o-', label='pyranges' ) plt.gca().set( xlabel='N intervals', ylabel='time, s', aspect=1.0 ) plt.grid() plt.legend()
docs/guide-performance.ipynb
open2c/bioframe
mit
The normal distribution test:
x=df.sort_values("temperature",axis=0) t=x["temperature"] #print(np.mean(t)) plot_fit = stats.norm.pdf(t, np.mean(t), np.std(t)) plt.plot(t,plot_fit,'-o') plt.hist(df.temperature, bins = 20 ,normed = True) plt.ylabel('Frequency') plt.xlabel('Temperature') plt.show() stats.normaltest(t)
Human_Temp.ipynb
SATHVIKRAJU/Inferential_Statistics
mit
To check whether the distribution of temperature is normal, it always helps to visualize it. We plot a histogram of the values together with the fitted normal density. There are a few outliers on the right-hand side of the distribution, but the data still look approximately normal. Performing the normality test with SciPy's normaltest function gives a p-value of 0.25. Taking the statistical significance level to be 0.05, and with the null hypothesis that the distribution is normal, we fail to reject the null hypothesis since the obtained p-value is greater than 0.05, which is consistent with a normal distribution.
#Question 2: no_of_samples=df["temperature"].count() print(no_of_samples)
Human_Temp.ipynb
SATHVIKRAJU/Inferential_Statistics
mit
We see the sample size is n = 130, and as a general rule of thumb, for the CLT to hold we need n > 30. Hence the sample size is comparatively large. Question 3 H0: The true population mean is 98.6 degrees F (null hypothesis) H1: The true population mean is not 98.6 degrees F (alternative hypothesis) Alternatively we can state that, H0: μ1 = μ2 H1: μ1 ≠ μ2
from statsmodels.stats.weightstats import ztest from scipy.stats import ttest_ind from scipy.stats import ttest_1samp t_score=ttest_1samp(t,98.6) t_score_abs=abs(t_score[0]) t_score_p_abs=abs(t_score[1]) z_score=ztest(t,value=98.6) z_score_abs=abs(z_score[0]) p_value_abs=abs(z_score[1]) print("The z score is given by: %F and the p-value is given by %6.9F"%(z_score_abs,p_value_abs)) print("The t score is given by: %F and the p-value is given by %6.9F"%(t_score_abs,t_score_p_abs))
Human_Temp.ipynb
SATHVIKRAJU/Inferential_Statistics
mit
Choosing a one-sample test vs. a two-sample test: the problem as defined has a single sample that we need to test against the population mean, so we use a one-sample test rather than a two-sample test. t-test vs. z-test: the t-test is best suited when n < 30, so for this distribution (n = 130) we can choose the z-test. Also, here we are comparing the mean against a predetermined value, 98.6, for which the z-test is the better choice; the t-test is more useful when we compare the means of two sample distributions and check whether there is a difference between them. The p-value is 0.000000049, which is less than the usual significance level of 0.05, so we can reject the null hypothesis and say that the population mean is not 98.6. Trying the t-test: since we are comparing the mean to a reference number, the z-score and t-score are calculated the same way and therefore have the same value; however, the p-value differs slightly between the two.
#Question 4: #For a 95% Confidence Interval the Confidence interval can be computed as: variance_=np.std(t)/np.sqrt(no_of_samples) mean_=np.mean(t) confidence_interval = stats.norm.interval(0.95, loc=mean_, scale=variance_) print("The Confidence Interval Lies between %F and %F"%(confidence_interval[0],confidence_interval[1]))
Human_Temp.ipynb
SATHVIKRAJU/Inferential_Statistics
mit
Any temperature outside this range should be considered abnormal. Question 5: Here we use the t-test statistic because we want to compare the means of the two groups involved, the male group and the female group, and a t-test is better suited for that.
temp_male=df.temperature[df.gender=='M'] female_temp=df.temperature[df.gender=='F'] ttest_ind(temp_male,female_temp)
Human_Temp.ipynb
SATHVIKRAJU/Inferential_Statistics
mit
2 Basic usage
import requests cs_url = 'http://httpbin.org' r = requests.get("%s/%s" % (cs_url, 'get')) r = requests.post("%s/%s" % (cs_url, 'post')) r = requests.put("%s/%s" % (cs_url, 'put')) r = requests.delete("%s/%s" % (cs_url, 'delete')) r = requests.patch("%s/%s" % (cs_url, 'patch')) r = requests.options("%s/%s" % (cs_url, 'get'))
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
3 Passing URL parameters https://encrypted.google.com/search?q=hello <protocol>://<domain>/<path>?<key1>=<value1>&<key2>=<value2> The HTTP methods provided by the requests library all accept a parameter named params. This parameter takes a Python dictionary and automatically formats it into the format above.
import requests cs_url = 'https://www.so.com/s' param = {'ie':'utf-8','q':'query'} r = requests.get(cs_url,params = param) print r.url
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
4 Setting a timeout The timeout in requests is specified in seconds. For example, adding the parameter timeout = 5 to a request sets the timeout to 5 seconds.
import requests cs_url = 'https://www.zhihu.com' r = requests.get(cs_url,timeout=100)
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
5 Request headers
import requests cs_url = 'http://httpbin.org/get' r = requests.get (cs_url) print r.content
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Usually we care most about the User-Agent and Accept-Encoding fields. If we want to modify these two items in the HTTP headers, we only need to pass a suitable dictionary to the headers argument.
import requests my_headers = {'User-Agent' : 'From Liam Huang', 'Accept-Encoding' : 'gzip'} cs_url = 'http://httpbin.org/get' r = requests.get (cs_url, headers = my_headers) print r.content
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
6 Response headers
import requests cs_url = 'http://httpbin.org/get' r = requests.get (cs_url) print r.headers
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
7 Response content Bandwidth on the internet has long been limited, so data transmitted over the network is very often compressed. When the response to a request sent via requests is compressed with gzip or deflate, requests automatically decompresses it for us. We can use Response.content to get the response content as bytes.
import requests cs_url = 'https://www.zhihu.com' r = requests.get (cs_url) if r.status_code == requests.codes.ok: print r.content
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
If the response content is not text but binary data (an image, for example), it needs to be decoded accordingly.
import requests from PIL import Image from StringIO import StringIO cs_url = 'http://liam0205.me/uploads/avatar/avatar-2.jpg' r = requests.get (cs_url) if r.status_code == requests.codes.ok: Image.open(StringIO(r.content)).show()
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Decoding in text mode
import requests cs_url = 'https://www.zhihu.com' r = requests.get (cs_url,auth=('[email protected]','gaofengcumt')) if r.status_code == requests.codes.ok: print r.text else: print 'bad request'
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
8 Deserializing JSON data
import requests cs_url = 'http://ip.taobao.com/service/getIpInfo.php' my_param = {'ip':'8.8.8.8'} r = requests.get(cs_url, params = my_param) print r.json()['data']['country'].encode('utf-8')
python-statatics-tutorial/advance-theme/Request.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Data preprocessing The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string. First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
from string import punctuation all_text = ''.join([c for c in reviews if c not in punctuation]) reviews = all_text.split('\n') all_text = ' '.join(reviews) words = all_text.split() all_text[:2000] words[:100]
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0. Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
# Create your dictionary that maps vocab words to integers here vocab_to_int = {word: idx+1 for (idx, word) in enumerate(set(words))} print("Vocab to int") print("len words: ", len(set(words))) print("len vocab: ", len(vocab_to_int)) print("Sample: ", vocab_to_int['in']) # Convert the reviews to integers, same shape as reviews list, but with integers reviews_ints = [] for review in reviews: word_ints = [vocab_to_int[word] for word in review.split()] reviews_ints.append(word_ints) print() print("Reviews ints") print("Review length: ", len(reviews)) print("Length: ", len(reviews_ints)) print("Sample: ", reviews_ints[0])
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively.
# Convert labels to 1s and 0s for 'positive' and 'negative' labels = np.array([0 if a == "negative" else 1 for a in labels_.split()]) print(len(labels)) print(labels[:100]) print(labels_[:100])
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
If you built labels correctly, you should see the next output.
from collections import Counter review_lens = Counter([len(x) for x in reviews_ints]) print("Zero-length reviews: {}".format(review_lens[0])) print("Maximum review length: {}".format(max(review_lens)))
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words. Exercise: First, remove the review with zero length from the reviews_ints list.
# Filter out that review with 0 length # for i, review in enumerate(reviews_ints): # if len(review) == 0: # np.delete(reviews_ints, i) # break reviews_ints = [r for r in reviews_ints if len(r) > 0] print("Reviews ints len: ", len(reviews_ints)) print("Labels len: ", len(labels))
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector. This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
seq_len = 200 features = [] for review in reviews_ints: cut = review[:seq_len] feature = ([0] * (seq_len - len(cut))) + cut features.append(feature) features = np.array(features)
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
If you build features correctly, it should look like that cell output below.
features[:10,:100]
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.2) train_x = x_train train_y = y_train val_x = x_test[:len(x_test)//2] val_y = y_test[:len(y_test)//2] test_x = x_test[len(x_test)//2:] test_y = y_test[len(y_test)//2:] print("\t\t\tFeature Shapes:") print("Train set: \t\t{}".format(train_x.shape), "\nValidation set: \t{}".format(val_x.shape), "\nTest set: \t\t{}".format(test_x.shape))
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like: Feature Shapes: Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200) Build the graph Here, we'll build the graph. First up, defining the hyperparameters. lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc. lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting. batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory. learning_rate: Learning rate
lstm_size = 256 lstm_layers = 1 batch_size = 500 learning_rate = 0.001
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
For the network itself, we'll be passing in our 200-element-long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
n_words = len(vocab_to_int) + 1 # Adding 1 because we use 0's for padding, dictionary started at 1 # Create the graph object graph = tf.Graph() # Add nodes to the graph with graph.as_default(): inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs") labels_ = tf.placeholder(tf.int32, [None, None], name="labels") keep_prob = tf.placeholder(tf.float32, name="keep_prob")
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode that many words. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights. Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, with 200-step inputs and an embedding layer with 300 units, the function will return a tensor with size [batch_size, 200, 300].
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300 

with graph.as_default():
    # Embedding matrix initialized uniformly in [-1, 1)
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
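As an optional, hedged sanity check (assuming the graph cells above have been run), you can print the static shapes TensorFlow infers for the embedding matrix and the looked-up vectors; the ? dimensions come from the [None, None] inputs_ placeholder.
with graph.as_default():
    print(embedding.get_shape())   # (n_words, 300)
    print(embed.get_shape())       # (?, ?, 300) -- i.e. [batch_size, seq_len, embed_size]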
LSTM cell <img src="assets/network_diagram.png" width=400px> Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph. To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation: tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;) you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like lstm = tf.contrib.rnn.BasicLSTMCell(num_units) to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell: cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers) Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list. So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell. Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell. Here is a tutorial on building RNNs that will help you out.
with graph.as_default(): # Your basic LSTM cell lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # Add dropout to the cell drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # Stack up multiple LSTM layers, for deep learning cell = tf.contrib.rnn.MultiRNNCell([drop]*lstm_layers) # Getting an initial state of all zeros initial_state = cell.zero_state(batch_size, tf.float32)
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
RNN forward pass <img src="assets/network_diagram.png" width=400px> Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network. outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state) Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer. Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
with graph.as_default(): outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
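Again as an optional, hedged sanity check (assuming the cells above have been run), the static output shape should line up with the hyperparameters chosen earlier.
with graph.as_default():
    print(outputs.get_shape())   # (?, ?, 256) -- i.e. [batch_size, seq_len, lstm_size]
# final_state holds one LSTMStateTuple(c, h) per LSTM layer,
# each part of shape [batch_size, lstm_size], i.e. (500, 256) here.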
Output We only care about the final output, so we'll be using that as our sentiment prediction. We need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
with graph.as_default(): predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid) cost = tf.losses.mean_squared_error(labels_, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Validation accuracy Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
with graph.as_default(): correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
Batching This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
def get_batches(x, y, batch_size=100): n_batches = len(x)//batch_size x, y = x[:n_batches*batch_size], y[:n_batches*batch_size] for ii in range(0, len(x), batch_size): yield x[ii:ii+batch_size], y[ii:ii+batch_size]
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
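A small, hedged usage check of the generator (assuming train_x and train_y from the split above; x_batch and y_batch are throwaway names): grab one batch and confirm its shape.
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size=500))
print(x_batch.shape, y_batch.shape)   # expected: roughly (500, 200) and (500,)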
Training Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
epochs = 10 with graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=graph) as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 0.5, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_x, val_y, batch_size): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/sentiment.ckpt")
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
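If the checkpoints directory mentioned above doesn't exist yet, one simple way to create it before running the training cell (a small optional sketch, not part of the original notebook):
import os
os.makedirs("checkpoints", exist_ok=True)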
Testing
test_acc = [] with tf.Session(graph=graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1): feed = {inputs_: x, labels_: y[:, None], keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
sentiment-rnn/Sentiment_RNN.ipynb
msanterre/deep_learning
mit
2 - Overview of the Problem set Problem Statement: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px). You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat. Let's get more familiar with the dataset. Load the data by running the following code.
# Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing). Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
# Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. Exercise: Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image) Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_y.shape[1]
m_test = test_set_y.shape[1]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output for m_train, m_test and num_px: <table style="width:15%"> <tr> <td>**m_train**</td> <td> 209 </td> </tr> <tr> <td>**m_test**</td> <td> 50 </td> </tr> <tr> <td>**num_px**</td> <td> 64 </td> </tr> </table> For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $\times$ num_px $\times$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns. Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $\times$ num_px $\times$ 3, 1). A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b $\times$ c $\times$ d, a) is to use: `X_flatten = X.reshape(X.shape[0], -1).T   # X.T is the transpose of X`
# Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table style="width:35%"> <tr> <td>**train_set_x_flatten shape**</td> <td> (12288, 209)</td> </tr> <tr> <td>**train_set_y shape**</td> <td>(1, 209)</td> </tr> <tr> <td>**test_set_x_flatten shape**</td> <td>(12288, 50)</td> </tr> <tr> <td>**test_set_y shape**</td> <td>(1, 50)</td> </tr> <tr> <td>**sanity check after reshaping**</td> <td>[17 31 56 22 33]</td> </tr> </table> To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, so the pixel value is actually a vector of three numbers ranging from 0 to 255. One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler, more convenient, and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset.
train_set_x = train_set_x_flatten/255. test_set_x = test_set_x_flatten/255.
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
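For completeness, here is a hedged sketch of the full center-and-standardize alternative described above (scalar mean and standard deviation over the whole training array); dividing by 255 is what the rest of the notebook actually uses, and the names mu, sigma, train_set_x_std, test_set_x_std are just illustrative.
mu = train_set_x_flatten.mean()
sigma = train_set_x_flatten.std()

train_set_x_std = (train_set_x_flatten - mu) / sigma
test_set_x_std = (test_set_x_flatten - mu) / sigma   # reuse the training statistics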
<font color='blue'> What you need to remember: Common steps for pre-processing a new dataset are: - Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) - Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) - "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images. You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network! <img src="images/LogReg_kiank.png" style="width:650px;height:400px;"> Mathematical expression of the algorithm: For one example $x^{(i)}$: $$z^{(i)} = w^T x^{(i)} + b \tag{1}$$ $$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$ The cost is then computed by summing over all training examples: $$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$ Key steps: In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are: 1. Define the model structure (such as number of input features) 2. Initialize the model's parameters 3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent) You often build 1-3 separately and integrate them into one function we call model(). 4.1 - Helper functions Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
# GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1.0 + np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
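A quick, hedged numeric check of loss formula (3) using plain numpy (the values are purely illustrative, and a and loss are throwaway names): for y = 1 and z = 0.5, the activation is sigmoid(0.5) ≈ 0.622 and the loss is -log(0.622) ≈ 0.474.
import numpy as np

a = 1.0 / (1.0 + np.exp(-0.5))   # sigmoid(0.5) ≈ 0.622
loss = -np.log(a)                # loss (3) with y = 1
print(round(float(a), 3), round(float(loss), 3))   # 0.622 0.474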
Expected Output: <table> <tr> <td>**sigmoid([0, 2])**</td> <td> [ 0.5 0.88079708]</td> </tr> </table> 4.2 - Initializing parameters Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
# GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (≈ 1 line of code) w = np.zeros((dim, 1)) b = 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table style="width:15%"> <tr> <td> ** w ** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> ** b ** </td> <td> 0 </td> </tr> </table> For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagation Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters. Exercise: Implement a function propagate() that computes the cost function and its gradient. Hints: Forward Propagation: - You get X - You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$ - You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right)$ Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$ $$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
# GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. np.log(), np.dot() """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) # compute activation cost = - 1.0 / m * np.sum(Y * np.log(A) + (1.0 - Y) * np.log(1-A)) # compute cost ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = 1.0 / m * np.dot(X, (A - Y).T) db = 1.0 / m * np.sum(A - Y) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table style="width:50%"> <tr> <td> ** dw ** </td> <td> [[ 0.99993216] [ 1.99980262]]</td> </tr> <tr> <td> ** db ** </td> <td> 0.499935230625 </td> </tr> <tr> <td> ** cost ** </td> <td> 6.000064773192205</td> </tr> </table> d) Optimization You have initialized your parameters. You are also able to compute a cost function and its gradient. Now, you want to update the parameters using gradient descent. Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
# GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. """ costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate * dw b = b - learning_rate * db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training examples if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"]))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table style="width:40%"> <tr> <td> **w** </td> <td>[[ 0.1124579 ] [ 0.23106775]] </td> </tr> <tr> <td> **b** </td> <td> 1.55930492484 </td> </tr> <tr> <td> **dw** </td> <td> [[ 0.90158428] [ 1.76250842]] </td> </tr> <tr> <td> **db** </td> <td> 0.430462071679 </td> </tr> </table> Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions: Calculate $\hat{Y} = A = \sigma(w^T X + b)$ Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
# GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1,m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T, X) + b) ### END CODE HERE ### for i in range(A.shape[1]): # Convert probabilities A[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) Y_prediction[0, i] = A[0,i] > 0.5 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction print ("predictions = " + str(predict(w, b, X)))
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table style="width:30%"> <tr> <td> **predictions** </td> <td> [[ 1. 1.]] </td> </tr> </table> <font color='blue'> What to remember: You've implemented several functions that: - Initialize (w,b) - Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent - Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting all the building blocks (the functions implemented in the previous parts) together, in the right order. Exercise: Implement the model function. Use the following notation: - Y_prediction_test for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
# GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. """ ### START CODE HERE ### # initialize parameters with zeros (≈ 1 line of code) w, b = initialize_with_zeros(X_train.shape[0]) # Gradient descent (≈ 1 line of code) parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost = print_cost) # Retrieve parameters w and b from dictionary "parameters" w = parameters["w"] b = parameters["b"] # Predict test/train set examples (≈ 2 lines of code) Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Run the following cell to train your model.
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table style="width:40%"> <tr> <td> **Train Accuracy** </td> <td> 99.04306220095694 % </td> </tr> <tr> <td>**Test Accuracy** </td> <td> 70.0 % </td> </tr> </table> Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%, which is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week! Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
# Cast the prediction to int before indexing into classes
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") +  "\" picture.")
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
liufuyang/deep_learning_tutorial
mit