Dataset schema: markdown (string, lengths 0 to 37k), code (string, lengths 1 to 33.3k), path (string, lengths 8 to 215), repo_name (string, lengths 6 to 77), license (15 classes)
Nicely formatted results. IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a preview of what IPython notebooks can do.
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd

df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df

# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt

xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
ipython_notebook_tutorial.ipynb
tsavo-sevenoaks/garth
gpl-3.0
Creating cells

To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created. To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.

Some Markdown data

Re-running cells

If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
class_name = "BRUCE Woodley Intro to Data Analysis" message = class_name + " is awesome!" message
ipython_notebook_tutorial.ipynb
tsavo-sevenoaks/garth
gpl-3.0
Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second. You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third. You should have seen the output change to "your name is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
import unicodecsv

with open("enrollments.csv", "rb") as filein:
    line = unicodecsv.DictReader(filein)
    print("type(line) \t", type(line))
    enrollments = list(line)
print(enrollments[0])

import unicodecsv

with open("daily_engagement.csv", "rb") as filein:
    line = unicodecsv.DictReader(filein)
    #print("type(line) \t", type(line))
    daily_engagement = list(line)
print(daily_engagement[0])

import unicodecsv

with open("project_submissions.csv", "rb") as filein:
    line = unicodecsv.DictReader(filein)
    project_submissions_fieldnames = line.fieldnames
    #print("type(line) \t", type(line))
    print("project_submissions_fieldnames = ", str(project_submissions_fieldnames))
    project_submissions = list(line)
print(project_submissions[0])
ipython_notebook_tutorial.ipynb
tsavo-sevenoaks/garth
gpl-3.0
Fixing Data Types.
# Fixing Data Types.
# Hit shift + enter or use the run button to run this cell and see the results
from datetime import datetime as dt

# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
    if date == '':
        return None
    else:
        return dt.strptime(date, '%Y-%m-%d')

# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
    if i == '':
        return None
    else:
        return int(i)

# Clean up the data types in the enrollments table
for enrollment in enrollments:
    enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
    enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
    enrollment['is_canceled'] = enrollment['is_canceled'] == 'True'
    enrollment['is_udacity'] = enrollment['is_udacity'] == 'True'
    enrollment['join_date'] = parse_date(enrollment['join_date'])

# Note: this print only works after the loop above has bound `enrollment`
print(" type(enrollment) ", type(enrollment))

enrollments[0]

# enrollments
# daily_engagement
# project_submissions
# these are all a "List of Dictionaries"
import sys
import os
import string
import time

#print(type(enrollments), len(enrollments))
enrollments_set = set()
for line in enrollments:
    enrollments_set.add(line['account_key'])
print("enrollments", type(enrollments), " row total: ", len(enrollments), " total students: ", len(enrollments_set))

#print(type(daily_engagement), len(daily_engagement))
daily_engagement_set = set()
for line in daily_engagement:
    daily_engagement_set.add(line['acct'])
print("daily_engagement", type(daily_engagement), " row total: ", len(daily_engagement), " total students: ", len(daily_engagement_set))

#print(type(project_submissions), len(project_submissions))
project_submissions_set = set()
for line in project_submissions:
    project_submissions_set.add(line['account_key'])
print("project_submissions", type(project_submissions), " row total: ", len(project_submissions), " total students: ", len(project_submissions_set))

print(" ")
print('REM: these are all a "List of Dictionaries"...!')
ipython_notebook_tutorial.ipynb
tsavo-sevenoaks/garth
gpl-3.0
Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$: $$ y = m x + b + N(0,\sigma^2) $$ Be careful about the sigma=0.0 case.
import numpy as np

def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]

    Parameters
    ----------
    m : float
        The slope of the line.
    b : float
        The y-intercept of the line.
    sigma : float
        The standard deviation of the y direction normal distribution noise.
    size : int
        The number of points to create for the line.

    Returns
    -------
    x : array of floats
        The array of x values for the line with `size` points.
    y : array of floats
        The array of y values for the lines with `size` points.
    """
    # YOUR CODE HERE
    # http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randn.html#numpy.random.randn
    x = np.linspace(-1.0, 1.0, num=size)
    y = (m * x) + b + (sigma * np.random.randn(size))
    return x, y

print(random_line(2, 3, 2, 20))

m = 0.0; b = 1.0; sigma = 0.0; size = 3
x, y = random_line(m, b, sigma, size)
assert len(x) == len(y) == size
assert list(x) == [-1.0, 0.0, 1.0]
assert list(y) == [1.0, 1.0, 1.0]

sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y - m*x - b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y - m*x - b), sigma, rtol=0.1, atol=0.1)
assignments/assignment05/InteractEx04.ipynb
SJSlavin/phys202-2015-work
mit
Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function: Make the marker color settable through a color keyword argument with a default of red. Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$. Customize your plot to make it effective and beautiful.
import matplotlib.pyplot as plt

def ticks_out(ax):
    """Move the ticks to the outside of the box."""
    ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
    ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')

def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and size points."""
    x, y = random_line(m, b, sigma, size)
    ax = plt.subplot(111)
    plt.scatter(x, y, color=color)
    ticks_out(ax)
    plt.xlim((-1.1, 1.1))
    plt.ylim((-10.0, 10.0))

plot_random_line(5.0, -1.0, 2.0, 50)

assert True  # use this cell to grade the plot_random_line function
assignments/assignment05/InteractEx04.ipynb
SJSlavin/phys202-2015-work
mit
Use interact to explore the plot_random_line function using: m: a float valued slider from -10.0 to 10.0 with steps of 0.1. b: a float valued slider from -5.0 to 5.0 with steps of 0.1. sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01. size: an int valued slider from 10 to 100 with steps of 10. color: a dropdown with options for red, green and blue.
# YOUR CODE HERE interact(plot_random_line, m=(-10.0, 10.0, 0.1), b=(-5.0, 5.0, 0.1), sigma = (0.0, 5.0, 0.01), size = (10, 100, 10), color = ["green", "red", "blue"]) #### assert True # use this cell to grade the plot_random_line interact
assignments/assignment05/InteractEx04.ipynb
SJSlavin/phys202-2015-work
mit
Suppose this curve represents a function whose maximum we are looking for, and suppose the x axis represents the parameters the function depends on.
x = np.linspace(0,50,500) y = np.sin(x) * np.sin(x/17) plt.figure(None, figsize=(10,5)) plt.ylim(-1.1, 1.1) plt.plot(x,y)
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
Suppose that, using some algorithm, we have found a high point, but one that corresponds to a local optimum, for example:
plt.figure(None, figsize=(10,5)) plt.ylim(-1.1, 1.1) plt.plot(x,y) plt.plot([21,21],[0,1],'r--') plt.plot(21, 0.75, 'ko')
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
The Exploration-Exploitation dilemma refers to two opposing forces that we need to balance carefully when using these kinds of algorithms. Exploration means searching for solutions far away from what we already have, widening our search. It lets us escape local maxima and find the global one, and it lets us find atypical, novel solutions to complicated problems. Too much exploration keeps us from holding on to our solutions and refining them, and the algorithm ends up jumping from one place to another without getting anywhere. Exploitation refers to the algorithm's ability to keep the good solutions it has found and refine them, searching in nearby neighborhoods. It lets us find maxima of the function and hold on to them. Too much exploitation locks us into local maxima and keeps us from finding the global one.
# EXAMPLE OF A RESULT WITH TOO MUCH EXPLORATION: NOTHING IS FOUND
x2 = np.array([7, 8, 12, 28, 31, 35, 40, 49])
y2 = np.sin(x2) * np.sin(x2/17)

plt.figure(None, figsize=(10,5))
plt.ylim(-1.1, 1.1)
plt.plot(x, y)
plt.plot([21, 21], [0, 1], 'r--')
plt.plot(21, 0.75, 'ko')
plt.plot(x2, y2, 'go')

# EXAMPLE OF A RESULT WITH TOO MUCH EXPLOITATION: ONLY THE LOCAL MAXIMUM IS REACHED
x2 = np.linspace(20.2, 21, 10)
y2 = np.sin(x2) * np.sin(x2/17)

plt.figure(None, figsize=(10,5))
plt.ylim(-1.1, 1.1)
plt.plot(x, y)
plt.plot([21, 21], [0, 1], 'r--')
plt.plot(21, 0.75, 'ko')
plt.plot(x2, y2, 'go')
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
These strategies are tuned through all of an algorithm's parameters, but the parameter that most clearly influences this balance is probably the mutation rate in genetic algorithms: lowering the mutation rate strengthens exploitation, while raising it strengthens exploration (a minimal sketch of this knob follows). Example: the maze
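Before turning to the maze example, here is a minimal, hypothetical sketch (plain NumPy, not the workshop's own code) of how the mutation rate acts as the exploration knob just described: each gene is re-randomized with probability mutation_rate, so a low rate keeps offspring close to their parent while a high rate scatters them across the search space.

import numpy as np

def mutate(genome, mutation_rate, n_moves=4, rng=np.random):
    """Flip each gene to a random move with probability mutation_rate."""
    genome = np.array(genome)
    mask = rng.random_sample(genome.shape) < mutation_rate
    genome[mask] = rng.randint(0, n_moves, size=mask.sum())
    return genome

parent = np.zeros(20, dtype=int)           # toy genome: one move per maze square
print(mutate(parent, mutation_rate=0.05))  # mostly unchanged -> exploitation
print(mutate(parent, mutation_rate=0.8))   # almost random    -> exploration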
# We will use the package from the maze exercise
import Ejercicios.Laberinto.laberinto.laberinto as lab
ag = lab.ag
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
Suppose we have the following maze, which we enter from the left and want to solve:
mapa1 = lab.Map() mapa1.draw_tablero()
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
The exercise describes the process in more detail; here let's simply call the genetic algorithm that solves it:
mapa1 = lab.Map() lab.avanzar(mapa1) lab.draw_all(mapa1)
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
Most likely you obtained either a solution or a path closed in a loop. You can run the cell above several times to get a rough idea of how often each situation appears. But why do these loops appear? Let's look at what a solution looks like: each square contains an arrow indicating the next square to cross into. This is what the genome of each path describes. If the square points at a wall, the program will still try to cross, into a different, random square.
mapa1.list_caminos[0].draw_directions() mapa1.list_caminos[0].draw_path(0.7)
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
The answer to why loops form lies in how the fitness function, the score of each path, is defined: 50 squares are traversed, trying to follow the path the arrows dictate. Points are lost every time the walker hits a wall or returns to the previous square (for example, when two arrows point at each other). The further to the right the walker ends up, the better the score. A large bonus is awarded for reaching the exit. In this exercise, a loop is a local optimum: since nothing is hit while going around it, its score is better than that of slightly different paths, which would end up hitting the walls several times. However, it is not the solution we are looking for. We have to encourage exploration away from these local maxima. One way to do this is with pheromones, similar to what we did with the ants. Suppose every person who walks through the maze leaves, in every square they pass through, an unpleasant smell that makes later walkers try to avoid that path. The way to implement this in the algorithm is to add a pheromone trail and then take the amount of pheromone encountered into account when computing the score (a rough sketch of such a penalty is given below). How do you think that would affect the loops? Let's try it!
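A rough, hypothetical sketch of how such a pheromone penalty could enter the score. This is the idea only, not the lab package's actual fitness function, and all names here are illustrative.

def fitness(path_cells, wall_hits, reached_exit, pheromone_map, pheromone_weight=1.0):
    """Toy fitness: reward progress to the right, penalise wall hits and smelly (already visited) squares."""
    rightmost_column = max(col for (row, col) in path_cells)
    score = rightmost_column - 2 * wall_hits
    if reached_exit:
        score += 100  # big bonus for escaping the maze
    # Pheromone penalty: the more previous walkers passed through these squares, the worse the score.
    score -= pheromone_weight * sum(pheromone_map.get(cell, 0) for cell in path_cells)
    return score

# The same loop-free path scores worse once other walkers have saturated it with pheromone.
path = [(2, 0), (2, 1), (2, 2), (2, 3)]
print(fitness(path, wall_hits=0, reached_exit=False, pheromone_map={}))
print(fitness(path, wall_hits=0, reached_exit=False, pheromone_map={(2, 1): 3, (2, 2): 3}))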
mapa1 = lab.Map(veneno=1) lab.avanzar(mapa1) lab.draw_all(mapa1)
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
Try running it several times. Do you notice whether the number of loops has changed? Finally, let's see what happens if we push exploration too far:
mapa1 = lab.Map(veneno=100) lab.avanzar(mapa1) lab.draw_all(mapa1)
Teoria III - Exploracion-Explotacion.ipynb
AeroPython/Taller-PyConEs-2015
mit
MaxPooling1D [pooling.MaxPooling1D.0] input 6x6, pool_size=2, strides=None, padding='valid'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=2, strides=None, padding='valid') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(250) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
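As a sanity check on what the fixture above encodes: 1D max pooling with pool_size=2, strides=None (which defaults to the pool size) and padding='valid' takes the per-feature maximum over non-overlapping windows of length 2, so a (6, 6) input becomes (3, 6). A plain-NumPy sketch of that computation (an illustration, not the Keras implementation):

import numpy as np

def max_pool_1d_valid(x, pool_size=2, strides=None):
    """Max over sliding windows along the first (steps) axis; each feature column is pooled independently."""
    strides = pool_size if strides is None else strides
    steps = (x.shape[0] - pool_size) // strides + 1
    return np.stack([x[i * strides:i * strides + pool_size].max(axis=0) for i in range(steps)])

x = np.arange(36, dtype=float).reshape(6, 6)
print(max_pool_1d_valid(x, pool_size=2).shape)  # (3, 6), matching the output shape reported by the cell above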
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.1] input 6x6, pool_size=2, strides=1, padding='valid'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=2, strides=1, padding='valid') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(251) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.2] input 6x6, pool_size=2, strides=3, padding='valid'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=2, strides=3, padding='valid') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(252) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.3] input 6x6, pool_size=2, strides=None, padding='same'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=2, strides=None, padding='same') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(253) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.3'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
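With padding='same', Keras pads the input so that the output length depends only on the stride: output_len = ceil(input_len / stride). A tiny helper (an illustration of that convention, not Keras code) predicting the shapes for this case and the two stride variants that follow:

import math

def maxpool1d_same_length(input_len, strides=None, pool_size=2):
    """Output length of MaxPooling1D with padding='same'; pool_size does not affect the length, only the stride does."""
    strides = pool_size if strides is None else strides
    return int(math.ceil(input_len / float(strides)))

print(maxpool1d_same_length(6))             # 3  (strides defaults to pool_size=2, as in case 3)
print(maxpool1d_same_length(6, strides=1))  # 6  (case 4)
print(maxpool1d_same_length(6, strides=3))  # 2  (case 5)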
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.4] input 6x6, pool_size=2, strides=1, padding='same'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=2, strides=1, padding='same') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(254) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.4'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.5] input 6x6, pool_size=2, strides=3, padding='same'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=2, strides=3, padding='same') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(255) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.5'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.6] input 6x6, pool_size=3, strides=None, padding='valid'
data_in_shape = (6, 6) L = MaxPooling1D(pool_size=3, strides=None, padding='valid') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(256) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.6'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.7] input 7x7, pool_size=3, strides=1, padding='same'
data_in_shape = (7, 7) L = MaxPooling1D(pool_size=3, strides=1, padding='same') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(257) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.7'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
[pooling.MaxPooling1D.8] input 7x7, pool_size=3, strides=3, padding='same'
data_in_shape = (7, 7) L = MaxPooling1D(pool_size=3, strides=3, padding='same') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(258) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.MaxPooling1D.8'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
export for Keras.js tests
import os filename = '../../../test/data/layers/pooling/MaxPooling1D.json' if not os.path.exists(os.path.dirname(filename)): os.makedirs(os.path.dirname(filename)) with open(filename, 'w') as f: json.dump(DATA, f) print(json.dumps(DATA))
notebooks/layers/pooling/MaxPooling1D.ipynb
transcranial/keras-js
mit
Step 0 - hyperparams. vocab_size (all the potential words you could have, in the classification-for-translation case) and max sequence length are the SAME thing here. Decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but in our case there does not really seem to be such a relationship; we can experiment and find out later, it is not a priority right now.
input_len = 60 target_len = 30 batch_size = 50 with_EOS = False csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Actual Run
data_path = '../../../../Dropbox/data' ph_data_path = data_path + '/price_history' assert path.isdir(ph_data_path) npz_full = ph_data_path + '/price_history_per_mobile_phone.npz' #dataset_gen = PriceHistoryDatasetPerMobilePhone(random_state=random_state) dic = np.load(npz_full) dic.keys()[:10]
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Arima
parameters = OrderedDict([ ('p_auto_regression_order', range(6)), #0-5 ('d_integration_level', range(3)), #0-2 ('q_moving_average', range(6)), #0-5 ]) cart = cartesian_coord(*parameters.values()) cart.shape cur_key = dic.keys()[0] cur_key cur_sku = dic[cur_key][()] cur_sku.keys() train_mat = cur_sku['train'] train_mat.shape target_len inputs = train_mat[:, :-target_len] inputs.shape targets = train_mat[:, -target_len:] targets.shape easy_mode = False score_dic_filepath = data_path + "/arima/scoredic_easy_mode_{}_{}.npy".format(easy_mode, cur_key) path.abspath(score_dic_filepath) %%time with warnings.catch_warnings(): warnings.filterwarnings("ignore") scoredic = ArimaCV.cross_validate(inputs=inputs, targets=targets, cartesian_combinations=cart, score_dic_filepath=score_dic_filepath, easy_mode=easy_mode) #4h 4min 51s / 108 cases => ~= 136 seconds per case ! arr = np.array(list(scoredic.iteritems())) arr.shape #np.isnan() filtered_arr = arr[ np.logical_not(arr[:, 1] != arr[:, 1]) ] filtered_arr.shape plt.plot(filtered_arr[:, 1]) minarg = np.argmin(filtered_arr[:, 1]) minarg best_params = filtered_arr[minarg, 0] best_params test_mat = cur_sku['test'] test_ins = test_mat[:-target_len] test_ins.shape test_tars = test_mat[-target_len:] test_tars.shape test_ins_vals = test_ins.values.reshape(1, -1) test_ins_vals.shape test_tars_vals = test_tars.values.reshape(1, -1) test_tars_vals.shape
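The grid search above relies on the author's ArimaCV helper to score every (p, d, q) combination in cart. Purely as an illustration of what evaluating a single candidate order involves, here is a hedged sketch using statsmodels (the function name evaluate_order and the MSE scoring are my own choices, assuming a recent statsmodels, not the notebook's code):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels >= 0.12

def evaluate_order(series, order, horizon):
    """Fit ARIMA(order) on all but the last `horizon` points and return the forecast MSE."""
    train, test = series[:-horizon], series[-horizon:]
    fitted = ARIMA(train, order=order).fit()
    forecast = fitted.forecast(steps=horizon)
    return float(np.mean((np.asarray(forecast) - np.asarray(test)) ** 2))

# Example usage on a toy random-walk series with one of the grid combinations:
rng = np.random.RandomState(0)
toy = np.cumsum(rng.randn(200))
print(evaluate_order(toy, order=(1, 1, 0), horizon=30))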
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Testing with easy mode on
%%time with warnings.catch_warnings(): warnings.filterwarnings("ignore") ae = ArimaEstimator(p_auto_regression_order=best_params[0], d_integration_level=best_params[1], q_moving_average=best_params[2], easy_mode=True) score = ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals) score plt.figure(figsize=(15,7)) plt.plot(ae.preds.flatten(), label='preds') test_tars.plot(label='real') plt.legend() plt.show()
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Testing with easy mode off
%%time with warnings.catch_warnings(): warnings.filterwarnings("ignore") ae = ArimaEstimator(p_auto_regression_order=best_params[0], d_integration_level=best_params[1], q_moving_average=best_params[2], easy_mode=False) score = ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals) score plt.figure(figsize=(15,7)) plt.plot(ae.preds.flatten(), label='preds') test_tars.plot(label='real') plt.legend() plt.show()
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Conclusion: if you train in easy mode, what you get at the end is a model that only cares about the previous value in order to make its predictions. This makes things much easier for everybody, but in reality it might not give us any advantage. Trying
args = np.argsort(filtered_arr[:, 1]) args filtered_arr[args[:10], 0] %%time with warnings.catch_warnings(): warnings.filterwarnings("ignore") ae = ArimaEstimator(p_auto_regression_order=4, d_integration_level=1, q_moving_average=3, easy_mode=False) print ae.fit(test_ins_vals, test_tars_vals).score(test_ins_vals, test_tars_vals) plt.figure(figsize=(15,7)) plt.plot(ae.preds.flatten(), label='preds') test_tars.plot(label='real') plt.legend() plt.show()
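To make the easy-mode conclusion concrete, here is a small self-contained sketch. It reflects my interpretation of "easy mode" as one-step-ahead evaluation (always conditioning on the true previous value), not the ArimaEstimator implementation itself; it shows why a model that just repeats the previous value looks deceptively good in that setting, yet offers no real advantage when forced to forecast 30 steps recursively.

import numpy as np

rng = np.random.RandomState(0)
series = np.cumsum(rng.randn(300))           # toy random-walk "price history"
history, target = series[:-30], series[-30:]

# "Easy mode": one-step-ahead, always conditioning on the true previous observation.
truth_shifted = np.concatenate([history[-1:], target[:-1]])
easy_preds = truth_shifted                   # naive model: predict the previous true value
easy_mse = np.mean((easy_preds - target) ** 2)

# "Hard mode": recursive 30-step forecast, feeding predictions back in.
# For the naive model this collapses to a flat line at the last known value.
hard_preds = np.full_like(target, history[-1])
hard_mse = np.mean((hard_preds - target) ** 2)

print("one-step-ahead MSE: %s" % easy_mse)       # small: only the previous value is needed
print("recursive 30-step MSE: %s" % hard_mse)    # much larger: no real predictive advantage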
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
All tests
from arima.arima_testing import ArimaTesting best_params, target_len, npz_full %%time keys, scores, preds = ArimaTesting.full_testing(best_params=best_params, target_len=target_len, npz_full=npz_full) # render graphs here score_arr = np.array(scores) np.mean(score_arr[np.logical_not(score_arr != score_arr)])
04_time_series_prediction/.ipynb_checkpoints/30_price_history_dataset_per_mobile_phone-arima-checkpoint.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Test on Mock
main('/path/to/data/directory', 10000)
notebooks/Bootstrap_Analysis.ipynb
cavestruz/StrongCNN
mit
Test on SLACS
main('/path/to/data/directory', 10000)
notebooks/Bootstrap_Analysis.ipynb
cavestruz/StrongCNN
mit
Test on SLACS separating out different bands
main('/path/to/data/directory', 10000, bands=['435', '814'])
notebooks/Bootstrap_Analysis.ipynb
cavestruz/StrongCNN
mit
Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
autoencoder/Simple_Autoencoder_Solution.ipynb
Lstyle1/Deep_learning_projects
mit
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
# Size of the encoding layer (the hidden layer) encoding_dim = 32 image_size = mnist.train.images.shape[1] inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs') targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets') # Output of hidden layer encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu) # Output layer logits logits = tf.layers.dense(encoded, image_size, activation=None) # Sigmoid output from decoded = tf.nn.sigmoid(logits, name='output') loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(0.001).minimize(cost)
autoencoder/Simple_Autoencoder_Solution.ipynb
Lstyle1/Deep_learning_projects
mit
Training
# Create the session sess = tf.Session()
autoencoder/Simple_Autoencoder_Solution.ipynb
Lstyle1/Deep_learning_projects
mit
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) feed = {inputs_: batch[0], targets_: batch[0]} batch_cost, _ = sess.run([cost, opt], feed_dict=feed) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost))
autoencoder/Simple_Autoencoder_Solution.ipynb
Lstyle1/Deep_learning_projects
mit
Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good, except for some blurriness in places.
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close()
autoencoder/Simple_Autoencoder_Solution.ipynb
Lstyle1/Deep_learning_projects
mit
Question 2: WHERE Statements

Write a Python function get_coeffs that returns an array of 7 coefficients. The function should take in two parameters: 1.) species_name and 2.) temp_range, an indicator variable ('low' or 'high') to indicate whether the coefficients should come from the low or high temperature range. The function should use SQL commands and WHERE statements on the table you just created in Question 1 (rather than taking data from the XML directly).

def get_coeffs(species_name, temp_range):
    '''Fill in here'''
    return coeffs

Write a Python function get_species that returns all species that have a temperature range above or below a given value. The function should take in two parameters: 1.) temp and 2.) temp_range, an indicator variable ('low' or 'high'). When temp_range is 'low', we are looking for species with a temperature range lower than the given temperature, and for a 'high' temp_range, we want species with a temperature range higher than the given temperature. This exercise may be useful if different species have different LOW and HIGH ranges. As before, you should accomplish this through SQL queries and WHERE statements.

def get_species(temp, temp_range):
    '''Fill in here'''
    return species
def get_coeffs(species_name, temp_range):
    if temp_range == "low":
        function = '''SELECT coeff_1, coeff_2, coeff_3, coeff_4, coeff_5, coeff_6, coeff_7
                      FROM low WHERE species_name = ?'''
    else:
        function = '''SELECT coeff_1, coeff_2, coeff_3, coeff_4, coeff_5, coeff_6, coeff_7
                      FROM high WHERE species_name = ?'''
    coeffs = cursor.execute(function, (species_name,)).fetchone()  # note the one-element tuple
    return coeffs

get_coeffs("H", "high")

def get_species(temp, temp_range):
    '''Return all species whose range lies below (temp_range='low') or above (temp_range='high') temp.'''
    if temp_range == "low":
        function = '''SELECT species_name FROM low WHERE tlow < ?'''
    else:
        function = '''SELECT species_name FROM high WHERE thigh > ?'''
    species = cursor.execute(function, (temp,)).fetchall()  # bind the temperature, not the flag
    return species

get_species(1, "low")

all_cols = [col[1] for col in cursor.execute("PRAGMA table_info(ALL_TEMPS)")]
query = '''SELECT species_name FROM low WHERE tlow < ?'''
viz_tables(all_cols, (query, (1,)))
homeworks/HW10/HW10.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Question 3: JOIN STATEMENTS

Create a table named ALL_TEMPS that has the following columns: SPECIES_NAME, TEMP_LOW, TEMP_HIGH. This table should be created by joining the tables LOW and HIGH on the value SPECIES_NAME.

Write a Python function get_range that returns the range of temperatures for a given species_name. The range should be computed within the SQL query (i.e. you should subtract within the SELECT statement in the SQL query).

def get_range(species_name):
    '''Fill in here'''
    return range

Note that TEMP_LOW is the lowest temperature in the LOW range and TEMP_HIGH is the highest temperature in the HIGH range.
def get_range(species_name): function = '''SELECT tlow FROM low WHERE species_name=?''' temp_low=cursor.execute(function, (species_name,)).fetchall()[0] function = '''SELECT thigh FROM high WHERE species_name=?''' temp_high=cursor.execute(function,(species_name,)).fetchall()[0] return (temp_low[0],temp_high[0]) get_range("HO2") cursor.execute("DROP TABLE IF EXISTS all_temps") cursor.execute('''CREATE TABLE all_temps( species_name TEXT PRIMARY KEY NOT NULL, temp_low INT, temp_high INT)''') function = '''SELECT species_name FROM low''' species=cursor.execute(function).fetchall() for specie in species: temp_low,temp_high=get_range(specie[0]) print(specie,temp_low,temp_high) cursor.execute('''INSERT INTO all_temps(species_name, temp_low, temp_high) VALUES(?, ?, ?)''',(specie[0],temp_low,temp_high)) cursor.execute("DROP TABLE IF EXISTS ALL_TEMPS") cursor.execute(''' CREATE TABLE ALL_TEMPS AS SELECT HIGH.SPECIES_NAME, HIGH.THIGH AS TEMP_HIGH, LOW.TLOW AS TEMP_LOW FROM HIGH JOIN LOW ON HIGH.SPECIES_NAME = LOW.SPECIES_NAME''') db.commit() all_cols = [col[1] for col in cursor.execute("PRAGMA table_info(ALL_TEMPS)")] query = '''SELECT * FROM ALL_TEMPS''' viz_tables(all_cols, query)
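The exercise asks for the range to be computed inside the SQL SELECT itself; a minimal sketch of that variant, assuming the ALL_TEMPS table and the same cursor created above:

def get_range_sql(species_name):
    """Compute TEMP_HIGH - TEMP_LOW inside the SELECT statement, as the exercise requests."""
    query = '''SELECT TEMP_HIGH - TEMP_LOW FROM ALL_TEMPS WHERE SPECIES_NAME = ?'''
    row = cursor.execute(query, (species_name,)).fetchone()
    return row[0] if row is not None else None

get_range_sql("HO2")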
homeworks/HW10/HW10.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Load amazon review dataset
products = graphlab.SFrame('amazon_baby.gl/')
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Extract word counts and sentiments As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following: Remove punctuation. Remove reviews with "neutral" sentiment (rating 3). Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
def remove_punctuation(text): import string return text.translate(None, string.punctuation) # Remove punctuation. review_clean = products['review'].apply(remove_punctuation) # Count words products['word_count'] = graphlab.text_analytics.count_words(review_clean) # Drop neutral sentiment reviews. products = products[products['rating'] != 3] # Positive sentiment to +1 and negative sentiment to -1 products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Now, let's remember what the dataset looks like by taking a quick peek:
products
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Split data into training and test sets We split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.
train_data, test_data = products.random_split(.8, seed=1)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Train a logistic regression classifier We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results. Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
model = graphlab.logistic_classifier.create(train_data, target='sentiment', features=['word_count'], validation_set=None)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Model Evaluation We will explore the advanced model evaluation concepts that were discussed in the lectures. Accuracy One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}} $$ To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows:
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy'] print "Test Accuracy: %s" % accuracy
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Baseline: Majority class prediction Recall from an earlier assignment that we used the majority class classifier as a baseline (i.e reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points. Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
baseline = len(test_data[test_data['sentiment'] == 1]) / float(len(test_data))  # float() avoids Python 2 integer division
print "Baseline accuracy (majority class classifier): %s" % baseline
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Using accuracy as the evaluation metric, was our logistic regression model better than the baseline (majority class classifier)?

Confusion Matrix

The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the confusion matrix. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:

              +---------------------------------------------+
              |               Predicted label               |
              +----------------------+----------------------+
              |         (+1)         |         (-1)         |
+-------+-----+----------------------+----------------------+
| True  |(+1) | # of true positives  | # of false negatives |
| label +-----+----------------------+----------------------+
|       |(-1) | # of false positives | # of true negatives  |
+-------+-----+----------------------+----------------------+

To print out the confusion matrix for a classifier, use metric='confusion_matrix':
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix'] confusion_matrix
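For intuition about how the four cells of the matrix relate to precision and recall, they can also be counted by hand from labels and predictions; a small pure-Python sketch on toy data (an illustration, not GraphLab):

def confusion_counts(true_labels, predicted_labels):
    """Count TP, FP, TN and FN for labels in {+1, -1}."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if t == +1 and p == +1)
    fp = sum(1 for t, p in pairs if t == -1 and p == +1)
    tn = sum(1 for t, p in pairs if t == -1 and p == -1)
    fn = sum(1 for t, p in pairs if t == +1 and p == -1)
    return tp, fp, tn, fn

tp, fp, tn, fn = confusion_counts([+1, +1, -1, -1, +1], [+1, -1, +1, -1, +1])
print("TP=%d FP=%d FN=%d TN=%d" % (tp, fp, fn, tn))
print("precision = TP/(TP+FP) = %s" % (float(tp) / (tp + fp)))
print("recall    = TP/(TP+FN) = %s" % (float(tp) / (tp + fn)))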
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: How many predicted values in the test set are false positives?
round(1443 / float(26689 + 1443), 2)  # float() avoids Python 2 integer division
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Computing the cost of mistakes Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.) Suppose you know the costs involved in each kind of mistake: 1. \$100 for each false positive. 2. \$1 for each false negative. 3. Correctly classified reviews incur no cost. Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set?
100*1443 + 1*1406
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Precision and Recall You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in: $$ [\text{precision}] = \frac{[\text{# positive data points with positive predictions}]}{\text{[# all data points with positive predictions]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false positives}]} $$ So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher. First, let us compute the precision of the logistic regression classifier on the test_data.
precision = model.evaluate(test_data, metric='precision')['precision'] print "Precision on test data: %s" % precision
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)
round(1 - precision, 2)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz) A complementary metric is recall, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews: $$ [\text{recall}] = \frac{[\text{# positive data points with positive predictions}]}{\text{[# all positive data points]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false negatives}]} $$ Let us compute the recall on the test_data.
recall = model.evaluate(test_data, metric='recall')['recall'] print "Recall on test data: %s" % recall
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?

Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?

Precision-recall tradeoff

In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve.

Varying the threshold

False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold.

Write a function called apply_threshold that accepts two things:
* probabilities (an SArray of probability values)
* threshold (a float between 0 and 1)
The function should return an SArray, where each element is set to +1 or -1 depending on whether the corresponding probability exceeds the threshold.
def apply_threshold(probabilities, threshold): ### YOUR CODE GOES HERE # +1 if >= threshold and -1 otherwise. return probabilities.apply(lambda x: +1 if x >= threshold else -1)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
probabilities = model.predict(test_data, output_type='probability') predictions_with_default_threshold = apply_threshold(probabilities, 0.5) predictions_with_high_threshold = apply_threshold(probabilities, 0.9) print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum() print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9? Exploring the associated precision and recall as the threshold varies By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
# Threshold = 0.5 precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'], predictions_with_default_threshold) recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'], predictions_with_default_threshold) # Threshold = 0.9 precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'], predictions_with_high_threshold) recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'], predictions_with_high_threshold) print "Precision (threshold = 0.5): %s" % precision_with_default_threshold print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold print "Precision (threshold = 0.9): %s" % precision_with_high_threshold print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question (variant 1): Does the precision increase with a higher threshold? Quiz Question (variant 2): Does the recall increase with a higher threshold? Precision-recall curve Now, we will explore various threshold values, compute the precision and recall scores, and then plot the precision-recall curve.
threshold_values = np.linspace(0.5, 1, num=100) print threshold_values
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
For each of the values of threshold, we compute the precision and recall scores.
precision_all = [] recall_all = [] probabilities = model.predict(test_data, output_type='probability') for threshold in threshold_values: predictions = apply_threshold(probabilities, threshold) precision = graphlab.evaluation.precision(test_data['sentiment'], predictions) recall = graphlab.evaluation.recall(test_data['sentiment'], predictions) precision_all.append(precision) recall_all.append(recall)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
import matplotlib.pyplot as plt %matplotlib inline def plot_pr_curve(precision, recall, title): plt.rcParams['figure.figsize'] = 7, 5 plt.locator_params(axis = 'x', nbins = 5) plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F') plt.title(title) plt.xlabel('Precision') plt.ylabel('Recall') plt.rcParams.update({'font.size': 16}) plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
for i, p in enumerate(precision_all): print str(i) + " -> " + str(p) round(threshold_values[67], 3)
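Rather than eyeballing the printed list and hard-coding an index, the answer can also be computed directly; a small sketch assuming the threshold_values and precision_all arrays built above (thresholds are in increasing order, so the first qualifying index gives the smallest threshold):

import numpy as np

qualifying = np.where(np.array(precision_all) >= 0.965)[0]   # indices whose precision is at least 96.5%
smallest_threshold = threshold_values[qualifying[0]]
print("smallest threshold with precision >= 96.5%%: %.3f" % smallest_threshold)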
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)
predictions_with_98_threshold = apply_threshold(probabilities, 0.98) cm = graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions_with_98_threshold) cm
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
This is the number of false negatives (i.e. the number of reviews to look at when not needed) that we have to deal with using this classifier.

Evaluating specific search terms

So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon.

Precision-Recall on all baby related items

From the test set, select all the reviews for all products with the word 'baby' in them.
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Now, let's predict the probability of classifying these reviews as positive:
probabilities = model.predict(baby_reviews, output_type='probability')
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Let's plot the precision-recall curve for the baby_reviews dataset. First, let's consider the following threshold_values ranging from 0.5 to 1:
threshold_values = np.linspace(0.5, 1, num=100)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
precision_all = [] recall_all = [] for threshold in threshold_values: # Make predictions. Use the `apply_threshold` function ## YOUR CODE HERE predictions = apply_threshold(probabilities, threshold) # Calculate the precision. # YOUR CODE HERE precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions) # YOUR CODE HERE recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions) # Append the precision and recall scores. precision_all.append(precision) recall_all.append(recall)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.
round(threshold_values[72], 3) for i, p in enumerate(precision_all): print str(i) + " -> " + str(p)
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Quiz Question: Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%? Finally, let's plot the precision recall curve.
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
ml-classification/week-6/module-9-precision-recall-assignment-blank.ipynb
zomansud/coursera
mit
Now we define the domain of the function to optimize as usual.
mixed_domain =[{'name': 'var1', 'type': 'continuous', 'domain': (-5,5),'dimensionality': 3}, {'name': 'var2', 'type': 'discrete', 'domain': (3,8,10)}, {'name': 'var3', 'type': 'categorical', 'domain': (0,1,2)}, {'name': 'var4', 'type': 'continuous', 'domain': (-1,2)}] myBopt = GPyOpt.methods.BayesianOptimization(f=func.f, # Objective function domain=mixed_domain, # Box-constraints of the problem initial_design_numdata = 5, # Number data initial design acquisition_type='EI', # Expected Improvement exact_feval = True, evaluator_type = 'local_penalization', batch_size = 5 ) # True evaluations, no sample noise
manual/GPyOpt_context.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Now we set the budget for each optimization call: a maximum number of iterations per call and a maximum allowed time of 60 seconds. We will inspect the results and the convergence behaviour afterwards.
max_iter = 2   ## maximum number of iterations
max_time = 60  ## maximum allowed time
eps = 0        ## tolerance, max distance between consecutive evaluations
manual/GPyOpt_context.ipynb
SheffieldML/GPyOpt
bsd-3-clause
To set a context, we just need to create a dictionary with the variables to fix and pass it to the Bayesian optimization object when running the optimization. Note that every time we run new iterations we can set other variables to be the context. Note that for variables in which the dimensionality has been specified in the domain, a subindex is internally assigned. For instance, if the variable is called 'var1' and has dimensionality 3, the first three positions in the internal representation of the domain will be occupied by variables 'var1_1', 'var1_2' and 'var1_3'. If no dimensionality is added, the internal naming remains the same. For instance, in the example above 'var3' should be fixed using its original name. See below for details.
myBopt.run_optimization(max_iter,eps=eps) myBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':.3, 'var1_2':0.4}) myBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':0, 'var3':2}) myBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':0, 'var2':3},) myBopt.run_optimization(max_iter,eps=eps,context = {'var1_1':0.3, 'var3':1, 'var4':-.4}) myBopt.run_optimization(max_iter,eps=eps)
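The earlier markdown mentions showing convergence plots; assuming this GPyOpt version exposes the usual helpers on the BayesianOptimization object, they can be produced right after these runs:

# Convergence diagnostics for the context-constrained runs above.
# plot_convergence, x_opt and fx_opt are the standard GPyOpt helpers/attributes;
# exact availability may vary between GPyOpt versions.
myBopt.plot_convergence()
print("best x found: %s" % myBopt.x_opt)
print("best f(x) found: %s" % myBopt.fx_opt)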
manual/GPyOpt_context.ipynb
SheffieldML/GPyOpt
bsd-3-clause
We can now print the evaluated points to check the results.
np.round(myBopt.X,2)
manual/GPyOpt_context.ipynb
SheffieldML/GPyOpt
bsd-3-clause
Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the Jupyter notebook. What command do you use if you aren't using the Jupyter notebook?
import matplotlib.pyplot as plt
%matplotlib inline
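For the question above: outside the notebook (for example in a plain Python script), `%matplotlib inline` is not available and the figure is displayed by calling `plt.show()` instead; a minimal sketch:

```python
import matplotlib.pyplot as plt

# In a script, render the figure window explicitly.
plt.plot([0, 1, 2], [0, 1, 4])
plt.show()
```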
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Exercise 1 Follow along with these steps: * Create a figure object called fig using plt.figure() * Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. * Plot (x,y) on that axis and set the labels and title to match the plot below:
fig = plt.figure()

ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('title')
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Exercise 2 Create a figure object and put two axes on it, ax1 and ax2, located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.
fig = plt.figure()

ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,0.2,0.2])
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Now plot (x,y) on both axes, and call your figure object to show it.
ax1.plot(x,y,color='black')
ax2.plot(x,y,color='red')

fig
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Exercise 3 Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]
fig = plt.figure()

ax1 = fig.add_axes([0,0,1,1])
ax2 = fig.add_axes([0.2,0.5,0.4,0.4])

# Large axes
ax1.set_xlabel('x')
ax1.set_ylabel('z')
ax1.plot(x,z)

# Inserted axes
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_title('zoom')
ax2.plot(x,y)
ax2.set_xlim(left=20, right=22)
ax2.set_ylim(bottom=30, top=50)
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Now use the x, y, and z arrays to recreate the plot below. Notice the x limits and y limits on the inserted plot. Exercise 4 Use plt.subplots(nrows=1, ncols=2) to create the plot below.
fig, axes = plt.subplots(1,2)

axes[0].plot(x,y,lw=3,ls='--')
axes[1].plot(x,z,color='r',lw=4)
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style. See if you can resize the plot by adding the figsize argument to plt.subplots() and copying and pasting your previous code.
fig, axes = plt.subplots(1,2,figsize=(12,2))

axes[0].plot(x,y,lw=3,ls='--')
axes[1].plot(x,z,color='r',lw=4)
data-science/learning/ud2/Part 1 Exercise Solutions/Matplotlib Exercises .ipynb
therealAJ/python-sandbox
gpl-3.0
Init SparkContext
from zoo.common.nncontext import init_spark_on_local, init_spark_on_yarn
import numpy as np
import os

hadoop_conf_dir = os.environ.get('HADOOP_CONF_DIR')

if hadoop_conf_dir:
    sc = init_spark_on_yarn(
        hadoop_conf=hadoop_conf_dir,
        conda_name=os.environ.get("ZOO_CONDA_NAME", "zoo"),  # The name of the created conda-env
        num_executors=2,
        executor_cores=4,
        executor_memory="2g",
        driver_memory="2g",
        driver_cores=1,
        extra_executor_memory_for_ray="3g")
else:
    sc = init_spark_on_local(cores=8, conf={"spark.driver.memory": "2g"})

# It may take a while to distribute the local environment, including Python and Java, to the cluster.
import ray
from zoo.ray import RayContext

ray_ctx = RayContext(sc=sc, object_store_memory="4g")
ray_ctx.init()
#ray.init(num_cpus=30, include_webui=False, ignore_reinit_error=True)
apps/ray/parameter_server/sharded_parameter_server.ipynb
intel-analytics/analytics-zoo
apache-2.0
ํŒŒ์ด์ฌ ๊ธฐ๋ณธ ์ž๋ฃŒํ˜• ๋ฌธ์ œ ์‹ค์ˆ˜(๋ถ€๋™์†Œ์ˆ˜์ )๋ฅผ ํ•˜๋‚˜ ์ž…๋ ฅ๋ฐ›์•„, ๊ทธ ์ˆซ์ž๋ฅผ ๋ฐ˜์ง€๋ฆ„์œผ๋กœ ํ•˜๋Š” ์›์˜ ๋ฉด์ ๊ณผ ๋‘˜๋ ˆ์˜ ๊ธธ์ด๋ฅผ ํŠœํ”Œ๋กœ ๋ฆฌํ„ดํ•˜๋Š” ํ•จ์ˆ˜ circle_radius๋ฅผ ๊ตฌํ˜„ํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ, ``` . ``` ๋ฌธ์ž์—ด ์ž๋ฃŒํ˜• ์•„๋ž˜ ์‚ฌ์ดํŠธ๋Š” ์ปคํ”ผ ์ฝฉ์˜ ํ˜„์žฌ ์‹œ์„ธ๋ฅผ ๋ณด์—ฌ์ค€๋‹ค. http://beans-r-us.appspot.com/prices.html ์œ„ ์‚ฌ์ดํŠธ์˜ ๋‚ด์šฉ์„ html ์†Œ์Šค์ฝ”๋“œ๋กœ ๋ณด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์œผ๋ฉฐ, ๊ฒ€์ƒ‰๋œ ์‹œ๊ฐ„์˜ ์ปคํ”ผ์ฝฉ์˜ ๊ฐ€๊ฒฉ์€ Current price of coffee beans ๋ฌธ์žฅ์ด ๋‹ด๊ฒจ ์žˆ๋Š” ์ค„์— ๋ช…์‹œ๋˜์–ด ์žˆ๋‹ค. ```html <html><head><title>Welcome to the Beans'R'Us Pricing Page</title> <link rel="stylesheet" type="text/css" href="beansrus.css" /> </head><body> <h2>Welcome to the Beans'R'Us Pricing Page</h2> <p>Current price of coffee beans = <strong>$5.94</strong></p> <p>Price valid for 15 minutes from Sun Sep 10 12:21:58 2017.</p> </body></html> ``` ๋ฌธ์ œ ์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๋ผ. ``` from future import print_function import urllib2 import time def price_setter(b_price, a_price): bean_price = b_price while 5.5 < bean_price < 6.0: time.sleep(1) page = urllib2.urlopen("http://beans-r-us.appspot.com/prices.html") text = page.read().decode("utf8") price_index = text.find("&gt;$") + 2 bean_price_str = text[price_index : price_index + 4] bean_price = float(bean_price_str) print("ํ˜„์žฌ ์ปคํ”ผ์ฝฉ ๊ฐ€๊ฒฉ์ด", bean_price, "๋‹ฌ๋Ÿฌ ์ž…๋‹ˆ๋‹ค.") if bean_price &lt;= 5.5: print("์•„๋ฉ”๋ฆฌ์นด๋…ธ ๊ฐ€๊ฒฉ์„", a_price, "๋‹ฌ๋Ÿฌ๋งŒํผ ์ธํ•˜ํ•˜์„ธ์š”!") else: print("์•„๋ฉ”๋ฆฌ์นด๋…ธ ๊ฐ€๊ฒฉ์„", a_price, "๋‹ฌ๋Ÿฌ๋งŒํผ ์ธ์ƒํ•˜์„ธ์š”!") ``` ``` .``` ์˜ค๋ฅ˜ ๋ฐ ์˜ˆ์™ธ ์ฒ˜๋ฆฌ ๋ฌธ์ œ ์•„๋ž˜ ์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๋ผ. ``` number_to_square = raw_input("A number to divide 100: ") try: number = float(number_to_square) print("100์„ ์ž…๋ ฅํ•œ ๊ฐ’์œผ๋กœ ๋‚˜๋ˆˆ ๊ฒฐ๊ณผ๋Š”", 100/number, "์ž…๋‹ˆ๋‹ค.") except ZeroDivisionError: raise ZeroDivisionError('0์ด ์•„๋‹Œ ์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜์„ธ์š”.') except ValueError: raise ValueError('์ˆซ์ž๋ฅผ ์ž…๋ ฅํ•˜์„ธ์š”.') ``` ``` .``` ๋ฆฌ์ŠคํŠธ ๋ฌธ์ œ ์•„๋ž˜ ์„ค๋ช… ์ค‘์—์„œ ๋ฆฌ์ŠคํŠธ ์ž๋ฃŒํ˜•์˜ ์„ฑ์งˆ์— ํ•ด๋‹นํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ชจ๋‘ ๊ณจ๋ผ๋ผ. ๊ฐ€๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค. ๋ถˆ๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค. ์ธ๋ฑ์Šค์™€ ์Šฌ๋ผ์ด์‹ฑ์„ ํ™œ์šฉํ•˜์—ฌ ํ•ญ๋ชฉ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ํ•ญ๋ชฉ๋“ค์ด ์ž„์˜์˜ ์ž๋ฃŒํ˜•์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋‹ค. ๋ฆฌ์ŠคํŠธ ๊ธธ์ด์— ์ œํ•œ์ด ์žˆ๋‹ค. ์‹ ์„ฑ์ •๋ณด ๋“ฑ ์ค‘์š”ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด๊ด€ํ•  ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. ``` ``` ๊ฒฌ๋ณธ๋‹ต์•ˆ: 1, 3, 4 ์‚ฌ์ „ record_list.txt ํŒŒ์ผ์€ ์—ฌ๋Ÿ ๋ช…์˜ ์ˆ˜์˜ ์„ ์ˆ˜์˜ 50m ๊ธฐ๋ก์„ ๋‹ด๊ณ  ์žˆ๋‹ค. txt player1 21.09 player2 20.32 player3 21.81 player4 22.97 player5 23.29 player6 22.09 player7 21.20 player8 22.16 ๋ฌธ์ œ ์•„๋ž˜์ฝ”๋“œ๊ฐ€ ํ•˜๋Š” ์ผ์„ ์„ค๋ช…ํ•˜๋ผ. ```python from future import print_function record_f = open("record_list.txt", 'r') record = record_f.read().decode('utf8').split('\n') record_dict = {} for line in record: (player, p_record) = line.split() record_dict[p_record] = player record_f.close() record_list = record_dict.keys() record_list.sort() for i in range(3): item = record_list[i] print(str(i+1) + ":", record_dict[item], item) ``` ``` .``` ํŠœํ”Œ ๋ฌธ์ œ ์•„๋ž˜ ์„ค๋ช… ์ค‘์—์„œ ํŠœํ”Œ ์ž๋ฃŒํ˜•์˜ ์„ฑ์งˆ์— ํ•ด๋‹นํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ชจ๋‘ ๊ณจ๋ผ๋ผ. ๊ฐ€๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค. ๋ถˆ๋ณ€ ์ž๋ฃŒํ˜•์ด๋‹ค. ์ธ๋ฑ์Šค์™€ ์Šฌ๋ผ์ด์‹ฑ์„ ํ™œ์šฉํ•˜์—ฌ ํ•ญ๋ชฉ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค. ํ•ญ๋ชฉ๋“ค์ด ์ž„์˜์˜ ์ž๋ฃŒํ˜•์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋‹ค. ํŠœํ”Œ ๊ธธ์ด์— ์ œํ•œ์ด ์žˆ๋‹ค. 
์‹ ์„ฑ์ •๋ณด ๋“ฑ ์ค‘์š”ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณด๊ด€ํ•  ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. ``` ``` ๊ฒฌ๋ณธ๋‹ต์•ˆ: 2, 3, 4, 6 ๋ฆฌ์ŠคํŠธ ์กฐ๊ฑด์ œ์‹œ๋ฒ• ์•„๋ž˜ ์ฝ”๋“œ๋Š” 0๋ถ€ํ„ฐ 1000 ์‚ฌ์ด์˜ ํ™€์ˆ˜๋“ค์˜ ์ œ๊ณฑ์˜ ๋ฆฌ์ŠคํŠธ๋ฅผ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์œผ๋กœ ์ƒ์„ฑํ•œ๋‹ค
odd_1000 = [x**2 for x in range(0, 1000) if x % 2 == 1]

# First five items of the list
odd_1000[:5]
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ 0๋ถ€ํ„ฐ 1000๊นŒ์ง€์˜ ์ˆซ์ž๋“ค ์ค‘์—์„œ ํ™€์ˆ˜์ด๋ฉด์„œ 7์˜ ๋ฐฐ์ˆ˜์ธ ์ˆซ์ž๋“ค์˜ ๋ฆฌ์ŠคํŠธ๋ฅผ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์œผ๋กœ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ. ``` . ``` ๋ชจ๋ฒ”๋‹ต์•ˆ:
odd_3x7 = [x for x in range(0, 1000) if x % 2 == 1 and x % 7 == 0]

# First five items of the list
odd_3x7[:5]
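An optional quick check (not part of the original answer key) that every element of the result satisfies both conditions:

```python
# Every element should be odd and divisible by 7.
all(x % 2 == 1 and x % 7 == 0 for x in odd_3x7)
```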
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ 0๋ถ€ํ„ฐ 1000๊นŒ์ง€์˜ ์ˆซ์ž๋“ค ์ค‘์—์„œ ํ™€์ˆ˜์ด๋ฉด์„œ 7์˜ ๋ฐฐ์ˆ˜์ธ ์ˆซ์ž๋“ค์„ ์ œ๊ณฑํ•˜์—ฌ 1์„ ๋”ํ•œ ๊ฐ’๋“ค์˜ ๋ฆฌ์ŠคํŠธ๋ฅผ ์กฐ๊ฑด์ œ์‹œ๋ฒ•์œผ๋กœ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ. ํžŒํŠธ: ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜๋œ ํ•จ์ˆ˜๋ฅผ ํ™œ์šฉํ•œ๋‹ค. $$f(x) = x^2 + 1$$ ``` .``` ๊ฒฌ๋ณธ๋‹ต์•ˆ:
def square_plus1(x):
    return x**2 + 1

odd_3x7_spl = [square_plus1(x) for x in odd_3x7]

# First five items of the list
odd_3x7_spl[:5]
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
csv ํŒŒ์ผ ์ฝ์–ด๋“ค์ด๊ธฐ 'Seoul_pop2.csv' ํŒŒ์ผ์—๋Š” ์•„๋ž˜ ๋‚ด์šฉ์ด ์ €์žฅ๋˜์–ด ์žˆ๋‹ค" ```csv 1949๋…„๋ถ€ํ„ฐ 2010๋…„ ์‚ฌ์ด์˜ ์„œ์šธ๊ณผ ์ˆ˜๋„๊ถŒ ์ธ๊ตฌ ์ฆ๊ฐ€์œจ(%) ๊ตฌ๊ฐ„,์„œ์šธ,์ˆ˜๋„๊ถŒ 1949-1955,9.12,-5.83 1955-1960,55.88,32.22 1960-1966,55.12,32.76 1966-1970,45.66,28.76 1970-1975,24.51,22.93 1975-1980,21.38,21.69 1980-1985,15.27,18.99 1985-1990,10.15,17.53 1990-1995,-3.64,8.54 1995-2000,-3.55,5.45 2000-2005,-0.93,6.41 2005-2010,-1.34,3.71 ``` ํ™•์žฅ์ž๊ฐ€ csv์ธ ํŒŒ์ผ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•˜๊ธฐ ์œ„ํ•ด ์ฃผ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. csv๋Š” Comma-Separated Values์˜ ์ค„์ž„๋ง๋กœ ๋ฐ์ดํ„ฐ๊ฐ€ ์‰ผํ‘œ(์ฝค๋งˆ)๋กœ ๊ตฌ๋ถ„๋˜์–ด ์ •๋ฆฌ๋˜์–ด ์žˆ๋Š” ํŒŒ์ผ์„ ์˜๋ฏธํ•œ๋‹ค. csv ํŒŒ์ผ์„ ์ฝ์–ด๋“œ๋ฆฌ๋Š” ๋ฐฉ๋ฒ•์€ csv ๋ชจ๋“ˆ์˜ reader() ํ•จ์ˆ˜๋ฅผ ํ™œ์šฉํ•˜๋ฉด ๋งค์šฐ ์‰ฝ๋‹ค. reader() ํ•จ์ˆ˜์˜ ๋ฆฌํ„ด๊ฐ’์€ csv ํŒŒ์ผ์— ์ €์žฅ๋œ ๋‚ด์šฉ์„ ์ค„ ๋‹จ์œ„๋กœ, ์‰ผํ‘œ ๋‹จ์œ„๋กœ ๋Š์–ด์„œ 2์ฐจ์› ๋ฆฌ์ŠคํŠธ์ด๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์–ธ๊ธ‰๋œ ํŒŒ์ผ์— ์ €์žฅ๋œ ๋‚ด์šฉ์˜ ๊ฐ ์ค„์„ ์ถœ๋ ฅํ•ด์ค€๋‹ค.
import csv

with open('Seoul_pop2.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        if len(row) == 0 or row[0][0] == '#':
            continue
        else:
            print(row)
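The quiz question just below asks why swapping the two conditions in the `if` raises an error; a small standalone demonstration of the short-circuit behaviour on an empty row (hypothetical example, not part of the original exam):

```python
row = []  # what csv.reader yields for a blank line

# Safe: len(row) == 0 is True, so the right-hand operand is never evaluated.
print(len(row) == 0 or row[0][0] == '#')

# Unsafe: row[0] is evaluated first and raises IndexError on an empty list.
try:
    print(row[0][0] == '#' or len(row) == 0)
except IndexError:
    print('IndexError: list index out of range')
```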
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ ์œ„ ์ฝ”๋“œ์—์„œ 5๋ฒˆ ์งธ ์ค„์„ ์•„๋ž˜์™€ ๊ฐ™์ด ํ•˜๋ฉด ์˜ค๋ฅ˜ ๋ฐœ์ƒํ•œ๋‹ค. if row[0][0] == '#' or len(row) == 0: ์ด์œ ๋ฅผ ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ. ``` . ``` ๋„˜ํŒŒ์ด ํ™œ์šฉ ๊ธฐ์ดˆ 1 ๋„˜ํŒŒ์ด ์–ด๋ ˆ์ด๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋ช‡ ๊ฐœ์˜ ๊ธฐ๋ณธ์ ์ธ ํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•˜๋ฉด ๋œ๋‹ค. np.arange() np.zeros() np.ones() np.diag() ์˜ˆ์ œ:
np.arange(3, 10, 3)
np.zeros((2,3))
np.ones((2,))
np.diag([1, 2, 3, 4])
np.ones((3,3)) * 2
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ ์•„๋ž˜ ๋ชจ์–‘์˜ ์–ด๋ ˆ์ด๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ. ๋‹จ, ์–ธ๊ธ‰๋œ ๋„ค ๊ฐœ์˜ ํ•จ์ˆ˜๋“ค๋งŒ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑ๋œ ๋ฆฌ์ŠคํŠธ๋‚˜ ์–ด๋ ˆ์ด๋Š” ํ—ˆ์šฉ๋˜์ง€ ์•Š๋Š”๋‹ค. $$\left [ \begin{matrix} 2 & 0 & 0 \ 0 & 2 & 0 \ 0 & 0 & 2 \end{matrix} \right ]$$ ``` . ``` ๊ฒฌ๋ณธ๋‹ต์•ˆ:
np.diag(np.ones((3,))*2)
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ ์•„๋ž˜ ๋ชจ์–‘์˜ ์–ด๋ ˆ์ด๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ. ๋‹จ, ์–ธ๊ธ‰๋œ ๋„ค ๊ฐœ์˜ ํ•จ์ˆ˜๋งŒ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋ฉฐ, ์ˆ˜๋™์œผ๋กœ ์ƒ์„ฑ๋œ ๋ฆฌ์ŠคํŠธ๋‚˜ ์–ด๋ ˆ์ด๋Š” ํ—ˆ์šฉ๋˜์ง€ ์•Š๋Š”๋‹ค. $$\left [ \begin{matrix} 2 & 0 & 0 \ 0 & 4 & 0 \ 0 & 0 & 6 \end{matrix} \right ]$$ ``` . ``` ๊ฒฌ๋ณธ๋‹ต์•ˆ:
np.diag(np.arange(2, 7, 2))
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋„˜ํŒŒ์ด์˜ linspace() ํ•จ์ˆ˜ ํ™œ์šฉ numpy ๋ชจ๋“ˆ์˜ linspace() ํ•จ์ˆ˜๋Š” ์ง€์ •๋œ ๊ตฌ๊ฐ„์„ ์ •ํ•ด์ง„ ํฌ๊ธฐ๋กœ ์ผ์ •ํ•˜๊ฒŒ ์ชผ๊ฐœ๋Š” ์–ด๋ž˜์ด๋ฅผ ์ƒ์„ฑํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, 0๋ถ€ํ„ฐ 3์‚ฌ์ด์˜ ๊ตฌ๊ฐ„์„ ๊ท ๋“ฑํ•˜๊ฒŒ 30๊ฐœ๋กœ ์ชผ๊ฐœ๊ณ ์ž ํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์ด ์‹คํ–‰ํ•˜๋ฉด ๋œ๋‹ค.
xs = np.linspace(0, 3, 30)
xs
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ 0๋ถ€ํ„ฐ 1์‚ฌ์ด์˜ ๊ตฌ๊ฐ„์„ ๊ท ๋“ฑํ•˜๊ฒŒ 10๊ฐœ๋กœ ์ชผ๊ฐœ์–ด ๊ฐ ํ•ญ๋ชฉ์„ ์ œ๊ณฑํ•˜๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋ผ. ``` . ``` ๊ฒฌ๋ณธ๋‹ต์•ˆ:
np.linspace(0,1, 10) ** 2
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋„˜ํŒŒ์ด ํ™œ์šฉ ๊ธฐ์ดˆ 2 population.txt ํŒŒ์ผ์€ 1900๋…„๋ถ€ํ„ฐ 1920๋…„๊นŒ์ง€ ์บ๋‚˜๋‹ค ๋ถ๋ถ€์ง€์—ญ์—์„œ ์„œ์‹ํ•œ ์‚ฐํ† ๋ผ(hare)์™€ ์Šค๋ผ์†Œ๋‹ˆ(lynx)์˜ ์ˆซ์ž, ๊ทธ๋ฆฌ๊ณ  ์ฑ„์†Œ์ธ ๋‹น๊ทผ(carrot)์˜ ์žฌ๋ฐฐ์ˆซ์ž๋ฅผ ์•„๋ž˜ ๋‚ด์šฉ์œผ๋กœ ์ˆœ์ˆ˜ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋‹ด๊ณ  ์žˆ๋‹ค. ``` year hare lynx carrot 1900 30e3 4e3 48300 1901 47.2e3 6.1e3 48200 1902 70.2e3 9.8e3 41500 1903 77.4e3 35.2e3 38200 1904 36.3e3 59.4e3 40600 1905 20.6e3 41.7e3 39800 1906 18.1e3 19e3 38600 1907 21.4e3 13e3 42300 1908 22e3 8.3e3 44500 1909 25.4e3 9.1e3 42100 1910 27.1e3 7.4e3 46000 1911 40.3e3 8e3 46800 1912 57e3 12.3e3 43800 1913 76.6e3 19.5e3 40900 1914 52.3e3 45.7e3 39400 1915 19.5e3 51.1e3 39000 1916 11.2e3 29.7e3 36700 1917 7.6e3 15.8e3 41800 1918 14.6e3 9.7e3 43300 1919 16.2e3 10.1e3 41300 1920 24.7e3 8.6e3 47300 ``` ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์—ฐ๋„, ํ† ๋ผ ๊ฐœ์ฒด์ˆ˜, ์Šค๋ผ์†Œ๋ฆฌ ๊ฐœ์ฒด์ˆ˜, ๋‹น๊ทผ ๊ฐœ์ฒด์ˆ˜๋ฅผ ๋”ฐ๋กœ๋”ฐ๋กœ ๋–ผ์–ด ๋‚ด์–ด ๊ฐ๊ฐ ์–ด๋ ˆ์ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ year, hares, lynxes, carrots ๋ณ€์ˆ˜์— ์ €์žฅํ•˜๋Š” ์ฝ”๋“œ์ด๋‹ค.
data = np.loadtxt('populations.txt')

year, hares, lynxes, carrots = data.T
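For the two explanation questions that follow, inspecting the shapes can help; a small sketch, assuming the file has the 21 data rows and 4 columns listed above:

```python
# data is a 2-D float array: one row per year, one column per field.
print(data.shape)    # expected: (21, 4)

# data.T swaps rows and columns, so unpacking yields one 1-D array per column.
print(data.T.shape)  # expected: (4, 21)
print(year[:3])
print(hares[:3])
```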
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
๋ฌธ์ œ ์œ„ ์ฝ”๋“œ์—์„œ np.loadtxt ํ•จ์ˆ˜์˜ ์ž‘๋™๋ฐฉ์‹์„ ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ. ``` . ``` ๋ฌธ์ œ ์œ„ ์ฝ”๋“œ์—์„œ data.T์— ๋Œ€ํ•ด ๊ฐ„๋‹จํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋ผ. ``` . ``` ์•„๋ž˜ ์ฝ”๋“œ๋Š” ํ† ๋ผ, ์Šค๋ผ์†Œ๋‹ˆ, ๋‹น๊ทผ ๊ฐ๊ฐ์˜ ๊ฐœ์ฒด์ˆ˜์˜ ์—ฐ๋„๋ณ„ ๋ณ€ํ™”๋ฅผ ์„ ๊ทธ๋ž˜ํ”„๋กœ ๋ณด์—ฌ์ฃผ๋„๋ก ํ•˜๋Š” ์ฝ”๋“œ์ด๋‹ค.
plt.axes([0.2, 0.1, 0.5, 0.8])
plt.plot(year, hares, year, lynxes, year, carrots)
plt.legend(('Hare', 'Lynx', 'Carrot'), loc=(1.05, 0.5))
ref_materials/exams/2017/A02/midterm-a02.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Obtain Analytics Raster Identify road feed feature for download We want to download the most recent feature from the feed for road detection in Kirazli, Turkey.
# This ID is for a subscription for monthly road detection in Kirazli, Turkey
SUBSCRIPTION_ID = 'f184516c-b948-406f-b257-deaa66c3f38a'

results = analytics_client.list_collection_features(SUBSCRIPTION_ID).get()
features = results['features']
print('{} features in collection'.format(len(features)))

# sort features by acquisition date
features.sort(key=lambda k: k['properties']['first_acquired'])
feature = features[-1]
print(feature['properties']['first_acquired'])
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0
Download Quad Raster
RESOURCE_TYPE = 'target-quad'

def create_save_dir(root_dir='data'):
    save_dir = root_dir
    if not os.path.isdir(save_dir):
        os.makedirs(save_dir)
    return save_dir

dest = 'data'
create_save_dir(dest)
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0
We want to save all of the images in one directory. But all of the images for a single target quad have the same name, L15_{target_quad_id}. We use the function write_to_file to save the image, and that function pulls the name from the resource's name attribute, which we can't set. So we are going to make a new object that functions just like the resource but has its name attribute built from the target quad ID and the acquisition date. It would be nice if the write_to_file function just allowed us to set the name, like it allows us to set the directory.
from planet.api.models import Body
from planet.api.utils import write_to_file

def download_feature(feature, subscription_id, resource_type, dest=dest):
    print('{}: acquired {}'.format(feature['id'], get_date(feature)))
    resource = analytics_client.get_associated_resource_for_analytic_feature(subscription_id,
                                                                             feature['id'],
                                                                             resource_type)
    named_resource = NamedBody(resource, get_name(feature))
    filename = download_resource(named_resource, dest)
    return filename

def get_date(feature):
    feature_acquired = feature['properties']['first_acquired']
    return feature_acquired.split('T', 1)[0]

def get_name(feature):
    return feature['properties']['target_quad_id'] + '_' + get_date(feature) + '.tif'

def download_resource(resource, dest, overwrite=False):
    writer = write_to_file(dest, overwrite=overwrite)
    writer(resource)
    filename = os.path.join(dest, resource.name)
    print('file saved to: {}'.format(filename))
    return filename

class NamedBody(Body):
    def __init__(self, body, name):
        super(NamedBody, self).__init__(body._request, body.response, body._dispatcher)
        self._name = name

    @property
    def name(self):
        return self._name

filename = download_feature(feature, SUBSCRIPTION_ID, RESOURCE_TYPE)
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0
Visualize Roads Image The output of the analytics road detection is a boolean image where road pixels are given a value of True and non-road pixels are given a value of False.
def _open(filename, factor=1):
    with rasterio.open(filename) as dataset:
        height = int(dataset.height / factor)
        width = int(dataset.width / factor)
        data = dataset.read(
            out_shape=(dataset.count, height, width)
        )
    return data

def open_bool(filename, factor=1):
    data = _open(filename, factor=factor)
    return data[0,:,:]

def get_figsize(factor):
    return tuple(2 * [int(25/factor)])

factor = 1
figsize = (15, 15)

roads = open_bool(filename, factor=factor)
fig = plt.figure(figsize=figsize)
show(roads, title="roads", cmap="binary")
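As a quick sanity check on the result (a hedged sketch, assuming road pixels are stored as nonzero values such as 255), one can look at the fraction of pixels classified as road:

```python
import numpy as np

# Fraction of the quad classified as road (nonzero pixels).
road_fraction = np.count_nonzero(roads) / float(roads.size)
print('fraction of road pixels: {:.3%}'.format(road_fraction))
```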
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0
Convert Roads to Vector Features GDAL Command-Line Interface (CLI) GDAL provides a python script that can be run via the CLI. It is quite easy to run and fast, though it doesn't allow for some of the convenient pixel-space filtering and processing that rasterio provides and we will use later on.
gdal_output_filename = os.path.join('data', 'test_gdal.shp')

!gdal_polygonize.py $filename $gdal_output_filename
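If you want the GDAL result to use 8-connectedness like the rasterio runs below, gdal_polygonize.py accepts a -8 flag in GDAL versions that support it; a hedged variant of the call above:

```python
# Assumes a GDAL version whose gdal_polygonize.py supports the -8 (8-connectedness) flag.
!gdal_polygonize.py -8 $filename $gdal_output_filename
```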
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0
Rasterio - no filtering In this section we use rasterio to convert the binary roads raster into a vector dataset. The vectors are written to disk as a shapefile. The shapefile can be imported into geospatial programs such as QGIS or ArcGIS for visualization and further processing. This is basic conversion to vector shapes. No filtering based on size (useful for removing small 1 or 2 pixel road segments), smoothing to remove pixel edges, or conversion to the road centerlines is performed here. These additional 'features' will be provided in sections below this one in the future.
def roads_as_vectors(filename):
    with rasterio.open(filename) as dataset:
        roads = dataset.read(1)
        road_mask = roads == 255  # mask non-road pixels

        # transforms road features to image crs
        road_shapes = rfeatures.shapes(roads, mask=road_mask, connectivity=8, transform=dataset.transform)
        road_geometries = (r for r, _ in road_shapes)

        crs = dataset.crs
    return (road_geometries, crs)

def save_as_shapefile(output_filename, geometries, crs):
    driver = 'ESRI Shapefile'
    schema = {'geometry': 'Polygon', 'properties': []}
    with fiona.open(output_filename, mode='w', driver=driver, schema=schema, crs=crs) as c:
        count = 0
        for g in geometries:
            count += 1
            c.write({'geometry': g, 'properties': {}})
        print('wrote {} geometries to {}'.format(count, output_filename))

road_geometries, crs = roads_as_vectors(filename)
output_filename = os.path.join('data', 'test.shp')
save_as_shapefile(output_filename, road_geometries, crs)
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0
Rasterio - Filtering and Simplifying In this section, we use shapely to filter the road vectors by size and simplify them so we don't have a million pixel edges.
def roads_as_vectors_with_filtering(filename, min_pixel_size=5):
    with rasterio.open(filename) as dataset:
        roads = dataset.read(1)
        road_mask = roads == 255  # mask non-road pixels

        # we skip transform on vectorization so we can perform filtering in pixel space
        road_shapes = rfeatures.shapes(roads, mask=road_mask, connectivity=8)
        road_geometries = (r for r, _ in road_shapes)
        geo_shapes = (sshape(g) for g in road_geometries)

        # filter to shapes bigger than min_pixel_size
        geo_shapes = (s for s in geo_shapes if s.area > min_pixel_size)

        # simplify so we don't have a million pixel edge points
        tolerance = 1  # 1.5
        geo_shapes = (g.simplify(tolerance, preserve_topology=False) for g in geo_shapes)

        # apply image transform
        # rasterio transform: (a, b, c, d, e, f, 0, 0, 1), c and f are offsets
        # shapely: a b d e c/xoff f/yoff
        d = dataset.transform
        shapely_transform = [d[0], d[1], d[3], d[4], d[2], d[5]]
        proj_shapes = (shapely.affinity.affine_transform(g, shapely_transform) for g in geo_shapes)

        road_geometries = (shapely.geometry.mapping(s) for s in proj_shapes)
        crs = dataset.crs
    return (road_geometries, crs)

road_geometries_filt, crs = roads_as_vectors_with_filtering(filename)
output_filename = os.path.join('data', 'test_filt.shp')
save_as_shapefile(output_filename, road_geometries_filt, crs)
jupyter-notebooks/analytics-snippets/roads_as_vector.ipynb
planetlabs/notebooks
apache-2.0