Dataset columns: markdown, code, path, repo_name, license.
These validators are simple enough that closures work instead of full-fledged objects. The important part is maintaining a consistent interface -- if we suddenly need to switch to classes, we define `__call__` on them to preserve that interface. We can also change our register callable to accept the repository as well:
def register_user(user_repository):
    email_checker = is_email_free(user_repository)
    username_checker = is_username_free(user_repository)

    def register_user(username, email, password):
        if not username_checker(username):
            raise OurValidationError('Username in use already', 'username')
        if not email_checker(email):
            raise OurValidationError('Email in use already', 'email')
        user = User(username=username, email=email, password=password)
        user_repository.persist(user)

    return register_user
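For reference, the closure-based validators mentioned above might look like the following -- a sketch inferred from the tests below, which assume `find_by_email` and `find_by_username` methods on `AbstractUserRepository`:

```python
def is_email_free(user_repository):
    def checker(email):
        # Free means the repository has no user with this email yet.
        return not user_repository.find_by_email(email)
    return checker


def is_username_free(user_repository):
    def checker(username):
        return not user_repository.find_by_username(username)
    return checker
```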
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Of course the tests break now, and that's okay. We made a very sweeping change to the architecture here. We need to go back through and alter the tests one by one, but instead of patching everything out we can do something better: Dependency Injection.
def test_duplicated_email_causes_false():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    fake_user_repository.find_by_email.return_value = True
    checker = is_email_free(fake_user_repository)
    assert not checker('[email protected]')

def test_duplicated_username_causes_false():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    fake_user_repository.find_by_username.return_value = True
    checker = is_username_free(fake_user_repository)
    assert not checker('fred')

def test_register_user_happy_path():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    fake_user_repository.find_by_email.return_value = False
    fake_user_repository.find_by_username.return_value = False
    registrar = register_user(fake_user_repository)
    registrar('fred', '[email protected]', 'fredpassword')
    assert fake_user_repository.persist.call_count
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
But to test that our validators function correctly in this context, we need to fake out find_by_email and find_by_username independently. This is a symptom of our code not being Open-Closed. The Open-Closed Problem The other major issue with how the code is laid out right now is that it's not Open-Closed. If you're not familiar with the principle, Wikipedia says this: "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification" Or put differently, "You should be able to change functionality without editing existing code." -- I believe I need to credit Sandi Metz with this, but I'm not sure. We've actually already used this idea by injecting the User Repository. In tests, we inject a fake or in-memory repository, but in production it can be a SQLAlchemy implementation, or perhaps one wrapped in a caching repository. We can do the same thing with the validators.
def register_user(user_repository, validator):
    def registrar(username, email, password):
        user = User(username, email, password)
        validator(user)
        user_repository.persist(user)
    return registrar
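To make the Open-Closed point concrete, here is a sketch of an in-memory repository that could be swapped in without touching register_user; the method names come from the tests, and the class itself is illustrative rather than part of the original notebook:

```python
class InMemoryUserRepository(AbstractUserRepository):
    """Illustrative in-memory implementation; a SQLAlchemy-backed repository
    could be swapped in for production without changing register_user."""

    def __init__(self):
        self._users = []

    def persist(self, user):
        self._users.append(user)

    def find_by_username(self, username):
        return any(u.username == username for u in self._users)

    def find_by_email(self, email):
        return any(u.email == email for u in self._users)
```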
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Of course, our tests break again, so let's revisit the currently breaking one first:
def test_register_user_happy_path():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    registrar = register_user(fake_user_repository, lambda user: None)
    registrar('fred', '[email protected]', 'fredpassword')
    assert fake_user_repository.persist.call_count

def test_register_user_fails_validation():
    fake_user_repository = mock.create_autospec(AbstractUserRepository)
    fake_validator = mock.Mock(side_effect=OurValidationError('username in use already', 'username'))
    registrar = register_user(fake_user_repository, fake_validator)
    try:
        registrar('fred', '[email protected]', 'fredpassword')
    except OurValidationError as e:
        assert e.msg == 'username in use already'
        assert e.field == 'username'
    else:
        assert False, "Did not Raise"
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
We'll need to tweak the validation logic some to make up for the fact that we're passing the whole user object now:
def validate_username(user_repository):
    def validator(user):
        if user_repository.find_by_username(user.username):
            raise OurValidationError('Username in use already', 'username')
        return True
    return validator

def validate_email(user_repository):
    def validator(user):
        if user_repository.find_by_email(user.email):
            raise OurValidationError('Email in use already', 'email')
        return True
    return validator
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
The tests for these are pretty straightforward as well, so I'll omit them. But we need a way to stitch them together...
def validate_many(*validators):
    def checker(input):
        return all(validator(input) for validator in validators)
    return checker
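A quick sanity check for validate_many, in the same mock-based style as the other tests, might look like this (a sketch, not from the original notebook):

```python
def test_validate_many_calls_every_validator():
    first = mock.Mock(return_value=True)
    second = mock.Mock(return_value=True)
    checker = validate_many(first, second)
    assert checker('some user')              # all validators passed
    assert first.call_count and second.call_count
```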
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
And then hook it all up like this:
validator = validate_many(validate_email(user_repository), validate_username(user_repository))
registrar = register_user(user_repository, validator)
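In a real application this composition would usually live in one place, such as an application factory. The sketch below assumes a hypothetical SQLAlchemyUserRepository and database session; any object implementing AbstractUserRepository would do:

```python
from flask import Flask

def create_app():
    app = Flask(__name__)
    # SQLAlchemyUserRepository and db.session are hypothetical stand-ins for
    # whatever production repository implementation the project provides.
    user_repository = SQLAlchemyUserRepository(db.session)
    validator = validate_many(
        validate_email(user_repository),
        validate_username(user_repository),
    )
    app.registrar = register_user(user_repository, validator)
    return app
```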
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
Our neglected Controller We've spent a lot of time looking at how to compartmentalize the registration logic and portion out its concerns. However, the controller itself needs some attention as well. When we last left, it looked like this:
@app.route('/register', methods=['GET', 'POST'])
def register_user_view():
    form = RegisterUserForm()
    if form.validate_on_submit():
        try:
            register_user(form.username.data, form.email.data, form.password.data)
        except OurValidationError as e:
            form.errors[e.field] = [e.msg]
            return render_template('register.html', form=form)
        else:
            return redirect('homepage')
    return render_template('register.html', form=form)
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
But we can do better than that. The problem here is that the logic is set in stone as nested flows of control. But mostly, I really like any excuse to use class-based views.
class RegisterUser(MethodView):
    def __init__(self, form, registrar, template, redirect):
        self.form = form
        self.registrar = registrar
        self.template = template
        self.redirect = redirect

    def get(self):
        return self._render()

    def post(self):
        if self.form.validate_on_submit():
            return self._register()
        else:
            return self._render()

    def _register(self):
        try:
            self.registrar(self.form.username.data, self.form.email.data, self.form.password.data)
        except OurValidationError as e:
            self._handle_error(e)
            return self._render()
        else:
            return self._redirect()

    def _render(self):
        return render_template(self.template, form=self.form)

    def _redirect(self):
        return redirect(url_for(self.redirect))

    def _handle_error(self, e):
        self.form.errors[e.field] = [e.msg]
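Flask wires a MethodView up through as_view, which forwards keyword arguments to the view's constructor. A sketch of the registration (the endpoint and template names are placeholders; in practice you would probably pass the form class rather than a single instance so each request gets a fresh form):

```python
app.add_url_rule(
    '/register',
    view_func=RegisterUser.as_view(
        'register_user',              # endpoint name (placeholder)
        form=RegisterUserForm(),
        registrar=registrar,
        template='register.html',
        redirect='homepage',
    ),
)
```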
hexagonal/refactoring_and_interfaces.ipynb
justanr/notebooks
mit
But look at what happens with the median
print(np.percentile(Serie, 50))
print(np.median(Serie))
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
What happens: the expected mean is 0, yet the sample mean shows a deviation from it. The same happens with the standard deviation. The median, however, does not behave this way: with 1000 data points it lands about two orders of magnitude closer to 0. The following figure illustrates this. Definition of the plotting functions
def GraficaHistogramaParam(Values, bins=15):
    # Build the histogram of the values
    h, b = np.histogram(Values, bins=bins)
    h = h.astype(float); h = h / h.sum()
    b = (b[1:] + b[:-1]) / 2.0
    # Build the figure
    fig = pl.figure(figsize=(10, 8))
    ax = fig.add_subplot(111)
    ax.plot(b, h, 'b', lw=2)
    ax.fill_between(b, h, color='b', alpha=0.2)
    ax.set_xlabel('$X$', size=15)
    ax.set_ylabel('$f(x)$', size=15)
    ax.set_xlim(-3, 3)
    ax.set_ylim(0, h.max() + 0.05)
    ax.grid(True)
    ax.legend(loc=0)
    # Plot the location measures (mean and mean +/- one standard deviation)
    ax.vlines(Values.mean(), 0, h.max() + 0.05, lw=2, color='r')
    ax.vlines([Values.mean() + Values.std(), Values.mean() - Values.std()], 0, h.max() + 0.05, lw=1, color='r')
    pl.show()

def GraficaHistogramaNoParam(Values, bins=15):
    # Build the histogram of the values
    h, b = np.histogram(Values, bins=bins)
    h = h.astype(float); h = h / h.sum()
    b = (b[1:] + b[:-1]) / 2.0
    # Build the figure
    fig = pl.figure(figsize=(10, 8))
    ax = fig.add_subplot(111)
    ax.plot(b, h, 'b', lw=2)
    ax.fill_between(b, h, color='b', alpha=0.2)
    ax.set_xlabel('$X$', size=15)
    ax.set_ylabel('$f(x)$', size=15)
    ax.set_xlim(-3, 3)
    ax.set_ylim(0, h.max() + 0.05)
    ax.grid(True)
    ax.legend(loc=0)
    # Plot the location measures (median and the 10th/90th percentiles)
    ax.vlines(np.percentile(Values, 50), 0, h.max() + 0.05, lw=2, color='r')
    ax.vlines([np.percentile(Values, 10), np.percentile(Values, 90)], 0, h.max() + 0.05, lw=1, color='r')
    pl.show()
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Plot of the parametric location measures
GraficaHistogramaParam(Serie)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Plot of the non-parametric location measures
GraficaHistogramaNoParam(Serie)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Case with fewer data points With a smaller amount of data we expect a larger difference between both measures, hence their instability:
Serie = np.random.uniform(2.5, 10, int(2e5))
print(Serie.mean())
print(np.median(Serie))
from scipy import stats as st
print(st.skew(Serie))
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Exercise to observe the robustness of both measures In the following exercise we generate random series many times (20,000 in the code below), each with 25 entries, and then compare the differences between the means and medians obtained for each case.
medianas = np.zeros(20000)
medias = np.zeros(20000)
for i in range(20000):
    Serie = np.random.normal(0, 1, 25)
    medias[i] = Serie.mean()
    medianas[i] = np.median(Serie)

def ComparaHistogramas(Vec1, Vec2, bins=15):
    # Build the histogram of each vector
    h1, b1 = np.histogram(Vec1, bins=bins)
    h1 = h1.astype(float); h1 = h1 / h1.sum()
    b1 = (b1[1:] + b1[:-1]) / 2.0
    h2, b2 = np.histogram(Vec2, bins=bins)
    h2 = h2.astype(float); h2 = h2 / h2.sum()
    b2 = (b2[1:] + b2[:-1]) / 2.0
    # Build the figure
    fig = pl.figure(figsize=(10, 8))
    ax = fig.add_subplot(111)
    ax.plot(b1, h1, 'b', lw=2, label='Vec 1')
    ax.plot(b2, h2, 'r', lw=2, label='Vec 2')
    ax.fill_between(b1, h1, color='b', alpha=0.2)
    ax.fill_between(b2, h2, color='r', alpha=0.2)
    ax.set_xlabel('$X$', size=15)
    ax.set_ylabel('$f(x)$', size=15)
    ax.set_xlim(-1, 1)
    ax.set_ylim(0, h1.max() + 0.05)
    ax.grid(True)
    ax.legend(loc=0)
    pl.show()
    return h1, h2

HistMedianas, HistMedias = ComparaHistogramas(medianas, medias)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Susceptibility to Outliers An outlier is a data point that lies outside the range in which the data oscillate, or that is not consistent with the physics of the phenomenon being measured. The following are examples of outliers:

- Exaggeratedly high values.
- Negative values for phenomena that cannot take negative values.
- Values outside a defined range.
- A sequence of entries with the same value (not strictly an outlier, but a sign of problems).

One way to identify them is from the mean and standard deviation of the values, or from the percentiles where they fall: $Val_{outlier} > \mu + N \sigma$, where $N$ varies depending on how strict we want to be, or $Val_{outlier} > P_{99.9}$. Depending on the number of records in the data, the number of outliers and their magnitudes, they may or may not have consequences for the series and for later analyses performed on it. Example of robustness against outliers
Serie = np.random.normal(0, 1, 50)
fig = pl.figure(figsize=(9, 7))
pl.plot(Serie)
pl.grid(True)
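The two detection rules just described translate directly into numpy; the sketch below is illustrative only ($N=3$ and the 99.9th percentile are example thresholds, not values from the notebook):

```python
N = 3
rule_sigma = Serie > Serie.mean() + N * Serie.std()     # mean + N*sigma rule
rule_percentile = Serie > np.percentile(Serie, 99.9)    # percentile rule
print(Serie[rule_sigma], Serie[rule_percentile])
```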
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Let's insert a wild data point, one that falls well outside the range
Serie2 = np.copy(Serie)
Serie2[10] = 50.0
fig = pl.figure(figsize=(9, 7))
pl.plot(Serie2)
pl.plot(Serie)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Now let's see what happens with the mean:
print(Serie.mean())
print(Serie2.mean())
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
And what happens with the median:
print(np.median(Serie))
print(np.median(Serie2))
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Introducing multiple outliers What happens if a large number of outliers is introduced? In other words, at what rate does the mean turn into an estimator with an ever larger error?
def CreaOutliers(vect, NumOut, Mult=10):
    # Find the oscillation range of the data
    Per = np.array([np.percentile(vect, i) for i in [0.1, 99.9]])
    # Generate the random outliers
    vectOut = np.copy(vect)
    for i in np.random.choice(vect.shape[0], NumOut):
        p = np.random.choice(2, 1)[0]
        vectOut[i] = vectOut[i] + Per[p] * Mult * np.random.uniform(2, 15, 1)[0]
    return vectOut

# NOTE: Serie3 is assumed to be defined in an earlier (omitted) cell,
# presumably a copy of Serie with outliers inserted via CreaOutliers.
print(Serie3.mean())
print(Serie.mean())
print('----------')
print(np.median(Serie3))
print(np.median(Serie))

# Variable definitions
N = 1000
S1 = np.random.normal(0, 1, N)
Medias = []; Std = []
Medianas = []; R25_75 = []
# Introduce an increasing number of outliers
for i in np.arange(5, 200):
    S2 = CreaOutliers(S1, i)
    Medias.append(S2.mean())
    Medianas.append(np.median(S2))
    Std.append(S2.std())
    R25_75.append(np.percentile(S2, 75) - np.percentile(S2, 25))
Medias = np.array(Medias)
Medianas = np.array(Medianas)
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Results: according to what we obtained, the mean is highly affected, and so is the standard deviation. Case of a normal distribution
# Variable definitions
N = 1000
S1 = np.random.uniform(0, 1, N)
Medias = []; Std = []
Medianas = []; R25_75 = []
# Introduce an increasing number of outliers
for i in np.arange(5, 200):
    S2 = CreaOutliers(S1, i)
    Medias.append(S2.mean())
    Medianas.append(np.median(S2))
    Std.append(S2.std())
    R25_75.append(np.percentile(S2, 75) - np.percentile(S2, 25))
Medias = np.array(Medias)
Medianas = np.array(Medianas)

fig = pl.figure(figsize=(13, 5))
ax = fig.add_subplot(121)
ax.scatter(Medianas, Medias, c=np.arange(5, 200))
ax.set_xlabel('Median', size=14)
ax.set_ylabel('Mean $\mu$', size=14)
ax = fig.add_subplot(122)
ax.scatter(R25_75, Std, c=np.arange(5, 200))
#ax.set_xlim(0,1)
ax.set_xlabel('Range $25\%$ - $75\%$', size=14)
ax.set_ylabel('Std. deviation $\sigma$', size=14)
pl.show()
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Case of a uniform distribution
fig = pl.figure(figsize=(13, 5))
ax = fig.add_subplot(121)
ax.scatter(Medianas, Medias, c=np.arange(5, 200))
ax.set_xlabel('Median', size=14)
ax.set_ylabel('Mean $\mu$', size=14)
ax = fig.add_subplot(122)
ax.scatter(R25_75, Std, c=np.arange(5, 200))
#ax.set_xlim(0,1)
ax.set_xlabel('Range $25\%$ - $75\%$', size=14)
ax.set_ylabel('Std. deviation $\sigma$', size=14)
pl.show()
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Quantiles Quantiles are a non-parametric measure of the distribution of the data; the best known is the median, but a quantile can be obtained at any level. What they represent: a 25% quantile equal to 3.56 indicates that 25% of the data are equal to or below 3.56. Being a non-parametric measure, quantiles are only slightly affected by errors in the data and by outliers.
S1 = np.random.normal(0, 1, 100)
a = pl.boxplot(S1)
a = pl.xlabel('Serie')
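The interpretation above can be checked numerically (an illustrative sketch, not from the original notebook):

```python
q25 = np.percentile(S1, 25)
print(q25, np.mean(S1 <= q25))   # roughly 0.25 of the data fall at or below q25
```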
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Case of introducing outliers QQ plot of the series with outliers inserted against the series without them
S1 = np.random.normal(0, 1, 100)
S2 = CreaOutliers(S1, 10)
Per1 = np.array([np.percentile(S1, i) for i in range(10, 91, 10)])
Per2 = np.array([np.percentile(S2, i) for i in range(10, 91, 10)])
fig = pl.figure(figsize=(9, 7))
ax = fig.add_subplot(111)
ax.scatter(Per1, Per2, s=40)
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.grid(True)
ax.set_xlabel('Observed deciles', size=14)
ax.set_ylabel('Altered deciles', size=14)
ax.plot([-2, 2], [-2, 2], lw=0.5, c='k')
pl.show()
02_Medidas_Localizacion.ipynb
nicolas998/Analisis_Datos
gpl-3.0
Careful with "=": xx = x means xx and x are the same object, while xx = x.copy() (or any operation that builds a new array) gives two different objects.
xx = x.copy()
xx += 2
xx
x
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
Masking This only works with numpy arrays. numpy array vs. list
xlist = [3,4,5,6,7,8,9]
xarray = np.asarray([3,4,5,6,7,8,9])  # np.asarray(xlist)
xlist*2
xarray*2
strangelist = ["toto", 3, {}, []]
np.asarray(strangelist)*2
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
how to apply masking? Use Numpy ARRAY
x
mask = x > 2
mask
x[mask]  # x[x>2]
x[ (x>2) & (x<2.5) ]  # x[ (x>2) * (x>1.5) ]  # both have to be true
x[ (x>2) | (x>1.5) ]  # x[ (x>2) + (x>1.5) ]  # any have to be true
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
The case of the NaN Value
iamnan = np.NaN
iamnan
iamnan == iamnan
np.inf == np.inf
xwithnan = np.asarray([3, 4, 5, 6, 7, 2, 3, np.NaN, 75, 75])
xwithnan
xwithnan*2
4 + np.NaN
4 / np.NaN
4**np.NaN
np.mean(xwithnan)
np.nanmean(xwithnan)
np.mean(xwithnan[xwithnan == xwithnan])
~(xwithnan == xwithnan)
xwithnan != xwithnan
np.isnan(xwithnan)
xwithnan = [3, 4, 5, 6, 7, 2, 3, np.NaN, 75, 75]
xwithnan[xwithnan == xwithnan]
0 == False
1 == True
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
Your first plot For plotting we are going to use matplotlib. Let's plot two random variables, a vs. b.
a = np.random.rand(30)
b = np.random.rand(30)
# plot within the notebook
%matplotlib inline
import matplotlib.pyplot as mpl
pl = mpl.hist(a)
mpl.scatter(a, b, s=150, facecolors="None", edgecolors="b", lw=3)
notebooks/Basic_Python.ipynb
HUDataScience/StatisticalMethods2016
apache-2.0
Stochastic gradient descent (SGD) Stochastic gradient descent (often shortened to SGD), also known as incremental gradient descent, is a stochastic approximation of gradient descent optimization: an iterative method for minimizing an objective function that is written as a sum of differentiable functions. There are a number of challenges in applying the gradient descent rule. To understand what the problem is, let's look back at the quadratic cost $E_D$. Notice that this cost function has the form $E=\sum_n E_{\bf x}^{(n)}$. In practice, to compute the gradient $\nabla E_D$ we need to compute the gradients $\nabla E_{\bf x}^{(n)}$ separately for each training input ${\bf x^{(n)}}$ and then average them. Unfortunately, when the number of training inputs is very large this can take a long time, and learning thus occurs slowly. Stochastic gradient descent can be used to speed up learning. The idea is to estimate the gradient $\nabla E$ by computing $\nabla E_{\bf x}$ for a small sample of randomly chosen training inputs. By averaging over this small sample it turns out that we can quickly get a good estimate of the true gradient. <!-- To make these ideas more precise, stochastic gradient descent works by randomly picking out a small number $m$ of randomly chosen training inputs. We'll label those random training inputs ${\bf x^{(1)},x^{(2)},\dots,x^{(m)}}$, and refer to them as a mini-batch. Provided the sample size $m$ is large enough we expect that the average value of the $\nabla E_x$ will be roughly equal to the average over all of them, that is $$\frac{1}{m}\sum_{j=1}^m \nabla E_{x^{(j)}} \approx \frac{1}{n}\sum_{j=1}^n \nabla E_{x^{(j)}}$$ where the second sum is over the entire set of training data. --> To connect this explicitly to learning in neural networks, suppose $w_k$ and $b_l$ denote the weights and biases in our neural network. Then stochastic gradient descent works by picking out a randomly chosen mini-batch of training inputs, and training with those, $$ w_k \rightarrow w_k - \eta \sum_{j=1}^m \frac{\partial E_{\bf x}^{(j)}}{\partial w_k} $$ $$ b_l \rightarrow b_l - \eta \sum_{j=1}^m \frac{\partial E_{\bf x}^{(j)}}{\partial b_l} $$ where the sums are over all the training examples in the current mini-batch. Then we pick out another randomly chosen mini-batch and train with those. And so on, until we have exhausted the training inputs, which is said to complete an epoch of training. At that point we start over with a new training epoch. The pseudocode would look like:

- Choose an initial vector of parameters $w$ and learning rate $\eta$.
- Repeat until an approximate minimum is obtained:
  - Randomly shuffle the examples in the training set.
  - For $i=1,2,\dots,n$, do: $w := w - \eta \nabla E_{i}(w)$.

Example: linear regression As seen previously, the objective function to be minimized is: $$E(w)=\sum_{i=1}^{n}E_{i}(w)=\sum_{i=1}^{n}\left(w_{1}+w_{2}x_{i}-y_{i}\right)^{2}.$$ And the gradient descent equations can be written in matrix form as: $$\begin{bmatrix}w_{1}\\w_{2}\end{bmatrix} := \begin{bmatrix}w_{1}\\w_{2}\end{bmatrix} - \eta \begin{bmatrix}2(w_{1}+w_{2}x_{i}-y_{i})\\2x_{i}(w_{1}+w_{2}x_{i}-y_{i})\end{bmatrix}.$$ We'll generate a series of 100 random points aligned more or less along the line $y=a+bx$ with $a=1$ and $b=2$.
%matplotlib inline
from matplotlib import pyplot
import numpy as np

a = 1
b = 2
num_points = 100
np.random.seed(637163)  # we make sure we always generate the same sequence
x_data = np.random.rand(num_points)*20.
y_data = x_data*b + a + 3*(2.*np.random.rand(num_points) - 1)
pyplot.scatter(x_data, y_data)
pyplot.plot(x_data, b*x_data + a)

#### Least squares fit
sum_x = np.sum(x_data)
sum_y = np.sum(y_data)
sum_x2 = np.sum(x_data**2)
sum_xy = np.sum(x_data*y_data)
det = num_points*sum_x2 - sum_x**2
fit_a = (sum_y*sum_x2 - sum_x*sum_xy)/det
fit_b = (num_points*sum_xy - sum_x*sum_y)/det
print(fit_a, fit_b)
pyplot.xlim(-1, 22)
pyplot.ylim(-1, 24)
pyplot.plot(x_data, fit_b*x_data + fit_a);
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
We now write an SGD code for this problem. The training_data is a list of tuples (x, y) representing the training inputs and corresponding desired outputs. The variables epochs and mini_batch_size are what you'd expect - the number of epochs to train for, and the size of the mini-batches to use when sampling. eta is the learning rate, $\eta$. The code works as follows. In each epoch, it starts by randomly shuffling the training data, and then partitions it into mini-batches of the appropriate size. This is an easy way of sampling randomly from the training data. Then for each mini_batch we apply a single step of gradient descent. This is done by the call update_mini_batch(mini_batch, eta), which updates the coefficients according to a single iteration of gradient descent, using just the training data in mini_batch.
epochs = 1000
mini_batch_size = 10
eta = 0.01/mini_batch_size

a = 3.
b = 3.

def update_mini_batch(mini_batch, eta):
    global a, b
    a0 = a
    b0 = b
    for x, y in mini_batch:
        e = eta*(a0 + b0*x - y)
        a -= e
        b -= x*e

training_data = list(zip(x_data, y_data))
for j in range(epochs):
    np.random.shuffle(training_data)
    mini_batches = [training_data[k:k+mini_batch_size]
                    for k in range(0, len(training_data), mini_batch_size)]
    for mini_batch in mini_batches:
        update_mini_batch(mini_batch, eta)
    print("Epoch {0}: {1} {2}".format(j, a, b))
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Challenge 14.2 Use SGD to train the single neuron in the previous notebook using a linearly separable set of 100 points, divided by the line $-\frac{5}{2}x+\frac{3}{2}y+3=0$
### We provide a set of randomly generated training points
num_points = 100
w1 = -2.5
w2 = 1.5
w0 = 3.
np.random.seed(637163)  # we make sure we always generate the same sequence
x_data = np.random.rand(num_points)*10.
y_data = np.random.rand(num_points)*10.
z_data = np.zeros(num_points)
for i in range(len(z_data)):
    if (y_data[i] > (-w0 - w1*x_data[i])/w2):
        z_data[i] = 1.
pyplot.scatter(x_data, y_data, c=z_data, marker='o', linewidth=1.5, edgecolors='black')
pyplot.plot(x_data, (-w1*x_data - w0)/w2)
pyplot.gray()
pyplot.xlim(0, 10)
pyplot.ylim(0, 10);
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
You will need the following auxiliary functions:
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1 - sigmoid(z))
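The challenge is left open in the notebook; purely as an illustration, here is one way a single sigmoid neuron could be trained with SGD on the points generated above (a sketch assuming a quadratic cost; the learning rate, epoch count, and batch size are arbitrary choices, not values from the course):

```python
w = np.zeros(3)                     # (bias, weight for x, weight for y); arbitrary start
eta = 0.5                           # arbitrary learning rate
training = list(zip(x_data, y_data, z_data))
for epoch in range(200):
    np.random.shuffle(training)
    for k in range(0, len(training), 10):          # mini-batches of 10
        for x, y, z in training[k:k+10]:
            net_in = w[0] + w[1]*x + w[2]*y
            a = sigmoid(net_in)                    # neuron output
            delta = (a - z) * sigmoid_prime(net_in)  # gradient of quadratic cost
            w -= eta * delta * np.array([1.0, x, y])
print(w)
```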
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
A simple network to classify handwritten digits Most of this section has been taken from M. Nielsen's free on-line book: "Neural Networks and Deep Learning" http://neuralnetworksanddeeplearning.com/ In this section we discuss a neural network which can solve the more interesting and difficult problem, namely, recognizing individual handwritten digits. The input layer of the network contains neurons encoding the values of the input pixels. Our training data for the network will consist of many 28 by 28 pixel images of scanned handwritten digits, and so the input layer contains 784=28×28 neurons. The input pixels are greyscale, with a value of 0.0 representing white, a value of 1.0 representing black, and in between values representing gradually darkening shades of grey. The second layer of the network is a hidden layer. We denote the number of neurons in this hidden layer by $n$ , and we'll experiment with different values for $n$ . The example shown illustrates a small hidden layer, containing just $n=15$ neurons. The output layer of the network contains 10 neurons. If the first neuron fires, i.e., has an output $\sim 1$ , then that will indicate that the network thinks the digit is a 0 . If the second neuron fires then that will indicate that the network thinks the digit is a 1 . And so on. A little more precisely, we number the output neurons from 0 through 9 , and figure out which neuron has the highest activation value. If that neuron is, say, neuron number 6 , then our network will guess that the input digit was a 6 . And so on for the other output neurons. <img src="figures/nnetwork.png" style="width: 500px;"/> Network to identify single digits. The output layer has 10 neurons, one for each digit. The first thing we'll need is a data set to learn from - a so-called training data set. We'll use the MNIST data set, which contains tens of thousands of scanned images of handwritten digits, together with their correct classifications. MNIST's name comes from the fact that it is a modified subset of two data sets collected by NIST, the United States' National Institute of Standards and Technology. Here's a few images from MNIST: <img src="figures/digits_separate.png" style="width: 250px;"/> The MNIST data comes in two parts. The first part contains 60,000 images to be used as training data. These images are scanned handwriting samples from 250 people, half of whom were US Census Bureau employees, and half of whom were high school students. The images are greyscale and 28 by 28 pixels in size. The second part of the MNIST data set is 10,000 images to be used as test data. Again, these are 28 by 28 greyscale images. We'll use the test data to evaluate how well our neural network has learned to recognize digits. To make this a good test of performance, the test data was taken from a different set of 250 people than the original training data (albeit still a group split between Census Bureau employees and high school students). This helps give us confidence that our system can recognize digits from people whose writing it didn't see during training. In practice, we are going to split the data a little differently. We'll leave the test images as is, but split the 60,000-image MNIST training set into two parts: a set of 50,000 images, which we'll use to train our neural network, and a separate 10,000 image validation set. We'll use the notation $x$ to denote a training input. It'll be convenient to regard each training input $x$ as a 28×28=784-dimensional vector. 
Each entry in the vector represents the grey value for a single pixel in the image. We'll denote the corresponding desired output by y=y(x) , where y is a 10 -dimensional vector. For example, if a particular training image, $x$ , depicts a 6 , then $y(x)=(0,0,0,0,0,0,1,0,0,0)^T$ is the desired output from the network. Note that T here is the transpose operation, turning a row vector into an ordinary (column) vector.
""" mnist_loader ~~~~~~~~~~~~ A library to load the MNIST image data. For details of the data structures that are returned, see the doc strings for ``load_data`` and ``load_data_wrapper``. In practice, ``load_data_wrapper`` is the function usually called by our neural network code. """ #### Libraries # Standard library import pickle import gzip # Third-party libraries import numpy as np def load_data(): """Return the MNIST data as a tuple containing the training data, the validation data, and the test data. The ``training_data`` is returned as a tuple with two entries. The first entry contains the actual training images. This is a numpy ndarray with 50,000 entries. Each entry is, in turn, a numpy ndarray with 784 values, representing the 28 * 28 = 784 pixels in a single MNIST image. The second entry in the ``training_data`` tuple is a numpy ndarray containing 50,000 entries. Those entries are just the digit values (0...9) for the corresponding images contained in the first entry of the tuple. The ``validation_data`` and ``test_data`` are similar, except each contains only 10,000 images. This is a nice data format, but for use in neural networks it's helpful to modify the format of the ``training_data`` a little. That's done in the wrapper function ``load_data_wrapper()``, see below. """ f = gzip.open('data/mnist.pkl.gz', 'rb') training_data, validation_data, test_data = pickle.load(f, encoding='latin1') f.close() return (training_data, validation_data, test_data) def load_data_wrapper(): """Return a tuple containing ``(training_data, validation_data, test_data)``. Based on ``load_data``, but the format is more convenient for use in our implementation of neural networks. In particular, ``training_data`` is a list containing 50,000 2-tuples ``(x, y)``. ``x`` is a 784-dimensional numpy.ndarray containing the input image. ``y`` is a 10-dimensional numpy.ndarray representing the unit vector corresponding to the correct digit for ``x``. ``validation_data`` and ``test_data`` are lists containing 10,000 2-tuples ``(x, y)``. In each case, ``x`` is a 784-dimensional numpy.ndarry containing the input image, and ``y`` is the corresponding classification, i.e., the digit values (integers) corresponding to ``x``. Obviously, this means we're using slightly different formats for the training data and the validation / test data. These formats turn out to be the most convenient for use in our neural network code.""" tr_d, va_d, te_d = load_data() training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]] training_results = [vectorized_result(y) for y in tr_d[1]] training_data = list(zip(training_inputs, training_results)) validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]] validation_data = list(zip(validation_inputs, va_d[1])) test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]] test_data = list(zip(test_inputs, te_d[1])) return (training_data, validation_data, test_data) def vectorized_result(j): """Return a 10-dimensional unit vector with a 1.0 in the jth position and zeroes elsewhere. This is used to convert a digit (0...9) into a corresponding desired output from the neural network.""" e = np.zeros((10, 1)) e[j] = 1.0 return e
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Note also that the biases and weights are stored as lists of Numpy matrices. So, for example net.weights[1] is a Numpy matrix storing the weights connecting the second and third layers of neurons. (It's not the first and second layers, since Python's list indexing starts at 0.) Since net.weights[1] is rather verbose, let's just denote that matrix $w$. It's a matrix such that $w_{jk}$ is the weight for the connection between the $k^{th}$ neuron in the second layer, and the $j^{th}$ neuron in the third layer. This ordering of the $j$ and $k$ indices may seem strange. The big advantage of using this ordering is that it means that the vector of activations of the third layer of neurons is: $$a'=\mathrm {sigmoid}(wa+b)$$ There's quite a bit going on in this equation, so let's unpack it piece by piece. $a$ is the vector of activations of the second layer of neurons. To obtain $a'$ we multiply $a$ by the weight matrix $w$, and add the vector $b$ of biases. We then apply the function sigmoid elementwise to every entry in the vector $wa+b$. Of course, the main thing we want our Network objects to do is to learn. To that end we'll give them an SGD method which implements stochastic gradient descent. <!-- The training_data is a list of tuples `(x, y)` representing the training inputs and corresponding desired outputs. The variables `epochs` and `mini_batch_size` are what you'd expect - the number of epochs to train for, and the size of the mini-batches to use when sampling. `eta` is the learning rate, $\eta$. If the optional argument `test_data` is supplied, then the program will evaluate the network after each epoch of training, and print out partial progress. This is useful for tracking progress, but slows things down substantially. The code works as follows. In each epoch, it starts by randomly shuffling the training data, and then partitions it into mini-batches of the appropriate size. This is an easy way of sampling randomly from the training data. Then for each `mini_batch` we apply a single step of gradient descent. This is done by the code `self.update_mini_batch(mini_batch, eta)`, which updates the network weights and biases according to a single iteration of gradient descent, using just the training data in `mini_batch`. --> Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y) This invokes something called the backpropagation algorithm, which is a fast way of computing the gradient of the cost function. So update_mini_batch works simply by computing these gradients for every training example in the mini_batch, and then updating self.weights and self.biases appropriately. The activation $a^l_j$ of the $j^{th}$ neuron in the $l^{th}$ layer is related to the activations in the $(l-1)^{th}$ layer by the equation $$a^l_j=\mathrm{sigmoid}\left(\sum_k w_{jk}^l a^{l-1}_k+b^l_j\right)$$ where the sum is over all neurons $k$ in the $(l-1)^{th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{th}$ layer of neurons, that is, the entry in the $j^{th}$ row and $k^{th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$.
With these notations in mind, these equations can be rewritten in the beautiful and compact vectorized form $$a^l=\mathrm{sigmoid}(w^la^{l-1}+b^l).$$ This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the sigmoid function. Apart from self.backprop the program is self-explanatory - all the heavy lifting is done in self.SGD and self.update_mini_batch, which we've already discussed. The self.backprop method makes use of a few extra functions to help in computing the gradient, namely sigmoid_prime, which computes the derivative of the sigmoid function, and self.cost_derivative. You can get the gist of these (and perhaps the details) just by looking at the code and documentation strings. Note that while the program appears lengthy, much of the code is documentation strings intended to make the code easy to understand. In fact, the program contains just 74 lines of non-whitespace, non-comment code.
""" network.py ~~~~~~~~~~ A module to implement the stochastic gradient descent learning algorithm for a feedforward neural network. Gradients are calculated using backpropagation. Note that I have focused on making the code simple, easily readable, and easily modifiable. It is not optimized, and omits many desirable features. """ #### Libraries # Standard library import random # Third-party libraries import numpy as np class Network(object): def __init__(self, sizes): """The list ``sizes`` contains the number of neurons in the respective layers of the network. For example, if the list was [2, 3, 1] then it would be a three-layer network, with the first layer containing 2 neurons, the second layer 3 neurons, and the third layer 1 neuron. The biases and weights for the network are initialized randomly, using a Gaussian distribution with mean 0, and variance 1. Note that the first layer is assumed to be an input layer, and by convention we won't set any biases for those neurons, since biases are only ever used in computing the outputs from later layers.""" self.num_layers = len(sizes) self.sizes = sizes self.biases = [np.random.randn(y, 1) for y in sizes[1:]] self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])] def feedforward(self, a): """Return the output of the network if ``a`` is input.""" for b, w in zip(self.biases, self.weights): a = sigmoid(np.dot(w, a)+b) return a def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None): """Train the neural network using mini-batch stochastic gradient descent. The ``training_data`` is a list of tuples ``(x, y)`` representing the training inputs and the desired outputs. The other non-optional parameters are self-explanatory. If ``test_data`` is provided then the network will be evaluated against the test data after each epoch, and partial progress printed out. This is useful for tracking progress, but slows things down substantially.""" if test_data: n_test = len(test_data) n = len(training_data) for j in range(epochs): random.shuffle(training_data) mini_batches = [ training_data[k:k+mini_batch_size] for k in range(0, n, mini_batch_size)] for mini_batch in mini_batches: self.update_mini_batch(mini_batch, eta) if test_data: print ("Epoch {0}: {1} / {2}".format( j, self.evaluate(test_data), n_test)) else: print ("Epoch {0} complete".format(j)) def update_mini_batch(self, mini_batch, eta): """Update the network's weights and biases by applying gradient descent using backpropagation to a single mini batch. The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta`` is the learning rate.""" nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] for x, y in mini_batch: delta_nabla_b, delta_nabla_w = self.backprop(x, y) nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)] nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)] self.weights = [w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)] self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)] def backprop(self, x, y): """Return a tuple ``(nabla_b, nabla_w)`` representing the gradient for the cost function C_x. 
``nabla_b`` and ``nabla_w`` are layer-by-layer lists of numpy arrays, similar to ``self.biases`` and ``self.weights``.""" nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] # feedforward activation = x activations = [x] # list to store all the activations, layer by layer zs = [] # list to store all the z vectors, layer by layer for b, w in zip(self.biases, self.weights): z = np.dot(w, activation)+b zs.append(z) activation = sigmoid(z) activations.append(activation) # backward pass delta = self.cost_derivative(activations[-1], y) * \ sigmoid_prime(zs[-1]) nabla_b[-1] = delta nabla_w[-1] = np.dot(delta, activations[-2].transpose()) # Note that the variable l in the loop below is used a little # differently to the notation in Chapter 2 of the book. Here, # l = 1 means the last layer of neurons, l = 2 is the # second-last layer, and so on. It's a renumbering of the # scheme in the book, used here to take advantage of the fact # that Python can use negative indices in lists. for l in range(2, self.num_layers): z = zs[-l] sp = sigmoid_prime(z) delta = np.dot(self.weights[-l+1].transpose(), delta) * sp nabla_b[-l] = delta nabla_w[-l] = np.dot(delta, activations[-l-1].transpose()) return (nabla_b, nabla_w) def evaluate(self, test_data): """Return the number of test inputs for which the neural network outputs the correct result. Note that the neural network's output is assumed to be the index of whichever neuron in the final layer has the highest activation.""" test_results = [(np.argmax(self.feedforward(x)), y) for (x, y) in test_data] return sum(int(x == y) for (x, y) in test_results) def cost_derivative(self, output_activations, y): """Return the vector of partial derivatives \partial C_x / \partial a for the output activations.""" return (output_activations-y) #### Miscellaneous functions def sigmoid(z): """The sigmoid function.""" return 1.0/(1.0+np.exp(-z)) def sigmoid_prime(z): """Derivative of the sigmoid function.""" return sigmoid(z)*(1-sigmoid(z))
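As a quick illustration of the vectorized feedforward rule $a'=\mathrm{sigmoid}(wa+b)$ discussed above, a tiny untrained network can be evaluated directly (a sketch, not part of the original notebook):

```python
net = Network([2, 3, 1])                          # 2 inputs, 3 hidden neurons, 1 output
print(net.feedforward(np.array([[0.5], [0.2]])))  # one forward pass through random weights
```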
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
We first load the MNIST data:
training_data, validation_data, test_data = load_data_wrapper()
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
After loading the MNIST data, we'll set up a Network with 30 hidden neurons.
net = Network([784, 30, 10])
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Finally, we'll use stochastic gradient descent to learn from the MNIST training_data over 30 epochs, with a mini-batch size of 10, and a learning rate of $\eta$=3.0:
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
14_02_multilayer-networks.ipynb
afeiguin/comp-phys
mit
Note that head([]) is an error since you can't find the first item in an empty list.
tail([1,2])
tail([1])
python/Main.ipynb
banbh/little-pythoner
apache-2.0
Note that tail([]) is an error since the tail of a list is what's left over when you remove the head, and the empty list has no head.
cons(1, [2,3])
cons(1, [])
is_num(99)
is_num('hello')
is_str(99)
is_str('hello')
is_str_eq('hello', 'hello')
is_str_eq('hello', 'goodbye')
add1(99)
sub1(99)
python/Main.ipynb
banbh/little-pythoner
apache-2.0
Note that sub1(0) is an error because you can't subtract 1 from 0. (Actually it is possible if you allow negative numbers, but in these exercises we will not allow such numbers.) All Strings Write a function, is_list_of_strings, that determines whether a list contains only strings. Below are some examples of how it should behave.
from solutions import is_list_of_strings
is_list_of_strings(['hello', 'goodbye'])
is_list_of_strings([1, 'aa'])
is_list_of_strings([])
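The cell above imports a ready-made solution; purely as an illustration, one way to write it in the recursive style these exercises encourage might be (a sketch using the head, tail and is_str primitives introduced above):

```python
def is_list_of_strings(lst):
    # An empty list trivially contains only strings.
    if lst == []:
        return True
    # Otherwise the head must be a string and the rest must also qualify.
    return is_str(head(lst)) and is_list_of_strings(tail(lst))
```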
python/Main.ipynb
banbh/little-pythoner
apache-2.0
The Spector dataset is distributed with statsmodels. You can access a vector of values for the dependent variable (endog) and a matrix of regressors (exog) like this:
data = sm.datasets.spector.load_pandas()
exog = data.exog
endog = data.endog
print(sm.datasets.spector.NOTE)
print(data.exog.head())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Then, we add a constant to the matrix of regressors:
exog = sm.add_constant(exog, prepend=True)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
To create your own Likelihood Model, you simply need to overwrite the loglike method.
class MyProbit(GenericLikelihoodModel):
    def loglike(self, params):
        exog = self.exog
        endog = self.endog
        q = 2 * endog - 1
        return stats.norm.logcdf(q*np.dot(exog, params)).sum()
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Estimate the model and print a summary:
sm_probit_manual = MyProbit(endog, exog).fit()
print(sm_probit_manual.summary())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Compare your Probit implementation to statsmodels' "canned" implementation:
sm_probit_canned = sm.Probit(endog, exog).fit()

print(sm_probit_canned.params)
print(sm_probit_manual.params)

print(sm_probit_canned.cov_params())
print(sm_probit_manual.cov_params())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Notice that the GenericLikelihoodModel class provides automatic differentiation, so we didn't have to provide Hessian or Score functions in order to calculate the covariance estimates. Example 2: Negative Binomial Regression for Count Data Consider a negative binomial regression model for count data with log-likelihood (type NB-2) function expressed as: $$ \mathcal{L}(\beta_j; y, \alpha) = \sum_{i=1}^n y_i \ln \left( \frac{\alpha \exp(X_i'\beta)}{1+\alpha \exp(X_i'\beta)} \right) - \frac{1}{\alpha} \ln(1+\alpha \exp(X_i'\beta)) + \ln \Gamma(y_i + 1/\alpha) - \ln \Gamma(y_i+1) - \ln \Gamma(1/\alpha) $$ with a matrix of regressors $X$, a vector of coefficients $\beta$, and the negative binomial heterogeneity parameter $\alpha$. Using the nbinom distribution from scipy, we can write this likelihood simply as:
import numpy as np
from scipy.stats import nbinom

def _ll_nb2(y, X, beta, alph):
    mu = np.exp(np.dot(X, beta))
    size = 1/alph
    prob = size/(size + mu)
    ll = nbinom.logpmf(y, size, prob)
    return ll
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
New Model Class We create a new model class which inherits from GenericLikelihoodModel:
from statsmodels.base.model import GenericLikelihoodModel

class NBin(GenericLikelihoodModel):
    def __init__(self, endog, exog, **kwds):
        super(NBin, self).__init__(endog, exog, **kwds)

    def nloglikeobs(self, params):
        alph = params[-1]
        beta = params[:-1]
        ll = _ll_nb2(self.endog, self.exog, beta, alph)
        return -ll

    def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):
        # we have one additional parameter and we need to add it for summary
        self.exog_names.append('alpha')
        if start_params is None:
            # Reasonable starting values
            start_params = np.append(np.zeros(self.exog.shape[1]), .5)
            # intercept
            start_params[-2] = np.log(self.endog.mean())
        return super(NBin, self).fit(start_params=start_params,
                                     maxiter=maxiter, maxfun=maxfun,
                                     **kwds)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Two important things to notice: nloglikeobs: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix). start_params: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization. That's it! You're done! Usage Example The Medpar dataset is hosted in CSV format at the Rdatasets repository. We load it through statsmodels' get_rdataset helper (which reads the CSV with pandas) and then print the first few rows:
import statsmodels.api as sm

medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data
medpar.head()
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
The model we are interested in has a vector of non-negative integers as dependent variable (los), and 5 regressors: Intercept, type2, type3, hmo, white. For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.
y = medpar.los
X = medpar[["type2", "type3", "hmo", "white"]].copy()
X["constant"] = 1
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Then, we fit the model and extract some information:
mod = NBin(y, X)
res = mod.fit()
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Extract parameter estimates, standard errors, p-values, AIC, etc.:
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('P-values: ', res.pvalues)
print('AIC: ', res.aic)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
As usual, you can obtain a full list of available information by typing dir(res). We can also look at the summary of the estimation results.
print(res.summary())
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
Testing We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.
res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)
print(res_nbin.summary())
print(res_nbin.params)
print(res_nbin.bse)
examples/notebooks/generic_mle.ipynb
ChadFulton/statsmodels
bsd-3-clause
From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin: Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)

Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets. Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis=1)

# Show the new dataset with 'Survived' removed
display(data.head())
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i]. To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
def accuracy_score(truth, pred):
    """ Returns accuracy score for input truth and predictions. """

    # Ensure that the number of predictions matches number of outcomes
    if len(truth) == len(pred):
        # Calculate and return the accuracy as a percent
        return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
    else:
        return "Number of predictions does not match number of outcomes!"

# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype=int))
print(accuracy_score(outcomes[:5], predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off. Making Predictions If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The predictions_0 function below will always predict that a passenger did not survive.
def predictions_0(data):
    """ Model with no features. Always predicts a passenger did not survive. """

    predictions = []
    for _, passenger in data.iterrows():
        # Predict the survival of 'passenger'
        predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_0(data)
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 1 Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived? Hint: Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Answer: Predictions have an accuracy of 61.62%. Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the visuals.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex.
vs.survival_stats(data, outcomes, 'Sex')
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """

    predictions = []
    for _, passenger in data.iterrows():
        if passenger['Sex'] == "female":
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_1(data)
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 2 How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive? Hint: Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Answer: Predictions have an accuracy of 78.68%. Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age.
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """

    predictions = []
    for _, passenger in data.iterrows():
        if passenger["Sex"] == "female":
            predictions.append(1)
        elif passenger["Sex"] == "male" and passenger["Age"] < 10:
            predictions.append(1)
        else:
            predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_2(data)
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 3 How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived? Hint: Run the code cell below to see the accuracy of this prediction.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Answer: Predictions have an accuracy of 79.35%. Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over using the feature Sex alone. Now it's your turn: find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. Pclass, Sex, Age, SibSp, and Parch are some suggested features to try. Use the survival_stats function below to examine various survival statistics. Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
vs.survival_stats(data, outcomes, 'Sex', [ "Pclass == 3" ])
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'" , "Embarked == C"])
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
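Another way to explore which features are informative is to tabulate survival rates directly with pandas (a minimal sketch, assuming data is the features DataFrame and outcomes is the aligned Series of survival outcomes used above).
# Survival rate broken down by Sex and Pclass.
# Assumes `data` (features DataFrame) and `outcomes` (aligned Series) from above.
explore = data.copy()
explore['Survived'] = outcomes
print(explore.groupby(['Sex', 'Pclass'])['Survived'].mean())

# Survival rate for male passengers, split on whether they are younger than 10.
males = explore[explore['Sex'] == 'male']
print(males.groupby(males['Age'] < 10)['Survived'].mean())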
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """

    predictions = []
    for _, passenger in data.iterrows():
        if passenger["Sex"] == "female":
            if passenger["Pclass"] == 3:
                predictions.append(0)
            else:
                predictions.append(1)
        else:
            if passenger['Age'] < 10 and passenger['Pclass'] in (1, 2):
                predictions.append(1)
            elif passenger['Age'] < 18 and passenger['Pclass'] == 1:
                predictions.append(1)
            else:
                predictions.append(0)

    # Return our predictions
    return pd.Series(predictions)

# Make the predictions
predictions = predictions_3(data)
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
Question 4 Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions? Hint: Run the code cell below to see the accuracy of your predictions.
print(accuracy_score(outcomes, predictions))
titanic/titanic_survival_exploration[1].ipynb
aattaran/Machine-Learning-with-Python
bsd-3-clause
The sock problem
Created by Yuzhong Huang
There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly get 2 socks from a drawer, and it turns out to be a pair (same color), but we don't know the color of these socks. What is the chance that we picked the first drawer? To make calculating our likelihood easier, we start by defining a multiply function. The function is written in a functional way primarily for fun.
from functools import reduce
import operator

def multiply(items):
    """
    multiply takes a list of numbers, multiplies all of them, and returns the result

    Args:
        items (list): The list of numbers

    Return:
        the items multiplied together
    """
    return reduce(operator.mul, items, 1)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
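A quick usage example, assuming the function as defined above: multiply([40, 39]) gives the number of ordered ways to draw two white socks from the first drawer, and the empty product falls back to 1.
print(multiply([40, 39]))  # 40 * 39 = 1560
print(multiply([]))        # empty product is 1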
Next we define a drawer Suite. It lets us update on taking n matching socks, where n can be at most the smallest count of a single color in either drawer. To keep the likelihood function simple, we ignore the edge case of taking 11 or more black socks, in which case only drawer 2 would be possible.
class Drawers(Suite):
    def Likelihood(self, data, hypo):
        """
        Likelihood returns the likelihood given a bayesian update consisting
        of a particular hypothesis and new data. In the case of our drawer
        problem, the probabilities change with the number of socks we take
        (without replacement), so we start by defining lists for each color
        sock in each drawer.

        Args:
            data (int): The number of socks we take
            hypo (str): The hypothesis we are updating

        Return:
            the likelihood for a hypothesis
        """
        drawer1W = []
        drawer1B = []
        drawer2W = []
        drawer2B = []

        for i in range(data):
            drawer1W.append(40-i)
            drawer1B.append(10-i)
            drawer2W.append(20-i)
            drawer2B.append(30-i)

        if hypo == 'drawer1':
            return multiply(drawer1W) + multiply(drawer1B)
        if hypo == 'drawer2':
            return multiply(drawer2W) + multiply(drawer2B)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, define our hypotheses and create the drawer Suite.
hypos = ['drawer1', 'drawer2']
drawers = Drawers(hypos)
drawers.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, update the drawers by taking two matching socks.
drawers.Update(2)
drawers.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
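We can check this update by hand: count the ordered ways to draw two socks of the same color from each drawer and normalize (a minimal sketch using only the drawer counts stated above).
# Ordered ways to draw two same-colored socks from each drawer.
like1 = 40*39 + 10*9    # drawer 1: 1650
like2 = 20*19 + 30*29   # drawer 2: 1250
print(like1 / float(like1 + like2))  # ~0.569, matching drawer1 above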
It seems that the drawer dominated by one color (40 white, 10 black) is more likely after the update. To confirm this suspicion, let's restart the problem, this time taking 5 socks of the same color.
hypos = ['drawer1', 'drawer2']
drawers5 = Drawers(hypos)
drawers5.Update(5)
drawers5.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
We see that after we take 5 matching socks, the probability that the socks came from drawer 1 is 80.6%. We can now conclude that the drawer with the more extreme split of colors becomes more likely whenever we update with matching socks.
Chess-playing twins
Allen Downey
Two identical twins are members of my chess club, but they never show up on the same day; in fact, they strictly alternate the days they show up. I can't tell them apart except that one is a better player than the other: Avery beats me 60% of the time and I beat Blake 70% of the time. If I play one twin on Monday and win, and the other twin on Tuesday and lose, which twin did I play on which day? To solve this problem, we first need to create our hypotheses. In this case, we have: hypo1: Avery Monday, Blake Tuesday; hypo2: Blake Monday, Avery Tuesday. We will abbreviate Avery to A and Blake to B.
twins = Pmf()
twins['AB'] = 1
twins['BA'] = 1
twins.Normalize()
twins.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Now we update our hypotheses with a win on the first day. We have a 40% chance of winning against Avery and a 70% chance of winning against Blake.
# win day 1
twins['AB'] *= .4
twins['BA'] *= .7
twins.Normalize()
twins.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
At this point, there is only a 36% chance that we played Avery on the first day and a 64% chance that we played Blake. However, let's see what happens when we update with a loss on the second day.
# lose day 2
twins['AB'] *= .6
twins['BA'] *= .3
twins.Normalize()
twins.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
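The same numbers drop out of a hand calculation: multiply the win and loss probabilities for each ordering and normalize (a minimal check using only the 60%/70% figures from the problem statement).
ab = 0.4 * 0.6   # P(beat Avery on Monday) * P(lose to Blake on Tuesday)
ba = 0.7 * 0.3   # P(beat Blake on Monday) * P(lose to Avery on Tuesday)
total = ab + ba
print(ab / total, ba / total)  # ~0.533 and ~0.467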
Interesting. Now there is a 53% chance that we played Avery then Blake and a 47% chance that we played Blake then Avery.
Who saw that movie?
Nathan Yee
Every year the MPAA (Motion Picture Association of America) publishes a report about theatrical market statistics. Included in the report are both the gender and the ethnicity share of the top 5 highest-grossing films. If a randomly selected person in the United States went to Pixar's "Inside Out", what is the probability that they are both female and Asian?
Data:

| Gender                       | Male (%) | Female (%) |
| :--------------------------- | :------- | :--------- |
| Furious 7                    | 56       | 44         |
| Inside Out                   | 46       | 54         |
| Avengers: Age of Ultron      | 58       | 42         |
| Star Wars: The Force Awakens | 58       | 42         |
| Jurassic World               | 55       | 45         |

| Ethnicity                    | Caucasian (%) | African-American (%) | Hispanic (%) | Asian (%) | Other (%) |
| :--------------------------- | :------------ | :------------------- | :----------- | :-------- | :-------- |
| Furious 7                    | 40            | 22                   | 25           | 8         | 5         |
| Inside Out                   | 54            | 15                   | 16           | 9         | 5         |
| Avengers: Age of Ultron      | 50            | 16                   | 20           | 10        | 5         |
| Star Wars: The Force Awakens | 61            | 12                   | 15           | 7         | 5         |
| Jurassic World               | 39            | 16                   | 19           | 11        | 6         |

Since we are picking a random person in the United States, we can use demographics of the United States as an informed prior.

| Demographic                  | Caucasian (%) | African-American (%) | Hispanic (%) | Asian (%) | Other (%) |
| :--------------------------- | :------------ | :------------------- | :----------- | :-------- | :-------- |
| Population United States     | 63.7          | 12.2                 | 16.3         | 4.7       | 3.1       |

Note: Demographic data was gathered from the US Census Bureau. There may be errors within 2% due to rounding. Also note that certain races were combined to fit our previous demographic groupings.
To make writing code easier, we will encode the data in a numerical structure. The first item in the tuple corresponds to gender, the second to ethnicity.

| Gender          | Male | Female |
| :-------------- | :--- | :----- |
| Encoding number | 0    | 1      |

| Ethnicity       | Caucasian | African-American | Hispanic | Asian | Other |
| :-------------- | :-------- | :--------------- | :------- | :---- | :---- |
| Encoding number | 0         | 1                | 2        | 3     | 4     |

So, for example, (female, Asian) = (1, 3).
The first piece of code we write will be our Movie class. This version of Suite will have a special likelihood function that takes in a movie and returns the likelihood for a (gender, ethnicity) hypothesis.
class Movie(Suite):
    def Likelihood(self, data, hypo):
        """
        Likelihood returns the likelihood given a bayesian update consisting
        of a particular hypothesis and data. In this case, we first calculate
        the probability that a person of the hypothesized gender saw the movie,
        then the probability that a person of the hypothesized ethnicity saw
        the movie. Finally, we multiply the two to get the likelihood that a
        person of that gender and ethnicity saw the movie.

        Args:
            data (str): The title of the movie
            hypo (tuple): The (gender, ethnicity) hypothesis we are updating

        Return:
            the likelihood for a hypothesis
        """
        movie = data
        gender = hypo[0]
        ethnicity = hypo[1]

        # first calculate update based on gender
        movies_gender = {'Furious 7'                    : {0: 56, 1: 44},
                         'Inside Out'                   : {0: 46, 1: 54},
                         'Avengers: Age of Ultron'      : {0: 58, 1: 42},
                         'Star Wars: The Force Awakens' : {0: 58, 1: 42},
                         'Jurassic World'               : {0: 55, 1: 45}
                         }
        like_gender = movies_gender[movie][gender]

        # second calculate update based on ethnicity
        movies_ethnicity = {'Furious 7'                    : {0: 40, 1: 22, 2: 25, 3: 8,  4: 5},
                            'Inside Out'                   : {0: 54, 1: 15, 2: 16, 3: 9,  4: 4},
                            'Avengers: Age of Ultron'      : {0: 50, 1: 16, 2: 20, 3: 10, 4: 5},
                            'Star Wars: The Force Awakens' : {0: 61, 1: 12, 2: 15, 3: 7,  4: 5},
                            'Jurassic World'               : {0: 39, 1: 16, 2: 19, 3: 11, 4: 6}
                            }
        like_ethnicity = movies_ethnicity[movie][ethnicity]

        # multiply the two together and return
        return like_gender * like_ethnicity
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next we make our hypotheses and input them as tuples into the Movie class.
genders = range(0, 2)
ethnicities = range(0, 5)
pairs = [(gender, ethnicity) for gender in genders for ethnicity in ethnicities]

movie = Movie(pairs)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
We decided that we are picking a random person in the United States, so we can use population demographics of the United States as an informed prior. We will assume that the United States is 50% male and 50% female. The population percentages are listed in the same order in which we enumerate the ethnicities.
population_percent = [63.7, 12.2, 16.3, 4.7, 3.1,
                      63.7, 12.2, 16.3, 4.7, 3.1]

for i in range(len(population_percent)):
    movie[pairs[i]] = population_percent[i]

movie.Normalize()
movie.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, update with the movie Inside Out.
movie.Update('Inside Out')
movie.Normalize()
movie.Print()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
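Before reading off the answer, it is worth confirming it with a direct calculation that uses the same priors and likelihoods as the Movie class (a minimal sketch; the Inside Out ethnicity shares below are the ones hard-coded in the class).
priors = [63.7, 12.2, 16.3, 4.7, 3.1]   # ethnicity prior, identical for each gender
eth_likes = [54, 15, 16, 9, 4]          # Inside Out ethnicity shares from the Movie class
gender_likes = [46, 54]                 # Inside Out gender shares (male, female)

total = sum(g * p * e for g in gender_likes
            for p, e in zip(priors, eth_likes))
female_asian = gender_likes[1] * priors[3] * eth_likes[3]
print(female_asian / total)  # ~0.0058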
Given that a random person has seen Inside Out, the probability that the person is both female and Asian is 0.58%. Interestingly, when we update our hypotheses with our data, the chance that the randomly selected person is Caucasian goes up to 87%. It seems that our model just increases the chance that the randomly selected person is Caucasian after seeing a movie. Validation: To convince ourselves that the model is working properly, let's look at the gender data alone. We know that 54% of people who saw Inside Out were female, so if we sum the probabilities of the female hypotheses, we should get 54%.
total = 0
for pair in pairs:
    if pair[0] == 1:
        total += movie[pair]

print(total)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Parking meter theft
From DASL (http://lib.stat.cmu.edu/DASL/Datafiles/brinkdat.html):
The variable CON in the datafile Parking Meter Theft represents monthly parking meter collections by the principal contractor in New York City from May 1977 to March 1981. In addition to contractor collections, the city made collections from a number of "control" meters close to City Hall. These are recorded under the variable CITY. From May 1978 to April 1980 the contractor was Brink's. In 1983 the city presented evidence in court that Brink's employees had been stealing parking meter moneys - delivering to the city less than the total collections. The court was satisfied that theft had taken place, but the actual amount of shortage was in question. Assume that there was no theft before or after Brink's tenure and estimate the monthly shortage and its 95% confidence limits.
So we are asking three questions: What is the probability that money has been stolen? What is the probability that the variance of the Brink's collections is higher? And how much money was stolen? This problem is very similar to the "Improving Reading Ability" problem by Allen Downey: we will estimate normal models for the two groups and compare them. First we load our data from the csv file.
import pandas as pd

df = pd.read_csv('parking.csv', skiprows=17, delimiter='\t')
df.head()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
First we need to normalize the CON (contractor) collections by the amount gathered by the CITY. This will give us a ratio of contractor collections to city collections. If we just use the raw contractor collections, fluctuations throughout the months could mislead us.
df['RATIO'] = df['CON'] / df['CITY']
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, let's see how the means of the RATIO data compare between the general contractors and Brink's.
grouped = df.groupby('BRINK')
for name, group in grouped:
    print(name, group.RATIO.mean())
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
We see that for every dollar gathered by the city's control meters, the general contractors report 244.7 dollars while Brink's reports only 230 dollars. Now, we will fit the data with a Normal class that computes the likelihood of a sample from a normal distribution. This is a similar process to what we did in the improved reading ability problem.
from scipy.stats import norm

class Normal(Suite, Joint):
    def Likelihood(self, data, hypo):
        """
        data: sequence of collection ratios
        hypo: mu, sigma
        """
        mu, sigma = hypo
        likes = norm.pdf(data, mu, sigma)
        return np.prod(likes)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
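One caution about this Likelihood: np.prod of many small densities can underflow to zero for long data sequences. The few dozen monthly ratios here are fine, but a log-space version is the usual safeguard (a minimal sketch, assuming it would be paired with a log-space update rather than Suite.Update).
def log_likelihood(data, hypo):
    """Sum of log densities; avoids underflow for long data sequences."""
    mu, sigma = hypo
    return np.sum(norm.logpdf(data, mu, sigma))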
Next, we need to calculate marginal distributions for both Brink's and the general contractors. To get the marginal distribution of the general contractors, start by generating a uniform grid of prior values for mu and sigma.
mus = np.linspace(210, 270, 301)
sigmas = np.linspace(10, 65, 301)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, use itertools.product to enumerate all pairs of mu and sigma.
from itertools import product

general = Normal(product(mus, sigmas))
data = df[df.BRINK==0].RATIO
general.Update(data)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next we will plot the probability of each mu-sigma pair on a contour plot.
thinkplot.Contour(general, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, extract the marginal distribution of mu from general.
pmf_mu0 = general.Marginal(0)
thinkplot.Pdf(pmf_mu0)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
And the marginal distribution of sigma from general.
pmf_sigma0 = general.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Next, we will run this again for Brink's and see what the difference is between the groups. This will give us insight into whether or not Brink's employees were stealing parking money from the city. First, use the same range of mus and sigmas to calculate the marginal distributions for Brink's.
brink = Normal(product(mus, sigmas))
data = df[df.BRINK==1].RATIO
brink.Update(data)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Plot the mus and sigmas on a contour plot to see what is going on.
thinkplot.Contour(brink, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Extract the marginal distribution of mu from brink.
pmf_mu1 = brink.Marginal(0)
thinkplot.Pdf(pmf_mu1)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Extract the marginal distribution of sigma from brink.
pmf_sigma1 = brink.Marginal(1)
thinkplot.Pdf(pmf_sigma1)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
From here, we want to compare the two distributions. To do this, we will start by taking the difference between the distributions.
pmf_diff = pmf_mu1 - pmf_mu0
pmf_diff.Mean()
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
From here we can calculate the probability that money was stolen from the city.
cdf_diff = pmf_diff.MakeCdf()
thinkplot.Cdf(cdf_diff)
cdf_diff[0]
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
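The same question can be cross-checked by comparing the two marginal distributions of mu directly, using the Pmf comparison methods that appear again at the end of this analysis (assuming pmf_mu0 and pmf_mu1 as computed above; the strict inequality means the number will be close to, not identical to, the CDF value).
# P(mu for Brink's < mu for general contractors), computed from the two marginals.
print(pmf_mu1.ProbLess(pmf_mu0))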
So we calculate that the probability that money was stolen from the city is 93.9%. Next, we want to estimate how much money was stolen. We first need to calculate how much money the city's control meters collected during Brink's tenure. Then we can multiply this by pmf_diff to get a probability distribution over the amount of stolen money.
money_city = np.where(df['BRINK']==1, df['CITY'], 0).sum(0)

print((pmf_diff * money_city).CredibleInterval(50))
thinkplot.Pmf(pmf_diff * money_city)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Above we see a plot of the estimated stolen money in millions of dollars. We have also calculated a credible interval that tells us there is a 50% chance that Brink's stole between 1.4 and 3.6 million dollars. In pursuit of more evidence, we find the probability that the standard deviation of the Brink's collections is higher than that of the general contractors.
pmf_sigma1.ProbGreater(pmf_sigma0)
code/report03.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Building a dynamic model
In the previous notebook, <a href="mnist_linear.ipynb">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module. The boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder trainer and is designated as a python package with the empty __init__.py (mnist_models/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them. Let's start with the trainer file. This file parses command line arguments to feed into the model.
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys

from . import model


def _parse_arguments(argv):
    """Parses command-line arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--model_type',
        help='Which model type to use',
        type=str, default='linear')
    parser.add_argument(
        '--epochs',
        help='The number of epochs to train',
        type=int, default=10)
    parser.add_argument(
        '--steps_per_epoch',
        help='The number of steps per epoch to train',
        type=int, default=100)
    parser.add_argument(
        '--job-dir',
        help='Directory where to save the given model',
        type=str, default='mnist_models/')
    return parser.parse_known_args(argv)


def main():
    """Parses command line arguments and kicks off model training."""
    args = _parse_arguments(sys.argv[1:])[0]

    # Configure path for hyperparameter tuning.
    trial_id = json.loads(
        os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/'

    model_layers = model.get_layers(args.model_type)
    image_model = model.build_model(model_layers, args.job_dir)
    model_history = model.train_and_evaluate(
        image_model, args.epochs, args.steps_per_epoch, args.job_dir)


if __name__ == '__main__':
    main()
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
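The parser can be exercised on its own before launching any training, which is a cheap way to confirm the defaults and flag names (a minimal sketch; it assumes mnist_models/trainer/model.py already exists, since task.py imports it, and the flag values below are arbitrary).
from mnist_models.trainer.task import _parse_arguments

args, unknown = _parse_arguments(
    ['--model_type', 'cnn', '--epochs', '5', '--job-dir', 'mnist_models/local_test/'])
print(args.model_type, args.epochs, args.steps_per_epoch, args.job_dir)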
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab.
%%writefile mnist_models/trainer/util.py
import tensorflow as tf


def scale(image, label):
    """Scales images from a 0-255 int range to a 0-1 float range"""
    image = tf.cast(image, tf.float32)
    image /= 255
    image = tf.expand_dims(image, -1)
    return image, label


def load_dataset(
        data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
    """Loads MNIST dataset into a tf.data.Dataset"""
    (x_train, y_train), (x_test, y_test) = data
    x = x_train if training else x_test
    y = y_train if training else y_test
    # One-hot encode the classes
    y = tf.keras.utils.to_categorical(y, nclasses)
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    dataset = dataset.map(scale).batch(batch_size)
    if training:
        dataset = dataset.shuffle(buffer_size).repeat()
    return dataset
notebooks/image_models/solutions/2_mnist_models.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
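A quick smoke test of these utilities from the notebook (a minimal sketch, assuming TensorFlow 2.x and that the package written above is importable from the working directory): load MNIST through tf.keras, build the training dataset, and inspect one batch.
import tensorflow as tf

from mnist_models.trainer import util

mnist = tf.keras.datasets.mnist.load_data()
train_ds = util.load_dataset(mnist, training=True)
image_batch, label_batch = next(iter(train_ds))
print(image_batch.shape, label_batch.shape)  # expect (100, 28, 28, 1) and (100, 10)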