Dataset columns: `markdown`, `code`, `output`, `license`, `path`, `repo_name`.
By generating a scatter plot using the second feature `features[:, 1]` and `labels`, we can clearly observe the linear correlation between the two.
d2l.set_figsize()
# The semicolon is for displaying the plot only
d2l.plt.scatter(features[:, (1)].numpy(), labels.numpy(), 1);
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Reading the Dataset

Recall that training models consists of making multiple passes over the dataset, grabbing one minibatch of examples at a time, and using them to update our model. Since this process is so fundamental to training machine learning algorithms, it is worth defining a utility function to shuffle the dataset and access it in minibatches.

In the following code, we define the `data_iter` function to demonstrate one possible implementation of this functionality. The function takes a batch size, a matrix of features, and a vector of labels, yielding minibatches of size `batch_size`. Each minibatch consists of a tuple of features and labels.
def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    # The examples are read at random, in no particular order
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        j = tf.constant(indices[i: min(i + batch_size, num_examples)])
        yield tf.gather(features, j), tf.gather(labels, j)
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
In general, note that we want to use reasonably sized minibatches to take advantage of the GPU hardware, which excels at parallelizing operations. Because each example can be fed through our models in parallel and the gradient of the loss function for each example can also be taken in parallel, GPUs allow us to process hundreds of examples in scarcely more time than it might take to process just a single example.

To build some intuition, let us read and print the first small batch of data examples. The shape of the features in each minibatch tells us both the minibatch size and the number of input features. Likewise, our minibatch of labels will have a shape given by `batch_size`.
batch_size = 10

for X, y in data_iter(batch_size, features, labels):
    print(X, '\n', y)
    break
tf.Tensor( [[ 0.34395403 0.250355 ] [ 0.8474066 -0.08658892] [ 1.332213 -0.05381915] [-1.0579451 0.5105379 ] [-0.48678052 0.12689345] [-0.19708689 -0.7590605 ] [-1.4754761 -0.98582214] [ 0.35217085 0.43196547] [-1.7024363 0.54085165] [-0.10568867 -1.4778754 ]], shape=(10, 2), dtype=float32) tf.Tensor( [[ 4.034952 ] [ 6.1658163 ] [ 7.0530744 ] [ 0.32585293] [ 2.8073056 ] [ 6.393605 ] [ 4.5981565 ] [ 3.43894 ] [-1.0478138 ] [ 9.006084 ]], shape=(10, 1), dtype=float32)
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
As we run the iteration, we obtain distinct minibatches successively until the entire dataset has been exhausted (try this). While the iteration implemented above is good for didactic purposes, it is inefficient in ways that might get us in trouble on real problems. For example, it requires that we load all the data in memory and that we perform lots of random memory access. The built-in iterators implemented in a deep learning framework are considerably more efficient and they can deal with both data stored in files and data fed via data streams.

Initializing Model Parameters

Before we can begin optimizing our model's parameters by minibatch stochastic gradient descent, we need to have some parameters in the first place. In the following code, we initialize weights by sampling random numbers from a normal distribution with mean 0 and a standard deviation of 0.01, and setting the bias to 0.
w = tf.Variable(tf.random.normal(shape=(2, 1), mean=0, stddev=0.01), trainable=True)
b = tf.Variable(tf.zeros(1), trainable=True)
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
After initializing our parameters, our next task is to update them until they fit our data sufficiently well. Each update requires taking the gradient of our loss function with respect to the parameters. Given this gradient, we can update each parameter in the direction that may reduce the loss. Since nobody wants to compute gradients explicitly (this is tedious and error prone), we use automatic differentiation, as introduced in :numref:`sec_autograd`, to compute the gradient.

Defining the Model

Next, we must define our model, relating its inputs and parameters to its outputs. Recall that to calculate the output of the linear model, we simply take the matrix-vector dot product of the input features $\mathbf{X}$ and the model weights $\mathbf{w}$, and add the offset $b$ to each example. Note that below $\mathbf{Xw}$ is a vector and $b$ is a scalar. Recall the broadcasting mechanism as described in :numref:`subsec_broadcasting`. When we add a vector and a scalar, the scalar is added to each component of the vector.
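As a quick illustration of this broadcasting behavior (a minimal sketch; the tensors here are made up for the example and are not the notebook's data):

```python
import tensorflow as tf

# Adding a scalar bias to a length-3 "Xw" vector adds it to every component.
Xw = tf.constant([1.0, 2.0, 3.0])
b = tf.constant(0.5)
print(Xw + b)  # -> [1.5 2.5 3.5]
```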
def linreg(X, w, b):  #@save
    """The linear regression model."""
    return tf.matmul(X, w) + b
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Defining the Loss Function

Since updating our model requires taking the gradient of our loss function, we ought to define the loss function first. Here we will use the squared loss function as described in :numref:`sec_linear_regression`. In the implementation, we need to transform the true value `y` into the predicted value's shape `y_hat`. The result returned by the following function will also have the same shape as `y_hat`.
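For reference, the per-example loss computed below is the standard squared loss (restated here in equation form):

$$l^{(i)}(\mathbf{w}, b) = \frac{1}{2}\left(\hat{y}^{(i)} - y^{(i)}\right)^2.$$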
def squared_loss(y_hat, y):  #@save
    """Squared loss."""
    return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Defining the Optimization Algorithm

As we discussed in :numref:`sec_linear_regression`, linear regression has a closed-form solution. However, this is not a book about linear regression: it is a book about deep learning. Since none of the other models that this book introduces can be solved analytically, we will take this opportunity to introduce your first working example of minibatch stochastic gradient descent.

At each step, using one minibatch randomly drawn from our dataset, we will estimate the gradient of the loss with respect to our parameters. Next, we will update our parameters in the direction that may reduce the loss. The following code applies the minibatch stochastic gradient descent update, given a set of parameters, a learning rate, and a batch size. The size of the update step is determined by the learning rate `lr`. Because our loss is calculated as a sum over the minibatch of examples, we normalize our step size by the batch size (`batch_size`), so that the magnitude of a typical step size does not depend heavily on our choice of the batch size.
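Written out, the update implemented by `sgd` below is (restating the rule described above in equation form):

$$(\mathbf{w}, b) \leftarrow (\mathbf{w}, b) - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \partial_{(\mathbf{w}, b)}\, l^{(i)}(\mathbf{w}, b),$$

where $\eta$ is the learning rate `lr` and $|\mathcal{B}|$ is the batch size.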
def sgd(params, grads, lr, batch_size):  #@save
    """Minibatch stochastic gradient descent."""
    for param, grad in zip(params, grads):
        param.assign_sub(lr * grad / batch_size)
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Training

Now that we have all of the parts in place, we are ready to implement the main training loop. It is crucial that you understand this code because you will see nearly identical training loops over and over again throughout your career in deep learning.

In each iteration, we will grab a minibatch of training examples, and pass them through our model to obtain a set of predictions. After calculating the loss, we initiate the backwards pass through the network, storing the gradients with respect to each parameter. Finally, we will call the optimization algorithm `sgd` to update the model parameters.

In summary, we will execute the following loop:

* Initialize parameters $(\mathbf{w}, b)$
* Repeat until done
    * Compute gradient $\mathbf{g} \leftarrow \partial_{(\mathbf{w},b)} \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} l(\mathbf{x}^{(i)}, y^{(i)}, \mathbf{w}, b)$
    * Update parameters $(\mathbf{w}, b) \leftarrow (\mathbf{w}, b) - \eta \mathbf{g}$

In each *epoch*, we will iterate through the entire dataset (using the `data_iter` function) once, passing through every example in the training dataset (assuming that the number of examples is divisible by the batch size). The number of epochs `num_epochs` and the learning rate `lr` are both hyperparameters, which we set here to 3 and 0.03, respectively. Unfortunately, setting hyperparameters is tricky and requires some adjustment by trial and error. We elide these details for now but revisit them later in :numref:`chap_optimization`.
lr = 0.03
num_epochs = 3
net = linreg
loss = squared_loss

for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        with tf.GradientTape() as g:
            l = loss(net(X, w, b), y)  # Minibatch loss in `X` and `y`
        # Compute gradient on l with respect to [`w`, `b`]
        dw, db = g.gradient(l, [w, b])
        # Update parameters using their gradient
        sgd([w, b], [dw, db], lr, batch_size)
    train_l = loss(net(features, w, b), labels)
    print(f'epoch {epoch + 1}, loss {float(tf.reduce_mean(train_l)):f}')
epoch 1, loss 0.029337
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
In this case, because we synthesized the dataset ourselves, we know precisely what the true parameters are. Thus, we can evaluate our success in training by comparing the true parameters with those that we learned through our training loop. Indeed they turn out to be very close to each other.
print(f'error in estimating w: {true_w - tf.reshape(w, true_w.shape)}')
print(f'error in estimating b: {true_b - b}')
error in estimating w: [-0.00040174 -0.00101519] error in estimating b: [0.00056839]
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $

Solutions for Matrices: Tensor Product

_prepared by Abuzer Yakaryilmaz_

Task 1

Find $ u \otimes v $ and $ v \otimes u $ for the given vectors $ u = \myrvector{-2 \\ -1 \\ 0 \\ 1} $ and $ v = \myrvector{ 1 \\ 2 \\ 3 } $.

Solution
u = [-2, -1, 0, 1]
v = [1, 2, 3]

uv = []
vu = []

for i in range(len(u)):  # one element of u is picked
    for j in range(len(v)):  # now we iteratively select every element of v
        uv.append(u[i] * v[j])  # this one element of u is iteratively multiplied with every element of v

print("u-tensor-v is", uv)

for i in range(len(v)):  # one element of v is picked
    for j in range(len(u)):  # now we iteratively select every element of u
        vu.append(v[i] * u[j])  # this one element of v is iteratively multiplied with every element of u

print("v-tensor-u is", vu)
_____no_output_____
Apache-2.0
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
Task 2

Find $ A \otimes B $ for the given matrices

$ A = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2} ~~\mbox{and}~~ B = \mymatrix{rr}{0 & 2 \\ 3 & -1 \\ -1 & 1 }.$

Solution
A = [
    [-1, 0, 1],
    [-2, -1, 2]
]
B = [
    [0, 2],
    [3, -1],
    [-1, 1]
]

print("A =")
for i in range(len(A)):
    print(A[i])

print()  # print a line
print("B =")
for i in range(len(B)):
    print(B[i])

# let's define A-tensor-B as a (6x6)-dimensional zero matrix
AB = []
for i in range(6):
    AB.append([])
    for j in range(6):
        AB[i].append(0)

# let's find A-tensor-B
for i in range(2):
    for j in range(3):
        # for each A(i,j) we execute the following codes
        a = A[i][j]
        # we access each element of B
        for m in range(3):
            for n in range(2):
                b = B[m][n]
                # now we put (a*b) in the appropriate index of AB
                AB[3 * i + m][2 * j + n] = a * b

print()  # print a line
print("A-tensor-B =")
print()  # print a line
for i in range(6):
    print(AB[i])
_____no_output_____
Apache-2.0
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
Task 3

Find $ B \otimes A $ for the given matrices

$ A = \mymatrix{rrr}{-1 & 0 & 1 \\ -2 & -1 & 2} ~~\mbox{and}~~ B = \mymatrix{rr}{0 & 2 \\ 3 & -1 \\ -1 & 1 }.$

Solution
A = [
    [-1, 0, 1],
    [-2, -1, 2]
]
B = [
    [0, 2],
    [3, -1],
    [-1, 1]
]

print()  # print a line
print("B =")
for i in range(len(B)):
    print(B[i])

print("A =")
for i in range(len(A)):
    print(A[i])

# let's define B-tensor-A as a (6x6)-dimensional zero matrix
BA = []
for i in range(6):
    BA.append([])
    for j in range(6):
        BA[i].append(0)

# let's find B-tensor-A
for i in range(3):
    for j in range(2):
        # for each B(i,j) we execute the following codes
        b = B[i][j]
        # we access each element of A
        for m in range(2):
            for n in range(3):
                a = A[m][n]
                # now we put (a*b) in the appropriate index of AB
                BA[2 * i + m][3 * j + n] = b * a

print()  # print a line
print("B-tensor-A =")
print()  # print a line
for i in range(6):
    print(BA[i])
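As a cross-check of the element-by-element constructions above, here is a hedged sketch using NumPy (which the original solution does not import): `np.kron` computes the Kronecker (tensor) product directly and should reproduce the nested-loop results.

```python
import numpy as np

u = [-2, -1, 0, 1]
v = [1, 2, 3]
A = [[-1, 0, 1], [-2, -1, 2]]
B = [[0, 2], [3, -1], [-1, 1]]

print(np.kron(u, v))   # u-tensor-v
print(np.kron(v, u))   # v-tensor-u
print(np.kron(A, B))   # A-tensor-B, a 6x6 matrix
print(np.kron(B, A))   # B-tensor-A, a 6x6 matrix
```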
_____no_output_____
Apache-2.0
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
Mask R-CNN - Train on Nuclei Dataset (updated from train_shape.ipynb)

This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.

The code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster.
import os
import sys
import random
import math
import re
import time
import tqdm
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt

from config import Config
import utils
import model as modellib
import visualize
from model import log

%matplotlib inline

# Root directory of the project
ROOT_DIR = os.getcwd()

# Directory to save logs and trained model
# MODEL_DIR = os.path.join(ROOT_DIR, "logs")
MODEL_DIR = "/data/lf/Nuclei/logs"
DATA_DIR = os.path.join(ROOT_DIR, "data")

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "models", "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
/home/lf/anaconda3/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds) /home/lf/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters Using TensorFlow backend.
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Configurations
class NucleiConfig(Config):
    """Configuration for training on the nuclei dataset.
    Derives from the base Config class and overrides values specific
    to the nuclei dataset.
    """
    # Give the configuration a recognizable name
    NAME = "nuclei"

    # Train on 1 GPU and 4 images per GPU. We can put multiple images on each
    # GPU because the images are small. Batch size is 4 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 4

    # Number of classes (including background)
    NUM_CLASSES = 1 + 1  # background + 1 nucleus class

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 512
    IMAGE_MAX_DIM = 512

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 32

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 100

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 5

config = NucleiConfig()
config.display()
type(config.display())
Configurations: BACKBONE_SHAPES [[128 128] [ 64 64] [ 32 32] [ 16 16] [ 8 8]] BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 4 BBOX_STD_DEV [0.1 0.1 0.2 0.2] DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.7 DETECTION_NMS_THRESHOLD 0.3 GPU_COUNT 1 IMAGES_PER_GPU 4 IMAGE_MAX_DIM 512 IMAGE_MIN_DIM 512 IMAGE_PADDING True IMAGE_SHAPE [512 512 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME nuclei NUM_CLASSES 2 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (8, 16, 32, 64, 128) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 100 TRAIN_ROIS_PER_IMAGE 32 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 5 WEIGHT_DECAY 0.0001 Configurations: BACKBONE_SHAPES [[128 128] [ 64 64] [ 32 32] [ 16 16] [ 8 8]] BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 4 BBOX_STD_DEV [0.1 0.1 0.2 0.2] DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.7 DETECTION_NMS_THRESHOLD 0.3 GPU_COUNT 1 IMAGES_PER_GPU 4 IMAGE_MAX_DIM 512 IMAGE_MIN_DIM 512 IMAGE_PADDING True IMAGE_SHAPE [512 512 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME nuclei NUM_CLASSES 2 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (8, 16, 32, 64, 128) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 100 TRAIN_ROIS_PER_IMAGE 32 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 5 WEIGHT_DECAY 0.0001
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Notebook Preferences
def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.

    Change the default size attribute to control the size
    of rendered images
    """
    _, ax = plt.subplots(rows, cols, figsize=(size * cols, size * rows))
    return ax
_____no_output_____
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Dataset

Load the nuclei dataset.

Extend the Dataset class and add a method to get the nuclei dataset, `load_image_info()`, and override the following methods:

* load_image()
* load_mask()
* image_reference()
class NucleiDataset(utils.Dataset): """Load the images and masks from dataset.""" def load_image_info(self, set_path, img_set): """Get the picture names(ids) of the dataset.""" # Add classes self.add_class("nucleis", 1, "regular") # TO DO : Three different image types into three classes # Add images # Get the images ids of training/testing set # train_ids = next(os.walk(set_path))[1] with open(img_set) as f: read_data = f.readlines() train_ids = [read_data[i][:-1] for i in range(0,len(read_data))] # Get the info of the images for i, id_ in enumerate(train_ids): file_path = os.path.join(set_path, id_) img_path = os.path.join(file_path, "images") masks_path = os.path.join(file_path, "masks") img_name = id_ + ".png" img = cv2.imread(os.path.join(img_path, img_name)) width, height, _ = img.shape self.add_image("nucleis", image_id=id_, path=file_path, img_path=img_path, masks_path=masks_path, width=width, height=height, nucleis="nucleis") def load_image(self, image_id): """Load image from file of the given image ID.""" info = self.image_info[image_id] img_path = info["img_path"] img_name = info["id"] + ".png" image = cv2.imread(os.path.join(img_path, img_name)) return image def image_reference(self, image_id): """Return the path of the given image ID.""" info = self.image_info[image_id] if info["source"] == "nucleis": return info["path"] else: super(self.__class__).image_reference(self, image_id) def load_mask(self, image_id): """Load the instance masks of the given image ID.""" info = self.image_info[image_id] mask_files = next(os.walk(info["masks_path"]))[2] masks = np. zeros([info['width'], info['height'], len(mask_files)], dtype=np.uint8) for i, id_ in enumerate(mask_files): single_mask = cv2.imread(os.path.join(info["masks_path"], id_), 0) masks[:, :, i:i+1] = single_mask[:, :, np.newaxis] class_ids = np.ones(len(mask_files)) return masks, class_ids.astype(np.int32) kFOLD_DIR = os.path.join(ROOT_DIR, "kfold_dataset") with open(kFOLD_DIR + '/10-fold-val-3.txt') as f: read_data = f.readlines() train_ids = [read_data[i][:-1] for i in range(0,len(read_data))] print(train_ids) # Training dataset TRAINSET_DIR = os.path.join(DATA_DIR, "stage1_train_fixed") # VALSET_DIR = os.path.join(DATA_DIR, "stage1_val") TESTSET_DIR = os.path.join(DATA_DIR, "stage1_test") kFOLD_DIR = os.path.join(ROOT_DIR, "kfold_dataset") dataset_train = NucleiDataset() dataset_train.load_image_info(TRAINSET_DIR, os.path.join(kFOLD_DIR, "10-fold-train-10.txt")) dataset_train.prepare() dataset_val = NucleiDataset() dataset_val.load_image_info(TRAINSET_DIR, os.path.join(kFOLD_DIR, "10-fold-val-10.txt")) dataset_val.prepare() print("Loading {} training images, {} validation images" .format(dataset_train.num_images, dataset_val.num_images)) # Load and display random samples image_ids = np.random.choice(dataset_train.image_ids, 4) print(dataset_train.num_images) for i, image_id in enumerate(image_ids): image = dataset_train.load_image(image_id) mask, class_ids = dataset_train.load_mask(image_id) visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
594
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Bounding Boxes

Although we don't have the specific box coordinates in the dataset, we can compute the bounding boxes from masks instead. This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bounding boxes from the updated masks rather than computing a bounding box transformation for each type of image transformation.
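The idea behind deriving a box from a mask is simply to take the extremes of the non-zero pixels. A minimal NumPy sketch of that idea follows (illustrative only; the notebook itself relies on `utils.extract_bboxes`, and the helper name here is made up):

```python
import numpy as np

def bbox_from_mask(mask):
    """Return [y1, x1, y2, x2] for a single binary mask of shape (H, W)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return np.array([0, 0, 0, 0])  # empty mask -> empty box
    return np.array([ys.min(), xs.min(), ys.max() + 1, xs.max() + 1])

mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 1:5] = 1
print(bbox_from_mask(mask))  # -> [2 1 4 5]
```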
# Load random image and mask.
image_id = random.choice(dataset_train.image_ids)
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)

# Display image and additional stats
print("image_id ", image_id, dataset_train.image_reference(image_id))
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset_train.class_names)
image_id 527 /home/lf/Nuclei/data/stage1_train_fixed/3b0709483b1e86449cc355bb797e841117ba178c6ae1ed955384f4da6486aa20 image shape: (256, 320, 3) min: 28.00000 max: 214.00000 mask shape: (256, 320, 17) min: 0.00000 max: 255.00000 class_ids shape: (17,) min: 1.00000 max: 1.00000 bbox shape: (17, 4) min: 0.00000 max: 320.00000
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Create Model
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)

# Which weights to start with?
init_with = "coco"  # imagenet, coco, or last

if init_with == "imagenet":
    model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
    # Load weights trained on MS COCO, but skip layers that
    # are different due to the different number of classes
    # See README for instructions to download the COCO weights
    model.load_weights(COCO_MODEL_PATH, by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
    # Load the last model you trained and continue training
    model.load_weights(model.find_last()[1], by_name=True)
_____no_output_____
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Training

Train in two stages:

1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones for which we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=1,
            layers='heads')

# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE / 10,
            epochs=2,
            layers="all")

import datetime
now = datetime.datetime.now()  # define `now` before printing it
print(now)

import time
rq = "config-" + time.strftime('%Y%m%d%H%M', time.localtime(time.time())) + ".log"
print(rq)

# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
# model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
# model.keras_model.save_weights(model_path)
_____no_output_____
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Detection Example
class InferenceConfig(NucleiConfig): GPU_COUNT = 1 IMAGES_PER_GPU = 1 DETECTION_NMS_THRESHOLD = 0.3 DETECTION_MAX_INSTANCES = 300 inference_config = InferenceConfig() # Recreate the model in inference mode model = modellib.MaskRCNN(mode="inference", config=inference_config, model_dir=MODEL_DIR) # Get path to saved weights # Either set a specific path or find last trained weights # model_path = os.path.join(ROOT_DIR, ".h5 file name here") model_path = "/data2/liangfeng/nuclei_models/nuclei20180202T1847/mask_rcnn_nuclei_0080.h5" # Load trained weights (fill in path to trained weights here) assert model_path != "", "Provide path to trained weights" print("Loading weights from ", model_path) model.load_weights(model_path, by_name=True) # Test on a random image(load_image_gt will resize the image!) image_id = random.choice(dataset_val.image_ids) original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) print("image_id ", image_id, dataset_val.image_reference(image_id)) log("original_image", original_image) log("image_meta", image_meta) log("gt_class_id", gt_class_id) log("gt_bbox", gt_bbox) log("gt_mask", gt_mask) visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, dataset_train.class_names, figsize=(8, 8)) results = model.detect([original_image], verbose=1) r = results[0] # print(r) visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], dataset_val.class_names, r['scores'], ax=get_ax())
Processing 1 images image shape: (1024, 1024, 3) min: 0.00000 max: 232.00000 molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 128.10000 image_metas shape: (1, 10) min: 0.00000 max: 1024.00000
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Evaluation
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
# image_ids = np.random.choice(dataset_val.image_ids, 10)
image_ids = dataset_val.image_ids
APs = []
for image_id in image_ids:
    # Load image and ground truth data
    image, image_meta, gt_class_id, gt_bbox, gt_mask =\
        modellib.load_image_gt(dataset_val, inference_config,
                               image_id, use_mini_mask=False)
    molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
    # Run object detection
    results = model.detect([image], verbose=0)
    r = results[0]
    # Compute AP
    AP, precisions, recalls, overlaps =\
        utils.compute_ap(gt_bbox, gt_class_id,
                         r["rois"], r["class_ids"], r["scores"])
    APs.append(AP)

print("mAP: ", np.mean(APs))
mAP: 0.808316577444
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Writing the Results
# Get the Test set. TESTSET_DIR = os.path.join(DATA_DIR, "stage1_test") dataset_test = NucleiDataset() dataset_test.load_image_info(TESTSET_DIR) dataset_test.prepare() print("Predict {} images".format(dataset_test.num_images)) # Load random image and mask(Original Size). image_id = np.random.choice(dataset_test.image_ids) image = dataset_test.load_image(image_id) plt.figure() plt.imshow(image) plt.title(image_id, fontsize=9) plt.axis('off') # images = dataset_test.load_image(image_ids) # mask, class_ids = dataset_test.load_mask(image_id) # Compute Bounding box # bbox = utils.extract_bboxes(mask) # Display image and additional stats # print("image_id ", image_id, dataset_test.image_reference(image_id)) # log("image", image) # log("mask", mask) # log("class_ids", class_ids) # log("bbox", bbox) # Display image and instances # visualize.display_instances(image, bbox, mask, class_ids, dataset_test.class_names) results = model.detect([image], verbose=1) r = results[0] mask_exist = np.zeros(r['masks'].shape[:-1], dtype=np.uint8) mask_sum = np.zeros(r['masks'].shape[:-1], dtype=np.uint8) for i in range(r['masks'].shape[-1]): _mask = r['masks'][:,:,i] mask_sum += _mask # print(np.multiply(mask_exist, _mask)) # print(np.where(np.multiply(mask_exist, _mask) == 1)) index_ = np.where(np.multiply(mask_exist, _mask) == 1) _mask[index_] = 0 mask_exist += _mask # masks_sum = np.sum(r['masks'] ,axis=2) # overlap = np.where(masks_sum > 1) # print(overlap) # plt.figure() plt.subplot(1,2,1) plt.imshow(mask_exist) plt.subplot(1,2,2) plt.imshow(mask_sum) # visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], # dataset_test.class_names, r['scores'], ax=get_ax()) a = [[0, 1],[0, 0]] np.any(a) def rle_encoding(x): dots = np.where(x.T.flatten() == 1)[0] run_lengths = [] prev = -2 for b in dots: if (b>prev+1): run_lengths.extend((b + 1, 0)) run_lengths[-1] += 1 prev = b return run_lengths import pandas as pd test_ids = [] test_rles = [] id_ = dataset_val.image_info[image_id]["id"] results = model.detect([image], verbose=1) r = results[0] for i in range(len(r['scores'])): test_ids.append(id_) test_rles.append(rle_encoding(r['masks'][:, : , i])) sub = pd.DataFrame() sub['ImageId'] = test_ids sub['EncodedPixels'] = pd.Series(test_rles).apply(lambda x: ' '.join(str(y) for y in x)) model_path csvpath = "{}.csv".format(model_path) print(csvpath) sub.to_csv(csvpath, index=False) # plt.imshow('image',r['masks'][0])
Processing 1 images image shape: (256, 256, 3) min: 10.00000 max: 255.00000 molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 151.10000 image_metas shape: (1, 10) min: 0.00000 max: 912.00000 /data2/liangfeng/nuclei_models/nuclei20180202T1847/mask_rcnn_nuclei_0040.h5.csv
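The `rle_encoding` helper used above produces a run-length encoding of the mask flattened in column-major order, with 1-based pixel indices. A small, self-contained worked example of that encoding (the tiny mask here is made up for illustration):

```python
import numpy as np

def rle_encoding(x):
    """Run-length encode a binary mask, column-major, 1-based (same logic as above)."""
    dots = np.where(x.T.flatten() == 1)[0]
    run_lengths = []
    prev = -2
    for b in dots:
        if b > prev + 1:
            run_lengths.extend((b + 1, 0))
        run_lengths[-1] += 1
        prev = b
    return run_lengths

mask = np.array([[0, 1],
                 [1, 1]])
print(rle_encoding(mask))  # -> [2, 3]: a run of 3 pixels starting at pixel 2
```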
MIT
train_nuclei.ipynb
xumm94/2018_data_science_bowl
Socioeconomic data validation

---

The literature indicates that the most important factor for school performance is the socioeconomic level of the students. We are assuming that nearby schools have students of similar socioeconomic level, but this needs to be tested. I used the [INSE](http://portal.inep.gov.br/web/guest/indicadores-educacionais) data to measure the socioeconomic level of each school's students in 2015.

Goals

Examining the geolocated IDEB data from schools and modeling *risk* and *model* schools for the research.

Combining the school's IDEB (SAEB + approval rate) marks with Rio de Janeiro's municipal shapefile, we hope to discover some local standards in school performance over the years. The time interval we will analyze is from 2011 until today.

Data sources

- `ideb_merged.csv`: resulting data from the geolocalization, IDEB by years on columns
- `ideb_merged_kepler.csv`: resulting data from the geolocalization, format for kepler input

Methodology

The goal is to determine the "model" schools within a certain radius. We'll define those "models" as schools that showed great growth and stand near "high risk" schools, the ones in the lowest strata. For that, we construct the model below with suggestions by Ragazzo. We are interested in the following groups:

- Group 1: Schools that went from very low IDEB (< 4) to high (> 6)
- Group 2: Schools that went from low IDEB (between 4 and 6) to high (> 6)
- Group 3: Schools that went to high (> 6) with delta > 2

The *attention level* (or risk) of a school is defined by which quartile it belongs to in the IDEB 2017 distribution (the most recent), from the lowest quartile (level 4) to the highest (level 1); a sketch of how this level can be computed appears at the end of this section.

Results

1. [Identify the schools with the most IDEB variation from 2005 to 2017](1)
2. [Identify schools that jumped from low / very low IDEB (< 6) to high (> 6), from 2005 to 2017](2)
3. [Model neighbours: which schools had a large delta and were near schools on the highest attention level (4)?](3)
4. [See if the education census contains information on who was the principal of each school each year.](4) - actually, we use an indicator of the school's "management complexity" together with the IDEB data. We didn't find any difference between levels of "management complexity" related to the IDEB marks of the schools in each level.

Outputs

- `model_neighboors_closest_multiple.csv`: database with the risk schools and closest model schools
- `top_15_delta.csv`, `bottom_15_delta.csv`: top and bottom school evolutions from 2005 to 2017
- `kepler_with_filters.csv`: database for plotting in kepler with school categories (from the methodology)

Authors

Original code by Guilherme Almeida here, adapted by Fernanda Scovino - 2019.
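One way to derive the quartile-based attention level described in the methodology is sketched below. This is a hedged illustration, not the project's actual code: the DataFrame and column names (`ideb_2017`, `ideb`, `nivel_atencao`) are assumptions for the example.

```python
import pandas as pd

# Hypothetical frame: one row per school with its 2017 IDEB mark.
ideb_2017 = pd.DataFrame({"cod_inep": [1, 2, 3, 4], "ideb": [3.8, 5.1, 6.2, 7.0]})

# Lowest quartile -> attention level 4, highest quartile -> level 1.
ideb_2017["nivel_atencao"] = pd.qcut(ideb_2017["ideb"], q=4, labels=[4, 3, 2, 1]).astype(int)
print(ideb_2017)
```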
# Import config import os import sys sys.path.insert(0, '../') from config import RAW_PATH, TREAT_PATH, OUTPUT_PATH # DATA ANALYSIS & VIZ TOOLS from copy import deepcopy import pandas as pd import numpy as np pd.options.display.max_columns = 999 import geopandas as gpd from shapely.wkt import loads import matplotlib.pyplot as plt import seaborn as sns %pylab inline pylab.rcParams['figure.figsize'] = (12, 15) # CONFIGS %load_ext autoreload #%autoreload 2 #import warnings #warnings.filterwarnings('ignore') palette = ['#FEC300', '#F1920E', '#E3611C', '#C70039', '#900C3F', '#5A1846', '#3a414c', '#29323C'] sns.set()
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Import data
inse = pd.read_excel(RAW_PATH / "INSE_2015.xlsx")
schools_ideb = pd.read_csv(OUTPUT_PATH / "kepler_with_filters.csv")
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
INSE data analysis
inse.rename(columns={"CO_ESCOLA": "cod_inep"}, inplace=True)
inse.head()

schools_ideb['ano'] = pd.to_datetime(schools_ideb['ano'])
schools_ideb.head()
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Filtering model (`reference`) and risk (`attention`) schools
reference = schools_ideb[(schools_ideb['ano'].dt.year == 2017)
                         & ((schools_ideb['pessimo_pra_bom_bin'] == 1) | (schools_ideb['ruim_pra_bom_bin'] == 1))]
reference.info()

attention = schools_ideb[(schools_ideb['ano'].dt.year == 2017) & (schools_ideb['nivel_atencao'] == 4)]
attention.info()
<class 'pandas.core.frame.DataFrame'> Int64Index: 176 entries, 4127 to 4728 Data columns (total 14 columns): ano 176 non-null datetime64[ns] cod_inep 176 non-null int64 geometry 176 non-null object ideb 176 non-null float64 nome_abrev 176 non-null object nome_escola 176 non-null object lon 176 non-null float64 lat 176 non-null float64 pessimo_pra_bom_bin 176 non-null int64 ruim_pra_bom_bin 176 non-null int64 melhora_com_final_bom_bin 176 non-null int64 inicial_baixo_bin 176 non-null int64 inicial_baixissimo_bin 176 non-null int64 nivel_atencao 176 non-null float64 dtypes: datetime64[ns](1), float64(4), int64(6), object(3) memory usage: 20.6+ KB
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Join INSE data
inse_cols = ["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]

reference = pd.merge(reference, inse[inse_cols], how="left", on="cod_inep")
attention = pd.merge(attention, inse[inse_cols], how="left", on="cod_inep")

reference['tipo_escola'] = 'Escola referência'
reference.info()

attention['tipo_escola'] = 'Escola de risco'
attention.info()

df_inse = attention.append(reference)

df_inse['escola_risco'] = df_inse['nivel_atencao'].apply(lambda x: 1 if x == 4 else 0)
df_inse['tipo_especifico'] = df_inse[['pessimo_pra_bom_bin', 'ruim_pra_bom_bin', 'escola_risco']].idxmax(axis=1)
del df_inse['escola_risco']
df_inse.head()

df_inse['tipo_especifico'].value_counts()

df_inse.to_csv(TREAT_PATH / "risk_and_model_schools_inse.csv", index=False)
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Comparing INSE data in categories
sns.distplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas de risco')
sns.distplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), bins='fd', label='Escolas modelo')
plt.legend()

pylab.rcParams['figure.figsize'] = (10, 8)

title = "Comparação do nível sócio-econômico das escolas selecionadas"
ylabel = "INSE (2015) médio da escola"
xlabel = "Tipo da escola"

sns.boxplot(y="INSE_VALOR_ABSOLUTO", x="tipo_escola",
            data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)

pylab.rcParams['figure.figsize'] = (10, 8)

xlabel = "Tipo da escola (específico)"
sns.boxplot(y="INSE_VALOR_ABSOLUTO", x="tipo_especifico",
            data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Statistical INSE analysis

Normality test

From [this article](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/):

> According to the available literature, **assessing the normality assumption should be taken into account for using parametric statistical tests.** It seems that the most popular test for normality, that is, the K-S test, should no longer be used owing to its low power. It is preferable that normality be assessed both visually and through normality tests, of which the Shapiro-Wilk test, provided by the SPSS software, is highly recommended. The normality assumption also needs to be considered for validation of data presented in the literature as it shows whether correct statistical tests have been used.
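Both tests used below return a statistic and a p-value, and the usual reading is that a small p-value argues against normality. A minimal, self-contained illustration of that decision rule (the synthetic sample here is made up for the example and is not the INSE data):

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=3, size=100)  # synthetic "INSE-like" scores

stat, p = shapiro(sample)
if p < 0.05:
    print(f"p = {p:.3f}: reject normality at the 5% level")
else:
    print(f"p = {p:.3f}: no evidence against normality")
```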
from scipy.stats import normaltest, shapiro, probplot
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
D'Agostino and Pearson's
normaltest(attention["INSE_VALOR_ABSOLUTO"].dropna())

normaltest(reference["INSE_VALOR_ABSOLUTO"].dropna())
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Shapiro-Wilk
shapiro(attention["INSE_VALOR_ABSOLUTO"].dropna())

qs = probplot(reference["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)

shapiro(reference["INSE_VALOR_ABSOLUTO"].dropna())

ws = probplot(attention["INSE_VALOR_ABSOLUTO"].dropna(), plot=plt)
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
*t* test

About parametric tests: [here](https://www.healthknowledge.org.uk/public-health-textbook/research-methods/1b-statistical-methods/parametric-nonparametric-tests)

We can test the hypothesis that INSE is related to the IDEB scores of the risk ($\mu_r$) and model ($\mu_m$) schools as follows:

$H_0: \mu_r = \mu_m$

$H_a: \mu_r \neq \mu_m$

For the *t* test, we need to ensure that:

1. the variances are equal (1.94 is close enough to 2.05)
2. the samples have the same size (?)
3. the data are approximately normally distributed (see the normality tests above)
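For reference, when `equal_var=False` is passed below, `ttest_ind` computes Welch's statistic, which does not pool the variances (this is the standard formula, restated rather than taken from the notebook):

$$t = \frac{\bar{x}_r - \bar{x}_m}{\sqrt{\dfrac{s_r^2}{n_r} + \dfrac{s_m^2}{n_m}}}$$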
from scipy.stats import ttest_ind as ttest, normaltest, kstest

attention["INSE_VALOR_ABSOLUTO"].dropna().describe()

reference["INSE_VALOR_ABSOLUTO"].dropna().describe()
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Model x risk schools
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit", equal_var=True)

ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit", equal_var=False)
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Cohen's D

My preferred effect-size metric is Cohen's d, but apparently there is no canonical implementation of it. I will use the one I found on [this site](https://machinelearningmastery.com/effect-size-measures-in-python/).
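The implementation below computes the standard independent-samples form of Cohen's d with a pooled standard deviation (restating the formula the code encodes):

$$d = \frac{\left|\bar{x}_1 - \bar{x}_2\right|}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}.$$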
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt

# == Code made by Guilherme Almeida, 2019 ==
# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
    # calculate the size of samples
    n1, n2 = len(d1), len(d2)
    # calculate the variance of the samples
    s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
    # calculate the pooled standard deviation
    s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    # calculate the means of the samples
    u1, u2 = mean(d1), mean(d2)
    # calculate the effect size
    result = abs(u1 - u2) / s
    return result
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Model x risk schools
ttest(attention["INSE_VALOR_ABSOLUTO"], reference["INSE_VALOR_ABSOLUTO"], nan_policy="omit")

cohend(reference["INSE_VALOR_ABSOLUTO"], attention["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Best evolution model x risk schools
best_evolution = df_inse[df_inse['tipo_especifico'] == "pessimo_pra_bom_bin"]

ttest(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")

cohend(attention["INSE_VALOR_ABSOLUTO"], best_evolution["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Other model x risk schools
medium_evolution = df_inse[df_inse['tipo_especifico'] == "ruim_pra_bom_bin"]

ttest(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"], nan_policy="omit")

cohend(attention["INSE_VALOR_ABSOLUTO"], medium_evolution["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
ruim_pra_bom["tipo_especifico"] = "Ruim para bom"
pessimo_pra_bom["tipo_especifico"] = "Muito ruim para bom"
risco["tipo_especifico"] = "Desempenho abaixo\ndo esperado"
referencias.head() referencias = pd.merge(referencias, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]], how = "left", on = "cod_inep") risco = pd.merge(risco, inse[["cod_inep", "NOME_ESCOLA", "INSE_VALOR_ABSOLUTO", "INSE_CLASSIFICACAO"]], how="left", on="cod_inep") referencias.INSE_VALOR_ABSOLUTO.describe() risco.INSE_VALOR_ABSOLUTO.describe() risco["tipo"] = "Escolas com desempenho abaixo do esperado" referencias["tipo"] = "Escolas-referência" df = risco.append(referencias) df.to_csv("risco_referencia_inse.csv", index = False) df = pd.read_csv("risco_referencia_inse.csv") sen.sen_boxplot(x = "tipo", y = "INSE_VALOR_ABSOLUTO", y_label = "INSE (2015) médio da escola", x_label = " ", plot_title = "Comparação do nível sócio-econômico das escolas selecionadas", palette = {"Escolas com desempenho abaixo do esperado" : "indianred", "Escolas-referência" : "skyblue"}, data = df, output_path = "inse_op1.png") df = pd.read_csv("risco_referencia_inse.csv") sen.sen_boxplot(x = "tipo_especifico", y = "INSE_VALOR_ABSOLUTO", y_label = "INSE (2015) médio da escola", x_label = " ", plot_title = "Comparação do nível sócio-econômico das escolas selecionadas", palette = {"Desempenho abaixo\ndo esperado" : "indianred", "Ruim para bom" : "skyblue", "Muito ruim para bom" : "lightblue"}, data = df, output_path = "inse_op2.png")
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Statistical tests

Cohen's D

My preferred effect-size metric is Cohen's d, but apparently there is no canonical implementation of it. I will use the one I found on [this site](https://machinelearningmastery.com/effect-size-measures-in-python/).
from numpy.random import randn
from numpy.random import seed
from numpy import mean
from numpy import var
from math import sqrt

# function to calculate Cohen's d for independent samples
def cohend(d1, d2):
    # calculate the size of samples
    n1, n2 = len(d1), len(d2)
    # calculate the variance of the samples
    s1, s2 = var(d1, ddof=1), var(d2, ddof=1)
    # calculate the pooled standard deviation
    s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    # calculate the means of the samples
    u1, u2 = mean(d1), mean(d2)
    # calculate the effect size
    return (u1 - u2) / s
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
All reference schools vs. risk schools
ttest(risco["INSE_VALOR_ABSOLUTO"], referencias["INSE_VALOR_ABSOLUTO"], nan_policy="omit")

cohend(referencias["INSE_VALOR_ABSOLUTO"], risco["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Only the "very bad to good" schools vs. risk schools
ttest(risco["INSE_VALOR_ABSOLUTO"],
      referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"],
      nan_policy="omit")

cohend(referencias.query("tipo_especifico == 'Muito ruim para bom'")["INSE_VALOR_ABSOLUTO"],
       risco["INSE_VALOR_ABSOLUTO"])
_____no_output_____
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Trying to infer causality

We know there is a significant difference between the socioeconomic levels of the two groups. But to what extent can this difference in INSE explain the difference in IDEB? Is there any remaining effect that can be attributed to management practices? These tests try to answer that question.

Linear regressions
# get the IDEB mark to serve as the DV
ideb = pd.read_csv("./pr-educacao/data/output/ideb_merged_kepler.csv")
ideb["ano_true"] = ideb["ano"].apply(lambda x: int(x[0:4]))
ideb = ideb.query("ano_true == 2017").copy()
nota_ideb = ideb[["cod_inep", "ideb"]]

df = pd.merge(df, nota_ideb, how="left", on="cod_inep")
df.dropna(subset=["INSE_VALOR_ABSOLUTO"], inplace=True)
df["tipo_bin"] = np.where(df["tipo"] == "Escolas-referência", 1, 0)

from statsmodels.regression.linear_model import OLS as ols_py
from statsmodels.tools.tools import add_constant

ivs_multi = add_constant(df[["tipo_bin", "INSE_VALOR_ABSOLUTO"]])
modelo_multi = ols_py(df[["ideb"]], ivs_multi).fit()
print(modelo_multi.summary())
OLS Regression Results ============================================================================== Dep. Variable: ideb R-squared: 0.843 Model: OLS Adj. R-squared: 0.841 Method: Least Squares F-statistic: 391.6 Date: qua, 22 mai 2019 Prob (F-statistic): 2.13e-59 Time: 12:22:10 Log-Likelihood: -23.834 No. Observations: 149 AIC: 53.67 Df Residuals: 146 BIC: 62.68 Df Model: 2 Covariance Type: nonrobust ======================================================================================= coef std err t P>|t| [0.025 0.975] --------------------------------------------------------------------------------------- const 4.1078 0.652 6.297 0.000 2.819 5.397 tipo_bin 1.3748 0.056 24.678 0.000 1.265 1.485 INSE_VALOR_ABSOLUTO 0.0169 0.013 1.293 0.198 -0.009 0.043 ============================================================================== Omnibus: 7.292 Durbin-Watson: 1.867 Prob(Omnibus): 0.026 Jarque-Bera (JB): 11.543 Skew: -0.180 Prob(JB): 0.00312 Kurtosis: 4.315 Cond. No. 1.40e+03 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 1.4e+03. This might indicate that there are strong multicollinearity or other numerical problems.
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
The problem with running the regression the way I set it up above is that `tipo_bin` was created partly as a function of IDEB (see the histograms below), so it is not a truly independent variable. One strategy might be to compare simple models with only INSE and with only `tipo_bin`.
df.ideb.hist()

df.query("tipo_bin == 0").ideb.hist()

df.query("tipo_bin == 1").ideb.hist()

# simple correlation
from scipy.stats import pearsonr
pearsonr(df[["ideb"]], df[["INSE_VALOR_ABSOLUTO"]])

iv_inse = add_constant(df[["INSE_VALOR_ABSOLUTO"]])
iv_ideb = add_constant(df[["tipo_bin"]])

modelo_inse = ols_py(df[["ideb"]], iv_inse).fit()
modelo_tipo = ols_py(df[["ideb"]], iv_ideb).fit()

print(modelo_inse.summary())
print("-----------------------------------------------------------")
print(modelo_tipo.summary())
OLS Regression Results ============================================================================== Dep. Variable: ideb R-squared: 0.187 Model: OLS Adj. R-squared: 0.182 Method: Least Squares F-statistic: 33.90 Date: qua, 22 mai 2019 Prob (F-statistic): 3.51e-08 Time: 12:22:15 Log-Likelihood: -146.25 No. Observations: 149 AIC: 296.5 Df Residuals: 147 BIC: 302.5 Df Model: 1 Covariance Type: nonrobust ======================================================================================= coef std err t P>|t| [0.025 0.975] --------------------------------------------------------------------------------------- const -2.4509 1.350 -1.815 0.072 -5.119 0.217 INSE_VALOR_ABSOLUTO 0.1561 0.027 5.822 0.000 0.103 0.209 ============================================================================== Omnibus: 3.939 Durbin-Watson: 0.621 Prob(Omnibus): 0.140 Jarque-Bera (JB): 3.892 Skew: 0.353 Prob(JB): 0.143 Kurtosis: 2.642 Cond. No. 1.28e+03 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 1.28e+03. This might indicate that there are strong multicollinearity or other numerical problems. ----------------------------------------------------------- OLS Regression Results ============================================================================== Dep. Variable: ideb R-squared: 0.841 Model: OLS Adj. R-squared: 0.840 Method: Least Squares F-statistic: 777.9 Date: qua, 22 mai 2019 Prob (F-statistic): 1.40e-60 Time: 12:22:15 Log-Likelihood: -24.683 No. Observations: 149 AIC: 53.37 Df Residuals: 147 BIC: 59.37 Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ const 4.9505 0.029 173.049 0.000 4.894 5.007 tipo_bin 1.4058 0.050 27.891 0.000 1.306 1.505 ============================================================================== Omnibus: 6.509 Durbin-Watson: 1.870 Prob(Omnibus): 0.039 Jarque-Bera (JB): 9.934 Skew: -0.147 Prob(JB): 0.00696 Kurtosis: 4.230 Cond. No. 2.42 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Paired tests

Our unit of observation should actually be not a single school but a pair of schools. Below, I run the analyses taking into account the INSE delta and the IDEB delta for each pair of schools. This is important: we know that INSE makes a difference for overall IDEB, but the question is whether it can explain the performance differences within each pair.
pairs = pd.read_csv("sponsors_mais_proximos.csv") pairs.head() pairs.shape inse_risco = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]] inse_risco.columns = ["cod_inep_risco","inse_risco"] inse_ref = inse[["cod_inep", "INSE_VALOR_ABSOLUTO"]] inse_ref.columns = ["cod_inep_referencia","inse_referencia"] pairs = pd.merge(pairs, inse_risco, how = "left", on = "cod_inep_risco") pairs = pd.merge(pairs, inse_ref, how = "left", on = "cod_inep_referencia") #calcula os deltas pairs["delta_inse"] = pairs["inse_referencia"] - pairs["inse_risco"] pairs["delta_ideb"] = pairs["ideb_referencia"] - pairs["ideb_risco"] pairs["delta_inse"].describe() pairs["delta_inse"].hist() pairs["delta_ideb"].describe() pairs["delta_ideb"].hist() pairs[pairs["delta_inse"].isnull()] clean_pairs = pairs.dropna(subset = ["delta_inse"]) import seaborn as sns import matplotlib.pyplot as plt plt.figure(figsize = sen.aspect_ratio_locker([16, 9], 0.6)) inse_plot = sns.regplot("delta_inse", "delta_ideb", data = clean_pairs) plt.title("Correlação entre as diferenças do IDEB (2017) e do INSE (2015)\npara cada par de escolas mais próximas") plt.xlabel("$INSE_{referência} - INSE_{desempenho\,abaixo\,do\,esperado}$", fontsize = 12) plt.ylabel("$IDEB_{referência} - IDEB_{desempenh\,abaixo\,do\,esperado}$", fontsize = 12) inse_plot.get_figure().savefig("delta_inse.png", dpi = 600) pearsonr(clean_pairs[["delta_ideb"]], clean_pairs[["delta_inse"]]) X = add_constant(clean_pairs[["delta_inse"]]) modelo_pairs = ols_py(clean_pairs[["delta_ideb"]], X).fit() print(modelo_pairs.summary())
OLS Regression Results ============================================================================== Dep. Variable: delta_ideb R-squared: 0.000 Model: OLS Adj. R-squared: -0.010 Method: Least Squares F-statistic: 0.0004740 Date: qua, 22 mai 2019 Prob (F-statistic): 0.983 Time: 11:12:12 Log-Likelihood: -47.659 No. Observations: 100 AIC: 99.32 Df Residuals: 98 BIC: 104.5 Df Model: 1 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ const 1.4143 0.051 27.838 0.000 1.313 1.515 delta_inse 0.0004 0.017 0.022 0.983 -0.034 0.035 ============================================================================== Omnibus: 8.509 Durbin-Watson: 1.977 Prob(Omnibus): 0.014 Jarque-Bera (JB): 8.171 Skew: 0.654 Prob(JB): 0.0168 Kurtosis: 3.498 Cond. No. 3.97 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Testing the assumption that physical distance correlates with INSE distance
pairs.head() sns.regplot("distancia", "delta_inse", data = clean_pairs.query("distancia < 4000")) multi_iv = add_constant(clean_pairs[["distancia", "delta_inse"]]) modelo_ze = ols_py(clean_pairs[["delta_ideb"]], multi_iv).fit() print(modelo_ze.summary())
OLS Regression Results ============================================================================== Dep. Variable: delta_ideb R-squared: 0.000 Model: OLS Adj. R-squared: -0.021 Method: Least Squares F-statistic: 0.004600 Date: qua, 22 mai 2019 Prob (F-statistic): 0.995 Time: 11:40:22 Log-Likelihood: -47.654 No. Observations: 100 AIC: 101.3 Df Residuals: 97 BIC: 109.1 Df Model: 2 Covariance Type: nonrobust ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ const 1.4200 0.080 17.851 0.000 1.262 1.578 distancia -3.958e-06 4.24e-05 -0.093 0.926 -8.8e-05 8.01e-05 delta_inse 0.0006 0.018 0.033 0.974 -0.034 0.035 ============================================================================== Omnibus: 8.544 Durbin-Watson: 1.973 Prob(Omnibus): 0.014 Jarque-Bera (JB): 8.212 Skew: 0.656 Prob(JB): 0.0165 Kurtosis: 3.500 Cond. No. 3.63e+03 ============================================================================== Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 3.63e+03. This might indicate that there are strong multicollinearity or other numerical problems.
MIT
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
Intro & Resources* [Sutton/Barto ebook](https://goo.gl/7utZaz); [Silver online course](https://goo.gl/AWcMFW) Learning to Optimize Rewards* Definitions: software *agents* make *observations* & take *actions* within an *environment*. In return they can receive *rewards* (positive or negative). Policy Search* **Policy**: the algorithm used by an agent to determine a next action. OpenAI Gym ([link:](https://gym.openai.com/))* A toolkit for various simulated environments.
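As a minimal sketch of the agent/environment loop described above (assuming `env` is a Gym environment like the one created in the next cell, and `my_policy` is a placeholder name for whatever policy the agent uses):

```python
obs = env.reset()                      # first observation from the environment
total_reward = 0
done = False
while not done:
    action = my_policy(obs)            # the policy maps an observation to an action
    obs, reward, done, info = env.step(action)
    total_reward += reward             # the agent tries to maximize cumulative reward
```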
!pip3 install --upgrade gym import gym env = gym.make("CartPole-v0") obs = env.reset() obs env.render()
[2017-04-27 13:05:47,311] Making new env: CartPole-v0
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
* **make()** creates the environment* **reset()** returns the first observation* **CartPole** - each observation = 1D numpy array (horizontal position, velocity, angle, angular velocity)![cartpole](pics/cartpole.png)
img = env.render(mode="rgb_array") img.shape # what actions are possible? # in this case: 0 = accelerate left, 1 = accelerate right env.action_space # pole is leaning right. let's go further to the right. action = 1 obs, reward, done, info = env.step(action) obs, reward, done, info
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
* new observation: * hpos = obs[0]<0 * velocity = obs[1]>0 = moving to the right * angle = obs[2]>0 = leaning right * ang velocity = obs[3]<0 = slowing down?* reward = 1.0* done = False (episode not over)* info = (empty)
# example policy: # (1) accelerate left when leaning left, (2) accelerate right when leaning right # average reward over 500 episodes? def basic_policy(obs): angle = obs[2] return 0 if angle < 0 else 1 totals = [] for episode in range(500): episode_rewards = 0 obs = env.reset() for step in range(1000): # 1000 steps max, we don't want to run forever action = basic_policy(obs) obs, reward, done, info = env.step(action) episode_rewards += reward if done: break totals.append(episode_rewards) import numpy as np np.mean(totals), np.std(totals), np.min(totals), np.max(totals)
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
NN Policies* observations as inputs - actions to be executed as outputs - determined by p(action)* approach lets the agent find the best balance between **exploring new actions** & **reusing known good actions**. Evaluating Actions: Credit Assignment problem* Reinforcement Learning (RL) training not like supervised learning. * RL feedback is via rewards (often sparse & delayed)* How to determine which previous steps were "good" or "bad"? (aka the "*credit assignment problem*")* Common tactic: credit each action with the rewards that follow it, applying a **discount rate** so that more distant rewards count less.* Use normalization across many episodes to increase score reliability. NN Policy | Discounts & Rewards- | -![nn-policy](pics/nn-policy.png) | ![discount-rewards](pics/discount-rewards.png)
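A hedged sketch of the discounting and cross-episode normalization described above (helper names are my own; they mirror the usual policy-gradient bookkeeping rather than code from this notebook):

```python
import numpy as np

def discount_rewards(rewards, discount_rate):
    # Credit each step with the discounted sum of the rewards that follow it.
    discounted = np.zeros(len(rewards))
    cumulative = 0.0
    for step in reversed(range(len(rewards))):
        cumulative = rewards[step] + cumulative * discount_rate
        discounted[step] = cumulative
    return discounted

def discount_and_normalize_rewards(all_rewards, discount_rate):
    # Normalize across all episodes so action scores are comparable between episodes.
    all_discounted = [discount_rewards(rewards, discount_rate) for rewards in all_rewards]
    flat = np.concatenate(all_discounted)
    mean, std = flat.mean(), flat.std()
    return [(discounted - mean) / std for discounted in all_discounted]
```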
import tensorflow as tf from tensorflow.contrib.layers import fully_connected # 1. Specify the neural network architecture n_inputs = 4 # == env.observation_space.shape[0] n_hidden = 4 # simple task, don't need more hidden neurons n_outputs = 1 # only output prob(accelerating left) initializer = tf.contrib.layers.variance_scaling_initializer() # 2. Build the neural network X = tf.placeholder( tf.float32, shape=[None, n_inputs]) hidden = fully_connected( X, n_hidden, activation_fn=tf.nn.elu, weights_initializer=initializer) logits = fully_connected( hidden, n_outputs, activation_fn=None, weights_initializer=initializer) outputs = tf.nn.sigmoid(logits) # logistic (sigmoid) ==> return 0.0-1.0 # 3. Select a random action based on the estimated probabilities p_left_and_right = tf.concat( axis=1, values=[outputs, 1 - outputs]) action = tf.multinomial( tf.log(p_left_and_right), num_samples=1) init = tf.global_variables_initializer()
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
Policy Gradient (PG) algorithms* example: ["reinforce" algo, 1992](https://goo.gl/tUe4Sh) Markov Decision processes (MDPs)* Markov chains = stochastic processes, no memory, fixed states, random transitions* Markov decision processes = similar to MCs - agent can choose action; transition probabilities depend on the action; transitions can return reward/punishment.* Goal: find policy with maximum rewards over time. Markov Chain | Markov Decision Process- | -![markov-chain](pics/markov-chain.png) | ![alt](pics/markov-decision-process.png)* **Bellman Optimality Equation**: a recursive equation that gives the optimal state value of any state *s*.* Knowing optimal state values = useful, but doesn't tell agent what to do. The **Q-Value algorithm** helps solve this problem. Optimal Q-Value of a state-action pair = sum of discounted future rewards the agent can expect on average after reaching that state and choosing that action, assuming it acts optimally afterwards.
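For reference, the Bellman optimality equation and the Q-Value Iteration update implemented in the next cell can be written as (with transition probabilities $T$, rewards $R$, and discount rate $\gamma$):

$$V^*(s) = \max_a \sum_{s'} T(s, a, s')\,\big[R(s, a, s') + \gamma\, V^*(s')\big]$$

$$Q_{k+1}(s, a) \leftarrow \sum_{s'} T(s, a, s')\,\big[R(s, a, s') + \gamma\, \max_{a'} Q_k(s', a')\big]$$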
# Define MDP: nan=np.nan # represents impossible actions T = np.array([ # shape=[s, a, s'] [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]], [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]], [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]], ]) R = np.array([ # shape=[s, a, s'] [[10., 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], [[10., 0.0, 0.0], [nan, nan, nan], [0.0, 0.0, -50.]], [[nan, nan, nan], [40., 0.0, 0.0], [nan, nan, nan]], ]) possible_actions = [[0, 1, 2], [0, 2], [1]] # run Q-Value Iteration algo Q = np.full((3, 3), -np.inf) for state, actions in enumerate(possible_actions): Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions learning_rate = 0.01 discount_rate = 0.95 n_iterations = 100 for iteration in range(n_iterations): Q_prev = Q.copy() for s in range(3): for a in possible_actions[s]: Q[s, a] = np.sum([ T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp])) for sp in range(3) ]) print("Q: \n",Q) print("Optimal action for each state:\n",np.argmax(Q, axis=1)) # change discount rate to 0.9, see how policy changes: discount_rate = 0.90 for iteration in range(n_iterations): Q_prev = Q.copy() for s in range(3): for a in possible_actions[s]: Q[s, a] = np.sum([ T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp])) for sp in range(3) ]) print("Q: \n",Q) print("Optimal action for each state:\n",np.argmax(Q, axis=1))
Q: [[ 1.89189499e+01 1.70270580e+01 1.36216526e+01] [ 3.09979853e-05 -inf -4.87968388e+00] [ -inf 5.01336811e+01 -inf]] Optimal action for each state: [0 0 1]
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
Temporal Difference Learning & Q-Learning* In general - agent has no knowledge of transition probabilities or rewards* **Temporal Difference Learning** (TD Learning) similar to value iteration, but accounts for this lack of knowledge.* Algorithm tracks a running average of the most recent rewards & anticipated rewards.* **Q-Learning** = adaptation of Q-Value Iteration where transition probabilities & rewards are initially unknown.
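The running-average updates described above are usually written as (with learning rate $\alpha$, observed reward $r$, discount rate $\gamma$, and next state $s'$):

$$V_{k+1}(s) \leftarrow (1-\alpha)\,V_k(s) + \alpha\,\big[r + \gamma\, V_k(s')\big]$$

$$Q_{k+1}(s, a) \leftarrow (1-\alpha)\,Q_k(s, a) + \alpha\,\big[r + \gamma\, \max_{a'} Q_k(s', a')\big]$$

Note that the Q-Learning cell below multiplies the old estimate by `learning_rate` and the target by `1 - learning_rate`, which appears to swap the roles of the two coefficients relative to this standard form.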
import numpy.random as rnd learning_rate0 = 0.05 learning_rate_decay = 0.1 n_iterations = 20000 s = 0 # start in state 0 Q = np.full((3, 3), -np.inf) # -inf for impossible actions for state, actions in enumerate(possible_actions): Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions for iteration in range(n_iterations): a = rnd.choice(possible_actions[s]) # choose an action (randomly) sp = rnd.choice(range(3), p=T[s, a]) # pick next state using T[s, a] reward = R[s, a, sp] learning_rate = learning_rate0 / (1 + iteration * learning_rate_decay) Q[s, a] = learning_rate * Q[s, a] + (1 - learning_rate) * (reward + discount_rate * np.max(Q[sp])) s = sp # move to next state print("Q: \n",Q) print("Optimal action for each state:\n",np.argmax(Q, axis=1))
Q: [[ -inf 2.47032823e-323 -inf] [ 0.00000000e+000 -inf 0.00000000e+000] [ -inf 0.00000000e+000 -inf]] Optimal action for each state: [1 0 1]
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
Exploration Policies* Q-Learning works only if exploration is thorough - not always possible.* Better alternative: explore more interesting routes using a *sigma* probability Approximate Q-Learning* TODO Ms Pac-Man with Deep Q-Learning
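One common way to implement the exploration probability mentioned above is an epsilon-greedy rule; here is a minimal sketch (a fuller `epsilon_greedy` helper with a decaying epsilon appears in the Deep Q-Learning code further down):

```python
import numpy as np

def epsilon_greedy_action(q_values, epsilon):
    # With probability epsilon pick a random action (explore),
    # otherwise pick the best known action (exploit).
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))
```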
env = gym.make('MsPacman-v0') obs = env.reset() obs.shape, env.action_space # action_space = 9 possible joystick actions # observations = atari screenshots as 3D NumPy arrays mspacman_color = np.array([210, 164, 74]).mean() # crop image, shrink to 88x80 pixels, convert to grayscale, improve contrast def preprocess_observation(obs): img = obs[1:176:2, ::2] # crop and downsize img = img.mean(axis=2) # to greyscale img[img==mspacman_color] = 0 # improve contrast img = (img - 128) / 128 - 1 # normalize from -1. to 1. return img.reshape(88, 80, 1)
_____no_output_____
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
Ms PacMan Observation | Deep-Q net- | -![observation](pics/mspacman-before-after.png) | ![alt](pics/mspacman-deepq.png)
# Create DQN # 3 convo layers, then 2 FC layers including output layer from tensorflow.contrib.layers import convolution2d, fully_connected input_height = 88 input_width = 80 input_channels = 1 conv_n_maps = [32, 64, 64] conv_kernel_sizes = [(8,8), (4,4), (3,3)] conv_strides = [4, 2, 1] conv_paddings = ["SAME"]*3 conv_activation = [tf.nn.relu]*3 n_hidden_in = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each n_hidden = 512 hidden_activation = tf.nn.relu n_outputs = env.action_space.n # 9 discrete actions are available initializer = tf.contrib.layers.variance_scaling_initializer() # training will need ***TWO*** DQNs: # one to train the actor # another to learn from trials & errors (critic) # q_network is our net builder. def q_network(X_state, scope): prev_layer = X_state conv_layers = [] with tf.variable_scope(scope) as scope: for n_maps, kernel_size, stride, padding, activation in zip( conv_n_maps, conv_kernel_sizes, conv_strides, conv_paddings, conv_activation): prev_layer = convolution2d( prev_layer, num_outputs=n_maps, kernel_size=kernel_size, stride=stride, padding=padding, activation_fn=activation, weights_initializer=initializer) conv_layers.append(prev_layer) last_conv_layer_flat = tf.reshape( prev_layer, shape=[-1, n_hidden_in]) hidden = fully_connected( last_conv_layer_flat, n_hidden, activation_fn=hidden_activation, weights_initializer=initializer) outputs = fully_connected( hidden, n_outputs, activation_fn=None, weights_initializer=initializer) trainable_vars = tf.get_collection( tf.GraphKeys.TRAINABLE_VARIABLES, scope=scope.name) trainable_vars_by_name = {var.name[len(scope.name):]: var for var in trainable_vars} return outputs, trainable_vars_by_name # create input placeholders & two DQNs X_state = tf.placeholder( tf.float32, shape=[None, input_height, input_width, input_channels]) actor_q_values, actor_vars = q_network(X_state, scope="q_networks/actor") critic_q_values, critic_vars = q_network(X_state, scope="q_networks/critic") copy_ops = [actor_var.assign(critic_vars[var_name]) for var_name, actor_var in actor_vars.items()] # op to copy all trainable vars of critic DQN to actor DQN... # use tf.group() to group all assignment ops together copy_critic_to_actor = tf.group(*copy_ops) # Critic DQN learns by matching Q-Value predictions # to actor's Q-Value estimations during game play # Actor will use a "replay memory" (5 tuples): # state, action, next-state, reward, (0=over/1=continue) # use normal supervised training ops # occasionally copy critic DQN to actor DQN # DQN normally returns one Q-Value for every poss. action # only need Q-Value of action actually chosen # So, convert action to one-hot vector [0...1...0], multiple by Q-values # then sum over 1st axis. X_action = tf.placeholder( tf.int32, shape=[None]) q_value = tf.reduce_sum( critic_q_values * tf.one_hot(X_action, n_outputs), axis=1, keep_dims=True) # training setup tf.reset_default_graph() y = tf.placeholder( tf.float32, shape=[None, 1]) cost = tf.reduce_mean( tf.square(y - q_value)) # non-trainable. 
minimize() op will manage incrementing it global_step = tf.Variable( 0, trainable=False, name='global_step') optimizer = tf.train.AdamOptimizer(learning_rate) training_op = optimizer.minimize(cost, global_step=global_step) init = tf.global_variables_initializer() saver = tf.train.Saver() # use a deque list to build the replay memory from collections import deque replay_memory_size = 10000 replay_memory = deque( [], maxlen=replay_memory_size) def sample_memories(batch_size): indices = rnd.permutation( len(replay_memory))[:batch_size] cols = [[], [], [], [], []] # state, action, reward, next_state, continue for idx in indices: memory = replay_memory[idx] for col, value in zip(cols, memory): col.append(value) cols = [np.array(col) for col in cols] return (cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1)) # create an actor # use epsilon-greedy policy # gradually decrease epsilon from 1.0 to 0.05 across 50K training steps eps_min = 0.05 eps_max = 1.0 eps_decay_steps = 50000 def epsilon_greedy(q_values, step): epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps) if rnd.rand() < epsilon: return rnd.randint(n_outputs) # random action else: return np.argmax(q_values) # optimal action # training setup: the variables n_steps = 100000 # total number of training steps training_start = 1000 # start training after 1,000 game iterations training_interval = 3 # run a training step every 3 game iterations save_steps = 50 # save the model every 50 training steps copy_steps = 25 # copy the critic to the actor every 25 training steps discount_rate = 0.95 skip_start = 90 # skip the start of every game (it's just waiting time) batch_size = 50 iteration = 0 # game iterations checkpoint_path = "./my_dqn.ckpt" done = True # env needs to be reset # let's get busy import os with tf.Session() as sess: # restore models if checkpoint file exists if os.path.isfile(checkpoint_path): saver.restore(sess, checkpoint_path) # otherwise normally initialize variables else: init.run() while True: step = global_step.eval() if step >= n_steps: break # iteration = total number of game steps from beginning iteration += 1 if done: # game over, start again obs = env.reset() for skip in range(skip_start): # skip the start of each game obs, reward, done, info = env.step(0) state = preprocess_observation(obs) # Actor evaluates what to do q_values = actor_q_values.eval(feed_dict={X_state: [state]}) action = epsilon_greedy(q_values, step) # Actor plays obs, reward, done, info = env.step(action) next_state = preprocess_observation(obs) # Let's memorize what just happened replay_memory.append((state, action, reward, next_state, 1.0 - done)) state = next_state if iteration < training_start or iteration % training_interval != 0: continue # Critic learns X_state_val, X_action_val, rewards, X_next_state_val, continues = ( sample_memories(batch_size)) next_q_values = actor_q_values.eval( feed_dict={X_state: X_next_state_val}) max_next_q_values = np.max( next_q_values, axis=1, keepdims=True) y_val = rewards + continues * discount_rate * max_next_q_values training_op.run( feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val}) # Regularly copy critic to actor if step % copy_steps == 0: copy_critic_to_actor.run() # And save regularly if step % save_steps == 0: saver.save(sess, checkpoint_path) print("\n",np.average(y_val))
1.09000234097 1.35392784142 1.56906713688 2.5765440191 1.57079289043 1.75170834792 1.97005553639 1.97246688247 2.16126081383 1.550295331 1.75750140131 1.56052656734 1.7519523176 1.74495741558 1.95223849511 1.35289915931 1.56913152564 2.96387254691 1.76067311585 1.35536773229 1.54768545294 1.53594982147 1.56104325151 1.96987313104 2.35546155441 1.5688166486 3.08286282682 3.28864161086 3.2878398273 3.09510449028 3.09807873964 3.90697311211 3.07757974195 3.09214673901 3.28402029777 3.28337000942 3.4255889504 3.49763186431 2.85764229989 3.04482784653 2.68228099513 3.28635532999 3.29647485089 3.07898310328 3.10530596256 3.27691918874 3.09561720395 2.67830030346 3.09576807404 3.288335078 3.0956065948 5.21222548962 4.21721751595 4.7905973649 4.59864345837 4.39875211382 4.51839643717 4.59503188992 5.01186150789 4.77968219852 4.78787856865 4.20382899523 4.20432999897 5.0028930707 5.20069698572 4.80375980473 5.19750945711 4.20367767668 4.19593407536 4.40061367989 4.6054182477 4.79921974087 4.38844807434 4.20397897291 4.60095557356 4.59488785553 5.75924422598 5.75949315596 5.16320213652 5.36019721937 5.56076610899 5.16949163198 5.75895399189 5.96050115204 5.97032629395
MIT
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
SQLAlchemy Homework - Surfs Up! Before You Begin1. Create a new repository for this project called `sqlalchemy-challenge`. **Do not add this homework to an existing repository**.2. Clone the new repository to your computer.3. Add your Jupyter notebook and `app.py` to this folder. These will be the main scripts to run for analysis.4. Push the above changes to GitHub or GitLab.![surfs-up.png](Images/surfs-up.png)Congratulations! You've decided to treat yourself to a long holiday vacation in Honolulu, Hawaii! To help with your trip planning, you need to do some climate analysis on the area. The following outlines what you need to do. Step 1 - Climate Analysis and ExplorationTo begin, use Python and SQLAlchemy to do basic climate analysis and data exploration of your climate database. All of the following analysis should be completed using SQLAlchemy ORM queries, Pandas, and Matplotlib.* Use the provided [starter notebook](climate_starter.ipynb) and [hawaii.sqlite](Resources/hawaii.sqlite) files to complete your climate analysis and data exploration.* Choose a start date and end date for your trip. Make sure that your vacation range is approximately 3-15 days total.* Use SQLAlchemy `create_engine` to connect to your sqlite database.* Use SQLAlchemy `automap_base()` to reflect your tables into classes and save a reference to those classes called `Station` and `Measurement`. Precipitation Analysis* Design a query to retrieve the last 12 months of precipitation data.* Select only the `date` and `prcp` values.* Load the query results into a Pandas DataFrame and set the index to the date column.* Sort the DataFrame values by `date`.* Plot the results using the DataFrame `plot` method. ![precipitation](Images/precipitation.png)* Use Pandas to print the summary statistics for the precipitation data. Station Analysis* Design a query to calculate the total number of stations.* Design a query to find the most active stations. * List the stations and observation counts in descending order. * Which station has the highest number of observations? * Hint: You will need to use a function such as `func.min`, `func.max`, `func.avg`, and `func.count` in your queries.* Design a query to retrieve the last 12 months of temperature observation data (TOBS). * Filter by the station with the highest number of observations. * Plot the results as a histogram with `bins=12`. ![station-histogram](Images/station-histogram.png)- - - Step 2 - Climate AppNow that you have completed your initial analysis, design a Flask API based on the queries that you have just developed.* Use Flask to create your routes. Routes* `/` * Home page. * List all routes that are available.* `/api/v1.0/precipitation` * Convert the query results to a dictionary using `date` as the key and `prcp` as the value. * Return the JSON representation of your dictionary.* `/api/v1.0/stations` * Return a JSON list of stations from the dataset.* `/api/v1.0/tobs` * Query the dates and temperature observations of the most active station for the last year of data. * Return a JSON list of temperature observations (TOBS) for the previous year.* `/api/v1.0/` and `/api/v1.0//` * Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start or start-end range. * When given the start only, calculate `TMIN`, `TAVG`, and `TMAX` for all dates greater than and equal to the start date. * When given the start and the end date, calculate the `TMIN`, `TAVG`, and `TMAX` for dates between the start and end date inclusive. 
Hints* You will need to join the station and measurement tables for some of the queries.* Use Flask `jsonify` to convert your API data into a valid JSON response object.- - -
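As a hedged illustration of the join hint above (assuming `Station`, `Measurement`, and `session` are the reflected classes and session created later in this notebook), a station/measurement join could look like this sketch:

```python
# Sketch only: pair each measurement with its station's metadata.
joined = (
    session.query(Station.name, Measurement.date, Measurement.prcp)
    .filter(Station.station == Measurement.station)
    .all()
)
```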
%matplotlib inline from matplotlib import style style.use('fivethirtyeight') import matplotlib.pyplot as plt import numpy as np import pandas as pd import datetime as dt import seaborn as sns from scipy.stats import linregress from sklearn import datasets
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Reflect Tables into SQLAlchemy ORM Precipitation Analysis* Design a query to retrieve the last 12 months of precipitation data.* Select only the `date` and `prcp` values.* Load the query results into a Pandas DataFrame and set the index to the date column.* Sort the DataFrame values by `date`.* Plot the results using the DataFrame `plot` method. *![precipitation](Images/precipitation.png)* Use Pandas to print the summary statistics for the precipitation data.
# Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func, inspect engine = create_engine("sqlite:///Resources/hawaii.sqlite") #Base.metadata.create_all(engine) inspector = inspect(engine) inspector.get_table_names() # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine,reflect= True) # Reflect Database into ORM class #Base.classes.measurement # Create our session (link) from Python to the DB session = Session(bind=engine) session = Session(engine) # We can view all of the classes that automap found Base.classes.keys() # Save references to each table Measurement = Base.classes.measurement Station = Base.classes.station engine.execute('Select * from measurement').fetchall() # Get columns of 'measurement' table columns = inspector.get_columns('measurement') for c in columns: print(c) # A very odd way to get all column values if they are made by tuples with keys and values, it's more straightforward # and sensible to just do columns = inspector.get_columns('measurement') the a for loop: for c in columns: print(c) columns = inspector.get_columns('measurement') for c in columns: print(c.keys()) for c in columns: print(c.values())
dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key']) dict_values(['id', INTEGER(), False, None, 'auto', 1]) dict_values(['station', TEXT(), True, None, 'auto', 0]) dict_values(['date', TEXT(), True, None, 'auto', 0]) dict_values(['prcp', FLOAT(), True, None, 'auto', 0]) dict_values(['tobs', FLOAT(), True, None, 'auto', 0])
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Exploratory Climate Analysis
# Design a query to retrieve the last 12 months of precipitation data and plot the results # Design a query to retrieve the last 12 months of precipitation data. max_date = session.query(func.max(Measurement.date)).all()[0][0] # Select only the date and prcp values. #datetime.datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S.%f') import datetime print(max_date) print(type(max_date)) # Calculate the date 1 year ago from the last data point in the database min_date = datetime.datetime.strptime(max_date,'%Y-%m-%d') - datetime.timedelta(days = 365) print(min_date) print(min_date.year, min_date.month, min_date.day) # Perform a query to retrieve the data and precipitation scores results = session.query(Measurement.prcp, Measurement.date).filter(Measurement.date >= min_date).all() results # Load the query results into a Pandas DataFrame and set the index to the date column. prcp_anal_df = pd.DataFrame(results, columns = ['prcp','date']).set_index('date') # Sort the DataFrame values by date. prcp_anal_df.sort_values(by=['date'], inplace=True) prcp_anal_df # Create Plot(s) prcp_anal_df.plot(rot = 90) plt.xlabel('Date') plt.ylabel('Precipitation (inches)') plt.title('Precipitation over One Year in Hawaii') plt.savefig("histo_prcp_date.png") plt.show() sns.set() plot1 = prcp_anal_df.plot(figsize = (10, 5)) fig = plot1.get_figure() plt.title('Precipitation in Hawaii') plt.xlabel('Date') plt.ylabel('Precipitation') plt.legend(["Precipitation"],loc="best") plt.xticks(rotation=45) plt.tight_layout() plt.savefig("Precipitation in Hawaii_bar.png") plt.show() prcp_anal_df.describe() # I wanted a range of precipitation amounts for plotting purposes...the code on line 3 and 4 and 5 didn't work ## prcp_anal.max_prcp = session.query(func.max(Measurement.prcp.filter(Measurement.date >= '2016-08-23' ))).\ ## order_by(func.max(Items.UnitPrice * Items.Quantity).desc()).all() ## prcp_anal.max_prcp prcp_anal_max_prcp = session.query(Measurement.prcp, func.max(Measurement.prcp)).\ filter(Measurement.date >= '2016-08-23').\ group_by(Measurement.date).\ order_by(func.max(Measurement.prcp).asc()).all() prcp_anal_max_prcp # I initially did the following in a cell below. Again, I wanted a range of prcp values for the year in our DataFrame # so here I got the min but realized both the min and the max, or both queries are useless to me here unless I were # use plt.ylim in my plots, which I don't, I just allow the DF to supply its intrinsic values # and both give identical results. I will leave it here in thes assignment just to show my thought process # prcp_anal_min_prcp = session.query(Measurement.prcp, func.min(Measurement.prcp)).\ # filter(Measurement.date > '2016-08-23').\ # group_by(Measurement.date).\ # order_by(func.min(Measurement.prcp).asc()).all() # prcp_anal_min_prcp
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
***STATION ANALYSIS*** 1) Design a query to calculate the total number of stations. 2) Design a query to find the most active stations. 3) List the stations and observation counts in descending order. 4) Which station has the highest number of observations? Hint: You will need to use a function such as func.min, func.max, func.avg, and func.count in your queries. 5) Design a query to retrieve the last 12 months of temperature observation data (TOBS). 6) Filter by the station with the highest number of observations. 7) Plot the results as a histogram with bins=12.
Station = Base.classes.station session = Session(engine) # Getting column values from each table, here 'station' columns = inspector.get_columns('station') for c in columns: print(c) # Get columns of 'measurement' table columns = inspector.get_columns('measurement') for c in columns: print(c) engine.execute('Select * from station').fetchall() # Design a query to show how many stations are available in this dataset? session.query(Station.station).count() # What are the most active stations? (i.e. what stations have the most rows)? # List the stations and the counts in descending order. # List the stations and the counts in descending order. Think about somehow using this from extra activity Active_Stations = session.query(Station.station ,func.count(Measurement.tobs)).filter(Station.station == Measurement.station).\ group_by(Station.station).order_by(func.count(Measurement.tobs).desc()).all() print(f"The most active station {Active_Stations[0][0]} has {Active_Stations[0][1]} observations!") Active_Stations # Using the station id from the previous query, calculate the lowest temperature recorded, # highest temperature recorded, and average temperature of the most active station? Station_Name = session.query(Station.name).filter(Station.station == Active_Stations[0][0]).all() print(Station_Name) Temp_Stats = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)).\ filter(Station.station == Active_Stations[0][0]).all() print(Temp_Stats) # Choose the station with the highest number of temperature observations. Station_Name = session.query(Station.name).filter(Station.station == Active_Stations[0][0]).all() Station_Name # Query the last 12 months of temperature observation data for this station results_WAHIAWA = session.query(Measurement.date,Measurement.tobs).filter(Measurement.date > min_date).\ filter(Station.station == Active_Stations[0][0]).all() results_WAHIAWA # Make a DataFrame from the query results above showing dates and temp observation at the most active station results_WAHIAWA_df = pd.DataFrame(results_WAHIAWA) results_WAHIAWA_df # Plot the results as a histogram sns.set() plt.figure(figsize=(10,5)) plt.hist(results_WAHIAWA_df['tobs'],bins=12,color='magenta') plt.xlabel('Temperature',weight='bold') plt.ylabel('Frequency',weight='bold') plt.title('Station Analysis',weight='bold') plt.legend(["Temperature Observation"],loc="best") plt.savefig("Station_Analysis_hist.png") plt.show()
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Bonus Challenge Assignment
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date string in the format %Y-%m-%d end_date (string): A date string in the format %Y-%m-%d Returns: TMIN, TAVE, and TMAX """ return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\ filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all() # function usage example print(calc_temps('2012-02-28', '2012-03-05')) # Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax calc_temps('2017-06-22', '2017-07-05') # for your trip using the previous year's data for those same dates. (calc_temps('2016-06-22', '2016-07-05')) # Plot the results from your previous query as a bar chart. # Use "Trip Avg Temp" as your Title # Use the average temperature for the y value # Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr) # Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates. # Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation # Create a query that will calculate the daily normals # (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day) def daily_normals(date): """Daily Normals. Args: date (str): A date string in the format '%m-%d' Returns: A list of tuples containing the daily normals, tmin, tavg, and tmax """ sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)] return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all() daily_normals("01-01") # calculate the daily normals for your trip # push each tuple of calculations into a list called `normals` # Set the start and end date of the trip # Use the start and end date to create a range of dates # Stip off the year and save a list of %m-%d strings # Loop through the list of %m-%d strings and calculate the normals for each date # Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index # Plot the daily normals as an area plot with `stacked=False`
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
Step 2 - Climate AppNow that you have completed your initial analysis, design a Flask API based on the queries that you have just developed.* Use Flask to create your routes. Routes* `/` * Home page. * List all routes that are available.* `/api/v1.0/precipitation` * Convert the query results to a dictionary using `date` as the key and `prcp` as the value. * Return the JSON representation of your dictionary.* `/api/v1.0/stations` * Return a JSON list of stations from the dataset.* `/api/v1.0/tobs` * Query the dates and temperature observations of the most active station for the last year of data. * Return a JSON list of temperature observations (TOBS) for the previous year.* `/api/v1.0/` and `/api/v1.0//` * Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start or start-end range. * When given the start only, calculate `TMIN`, `TAVG`, and `TMAX` for all dates greater than and equal to the start date. * When given the start and the end date, calculate the `TMIN`, `TAVG`, and `TMAX` for dates between the start and end date inclusive. Hints* You will need to join the station and measurement tables for some of the queries.* Use Flask `jsonify` to convert your API data into a valid JSON response object.- - -
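The next cell only implements the precipitation route; as a hedged sketch (reusing the `engine`, `Session`, `Measurement`, `func`, `app`, and `jsonify` objects defined there, with a route function name of my own), the `/api/v1.0/<start>` route could look like:

```python
@app.route("/api/v1.0/<start>")
def temps_from_start(start):
    # TMIN, TAVG, TMAX for all dates greater than or equal to the start date.
    session = Session(engine)
    tmin, tavg, tmax = session.query(
        func.min(Measurement.tobs),
        func.avg(Measurement.tobs),
        func.max(Measurement.tobs),
    ).filter(Measurement.date >= start).all()[0]
    session.close()
    return jsonify({"TMIN": tmin, "TAVG": tavg, "TMAX": tmax})
```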
import numpy as np import datetime as dt from datetime import timedelta, datetime import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func, distinct, text, desc from flask import Flask, jsonify ################################################# # Database Setup ################################################# #engine = create_engine("sqlite:///Resources/hawaii.sqlite") engine = create_engine("sqlite:///Resources/hawaii.sqlite?check_same_thread=False") # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine, reflect=True) # Save reference to the table Measurement = Base.classes.measurement Station = Base.classes.station ################################################# # Flask Setup ################################################# app = Flask(__name__) ################################################# # Flask Routes ################################################# @app.route("/") def welcome(): """List all available api routes.""" return ( f"Available Routes:<br/>" f"/api/v1.0/precipitation<br/>" f"/api/v1.0/stations<br/>" f"/api/v1.0/tobs<br/>" f"/api/v1.0/<br/>" f"/api/v1.0/" ) @app.route("/api/v1.0/precipitation") def precipitation(): # Create our session (link) from Python to the DB session = Session(engine) """Return a list of all precipitation data""" # Query Precipitation data annual_rainfall = session.query(Measurement.date, Measurement.prcp).order_by(Measurement.date).all() session.close() # Convert list of tuples into normal list all_rain = dict(annual_rainfall) return jsonify(all_rain) if __name__ == '__main__': app.run(debug=True)
_____no_output_____
ADSL
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
License.Copyright 2021 Tristan Behrens.Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. Sampling using the trained model.
import tensorflow as tf from transformers import GPT2LMHeadModel, TFGPT2LMHeadModel from transformers import PreTrainedTokenizerFast from tokenizers import Tokenizer import os import numpy as np from source.helpers.samplinghelpers import * # Where the checkpoint lives. # Note can be downloaded from: https://ai-guru.s3.eu-central-1.amazonaws.com/mmm-jsb/mmm_jsb_checkpoints.zip check_point_path = os.path.join("checkpoints", "20210411-1426") # Load the validation data. validation_data_path = os.path.join(check_point_path, "datasets", "jsb_mmmtrack", "token_sequences_valid.txt") # Load the tokenizer. tokenizer_path = os.path.join(check_point_path, "datasets", "jsb_mmmtrack", "tokenizer.json") tokenizer = Tokenizer.from_file(tokenizer_path) tokenizer = PreTrainedTokenizerFast(tokenizer_file=tokenizer_path) tokenizer.add_special_tokens({'pad_token': '[PAD]'}) # Load the model. model_path = os.path.join(check_point_path, "training", "jsb_mmmtrack", "best_model") model = GPT2LMHeadModel.from_pretrained(model_path) print("Model loaded.") priming_sample, priming_sample_original = get_priming_token_sequence( validation_data_path, stop_on_track_end=0, stop_after_n_tokens=20, return_original=True ) generated_sample = generate(model, tokenizer, priming_sample) print("Original sample") render_token_sequence(priming_sample_original, use_program=False) print("Reduced sample") render_token_sequence(priming_sample, use_program=False) print("Reconstructed sample") render_token_sequence(generated_sample, use_program=False)
_____no_output_____
Apache-2.0
sample.ipynb
AI-Guru/MMM-JSB
Maximum Likelihood Estimation (Generic models) This tutorial explains how to quickly implement new maximum likelihood models in `statsmodels`. We give two examples: 1. Probit model for binary dependent variables2. Negative binomial model for count dataThe `GenericLikelihoodModel` class eases the process by providing tools such as automatic numeric differentiation and a unified interface to ``scipy`` optimization functions. Using ``statsmodels``, users can fit new MLE models simply by "plugging-in" a log-likelihood function. Example 1: Probit model
import numpy as np from scipy import stats import statsmodels.api as sm from statsmodels.base.model import GenericLikelihoodModel
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
The ``Spector`` dataset is distributed with ``statsmodels``. You can access a vector of values for the dependent variable (``endog``) and a matrix of regressors (``exog``) like this:
data = sm.datasets.spector.load_pandas() exog = data.exog endog = data.endog print(sm.datasets.spector.NOTE) print(data.exog.head())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Then, we add a constant to the matrix of regressors:
exog = sm.add_constant(exog, prepend=True)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
To create your own Likelihood Model, you simply need to overwrite the loglike method.
class MyProbit(GenericLikelihoodModel): def loglike(self, params): exog = self.exog endog = self.endog q = 2 * endog - 1 return stats.norm.logcdf(q*np.dot(exog, params)).sum()
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Estimate the model and print a summary:
sm_probit_manual = MyProbit(endog, exog).fit() print(sm_probit_manual.summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Compare your Probit implementation to ``statsmodels``' "canned" implementation:
sm_probit_canned = sm.Probit(endog, exog).fit() print(sm_probit_canned.params) print(sm_probit_manual.params) print(sm_probit_canned.cov_params()) print(sm_probit_manual.cov_params())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Notice that the ``GenericLikelihoodModel`` class provides automatic differentiation, so we did not have to provide Hessian or Score functions in order to calculate the covariance estimates. Example 2: Negative Binomial Regression for Count Data Consider a negative binomial regression model for count data with log-likelihood (type NB-2) function expressed as:$$ \mathcal{L}(\beta_j; y, \alpha) = \sum_{i=1}^n y_i \ln \left ( \frac{\alpha \exp(X_i'\beta)}{1+\alpha \exp(X_i'\beta)} \right ) - \frac{1}{\alpha} \ln(1+\alpha \exp(X_i'\beta)) + \ln \Gamma (y_i + 1/\alpha) - \ln \Gamma (y_i+1) - \ln \Gamma (1/\alpha)$$with a matrix of regressors $X$, a vector of coefficients $\beta$, and the negative binomial heterogeneity parameter $\alpha$. Using the ``nbinom`` distribution from ``scipy``, we can write this likelihood simply as:
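For clarity about the parametrization used in the next cell: with the conditional mean $\mu_i$ below, ``scipy``'s ``nbinom.logpmf`` is called with size $n$ and success probability $p$ given by

$$\mu_i = \exp(X_i'\beta), \qquad n = \frac{1}{\alpha}, \qquad p = \frac{n}{n + \mu_i}$$

which matches the NB-2 mean/heterogeneity form of the log-likelihood above.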
import numpy as np from scipy.stats import nbinom def _ll_nb2(y, X, beta, alph): mu = np.exp(np.dot(X, beta)) size = 1/alph prob = size/(size+mu) ll = nbinom.logpmf(y, size, prob) return ll
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
New Model Class We create a new model class which inherits from ``GenericLikelihoodModel``:
from statsmodels.base.model import GenericLikelihoodModel class NBin(GenericLikelihoodModel): def __init__(self, endog, exog, **kwds): super(NBin, self).__init__(endog, exog, **kwds) def nloglikeobs(self, params): alph = params[-1] beta = params[:-1] ll = _ll_nb2(self.endog, self.exog, beta, alph) return -ll def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds): # we have one additional parameter and we need to add it for summary self.exog_names.append('alpha') if start_params == None: # Reasonable starting values start_params = np.append(np.zeros(self.exog.shape[1]), .5) # intercept start_params[-2] = np.log(self.endog.mean()) return super(NBin, self).fit(start_params=start_params, maxiter=maxiter, maxfun=maxfun, **kwds)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Two important things to notice: + ``nloglikeobs``: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix). + ``start_params``: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization. That's it! You're done! Usage Example The [Medpar](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/doc/COUNT/medpar.html) dataset is hosted in CSV format at the [Rdatasets repository](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets). We use the ``read_csv`` function from the [Pandas library](https://pandas.pydata.org) to load the data in memory. We then print the first few columns:
import statsmodels.api as sm medpar = sm.datasets.get_rdataset("medpar", "COUNT", cache=True).data medpar.head()
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
The model we are interested in has a vector of non-negative integers as dependent variable (``los``), and 5 regressors: ``Intercept``, ``type2``, ``type3``, ``hmo``, ``white``. For estimation, we need to create two variables to hold our regressors and the outcome variable. These can be ndarrays or pandas objects.
y = medpar.los X = medpar[["type2", "type3", "hmo", "white"]].copy() X["constant"] = 1
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Then, we fit the model and extract some information:
mod = NBin(y, X) res = mod.fit()
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Extract parameter estimates, standard errors, p-values, AIC, etc.:
print('Parameters: ', res.params) print('Standard errors: ', res.bse) print('P-values: ', res.pvalues) print('AIC: ', res.aic)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
As usual, you can obtain a full list of available information by typing ``dir(res)``. We can also look at the summary of the estimation results.
print(res.summary())
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Testing We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.
res_nbin = sm.NegativeBinomial(y, X).fit(disp=0) print(res_nbin.summary()) print(res_nbin.params) print(res_nbin.bse)
_____no_output_____
BSD-3-Clause
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
Connecting MLOS to a C++ application This notebook walks through connecting MLOS to a C++ application within a docker container. We will start a docker container, and run an MLOS Agent within it. The MLOS Agent will start the actual application, and communicate with it via a shared memory channel. In this example, the MLOS Agent controls the execution of the workloads on the application, and we will later connect to the agent to optimize the configuration of our application. The application is a "SmartCache" similar to the one in the SmartCacheOptimization notebook, though with some more parameters to tune. The source for this example is in the `source/Examples/SmartCache` folder. Building the application To build and run the necessary components for this example you need to create and run a docker image. To that end, open a separate terminal and go to the MLOS main folder. Within that folder, run the following commands: 1. [Build the Docker image](https://microsoft.github.io/MLOS/documentation/01-Prerequisites/build-the-docker-image) using the [`Dockerfile`](../../Dockerfilemlos-github-tree-view) at the root of the repository. ```shell docker build --build-arg=UbuntuVersion=20.04 -t mlos/build:ubuntu-20.04 . ``` 2. [Run the Docker image](https://microsoft.github.io/MLOS/documentation/02-Build/create-a-new-container-instance) you just built. ```shell docker run -it -v $PWD:/src/MLOS -p 127.0.0.1:50051:50051/tcp \ --name mlos-build mlos/build:ubuntu-20.04 ``` This will open a shell inside the docker container. We're also exposing port 50051 on the docker container to port 50051 of our host machine. This will allow us later to connect to the optimizer that runs inside the docker container. 3. Inside the container, [build the compiled software](https://microsoft.github.io/MLOS/documentation/02-Build/cli-make) with `make`: ```sh make dotnet-build cmake-build cmake-install ``` The relevant output will be at: - Mlos.Agent.Server: This file corresponds to the main entry point for MLOS, written in C#. You can find the source in `source/Mlos.Agent.Server/MlosAgentServer.cs` and the binary at `target/bin/Release/Mlos.Agent.Server.dll` - SmartCache: This is the C++ executable that implements the SmartCache and executes some workloads. You can find the source in `source/Examples/SmartCache/Main.cpp` and the binary at `target/bin/Release/SmartCache` - SmartCache.SettingsRegistry: This is the C# code that declares the configuration options for the SmartCache component, and defines the communication between the MLOS Agent and the SmartCache component.
You can find the source in `source/Examples/SmartCache/SmartCache.SettingsRegistry/AssemblyInitializer.cs` and the binary at `target/bin/Release/SmartCache.SettingsRegistry.dll`. Starting the MLOS Agent and executing the workloads: Within the docker container, we can now tell the agent where the configuration options are stored by setting `MLOS_SETTINGS_REGISTRY_PATH`. Then, we can run the MLOS Agent, which will in turn run the SmartCache executable.
```sh
export MLOS_SETTINGS_REGISTRY_PATH="target/bin/Release"
tools/bin/dotnet target/bin/Release/Mlos.Agent.Server.dll \
    --executable target/bin/Release/SmartCache
```
from mlos.Grpc.OptimizerMonitor import OptimizerMonitor import grpc # create a grpc channel and instantiate the OptimizerMonitor channel = grpc.insecure_channel('127.0.0.1:50051') optimizer_monitor = OptimizerMonitor(grpc_channel=channel) optimizer_monitor # There should be one optimizer running in the docker container # corresponding to the C++ SmartCache optimization problem # An OptimizerMicroservice can run multiple optimizers, which would all be listed here optimizers = optimizer_monitor.get_existing_optimizers() optimizers
_____no_output_____
MIT
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
We can now get the observations exactly the same way as for the Python example in `SmartCacheOptimization.ipynb`
optimizer = optimizers[0] features_df, objectives_df = optimizer.get_all_observations() import pandas as pd features, targets = optimizer.get_all_observations() data = pd.concat([features, targets], axis=1) data.to_json("CacheLatencyMainCPP.json") data lru_data, mru_data = data.groupby('cache_implementation') import matplotlib.pyplot as plt line_lru = lru_data[1].plot( y='PushLatency', label='LRU', marker='o', linestyle='none', alpha=.6,figsize=(16, 6)) mru_data[1].plot( y='PushLatency', label='MRU', marker='o', linestyle='none', alpha=.6, ax=plt.gca(),figsize=(16, 6)) plt.ylabel("Cache Latency") plt.xlabel("Observations") plt.legend() plt.savefig("Cache Latency&Observations-Main.png") lru_data, mru_data = data.groupby('cache_implementation') import matplotlib.pyplot as plt line_lru = lru_data[1].plot(x='lru_cache_config.cache_size', y='PushLatency', label='LRU', marker='o', linestyle='none', alpha=.6,figsize=(16, 6)) mru_data[1].plot(x='mru_cache_config.cache_size', y='PushLatency', label='MRU', marker='o', linestyle='none', alpha=.6, ax=plt.gca(),figsize=(16, 6)) plt.ylabel("Cache Latency") plt.xlabel("Cache Size") plt.legend() plt.savefig("Cache Latency & Size - Main.png")
_____no_output_____
MIT
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
algorithm
def permute(values): n = len(values) # i: position of pivot for i in reversed(range(n - 1)): if values[i] < values[i + 1]: break else: # very last permutation values[:] = reversed(values[:]) return values # j: position of the next candidate for j in reversed(range(i, n)): if values[i] < values[j]: # swap pivot and reverse the tail values[i], values[j] = values[j], values[i] values[i + 1:] = reversed(values[i + 1:]) break return values
_____no_output_____
MIT
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
run
x = [4, 3, 2, 1] for i in range(25): print(permute(x)) permute(list('FADE'))
_____no_output_____
MIT
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
Fairness In this exercise we explore concepts and techniques for fairness in machine learning. Through this exercise one can: * Increase awareness of different types of biases that can occur * Explore feature data to identify potential sources of bias before training the model * Evaluate model performance by subgroup rather than in aggregate. Dataset: We use the Adult Census Income dataset commonly used in machine learning. The task is to predict whether a person makes over $50,000 a year while applying different methodologies to ensure fairness.
### setup %tensorflow_version 2.x from __future__ import absolute_import, division, print_function, unicode_literals ## title Import revelant modules and install Facets import numpy as np import pandas as pd import tensorflow as tf from tensorflow.keras import layers from matplotlib import pyplot as plt from matplotlib import rcParams import seaborn as sns # adjust the granularity of reporting. pd.options.display.max_rows = 10 pd.options.display.float_format = "{:.1f}".format from google.colab import widgets # code for facets from IPython.core.display import display, HTML import base64 !pip install facets-overview==1.0.0 from facets_overview.feature_statistics_generator import FeatureStatisticsGenerator ## load the adult data set. COLUMNS = ["age", "workclass", "fnlwgt", "education", "education_num", "marital_status", "occupation", "relationship", "race", "gender", "capital_gain", "capital_loss", "hours_per_week", "native_country", "income_bracket"] train_csv = tf.keras.utils.get_file('adult.data', 'https://download.mlcc.google.com/mledu-datasets/adult_census_train.csv') test_csv = tf.keras.utils.get_file('adult.data', 'https://download.mlcc.google.com/mledu-datasets/adult_census_test.csv') train_df = pd.read_csv(train_csv, names=COLUMNS, sep=r'\s*,\s*', engine='python', na_values="?") test_df = pd.read_csv(test_csv, names=COLUMNS, sep=r'\s*,\s*', skiprows=[0], engine='python', na_values="?")
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
Analysing the dataset with Facets We analyse the dataset to identify any peculiarities before we train the model. Here are some of the questions to ask before we go ahead with the training: * Are there missing feature values for a large number of observations? * Are there missing features that might affect other features? * Are there any unexpected feature values? * What signs of data skew do you see? We use Facets Overview to analyze the distribution of values across the Adult dataset.
## title Visualize the Data in Facets
fsg = FeatureStatisticsGenerator()
dataframes = [{'table': train_df, 'name': 'trainData'}]
censusProto = fsg.ProtoFromDataFrames(dataframes)
protostr = base64.b64encode(censusProto.SerializeToString()).decode("utf-8")

HTML_TEMPLATE = """<script src="https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js"></script>
<link rel="import" href="https://raw.githubusercontent.com/PAIR-code/facets/1.0.0/facets-dist/facets-jupyter.html">
<facets-overview id="elem"></facets-overview>
<script>
  document.querySelector("#elem").protoInput = "{protostr}";
</script>"""
html = HTML_TEMPLATE.format(protostr=protostr)
display(HTML(html))
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
Task 1

We can perform the fairness analysis on the Facets visualization above: click the Show Raw Data button on the histograms and categorical features to see the distribution of values, and from that try to determine whether there are any missing features, missing features that can affect other features, unexpected feature values, or skews in the dataset.

Going further, using our knowledge of the Adult dataset we can now construct a neural network to predict income using TensorFlow's Keras API.
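Before moving on to the model, the skew questions can also be checked numerically. A minimal sketch, assuming `train_df` from the setup cell and the column names from the COLUMNS list above:

# Share of each gender and race category in the training data
print(train_df['gender'].value_counts(normalize=True))
print(train_df['race'].value_counts(normalize=True))

# Label balance within each gender subgroup
print(train_df.groupby('gender')['income_bracket'].value_counts(normalize=True))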
## first convert the pandas data frame of the adult datset to tensor flow arrays. def pandas_to_numpy(data): # Drop empty rows. data = data.dropna(how="any", axis=0) # Separate DataFrame into two Numpy arrays labels = np.array(data['income_bracket'] == ">50K") features = data.drop('income_bracket', axis=1) features = {name:np.array(value) for name, value in features.items()} return features, labels ## map the data to columns that maps to the tensor flow using tf.feature_columns ##title Create categorical feature columns # we use categorical_column_with_hash_bucket() for the occupation and native_country columns to help map # each feature string into an integer ID. # since we dont know the full range of values for this columns. occupation = tf.feature_column.categorical_column_with_hash_bucket( "occupation", hash_bucket_size=1000) native_country = tf.feature_column.categorical_column_with_hash_bucket( "native_country", hash_bucket_size=1000) # since we know what the possible values for the other columns # we can be more explicit and use categorical_column_with_vocabulary_list() gender = tf.feature_column.categorical_column_with_vocabulary_list( "gender", ["Female", "Male"]) race = tf.feature_column.categorical_column_with_vocabulary_list( "race", [ "White", "Asian-Pac-Islander", "Amer-Indian-Eskimo", "Other", "Black" ]) education = tf.feature_column.categorical_column_with_vocabulary_list( "education", [ "Bachelors", "HS-grad", "11th", "Masters", "9th", "Some-college", "Assoc-acdm", "Assoc-voc", "7th-8th", "Doctorate", "Prof-school", "5th-6th", "10th", "1st-4th", "Preschool", "12th" ]) marital_status = tf.feature_column.categorical_column_with_vocabulary_list( "marital_status", [ "Married-civ-spouse", "Divorced", "Married-spouse-absent", "Never-married", "Separated", "Married-AF-spouse", "Widowed" ]) relationship = tf.feature_column.categorical_column_with_vocabulary_list( "relationship", [ "Husband", "Not-in-family", "Wife", "Own-child", "Unmarried", "Other-relative" ]) workclass = tf.feature_column.categorical_column_with_vocabulary_list( "workclass", [ "Self-emp-not-inc", "Private", "State-gov", "Federal-gov", "Local-gov", "?", "Self-emp-inc", "Without-pay", "Never-worked" ]) # title Create numeric feature columns # For Numeric features, we can just call on feature_column.numeric_column() # to use its raw value instead of having to create a map between value and ID. age = tf.feature_column.numeric_column("age") fnlwgt = tf.feature_column.numeric_column("fnlwgt") education_num = tf.feature_column.numeric_column("education_num") capital_gain = tf.feature_column.numeric_column("capital_gain") capital_loss = tf.feature_column.numeric_column("capital_loss") hours_per_week = tf.feature_column.numeric_column("hours_per_week") ## make age a categorical feature age_buckets = tf.feature_column.bucketized_column( age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) # Define the model features. # we define the gender as a subgroup and can be used later for special handling. # subgroup is a group of individuals who share a common set of characteristics. # List of variables, with special handling for gender subgroup. variables = [native_country, education, occupation, workclass, relationship, age_buckets] subgroup_variables = [gender] feature_columns = variables + subgroup_variables
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
We can now train a neural network based on the features we derived earlier; we use a feed-forward neural network with two hidden layers. We first convert our high-dimensional categorical features into a real-valued vector, which we call an embedding vector. We use 'gender' for filtering the test set for subgroup evaluations.
deep_columns = [ tf.feature_column.indicator_column(workclass), tf.feature_column.indicator_column(education), tf.feature_column.indicator_column(age_buckets), tf.feature_column.indicator_column(relationship), tf.feature_column.embedding_column(native_country, dimension=8), tf.feature_column.embedding_column(occupation, dimension=8), ] ## define Deep Neural Net Model # Parameters from form fill-ins HIDDEN_UNITS_LAYER_01 = 128 #@param HIDDEN_UNITS_LAYER_02 = 64 #@param LEARNING_RATE = 0.1 #@param L1_REGULARIZATION_STRENGTH = 0.001 #@param L2_REGULARIZATION_STRENGTH = 0.001 #@param RANDOM_SEED = 512 tf.random.set_seed(RANDOM_SEED) # List of built-in metrics that we'll need to evaluate performance. METRICS = [ tf.keras.metrics.TruePositives(name='tp'), tf.keras.metrics.FalsePositives(name='fp'), tf.keras.metrics.TrueNegatives(name='tn'), tf.keras.metrics.FalseNegatives(name='fn'), tf.keras.metrics.BinaryAccuracy(name='accuracy'), tf.keras.metrics.Precision(name='precision'), tf.keras.metrics.Recall(name='recall'), tf.keras.metrics.AUC(name='auc'), ] regularizer = tf.keras.regularizers.l1_l2( l1=L1_REGULARIZATION_STRENGTH, l2=L2_REGULARIZATION_STRENGTH) model = tf.keras.Sequential([ layers.DenseFeatures(deep_columns), layers.Dense( HIDDEN_UNITS_LAYER_01, activation='relu', kernel_regularizer=regularizer), layers.Dense( HIDDEN_UNITS_LAYER_02, activation='relu', kernel_regularizer=regularizer), layers.Dense( 1, activation='sigmoid', kernel_regularizer=regularizer) ]) model.compile(optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE), loss=tf.keras.losses.BinaryCrossentropy(), metrics=METRICS) ## title Fit Deep Neural Net Model to the Adult Training Dataset EPOCHS = 10 BATCH_SIZE = 1000 features, labels = pandas_to_numpy(train_df) model.fit(x=features, y=labels, epochs=EPOCHS, batch_size=BATCH_SIZE) ## Evaluate Deep Neural Net Performance features, labels = pandas_to_numpy(test_df) model.evaluate(x=features, y=labels);
_____no_output_____
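Since 'gender' was set aside as a subgroup variable, the same evaluation can be repeated on gender-filtered slices of the test set. A minimal sketch, assuming the `model`, `test_df`, and `pandas_to_numpy` defined above; the metrics reported follow the METRICS list:

# Evaluate the trained model separately for each gender subgroup
for subgroup in ['Female', 'Male']:
    subgroup_df = test_df[test_df['gender'] == subgroup]
    subgroup_features, subgroup_labels = pandas_to_numpy(subgroup_df)
    print(subgroup)
    model.evaluate(x=subgroup_features, y=subgroup_labels)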
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
Confusion Matrix

A confusion matrix is a grid that evaluates a model's performance by comparing predictions against the ground truth, summarizing how often the model made the correct prediction and how often it made the wrong prediction.

Let's start by creating a binary confusion matrix for our income-prediction model; it is binary because our label (income_bracket) has only two possible values (<50K or >50K). We'll define an income of >50K as our positive label, and an income of <50K as our negative label. The matrix represents four possible states:

* true positive: Model predicts >50K, and that is the ground truth.
* true negative: Model predicts <50K, and that is the ground truth.
* false positive: Model predicts >50K, and that contradicts reality.
* false negative: Model predicts <50K, and that contradicts reality.
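The four counts themselves can be computed before plotting. A minimal sketch, assuming the trained `model`, `test_df`, and `pandas_to_numpy` from above; the 0.5 threshold on the sigmoid output is an assumption, not something fixed by the notebook:

from sklearn.metrics import confusion_matrix

test_features, test_labels = pandas_to_numpy(test_df)
# Binarize the sigmoid outputs at an (assumed) 0.5 threshold
test_predictions = (model.predict(test_features).flatten() > 0.5).astype(int)

# labels=[1, 0] orders the matrix as [[TP, FN], [FP, TN]],
# matching the layout used by the plotting helper in the next cell
binary_cm = confusion_matrix(test_labels.astype(int), test_predictions, labels=[1, 0])
print(binary_cm)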
## Function to Visualize and plot the Binary Confusion Matrix
def plot_confusion_matrix(confusion_matrix, class_names, subgroup, figsize=(8, 6)):
    df_cm = pd.DataFrame(
        confusion_matrix, index=class_names, columns=class_names,
    )
    rcParams.update({
        'font.family': 'sans-serif',
        'font.sans-serif': ['Liberation Sans'],
    })
    sns.set_context("notebook", font_scale=1.25)
    fig = plt.figure(figsize=figsize)
    plt.title('Confusion Matrix for Performance Across ' + subgroup)

    # Combine the instance (numerical value) with its description
    strings = np.asarray([['True Positives', 'False Negatives'],
                          ['False Positives', 'True Negatives']])
    labels = (np.asarray(
        ["{0:g}\n{1}".format(value, string) for string, value in zip(
            strings.flatten(), confusion_matrix.flatten())])).reshape(2, 2)

    heatmap = sns.heatmap(df_cm, annot=labels, fmt="", linewidths=2.0,
                          cmap=sns.color_palette("GnBu_d"));
    heatmap.yaxis.set_ticklabels(
        heatmap.yaxis.get_ticklabels(), rotation=0, ha='right')
    heatmap.xaxis.set_ticklabels(
        heatmap.xaxis.get_ticklabels(), rotation=45, ha='right')
    plt.ylabel('References')
    plt.xlabel('Predictions')
    return fig
_____no_output_____
MIT
fairness.ipynb
ravikirankb/machine-learning-tutorial
# restart (or reset) your virtual machine
#!kill -9 -1
_____no_output_____
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
[Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection)
!git clone https://github.com/tensorflow/models.git
Cloning into 'models'... remote: Enumerating objects: 18, done. remote: Counting objects: 100% (18/18), done. remote: Compressing objects: 100% (17/17), done. remote: Total 30176 (delta 7), reused 11 (delta 1), pack-reused 30158 Receiving objects: 100% (30176/30176), 510.33 MiB | 15.16 MiB/s, done. Resolving deltas: 100% (18883/18883), done. Checking out files: 100% (3061/3061), done.
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
COCO API installation
!git clone https://github.com/cocodataset/cocoapi.git
%cd cocoapi/PythonAPI
!make
!cp -r pycocotools /content/models/research/
Cloning into 'cocoapi'... remote: Enumerating objects: 959, done. remote: Total 959 (delta 0), reused 0 (delta 0), pack-reused 959 Receiving objects: 100% (959/959), 11.69 MiB | 6.35 MiB/s, done. Resolving deltas: 100% (571/571), done. /content/cocoapi/PythonAPI python setup.py build_ext --inplace running build_ext cythoning pycocotools/_mask.pyx to pycocotools/_mask.c /usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /content/cocoapi/PythonAPI/pycocotools/_mask.pyx tree = Parsing.p_module(s, pxd, full_module_name) building 'pycocotools._mask' extension creating build creating build/common creating build/temp.linux-x86_64-3.6 creating build/temp.linux-x86_64-3.6/pycocotools x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I../common -I/usr/include/python3.6m -c ../common/maskApi.c -o build/temp.linux-x86_64-3.6/../common/maskApi.o -Wno-cpp -Wno-unused-function -std=c99 ../common/maskApi.c: In function ‘rleDecode’: ../common/maskApi.c:46:7: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation] for( k=0; k<R[i].cnts[j]; k++ ) *(M++)=v; v=!v; }} ^~~ ../common/maskApi.c:46:49: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’ for( k=0; k<R[i].cnts[j]; k++ ) *(M++)=v; v=!v; }} ^ ../common/maskApi.c: In function ‘rleFrPoly’: ../common/maskApi.c:166:3: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation] for(j=0; j<k; j++) x[j]=(int)(scale*xy[j*2+0]+.5); x[k]=x[0]; ^~~ ../common/maskApi.c:166:54: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’ for(j=0; j<k; j++) x[j]=(int)(scale*xy[j*2+0]+.5); x[k]=x[0]; ^ ../common/maskApi.c:167:3: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation] for(j=0; j<k; j++) y[j]=(int)(scale*xy[j*2+1]+.5); y[k]=y[0]; ^~~ ../common/maskApi.c:167:54: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’ for(j=0; j<k; j++) y[j]=(int)(scale*xy[j*2+1]+.5); y[k]=y[0]; ^ ../common/maskApi.c: In function ‘rleToString’: ../common/maskApi.c:212:7: warning: this ‘if’ clause does not guard... [-Wmisleading-indentation] if(more) c |= 0x20; c+=48; s[p++]=c; ^~ ../common/maskApi.c:212:27: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘if’ if(more) c |= 0x20; c+=48; s[p++]=c; ^ ../common/maskApi.c: In function ‘rleFrString’: ../common/maskApi.c:220:3: warning: this ‘while’ clause does not guard... [-Wmisleading-indentation] while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0; ^~~~~ ../common/maskApi.c:220:22: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘while’ while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0; ^~~~ ../common/maskApi.c:228:5: warning: this ‘if’ clause does not guard... 
[-Wmisleading-indentation] if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x; ^~ ../common/maskApi.c:228:34: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘if’ if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x; ^~~~ ../common/maskApi.c: In function ‘rleToBbox’: ../common/maskApi.c:141:31: warning: ‘xp’ may be used uninitialized in this function [-Wmaybe-uninitialized] if(j%2==0) xp=x; else if(xp<x) { ys=0; ye=h-1; } ^ x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I../common -I/usr/include/python3.6m -c pycocotools/_mask.c -o build/temp.linux-x86_64-3.6/pycocotools/_mask.o -Wno-cpp -Wno-unused-function -std=c99 creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/pycocotools x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/../common/maskApi.o build/temp.linux-x86_64-3.6/pycocotools/_mask.o -o build/lib.linux-x86_64-3.6/pycocotools/_mask.cpython-36m-x86_64-linux-gnu.so copying build/lib.linux-x86_64-3.6/pycocotools/_mask.cpython-36m-x86_64-linux-gnu.so -> pycocotools rm -rf build
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Protobuf Compilation
%cd /content/models/research/
!protoc object_detection/protos/*.proto --python_out=.
/content/models/research
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Add Libraries to PYTHONPATH
%cd /content/models/research/
%env PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection
%env
/content/models/research env: PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Testing the Installation
!python object_detection/builders/model_builder_test.py
%cd /content/models/research/object_detection
/content/models/research/object_detection
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
[Tensorflow Face Detector](https://github.com/yeephycho/tensorflow-face-detection)
%cd /content !git clone https://github.com/yeephycho/tensorflow-face-detection.git %cd tensorflow-face-detection !wget https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg filename = 'grace_hopper.jpg' #!python inference_usbCam_face.py grace_hopper.jpg import sys import time import numpy as np import tensorflow as tf import cv2 from utils import label_map_util from utils import visualization_utils_color as vis_util # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = './model/frozen_inference_graph_face.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = './protos/face_label_map.pbtxt' NUM_CLASSES = 2 label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) class TensoflowFaceDector(object): def __init__(self, PATH_TO_CKPT): """Tensorflow detector """ self.detection_graph = tf.Graph() with self.detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') with self.detection_graph.as_default(): config = tf.ConfigProto() config.gpu_options.allow_growth = True self.sess = tf.Session(graph=self.detection_graph, config=config) self.windowNotSet = True def run(self, image): """image: bgr image return (boxes, scores, classes, num_detections) """ image_np = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0') # Each score represent how level of confidence for each of the objects. # Score is shown on the result image, together with the class label. scores = self.detection_graph.get_tensor_by_name('detection_scores:0') classes = self.detection_graph.get_tensor_by_name('detection_classes:0') num_detections = self.detection_graph.get_tensor_by_name('num_detections:0') # Actual detection. start_time = time.time() (boxes, scores, classes, num_detections) = self.sess.run( [boxes, scores, classes, num_detections], feed_dict={image_tensor: image_np_expanded}) elapsed_time = time.time() - start_time print('inference time cost: {}'.format(elapsed_time)) return (boxes, scores, classes, num_detections) # This is needed to display the images. %matplotlib inline tDetector = TensoflowFaceDector(PATH_TO_CKPT) original = cv2.imread(filename) image = cv2.cvtColor(original, cv2.COLOR_BGR2RGB) (boxes, scores, classes, num_detections) = tDetector.run(image) vis_util.visualize_boxes_and_labels_on_image_array( image, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=4) from matplotlib import pyplot as plt plt.imshow(image)
inference time cost: 2.3050696849823
Apache-2.0
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
Writing a Molecular Monte Carlo Simulation

Starting today, make sure you have the functions

1. `calculate_LJ` - written in class
1. `read_xyz` - provided in class
1. `calculate_total_energy` - modified version provided in this notebook, written for homework, which has a cutoff
1. `calculate_distance` - should be the version written for homework which accounts for periodic boundaries
1. `calculate_tail_correction` - written for homework
# add imports here import math import random def calculate_total_energy(coordinates, box_length, cutoff): """ Calculate the total energy of a set of particles using the Lennard Jones potential. Parameters ---------- coordinates : list A nested list containing the x, y,z coordinate for each particle box_length : float The length of the box. Assumes cubic box. cutoff : float The cutoff length Returns ------- total_energy : float The total energy of the set of coordinates. """ total_energy = 0 num_atoms = len(coordinates) for i in range(num_atoms): for j in range(i+1, num_atoms): # Calculate the distance between the particles - exercise. dist_ij = calculate_distance(coordinates[i], coordinates[j], box_length) if dist_ij < cutoff: # Calculate the pairwise LJ energy LJ_ij = calculate_LJ(dist_ij) # Add to total energy. total_energy += LJ_ij return total_energy def read_xyz(filepath): """ Reads coordinates from an xyz file. Parameters ---------- filepath : str The path to the xyz file to be processed. Returns ------- atomic_coordinates : list A two dimensional list containing atomic coordinates """ with open(filepath) as f: box_length = float(f.readline().split()[0]) num_atoms = float(f.readline()) coordinates = f.readlines() atomic_coordinates = [] for atom in coordinates: split_atoms = atom.split() float_coords = [] # We split this way to get rid of the atom label. for coord in split_atoms[1:]: float_coords.append(float(coord)) atomic_coordinates.append(float_coords) return atomic_coordinates, box_length def calculate_LJ(r_ij): """ The LJ interaction energy between two particles. Computes the pairwise Lennard Jones interaction energy based on the separation distance in reduced units. Parameters ---------- r_ij : float The distance between the particles in reduced units. Returns ------- pairwise_energy : float The pairwise Lennard Jones interaction energy in reduced units. Examples -------- >>> calculate_LJ(1) 0 """ r6_term = math.pow(1/r_ij, 6) r12_term = math.pow(r6_term, 2) pairwise_energy = 4 * (r12_term - r6_term) return pairwise_energy def calculate_distance(coord1, coord2, box_length=None): """ Calculate the distance between two points. When box_length is set, the minimum image convention is used to calculate the distance between the points. Parameters ---------- coord1, coord2 : list The coordinates of the points, [x, y, z] box_length : float, optional The box length Returns ------- distance : float The distance between the two points accounting for periodic boundaries """ distance = 0 for i in range(3): hold_dist = abs(coord2[i] - coord1[i]) if (box_length): if hold_dist > box_length/2: hold_dist = hold_dist - (box_length * round(hold_dist/box_length)) distance += math.pow(hold_dist, 2) return math.sqrt(distance) ## Add your group's tail correction function def calculate_tail_correction(num_particles, box_length, cutoff): """ The tail correction associated with using a cutoff radius. Computes the tail correction based on a cutoff radius used in the LJ energy calculation in reduced units. Parameters ---------- num_particles : int The number of particles in the system. box_length : int Size of the box length of the system, used to calculate volume. cutoff : int Cutoff distance. Returns ------- tail_correction : float The tail correction associated with using the cutoff. """ brackets = (1/3*math.pow(1/cutoff,9)) - math.pow(1/cutoff,3) volume = box_length**3 constant = ((8*math.pi*(num_particles**2))/(3*volume)) tail_correction = constant * brackets return tail_correction
_____no_output_____
BSD-3-Clause
day3.ipynb
msse-2021-bootcamp/team2-project
The Metropolis Criterion

$$ P_{acc}(m \rightarrow n) = \text{min} \left[ 1, e^{-\beta \Delta U} \right] $$
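As a quick worked check of the criterion (with $\beta = 1$): any move that lowers the energy ($\Delta U \le 0$) is always accepted, a move with $\Delta U = 1$ is accepted with probability $e^{-1} \approx 0.37$, and a move with $\Delta U = 4$ only with probability $e^{-4} \approx 0.02$, so large uphill moves become increasingly rare while still remaining possible.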
def accept_or_reject(delta_U, beta): """ Accept or reject a move based on the Metropolis criterion. Parameters ---------- detlta_U : float The change in energy for moving system from state m to n. beta : float 1/temperature Returns ------- boolean Whether the move is accepted. """ if delta_U <= 0.0: accept = True else: #Generate a random number on (0,1) random_number = random.random() p_acc = math.exp(-beta*delta_U) if random_number < p_acc: accept = True else: accept = False return accept # Sanity checks - test cases delta_energy = -1 beta = 1 accepted = accept_or_reject(delta_energy, beta) assert accepted # Sanity checks - test cases delta_energy = 0 beta = 1 accepted = accept_or_reject(delta_energy, beta) assert accepted # To test function with random numbers # can set random seed #To set seed random.seed(0) random.random() delta_energy = 1 beta = 1 random.seed(0) accepted = accept_or_reject(delta_energy, beta) assert accepted is False #Clear seed random.seed() def calculate_pair_energy(coordinates, i_particle, box_length, cutoff): """ Calculate the interaction energy of a particle with its environment (all other particles in the system) Parameters ---------- coordinates : list The coordinates for all the particles in the system. i_particle : int The particle number for which to calculate the energy. cutoff : float The simulation cutoff. Beyond this distance, interactions are not calculated. box_length : float The length of the box for periodic bounds Returns ------- e_total : float The pairwise interaction energy of the ith particles with all other particles in the system """ e_total = 0.0 #creates a list of the coordinates for the i_particle i_position = coordinates[i_particle] num_atoms = len(coordinates) for j_particle in range(num_atoms): if i_particle != j_particle: #creates a list of coordinates for the j_particle j_position = coordinates[j_particle] rij = calculate_distance(i_position, j_position, box_length) if rij < cutoff: e_pair = calculate_LJ(rij) e_total += e_pair return e_total ## Sanity checks test_coords = [[0, 0, 0], [0, 0, 2**(1/6)], [0, 0, 2*2**(1/6)]] # What do you expect the result to be for particle index 1 (use cutoff of 3)? assert calculate_pair_energy(test_coords, 1, 10, 3) == -2 # What do you expect the result to be for particle index 0 (use cutoff of 2)? assert calculate_pair_energy(test_coords, 0, 10, 2) == -1 assert calculate_pair_energy(test_coords, 0, 10, 3) == calculate_pair_energy(test_coords, 2, 10, 3)
_____no_output_____
BSD-3-Clause
day3.ipynb
msse-2021-bootcamp/team2-project
Monte Carlo Loop
# Read or generate initial coordinates
coordinates, box_length = read_xyz('lj_sample_configurations/lj_sample_config_periodic1.txt')

# Set simulation parameters
reduced_temperature = 0.9
num_steps = 5000
max_displacement = 0.1
cutoff = 3

# how often to print an update
freq = 1000

# Calculated quantities
beta = 1 / reduced_temperature
num_particles = len(coordinates)

# Energy calculations
total_energy = calculate_total_energy(coordinates, box_length, cutoff)
print(total_energy)
total_correction = calculate_tail_correction(num_particles, box_length, cutoff)
print(total_correction)
total_energy += total_correction

for step in range(num_steps):
    # 1. Randomly pick one of the particles.
    random_particle = random.randrange(num_particles)

    # 2. Calculate the interaction energy of the selected particle with the system.
    current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)

    # 3. Generate a random x, y, z displacement.
    x_rand = random.uniform(-max_displacement, max_displacement)
    y_rand = random.uniform(-max_displacement, max_displacement)
    z_rand = random.uniform(-max_displacement, max_displacement)

    # 4. Modify the coordinate of Nth particle by generated displacements.
    coordinates[random_particle][0] += x_rand
    coordinates[random_particle][1] += y_rand
    coordinates[random_particle][2] += z_rand

    # 5. Calculate the interaction energy of the moved particle with the system and store this value.
    proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)
    delta_energy = proposed_energy - current_energy

    # 6. Calculate if we accept the move based on energy difference.
    accept = accept_or_reject(delta_energy, beta)

    # 7. If accepted, move the particle.
    if accept:
        total_energy += delta_energy
    else:
        # Move not accepted, roll back coordinates
        coordinates[random_particle][0] -= x_rand
        coordinates[random_particle][1] -= y_rand
        coordinates[random_particle][2] -= z_rand

    # 8. Print the energy if step is a multiple of freq.
    if step % freq == 0:
        print(step, total_energy / num_particles)
-4351.540194543858 -198.4888837441566 0 -5.6871567358709845 1000 -5.651180182170634 2000 -5.637020769853117 3000 -5.63623029990943 4000 -5.62463482708468
BSD-3-Clause
day3.ipynb
msse-2021-bootcamp/team2-project
Data Acquisition
from glob import glob
import json

arxiv_files = sorted(glob('../data/arxiv/*'))
scirate_files = sorted(glob('../data/scirate/*'))

arxiv_data = []
for file in arxiv_files:
    with open(file, 'r') as f:
        arxiv_data.append(json.load(f))

print(len(arxiv_data))

scirate_data = []
for file in scirate_files:
    with open(file, 'r') as f:
        scirate_data.append(json.load(f))

print(len(scirate_data))

arxiv_data[-1]['date']

# 2018-03-30 Arxiv top
arxiv_data[-1]['papers'][0]

# 2018-03-30 Scirate top
scirate_data[-1]['papers'][0]
_____no_output_____
MIT
playground/eda.ipynb
tukai21/arxiv-ranking
EDA

Entry ID: paper name (DOI?)

We can create an arbitrary paper id that corresponds to each paper title, authors, and DOI.

Possible features:

- Arxiv order
- Scirate order
- Paper length (pages)
- Title length (words)
- Total citations of the authors (or first author? last author?)
- Number of authors
- Bag of Words of title (see the sketch after the next cell)
- Bag of Words of abstract
import re
from difflib import SequenceMatcher

import numpy as np
import pandas as pd

# obtain features from both Arxiv and Scirate paper lists
index = []
title = []
authors = []
num_authors = []
title_length = []
arxiv_order = []
submit_time = []
submit_weekday = []
paper_size = []
num_versions = []

for res in arxiv_data:
    date = res['date']
    papers = res['papers']
    for paper in papers:
        # create arbitrary paper id - currently, it is "date + Arxiv order"
        if paper['order'] < 10:
            idx = '_000' + str(paper['order'])
        elif 10 <= paper['order'] < 100:
            idx = '_00' + str(paper['order'])
        elif 100 <= paper['order'] < 1000:
            idx = '_0' + str(paper['order'])
        else:
            idx = '_' + str(paper['order'])
        index.append(date + idx)
        title.append(paper['title'])
        authors.append(paper['authors'])
        num_authors.append(len(paper['authors']))
        title_length.append(len(paper['title']))
        arxiv_order.append(paper['order'])
        submit_time.append(paper['submit_time'])
        submit_weekday.append(paper['submit_weekday'])
        paper_size.append(int(re.findall(r'\d+', paper['size'])[0]))
        num_versions.append(paper['num_versions'])

len(index)

# Scirate rank - string matching to find index of each paper in Arxiv list
### This process is pretty slow - needs to be refactored ###
scirate_rank = [-1 for _ in range(len(index))]
scite_score = [-1 for _ in range(len(index))]

for res in scirate_data:
    papers = res['papers']
    for paper in papers:
        title_sci = paper['title']
        try:
            idx = title.index(title_sci)
        except ValueError:
            # if there is no exact match, use difflib SequenceMatcher for title matching
            str_match = np.array([SequenceMatcher(a=title_sci, b=title_arx).ratio()
                                  for title_arx in title])
            idx = np.argmax(str_match)
        scirate_rank[idx] = paper['rank']
        scite_score[idx] = paper['scite_count']

# columns for pandas DataFrame
columns = ['title', 'authors', 'num_authors', 'title_length', 'arxiv_order',
           'submit_time', 'submit_weekday', 'paper_size', 'num_versions',
           'scirate_rank', 'scite_score']

# this is too dirty...
title = np.array(title).reshape(-1, 1)
authors = np.array(authors).reshape(-1, 1)
num_authors = np.array(num_authors).reshape(-1, 1)
title_length = np.array(title_length).reshape(-1, 1)
arxiv_order = np.array(arxiv_order).reshape(-1, 1)
submit_time = np.array(submit_time).reshape(-1, 1)
submit_weekday = np.array(submit_weekday).reshape(-1, 1)
paper_size = np.array(paper_size).reshape(-1, 1)
num_versions = np.array(num_versions).reshape(-1, 1)
scirate_rank = np.array(scirate_rank).reshape(-1, 1)
scite_score = np.array(scite_score).reshape(-1, 1)

data = np.concatenate([
    title, authors, num_authors, title_length, arxiv_order, submit_time,
    submit_weekday, paper_size, num_versions, scirate_rank, scite_score
], axis=1)

df = pd.DataFrame(data=data, columns=columns, index=index)
len(df)
df.head()

df[['arxiv_order', 'scite_score', 'scirate_rank']].astype(float).corr(method='spearman')
_____no_output_____
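For the Bag-of-Words features listed above, here is a minimal sketch with scikit-learn's CountVectorizer applied to the title column of the assembled df; the vocabulary cap and stop-word choice are arbitrary illustrations, not settings from the original analysis:

from sklearn.feature_extraction.text import CountVectorizer

# Bag of Words over paper titles; cap the vocabulary to keep the matrix manageable
vectorizer = CountVectorizer(stop_words='english', max_features=1000)
title_bow = vectorizer.fit_transform(df['title'])
print(title_bow.shape)  # (num_papers, vocabulary_size)

The same approach could later be repeated on abstracts once they are scraped.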
MIT
playground/eda.ipynb
tukai21/arxiv-ranking
Portfolio Exercise: Starbucks

Background Information

The dataset you will be provided in this portfolio exercise was originally used as a take-home assignment provided by Starbucks for their job candidates. The data for this exercise consists of about 120,000 data points split in a 2:1 ratio among training and test files. In the experiment simulated by the data, an advertising promotion was tested to see if it would bring more customers to purchase a specific product priced at $10. Since it costs the company $0.15 to send out each promotion, it would be best to limit that promotion only to those that are most receptive to it. Each data point includes one column indicating whether or not an individual was sent a promotion for the product, and one column indicating whether or not that individual eventually purchased that product. Each individual also has seven additional features associated with them, which are provided abstractly as V1-V7.

Optimization Strategy

Your task is to use the training data to understand what patterns in V1-V7 indicate that a promotion should be provided to a user. Specifically, your goal is to maximize the following metrics:

* **Incremental Response Rate (IRR)**

IRR depicts how many more customers purchased the product with the promotion, as compared to if they didn't receive the promotion. Mathematically, it's the ratio of the number of purchasers in the promotional group to the total number of customers in that group (_treatment_), minus the ratio of the number of purchasers in the non-promotional group to the total number of customers in that group (_control_).

$$ IRR = \frac{purch_{treat}}{cust_{treat}} - \frac{purch_{ctrl}}{cust_{ctrl}} $$

* **Net Incremental Revenue (NIR)**

NIR depicts how much is made (or lost) by sending out the promotion. Mathematically, this is 10 times the total number of purchasers that received the promotion, minus 0.15 times the number of promotions sent out, minus 10 times the number of purchasers who were not given the promotion.

$$ NIR = (10 \cdot purch_{treat} - 0.15 \cdot cust_{treat}) - 10 \cdot purch_{ctrl} $$

For a full description of what Starbucks provides to candidates, see the [instructions available here](https://drive.google.com/open?id=18klca9Sef1Rs6q8DW4l7o349r8B70qXM).

Below you can find the training data provided. Explore the data and different optimization strategies.

How To Test Your Strategy?

When you feel like you have an optimization strategy, complete the `promotion_strategy` function to pass to the `test_results` function. From past data, we know there are four possible outcomes.

Table of actual promotion vs. predicted promotion customers:

| Predicted \ Actual | Yes | No  |
|--------------------|-----|-----|
| Yes                | I   | II  |
| No                 | III | IV  |

The metrics are only being compared for the individuals we predict should obtain the promotion – that is, quadrants I and II. Since the first set of individuals that receive the promotion (in the training set) receive it randomly, we can expect that quadrants I and II will have approximately equivalent participants. Comparing quadrant I to II then gives an idea of how well your promotion strategy will work in the future.

Get started by reading in the data below. See how each variable or combination of variables along with a promotion influences the chance of purchasing. When you feel like you have a strategy for who should receive a promotion, test your strategy against the test dataset used in the final `test_results` function.
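To make the two metrics concrete, here is a minimal sketch that computes the baseline IRR and NIR over the whole training file (i.e., as if everyone in the treatment group had been targeted). It assumes `training.csv` has a `Promotion` column with 'Yes'/'No' values and a `purchase` column with 0/1 values, which should be checked against the actual file:

import pandas as pd

train_data = pd.read_csv('./training.csv')

treat = train_data[train_data['Promotion'] == 'Yes']
ctrl = train_data[train_data['Promotion'] == 'No']

purch_treat, cust_treat = treat['purchase'].sum(), treat.shape[0]
purch_ctrl, cust_ctrl = ctrl['purchase'].sum(), ctrl.shape[0]

irr = purch_treat / cust_treat - purch_ctrl / cust_ctrl
nir = (10 * purch_treat - 0.15 * cust_treat) - 10 * purch_ctrl

print('IRR: {:.4f}, NIR: {:.2f}'.format(irr, nir))

A targeting strategy improves on this baseline by restricting the promotion to the subset of individuals for whom the uplift outweighs the $0.15 cost.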
# load in packages
from itertools import combinations
from test_results import test_results, score

import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sk

import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline

# load in the data
train_data = pd.read_csv('./training.csv')
train_data.head()

# Cells for you to work and document as necessary -
# definitely feel free to add more cells as you need

def promotion_strategy(df):
    '''
    INPUT
    df - a dataframe with *only* the columns V1 - V7 (same as train_data)

    OUTPUT
    promotion_df - np.array with the values 'Yes' or 'No' related to whether or not an
                   individual should receive a promotion
                   should be the length of df.shape[0]

    Ex:
    INPUT: df

    V1  V2    V3  V4  V5  V6  V7
    2   30  -1.1   1   1   3   2
    3   32  -0.6   2   3   2   2
    2   30  0.13   1   1   4   2

    OUTPUT: promotion

    array(['Yes', 'Yes', 'No'])
    indicating the first two users would receive the promotion and the last should not.
    '''

    return promotion

# This will test your results, and provide you back some information
# on how well your promotion_strategy will work in practice
test_results(promotion_strategy)
_____no_output_____
MIT
Project/Starbucks/.ipynb_checkpoints/Starbucks-checkpoint.ipynb
kundan7kumar/Machine-Learning
Linux commands
!ls
!ls -l
!pwd  # current working directory
!ls -l ./sample_data
!ls -l ./
!ls -l ./Wholesale_customers_data.csv

import pandas as pd

df = pd.read_csv('./Wholesale_customers_data.csv')
df.info()

X = df.iloc[:, :]
X.shape

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
K-means clustering

> max_iter : int, default=300
> Maximum number of iterations of the k-means algorithm for a single run.
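As a small illustration of the parameter quoted above, `max_iter` (along with `n_init` and `random_state`, added here only for reproducibility) can be passed directly to the estimator; the values are arbitrary and the next cell runs the same clustering with defaults:

from sklearn import cluster

# Same clustering as below, but with the iteration cap made explicit
kmeans = cluster.KMeans(n_clusters=5, max_iter=300, n_init=10, random_state=0)
kmeans.fit(X)
print(kmeans.n_iter_)  # number of iterations actually run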
from sklearn import cluster

kmeans = cluster.KMeans(n_clusters=5)
kmeans.fit(X)
kmeans.labels_
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
Each entry shows which cluster the first row, the second row, and so on were labeled with.

> We attach these labels to the df.
df['label'] = kmeans.labels_
df.head()
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
For a report, a two-dimensional view is best. Always visualize from an X-Y perspective.
df.plot(kind='scatter', x='Grocery', y='Frozen', c='label', cmap='Set1', figsize=(10, 10))
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning