Discriminator The discriminator takes as input ($x^*$) the 784-dimensional output of the generator or a real MNIST image, reshapes the input to a 28 x 28 image and outputs the estimated probability that the input image is a real MNIST image. The network is modeled using strided convolutions with Leaky ReLU activations, except for the last layer, where we use a sigmoid activation to ensure that the discriminator output lies in the interval [0, 1].
def convolutional_discriminator(x): with default_options(init=C.normal(scale=0.02)): dfc_dim = 1024 df_dim = 64 print('Discriminator convolution input shape', x.shape) x = C.reshape(x, (1, img_h, img_w)) h0 = Convolution2D(dkernel, 1, strides=dstride)(x) h0 = bn_with_leaky_relu(h0, leak=0.2) print('h0 shape :', h0.shape) h1 = Convolution2D(dkernel, df_dim, strides=dstride)(h0) h1 = bn_with_leaky_relu(h1, leak=0.2) print('h1 shape :', h1.shape) h2 = Dense(dfc_dim, activation=None)(h1) h2 = bn_with_leaky_relu(h2, leak=0.2) print('h2 shape :', h2.shape) h3 = Dense(1, activation=C.sigmoid)(h2) print('h3 shape :', h3.shape) return h3
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
We use a minibatch size of 128 and a fixed learning rate of 0.0002 for training. In the fast mode (isFast = True) we verify only functional correctness with 5000 iterations. Note: In the slow mode, the results look a lot better, but training takes on the order of 10 minutes depending on your hardware. In general, the more minibatches one trains on, the better the fidelity of the generated images.
# training config
minibatch_size = 128
num_minibatches = 5000 if isFast else 10000
lr = 0.0002
momentum = 0.5  # equivalent to beta1
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
Build the graph The rest of the computational graph is mostly responsible for coordinating the training algorithms and parameter updates, which is particularly tricky with GANs for a couple of reasons. GANs are sensitive to the choice of learner and its parameters; many of the values chosen here are based on hard-learned lessons from the community. You may go directly to the code if you have read the basic GAN tutorial. First, the discriminator must be used on both the real MNIST images and the fake images generated by the generator function. One way to represent this in the computational graph is to create a clone of the output of the discriminator function, but with substituted inputs. Setting method=share in the clone function ensures that both paths through the discriminator model use the same set of parameters. Second, we need to update the parameters of the generator and discriminator models separately, using the gradients from different loss functions. We can get the parameters for a Function in the graph with the parameters attribute. However, when updating the model parameters, we update only the parameters of the respective model while keeping the other parameters unchanged. In other words, when updating the generator we update only the parameters of the $G$ function while keeping the parameters of the $D$ function fixed, and vice versa. Training the Model The code for training the GAN closely follows the algorithm presented in the original NIPS 2014 paper. In this implementation, we train $D$ to maximize the probability of assigning the correct label (fake vs. real) to both training examples and samples from $G$. In other words, $D$ and $G$ play the following two-player minimax game with value function $V(G,D)$: $$ \min_G \max_D V(D,G)= \mathbb{E}_{x}[\log D(x)] + \mathbb{E}_{z}[\log(1 - D(G(z)))] $$ At the optimal point of this game the generator produces realistic-looking data while the discriminator outputs a probability of 0.5 for both real and generated images, i.e., it can no longer tell them apart. The algorithm referred to below is implemented in this tutorial.
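In the build_graph function below, these two objectives appear as the per-sample losses that the two learners actually minimize (the generator loss is written in the slightly unusual $1 - \log D(G(z))$ form used by this tutorial): $$ \mathcal{L}_D = -\big[\log D(x) + \log(1 - D(G(z)))\big], \qquad \mathcal{L}_G = 1 - \log D(G(z)) $$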
def build_graph(noise_shape, image_shape, generator, discriminator): input_dynamic_axes = [C.Axis.default_batch_axis()] Z = C.input(noise_shape, dynamic_axes=input_dynamic_axes) X_real = C.input(image_shape, dynamic_axes=input_dynamic_axes) X_real_scaled = X_real / 255.0 # Create the model function for the generator and discriminator models X_fake = generator(Z) D_real = discriminator(X_real_scaled) D_fake = D_real.clone( method = 'share', substitutions = {X_real_scaled.output: X_fake.output} ) # Create loss functions and configure optimazation algorithms G_loss = 1.0 - C.log(D_fake) D_loss = -(C.log(D_real) + C.log(1.0 - D_fake)) G_learner = adam( parameters = X_fake.parameters, lr = learning_rate_schedule(lr, UnitType.sample), momentum = momentum_schedule(0.5) ) D_learner = adam( parameters = D_real.parameters, lr = learning_rate_schedule(lr, UnitType.sample), momentum = momentum_schedule(0.5) ) # Instantiate the trainers G_trainer = Trainer( X_fake, (G_loss, None), G_learner ) D_trainer = Trainer( D_real, (D_loss, None), D_learner ) return X_real, X_fake, Z, G_trainer, D_trainer
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
With the value functions defined, we proceed to iteratively train the GAN model. Training the model can take significantly longer depending on the hardware, especially if the isFast flag is turned off.
def train(reader_train, generator, discriminator): X_real, X_fake, Z, G_trainer, D_trainer = \ build_graph(g_input_dim, d_input_dim, generator, discriminator) # print out loss for each model for upto 25 times print_frequency_mbsize = num_minibatches // 25 print("First row is Generator loss, second row is Discriminator loss") pp_G = ProgressPrinter(print_frequency_mbsize) pp_D = ProgressPrinter(print_frequency_mbsize) k = 2 input_map = {X_real: reader_train.streams.features} for train_step in range(num_minibatches): # train the discriminator model for k steps for gen_train_step in range(k): Z_data = noise_sample(minibatch_size) X_data = reader_train.next_minibatch(minibatch_size, input_map) if X_data[X_real].num_samples == Z_data.shape[0]: batch_inputs = {X_real: X_data[X_real].data, Z: Z_data} D_trainer.train_minibatch(batch_inputs) # train the generator model for a single step Z_data = noise_sample(minibatch_size) batch_inputs = {Z: Z_data} G_trainer.train_minibatch(batch_inputs) G_trainer.train_minibatch(batch_inputs) pp_G.update_with_trainer(G_trainer) pp_D.update_with_trainer(D_trainer) G_trainer_loss = G_trainer.previous_minibatch_loss_average return Z, X_fake, G_trainer_loss reader_train = create_reader(train_file, True, d_input_dim, label_dim=10) # G_input, G_output, G_trainer_loss = train(reader_train, dense_generator, dense_discriminator) G_input, G_output, G_trainer_loss = train(reader_train, convolutional_generator, convolutional_discriminator) # Print the generator loss print("Training loss of the generator is: {0:.2f}".format(G_trainer_loss))
simpleGan/CNTK_206B_DCGAN.ipynb
olgaliak/cntk-cyclegan
mit
This gives us a nice way to move from our preference $x_i$ to a probability of switching styles. Here $\beta$ is inversely related to noise. For large $\beta$, the noise is small and we basically map $x > 0$ to a 100% probability of switching, and $x<0$ to a 0% probability of switching. As $\beta$ gets smaller, the probabilities get less and less distinct. The Code Let's see this model in action. We'll start by defining a class which implements everything we've gone through above:
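As a quick numerical sanity check (a small NumPy sketch using the same phi that appears as a static method in the class below):

```python
import numpy as np

def phi(x, beta):
    # map a preference x to a switching probability via a scaled tanh
    return 0.5 * (1 + np.tanh(beta * x))

# Large beta: nearly a hard threshold at x = 0; small beta: probabilities stay near 0.5
print(phi(np.array([-1.0, 0.0, 1.0]), beta=10))   # ~[0.00, 0.50, 1.00]
print(phi(np.array([-1.0, 0.0, 1.0]), beta=0.1))  # ~[0.45, 0.50, 0.55]
```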
class HipsterStep(object): """Class to implement hipster evolution Parameters ---------- initial_style : length-N array values > 0 indicate one style, while values <= 0 indicate the other. is_hipster : length-N array True or False, indicating whether each person is a hipster influence_matrix : N x N array Array of non-negative values. influence_matrix[i, j] indicates how much influence person j has on person i delay_matrix : N x N array Array of positive integers. delay_matrix[i, j] indicates the number of days delay between person j's influence on person i. """ def __init__(self, initial_style, is_hipster, influence_matrix, delay_matrix, beta=1, rseed=None): self.initial_style = initial_style self.is_hipster = is_hipster self.influence_matrix = influence_matrix self.delay_matrix = delay_matrix self.rng = np.random.RandomState(rseed) self.beta = beta # make s array consisting of -1 and 1 self.s = -1 + 2 * (np.atleast_2d(initial_style) > 0) N = self.s.shape[1] # make eps array consisting of -1 and 1 self.eps = -1 + 2 * (np.asarray(is_hipster) > 0) # create influence_matrix and delay_matrix self.J = np.asarray(influence_matrix, dtype=float) self.tau = np.asarray(delay_matrix, dtype=int) # validate all the inputs assert self.s.ndim == 2 assert self.s.shape[1] == N assert self.eps.shape == (N,) assert self.J.shape == (N, N) assert np.all(self.J >= 0) assert np.all(self.tau > 0) @staticmethod def phi(x, beta): return 0.5 * (1 + np.tanh(beta * x)) def step_once(self): N = self.s.shape[1] # iref[i, j] gives the index for the j^th individual's # time-delayed influence on the i^th individual iref = np.maximum(0, self.s.shape[0] - self.tau) # sref[i, j] gives the previous state of the j^th individual # which affects the current state of the i^th individual sref = self.s[iref, np.arange(N)] # m[i] is the mean of weighted influences of other individuals m = (self.J * sref).sum(1) / self.J.sum(1) # From m, we use the sigmoid function to compute a transition probability transition_prob = self.phi(-self.eps * m * self.s[-1], beta=self.beta) # Now choose steps stochastically based on this probability new_s = np.where(transition_prob > self.rng.rand(N), -1, 1) * self.s[-1] # Add this to the results, and return self.s = np.vstack([self.s, new_s]) return self.s def step(self, N): for i in range(N): self.step_once() return self.s
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
Now we'll create a function which will return an instance of the HipsterStep class with the appropriate settings:
def get_sim(Npeople=500, hipster_frac=0.8, initial_state_frac=0.5, delay=20, log10_beta=0.5, rseed=42): rng = np.random.RandomState(rseed) initial_state = (rng.rand(1, Npeople) > initial_state_frac) is_hipster = (rng.rand(Npeople) > hipster_frac) influence_matrix = abs(rng.randn(Npeople, Npeople)) influence_matrix.flat[::Npeople + 1] = 0 delay_matrix = 1 + rng.poisson(delay, size=(Npeople, Npeople)) return HipsterStep(initial_state, is_hipster, influence_matrix, delay_matrix=delay_matrix, beta=10 ** log10_beta, rseed=rseed)
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
Exploring this data Now that we've defined the simulation, we can start exploring this data. I'll quickly demonstrate how to advance simulation time and get the results. First we initialize the model with a certain fraction of hipsters:
sim = get_sim(hipster_frac=0.8)
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
To run the simulation a number of steps we execute sim.step(Nsteps), which gives us a matrix of identities for each individual at each timestep.
result = sim.step(200)
result
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.
%%opts Image [width=600]
hv.Image(result.T, bounds=(0, 0, 100, 500), kdims=['Time', 'individual'], vdims=['State'])
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
Now that you know how to run the simulation and access the data, have a go at exploring the effects of different parameters on the population dynamics, or apply some custom analyses to this data. Here are two quick examples of what you can do:
%%opts Curve [width=350] Image [width=350] hipster_frac = hv.HoloMap(kdims=['Hipster Fraction']) for i in np.linspace(0.1, 1, 10): sim = get_sim(hipster_frac=i) hipster_frac[i] = hv.Image(sim.step(200).T, (0, 0, 500, 500), group='Population Dynamics', kdims=['Time', 'individual'], vdims=['Bearded']) (hipster_frac + hipster_frac.reduce(individual=np.mean).to.curve('Time', 'Bearded')) %%opts Overlay [width=600] Curve (color='black') aggregated = hipster_frac.table().aggregate(['Time', 'Hipster Fraction'], np.mean, np.std) aggregated.to.curve('Time') * aggregated.to.errorbars('Time')
doc/Examples/HipsterDynamics.ipynb
vascotenner/holoviews
bsd-3-clause
1. Create data loaders Use DataLoader to create a <tt>train_loader</tt> and a <tt>test_loader</tt>. Batch sizes should be 10 for both.
# CODE HERE # DON'T WRITE HERE
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
2. Examine a batch of images Use DataLoader, <tt>make_grid</tt> and matplotlib to display the first batch of 10 images.<br> OPTIONAL: display the labels as well
# CODE HERE # DON'T WRITE HERE # IMAGES ONLY # DON'T WRITE HERE # IMAGES AND LABELS
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
Downsampling <h3>3. If a 28x28 image is passed through a Convolutional layer using a 5x5 filter, a step size of 1, and no padding, what is the resulting matrix size?</h3> <div style='border:1px black solid; padding:5px'> <br><br> </div>
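As a hint before running the check below, recall the standard output-size formula for a convolution over a $W \times W$ input with kernel size $K$, padding $P$ and stride $S$ (here $W=28$, $K=5$, $P=0$, $S=1$): $$ W_{out} = \left\lfloor \frac{W - K + 2P}{S} \right\rfloor + 1 $$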
################################################## ###### ONLY RUN THIS TO CHECK YOUR ANSWER! ###### ################################################ # Run the code below to check your answer: conv = nn.Conv2d(1, 1, 5, 1) for x,labels in train_loader: print('Orig size:',x.shape) break x = conv(x) print('Down size:',x.shape)
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
4. If the sample from question 3 is then passed through a 2x2 MaxPooling layer, what is the resulting matrix size? <div style='border:1px black solid; padding:5px'> <br><br> </div>
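Similarly, for an $F \times F$ max pooling layer with stride $F$ (the default when only the kernel size is given), each spatial dimension shrinks to: $$ W_{out} = \left\lfloor \frac{W}{F} \right\rfloor $$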
################################################## ###### ONLY RUN THIS TO CHECK YOUR ANSWER! ###### ################################################ # Run the code below to check your answer: x = F.max_pool2d(x, 2, 2) print('Down size:',x.shape)
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
CNN definition 5. Define a convolutional neural network Define a CNN model that can be trained on the Fashion-MNIST dataset. The model should contain two convolutional layers, two pooling layers, and two fully connected layers. You can use any number of neurons per layer so long as the model takes in a 28x28 image and returns an output of 10. Portions of the definition have been filled in for convenience.
# CODE HERE class ConvolutionalNetwork(nn.Module): def __init__(self): super().__init__() pass def forward(self, X): pass return torch.manual_seed(101) model = ConvolutionalNetwork()
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
Trainable parameters 6. What is the total number of trainable parameters (weights & biases) in the model above? Answers will vary depending on your model definition. <div style='border:1px black solid; padding:5px'> <br><br> </div>
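One common way to count them (a sketch, assuming model is the ConvolutionalNetwork instance defined in question 5):

```python
# Sum the number of elements in every trainable tensor of the model.
total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'{total_params:,} trainable parameters')
```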
# CODE HERE
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
7. Define loss function & optimizer Define a loss function called "criterion" and an optimizer called "optimizer".<br> You can use any functions you want, although we used Cross Entropy Loss and Adam (learning rate of 0.001) respectively.
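A minimal sketch matching the choices mentioned above (assuming model is the network defined in question 5):

```python
# Multiclass classification loss and Adam optimizer with lr=0.001
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```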
# CODE HERE # DON'T WRITE HERE
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
8. Train the model Don't worry about tracking loss values, displaying results, or validating the test set. Just train the model through 5 epochs. We'll evaluate the trained model in the next step.<br> OPTIONAL: print something after each epoch to indicate training progress.
# CODE HERE
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
9. Evaluate the model Set <tt>model.eval()</tt> and determine the percentage correct out of 10,000 total test images.
# CODE HERE
torch/PYTORCH_NOTEBOOKS/03-CNN-Convolutional-Neural-Networks/05-CNN-Exercises.ipynb
rishuatgithub/MLPy
apache-2.0
The dataset contains information (21 features, including the price) related to 21613 houses. Our target variable (i.e., what we want to predict when a new house goes on sale) is the price. Baseline: the simplest model Now let's compute the loss in the case of the simplest model: a fixed price equal to the average of historic prices, independent of house size, rooms, location, ...
# Let's compute the mean of the House Prices in King County
y = sales['price']      # extract the price column
avg_price = y.mean()    # this is our baseline
print("average price: ${:.0f} ".format(avg_price))

ExamplePrice = y[0]
ExamplePrice
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
The predictions are very easy to calculate: the predicted value is just the baseline value:
def get_baseline_predictions():
    # Simplest version: return the baseline as predicted values
    predicted_values = avg_price
    return predicted_values
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Example:
my_house_size = 2500
estimated_price = get_baseline_predictions()
print("The estimated price for a house with {} square feet is {:.0f}".format(my_house_size, estimated_price))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
The estimated price for the example house will still be around 540K, while the real value is around 222K. Quite an error! Measures of loss There are several ways of implementing the loss; I use the squared error here. $L = [y - f(X)]^2$
import numpy as np

def get_loss(yhat, target):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    target -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function
    """
    # compute the residuals (since we are squaring it doesn't matter
    # which order you subtract)
    # np.dot will square the residuals and add them up
    loss = np.dot((target - yhat), (target - yhat))

    return(loss)
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
To better interpret the value of the cost function we also use the RMSE, the Root Mean Squared Error: essentially the average of the losses, square-rooted.
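Concretely, with $m$ training examples: $$ \mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\big(y_i - f(X_i)\big)^2} = \sqrt{\frac{\mathrm{RSS}}{m}} $$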
baselineCost = get_loss(get_baseline_predictions(), y)
print("Training Error for baseline RSS: {:.0f}".format(baselineCost))
print("Average Training Error for baseline RMSE: {:.0f}".format(np.sqrt(baselineCost/m)))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
As you can see, the error is quite high, especially relative to the average selling price. Now, we can look at how training error behaves as model complexity increases. Learning a better but still simple model Using a constant value, the average, is easy but does not make much sense. Let's create a linear model with the house size as the feature. We expect that the price depends on the size: bigger house, more expensive.
from sklearn import linear_model

simple_model = linear_model.LinearRegression()
simple_features = sales[['sqft_living']]  # input X: the house size
simple_model.fit(simple_features, y)
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Now that we have fit the model we can extract the regression weights (coefficients) as follows:
simple_model_intercept = simple_model.intercept_
print(simple_model_intercept)

simple_model_weights = simple_model.coef_
print(simple_model_weights)
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
This means that our simple model to predict a house price y is (approximately): $y = -43581 + 281x$ where x is the size in square feet. It is no longer a horizontal line but a sloped one. Making Predictions Recall that once a model is built we can use the .predict() function to find the predicted values for the data we pass. For example, using the simple model above:
training_predictions = simple_model.predict(simple_features)
print(training_predictions[0])
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
We are getting closer to the real value for the example house (recall, it's around 222K). Compute the Training Error Now that we can make predictions given the model, let's again compute the RSS and the RMSE.
# First get the predictions using the features subset
predictions = simple_model.predict(sales[['sqft_living']])
simpleCost = get_loss(predictions, y)
print("Training Error for the simple model RSS: {:.0f}".format(simpleCost))
print("Average Training Error for the simple model RMSE: {:.0f}".format(np.sqrt(simpleCost/m)))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
The simple model greatly reduced the training error. Learning a multiple regression model We can add more features to the model, for example the number of bedrooms and bathrooms.
more_features = sales[['sqft_living', 'bedrooms', 'bathrooms']] # input X
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
We can learn a multiple regression model predicting 'price' based on the above features with the following code:
better_model = linear_model.LinearRegression()
better_model.fit(more_features, y)
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Now that we have fitted the model we can extract the regression weights (coefficients) as follows:
betterModel_intercept = better_model.intercept_
print(betterModel_intercept)

betterModel_weights = better_model.coef_
print(betterModel_weights)
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
The better model is therefore: $y = 74847 + 309x_1 - 57861x_2 + 7933x_3$ Note that the equation now has three variables: the size, the bedrooms and the bathrooms. Making Predictions Again we can use the .predict() function to find the predicted values for the data we pass. For the model above:
better_predictions = better_model.predict(more_features)
print(better_predictions[0])
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Again, a little bit closer to the real value (222K). Compute the Training Error Now that we can make predictions given the model, let's again compute the RSS and RMSE of the model.
predictions = better_model.predict(more_features)
betterCost = get_loss(predictions, y)
print("Training Error for the multiple regression model RSS: {:.0f}".format(betterCost))
print("Average Training Error for the multiple regression model RMSE: {:.0f}".format(np.sqrt(betterCost/m)))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Only a slight improvement this time. Create some new features Although we often think of multiple regression as including multiple different features (e.g. number of bedrooms, square feet, and number of bathrooms), we can also consider transformations of existing features, e.g. the log of the square feet, or even "interaction" features such as the product of bedrooms and bathrooms.
from math import log
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Next we create the following new features as columns: * bedrooms_squared = bedrooms*bedrooms * bed_bath_rooms = bedrooms*bathrooms * log_sqft_living = log(sqft_living) * lat_plus_long = lat + long * more polynomial features: bedrooms ^ 4, bathrooms ^ 7, size ^ 3
sales['bedrooms_squared'] = sales['bedrooms'].apply(lambda x: x**2)
sales['bed_bath_rooms'] = sales['bedrooms'] * sales.bathrooms
sales['log_sqft_living'] = sales['sqft_living'].apply(lambda x: log(x))
sales['lat_plus_long'] = sales['lat'] + sales.long
sales['bedrooms_4'] = sales['bedrooms'].apply(lambda x: x**4)
sales['bathrooms_7'] = sales['bathrooms'].apply(lambda x: x**7)
sales['size_3'] = sales['sqft_living'].apply(lambda x: x**3)

sales.head()
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms. Bedrooms times bathrooms gives what's called an "interaction" feature: it is large when both of them are large. Taking the log of square feet has the effect of bringing large values closer together and spreading out small values. Adding latitude to longitude is totally nonsensical, but we will do it anyway (you'll see why). Learning Multiple Models Now we will learn the weights for five (nested) models for predicting house prices. The first model will have the fewest features, the second model will add more features, and so on:
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long', 'sqft_lot', 'floors']
model_2_features = model_1_features + ['log_sqft_living', 'bedrooms_squared', 'bed_bath_rooms']
model_3_features = model_2_features + ['lat_plus_long']
model_4_features = model_3_features + ['bedrooms_4', 'bathrooms_7']
model_5_features = model_4_features + ['size_3']
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Now that we have the features, we learn the weights for the five different models for predicting target = 'price', and look at the values of the weights/coefficients:
model_1 = linear_model.LinearRegression()
model_1.fit(sales[model_1_features], y)

model_2 = linear_model.LinearRegression()
model_2.fit(sales[model_2_features], y)

model_3 = linear_model.LinearRegression()
model_3.fit(sales[model_3_features], y)

model_4 = linear_model.LinearRegression()
model_4.fit(sales[model_4_features], y)

model_5 = linear_model.LinearRegression()
model_5.fit(sales[model_5_features], y)

# You can examine/extract each model's coefficients, for example:
print(model_1.coef_)
print(model_2.coef_)
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Interesting: in the previous model the coefficient for the lot size was positive, but now in model_2 it is negative. This is an effect of adding the log of the size as a feature. Comparing multiple models Now that you've learned five models and extracted the model weights, we want to evaluate which model is best. We can use the loss function from earlier to compute the RSS on the training data for each of the models.
# Compute the RSS for each of the models:
print(get_loss(model_1.predict(sales[model_1_features]), y))
print(get_loss(model_2.predict(sales[model_2_features]), y))
print(get_loss(model_3.predict(sales[model_3_features]), y))
print(get_loss(model_4.predict(sales[model_4_features]), y))
print(get_loss(model_5.predict(sales[model_5_features]), y))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
model_5 has the lowest RSS on the training data: the most complex model. The test error Training error decreases quite significantly with model complexity. This is quite intuitive: the model was fit on the training points, and as we increase the model complexity we are better able to fit the training data points. A natural question is whether training error is a good measure of predictive performance. The issue is that the training error is overly optimistic, because the beta parameters were fit on the training data to minimise the residual sum of squares, which is directly related to the training error. So, in general, having a small training error does not imply having good predictive performance. This takes us to something called test error (or out-of-sample error): we hold out some houses from the data set and put them into what's called a test set. When we fit our models, we fit them on the training data set only; when we assess the performance of a model, we look at the houses in the test dataset, which hopefully serve as a proxy for everything out there in the world. Bottom line: the test error is a (noisy) approximation of the true error. Split data into training and testing Let's see how this can be applied to our example. First we split the data into a training set and a testing set using a function from sklearn, train_test_split(). We use a seed for reproducibility.
from sklearn.model_selection import train_test_split

train_data, test_data = train_test_split(sales, test_size=0.3, random_state=999)

train_data.head()
train_data.shape

# test_data = pd.read_csv('kc_house_test_data.csv', dtype=dtype_dict)
test_data.head()
test_data.shape
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
In this case the testing set will be 30% of the data (and therefore the training set is the remaining 70% of the original data).
train_y = train_data.price  # extract the price column
test_y = test_data.price
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Retrain the models on training data only:
model_1.fit(train_data[model_1_features], train_y)
model_2.fit(train_data[model_2_features], train_y)
model_3.fit(train_data[model_3_features], train_y)
model_4.fit(train_data[model_4_features], train_y)
model_5.fit(train_data[model_5_features], train_y)

# Compute the RSS on TRAINING data for each of the models
print(get_loss(model_1.predict(train_data[model_1_features]), train_y))
print(get_loss(model_2.predict(train_data[model_2_features]), train_y))
print(get_loss(model_3.predict(train_data[model_3_features]), train_y))
print(get_loss(model_4.predict(train_data[model_4_features]), train_y))
print(get_loss(model_5.predict(train_data[model_5_features]), train_y))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Now compute the RSS on TEST data for each of the models.
# Compute the RSS on TESTING data for each of the five models and record the values:
print(get_loss(model_1.predict(test_data[model_1_features]), test_y))
print(get_loss(model_2.predict(test_data[model_2_features]), test_y))
print(get_loss(model_3.predict(test_data[model_3_features]), test_y))
print(get_loss(model_4.predict(test_data[model_4_features]), test_y))
print(get_loss(model_5.predict(test_data[model_5_features]), test_y))
01-Regression/overfit.ipynb
Mashimo/datascience
apache-2.0
Training with k-Fold Cross-Validation This recipe repeatedly trains a logistic regression classifier over different subsets (folds) of sample data. It attempts to match the percentage of each class in every fold to its percentage in the overall dataset (stratification). It evaluates each model against a test set and collects the confusion matrices for each test fold into a pandas.Panel. This recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to the instance classes as human readable names.
# <help:scikit_cross_validation> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them import pandas import sklearn import sklearn.datasets import sklearn.metrics as metrics from sklearn.linear_model import LogisticRegression from sklearn.cross_validation import StratifiedKFold # load the iris dataset dataset = sklearn.datasets.load_iris() # define feature vectors (X) and target (y) X = dataset.data y = dataset.target labels = dataset.target_names labels # <help:scikit_cross_validation> # use log reg classifier clf = LogisticRegression() cms = {} scores = [] cv = StratifiedKFold(y, n_folds=10) for i, (train, test) in enumerate(cv): # train then immediately predict the test set y_pred = clf.fit(X[train], y[train]).predict(X[test]) # compute the confusion matrix on each fold, convert it to a DataFrame and stash it for later compute cms[i] = pandas.DataFrame(metrics.confusion_matrix(y[test], y_pred), columns=labels, index=labels) # stash the overall accuracy on the test set for the fold too scores.append(metrics.accuracy_score(y[test], y_pred)) # Panel of all test set confusion matrices pl = pandas.Panel(cms) cm = pl.sum(axis=0) #Sum the confusion matrices to get one view of how well the classifiers perform cm # <help:scikit_cross_validation> # accuracy predicting the test set for each fold scores
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
Principal Component Analysis Plots This recipe performs a PCA and plots the data against the first two principal components in a scatter plot. It then prints the eigenvalues and eigenvectors of the covariance matrix and finally prints the percentage of total variance explained by each component. This recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes.
# <help:scikit_pca> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them from __future__ import division import math import pandas as pd import numpy as np import matplotlib.pyplot as plt import sklearn.datasets import sklearn.metrics as metrics from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler # load the iris dataset dataset = sklearn.datasets.load_iris() # define feature vectors (X) and target (y) X = dataset.data y = dataset.target labels = dataset.target_names # <help:scikit_pca> # define the number of components to compute, recommend n_components < y_features pca = PCA(n_components=2) X_pca = pca.fit_transform(X) # plot the first two principal components fig, ax = plt.subplots() plt.scatter(X_pca[:,0], X_pca[:,1]) plt.grid() plt.title('PCA of the dataset') ax.set_xlabel('Component #1') ax.set_ylabel('Component #2') plt.show() # <help:scikit_pca> # eigendecomposition on the covariance matrix cov_mat = np.cov(X_pca.T) eig_vals, eig_vecs = np.linalg.eig(cov_mat) print('Eigenvectors \n%s' %eig_vecs) print('\nEigenvalues \n%s' %eig_vals) # <help:scikit_pca> # prints the percentage of overall variance explained by each component print(pca.explained_variance_ratio_)
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
K-Means Clustering Plots This recipe performs K-means clustering for a range of cluster counts k. It prints and plots the within-cluster sum of squares error for each k (i.e., inertia) as an indicator of what value of k might be appropriate for the given dataset. This recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. To change the number of clusters, modify n.
# <help:scikit_k_means_cluster> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them from time import time import numpy as np import matplotlib.pyplot as plt import sklearn.datasets from sklearn.cluster import KMeans # load datasets and assign data and features dataset = sklearn.datasets.load_iris() # define feature vectors (X) and target (y) X = dataset.data y = dataset.target # set the number of clusters, must be >=1 n = 6 inertia = [np.NaN] # perform k-means clustering over i=0...k for k in range(1,n): k_means_ = KMeans(n_clusters=k) k_means_.fit(X) print('k = %d, inertia= %f' % (k, k_means_.inertia_ )) inertia.append(k_means_.inertia_) # plot the SSE of the clusters for each value of i ax = plt.subplot(111) ax.plot(inertia, '-o') plt.xticks(range(n)) plt.title("Inertia") ax.set_ylabel('Inertia') ax.set_xlabel('# Clusters') plt.show()
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
SVM Classifier Hyperparameter Tuning with Grid Search This recipe performs a grid search for the best settings for a support vector machine, predicting the class of each flower in the dataset. It splits the dataset into training and test instances once. This recipe defaults to using the Iris data set. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. Modify parameters to change the grid search space or the scoring='accuracy' value to optimize a different metric for the classifier (e.g., precision, recall).
#<help_scikit_grid_search>
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import sklearn.metrics as metrics
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import label_binarize

# load datasets and features
dataset = sklearn.datasets.load_iris()

# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
labels = dataset.target_names

# separate datasets into training and test datasets once, no folding
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

#<help_scikit_grid_search>
# define the parameter dictionary with the kernels of SVCs
parameters = [
    {'kernel': ['rbf'], 'gamma': [1e-3, 1e-4, 1e-2], 'C': [1, 10, 100, 1000]},
    {'kernel': ['linear'], 'C': [1, 10, 100, 1000]},
    {'kernel': ['poly'], 'degree': [1, 3, 5], 'C': [1, 10, 100, 1000]}
]

# find the best parameters to optimize accuracy
svc_clf = SVC(C=1, probability=True)
clf = GridSearchCV(svc_clf, parameters, cv=5, scoring='accuracy')  # 5 folds
clf.fit(X_train, y_train)  # train the model

print("Best parameters found from SVM's:")
print(clf.best_params_)
print("Best score found from SVM's:")
print(clf.best_score_)
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
Plot ROC Curves This recipe plots the receiver operating characteristic (ROC) curve for an SVM classifier trained over the given dataset. This recipe defaults to using the Iris data set, which has three classes. The recipe uses a one-vs-the-rest strategy to create the binary classifications appropriate for ROC plotting. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. Note that the recipe adds noise to the iris features to make the ROC plots more realistic. Otherwise, the classification is nearly perfect and the plot hard to study. Remove the noise generator if you use your own data!
# <help:scikit_roc> import warnings warnings.filterwarnings('ignore') #notebook outputs warnings, let's ignore them import numpy as np import matplotlib.pyplot as plt import sklearn.datasets import sklearn.metrics as metrics from sklearn.svm import SVC from sklearn.multiclass import OneVsRestClassifier from sklearn.cross_validation import train_test_split from sklearn.preprocessing import label_binarize # load iris, set and data dataset = sklearn.datasets.load_iris() X = dataset.data # binarize the output for binary classification y = label_binarize(dataset.target, classes=[0, 1, 2]) labels = dataset.target_names # <help:scikit_roc> # add noise to the features so the plot is less ideal # REMOVE ME if you use your own dataset! random_state = np.random.RandomState(0) n_samples, n_features = X.shape X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # <help:scikit_roc> # split data for cross-validation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3) # classify instances into more than two classes, one vs rest # add param to create probabilities to determine Y or N as the classification clf = OneVsRestClassifier(SVC(kernel='linear', probability=True)) # fit estiamators and return the distance of each sample from the decision boundary y_score = clf.fit(X_train, y_train).decision_function(X_test) # <help:scikit_roc> # plot the ROC curve, best for it to be in top left corner plt.figure(figsize=(10,5)) plt.plot([0, 1], [0, 1], 'k--') # add a straight line representing a random model for i, label in enumerate(labels): # false positive and true positive rate for each class fpr, tpr, _ = metrics.roc_curve(y_test[:, i], y_score[:, i]) # area under the curve (auc) for each class roc_auc = metrics.auc(fpr, tpr) plt.plot(fpr, tpr, label='ROC curve of {0} (area = {1:0.2f})'.format(label, roc_auc)) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.title('Receiver Operating Characteristic for Iris data set') plt.xlabel('False Positive Rate') # 1- specificity plt.ylabel('True Positive Rate') # sensitivity plt.legend(loc="lower right") plt.show()
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
Build a Transformation and Classification Pipeline This recipe builds a transformation and training pipeline for a model that can classify a snippet of text as belonging to one of 20 USENET newsgroups. It then prints the precision, recall, and F1-score for predictions over a held-out test set, as well as the confusion matrix. This recipe defaults to using the 20 USENET newsgroups dataset. To use your own data, set X to your instance feature vectors, y to the instance classes as a factor, and labels to human-readable names of the classes. Then modify the pipeline components to perform appropriate transformations for your data. <div class="alert alert-block alert-warning" style="margin-top: 20px">**Warning:** Running this recipe with the sample data may consume a significant amount of memory.</div>
# <help:scikit_pipeline>
import pandas
import sklearn.metrics as metrics
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import Perceptron
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import train_test_split
from sklearn.datasets import fetch_20newsgroups

# download the newsgroup dataset (all subsets)
dataset = fetch_20newsgroups(subset='all')

# define feature vectors (X) and target (y)
X = dataset.data
y = dataset.target
labels = dataset.target_names
labels

# <help:scikit_pipeline>
# split data holding out 30% for testing the classifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# pipelines concatenate functions serially, output of 1 becomes input of 2
clf = Pipeline([
    ('vect', HashingVectorizer(analyzer='word', ngram_range=(1,3))),  # count frequency of words, using hashing trick
    ('tfidf', TfidfTransformer()),                                    # transform counts to tf-idf values
    ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, n_iter=5))
])

# <help:scikit_pipeline>
# train the model and predict the test set
y_pred = clf.fit(X_train, y_train).predict(X_test)

# standard information retrieval metrics
print(metrics.classification_report(y_test, y_pred, target_names=labels))

# <help:scikit_pipeline>
# show the confusion matrix in a labeled dataframe for ease of viewing
index_labels = ['{} {}'.format(i, l) for i, l in enumerate(labels)]
pandas.DataFrame(metrics.confusion_matrix(y_test, y_pred), index=index_labels)
scikit-learn/sklearn_cookbook.ipynb
knowledgeanyhow/notebooks
mit
Defining Geometry At this point, we have three materials defined, exported to XML, and ready to be used in our model. To finish our model, we need to define the geometric arrangement of materials. OpenMC represents physical volumes using constructive solid geometry (CSG), also known as combinatorial geometry. The object that allows us to assign a material to a region of space is called a Cell (same concept in MCNP, for those familiar). In order to define a region that we can assign to a cell, we must first define surfaces which bound the region. A surface is a locus of zeros of a function of Cartesian coordinates $x$, $y$, and $z$, e.g. A plane perpendicular to the x axis: $x - x_0 = 0$ A cylinder parallel to the z axis: $(x - x_0)^2 + (y - y_0)^2 - R^2 = 0$ A sphere: $(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2 - R^2 = 0$ Between those three classes of surfaces (planes, cylinders, spheres), one can construct a wide variety of models. It is also possible to define cones and general second-order surfaces (tori are not currently supported). Note that defining a surface is not sufficient to specify a volume -- in order to define an actual volume, one must reference the half-space of a surface. A surface half-space is the region whose points satisfy a positive or negative inequality of the surface equation. For example, for a sphere of radius one centered at the origin, the surface equation is $f(x,y,z) = x^2 + y^2 + z^2 - 1 = 0$. Thus, we say that the negative half-space of the sphere, is defined as the collection of points satisfying $f(x,y,z) < 0$, which one can reason is the inside of the sphere. Conversely, the positive half-space of the sphere would correspond to all points outside of the sphere. Let's go ahead and create a sphere and confirm that what we've told you is true.
sph = openmc.Sphere(R=1.0)
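One way to confirm the half-space behaviour (a short sketch; it assumes, as in the OpenMC Python API, that the unary - and + operators on a surface return its negative and positive half-spaces, which support point-membership tests):

```python
# The origin satisfies f(x,y,z) < 0, so it lies in the negative half-space (inside).
print((0, 0, 0) in -sph)   # expected: True
# A point at distance 2 from the origin lies outside the unit sphere.
print((0, 0, 2) in -sph)   # expected: False
print((0, 0, 2) in +sph)   # expected: True
```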
examples/jupyter/pincell.ipynb
wbinventor/openmc
mit
Pin cell geometry We now have enough knowledge to create our pin-cell. We need three surfaces to define the fuel and clad: The outer surface of the fuel -- a cylinder parallel to the z axis The inner surface of the clad -- same as above The outer surface of the clad -- same as above These three surfaces will all be instances of openmc.ZCylinder, each with a different radius according to the specification.
fuel_or = openmc.ZCylinder(R=0.39)
clad_ir = openmc.ZCylinder(R=0.40)
clad_or = openmc.ZCylinder(R=0.46)
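To actually assign materials we still need regions and cells. A minimal sketch of the idea (it assumes the uo2 material object created earlier in the notebook; the remaining cells follow the same pattern):

```python
# Combine half-spaces into regions with boolean operators (& = intersection)
fuel_region = -fuel_or                 # inside the fuel cylinder
clad_region = +clad_ir & -clad_or      # between the clad inner and outer surfaces

# A cell ties a region of space to a material fill
fuel_cell = openmc.Cell(name='fuel')
fuel_cell.fill = uo2
fuel_cell.region = fuel_region
```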
examples/jupyter/pincell.ipynb
wbinventor/openmc
mit
OpenMC also includes a factory function that generates a rectangular prism that could have made our lives easier.
box = openmc.get_rectangular_prism(width=pitch, height=pitch, boundary_type='reflective')
type(box)
examples/jupyter/pincell.ipynb
wbinventor/openmc
mit
Geometry plotting We saw before that we could call the Universe.plot() method to show a universe while we were creating our geometry. There is also a built-in plotter in the Fortran codebase that is much faster than the Python plotter and has more options. The interface looks somewhat similar to the Universe.plot() method. Instead though, we create Plot instances, assign them to a Plots collection, export it to XML, and then run OpenMC in geometry plotting mode. As an example, let's specify that we want the plot to be colored by material (rather than by cell) and we assign yellow to fuel and blue to water.
p = openmc.Plot()
p.filename = 'pinplot'
p.width = (pitch, pitch)
p.pixels = (200, 200)
p.color_by = 'material'
p.colors = {uo2: 'yellow', water: 'blue'}
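The remaining boilerplate would look roughly like this (a sketch: collect the plot into a Plots collection, export plots.xml, and run OpenMC in geometry-plotting mode; it assumes the materials and geometry XML files have already been exported):

```python
plots = openmc.Plots([p])
plots.export_to_xml()
openmc.plot_geometry()   # runs OpenMC in plotting mode and produces the 'pinplot' image
```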
examples/jupyter/pincell.ipynb
wbinventor/openmc
mit
That was a little bit cumbersome. Thankfully, OpenMC provides us with a function that does all that "boilerplate" work.
openmc.plot_inline(p)
examples/jupyter/pincell.ipynb
wbinventor/openmc
mit
Linear regression
# build a linear regression model from sklearn.linear_model import LinearRegression linreg = LinearRegression() linreg.fit(X_train, y_train) # examine the coefficients print(linreg.coef_) # make predictions y_pred = linreg.predict(X_test) # calculate RMSE from sklearn import metrics import numpy as np print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
Ridge regression Ridge documentation alpha: must be positive, increase for more regularization normalize: scales the features (without using StandardScaler)
# alpha=0 is equivalent to linear regression from sklearn.linear_model import Ridge ridgereg = Ridge(alpha=0, normalize=True) ridgereg.fit(X_train, y_train) y_pred = ridgereg.predict(X_test) print(np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # try alpha=0.1 ridgereg = Ridge(alpha=0.1, normalize=True) ridgereg.fit(X_train, y_train) y_pred = ridgereg.predict(X_test) print(np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # examine the coefficients print(ridgereg.coef_)
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
RidgeCV: ridge regression with built-in cross-validation of the alpha parameter alphas: array of alpha values to try
# create an array of alpha values alpha_range = 10.**np.arange(-2, 3) alpha_range # select the best alpha with RidgeCV from sklearn.linear_model import RidgeCV ridgeregcv = RidgeCV(alphas=alpha_range, normalize=True, scoring='neg_mean_squared_error') ridgeregcv.fit(X_train, y_train) ridgeregcv.alpha_ # predict method uses the best alpha value y_pred = ridgeregcv.predict(X_test) print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
Lasso regression Lasso documentation alpha: must be positive, increase for more regularization normalize: scales the features (without using StandardScaler)
# try alpha=0.001 and examine coefficients from sklearn.linear_model import Lasso lassoreg = Lasso(alpha=0.001, normalize=True) lassoreg.fit(X_train, y_train) print(lassoreg.coef_) # try alpha=0.01 and examine coefficients lassoreg = Lasso(alpha=0.01, normalize=True) lassoreg.fit(X_train, y_train) print(lassoreg.coef_) # calculate RMSE (for alpha=0.01) y_pred = lassoreg.predict(X_test) print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
LassoCV: lasso regression with built-in cross-validation of the alpha parameter n_alphas: number of alpha values (automatically chosen) to try
# select the best alpha with LassoCV import warnings warnings.filterwarnings('ignore') from sklearn.linear_model import LassoCV lassoregcv = LassoCV(n_alphas=100, normalize=True, random_state=1,cv=5) lassoregcv.fit(X_train, y_train) lassoregcv.alpha_ # examine the coefficients print(lassoregcv.coef_) # predict method uses the best alpha value y_pred = lassoregcv.predict(X_test) print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
Part 5: Regularized classification in scikit-learn Wine dataset from the UCI Machine Learning Repository: data, data dictionary Goal: Predict the origin of wine using chemical analysis Load and prepare the wine dataset
# read in the dataset url = 'https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/wine.data' wine = pd.read_csv(url, header=None) wine.head() # examine the response variable wine[0].value_counts() # define X and y X = wine.drop(0, axis=1) y = wine[0] # split into training and testing sets from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
Logistic regression (unregularized)
# build a logistic regression model from sklearn.linear_model import LogisticRegression logreg = LogisticRegression(C=1e9,solver='liblinear',multi_class='auto') logreg.fit(X_train, y_train) # examine the coefficients print(logreg.coef_) # generate predicted probabilities y_pred_prob = logreg.predict_proba(X_test) print(y_pred_prob) # calculate log loss print(metrics.log_loss(y_test, y_pred_prob))
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
Logistic regression (regularized) LogisticRegression documentation C: must be positive, decrease for more regularization penalty: l1 (lasso) or l2 (ridge)
# standardize X_train and X_test from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X_train = X_train.astype(float) X_test = X_test.astype(float) scaler.fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # try C=0.1 with L1 penalty logreg = LogisticRegression(C=0.1, penalty='l1',solver='liblinear',multi_class='auto') logreg.fit(X_train_scaled, y_train) print(logreg.coef_) # generate predicted probabilities and calculate log loss y_pred_prob = logreg.predict_proba(X_test_scaled) print(metrics.log_loss(y_test, y_pred_prob)) # try C=0.1 with L2 penalty logreg = LogisticRegression(C=0.1, penalty='l2',multi_class='auto',solver='liblinear') logreg.fit(X_train_scaled, y_train) print(logreg.coef_) # generate predicted probabilities and calculate log loss y_pred_prob = logreg.predict_proba(X_test_scaled) print(metrics.log_loss(y_test, y_pred_prob))
notebooks/07-regularization.ipynb
albahnsen/PracticalMachineLearningClass
mit
<br> Import required modules
import json import time import numpy as np import pandas as pd from geopy.geocoders import GoogleV3 from geopy.exc import GeocoderQueryError, GeocoderQuotaExceeded
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Load the datafile spb2018_-_cleaned.csv, which contains the form responses to the Suomen Parhaat Boulderit 2018 survey.
# Load cleaned dataset spb2018_df = pd.read_csv("data/survey_-_cleaned.csv") # Drop duplicates (exclude the Timestamp column from comparisons) spb2018_df = spb2018_df.drop_duplicates(subset=spb2018_df.columns.values.tolist()[1:]) spb2018_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Load the datafile boulders_-_prefilled.csv, which contains manually added details of each voted boulder.
boulder_details_df = pd.read_csv("data/boulders_-_prefilled.csv", index_col="Name") boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column VotedBy
""" # Simpler but slower (appr. four times) implementation # 533 ms ± 95.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) def add_column_votedby(column_name="VotedBy"): # Gender mappings from Finnish to English gender_dict = { "Mies": "Male", "Nainen": "Female" } # Iterate over boulders for index, row in boulder_details_df.iterrows(): boulder_name = index gender_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name) | (spb2018_df["Boulderin nimi.1"] == boulder_name) | (spb2018_df["Boulderin nimi.2"] == boulder_name), "Sukupuoli"] boulder_details_df.loc[boulder_name, column_name] = gender_dict[gender_s.iloc[0]] if gender_s.nunique() == 1 else "Both" """ """ # More complex but faster (appr. four times) implementation # 136 ms ± 5.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) def add_column_votedby(column_name="VotedBy"): # Initialize the new column boulder_details_df[column_name] = "" # Gender mappings from Finnish to English gender_dict = { "Mies": "Male", "Nainen": "Female" } def update_genders(gender, boulder_names): for boulder_name in boulder_names: previous_gender = boulder_details_df.loc[boulder_name, column_name] if previous_gender == "" or previous_gender == gender: boulder_details_df.loc[boulder_name, column_name] = gender else: boulder_details_df.loc[boulder_name, column_name] = "Both" # Iterate over form responses for index, row in spb2018_df.iterrows(): gender = gender_dict[row["Sukupuoli"]] boulder_names = [row["Boulderin nimi"], row["Boulderin nimi.1"], row["Boulderin nimi.2"]] boulder_names = [boulder_name for boulder_name in boulder_names if pd.notnull(boulder_name)] update_genders(gender, boulder_names) """ # Typical implementation # 430 ms ± 78.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) def add_column_votedby(column_name="VotedBy"): # Gender mappings from Finnish to English gender_dict = { "Mies": "Male", "Nainen": "Female" } def set_voted_by(row): boulder_name = row.name gender_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name) | (spb2018_df["Boulderin nimi.1"] == boulder_name) | (spb2018_df["Boulderin nimi.2"] == boulder_name), "Sukupuoli"] return gender_dict[gender_s.iloc[0]] if gender_s.nunique() == 1 else "Both" boulder_details_df[column_name] = boulder_details_df.apply(set_voted_by, axis=1) add_column_votedby() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column Votes.
def add_column_votes(column_name="Votes"): boulder_name_columns = [spb2018_df["Boulderin nimi"], spb2018_df["Boulderin nimi.1"], spb2018_df["Boulderin nimi.2"]] all_voted_boulders_s = pd.concat(boulder_name_columns, ignore_index=True).dropna() boulder_votes_s = all_voted_boulders_s.value_counts() boulder_details_df[column_name] = boulder_votes_s add_column_votes() boulder_details_df.sort_values(by=["Votes"], ascending=[False]).loc[boulder_details_df["Votes"] >= 3]
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add columns Latitude and Longitude.
def add_columns_latitude_and_longitude(column_names=["Latitude", "Longitude"]): boulder_details_df[[column_names[0], column_names[1]]] = boulder_details_df["Coordinates"].str.split(",", expand=True).astype(float) add_columns_latitude_and_longitude() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column GradeNumeric.
def add_column_gradenumeric(column_name="GradeNumeric"): # Grade mappings from Font to numeric grade_dict = { "?": 0, "1": 1, "2": 2, "3": 3, "4": 4, "4+": 5, "5": 6, "5+": 7, "6A": 8, "6A+": 9, "6B": 10, "6B+": 11, "6C": 12, "6C+": 13, "7A": 14, "7A+": 15, "7B": 16, "7B+": 17, "7C": 18, "7C+": 19, "8A": 20, "8A+": 21, "8B": 22, "8B+": 23, "8C": 24, "8C+": 25, "9A": 26 } boulder_details_df[column_name] = boulder_details_df.apply(lambda row: str(grade_dict[row["Grade"]]) if pd.notnull(row["Grade"]) else np.nan, axis=1) boulder_details_df[column_name] = boulder_details_df[column_name].astype(int) add_column_gradenumeric() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column Adjectives
def add_column_adjectives(column_name="Adjectives"): def set_adjectives(row): boulder_name = row.name adjectives1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Kuvaile boulderia kolmella (3) adjektiivilla"] adjectives2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == boulder_name), "Kuvaile boulderia kolmella (3) adjektiivilla.1"] adjectives3_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.2"] == boulder_name), "Kuvaile boulderia kolmella (3) adjektiivilla.2"] adjectives_s = adjectives1_s.append(adjectives2_s).append(adjectives3_s) adjectives = ",".join(adjectives_s) # Clean adjectives adjectives = ",".join(sorted(list(set([adjective.strip().lower() for adjective in adjectives.split(",")])))) return adjectives boulder_details_df[column_name] = boulder_details_df.apply(set_adjectives, axis=1) add_column_adjectives() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column MainHoldTypes
def add_column_main_hold_types(column_name="MainHoldTypes"): def set_main_hold_types(row): boulder_name = row.name main_hold_types1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Boulderin pääotetyypit"] main_hold_types2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == boulder_name), "Boulderin pääotetyypit.1"] main_hold_types3_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.2"] == boulder_name), "Boulderin pääotetyypit.2"] main_hold_types_s = main_hold_types1_s.append(main_hold_types2_s).append(main_hold_types3_s) main_hold_types = ",".join(main_hold_types_s) # Clean main_hold_types main_hold_types = ",".join(sorted(list(set([main_hold_type.strip().lower() for main_hold_type in main_hold_types.split(",")])))) return main_hold_types boulder_details_df[column_name] = boulder_details_df.apply(set_main_hold_types, axis=1) add_column_main_hold_types() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column MainProfiles
def add_column_main_profiles(column_name="MainProfiles"): def set_main_profiles(row): boulder_name = row.name main_profiles1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Boulderin pääprofiilit"] main_profiles2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == boulder_name), "Boulderin pääprofiilit.1"] main_profiles3_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.2"] == boulder_name), "Boulderin pääprofiilit.2"] main_profiles_s = main_profiles1_s.append(main_profiles2_s).append(main_profiles3_s) main_profiles = ",".join(main_profiles_s) # Clean main_profiles main_profiles = ",".join(sorted(list(set([main_profile.strip().lower() for main_profile in main_profiles.split(",")])))) return main_profiles boulder_details_df[column_name] = boulder_details_df.apply(set_main_profiles, axis=1) add_column_main_profiles() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add column MainSkillsNeeded
def add_column_main_skills_needed(column_name="MainSkillsNeeded"): def set_main_skills_needed(row): boulder_name = row.name main_skills_needed1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Boulderin kiipeämiseen vaadittavat pääkyvyt"] main_skills_needed2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == boulder_name), "Boulderin kiipeämiseen vaadittavat pääkyvyt.1"] main_skills_needed3_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.2"] == boulder_name), "Boulderin kiipeämiseen vaadittavat pääkyvyt.2"] main_skills_needed_s = main_skills_needed1_s.append(main_skills_needed2_s).append(main_skills_needed3_s) main_skills_needed = ",".join(main_skills_needed_s) # Clean main_skills_needed main_skills_needed = ",".join(sorted(list(set([main_skill_needed.strip().lower() for main_skill_needed in main_skills_needed.split(",")])))) return main_skills_needed boulder_details_df[column_name] = boulder_details_df.apply(set_main_skills_needed, axis=1) add_column_main_skills_needed() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
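The Adjectives, MainHoldTypes, MainProfiles and MainSkillsNeeded functions above all follow the same pattern: collect one free-text answer column from each of the three vote slots, split on commas, and keep the sorted unique lowercase values. A single parameterised helper could replace the four near-identical functions. This is a hedged sketch assuming the same DataFrames and the ".1"/".2" column-name suffixes used above; it uses pd.concat because Series.append is deprecated in recent pandas:

def add_aggregated_column(column_name, source_column):
    """Aggregate a comma-separated free-text answer over the three vote slots."""
    name_cols = ["Boulderin nimi", "Boulderin nimi.1", "Boulderin nimi.2"]
    source_cols = [source_column, source_column + ".1", source_column + ".2"]

    def aggregate(row):
        boulder_name = row.name
        parts = [spb2018_df.loc[spb2018_df[name_col] == boulder_name, source_col]
                 for name_col, source_col in zip(name_cols, source_cols)]
        joined = ",".join(pd.concat(parts).dropna())
        return ",".join(sorted({part.strip().lower() for part in joined.split(",") if part.strip()}))

    boulder_details_df[column_name] = boulder_details_df.apply(aggregate, axis=1)

# Example usage, reproducing two of the columns above:
# add_aggregated_column("Adjectives", "Kuvaile boulderia kolmella (3) adjektiivilla")
# add_aggregated_column("MainProfiles", "Boulderin pääprofiilit")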
<br> Add column Comments
def add_column_comments(column_name="Comments"): def set_comments(row): boulder_name = row.name comments1_s = spb2018_df.loc[(spb2018_df["Boulderin nimi"] == boulder_name), "Kuvaile boulderia omin sanoin (vapaaehtoinen)"] comments2_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.1"] == boulder_name), "Kuvaile boulderia omin sanoin (vapaaehtoinen).1"] comments3_s = spb2018_df.loc[(spb2018_df["Boulderin nimi.2"] == boulder_name), "Kuvaile boulderia omin sanoin (vapaaehtoinen).2"] comments_s = comments1_s.append(comments2_s).append(comments3_s) comments = [] for index, value in comments_s.iteritems(): if pd.notnull(value): comments.append(value.strip()) return ",".join("\"{}\"".format(comment) for comment in comments) boulder_details_df[column_name] = boulder_details_df.apply(set_comments, axis=1) add_column_comments() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Add columns AreaLevel1, AreaLevel2, and AreaLevel3
def add_columns_arealevel1_arealevel2_and_arealevel3(column_names=["AreaLevel1", "AreaLevel2", "AreaLevel3"]): boulder_details_df.drop(columns=[column_names[0], column_names[1], column_names[2]], inplace=True, errors="ignore") geolocator = GoogleV3(api_key=GOOGLE_MAPS_JAVASCRIPT_API_KEY) def extract_administrative_area_levels(location_results, approximateLocation, area_levels_dict): # List of location result types that we are interested in location_result_types = ["administrative_area_level_1", "administrative_area_level_2", "administrative_area_level_3"] # Iterate over location results for location_result in location_results: location_result_json = location_result.raw # Extract data only from those location results that we are interested in if any(location_result_type in location_result_json["types"] for location_result_type in location_result_types): # Extract location result type location_result_type = location_result_json["types"][0] # Iterate over address components for address_component in location_result_json["address_components"]: # Extract data only from the matched location result type if location_result_type in address_component["types"]: # Extract the name of the administrative area level 1 if location_result_type == location_result_types[0]: area_levels_dict["AreaLevel1"] = address_component["long_name"] # Extract the name of the administrative area level 2 if location_result_type == location_result_types[1] and approximateLocation == "No": area_levels_dict["AreaLevel2"] = address_component["long_name"] # Extract the name of the administrative area level 3 if location_result_type == location_result_types[2] and approximateLocation == "No": area_levels_dict["AreaLevel3"] = address_component["long_name"] return area_levels_dict def get_area_levels(row): # Area levels template area_levels_dict = { column_names[0]: "", column_names[1]: "", column_names[2]: "" } geocoded = False while geocoded is not True: # Reverse geocode coordinates try: location_results = geolocator.reverse(row["Coordinates"], language="fi") area_levels_dict = extract_administrative_area_levels(location_results, row["ApproximateCoordinates"], area_levels_dict) geocoded = True except GeocoderQueryError as gqe: print("Geocoding error with {}: {}".format(row.name, str(gqe))) print("Skipping {}".format(row.name)) geocoded = True except GeocoderQuotaExceeded as gqe: print("Geocoding quota exceeded: {}".format(str(gqe))) print("Backing off for a bit") time.sleep(30 * 60) # sleep for 30 minutes print("Back in action") return pd.Series(area_levels_dict) boulder_area_levels_df = boulder_details_df[["Coordinates", "ApproximateCoordinates"]].apply(get_area_levels, axis=1) return pd.merge(boulder_details_df, boulder_area_levels_df, how="outer", left_index=True, right_index=True) boulder_details_df = add_columns_arealevel1_arealevel2_and_arealevel3() boulder_details_df.head()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
<br> Create boulders final file boulders_-_final.csv.
def create_boulders_final(): boulder_details_reset_df = boulder_details_df.reset_index() boulder_details_reset_df = boulder_details_reset_df[["Votes", "VotedBy", "Name", "Grade", "GradeNumeric", "InFinland", "AreaLevel1", "AreaLevel2", "AreaLevel3", "Crag", "ApproximateCoordinates", "Coordinates", "Latitude", "Longitude", "Url27crags", "UrlVideo", "UrlStory", "MainProfiles", "MainHoldTypes", "MainSkillsNeeded", "Adjectives", "Comments"]] boulder_details_reset_df = boulder_details_reset_df.sort_values(by=["Votes", "GradeNumeric", "Name"], ascending=[False, False, True]) boulder_details_reset_df.to_csv("data/boulders_-_final.csv", index=False) create_boulders_final()
2018/data_wrangling/Create Boulders Final.ipynb
mplaine/www.laatukiikut.fi
mit
Exercise #1: What are the most predictive features? Determine the correlation of each feature with the label. You may find the corr function useful. Train Gradient Boosting model Training steps to build an ensemble of $K$ estimators: 1. At $k=0$, build the base model: $\hat{y}_{0} = base\_predicted$. 2. Compute the residuals $r_{k,i} = y_{i} - \hat{y}_{k,i}$ for each of the $n$ training examples. 3. Train a new model, fitting on the residuals $r$. We will call the predictions from this model $e_{k}\_predicted$. 4. Update the model predictions at step $k$ by adding the residual prediction to the current predictions: $\hat{y}_{k} = \hat{y}_{k-1} + e_{k}\_predicted$. 5. Repeat steps 2 - 4 $K$ times. In summary, the goal is to build $K$ estimators that learn to predict the residuals of the prior model; thus we are learning to "correct" the predictions up until this point. <br> $\hat{y}_{K} = base\_predicted + \sum_{j=1}^{K} e_{j}\_predicted$ Build base model Exercise #2: Make an initial prediction using the BaseModel class -- configure the predict method to predict the training mean.
class BaseModel(object): """Initial model that predicts mean of train set.""" def __init__(self, y_train): self.train_mean = # TODO def predict(self, x): """Return train mean for every prediction.""" return # TODO def compute_residuals(label, pred): """Compute difference of labels and predictions. When using mean squared error loss function, the residual indicates the negative gradient of the loss function in prediction space. Thus by fitting the residuals, we are performing gradient descent in prediction space. See for more detail: https://explained.ai/gradient-boosting/L2-loss.html """ return label - pred def compute_rmse(x): return np.sqrt(np.mean(np.square(x))) # Build a base model that predicts the mean of the training set. base_model = BaseModel(y_train) test_pred = base_model.predict(x_test) test_residuals = compute_residuals(y_test, test_pred) compute_rmse(test_residuals)
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
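For reference, one possible completion of the exercise above simply stores the mean of y_train and repeats it for every row. This is a hedged sketch, not the official lab solution:

class BaseModel(object):
    """Initial model that predicts the mean of the train set."""

    def __init__(self, y_train):
        # Store the training-set mean once.
        self.train_mean = np.mean(y_train)

    def predict(self, x):
        """Return the train mean for every prediction."""
        return np.full(len(x), self.train_mean)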
Train Boosting model Returning to boosting, let's use our very first base model as our initial prediction. We'll then perform subsequent boosting iterations to improve upon this model. create_weak_learner
def create_weak_learner(**tree_params): """Initialize a Decision Tree model.""" model = DecisionTreeRegressor(**tree_params) return model
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Make initial prediction. Exercise #3: Update the prediction on the training set (train_pred) and on the testing set (test_pred) using the weak learner that predicts the residuals.
base_model = BaseModel(y_train) # Training parameters. tree_params = { 'max_depth': 1, 'criterion': 'mse', 'random_state': 123 } N_ESTIMATORS = 50 BOOSTING_LR = 0.1 # Initial prediction, residuals. train_pred = base_model.predict(x_train) test_pred = base_model.predict(x_test) train_residuals = compute_residuals(y_train, train_pred) test_residuals = compute_residuals(y_test, test_pred) # Boosting. train_rmse, test_rmse = [], [] for _ in range(0, N_ESTIMATORS): train_rmse.append(compute_rmse(train_residuals)) test_rmse.append(compute_rmse(test_residuals)) # Train weak learner. model = create_weak_learner(**tree_params) model.fit(x_train, train_residuals) # Boosting magic happens here: add the residual prediction to correct # the prior model. grad_approx = # TODO train_pred += # TODO train_residuals = compute_residuals(y_train, train_pred) # Keep track of residuals on validation set. grad_approx = # TODO test_pred += # TODO test_residuals = compute_residuals(y_test, test_pred)
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
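One way to fill in the TODOs above is to treat the weak learner's prediction of the residuals as an approximate negative gradient and take a small step, scaled by BOOSTING_LR, in that direction. A hedged sketch of the inner-loop updates (not the official lab solution):

# Inside the boosting loop, after model.fit(x_train, train_residuals):
grad_approx = BOOSTING_LR * model.predict(x_train)  # shrunken residual prediction
train_pred += grad_approx
train_residuals = compute_residuals(y_train, train_pred)

# Keep track of residuals on the held-out set.
grad_approx = BOOSTING_LR * model.predict(x_test)
test_pred += grad_approx
test_residuals = compute_residuals(y_test, test_pred)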
Interpret results Can you improve the model results?
plt.figure() plt.plot(train_rmse, label='train error') plt.plot(test_rmse, label='test error') plt.ylabel('rmse', size=20) plt.xlabel('Boosting Iterations', size=20); plt.legend()
courses/machine_learning/deepdive/supplemental_gradient_boosting/labs/a_boosting_from_scratch.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
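To explore whether the results can be improved, the most obvious knobs are the depth of the weak learners, the number of estimators and the learning rate. A quick, hedged comparison against scikit-learn's built-in gradient boosting, assuming the same x_train/y_train/x_test/y_test arrays as above, might look like:

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Same idea as the hand-rolled loop above, but with sklearn handling the boosting.
gbr = GradientBoostingRegressor(n_estimators=N_ESTIMATORS,
                                learning_rate=BOOSTING_LR,
                                max_depth=2,  # slightly deeper weak learners than max_depth=1
                                random_state=123)
gbr.fit(x_train, y_train)
print('sklearn GBM test RMSE:',
      np.sqrt(mean_squared_error(y_test, gbr.predict(x_test))))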
We need to pick some first guess parameters. Because we're lazy we'll just start by setting them all to 1:
log_a = 0.0;log_b = 0.0; log_c = 0.0; log_P = 0.0 kernel = CustomTerm(log_a, log_b, log_c, log_P) gp = celerite.GP(kernel, mean=0.0) yerr = 0.000001*np.ones(time.shape) gp.compute(time,yerr) print("Initial log-likelihood: {0}".format(gp.log_likelihood(value))) t = np.arange(np.min(time),np.max(time),0.1) # calculate expectation and variance at each point: mu, cov = gp.predict(value, t) std = np.sqrt(np.diag(cov)) ax = pl.subplot(111) pl.plot(t,mu) ax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True) pl.scatter(time,value,s=2) pl.axis([0.,60.,-1.,1.]) pl.ylabel("Relative flux [ppt]") pl.xlabel("Time [days]") pl.show()
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
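Two pieces referenced here are defined in cells that are not shown in this excerpt: the CustomTerm kernel itself and the scipy optimisation that produces the results.x used below. The following is a hedged reconstruction only — it follows the standard celerite custom-term pattern for a quasi-periodic kernel and a plain L-BFGS-B fit of the negative log-likelihood, and the notebook's actual cells may differ in detail:

import numpy as np
from scipy.optimize import minimize
from celerite import terms

class CustomTerm(terms.Term):
    # Quasi-periodic kernel of the form
    #   k(tau) = a/(2+b) * exp(-c*tau) * [cos(2*pi*tau/P) + (1+b)]
    # following the celerite documentation's rotation-term example (assumed, not copied from the notebook).
    parameter_names = ("log_a", "log_b", "log_c", "log_P")

    def get_real_coefficients(self, params):
        log_a, log_b, log_c, log_P = params
        b = np.exp(log_b)
        return (np.exp(log_a) * (1.0 + b) / (2.0 + b), np.exp(log_c))

    def get_complex_coefficients(self, params):
        log_a, log_b, log_c, log_P = params
        b = np.exp(log_b)
        return (np.exp(log_a) / (2.0 + b), 0.0,
                np.exp(log_c), 2.0 * np.pi * np.exp(-log_P))

# Maximum-likelihood fit that would produce the results.x used in the next cell.
def neg_log_like(params, y, gp):
    gp.set_parameter_vector(params)
    return -gp.log_likelihood(y)

results = minimize(neg_log_like, gp.get_parameter_vector(),
                   method="L-BFGS-B", args=(value, gp))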
The key parameter here is the period, which is the fourth number along. We expect this to be about 3.9 and... we're getting 4.24, so not a million miles off. From the paper: This star has a published rotation period of 3.88 ± 0.58 days, measured using traditional periodogram and autocorrelation function approaches applied to Kepler data from Quarters 0–16 (Mathur et al. 2014), covering about four years. Let's now pass these optimised parameters back to celerite and recompute our prediction:
# pass the parameters to the celerite kernel: gp.set_parameter_vector(results.x) t = np.arange(np.min(time),np.max(time),0.1) # calculate expectation and variance at each point: mu, cov = gp.predict(value, t) std = np.sqrt(np.diag(cov)) ax = pl.subplot(111) pl.plot(t,mu) ax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True) pl.scatter(time,value,s=2) pl.axis([0.,60.,-1.,1.]) pl.ylabel("Relative flux [ppt]") pl.xlabel("Time [days]") pl.show()
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
First we need to define a log(likelihood). We'll use the log(likelihood) implemented in the celerite library, which implements: $$ \ln L = -\frac{1}{2}(y - \mu)^{\rm T} C^{-1}(y - \mu) - \frac{1}{2}\ln |C| - \frac{N}{2}\ln 2\pi $$ (see Eq. 5 in https://arxiv.org/pdf/1706.05459.pdf).
# set the loglikelihood: def lnlike(p, x, y): lnB = np.log(p[0]) lnC = p[1] lnL = np.log(p[2]) lnP = np.log(p[3]) p0 = np.array([lnB,lnC,lnL,lnP]) # update kernel parameters: gp.set_parameter_vector(p0) # calculate the likelihood: ll = gp.log_likelihood(y) # return -inf (not a large positive value) for non-finite likelihoods so the sampler rejects them: return ll if np.isfinite(ll) else -np.inf
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
We also need to specify our parameter priors. Here we'll just use uniform logarithmic priors. The ranges are the same as specified in Table 3 of https://arxiv.org/pdf/1703.09710.pdf. <img src="table3.png">
# set the logprior def lnprior(p): # These ranges are taken from Table 4 # of https://arxiv.org/pdf/1703.09710.pdf lnB = np.log(p[0]) lnC = p[1] lnL = np.log(p[2]) lnP = np.log(p[3]) # really crappy prior: if (-10<lnB<0.) and (-5.<lnC<5.) and (-5.<lnL<1.5) and (-3.<lnP<5.): return 0.0 return -np.inf #return gp.log_prior()
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
The paper then says: initialize 32 walkers by sampling from an isotropic Gaussian with a standard deviation of $10^{−5}$ centered on the MAP parameters. So, let's do that:
# put all the data into a single array: data = (x_train,y_train) # set your initial guess parameters # as the output from the scipy optimiser # remember celerite keeps these in ln() form! # C looks like it's going to be a very small # value - so we will sample from ln(C): # A, lnC, L, P p = gp.get_parameter_vector() initial = np.array([np.exp(p[0]),p[1],np.exp(p[2]),np.exp(p[3])]) print("Initial guesses: ",initial) # set the dimension of the prior volume # (i.e. how many parameters do you have?) ndim = len(initial) print("Number of parameters: ",ndim) # The number of walkers needs to be more than twice # the dimension of your parameter space. nwalkers = 32 # perturb your initial guess parameters very slightly (10^-5) # to get your starting values: p0 = [np.array(initial) + 1e-5 * np.random.randn(ndim) for i in range(nwalkers)]
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
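The sampler object used in the burn-in below is constructed in a cell not shown in this excerpt. A hedged sketch of the usual emcee setup, combining the lnprior and lnlike functions defined above, would be:

import emcee

# Log-posterior = log-prior + log-likelihood.
def lnprob(p, x, y):
    lp = lnprior(p)
    if not np.isfinite(lp):
        return -np.inf
    return lp + lnlike(p, x, y)

# 32 walkers exploring the 4-dimensional parameter space; data = (x_train, y_train) from above.
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=data)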
The paper says: We run 500 steps of burn-in, followed by 5000 steps of MCMC using emcee. First let's run the burn-in:
# run a few samples as a burn-in: print("Running burn-in") p0, lnp, _ = sampler.run_mcmc(p0, 500) sampler.reset()
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
Now let's run the production MCMC:
# take the highest likelihood point from the burn-in as a # starting point and now begin your production run: print("Running production") p = p0[np.argmax(lnp)] p0 = [p + 1e-5 * np.random.randn(ndim) for i in range(nwalkers)] p0, _, _ = sampler.run_mcmc(p0, 5000) print("Finished") import acor # calculate the convergence time of our # MCMC chains: samples = sampler.flatchain s2 = np.ndarray.transpose(samples) tau, mean, sigma = acor.acor(s2) print("Convergence time from acor: ", tau) print("Number of independent samples:", 5000.-(20.*tau)) # get rid of the samples that were taken # before convergence: delta = int(20*tau) samples = sampler.flatchain[delta:,:] samples[:, 2] = np.exp(samples[:, 2]) b_mcmc, c_mcmc, l_mcmc, p_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(samples, [16, 50, 84], axis=0))) # specify prediction points: t = np.arange(np.min(time),np.max(time),0.1) # update the kernel hyper-parameters using the MCMC medians (hp), not the stale burn-in point p: hp = np.array([b_mcmc[0], c_mcmc[0], l_mcmc[0], p_mcmc[0]]) lnB = np.log(hp[0]) lnC = hp[1] lnL = np.log(hp[2]) lnP = np.log(hp[3]) p0 = np.array([lnB,lnC,lnL,lnP]) gp.set_parameter_vector(p0) print(hp) # calculate expectation and variance at each point: mu, cov = gp.predict(value, t) std = np.sqrt(np.diag(cov)) ax = pl.subplot(111) pl.plot(t,mu) ax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True) pl.scatter(time,value,s=2) pl.axis([0.,60.,-1.,1.]) pl.ylabel("Relative flux [ppt]") pl.xlabel("Time [days]") pl.show() import corner # Plot it. figure = corner.corner(samples, labels=[r"$B$", r"$lnC$", r"$L$", r"$P$"], quantiles=[0.16,0.5,0.84], #levels=[0.39,0.86,0.99], levels=[0.68,0.95,0.99], title="KIC 1430163", show_titles=True, title_args={"fontsize": 12})
TIARA/Tutorial/KeplerLightCurveCelerite.ipynb
as595/AllOfYourBases
gpl-3.0
Activate the Economics and Reliability themes
names = theme_menu.get_available(new_core, new_project) message = html_list(names) HTML(message) theme_menu.activate(new_core, new_project, "Economics") # Here we are expecting Hydrodynamics assert _get_connector(new_project, "modules").get_current_interface_name(new_core, new_project) == "Hydrodynamics" from aneris.utilities.analysis import get_variable_network, count_atomic_variables req_inputs, opt_inputs, outputs, req_inter, opt_inter = get_variable_network(new_core.control, new_project.get_pool(), new_project.get_simulation(), "modules") req_inputs[req_inputs.Type=="Shared"].reset_index() shared_req_inputs = req_inputs[req_inputs.Type=="Shared"] len(shared_req_inputs["Identifier"].unique()) count_atomic_variables(shared_req_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) opt_inputs[opt_inputs.Type=="Shared"].reset_index() shared_opt_inputs = opt_inputs[opt_inputs.Type=="Shared"] len(shared_opt_inputs["Identifier"].unique()) count_atomic_variables(shared_opt_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) req_inter len(req_inter["Identifier"].unique()) count_atomic_variables(req_inter["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) opt_inter len(opt_inter["Identifier"].unique()) count_atomic_variables(opt_inter["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) hyrdo_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Hydrodynamics'] len(hyrdo_req_inputs["Identifier"].unique()) count_atomic_variables(hyrdo_req_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) hyrdo_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Hydrodynamics'] len(hyrdo_opt_inputs["Identifier"].unique()) count_atomic_variables(hyrdo_opt_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) electro_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Electrical Sub-Systems'] len(electro_req_inputs["Identifier"].unique()) count_atomic_variables(electro_req_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) electro_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 'Electrical Sub-Systems'] len(electro_opt_inputs["Identifier"].unique()) count_atomic_variables(electro_opt_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) moorings_req_inputs = req_inputs.loc[req_inputs['Interface'] == 'Mooring and Foundations'] len(moorings_req_inputs["Identifier"].unique()) count_atomic_variables(moorings_req_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) moorings_opt_inputs = opt_inputs.loc[opt_inputs['Interface'] == 
'Mooring and Foundations'] len(moorings_opt_inputs["Identifier"].unique()) count_atomic_variables(moorings_opt_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"]) total_req_inputs = req_inputs.loc[req_inputs['Interface'] != 'Shared'] len(total_req_inputs["Identifier"].unique()) count_atomic_variables(total_req_inputs["Identifier"].unique(), new_core.data_catalog, "labels", ["TableData", "TableDataColumn", "IndexTable", "LineTable", "LineTableColumn", "TimeTable", "TimeTableColumn"])
notebooks/DTOcean Floating Wave Scenario Analysis.ipynb
DTOcean/dtocean-core
gpl-3.0
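The per-interface counting above repeats the same unique/atomic-variable calls for every module interface. A small, hedged helper that loops over the interface names — assuming the same req_inputs/opt_inputs DataFrames and the count_atomic_variables signature used above — keeps the analysis identical while removing the duplication:

TABLE_TYPES = ["TableData", "TableDataColumn", "IndexTable", "LineTable",
               "LineTableColumn", "TimeTable", "TimeTableColumn"]

def summarise_interface_inputs(inputs_df, interface):
    """Count unique and atomic variables for one module interface."""
    ids = inputs_df.loc[inputs_df["Interface"] == interface, "Identifier"].unique()
    atomic = count_atomic_variables(ids, new_core.data_catalog, "labels", TABLE_TYPES)
    return len(ids), atomic

for interface in ["Hydrodynamics", "Electrical Sub-Systems", "Mooring and Foundations"]:
    n_req, atomic_req = summarise_interface_inputs(req_inputs, interface)
    n_opt, atomic_opt = summarise_interface_inputs(opt_inputs, interface)
    print(interface, n_req, atomic_req, n_opt, atomic_opt)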
Objective Build a model to predict blighted buildings based on real data from data.detroitmi.gov, as provided by Coursera. Building demolition is very important for the city to turn around and revive its economy, but it is no easy task. Accurate predictions can provide guidance on potentially blighted buildings and help avoid complications at early stages. Building List The buildings were defined as described below: Building sizes were estimated using parcel info downloaded from data.detroitmi.gov. Details can be found in this notebook. An event table was constructed from the 4 files (detroit-311.csv, detroit-blight-violations.csv, detroit-crime.csv, and detroit-demolition-permits.tsv) using their coordinates, as shown here. Buildings were defined using these coordinates with an estimated building size (the median of all parcels); each building was represented as a rectangle of the same size.
# The resulted buildings: Image("./data/buildings_distribution.png")
Final_Report.ipynb
cyang019/blight_fight
mit
Features Three kinds of incident counts (311 calls, blight violations, and crimes) plus normalized coordinates were used in the end. I also tried to generate more features by differentiating each kind of crime or violation in this notebook; however, these differentiated features led to smaller AUC scores. Data The buildings were down-sampled to contain the same number of blighted and non-blighted buildings. Train and test sets were split 80:20. During training with xgboost, the train data was further split 80:20 into train and evaluation sets for monitoring. Model A Gradient Boosted Tree model using XGBoost achieved an AUC score of 0.85 on the evaluation data set:
Image('./data/train_process.png')
Final_Report.ipynb
cyang019/blight_fight
mit
This model resulted in an AUC score of 0.858 on test data. Feature importances are shown below:
Image('./data/feature_f_scores.png')
Final_Report.ipynb
cyang019/blight_fight
mit
Locations were the most important features in this model. Although I tried using more features generated by differentiating different kinds of crimes or violations, the AUC scores did not improve. Feature importance can also be viewed using a tree representation:
Image('./data/bst_tree.png')
Final_Report.ipynb
cyang019/blight_fight
mit
Since overfitting was observed during training, I also tried to reduce the model's variance by including more non-blighted buildings, sampling multiple times with replacement (bagging). A final AUC score of 0.8625 was achieved. The resulting ROC curve on the test data is shown below:
Image('./data/ROC_Curve_combined.png')
Final_Report.ipynb
cyang019/blight_fight
mit
Challenge: You have a couple of airports and want to bring them into a numerical representation to enable processing with neural networks. How do you do that?
# https://en.wikipedia.org/wiki/List_of_busiest_airports_by_passenger_traffic airports = { 'HAM': ["germany europe regional", 18], 'TXL': ["germany europe regional", 21], 'FRA': ["germany europe hub", 70], 'MUC': ["germany europe hub", 46], 'CPH': ["denmark capital scandinavia europe hub", 29], 'ARN': ["sweden capital scandinavia europe regional", 27], 'BGO': ["norway scandinavia europe regional", 6], 'OSL': ["norway capital scandinavia europe regional", 29], 'LHR': ["gb capital europe hub", 80], 'CDG': ["france capital europe hub", 72], 'SFO': ["usa california regional", 58], 'IAD': ["usa capital regional", 21], 'AUS': ["usa texas regional", 16], 'EWR': ["usa new_jersey hub", 46], 'JFK': ["usa new_york hub", 62], 'ATL': ["usa georgia hub", 110], 'STL': ["usa missouri regional", 16], 'LAX': ["usa california hub", 88] } airport_names = list(airports.keys()) airport_numbers = list(range(0, len(airports))) airport_to_number = dict(zip(airport_names, airport_numbers)) number_to_airport = dict(zip(airport_numbers, airport_names)) airport_descriptions = [value[0] for value in list(airports.values())] airport_passengers = [value[1] for value in list(airports.values())]
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
Encode Texts in multi-hot frequency
tokenizer = tf.keras.preprocessing.text.Tokenizer() tokenizer.fit_on_texts(airport_descriptions) description_matrix = tokenizer.texts_to_matrix(airport_descriptions, mode='freq') airport_count, word_count = description_matrix.shape dictionary_size = word_count airport_count, word_count x = airport_numbers Y = description_matrix
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
2d embeddings
%%time import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras from tensorflow.keras.layers import Flatten, GlobalAveragePooling1D, Dense, LSTM, GRU, SimpleRNN, Bidirectional, Embedding from tensorflow.keras.models import Sequential, Model from tensorflow.keras.initializers import glorot_normal seed = 3 input_dim = len(airports) embedding_dim = 2 model = Sequential() model.add(Embedding(name='embedding', input_dim=input_dim, output_dim=embedding_dim, input_length=1, embeddings_initializer=glorot_normal(seed=seed))) model.add(GlobalAveragePooling1D()) model.add(Dense(units=50, activation='relu', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed))) model.add(Dense(units=dictionary_size, name='output', activation='softmax', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed))) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) EPOCHS=1000 BATCH_SIZE=2 %time history = model.fit(x, Y, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=0) plt.yscale('log') plt.plot(history.history['loss']) loss, accuracy = model.evaluate(x, Y) loss, accuracy embedding_layer = model.get_layer('embedding') embedding_model = Model(inputs=model.input, outputs=embedding_layer.output) embeddings_2d = embedding_model.predict(airport_numbers).reshape(-1, 2) # for printing only # plt.figure(dpi=600) plt.axis('off') plt.scatter(embeddings_2d[:, 0], embeddings_2d[:, 1]) for name, x_pos, y_pos in zip(airport_names, embeddings_2d[:, 0], embeddings_2d[:, 1]): print(name, (x_pos, y_pos)) plt.annotate(name, (x_pos, y_pos))
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
1d embeddings
seed = 3 input_dim = len(airports) embedding_dim = 1 model = Sequential() model.add(Embedding(name='embedding', input_dim=input_dim, output_dim=embedding_dim, input_length=1, embeddings_initializer=glorot_normal(seed=seed))) model.add(GlobalAveragePooling1D()) model.add(Dense(units=50, activation='relu', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed))) model.add(Dense(units=dictionary_size, name='output', activation='softmax', bias_initializer='zeros', kernel_initializer=glorot_normal(seed=seed))) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc']) EPOCHS=1500 BATCH_SIZE=2 %time history = model.fit(x, Y, epochs=EPOCHS, batch_size=BATCH_SIZE, verbose=0) plt.yscale('log') plt.plot(history.history['loss']) import numpy as np embedding_layer = model.get_layer('embedding') embedding_model = Model(inputs=model.input, outputs=embedding_layer.output) embeddings_1d = embedding_model.predict(airport_numbers).reshape(-1) # for printing only # plt.figure(figsize=(20,5)) # plt.figure(dpi=600) plt.axis('off') plt.scatter(embeddings_1d, np.zeros(len(embeddings_1d))) for name, x_pos in zip(airport_names, embeddings_1d): print(name, x_pos) plt.annotate(name, (x_pos, 0), rotation=80)
notebooks/2019_tf/embeddings-viz.ipynb
DJCordhose/ai
mit
What country are most billionaires from? For the top ones, how many billionaires per billion people?
recent = df[df['year'] == 2014] #recent is a variable, a variable can be assigned to different things, here it was assigned to a data frame recent.head() recent.columns.values
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
Where are all the billionaires from?
recent['countrycode'].value_counts() # value_counts counts how many times each country appears recent.sort_values(by='networthusbillion', ascending=False).head(10) # sort_values reorders the data based on the by column
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
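The second half of the question — billionaires per billion people — is not answered by the raw counts above. A hedged sketch: divide the value_counts output by country population. The country codes and the population figures below (in billions, approximate 2014 values entered by hand) are assumptions, not values taken from the dataset, so adjust them to match the codes that actually appear in the countrycode column:

# Approximate 2014 populations in billions (hand-entered estimates, not from the dataset).
populations = pd.Series({
    'USA': 0.319,
    'CHN': 1.364,
    'DEU': 0.081,
    'RUS': 0.144,
    'GBR': 0.065,
})

billionaire_counts = recent['countrycode'].value_counts()
per_billion = (billionaire_counts.reindex(populations.index) / populations).sort_values(ascending=False)
per_billion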
What's the average wealth of a billionaire? Male? Female?
recent['networthusbillion'].describe() # the average wealth of a billionaire is $3.9 billion recent.groupby('gender')['networthusbillion'].describe() # groupby groups the rows by gender; describe() summarizes net worth for each group # female mean is 3.920556 billion # male mean is 3.902716 billion
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
Who is the poorest billionaire? Who are the top 10 poorest billionaires?
recent.sort_values(by='rank',ascending=False).head(10)
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit