Generate sinusoids in two spatially distant labels
# The known signal is all zero-s off of the two labels of interest
signal = np.zeros((n_labels, T))
idx = label_names.index('inferiorparietal-lh')
signal[idx, :] = 1e-7 * np.sin(5 * 2 * np.pi * times)
idx = label_names.index('rostralmiddlefrontal-rh')
signal[idx, :] = 1e-7 * np.sin(7 * 2 * np.pi * times)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Find the center vertices in source space of each label We want the known signal in each label to only be active at the center. We create a mask for each label that is 1 at the center vertex and 0 at all other vertices in the label. This mask is then used when simulating source-space data.
hemi_to_ind = {'lh': 0, 'rh': 1}
for i, label in enumerate(labels):
    # The `center_of_mass` function needs labels to have values.
    labels[i].values.fill(1.)

    # Restrict the eligible vertices to be those on the surface under
    # consideration and within the label.
    surf_vertices = fwd['src'][hemi_to_ind[label.hemi]]['vertno']
    restrict_verts = np.intersect1d(surf_vertices, label.vertices)
    com = labels[i].center_of_mass(subject='sample',
                                   subjects_dir=subjects_dir,
                                   restrict_vertices=restrict_verts,
                                   surf='white')

    # Convert the center of vertex index from surface vertex list to Label's
    # vertex list.
    cent_idx = np.where(label.vertices == com)[0][0]

    # Create a mask with 1 at center vertex and zeros elsewhere.
    labels[i].values.fill(0.)
    labels[i].values[cent_idx] = 1.
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Create source-space data with known signals Put known signals onto surface vertices using the array of signals and the label masks (stored in labels[i].values).
stc_gen = simulate_stc(fwd['src'], labels, signal, times[0], dt, value_fun=lambda x: x)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot original signals Note that the original signals are highly concentrated (point) sources.
kwargs = dict(subjects_dir=subjects_dir, hemi='split', smoothing_steps=4,
              time_unit='s', initial_time=0.05, size=1200,
              views=['lat', 'med'])
clim = dict(kind='value', pos_lims=[1e-9, 1e-8, 1e-7])
figs = [mlab.figure(1), mlab.figure(2), mlab.figure(3), mlab.figure(4)]
brain_gen = stc_gen.plot(clim=clim, figure=figs, **kwargs)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Simulate sensor-space signals Use the forward solution and add Gaussian noise to simulate sensor-space (evoked) data from the known source-space signals. The amount of noise is controlled by nave (higher values imply less noise).
evoked_gen = simulate_evoked(fwd, stc_gen, evoked.info, cov, nave,
                             random_state=seed)

# Map the simulated sensor-space data to source-space using the inverse
# operator.
stc_inv = apply_inverse(evoked_gen, inv_op, lambda2, method=method)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot the point-spread of the corrupted signal. Notice that after applying the forward and inverse operators to the known point sources, the sources have spread across the source space. This spread is a consequence of the minimum norm solution: the signal leaks to nearby vertices with similar orientations, so it ends up crossing the sulci and gyri.
figs = [mlab.figure(5), mlab.figure(6), mlab.figure(7), mlab.figure(8)]
brain_inv = stc_inv.plot(figure=figs, **kwargs)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Current seed values. Scripts used to generate a new polytrope for DMESTAR models rely on a piece-wise function to generate an appropriate combination of $T_{\rm eff}$ and luminosity for a model based on the requested stellar mass and solar composition. That piece-wise function is

\begin{align}
\log(T) &= 3.64 & M &\ge 3.9 \\
\log(L) &= 0.2\cdot (M - 5.0) + 2.6 & & \\
\log(T) &= -0.028\cdot M + 3.875 & 3.9 > M &\ge 3.0 \\
\log(L) &= 0.55 \cdot M + 0.1 & & \\
\log(T) &= 0.039\cdot M + 3.5765 & 3.0 > M &\ge 1.5 \\
\log(L) &= 1.7 & & \\
\log(T) &= 0.039\cdot M + 3.5765 & 1.5 > M &\ge 0.23 \\
\log(L) &= 0.85\cdot M + 0.4 & & \\
\log(T) &= 0.614\cdot M + 3.3863 & 0.23 &> M \\
\log(L) &= -0.16877\cdot M - 0.117637 & &
\end{align}

While models with masses below $0.23\,M_{\odot}$ are found to converge, the greatest issues occur right in the vicinity of the final piece-wise condition. We can view this graphically,
fig, ax = plt.subplots(2, 1, figsize=(8, 8))

masses = np.arange(0.08, 5.0, 0.02)

# compute and plot temperature relationship
p1 = [3.64 for m in masses if m >= 3.9]
p2 = [-0.028*m + 3.875 for m in masses if 3.9 > m >= 3.0]
p3 = [0.039*m + 3.5765 for m in masses if 3.0 > m >= 0.23]
p4 = [0.614*m + 3.3863 for m in masses if m < 0.23]
tr = p4 + p3 + p2 + p1

ax[0].set_xlabel("initial mass [Msun]")
ax[0].set_ylabel("log(T / K)")
ax[0].plot(masses, tr, '-', c='#dc143c', lw=3)

# compute and plot luminosity relationship
p1 = [0.2*(m - 5.0) + 2.6 for m in masses if m >= 3.9]
p2 = [0.55*m + 0.1 for m in masses if 3.9 > m >= 3.0]
p3 = [1.7 for m in masses if 3.0 > m >= 1.5]
p4 = [0.85*m + 0.4 for m in masses if 1.5 > m >= 0.23]
p5 = [-0.16877*m - 0.117637 for m in masses if m < 0.23]
lr = p5 + p4 + p3 + p2 + p1

ax[1].set_xlabel("initial mass [Msun]")
ax[1].set_ylabel("log(L / Lsun)")
ax[1].plot(masses, lr, '-', c='#dc143c', lw=3)
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
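As an aside, the same piece-wise relation can be written as a small helper function that makes the mass boundaries explicit. This is our own illustrative re-expression of the equations above, not code taken from DMESTAR:
import numpy as np

def seed_values(m):
    """Return (log T, log L) seed values for a stellar mass m (in Msun),
    following the piece-wise relation quoted above."""
    if m >= 3.9:
        return 3.64, 0.2*(m - 5.0) + 2.6
    elif m >= 3.0:
        return -0.028*m + 3.875, 0.55*m + 0.1
    elif m >= 1.5:
        return 0.039*m + 3.5765, 1.7
    elif m >= 0.23:
        return 0.039*m + 3.5765, 0.85*m + 0.4
    else:
        return 0.614*m + 3.3863, -0.16877*m - 0.117637

print(seed_values(0.1))  # near the problematic low-mass boundary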
Relaxed model values We can compare the relationship(s) quoted above with model values for temperature and luminosity after the model has relaxed to a stable configuration. This takes only a couple time steps to achieve, so we will look at the model relationship during the third time step for all models with masses between 0.08 and 5.0 Msun. Models are taken from a recent study where we used the most up-to-date version of the Dartmouth models for young stars (Feiden 2016).
model_directory = "../../papers/MagneticUpperSco/models/trk/std/"

# get all file names
from os import listdir
all_fnames = listdir(model_directory)

# sort out only those file names that end in .trk
fnames = [f for f in all_fnames if f[-4:] == ".trk"]

# sort numerically
fnames = sorted(fnames)
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
To select which model time step is most representative of a relaxed model, we can step through the first 50 iterations to see whether there are any noticeable jumps in model properties.
fig, ax = plt.subplots(2, 1, figsize=(8, 8))

model_props = np.empty((len(fnames), 3))
for j in range(0, 50):
    for i, f in enumerate(fnames):
        model_props[i, 0] = float(f[1:5])/1000.0
        try:
            trk = np.genfromtxt(model_directory + f, usecols=(0, 1, 2, 3))
        except ValueError:
            model_props[i, 1] = 0.0  # temperature
            model_props[i, 2] = 0.0  # luminosity
            continue
        model_props[i, 1] = trk[j, 1]  # temperature
        model_props[i, 2] = trk[j, 3]  # luminosity
    ax[0].semilogx(model_props[:,0], model_props[:,1], '-', c='#008b8b', lw=3)
    ax[1].semilogx(model_props[:,0], model_props[:,2], '-', c='#008b8b', lw=3)
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
We can now iterate through these filenames and save the third timestep to an array.
model_props = np.empty((len(fnames), 3))
for i, f in enumerate(fnames):
    model_props[i, 0] = float(f[1:5])/1000.0
    try:
        trk = np.genfromtxt(model_directory + f, usecols=(0, 1, 2, 3))
    except ValueError:
        model_props[i, 1] = 0.0  # temperature
        model_props[i, 2] = 0.0  # luminosity
        continue
    model_props[i, 1] = trk[1, 1]  # temperature
    model_props[i, 2] = trk[1, 3]  # luminosity
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
Plotting these two relations, we can compare against the function used to generate the polytrope seed model.
fig, ax = plt.subplots(2, 1, figsize=(8, 8))

masses = np.arange(0.08, 5.0, 0.02)

ax[0].set_xlabel("initial mass [Msun]")
ax[0].set_ylabel("log(T / K)")
ax[0].semilogx(model_props[:,0], model_props[:,1], '-', c='#008b8b', lw=3)
ax[0].semilogx(masses, tr, '-', c='#dc143c', lw=3)

ax[1].set_xlabel("initial mass [Msun]")
ax[1].set_ylabel("log(L / Lsun)")
ax[1].semilogx(model_props[:,0], model_props[:,2], '-', c='#008b8b', lw=3)
ax[1].semilogx(masses, lr, '-', c='#dc143c', lw=3)
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
There are clear discrepancies, particularly in the low-mass regime. We also note significant differences in the relaxed effective temperatures starting around 1.5 solar masses, while luminosities trace the relaxed models quite well down to approximately 0.4 Msun. Since these are logarithmic values, even modest-looking offsets translate into sizeable model adjustments at runtime, and it is quite likely that the required corrections will exceed the tolerances allowed for parameter adjustments during a model's evolution. Effective temperature
tp1 = np.array([line for line in model_props if line[0] < 0.23])
tp2 = np.array([line for line in model_props if 0.23 <= line[0] < 1.5])

tpoly1 = np.polyfit(tp1[:,0], tp1[:,1], 2)
tpoly2 = np.polyfit(tp2[:,0], tp2[:,1], 3)

fig, ax = plt.subplots(1, 1, figsize=(8, 4))

ax.semilogx(tp1[:,0], tp1[:,1], '-', c='#008b8b', lw=3)
ax.semilogx(tp2[:,0], tp2[:,1], '-', c='#008b8b', lw=3)
ax.semilogx(tp1[:,0], tpoly1[0]*tp1[:,0]**2 + tpoly1[1]*tp1[:,0] + tpoly1[2],
            '--', c='black', lw=3)
ax.semilogx(tp2[:,0], tpoly2[0]*tp2[:,0]**3 + tpoly2[1]*tp2[:,0]**2
            + tpoly2[2]*tp2[:,0] + tpoly2[3], '--', c='black', lw=3)
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
Luminosity. Above 1.5 Msun there appears to be very little deviation of the true model sequence from the initial seed model sequence, so we can leave this parameterization alone. Below 1.5 Msun, we can alter the shape of the relationship down to 0.23 Msun. In addition, we can prescribe a new shape to the relationship for objects with masses below 0.23 Msun.
p1 = np.array([line for line in model_props if line[0] < 0.23])
p2 = np.array([line for line in model_props if 0.23 <= line[0] < 1.5])

poly1 = np.polyfit(p1[:,0], p1[:,2], 2)
poly2 = np.polyfit(p2[:,0], p2[:,2], 2)

fig, ax = plt.subplots(1, 1, figsize=(8, 4))

ax.semilogx(p1[:,0], p1[:,2], '-', c='#008b8b', lw=3)
ax.semilogx(p2[:,0], p2[:,2], '-', c='#008b8b', lw=3)
ax.semilogx(p1[:,0], poly1[0]*p1[:,0]**2 + poly1[1]*p1[:,0] + poly1[2],
            '--', c='black', lw=3)
ax.semilogx(p2[:,0], poly2[0]*p2[:,0]**2 + poly2[1]*p2[:,0] + poly2[2],
            '--', c='black', lw=3)
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
Implementation These new fits better represent the relaxed models, but will they work when implemented as seed values?
print "log(T) and log(L) Coefficients for the lowest mass objects: \n", tpoly1, poly1 print "\n\nlog(T) and log(L) Coefficients for low mass objects: \n", tpoly2, poly2
Daily/20160910_low_mass_polytrope_lum.ipynb
gfeiden/Notebook
mit
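To illustrate how the new fits would be used as seed values, the sketch below evaluates them with np.polyval for a given mass. The helper name and the choice to keep the original relations above 1.5 Msun are our own assumptions, not the actual DMESTAR implementation:
def new_seed_values(m):
    # Below 0.23 Msun: quadratic fits for both log(T) and log(L) from above.
    if m < 0.23:
        return np.polyval(tpoly1, m), np.polyval(poly1, m)
    # 0.23 - 1.5 Msun: cubic log(T) fit and quadratic log(L) fit from above.
    elif m < 1.5:
        return np.polyval(tpoly2, m), np.polyval(poly2, m)
    # Above 1.5 Msun: keep the original piece-wise seed relations.
    elif m < 3.0:
        return 0.039*m + 3.5765, 1.7
    elif m < 3.9:
        return -0.028*m + 3.875, 0.55*m + 0.1
    else:
        return 3.64, 0.2*(m - 5.0) + 2.6

print(new_seed_values(0.1))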
Despite having different slopes at the starting position (filled circle), the Newton scheme performs only a single step (open circle) to move to the exact minimum, from any starting position, if the function is quadratic. This is even more useful because any smooth function close to its minimum looks like a quadratic function! That's a consequence of the Taylor expansion: the first-order term $g$ vanishes close to the minimum, so all deviations from the quadratic form are of order 3 or higher in $x-x_0$.

So, why doesn't everyone compute the Hessian for optimization? Well, it's typically expensive to compute a second derivative, and in $d$ dimensions (one for each parameter) the Hessian is a matrix with $d(d+1)/2$ independent elements. This is why there are several quasi-Newton methods, like BFGS, that accumulate information from previous iterations into an estimate of $H$.

Newton's Method for finding a root. Newton's method was initially designed to find the root of a function, not its minimum, so let's find out how these two are connected. The central idea is to approximate $f$ by its tangent at some initial position $x_0$: $$ y = f(x_0) + g(x_0) (x-x_0) $$ As the well-known animation on Wikipedia illustrates, the $x$-intercept of this line is closer to the root than the starting position $x_0$. That is, we need to solve the linear relation $$ f(x_0) + g(x_0) (x-x_0) = 0 $$ for $x$ to get the updated position; in 1D: $x_1 = x_0 - f(x_0)/g(x_0)$. Repeating this sequence $$ x_{t+1} = x_t - \frac{f(x_t)}{g(x_t)} $$ will yield a fixed point, which is the root of $f$ if one exists in the vicinity of $x_0$.
def newtons_method(f, df, x0, tol=1E-6):
    x_n = x0
    while abs(f(x_n)) > tol:
        x_n = x_n - f(x_n)/df(x_n)
    return x_n
day4/Newton-Method.ipynb
timothydmorton/usrp-sciprog
mit
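As an aside, the quasi-Newton methods mentioned above (e.g. BFGS) are readily available in scipy. A minimal usage sketch on a throwaway quadratic of our own choosing:
from scipy.optimize import minimize

# Minimize (x - 3)^2 - 9 starting from x = 0.1; BFGS builds up its own
# Hessian estimate from successive gradient evaluations.
res = minimize(lambda x: (x[0] - 3)**2 - 9, x0=[0.1], method='BFGS')
print(res.x, res.fun)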
Minimizing a function. As the maximum and minimum of a function are defined by $f'(x) = 0$, we can use Newton's method to find extremal points by applying it to the first derivative. That's the origin of the Newton update formula above: $$ x_{t+1} = x_t - H^{-1}(x_t) \ g(x_t) $$ Let's try this with a simple function with a known minimum:
# define a test function
def f(x):
    return (x-3)**2 - 9

def df(x):
    return 2*(x-3)

def df2(x):
    return 2.

root = newtons_method(f, df, x0=0.1)
print ("root {0}, f(root) = {1}".format(root, f(root)))

minimum = newtons_method(df, df2, x0=0.1)
print ("minimum {0}, f'(minimum) = {1}".format(minimum, df(minimum)))
day4/Newton-Method.ipynb
timothydmorton/usrp-sciprog
mit
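In more than one dimension the same update solves a linear system with the Hessian instead of dividing by a scalar derivative. A minimal sketch on a toy quadratic of our own choosing:
import numpy as np

def newton_min_nd(grad, hess, x0, tol=1e-6, maxiter=100):
    """Newton's method for minimization: x <- x - H^{-1} g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# f(x, y) = (x - 1)^2 + 2*(y + 2)^2 has its minimum at (1, -2).
grad = lambda v: np.array([2*(v[0] - 1), 4*(v[1] + 2)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_min_nd(grad, hess, [5.0, 5.0]))  # converges in a single step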
There is an important qualifier in the statement about fixed points: a root needs to exist in the vicinity of $x_0$! Let's see what happens if that's not the case:
def g(x):
    return (x-3)**2 + 1

dg = df  # same derivatives for f and g

newtons_method(g, dg, x0=0.1)
day4/Newton-Method.ipynb
timothydmorton/usrp-sciprog
mit
Unless you interrupt the execution of this cell (Tip: click "Interrupt Kernel"), newtons_method will never terminate and return a result, because g has no root. With a little more defensive programming we can make sure that the function terminates after a given number of iterations:
def newtons_method2(f, df, x0, tol=1E-6, maxiter=100000):
    x_n = x0
    for _ in range(maxiter):
        x_n = x_n - f(x_n)/df(x_n)
        if abs(f(x_n)) < tol:
            return x_n
    raise RuntimeError("Failed to find a root within {} iterations".format(maxiter))

newtons_method2(g, dg, x0=0.1)
day4/Newton-Method.ipynb
timothydmorton/usrp-sciprog
mit
Using scipy.optimize. scipy comes with a pretty feature-rich optimization package, for one- and multi-dimensional optimization. As so often, it's better (as in faster and more reliable) to leverage existing and battle-tested code than to try to implement it yourself. Exercise 1: Find the minimum of f with scipy.optimize.minimize_scalar. Look up the various arguments to the function in the documentation (either online or by typing scipy.optimize.minimize_scalar?) and choose appropriate inputs. When done, visualize your result to confirm its correctness. Exercise 2: To make this more interesting, we'll create a new multi-dimensional function that resembles f:
def h(x, p): return np.sum(np.abs(x-3)**p, axis=-1) - 9
day4/Newton-Method.ipynb
timothydmorton/usrp-sciprog
mit
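A quick check of how h behaves on a 2-D input (the numbers are chosen only for illustration): with axis=-1 the powers of |x - 3| are summed over the last axis, so a whole batch of points can be evaluated at once.
import numpy as np

pts = np.array([[3.0, 3.0],   # the minimum of h for any p
                [1.0, 5.0]])
print(h(pts, p=2))            # -> [-9., -1.]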
Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a> This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function. This version of the function does not include batch normalization.
""" DO NOT MODIFY THIS CELL """ def fully_connected(prev_layer, num_units): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) return layer
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on every third layer (when layer_depth % 3 == 0), and strides of 1x1 otherwise. We aren't bothering with pooling layers at all in this network. This version of the function does not include batch normalization.
""" DO NOT MODIFY THIS CELL """ def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu) return conv_layer
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
""" DO NOT MODIFY THIS CELL """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]]}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate)
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches. Add batch normalization We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things. TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) layer = tf.layers.batch_normalization(layer, training=is_training) layer = tf.nn.relu(layer) return layer
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) conv_layer = tf.nn.relu(conv_layer) return conv_layer
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) #boolean to hold if network is training is_training = tf.placeholder(tf.bool) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, is_training) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training:False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training:False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training:False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training:False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate)
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference. Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a> Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things. This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization. Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps. Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) gamma = tf.Variable(tf.ones([num_units])) beta = tf.Variable(tf.zeros([num_units])) pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False) pop_variance = tf.Variable(tf.ones([num_units]), trainable=False) epsilon = 1e-3 def batch_norm_training(): batch_mean, batch_variance = tf.nn.moments(layer, [0]) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output)
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps. Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 in_channels = prev_layer.get_shape().as_list()[3] out_channels = layer_depth*4 weights = tf.Variable( tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)) bias = tf.Variable(tf.zeros(out_channels)) conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME') conv_layer = tf.nn.bias_add(conv_layer, bias) gamma = tf.Variable(tf.ones([out_channels])) beta = tf.Variable(tf.zeros([out_channels])) pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False) pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False) epsilon = 1e-3 def batch_norm_training(): # Important to use the correct dimensions here to ensure the mean and variance are calculated # per feature map instead of for the entire layer batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(conv_layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(conv_layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output)
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # boolean variable if network is training is_training = tf.placeholder(tf.bool) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, is_training) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training:False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training:False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training:False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training:False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training:False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate)
batch-norm/Batch_Normalization_Exercises.ipynb
vvishwa/deep-learning
mit
EEG processing and Event Related Potentials (ERPs)

For a generic introduction to the computation of ERPs and ERFs, see the tutorial on epoching and averaging (tut_epoching_and_averaging). Here we cover the specifics of EEG, namely:

- setting the reference
- using standard montages (mne.channels.Montage)
- Evoked arithmetic (e.g. differences)
import mne from mne.datasets import sample
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Setup for reading the raw data
data_path = sample.data_path()

raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'

raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.set_eeg_reference('average', projection=True)  # set EEG average reference
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's restrict the data to the EEG channels
raw.pick_types(meg=False, eeg=True, eog=True)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
By looking at the measurement info you will see that we have now 59 EEG channels and 1 EOG channel
print(raw.info)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
In practice it's quite common to have some EEG channels that are actually EOG channels. To change a channel type you can use the mne.io.Raw.set_channel_types method. For example, to treat an EOG channel as EEG you can change its type using:
raw.set_channel_types(mapping={'EOG 061': 'eeg'})
print(raw.info)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And to change the name of the EOG channel:
raw.rename_channels(mapping={'EOG 061': 'EOG'})
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Let's reset the EOG channel back to EOG type.
raw.set_channel_types(mapping={'EOG': 'eog'})
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The EEG channels in the sample dataset already have locations. These locations are available in the 'loc' field of each channel description. For the first channel we get:
print(raw.info['chs'][0]['loc'])
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And it's actually possible to plot the channel locations using mne.io.Raw.plot_sensors.
raw.plot_sensors()
raw.plot_sensors('3d')  # in 3D
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Setting EEG montage. In the case where your data don't have locations, you can set them using an mne.channels.Montage. MNE comes with a set of default montages. To read one of them do:
montage = mne.channels.read_montage('standard_1020')
print(montage)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To apply a montage to your data, use the set_montage method (a minimal usage sketch appears after the next cell). We don't actually call it here, as our demo dataset already contains good EEG channel locations. Next we'll explore the definition of the reference. Setting the EEG reference: let's first remove the reference from our Raw object. This explicitly prevents MNE from adding a default EEG average reference required for source localization.
raw_no_ref, _ = mne.set_eeg_reference(raw, [])
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
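As promised above, applying the standard montage read earlier would be a one-liner. A sketch only, working on a copy so the original raw (which already has good locations) is left untouched:
# Apply the standard_1020 montage read above to a copy of the data.
raw_with_montage = raw.copy().set_montage(montage)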
We next define Epochs and compute an ERP for the left auditory condition.
reject = dict(eeg=180e-6, eog=150e-6)
event_id, tmin, tmax = {'left/auditory': 1}, -0.2, 0.5
events = mne.read_events(event_fname)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
                     reject=reject)

evoked_no_ref = mne.Epochs(raw_no_ref, **epochs_params).average()
del raw_no_ref  # save memory

title = 'EEG Original reference'
evoked_no_ref.plot(titles=dict(eeg=title))
evoked_no_ref.plot_topomap(times=[0.1], size=3., title=title)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Average reference: This is normally added by default, but can also be added explicitly.
raw_car, _ = mne.set_eeg_reference(raw, 'average', projection=True)
evoked_car = mne.Epochs(raw_car, **epochs_params).average()
del raw_car  # save memory

title = 'EEG Average reference'
evoked_car.plot(titles=dict(eeg=title))
evoked_car.plot_topomap(times=[0.1], size=3., title=title)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Custom reference: Use the mean of channels EEG 001 and EEG 002 as a reference
raw_custom, _ = mne.set_eeg_reference(raw, ['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw_custom, **epochs_params).average()
del raw_custom  # save memory

title = 'EEG Custom reference'
evoked_custom.plot(titles=dict(eeg=title))
evoked_custom.plot_topomap(times=[0.1], size=3., title=title)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Evoked arithmetics Trial subsets from Epochs can be selected using 'tags' separated by '/'. Evoked objects support basic arithmetic. First, we create an Epochs object containing 4 conditions.
event_id = {'left/auditory': 1, 'right/auditory': 2, 'left/visual': 3,
            'right/visual': 4}
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
                     reject=reject)
epochs = mne.Epochs(raw, **epochs_params)

print(epochs)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Next, we create averages of stimulation-left vs stimulation-right trials. We can use basic arithmetic to, for example, construct and plot difference ERPs.
left, right = epochs["left"].average(), epochs["right"].average() # create and plot difference ERP mne.combine_evoked([left, -right], weights='equal').plot_joint()
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This is an equal-weighting difference. If you have imbalanced trial numbers, you could also consider equalizing the number of events per condition (using epochs.equalize_event_counts; a sketch appears a couple of cells below). As an example, first we create individual ERPs for each condition.
aud_l = epochs["auditory", "left"].average() aud_r = epochs["auditory", "right"].average() vis_l = epochs["visual", "left"].average() vis_r = epochs["visual", "right"].average() all_evokeds = [aud_l, aud_r, vis_l, vis_r] print(all_evokeds)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This can be simplified with a Python list comprehension:
all_evokeds = [epochs[cond].average() for cond in sorted(event_id.keys())]
print(all_evokeds)

# Then, we construct and plot an unweighted average of left vs. right trials
# this way, too:
mne.combine_evoked(all_evokeds, weights=(0.25, -0.25, 0.25, -0.25)).plot_joint()
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
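As mentioned a few cells above, imbalanced designs can also be handled by equalizing trial counts before averaging. A minimal sketch, applied to a copy so the epochs used above are left untouched (the pooled condition lists are our own choice):
# Equalize the number of pooled left vs. right trials, then average.
epochs_eq, _ = epochs.copy().equalize_event_counts(
    [['left/auditory', 'left/visual'], ['right/auditory', 'right/visual']])
left_eq, right_eq = epochs_eq['left'].average(), epochs_eq['right'].average()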
Often, it makes sense to store Evoked objects in a dictionary or a list - either different conditions, or different subjects.
# If they are stored in a list, they can be easily averaged, for example,
# for a grand average across subjects (or conditions).
grand_average = mne.grand_average(all_evokeds)
mne.write_evokeds('/tmp/tmp-ave.fif', all_evokeds)

# If Evokeds objects are stored in a dictionary, they can be retrieved by name.
all_evokeds = dict((cond, epochs[cond].average()) for cond in event_id)
print(all_evokeds['left/auditory'])

# Besides for explicit access, this can be used for example to set titles.
for cond in all_evokeds:
    all_evokeds[cond].plot_joint(title=cond)
0.15/_downloads/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Exercise 1 : Uniform Distribution Plot the histogram of 10 tosses with a fair coin (let 1 be heads and 2 be tails). Plot the histogram of 1000000 tosses of a fair coin
# Histograms with 10 tosses.
cointoss = DiscreteRandomVariable(1, 3)
plt.hist(cointoss.draw(10), align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Coin Tosses']);

# Histograms with 1000000 tosses.
cointoss = DiscreteRandomVariable(1, 3)
plt.hist(cointoss.draw(1000000), align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Coin Tosses']);
notebooks/lectures/Random_Variables/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 2 : Binomial Distributions. Graph the histogram of 1000000 samples from a binomial distribution of probability 0.25 and $n = 20$ Find the value that occurs the most often Calculate the probability of the value that occurs the most often occurring. Use the factorial(x) function to find factorials
# Binomial distribution with p=0.25 and n=20
binomialdistribution = BinomialRandomVariable(20, 0.25)
bins = np.arange(0, 21, 1)
n, bins, patches = plt.hist(binomialdistribution.draw(1000000), bins=bins)
plt.title('Binomial Distribution with p=0.25 and n=20')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Binomial Draws']);

# Finding x which occurs most often
elem = np.argmax(n)
print 'Maximum occurrence for x =', elem

# Calculating the probability of finding x (using the same p as the distribution).
n = 20
p = 0.25
x = elem
n_factorial = factorial(n)
x_factorial = factorial(x)
n_x_factorial = factorial(n-x)
fact = n_factorial / (n_x_factorial * x_factorial)
probability = fact * (p**x) * ((1-p)**(n-x))
print 'probability of x = %d' % x, probability
notebooks/lectures/Random_Variables/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 3 : Normal Distributions a. Graphing Graph a normal distribution using the Probability Density Function below, with a mean of 0 and standard deviation of 5. $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
# Graphing a normal distribution pdf.
mu = 0
sigma = 5
x = np.linspace(-30, 30, 200)
y = (1/(sigma * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu)*(x - mu) / (2 * sigma * sigma))
plt.plot(x, y)
plt.title('Graph of PDF with mu = 0 and sigma = 5')
plt.xlabel('Value')
plt.ylabel('Probability');
notebooks/lectures/Random_Variables/answers/notebook.ipynb
quantopian/research_public
apache-2.0
b. Confidence Intervals. Calculate the first, second, and third confidence intervals. Plot the PDF and the first, second, and third confidence intervals.
# finding the 1st, 2nd, and 3rd confidence intervals.
first_ci = (-sigma, sigma)
second_ci = (-2*sigma, 2*sigma)
third_ci = (-3*sigma, 3*sigma)

print '1-sigma -> mu +/-', sigma
print '2-sigma -> mu +/-', second_ci[1]
print '3-sigma -> mu +/-', third_ci[1]

plt.axvline(first_ci[0], linestyle='dashdot', label='68% of observations', color = 'blue')
plt.axvline(first_ci[1], linestyle='dashdot', color = 'blue')
plt.axvline(second_ci[0], linestyle='dashdot', label='95% of observations', color = 'red')
plt.axvline(second_ci[1], linestyle='dashdot', color = 'red')
plt.axvline(third_ci[0], linestyle='dashdot', label='99% of observations', color = 'green')
plt.axvline(third_ci[1], linestyle='dashdot', color = 'green')
plt.plot(x, y)
plt.title('Graph of PDF with 3 confidence intervals.')
plt.legend();
notebooks/lectures/Random_Variables/answers/notebook.ipynb
quantopian/research_public
apache-2.0
Exercise 4: Financial Applications: Fit the returns of SPY from 2016-01-01 to 2016-05-01 to a normal distribution. - Fit the returns to a normal distribution by calculating the values of $\mu$ and $\sigma$ - Plot the returns and the distribution, along with 3 confidence intervals. - Use the Jarque-Bera test to check for normality.
# Collect prices and returns.
prices = get_pricing('SPY', start_date = '2016-01-01', end_date='2016-05-01', fields = 'price')
returns = prices.pct_change()[1:]

# Calculating the mean and standard deviation.
sample_mean = np.mean(returns)
sample_std_dev = np.std(returns)

x = np.linspace(-(sample_mean + 4 * sample_std_dev), (sample_mean + 4 * sample_std_dev), len(returns))
sample_distribution = ((1/(sample_std_dev * np.sqrt(2 * np.pi))) *
                       np.exp(-(x - sample_mean)*(x - sample_mean) / (2 * sample_std_dev * sample_std_dev)))

# Plotting histograms and confidence intervals.
plt.hist(returns, range=(returns.min(), returns.max()), normed = True);
plt.plot(x, sample_distribution)
plt.axvline(sample_std_dev, linestyle='dashed', color='red', label='1st Confidence Interval')
plt.axvline(-sample_std_dev, linestyle='dashed', color='red')
plt.axvline(2*sample_std_dev, linestyle='dashed', color='k', label='2nd Confidence Interval')
plt.axvline(-2*sample_std_dev, linestyle='dashed', color='k')
plt.axvline(3*sample_std_dev, linestyle='dashed', color='green', label='3rd Confidence Interval')
plt.axvline(-3*sample_std_dev, linestyle='dashed', color='green')
plt.legend();

# Run the JB test for normality.
cutoff = 0.01
_, p_value, skewness, kurtosis = stattools.jarque_bera(returns)
print "The JB test p-value is: ", p_value
print "We reject the hypothesis that the data are normally distributed ", p_value < cutoff
print "The skewness of the returns is: ", skewness
print "The kurtosis of the returns is: ", kurtosis
notebooks/lectures/Random_Variables/answers/notebook.ipynb
quantopian/research_public
apache-2.0
The Key to Understanding Joins To understand joins in Hail, we need to revisit one of the crucial properties of tables: the key. A table has an ordered list of fields known as the key. Our users table has one key, the id field. We can see all the fields, as well as the keys, of a table by calling describe().
users.describe()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
key is a struct expression of all of the key fields, so we can refer to the key of a table without explicitly specifying the names of the key fields.
users.key.describe()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Keys need not be unique or non-missing, although in many applications they will be both. When tables are joined in Hail, they are joined based on their keys. In order to join two tables, they must share the same number of keys, same key types (i.e. string vs integer), and the same order of keys. Let's look at a simple example of a join. We'll use the Table.parallelize() method to create two small tables, t1 and t2.
t1 = hl.Table.parallelize([
    {'a': 'foo', 'b': 1},
    {'a': 'bar', 'b': 2},
    {'a': 'bar', 'b': 2}],
    hl.tstruct(a=hl.tstr, b=hl.tint32),
    key='a')

t2 = hl.Table.parallelize([
    {'t': 'foo', 'x': 3.14},
    {'t': 'bar', 'x': 2.78},
    {'t': 'bar', 'x': -1},
    {'t': 'quam', 'x': 0}],
    hl.tstruct(t=hl.tstr, x=hl.tfloat64),
    key='t')

t1.show()
t2.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Now, we can join the tables.
j = t1.annotate(t2_x = t2[t1.a].x)
j.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Let's break this syntax down. t2[t1.a] is an expression referring to the row of table t2 with value t1.a. So this expression will create a map between the keys of t1 and the rows of t2. You can view this mapping directly:
t2[t1.a].show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Since we only want the field x from t2, we can select it with t2[t1.a].x. Then we add this field to t1 with the anntotate_rows() method. The new joined table j has a field t2_x that comes from the rows of t2. The tables could be joined, because they shared the same number of keys (1) and the same key type (string). The keys do not need to share the same name. Notice that the rows with keys present in t2 but not in t1 do not show up in the final result. This join syntax performs a left join. Tables also have a SQL-style inner/left/right/outer join() method. The magic of keys is that they can be used to create a mapping, like a Python dictionary, between the keys of one table and the row values of another table: table[expr] will refer to the row of table that has a key value of expr. If the row is not unique, one such row is chosen arbitrarily. Here's a subtle bit: if expr is an expression indexed by a row of table2, then table[expr] is also an expression indexed by a row of table2. Also note that while they look similar, table['field'] and table1[table2.key] are doing very different things! table['field'] selects a field from the table, while table1[table2.key] creates a mapping between the keys of table2 and the rows of table1.
t1['a'].describe()
t2[t1.a].describe()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
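The SQL-style method mentioned above can be used when you want a full join result rather than a single added field. A quick sketch with the two toy tables (how may be 'inner', 'left', 'right', or 'outer'; the key types must match):
j_outer = t1.join(t2, how='outer')
j_outer.show()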
Joining Tables Now that we understand the basics of how joins work, let's use a join to compute the average movie rating per genre. We have a table ratings, which contains user_id, movie_id, and rating fields. Group by movie_id and aggregate to get the mean rating of each movie.
t = (ratings.group_by(ratings.movie_id)
     .aggregate(rating = hl.agg.mean(ratings.rating)))
t.describe()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
To get the mean rating by genre, we need to join in the genre field from the movies table.
t = t.annotate(genres = movies[t.movie_id].genres)
t.describe()
t.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
We want to group the ratings by genre, but they're packed up in an array. To unpack the genres, we can use explode. explode creates a new row for each element in the value of the field, which must be a collection (array or set).
t = t.explode(t.genres)
t.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Finally, we can group by genre and aggregate to get the mean rating per genre.
t = (t.group_by(t.genres)
     .aggregate(rating = hl.agg.mean(t.rating)))
t.show(n=100)
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Let's do another example. This time, we'll see if we can determine what the highest rated movies are, on average, for each occupation. We start by joining the two tables movies and users.
movie_data = ratings.annotate(
    movie = movies[ratings.movie_id].title,
    occupation = users[ratings.user_id].occupation)

movie_data.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Next, we'll use group_by along with the aggregator hl.agg.mean to determine the average rating of each movie by occupation. Remember that the group_by operation is always associated with an aggregation.
ratings_by_job = movie_data.group_by(
    movie_data.occupation, movie_data.movie).aggregate(
    mean = hl.agg.mean(movie_data.rating))

ratings_by_job.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Now we can use another group_by to determine the highest rated movie, on average, for each occupation. The syntax here needs some explaining. The second step in the cell below is just to clean up the table created by the preceding step. If you examine the intermediate result (for example, by giving a new name to the output of the first step), you will see that there are two columns corresponding to occupation, occupation and val.occupation. This is an artifact of the aggregator syntax and the fact that we are retaining the entire row from ratings_by_job. So in the second line, we use select to keep those columns that we want, and also rename them to drop the val. syntax. Since occupation is a key of this table, we don't need to select for it.
highest_rated = ratings_by_job.group_by(
    ratings_by_job.occupation).aggregate(
    val = hl.agg.take(ratings_by_job.row, 1, ordering = -ratings_by_job.mean)[0]
)

highest_rated = highest_rated.select(movie = highest_rated.val.movie,
                                     mean = highest_rated.val.mean)

highest_rated.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
Let's try to get a deeper understanding of this result. Notice that every movie displayed has an average rating of 5, which means that every person gave these movies the highest rating. Is that unlikely? We can determine how many people rated each of these movies by working backwards and filtering our original movie_data table by fields in highest_rated. Note that in the second line below, we are taking advantage of the fact that Hail tables are keyed.
highest_rated = highest_rated.key_by(
    highest_rated.occupation, highest_rated.movie)

counts_temp = movie_data.filter(
    hl.is_defined(highest_rated[movie_data.occupation, movie_data.movie]))

counts = counts_temp.group_by(counts_temp.occupation, counts_temp.movie).aggregate(
    counts = hl.agg.count())

counts.show()
hail/python/hail/docs/tutorials/06-joins.ipynb
hail-is/hail
mit
The performance here is very poor. We really need to train with more samples and for more epochs.
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, predictions)
np.fill_diagonal(cm, 0)

plt.bone()
plt.matshow(cm)
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
Wk09-Advanced-ML-tasks-Deep-Learning.ipynb
streety/biof509
mit
Linear classifier on sensor data with plot patterns and filters Decoding, a.k.a MVPA or supervised machine learning applied to MEG and EEG data in sensor space. Fit a linear classifier with the LinearModel object providing topographical patterns which are more neurophysiologically interpretable [1] than the classifier filters (weight vectors). The patterns explain how the MEG and EEG data were generated from the discriminant neural sources which are extracted by the filters. Note patterns/filters in MEG data are more similar than EEG data because the noise is less spatially correlated in MEG than EEG. [1] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96–110. doi:10.1016/j.neuroimage.2013.10.067
# Authors: Alexandre Gramfort <[email protected]>
#          Romain Trachel <[email protected]>
#
# License: BSD (3-clause)

import mne
from mne import io
from mne.datasets import sample

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# import a linear classifier from mne.decoding
from mne.decoding import LinearModel

print(__doc__)

data_path = sample.data_path()
0.12/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)

# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None, method='iir')  # replace baselining with high-pass
events = mne.read_events(event_fname)

# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
                    decim=4, baseline=None, preload=True)
labels = epochs.events[:, -1]

# get MEG and EEG data
meg_epochs = epochs.copy().pick_types(meg=True, eeg=False)
meg_data = meg_epochs.get_data().reshape(len(labels), -1)
eeg_epochs = epochs.copy().pick_types(meg=False, eeg=True)
eeg_data = eeg_epochs.get_data().reshape(len(labels), -1)
0.12/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Decoding in sensor space using a LogisticRegression classifier
clf = LogisticRegression()
sc = StandardScaler()

# create a linear model with LogisticRegression
model = LinearModel(clf)

# fit the classifier on MEG data
X = sc.fit_transform(meg_data)
model.fit(X, labels)

# plot patterns and filters
model.plot_patterns(meg_epochs.info, title='MEG Patterns')
model.plot_filters(meg_epochs.info, title='MEG Filters')

# fit the classifier on EEG data
X = sc.fit_transform(eeg_data)
model.fit(X, labels)

# plot patterns and filters
model.plot_patterns(eeg_epochs.info, title='EEG Patterns')
model.plot_filters(eeg_epochs.info, title='EEG Filters')
0.12/_downloads/plot_linear_model_patterns.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
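To build some intuition for the patterns/filters distinction described above, here is a rough NumPy sketch of the Haufe et al. transformation. This is an addition for illustration only, not how MNE's plot_patterns is implemented; it assumes the sc, clf, meg_data and labels defined in the cells above, and the variable names w, Xc and pattern are ours.
import numpy as np

# Rough sketch of the Haufe et al. (2014) transformation: for a linear model,
# the activation pattern is proportional to Cov(X) @ w, where w is the filter.
X = sc.fit_transform(meg_data)
clf.fit(X, labels)

w = clf.coef_.ravel()     # classifier filter (weight vector), one value per feature
Xc = X - X.mean(axis=0)   # centered data
# Cov(X) @ w computed without forming the full (very large) covariance matrix
pattern = Xc.T @ (Xc @ w) / (len(X) - 1)

print(pattern.shape)      # same shape as the filter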
Generators return their contents 'lazily'. This keeps the memory footprint minimal, at the cost of making the generator non-reusable: it can only be consumed once.
y = (x*x for x in [1, 2, 3])
type(y)
dir(y)
y.send??

y[5]       # raises TypeError - generators don't support indexing

next(y)
y.send(1)  # send() with a value also advances the generator
next(y)    # run this cell twice - what happens?
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
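To make the "minimal memory footprint" claim concrete, here is a small added sketch (not part of the original notebook) comparing the size of a generator object with the equivalent list using sys.getsizeof:
import sys

squares_list = [x * x for x in range(1_000_000)]  # materializes every element
squares_gen = (x * x for x in range(1_000_000))   # stores only the recipe

print(sys.getsizeof(squares_list))  # several megabytes
print(sys.getsizeof(squares_gen))   # a couple hundred bytes, regardless of range size

# ...but the generator is exhausted after one pass:
total = sum(squares_gen)
print(sum(squares_gen))  # 0 - nothing left to yield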
'range' is something like a generator, but with special properties because of its intended use case (in 'for' loops or similar structures).
z = range(10, 5, -1)
dir(range)

# let's filter that list a little
[x for x in dir(range) if not x.startswith('_')]

z.start
len(z)  # __len__ - an overloaded operator under the hood
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
From the docs (https://docs.python.org/3/library/stdtypes.html#typesseq-range): The advantage of the range type over a regular list or tuple is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the start, stop and step values, calculating individual items and subranges as needed). Range objects implement the collections.abc.Sequence ABC, and provide features such as containment tests, element index lookup, slicing and support for negative indices (see Sequence Types — list, tuple, range):
for i in z: print(i)
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
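As a quick illustration of the features mentioned in the docs (this sketch is an addition, not part of the original notebook), containment tests, index lookup, slicing and negative indices all work on range objects without materializing them:
r = range(0, 20, 2)  # 0, 2, 4, ..., 18

print(10 in r)       # True  - containment test, computed arithmetically
print(11 in r)       # False
print(r.index(8))    # 4     - element index lookup
print(r[-1])         # 18    - negative indices
print(r[2:5])        # range(4, 10, 2) - slicing returns another range
print(len(r))        # 10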
zip produces iterators from pairs:
GPA = zip(['bob', 'sue', 'mary'], [2.3, 4.0, 3.7])
type(GPA)
dir(GPA)
next(GPA)
next(GPA)[1]
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
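Like generators, zip objects are lazy and single-use. A small added sketch (not in the original notebook) makes both points:
pairs = zip(['bob', 'sue', 'mary'], [2.3, 4.0, 3.7])

print(list(pairs))  # [('bob', 2.3), ('sue', 4.0), ('mary', 3.7)]
print(list(pairs))  # [] - the zip object is already exhausted

# zip also stops at the shortest input
print(list(zip([1, 2, 3], ['a', 'b'])))  # [(1, 'a'), (2, 'b')]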
More on Dicts The dict data structure shows up all over Python.
dict?
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
from assignment:
GPA_2 = dict(bob=2.0, sue=3.4, mary=4.0)
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
from iterator:
names = ['bob', 'mary', 'sue', 'lisa']
gpas = [3.2, 4.0, 3.1, 2.8]

GPA_3 = dict(zip(names, gpas))
GPA_3
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
In function definitions:
# explicitly named arguments are also positional
# anything captured by * in a function signature is an extra positional argument - collected into a tuple
# anything captured by ** is an extra named argument - collected into a dict
def arg_explainer(x, y, *args, **kwargs):
    print('-' * 30)
    print('x is %d, even though you didn\'t specify it, because of its position.' % x)
    print('same with y, which is %d.' % y)
    if args:
        print('-' * 30)
        print('type(*args) = %s' % type(args))
        print('these are the *args arguments: ')
        for arg in args:
            print(arg)
    else:
        print('-' * 30)
        print('no *args today!')
    if kwargs:
        print('-' * 30)
        print('type(**kwargs) == %s' % type(kwargs))
        for key in kwargs:
            print(key, kwargs[key])
    else:
        print('-' * 30)
        print('no **kwargs today!')
    print('-' * 30)

arg_explainer(2, 4, 3, 7, 8, 9, 10, plot=True, sharey=True, rotate=False)
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
In function calls:
my_kwargs = {'plot': False, 'sharey': True}
arg_explainer(1, 2, **my_kwargs)
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
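The same unpacking works for positional arguments with a single *. A short added sketch reusing the arg_explainer defined above (the names more_args and more_kwargs are just illustrative):
more_args = (3, 7, 8)
more_kwargs = {'plot': True, 'rotate': False}

# * unpacks a sequence into positional arguments, ** unpacks a mapping into named ones
arg_explainer(1, 2, *more_args, **more_kwargs)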
This allows, for instance, matplotlib's plot function to accept a huge range of different plotting options, or few to none at all.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

?plt.plot

x = np.linspace(-5, 5, 100)
y1 = np.sin(x)
y2 = np.cos(x)

plt.plot(x, y1)  # all of these arguments are *args
plt.plot(x, y2, color='red', label='just on the cosine, for no reason at all')  # starting w/ color, **kwargs
plt.legend(loc='center');
notebooks/more_data_structures.ipynb
jwjohnson314/data-801
mit
Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
from google.cloud import bigquery import pandas as pd
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lab Task #1: Set up environment variables so that we can use them throughout the notebook
%%bash
# TODO 1
# TODO -- Your code here.
echo "Your current GCP Project Name is: "$PROJECT

PROJECT = "cloud-training-demos"  # Replace with your PROJECT
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
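One possible way to complete this task (a sketch only; the exact variables the lab expects may differ) is to set the project ID in a Python cell, export it to the notebook environment, and let a separate %%bash cell read it:
import os

PROJECT = "cloud-training-demos"  # Replace with your PROJECT
os.environ["PROJECT"] = PROJECT   # makes $PROJECT visible to %%bash cells

# Then, in a separate %%bash cell:
# %%bash
# echo "Your current GCP Project Name is: "$PROJECT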
Create ML datasets by sampling using BigQuery We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.
bq = bigquery.Client(project = PROJECT)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define the modulo divisor and the percentage of data we want in each split. Feel free to play around with these values to get the perfect combination.
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0

train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
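Just to make the arithmetic explicit (this check is an addition, not part of the lab): with a modulo divisor of 100, an 80/10 train/eval split leaves 10 buckets for the test set, and the three bucket counts should sum back to the divisor.
# Added sanity check on the bucket arithmetic (assumes the variables defined above)
test_buckets = modulo_divisor - train_buckets - eval_buckets

print(train_buckets, eval_buckets, test_buckets)  # 80 10 10
assert train_buckets + eval_buckets + test_buckets == modulo_divisor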
We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.
def display_dataframe_head_from_query(query, count=10):
    """Displays count rows from dataframe head from query.

    Args:
        query: str, query to be run on BigQuery, results stored in dataframe.
        count: int, number of results from head of dataframe to display.
    Returns:
        Dataframe head with count number of results.
    """
    df = bq.query(
        query + " LIMIT {limit}".format(
            limit=count)).to_dataframe()

    return df.head(count)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
For our first query, we're going to use the original query above to get our label, features, and the columns we will combine into the hash used for repeatable splitting. There are only a limited number of years, months, days, and states in the dataset, so we need to include all of these extra columns in the hash to get a fairly uniform spread of the data. Let's see what the hash values are. Feel free to try fewer or more columns in the hash and see how it changes your results.
# Get label, features, and columns to hash and split into buckets
hash_cols_fixed_query = """
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    year,
    month,
    CASE
        WHEN day IS NULL THEN
            CASE
                WHEN wday IS NULL THEN 0
                ELSE wday
            END
        ELSE day
    END AS date,
    IFNULL(state, "Unknown") AS state,
    IFNULL(mother_birth_state, "Unknown") AS mother_birth_state
FROM
    publicdata.samples.natality
WHERE
    year > 2000
    AND weight_pounds > 0
    AND mother_age > 0
    AND plurality > 0
    AND gestation_weeks > 0
"""

display_dataframe_head_from_query(hash_cols_fixed_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise, the CASE WHEN above would become COALESCE(day, wday, 0) AS date. You can read more about it here. The next query will combine our hash columns and will leave us just with our label, features, and our hash values.
data_query = """
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks,
    FARM_FINGERPRINT(
        CONCAT(
            CAST(year AS STRING),
            CAST(month AS STRING),
            CAST(date AS STRING),
            CAST(state AS STRING),
            CAST(mother_birth_state AS STRING)
        )
    ) AS hash_values
FROM
    ({CTE_hash_cols_fixed})
""".format(CTE_hash_cols_fixed=hash_cols_fixed_query)

display_dataframe_head_from_query(data_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The next query finds the count of records for each of the 657,484 unique hash_values. This is our first step toward making actual hash buckets for our split via the GROUP BY.
# Get the counts of each of the unique hash of our splitting column
first_bucketing_query = """
SELECT
    hash_values,
    COUNT(*) AS num_records
FROM
    ({CTE_data})
GROUP BY
    hash_values
""".format(CTE_data=data_query)

display_dataframe_head_from_query(first_bucketing_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The query below performs a second layer of bucketing: each hash value is mapped to a bucket index with ABS(MOD(...)), and we then sum the number of records that fall into each bucket index.
# Get the number of records in each of the hash buckets
second_bucketing_query = """
SELECT
    ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,
    SUM(num_records) AS num_records
FROM
    ({CTE_first_bucketing})
GROUP BY
    ABS(MOD(hash_values, {modulo_divisor}))
""".format(
    CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor)

display_dataframe_head_from_query(second_bucketing_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Raw record counts make it hard to judge the split at a glance, so in the next query we will normalize each bucket's count into a percentage of the total data.
# Calculate the overall percentages
percentages_query = """
SELECT
    bucket_index,
    num_records,
    CAST(num_records AS FLOAT64) / (
        SELECT
            SUM(num_records)
        FROM
            ({CTE_second_bucketing})) AS percent_records
FROM
    ({CTE_second_bucketing})
""".format(CTE_second_bucketing=second_bucketing_query)

display_dataframe_head_from_query(percentages_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We'll now select the range of buckets to be used in training.
# Choose hash buckets for training and pull in their statistics
train_query = """
SELECT
    *,
    "train" AS dataset_name
FROM
    ({CTE_percentages})
WHERE
    bucket_index >= 0
    AND bucket_index < {train_buckets}
""".format(
    CTE_percentages=percentages_query, train_buckets=train_buckets)

display_dataframe_head_from_query(train_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We'll do the same by selecting the range of buckets to be used for evaluation.
# Choose hash buckets for validation and pull in their statistics
eval_query = """
SELECT
    *,
    "eval" AS dataset_name
FROM
    ({CTE_percentages})
WHERE
    bucket_index >= {train_buckets}
    AND bucket_index < {cum_eval_buckets}
""".format(
    CTE_percentages=percentages_query,
    train_buckets=train_buckets,
    cum_eval_buckets=train_buckets + eval_buckets)

display_dataframe_head_from_query(eval_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lastly, we'll select the hash buckets to be used for the test split.
# Choose hash buckets for testing and pull in their statistics
test_query = """
SELECT
    *,
    "test" AS dataset_name
FROM
    ({CTE_percentages})
WHERE
    bucket_index >= {cum_eval_buckets}
    AND bucket_index < {modulo_divisor}
""".format(
    CTE_percentages=percentages_query,
    cum_eval_buckets=train_buckets + eval_buckets,
    modulo_divisor=modulo_divisor)

display_dataframe_head_from_query(test_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
In the query below, we'll UNION ALL the datasets together so that all three sets of hash buckets are in one table. We add a dataset_id so that we can sort on it in the following query.
# Union the training, validation, and testing dataset statistics
union_query = """
SELECT
    0 AS dataset_id,
    *
FROM
    ({CTE_train})
UNION ALL
SELECT
    1 AS dataset_id,
    *
FROM
    ({CTE_eval})
UNION ALL
SELECT
    2 AS dataset_id,
    *
FROM
    ({CTE_test})
""".format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query)

display_dataframe_head_from_query(union_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Lastly, we'll show the final split between the train, eval, and test sets. We can see both the number of records and the percent of the total data. It is really close to what we were hoping to get.
# Show final splitting and associated statistics
split_query = """
SELECT
    dataset_id,
    dataset_name,
    SUM(num_records) AS num_records,
    SUM(percent_records) AS percent_records
FROM
    ({CTE_union})
GROUP BY
    dataset_id,
    dataset_name
ORDER BY
    dataset_id
""".format(CTE_union=union_query)

display_dataframe_head_from_query(split_query)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now that we know our splitting values produce a good global split of the data, here's a way to draw a well-distributed subsample from each split so that the train, eval, and test sets do not overlap while remaining much smaller than the global splits. Lab Task #2: Sample the natality dataset
# TODO 2
# TODO -- Your code here.

# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want

print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
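One way to complete this task (a sketch only; the variable names every_n and create_data_split_sample_df are illustrative, not prescribed by the lab) is to widen the modulo by an every_n factor so that only a small fraction of each hash bucket is pulled, and then query each non-overlapping bucket range into its own dataframe. It reuses bq, data_query, and the percent/divisor variables defined above.
# Illustrative sketch only - names and the exact sampling factor are assumptions.
every_n = 1000  # keep roughly 1/every_n of the records in each split

splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(
    every_n, modulo_divisor)

def create_data_split_sample_df(query_string, splitting_string, lo, up):
    """Returns a dataframe with a non-overlapping sample of one data split."""
    query = "SELECT * FROM ({0}) WHERE {1} >= {2} AND {1} < {3}".format(
        query_string, splitting_string, int(lo), int(up))
    return bq.query(query).to_dataframe()

train_df = create_data_split_sample_df(
    data_query, splitting_string, lo=0, up=train_percent)

eval_df = create_data_split_sample_df(
    data_query, splitting_string, lo=train_percent,
    up=train_percent + eval_percent)

test_df = create_data_split_sample_df(
    data_query, splitting_string, lo=train_percent + eval_percent,
    up=modulo_divisor)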
Preprocess data using Pandas We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is, we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than one child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is.
train_df.head()
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Also, notice that there are some very important numeric fields that are missing in some rows (the count row in the Pandas describe output doesn't include missing data).
train_df.describe()
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
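To see the missing values directly (an added check, not part of the original lab), you could count the nulls per column in the training sample:
# Added check: count missing values per column
train_df.isnull().sum()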
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model, so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect. Lab Task #3: Preprocess the data in Pandas dataframe
# TODO 3
# TODO -- Your code here.
def preprocess(df):
    """Preprocesses a dataframe, adding rows that simulate the lack of an ultrasound."""
    # Modify plurality field to be a string
    twins_etc = dict(zip([1, 2, 3, 4, 5],
                         ["Single(1)",
                          "Twins(2)",
                          "Triplets(3)",
                          "Quadruplets(4)",
                          "Quintuplets(5)"]))
    df["plurality"].replace(twins_etc, inplace=True)

    # Clone data and mask certain columns to simulate lack of ultrasound
    no_ultrasound = df.copy(deep=True)

    # Modify is_male
    no_ultrasound["is_male"] = "Unknown"

    # Modify plurality
    condition = no_ultrasound["plurality"] != "Single(1)"
    no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)"

    # Concatenate both datasets together and shuffle
    return pd.concat(
        [df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's process the train, eval, test set and see a small sample of the training data after our preprocessing:
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)

train_df.head()

train_df.tail()
courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0