Build the Neural Network You'll build the components of a Sequence-to-Sequence model by implementing the following functions: - model_inputs - process_decoding_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability).
def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate, keep probability) """ inputs = tf.placeholder(tf.int32, [None, None], name = 'input') targets = tf.placeholder(tf.int32, [None, None], name = 'targets') learning_rate = tf.placeholder(tf.float32, shape = None, name = 'learning_rate') keep_prob = tf.placeholder(tf.float32, shape = None, name = 'keep_prob') return inputs, targets, learning_rate, keep_prob """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concatenate the GO ID to the beginning of each batch.
def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ GO_ID = target_vocab_to_int['<GO>'] target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1]) concat_data = tf.fill([batch_size, 1], GO_ID) target_data = tf.concat([concat_data, target_data], 1) return target_data """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
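To see what this transformation does on concrete values, here is a small sketch with a toy batch of my own (the ids and the assumption that `<GO>` maps to 0 are purely illustrative), run with the TF 1.x session API used throughout this notebook: the last column of each target batch is dropped and the `<GO>` id is prepended.

```python
import numpy as np
import tensorflow as tf

# Hypothetical target ids; 3 plays the role of <EOS>, 0 the role of <GO>.
toy_targets = tf.constant(np.array([[4, 5, 6, 3],
                                    [7, 8, 9, 3]], dtype=np.int32))
toy_vocab_to_int = {'<GO>': 0}

dec_input = process_decoding_input(toy_targets, toy_vocab_to_int, batch_size=2)

with tf.Session() as sess:
    print(sess.run(dec_input))
    # [[0 4 5 6]
    #  [0 7 8 9]]
```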
Encoding Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ encoding_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) encoding_cell = tf.contrib.rnn.DropoutWrapper(encoding_cell, keep_prob) _, rnn_state = tf.nn.dynamic_rnn(encoding_cell, rnn_inputs, dtype = tf.float32) return rnn_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob) train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state) train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope = decoding_scope) train_logits = output_fn(train_pred) return train_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob) infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference( output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length - 1, vocab_size) inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder( dec_cell, infer_decoder_fn, scope = decoding_scope) return inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Create the RNN cell for decoding using rnn_size and num_layers. Create the output function using a lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference.
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers) dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob) with tf.variable_scope("decoding") as decoding_scope: output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope = decoding_scope) train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) decoding_scope.reuse_variables() inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length - 1, vocab_size, decoding_scope, output_fn, keep_prob) return train_logits, inference_logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size) encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob) dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size) dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) return decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability
# Number of Epochs epochs = 4 # Batch Size batch_size = 512 # RNN Size rnn_size = 100 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 50 decoding_embedding_size = 50 # Learning Rate learning_rate = 0.01 # Dropout Keep Probability keep_probability = 0.9
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase. Convert words into ids using vocab_to_int. Convert words not in the vocabulary to the <UNK> word id.
def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ sentence = sentence.lower() word_list = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split(' ')] return word_list """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq)
Udacity-Deep-Learning-Foundation-Nanodegree/Project-4/dlnd_language_translation.ipynb
joelowj/Udacity-Projects
apache-2.0
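A hypothetical usage sketch; source_vocab_to_int stands for the notebook's source-vocabulary dictionary and is assumed to be in scope:

```python
# Hypothetical example sentence; any word missing from the vocabulary maps to the <UNK> id.
translate_sentence = 'he saw a old yellow truck .'
translate_sequence = sentence_to_seq(translate_sentence, source_vocab_to_int)
print(translate_sequence)
```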
Load the libstempo Python extension. It requires a source installation of tempo2, a current Python and compiler, and the numpy and Cython packages. (Both Python 2.7 and 3.4 are supported; this means that in Python 2.7 all returned strings will be unicode strings, while in Python 3 all function arguments should be default unicode strings rather than bytes.) This should work transparently, although there are limitations to what characters can be passed to tempo2; you should probably restrict yourself to ASCII.
import sys from libstempo.libstempo import * import libstempo libstempo.__path__ import libstempo as T T.data = T.__path__[0] + '/data/' # example files print("Python version :",sys.version.split()[0]) print("libstempo version:",T.__version__) print("Tempo2 version :",T.libstempo.tempo2version())
demo/libstempo-demo.ipynb
vallis/libstempo
mit
We load a single-pulsar object. Doing this will automatically run the tempo2 fit routine once.
psr = T.tempopulsar( parfile=T.data + "/J1909-3744_NANOGrav_dfg+12.par", timfile=T.data + "/J1909-3744_NANOGrav_dfg+12.tim" )
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Let's start simple: what is the name of this pulsar? (You can change it, by the way.)
psr.name
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Next, let's look at observations: there are psr.nobs of them; we can get numpy arrays of the site TOAs [in MJDs] with psr.stoas, of the TOA measurement errors [in microseconds] with psr.toaerrs, and of the measurement frequencies with psr.freqs. These arrays are views of the tempo2 data, so you can write to them (but you cannot currently change the number of observations).
psr.nobs psr.stoas psr.toaerrs.min() psr.toaerrs psr.freqs
demo/libstempo-demo.ipynb
vallis/libstempo
mit
By contrast, barycentric TOAs and frequencies are computed on the basis of current pulsar parameters, so you get them by calling psr methods (with parentheses), and you get a copy of the current values. Writing to it has no effect on the tempo2 data.
psr.toas() psr.ssbfreqs()
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Residuals (in seconds) are returned by residuals(). The method takes a few options... I'll let its docstring help describe them. libstempo is fully documented in this way (try help(T.tempopulsar)).
help(psr.residuals) psr.residuals().min() psr.residuals()
demo/libstempo-demo.ipynb
vallis/libstempo
mit
We can plot TOAs vs. residuals, but we should first sort the arrays; otherwise the arrays follow the order in the tim file, which may not be chronological.
# get sorted array of indices i = N.argsort(psr.toas()) # use numpy fancy indexing to order residuals P.errorbar(psr.toas()[i],psr.residuals()[i],yerr=1e-6*psr.toaerrs[i],fmt='.',alpha=0.2);
demo/libstempo-demo.ipynb
vallis/libstempo
mit
We can also see what flags have been set on the observations, and what their values are. The latter returns a numpy vector of strings. Flags are not currently writable.
psr.flags() psr.flagvals('chanid')
demo/libstempo-demo.ipynb
vallis/libstempo
mit
In fact, there's a convenience routine in libstempo.plot to plot residuals, taking flags into account.
import libstempo.plot as LP LP.plotres(psr,group='pta',alpha=0.2)
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Timing-model parameters can be accessed by using psr as a Python dictionary. Each parameter is a special object with properties val and err (as well as fit, which is true if the parameter is currently being fitted, and set, which is true if the parameter was assigned a value).
psr['RAJ'].val, psr['RAJ'].err, psr['RAJ'].fit, psr['RAJ'].set
demo/libstempo-demo.ipynb
vallis/libstempo
mit
The names of all fitted parameters, of all set parameters, and of all parameters are returned by psr.pars(which=...), where which can be 'fit' (the default), 'set', or 'all'. We show only the first few.
fitpars = psr.pars() # defaults to fitted parameters setpars = psr.pars(which='set') allpars = psr.pars(which='all') print(len(fitpars),len(setpars),len(allpars)) print(fitpars[:10])
demo/libstempo-demo.ipynb
vallis/libstempo
mit
The number of fitting parameters is psr.ndim.
psr.ndim
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Changing the parameter values results in different residuals.
# look +/- 3 sigmas around the current value x0, dx = psr['RAJ'].val, psr['RAJ'].err xs = x0 + dx * N.linspace(-3,3,20) res = [] for x in xs: psr['RAJ'].val = x res.append(psr.rms()/1e-6) psr['RAJ'].val = x0 # restore the original value P.plot(xs,res)
demo/libstempo-demo.ipynb
vallis/libstempo
mit
We can also call a least-squares fitting routine, which will fit around the current parameter values, replacing them with their new best values. Individual parameters can be included in or excluded from the fit by setting their 'fit' field. (Note: as of version 2.3.0, libstempo provides its own fit, although it does call tempo2 to compute the design matrix.)
psr['DM'].fit psr['DM'].fit = True print(psr['DM'].val) ret = psr.fit() print(psr['DM'].val,psr['DM'].err)
demo/libstempo-demo.ipynb
vallis/libstempo
mit
The fit returns a tuple consisting of best-fit vector, standard errors, covariance matrix, and linearized chisq. Note that these vectors and matrix are (ndim+1)- or (ndim+1)x(ndim+1)-dimensional, with the first row/column corresponding to a constant phase offset referenced to the first TOA (even if that point is not used). The exact chisq can be recomputed by psr.chisq() (which evaluates N.sum(psr.residuals()**2 / (1e-12 * psr.toaerrs**2))). The pulsar parameters can be read in bulk by calling psr.vals(which='fit'), which will default to fitted parameters, but can also be given 'all', 'set', or even a list of parameter names.
fitvals = psr.vals() print(fitvals) psr.vals(which=['RAJ','DECJ','PMRA'])
demo/libstempo-demo.ipynb
vallis/libstempo
mit
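As a quick check of the chi-squared formula quoted above, the manual computation should agree with psr.chisq(); a minimal sketch, reusing the psr object and the N alias for numpy from the cells above:

```python
# residuals are in seconds, toaerrs in microseconds, hence the 1e-12 factor
manual_chisq = N.sum(psr.residuals()**2 / (1e-12 * psr.toaerrs**2))
print(manual_chisq, psr.chisq())
```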
To set parameter values in bulk, you give a first argument to vals. Or call it with a dictionary.
psr.vals([5.1,-0.6],which=['RAJ','DECJ','PMRA']) psr.vals({'PMRA': -9.5}) print(psr.vals(which=['RAJ','DECJ','PMRA'])) # restore original values psr.vals(fitvals)
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Be careful about loss of precision: tempopar.val is a numpy longdouble, so avoid assigning it a regular Python double. By contrast, doing arithmetic with numpy longdoubles will preserve their nature and precision. You can access errors in a similar way with psr.errs(...). It's also possible to obtain the design matrix computed at the current parameter values, which has shape psr.nobs x (len(psr.pars()) + 1), since a constant offset is always included among the fitting parameters.
d = psr.designmatrix()
demo/libstempo-demo.ipynb
vallis/libstempo
mit
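A small sketch of the precision point made above (assuming a platform where numpy's longdouble is 80-bit extended precision): a tiny offset that survives longdouble arithmetic is lost once the value is downcast to a regular double.

```python
import numpy as N

x = N.longdouble('55000.0')            # an MJD-sized value, as stored in tempopar.val
eps = N.longdouble('1e-13')

print((x + eps) - x)                   # ~1e-13: preserved in longdouble arithmetic
print((float(x) + 1e-13) - float(x))   # 0.0: lost when working with Python doubles
```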
These, for instance, are the derivatives with respect to RAJ and DECJ, evaluated at the TOAs.
# we need the sorted-index array computed above P.plot(psr.toas()[i]/365.25,d[i,1],'-x'); P.plot(psr.toas()[i]/365.25,d[i,2],'-x')
demo/libstempo-demo.ipynb
vallis/libstempo
mit
It's easy to save the current timing-model to a new par file. Omitting the argument will overwrite the original parfile.
psr.savepar('./foo.par') !head foo.par
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Same for writing tim files.
psr.savetim('./foo.tim') !head foo.tim
demo/libstempo-demo.ipynb
vallis/libstempo
mit
With libstempo, it's easy to replicate some of the "toasim" plugin functionality. By subtracting the residuals from the site TOAs (psr.stoas, vs. the barycentered psr.toas) and refitting, we can create a "perfect" timing solution. (Note that 1 ns is roughly tempo2's claimed accuracy.)
print(math.sqrt(N.mean(psr.residuals()**2)) / 1e-6) psr.stoas[:] -= psr.residuals() / 86400.0 ret = psr.fit(iters = 4) print(math.sqrt(N.mean(psr.residuals()**2)) / 1e-6)
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Then we can add, e.g., homoskedastic white measurement noise at 100 ns (remember the tempo units: days for TOAs, us for errors, s for residuals).
psr.stoas[:] += 0.1e-6 * N.random.randn(psr.nobs) / 86400.0 psr.toaerrs[:] = 0.1 ret = psr.fit() i = N.argsort(psr.toas()) P.errorbar(psr.toas()[i],psr.residuals()[i],yerr=1e-6*psr.toaerrs[i],fmt='.')
demo/libstempo-demo.ipynb
vallis/libstempo
mit
Reading and writing raw files In this example, we read a raw file, plot a segment of MEG data restricted to MEG channels, and save these data in a new raw file.
# Author: Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) import mne from mne.datasets import sample print(__doc__) data_path = sample.data_path() fname = data_path + '/MEG/sample/sample_audvis_raw.fif' raw = mne.io.read_raw_fif(fname) # Set up pick list: MEG + STI 014 - bad channels want_meg = True want_eeg = False want_stim = False include = ['STI 014'] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim, include=include, exclude='bads') some_picks = picks[:5] # take 5 first start, stop = raw.time_as_index([0, 15]) # read the first 15s of data data, times = raw[some_picks, start:(stop + 1)] # save 150s of MEG data in FIF file raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks, overwrite=True)
0.12/_downloads/plot_read_and_write_raw_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show MEG data
raw.plot()
0.12/_downloads/plot_read_and_write_raw_data.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The aim of our visualization is to illustrate the total number of posts by each community member, and the flows of posts between pairs of friends.
import platform, plotly import numpy as np from numpy import pi print(f'Python version: {platform.python_version()}') print(f'Plotly version: {plotly.__version__}')
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Define the array of data:
matrix = np.array([[16, 3, 28, 0, 18], [18, 0, 12, 5, 29], [ 9, 11, 17, 27, 0], [19, 0, 31, 11, 12], [23, 17, 10, 0, 34]], dtype=int) def check_data(data_matrix): L, M = data_matrix.shape if L != M: raise ValueError('Data array must have a (n,n) shape') return L L = check_data(matrix)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
A chord diagram encodes information in two graphical objects: - ideograms, represented by distinctly colored arcs of circles; - ribbons, which are planar shapes bounded by two quadratic Bezier curves and two arcs of circle, and which can degenerate to a point; Ideograms Summing up the entries on each matrix row, one gets a value (in our example this value is equal to the number of posts by a community member). Let us denote by total_comments the total number of posts recorded in this community. Theoretically the interval [0, total_comments) is mapped linearly onto the unit circle, identified with the interval $[0,2\pi)$. For a better looking plot one proceeds as follows: starting from the angular position $0$, in counter-clockwise direction, one draws successively, around the unit circle, two parallel arcs of length equal to a mapped row sum value, minus a fixed gap. Click the image below:
from IPython.display import IFrame IFrame('https://plot.ly/~empet/12234/', width=377, height=420)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Now we define functions that process the data in order to get the ideogram ends. As we pointed out, the unit circle is oriented counter-clockwise. In order to get an arc of circle with end angular coordinates $\theta_0<\theta_1$, we define the function moduloAB, which resolves the case when an arc contains the point of angular coordinate $0$ (for example $\theta_0=2\pi-\pi/12$, $\theta_1=\pi/9$). The function corresponding to $a=-\pi, b=\pi$ allows us to map the interval $[0,2\pi)$ onto $[-\pi, \pi)$. Via this transformation we have: $\theta_0\mapsto \theta'_0=-\pi/12$ and $\theta_1\mapsto \theta'_1=\pi/9$, and now $\theta'_0<\theta'_1$.
def moduloAB(x, a, b): #maps a real number onto the unit circle identified with #the interval [a,b), b-a=2*PI if a>= b: raise ValueError('Incorrect interval ends') y = (x-a) % (b-a) return y+b if y < 0 else y+a def test_2PI(x): return 0 <= x < 2*pi
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
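A quick numerical check of the example from the text, mapping angles from $[0, 2\pi)$ onto $[-\pi, \pi)$ with the function just defined:

```python
# 2*pi - pi/12 wraps to -pi/12, while pi/9 is left (essentially) unchanged
print(moduloAB(2*pi - pi/12, -pi, pi))   # ~ -0.2618 = -pi/12
print(moduloAB(pi/9, -pi, pi))           # ~  0.3491 =  pi/9
```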
Compute the row sums and the lengths of corresponding ideograms:
row_sum = [matrix[k,:].sum() for k in range(L)] #set the gap between two consecutive ideograms gap = 2*pi*0.005 ideogram_length = 2*pi * np.asarray(row_sum) / sum(row_sum) - gap*np.ones(L)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
The next function returns the list of end angular coordinates for each ideogram arc:
def get_ideogram_ends(ideogram_len, gap): ideo_ends = [] left = 0 for k in range(len(ideogram_len)): right = left + ideogram_len[k] ideo_ends.append([left, right]) left = right + gap return ideo_ends ideo_ends = get_ideogram_ends(ideogram_length, gap)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
The function make_ideogram_arc returns equally spaced points on an ideogram arc, expressed as complex numbers in polar form:
def make_ideogram_arc(R, phi, a=50): # R is the circle radius # phi is the list of angle coordinates of an arc ends # a is a parameter that controls the number of points to be evaluated on an arc if not test_2PI(phi[0]) or not test_2PI(phi[1]): phi = [moduloAB(t, 0, 2*pi) for t in phi] length = (phi[1]-phi[0]) % 2*pi nr = 5 if length <= pi/4 else int(a*length/pi) if phi[0] < phi[1]: theta = np.linspace(phi[0], phi[1], nr) else: phi = [moduloAB(t, -pi, pi) for t in phi] theta = np.linspace(phi[0], phi[1], nr) return R * np.exp(1j*theta)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
The real and imaginary parts of these complex numbers will be used to define the ideogram as a Plotly shape bounded by an SVG path.
make_ideogram_arc(1.3, [11*pi/6, pi/17])
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Set ideograms labels and colors:
labels=['Emma', 'Isabella', 'Ava', 'Olivia', 'Sophia'] ideo_colors=['rgba(244, 109, 67, 0.75)', 'rgba(253, 174, 97, 0.75)', 'rgba(254, 224, 139, 0.75)', 'rgba(217, 239, 139, 0.75)', 'rgba(166, 217, 106, 0.75)']#brewer colors with alpha set on 0.75
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Ribbons in a chord diagram While ideograms illustrate how many comments each member of the Facebook community posted, ribbons give comparative information on the flows of comments from one friend to another. To illustrate this flow we map the data onto the unit circle. More precisely, for each matrix row $k$, the map t $\mapsto$ t * ideogram_length[k] / row_sum[k] sends the interval [0, row_sum[k]] onto the interval [0, ideogram_length[k]]. Hence each entry matrix[k][j] in the $k^{th}$ row is mapped to matrix[k][j] * ideogram_length[k] / row_sum[k]. The function map_data maps all matrix entries to the corresponding values in the intervals associated with the ideograms:
def map_data(data_matrix, row_value, ideogram_length): mapped = np.zeros(data_matrix.shape) for j in range(L): mapped[:, j] = ideogram_length * data_matrix[:,j] / row_value return mapped mapped_data = map_data(matrix, row_sum, ideogram_length) mapped_data
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
To each pair of values (mapped_data[k][j], mapped_data[j][k]), $k\leq j$, one associates a ribbon, that is, a curvilinear filled rectangle (possibly degenerate), having as opposite sides two subarcs of the $k^{th}$ and $j^{th}$ ideograms, and two arcs of quadratic Bézier curves. Here we illustrate the ribbons associated with the pairs (mapped_data[0][j], mapped_data[j][0]), $j=\overline{0,4}$, which show the flow of comments between Emma and all other friends, and herself:
IFrame('https://plot.ly/~empet/12519/', width=420, height=420)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
For a better looking chord diagram, the Circos documentation recommends sorting each row of mapped_data in increasing order. The array idx_sort, defined below, has on each row the indices that sort the corresponding row in mapped_data:
idx_sort = np.argsort(mapped_data, axis=1) idx_sort
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
In the following we call ribbon ends the lists l=[l[0], l[1]], r=[r[0], r[1]] whose elements are the angular coordinates of the ends of the arcs that are opposite sides in a ribbon. These arcs are sub-arcs of the internal boundaries of the ideograms connected by the ribbon (see the image above). Compute the ribbon ends and store them as tuples in a list of lists ($L\times L$):
def make_ribbon_ends(mapped_data, ideo_ends, idx_sort): L = mapped_data.shape[0] ribbon_boundary = np.zeros((L,L+1)) for k in range(L): start = ideo_ends[k][0] ribbon_boundary[k][0] = start for j in range(1,L+1): J = idx_sort[k][j-1] ribbon_boundary[k][j] = start + mapped_data[k][J] start = ribbon_boundary[k][j] return [[(ribbon_boundary[k][j], ribbon_boundary[k][j+1] ) for j in range(L)] for k in range(L)] ribbon_ends = make_ribbon_ends(mapped_data, ideo_ends, idx_sort) print ('ribbon ends starting from the ideogram[2]\n', ribbon_ends[2])
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
We note that ribbon_ends[k][j] corresponds to mapped_data[k][idx_sort[k][j]], i.e. the length of the arc with ends in ribbon_ends[k][j] is equal to mapped_data[k][idx_sort[k][j]]. Now we define a few functions that compute the control points for the Bézier ribbon sides. The function control_pts returns the cartesian coordinates of the control points, $b_0, b_1, b_2$, assumed to lie initially on the unit circle, and thus defined only by their angular coordinates. The angular coordinate of the point $b_1$ is the mean of the angular coordinates of the points $b_0, b_2$. Since for a Bézier ribbon side only $b_0, b_2$ are placed on the unit circle, one gives radius as a parameter that controls the position of $b_1$: radius is the distance of $b_1$ to the circle center.
def control_pts(angle, radius): #angle is a 3-list containing angular coordinates of the control points b0, b1, b2 #radius is the distance from b1 to the origin O(0,0) if len(angle) != 3: raise ValueError('angle must have len = 3') b_cplx = np.array([np.exp(1j*angle[k]) for k in range(3)]) b_cplx[1] = radius * b_cplx[1] return list(zip(b_cplx.real, b_cplx.imag)) def ctrl_rib_chords(l, r, radius): # this function returns a 2-list containing the control polygons of the two quadratic Bezier #curves that are opposite sides in a ribbon #l (r) is the list of angular variables of the ribbon arc ends defining #the ribbon starting (ending) arc # radius is a common parameter for both control polygons if len(l) != 2 or len(r) != 2: raise ValueError('the arc ends must be elements in a list of len 2') return [control_pts([l[j], (l[j]+r[j])/2, r[j]], radius) for j in range(2)]
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Each ribbon is colored with the color of one of the two ideograms it connects. We define an L-list of L-lists of colors for ribbons. Denote it by ribbon_color. ribbon_color[k][j] is the Plotly color string for the ribbon associated to mapped_data[k][j] and mapped_data[j][k], i.e. the ribbon connecting two subarcs in the $k^{th}$, respectively, $j^{th}$ ideogram. Hence this structure is symmetric. Initially we define:
ribbon_color = [L * [ideo_colors[k]] for k in range(L)]
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
and then we change the color in a few positions. For our example we perform the following color changes:
ribbon_color[0][4]=ideo_colors[4] ribbon_color[1][2]=ideo_colors[2] ribbon_color[2][3]=ideo_colors[3] ribbon_color[2][4]=ideo_colors[4]
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
The symmetric locations are not modified, because we do not access ribbon_color[k][j], $k>j$, when drawing the ribbons. Functions that return the Plotly SVG paths that are ribbon boundaries:
def make_q_bezier(b):# defines the Plotly SVG path for a quadratic Bezier curve defined by the #list of its control points if len(b) != 3: raise ValueError('control polygon must have 3 points') A, B, C = b return f'M {A[0]}, {A[1]} Q {B[0]}, {B[1]} {C[0]}, {C[1]}' b=[(1,4), (-0.5, 2.35), (3.745, 1.47)] make_q_bezier(b)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
make_ribbon_arc returns the Plotly SVG path corresponding to an arc represented by its end angular coordinates, theta0, theta1.
def make_ribbon_arc(theta0, theta1): if test_2PI(theta0) and test_2PI(theta1): if theta0 < theta1: theta0 = moduloAB(theta0, -pi, pi) theta1 = moduloAB(theta1, -pi, pi) if theta0 *theta1 > 0: raise ValueError('incorrect angle coordinates for ribbon') nr = int(40 * (theta0 - theta1) / pi) if nr <= 2: nr = 3 theta = np.linspace(theta0, theta1, nr) pts=np.exp(1j*theta)# points in polar complex form, on the given arc string_arc = '' for k in range(len(theta)): string_arc += f'L {pts.real[k]}, {pts.imag[k]} ' return string_arc else: raise ValueError('the angle coordinates for an arc side of a ribbon must be in [0, 2*pi]') make_ribbon_arc(np.pi/3, np.pi/6)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Finally we are ready to define data and layout for the Plotly plot of the chord diagram.
import plotly.graph_objects as go def plot_layout(title, plot_size): return dict(title=title, xaxis=dict(visible=False), yaxis=dict(visible=False), showlegend=False, width=plot_size, height=plot_size, margin=dict(t=25, b=25, l=25, r=25), hovermode='closest', )
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Function that returns the Plotly shape of an ideogram:
def make_ideo_shape(path, line_color, fill_color): #line_color is the color of the shape boundary #fill_color is the color assigned to an ideogram return dict(line=dict(color=line_color, width=0.45), path=path, layer='below', type='path', fillcolor=fill_color)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
We generate two types of ribbons: a ribbon connecting subarcs in two distinct ideograms, respectively a ribbon from one ideogram to itself (it corresponds to mapped_data[k][k], i.e. it gives the flow of comments from a community member to herself).
def make_ribbon(l, r, line_color, fill_color, radius=0.2): #l=[l[0], l[1]], r=[r[0], r[1]] represent the opposite arcs in the ribbon #line_color is the color of the shape boundary #fill_color is the fill color for the ribbon shape poligon = ctrl_rib_chords(l,r, radius) b, c = poligon return dict(line=dict(color=line_color, width=0.5), path=make_q_bezier(b) + make_ribbon_arc(r[0], r[1])+ make_q_bezier(c[::-1]) + make_ribbon_arc(l[1], l[0]), type='path', layer='below', fillcolor = fill_color, ) def make_self_rel(l, line_color, fill_color, radius): #radius is the radius of Bezier control point b_1 b = control_pts([l[0], (l[0]+l[1])/2, l[1]], radius) return dict(line = dict(color=line_color, width=0.5), path = make_q_bezier(b)+make_ribbon_arc(l[1], l[0]), type = 'path', layer = 'below', fillcolor = fill_color ) def invPerm(perm): # function that returns the inverse of a permutation, perm inv = [0] * len(perm) for i, s in enumerate(perm): inv[s] = i return inv layout=plot_layout('Chord diagram', 400)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Now let us explain the key point of associating ribbons with the right data: from the definition of ribbon_ends we notice that ribbon_ends[k][j] corresponds to the data stored in matrix[k][sigma[j]], where sigma is the permutation of the indices $0, 1, \ldots, L-1$ that sorts row k in mapped_data. If sigma_inv is the inverse permutation of sigma, then matrix[k][j] corresponds to ribbon_ends[k][sigma_inv[j]]. ribbon_info is a list of Plotly Scatter traces setting the information that is displayed when hovering the mouse over the ribbon ends. Set the radius of the Bézier control point, $b_1$, for each ribbon associated with a diagonal data entry:
radii_sribb = [0.4, 0.30, 0.35, 0.39, 0.12]# these value are set after a few trials ribbon_info = [] shapes = [] for k in range(L): sigma = idx_sort[k] sigma_inv = invPerm(sigma) for j in range(k, L): if matrix[k][j] == 0 and matrix[j][k]==0: continue eta = idx_sort[j] eta_inv = invPerm(eta) l = ribbon_ends[k][sigma_inv[j]] if j == k: shapes.append(make_self_rel(l, 'rgb(175,175,175)' , ideo_colors[k], radius=radii_sribb[k])) z = 0.9*np.exp(1j*(l[0]+l[1])/2) #the text below will be displayed when hovering the mouse over the ribbon text = f'{labels[k]} commented on {int(matrix[k][k])} of herself Fb posts' ribbon_info.append(go.Scatter(x=[z.real], y=[z.imag], mode='markers', marker=dict(size=0.5, color=ideo_colors[k]), text=text, hoverinfo='text' ) ) else: r = ribbon_ends[j][eta_inv[k]] zi = 0.9 * np.exp(1j*(l[0]+l[1])/2) zf = 0.9 * np.exp(1j*(r[0]+r[1])/2) #texti and textf are the strings that will be displayed when hovering the mouse #over the two ribbon ends texti = f'{labels[k]} commented on {int(matrix[k][j])} of {labels[j]} Fb posts' textf = f'{labels[j]} commented on {int(matrix[j][k])} of {labels[k]} Fb posts' ribbon_info.append(go.Scatter(x=[zi.real], y=[zi.imag], mode='markers', marker=dict(size=0.5, color=ribbon_color[k][j]), text=texti, hoverinfo='text' ) ), ribbon_info.append(go.Scatter(x=[zf.real], y=[zf.imag], mode='markers', marker=dict(size=0.5, color=ribbon_color[k][j]), text=textf, hoverinfo='text' ) ) r = (r[1], r[0]) # IMPORTANT!!! Reverse these arc ends because otherwise you get # a twisted ribbon #append the ribbon shape shapes.append(make_ribbon(l, r, 'rgb(175,175,175)' , ribbon_color[k][j]))
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
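A small sanity check of the permutation bookkeeping described above, using the idx_sort array and the invPerm helper defined in the earlier cells: composing a row of idx_sort with its inverse recovers the identity.

```python
sigma = idx_sort[0]
sigma_inv = invPerm(sigma)
print(sigma)                          # permutation sorting row 0 of mapped_data
print(sigma_inv)                      # its inverse
print([sigma[s] for s in sigma_inv])  # [0, 1, 2, 3, 4]
```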
ideograms is a list of Scatter traces that set the position and color of the ideograms, as well as the information associated with each ideogram.
ideograms = [] for k in range(len(ideo_ends)): z = make_ideogram_arc(1.1, ideo_ends[k]) zi = make_ideogram_arc(1.0, ideo_ends[k]) m = len(z) n = len(zi) ideograms.append(go.Scatter(x=z.real, y=z.imag, mode='lines', line=dict(color=ideo_colors[k], shape='spline', width=0.25), text=f'{labels[k]} <br>{int(row_sum[k])} comments', hoverinfo='text' ) ) path = 'M ' for s in range(m): path += f'{z.real[s]}, {z.imag[s]} L ' Zi = np.array(zi.tolist()[::-1]) for s in range(m): path += f'{Zi.real[s]}, {Zi.imag[s]} L ' path += f'{z.real[0]} ,{z.imag[0]}' shapes.append(make_ideo_shape(path,'rgb(150,150,150)' , ideo_colors[k])) data = ideograms + ribbon_info layout['shapes'] = shapes fig = go.Figure(data=data, layout=layout) from plotly.offline import download_plotlyjs, init_notebook_mode, iplot, plot init_notebook_mode(connected=True) iplot(fig)
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
Here is a chord diagram associated to a community of 8 Facebook friends:
IFrame('https://plot.ly/~empet/12148/chord-diagram-of-facebook-comments-in-a-community/', width=500, height=500) from IPython.core.display import HTML def css_styling(): styles = open("./custom.css", "r").read() return HTML(styles) css_styling()
Chord-diagram.ipynb
empet/Plotly-plots
gpl-3.0
We take a peek at the training data with the head() method below.
X_train.head()
notebooks/ml_intermediate/raw/tut3.ipynb
Kaggle/learntools
apache-2.0
Next, we obtain a list of all of the categorical variables in the training data. We do this by checking the data type (or dtype) of each column. The object dtype indicates a column has text (there are other things it could theoretically be, but that's unimportant for our purposes). For this dataset, the columns with text indicate categorical variables.
# Get list of categorical variables s = (X_train.dtypes == 'object') object_cols = list(s[s].index) print("Categorical variables:") print(object_cols)
notebooks/ml_intermediate/raw/tut3.ipynb
Kaggle/learntools
apache-2.0
Define Function to Measure Quality of Each Approach We define a function score_dataset() to compare the three different approaches to dealing with categorical variables. This function reports the mean absolute error (MAE) from a random forest model. In general, we want the MAE to be as low as possible!
#$HIDE$ from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error # Function for comparing different approaches def score_dataset(X_train, X_valid, y_train, y_valid): model = RandomForestRegressor(n_estimators=100, random_state=0) model.fit(X_train, y_train) preds = model.predict(X_valid) return mean_absolute_error(y_valid, preds)
notebooks/ml_intermediate/raw/tut3.ipynb
Kaggle/learntools
apache-2.0
Score from Approach 1 (Drop Categorical Variables) We drop the object columns with the select_dtypes() method.
drop_X_train = X_train.select_dtypes(exclude=['object']) drop_X_valid = X_valid.select_dtypes(exclude=['object']) print("MAE from Approach 1 (Drop categorical variables):") print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))
notebooks/ml_intermediate/raw/tut3.ipynb
Kaggle/learntools
apache-2.0
Score from Approach 2 (Ordinal Encoding) Scikit-learn has a OrdinalEncoder class that can be used to get ordinal encodings. We loop over the categorical variables and apply the ordinal encoder separately to each column.
from sklearn.preprocessing import OrdinalEncoder # Make copy to avoid changing original data label_X_train = X_train.copy() label_X_valid = X_valid.copy() # Apply ordinal encoder to each column with categorical data ordinal_encoder = OrdinalEncoder() label_X_train[object_cols] = ordinal_encoder.fit_transform(X_train[object_cols]) label_X_valid[object_cols] = ordinal_encoder.transform(X_valid[object_cols]) print("MAE from Approach 2 (Ordinal Encoding):") print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))
notebooks/ml_intermediate/raw/tut3.ipynb
Kaggle/learntools
apache-2.0
In the code cell above, for each column, we randomly assign each unique value to a different integer. This is a common approach that is simpler than providing custom labels; however, we can expect an additional boost in performance if we provide better-informed labels for all ordinal variables. Score from Approach 3 (One-Hot Encoding) We use the OneHotEncoder class from scikit-learn to get one-hot encodings. There are a number of parameters that can be used to customize its behavior. - We set handle_unknown='ignore' to avoid errors when the validation data contains classes that aren't represented in the training data, and - setting sparse=False ensures that the encoded columns are returned as a numpy array (instead of a sparse matrix). To use the encoder, we supply only the categorical columns that we want to be one-hot encoded. For instance, to encode the training data, we supply X_train[object_cols]. (object_cols in the code cell below is a list of the column names with categorical data, and so X_train[object_cols] contains all of the categorical data in the training set.)
from sklearn.preprocessing import OneHotEncoder # Apply one-hot encoder to each column with categorical data OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False) OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols])) OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[object_cols])) # One-hot encoding removed index; put it back OH_cols_train.index = X_train.index OH_cols_valid.index = X_valid.index # Remove categorical columns (will replace with one-hot encoding) num_X_train = X_train.drop(object_cols, axis=1) num_X_valid = X_valid.drop(object_cols, axis=1) # Add one-hot encoded columns to numerical features OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1) OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1) print("MAE from Approach 3 (One-Hot Encoding):") print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))
notebooks/ml_intermediate/raw/tut3.ipynb
Kaggle/learntools
apache-2.0
2 - Outline of the Assignment To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment; you will: Initialize the parameters for a two-layer network and for an $L$-layer neural network. Implement the forward propagation module (shown in purple in the figure below). Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). We give you the ACTIVATION function (relu/sigmoid). Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function. Compute the loss. Implement the backward propagation module (denoted in red in the figure below). Complete the LINEAR part of a layer's backward propagation step. We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward). Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function. Finally, update the parameters. <img src="images/final outline.png" style="width:800px;height:500px;"> <caption><center> Figure 1</center></caption><br> Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. 3 - Initialization You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two-layer model. The second one will generalize this initialization process to $L$ layers. 3.1 - 2-layer Neural Network Exercise: Create and initialize the parameters of the 2-layer neural network. Instructions: - The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID. - Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape. - Use zero initialization for the biases. Use np.zeros(shape).
# GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): """ Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: parameters -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) """ np.random.seed(1) ### START CODE HERE ### (≈ 4 lines of code) W1 = np.random.randn(n_h, n_x)*0.01 b1 = np.zeros(shape=(n_h, 1)) W2 = np.random.randn(n_y, n_h)*0.01 b2 = np.zeros(shape=(n_y, 1)) ### END CODE HERE ### assert(W1.shape == (n_h, n_x)) assert(b1.shape == (n_h, 1)) assert(W2.shape == (n_y, n_h)) assert(b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters = initialize_parameters(3,2,1) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected output: <table style="width:80%"> <tr> <td> **W1** </td> <td> [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] </td> </tr> <tr> <td> **b1**</td> <td>[[ 0.] [ 0.]]</td> </tr> <tr> <td>**W2**</td> <td> [[ 0.01744812 -0.00761207]]</td> </tr> <tr> <td> **b2** </td> <td> [[ 0.]] </td> </tr> </table> 3.2 - L-layer Neural Network The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: <table style="width:100%"> <tr> <td> </td> <td> **Shape of W** </td> <td> **Shape of b** </td> <td> **Activation** </td> <td> **Shape of Activation** </td> <tr> <tr> <td> **Layer 1** </td> <td> $(n^{[1]},12288)$ </td> <td> $(n^{[1]},1)$ </td> <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> <td> $(n^{[1]},209)$ </td> <tr> <tr> <td> **Layer 2** </td> <td> $(n^{[2]}, n^{[1]})$ </td> <td> $(n^{[2]},1)$ </td> <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> <td> $(n^{[2]}, 209)$ </td> <tr> <tr> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$ </td> <td> $\vdots$</td> <td> $\vdots$ </td> <tr> <tr> <td> **Layer L-1** </td> <td> $(n^{[L-1]}, n^{[L-2]})$ </td> <td> $(n^{[L-1]}, 1)$ </td> <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> <td> $(n^{[L-1]}, 209)$ </td> <tr> <tr> <td> **Layer L** </td> <td> $(n^{[L]}, n^{[L-1]})$ </td> <td> $(n^{[L]}, 1)$ </td> <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td> <td> $(n^{[L]}, 209)$ </td> <tr> </table> Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u \end{bmatrix}\tag{2}$$ Then $WX + b$ will be: $$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u \end{bmatrix}\tag{3} $$ Exercise: Implement initialization for an L-layer Neural Network. Instructions: - The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function. - Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01. - Use zeros initialization for the biases. Use np.zeros(shape). - We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network). python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
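A small numpy sketch (with illustrative shapes of my own choosing) of the broadcasting described above: a bias column vector b of shape (3, 1) is added to every column of the product W X.

```python
import numpy as np

np.random.seed(0)
W = np.random.randn(3, 3)
X = np.random.randn(3, 3)
b = np.random.randn(3, 1)

Z = np.dot(W, X) + b          # b is broadcast across the 3 columns
print(Z.shape)                # (3, 3)
print(np.allclose(Z[:, 0], W.dot(X[:, 0]) + b[:, 0]))  # True: b added column-wise
```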
# GRADED FUNCTION: initialize_parameters_deep def initialize_parameters_deep(layer_dims): """ Arguments: layer_dims -- python array (list) containing the dimensions of each layer in our network Returns: parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL": Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1]) bl -- bias vector of shape (layer_dims[l], 1) """ np.random.seed(3) parameters = {} L = len(layer_dims) # number of layers in the network for l in range(1, L): ### START CODE HERE ### (≈ 2 lines of code) parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01 parameters['b' + str(l)] = np.zeros((layer_dims[l], 1)) ### END CODE HERE ### assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1])) assert(parameters['b' + str(l)].shape == (layer_dims[l], 1)) return parameters parameters = initialize_parameters_deep([5,4,3]) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"]))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected output: <table style="width:80%"> <tr> <td> **W1** </td> <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> </tr> <tr> <td>**b1** </td> <td>[[ 0.] [ 0.] [ 0.] [ 0.]]</td> </tr> <tr> <td>**W2** </td> <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> </tr> <tr> <td>**b2** </td> <td>[[ 0.] [ 0.] [ 0.]]</td> </tr> </table> 4 - Forward propagation module 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order: LINEAR LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model) The linear forward module (vectorized over all the examples) computes the following equations: $$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$ where $A^{[0]} = X$. Exercise: Build the linear part of forward propagation. Reminder: The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
# GRADED FUNCTION: linear_forward def linear_forward(A, W, b): """ Implement the linear part of a layer's forward propagation. Arguments: A -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) Returns: Z -- the input of the activation function, also called pre-activation parameter cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently """ ### START CODE HERE ### (≈ 1 line of code) Z = np.dot(W,A)+b ### END CODE HERE ### assert(Z.shape == (W.shape[0], A.shape[1])) cache = (A, W, b) return Z, cache A, W, b = linear_forward_test_case() Z, linear_cache = linear_forward(A, W, b) print("Z = " + str(Z))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected output: <table style="width:35%"> <tr> <td> **Z** </td> <td> [[ 3.26295337 -1.23429987]] </td> </tr> </table> 4.2 - Linear-Activation Forward In this notebook, you will use two activation functions: Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call: python A, activation_cache = sigmoid(Z) ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call: python A, activation_cache = relu(Z) For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step. Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
# GRADED FUNCTION: linear_activation_forward def linear_activation_forward(A_prev, W, b, activation): """ Implement the forward propagation for the LINEAR->ACTIVATION layer Arguments: A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples) W -- weights matrix: numpy array of shape (size of current layer, size of previous layer) b -- bias vector, numpy array of shape (size of the current layer, 1) activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: A -- the output of the activation function, also called the post-activation value cache -- a python dictionary containing "linear_cache" and "activation_cache"; stored for computing the backward pass efficiently """ if activation == "sigmoid": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = sigmoid(Z) ### END CODE HERE ### elif activation == "relu": # Inputs: "A_prev, W, b". Outputs: "A, activation_cache". ### START CODE HERE ### (≈ 2 lines of code) Z, linear_cache = linear_forward(A_prev, W, b) A, activation_cache = relu(Z) ### END CODE HERE ### assert (A.shape == (W.shape[0], A_prev.shape[1])) cache = (linear_cache, activation_cache) return A, cache A_prev, W, b = linear_activation_forward_test_case() A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid") print("With sigmoid: A = " + str(A)) A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu") print("With ReLU: A = " + str(A))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected output: <table style="width:35%"> <tr> <td> **With sigmoid: A ** </td> <td > [[ 0.96890023 0.11013289]]</td> </tr> <tr> <td> **With ReLU: A ** </td> <td > [[ 3.43896131 0. ]]</td> </tr> </table> Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID. <img src="images/model_architecture_kiank.png" style="width:600px;height:300px;"> <caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br> Exercise: Implement the forward propagation of the above model. Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.) Tips: - Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times - Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
# GRADED FUNCTION: L_model_forward def L_model_forward(X, parameters): """ Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation Arguments: X -- data, numpy array of shape (input size, number of examples) parameters -- output of initialize_parameters_deep() Returns: AL -- last post-activation value caches -- list of caches containing: every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2) the cache of linear_sigmoid_forward() (there is one, indexed L-1) """ caches = [] A = X L = len(parameters) // 2 # number of layers in the neural network # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list. for l in range(1, L): A_prev = A ### START CODE HERE ### (≈ 2 lines of code) A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], activation='relu') caches.append(cache) ### END CODE HERE ### # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list. ### START CODE HERE ### (≈ 2 lines of code) AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], activation='sigmoid') caches.append(cache) ### END CODE HERE ### assert(AL.shape == (1,X.shape[1])) return AL, caches X, parameters = L_model_forward_test_case_2hidden() AL, caches = L_model_forward(X, parameters) print("AL = " + str(AL)) print("Length of caches list = " + str(len(caches)))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
<table style="width:50%"> <tr> <td> **AL** </td> <td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td> </tr> <tr> <td> **Length of caches list ** </td> <td > 3 </td> </tr> </table> Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. 5 - Cost function Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning. Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
# GRADED FUNCTION: compute_cost def compute_cost(AL, Y): """ Implement the cost function defined by equation (7). Arguments: AL -- probability vector corresponding to your label predictions, shape (1, number of examples) Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples) Returns: cost -- cross-entropy cost """ m = Y.shape[1] # Compute loss from aL and y. ### START CODE HERE ### (≈ 1 lines of code) cost = (-1/m)*(np.dot(np.log(AL), Y.T)+np.dot(np.log(1-AL), (1-Y).T)) ### END CODE HERE ### cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17). assert(cost.shape == ()) return cost Y, AL = compute_cost_test_case() print("cost = " + str(compute_cost(AL, Y)))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected Output: <table> <tr> <td>**cost** </td> <td> 0.41493159961539694</td> </tr> </table> 6 - Backward propagation module Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. Reminder: <img src="images/backprop_kiank.png" style="width:650px;height:250px;"> <caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption> <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows: $$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$ In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted. Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$. This is why we talk about **backpropagation**. !--> Now, similar to forward propagation, you are going to build the backward propagation in three steps: - LINEAR backward - LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) 6.1 - Linear backward For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation). Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$. <img src="images/linearback_kiank.png" style="width:250px;height:300px;"> <caption><center> Figure 4 </center></caption> The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need: $$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$ $$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l] (i)}\tag{9}$$ $$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ Exercise: Use the 3 formulas above to implement linear_backward().
# GRADED FUNCTION: linear_backward def linear_backward(dZ, cache): """ Implement the linear portion of backward propagation for a single layer (layer l) Arguments: dZ -- Gradient of the cost with respect to the linear output (of current layer l) cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ A_prev, W, b = cache m = A_prev.shape[1] ### START CODE HERE ### (≈ 3 lines of code) dW = np.dot(dZ, cache[0].T)/m db = ((np.sum(dZ, axis=1, keepdims=True))/m) dA_prev = np.dot(cache[1].T, dZ) ### END CODE HERE ### assert (dA_prev.shape == A_prev.shape) assert (dW.shape == W.shape) assert (db.shape == b.shape) # print (b.shape, db.shape) return dA_prev, dW, db # Set up some test inputs dZ, linear_cache = linear_backward_test_case() dA_prev, dW, db = linear_backward(dZ, linear_cache) print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected Output: <table style="width:90%"> <tr> <td> **dA_prev** </td> <td > [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] </td> </tr> <tr> <td> **dW** </td> <td > [[-0.10076895 1.40685096 1.64992505]] </td> </tr> <tr> <td> **db** </td> <td> [[ 0.50629448]] </td> </tr> </table> 6.2 - Linear-Activation backward Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward. To help you implement linear_activation_backward, we provided two backward functions: - sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows: python dZ = sigmoid_backward(dA, activation_cache) relu_backward: Implements the backward propagation for RELU unit. You can call it as follows: python dZ = relu_backward(dA, activation_cache) If $g(.)$ is the activation function, sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$. Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
# GRADED FUNCTION: linear_activation_backward def linear_activation_backward(dA, cache, activation): """ Implement the backward propagation for the LINEAR->ACTIVATION layer. Arguments: dA -- post-activation gradient for current layer l cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu" Returns: dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev dW -- Gradient of the cost with respect to W (current layer l), same shape as W db -- Gradient of the cost with respect to b (current layer l), same shape as b """ linear_cache, activation_cache = cache if activation == "relu": ### START CODE HERE ### (≈ 2 lines of code) dZ = relu_backward(dA, activation_cache) dA_prev, dW, db = linear_backward(dZ, linear_cache) ### END CODE HERE ### elif activation == "sigmoid": ### START CODE HERE ### (≈ 2 lines of code) dZ = sigmoid_backward(dA, activation_cache) dA_prev, dW, db = linear_backward(dZ, linear_cache) ### END CODE HERE ### return dA_prev, dW, db AL, linear_activation_cache = linear_activation_backward_test_case() dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid") print ("sigmoid:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db) + "\n") dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu") print ("relu:") print ("dA_prev = "+ str(dA_prev)) print ("dW = " + str(dW)) print ("db = " + str(db))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected output with sigmoid: <table style="width:100%"> <tr> <td > dA_prev </td> <td >[[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> </tr> <tr> <td > db </td> <td > [[-0.05729622]] </td> </tr> </table> Expected output with relu: <table style="width:100%"> <tr> <td > dA_prev </td> <td > [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. ]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> </tr> <tr> <td > db </td> <td > [[-0.20837892]] </td> </tr> </table> 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. <img src="images/mn_backward.png" style="width:450px;height:300px;"> <caption><center> Figure 5 : Backward pass </center></caption> Initializing backpropagation: To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$. To do so, use this formula (derived using calculus which you don't need in-depth knowledge of): python dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$ For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"]. Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
# GRADED FUNCTION: L_model_backward def L_model_backward(AL, Y, caches): """ Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group Arguments: AL -- probability vector, output of the forward propagation (L_model_forward()) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) caches -- list of caches containing: every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2) the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1]) Returns: grads -- A dictionary with the gradients grads["dA" + str(l)] = ... grads["dW" + str(l)] = ... grads["db" + str(l)] = ... """ grads = {} L = len(caches) # the number of layers m = AL.shape[1] Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL # Initializing the backpropagation ### START CODE HERE ### (1 line of code) dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) ### END CODE HERE ### # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"] ### START CODE HERE ### (approx. 2 lines) current_cache = caches[-1] grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation='sigmoid') ### END CODE HERE ### for l in reversed(range(L-1)): # lth layer: (RELU -> LINEAR) gradients. # Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] ### START CODE HERE ### (approx. 5 lines) current_cache = caches[l] dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA"+str(l+2)], current_cache, activation='relu') grads["dA" + str(l + 1)] = dA_prev_temp grads["dW" + str(l + 1)] = dW_temp grads["db" + str(l + 1)] = db_temp ### END CODE HERE ### return grads AL, Y_assess, caches = L_model_backward_test_case() grads = L_model_backward(AL, Y_assess, caches) print_grads(grads)
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Expected Output <table style="width:60%"> <tr> <td > dW1 </td> <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> </tr> <tr> <td > db1 </td> <td > [[-0.22007063] [ 0. ] [-0.02835349]] </td> </tr> <tr> <td > dA1 </td> <td > [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]] </td> </tr> </table> 6.4 - Update Parameters In this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$ where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. Exercise: Implement update_parameters() to update your parameters using gradient descent. Instructions: Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
# GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate): """ Update parameters using gradient descent Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients, output of L_model_backward Returns: parameters -- python dictionary containing your updated parameters parameters["W" + str(l)] = ... parameters["b" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural network # Update rule for each parameter. Use a for loop. ### START CODE HERE ### (≈ 3 lines of code) for l in range(L): parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)] parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)] ### END CODE HERE ### return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads, 0.1) print ("W1 = "+ str(parameters["W1"])) print ("b1 = "+ str(parameters["b1"])) print ("W2 = "+ str(parameters["W2"])) print ("b2 = "+ str(parameters["b2"]))
Intro_to_Neural_Networks/Building+your+Deep+Neural+Network+-+Step+by+Step+v5.ipynb
radu941208/DeepLearning
mit
Making Python faster This homework provides practice in making Python code faster. Note that we start with functions that already use idiomatic numpy (which are about two orders of magnitude faster than the pure Python versions). Functions to optimize
def logistic(x): """Logistic function.""" return np.exp(x)/(1 + np.exp(x)) def gd(X, y, beta, alpha, niter): """Gradient descent algorithm.""" n, p = X.shape Xt = X.T for i in range(niter): y_pred = logistic(X @ beta) epsilon = y - y_pred grad = Xt @ epsilon / n beta += alpha * grad return beta x = np.linspace(-6, 6, 100) plt.plot(x, logistic(x)) pass
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
Data set for classification
n = 10000 p = 2 X, y = make_blobs(n_samples=n, n_features=p, centers=2, cluster_std=1.05, random_state=23) X = np.c_[np.ones(len(X)), X] y = y.astype('float')
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
Using gradient descent for classification by logistic regression
# initial parameters niter = 1000 α = 0.01 β = np.zeros(p+1) # call gradient descent β = gd(X, y, β, α, niter) # assign labels to points based on prediction y_pred = logistic(X @ β) labels = y_pred > 0.5 # calculate separating plane sep = (-β[0] - β[1] * X)/β[2] plt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter') plt.plot(X, sep, 'r-') pass
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
1. Rewrite the logistic function so it only makes one np.exp call. Compare the time of both versions with the input x given below using the %timeit magic. (10 points)
np.random.seed(123) n = int(1e7) x = np.random.normal(0, 1, n) def logistic2(x): """Logistic function.""" return 1/(1 + np.exp(-x)) %timeit logistic(x) %timeit logistic2(x)
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
2. (20 points) Use numba to compile the gradient descent function. Use the @vectorize decorator to create a ufunc version of the logistic function and call this logistic_numba_cpu with function signatures of float64(float64). Create another function called logistic_numba_parallel by giving an extra argument to the decorator of target='parallel' (5 points) For each function, check that the answers are the same as with the original logistic function using np.testing.assert_array_almost_equal. Use %timeit to compare the three logistic functions (5 points) Now use @jit to create a JIT-compiled version of the logistic and gd functions, calling them logistic_numba and gd_numba. Provide appropriate function signatures to the decorator in each case. (5 points) Compare the two gradient descent functions gd and gd_numba for correctness and performance. (5 points)
@vectorize([float64(float64)], target='cpu') def logistic_numba_cpu(x): """Logistic function.""" return 1/(1 + math.exp(-x)) @vectorize([float64(float64)], target='parallel') def logistic_numba_parallel(x): """Logistic function.""" return 1/(1 + math.exp(-x)) np.testing.assert_array_almost_equal(logistic(x), logistic_numba_cpu(x)) np.testing.assert_array_almost_equal(logistic(x), logistic_numba_parallel(x)) %timeit logistic(x) %timeit logistic_numba_cpu(x) %timeit logistic_numba_parallel(x) @jit(float64[:](float64[:]), nopython=True) def logistic_numba(x): return 1/(1 + np.exp(-x)) @jit(float64[:](float64[:,:], float64[:], float64[:], float64, int64), nopython=True) def gd_numba(X, y, beta, alpha, niter): """Gradient descent algorihtm.""" n, p = X.shape Xt = X.T for i in range(niter): y_pred = logistic_numba(X @ beta) epsilon = y - y_pred grad = Xt @ epsilon / n beta += alpha * grad return beta beta1 = gd(X, y, β, α, niter) beta2 = gd_numba(X, y, β, α, niter) np.testing.assert_almost_equal(beta1, beta2) %timeit gd(X, y, β, α, niter) %timeit gd_numba(X, y, β, α, niter) # initial parameters niter = 1000 α = 0.01 β = np.zeros(p+1) # call gradient descent β = gd_numba(X, y, β, α, niter) # assign labels to points based on prediction y_pred = logistic(X @ β) labels = y_pred > 0.5 # calculate separating plane sep = (-β[0] - β[1] * X)/β[2] plt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter') plt.plot(X, sep, 'r-') pass
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
3. (30 points) Use cython to compile the gradient descent function. Cythonize the logistic function as logistic_cython. Use the --annotate argument to the cython magic function to find slow regions. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (10 points) Now cythonize the gd function as gd_cython. This function should make use of the cythonized logistic_cython as a C function call. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (20 points) Hints: Give static types to all variables Know how to use def, cdef and cpdef Use Typed MemoryViews Find out how to transpose a Typed MemoryView to store the transpose of X Typed MemoryViews are not numpy arrays - you often have to write explicit loops to operate on them Use the cython boundscheck, wraparound, and cdivision decorators
%%cython --annotate import cython import numpy as np cimport numpy as np from libc.math cimport exp @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def logistic_cython(double[:] x): """Logistic function.""" cdef int i cdef int n = x.shape[0] cdef double [:] s = np.empty(n) for i in range(n): s[i] = 1.0/(1.0 + exp(-x[i])) return s np.testing.assert_array_almost_equal(logistic(x), logistic_cython(x)) %timeit logistic2(x) %timeit logistic_cython(x) %%cython --annotate import cython import numpy as np cimport numpy as np from libc.math cimport exp @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) cdef double[:] logistic_(double[:] x): """Logistic function.""" cdef int i cdef int n = x.shape[0] cdef double [:] s = np.empty(n) for i in range(n): s[i] = 1.0/(1.0 + exp(-x[i])) return s @cython.boundscheck(False) @cython.wraparound(False) @cython.cdivision(True) def gd_cython(double[:, ::1] X, double[:] y, double[:] beta, double alpha, int niter): """Gradient descent algorihtm.""" cdef int n = X.shape[0] cdef int p = X.shape[1] cdef double[:] eps = np.empty(n) cdef double[:] y_pred = np.empty(n) cdef double[:] grad = np.empty(p) cdef int i, j, k cdef double[:, :] Xt = X.T for i in range(niter): y_pred = logistic_(np.dot(X, beta)) for j in range(n): eps[j] = y[j] - y_pred[j] grad = np.dot(Xt, eps) / n for k in range(p): beta[k] += alpha * grad[k] return beta niter = 1000 alpha = 0.01 beta = np.random.random(X.shape[1]) beta1 = gd(X, y, β, α, niter) beta2 = gd_cython(X, y, β, α, niter) np.testing.assert_almost_equal(beta1, beta2) %timeit gd(X, y, beta, alpha, niter) %timeit gd_cython(X, y, beta, alpha, niter)
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
4. (40 points) Wrapping modules in C++. Rewrite the logistic and gd functions in C++, using pybind11 to create Python wrappers. Compare accuracy and performance as usual. Replicate the plotted example using the C++ wrapped functions for logistic and gd Writing a vectorized logistic function callable from both C++ and Python (10 points) Writing the gd function callable from Python (25 points) Checking accuracy, benchmarking and creating diagnostic plots (5 points) Hints: Use the C++ Eigen library to do vector and matrix operations When calling the exponential function, you have to use exp(m.array()) instead of exp(m) if you use an Eigen dynamic template. Use cppimport to simplify the wrapping for Python See pybind11 docs See my examples for help
import os if not os.path.exists('./eigen'): ! git clone https://github.com/RLovelett/eigen.git %%file wrap.cpp <% cfg['compiler_args'] = ['-std=c++11'] cfg['include_dirs'] = ['./eigen'] setup_pybind11(cfg) %> #include <pybind11/pybind11.h> #include <pybind11/numpy.h> #include <pybind11/eigen.h> namespace py = pybind11; Eigen::VectorXd logistic(Eigen::VectorXd x) { return 1.0/(1.0 + exp((-x).array())); } Eigen::VectorXd gd(Eigen::MatrixXd X, Eigen::VectorXd y, Eigen::VectorXd beta, double alpha, int niter) { int n = X.rows(); Eigen::VectorXd y_pred; Eigen::VectorXd resid; Eigen::VectorXd grad; Eigen::MatrixXd Xt = X.transpose(); for (int i=0; i<niter; i++) { y_pred = logistic(X * beta); resid = y - y_pred; grad = Xt * resid / n; beta = beta + alpha * grad; } return beta; } PYBIND11_PLUGIN(wrap) { py::module m("wrap", "pybind11 example plugin"); m.def("gd", &gd, "The gradient descent fucntion."); m.def("logistic", &logistic, "The logistic fucntion."); return m.ptr(); } import cppimport cppimport.force_rebuild() funcs = cppimport.imp("wrap") np.testing.assert_array_almost_equal(logistic(x), funcs.logistic(x)) %timeit logistic(x) %timeit funcs.logistic(x) β = np.array([0.0, 0.0, 0.0]) gd(X, y, β, α, niter) β = np.array([0.0, 0.0, 0.0]) funcs.gd(X, y, β, α, niter) %timeit gd(X, y, β, α, niter) %timeit funcs.gd(X, y, β, α, niter) # initial parameters niter = 1000 α = 0.01 β = np.zeros(p+1) # call gradient descent β = funcs.gd(X, y, β, α, niter) # assign labels to points based on prediction y_pred = funcs.logistic(X @ β) labels = y_pred > 0.5 # calculate separating plane sep = (-β[0] - β[1] * X)/β[2] plt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter') plt.plot(X, sep, 'r-') pass
homework/05_Making_Python_Faster_Solutions.ipynb
cliburn/sta-663-2017
mit
Guide to Building End-to-End Reinforcement Learning Application Pipelines using Vertex AI <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Overview This demo showcases the use of TF-Agents, Kubeflow Pipelines (KFP) and Vertex AI, particularly Vertex Pipelines, in building an end-to-end reinforcement learning (RL) pipeline of a movie recommendation system. The demo is intended for developers who want to create RL applications using TensorFlow, TF-Agents and Vertex AI services, and those who want to build end-to-end production pipelines using KFP and Vertex Pipelines. It is recommended for developers to have familiarity with RL and the contextual bandits formulation, and the TF-Agents interface. Dataset This demo uses the MovieLens 100K dataset to simulate an environment with users and their respective preferences. It is available at gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data. Objective In this notebook, you will learn how to build an end-to-end RL pipeline for a TF-Agents (particularly the bandits module) based movie recommendation system, using KFP, Vertex AI and particularly Vertex Pipelines which is fully managed and highly scalable. This Vertex Pipeline includes the following components: 1. Generator to generate MovieLens simulation data 2. Ingester to ingest data 3. Trainer to train the RL policy 4. Deployer to deploy the trained policy to a Vertex AI endpoint After pipeline construction, you (1) create the Simulator (which utilizes Cloud Functions, Cloud Scheduler and Pub/Sub) to send simulated MovieLens prediction requests, (2) create the Logger to asynchronously log prediction inputs and results (which utilizes Cloud Functions, Pub/Sub and a hook in the prediction code), and (3) create the Trigger to trigger recurrent re-training. A more general ML pipeline is demonstrated in MLOps on Vertex AI. Costs This tutorial uses billable components of Google Cloud: Vertex AI BigQuery Cloud Build Cloud Functions Cloud Scheduler Cloud Storage Pub/Sub Learn about Vertex AI pricing, BigQuery pricing, Cloud Build, Cloud Functions, Cloud Scheduler, Cloud Storage pricing, and Pub/Sub pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. 
You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Install additional packages Install additional package dependencies not installed in your notebook environment, such as the Kubeflow Pipelines (KFP) SDK.
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" ! pip3 install {USER_FLAG} google-cloud-aiplatform ! pip3 install {USER_FLAG} google-cloud-pipeline-components ! pip3 install {USER_FLAG} --upgrade kfp ! pip3 install {USER_FLAG} numpy ! pip3 install {USER_FLAG} --upgrade tensorflow ! pip3 install {USER_FLAG} --upgrade pillow ! pip3 install {USER_FLAG} --upgrade tf-agents ! pip3 install {USER_FLAG} --upgrade fastapi
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin Select a GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU" Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API, BigQuery API, Cloud Build, Cloud Functions, Cloud Scheduler, Cloud Storage, and Pub/Sub API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud.
import os PROJECT_ID = "" # Get your Google Cloud project ID from gcloud if not os.getenv("IS_TESTING"): shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID)
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS ''
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. In this tutorial, a Cloud Storage bucket holds the MovieLens dataset files to be used for model training. Vertex AI also saves the trained model that results from your training job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI. Also note that Vertex Pipelines is currently only supported in select regions such as "us-central1" (reference).
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} REGION = "[your-region]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
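If your bucket does not exist yet, create it before running the pipeline. A minimal sketch, assuming the Cloud SDK's gsutil is available and REGION has been set to a region that supports Vertex AI:

```python
# Create the Cloud Storage bucket in the chosen region (skip this if the bucket already exists).
! gsutil mb -l $REGION $BUCKET_NAME
```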
Import libraries and define constants
import os import sys from google.cloud import aiplatform from google_cloud_pipeline_components import aiplatform as gcc_aip from kfp.v2 import compiler, dsl from kfp.v2.google.client import AIPlatformClient
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Fill out the following configurations
# BigQuery parameters (used for the Generator, Ingester, Logger) BIGQUERY_DATASET_ID = f"{PROJECT_ID}.movielens_dataset" # @param {type:"string"} BigQuery dataset ID as `project_id.dataset_id`. BIGQUERY_LOCATION = "us" # @param {type:"string"} BigQuery dataset region. BIGQUERY_TABLE_ID = f"{BIGQUERY_DATASET_ID}.training_dataset" # @param {type:"string"} BigQuery table ID as `project_id.dataset_id.table_id`.
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set additional configurations You may use the default values below as is.
# Dataset parameters RAW_DATA_PATH = "gs://[your-bucket-name]/raw_data/u.data" # @param {type:"string"} # Download the sample data into your RAW_DATA_PATH ! gsutil cp "gs://cloud-samples-data/vertex-ai/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/u.data" $RAW_DATA_PATH # Pipeline parameters PIPELINE_NAME = "movielens-pipeline" # Pipeline display name. ENABLE_CACHING = False # Whether to enable execution caching for the pipeline. PIPELINE_ROOT = f"{BUCKET_NAME}/pipeline" # Root directory for pipeline artifacts. PIPELINE_SPEC_PATH = "metadata_pipeline.json" # Path to pipeline specification file. OUTPUT_COMPONENT_SPEC = "output-component.yaml" # Output component specification file. # BigQuery parameters (used for the Generator, Ingester, Logger) BIGQUERY_TMP_FILE = ( "tmp.json" # Temporary file for storing data to be loaded into BigQuery. ) BIGQUERY_MAX_ROWS = 5 # Maximum number of rows of data in BigQuery to ingest. # Dataset parameters TFRECORD_FILE = ( f"{BUCKET_NAME}/trainer_input_path/*" # TFRecord file to be used for training. ) # Logger parameters (also used for the Logger hook in the prediction container) LOGGER_PUBSUB_TOPIC = "logger-pubsub-topic" # Pub/Sub topic name for the Logger. LOGGER_CLOUD_FUNCTION = "logger-cloud-function" # Cloud Functions name for the Logger.
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the RL pipeline components This section consists of the following steps: 1. Create the Generator to generate MovieLens simulation data 2. Create the Ingester to ingest data 3. Create the Trainer to train the RL policy 4. Create the Deployer to deploy the trained policy to a Vertex AI endpoint After pipeline construction, create the Simulator to send simulated MovieLens prediction requests, create the Logger to asynchronously log prediction inputs and results, and create the Trigger to trigger re-training. Here's the entire workflow: 1. The startup pipeline has the following components: Generator --> Ingester --> Trainer --> Deployer. This pipeline only runs once. 2. Then, the Simulator generates prediction requests (e.g. every 5 mins), and the Logger gets invoked at each prediction request and asynchronously logs it into BigQuery. The Trigger runs the re-training pipeline (e.g. every 30 mins) with the following components: Ingester --> Trainer --> Deployer. You can find the KFP SDK documentation here. Create the Generator to generate MovieLens simulation data Create the Generator component to generate the initial set of training data using a MovieLens simulation environment and a random data-collecting policy. Store the generated data in BigQuery. (A rough sketch of this kind of BigQuery write appears after the unit-test cell below.) The Generator source code is src/generator/generator_component.py. Run unit tests on the Generator component Before running the command, you should update the RAW_DATA_PATH in src/generator/test_generator_component.py.
! python3 -m unittest src.generator.test_generator_component
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
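As a rough, hypothetical sketch of the kind of BigQuery write the Generator performs, JSON rows could be appended to an existing table with the BigQuery client as follows; the field names and values are illustrative only, not the component's actual schema:

```python
from google.cloud import bigquery

client = bigquery.Client(project=PROJECT_ID)

# Illustrative rows; the real Generator stores observation/action/reward records
# collected by a random policy in the MovieLens simulation environment.
rows = [
    {"observation": [0.1, 0.2, 0.3], "action": 3, "reward": 4.0},
]

# Stream the rows into an existing table (assumes BIGQUERY_TABLE_ID exists with a matching schema).
errors = client.insert_rows_json(BIGQUERY_TABLE_ID, rows)
if errors:
    print("Errors while inserting rows:", errors)
```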
Create the Ingester to ingest data Create the Ingester component to ingest data from BigQuery, package it as tf.train.Example objects, and output TFRecord files. Read more about tf.train.Example and TFRecord here. (A minimal sketch of this serialization format appears after the unit-test cell below.) The Ingester component source code is in src/ingester/ingester_component.py. Run unit tests on the Ingester component
! python3 -m unittest src.ingester.test_ingester_component
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
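For context on the format the Ingester writes, here is a minimal, self-contained sketch of packaging one record as a tf.train.Example and writing it to a TFRecord file; the feature names are illustrative assumptions, not the component's actual schema:

```python
import tensorflow as tf

# Serialize one illustrative record: observation vector, chosen action, observed reward.
example = tf.train.Example(features=tf.train.Features(feature={
    "observation": tf.train.Feature(float_list=tf.train.FloatList(value=[0.1, 0.2, 0.3])),
    "action": tf.train.Feature(int64_list=tf.train.Int64List(value=[7])),
    "reward": tf.train.Feature(float_list=tf.train.FloatList(value=[4.0])),
}))

# Write the serialized example to a TFRecord file.
with tf.io.TFRecordWriter("/tmp/example.tfrecord") as writer:
    writer.write(example.SerializeToString())
```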
Create the Trainer to train the RL policy Create the Trainer component to train a RL policy on the training dataset, and then submit a remote custom training job to Vertex AI. This component trains a policy using the TF-Agents LinUCB agent on the MovieLens simulation dataset, and saves the trained policy as a SavedModel. The Trainer component source code is in src/trainer/trainer_component.py. You use additional Vertex AI platform code in pipeline construction to submit the training code defined in Trainer as a custom training job to Vertex AI. (The additional code is similar to what kfp.v2.google.experimental.run_as_aiplatform_custom_job does. You can find an example notebook here for how to use that first-party Trainer component.) The Trainer performs off-policy training, where you train a policy on a static set of pre-collected data records containing information including observation, action and reward. For a data record, the policy in training might not output the same action given the observation in that data record. If you're interested in pipeline metrics, read about KFP Pipeline Metrics here.
# Trainer parameters TRAINING_ARTIFACTS_DIR = ( f"{BUCKET_NAME}/artifacts" # Root directory for training artifacts. ) TRAINING_REPLICA_COUNT = 1 # Number of replica to run the custom training job. TRAINING_MACHINE_TYPE = ( "n1-standard-4" # Type of machine to run the custom training job. ) TRAINING_ACCELERATOR_TYPE = "ACCELERATOR_TYPE_UNSPECIFIED" # Type of accelerators to run the custom training job. TRAINING_ACCELERATOR_COUNT = 0 # Number of accelerators for the custom training job.
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run unit tests on the Trainer component
! python3 -m unittest src.trainer.test_trainer_component
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Deployer to deploy the trained policy to a Vertex AI endpoint Use google_cloud_pipeline_components.aiplatform components during pipeline construction to: 1. Upload the trained policy 2. Create a Vertex AI endpoint 3. Deploy the uploaded trained policy to the endpoint These 3 components make up the Deployer. They support flexible configurations; for instance, if you want to set up traffic splitting for the endpoint to run A/B testing, you may pass in your configurations to google_cloud_pipeline_components.aiplatform.ModelDeployOp (an example split is sketched after the parameter cell below).
# Deployer parameters TRAINED_POLICY_DISPLAY_NAME = ( "movielens-trained-policy" # Display name of the uploaded and deployed policy. ) TRAFFIC_SPLIT = {"0": 100} ENDPOINT_DISPLAY_NAME = "movielens-endpoint" # Display name of the prediction endpoint. ENDPOINT_MACHINE_TYPE = "n1-standard-4" # Type of machine of the prediction endpoint. ENDPOINT_REPLICA_COUNT = 1 # Number of replicas of the prediction endpoint. ENDPOINT_ACCELERATOR_TYPE = "ACCELERATOR_TYPE_UNSPECIFIED" # Type of accelerators for the prediction endpoint. ENDPOINT_ACCELERATOR_COUNT = 0 # Number of accelerators for the prediction endpoint.
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
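If you want the endpoint to split traffic between the newly deployed policy and a previously deployed one (for example, for A/B testing), you could pass a split like the following to ModelDeployOp instead of the default TRAFFIC_SPLIT = {"0": 100} set above; the key "0" refers to the model being deployed in the current request, and the other key is a placeholder for an existing deployed model's ID:

```python
# Hypothetical 90/10 split between the newly deployed policy ("0") and an existing deployment.
AB_TRAFFIC_SPLIT = {"0": 90, "EXISTING_DEPLOYED_MODEL_ID": 10}
```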
Create a custom prediction container using Cloud Build Before setting up the Deployer, define and build a custom prediction container that serves predictions using the trained policy. The source code, Cloud Build YAML configuration file and Dockerfile are in src/prediction_container. This prediction container is the serving container for the deployed, trained policy. See a more detailed guide on building prediction custom containers here.
# Prediction container parameters PREDICTION_CONTAINER = "prediction-container" # Name of the container image. PREDICTION_CONTAINER_DIR = "src/prediction_container"
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Build YAML file using Kaniko build Note: For this application, it is recommended to use E2_HIGHCPU_8 or another high-resource machine configuration instead of the standard machine type listed here, to prevent out-of-memory errors.
cloudbuild_yaml = """steps: - name: "gcr.io/kaniko-project/executor:latest" args: ["--destination=gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest", "--cache=true", "--cache-ttl=99h"] env: ["AIP_STORAGE_URI={ARTIFACTS_DIR}", "PROJECT_ID={PROJECT_ID}", "LOGGER_PUBSUB_TOPIC={LOGGER_PUBSUB_TOPIC}"] options: machineType: "E2_HIGHCPU_8" """.format( PROJECT_ID=PROJECT_ID, PREDICTION_CONTAINER=PREDICTION_CONTAINER, ARTIFACTS_DIR=TRAINING_ARTIFACTS_DIR, LOGGER_PUBSUB_TOPIC=LOGGER_PUBSUB_TOPIC, ) with open(f"{PREDICTION_CONTAINER_DIR}/cloudbuild.yaml", "w") as fp: fp.write(cloudbuild_yaml)
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Run unit tests on the prediction code
! python3 -m unittest src.prediction_container.test_main
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Build custom prediction container
! gcloud builds submit --config $PREDICTION_CONTAINER_DIR/cloudbuild.yaml $PREDICTION_CONTAINER_DIR
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Author and run the RL pipeline You author the pipeline using custom KFP components built from the previous section, and create a pipeline run using Vertex Pipelines. You can read more about whether to enable execution caching here. You can also specifically configure the worker pool spec for training if for instance you want to train at scale and/or at a higher speed; you can adjust the replica count, machine type, accelerator type and count, and many other specifications. Here, you build a "startup" pipeline that generates randomly sampled training data (with the Generator) as the first step. This pipeline runs only once.
from google_cloud_pipeline_components.experimental.custom_job import utils from kfp.components import load_component_from_url generate_op = load_component_from_url( "https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/generator/component.yaml" ) ingest_op = load_component_from_url( "https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/ingester/component.yaml" ) train_op = load_component_from_url( "https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/62a2a7611499490b4b04d731d48a7ba87c2d636f/community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/src/trainer/component.yaml" ) @dsl.pipeline(pipeline_root=PIPELINE_ROOT, name=f"{PIPELINE_NAME}-startup") def pipeline( # Pipeline configs project_id: str, raw_data_path: str, training_artifacts_dir: str, # BigQuery configs bigquery_dataset_id: str, bigquery_location: str, bigquery_table_id: str, bigquery_max_rows: int = 10000, # TF-Agents RL configs batch_size: int = 8, rank_k: int = 20, num_actions: int = 20, driver_steps: int = 3, num_epochs: int = 5, tikhonov_weight: float = 0.01, agent_alpha: float = 10, ) -> None: """Authors a RL pipeline for MovieLens movie recommendation system. Integrates the Generator, Ingester, Trainer and Deployer components. This pipeline generates initial training data with a random policy and runs once as the initiation of the system. Args: project_id: GCP project ID. This is required because otherwise the BigQuery client will use the ID of the tenant GCP project created as a result of KFP, which doesn't have proper access to BigQuery. raw_data_path: Path to MovieLens 100K's "u.data" file. training_artifacts_dir: Path to store the Trainer artifacts (trained policy). bigquery_dataset: A string of the BigQuery dataset ID in the format of "project.dataset". bigquery_location: A string of the BigQuery dataset location. bigquery_table_id: A string of the BigQuery table ID in the format of "project.dataset.table". bigquery_max_rows: Optional; maximum number of rows to ingest. batch_size: Optional; batch size of environment generated quantities eg. rewards. rank_k: Optional; rank for matrix factorization in the MovieLens environment; also the observation dimension. num_actions: Optional; number of actions (movie items) to choose from. driver_steps: Optional; number of steps to run per batch. num_epochs: Optional; number of training epochs. tikhonov_weight: Optional; LinUCB Tikhonov regularization weight of the Trainer. agent_alpha: Optional; LinUCB exploration parameter that multiplies the confidence intervals of the Trainer. """ # Run the Generator component. generate_task = generate_op( project_id=project_id, raw_data_path=raw_data_path, batch_size=batch_size, rank_k=rank_k, num_actions=num_actions, driver_steps=driver_steps, bigquery_tmp_file=BIGQUERY_TMP_FILE, bigquery_dataset_id=bigquery_dataset_id, bigquery_location=bigquery_location, bigquery_table_id=bigquery_table_id, ) # Run the Ingester component. 
ingest_task = ingest_op( project_id=project_id, bigquery_table_id=generate_task.outputs["bigquery_table_id"], bigquery_max_rows=bigquery_max_rows, tfrecord_file=TFRECORD_FILE, ) # Run the Trainer component and submit custom job to Vertex AI. # Convert the train_op component into a Vertex AI Custom Job pre-built component custom_job_training_op = utils.create_custom_training_job_op_from_component( component_spec=train_op, replica_count=TRAINING_REPLICA_COUNT, machine_type=TRAINING_MACHINE_TYPE, accelerator_type=TRAINING_ACCELERATOR_TYPE, accelerator_count=TRAINING_ACCELERATOR_COUNT, ) train_task = custom_job_training_op( training_artifacts_dir=training_artifacts_dir, tfrecord_file=ingest_task.outputs["tfrecord_file"], num_epochs=num_epochs, rank_k=rank_k, num_actions=num_actions, tikhonov_weight=tikhonov_weight, agent_alpha=agent_alpha, project=PROJECT_ID, location=REGION, ) # Run the Deployer components. # Upload the trained policy as a model. model_upload_op = gcc_aip.ModelUploadOp( project=project_id, display_name=TRAINED_POLICY_DISPLAY_NAME, artifact_uri=train_task.outputs["training_artifacts_dir"], serving_container_image_uri=f"gcr.io/{PROJECT_ID}/{PREDICTION_CONTAINER}:latest", ) # Create a Vertex AI endpoint. (This operation can occur in parallel with # the Generator, Ingester, Trainer components.) endpoint_create_op = gcc_aip.EndpointCreateOp( project=project_id, display_name=ENDPOINT_DISPLAY_NAME ) # Deploy the uploaded, trained policy to the created endpoint. (This operation # has to occur after both model uploading and endpoint creation complete.) gcc_aip.ModelDeployOp( endpoint=endpoint_create_op.outputs["endpoint"], model=model_upload_op.outputs["model"], deployed_model_display_name=TRAINED_POLICY_DISPLAY_NAME, traffic_split=TRAFFIC_SPLIT, dedicated_resources_machine_type=ENDPOINT_MACHINE_TYPE, dedicated_resources_accelerator_type=ENDPOINT_ACCELERATOR_TYPE, dedicated_resources_accelerator_count=ENDPOINT_ACCELERATOR_COUNT, dedicated_resources_min_replica_count=ENDPOINT_REPLICA_COUNT, ) # Compile the authored pipeline. compiler.Compiler().compile(pipeline_func=pipeline, package_path=PIPELINE_SPEC_PATH) # Create a pipeline run job. job = aiplatform.PipelineJob( display_name=f"{PIPELINE_NAME}-startup", template_path=PIPELINE_SPEC_PATH, pipeline_root=PIPELINE_ROOT, parameter_values={ # Pipeline configs "project_id": PROJECT_ID, "raw_data_path": RAW_DATA_PATH, "training_artifacts_dir": TRAINING_ARTIFACTS_DIR, # BigQuery configs "bigquery_dataset_id": BIGQUERY_DATASET_ID, "bigquery_location": BIGQUERY_LOCATION, "bigquery_table_id": BIGQUERY_TABLE_ID, }, enable_caching=ENABLE_CACHING, ) job.run()
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create the Simulator to send simulated MovieLens prediction requests Create the Simulator to obtain observations from the MovieLens simulation environment, format them, and send prediction requests to the Vertex AI endpoint. The workflow is: Cloud Scheduler --> Pub/Sub --> Cloud Functions --> Endpoint In production, this Simulator logic can be replaced with logic that gathers real-world input features as observations, gets prediction results from the endpoint, and communicates those results to real-world users. (A sketch of a single prediction request appears after the parameter cell below.) The Simulator source code is src/simulator/main.py.
# Simulator parameters SIMULATOR_PUBSUB_TOPIC = ( "simulator-pubsub-topic" # Pub/Sub topic name for the Simulator. ) SIMULATOR_CLOUD_FUNCTION = ( "simulator-cloud-function" # Cloud Functions name for the Simulator. ) SIMULATOR_SCHEDULER_JOB = ( "simulator-scheduler-job" # Cloud Scheduler cron job name for the Simulator. ) SIMULATOR_SCHEDULE = "*/5 * * * *" # Cloud Scheduler cron job schedule for the Simulator. Eg. "*/5 * * * *" means every 5 mins. SIMULATOR_SCHEDULER_MESSAGE = ( "simulator-message" # Cloud Scheduler message for the Simulator. ) # TF-Agents RL configs BATCH_SIZE = 8 RANK_K = 20 NUM_ACTIONS = 20
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
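For illustration, a single online prediction request to the deployed endpoint could look like the sketch below; the instance format shown is an assumption for illustration only, since the actual request schema is defined by the prediction container and the Simulator source code:

```python
from google.cloud import aiplatform

aiplatform.init(project=PROJECT_ID, location=REGION)

# Look up the endpoint created by the pipeline by its display name (assumes exactly one match).
endpoint = aiplatform.Endpoint.list(filter=f'display_name="{ENDPOINT_DISPLAY_NAME}"')[0]

# Send a single observation vector of length RANK_K; the instance schema here is illustrative.
response = endpoint.predict(instances=[{"observation": [0.0] * RANK_K}])
print(response.predictions)
```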
Run unit tests on the Simulator
! python3 -m unittest src.simulator.test_main
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Pub/Sub topic Read more about creating Pub/Sub topics here
! gcloud pubsub topics create $SIMULATOR_PUBSUB_TOPIC
community-content/tf_agents_bandits_movie_recommendation_with_kfp_and_vertex_sdk/mlops_pipeline_tf_agents_bandits_movie_recommendation/mlops_pipeline_tf_agents_bandits_movie_recommendation.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0