For the example, we will use a linear regression model.
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata);
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
Now we will write the PyMC3 model, keeping in mind the following two points:

1. Data must be modifiable (both x and y).
2. The model must be recompiled in order to be refitted with the modified data.

We therefore have to create a function that recompiles the model when it's called. Luckily for us, compilation in PyMC3 is generally quite fast.
def compile_linreg_model(xdata, ydata):
    with pm.Model() as model:
        x = pm.Data("x", xdata)
        b0 = pm.Normal("b0", 0, 10)
        b1 = pm.Normal("b1", 0, 10)
        sigma_e = pm.HalfNormal("sigma_e", 10)
        y = pm.Normal("y", b0 + b1 * x, sigma_e, observed=ydata)
    return model

sample_kwargs = {"draws": 500, "tune": 500, "chains": 4}

with compile_linreg_model(xdata, ydata) as linreg_model:
    trace = pm.sample(**sample_kwargs)
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
We have defined a dictionary sample_kwargs that will be passed to the SamplingWrapper in order to make sure that all refits use the same sampler parameters. We follow the same pattern with {func}az.from_pymc3 <arviz.from_pymc3>.

Note, however, that coords are not set. This is done to prevent errors due to coordinate and value shapes becoming incompatible during refits. Otherwise we'd have to handle subsetting of the coordinate values, even though the refits are never used outside the refitting functions such as {func}~arviz.reloo.

We also exclude the model because the model, like the trace, is different for every refit. This may seem counterintuitive or even plain wrong, but we have to remember that the pm.Model object contains information like the observed data.
dims = {"y": ["time"], "x": ["time"]} idata_kwargs = { "dims": dims, "log_likelihood": False, } idata = az.from_pymc3(trace, model=linreg_model, **idata_kwargs) idata
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
We are now missing the log_likelihood group because we set log_likelihood=False in idata_kwargs. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get PyMC3 to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.

Even though it is not ideal to lose part of PyMC3's out-of-the-box capabilities, this should generally not be a problem. In fact, other PPLs such as Stan always require writing the pointwise log likelihood values manually (either within the Stan code or in Python). Moreover, computing the pointwise log likelihood in Python using xarray will be computationally more efficient than the automatic extraction from PyMC3. It could even be written to be compatible with Dask, so it would work even in cases where the large number of observations makes it impossible to store the pointwise log likelihood values (with shape n_samples * n_observations) in memory.
def calculate_log_lik(x, y, b0, b1, sigma_e):
    mu = b0 + b1 * x
    return stats.norm(mu, sigma_e).logpdf(y)
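As a quick sanity check (not part of the original notebook), for scalar inputs with b0 = 0, b1 = 1, sigma_e = 1 and x = y = 0 the function should return the standard normal log-density at zero, roughly -0.919:

```python
# Hypothetical scalar sanity check: mu = b0 + b1*x = 0, so this is the
# standard normal log-pdf at 0, i.e. -0.5*log(2*pi) ≈ -0.9189.
print(calculate_log_lik(0.0, 0.0, 0.0, 1.0, 1.0))
```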
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars. Therefore, we can use {func}xr.apply_ufunc <xarray.apply_ufunc> to handle the broadcasting and preserve the dimension names:
log_lik = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)
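The Dask compatibility mentioned earlier is, as far as I can tell, mostly a matter of passing the relevant keyword arguments to xr.apply_ufunc. A minimal, untested sketch, assuming the inputs are chunked Dask-backed DataArrays (dask="parallelized" and output_dtypes are standard xr.apply_ufunc arguments):

```python
# Hypothetical sketch: the same computation, but allowing chunked (Dask-backed) inputs.
log_lik_dask = xr.apply_ufunc(
    calculate_log_lik,
    idata.constant_data["x"],
    idata.observed_data["y"],
    idata.posterior["b0"],
    idata.posterior["b1"],
    idata.posterior["sigma_e"],
    dask="parallelized",
    output_dtypes=[float],
)
```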
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
The first argument is the function, followed by as many positional arguments as the function needs, 5 in our case. As this case does not involve many different dimensions or combinations of them, we do not need to pass any extra kwargs to xr.apply_ufunc. Note that we are passing the arguments to calculate_log_lik as {class}xarray:xarray.DataArrays. What is happening behind the scenes is that xr.apply_ufunc broadcasts and aligns the dimensions of all the DataArrays involved and afterwards passes NumPy arrays to calculate_log_lik. Everything works automagically. Now let's see what happens if we pass the arrays directly to calculate_log_lik instead:
calculate_log_lik(
    idata.constant_data["x"].values,
    idata.observed_data["y"].values,
    idata.posterior["b0"].values,
    idata.posterior["b1"].values,
    idata.posterior["sigma_e"].values
)
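For comparison, here is a hedged sketch of the reshaping we would have to do by hand to make the raw NumPy call broadcast correctly, adding the chain/draw axes ourselves, which is exactly what xr.apply_ufunc spares us (the variable name log_lik_manual is mine):

```python
# Hypothetical manual-broadcasting version of the call above.
# The posterior draws get a trailing axis of length 1 so they broadcast
# against the observation axis of x and y.
log_lik_manual = calculate_log_lik(
    idata.constant_data["x"].values[None, None, :],
    idata.observed_data["y"].values[None, None, :],
    idata.posterior["b0"].values[:, :, None],
    idata.posterior["b1"].values[:, :, None],
    idata.posterior["sigma_e"].values[:, :, None],
)
log_lik_manual.shape  # (chains, draws, observations)
```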
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
If you are still curious about the magic of xarray and xr.apply_ufunc, you can also try to modify the dims used to generate the InferenceData a couple cells before: dims = {"y": ["time"], "x": ["time"]} What happens to the result if you use a different name for the dimension of x?
idata
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
We will create a subclass of az.SamplingWrapper.
class PyMC3LinRegWrapper(az.SamplingWrapper):
    def sample(self, modified_observed_data):
        with self.model(*modified_observed_data) as linreg_model:
            idata = pm.sample(
                **self.sample_kwargs,
                return_inferencedata=True,
                idata_kwargs=self.idata_kwargs,
            )
        return idata

    def get_inference_data(self, idata):
        return idata

    def sel_observations(self, idx):
        xdata = self.idata_orig.constant_data["x"]
        ydata = self.idata_orig.observed_data["y"]
        mask = np.isin(np.arange(len(xdata)), idx)
        data__i = [ary[~mask] for ary in (xdata, ydata)]
        data_ex = [ary[mask] for ary in (xdata, ydata)]
        return data__i, data_ex

loo_orig = az.loo(idata, pointwise=True)
loo_orig
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
We initialize our sampling wrapper. Let's stop and analyze each of the arguments.

We'd generally use model to pass a model object of some kind, already compiled and re-executable. However, as we saw before, we need to recompile the model every time it is used, so we pass the model-generating function instead. Close enough.

We then use the log_lik_fun and posterior_vars arguments to tell the wrapper how to call xr.apply_ufunc. log_lik_fun is the function to be called, which is then called with the following positional arguments:

```python
log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])
```

where data_ex is the second element returned by sel_observations and idata__i is the InferenceData object returned by get_inference_data, which contains the fit on the subsetted data. We have generated data_ex to be a tuple of DataArrays so it plays nicely with this call signature.

We use idata_orig as a starting point, and mostly as a source of observed and constant data, which is then subsetted in sel_observations.

Finally, sample_kwargs and idata_kwargs are used to make sure all refits and corresponding InferenceData are generated with the same properties.
pymc3_wrapper = PyMC3LinRegWrapper(
    model=compile_linreg_model,
    log_lik_fun=calculate_log_lik,
    posterior_vars=("b0", "b1", "sigma_e"),
    idata_orig=idata,
    sample_kwargs=sample_kwargs,
    idata_kwargs=idata_kwargs,
)
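With the wrapper in place, the refitting helpers mentioned earlier can be called on it. The actual call is not shown in this excerpt, but a hedged sketch, assuming {func}~arviz.reloo accepts the wrapper and the pointwise loo result computed above, would be:

```python
# Hypothetical usage sketch: refit only the observations flagged by PSIS-LOO.
loo_relooed = az.reloo(pymc3_wrapper, loo_orig=loo_orig)
loo_relooed
```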
doc/source/user_guide/pymc3_refitting_xr_lik.ipynb
arviz-devs/arviz
apache-2.0
Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), 'input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), 'input_z')
    return inputs_real, inputs_z
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Generator network

Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.

Variable Scope

Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.

To use tf.variable_scope, you use a with statement:

```python
with tf.variable_scope('scope_name', reuse=False):
    # code here
```

Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.

Leaky ReLU

TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:

$$ f(x) = \max(\alpha x, x) $$

Tanh Output

The generator has been found to perform best with a $\tanh$ activation for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
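A minimal helper along the lines described above could look like the sketch below (the function name leaky_relu is mine; the solution cells that follow simply inline the same expression instead of defining a separate function):

```python
def leaky_relu(x, alpha=0.01):
    # alpha*x for negative inputs, x for positive inputs: f(x) = max(alpha*x, x)
    return tf.maximum(alpha * x, x)
```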
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    ''' Build the generator network.

        Arguments
        ---------
        z : Input tensor for the generator
        out_dim : Shape of the generator output
        n_units : Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits: generated outputs after and before activation
    '''
    with tf.variable_scope('generator', reuse=reuse):
        # Hidden layer
        #h1 = tf.contrib.layers.fully_connected(z, n_units, activation_fn=None)
        # shorter
        h1 = tf.layers.dense(z, n_units)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        # Logits and tanh output
        #logits = tf.contrib.layers.fully_connected(h1, out_dim, activation_fn=None)
        # shorter
        logits = tf.layers.dense(h1, out_dim)
        out = tf.tanh(logits)

        return out, logits
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.

        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits: generated output after and before activation
    '''
    with tf.variable_scope('discriminator', reuse=reuse):
        # Hidden layer
        #h1 = tf.contrib.layers.fully_connected(x, n_units, activation_fn=None)
        # shorter
        h1 = tf.layers.dense(x, n_units)
        # Leaky ReLU
        h1 = tf.maximum(alpha * h1, h1)

        #logits = tf.contrib.layers.fully_connected(h1, 1, activation_fn=None)
        # shorter
        logits = tf.layers.dense(h1, 1)
        out = tf.sigmoid(logits)

        return out, logits
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier.
tf.reset_default_graph()

# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)

# Generator network here
g_model, g_logits = generator(input_z, input_size, g_hidden_size, False, alpha)
# g_model is the generator output

# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, d_hidden_size, False, alpha)
d_model_fake, d_logits_fake = discriminator(g_model, d_hidden_size, True, alpha)
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Discriminator and Generator Losses

For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like

```python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
```

For the real image labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)

The discriminator loss for the fake data is similar. The fake logits are used with labels of all zeros. We want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.

Finally, the generator losses use labels that are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
# Calculate losses
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(d_logits_real) * (1 - smooth),
        logits=d_logits_real))

d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(d_logits_fake),
        logits=d_logits_fake))

d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(d_logits_fake),
        logits=d_logits_fake))
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Optimizers

We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.

For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).

We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.

Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.

Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates its variables separately.
# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [x for x in t_vars if 'generator' in x.name]
d_vars = [x for x in t_vars if x.name.startswith('discriminator')]

d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training.
def view_samples(epoch, samples):
    print(len(samples[0][0]), len(samples[0][1]))
    fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch][0]):
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
        im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
    return fig, axes

# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
print(np.array(samples).shape)

for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
    for img, ax in zip(sample[0][::int(len(sample[0])/cols)], ax_row):
        ax.imshow(img.reshape((28,28)), cmap='Greys_r')
        ax.xaxis.set_visible(False)
        ax.yaxis.set_visible(False)
15_gan_mnist/Intro_to_GANs_Exercises.ipynb
adrianstaniec/deep-learning
mit
Drop Row Based On A Conditional
%%sql

-- Delete all rows
DELETE FROM criminals

-- if the age is less than 18
WHERE age < 18
sql/drop_rows.ipynb
tpin3694/tpin3694.github.io
mit
Setup
path = "data/dogscats/" # path = "data/dogscats/sample/" model_path = path + 'models/' if not os.path.exists(model_path): os.mkdir(model_path) batch_size=32 # batch_size=1 batches = get_batches(path+'train', shuffle=False, batch_size=batch_size) val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size) (val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames, test_filenames) = get_classes(path)
nbs/dogscats-ensemble.ipynb
roebius/deeplearning_keras2
apache-2.0
Dense model
def get_conv_model(model):
    layers = model.layers
    last_conv_idx = [index for index, layer in enumerate(layers)
                     if type(layer) is Convolution2D][-1]

    conv_layers = layers[:last_conv_idx+1]
    conv_model = Sequential(conv_layers)
    fc_layers = layers[last_conv_idx+1:]
    return conv_model, fc_layers, last_conv_idx


def get_fc_layers(p, in_shape):
    return [
        MaxPooling2D(input_shape=in_shape),
        Flatten(),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(2, activation='softmax')
        ]


def train_dense_layers(i, model):
    conv_model, fc_layers, last_conv_idx = get_conv_model(model)
    conv_shape = conv_model.output_shape[1:]
    fc_model = Sequential(get_fc_layers(0.5, conv_shape))
    for l1, l2 in zip(fc_model.layers, fc_layers):
        weights = l2.get_weights()
        l1.set_weights(weights)
    fc_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy',
                     metrics=['accuracy'])
    fc_model.fit(trn_features, trn_labels, epochs=2, batch_size=batch_size,
                 validation_data=(val_features, val_labels))

    # width_zoom_range removed from the following because not available in Keras2
    gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.05,
                                   zoom_range=0.05, channel_shift_range=10,
                                   height_shift_range=0.05, shear_range=0.05,
                                   horizontal_flip=True)
    batches = gen.flow(trn, trn_labels, batch_size=batch_size)
    val_batches = image.ImageDataGenerator().flow(val, val_labels,
                                                  shuffle=False, batch_size=batch_size)

    for layer in conv_model.layers:
        layer.trainable = False
    for layer in get_fc_layers(0.5, conv_shape):
        conv_model.add(layer)
    for l1, l2 in zip(conv_model.layers[last_conv_idx+1:], fc_model.layers):
        l1.set_weights(l2.get_weights())

    steps_per_epoch = int(np.ceil(batches.n/batch_size))
    validation_steps = int(np.ceil(val_batches.n/batch_size))

    conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy',
                       metrics=['accuracy'])
    conv_model.save_weights(model_path + 'no_dropout_bn' + i + '.h5')
    conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
                             validation_data=val_batches, validation_steps=validation_steps)

    for layer in conv_model.layers[16:]:
        layer.trainable = True

    # - added again the compile instruction in order to avoid a Keras 2.1 warning message
    conv_model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy',
                       metrics=['accuracy'])
    conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=8,
                             validation_data=val_batches, validation_steps=validation_steps)

    conv_model.optimizer.lr = 1e-7
    conv_model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=10,
                             validation_data=val_batches, validation_steps=validation_steps)
    conv_model.save_weights(model_path + 'aug' + i + '.h5')
nbs/dogscats-ensemble.ipynb
roebius/deeplearning_keras2
apache-2.0
Load IMDB Dataset
# load the dataset but only keep the top n words, zero the rest
# docs at: https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb/load_data
top_words = 5000
start_char = 1
oov_char = 2
index_from = 3

(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words,
                                                       start_char=start_char,
                                                       oov_char=oov_char,
                                                       index_from=index_from)

print(X_train.shape)
print(y_train.shape)
print(len(X_train[0]))
print(len(X_train[1]))

print(X_test.shape)
print(y_test.shape)

X_train[0]
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Pad sequences so they are all the same length (required by keras/tensorflow).
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)

print(X_train.shape)
print(y_train.shape)
print(len(X_train[0]))
print(len(X_train[1]))

print(X_test.shape)
print(y_test.shape)

X_train[0]

y_train[0:20]  # first 20 sentiment labels
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Setup Vocabulary Dictionary

The index value loaded differs from the dictionary value by "index_from", so that special characters for padding, start of sentence, and out of vocabulary can be prepended to the start of the vocabulary.
word_index = imdb.get_word_index()

inv_word_index = np.empty(len(word_index)+index_from+3, dtype=np.object)
for k, v in word_index.items():
    inv_word_index[v+index_from] = k
inv_word_index[0] = '<pad>'
inv_word_index[1] = '<start>'
inv_word_index[2] = '<oov>'

word_index['ai']

inv_word_index[16942+index_from]

inv_word_index[:50]
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Convert Encoded Sentences to Readable Text
def toText(wordIDs):
    s = ''
    for i in range(len(wordIDs)):
        if wordIDs[i] != 0:
            w = str(inv_word_index[wordIDs[i]])
            s += w + ' '
    return s

for i in range(5):
    print()
    print(str(i) + ') sentiment = ' + ('negative' if y_train[i] == 0 else 'positive'))
    print(toText(X_train[i]))
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Build the model

References: Sequential guide, compile() and fit(), Embedding, LSTM (middle of page), Dense, Dropout (1/3 down the page).

The Embedding layer works like an efficient one-hot encoding of the word index, followed by a dense layer of size embedding_vector_length.

model.compile(...) sets up the "adam" optimizer, similar to SGD but with some gradient averaging that works like a larger batch size to reduce the variability in the gradient from one small batch to the next. Each SGD step uses batch_size training records. Adam is also a variant of momentum optimizers. 'binary_crossentropy' is the loss function used most often with logistic regression and is equivalent to softmax for only two classes.

In the "Output Shape" column of the model summary, None is a placeholder for the variable number of training records to be supplied later.
backend.clear_session()

embedding_vector_length = 5
rnn_vector_length = 150
#activation = 'relu'
activation = 'sigmoid'

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Dropout(0.2))
#model.add(LSTM(rnn_vector_length, activation=activation))
model.add(GRU(rnn_vector_length, activation=activation))
model.add(Dropout(0.2))
model.add(Dense(1, activation=activation))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Setup Tensorboard

make sure the /data/kaggle-tensorboard path exists or can be created

Start tensorboard from the command line:

    tensorboard --logdir=/data/kaggle-tensorboard

open http://localhost:6006/
log_dir = '/data/kaggle-tensorboard'
shutil.rmtree(log_dir, ignore_errors=True)
os.makedirs(log_dir)

tbCallBack = TensorBoard(log_dir=log_dir, histogram_freq=0, write_graph=True, write_images=True)

full_history = []
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Train the Model Each epoch takes about 3 min. You can reduce the epochs to 3 for a faster build and still get good accuracy. Overfitting starts to happen at epoch 7 to 9. Note: You can run this cell multiple times to add more epochs to the model training without starting over.
history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    epochs=8, batch_size=64, callbacks=[tbCallBack])
full_history += history.history['loss']
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Accuracy on the Test Set
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
print('embedding_vector_length = ' + str(embedding_vector_length))
print('rnn_vector_length = ' + str(rnn_vector_length))
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
# Hyper Parameter Tuning Notes

| Accuracy % | Type | max val acc epoch | embedding_vector_length | RNN state size | Dropout |
| :------------ | :-------- | :---- | :---- | :---- | :---- |
| 88.46 * | GRU | 6 | 5 | 150 | 0.2 (after Embedding and LSTM) |
| 88.4 | GRU | 4 | 5 | 100 | no dropout |
| 88.32 | GRU | 7 | 32 | 100 | |
| 88.29 | GRU | 8 | 5 | 200 | no dropout |
| 88.03 | GRU | >6 | 20 | 40 | 0.3 (after Embedding and LSTM) |
| 87.93 | GRU | 4 | 32 | 50 | 0.2 (after LSTM) |
| 87.60 | GRU | 5 | 5 | 50 | no dropout |
| 87.5 | GRU | 8 | 10 | 20 | no dropout |
| 87.5 | GRU | 5 | 32 | 50 | |
| 87.46 | GRU | 8 | 16 | 100 | |
| < 87 | LSTM | 9-11 | 32 | 100 | |
| 87.66 | GRU | 5 | 32 | 50 | 0.3 (after Embedding and LSTM) |
| 86.5 | GRU | >10 | 5 | 10 | no dropout |

Graphs
history.history

# todo: add graph of all 4 values with history
plt.plot(history.history['loss'])
plt.yscale('log')
plt.show()

plt.plot(full_history)
plt.yscale('log')
plt.show()
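The todo in the cell above asks for a graph of all four training curves. A hedged sketch that simply plots whatever metric keys Keras recorded (typically loss, acc, val_loss, val_acc in this Keras/TF setup):

```python
# Hypothetical sketch for the "todo" above: plot every metric Keras recorded
# on one log-scale figure, labelled by its key name.
for name, values in history.history.items():
    plt.plot(values, label=name)
plt.yscale('log')
plt.legend()
plt.show()
```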
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Evaluate on Custom Text
import re

words_only = r'[^\s!,.?\-":;0-9]+'
re.findall(words_only, "Some text to, tokenize. something's.Something-else?".lower())

def encode(reviewText):
    words = re.findall(words_only, reviewText.lower())
    reviewIDs = [start_char]
    for word in words:
        index = word_index.get(word, oov_char - index_from) + index_from  # defaults to oov_char for missing
        if index > top_words:
            index = oov_char
        reviewIDs.append(index)
    return reviewIDs

toText(encode('To code and back again. ikkyikyptangzooboing ni !!'))

# reviews from:
# https://www.pluggedin.com/movie-reviews/solo-a-star-wars-story
# http://badmovie-badreview.com/category/bad-reviews/
user_reviews = ["This movie is horrible",
                "This wasn't a horrible movie and I liked it actually",
                "This movie was great.",
                "What a waste of time. It was too long and didn't make any sense.",
                "This was boring and drab.",
                "I liked the movie.",
                "I didn't like the movie.",
                "I like the lead actor but the movie as a whole fell flat",
                "I don't know. It was ok, some good and some bad. Some will like it, some will not like it.",
                "There are definitely heroic seeds at our favorite space scoundrel's core, though, seeds that simply need a little life experience to nurture them to growth. And that's exactly what this swooping heist tale is all about. You get a yarn filled with romance, high-stakes gambits, flashy sidekicks, a spunky robot and a whole lot of who's-going-to-outfox-who intrigue. Ultimately, it's the kind of colorful adventure that one could imagine Harrison Ford's version of Han recalling with a great deal of flourish … and a twinkle in his eye.",
                "There are times to be politically correct and there are times to write things about midget movies, and I’m afraid that sharing Ankle Biters with the wider world is an impossible task without taking the low road, so to speak. There are horrible reasons for this, all of them the direct result of the midgets that this film contains, which makes it sound like I am blaming midgets for my inability to regulate my own moral temperament but I like to think I am a…big…enough person (geddit?) to admit that the problem rests with me, and not the disabled.",
                "While Beowulf didn’t really remind me much of Beowulf, it did reminded me of something else. At first I thought it was Van Helsing, but that just wasn’t it. It only hit me when Beowulf finally told his backstory and suddenly even the dumbest of the dumb will realise that this is a simple ripoff of Blade. The badass hero, who is actually born from evil, now wants to destroy it, while he apparently has to fight his urges to become evil himself (not that it is mentioned beyond a single reference at the end of Beowulf) and even the music fits into the same range. Sadly Beowulf is not even nearly as interesting or entertaining as its role model. The only good aspects I can see in Beowulf would be the stupid beginning and Christopher Lamberts hair. But after those first 10 minutes, the movie becomes just boring and you don’t care much anymore.",
                "You don't frighten us, English pig-dogs! Go and boil your bottoms, son of a silly person! I blow my nose at you, so-called Arthur King! You and all your silly English Knnnnnnnn-ighuts!!!"
               ]

X_user = np.array([encode(review) for review in user_reviews])
X_user

X_user_pad = sequence.pad_sequences(X_user, maxlen=max_review_length)
X_user_pad
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Features View
for row in X_user_pad:
    print()
    print(toText(row))
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Results
user_scores = model.predict(X_user_pad)
is_positive = user_scores >= 0.5  # I'm an optimist

for i in range(len(user_reviews)):
    print('\n%.2f %s:' % (user_scores[i][0], 'positive' if is_positive[i] else 'negative')
          + ' ' + user_reviews[i])
rnn-lstm-text-classification/LSTM Text Classification.ipynb
david-hagar/NLP-Analytics
mit
Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
# Installing the latest version of the package
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)

%%bash
# Exporting the project
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results. You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes. Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Evaluate the baseline model Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data. NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab. Review the learning and eval statistics for the baseline_model.
%%bigquery
# Eval statistics on the held out data.
# Here, the ML.TRAINING_INFO function is used to see the per-iteration training statistics
SELECT *, SQRT(loss) AS rmse
FROM ML.TRAINING_INFO(MODEL feat_eng.baseline_model)

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT * FROM ML.EVALUATE(MODEL feat_eng.baseline_model)
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
NOTE: Because you performed a linear regression, the results include the following columns:

* mean_absolute_error
* mean_squared_error
* mean_squared_log_error
* median_absolute_error
* r2_score
* explained_variance

Resource for an explanation of the Regression Metrics.

Mean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values.

Root mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.

R2: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean.

Next, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
%%bigquery
#TODO 1
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL feat_eng.baseline_model)
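As a plain-Python cross-check of what these metrics mean (not part of the lab itself, and using made-up numbers), RMSE and R2 can be computed directly from arrays of actuals and predictions:

```python
import numpy as np

# Hypothetical example arrays; in the lab these values come from BigQuery ML.
y_true = np.array([9.5, 12.0, 7.25, 15.0])
y_pred = np.array([10.0, 11.0, 8.00, 14.0])

mse = np.mean((y_true - y_pred) ** 2)
rmse = np.sqrt(mse)  # same units as the fare, hence easy to interpret
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
print(rmse, r2)
```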
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Model 1: EXTRACT dayofweek from the pickup_datetime feature. As you recall, dayofweek is an enum representing the 7 days of the week. This factory allows the enum to be obtained from the int value. The int value follows the ISO-8601 standard, from 1 (Monday) to 7 (Sunday). If you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned would be integer. Next, we create a model titled "model_1" from the benchmark model and extract out the DayofWeek.
%%bigquery
#TODO 2
CREATE OR REPLACE MODEL feat_eng.model_1
OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS
SELECT
    fare_amount,
    passengers,
    pickup_datetime,
    EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
    pickuplon,
    pickuplat,
    dropofflon,
    dropofflat
FROM feat_eng.feateng_training_data
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
%%bigquery
# Here, ML.TRAINING_INFO function is used to see information about the training iterations of a model.
SELECT *, SQRT(loss) AS rmse
FROM ML.TRAINING_INFO(MODEL feat_eng.model_1)

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT * FROM ML.EVALUATE(MODEL feat_eng.model_1)
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model.
%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL feat_eng.model_1)
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Model 2: EXTRACT hourofday from the pickup_datetime feature As you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date. Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am. Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
%%bigquery
#TODO 3a
CREATE OR REPLACE MODEL feat_eng.model_2
OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS
SELECT
    fare_amount,
    passengers,
    #pickup_datetime,
    EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
    EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
    pickuplon,
    pickuplat,
    dropofflon,
    dropofflat
FROM `feat_eng.feateng_training_data`

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT * FROM ML.EVALUATE(MODEL feat_eng.model_2)

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL feat_eng.model_2)
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Model 3: Feature cross dayofweek and hourofday using CONCAT

First, let's allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross).

Note: BQML by default assumes that numbers are numeric features and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it.

Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3".
%%bigquery
#TODO 3b
CREATE OR REPLACE MODEL feat_eng.model_3
OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS
SELECT
    fare_amount,
    passengers,
    #pickup_datetime,
    #EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek,
    #EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
    CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING),
           CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday,
    pickuplon,
    pickuplat,
    dropofflon,
    dropofflat
FROM `feat_eng.feateng_training_data`

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT * FROM ML.EVALUATE(MODEL feat_eng.model_3)

%%bigquery
# Here, ML.EVALUATE function is used to evaluate model metrics
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL feat_eng.model_3)
courses/machine_learning/deepdive2/feature_engineering/solutions/1_bqml_basic_feat_eng.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
2. Check that your problem is solvable

We don't have to check everything, but rather than spend hours trying to debug our program, it helps to spend a few moments making sure the data we have to work with makes sense. Certainly this doesn't have to be exhaustive, but it saves headaches later. Some common things to check, specifically in this context:

* Missing variables: Do we have lanes defined for factories or warehouses that don't exist?
* Impossible conditions: Is the total demand more than the total supply? Is the inbound capacity obviously too small to feed each retailer? etc.
# Do we have lanes defined for factories or warehouses that don't exist?
all_locations = set(lanes.origin) | set(lanes.destination)

for f in factories.Factory:
    if f not in all_locations:
        print('missing ', f)

for w in warehouses.Warehouse:
    if w not in all_locations:
        print('missing ', w)

# Is the total demand more than the total supply?
assert factories.Supply.sum() >= warehouses.Demand.sum()

# Is the inbound capacity obviously too small to feed each retailer?
capacity_in = lanes.groupby('destination').capacity.sum()
check = warehouses.set_index('Warehouse').join(capacity_in)
assert np.all(check.capacity >= check.Demand)
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
3. Model the data with a graph Our data has a very obvious graph structure to it. We have factories and warehouses (nodes), and we have lanes that connect them (edges). In many cases the extra effort of explicitly making a graph allows us to have very natural looking constraint and objective formulations. This is absolutely not required but makes reasoning very straightforward. To make a graph, we will use networkx
import networkx as nx

G = nx.DiGraph()

# add all the nodes
for i, row in factories.iterrows():
    G.add_node(row.Factory, supply=row.Supply, node_type='factory')

for i, row in warehouses.iterrows():
    G.add_node(row.Warehouse, demand=row.Demand, node_type='warehouse')

# add the lanes (edges)
for i, row in lanes.iterrows():
    G.add_edge(row.origin, row.destination, cost=row.cost, capacity=row.capacity)

# lets make a quick rendering to spot check the connections
%matplotlib inline
layout = nx.layout.circular_layout(G)
nx.draw(G, layout)
nx.draw_networkx_labels(G, layout);
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
4. Define the actual Linear Program So far everything we have done hasn't concerned itself with solving a linear program. We have one primary question to answer here: What quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost? Taking this apart, we are looking for quantities from each factory to each warehouse - these are our shipping lanes. We will need as many variables as we have lanes.
from pulp import *
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
The variables are the amounts to put on each edge. LpVariable.dicts allows us to access the variables using dictionary access syntax, i.e., the quantity from Garfield to BurgerQueen is

```python
qty[('Garfield','BurgerQueen')]
```

The actual variable name created under the hood is qty_('Garfield',_'BurgerQueen').
qty = LpVariable.dicts("qty", G.edges(), lowBound=0)
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
okay cool, so what about our objective? Revisiting the question: What quantity of plumbus-es should we ship from each factory to each warehouse to minimize the total shipping cost? We are seeking to minimize the shipping cost. So we need to calculate our shipping cost as a function of our variables (the lanes), and it needs to be linear. This is just the lane quantity multiplied by the lane cost. $$f(Lanes) = \sum_{o,d \in Lanes} qty_{o,d}*cost_{o,d} $$ When dealing with sums in pulp, it is most efficient to use its supplied lpSum function.
# the total cost of this routing is the cost per unit * the qty sent on each lane
def objective():
    shipping_cost = lpSum([qty[(org, dest)]*data['cost']
                           for (org, dest, data) in G.edges(data=True)])
    return shipping_cost
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
We have a few constraints to define:

1. The demand at each retailer must be satisfied. In graph syntax this means the sum over all inbound edges must match the demand we have on file:

$$\sum_{(o,d) \,\in\, \text{in\_edges}(d)} qty_{o,d} = Demand(d)$$

2. We must not use more supply than each factory has, i.e., the sum over the outbound edges from a factory must be less than or equal to the supply:

$$\sum_{(o,d) \,\in\, \text{out\_edges}(o)} qty_{o,d} \leq Supply(o)$$

3. Each qty must be less than or equal to the lane capacity:

$$qty_{o,d} \leq Capacity_{o,d}$$

networkx makes this very easy to program because we can simply ask for all the inbound edges to a given node using nx.DiGraph.in_edges.
def constraints():
    constraints = []
    for x, data in G.nodes(data=True):

        # demand must be met
        if data['node_type'] == 'warehouse':
            inbound_qty = lpSum([qty[(org, x)] for org, _ in G.in_edges(x)])
            c = inbound_qty == data['demand']
            constraints.append(c)

        # must not use more than the available supply
        elif data['node_type'] == 'factory':
            out_qty = lpSum([qty[(x, dest)] for _, dest in G.out_edges(x)])
            c = out_qty <= data['supply']
            constraints.append(c)

    # now the edge constraints:
    # qty <= capacity on each lane
    for org, dest, data in G.edges(data=True):
        c = qty[(org, dest)] <= data['capacity']
        constraints.append(c)

    return constraints
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
Finally ready to create the problem, add the objective, and add the constraints
# setup the problem
prob = LpProblem('warehouse_routing', LpMinimize)

# add the objective
prob += objective()

# add all the constraints
for c in constraints():
    prob += c
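This excerpt jumps from building the problem to reading out the solution, so presumably the notebook calls the solver in between. A minimal sketch of that step, using PuLP's default solver:

```python
# Hypothetical solve step: run the default solver (CBC) and check the status.
prob.solve()
print(LpStatus[prob.status])  # expect 'Optimal'
```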
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
Now we can finally answer: What quantity of plumbus-es should we ship from each factory to each warehouse?
# you can also use the value() function instead of .varValue
for org, dest in G.edges():
    v = value(qty[(org, dest)])
    if v > 0:
        print(org, dest, v)
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
and, How much will our shipping cost be?
value(prob.objective)
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
It is a good idea to verify explicitly that all the constraints were met. Sometimes it is easy to forget a necessary constraint.
# lets verify all the conditions
# first lets stuff our result into a dataframe for export
result = []
for org, dest in G.edges():
    v = value(qty[(org, dest)])
    result.append({'origin': org, 'destination': dest, 'qty': v})

result_df = pd.DataFrame(result)

lanes['key'] = lanes.origin + lanes.destination
result_df['key'] = result_df.origin + result_df.destination
lanes = lanes.set_index('key').merge(result_df.set_index('key'))

# any lane over capacity?
assert np.all(lanes.qty <= lanes.capacity)

# check that we met the demand
out_qty = lanes.groupby('destination').qty.sum()
check = warehouses.set_index('Warehouse').join(out_qty)
assert np.all(check.qty == check.Demand)

# check that we met the supply
in_qty = lanes.groupby('origin').qty.sum()
check = factories.set_index('Factory').join(in_qty)
assert np.all(check.qty <= check.Supply)

# the result!
lanes[lanes.qty != 0]
03.5-network-flows/factory_routing_problem.ipynb
cochoa0x1/integer-programming-with-python
mit
The Kernel The kernel decays quickly around $x=0$, which is why cubic splines suffer from very little "ringing" -- moving one point doesn't significantly affect the curve at points far away.
#
# Plot the kernel
#
DECAY = math.sqrt(3) - 2

vs = [3*(DECAY**x) for x in range(1, 7)]
ys = [0]*len(vs) + [1] + [0]*len(vs)
vs = [-v for v in vs[::-1]] + [0.0] + vs

xs = numpy.linspace(0, len(ys)-1, 1000)

plt.figure(0, figsize=(12.0, 4.0))
plt.grid(True)
plt.ylim([-0.2, 1.1])
plt.xticks(range(-5, 6))
plt.plot([x-6.0 for x in xs], [hermite_interp(ys, vs, x) for x in xs])
NaturalCubicSplines.ipynb
mtimmerm/IPythonNotebooks
apache-2.0
Printing had a weird implementation in Python 2 that was rectified, and now printing is a function like every other. This means that you must use brackets when you print something. Then in Python 2, there were two ways of doing integer division: 2 / 3 and 2 // 3 both gave zero. In Python 3, the former triggers a type upgrade to floats. If you import division from future, you get the same behaviour in Python 2. 3.4 Don't know how to code? Completely new to Python? A good start for any programming language is a Jupyter kernel if the language has one. Jupyter was originally designed for Python, so naturally it has a matching kernel. Why Jupyter? It is a uniform interface for many languages (Python, Julia, R, Scala, Haskell, even bloody MATLAB has a Jupyter kernel), so you can play with a new language in a familiar, interpreter-oriented environment. If you never coded in your life, it is also a good start, as you get instant feedback on your initial steps in what essentially is a tab in your browser. If you are coming from MATLAB, or you advanced beyond the skills of writing a few dozens lines of code in Python, I recommend using Spyder. It is an awesome integrated environment for doing scientific work in Python: it includes instant access to documentation, variable inspection, code navigation, an IPython console, plus cool tools for writing beautiful and efficient code. For tutorials, check out the Learning tab in Anaconda Navigator. Both videos and other tutorials are available in great multitude. 4. Where to find code and how (don't reinvent the wheel, round 1) The fundamental difference between a computer scientist and an arbitrary other scientist is that the former will first try to find other people's code to achieve a task, whereas the latter type is suspicious of alien influence and will try to code up everything from scratch. Find a balance. Here we are not talking about packages: we are talking about snippets of code. The chances are slim that you want to do something in Python that N+1 humans did not do before. Two and a half places to look for code: The obvious internet search will point you to the exact solution on Stackoverflow. Code search engines are junk, so for even half-trivial queries that include idiomatic use of a programming language, they will not show up much. This is when you can turn to GitHub's Advanced Search. It will not let you search directly for code, but you can restrict your search by language, and look at relevant commits and issues. You have a good chance of finding what you want. GitHub has a thing called gist. These are short snippets (1-150 lines) of code under git control. The gist search engine is awesome for finding good code. Exercise 1. Find three different ways of iterating over a dictionary and printing out each key-value pairs. Explain the design principle of one obvious way of doing something through this example. If you do not know what a dictionary is, that is even better. 5. Why am I committing a crime against humanity by using MATLAB? Hate speech follows: Licence fee: MathWorks is second biggest enemy of science after academic publishers. You need a pricey licence on every computer where you want to use it. Considering that the language did not see much development since 1984, it does not seem like a great deal. They, however, ensure that subsequent releases break something, so open source replacement efforts like Octave will never be able to catch up. Package management does not exist. 
Maintenance: maintaining a toolbox is a major pain since the language forces you to have a very large number of files. Slow: raw MATLAB code is on par with Python in terms of inefficiency. It can be fast, but only when the operations you use actually translate to low-level linear algebra operations. MEX: this system was designed to interact with C code. In reality, it only ensures that you tear your hair out if you try to use it. Interface is not decoupled correctly. You cannot use the editor while running a code in the interpreter. Seriously? In 2017? Name space mangling: imported functions override older ones. There is no other option. You either overwrite, or you do not use a toolbox. Write-only language: this one can be argued. With an excessive use of parentheses, MATLAB code can be pretty hard to parse, but allegedly some humans mastered it. 6. Package management (don't reinvent the wheel, round 2) Once you go beyond the basic hurdles of Python, you definitely want to use packages. Many of them are extremely well written, efficient, and elegant. Although most of the others are complete junk. Package management in Python used to be terrible, but nowadays it is simply bad (this is already a step up from MATLAB or Mathematica). So where does the difficulty stem from? From compilation. Since Python interacts so well with compiled languages, it is the most natural thing to do to bypass the GIL with C or Cython code for some quick calculations, and then get everything back to Python. The problem is that we have to deal with three major operating systems and at least three compiler chain families. Python allows the distribution of pre-compiled packages through a system called wheels, which works okay if the developers have access to all the platforms. Anaconda itself is essentially a package management system for Python, shipping precompiled binaries that supposed to work together well. So, assuming you have Anaconda, and you know which package you want to install, try this first: conda install whatever_package If the package is not in the Anaconda ecosytem, you can use the standard Python Package Index (PyPI) through the ultra-universal pip command: pip install whatever_package If you do not have Anaconda or you use some shared computer, change this to pip install whatever_package --user. This will install the package locally to your home folder. Depending on your operating system, several things can happen. Windows: if there are no binaries in Anaconda or on PyPI, good luck. Compilation is notoriously difficult to get right on Windows both for package developers and for users. macOS: if there are no binaries in Anaconda or on PyPI, start scratching your head. There are two paths to follow: (i) the code will compile with Apple's purposefully maimed Clang variant. In this case, if you XCode, things will work with a high chance of success. The downside: Apple hates you. They keep removing support for compiling multithreaded from Clang. (ii) Install the uncontaminated GNU Compiler Chain (gcc) with brew. You still have a high chance of making it work. The problems begin if the compilation requires many dependent libraries to be present, which may or may not be supported by brew. Linux: there are no binaries by design. The compiler chain is probably already there. The pain comes from getting the development headers of all necessary libraries, not to mention, the right version of the libraries. Ubuntu tends to have outdated libraries. Exercise 2. Install the conic optimization library Picos. 
In Anaconda, proceed in two steps: install cvxopt with conda, then Picos from PyPI. If you are not using Anaconda, a plain pip install will be just fine.

7. Idiomatic Python

7.1 Tricks with lists

Python has little syntactic sugar, precisely because it wants to keep code readable. One thing you can do, though, is define lists in a functional-programming style that will be familiar to Mathematica users. This is the crappy way of filling a list with values:
l = []
for i in range(10):
    l.append(i)
print(l)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
This is more Pythonesque:
l = [i**2 for i in range(10)]
print(l)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
What you have inside the square brackets works like a generator expression. Sometimes you do not need the list itself, only its values; in such cases, the bare generator expression suffices. The following two lines of code achieve the same thing:
print(sum([i for i in range(10)]))
print(sum(i for i in range(10)))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Which one is more efficient? Why? (A small sketch after the next cell illustrates the difference.) You can also use conditionals in generator expressions. For instance, this is a cheap way to get the even numbers:
[i for i in range(10) if i % 2 == 0]
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
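To make the earlier efficiency question concrete, here is a small illustrative comparison (a sketch added for illustration, not one of the tutorial's own cells): the list comprehension materializes every element in memory, while the generator expression produces values one at a time and keeps almost nothing around.

import sys

as_list = [i for i in range(10**6)]   # a million integers stored in memory
as_gen = (i for i in range(10**6))    # a tiny object that yields values lazily

print(sys.getsizeof(as_list))   # several megabytes
print(sys.getsizeof(as_gen))    # a few hundred bytes, independent of the range
print(sum(as_gen))              # same sum as sum(as_list), without building a list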
Exercise 3. List all odd square numbers below 1000.

7.2 PEP8

And on the seventh day, God created PEP8. Python Enhancement Proposals (PEPs) are a series of ideas and good practices for writing nice Python code and for evolving the language. PEP8 is the set of policies that tells you what makes Python code pretty, meaning easy to read for any other Python programmer. In an ideal world, everybody would follow it. Start programming in Python by keeping good practices in mind. As a starter, Python uses indentation and indentation alone to express the hierarchy of code. Use EXACTLY four space characters as indentation, always. If somebody tells you to use one tab, butcher the devil on the spot. Bad:
for _ in range(10): print("Vomit")
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Good:
for _ in range(10):
    print("OMG, the code generating this is so prettily indented")
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
The code is more readable if it is a bit leafy. For this reason, leave a space after every comma, just as you would in natural languages:
print([1,2,3,4])     # Ugly crap
print([1, 2, 3, 4])  # My god, this is so much easier to read!
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Spyder has tools that help you keep to PEP8; unfortunately, it is not so straightforward in Jupyter (a possible command-line alternative is sketched after the exercise below). Exercise 4. Clean up this horrific mess:
for i in range(2,5): print(i) for j in range( -10,0, 1): print(j )
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
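If you want an automatic check outside Spyder, one option (an assumption on my part, not something this tutorial sets up) is the pycodestyle tool, which can be run on a script from a terminal or from a notebook cell. The filename below is only a placeholder.

# Install once with: pip install pycodestyle
!pycodestyle my_script.py   # prints every PEP8 violation with its line number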
7.3 Tuples, swap

Tuples are like lists, but immutable and with a fixed number of entries. Technically, this is a tuple:
t = (2, 3, 4)
print(t)
print(type(t))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
You would, however, seldom use it in this form, because you would just use a list. Tuples come in handy in certain scenarios, like enumerating a list:
very_interesting_list = [i**2 - 1 for i in range(10) if i % 2 != 0]
for i, e in enumerate(very_interesting_list):
    print(i, e)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Here enumerate returns a tuple with the running index and the matching entry of the list. You can also zip several lists together and create a stream of tuples:
another_interesting_list = [i**2 + 1 for i in range(10) if i % 2 == 0]
for i, j in zip(very_interesting_list, another_interesting_list):
    print(i, j)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
You can use tuple-like assignment to initialize multiple variables:
a, b, c = 1, 2, 3
print(a, b, c)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
This syntax in turn gives you the most elegant way of swapping the values of two variables:
a, b = b, a
print(a, b)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
7.4 Indexing

You saw that you can use in, zip, and enumerate to iterate over lists. You can also use slicing on one-dimensional lists:
l = [i for i in range(10)]
print(l)
print(l[2:5])
print(l[2:])
print(l[:-1])
l[-2]
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Note that the upper index is not inclusive (the same as in range). The index -1 refers to the last item, -2 to the second to last, and so on. Python lists are zero-indexed. Unfortunately, you cannot do convenient double indexing on multidimensional lists; for that, you need numpy.
import numpy as np

a = np.array([[(i+1)*(j+1) for j in range(5)] for i in range(3)])
print(a)
print(a[:, 0])
print(a[0, :])
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Exercise 5. Get the bottom-right 2x2 submatrix of a.

8. Types

Python hides the pain of working with types: you do not have to declare the type of any variable. That does not mean variables have no type, though; the type gets assigned automatically by an internal type inference mechanism. To demonstrate this, we import the main numerical and symbolic packages, along with an option to pretty-print symbolic expressions.
import sympy as sp
import numpy as np
from sympy.interactive import printing
printing.init_printing(use_latex='mathjax')

print(np.sqrt(2))
sp.sqrt(2)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
The types tell you why these two look different:
print(type(np.sqrt(2)))
print(type(sp.sqrt(2)))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
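To make the difference concrete, squaring the two square roots shows it immediately (a small added check, not one of the original cells):

print(sp.sqrt(2)**2)   # exactly 2, the symbolic value is kept at full precision
print(np.sqrt(2)**2)   # 2.0000000000000004, a 64-bit floating point approximation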
The symbolic representation is, in principle, of infinite precision, whereas the numerical representation uses 64 bits. As we said above, you can do some things with numpy arrays that you cannot do with lists. Their types can be checked:
a = [0. for _ in range(5)]
b = np.zeros(5)
print(a)
print(b)
print(type(a))
print(type(b))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
There are many differences between numpy arrays and lists. The most important ones are that lists can grow, whereas arrays cannot, and lists can contain any mixture of objects, whereas numpy arrays only contain entries of the same type (both points are illustrated in the short sketch after the next cell). Type conversion is (usually) easy:
print(type(list(b)))
print(type(np.array(a)))
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
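A quick illustration of the two differences mentioned above (a sketch added for illustration, not one of the original cells): lists grow and mix types freely, numpy arrays do not.

mixed = [1, "two", 3.0]            # a list happily mixes types
mixed.append(sp.pi)                # and it can grow
print(mixed)

print(np.array([1, "two", 3.0]))   # numpy forces a common type (here: strings)
# np.zeros(5).append(1.0) would fail: arrays have a fixed size and no append method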
This is where the trouble begins:
from sympy import sqrt
from numpy import sqrt

sqrt(2)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Because of this, never import everything from a package: from numpy import * is forbidden. Exercise 6. What would you do to keep everything at infinite precision to ensure the correctness of a computational proof? This does not seem to be working:
b = np.zeros(3)
b[0] = sp.pi
b[1] = sqrt(2)
b[2] = 1/3
print(b)
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
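The reason the previous cell "does not work" is that b is a float64 array, so each assignment silently converts the exact symbolic values into 64-bit floats. A minimal sketch of one way to keep everything exact, which is also one possible answer to Exercise 6, is to store sympy objects, for instance in a sympy Matrix:

exact = sp.Matrix([sp.pi, sp.sqrt(2), sp.Rational(1, 3)])
print(exact)   # the entries stay pi, sqrt(2) and 1/3, with no rounding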
9. Read the fine documentation (and write it)

Python packages and individual functions typically come with documentation, often hosted on ReadTheDocs. For individual functions, you can get the matching documentation as you type: just press Shift+Tab on a function:
sp.sqrt
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
In Spyder, Ctrl+I brings up the documentation of a function. This documentation is called a docstring; it is extremely easy to write, and you should write one whenever you define a function. It is epsilon effort and takes only a second. Here is an example:
def multiply(a, b):
    """Multiply two numbers together.

    :param a: The first number to be multiplied.
    :type a: float.
    :param b: The second number to be multiplied.
    :type b: float.

    :returns: the multiplication of the two numbers.
    """
    return a*b
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
Now you can press Shift+Tab to see the above documentation:
multiply
Tutorials/Python_Introduction.ipynb
jdnz/qml-rg
gpl-3.0
To give our Python function a test run, we will now do some imports and generate the input data for the initial conditions of our metal sheet, with a few very hot points. We'll also make two plots: one after a thousand time steps, and a second after another two thousand time steps. Note that the plots use different color ranges. Also, executing the cells below may take a little while.
import numpy

# nx and ny (the grid dimensions) are assumed to be defined earlier in the
# original notebook; they are not part of this excerpt.

# setup initial conditions
def get_initial_conditions(nx, ny):
    field = numpy.ones((ny, nx)).astype(numpy.float32)
    # set ten random grid points to a very high temperature
    field[numpy.random.randint(0, ny, size=10), numpy.random.randint(0, nx, size=10)] = 1e3
    return field

field = get_initial_conditions(nx, ny)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
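The cells below call a diffuse function that is defined earlier in the original notebook but is not part of this excerpt. A minimal numpy sketch consistent with the CUDA kernel shown later (dt = 0.225, five-point stencil, borders left untouched) would be the following; treat it as a stand-in rather than the notebook's exact definition.

def diffuse(u, dt=0.225):
    """Perform one explicit diffusion step on the interior points of u."""
    u_new = numpy.copy(u)
    u_new[1:-1, 1:-1] = u[1:-1, 1:-1] + dt * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1]
    )
    return u_new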
We can now use this initial condition to solve the diffusion problem and plot the results.
from matplotlib import pyplot
%matplotlib inline

# run the diffuse function 1000 times and another 2000 times and make plots
fig, (ax1, ax2) = pyplot.subplots(1, 2)

cpu = numpy.copy(field)
for i in range(1000):
    cpu = diffuse(cpu)
ax1.imshow(cpu)

for i in range(2000):
    cpu = diffuse(cpu)
ax2.imshow(cpu)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Now let's take a quick look at the execution time of our diffuse function. Before we do, we again copy the initial state of the metal sheet, so that we can restart the computation from the same state.
# run another 1000 steps of the diffuse function and measure the time
from time import time

start = time()
cpu = numpy.copy(field)
for i in range(1000):
    cpu = diffuse(cpu)
end = time()

print("1000 steps of diffuse on a %d x %d grid took" % (nx, ny), (end - start) * 1000.0, "ms")
pyplot.imshow(cpu)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
The above CUDA kernel parallelizes the work such that every grid point is processed by a different CUDA thread. Therefore, the kernel is executed by a 2D grid of threads, which are grouped into 2D thread blocks. The specific thread block dimensions we choose are not important for the result of the computation in this kernel, but, as we will see later, they do have an impact on performance. In this kernel we use two, currently undefined, compile-time constants for block_size_x and block_size_y, because we will auto-tune these parameters later. Fixing the thread block dimensions at compile time is often needed for performance, because the compiler can unroll loops that iterate using the block size, or because you need to allocate shared memory using the thread block dimensions. The next bit of Python code initializes PyCuda and makes preparations so that we can call the CUDA kernel to do the computation on the GPU, as we did earlier in Python.
from pycuda import driver, compiler, gpuarray, tools
import pycuda.autoinit
from time import time

# allocate GPU memory
u_old = gpuarray.to_gpu(field)
u_new = gpuarray.to_gpu(field)

# setup thread block dimensions and compile the kernel
threads = (16, 16, 1)
grid = (int(nx/16), int(ny/16), 1)
block_size_string = "#define block_size_x 16\n#define block_size_y 16\n"
mod = compiler.SourceModule(block_size_string + kernel_string)
diffuse_kernel = mod.get_function("diffuse_kernel")
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
The above code is the bit of boilerplate we need to compile a kernel using PyCuda. We have also, for the moment, fixed the thread block dimensions at 16 by 16. These dimensions serve as our initial guess for a well performing pair of thread block dimensions. Now that we've set everything up, let's see how long the computation takes on the GPU.
# call the GPU kernel 1000 times and measure performance
t0 = time()
for i in range(500):
    diffuse_kernel(u_new, u_old, block=threads, grid=grid)
    diffuse_kernel(u_old, u_new, block=threads, grid=grid)
driver.Context.synchronize()
print("1000 steps of diffuse on a %d x %d grid took" % (nx, ny), (time() - t0) * 1000, "ms.")

# copy the result from the GPU to Python for plotting
gpu_result = u_old.get()

fig, (ax1, ax2) = pyplot.subplots(1, 2)
ax1.imshow(gpu_result)
ax1.set_title("GPU Result")
ax2.imshow(cpu)
ax2.set_title("Python Result")
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
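This excerpt skips the cell that defines the search space and performs the first tune_kernel run whose output is discussed next. A minimal sketch of such a cell is shown below; the parameter value lists and argument names are illustrative assumptions rather than the original notebook's, and kernel_string holds the naive CUDA kernel defined earlier in the full notebook.

from collections import OrderedDict
from kernel_tuner import tune_kernel

# kernel arguments: output and input arrays, matching diffuse_kernel(u_new, u)
args = [numpy.zeros_like(field), field]
problem_size = (nx, ny)

# candidate thread block dimensions to try (illustrative lists)
tune_params = OrderedDict()
tune_params["block_size_x"] = [16, 32, 64, 128]
tune_params["block_size_y"] = [1, 2, 4, 8, 16, 32]

result = tune_kernel("diffuse_kernel", kernel_string, problem_size, args, tune_params)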
Note that the Kernel Tuner prints a lot of useful information. To ensure you can tell what was measured in a run, the Kernel Tuner always prints the GPU or OpenCL device name that is being used, as well as the name of the kernel. After that, every line contains a combination of parameters and the time that was measured during benchmarking. The time being printed is in milliseconds and is obtained by averaging the execution time of 7 runs of the kernel. Finally, as a matter of convenience, the Kernel Tuner also prints the best performing combination of tunable parameters. Later on in this tutorial we'll explain how to analyze and store the tuning results using Python.

Looking at the results printed above, the difference in performance between the kernel configurations may seem small. However, on our hardware the performance of this kernel already varies on the order of 10%, which can add up to a large difference in total execution time if the kernel is executed thousands of times. We can also see that the performance of the best configuration in this set is 5% better than our initially guessed thread block dimensions of 16 by 16.

In addition, you may notice that not all possible combinations of values for block_size_x and block_size_y are among the results; for example, 128x32 is not there. This is because some configurations require more threads per thread block than allowed on our GPU. The Kernel Tuner checks the limitations of your GPU at runtime and automatically skips configurations that use too many threads per block. It does the same for kernels that cannot be compiled because they use too much shared memory, and for kernels that use too many registers to be launched at runtime. If you'd like to know which configurations were skipped automatically, you can pass the optional parameter verbose=True to tune_kernel.

Knowing the best performing combination of tunable parameters becomes even more important once we start to further optimize our CUDA kernel. In the next section, we'll add a simple code optimization and show how it affects performance.

Using shared memory

Shared memory is a special type of memory available in CUDA. It can be used by threads within the same thread block to exchange and share values; it is, in fact, one of the very few ways for threads to communicate on the GPU.

The idea is that we'll try to improve the performance of our kernel by using shared memory as a software-controlled cache. There are already caches on the GPU, but most GPUs only cache accesses to global memory in L2. Shared memory is close to the multiprocessors where the thread blocks are executed, comparable to an L1 cache. However, because there are also hardware caches, the performance improvement from this step is not expected to be large. The finer-grained control we get from a software-managed cache, rather than a hardware-implemented one, comes at the cost of some instruction overhead; in fact, performance is quite likely to degrade a little. This intermediate step is, however, necessary for the next optimization we have in mind.
kernel_string_shared = """
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int bx = blockIdx.x * block_size_x;
    int by = blockIdx.y * block_size_y;

    __shared__ float sh_u[block_size_y+2][block_size_x+2];

    #pragma unroll
    for (int i = ty; i<block_size_y+2; i+=block_size_y) {
        #pragma unroll
        for (int j = tx; j<block_size_x+2; j+=block_size_x) {
            int y = by+i-1;
            int x = bx+j-1;
            if (x>=0 && x<nx && y>=0 && y<ny) {
                sh_u[i][j] = u[y*nx+x];
            }
        }
    }
    __syncthreads();

    int x = bx+tx;
    int y = by+ty;
    if (x>0 && x<nx-1 && y>0 && y<ny-1) {
        int i = ty+1;
        int j = tx+1;
        u_new[y*nx+x] = sh_u[i][j] + dt * (sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] + sh_u[i][j-1] + sh_u[i-1][j]);
    }
}
""" % (nx, ny)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can now tune this new kernel using the kernel tuner
result = tune_kernel("diffuse_kernel", kernel_string_shared, problem_size, args, tune_params)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Tiling GPU Code

One very useful code optimization is called tiling, sometimes also called thread-block-merge. You can look at it this way: currently we have many thread blocks that together work on the entire domain. If we were to use only half the number of thread blocks, every thread block would need to do twice the work to cover the entire domain. However, the threads may be able to reuse part of the data and computation that is required to process a single output element for every element beyond the first.

This is a code optimization because we are effectively reducing the total number of instructions executed by all threads in all thread blocks. So, in a way, we are condensing the total instruction stream while keeping all the really necessary compute instructions. More importantly, we are increasing data reuse, where previously these values would have had to come from the cache or, in the worst case, from GPU memory.

We can apply tiling in both the x and y dimensions. This also introduces two new tunable parameters, namely the tiling factors in x and y, which we will call tile_size_x and tile_size_y. This is what the new kernel looks like:
kernel_string_tiled = """
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int bx = blockIdx.x * block_size_x * tile_size_x;
    int by = blockIdx.y * block_size_y * tile_size_y;

    __shared__ float sh_u[block_size_y*tile_size_y+2][block_size_x*tile_size_x+2];

    #pragma unroll
    for (int i = ty; i<block_size_y*tile_size_y+2; i+=block_size_y) {
        #pragma unroll
        for (int j = tx; j<block_size_x*tile_size_x+2; j+=block_size_x) {
            int y = by+i-1;
            int x = bx+j-1;
            if (x>=0 && x<nx && y>=0 && y<ny) {
                sh_u[i][j] = u[y*nx+x];
            }
        }
    }
    __syncthreads();

    #pragma unroll
    for (int tj=0; tj<tile_size_y; tj++) {
        int i = ty+tj*block_size_y+1;
        int y = by + ty + tj*block_size_y;
        #pragma unroll
        for (int ti=0; ti<tile_size_x; ti++) {
            int j = tx+ti*block_size_x+1;
            int x = bx + tx + ti*block_size_x;
            if (x>0 && x<nx-1 && y>0 && y<ny-1) {
                u_new[y*nx+x] = sh_u[i][j] + dt * (sh_u[i+1][j] + sh_u[i][j+1] -4.0f * sh_u[i][j] + sh_u[i][j-1] + sh_u[i-1][j]);
            }
        }
    }
}
""" % (nx, ny)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can tune our tiled kernel by adding the two new tunable parameters to our dictionary tune_params. We also need to tell the Kernel Tuner to use fewer thread blocks when launching kernels with tile_size_x or tile_size_y larger than one. For this purpose the tune_kernel function supports two optional arguments, grid_div_x and grid_div_y. These are the grid divisor lists: lists of strings containing all the tunable parameters that divide a given grid dimension. So far we have been using the default settings, in which case the Kernel Tuner only uses block_size_x and block_size_y to divide the problem_size.

Note that the Kernel Tuner substitutes the values of the tunable parameters into these strings and uses the product of the parameters in a grid divisor list to compute that grid dimension, rounded up. You can even use arithmetic operations inside the strings, as they will be evaluated; we could therefore have used ["block_size_x*tile_size_x"] to get the same result.

We are now ready to call the Kernel Tuner again and tune the tiled kernel. Let's execute the following code block; note that it may take a while, as the number of kernel configurations the Kernel Tuner will try has just increased by a factor of 9!
tune_params["tile_size_x"] = [1, 2, 4]   # add tile_size_x to the tune_params
tune_params["tile_size_y"] = [1, 2, 4]   # add tile_size_y to the tune_params

grid_div_x = ["block_size_x", "tile_size_x"]   # tile_size_x impacts the grid dimensions
grid_div_y = ["block_size_y", "tile_size_y"]   # tile_size_y impacts the grid dimensions

result = tune_kernel("diffuse_kernel", kernel_string_tiled, problem_size, args, tune_params,
                     grid_div_x=grid_div_x, grid_div_y=grid_div_y)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
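Before analyzing the results, it is handy to be able to list every configuration that performs close to the best one. The following is a minimal sketch, assuming result holds the list of benchmark records returned by tune_kernel, each a dict with a 'time' entry plus the parameter values (newer Kernel Tuner versions return a (records, env) tuple):

records = result[0] if isinstance(result, tuple) else result
best = min(rec["time"] for rec in records)
for rec in records:
    if rec["time"] <= 1.05 * best:   # within 5% of the fastest configuration
        print(rec)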
We can see that the number of kernel configurations tried by the Kernel Tuner grows rather quickly. Also, the best performing configuration is quite a bit faster than the best kernel before we started optimizing: on our GTX Titan X, the execution time went from 0.72 ms to 0.53 ms, a performance improvement of 26%! Note that the thread block dimensions of this configuration are also different. Without optimizations, the best performing kernel used thread blocks of 32x2; after adding tiling, the best performing kernel uses thread blocks of 64x4, which is four times as many threads! The amount of work also increased, with tiling factors of 2 in the x-direction and 4 in the y-direction, increasing the work per thread block by a factor of 8. The difference in the area processed per thread block between the naive and the tiled kernel is therefore a factor of 32. However, several kernel configurations come close to the best one; the snippet above prints all instances with an execution time within 5% of the best performing configuration.

Using the best parameters in a production run

Now that we have determined which parameters are best for our problem, we can use them to simulate the heat diffusion problem. There are several ways to do so, depending on the host language you wish to use.

Python run

To use the optimized parameters in a Python run, we simply modify the kernel code to specify which values to use for the block and tile sizes. There are of course many ways to achieve this; in simple cases one can define a dictionary of values and replace the block_size and tile_size strings by their values.
import pycuda.autoinit
from collections import OrderedDict

# define the optimal parameters
size = [nx, ny, 1]
threads = [128, 4, 1]

# create a dict of fixed parameters
fixed_params = OrderedDict()
fixed_params['block_size_x'] = threads[0]
fixed_params['block_size_y'] = threads[1]

# select the kernel to use
kernel_string = kernel_string_shared

# replace the block/tile size
for k, v in fixed_params.items():
    kernel_string = kernel_string.replace(k, str(v))
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We also need to determine the size of the grid
# for the regular and shared-memory kernels
grid = [int(numpy.ceil(n/t)) for t, n in zip(threads, size)]
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We can then transfer the initial condition to the two GPU arrays, compile the code, and get the function we want to use.
# allocate GPU memory
u_old = gpuarray.to_gpu(field)
u_new = gpuarray.to_gpu(field)

# compile the kernel
mod = compiler.SourceModule(kernel_string)
diffuse_kernel = mod.get_function("diffuse_kernel")
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
We now just have to use the kernel with these optimized parameters to run the simulation
# call the GPU kernel 1000 times and measure performance
t0 = time()
for i in range(500):
    diffuse_kernel(u_new, u_old, block=tuple(threads), grid=tuple(grid))
    diffuse_kernel(u_old, u_new, block=tuple(threads), grid=tuple(grid))
driver.Context.synchronize()
print("1000 steps of diffuse on a %d x %d grid took" % (nx, ny), (time() - t0) * 1000, "ms.")

# copy the result from the GPU to Python for plotting
gpu_result = u_old.get()
pyplot.imshow(gpu_result)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
C run

If you wish to incorporate the optimized parameters into the kernel and use it from a C program, you can use #ifndef statements at the beginning of the kernel, as demonstrated in the pseudo code below.
kernel_string = """
#ifndef block_size_x
    #define block_size_x <insert optimal value>
#endif
#ifndef block_size_y
    #define block_size_y <insert optimal value>
#endif
#define nx %d
#define ny %d
#define dt 0.225f
__global__ void diffuse_kernel(float *u_new, float *u) {
    ......
}
""" % (nx, ny)
tutorial/diffusion_use_optparam.ipynb
benvanwerkhoven/kernel_tuner
apache-2.0
Calculate the average time step
# `np` and `t` (the decoded time values) are assumed to be defined earlier
# in the original notebook; this excerpt starts partway through it.
print(np.mean(np.diff(t)))
hfr_start_stop.ipynb
rsignell-usgs/notebook
mit
So we have time steps of about 1 hour. Now calculate the unique time steps:
print(np.unique(np.diff(t)).data)
hfr_start_stop.ipynb
rsignell-usgs/notebook
mit
So there are gaps of 2, 3, 6, 9, 10, 14 and 19 hours in the otherwise hourly data. For reference, here is the raw time variable:
nc['time'][:]
hfr_start_stop.ipynb
rsignell-usgs/notebook
mit
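If you also want to see where the gaps occur, a minimal follow-up sketch, assuming t still holds the time values used above, is:

deltas = np.diff(t)
typical = np.median(deltas)
for i in np.where(deltas > typical)[0]:
    print("gap of", deltas[i], "between", t[i], "and", t[i + 1])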
A bit of statistics
import numpy as np import matplotlib.pyplot as plt %matplotlib inline
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
We make two lists: the first will contain the ages of the science-club students, and the second the number of people of each age.
Edades = np.array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24])
Frecuencia = np.array([10, 22, 39, 32, 26, 10, 7, 5, 8, 1])

print(sum(Frecuencia))

plt.bar(Edades, Frecuencia)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Uniform distribution: the Mexican lotería game
x1 = np.random.rand(50)
plt.hist(x1)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Poisson distribution: the number of Facebook friend requests in a week
s = np.random.poisson(5, 20)
plt.hist(s)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Normal distribution: the distribution of grades on an exam
x = np.random.randn(50)
plt.hist(x)
plt.show()

x = np.random.randn(100)
plt.hist(x)
plt.show()

x = np.random.randn(200)
plt.hist(x)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
One way to automate this is:
tams = [1, 2, 3, 4, 5, 6, 7]
for tam in tams:
    numeros = np.random.randn(10**tam)
    plt.hist(numeros, bins=20)
    plt.title('%d' % tam)
    plt.show()

numeros = np.random.normal(loc=2.0, scale=2.0, size=1000)
plt.hist(numeros)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
Probability in a normal distribution

$1\sigma$ = 68.27%
$2\sigma$ = 95.45%
$3\sigma$ = 99.73%
$4\sigma$ = 99.9937%
$5\sigma$ = 99.99994%

Activities. Plot the following:

- Create 3 distributions, varying the mean
- Create 3 distributions, varying the standard deviation
- Create 2 distributions with some overlap

Gaussian bells in nature: final exit exam scores in Polish high schools.

Normal distribution in 2D
x = np.random.normal(loc=2.0, scale=2.0, size=100)
y = np.random.normal(loc=2.0, scale=2.0, size=100)
plt.scatter(x, y)
plt.show()
Dia3/4_Estadistica_Basica.ipynb
beangoben/HistoriaDatos_Higgs
gpl-2.0
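As a quick numerical check of the sigma coverages listed above, one option (assuming scipy is available) is:

from scipy import stats

for k in range(1, 6):
    coverage = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print("%d sigma: %.5f %%" % (k, 100 * coverage))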
Load CSV data

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/csv"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table>

This tutorial provides an example of how to load CSV data from a file into a tf.data.Dataset. The data used in this tutorial is taken from the Titanic passenger list. The model will predict a passenger's probability of survival based on characteristics such as age, sex, ticket class, and whether the person was traveling alone.

Setup
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass

from __future__ import absolute_import, division, print_function, unicode_literals

import functools
import numpy as np
import tensorflow as tf

TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"

train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)

# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
site/pt-br/tutorials/load_data/csv.ipynb
tensorflow/docs-l10n
apache-2.0
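The original tutorial continues beyond this excerpt. As a rough sketch of the kind of loading step it builds towards (the batch size and label column name below are illustrative assumptions), the downloaded CSV files can be read into a tf.data.Dataset with tf.data.experimental.make_csv_dataset:

LABEL_COLUMN = 'survived'   # assumed label column of the Titanic CSV

def get_dataset(file_path):
    # Build a batched tf.data.Dataset directly from the CSV file.
    return tf.data.experimental.make_csv_dataset(
        file_path,
        batch_size=12,            # illustrative; small so examples are easy to inspect
        label_name=LABEL_COLUMN,
        na_value="?",
        num_epochs=1,
        ignore_errors=True)

raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)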