markdown | code | path | repo_name | license
---|---|---|---|---|
Now we create a new classifier and train it with this output and the ground-truth labels.
The classifier is copied from our first VGG-style network. | input_shape = bottleneck_features_train.shape[1:]
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten, Input
# try and vary between .4 and .75
drop_out = 0.50
inputs = Input(shape=input_shape)
x = Flatten()(inputs)
# this is an additional dropout to compensate for the missing one after bottleneck features
x = Dropout(drop_out)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(drop_out)(x)
# softmax activation, 6 categories
predictions = Dense(6, activation='softmax')(x)
classifier_model = Model(input=inputs, output=predictions)
classifier_model.summary()
classifier_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
!rm -rf tf_log
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')
# To start tensorboard
# tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log
# open http://localhost:6006 | notebooks/workshops/tss/cnn-imagenet-retrain.ipynb | DJCordhose/ai | mit |
This is a very simple architecture and should train pretty fast,
but it overfits by quite a bit | %time history = classifier_model.fit(bottleneck_features_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback])
# more epochs might be needed for original data
# %time history = classifier_model.fit(bottleneck_features_train, y_train, epochs=2000, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback]) | notebooks/workshops/tss/cnn-imagenet-retrain.ipynb | DJCordhose/ai | mit |
Issue 1: We have two separate models now
How do we evaluate?
How do we save the model for later prediction use / deployment? | from keras import models
combined_model = models.Sequential()
combined_model.add(vgg_model)
combined_model.add(classifier_model)
combined_model.summary()
combined_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
train_loss, train_accuracy = combined_model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = combined_model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
# complete original non augmented speed limit signs
original_loss, original_accuracy = combined_model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)
original_loss, original_accuracy
# combined_model.save('vgg16-retrained.hdf5')
combined_model.save('vgg16-augmented-retrained.hdf5')
# !ls -lh vgg16-retrained.hdf5
!ls -lh vgg16-augmented-retrained.hdf5 | notebooks/workshops/tss/cnn-imagenet-retrain.ipynb | DJCordhose/ai | mit |
Issue 2: Whatever we do, we overfit; much more than 85% accuracy on test does not seem possible
for non-augmented data it might even be as low as 70%
first thing we could try: maybe the bottleneck feature being 2x2 is too small; we could compensate by scaling images up to 128x128 or even 256x256 (a brief sketch follows below)
this can indeed bring the test score up to 90%
however, this would make the model incompatible with the 64x64 input of the other models and make deployment harder, so we keep 64x64
maybe the features extracted from ImageNet are too different from what we have with speed limit signs?
or is the classifier too simple for the complex features?
Let us try some fine tuning
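Aside: a minimal sketch of the 128x128 variant mentioned above; this is not the path taken in this notebook, and it assumes keras.applications is available and that the images would be resized to match.

```python
# Hedged sketch only: a VGG16 base with larger inputs yields larger bottleneck maps
from keras.applications.vgg16 import VGG16

vgg_model_128 = VGG16(include_top=False, weights='imagenet',
                      input_shape=(128, 128, 3))
# the bottleneck feature maps are then 4x4x512 instead of 2x2x512
print(vgg_model_128.output_shape)
```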
First we freeze all but the last convolutional block | len(vgg_model.layers)
vgg_model.layers
first_conv_layer = vgg_model.layers[1]
first_conv_layer.trainable
# set the first 15 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
# so, the general features are kept and we (hopefully) do not have overfitting
non_trainable_layers = vgg_model.layers[:15]
non_trainable_layers
for layer in non_trainable_layers:
layer.trainable = False
first_conv_layer.trainable | notebooks/workshops/tss/cnn-imagenet-retrain.ipynb | DJCordhose/ai | mit |
We then tweak the complete model by very slowly re-training the classifier and the final convolutional block
slow learning prevents us from ruining the previous good results
we leave everything else in place
earlier layers hopefully already encode common feature channels
less risk of overfitting
earlier layers are more general
the model has too much capacity for this training set and is likely to learn each and every detail
training is also a little bit faster
This may still take quite a while | from keras import optimizers
# compile the model with a SGD/momentum optimizer
# and a very slow learning rate
# make updates very small and non adaptive so we do not ruin previous learnings
combined_model.compile(loss='categorical_crossentropy',
optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
metrics=['accuracy'])
!rm -r tf_log
%time combined_model.fit(X_train, y_train, epochs=150, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback])
# non augmented data is cheap to retrain, so we can try a few more epochs
# %time combined_model.fit(X_train, y_train, epochs=1000, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback]) | notebooks/workshops/tss/cnn-imagenet-retrain.ipynb | DJCordhose/ai | mit |
90% validation accuracy is quite an improvement and might increase further if we train a bit longer
Metrics for Augmented Data
Accuracy
Validation Accuracy | train_loss, train_accuracy = combined_model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = combined_model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
# complete original non augmented speed limit signs
original_loss, original_accuracy = combined_model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)
original_loss, original_accuracy
combined_model.save('vgg16-augmented-retrained-fine-tuned.hdf5')
# combined_model.save('vgg16-retrained-fine-tuned.hdf5')
# !ls -lh vgg16-retrained-fine-tuned.hdf5
!ls -lh vgg16-augmented-retrained-fine-tuned.hdf5 | notebooks/workshops/tss/cnn-imagenet-retrain.ipynb | DJCordhose/ai | mit |
Activation Functions
Q1. Apply relu, elu, and softplus to x. | _x = np.linspace(-10., 10., 1000)
x = tf.convert_to_tensor(_x)
relu = tf.nn.relu(x)
elu = tf.nn.elu(x)
softplus = tf.nn.softplus(x)
with tf.Session() as sess:
_relu, _elu, _softplus = sess.run([relu, elu, softplus])
plt.plot(_x, _relu, label='relu')
plt.plot(_x, _elu, label='elu')
plt.plot(_x, _softplus, label='softplus')
plt.legend(bbox_to_anchor=(0.5, 1.0))
plt.show() | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q2. Apply sigmoid and tanh to x. | _x = np.linspace(-10., 10., 1000)
x = tf.convert_to_tensor(_x)
sigmoid = tf.nn.sigmoid(x)
tanh = tf.nn.tanh(x)
with tf.Session() as sess:
_sigmoid, _tanh = sess.run([sigmoid, tanh])
plt.plot(_x, _sigmoid, label='sigmoid')
plt.plot(_x, _tanh, label='tanh')
plt.legend(bbox_to_anchor=(0.5, 1.0))
plt.grid()
plt.show() | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q3. Apply softmax to x. | _x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)
x = tf.convert_to_tensor(_x)
out = tf.nn.softmax(x, dim=-1)
with tf.Session() as sess:
_out = sess.run(out)
print(_out)
assert np.allclose(np.sum(_out, axis=-1), 1) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q4. Apply dropout with keep_prob=.5 to x. | _x = np.array([[1, 2, 4, 8], [2, 4, 6, 8]], dtype=np.float32)
print("_x =\n" , _x)
x = tf.convert_to_tensor(_x)
out = tf.nn.dropout(x, keep_prob=0.5)
with tf.Session() as sess:
_out = sess.run(out)
print("_out =\n", _out) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Fully Connected
Q5. Apply a fully connected layer to x with 2 outputs and then an sigmoid function. | x = tf.random_normal([8, 10])
out = tf.contrib.layers.fully_connected(inputs=x, num_outputs=2,
activation_fn=tf.nn.sigmoid,
weights_initializer=tf.contrib.layers.xavier_initializer())
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(out))
| programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Convolution
Q6. Apply 2 kernels of width-height (2, 2), stride 1, and same padding to x. | tf.reset_default_graph()
x = tf.random_uniform(shape=(2, 3, 3, 3), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(2, 2, 3, 2), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.conv2d(x, filter, strides=[1, 1, 1, 1], padding="SAME")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q7. Apply 3 kernels of width-height (2, 2), stride 1, dilation_rate 2 and valid padding to x. | tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 3), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(2, 2, 3, 2), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.atrous_conv2d(x, filter, padding="VALID", rate=2)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape)
# Do we really have to distinguish between these two functions?
# Unless you want to use stride of 2 or more,
# You can just use tf.nn.atrous_conv2d. For normal convolution, set rate 1. | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q8. Apply 4 kernels of width-height (3, 3), stride 2, and same padding to x. | tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.conv2d(x, filter, strides=[1, 2, 2, 1], padding="SAME")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q9. Apply 4 times of kernels of width-height (3, 3), stride 2, and same padding to x, depth-wise. | tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 10, 5), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.depthwise_conv2d(x, filter, strides=[1, 2, 2, 1], padding="SAME")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q10. Apply 5 kernels of height 3, stride 2, and valid padding to x. | tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 10, 5), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 5, 5), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
out = tf.nn.conv1d(x, filter, stride=2, padding="VALID")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q11. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and same padding to x. | tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
shp = x.get_shape().as_list()
output_shape = [shp[0], shp[1]*2, shp[2]*2, 5]
out = tf.nn.conv2d_transpose(x, filter, strides=[1, 2, 2, 1], output_shape=output_shape, padding="SAME")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q12. Apply conv2d transpose with 5 kernels of width-height (3, 3), stride 2, and valid padding to x. | tf.reset_default_graph()
x = tf.random_uniform(shape=(4, 5, 5, 4), dtype=tf.float32)
filter = tf.get_variable("filter", shape=(3, 3, 5, 4), dtype=tf.float32,
initializer=tf.random_uniform_initializer())
shp = x.get_shape().as_list()
output_shape = [shp[0], (shp[1]-1)*2+3, (shp[2]-1)*2+3, 5]
out = tf.nn.conv2d_transpose(x, filter, strides=[1, 2, 2, 1], output_shape=output_shape, padding="VALID")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
_out = sess.run(out)
print(_out.shape) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
Q13. Apply max pooling and average pooling of window size 2, stride 1, and valid padding to x. | _x = np.zeros((1, 3, 3, 3), dtype=np.float32)
_x[0, :, :, 0] = np.arange(1, 10, dtype=np.float32).reshape(3, 3)
_x[0, :, :, 1] = np.arange(10, 19, dtype=np.float32).reshape(3, 3)
_x[0, :, :, 2] = np.arange(19, 28, dtype=np.float32).reshape(3, 3)
print("1st channel of x =\n", _x[:, :, :, 0])
print("\n2nd channel of x =\n", _x[:, :, :, 1])
print("\n3rd channel of x =\n", _x[:, :, :, 2])
x = tf.constant(_x)
maxpool = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 1, 1, 1], padding="VALID")
avgpool = tf.nn.avg_pool(x, [1, 2, 2, 1], [1, 1, 1, 1], padding="VALID")
with tf.Session() as sess:
_maxpool, _avgpool = sess.run([maxpool, avgpool])
print("\n1st channel of max pooling =\n", _maxpool[:, :, :, 0])
print("\n2nd channel of max pooling =\n", _maxpool[:, :, :, 1])
print("\n3rd channel of max pooling =\n", _maxpool[:, :, :, 2])
print("\n1st channel of avg pooling =\n", _avgpool[:, :, :, 0])
print("\n2nd channel of avg pooling =\n", _avgpool[:, :, :, 1])
print("\n3rd channel of avg pooling =\n", _avgpool[:, :, :, 2]) | programming/Python/tensorflow/exercises/Neural_Network_Part1_Solutions.ipynb | diegocavalca/Studies | cc0-1.0 |
For the classification task, we build a random forest classifier and train it on a part of the full dataset | from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=120, random_state = 1960)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_orig[lFeatures].values,
df_orig['TGT'].values,
test_size=0.2,
random_state=1960)
df_train = pd.DataFrame(X_train , columns=lFeatures)
df_train['TGT'] = y_train
df_test = pd.DataFrame(X_test , columns=lFeatures)
df_test['TGT'] = y_test
clf.fit(X_train , y_train)
# clf.predict_proba(df[lFeatures])[:,1] | doc/sklearn_reason_codes_RandomForest.ipynb | antoinecarme/sklearn_explain | bsd-3-clause |
Model Explanation
The goal here is to quantify, for a given individual, the impact of each predictor on the final score.
For our model, we will do this by analyzing cross statistics between (binned) predictors and the (binned) final score.
For each score bin, we fit a linear model locally and use it to explain the score. This is a generalization of the linear case, based on the fact that any model can be approximated well enough locally by a linear function (inside each score bin). The more score bins we use, the more data we have, and the better the approximation is.
For a random forest , the score can be seen as the probability of the positive class. | from sklearn.linear_model import *
def create_score_stats(df, feature_bins = 4 , score_bins=30):
df_binned = df.copy()
df_binned['Score'] = clf.predict_proba(df[lFeatures].values)[:,0]
df_binned['Score_bin'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=False, duplicates='drop')
df_binned['Score_bin_labels'] = pd.qcut(df_binned['Score'] , q=score_bins, labels=None, duplicates='drop')
for col in lFeatures:
df_binned[col + '_bin'] = pd.qcut(df[col] , feature_bins, labels=False, duplicates='drop')
binned_features = [col + '_bin' for col in lFeatures]
lInterpolated_Score= pd.Series(index=df_binned.index)
bin_classifiers = {}
coefficients = {}
intercepts = {}
for b in range(score_bins):
bin_clf = Ridge(random_state = 1960)
bin_indices = (df_binned['Score_bin'] == b)
# print("PER_BIN_INDICES" , b , bin_indexes)
bin_data = df_binned[bin_indices]
bin_X = bin_data[binned_features]
bin_y = bin_data['Score']
if(bin_y.shape[0] > 0):
bin_clf.fit(bin_X , bin_y)
bin_classifiers[b] = bin_clf
bin_coefficients = dict(zip(lFeatures, [bin_clf.coef_.ravel()[i] for i in range(len(lFeatures))]))
# print("PER_BIN_COEFFICIENTS" , b , bin_coefficients)
coefficients[b] = bin_coefficients
intercepts[b] = bin_clf.intercept_
predicted = bin_clf.predict(bin_X)
lInterpolated_Score[bin_indices] = predicted
df_binned['Score_interp'] = lInterpolated_Score
return (df_binned , bin_classifiers , coefficients, intercepts)
| doc/sklearn_reason_codes_RandomForest.ipynb | antoinecarme/sklearn_explain | bsd-3-clause |
For simplicity, to describe our method, we use 5 score bins and 5 predictor bins.
We fit our local models on the training dataset; each model is fit on the values inside its score bin. |
(df_cross_stats , per_bin_classifiers , per_bin_coefficients, per_bin_intercepts) = create_score_stats(df_train , feature_bins=5 , score_bins=10)
def debrief_score_bin_classifiers(bin_classifiers):
binned_features = [col + '_bin' for col in lFeatures]
score_classifiers_df = pd.DataFrame(index=(['intercept'] + list(binned_features)))
    for (b, bin_clf) in bin_classifiers.items():
        score_classifiers_df['score_bin_' + str(b) + "_model"] = [bin_clf.intercept_] + list(bin_clf.coef_.ravel())
return score_classifiers_df
df = debrief_score_bin_classifiers(per_bin_classifiers)
df.head(10) | doc/sklearn_reason_codes_RandomForest.ipynb | antoinecarme/sklearn_explain | bsd-3-clause |
From the table above, we see that lower score values (score_bin_0) are all around zero probability and are not impacted by the predictor values, higher score values (score_bin_5) are all around 1 and are also not impacted. This is what one expects from a good classification model.
In score bin 3, the score values increase significantly with the mean area_bin values and decrease with the mean radius_bin values.
Predictor Effects
Predictor effects describe the impact of specific predictor values on the final score. For example, some values of a predictor can increase or decrease the score locally by 0.10 or more points and change the negative decision to a positive one.
The predictor effect reflects how a specific predictor increases the score (above or below the mean local contribution of this variable). | for col in lFeatures:
lcoef = df_cross_stats['Score_bin'].apply(lambda x : per_bin_coefficients.get(x).get(col))
lintercept = df_cross_stats['Score_bin'].apply(lambda x : per_bin_intercepts.get(x))
lContrib = lcoef * df_cross_stats[col + '_bin'] + lintercept/len(lFeatures)
df1 = pd.DataFrame();
df1['contrib'] = lContrib
df1['Score_bin'] = df_cross_stats['Score_bin']
lContribMeanDict = df1.groupby(['Score_bin'])['contrib'].mean().to_dict()
lContribMean = df1['Score_bin'].apply(lambda x : lContribMeanDict.get(x))
# print("CONTRIB_MEAN" , col, lContribMean)
df_cross_stats[col + '_Effect'] = lContrib - lContribMean
df_cross_stats.sample(6, random_state=1960) | doc/sklearn_reason_codes_RandomForest.ipynb | antoinecarme/sklearn_explain | bsd-3-clause |
The previous sample shows that the first individual lost 0.000000 score points due to feature $X_1$, gained 0.003994 with feature $X_2$, etc.
Reason Codes
The reason codes are a user-oriented representation of the decision making process. These are the predictors ranked by their effects. | import numpy as np
reason_codes = np.argsort(df_cross_stats[[col + '_Effect' for col in lFeatures]].values, axis=1)
df_rc = pd.DataFrame(reason_codes, columns=['reason_idx_' + str(NC-c) for c in range(NC)])
df_rc = df_rc[list(reversed(df_rc.columns))]
df_rc = pd.concat([df_cross_stats , df_rc] , axis=1)
for c in range(NC):
reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x])
df_rc['reason_' + str(c+1)] = reason
# detailed_reason = df_rc['reason_idx_' + str(c+1)].apply(lambda x : lFeatures[x] + "_bin")
# df_rc['detailed_reason_' + str(c+1)] = df_rc[['reason_' + str(c+1) , ]]
df_rc.sample(6, random_state=1960)
df_rc[['reason_' + str(NC-c) for c in range(NC)]].describe() | doc/sklearn_reason_codes_RandomForest.ipynb | antoinecarme/sklearn_explain | bsd-3-clause |
The Markov chain seems to be irreducible.
One way to obtain the stationary state is to look at the eigenvectors corresponding to the eigenvalue 1. However, the eigenvectors come out to be imaginary. This seemed to be an issue with the solver, so I relied on solving the system of equations $\pi = P\pi$ | w, v = LA.eig(P)
for i in range(0, 6):
    print('Eigen value: {}\n Eigen vector: {}\n'.format(w[i], v[:, i]))
## Solve for (I-Q)^{-1}
iq = np.linalg.inv(np.eye(5)-qq)
iq_phi = iq[0,0]
iq_alpha = iq[1,1]
iq_beta = iq[2,2]
iq_alphabeta = iq[3,3]
iq_pol = iq[4,4]
| 2015_Fall/MATH-578B/Homework1/Homework1.ipynb | saketkc/hatex | mit |
EDIT: I made a correction to solve for the corrected $\pi$ by accounting for $P^T$ and not $P$. | A = np.eye(6)-P.T
A[-1,:] = [1,1,1,1,1,1]
B = [0,0,0,0,0,1]
X=np.linalg.solve(A,B)
print(X)
| 2015_Fall/MATH-578B/Homework1/Homework1.ipynb | saketkc/hatex | mit |
The stationary state is given by $\pi = (0.1667, 0.1667, 0.1667, 0.1667, 0.1667, 0.1667)$. The mean number of visits per unit time to $\dagger$ is $\frac{1}{\pi_6} = 6$. However, strangely, this did not satisfy $\pi=P\pi$, and I was not able to figure out where I went wrong.
EDIT: I made a correction to solve for the corrected $\pi$ by accounting for $P^T$ and not $P$, so this no longer holds | #EDIT: I made a correction to solve for the corrected $\pi$ by accounting for $P^T$ and not $P$
print('\pi*P={}\n'.format(X*P))
print('But \pi={}'.format(X)) | 2015_Fall/MATH-578B/Homework1/Homework1.ipynb | saketkc/hatex | mit |
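As a quick numeric check of the $\frac{1}{\pi_6} = 6$ figure quoted above (a small sketch that reuses the stationary vector X computed in the cell before):

```python
# pi_6 is the last component of the stationary vector X solved for above
print(1.0 / X[5])  # expected to be close to 6
```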
Simulating the chain:
General strategy: Generate a random number $\longrightarrow$ Select a state $\longrightarrow$ Jump to state $\longrightarrow$ Repeat | ## phi
np.random.seed(1)
PP = {}
PP['phi']= [1-k_a-k_b, k_a ,k_b, 0, 0, 0]
PP['alpha'] = [k_a, 1-k_a-k_b, 0, k_b, 0, 0]
PP['beta'] = [k_b, 0, 1-k_a-k_b, k_a, 0, 0]
PP['ab']= [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0]
PP['pol']= [0, 0, 0, 0, 0, 1]
PP['d']= [0, 0, 0, 1, 0, 0]
##For $h(\phi)$
x0='phi'
x='phi'
def h(x):
s=0
new_state=x
for i in range(1,1000):
old_state=new_state
probs = PP[old_state]
z=np.random.choice(6, 1, p=probs)
new_state = states[z[0]]
#print('{} --> {}'.format(old_state, new_state))
s+=z[0]
return s/1000
| 2015_Fall/MATH-578B/Homework1/Homework1.ipynb | saketkc/hatex | mit |
Part (a,b,c) | print(r'$h(\phi)$: From simulation: {}; From calculation: {}'.format(h('phi'),iq_phi))
print(r'$h(\alpha)$: From simulation: {}; From calculation: {}'.format(h('alpha'),iq_alpha))
print(r'$h(\beta)$: From simulation: {}; From calculation: {}'.format(h('beta'),iq_beta))
print(r'$h(\alpha+\beta)$: From simulation: {}; From calculation: {}'.format(h('ab'),iq_alphabeta))
print(r'$h(\pol)$: From simulation: {}; From calculation: {}'.format(h('pol'),iq_pol))
old_state = [0.1,0.2,0.3,0.4,0,0]
def perturb(old_state):
new_state = old_state*P
return new_state
new_state = [0,0,0,0,0,1]
while not np.allclose(old_state, new_state):
old_state, new_state = new_state, perturb(old_state)
print(old_state)
# EDIT: I made a correction to solve for the corrected $\pi$ by accounting for $P^T$ and not $P$
print('From calculation(which is NO LONGER wrong!), stationary distribution:{}'.format(X))
print('From simulation, stationary distribution: {}'.format(old_state)) | 2015_Fall/MATH-578B/Homework1/Homework1.ipynb | saketkc/hatex | mit |
1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
Warm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$
where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding. | # GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters['W' + str(l+1)] - learning_rate* grads['dW' + str(l+1)]
parameters["b" + str(l+1)] = parameters['b' + str(l+1)] - learning_rate* grads['db' + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"])) | deeplearning.ai/C2.ImproveDeepNN/week2/assignment/Optimization+methods.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
Expected Output:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74604067]
[-0.75184921]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88020257]
[ 0.02561572]
[ 0.57539477]] </td>
</tr>
</table>
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
(Batch) Gradient Descent:
``` python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost = compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
Stochastic Gradient Descent:
python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
for j in range(0, m):
# Forward propagation
a, caches = forward_propagation(X[:,j], parameters)
# Compute cost
cost = compute_cost(a, Y[:,j])
# Backward propagation
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : SGD vs GD<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>
Note also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 2 </u>: <font color='purple'> SGD vs Mini-Batch GD<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>
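For comparison with the two snippets above, here is a sketch of the corresponding mini-batch gradient descent loop. It reuses the same hypothetical helper names and the random_mini_batches function that is built in the next section:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    # Build a fresh set of shuffled mini-batches for this pass over the data
    minibatches = random_mini_batches(X, Y, mini_batch_size=64)
    for (minibatch_X, minibatch_Y) in minibatches:
        # Forward propagation
        a, caches = forward_propagation(minibatch_X, parameters)
        # Compute cost
        cost = compute_cost(a, minibatch_Y)
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
```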
<font color='blue'>
What you should remember:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, it usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).
2 - Mini-Batch Gradient descent
Let's learn how to build mini-batches from the training set (X, Y).
There are two steps:
- Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">
Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this:
<img src="images/kiank_partition.png" style="width:550px;height:300px;">
Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
Note that the last mini-batch might end up smaller than mini_batch_size=64. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is math.floor(s) in Python). If the total number of examples is not a multiple of mini_batch_size=64, then there will be $\lfloor \frac{m}{\text{mini\_batch\_size}}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be $m - \text{mini\_batch\_size} \times \lfloor \frac{m}{\text{mini\_batch\_size}} \rfloor$. | # GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, k*mini_batch_size : (k+1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, k*mini_batch_size : (k+1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, num_complete_minibatches*mini_batch_size : m]
mini_batch_Y = shuffled_Y[:, num_complete_minibatches*mini_batch_size : m]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3])) | deeplearning.ai/C2.ImproveDeepNN/week2/assignment/Optimization+methods.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
Expected Output:
<table style="width:50%">
<tr>
<td > **shape of the 1st mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_X** </td>
<td > (12288, 20) </td>
</tr>
<tr>
<td > **shape of the 1st mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_Y** </td>
<td > (1, 20) </td>
</tr>
<tr>
<td > **mini batch sanity check** </td>
<td > [ 0.90085595 -0.7612069 0.2344157 ] </td>
</tr>
</table>
<font color='blue'>
What you should remember:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.
3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>Figure 3</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>
Exercise: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, that is:
for $l =1,...,L$:
python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
Note that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the for loop. | # GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)]).shape)
v["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)]).shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"])) | deeplearning.ai/C2.ImproveDeepNN/week2/assignment/Optimization+methods.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
Expected Output:
<table style="width:90%">
<tr>
<td > **W1** </td>
<td > [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74493465]
[-0.76027113]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.87809283]
[ 0.04055394]
[ 0.58207317]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]</td>
</tr>
</table>
Note that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.
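As a sketch only (not the graded solution), the update step that consumes this velocity typically looks like the following for each layer, assuming beta and learning_rate are given:

```python
for l in range(L):
    # Exponentially weighted average of the gradients
    v["dW" + str(l + 1)] = beta * v["dW" + str(l + 1)] + (1 - beta) * grads["dW" + str(l + 1)]
    v["db" + str(l + 1)] = beta * v["db" + str(l + 1)] + (1 - beta) * grads["db" + str(l + 1)]
    # Move the parameters in the direction of the velocity
    parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v["dW" + str(l + 1)]
    parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v["db" + str(l + 1)]
```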
How do you choose $\beta$?
The larger the momentum $\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
Tuning the optimal $\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$.
<font color='blue'>
What you should remember:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
How does Adam work?
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
The update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$
where:
- t counts the number of steps taken by Adam
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero
As usual, we will store all parameters in the parameters dictionary
Exercise: Initialize the Adam variables $v, s$ which keep track of the past information.
Instruction: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is:
for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
``` | # GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)]).shape)
v["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)]).shape)
s["dW" + str(l+1)] = np.zeros((parameters["W" + str(l+1)]).shape)
s["db" + str(l+1)] = np.zeros((parameters["b" + str(l+1)]).shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
| deeplearning.ai/C2.ImproveDeepNN/week2/assignment/Optimization+methods.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
Expected Output:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
Exercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}
\end{cases}$$
Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding. | # GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1*v["dW" + str(l+1)]+(1-beta1)*grads['dW' + str(l+1)]
v["db" + str(l+1)] = beta1*v["db" + str(l+1)]+(1-beta1)*grads['db' + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)]/(1-np.power(beta1, t))
v_corrected["db" + str(l+1)] = v["db" + str(l+1)]/(1-np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2*s["dW" + str(l+1)]+(1-beta2)*np.power(grads['dW' + str(l+1)],2)
s["db" + str(l+1)] = beta2*s["db" + str(l+1)]+(1-beta2)*np.power(grads['db' + str(l+1)],2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)]/(1-np.power(beta2, t))
s_corrected["db" + str(l+1)] = s["db" + str(l+1)]/(1-np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - \
learning_rate*v_corrected["dW" + str(l+1)]/(np.sqrt(s_corrected["dW" + str(l+1)])+epsilon)
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - \
learning_rate*v_corrected["db" + str(l+1)]/(np.sqrt(s_corrected["db" + str(l+1)])+epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"])) | deeplearning.ai/C2.ImproveDeepNN/week2/assignment/Optimization+methods.ipynb | jinzishuai/learn2deeplearn | gpl-3.0 |
see tut-section-subselect-epochs for details.
The tutorials tut-epochs-class and tut-evoked-class have many
more details about working with the ~mne.Epochs and ~mne.Evoked classes.
Amplitude and latency measures
It is common in ERP research to extract measures of amplitude or latency to
compare across different conditions. There are many measures that can be
extracted from ERPs, and many of these are detailed (including the respective
strengths and weaknesses) in chapter 9 of Luck :footcite:Luck2014 (also see
the Measurement Tool <https://bit.ly/37uydRw>_ in the ERPLAB Toolbox
:footcite:Lopez-CalderonLuck2014).
This part of the tutorial will demonstrate how to extract three common
measures:
Peak latency
Peak amplitude
Mean amplitude
Peak latency and amplitude
The most common measures of amplitude and latency are peak measures.
Peak measures are basically the maximum amplitude of the signal in a
specified time window and the time point (or latency) at which the peak
amplitude occurred.
Peak measures can be obtained using the :meth:~mne.Evoked.get_peak method.
There are two important things to point out about
the :meth:~mne.Evoked.get_peak method. First, it finds the strongest peak
looking across all channels of the selected type that are available in
the :class:~mne.Evoked object. As a consequence, if you want to restrict
the search for the peak to a group of channels or a single channel, you
should first use the :meth:~mne.Evoked.pick or
:meth:~mne.Evoked.pick_channels methods. Second, the
:meth:~mne.Evoked.get_peak method can find different types of peaks using
the mode argument. There are three options:
mode='pos': finds the peak with a positive voltage (ignores
negative voltages)
mode='neg': finds the peak with a negative voltage (ignores
positive voltages)
mode='abs': finds the peak with the largest absolute voltage
regardless of sign (positive or negative)
The following example demonstrates how to find the first positive peak in the
ERP (i.e., the P100) for the left visual condition (i.e., the
l_vis :class:~mne.Evoked object). The time window used to search for
the peak ranges from .08 to .12 s. This time window was selected because it
is when P100 typically occurs. Note that all 'eeg' channels are submitted
to the :meth:~mne.Evoked.get_peak method. | # Define a function to print out the channel (ch) containing the
# peak latency (lat; in msec) and amplitude (amp, in µV), with the
# time range (tmin and tmax) that were searched.
# This function will be used throughout the remainder of the tutorial
def print_peak_measures(ch, tmin, tmax, lat, amp):
print(f'Channel: {ch}')
print(f'Time Window: {tmin * 1e3:.3f} - {tmax * 1e3:.3f} ms')
print(f'Peak Latency: {lat * 1e3:.3f} ms')
print(f'Peak Amplitude: {amp * 1e6:.3f} µV')
# Get peak amplitude and latency from a good time window that contains the peak
good_tmin, good_tmax = .08, .12
ch, lat, amp = l_vis.get_peak(ch_type='eeg', tmin=good_tmin, tmax=good_tmax,
mode='pos', return_amplitude=True)
# Print output from the good time window that contains the peak
print('** PEAK MEASURES FROM A GOOD TIME WINDOW **')
print_peak_measures(ch, good_tmin, good_tmax, lat, amp) | 0.24/_downloads/27d6cff3f645408158cdf4f3f05a21b6/30_eeg_erp.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
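As a hedged sketch of the channel-restriction note above (the channel name used here is only a placeholder, not one taken from this tutorial), the peak search can be limited to a single channel by picking it first:

```python
# Sketch: restrict the peak search to one channel before calling get_peak.
# 'EEG 060' is a hypothetical channel name; use one present in your data.
l_vis_single = l_vis.copy().pick_channels(['EEG 060'])
ch_roi, lat_roi, amp_roi = l_vis_single.get_peak(ch_type='eeg', tmin=good_tmin,
                                                 tmax=good_tmax, mode='pos',
                                                 return_amplitude=True)
print_peak_measures(ch_roi, good_tmin, good_tmax, lat_roi, amp_roi)
```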
3. Enter BigQuery Anonymize Query Recipe Parameters
Ensure you have user access to both datasets.
Provide the source project, dataset and query.
Provide the destination project, dataset, and table.
Modify the values below for your use case, can be done multiple times, then click play. | FIELDS = {
'auth_read':'service', # Credentials used.
'from_project':'', # Original project to read from.
'from_dataset':'', # Original dataset to read from.
'from_query':'', # Query to read data.
'to_project':None, # Anonymous data will be writen to.
'to_dataset':'', # Anonymous data will be writen to.
'to_table':'', # Anonymous data will be writen to.
}
print("Parameters Set To: %s" % FIELDS)
| colabs/anonymize_query.ipynb | google/starthinker | apache-2.0 |
4. Execute BigQuery Anonymize Query
This does NOT need to be modified unless you are changing the recipe, click play. | from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'anonymize':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'service','description':'Credentials used.'}},
'bigquery':{
'from':{
'project':{'field':{'name':'from_project','kind':'string','order':1,'description':'Original project to read from.'}},
'dataset':{'field':{'name':'from_dataset','kind':'string','order':2,'description':'Original dataset to read from.'}},
'query':{'field':{'name':'from_query','kind':'string','order':3,'description':'Query to read data.'}}
},
'to':{
'project':{'field':{'name':'to_project','kind':'string','order':4,'default':None,'description':'Anonymous data will be writen to.'}},
'dataset':{'field':{'name':'to_dataset','kind':'string','order':5,'description':'Anonymous data will be writen to.'}},
'table':{'field':{'name':'to_table','kind':'string','order':6,'description':'Anonymous data will be writen to.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
| colabs/anonymize_query.ipynb | google/starthinker | apache-2.0 |
Reading input-output data: | # reading stimulus
Stim = np.array(pd.read_csv(os.path.join(data_path,'Stim.csv'),header = None))
# reading location of spikes
tsp = np.hstack(np.array(pd.read_csv(os.path.join(data_path,'tsp.csv'),header = None))) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Extracting a spike train from spike positions: | dt = 0.01
tsp_int = np.ceil((tsp - dt*0.001)/dt)
tsp_int = np.reshape(tsp_int,(tsp_int.shape[0],1))
tsp_int = tsp_int.astype(int)
y = np.array([item in tsp_int for item in np.arange(Stim.shape[0]/dt)+1]).astype(int) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Displaying a subset of the spike train: | fig, ax = plt.subplots(figsize=(16, 2))
fig = ax.matshow(np.reshape(y[:1000],(1,len(y[:1000]))),cmap = 'Greys',aspect = 15) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Creating filters: | # create a stimulus filter
kpeaks = np.array([0,round(20/3)])
pars_k = {'neye':5,'n':5,'kpeaks':kpeaks,'b':3}
K,K_orth,kt_domain = filters.createStimulusBasis(pars_k, nkt = 20)
# create a post-spike filter
hpeaks = np.array([0.1,2])
pars_h = {'n':5,'hpeaks':hpeaks,'b':.4,'absref':0.}
H,H_orth,ht_domain = filters.createPostSpikeBasis(pars_h,dt)
# Interpolate Post Spike Filter
MSP = auxfun.makeInterpMatrix(len(ht_domain),1)
MSP[0,0] = 0
H_orth = np.dot(MSP,H_orth)
MSP.shape | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Conditional Intensity (spike rate):
$$\lambda_{\beta} = \exp(K(\beta_k)Stim + H(\beta_h)y + dc)$$
($\beta_k$ and $\beta_h$ are the unknown coefficients of the filters and $dc$ is the direct current).
Since the convolution is a linear operation the intensity can be written in the following form:
$$\lambda_{\beta} = \exp(M_k \beta_k + M_h\beta_h + \textbf{1}dc),$$
where $M_k$ and $M_h$ are matrices depending on the stimulus and the response, respectively, and $\textbf{1}$ is a vector of ones.
Creating a matrix of covariates: | M_k = lk.construct_M_k(Stim,K,dt)
M_h = lk.construct_M_h(tsp,H_orth,dt,Stim) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Combining $M_k$, $M_h$ and $\textbf{1}$ into one covariate matrix: | M = np.hstack((M_k,M_h,np.ones((M_h.shape[0],1)))) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
The conditional intensity becomes:
$$ \lambda_{\beta} = \exp(M\beta) $$
($\beta$ contains all the unknown parameters).
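As a quick illustration (a sketch only: `beta` below is a placeholder zero vector, not the fitted coefficients), evaluating the intensity is just an exponentiated matrix-vector product once $M$ is available:

```python
# Sketch: evaluate the conditional intensity for an arbitrary coefficient vector.
beta = np.zeros(M.shape[1])   # placeholder coefficients; the MLE fit below estimates the real ones
lam = np.exp(np.dot(M, beta)) # conditional intensity in every time bin
```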
Create a Poisson process model with this intensity: | model = PP.PPModel(M.T,dt = dt/100) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Setting initial parameters for optimization: | coeff_k0 = np.array([-0.02304,
0.12903,
0.35945,
0.39631,
0.27189,
0.22003,
-0.17457,
0.00482,
-0.09811,
0.04823])
coeff_h0 = np.zeros((5,))
dc0 = 0
pars0 = np.hstack((coeff_k0,coeff_h0,dc0))
# pars0 = np.hstack((np.zeros((10,)),np.ones((5,)),0)) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Fitting the model by maximizing the likelihood (here using the limited-memory BFGS method with 500 iterations): | res = model.fit(y,start_coef = pars0,maxiter = 500,method = 'L-BFGS-B')
Optimization results: | print(res) | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Creating the predicted filters: | k_coeff_predicted = res.x[:10]
h_coeff_predicted = res.x[10:15]
kfilter_predicted = np.dot(K,k_coeff_predicted)
hfilter_predicted = np.dot(H_orth,h_coeff_predicted)
k_coeff = np.array([ 0.061453,0.284916,0.860335,1.256983,0.910615,0.488660,-0.887091,0.097441,0.026607,-0.090147])
h_coeff = np.array([-15.18,38.24,-67.58,-14.06,-3.36])
kfilter_true = np.dot(K,k_coeff)
hfilter_true = np.dot(H_orth,h_coeff)
plt.plot(-kt_domain[::-1],kfilter_predicted,color = "r",label = 'predicted')
plt.plot(-kt_domain[::-1],kfilter_true,color= "blue",label = 'true')
plt.plot(-kt_domain[::-1],np.dot(K,coeff_k0),color = "g",label = 'initial')
plt.legend(loc = 'upper left')
plt.plot(ht_domain,hfilter_predicted,color = "r",label = 'predicted')
plt.plot(ht_domain,hfilter_true,color = "b",label = 'true')
plt.plot(ht_domain,np.dot(H_orth,coeff_h0),color = "g",label = 'initial')
plt.legend(loc = 'lower right') | notebooks/MLE_singleNeuron.ipynb | valentina-s/GLM_PythonModules | bsd-2-clause |
Configuring the map and loading data about where the complaints occurred. Note that to successfully configure Google Maps you have to create an API key (you can generate one at https://developers.google.com/maps/documentation/javascript/get-api-key) and set it in the line 'plot.api_key = ""'. | map_options = GMapOptions(lat=39.151042, lng=-77.193023, map_type="roadmap", zoom=11)
plot = GMapPlot(x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options)
plot.title.text = "Montgomery County"
# For GMaps to function, Google requires you obtain and enable an API key:
#
# https://developers.google.com/maps/documentation/javascript/get-api-key
#
# Replace the value below with your personal API key:
plot.api_key = "AIzaSyBFHmpkUOfk2FtDZXHVBSUUHp6LVPmI-fs" | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
Load the data using the read_csv function and configure which tools will be available in the plot. | #Loading dataset from Montgomery County complaint dataset
monty_data = pd.read_csv("MontgomeryCountyCrime2013.csv")
latitude_data = monty_data["Latitude"]
longitude_data = monty_data["Longitude"]
monty_data.head()
| EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
Categorizing complaint classes | #Creating a master class to categorize crimes
classaux = monty_data["Class"]/100
classaux = classaux.astype(int)
classaux = classaux*100
#Inserting this new data in the dataset
monty_data["MasterClass"] = classaux
#print(monty_data.groupby("Class")["Class Description"].mean())
#Sort by Class of complaint to analyse master classes of Class complaints
#monty_data.sort_values(by="Class")
#monty_data.sort_values(by="Class Description")
monty_data[["Class","Class Description"]]
#print(monty_data.groupby("Class Description"))
source = ColumnDataSource(
data=dict(
lat=latitude_data[13:130],
lon=longitude_data[13:130],
)
)
print(source.data)
circle = Circle(x="lon", y="lat", size=15, fill_color="blue", fill_alpha=0.8, line_color=None)
plot.add_glyph(source, circle)
plot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool()) | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
Plotting the geographic data in Google Maps. Note that the 'show' function receives an additional parameter 'notebook_handle=True', which tells Bokeh to render the plot inline. | show(plot,notebook_handle=True) | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3>Which sort of complaints are most common, TOP 10?</h3> | #Using the agg function allows you to calculate the frequency for each group using the standard library function len.
#Sorting the result by the aggregated column frequency values in descending order, then selecting the top n records with head and resetting the index, will produce the top n most frequent records
top = monty_data.groupby(['Class','Class Description'])['Class'].agg({"frequency": len}).sort_values("frequency", ascending=False).head(40).reset_index()
top['frequency'] = (top['frequency']/number_of_registries[0])*100
top
from decimal import *
#Configure precision
getcontext().prec = 2
parcial_perc = top['frequency'].sum()
parcial_perc = round(parcial_perc,2)
print("The crimes above are responsible for up to " + str(parcial_perc) + "% of the total crimes") | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3><strong>What are the Classes of Classes (Master Classes) of complaints?</strong></h3> | #Considering the top crimes
#copy
top_classes_top = top
#Creation of a Master Class
top_classes_top['Master Class'] = 0
aux = top_classes_top['Master Class'].astype(float,copy=True)
top_classes_top['Master Class'] = aux
top_classes_top['Master Class'] = top_classes_top['Class']/100
top_classes_top['Master Class'] = top_classes_top['Master Class'].round()
top_classes_top['Master Class'] = top_classes_top['Master Class']*100
aux = top_classes_top['Master Class'].astype(int,copy=True)
top_classes_top['Master Class'] = aux
#teste.describe
#top_classes_top
#top_classes_top['Master Class'].describe()
#top_classes_top.dtypes
top_classes_top
| EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h4>Describing 'Master Classes'</h4> | #Inserting the description of the Master Classes
top_classes_top['Master Class Description'] =''
top_classes_top[top_classes_top['Master Class'] == 600]
test_top = top_classes_top
test_top.loc[(test_top['Master Class'] == 600),'Master Class Description'] = 'LARCENY'
test_top.loc[(test_top['Master Class'] == 2900),'Master Class Description'] = 'MISC'
test_top.loc[(test_top['Master Class'] == 1400),'Master Class Description'] = 'VANDALISM'
test_top.loc[(test_top['Master Class'] == 1000),'Master Class Description'] = 'FORGERY/CNTRFT'
test_top.loc[(test_top['Master Class'] == 500),'Master Class Description'] = 'BURGLARY'
test_top.loc[(test_top['Master Class'] == 800),'Master Class Description'] = 'ASSAULT & BATTERY'
test_top.loc[(test_top['Master Class'] == 1800),'Master Class Description'] = 'CONTROLLED DANGEROUS SUBSTANCE POSSESSION'
test_top.loc[(test_top['Master Class'] == 700),'Master Class Description'] = 'THEFT'
test_top.loc[(test_top['Master Class'] == 2100),'Master Class Description'] = 'JUVENILE RUNAWAY'
test_top.loc[(test_top['Master Class'] == 2800),'Master Class Description'] = 'DRIVING UNDER THE INFLUENCE'
test_top.loc[(test_top['Master Class'] == 1900),'Master Class Description'] = 'CONTROLLED DANGEROUS SUBSTANCE IMPLMNT'
test_top.loc[(test_top['Master Class'] == 2200),'Master Class Description'] = 'LIQUOR - DRINK IN PUB OVER 21'
test_top.loc[(test_top['Master Class'] == 2400),'Master Class Description'] = 'DISORDERLY CONDUCT'
test_top.loc[(test_top['Master Class'] == 2700),'Master Class Description'] = 'TRESPASSING'
test_top | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3>Could we categorize the types of crimes as violent or not?</h3>
According to Wikipedia (https://en.wikipedia.org/wiki/Violent_crime), violent criminals include, but are not limited to, aircraft hijackers, bank robbers, muggers, burglars, terrorists, carjackers, rapists, kidnappers, torturers, active shooters, murderers, gangsters, drug cartels, and others.
Just by analysing each master class we can see that only three master classes are considered violent: 500 - BURGLARY, 800 - ASSAULT & BATTERY and 700 - THEFT. | test_top['Violent crime'] = False
test_top.loc[(test_top['Master Class'] == 500),'Violent crime'] = True
test_top.loc[(test_top['Master Class'] == 800),'Violent crime'] = True
test_top.loc[(test_top['Master Class'] == 700),'Violent crime'] = True
test_top.sort_values(['Violent crime', 'frequency'], ascending=False, axis=0, kind='quicksort') | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
According to the data, although the crimes selected above account for almost 80% of the total, the violent crimes among them amount to only | value_percentage = test_top[test_top['Violent crime'] == True]['frequency'].sum()
value_percentage = round(value_percentage,2)
print(str(value_percentage) + '% of the total crimes') | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3>Which period of the day (morning, afternoon, night) do most complaints occur in?</h3> | #Considering the top crimes
day_process = monty_data
| EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
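A minimal sketch of one way to answer this; the timestamp column name `'Dispatch Date / Time'` is an assumption, so substitute whichever date/time column the Montgomery County CSV actually provides:

```python
# Sketch only: assumes a dispatch timestamp column named 'Dispatch Date / Time'.
day_process = monty_data.copy()
day_process['hour'] = pd.to_datetime(day_process['Dispatch Date / Time']).dt.hour
day_process['period'] = pd.cut(day_process['hour'],
                               bins=[0, 6, 12, 18, 24],
                               labels=['night', 'morning', 'afternoon', 'evening'],
                               right=False)
day_process['period'].value_counts()
```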
<h3>Which day of the week do most complaints occur on?</h3> | #Considering the top crimes | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3>Which month of the year do most complaints occur in?</h3> | #Considering the top crimes | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3>Are these complaints related to holidays?</h3> | #Considering the top crimes | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
<h3>What period of time (time of day / day of the week / month of the year) correlates with the type of complaint?</h3> | #Considering the top crimes | EEC2006/1_CrimeProject/Finding crime patterns in Montgomery County-Copy1.ipynb | marco-olimpio/ufrn | gpl-3.0 |
Pure-Python ADMM Implementation
The code below is a direct Python port of the reference MATLAB implementation in Reference[1]. | from numpy.linalg import inv, norm
def objective(P, q, r, x):
"""Return the value of the Standard form QP using the current value of x."""
return 0.5 * np.dot(x, np.dot(P, x)) + np.dot(q, x) + r
def qp_admm(P, q, r, lb, ub,
max_iter=1000,
rho=1.0,
alpha=1.2,
atol=1e-4,
rtol=1e-2):
n = P.shape[0]
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)
history = []
R = inv(P + rho * np.eye(n))
for k in range(1, max_iter+1):
x = np.dot(R, (z - u) - q)
# z-update with relaxation
z_old = z
x_hat = alpha * x +(1 - alpha) * z_old
z = np.minimum(ub, np.maximum(lb, x_hat + u))
# u-update
u = u + (x_hat - z)
# diagnostics, and termination checks
objval = objective(P, q, r, x)
r_norm = norm(x - z)
s_norm = norm(-rho * (z - z_old))
eps_pri = np.sqrt(n) * atol + rtol * np.maximum(norm(x), norm(-z))
eps_dual = np.sqrt(n)* atol + rtol * norm(rho*u)
history.append({
'objval' : objval,
'r_norm' : r_norm,
's_norm' : s_norm,
'eps_pri' : eps_pri,
'eps_dual': eps_dual,
})
if r_norm < eps_pri and s_norm < eps_dual:
print('Optimization terminated after {} iterations'.format(k))
break;
history = pd.DataFrame(history)
return x, history
| simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
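For reference, the box-constrained QP that this routine (and the CVXPY version below) targets is the problem encoded by `objective` and the clipping to `[lb, ub]`:

$$\begin{aligned} \underset{x}{\text{minimize}} \quad & \tfrac{1}{2}\, x^\top P x + q^\top x + r \\ \text{subject to} \quad & lb \le x \le ub \end{aligned}$$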
QP Solver using CVXPY
For comparison, we also implement QP solver using cvxpy. | import cvxpy as cvx
def qp_cvxpy(P, q, r, lb, ub,
max_iter=1000,
atol=1e-4,
rtol=1e-2):
n = P.shape[0]
# The variable we want to solve for
x = cvx.Variable(n)
constraints = [x >= cvx.Constant(lb), x <= cvx.Constant(ub)]
# Construct the QP expression using CVX Primitives
# Note that in the CVX-meta language '*' of vectors of matrices indicates dot product,
# not elementwise multiplication
expr = cvx.Constant(0.5) * cvx.quad_form(x, cvx.Constant(P)) + cvx.Constant(q) * x + cvx.Constant(r)
qp = cvx.Problem(cvx.Minimize(expr), constraints=constraints)
qp.solve(max_iters=max_iter, abstol=atol, reltol=rtol, verbose=True)
# The result is a Matrix object. Make it an NDArray and drop of 2nd dimension i.e. make it a vector.
x_opt = np.array(x.value).squeeze()
return x_opt | simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
Generate Optimal Portfolio Holdings
In this section, we define a helper function to load the one of the five asset returns datasets from OR library (Reference [2]). The data are available by requesting filenames port[1-5]. Each file contains a progressively larger set of asset returns, standard deviations of returns and correlations of returns. | import requests
from statsmodels.stats.moment_helpers import corr2cov
from functools import lru_cache
@lru_cache(maxsize=5)
def get_cov(filename):
url = r'http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/{}.txt'.format(filename)
data = requests.get(url).text
lines = [line.strip() for line in data.split('\n')]
# First line is the number of assets
n_assets = int(lines[0])
# Next n_assets lines contain the space separated mean and stddev. of returns for each asset
means_and_sds = pd.DataFrame(
data=np.nan,
index=range(0, n_assets),
columns=['ret_mean', 'ret_std'])
# Next n_assetsC2 lines contain the 1-based row and column index and the corresponding correlation
for i in range(0, n_assets):
mean, sd = map(float, lines[1+i].split())
means_and_sds.loc[i, ['ret_mean', 'ret_std']] = [mean, sd]
n_corrs = (n_assets * (n_assets + 1)) // 2
corrs = pd.DataFrame(index=range(n_assets), columns=range(n_assets), data=np.nan)
for i in range(0, n_corrs):
row, col, corr = lines[n_assets + 1 + i].split()
row, col = int(row)-1, int(col)-1
corr = float(corr)
corrs.loc[row, col] = corr
corrs.loc[col, row] = corr
cov = corr2cov(corrs, means_and_sds.ret_std)
return cov | simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
Set up the Portfolio Optimization problem as a QP | from numpy.random import RandomState
rng = RandomState(0)
P = get_cov('port1')
n = P.shape[0]
alphas = rng.uniform(-0.4, 0.4, size=n)
q = -alphas
ub = np.ones_like(q)
lb = np.zeros_like(q)
r = 0 | simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
Using ADMM | %%time
x_opt_admm, history = qp_admm(P, q, r, lb, ub)
fig, ax = plt.subplots(history.shape[1], 1, figsize=(10, 8))
ax = history.plot(subplots=True, ax=ax, rot=0) | simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
Using CVXPY | %%time
x_opt_cvxpy = qp_cvxpy(P, q, r, lb, ub) | simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
Optimal Holdings Comparison | holdings = pd.DataFrame(np.column_stack([x_opt_admm, x_opt_cvxpy]), columns=['opt_admm', 'opt_cvxpy'])
fig, ax = plt.subplots(1, 1, figsize=(12, 4))
ax = holdings.plot(kind='bar', ax=ax, rot=0)
labels = ax.set(xlabel='Assets', ylabel='Holdings') | simple_implementations/qp_admm.ipynb | dipanjank/ml | gpl-3.0 |
Let's take a look at our base (content) image and our style reference image | from IPython.display import Image, display
display(Image(base_image_path))
display(Image(style_reference_image_path))
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
Image preprocessing / deprocessing utilities |
def preprocess_image(image_path):
# Util function to open, resize and format pictures into appropriate tensors
img = keras.preprocessing.image.load_img(
image_path, target_size=(img_nrows, img_ncols)
)
img = keras.preprocessing.image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = vgg19.preprocess_input(img)
return tf.convert_to_tensor(img)
def deprocess_image(x):
# Util function to convert a tensor into a valid image
x = x.reshape((img_nrows, img_ncols, 3))
# Remove zero-center by mean pixel
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
# 'BGR'->'RGB'
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype("uint8")
return x
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
Compute the style transfer loss
First, we need to define 4 utility functions:
gram_matrix (used to compute the style loss)
The style_loss function, which keeps the generated image close to the local textures
of the style reference image
The content_loss function, which keeps the high-level representation of the
generated image close to that of the base image
The total_variation_loss function, a regularization loss which keeps the generated
image locally-coherent | # The gram matrix of an image tensor (feature-wise outer product)
def gram_matrix(x):
x = tf.transpose(x, (2, 0, 1))
features = tf.reshape(x, (tf.shape(x)[0], -1))
gram = tf.matmul(features, tf.transpose(features))
return gram
# The "style loss" is designed to maintain
# the style of the reference image in the generated image.
# It is based on the gram matrices (which capture style) of
# feature maps from the style reference image
# and from the generated image
def style_loss(style, combination):
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_nrows * img_ncols
return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))
# An auxiliary loss function
# designed to maintain the "content" of the
# base image in the generated image
def content_loss(base, combination):
return tf.reduce_sum(tf.square(combination - base))
# The 3rd loss function, total variation loss,
# designed to keep the generated image locally coherent
def total_variation_loss(x):
a = tf.square(
x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, 1:, : img_ncols - 1, :]
)
b = tf.square(
x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, : img_nrows - 1, 1:, :]
)
return tf.reduce_sum(tf.pow(a + b, 1.25))
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
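In equation form, the Gram matrix and style term implemented above are (with $F$ the feature map reshaped to channels by positions, and the constants $C = 3$ and $\text{size} = \text{img\_nrows} \times \text{img\_ncols}$ hard-coded in the snippet):

$$G_{ij} = \sum_{k} F_{ik} F_{jk}, \qquad \mathcal{L}_{\text{style}} = \frac{\sum_{i,j} \left(G^{S}_{ij} - G^{X}_{ij}\right)^{2}}{4\, C^{2}\, \text{size}^{2}}$$

where $G^{S}$ and $G^{X}$ are the Gram matrices of the style reference and the combination image.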
Next, let's create a feature extraction model that retrieves the intermediate activations
of VGG19 (as a dict, by name). | # Build a VGG19 model loaded with pre-trained ImageNet weights
model = vgg19.VGG19(weights="imagenet", include_top=False)
# Get the symbolic outputs of each "key" layer (we gave them unique names).
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# Set up a model that returns the activation values for every layer in
# VGG19 (as a dict).
feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict)
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
Finally, here's the code that computes the style transfer loss. | # List of layers to use for the style loss.
style_layer_names = [
"block1_conv1",
"block2_conv1",
"block3_conv1",
"block4_conv1",
"block5_conv1",
]
# The layer to use for the content loss.
content_layer_name = "block5_conv2"
def compute_loss(combination_image, base_image, style_reference_image):
input_tensor = tf.concat(
[base_image, style_reference_image, combination_image], axis=0
)
features = feature_extractor(input_tensor)
# Initialize the loss
loss = tf.zeros(shape=())
# Add content loss
layer_features = features[content_layer_name]
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss = loss + content_weight * content_loss(
base_image_features, combination_features
)
# Add style loss
for layer_name in style_layer_names:
layer_features = features[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(style_layer_names)) * sl
# Add total variation loss
loss += total_variation_weight * total_variation_loss(combination_image)
return loss
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
Add a tf.function decorator to loss & gradient computation
To compile it, and thus make it fast. |
@tf.function
def compute_loss_and_grads(combination_image, base_image, style_reference_image):
with tf.GradientTape() as tape:
loss = compute_loss(combination_image, base_image, style_reference_image)
grads = tape.gradient(loss, combination_image)
return loss, grads
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
The training loop
Repeatedly run vanilla gradient descent steps to minimize the loss, and save the
resulting image every 100 iterations.
We decay the learning rate by 0.96 every 100 steps. | optimizer = keras.optimizers.SGD(
keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96
)
)
base_image = preprocess_image(base_image_path)
style_reference_image = preprocess_image(style_reference_image_path)
combination_image = tf.Variable(preprocess_image(base_image_path))
iterations = 4000
for i in range(1, iterations + 1):
loss, grads = compute_loss_and_grads(
combination_image, base_image, style_reference_image
)
optimizer.apply_gradients([(grads, combination_image)])
if i % 100 == 0:
print("Iteration %d: loss=%.2f" % (i, loss))
img = deprocess_image(combination_image.numpy())
fname = result_prefix + "_at_iteration_%d.png" % i
keras.preprocessing.image.save_img(fname, img)
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
After 4000 iterations, you get the following result: | display(Image(result_prefix + "_at_iteration_4000.png"))
| examples/generative/ipynb/neural_style_transfer.ipynb | keras-team/keras-io | apache-2.0 |
1. Create a Doc object from the file owlcreek.txt<br>
HINT: Use with open('../TextFiles/owlcreek.txt') as f: | # Enter your code here:
with open('../TextFiles/owlcreek.txt') as f:
doc = nlp(f.read())
# Run this cell to verify it worked:
doc[:36] | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
3. How many sentences are contained in the file?<br>HINT: You'll want to build a list first! | sents = [sent for sent in doc.sents]
len(sents) | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
4. Print the second sentence in the document<br> HINT: Indexing starts at zero, and the title counts as the first sentence. | print(sents[1].text) | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
5. For each token in the sentence above, print its text, POS tag, dep tag and lemma<br>
CHALLENGE: Have values line up in columns in the print output. | # NORMAL SOLUTION:
for token in sents[1]:
print(token.text, token.pos_, token.dep_, token.lemma_)
# CHALLENGE SOLUTION:
for token in sents[1]:
print(f'{token.text:{15}} {token.pos_:{5}} {token.dep_:{10}} {token.lemma_:{15}}') | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
6. Write a matcher called 'Swimming' that finds both occurrences of the phrase "swimming vigorously" in the text<br>
HINT: You should include an 'IS_SPACE': True pattern between the two words! | # Import the Matcher library:
from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)
# Create a pattern and add it to matcher:
pattern = [{'LOWER': 'swimming'}, {'IS_SPACE': True, 'OP':'*'}, {'LOWER': 'vigorously'}]
matcher.add('Swimming', None, pattern)
# Create a list of matches called "found_matches" and print the list:
found_matches = matcher(doc)
print(found_matches) | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
7. Print the text surrounding each found match | print(doc[1265:1290])
print(doc[3600:3615]) | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
EXTRA CREDIT:<br>Print the sentence that contains each found match | for sent in sents:
if found_matches[0][1] < sent.end:
print(sent)
break
for sent in sents:
if found_matches[1][1] < sent.end:
print(sent)
break | nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/07-NLP-Basics-Assessment-Solution.ipynb | rishuatgithub/MLPy | apache-2.0 |
Downloading and ranking structures
Methods
<div class="alert alert-warning">
**Warning:**
Downloading all PDBs takes a while, since they are also parsed for metadata. You can skip this step and just set representative structures below if you want to minimize the number of PDBs downloaded.
</div> | # Download all mapped PDBs and gather the metadata
my_gempro.pdb_downloader_and_metadata()
my_gempro.df_pdb_metadata.head(2)
# Set representative structures
my_gempro.set_representative_structure()
my_gempro.df_representative_structures.head()
# Looking at the information saved within a gene
my_gempro.genes.get_by_id('Rv1295').protein.representative_structure
my_gempro.genes.get_by_id('Rv1295').protein.representative_structure.get_dict() | docs/notebooks/GEM-PRO - SBML Model.ipynb | nmih/ssbio | mit |
Conversion of the Data into OxCal-usable Form | df = pd.read_csv("https://raw.githubusercontent.com/dirkseidensticker/aDRAC/master/data/aDRAC.csv", encoding='utf8')
display(df.head()) | Python/aDRACtoOxCal.ipynb | dirkseidensticker/CARD | mit |
Choosing only the first five entries as subsample: | df_sub = df.head() | Python/aDRACtoOxCal.ipynb | dirkseidensticker/CARD | mit |
OxCal-compliant output: | print('''Plot()
{''')
for index, row in df_sub.iterrows():
    print('R_Date("{}/{}-{}",{},{});'.format(row['SITE'], row['FEATURE'], row['LABNR'], row['C14AGE'], row['C14STD']))
print('};') | Python/aDRACtoOxCal.ipynb | dirkseidensticker/CARD | mit |
As we can see these two documents are not very similar, at least in terms of their 3-gram shingle Jaccard similarity. That aside, the problem with these shingles is that they do not allow us to compute the similarities of large numbers of documents very easily: we have to do an all-pairs comparison. To get around that we can use locality sensitive hashing, but before LSH we'll turn the documents into a more manageable and uniform representation: a fixed-length fingerprint composed of $k$ minhashes.
Every document has a different number of shingles depending on the length of the document, so for a corpus of any size predicting the memory requirements for an all-pairs comparison is not possible, as each document will consume a variable amount of memory. For LSH we would like to have a fixed-length representation of the documents without changing the semantics of document similarity. This is where minhashing comes in. It turns out that the probability of a hash collision for a minhash is exactly the Jaccard similarity of two sets. This can be seen by considering the two sets of shingles as a matrix. For two dummy documents the shingles could be represented as the table below (the zeros and ones indicate if a shingle is present in the document or not). Notice that the Jaccard similarity of the documents is 2/5.
<table>
<th colspan=4><center>Document Shingles</center></th>
<tr> <td>row</td><td>shingle ID</td><td>Doc 1</td><td>Doc 2</td> </tr>
<tr> <td>1</td><td>1</td><td>0</td><td>1</td> </tr>
<tr> <td>2</td><td>2</td><td>1</td><td>1</td> </tr>
<tr> <td>3</td><td>3</td><td>0</td><td>1</td> </tr>
<tr> <td>4</td><td>4</td><td>1</td><td>0</td> </tr>
<tr> <td>5</td><td>5</td><td>1</td><td>1</td> </tr>
<tr> <td>6</td><td>6</td><td>0</td><td>0</td> </tr>
</table>
The minhash corresponds to a random permutation of the rows and gives back the row number where the first non zero entry is found. For the above table the minhash for documents one and two would thus be 2 and 1 respectively - meaning that the documents are not similar. The above table however is just one ordering of the shingles of each document. A different random permutation of the rows will give a different minhash, in this case 2 and 2, making the documents similar.
<table>
<th colspan=4><center>Document Shingles</center></th>
<tr> <td>row</td><td>shingle ID</td><td>Doc 1</td><td>Doc 2</td> </tr>
<tr> <td>1</td><td>6</td><td>0</td><td>0</td> </tr>
<tr> <td>2</td><td>2</td><td>1</td><td>1</td> </tr>
<tr> <td>3</td><td>3</td><td>0</td><td>1</td> </tr>
<tr> <td>4</td><td>1</td><td>0</td><td>1</td> </tr>
<tr> <td>5</td><td>4</td><td>1</td><td>0</td> </tr>
<tr> <td>6</td><td>5</td><td>1</td><td>1</td> </tr>
</table>
A random permutation of the rows can produce any of $6! = 720$ different orderings. However, we only care about which of the rows containing a 1 comes first: the two documents get the same minhash exactly when that first row has a 1 in both columns, i.e. shingle ID $\in \{2, 5\}$. Since rows that are all zeros don't count, there are 5 rows with a 1 in at least one column, and 2 rows with a 1 in both columns. A random permutation therefore produces a hash collision with probability 2/5.
The above explanation follows Chapter 3 of <cite>Mining Massive Datasets</cite> [3]. An in depth explanation for why and how minhash works is provided there along with other interesting hash functions.
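To make that argument concrete, here is a tiny simulation (not part of the lsh library, just a sanity check of the claim) that permutes the six rows of the tables above at random and counts how often the two documents receive the same minhash:

```python
import random

doc1 = {2, 4, 5}       # shingle IDs with a 1 in the Doc 1 column
doc2 = {1, 2, 3, 5}    # shingle IDs with a 1 in the Doc 2 column
rows = list(range(1, 7))

trials, collisions = 10000, 0
for _ in range(trials):
    random.shuffle(rows)                          # one random permutation of the rows
    min1 = next(r for r in rows if r in doc1)     # first row with a 1 for Doc 1
    min2 = next(r for r in rows if r in doc2)     # first row with a 1 for Doc 2
    collisions += (min1 == min2)

print(collisions / trials)   # ~0.4, i.e. the Jaccard similarity 2/5
```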
Applying minhash gives us a fixed length $k$ (you pick the length) representation of each document such that the probability of a hash collision is equal to the Jaccard similarity of any pair. Since this is a probabilistic measure you're not guaranteed to get a collision. For the Lorem Ipsum documents above and $k=100$ we get similarities that are roughly the same as the Jaccard similarity. | from lsh import minhash
for _ in range(5):
hasher = minhash.MinHasher(seeds=100, char_ngram=5)
fingerprint0 = hasher.fingerprint('Lorem Ipsum dolor sit amet'.encode('utf8'))
fingerprint1 = hasher.fingerprint('Lorem Ipsum dolor sit amet is how dummy text starts'.encode('utf8'))
print(sum(fingerprint0[i] in fingerprint1 for i in range(hasher.num_seeds)) / hasher.num_seeds) | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
Increasing the length of the fingerprint from $k=100$ to $k=1000$ reduces the variance between random initialisations of the minhasher. | for _ in range(5):
hasher = minhash.MinHasher(seeds=1000, char_ngram=5)
fingerprint0 = hasher.fingerprint('Lorem Ipsum dolor sit amet'.encode('utf8'))
fingerprint1 = hasher.fingerprint('Lorem Ipsum dolor sit amet is how dummy text starts'.encode('utf8'))
print(sum(fingerprint0[i] in fingerprint1 for i in range(hasher.num_seeds)) / hasher.num_seeds) | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
Some duplicate items are present in the corpus so let's see what happens when we apply LSH to it. First a helper function that takes a file pointer and some parameters for minhash and LSH and then finds duplicates. | import itertools
from lsh import cache, minhash # https://github.com/mattilyra/lsh
# a pure python shingling function that will be used in comparing
# LSH to true Jaccard similarities
def shingles(text, char_ngram=5):
return set(text[head:head + char_ngram] for head in range(0, len(text) - char_ngram))
def jaccard(set_a, set_b):
intersection = set_a & set_b
union = set_a | set_b
return len(intersection) / len(union)
def candidate_duplicates(document_feed, char_ngram=5, seeds=100, bands=5, hashbytes=4):
hasher = minhash.MinHasher(seeds=seeds, char_ngram=char_ngram, hashbytes=hashbytes)
if seeds % bands != 0:
raise ValueError('Seeds has to be a multiple of bands. {} % {} != 0'.format(seeds, bands))
lshcache = cache.Cache(num_bands=bands, hasher=hasher)
for line in document_feed:
line = line.decode('utf8')
docid, headline_text = line.split('\t', 1)
fingerprint = hasher.fingerprint(headline_text.encode('utf8'))
# in addition to storing the fingerpring store the line
# number and document ID to help analysis later on
lshcache.add_fingerprint(fingerprint, doc_id=docid)
candidate_pairs = set()
for b in lshcache.bins:
for bucket_id in b:
if len(b[bucket_id]) > 1:
pairs_ = set(itertools.combinations(b[bucket_id], r=2))
candidate_pairs.update(pairs_)
return candidate_pairs | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
Then run through some data adding documents to the LSH cache | hasher = minhash.MinHasher(seeds=100, char_ngram=5, hashbytes=4)
lshcache = cache.Cache(bands=10, hasher=hasher)
# read in the data file and add the first 100 documents to the LSH cache
with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh:
feed = itertools.islice(fh, 100)
for line in feed:
docid, articletext = line.decode('utf8').split('\t', 1)
lshcache.add_fingerprint(hasher.fingerprint(line), docid)
# for every bucket in the LSH cache get the candidate duplicates
candidate_pairs = set()
for b in lshcache.bins:
for bucket_id in b:
if len(b[bucket_id]) > 1: # if the bucket contains more than a single document
pairs_ = set(itertools.combinations(b[bucket_id], r=2))
candidate_pairs.update(pairs_) | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
candidate_pairs now contains pairs of document IDs that may be duplicates of each other | candidate_pairs
Now let's run LSH on a few different parameter settings and see what the results look like. To save some time I'm only using the first 1000 documents. | num_candidates = []
bands = [2, 5, 10, 20]
for num_bands in bands:
with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh:
feed = itertools.islice(fh, 1000)
candidates = candidate_duplicates(feed, char_ngram=5, seeds=100, bands=num_bands, hashbytes=4)
num_candidates.append(len(candidates))
fig, ax = plt.subplots(figsize=(8, 6))
plt.bar(bands, num_candidates, align='center');
plt.title('Number of candidate duplicate pairs found by LSH using 100 minhash fingerprint.');
plt.xlabel('Number of bands');
plt.ylabel('Number of candidate duplicates');
plt.xticks(bands, bands); | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
So the more promiscuous [4] version (20 bands per fingerprint) finds many more candidate pairs than the conservative 2 bands model. The first implication of this difference is that it leads to you having to do more comparisons to find the real duplicates. Let's see what that looks like in practice.
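The shape of that trade-off follows from the standard banding analysis: with $b$ bands of $r$ rows each (here $r = 100 / b$), a pair with Jaccard similarity $s$ becomes a candidate with probability $1 - (1 - s^r)^b$. A small sketch:

```python
def candidate_probability(s, seeds=100, bands=20):
    """Probability that a pair with Jaccard similarity s shares a bucket in at least one band."""
    rows = seeds // bands
    return 1 - (1 - s ** rows) ** bands

for b in (2, 5, 10, 20):
    print(b, round(candidate_probability(0.9, bands=b), 4))
```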
We'll slightly modify the candidate_duplicates function so that it stores the line number along with the document ID; that way we can retrieve the document contents more easily later on. | def candidate_duplicates(document_feed, char_ngram=5, seeds=100, bands=5, hashbytes=4):
hasher = minhash.MinHasher(seeds=seeds, char_ngram=char_ngram, hashbytes=hashbytes)
if seeds % bands != 0:
raise ValueError('Seeds has to be a multiple of bands. {} % {} != 0'.format(seeds, bands))
lshcache = cache.Cache(num_bands=bands, hasher=hasher)
for i_line, line in enumerate(document_feed):
line = line.decode('utf8')
docid, headline_text = line.split('\t', 1)
fingerprint = hasher.fingerprint(headline_text.encode('utf8'))
# in addition to storing the fingerpring store the line
# number and document ID to help analysis later on
lshcache.add_fingerprint(fingerprint, doc_id=(i_line, docid))
candidate_pairs = set()
for b in lshcache.bins:
for bucket_id in b:
if len(b[bucket_id]) > 1:
pairs_ = set(itertools.combinations(b[bucket_id], r=2))
candidate_pairs.update(pairs_)
return candidate_pairs
lines = []
with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh:
# read the first 1000 lines into memory so we can compare them
for line in itertools.islice(fh, 1000):
lines.append(line.decode('utf8'))
# reset file pointer and do LSH
fh.seek(0)
feed = itertools.islice(fh, 1000)
candidates = candidate_duplicates(feed, char_ngram=5, seeds=100, bands=20, hashbytes=4)
# go over all the generated candidates comparing their similarities
similarities = []
for ((line_a, docid_a), (line_b, docid_b)) in candidates:
doc_a, doc_b = lines[line_a], lines[line_b]
shingles_a = shingles(lines[line_a])
shingles_b = shingles(lines[line_b])
jaccard_sim = jaccard(shingles_a, shingles_b)
fingerprint_a = set(hasher.fingerprint(doc_a.encode('utf8')))
fingerprint_b = set(hasher.fingerprint(doc_b.encode('utf8')))
minhash_sim = len(fingerprint_a & fingerprint_b) / len(fingerprint_a | fingerprint_b)
similarities.append((docid_a, docid_b, jaccard_sim, minhash_sim))
import random
print('There are {} candidate duplicates in total'.format(len(candidates)))
random.sample(similarities, k=15) | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
So LSH with 20 bands indeed finds a lot of candidate duplicates (111 out of 1000), some of which - for instance (3256, 3186) above - are not all that similar. Let's see how many LSH missed given some similarity threshold. | sims_all = np.zeros((1000, 1000), dtype=np.float64)
for i, line in enumerate(lines):
for j in range(i+1, len(lines)):
shingles_a = shingles(lines[i])
shingles_b = shingles(lines[j])
jaccard_sim = jaccard(shingles_a, shingles_b)
# similarities are symmetric so we only care about the
# upper diagonal here and leave (j, i) to be 0
sims_all[i, j] = jaccard_sim
# turn the candidates into a dictionary so we have easy access to
# candidates pairs that were found
candidates_dict = {(line_a, line_b): (docid_a, docid_b) for ((line_a, docid_a), (line_b, docid_b)) in candidates}
found = 0
for i in range(len(lines)):
for j in range(i+1, len(lines)):
if sims_all[i, j] >= .9:
# documents i and j have an actual Jaccard similarity >= 90%
found += ((i, j) in candidates_dict or (j, i) in candidates_dict)
print('Out of {} pairs with similarity >= 90% {} were found, that\'s {:.1%}'.format((sims_all >= .9).sum(), found, found / (sims_all >= .9).sum())) | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
That seems pretty well in line with the <a href="#bands_rows">figure</a> showing how setting bands and rows affects the probability of finding similar documents. So we're doing quite well in terms of the true positives; what about the false positives? 27 of the pairs found were true positives, so the rest are false positives. Since LSH found 110 document pairs in total, $110-27 = 83$ pairs were incorrect - that's 83 pairs that were checked in vain, compared to the 499,500 pairs we would have had to go through for an all-pairs comparison.
499,500 is the number of entries above the diagonal of a $1000\times1000$ matrix. Since document similarities are symmetric we only need to compare i to j, not j to i, and we never compare a document to itself, so that's $\frac{1000 \times 1000 - 1000}{2} = \frac{1000 \times 999}{2} = 499500$ pairs.
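A quick check of that count:

```python
n = 1000
print(n * (n - 1) // 2)   # 499500 unordered pairs: the upper triangle without the diagonal
```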
References
[1] <cite>Reuters Corpora (RCV1, RCV2, TRC2)</cite> http://trec.nist.gov/data/reuters/reuters.html
[2] <cite>Amazon product data</cite> http://jmcauley.ucsd.edu/data/amazon/
[3] <cite>Mining Massive Datasets</cite> http://www.mmds.org http://infolab.stanford.edu/~ullman/mmds/ch3.pdf by Leskovec, Rajamaran and Ullman
[4] <cite>promiscuous</cite> demonstrating or implying an unselective approach; indiscriminate or casual: the city fathers were promiscuous with their honours. | # preprocess RCV1 to be contained in a single file
import glob, zipfile, re
import xml.etree.ElementTree as ET
files = glob.glob('../data/rcv1/xml/*.zip')
with open('../data/rcv1/headline.text.txt', 'wb') as out:
for f in files:
zf = zipfile.ZipFile(f)
for zi in zf.namelist():
fh = zf.open(zi, 'r')
root = ET.fromstring(fh.read().decode('latin-1'))
itemid = root.attrib['itemid']
headline = root.find('./headline').text
text = ' '.join(root.find('./text').itertext())
text = re.sub('\s+', ' ', text)
out.write(('{}\t{} {}\n'.format(itemid, headline, text)).encode('utf8')) | examples/Introduction.ipynb | mattilyra/suckerpunch | lgpl-3.0 |
Data Exploration | star_wars = pd.read_csv('star_wars.csv', encoding="ISO-8859-1")
star_wars.head()
star_wars.columns | Star Wars survey/Star Wars survey.ipynb | frankbearzou/Data-analysis | mit |
Data Cleaning
Remove invalid first column RespondentID which are NaN. | star_wars = star_wars.dropna(subset=['RespondentID']) | Star Wars survey/Star Wars survey.ipynb | frankbearzou/Data-analysis | mit |
Change the second and third columns. | star_wars['Do you consider yourself to be a fan of the Star Wars film franchise?'].isnull().value_counts()
star_wars['Have you seen any of the 6 films in the Star Wars franchise?'].value_counts() | Star Wars survey/Star Wars survey.ipynb | frankbearzou/Data-analysis | mit |