markdown | code | path | repo_name | license
---|---|---|---|---|
1 - Baseline model: Emojifier-V1
1.1 - Dataset EMOJISET
Let's start by building a simple baseline classifier.
You have a tiny dataset (X, Y) where:
- X contains 127 sentences (strings)
- Y contains an integer label between 0 and 4, corresponding to an emoji for each sentence
<img src="images/data_set.png" style="width:700px;height:300px;">
<caption><center> Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. </center></caption>
Let's load the dataset using the code below. We split the dataset between training (127 examples) and testing (56 examples). | X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split()) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Run the following cell to print sentences from X_train and corresponding labels from Y_train. Change index to see different examples. Because of the font the iPython notebook uses, the heart emoji may be colored black rather than red. | index = 59
print(X_train[index], label_to_emoji(Y_train[index])) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
1.2 - Overview of the Emojifier-V1
In this part, you are going to implement a baseline model called "Emojifier-v1".
<center>
<img src="images/image_1.png" style="width:900px;height:300px;">
<caption><center> Figure 2: Baseline model (Emojifier-V1).</center></caption>
</center>
The input of the model is a string corresponding to a sentence (e.g. "I love you"). In the code, the output will be a probability vector of shape (1,5), which you then pass through an argmax layer to extract the index of the most likely emoji output.
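As a quick illustration of that final argmax step, here is a minimal sketch; the probability values are made up for the example:

```python
import numpy as np

# Hypothetical (1, 5) probability vector produced by the softmax layer.
probs = np.array([[0.7, 0.1, 0.1, 0.05, 0.05]])
emoji_index = np.argmax(probs)  # index of the most likely emoji, here 0
print(emoji_index)
```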
To get our labels into a format suitable for training a softmax classifier, let's convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, where each row is a one-hot vector giving the label of one example. You can do so using the next code snippet. Here, Y_oh stands for "Y-one-hot" in the variable names Y_oh_train and Y_oh_test: | Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Let's see what convert_to_one_hot() did. Feel free to change index to print out different values. | index = 59
print(Y_train[index], "is converted into one hot", Y_oh_train[index]) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
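convert_to_one_hot() is provided by the assignment's helper code (not shown here), but its effect can be reproduced with a short numpy sketch; the same np.eye trick appears again later in this notebook. The helper's exact implementation may differ:

```python
import numpy as np

def convert_to_one_hot_sketch(Y, C):
    # Each label y becomes row y of the C x C identity matrix.
    return np.eye(C)[Y.reshape(-1)]

print(convert_to_one_hot_sketch(np.array([0, 3]), C=5))
# [[1. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0.]]
```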
All the data is now ready to be fed into the Emojify-V1 model. Let's implement the model!
1.3 - Implementing Emojifier-V1
As shown in Figure (2), the first step is to convert an input sentence into its word vector representation; these word vectors are then averaged together. As in the previous exercise, we will use pretrained 50-dimensional GloVe embeddings. Run the following cell to load the word_to_vec_map, which contains all the vector representations. | word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt') | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
You've loaded:
- word_to_index: dictionary mapping from words to their indices in the vocabulary (400,001 words, with the valid indices ranging from 0 to 400,000)
- index_to_word: dictionary mapping from indices to their corresponding words in the vocabulary
- word_to_vec_map: dictionary mapping words to their GloVe vector representation.
Run the following cell to check if it works. | word = "cucumber"
index = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(index) + "th word in the vocabulary is", index_to_word[index]) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Exercise: Implement sentence_to_avg(). You will need to carry out two steps:
1. Convert every sentence to lower-case, then split the sentence into a list of words. X.lower() and X.split() might be useful.
2. For each word in the sentence, access its GloVe representation. Then, average all these values. | # GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (50,)
"""
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = sentence.lower().split()
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((50,))
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
avg += word_to_vec_map[w]
avg = avg/len(words)
### END CODE HERE ###
return avg
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = ", avg) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Expected Output:
<table>
<tr>
<td>
**avg= **
</td>
<td>
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
</td>
</tr>
</table>
Model
You now have all the pieces to finish implementing the model() function. After using sentence_to_avg() you need to pass the average through forward propagation, compute the cost, and then backpropagate to update the softmax's parameters.
Exercise: Implement the model() function described in Figure (2). Assuming here that $Yoh$ ("Y one hot") is the one-hot encoding of the output labels, the equations you need to implement in the forward pass and to compute the cross-entropy cost are:
$$ z^{(i)} = W \cdot avg^{(i)} + b$$
$$ a^{(i)} = softmax(z^{(i)})$$
$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Yoh^{(i)}_k * log(a^{(i)}_k)$$
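The gradients used in the parameter update below follow directly from these equations; as a brief sketch of the backward pass (matching the dz, dW, db lines in the provided code):
$$ dz^{(i)} = a^{(i)} - Yoh^{(i)} $$
$$ dW = dz^{(i)} \, (avg^{(i)})^T, \qquad db = dz^{(i)} $$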
It is possible to come up with a more efficient vectorized implementation. But since we are using a for-loop to convert the sentences one at a time into the $avg^{(i)}$ representation anyway, let's not bother this time.
We provided you a function softmax(). | # GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate = 0.01, num_iterations = 400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 4, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
np.random.seed(1)
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = 5 # number of classes
n_h = 50 # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C = n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer
z = np.dot(W, avg)+b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = -np.sum(np.multiply(Y_oh[i], np.log(a)))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y,1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map)
return pred, W, b
print(X_train.shape)
print(Y_train.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(X_train[0])
print(type(X_train))
Y = np.asarray([5,0,0,5, 4, 4, 4, 6, 6, 4, 1, 1, 5, 6, 6, 3, 6, 3, 4, 4])
print(Y.shape)
X = np.asarray(['I am going to the bar tonight', 'I love you', 'miss you my dear',
'Lets go party and drinks','Congrats on the new job','Congratulations',
'I am so happy for you', 'Why are you feeling bad', 'What is wrong with you',
'You totally deserve this prize', 'Let us go play football',
'Are you down for football this afternoon', 'Work hard play harder',
'It is suprising how people can be dumb sometimes',
'I am very disappointed','It is the best day in my life',
'I think I will end up alone','My life is so boring','Good job',
'Great so awesome'])
print(X.shape)
print(np.eye(5)[Y_train.reshape(-1)].shape)
print(type(X_train))
| Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Run the next cell to train your model and learn the softmax parameters (W,b). | pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Expected Output (on a subset of iterations):
<table>
<tr>
<td>
**Epoch: 0**
</td>
<td>
cost = 1.95204988128
</td>
<td>
Accuracy: 0.348484848485
</td>
</tr>
<tr>
<td>
**Epoch: 100**
</td>
<td>
cost = 0.0797181872601
</td>
<td>
Accuracy: 0.931818181818
</td>
</tr>
<tr>
<td>
**Epoch: 200**
</td>
<td>
cost = 0.0445636924368
</td>
<td>
Accuracy: 0.954545454545
</td>
</tr>
<tr>
<td>
**Epoch: 300**
</td>
<td>
cost = 0.0343226737879
</td>
<td>
Accuracy: 0.969696969697
</td>
</tr>
</table>
Great! Your model has pretty high accuracy on the training set. Let's now see how it does on the test set.
1.4 - Examining test set performance | print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Expected Output:
<table>
<tr>
<td>
**Train set accuracy**
</td>
<td>
97.7
</td>
</tr>
<tr>
<td>
**Test set accuracy**
</td>
<td>
85.7
</td>
</tr>
</table>
Random guessing would have had 20% accuracy given that there are 5 classes. This is pretty good performance after training on only 127 examples.
In the training set, the algorithm saw the sentence "I love you" with the label ❤️. You can check, however, that the word "adore" does not appear in the training set. Nonetheless, let's see what happens if you write "I adore you." | X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Amazing! Because adore has a similar embedding to love, the algorithm has generalized correctly even to a word it has never seen before. Words such as heart, dear, beloved or adore have embedding vectors similar to love, and so might work too---feel free to modify the inputs above and try out a variety of input sentences. How well does it work?
Note though that it doesn't get "not feeling happy" correct. This algorithm ignores word ordering, so is not good at understanding phrases like "not happy."
Printing the confusion matrix can also help understand which classes are more difficult for your model. A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class). | print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
<font color='blue'>
What you should remember from this part:
- Even with only 127 training examples, you can get a reasonably good model for Emojifying. This is due to the generalization power word vectors give you.
- Emojify-V1 will perform poorly on sentences such as "This movie is not good and not enjoyable" because it doesn't understand combinations of words--it just averages all the words' embedding vectors together, without paying attention to the ordering of words. You will build a better algorithm in the next part.
2 - Emojifier-V2: Using LSTMs in Keras:
Let's build an LSTM model that takes as input word sequences. This model will be able to take word ordering into account. Emojifier-V2 will continue to use pre-trained word embeddings to represent words, but will feed them into an LSTM, whose job it is to predict the most appropriate emoji.
Run the following cell to load the Keras packages. | import numpy as np
np.random.seed(0)
from keras.models import Model
from keras.layers import Dense, Input, Dropout, LSTM, Activation
from keras.layers.embeddings import Embedding
from keras.preprocessing import sequence
from keras.initializers import glorot_uniform
np.random.seed(1) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
2.1 - Overview of the model
Here is the Emojifier-v2 you will implement:
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. </center></caption>
2.2 Keras and mini-batching
In this exercise, we want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the same length. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time.
The common solution to this is to use padding. Specifically, set a maximum sequence length, and pad all sequences to the same length. For example, if the maximum sequence length is 20, we could pad every sentence with "0"s so that each input sentence is of length 20. Thus, a sentence "i love you" would be represented as $(e_{i}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. In this example, any sentences longer than 20 words would have to be truncated. One simple way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set.
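As a minimal illustration of zero-padding, here is a sketch using Keras' pad_sequences utility (the notebook imports keras.preprocessing.sequence later on); the word indices are invented for the example:

```python
from keras.preprocessing.sequence import pad_sequences

# Two sentences already converted to (hypothetical) word indices.
sentences_as_indices = [[12, 57, 308], [12, 57, 308, 4]]

# Pad on the right with zeros up to a fixed maximum length of 5.
padded = pad_sequences(sentences_as_indices, maxlen=5, padding='post', value=0)
print(padded)
# [[ 12  57 308   0   0]
#  [ 12  57 308   4   0]]
```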
2.3 - The Embedding layer
In Keras, the embedding matrix is represented as a "layer" that maps positive integers (indices corresponding to words) into dense vectors of fixed size (the embedding vectors). It can be trained or initialized with a pretrained embedding. In this part, you will learn how to create an Embedding() layer in Keras and initialize it with the GloVe 50-dimensional vectors loaded earlier in the notebook. Because our training set is quite small, we will not update the word embeddings but will instead leave their values fixed. But in the code below, we'll show you how Keras allows you to either train this layer or leave it fixed.
The Embedding() layer takes an integer matrix of size (batch size, max input length) as input. This corresponds to sentences converted into lists of indices (integers), as shown in the figure below.
<img src="images/embedding1.png" style="width:700px;height:250px;">
<caption><center> Figure 4: Embedding layer. This example shows the propagation of two examples through the embedding layer. Both have been zero-padded to a length of max_len=5. The final dimension of the representation is (2,max_len,50) because the word embeddings we are using are 50 dimensional. </center></caption>
The largest integer (i.e. word index) in the input should be no larger than the vocabulary size. The layer outputs an array of shape (batch size, max input length, dimension of word vectors).
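A quick shape check of a plain (randomly initialized) Embedding() layer, just to illustrate the input/output shapes described above; the index values are the illustrative ones from Figure 4, and this is only a sketch, not part of the assignment:

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, Embedding

indices_in = Input(shape=(5,), dtype='int32')               # (batch size, max input length)
emb_out = Embedding(input_dim=400001, output_dim=50)(indices_in)
shape_check = Model(inputs=indices_in, outputs=emb_out)

batch = np.array([[155345, 225122, 0, 0, 0]])               # one zero-padded sentence
print(shape_check.predict(batch).shape)                      # (1, 5, 50)
```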
The first step is to convert all your training sentences into lists of indices, and then zero-pad all these lists so that their length is the length of the longest sentence.
Exercise: Implement the function below to convert X (array of sentences as strings) into an array of indices corresponding to words in the sentences. The output shape should be such that it can be given to Embedding() (described in Figure 4). | # GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary mapping each word to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m,max_len))
for i in range(m): # loop over training examples
# Convert the ith training sentence to lower case and split it into words. You should get a list of words.
sentence_words = [w.lower() for w in X[i].split()]
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j = j+1
### END CODE HERE ###
return X_indices | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Run the following cell to check what sentences_to_indices() does, and check your results. | X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1,word_to_index, max_len = 5)
print("X1 =", X1)
print("X1_indices =", X1_indices) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Expected Output:
<table>
<tr>
<td>
**X1 =**
</td>
<td>
['funny lol' 'lets play football' 'food is ready for you']
</td>
</tr>
<tr>
<td>
**X1_indices =**
</td>
<td>
[[ 155345. 225122. 0. 0. 0.] <br>
[ 220930. 286375. 151266. 0. 0.] <br>
[ 151204. 192973. 302254. 151349. 394475.]]
</td>
</tr>
</table>
Let's build the Embedding() layer in Keras, using pre-trained word vectors. After this layer is built, you will pass the output of sentences_to_indices() to it as an input, and the Embedding() layer will return the word embeddings for a sentence.
Exercise: Implement pretrained_embedding_layer(). You will need to carry out the following steps:
1. Initialize the embedding matrix as a numpy array of zeroes with the correct shape.
2. Fill in the embedding matrix with all the word embeddings extracted from word_to_vec_map.
3. Define Keras embedding layer. Use Embedding(). Be sure to make this layer non-trainable, by setting trainable = False when calling Embedding(). If you were to set trainable = True, then it will allow the optimization algorithm to modify the values of the word embeddings.
4. Set the embedding weights to be equal to the embedding matrix | # GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Initialize the embedding matrix as a numpy array of zeros of shape (vocab_len, dimensions of word vectors = emb_dim)
emb_matrix = np.zeros((vocab_len, emb_dim))
# Set each row "index" of the embedding matrix to be the word vector representation of the "index"th word of the vocabulary
for word, index in word_to_index.items():
emb_matrix[index, :] = word_to_vec_map[word]
# Define Keras embedding layer with the correct input/output sizes and make it non-trainable. Use Embedding(...). Make sure to set trainable=False.
embedding_layer = Embedding(vocab_len, emb_dim, trainable=False)
### END CODE HERE ###
# Build the embedding layer, it is required before setting the weights of the embedding layer. Do not modify the "None".
embedding_layer.build((None,))
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][3] =", embedding_layer.get_weights()[0][1][3]) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Expected Output:
<table>
<tr>
<td>
**weights[0][1][3] =**
</td>
<td>
-0.3403
</td>
</tr>
</table>
2.3 Building the Emojifier-V2
Let's now build the Emojifier-V2 model. You will do so using the embedding layer you have built, and feed its output to an LSTM network.
<img src="images/emojifier-v2.png" style="width:700px;height:400px;"> <br>
<caption><center> Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. </center></caption>
Exercise: Implement Emojify_V2(), which builds a Keras graph of the architecture shown in Figure 3. The model takes as input an array of sentences of shape (m, max_len, ) defined by input_shape. It should output a softmax probability vector of shape (m, C = 5). You may need Input(shape = ..., dtype = '...'), LSTM(), Dropout(), Dense(), and Activation(). | # GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
### START CODE HERE ###
# Define sentence_indices as the input of the graph, it should be of shape input_shape and dtype 'int32' (as it contains indices).
sentence_indices = Input(input_shape, dtype='int32')
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer, you get back the embeddings
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a batch of sequences.
X = LSTM(128, return_sequences=True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through another LSTM layer with 128-dimensional hidden state
# Be careful, the returned output should be a single hidden state, not a batch of sequences.
X = LSTM(128, return_sequences=False)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Run the following cell to create your model and check its summary. Because all sentences in the dataset are less than 10 words, we chose max_len = 10. You should see your architecture; it uses 20,223,927 parameters, of which 20,000,050 (the word embeddings) are non-trainable and the remaining 223,877 are trainable. Because our vocabulary has 400,001 words (with valid indices from 0 to 400,000), there are 400,001 * 50 = 20,000,050 non-trainable parameters. | model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary() | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
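As a short sanity check (arithmetic only, not part of the assignment), the 223,877 trainable parameters can be recovered from the standard LSTM and Dense parameter formulas:

```python
n_emb, n_h, n_y = 50, 128, 5

lstm1 = 4 * ((n_emb + n_h) * n_h + n_h)   # 4 gates, 50-dim input + 128-dim recurrent state ->  91,648
lstm2 = 4 * ((n_h + n_h) * n_h + n_h)     # second LSTM sees 128-dim inputs                 -> 131,584
dense = n_h * n_y + n_y                   # final softmax layer                             ->     645

print(lstm1 + lstm2 + dense)              # 223877
```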
As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, adam optimizer and ['accuracy'] metrics: | model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
It's time to train your model. Your Emojifier-V2 model takes as input an array of shape (m, max_len) and outputs probability vectors of shape (m, number of classes). We thus have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors). | X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Fit the Keras model on X_train_indices and Y_train_oh. We will use epochs = 50 and batch_size = 32. | model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Your model should perform close to 100% accuracy on the training set. The exact accuracy you get may be a little different. Run the following cell to evaluate your model on the test set. | X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples. | # This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip()) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Now you can try it on your own example. Write your own sentence below. | # Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['not feeling happy'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices)))) | Sequence_Models/Emojify+-+v2.ipynb | radu941208/DeepLearning | mit |
Input file
Pay attention to the new data structure for the input!
Change the input file path/name in the cell below to the file you want to process.
Data Format
ID | X | Y | group 1 | group 2 | group n | cc.readAttributesFile('/Users/sandrofsousa/Downloads/valid/Segreg sample.csv') | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
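For illustration only, an input file in this format might look like the snippet below. The IDs, coordinates, group counts, delimiter and header are all invented here; check the provided sample file for the exact format expected by readAttributesFile:

```
ID,X,Y,group_1,group_2,group_3
1,333912.5,7394712.1,120,45,30
2,334210.9,7395001.7,98,60,12
```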
Measures
Compute Population Intensity
For a non-spatial result, please comment out the function call at "cc.locality = ...".
To comment out a line of code, use # at the beginning of the line.
Distance matrix is calculated at this step. Change the parameters for the population
intensity according to your needs. Parameters are:
bandwidth - is set to be 5000m by default, you can change it here
weightmethod - 1 for gaussian, 2 for bi-square and empty for moving window | start_time = time.time()
cc.locality = cc.cal_localityMatrix(bandwidth=700, weightmethod=1)
print("--- %s seconds for processing ---" % (time.time() - start_time)) | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
For validation only
Remove the comment (#) if you want to see the values and validate | # np.set_printoptions(threshold=np.inf)
# print('Location (coordinates from data):\n', cc.location)
# print()
# print('Population intensity for all groups:\n', cc.locality)
'''To select locality for a specific line (validation), use the index in[x,:]'''
# where x is the number of the desired line
# cc.locality[5,:] | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute local Dissimilarity | diss_local = cc.cal_localDissimilarity()
diss_local = np.asmatrix(diss_local).transpose() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute global Dissimilarity | diss_global = cc.cal_globalDissimilarity() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute local Exposure/Isolation
expo is a matrix of n_group * n_group; therefore, exposure(m, n) = rs[m, n]
the columns are exposure m1 to n1, to n2, ..., n5, then m2 to n1, ..., n5
- m,m = isolation index of group m
- m,n = expouse index of group m to n
Result of all combinations of local group exposure/isolation
To select a specific line of m to n, use the index [x]
Each value is a result of the combinations m,n
e.g.: g1xg1, g1xg2, g2xg1, g2xg2 = isolation, exposure, exposure, isolation | expo_local = cc.cal_localExposure() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
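A hedged sketch of how one row of expo_local can be read, based on the column ordering described above (m1 to n1, m1 to n2, ..., then m2 to n1, ...); this assumes expo_local has n_group * n_group columns per location:

```python
import numpy as np

n_group = cc.n_group
row = np.asarray(expo_local)[5].reshape(n_group, n_group)  # locality 5, for example

isolation_of_group_0 = row[0, 0]   # m = n = 0
exposure_0_to_1 = row[0, 1]        # exposure of group 0 to group 1
```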
Compute global Exposure/Isolation | expo_global = cc.cal_globalExposure() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute local Entropy | entro_local = cc.cal_localEntropy() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute global Entropy | entro_global = cc.cal_globalEntropy() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute local Index H | idxh_local = cc.cal_localIndexH() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Compute global Index H | idxh_global = cc.cal_globalIndexH() | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Results
Prepare data for saving on a local file | # Concatenate local values from measures
if len(cc.locality) == 0:
results = np.concatenate((expo_local, diss_local, entro_local, idxh_local), axis=1)
else:
results = np.concatenate((cc.locality, expo_local, diss_local, entro_local, idxh_local), axis=1)
# Concatenate the results with original data
output = np.concatenate((cc.tract_id, cc.attributeMatrix, results),axis = 1)
names = ['id','x','y']
for i in range(cc.n_group):
names.append('group_'+str(i))
if len(cc.locality) == 0:
for i in range(cc.n_group):
for j in range(cc.n_group):
if i == j:
names.append('iso_' + str(i) + str(j))
else:
names.append('exp_' + str(i) + str(j))
names.append('dissimil')
names.append('entropy')
names.append('indexh')
else:
for i in range(cc.n_group):
names.append('intens_'+str(i))
for i in range(cc.n_group):
for j in range(cc.n_group):
if i == j:
names.append('iso_' + str(i) + str(j))
else:
names.append('exp_' + str(i) + str(j))
names.append('dissimil')
names.append('entropy')
names.append('indexh') | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Save Local and global results to a file
The parameter fname corresponds to the folder/filename; change it as you want.
To save to a different folder, use the "/" to pass the directory.
The local results will be saved using the defined name with the suffix "_local" added to the file's name.
The global results are automatically saved using the same name with the suffix "_global" added.
It's recommended to save to a different folder from the code, e.g. a folder named result.
The fname value should be changed for any new execution, or the local file will be overwritten! | fname = "/Users/sandrofsousa/Downloads/valid/result"
output = pd.DataFrame(output, columns=names)
output.to_csv("%s_local.csv" % fname, sep=",", index=False)
with open("%s_global.txt" % fname, "w") as f:
f.write('Global dissimilarity: ' + str(diss_global))
f.write('\nGlobal entropy: ' + str(entro_global))
f.write('\nGlobal Index H: ' + str(idxh_global))
f.write('\nGlobal isolation/exposure: \n')
f.write(str(expo_global))
# code to save data as a continuous string - Marcus request for R use
# names2 = ['dissimil', 'entropy', 'indexh']
# for i in range(cc.n_group):
# for j in range(cc.n_group):
# if i == j:
# names2.append('iso_' + str(i) + str(j))
# else:
# names2.append('exp_' + str(i) + str(j))
# values = [diss_global, entro_global, idxh_global]
# for i in expo_global: values.append(i)
# file2 = "/Users/sandrofsousa/Downloads/"
# with open("%s_global.csv" % file2, "w") as f:
# f.write(', '.join(names2) + '\n')
# f.write(', '.join(str(i) for i in values)) | Pysegreg/Pysegreg_notebook_distance.ipynb | sandrofsousa/Resolution | mit |
Let's open our test project by its name. If you completed the first examples this should all work out of the box.
Open all connections to the MongoDB and Session so we can get started. | project = Project('tutorial') | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal. | print project.files
print project.generators
print project.models | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Now restore our old ways to generate tasks by loading the previously used generators. | engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb'] | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
A simple task
A task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.
The execution structure
Let's first explain briefly how a task is executed and what its components are. This was originally built to be compatible with radical.pilot, and it still is. So, if you are familiar with it, all of the following information should sound very familiar.
A task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like
worker.0x5dcccd05097611e7829b000000000072L/
the long number is a hex representation of the UUID of the task. If you are curious, type
print hex(my_task.__uuid__)
Then we change directory to this folder, write a running.sh bash script and execute it. This script is created from the task definition and also depends on your resource settings (which basically only contain the path to the workers directory, etc.).
The script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure
1. Pre-Exec: things to happen before the main command (optional)
2. Main: the main commands are executed
3. Post-Exec: things to happen after the main command (optional)
Okay, lots of theory; now some real code for running a task that generates a trajectory | task = engine.run(project.new_trajectory(pdb_file, 100))
task.script | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
We are linking a lot of files to the worker directory and change the name of the .pdb in the process. Then we call the actual python script that runs openmm. And finally we move the output.dcd and the restart file back to the trajectory folder.
There is a way to list lots of things about tasks and we will use it a lot to see our modifications. | print task.description | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Modify a task
As long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now
Add a bash line
First, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command. | task.append('echo "This new line is pointless"')
print task.description | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
As expected this line was added to the end of the script.
Add staging actions
Setting up staging is more difficult. The reason is that you normally have no idea where files are located, and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files, but files themselves can also generate these commands for you.
Let's move one trajectory (directory) around a little more as an example | traj = project.trajectories.one
transaction = traj.copy()
print transaction | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
This looks like it does in the script. The default for a copy is to place the file or folder in the worker directory under the same name, but you can give it another name/location if you pass that as an argument. Note that since trajectories are directories, you need to give a directory name (which ends in a /). | transaction = traj.copy('new_traj/')
print transaction | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
If you want to move it not to the worker directory you have to specify the location and you can do so with the prefixes (shared://, sandbox://, staging:// as explained in the previous examples) | transaction = traj.copy('staging:///cached_trajs/')
print transaction | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Besides .copy you can also .move or .link files. | transaction = pdb_file.copy('staging:///delete.pdb')
print transaction
transaction = pdb_file.move('staging:///delete.pdb')
print transaction
transaction = pdb_file.link('staging:///delete.pdb')
print transaction | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Local files
Let's mention these because they require special treatment. We cannot copy files to the HPC directly; we need to store them in the DB first. | new_pdb = File('file://../files/ntl9/ntl9.pdb').load() | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Make sure you use file:// to indicate that you are using a local file. The above example uses a relative path which will be replaced by an absolute one, otherwise we ran into trouble once we open the project at a different directory. | print new_pdb.location | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Note that now there are 3 / in the filename, two from the :// and one from the root directory of your machine
The load() at the end really loads the file and when you save this File now it will contain the content of the file. You can access this content as seen in the previous example. | print new_pdb.get_file()[:300] | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
For local files you normally use .transfer, but copy, move or link work as well. Still, there is no difference since the file only exists in the DB now and copying from the DB to a place on the HPC results in a simple file creation.
Now, we want to add a command to the staging and see what happens. | transaction = new_pdb.transfer()
print transaction
task.append(transaction)
print task.description | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
We now have one more transfer command. But something else has changed: there is one more file listed as required. So, the task can only run if that file exists, and since we loaded it into the DB, it exists (for us). For example, the newly created trajectory 25.dcd does not exist yet; if that were a requirement, the task would fail. But let's check that our file exists. | new_pdb.exists | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Okay, we have now the PDB file staged and so any real bash commands could work with a file ntl9.pdb. Alright, so let's output its stats. | task.append('stat ntl9.pdb') | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Note that usually you place these stage commands at the top of your script.
Now we can run this task as before and see if it works. (Make sure you still have a worker running.) | project.queue(task) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
And check that the task is running | task.state | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
If we did not screw up the task, it should have succeeded and we can look at the STDOUT. | print task.stdout | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Well, great, we have the pointless output and the stats of the newly staged file ntl9.pdb
What does a real script look like?
Just for fun let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook. | from adaptivemd import WorkerScheduler
sc = WorkerScheduler(project._current_configuration) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
If you really wanted to use the worker you would need to initialize it, and it would create directories and stage files for the generators, etc. For that you need to call sc.enter(project), but since we only want it to parse our tasks, we only set the project without invoking initialization. You should normally not do that. | sc.project = project | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now. | print '\n'.join(sc.task_to_script(task)) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Now you see that all file paths have been properly interpreted to work. See that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RPs way of handling files. (TODO: We might change this to just write to the target file. Need to check if that is still consistent)
A note on file locations
One problem with bash scripts is that when you create the tasks you have no concept of where the files are actually located. To get around this, the created bash script is scanned for paths that contain prefixes like the ones we are used to, and these are interpreted in the context of the worker / scheduler. The worker is the only instance that knows everything necessary, so this is the place to fix that problem.
Let's see that in a little example, where we create an empty file in the staging area. | task = Task()
task.append('touch staging:///my_file.txt')
print '\n'.join(sc.task_to_script(task)) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!
A Task from scratch
If you want to start a new task you can begin with | task = Task() | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
as we did before.
Just start adding staging and bash commands and you are done. When you create a task you can assign it a generator; the system will then assume that this task was generated by that generator, so don't do it for your custom tasks unless you generated them in a generator. Setting this allows you to tell a worker to only run tasks of certain types.
The Python RPC Task
The tasks so far are very powerful, but they lack the possibility to call a python function. Since we are using python here, it would be great to really pretend to call a python function from here and not take the detour of writing a python bash executable with arguments, etc... An example of this is the PyEmma generator, which uses this capability.
Let's do an example of this as well. Assume we have a python function in a file (for now, you need to have your code in a file so that we can copy it to the HPC if necessary). Let's create the .py file now. | %%file my_rpc_function.py
def my_func(f):
import os
print f
return os.path.getsize(f) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Now create a PythonTask instead | task = PythonTask(modeller) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
and the call function has changed. Note that you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask, so we have a .pre and .post phase available. | from my_rpc_function import my_func | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
We call the function my_func with one argument | task.call(my_func, f=project.trajectories.one)
print task.description | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Well, interesting. What this actually does is write the input arguments to the function into a temporary .json file on the worker (in RP, on the local machine, which then transfers it to the remote), rename it to input.json and read it in the _run_.py. This is still a little clumsy, but it needs to be this way to be compatible with RP, which only works with files! Look at the actual script.
You see that we really copy the .py file that contains the source code to the worker directory. All that is done automatically. A little caution on this: you can either write a function in a single file or use any installed package, but in the latter case the same package needs to be installed on the remote machine as well!
Let's run it and see what happens. | project.queue(task) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
And wait until the task is done | project.wait_until(task.is_done) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
The default settings will automatically save the content from the resulting output.json in the DB an you can access the data that was returned from the task at .output. In our example the result was just the size of a the file in bytes | task.output | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
And you can use this information in an adaptive script to make decisions.
success callback
The last thing we did not talk about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker we (so far) only support function calls with the following restrictions.
you can call a function of the related generator class. For this you need to create the task using PythonTask(generator)
the function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and choose the function for each task.
The Generator needs to be part of adaptivemd
So in the case of modeller.execute we create a PythonTask that references the following functions | task = modeller.execute(project.trajectories)
task.then_func_name | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
So we will call the default then_func of modeller or the class modeller is of. | help(modeller.then_func) | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
These callbacks are called with the current project, the resulting data (which in the modeller case is a Model object) and an array of initial inputs.
This is the actual code of the callback
py
@staticmethod
def then_func(project, task, model, inputs):
# add the input arguments for later reference
model.data['input']['trajectories'] = inputs['kwargs']['files']
model.data['input']['pdb'] = inputs['kwargs']['topfile']
project.models.add(model)
All it does is to add some of the input parameters to the model for later reference and then store the model in the project. You are free to define all sorts of actions here, even queue new tasks.
Next, we will talk about the factories for Task objects, called generators. There we will actually write a new class that does some stuff with the results. | project.close() | examples/tutorial/4_example_advanced_tasks.ipynb | markovmodel/adaptivemd | lgpl-2.1 |
Get all the info for a movie so it can be put into a dictionary and used in Elasticsearch by indexing it (all-in-one method)
This takes quite a while to run (5 to 15 minutes) and loads 250 movies into Elasticsearch
The id parameter (, id=i) was removed from es.index
Take the summary of each movie in the list and store the info in Elasticsearch | for i in range(10,250):
peli = listaPelis[i]
peli2 = ia.get_movie(peli.movieID)
string = peli2.summary()
separado = string.split('\n')
solucion = {}
for i in range(2,len(separado)):
sep2 = separado[i].split(':')
#Way to avoid a failure when converting the split result to a dictionary
#The failing case is shown in the 2 cells below
sep2[1:len(sep2)] = [''.join(sep2[1:len(sep2)])]
solucion.update(dict([sep2]))
es.index(index='prueba-index', doc_type='text', body=solucion)
separado
sep2[1] | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Tests | import pandas as pd
lista=[]
for i in range(0400000,0400010,1):
peli = ia.get_movie(i)
lista.append(peli.summary())
datos = pd.DataFrame(lista)
print datos.values
import pandas as pd
lista=[]
datos = pd.DataFrame([])
for i in range(0005000,0005003):
lista.append(ia.get_movie(i))
lista.append(ia.get_movie_plot(i))
datos = datos.append(lista)
print datos.values | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Elasticsearch (example header) | from datetime import datetime
from elasticsearch import Elasticsearch
es = Elasticsearch()
'''
doc = {
'prueba': 'Holi',
'text': 'A man throws away an old top hat and a tramp uses it to sole his boots.',
}
res = es.index(index="movies-index", doc_type='text', id=1, body=doc)
print(res['created'])
'''
res = es.get(index="movies-index", doc_type='text', id=6)
print(res['_source'])
es.indices.refresh(index="movies-index")
res = es.search(index="movies-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(text)s" % hit["_source"]) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Actual Elasticsearch initialization (run this) | # make sure ES is up and running
import requests
res = requests.get('http://localhost:9200')
print(res.content)
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'localhost', 'port': 9200}]) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Store the top 250 in Elasticsearch (old) | #List with the top 250 movies
top = ia.get_top250_movies()
#Iterate over the list and extract the data to index it in Elasticsearch; the id is the position in the list
for i in range(0,250):
es.index(index='films-index', doc_type='text', id=i, body=top[i].data)
| ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Search the stored data (old) | res = es.search(index="films-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
#Modify this so it works
for hit in res['hits']['hits']:
print("%(kind)s %(title)s %(year)s %(rating)s" % hit["_source"]) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Get the hits and info for a few of them | res = es.search(index="prueba-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s %(Genres)s %(Director)s %(Cast)s %(Writer)s %(Country)s %(Language)s %(Rating)s %(Plot)s" % hit["_source"])
res = es.search(index="prueba-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s" % hit["_source"])
res = es.search(index="prueba-index", body={"query": {"match_all": {}}})
res
res = es.search(index="prueba-index", body={
"query":
{"match" : {'Director': 'Christopher Nolan'}
},
{
"highlight" : {
"fields" : {
"Language" : {}
}
}
}
})
res | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Query without fuzziness
It doesn't work if you remove a letter; the query below does, since it is fuzzy | res = es.search(index="prueba-index", body={"query": {"match" : {'Director': 'Christophe Nola'}}})
print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s" % hit["_source"]) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Query with fuzziness added | bodyQuery = {
"query": {
"multi_match" : {
"query" : "Int",
"fields": ["Plot", "Title"],
"fuzziness": "2"
}
}
}
res = es.search(index="prueba-index", body=bodyQuery)
#print res
#print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s" % hit["_source"])
bodyQuery = {
"query": {
"regexp":{
"Title": "wonder.*"
}
}
}
res = es.search(index="prueba-index", body=bodyQuery)
#print res
#print("Got %d Hits:" % res['hits']['total'])
for hit in res['hits']['hits']:
print("%(Title)s" % hit["_source"]) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
Query 2 with highlighting of different fields and how to display it | bodyQuery2 = {
"query": {
"match" : {
"Title" : {
"query" : "wond",
"operator" : "and",
"zero_terms_query": "all"
}
}
},
"highlight" : {
"fields" : {
"Title" : {},
"Plot" : {"fragment_size" : 150, "number_of_fragments" : 3}
},
#Allows highlighting on fields that were not queried,
#such as Plot in this example
"require_field_match" : False
}
}
res = es.search(index="prueba-index", body=bodyQuery2)
print("Got %d Hits:" % res['hits']['total'])
# Uso el [0] porque solo hay 1 hit, si hubiese mas, pues habria mas campos
# de la lista, habria que usar el for de arriba para sacar el highlight de
# cada uno de la lista
#print res['hits']['hits'][0]['highlight']
for hit in res['hits']['hits']:
print(hit)
bodyQuery2 = {
"query": {
"bool": {
"should": [
{ "match": {
"Title": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Plot": {
"query": "wonder" + ".*",
"fuzziness": 2,
"prefix_length" : 1,
"operator": "and"
}
}
},
{ "match": {
"Genres": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Director": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Writer": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Cast": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Country": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Language": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
{ "match": {
"Rating": {
"query": "wonder" + ".*",
"fuzziness": "AUTO",
"prefix_length" : 1,
"operator": "and"
}
}},
]
}
},
"highlight": {
"fields": {
"Title": {},
"Plot": {},
"Director": {}
},
        # Allows highlighting on fields that were not queried,
        # such as Plot in this example
"require_field_match": False
}
}
'''
"query": {
"match": {
"Title": {
"query": buscado,
"fuzziness": "AUTO",
"boost" : 2.0,
"prefix_length" : 1,
"max_expansions": 100,
#"minimum_should_match" : 10,
"operator": "and"
}
}
},
"highlight": {
"fields": {
"Title": {},
"Plot": {"fragment_size": 300, "number_of_fragments": 3}
},
    # Allows highlighting on fields that were not queried,
    # such as Plot in this example
"require_field_match": False
}
'''
res = es.search(index="prueba-index", body= bodyQuery2)
print("Got %d Hits:" % res['hits']['total'])
# [0] is used because there is only one hit; with more hits the list would
# have more entries, and you would loop over them to pull out
# the highlight of each one
# print(res['hits']['hits'][0]['highlight'])
resultado = []
for hit in res['hits']['hits']:
resultado.append(hit)
print(resultado[10]['_source']['Title']) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
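Since each hit carries its own `highlight` section, the snippets can be pulled out with a short loop; this is just a sketch over the response obtained above (the `highlight` key is only present for hits that produced at least one snippet): | for hit in res['hits']['hits']:
    snippets = hit.get('highlight', {})
    print(hit['_source']['Title'],
          snippets.get('Title', []),
          snippets.get('Plot', []))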
Delete data | es.delete(index='prueba-index', doc_type='text', id=1) | ejercicio 5/Practica 5.ipynb | cristhro/Machine-Learning | gpl-3.0 |
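Deleting by id only removes that single document; to drop the whole test index, the indices API can be used instead (a sketch, where `ignore=[400, 404]` simply suppresses the error if the index is already gone): | es.indices.delete(index='prueba-index', ignore=[400, 404])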
Validating Models
One of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.
Consider the digits example we've been looking at previously. How might we check how well our model fits the data? | from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Let's fit a K-neighbors classifier | from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Now we'll use this classifier to predict labels for the data | y_pred = knn.predict(X) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Finally, we can check how well our prediction did: | print("{0} / {1} correct".format(np.sum(y == y_pred), len(y))) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
It seems we have a perfect classifier!
Question: what's wrong with this?
Validation Sets
Above we made the mistake of testing our model on the same data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.
A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility: | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Now we train on the training data, and validate on the test data: | knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test))) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine: | from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
This can also be computed directly from the model.score method: | knn.score(X_test, y_test) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors: | for n_neighbors in [1, 5, 10, 20, 30]:
knn = KNeighborsClassifier(n_neighbors)
knn.fit(X_train, y_train)
print(n_neighbors, knn.score(X_test, y_test)) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice: | X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2)) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help: | from sklearn.model_selection import cross_val_score
cv = cross_val_score(KNeighborsClassifier(1), X, y, cv=2)
cv.mean() | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation: | cross_val_score(KNeighborsClassifier(1), X, y, cv=10) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
This gives us an even better idea of how well our model is doing.
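For completeness, roughly the same scores can be reproduced by driving a splitter by hand with `KFold`; this sketch just makes the mechanics explicit, and the `cross_val_score` call above remains the idiomatic way: | from sklearn.model_selection import KFold

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    # fit on the training chunk, score on the held-out chunk
    model = KNeighborsClassifier(1).fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print(np.mean(scores))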
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We'll create a simple nonlinear function that we'd like to fit | def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Now let's create a realization of this dataset: | def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
    np.random.seed(random_seed)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y); | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit: | X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y))); | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this: | from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs)) | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
Now we'll use this to fit a quadratic curve to the data. | model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y))); | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial? | model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14); | present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/05-Validation.ipynb | csaladenes/csaladenes.github.io | mit |
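To compare these fits quantitatively rather than by eye, one option (a sketch that is not part of the original cells, reusing the same X and y) is to look at the cross-validated error as a function of the polynomial degree: | from sklearn.model_selection import cross_val_score

for degree in [1, 2, 3, 5, 10, 20, 30]:
    # higher (less negative) scores are better; flip the sign to report MSE
    scores = cross_val_score(PolynomialRegression(degree), X, y,
                             cv=7, scoring='neg_mean_squared_error')
    print(degree, -scores.mean())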