Dataset columns: markdown (string, lengths 0 to 37k), code (string, lengths 1 to 33.3k), path (string, lengths 8 to 215), repo_name (string, lengths 6 to 77), license (categorical, 15 values).
Unique values Sometimes it is helpful to know what unique values are in a column. When there are many rows (millions), it is impractical to manually scan through the column looking for unique values. However, we can use the pandas unique() function to do just that. We will see that this is particularly helpful during data cleaning for identifying rows with problems in the data.
sampledata['CatCol'].unique()
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
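Building on the unique() call above, listing unique values is also a quick way to spot problem rows. Below is a minimal, hedged sketch of that idea; the small DataFrame and the specific "bad" values ('bluee', 'N/A') are hypothetical and invented only for illustration.

```python
import pandas as pd

# Hypothetical categorical column containing a typo and a stray placeholder value.
sampledata = pd.DataFrame({'CatCol': ['red', 'blue', 'red', 'bluee', 'N/A', 'blue']})

# unique() exposes the unexpected values ('bluee', 'N/A') at a glance.
print(sampledata['CatCol'].unique())

# value_counts() shows how often each value occurs, which helps decide
# whether to fix or drop the offending rows.
print(sampledata['CatCol'].value_counts())

# Select the rows that need cleaning.
badrows = sampledata[sampledata['CatCol'].isin(['bluee', 'N/A'])]
print(badrows)
```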
Text regex features Another type of text feature extraction uses a regex, or regular expression, pattern to recognize and pull out pieces of text. The date/time conversion uses one form of this, but we can be more general in identifying patterns. There are some very useful tools for testing your patterns; I like the tester at https://regex101.com/ and use it whenever I build a pattern-recognition string.
# This simple text pattern gathers all the letters up to (but not including)
# the last 'e' in the text entry. There are lots of other pattern-recognition
# tools to extract features from text.
# Note that it returns NaN if there are no 'e's in the text string. We could
# use that to find all the strings without an 'e' in them.
sampledata['TextCol'].str.extract("(.*)e", expand=True)
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
Converting to categorical We already saw how to convert text columns to categorical columns. We can also convert other data types to categorical columns. For example, we could bin a float column into regularly sized bins, then create a categorical column from those bins (a brief sketch of this follows below). Word/Text cleaning Finally, it is often useful to clean up text entries before trying to turn them into features. For example, we may want to remove all punctuation, capital letters, or other special characters. We may also want to treat all forms of a word as the same word. For example, we may want both "dog" and "dogs" to count as the same feature, or "wonder" and "wonderful" as the same feature. There are a couple of text processing tools in Python that simplify this work considerably. I created a small dataset to work with. We'll use one of the rows to test our text cleaning process.
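As a minimal sketch of the binning idea just mentioned (the DataFrame and the column name FloatCol are hypothetical, not part of the sample dataset), pandas' cut function turns a float column into an ordered categorical column:

```python
import numpy as np
import pandas as pd

# Hypothetical float column used only to illustrate binning.
df = pd.DataFrame({'FloatCol': np.random.uniform(0, 100, size=10)})

# Bin the float values into four equally sized bins; the result is a categorical column.
df['BinnedCol'] = pd.cut(df['FloatCol'], bins=4)

print(df['BinnedCol'].dtype)           # category
print(df['BinnedCol'].cat.categories)  # the bin intervals
```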
textDF = pd.read_csv('Class03_text.tsv', sep='\t')
testcase = textDF['review'][3]
testcase
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
The first thing we notice is that there are hypertext bits in the text (the <br /> items). We want to clean all of those out. The BeautifulSoup function does this for us.
from bs4 import BeautifulSoup

cleantext = BeautifulSoup(testcase, "html5lib").text
cleantext
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
We now want to get rid of everything that isn't an alphabetical letter. That will clean up all punctuation and get rid of all numbers. We'll use a regex substitution function to do this. It looks for everything that is not an alphabetical character and replaces it with a blank space.
import re

onlyletters = re.sub("[^a-zA-Z]", " ", cleantext)
onlyletters
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
We'll get rid of upper-case letters to only look at the words themselves.
lowercase = onlyletters.lower()
lowercase
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
The next two steps we'll do at once, because we need to split the text into individual words to do them. The split() function breaks the string into an array of words. We will then eliminate any words that are stopwords in English. These are words like "and", "or", and "the" that don't communicate any information but are necessary for language. The other thing we'll do is cut the words down to their root stems. This will get rid of plurals or other modifications of words.
import nltk
from nltk.corpus import stopwords  # Import the stop word list

words = lowercase.split()
meaningfulwords = [w for w in words if w not in stopwords.words("english")]

from nltk.stem import SnowballStemmer
snowball_stemmer = SnowballStemmer("english")
stemmedwords = [snowball_stemmer.stem(w) for w in meaningfulwords]

print(" ".join(meaningfulwords))
print("\n")
print(" ".join(stemmedwords))

# Now we make a function that we can apply to every entry in the dataframe
def cleantext(textinput):
    # First pass: remove any html tags
    from bs4 import BeautifulSoup
    notags = BeautifulSoup(textinput, "html5lib").text
    # Second pass: remove non-letters and make everything lower case
    import re
    onlyletters = re.sub("[^a-zA-Z]", " ", notags)
    lowercase = onlyletters.lower()
    # Third pass: remove all stop words (non-essential words)
    from nltk.corpus import stopwords
    words = lowercase.split()
    meaningfulwords = [w for w in words if w not in stopwords.words("english")]
    # Fourth pass: get the word stems so that plurals, etc. are reduced
    from nltk.stem import SnowballStemmer
    snowball_stemmer = SnowballStemmer("english")
    stemmedwords = [snowball_stemmer.stem(w) for w in meaningfulwords]
    # Put the words back together again with a single space between them
    return " ".join(stemmedwords)

textDF['cleaned'] = textDF['review'].apply(cleantext)
textDF
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
Data Cleaning Example In-class Activity The tutorial on cleaning messy data is located here: http://nbviewer.jupyter.org/github/jvns/pandas-cookbook/blob/v0.1/cookbook/Chapter%207%20-%20Cleaning%20up%20messy%20data.ipynb Follow the tutorial, looking at the data and how to do a preliminary clean to eliminate entries that aren't correct or don't help. The data file can be loaded from the SageMath folder. I've reduced the number of column features in the data set to make it a bit easier to work with.
requests = pd.read_csv("Class03_311_data.csv")
Class03/Class03.ipynb
madsenmj/ml-introduction-course
apache-2.0
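Before moving on, here is a hedged sketch of the kind of preliminary cleaning step the linked tutorial walks through. The 'Incident Zip' column name and the placeholder values are assumptions carried over from the pandas-cookbook tutorial and are not verified against the local, reduced file:

```python
import pandas as pd

# Re-read the file, treating the tutorial's placeholder strings as missing values
# and keeping the ZIP codes as strings so leading zeros are preserved.
na_values = ['NO CLUE', 'N/A', '0']
requests = pd.read_csv("Class03_311_data.csv", na_values=na_values,
                       dtype={'Incident Zip': str})

# Inspect the unique values to spot entries that aren't correct or don't help.
print(requests['Incident Zip'].unique())

# Example fix: truncate ZIP+4 codes like '11211-0001' down to five digits.
requests['Incident Zip'] = requests['Incident Zip'].str.slice(0, 5)
```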
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
tf.__version__
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from mnist import MNIST

data = MNIST(data_dir="data/MNIST/")
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The MNIST data-set has now been loaded and consists of 70,000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
print("Size of:") print("- Training-set:\t\t{}".format(data.num_train)) print("- Validation-set:\t{}".format(data.num_val)) print("- Test-set:\t\t{}".format(data.num_test))
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Copy some of the data-dimensions for convenience.
# The number of pixels in each dimension of an image.
img_size = data.img_size

# The images are stored in one-dimensional arrays of this length.
img_size_flat = data.img_size_flat

# Tuple with height and width of images used to reshape arrays.
img_shape = data.img_shape

# Number of classes, one class for each of 10 digits.
num_classes = data.num_classes

# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = data.num_channels
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Helper-functions for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None): assert len(images) == len(cls_true) == 9 # Create figure with 3x3 sub-plots. fig, axes = plt.subplots(3, 3) fig.subplots_adjust(hspace=0.3, wspace=0.3) for i, ax in enumerate(axes.flat): # Plot image. ax.imshow(images[i].reshape(img_shape), cmap='binary') # Show true and predicted classes. if cls_pred is None: xlabel = "True: {0}".format(cls_true[i]) else: xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i]) # Show the classes as the label on the x-axis. ax.set_xlabel(xlabel) # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Function used to plot 10 images in a 2x5 grid.
def plot_images10(images, smooth=True): # Interpolation type. if smooth: interpolation = 'spline16' else: interpolation = 'nearest' # Create figure with sub-plots. fig, axes = plt.subplots(2, 5) # Adjust vertical spacing. fig.subplots_adjust(hspace=0.1, wspace=0.1) # For each entry in the grid. for i, ax in enumerate(axes.flat): # Get the i'th image and only use the desired pixels. img = images[i, :, :] # Plot the image. ax.imshow(img, interpolation=interpolation, cmap='binary') # Remove ticks. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Function used to plot a single image.
def plot_image(image):
    plt.imshow(image, interpolation='nearest', cmap='binary')
    plt.xticks([])
    plt.yticks([])
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Plot a few images to see if data is correct
# Get the first images from the test-set.
images = data.x_test[0:9]

# Get the true classes for those images.
cls_true = data.y_test_cls[0:9]

# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
TensorFlow Graph The neural network is constructed as a computational graph in TensorFlow using the tf.layers API, which is described in detail in Tutorial #03-B. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The convolutional layers expect x to be encoded as a 4-rank tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
y_true_cls = tf.argmax(y_true, axis=1)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Neural Network We now implement the Convolutional Neural Network using the Layers API. We use the net-variable to refer to the last layer while building the neural network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the net-variable to the reshaped input image.
net = x_image
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial #02.
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same', filters=16, kernel_size=5, activation=tf.nn.relu)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
After the convolution we do a max-pooling which is also described in Tutorial #02.
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Then we make a second convolutional layer, also with max-pooling.
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same', filters=36, kernel_size=5, activation=tf.nn.relu) net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The output then needs to be flattened so it can be used in fully-connected (aka. dense) layers.
# Flatten the 4-rank output of the convolutional layers to 2-rank
# so it can be fed into the fully-connected layers.
net = tf.layers.flatten(net)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We can now add fully-connected (or dense) layers to the neural network.
net = tf.layers.dense(inputs=net, name='layer_fc1', units=128, activation=tf.nn.relu)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has num_classes=10 output neurons.
net = tf.layers.dense(inputs=net, name='layer_fc_out', units=num_classes, activation=None)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name which we will also use further below.
logits = net
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
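For reference, the softmax function can be written as follows; this formula is standard background rather than something quoted from the tutorial text. For logits $z_1, \dots, z_K$ (here $K = 10$ classes):

$$\text{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}$$

Each output lies between zero and one, and the outputs sum to one, which is exactly the "squashing" described above.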
y_pred = tf.nn.softmax(logits=logits)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number.
y_pred_cls = tf.argmax(y_pred, axis=1)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the neural network. The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive, and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model. TensorFlow has a function for calculating the cross-entropy, which uses the values of the logits-layer because it also calculates the softmax internally, so as to improve numerical stability.
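As standard background (not a formula quoted from the tutorial), the cross-entropy between the one-hot true label vector $y$ and the predicted class probabilities $\hat{y}$ for a single image is

$$H(y, \hat{y}) = -\sum_{i=1}^{K} y_i \log \hat{y}_i,$$

which is zero when the model puts all of its probability mass on the true class and grows as the prediction diverges from the label. The TensorFlow function below computes this from the logits directly for numerical stability.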
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=logits)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
loss = tf.reduce_mean(cross_entropy)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Optimization Method Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4. Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Classification Accuracy We need to calculate the classification accuracy so we can report progress to the user. First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Optimize the Neural Network Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Initialize variables The variables for the TensorFlow graph must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.
train_batch_size = 64
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
# Counter for total number of iterations performed so far. total_iterations = 0 def optimize(num_iterations): # Ensure we update the global variable rather than a local copy. global total_iterations for i in range(total_iterations, total_iterations + num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. x_batch, y_true_batch, _ = data.random_batch(batch_size=train_batch_size) # Put the batch into a dict with the proper names # for placeholder variables in the TensorFlow graph. feed_dict_train = {x: x_batch, y_true: y_true_batch} # Run the optimizer using this batch of training data. # TensorFlow assigns the variables in feed_dict_train # to the placeholder variables and then runs the optimizer. session.run(optimizer, feed_dict=feed_dict_train) # Print status every 100 iterations. if i % 100 == 0: # Calculate the accuracy on the training-set. acc = session.run(accuracy, feed_dict=feed_dict_train) # Message for printing. msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}" # Print it. print(msg.format(i + 1, acc)) # Update the total number of iterations performed. total_iterations += num_iterations
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors(cls_pred, correct): # This function is called from print_test_accuracy() below. # cls_pred is an array of the predicted class-number for # all images in the test-set. # correct is a boolean array whether the predicted class # is equal to the true class for each image in the test-set. # Negate the boolean array. incorrect = (correct == False) # Get the images from the test-set that have been # incorrectly classified. images = data.x_test[incorrect] # Get the predicted classes for those images. cls_pred = cls_pred[incorrect] # Get the true classes for those images. cls_true = data.y_test_cls[incorrect] # Plot the first 9 images. plot_images(images=images[0:9], cls_true=cls_true[0:9], cls_pred=cls_pred[0:9])
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Helper-function to plot confusion matrix
def plot_confusion_matrix(cls_pred): # This is called from print_test_accuracy() below. # cls_pred is an array of the predicted class-number for # all images in the test-set. # Get the true classifications for the test-set. cls_true = data.y_test_cls # Get the confusion matrix using sklearn. cm = confusion_matrix(y_true=cls_true, y_pred=cls_pred) # Print the confusion matrix as text. print(cm) # Plot the confusion matrix as an image. plt.matshow(cm) # Make various adjustments to the plot. plt.colorbar() tick_marks = np.arange(num_classes) plt.xticks(tick_marks, range(num_classes)) plt.yticks(tick_marks, range(num_classes)) plt.xlabel('Predicted') plt.ylabel('True') # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used: the plotting functions above are called directly from this function, so the classifications don't have to be recalculated by each one. Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try lowering the batch-size.
# Split the test-set into smaller batches of this size. test_batch_size = 256 def print_test_accuracy(show_example_errors=False, show_confusion_matrix=False): # Number of images in the test-set. num_test = data.num_test # Allocate an array for the predicted classes which # will be calculated in batches and filled into this array. cls_pred = np.zeros(shape=num_test, dtype=np.int) # Now calculate the predicted classes for the batches. # We will just iterate through all the batches. # There might be a more clever and Pythonic way of doing this. # The starting index for the next batch is denoted i. i = 0 while i < num_test: # The ending index for the next batch is denoted j. j = min(i + test_batch_size, num_test) # Get the images from the test-set between index i and j. images = data.x_test[i:j, :] # Get the associated labels. labels = data.y_test[i:j, :] # Create a feed-dict with these images and labels. feed_dict = {x: images, y_true: labels} # Calculate the predicted class using TensorFlow. cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict) # Set the start-index for the next batch to the # end-index of the current batch. i = j # Convenience variable for the true class-numbers of the test-set. cls_true = data.y_test_cls # Create a boolean array whether each image is correctly classified. correct = (cls_true == cls_pred) # Calculate the number of correctly classified images. # When summing a boolean array, False means 0 and True means 1. correct_sum = correct.sum() # Classification accuracy is the number of correctly classified # images divided by the total number of images in the test-set. acc = float(correct_sum) / num_test # Print the accuracy. msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})" print(msg.format(acc, correct_sum, num_test)) # Plot some examples of mis-classifications, if desired. if show_example_errors: print("Example errors:") plot_example_errors(cls_pred=cls_pred, correct=correct) # Plot the confusion matrix, if desired. if show_confusion_matrix: print("Confusion Matrix:") plot_confusion_matrix(cls_pred=cls_pred)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Performance before any optimization The accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.
print_test_accuracy()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Performance after 10,000 optimization iterations After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
%%time optimize(num_iterations=10000) print_test_accuracy(show_example_errors=True, show_confusion_matrix=True)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Optimizing the Input Images Now that the neural network has been optimized so it can recognize hand-written digits with about 99% accuracy, we will then find the input images that maximize certain features inside the neural network. This will show us what images the neural network likes to see the most. We will do this by creating another form of optimization for the neural network, and we need several helper functions for doing this. Helper-function for getting the names of convolutional layers Function for getting the names of all the convolutional layers in the neural network. We could have made this list manually, but for larger neural networks it is easier to do this with a function.
def get_conv_layer_names():
    graph = tf.get_default_graph()

    # Create a list of names for the operations in the graph
    # whose operator-type is 'Conv2D'.
    names = [op.name for op in graph.get_operations() if op.type == 'Conv2D']

    return names

conv_names = get_conv_layer_names()
conv_names
len(conv_names)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Helper-function for finding the input image This function finds the input image that maximizes a given feature in the network. It essentially just performs optimization with gradient ascent. The image is initialized with small random values and is then iteratively updated using the gradient for the given feature with regard to the image.
def optimize_image(conv_id=None, feature=0, num_iterations=30, show_progress=True): """ Find an image that maximizes the feature given by the conv_id and feature number. Parameters: conv_id: Integer identifying the convolutional layer to maximize. It is an index into conv_names. If None then use the last fully-connected layer before the softmax output. feature: Index into the layer for the feature to maximize. num_iteration: Number of optimization iterations to perform. show_progress: Boolean whether to show the progress. """ # Create the loss-function that must be maximized. if conv_id is None: # If we want to maximize a feature on the last layer, # then we use the fully-connected layer prior to the # softmax-classifier. The feature no. is the class-number # and must be an integer between 1 and 1000. # The loss-function is just the value of that feature. loss = tf.reduce_mean(logits[:, feature]) else: # If instead we want to maximize a feature of a # convolutional layer inside the neural network. # Get the name of the convolutional operator. conv_name = conv_names[conv_id] # Get the default TensorFlow graph. graph = tf.get_default_graph() # Get a reference to the tensor that is output by the # operator. Note that ":0" is added to the name for this. tensor = graph.get_tensor_by_name(conv_name + ":0") # The loss-function is the average of all the # tensor-values for the given feature. This # ensures that we generate the whole input image. # You can try and modify this so it only uses # a part of the tensor. loss = tf.reduce_mean(tensor[:,:,:,feature]) # Get the gradient for the loss-function with regard to # the input image. This creates a mathematical # function for calculating the gradient. gradient = tf.gradients(loss, x_image) # Generate a random image of the same size as the raw input. # Each pixel is a small random value between 0.45 and 0.55, # which is the middle of the valid range between 0 and 1. image = 0.1 * np.random.uniform(size=img_shape) + 0.45 # Perform a number of optimization iterations to find # the image that maximizes the loss-function. for i in range(num_iterations): # Reshape the array so it is a 4-rank tensor. img_reshaped = image[np.newaxis,:,:,np.newaxis] # Create a feed-dict for inputting the image to the graph. feed_dict = {x_image: img_reshaped} # Calculate the predicted class-scores, # as well as the gradient and the loss-value. pred, grad, loss_value = session.run([y_pred, gradient, loss], feed_dict=feed_dict) # Squeeze the dimensionality for the gradient-array. grad = np.array(grad).squeeze() # The gradient now tells us how much we need to change the # input image in order to maximize the given feature. # Calculate the step-size for updating the image. # This step-size was found to give fast convergence. # The addition of 1e-8 is to protect from div-by-zero. step_size = 1.0 / (grad.std() + 1e-8) # Update the image by adding the scaled gradient # This is called gradient ascent. image += step_size * grad # Ensure all pixel-values in the image are between 0 and 1. image = np.clip(image, 0.0, 1.0) if show_progress: print("Iteration:", i) # Convert the predicted class-scores to a one-dim array. pred = np.squeeze(pred) # The predicted class for the Inception model. pred_cls = np.argmax(pred) # The score (probability) for the predicted class. cls_score = pred[pred_cls] # Print the predicted score etc. msg = "Predicted class: {0}, score: {1:>7.2%}" print(msg.format(pred_cls, cls_score)) # Print statistics for the gradient. 
msg = "Gradient min: {0:>9.6f}, max: {1:>9.6f}, stepsize: {2:>9.2f}" print(msg.format(grad.min(), grad.max(), step_size)) # Print the loss-value. print("Loss:", loss_value) # Newline. print() return image.squeeze()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
This next function finds the images that maximize the first 10 features of a layer, by calling the above function 10 times.
def optimize_images(conv_id=None, num_iterations=30): """ Find 10 images that maximize the 10 first features in the layer given by the conv_id. Parameters: conv_id: Integer identifying the convolutional layer to maximize. It is an index into conv_names. If None then use the last layer before the softmax output. num_iterations: Number of optimization iterations to perform. """ # Which layer are we using? if conv_id is None: print("Final fully-connected layer before softmax.") else: print("Layer:", conv_names[conv_id]) # Initialize the array of images. images = [] # For each feature do the following. for feature in range(0,10): print("Optimizing image for feature no.", feature) # Find the image that maximizes the given feature # for the network layer identified by conv_id (or None). image = optimize_image(conv_id=conv_id, feature=feature, show_progress=False, num_iterations=num_iterations) # Squeeze the dim of the array. image = image.squeeze() # Append to the list of images. images.append(image) # Convert to numpy-array so we can index all dimensions easily. images = np.array(images) # Plot the images. plot_images10(images=images)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
First Convolutional Layer These are the input images that maximize the features in the first convolutional layer, so these are the images that it likes to see.
optimize_images(conv_id=0)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Note how these are very simple shapes such as lines and angles. Some of these images may be completely white, which suggests that those features of the neural network are perhaps unused, so the number of features could be reduced in this layer. Second Convolutional Layer This shows the images that maximize the features or neurons in the second convolutional layer, so these are the input images it likes to see. Note how these are more complex lines and patterns compared to the first convolutional layer.
optimize_images(conv_id=1)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Final output layer Now find the image for the 2nd feature of the final output of the neural network. That is, we want to find an image that makes the neural network classify that image as the digit 2. This is the image that the neural network likes to see the most for the digit 2.
image = optimize_image(conv_id=None, feature=2, num_iterations=10, show_progress=True)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Note how the predicted class indeed becomes 2 already within the first few iterations so the optimization is working as intended. Also note how the loss-measure is increasing rapidly until it apparently converges. This is because the loss-measure is actually just the value of the feature or neuron that we are trying to maximize. Because this is the logits-layer prior to the softmax, these values can potentially be infinitely high, but they are limited because we limit the image-values between 0 and 1. Now plot the image that was found. This is the image that the neural network believes looks most like the digit 2.
plot_image(image)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Although some of the curves do hint somewhat at the digit 2, it is hard for a human to see why the neural network believes this is the optimal image for the digit 2. This can only be understood when the optimal images for the remaining digits are also shown.
optimize_images(conv_id=None)
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
These images may vary each time you run the optimization. Some of the images can be seen to somewhat resemble the hand-written digits. But the other images are often impossible to recognize, and it is hard to understand why the neural network thinks these are the optimal input images for those digits. The reason is perhaps that the neural network tries to recognize all digits simultaneously, and it has found that certain pixels often determine whether the image shows one digit or another. So the neural network has learned to differentiate those pixels that it has found to be important, but not the underlying curves and shapes of the digits, in the same way that a human recognizes the digits. Another possibility is that the data-set contains mis-classified digits which may confuse the neural network during training. We have previously seen how some of the digits in the data-set are very hard to read even for humans, and this may cause the neural network to become distorted and try to recognize strange artifacts in the images. Yet another possibility is that the optimization process has stagnated in a local optimum. One way to test this would be to run the optimization 50 times for the digits that are unclear, and see if some of the resulting images become clearer (a minimal sketch of this appears below). Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
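A minimal sketch of that restart idea, reusing the optimize_image() and plot_images10() helpers defined above; the choice of digit and the number of restarts are arbitrary, and this would be run before closing the session below:

```python
import numpy as np

# Run the final-layer optimization several times for one unclear digit.
# Each call starts from a fresh random image, so different runs can land
# in different local optima.
digit = 8           # arbitrary choice of an unclear digit
num_restarts = 10   # arbitrary number of restarts (10 fills the 2x5 plot grid)

restart_images = []
for _ in range(num_restarts):
    img = optimize_image(conv_id=None, feature=digit,
                         num_iterations=30, show_progress=False)
    restart_images.append(img)

# Plot all attempts side by side and check whether any of them look clearer
# than the single run shown earlier.
plot_images10(images=np.array(restart_images))
```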
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
13B_Visual_Analysis_MNIST.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Lesson: Develop a Predictive Theory
print("labels.txt \t : \t reviews.txt\n") pretty_print_review_and_label(2137) pretty_print_review_and_label(12816) pretty_print_review_and_label(6267) pretty_print_review_and_label(21934) pretty_print_review_and_label(5297) pretty_print_review_and_label(4998)
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Project 1: Quick Theory Validation
from collections import Counter import numpy as np positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): negative_counts[word] += 1 total_counts[word] += 1 positive_counts.most_common() pos_neg_ratios = Counter() for term,cnt in list(total_counts.most_common()): if(cnt > 100): pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1) pos_neg_ratios[term] = pos_neg_ratio for word,ratio in pos_neg_ratios.most_common(): if(ratio > 1): pos_neg_ratios[word] = np.log(ratio) else: pos_neg_ratios[word] = -np.log((1 / (ratio+0.01))) # words most frequently seen in a review with a "POSITIVE" label pos_neg_ratios.most_common() # words most frequently seen in a review with a "NEGATIVE" label list(reversed(pos_neg_ratios.most_common()))[0:30]
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Transforming Text into Numbers
from IPython.display import Image review = "This was a horrible, terrible movie." Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png')
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Project 2: Creating the Input/Output Data
vocab = set(total_counts.keys()) vocab_size = len(vocab) print(vocab_size) list(vocab) import numpy as np layer_0 = np.zeros((1,vocab_size)) layer_0 from IPython.display import Image Image(filename='sentiment_network.png') word2index = {} for i,word in enumerate(vocab): word2index[word] = i word2index def update_input_layer(review): global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 update_input_layer(reviews[0]) layer_0 def get_target_for_label(label): if(label == 'POSITIVE'): return 1 else: return 0 labels[0] get_target_for_label(labels[0]) labels[1] get_target_for_label(labels[1])
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Project 3: Building a Neural Network

- Start with your neural network from the last chapter
- 3 layer neural network
- no non-linearity in hidden layer
- use our functions to create the training data
- create a "pre_process_data" function to create vocabulary for our training data generating functions
- modify "train" to train over the entire corpus

Where to Get Help if You Need it

- Re-watch previous week's Udacity Lectures
- Chapters 3-5 - Grokking Deep Learning - (40% Off: traskud17)
import time import sys import numpy as np # Let's tweak our network from before to model these phenomena class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): # set our random number generator np.random.seed(1) self.pre_process_data(reviews, labels) self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): review_vocab = set() for review in reviews: for word in review.split(" "): review_vocab.add(word) self.review_vocab = list(review_vocab) label_vocab = set() for label in labels: label_vocab.add(label) self.label_vocab = list(label_vocab) self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.learning_rate = learning_rate self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # clear out previous state, reset the layer to be all 0s self.layer_0 *= 0 for word in review.split(" "): if(word in self.word2index.keys()): self.layer_0[0][self.word2index[word]] += 1 def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) def train(self, training_reviews, training_labels): assert(len(training_reviews) == len(training_labels)) correct_so_far = 0 start = time.time() for i in range(len(training_reviews)): review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### # Input Layer self.update_input_layer(review) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output. 
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # TODO: Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # TODO: Update the weights self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step if(np.abs(layer_2_error) < 0.5): correct_so_far += 1 reviews_per_second = i / float(time.time() - start) sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): correct = 0 start = time.time() for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 reviews_per_second = i / float(time.time() - start) sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): # Input Layer self.update_input_layer(review.lower()) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) if(layer_2[0] > 0.5): return "POSITIVE" else: return "NEGATIVE" mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) # evaluate our model before training (just to show how horrible it is) mlp.test(reviews[-1000:],labels[-1000:]) # train the network mlp.train(reviews[:-1000],labels[:-1000]) mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01) # train the network mlp.train(reviews[:-1000],labels[:-1000]) mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001) # train the network mlp.train(reviews[:-1000],labels[:-1000])
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Understanding Neural Noise
from IPython.display import Image Image(filename='sentiment_network.png') def update_input_layer(review): global layer_0 # clear out previous state, reset the layer to be all 0s layer_0 *= 0 for word in review.split(" "): layer_0[0][word2index[word]] += 1 update_input_layer(reviews[0]) layer_0 review_counter = Counter() for word in reviews[0].split(" "): review_counter[word] += 1 review_counter.most_common()
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Project 4: Reducing Noise in our Input Data
import time import sys import numpy as np # Let's tweak our network from before to model these phenomena class SentimentNetwork: def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1): # set our random number generator np.random.seed(1) self.pre_process_data(reviews, labels) self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate) def pre_process_data(self, reviews, labels): review_vocab = set() for review in reviews: for word in review.split(" "): review_vocab.add(word) self.review_vocab = list(review_vocab) label_vocab = set() for label in labels: label_vocab.add(label) self.label_vocab = list(label_vocab) self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) self.word2index = {} for i, word in enumerate(self.review_vocab): self.word2index[word] = i self.label2index = {} for i, label in enumerate(self.label_vocab): self.label2index[label] = i def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes)) self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.learning_rate = learning_rate self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # clear out previous state, reset the layer to be all 0s self.layer_0 *= 0 for word in review.split(" "): if(word in self.word2index.keys()): self.layer_0[0][self.word2index[word]] = 1 def get_target_for_label(self,label): if(label == 'POSITIVE'): return 1 else: return 0 def sigmoid(self,x): return 1 / (1 + np.exp(-x)) def sigmoid_output_2_derivative(self,output): return output * (1 - output) def train(self, training_reviews, training_labels): assert(len(training_reviews) == len(training_labels)) correct_so_far = 0 start = time.time() for i in range(len(training_reviews)): review = training_reviews[i] label = training_labels[i] #### Implement the forward pass here #### ### Forward pass ### # Input Layer self.update_input_layer(review) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output. 
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2) # TODO: Backpropagated error layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error # TODO: Update the weights self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step if(np.abs(layer_2_error) < 0.5): correct_so_far += 1 reviews_per_second = i / float(time.time() - start) sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): correct = 0 start = time.time() for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 reviews_per_second = i / float(time.time() - start) sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + "% #Correct:" + str(correct) + " #Tested:" + str(i+1) + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): # Input Layer self.update_input_layer(review.lower()) # Hidden layer layer_1 = self.layer_0.dot(self.weights_0_1) # Output layer layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2)) if(layer_2[0] > 0.5): return "POSITIVE" else: return "NEGATIVE" mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) mlp.train(reviews[:-1000],labels[:-1000]) # evaluate our model before training (just to show how horrible it is) mlp.test(reviews[-1000:],labels[-1000:])
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Analyzing Inefficiencies in our Network
Image(filename='sentiment_network_sparse.png') layer_0 = np.zeros(10) layer_0 layer_0[4] = 1 layer_0[9] = 1 layer_0 weights_0_1 = np.random.randn(10,5) layer_0.dot(weights_0_1) indices = [4,9] layer_1 = np.zeros(5) for index in indices: layer_1 += (weights_0_1[index]) layer_1 Image(filename='sentiment_network_sparse_2.png')
sentiment_network/Sentiment Classification - How to Best Frame a Problem for a Neural Network (Lesson 5).ipynb
y2ee201/Deep-Learning-Nanodegree
mit
Acquire data The Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames.
# read titanic training & test csv files as a pandas DataFrame train_df = pd.read_csv('data/titanic-kaggle/train.csv') test_df = pd.read_csv('data/titanic-kaggle/test.csv')
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Analyze by describing data Pandas also helps describe the datasets, answering the following questions early in our project. Which features are available in the dataset? We note the feature names for directly manipulating or analyzing them. These feature names are described on the Kaggle data page here.
print(train_df.columns.values)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Which features are categorical? These values classify the samples into sets of similar samples. Within categorical features, are the values nominal, ordinal, ratio, or interval based? Among other things, this helps us select the appropriate plots for visualization. Categorical: Survived, Sex, and Embarked. Ordinal: Pclass. Which features are numerical? These values change from sample to sample. Within numerical features, are the values discrete, continuous, or timeseries based? Among other things, this helps us select the appropriate plots for visualization. Continuous: Age, Fare. Discrete: SibSp, Parch.
# preview the data train_df.head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Which features are mixed data types? Numerical and alphanumeric data within the same feature. These are candidates for the correcting goal. Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric. Which features may contain errors or typos? This is harder to review for a large dataset; however, reviewing a few samples from a smaller dataset may tell us outright which features may require correcting. The Name feature may contain errors or typos, as there are several ways used to describe a name, including titles, round brackets, and quotes used for alternative or short names.
train_df.tail()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Which features contain blank, null or empty values? These will require correcting. The Cabin > Age > Embarked features contain a number of null values, in that order, for the training dataset. Cabin > Age are incomplete in the case of the test dataset. What are the data types of the various features? This helps us during the converting goal. Seven features are integers or floats (six in the case of the test dataset). Five features are strings (object).
train_df.info() print('_'*40) test_df.info()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
What is the distribution of numerical feature values across the samples? This helps us determine, among other early insights, how representative the training dataset is of the actual problem domain. Total samples are 891, or 40% of the actual number of passengers on board the Titanic (2,224). Survived is a categorical feature with 0 or 1 values. Around 38% of samples survived, which is representative of the actual survival rate. Most passengers (> 75%) did not travel with parents or children. More than 35% of passengers had a sibling on board. Fares varied significantly, with few passengers (<1%) paying as much as $512. There were few elderly passengers (<1%) within the 65-80 age range.
train_df.describe(percentiles=[.25, .5, .75])
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# Sibling distribution `[.65, .7]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
What is the distribution of categorical features? Names are unique across the dataset (count=unique=891). The Sex variable has two possible values with 65% male (top=male, freq=577/count=891). Cabin values have several duplicates across samples; alternatively, several passengers shared a cabin. Embarked takes three possible values; the S port was used by most passengers (top=S). The Ticket feature has a high ratio (22%) of duplicate values (unique=681). Possibly an error, as two passengers may not travel on the same ticket.
train_df.describe(include=['O'])
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Assumptions based on data analysis We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions. Completing. We may want to complete the Age feature as it is definitely correlated to survival. We may want to complete the Embarked feature as it may also correlate with survival or another important feature. Correcting. The Ticket feature may be dropped from our analysis as it contains a high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival. The Cabin feature may be dropped as it is highly incomplete, containing many null values in both the training and test datasets. PassengerId may be dropped from the training dataset as it does not contribute to survival. The Name feature is relatively non-standard and may not contribute directly to survival, so it may be dropped. Creating. We may want to create a new feature called Family based on Parch and SibSp to get the total count of family members on board. We may want to engineer the Name feature to extract Title as a new feature. We may want to create a new feature for Age bands. This turns a continuous numerical feature into an ordinal categorical feature. We may also want to create a Fare range feature if it helps our analysis. Correlating. Does port of embarkation (Embarked) correlate with survival? Does fare paid (range) correlate with survival? We may also add to our assumptions based on the problem description noted earlier. Classifying. Women (Sex=female) were more likely to have survived. Children (Age<?) were more likely to have survived. Upper-class passengers (Pclass=1) were more likely to have survived. Analyze by visualizing data Now we can start confirming some of our assumptions using visualizations for analyzing the data. Correlating numerical features Let us start by understanding correlations between numerical features and our solution goal (Survived). A histogram chart is useful for analyzing continuous numerical variables like Age, where banding or ranges will help identify useful patterns. The histogram can indicate the distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (did infants have a better survival rate?). Note that the y-axis in these histogram visualizations represents the count of samples or passengers, while the x-axis shows the Age bands. Observations. Infants (Age <= 4) had a high survival rate. The oldest passengers (Age = 80) survived. A large number of 15-25 year olds did not survive. Most passengers are in the 15-35 age range. Decisions. This simple analysis confirms our assumptions as decisions for subsequent workflow stages. We should consider Age (our classifying assumption #2) in our model training. Complete the Age feature for null values (completing #1).
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
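To act on the Age-band idea from the assumptions above, one option is to bin Age into equal-width bands and inspect survival per band; a non-destructive sketch that groups by the bins without adding any column.
train_df.groupby(pd.cut(train_df['Age'], 5))['Survived'].mean()   # five equal-width Age bands; rows with missing Age are excluded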
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values. Observations. Pclass=3 had the most passengers, however most did not survive. Confirms our classifying assumption #3. Infant passengers in Pclass=2 mostly survived. Further qualifies our classifying assumption #2. Most passengers in Pclass=1 survived. Confirms our classifying assumption #3. Pclass varies in terms of the Age distribution of passengers. Decisions. Consider Pclass for model training.
grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Correlating categorical features Now we can correlate categorical features with our solution goal. Observations. Female passengers had a much better survival rate than males. Confirms classifying (#1). Exception in Embarked=C, where males had a higher survival rate. Males had a better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (#2). Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (#1). Decisions. Add Sex feature to model training. Complete and add Embarked feature to model training.
grid = sns.FacetGrid(train_df, col='Embarked')
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
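The Embarked=C exception noted above can also be checked numerically with a simple groupby; a sketch that tabulates the survival rate by port and sex.
train_df.groupby(['Embarked', 'Sex'])['Survived'].mean().unstack()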
Correlating categorical and numerical features We may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), and Fare (Numeric continuous) with Survived (Categorical numeric). Observations. Higher fare-paying passengers had better survival. Confirms our assumption for creating (#4) fare ranges. Port of embarkation correlates with survival rates. Confirms correlating (#1) and completing (#2). Decisions. Consider banding the Fare feature.
grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
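To follow up on the Fare-banding decision, quantile-based bands give roughly equal-sized groups; a non-destructive sketch that groups by the bins without adding any column.
train_df.groupby(pd.qcut(train_df['Fare'], 4))['Survived'].mean()   # four quantile-based fare bands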
Wrangle data We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for the correcting, creating, and completing goals. Correcting by dropping features This is a good starting goal to execute. By dropping features we are dealing with fewer data points; this speeds up our notebook and eases the analysis. Based on our assumptions and decisions we want to drop the Cabin (correcting #2) and Ticket (correcting #1) features. Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Creating new feature extracting from existing We want to analyze if the Name feature can be engineered to extract titles and test the correlation between titles and survival, before dropping the Name and PassengerId features. In the following code we extract the Title feature using regular expressions. The RegEx pattern (\w+\.) matches the first word which ends with a dot character within the Name feature. The expand=False flag returns a Series. Observations. When we plot Title, Age, and Survived, we note the following observations. Most titles band Age groups accurately. For example: the Master title has an Age mean of 5 years. Survival among Title Age bands varies slightly. Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer). Decision. We decide to retain the new Title feature for model training.
train_df['Title'] = train_df.Name.str.extract('(\w+\.)', expand=False)

sns.barplot(hue="Survived", x="Age", y="Title", data=train_df, ci=False)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
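Before relying on the new Title feature, it helps to see how titles distribute across Sex and how many samples each title has; a crosstab sketch on the training dataset.
pd.crosstab(train_df['Title'], train_df['Sex'])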
Let us extract the Title feature for the test dataset as well. Then we can safely drop the Name feature from the training and testing datasets and the PassengerId feature from the training dataset.
test_df['Title'] = test_df.Name.str.extract('(\w+\.)', expand=False)

train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)

test_df.describe(include=['O'])
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Converting a categorical feature Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal. Let us start by converting the Sex feature to a new feature called Gender, where female=1 and male=0.
train_df['Gender'] = train_df['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.loc[:, ['Gender', 'Sex']].head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
We do this both for training and test datasets.
test_df['Gender'] = test_df['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
test_df.loc[:, ['Gender', 'Sex']].head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
We can now drop the Sex feature from our datasets.
train_df = train_df.drop(['Sex'], axis=1)
test_df = test_df.drop(['Sex'], axis=1)
train_df.head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Completing a numerical continuous feature Now we should start estimating and completing features with missing or null values. We will first do this for the Age feature. We can consider three methods to complete a numerical continuous feature. A simple way is to generate random numbers between the mean minus the standard deviation and the mean plus the standard deviation. A more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Gender, and Pclass. Guess Age values using median values for Age across sets of Pclass and Gender feature combinations. So, median Age for Pclass=1 and Gender=0, Pclass=1 and Gender=1, and so on... Combine methods 1 and 2. So instead of guessing age values based on the median, use random numbers between the mean minus and plus the standard deviation, based on sets of Pclass and Gender combinations. Methods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.
grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
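The median ages that method 2 relies on can be previewed directly with a pivot table before filling anything; a quick sketch.
train_df.pivot_table(values='Age', index='Gender', columns='Pclass', aggfunc='median')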
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.
guess_ages = np.zeros((2,3))
guess_ages
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Now we iterate over Gender (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations. Note that we also tried creating the AgeFill feature using method 3; during the model stage we compared its correlation coefficient against that of method 2 (both values are recorded in the comments below) and kept the median-based method 2.
for i in range(0, 2):
    for j in range(0, 3):
        guess_df = train_df[(train_df['Gender'] == i) & \
                            (train_df['Pclass'] == j+1)]['Age'].dropna()

        # Correlation of AgeFill is -0.014850
        # age_mean = guess_df.mean()
        # age_std = guess_df.std()
        # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)

        # Correlation of AgeFill is -0.011304
        age_guess = guess_df.median()

        # Convert random age float to nearest .5 age
        guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5

guess_ages

train_df['AgeFill'] = train_df['Age']

for i in range(0, 2):
    for j in range(0, 3):
        train_df.loc[ (train_df.Age.isnull()) & (train_df.Gender == i) & (train_df.Pclass == j+1),\
                'AgeFill'] = guess_ages[i,j]

train_df[train_df['Age'].isnull()][['Gender','Pclass','Age','AgeFill']].head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
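For reference, the same fill can be expressed more compactly with a grouped transform; a non-destructive sketch that skips the .5 rounding used above and does not assign anything.
group_median = train_df.groupby(['Gender', 'Pclass'])['Age'].transform('median')
train_df['Age'].fillna(group_median).head(10)   # same idea as the loop above, shown for comparison only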
We repeat the feature completing goal for the test dataset.
guess_ages = np.zeros((2,3))

for i in range(0, 2):
    for j in range(0, 3):
        guess_df = test_df[(test_df['Gender'] == i) & \
                           (test_df['Pclass'] == j+1)]['Age'].dropna()

        # Correlation of AgeFill is -0.014850
        # age_mean = guess_df.mean()
        # age_std = guess_df.std()
        # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)

        # Correlation of AgeFill is -0.011304
        age_guess = guess_df.median()

        guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5

test_df['AgeFill'] = test_df['Age']

for i in range(0, 2):
    for j in range(0, 3):
        test_df.loc[ (test_df.Age.isnull()) & (test_df.Gender == i) & (test_df.Pclass == j+1),\
                'AgeFill'] = guess_ages[i,j]

test_df[test_df['Age'].isnull()][['Gender','Pclass','Age','AgeFill']].head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
We can now drop the Age feature from our datasets.
train_df = train_df.drop(['Age'], axis=1)
test_df = test_df.drop(['Age'], axis=1)
train_df.head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Create new feature combining existing features We can create a new feature for FamilySize which combines Parch and SibSp. This would enable us to drop Parch and SibSp from our datasets. Note that we commented out this code as we realized during the model stage that the combined feature reduced the confidence score of our model instead of improving it. The correlation score of the separate Parch feature is also better than that of the combined FamilySize feature.
# Logistic Regression Score is 0.81032547699214363
# Parch correlation is -0.065878 and SibSp correlation is -0.370618
# Decision: Retain Parch and SibSp as separate features

# Logistic Regression Score is 0.80808080808080807
# FamilySize correlation is -0.233974

# train_df['FamilySize'] = train_df['SibSp'] + train_df['Parch']
# test_df['FamilySize'] = test_df['SibSp'] + test_df['Parch']
# train_df.loc[:, ['Parch', 'SibSp', 'FamilySize']].head(10)

# train_df = train_df.drop(['Parch', 'SibSp'], axis=1)
# test_df = test_df.drop(['Parch', 'SibSp'], axis=1)
# train_df.head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
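The comparison noted in the comments above can be reproduced without touching the working datasets; a quick sketch that correlates SibSp, Parch, and a would-be FamilySize with Survived (the family_size variable is only for illustration).
family_size = train_df['SibSp'] + train_df['Parch']
pd.DataFrame({'SibSp': train_df['SibSp'],
              'Parch': train_df['Parch'],
              'FamilySize': family_size}).corrwith(train_df['Survived'])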
We can also create an artificial feature combining Pclass and AgeFill.
test_df['Age*Class'] = test_df.AgeFill * test_df.Pclass
train_df['Age*Class'] = train_df.AgeFill * train_df.Pclass

train_df.loc[:, ['Age*Class', 'AgeFill', 'Pclass']].head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Completing a categorical feature The Embarked feature takes S, Q, C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port

train_df['EmbarkedFill'] = train_df['Embarked']
train_df.loc[train_df['Embarked'].isnull(), 'EmbarkedFill'] = freq_port

train_df[train_df['Embarked'].isnull()][['Embarked','EmbarkedFill']].head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
After copying Embarked into EmbarkedFill for the test dataset as well, we can drop the Embarked feature from our datasets.
test_df['EmbarkedFill'] = test_df['Embarked']

train_df = train_df.drop(['Embarked'], axis=1)
test_df = test_df.drop(['Embarked'], axis=1)
train_df.head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Converting categorical feature to numeric We can now convert the EmbarkedFill feature by creating a new numeric Port feature.
Ports = list(enumerate(np.unique(train_df['EmbarkedFill'])))
Ports_dict = { name : i for i, name in Ports }
train_df['Port'] = train_df.EmbarkedFill.map( lambda x: Ports_dict[x]).astype(int)

Ports = list(enumerate(np.unique(test_df['EmbarkedFill'])))
Ports_dict = { name : i for i, name in Ports }
test_df['Port'] = test_df.EmbarkedFill.map( lambda x: Ports_dict[x]).astype(int)

train_df[['EmbarkedFill', 'Port']].head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
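Building Ports_dict from np.unique on each dataset separately happens to be consistent here because both datasets contain the same three ports in the same sorted order; a fixed mapping makes that explicit. A sketch only, with a hypothetical port_map name.
port_map = {'C': 0, 'Q': 1, 'S': 2}   # hypothetical fixed mapping, matching the sorted order used above
train_df['Port'] = train_df['EmbarkedFill'].map(port_map).astype(int)
test_df['Port'] = test_df['EmbarkedFill'].map(port_map).astype(int)
train_df[['EmbarkedFill', 'Port']].head()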
Similarly, we can convert the Title feature to a numeric enumeration TitleBand, banding age groups by title.
Titles = list(enumerate(np.unique(train_df['Title'])))
Titles_dict = { name : i for i, name in Titles }
train_df['TitleBand'] = train_df.Title.map( lambda x: Titles_dict[x]).astype(int)

Titles = list(enumerate(np.unique(test_df['Title'])))
Titles_dict = { name : i for i, name in Titles }
test_df['TitleBand'] = test_df.Title.map( lambda x: Titles_dict[x]).astype(int)

train_df[['Title', 'TitleBand']].head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
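Unlike the ports, the sets of titles in the training and test datasets are not guaranteed to match, so enumerating each dataset separately can give the same title different codes. One way to guard against that is to build a single dictionary from the union of both datasets; a sketch only, with hypothetical variable names, that could replace the two per-dataset dictionaries above.
all_titles = np.union1d(train_df['Title'].unique(), test_df['Title'].unique())
shared_titles_dict = {name: i for i, name in enumerate(all_titles)}   # one shared code per title
shared_titles_dict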
Now we can safely drop the EmbarkedFill and Title features. With this we now have a dataset that only contains numerical values, a requirement for the model stage in our workflow.
train_df = train_df.drop(['EmbarkedFill', 'Title'], axis=1)
test_df = test_df.drop(['EmbarkedFill', 'Title'], axis=1)
train_df.head()
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Quick completing and converting a numeric feature We can now complete the Fare feature for the single missing value in the test dataset using the median value of the feature. We do this in a single line of code. Note that we are not creating an intermediate new feature or doing any further analysis for correlation to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values. We may also want to round off the fare to two decimals as it represents currency.
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)

train_df['Fare'] = train_df['Fare'].round(2)
test_df['Fare'] = test_df['Fare'].round(2)

test_df.head(10)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Model, predict and solve Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and other variables or features (Gender, Age, Port...). We are also performing a category of machine learning which is called supervised learning as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression, we can narrow down our choice of models to a few. These include: Logistic Regression KNN or k-Nearest Neighbors Support Vector Machines Naive Bayes classifier Decision Tree Random Forest Perceptron Artificial neural network RVM or Relevance Vector Machine
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test  = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia. Note the confidence score generated by the model based on our training dataset.
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the correlation coefficient for all features as these relate to survival. Gender, as expected, has the highest correlation with Survived. Surprisingly, Fare ranks higher than Age. Our decision to extract the TitleBand feature from Name is a good one. The artificial feature Age*Class scores well against existing features. We tried creating a feature combining Parch and SibSp into FamilySize. Parch ended up with a better correlation coefficient and FamilySize reduced our LogisticRegression confidence score. Another surprise is that Pclass contributes least to our model, even worse than Port of embarkation or the artificial feature Age*Class.
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])

coeff_df.sort_values(by='Correlation', ascending=False)
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia. Note that the model generates a confidence score which is higher than the Logistic Regression model's.
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia. The KNN confidence score is better than Logistic Regression but worse than SVM.
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia. The model generated confidence score is the lowest among the models evaluated so far.
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference Wikipedia.
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron

# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc

# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia. The model confidence score is the highest among models evaluated so far.
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
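A fully grown decision tree can memorize the training data, so the training-set score above is an optimistic estimate of real performance; a hedged sketch using scikit-learn's cross-validation to get a fairer estimate.
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(DecisionTreeClassifier(), X_train, Y_train, cv=5)   # 5-fold accuracy estimates
round(cv_scores.mean() * 100, 2)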
The next model, Random Forests, is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia. The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
titanic-data-science-solutions.ipynb
Startupsci/data-science-notebooks
mit
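The submission mentioned above pairs each PassengerId from the test dataset with the Random Forest prediction; a minimal sketch, assuming a local submission.csv output path.
submission = pd.DataFrame({
    'PassengerId': test_df['PassengerId'],
    'Survived': Y_pred
})
submission.to_csv('submission.csv', index=False)   # hypothetical output file name
submission.head()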