Dataset columns: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (15 classes).
Test your model Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)
print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean())
print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
MOOC/stanford_cnn_cs231n/assignment2/FullyConnectedNets.ipynb
ThyrixYang/LearningNotes
gpl-3.0
The following function is used to run the different benchmarks. It takes a target function to test, a setup function to create the file, an optional teardown function, and the number of iterations the target should be run to get a decent average:
import time

def benchmark(target, setup=None, teardown=None, iterations=10):
    total_time = 0
    setup_teardown_start = time.time()
    for i in range(iterations):
        data = tuple()
        if setup is not None:
            data = setup()
        time.sleep(1)  # allow changes to be flushed to disk
        start_time = time.time()
        target(*data)
        end_time = time.time()
        total_time += end_time - start_time
        if teardown is not None:
            teardown(*data)
    setup_teardown_end = time.time()
    total_setup_teardown = setup_teardown_end - setup_teardown_start
    mean = total_time / iterations
    return mean
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
The following functions are used as wrappers to make it easy to run a benchmark of Exdir or h5py:
import pandas as pd
import numpy as np

all_results = []

def benchmark_both(function, iterations=10, name_validation=True):
    if name_validation:
        setup_exdir_ = setup_exdir
        name = function.__name__
    else:
        setup_exdir_ = setup_exdir_no_validation
        name = function.__name__ + " (minimal name validation)"
    exdir_mean = benchmark(
        target=lambda f, path: function(f),
        setup=setup_exdir_,
        teardown=teardown_exdir,
        iterations=iterations
    )
    hdf5_mean = benchmark(
        target=lambda f, path: function(f),
        setup=setup_h5py,
        teardown=teardown_h5py,
        iterations=iterations
    )
    result = pd.DataFrame(
        [(name, hdf5_mean, exdir_mean, hdf5_mean/exdir_mean)],
        columns=["Test", "h5py", "Exdir", "Ratio"]
    )
    all_results.append(result)
    return result

def benchmark_exdir(function, iterations=10):
    exdir_mean = benchmark(
        target=lambda f, path: function(f),
        setup=setup_exdir,
        teardown=teardown_exdir,
        iterations=iterations
    )
    result = pd.DataFrame(
        [(function.__name__, np.nan, exdir_mean, np.nan)],
        columns=["Test", "h5py", "Exdir", "Ratio"]
    )
    all_results.append(result)
    return result
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
We are now ready to start running the different benchmarks. Benchmark functions The following benchmark creates a small number of attributes. This should be very fast with both h5py and Exdir:
def add_few_attributes(obj):
    for i in range(5):
        obj.attrs["hello" + str(i)] = "world"

benchmark_both(add_few_attributes)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
The following benchmark adds a larger number of attributes one-by-one. Because Exdir needs to read back and rewrite the entire file in case someone changed it between each write, this is significantly slower with Exdir than h5py:
def add_many_attributes(obj):
    for i in range(200):
        obj.attrs["hello" + str(i)] = "world"

benchmark_both(add_many_attributes, 10)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
However, Exdir is capable of writing all attributes in one operation. This makes writing the same attributes about as fast as (or even faster than) h5py. Writing a large number of attributes in a single operation is not possible with h5py, so we run this benchmark only with Exdir:
def add_many_attributes_single_operation(obj):
    attributes = {}
    for i in range(200):
        attributes["hello" + str(i)] = "world"
    obj.attrs = attributes

benchmark_exdir(add_many_attributes_single_operation)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
Exdir also supports adding nested attributes, such as Python dictionaries, which is not supported by h5py:
def add_attribute_tree(obj):
    tree = {}
    for i in range(100):
        tree["hello" + str(i)] = "world"
    tree["intermediate"] = {}
    intermediate = tree["intermediate"]
    for level in range(10):
        level_str = "level" + str(level)
        intermediate[level_str] = {}
        intermediate = intermediate[level_str]
    intermediate = 42
    obj.attrs["test"] = tree

benchmark_exdir(add_attribute_tree)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
The following benchmarks create a small, a medium, and a large dataset:
def add_small_dataset(obj):
    data = np.zeros((100, 100, 100))
    obj.create_dataset("foo", data=data)
    obj.close()

benchmark_both(add_small_dataset)

def add_medium_dataset(obj):
    data = np.zeros((1000, 100, 100))
    obj.create_dataset("foo", data=data)
    obj.close()

benchmark_both(add_medium_dataset, 10)

def add_large_dataset(obj):
    data = np.zeros((1000, 1000, 100))
    obj.create_dataset("foo", data=data)
    obj.close()

benchmark_both(add_large_dataset, 3)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
There is some overhead in creating the objects themselves. This is rather small in h5py, but can be high in Exdir with name validation enabled. This is because the name of every created object must be checked against all the existing objects in the same group:
def create_many_objects(obj):
    for i in range(5000):
        group = obj.create_group("group{}".format(i))

benchmark_both(create_many_objects, 3)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
With only minimal name validation, this is almost as fast in Exdir as it is in h5py. Minimal name validation only checks whether a file with the exact same name exists in the folder:
benchmark_both(create_many_objects, 3, name_validation=False)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
Not only does the number of created objects matter: creating them in a tree structure can also incur a performance penalty. The following test creates an object tree:
def create_large_tree(obj, level=0):
    if level > 4:
        return
    for i in range(3):
        group = obj.create_group("group_{}_{}".format(i, level))
        data = np.zeros((10, 10, 10))
        group.create_dataset("dataset_{}_{}".format(i, level), data=data)
        create_large_tree(group, level + 1)

benchmark_both(create_large_tree)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
The final benchmark tests writing a "slice" of a dataset, which means only a part of the entire dataset is modified. This is typically fast in both h5py and in Exdir thanks to memory mapping.
def write_slice(dataset):
    dataset[320:420, 0:300, 0:100] = np.ones((100, 300, 100))

def create_setup_dataset(setup_function):
    def setup():
        f, path = setup_function()
        data = np.zeros((1000, 500, 100))
        dataset = f.create_dataset("foo", data=data)
        time.sleep(1)  # allow changes to get flushed to disk
        return dataset, f, path
    return setup

exdir_mean = benchmark(
    target=lambda dataset, f, path: write_slice(dataset),
    setup=create_setup_dataset(setup_exdir),
    teardown=lambda dataset, f, path: teardown_exdir(f, path),
    iterations=3
)

hdf5_mean = benchmark(
    target=lambda dataset, f, path: write_slice(dataset),
    setup=create_setup_dataset(setup_h5py),
    teardown=lambda dataset, f, path: teardown_h5py(f, path),
    iterations=3
)

result = pd.DataFrame(
    [("write_slice", hdf5_mean, exdir_mean, hdf5_mean/exdir_mean)],
    columns=["Test", "h5py", "Exdir", "Ratio"]
)
all_results.append(result)
result
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
Benchmark summary The results are summarized in the following table:
pd.concat(all_results)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
Profiling the largest differences While the performance of Exdir in many cases is close to h5py, there are a few cases that can be worth investigating further. For instance, it might be interesting to know what takes most time in create_large_tree, which is about 2-3 times slower in Exdir than h5py:
import cProfile

f, path = setup_exdir()
cProfile.run('create_large_tree(f)', sort="cumtime")
teardown_exdir(f, path)
tests/benchmarks/benchmarks.ipynb
CINPLA/exdir
mit
Unzipping the Amazon Baby Products Reviews file The dataset consists of baby product reviews from Amazon.com.
import os
import zipfile

# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]

# Filename of the unzipped file
unzipped_file = 'amazon_baby.csv'

# If the unzipped file is not in files_list, unzip the file
if unzipped_file not in files_list:
    zip_file = unzipped_file + '.zip'
    unzipping = zipfile.ZipFile(zip_file)
    unzipping.extractall()
    unzipping.close()
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Loading the products data The dataset is loaded into a Pandas DataFrame called products.
products = pd.read_csv("amazon_baby.csv")
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, let us see a preview of what the dataset looks like.
products.head()
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Performing text cleaning Let us explore a specific example of a baby product.
products.ix[1]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, we will perform 2 simple data transformations: Remove punctuation using Python's built-in string functionality. Transform the reviews into word counts. Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation. Before removing the punctuation from the strings in the review column, we will fill all NA values with an empty string.
products["review"] = products["review"].fillna("")
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Below, we are removing all the punctuation from the strings in the review column and saving the result into a new column in the dataframe.
products["review_clean"] = products["review"].str.translate(None, string.punctuation)
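The cell above uses the Python 2 signature of str.translate. As a hedged aside, under Python 3 (and current pandas) an equivalent approach would build a translation table with str.maketrans; this is a sketch assuming the same products dataframe and the string module import:

import string

# Python 3 sketch: build a table that maps every punctuation character to None
punct_table = str.maketrans('', '', string.punctuation)
products["review_clean"] = products["review"].str.translate(punct_table)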
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Extract sentiments We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
products = products[products['rating'] != 3]
len(products)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with a rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label. Below, we create a function that we will apply to the "rating" column of the dataframe to determine whether the review is positive or negative.
def sent_func(x):
    # If rating is >= 4, return a positive sentiment (+1)
    if x >= 4:
        return 1
    # Else, return a negative sentiment (-1)
    else:
        return -1
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Creating a "sentiment" column by applying the sent_func to the "rating" column in the dataframe.
products['sentiment'] = products['rating'].apply(sent_func)
products.ix[20:22]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1). Split data into training and test sets Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. Loading the indices for the train and test data and putting them in a list
with open('module-2-assignment-train-idx.txt', 'r') as train_file:
    ind_list_train = map(int, train_file.read().split(','))
with open('module-2-assignment-test-idx.txt', 'r') as test_file:
    ind_list_test = map(int, test_file.read().split(','))
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Using the indices of the train and test data to create the train and test datasets.
train_data = products.iloc[ind_list_train, :]
test_data = products.iloc[ind_list_test, :]

print len(train_data)
print len(test_data)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Build the word count vector for each review We will now compute the word count for each word that appears in the reviews. A vector consisting of word counts is often referred to as bag-of-word features. Since most words occur in only a few reviews, word count vectors are sparse. For this reason, scikit-learn and many other tools use sparse matrices to store a collection of word count vectors. Refer to appropriate manuals to produce sparse word count vectors. General steps for extracting word count vectors are as follows: Learn a vocabulary (set of all words) from the training data. Only the words that show up in the training data will be considered for feature extraction. Compute the occurrences of the words in each review and collect them into a row vector. Build a sparse matrix where each row is the word count vector for the corresponding review. Call this matrix train_matrix. Using the same mapping between words and columns, convert the test data into a sparse matrix test_matrix. The following cell uses CountVectorizer in scikit-learn. Notice the token_pattern argument in the constructor.
# Use this token pattern to keep single-letter words
vectorizer = CountVectorizer(token_pattern=r'\b\w+\b')

# First, learn vocabulary from the training data and assign columns to words
# Then convert the training data into a sparse matrix
train_matrix = vectorizer.fit_transform(train_data['review_clean'])

# Second, convert the test data into a sparse matrix, using the same word-column mapping
test_matrix = vectorizer.transform(test_data['review_clean'])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Train a sentiment classifier with logistic regression We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. Note: This line may take 1-2 minutes. Creating an instance of the LogisticRegression class
logreg = linear_model.LogisticRegression()
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model sentiment_model.
sentiment_model = logreg.fit(train_matrix, train_data["sentiment"])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Putting all the weights from the model into a numpy array.
weights_list = list(sentiment_model.intercept_) + list(sentiment_model.coef_.flatten())
weights_sent_model = np.array(weights_list, dtype=np.double)

print len(weights_sent_model)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment. Quiz question: How many weights are >= 0?
num_positive_weights = len(weights_sent_model[weights_sent_model >= 0.0])
num_negative_weights = len(weights_sent_model[weights_sent_model < 0.0])

print "Number of positive weights: %i" % num_positive_weights
print "Number of negative weights: %i" % num_negative_weights
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Making predictions with logistic regression Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.
sample_test_data = test_data.ix[[59, 71, 91]]
print sample_test_data['rating']
sample_test_data
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Let's dig deeper into the first row of the sample_test_data. Here's the full review:
sample_test_data['review'].ix[59]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
That review seems pretty positive. Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
sample_test_data['review'].ix[71]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as: $$ \mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i) $$ where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores . For each row, the score (or margin) is a number in the range [-inf, inf].
sample_test_matrix = vectorizer.transform(sample_test_data['review_clean'])
scores = sentiment_model.decision_function(sample_test_matrix)
print scores
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Predicting sentiment These scores can be used to make class predictions as follows: $$ \hat{y} = \left\{ \begin{array}{ll} +1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\ -1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \end{array} \right. $$ Using scores, write code to calculate $\hat{y}$, the class predictions:
pred_sent_test_data = []
for val in scores:
    if val > 0:
        pred_sent_test_data.append(1)
    else:
        pred_sent_test_data.append(-1)

print pred_sent_test_data
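As a small aside (not part of the original assignment), the same thresholding rule can be written in a vectorized way with numpy:

# vectorized version of the thresholding rule above
pred_sent_vectorized = np.where(scores > 0, 1, -1)
print(pred_sent_vectorized)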
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Checkpoint: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from Scikit-Learn.
print "Class predictions according to Scikit-Learn:"
print sentiment_model.predict(sample_test_matrix)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Probability predictions Recall from the lectures that we can also calculate the probability predictions from the scores using: $$ P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}. $$ Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
prob_pos_score = 1.0 / (1.0 + np.exp(-scores))
prob_pos_score
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Checkpoint: Make sure your probability predictions match the ones obtained from Scikit-Learn.
print "Probability predictions according to Scikit-Learn:"
print sentiment_model.predict_proba(sample_test_matrix)[:, 1]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review? The 3rd data point has the lowest probability of being positive Find the most positive (and negative) review We now turn to examining the full test dataset, test_data. Using the sentiment_model, find the 40 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews." To calculate these top-40 reviews, use the following steps: 1. Make probability predictions on test_data using the sentiment_model. 2. Sort the data according to those predictions and pick the top 40. Computing the scores with the sentiment_model decision function and then calculating the probability that y = +1
scores_test_data = sentiment_model.decision_function(test_matrix)
prob_test_data = 1.0 / (1.0 + np.exp(-scores_test_data))
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
To find the 40 most positive and the 40 most negative values, we will create a list of tuples with the entries (probability, index). We will then sort the list and will be able to extract the indices corresponding to each entry.
# List of indices in the test data
ind_vals_test_data = test_data.index.values

# Preallocated list that will be filled with the tuples (probability, index)
score_label_lst_test = len(scores_test_data) * [-1]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Filling the list of tuples with the (probability, index) values
for i in range(len(scores_test_data)):
    score_label_lst_test[i] = (prob_test_data[i], ind_vals_test_data[i])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Sorting the list with the entries (probability, index)
score_label_lst_test.sort()
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Extracting the top 40 positive reviews and the top 40 negative reviews
top_40_pos_test_rev = score_label_lst_test[-40:]
top_40_neg_test_rev = score_label_lst_test[0:40]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Getting the indices of the top 40 positive reviews.
ind_top_40_pos_test = 40 * [-1]
for i, val in enumerate(top_40_pos_test_rev):
    ind_top_40_pos_test[i] = val[1]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Getting the indices of the top 40 negative reviews.
ind_top_40_neg_test = 40 * [-1]
for i, val in enumerate(top_40_neg_test_rev):
    ind_top_40_neg_test[i] = val[1]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Which of the following products are represented in the 40 most positive reviews? [multiple choice]
test_data.ix[ind_top_40_pos_test]["name"]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
test_data.ix[ind_top_40_neg_test]["name"]
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Compute accuracy of the classifier We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by $$ \mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}} $$ This can be computed as follows: Step 1: Use the trained model to compute class predictions. Step 2: Count the number of data points whose predicted class labels match the ground truth labels (called true_labels below). Step 3: Divide the total number of correct predictions by the total number of data points in the dataset. Complete the function below to compute the classification accuracy:
def get_classification_accuracy(model, data, true_labels):
    # Constructing the word-count vector
    data_matrix = vectorizer.transform(data['review_clean'])
    # Getting the predictions
    preds_data = model.predict(data_matrix)
    # Computing the number of correctly classified examples and the total examples
    n_correct = float(np.sum(preds_data == true_labels.values))
    n_total = float(len(preds_data))
    # Computing the accuracy by dividing the number of
    # correctly classified examples by the total number of examples
    accuracy = n_correct / n_total
    return accuracy
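As an aside, steps 2 and 3 can also be delegated to scikit-learn's accuracy_score; a minimal sketch, assuming the same vectorizer defined earlier:

from sklearn.metrics import accuracy_score

def get_classification_accuracy_sklearn(model, data, true_labels):
    # same computation as above, with the counting done by scikit-learn
    data_matrix = vectorizer.transform(data['review_clean'])
    return accuracy_score(true_labels, model.predict(data_matrix))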
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, let's compute the classification accuracy of the sentiment_model on the test_data.
acc_sent_mod_test = get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
print acc_sent_mod_test
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
print "Accuracy on Test Data: %.2f" %(acc_sent_mod_test)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better? No, you may be overfitting. Now, computing the accuracy of the sentiment model on the training data for a future quiz question.
acc_sent_mod_train = get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
print acc_sent_mod_train
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Finding the weights of significant words for the sentiment_model. In this section, we will find the weights of significant words for the sentiment_model. Creating a vocab list. The vocab list contains all the words used for the sentiment_model
vocab = vectorizer.get_feature_names()
print len(vocab)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Creating a list of the significant words as unicode (utf-8) strings
un_sig_words = [u'love', u'great', u'easy', u'old', u'little', u'perfect', u'loves', u'well', u'able', u'car', u'broke', u'less', u'even', u'waste', u'disappointed', u'work', u'product', u'money', u'would', u'return']
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Creating a list that will store all the indices where the significant words appear in the vocab list.
ind_vocab_sig_words = []
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Finding the index where each significant word appears.
for word in un_sig_words:
    ind_vocab_sig_words.append(vocab.index(word))
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Creating an empty list that will store the weights of the significant words. Then, using the index to find the weight for each significant word.
ws_sent_mod_sig_words = []
for ind in ind_vocab_sig_words:
    ws_sent_mod_sig_words.append(sentiment_model.coef_.flatten()[ind])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Creating a series that will store the weights of the significant words and displaying this Series.
ws_sent_mod_ser = pd.Series(data=ws_sent_mod_sig_words, index=un_sig_words)
ws_sent_mod_ser
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Learn another classifier with fewer words There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. These are:
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
                     'well', 'able', 'car', 'broke', 'less', 'even', 'waste',
                     'disappointed', 'work', 'product', 'money', 'would', 'return']

len(significant_words)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Compute a new set of word count vectors using only these words. The CountVectorizer class has a parameter that lets you limit the choice of words when building word count vectors:
vectorizer_word_subset = CountVectorizer(vocabulary=significant_words)  # limit to 20 words
train_matrix_word_subset = vectorizer_word_subset.fit_transform(train_data['review_clean'])
test_matrix_word_subset = vectorizer_word_subset.transform(test_data['review_clean'])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Train a logistic regression model on a subset of data We will now build a classifier with word_count_subset as the feature and sentiment as the target. Creating an instance of the LogisticRegression class. Using the fit method to train the classifier. This model should use the sparse word count matrix (train_matrix_word_subset) as features and the column sentiment of train_data as the target. Use the default values for other parameters. Call this model simple_model.
log_reg = linear_model.LogisticRegression()
simple_model = log_reg.fit(train_matrix_word_subset, train_data["sentiment"])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Getting the weights for the 20 significant words from the simple_model
ws_simp_model = list(simple_model.coef_.flatten())
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Putting the weights in a Series with the words corresponding to the weights as the index.
ws_simp_mod_ser = pd.Series(data=ws_simp_model, index=significant_words)
ws_simp_mod_ser
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Consider the coefficients of simple_model. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
print len(simple_model.coef_[simple_model.coef_>0])
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model? Yes, see weights below for the significant words for the sentiment model
ws_sent_mod_ser
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Comparing models We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above. First, compute the classification accuracy of the sentiment_model on the train_data:
acc_sent_mod_train
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, compute the classification accuracy of the simple_model on the train_data:
preds_simp_mod_train = simple_model.predict(train_matrix_word_subset)
n_cor_preds_simp_mod_train = float(np.sum(preds_simp_mod_train == train_data['sentiment'].values))
n_tol_preds_simp_mod_train = float(len(preds_simp_mod_train))
acc_simp_mod_train = n_cor_preds_simp_mod_train / n_tol_preds_simp_mod_train

print acc_simp_mod_train
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
if acc_sent_mod_train > acc_simp_mod_train:
    print "sentiment_model"
else:
    print "simple_model"
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
acc_sent_mod_test
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Next, we will compute the classification accuracy of the simple_model on the test_data:
preds_simp_mod_test = simple_model.predict(test_matrix_word_subset)
n_cor_preds_simp_mod_test = float(np.sum(preds_simp_mod_test == test_data['sentiment'].values))
n_tol_preds_simp_mod_test = float(len(preds_simp_mod_test))
acc_simp_mod_test = n_cor_preds_simp_mod_test / n_tol_preds_simp_mod_test

print acc_simp_mod_test
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
if acc_sent_mod_test > acc_simp_mod_test:
    print "sentiment_model"
else:
    print "simple_model"
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Baseline: Majority class prediction It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier predicts the majority class for all data points. At the very least, you should healthily beat the majority class classifier; otherwise, the model is (usually) pointless. What is the majority class in the train_data?
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()

acc_pos_train = float(num_positive) / float(len(train_data['sentiment']))
acc_neg_train = float(num_negative) / float(len(train_data['sentiment']))

if acc_pos_train > acc_neg_train:
    print "Positive Sentiment is Majority Classifier for Training Data"
else:
    print "Negative Sentiment is Majority Classifier for Training Data"
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Now compute the accuracy of the majority class classifier on test_data. Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).
num_pos_test = (test_data['sentiment'] == +1).sum()
acc_pos_test = float(num_pos_test) / float(len(test_data['sentiment']))

print "Accuracy of Majority Class Classifier on Test Data: %.2f" % (acc_pos_test)
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?
if acc_sent_mod_test > acc_pos_test:
    print "Yes, the sentiment_model is better than majority class classifier"
else:
    print "No, the majority class classifier is better than sentiment_model"
Week_1_Predicting_Sentiment_from_Reviews/week_1_lin_classifier_assign.ipynb
Santana9937/Classification_ML_Specialization
mit
Using magic functions of Jupyter and timeit https://docs.python.org/3.5/library/timeit.html https://ipython.org/ipython-doc/3/interactive/magics.html#magic-time
%%timeit
fun()

%%time
fun()
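Note that %%timeit and %%time are cell magics, so each one must be the first line of its own cell. For reference, the timeit module linked above can also be used programmatically; a small sketch (fun here is just a stand-in workload, not something defined in this notebook):

import timeit

def fun():
    # stand-in workload to measure
    return sum(x * x for x in range(1000))

# run the callable 1000 times and report the total elapsed seconds
print(timeit.timeit(fun, number=1000))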
content/handouts/concurrency-exercise.ipynb
oroszgy/oroszgy.github.io
mit
Exercises What is the fastest way to download 100 pages from index.hu? How can we efficiently calculate the factors of 1000 random integers using the factorize_naive function below? (A possible approach is sketched after the code below.)
import requests

def get_page(url):
    response = requests.request(url=url, method="GET")
    return response

get_page("http://index.hu")

def factorize_naive(n):
    """ A naive factorization method. Take integer 'n', return list of factors. """
    if n < 2:
        return []
    factors = []
    p = 2
    while True:
        if n == 1:
            return factors
        r = n % p
        if r == 0:
            factors.append(p)
            n = n // p
        elif p * p >= n:
            factors.append(n)
            return factors
        elif p > 2:
            # Advance in steps of 2 over odd numbers
            p += 2
        else:
            # If p == 2, get to 3
            p += 1
    assert False, "unreachable"
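One possible approach to the exercises above (a sketch, not the official solution): use a thread pool for the I/O-bound downloads and a process pool for the CPU-bound factorization. The worker count and the integer range below are arbitrary assumptions.

import random
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

# I/O-bound: fetch the page 100 times concurrently with threads
with ThreadPoolExecutor(max_workers=20) as pool:
    pages = list(pool.map(get_page, ["http://index.hu"] * 100))

# CPU-bound: factorize 1000 random integers in parallel processes
numbers = [random.randint(2, 10**6) for _ in range(1000)]
with ProcessPoolExecutor() as pool:
    all_factors = list(pool.map(factorize_naive, numbers))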
content/handouts/concurrency-exercise.ipynb
oroszgy/oroszgy.github.io
mit
Install dependencies
!apt-get install libicu-dev libpango1.0-dev libcairo2-dev libleptonica-dev
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
Clone, compile and set up Tesseract
!git clone https://github.com/tesseract-ocr/tesseract
import os
os.chdir('tesseract')
!sh autogen.sh
!./configure --disable-graphics
!make -j 8
!make install
!ldconfig
!make training
!make training-install
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
Grab some things to scrape the RIA corpus
import os
os.chdir('/content')
!git clone https://github.com/jimregan/tesseract-gle-uncial/
!apt-get install lynx
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
Scrape the RIA corpus
! for i in A B C D E F G H I J K L M N O P Q R S T U V W X Y Z;do lynx -dump "http://corpas.ria.ie/index.php?fsg_function=1&fsg_page=$i" |grep http://corpas.ria.ie|awk '{print $NF}' >> list;done
!grep 'function=3' list |sort|uniq|grep corpas.ria|sed -e 's/function=3/function=5/' > input
!wget -x -c -i input
!mkdir text
!for i in corpas.ria.ie/*;do id=$(echo $i|awk -F'=' '{print $NF}');cat $i | perl /content/tesseract-gle-uncial/scripts/extract-ria.pl > text/$id.txt;done
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
Get the raw corpus in a single text file
!cat text/*.txt|grep -v '^$' > ria-raw.txt
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
Compress the raw text; this can be downloaded through the file browser on the left, so the scraping steps can be skipped in future
!gzip ria-raw.txt
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
...and can be re-added using the upload feature in the file browser
!gzip -d ria-raw.txt.gz
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
This next part is so I can update the langdata files
import os
os.chdir('/content')
!git clone https://github.com/tesseract-ocr/langdata
!cat ria-raw.txt | perl /content/tesseract-gle-uncial/scripts/toponc.pl > ria-ponc.txt
!mkdir genwlout
!perl /content/tesseract-gle-uncial/scripts/genlangdata.pl -i ria-ponc.txt -d genwlout -p gle_uncial

import os
os.chdir('/content/genwlout')
#!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cat $i.unsorted | awk -F'\t' '{print $1}' | sort | uniq > $i.sorted;done
!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cat $i.sorted /content/langdata/gle_uncial/$i | sort | uniq > $i;done
!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cp $i /content/langdata/gle_uncial/;done

# Grab the fonts
import os
os.chdir('/content')
!mkdir fonts
os.chdir('fonts')
!wget -i /content/tesseract-gle-uncial/fonts.txt
!for i in *.zip; do unzip $i;done
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
Generate
os.chdir('/content')
!mkdir unpack
!combine_tessdata -u /content/gle_uncial.traineddata unpack/gle_uncial.
os.chdir('unpack')
!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cp /content/genwlout/$i .;done
!wordlist2dawg gle_uncial.numbers gle_uncial.lstm-number-dawg gle_uncial.lstm-unicharset
!wordlist2dawg gle_uncial.punc gle_uncial.lstm-punc-dawg gle_uncial.lstm-unicharset
!wordlist2dawg gle_uncial.wordlist gle_uncial.lstm-word-dawg gle_uncial.lstm-unicharset
!rm gle_uncial.numbers gle_uncial.word.bigrams gle_uncial.punc gle_uncial.wordlist
os.chdir('/content')
!mv gle_uncial.traineddata gle_uncial.traineddata.orig
!combine_tessdata unpack/gle_uncial.
os.chdir('/content')
!bash /content/tesseract/src/training/tesstrain.sh
!text2image --fonts_dir fonts --list_available_fonts
!cat genwlout/gle_uncial.wordlist.unsorted|awk -F'\t' '{print $2 "\t" $1}'|sort -nr > freqlist
!cat freqlist|awk -F'\t' '{print $2}'|grep -v '^$' > wordlist
!cat ria-ponc.txt|sort|uniq|head -n 400000 > gle_uncial.training_text
!cp unpack/gle_uncial.traineddata /usr/share/tesseract-ocr/4.00/tessdata
!cp gle_uncial.training_text langdata/gle_uncial/
!mkdir output
!bash tesseract/src/training/tesstrain.sh --fonts_dir fonts --lang gle_uncial --linedata_only --noextract_font_properties --langdata_dir langdata --tessdata_dir /usr/share/tesseract-ocr/4.00/tessdata --output_dir output
Update_gle_uncial_traineddata_for_Tesseract_4.ipynb
jimregan/tesseract-gle-uncial
apache-2.0
SGDClassifier SGD stands for Stochastic Gradient Descent, a very popular numerical procedure to find the local minimum of a function (in this case, the loss function, which measures how far every instance is from our boundary). The algorithm will learn the coefficients of the hyperplane by minimizing the loss function.
# instantiate
sgd = SGDClassifier()

# fitting
sgd.fit(X_train, y_train)

# coefficient
print("coefficient", sgd.coef_)

# intercept
print("intercept: ", sgd.intercept_)

# predicting for one
y_pred = sgd.predict(scaler.transform([[4.9, 3.1, 1.5, 0.1]]))
print(y_pred)

# predicting for X_test
y_pred = sgd.predict(X_test)

# checking accuracy score
print("Model Accuracy on Train data: ", accuracy_score(y_train, sgd.predict(X_train)))
print("Model Accuracy on Test data: ", accuracy_score(y_test, y_pred))

# let's plot the data
plt.figure(figsize=(8, 6))
plt.scatter(X_train[:,0][y_train==0], X_train[:,1][y_train==0], color='red', label='setosa')
plt.scatter(X_train[:,0][y_train==1], X_train[:,1][y_train==1], color='blue', label='verginica')
plt.scatter(X_train[:,0][y_train==2], X_train[:,1][y_train==2], color='green', label='versicolour')
plt.legend(loc='best')
Sklearn_MLPython/CH01.ipynb
atulsingh0/MachineLearning
gpl-3.0
Classification Report
Accuracy = (TP + TN) / m
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 * Precision * Recall / (Precision + Recall)
# predicting
print(classification_report(y_pred=y_pred, y_true=y_test))
confusion_matrix(y_pred=y_pred, y_true=y_test)
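To connect the formulas above to the output of confusion_matrix, here is a small sketch for the binary case (the iris example above is multi-class, where the report computes these metrics per class); y_true_bin and y_pred_bin are hypothetical binary label arrays used only for illustration:

import numpy as np
from sklearn.metrics import confusion_matrix

# hypothetical binary labels, just to illustrate the formulas
y_true_bin = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred_bin = np.array([1, 0, 0, 1, 0, 1, 1, 0])

# for binary labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true_bin, y_pred_bin).ravel()

accuracy = (tp + tn) / float(len(y_true_bin))
precision = tp / float(tp + fp)
recall = tp / float(tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)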
Sklearn_MLPython/CH01.ipynb
atulsingh0/MachineLearning
gpl-3.0
Using a pipeline mechanism to build and test our model
# create a composite estimator made by a pipeline of the standardization and the linear model
clf = pipeline.Pipeline([
    ('scaler', preprocessing.StandardScaler()),
    ('linear_model', SGDClassifier())
])

# create a k-fold cross validation iterator of k=5 folds
cv = KFold(X.shape[0], 5, shuffle=True, random_state=33)

# by default the score used is the one returned by the score method of the estimator (accuracy)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores)

# mean accuracy
print(np.mean(scores), sp.stats.sem(scores))
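Note that the KFold signature above comes from the old sklearn.cross_validation API; in scikit-learn >= 0.18 the equivalent (a sketch, assuming the same clf, X, y and scipy imported as sp) would be:

from sklearn.model_selection import KFold, cross_val_score

# modern API: KFold no longer takes the number of samples
cv = KFold(n_splits=5, shuffle=True, random_state=33)
scores = cross_val_score(clf, X, y, cv=cv)
print(np.mean(scores), sp.stats.sem(scores))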
Sklearn_MLPython/CH01.ipynb
atulsingh0/MachineLearning
gpl-3.0
You can use the single line call "analyze," which does all the available analysis simultaneously
# NOTE: This will take several minutes depending on the performance of your machine
audio_features = audioAnalyzer.analyze(audio_filename)

# plot the features
plt.rcParams['figure.figsize'] = [20, 8]
audioAnalyzer.plot(audio_features)
plt.show()
demos/audio_analysis_demo.ipynb
sertansenturk/tomato
agpl-3.0
... or call all the methods individually
# audio metadata extraction
metadata = audioAnalyzer.crawl_musicbrainz_metadata(audio_filename)

# predominant melody extraction
pitch = audioAnalyzer.extract_pitch(audio_filename)

# pitch post filtering
pitch_filtered = audioAnalyzer.filter_pitch(pitch)

# histogram computation
pitch_distribution = audioAnalyzer.compute_pitch_distribution(pitch_filtered)
pitch_class_distribution = copy.deepcopy(pitch_distribution)
pitch_class_distribution.to_pcd()

# tonic identification
tonic = audioAnalyzer.identify_tonic(pitch_filtered)

# get the makam from metadata if possible else apply makam recognition
makams = audioAnalyzer.get_makams(metadata, pitch_filtered, tonic)
makam = list(makams)[0]  # for now get the first makam

# transposition (ahenk) identification
transposition = audioAnalyzer.identify_transposition(tonic, makam)

# stable note extraction (tuning analysis)
note_models = audioAnalyzer.compute_note_models(pitch_distribution, tonic, makam)

# get the melodic progression model
melodic_progression = audioAnalyzer.compute_melodic_progression(pitch_filtered)
demos/audio_analysis_demo.ipynb
sertansenturk/tomato
agpl-3.0
Visualize the MetaLearning pipeline built on top of NitroML. We are using NitroML on Kubeflow. This notebook allows users to analyze the results of NitroML metalearning pipelines.
# Step 1: Configure your cluster with gcloud
# `gcloud container clusters get-credentials <cluster_name> --zone <cluster-zone> --project <project-id>`

# Step 2: Get the port where the gRPC service is running on the cluster
# `kubectl get configmap metadata-grpc-configmap -o jsonpath={.data}`
# Use `METADATA_GRPC_SERVICE_PORT` in the next step. The default port used is 8080.

# Step 3: Port forwarding
# `kubectl port-forward deployment/metadata-grpc-deployment 9898:<METADATA_GRPC_SERVICE_PORT>`

# Troubleshooting
# If you get an error related to Metadata (for example, "Transaction already open"), try restarting
# the metadata-grpc-service using:
# `kubectl rollout restart deployment metadata-grpc-deployment`

import sys, os

PROJECT_DIR = os.path.join(sys.path[0], '..')
%cd {PROJECT_DIR}

import json

from examples import config as cloud_config
import examples.tuner_data_utils as tuner_utils
from ml_metadata.proto import metadata_store_pb2
from ml_metadata.metadata_store import metadata_store
from nitroml.benchmark import results
import seaborn as sns
import tensorflow as tf
import qgrid

sns.set()
examples/visualize_tuner_plots.ipynb
google/nitroml
apache-2.0
Connect to the ML Metadata (MLMD) database First we need to connect to our MLMD database which stores the results of our benchmark runs.
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = 'localhost'
connection_config.port = 9898
store = metadata_store.MetadataStore(connection_config)
examples/visualize_tuner_plots.ipynb
google/nitroml
apache-2.0
Get trial summary data (used to plot Area under Learning Curve) stored as AugmentedTuner artifacts.
# Name of the dataset/subbenchmark
# This is used to filter out the component path.
testdata = 'ilpd'

def get_metalearning_data(meta_algorithm: str = '',
                          test_dataset: str = '',
                          multiple_runs: bool = True):
    d_list = []
    execs = store.get_executions_by_type(
        'nitroml.automl.metalearning.tuner.component.AugmentedTuner')
    model_dir_map = {}
    for tuner_exec in execs:
        run_id = tuner_exec.properties['run_id'].string_value
        pipeline_root = tuner_exec.properties['pipeline_root'].string_value
        component_id = tuner_exec.properties['component_id'].string_value
        pipeline_name = tuner_exec.properties['pipeline_name'].string_value
        if multiple_runs:
            if '.run_' not in component_id:
                continue
        if test_dataset not in component_id:
            continue
        if f'metalearning_benchmark' != pipeline_name and meta_algorithm not in pipeline_name:
            continue
        config_path = os.path.join(pipeline_root, component_id, 'trial_summary_plot', str(tuner_exec.id))
        model_dir_map[tuner_exec.id] = config_path
        d_list.append(config_path)
    return d_list

# Specify the path to tuner_dir from above
# You can get the list of tuner_dirs by calling: get_metalearning_data(multiple_runs=False)
example_plot = ''
if not example_plot:
    raise ValueError('Please specify the path to the tuner plot dir.')

with tf.io.gfile.GFile(os.path.join(example_plot, 'tuner_plot_data.txt'), mode='r') as fin:
    data = json.load(fin)

tuner_utils.display_tuner_data(data, save_plot=False)
examples/visualize_tuner_plots.ipynb
google/nitroml
apache-2.0
Majority Voting
algorithm = 'majority_voting'
d_list = get_metalearning_data(algorithm, testdata)
d_list

# Select the runs from `d_list` to visualize.
data_list = []
for d in d_list:
    with tf.io.gfile.GFile(os.path.join(d, 'tuner_plot_data.txt'), mode='r') as fin:
        data_list.append(json.load(fin))

tuner_utils.display_tuner_data_with_error_bars(data_list, save_plot=True)
examples/visualize_tuner_plots.ipynb
google/nitroml
apache-2.0
Nearest Neighbor
algorithm = 'nearest_neighbor'
d_list = get_metalearning_data(algorithm, testdata)
d_list

# Select the runs from `d_list` to visualize.
data_list = []
for d in d_list:
    with tf.io.gfile.GFile(os.path.join(d, 'tuner_plot_data.txt'), mode='r') as fin:
        data_list.append(json.load(fin))

tuner_utils.display_tuner_data_with_error_bars(data_list, save_plot=True)
examples/visualize_tuner_plots.ipynb
google/nitroml
apache-2.0
We plot both diameters in the same figure
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,3), ylim=(1.4,2)).hlines([1.85,1.65], 0, 3500, colors='r')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
medidas/04082015/estudio.datos.ipynb
darkomen/TFG
cc0-1.0
We plot the rolling mean of the samples
pd.rolling_mean(datos[columns], 50).plot(subplots=True, figsize=(12,12))
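pd.rolling_mean was removed in later pandas releases; assuming the same datos dataframe and columns list, the equivalent with the newer rolling API would be:

# rolling mean over a 50-sample window, plotted per column
datos[columns].rolling(window=50).mean().plot(subplots=True, figsize=(12, 12))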
medidas/04082015/estudio.datos.ipynb
darkomen/TFG
cc0-1.0
Comparison of Diametro X against Diametro Y to see the ratio of the filament
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
medidas/04082015/estudio.datos.ipynb
darkomen/TFG
cc0-1.0
Data filtering We assume that samples with $d_x < 0.9$ or $d_y < 0.9$ are sensor errors, so we filter them out of the measured samples.
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
medidas/04082015/estudio.datos.ipynb
darkomen/TFG
cc0-1.0
T = 0.05
L_t0=plotter(T[0])
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
T = 10
L_t1= plotter(T[1])
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit