`enumerate()` In case you also need to know the index: | for indice, valor in enumerate(mi_lista):
print('indice: {}, valor: {}'.format(indice, valor)) | _____no_output_____ | MIT | notebooks/beginner/notebooks/for_loops.ipynb | mateodif/learn-python3 |
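`enumerate()` also accepts a `start` argument when you want the counter to begin somewhere other than 0. This example is not part of the original notebook; it is a small illustrative sketch reusing the same `mi_lista` name:

```python
mi_lista = ['a', 'b', 'c']
# Start counting from 1 instead of 0
for indice, valor in enumerate(mi_lista, start=1):
    print('indice: {}, valor: {}'.format(indice, valor))
# Prints: indice: 1, valor: a ... indice: 3, valor: c
```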
Iterating dictionaries | mi_dicc = {'hacker': True, 'edad': 72, 'nombre': 'John Doe'}
for valor in mi_dicc:
print(valor)
for llave, valor in mi_dicc.items():
print('{}={}'.format(llave, valor)) | _____no_output_____ | MIT | notebooks/beginner/notebooks/for_loops.ipynb | mateodif/learn-python3 |
`range()` | for numero in range(5):
print(numero)
for numero in range(2, 5):
print(numero)
for numero in range(0, 10, 2): # the last argument is the step
print(numero) | _____no_output_____ | MIT | notebooks/beginner/notebooks/for_loops.ipynb | mateodif/learn-python3 |
Pos-Tagging & Feature Extraction. Following normalisation, we can now proceed to pos-tagging and feature extraction. Let's start with pos-tagging. POS-tagging. Part-of-speech tagging is one of the most important text analysis tasks: it classifies words into their parts of speech and labels them according to a tagset, the collection of tags used for pos tagging. Parts of speech are also known as word classes or lexical categories. The `nltk` library provides its own pre-trained `POS-tagger`. Let's see how it is used. | import pandas as pd
df0 = pd.read_csv("../../data/interim/001_normalised_keyed_reviews.csv", sep="\t", low_memory=False)
df0.head()
# For monitoring duration of pandas processes
from tqdm import tqdm, tqdm_pandas
# To avoid RuntimeError: Set changed size during iteration
tqdm.monitor_interval = 0
# Register `pandas.progress_apply` and `pandas.Series.map_apply` with `tqdm`
# (can use `tqdm_gui`, `tqdm_notebook`, optional kwargs, etc.)
tqdm.pandas(desc="Progress:")
# Now you can use `progress_apply` instead of `apply`
# and `progress_map` instead of `map`
# can also groupby:
# df.groupby(0).progress_apply(lambda x: x**2)
def convert_text_to_list(review):
return review.replace("[","").replace("]","").replace("'","").split(",")
# Convert "reviewText" field to back to list
df0['reviewText'] = df0['reviewText'].astype(str)
df0['reviewText'] = df0['reviewText'].progress_apply(lambda text: convert_text_to_list(text));
df0['reviewText'].head()
df0['reviewText'][12]
import nltk
nltk.__version__
# Split negs
def split_neg(review):
new_review = []
for token in review:
if '_' in token:
split_words = token.split("_")
new_review.append(split_words[0])
new_review.append(split_words[1])
else:
new_review.append(token)
return new_review
df0["reviewText"] = df0["reviewText"].progress_apply(lambda review: split_neg(review))
df0["reviewText"].head()
### Remove Stop Words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
def remove_stopwords(review):
return [token for token in review if not token in stop_words]
df0["reviewText"] = df0["reviewText"].progress_apply(lambda review: remove_stopwords(review))
df0["reviewText"].head() | Progress:: 100%|██████████| 582711/582711 [00:12<00:00, 48007.55it/s]
| MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
Unfortunately, this tagger, though much more accurate, takes a lot of time. To process the above data set it would need close to 3 days of running. Follow this link for more info on the tagger: https://nlp.stanford.edu/software/tagger.shtml | from nltk.tag import StanfordPOSTagger
from nltk import word_tokenize
# import os
# os.getcwd()
# Add the jar and model via their path (instead of setting environment variables):
jar = '../../models/stanford-postagger-full-2017-06-09/stanford-postagger.jar'
model = '../../models/stanford-postagger-full-2017-06-09/models/english-left3words-distsim.tagger'
pos_tagger = StanfordPOSTagger(model, jar, encoding='utf8')
def pos_tag(review):
if(len(review)>0):
return pos_tagger.tag(review)
# Example
text = pos_tagger.tag(word_tokenize("What's the airspeed of an unladen swallow ?"))
print(text)
tagged_df = pd.DataFrame(df0['reviewText'].progress_apply(lambda review: pos_tag(review)))
tagged_df.head()
# tagged_df = pd.DataFrame(df0['reviewText'].progress_apply(lambda review: nltk.pos_tag(review)))
# tagged_df.head() | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
Thankfully, `nltk` provides documentation for each tag, which can be queried using the tag, e.g., `nltk.help.upenn_tagset('RB')`, or a regular expression. `nltk` also provides a batch pos-tagging method for document pos-tagging: | tagged_df['reviewText'][8] | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
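As a quick illustration of the two features mentioned above (not part of the original notebook, shown here only as a sketch):

```python
import nltk

# Look up the documentation for a single tag
nltk.help.upenn_tagset('RB')

# A regular expression queries several related tags at once, e.g. all noun tags
nltk.help.upenn_tagset('NN.*')

# Batch pos-tagging: tag a list of already-tokenized sentences in one call
sentences = [['the', 'movie', 'was', 'great'], ['terrible', 'battery', 'life']]
print(nltk.pos_tag_sents(sentences))
```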
The list of all possible tags appears below:

| Tag | Description |
|------|------------------------------------------|
| CC | Coordinating conjunction |
| CD | Cardinal number |
| DT | Determiner |
| EX | Existential there |
| FW | Foreign word |
| IN | Preposition or subordinating conjunction |
| JJ | Adjective |
| JJR | Adjective, comparative |
| JJS | Adjective, superlative |
| LS | List item marker |
| MD | Modal |
| NN | Noun, singular or mass |
| NNS | Noun, plural |
| NNP | Proper noun, singular |
| NNPS | Proper noun, plural |
| PDT | Predeterminer |
| POS | Possessive ending |
| PRP | Personal pronoun |
| PRP* | Possessive pronoun |
| RB | Adverb |
| RBR | Adverb, comparative |
| RBS | Adverb, superlative |
| RP | Particle |
| SYM | Symbol |
| TO | to |
| UH | Interjection |
| VB | Verb, base form |
| VBD | Verb, past tense |
| VBG | Verb, gerund or present participle |
| VBN | Verb, past participle |
| VBP | Verb, non-3rd person singular present |
| VBZ | Verb, 3rd person singular present |
| WDT | Wh-determiner |
| WP | Wh-pronoun |
| WP* | Possessive wh-pronoun |
| WRB | Wh-adverb |

Notice: where you see `*` replace with `$`. | ## Join with Original Key and Persist Locally to avoid RE-processing
uniqueKey_series_df = df0[['uniqueKey']]
uniqueKey_series_df.head()
pos_tagged_keyed_reviews = pd.concat([uniqueKey_series_df, tagged_df], axis=1);
pos_tagged_keyed_reviews.head()
pos_tagged_keyed_reviews.to_csv("../data/interim/002_pos_tagged_keyed_reviews.csv", sep='\t', header=True, index=False); | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
Nouns. Nouns generally refer to people, places, things, or concepts, e.g.: woman, Scotland, book, intelligence. Nouns can appear after determiners and adjectives, and can be the subject or object of the verb. The simplified noun tags are `N` for common nouns like book, and `NP` for proper nouns like Scotland. | def noun_collector(word_tag_list):
if(len(word_tag_list)>0):
return [word for (word, tag) in word_tag_list if tag in {'NN', 'NNS', 'NNP', 'NNPS'}]
nouns_df = pd.DataFrame(tagged_df['reviewText'].progress_apply(lambda review: noun_collector(review)))
nouns_df.head()
keyed_nouns_df = pd.concat([uniqueKey_series_df, nouns_df], axis=1);
keyed_nouns_df.head()
keyed_nouns_df.to_csv("../../data/interim/002_keyed_nouns_stanford.csv", sep='\t', header=True, index=False);
## END_OF_FILE | _____no_output_____ | MIT | notebooks/test/002_pos_tagging-Copy1.ipynb | VictorQuintana91/Thesis |
CNTK 201B: Hands On Labs Image Recognition. This hands-on lab shows how to implement an image recognition task using a [convolution network][] with the CNTK v2 Python API. You will start with a basic feedforward CNN architecture to classify the CIFAR dataset, then keep adding advanced features to your network. Finally, you will implement a VGG net and a residual net similar to the one that won the ImageNet competition, but smaller in size.

[convolution network]:https://en.wikipedia.org/wiki/Convolutional_neural_network

Introduction. In this hands-on lab, you will practice the following:
* Understanding the subset of the CNTK Python API needed for an image classification task.
* Writing a custom convolution network to classify the CIFAR dataset.
* Modifying the network structure by adding:
  * a [Dropout][] layer.
  * a Batch normalization layer.
* Implementing a [VGG][] style network.
* An introduction to Residual Nets (ResNet).
* Implementing and training a [RESNET network][].

[RESNET network]:https://github.com/Microsoft/CNTK/wiki/Hands-On-Labs-Image-Recognition
[VGG]:http://www.robots.ox.ac.uk/~vgg/research/very_deep/
[Dropout]:https://en.wikipedia.org/wiki/Dropout_(neural_networks)

Prerequisites. The CNTK 201A hands-on lab, in which you download and prepare the CIFAR dataset, is a prerequisite for this lab. This tutorial depends on CNTK v2, so before starting this lab you will need to install CNTK v2. Furthermore, all the tutorials in this lab are done in Python, therefore you will need a basic knowledge of Python. The CNTK 102 lab is recommended but not a prerequisite for this tutorial. However, a basic understanding of deep learning is needed.

Dataset. You will use the CIFAR-10 dataset, from https://www.cs.toronto.edu/~kriz/cifar.html, during this tutorial. The dataset contains 50000 training images and 10000 test images, all of size 32x32x3. Each image is classified as one of 10 classes, as shown below: | # Figure 1
Image(url="https://cntk.ai/jup/201/cifar-10.png", width=500, height=500) | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
The above image is from: https://www.cs.toronto.edu/~kriz/cifar.html

Convolutional Neural Network (CNN). A Convolutional Neural Network (CNN) is a feedforward network composed of a stack of layers such that the output of one layer is fed to the next layer. (There are more complex architectures that skip layers; we will discuss one of those at the end of this lab.) Usually, a CNN starts by alternating between convolution layers and pooling layers (downsampling), and ends with fully connected layers for the classification part.

Convolution layer. A convolution layer consists of multiple 2D convolution kernels applied to the input image or to the previous layer; each convolution kernel outputs a feature map. | # Figure 2
Image(url="https://cntk.ai/jup/201/Conv2D.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
The stacked output feature maps are the input to the next layer. | # Figure 3
Image(url="https://cntk.ai/jup/201/Conv2DFeatures.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
> Gradient-Based Learning Applied to Document Recognition, Proceedings of the IEEE, 86(11):2278-2324, November 1998
> Y. LeCun, L. Bottou, Y. Bengio and P. Haffner

In CNTK: here is the [convolution][] layer in Python:

```python
def Convolution(filter_shape,        # e.g. (3,3)
                num_filters,         # e.g. 64
                activation,          # relu or None...etc.
                init,                # random initialization
                pad,                 # True or False
                strides)             # strides e.g. (1,1)
```

[convolution]:https://www.cntk.ai/pythondocs/layerref.html#convolution

Pooling layer. In most CNN vision architectures, each convolution layer is succeeded by a pooling layer, and they keep alternating until the fully connected layers. The purpose of the pooling layer is as follows:
* Reduce the dimensionality of the previous layer, which speeds up the network.
* Provide limited translation invariance.

Here is an example of max pooling with a stride of 2: | # Figure 4
Image(url="https://cntk.ai/jup/201/MaxPooling.png", width=400, height=400) | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
In CNTK: here are the [pooling][] layers in Python:

```python
# Max pooling
def MaxPooling(filter_shape,   # e.g. (3,3)
               strides,        # (2,2)
               pad)            # True or False

# Average pooling
def AveragePooling(filter_shape,   # e.g. (3,3)
                   strides,        # (2,2)
                   pad)            # True or False
```

[pooling]:https://www.cntk.ai/pythondocs/layerref.html#maxpooling-averagepooling

Dropout layer. The dropout layer takes a probability value as input; this value is called the dropout rate. Let's say the dropout rate is 0.5: what this layer does is pick 50% of the nodes from the previous layer at random and drop them out of the network. This behavior helps regularize the network.

> Dropout: A Simple Way to Prevent Neural Networks from Overfitting
> Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov

In CNTK: the dropout layer in Python:

```python
# Dropout
def Dropout(prob)   # dropout rate e.g. 0.5
```

Batch normalization (BN). Batch normalization is a way to make the input to each layer have zero mean and unit variance. BN helps the network converge faster and keeps the input of each layer centered around zero. BN has two learnable parameters, called gamma and beta, whose purpose is to let the network decide for itself whether the normalized input or the raw input works best.

> Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
> Sergey Ioffe, Christian Szegedy

In CNTK: the [batch normalization][] layer in Python:

```python
# Batch normalization
def BatchNormalization(map_rank)   # for images, map_rank=1
```

[batch normalization]:https://www.cntk.ai/pythondocs/layerref.html#batchnormalization-layernormalization-stabilizer

Microsoft Cognitive Toolkit (CNTK). CNTK is a highly flexible computation graph: each node takes tensors as inputs and produces tensors as the result of its computation. Each node is exposed in the Python API, which gives you the flexibility to create any custom graph; you can also define your own nodes in Python or C++, running on CPU, GPU or both. For deep learning, you can use the low-level API directly or you can use the CNTK layers API. We will start with the low-level API, then switch to the layers API in this lab. So let's first import the needed modules for this lab. | from __future__ import print_function
import os
import numpy as np
import matplotlib.pyplot as plt
import math
from cntk.layers import default_options, Convolution, MaxPooling, AveragePooling, Dropout, BatchNormalization, Dense, Sequential, For
from cntk.io import MinibatchSource, ImageDeserializer, StreamDef, StreamDefs
import cntk.io.transforms as xforms
from cntk.initializer import glorot_uniform, he_normal
from cntk import Trainer
from cntk.learner import momentum_sgd, learning_rate_schedule, UnitType, momentum_as_time_constant_schedule
from cntk.ops import cross_entropy_with_softmax, classification_error, relu, input_variable, softmax, element_times
from cntk.utils import *
# Figure 5
Image(url="https://cntk.ai/jup/201/CNN.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Now that we have imported the needed modules, let's implement our first CNN, as shown in Figure 5 above, using the CNTK layers API: | def create_basic_model(input, out_dims):
net = Convolution((5,5), 32, init=glorot_uniform(), activation=relu, pad=True)(input)
net = MaxPooling((3,3), strides=(2,2))(net)
net = Convolution((5,5), 32, init=glorot_uniform(), activation=relu, pad=True)(net)
net = MaxPooling((3,3), strides=(2,2))(net)
net = Convolution((5,5), 64, init=glorot_uniform(), activation=relu, pad=True)(net)
net = MaxPooling((3,3), strides=(2,2))(net)
net = Dense(64, init=glorot_uniform())(net)
net = Dense(out_dims, init=glorot_uniform(), activation=None)(net)
return net | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
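As a sanity check on the architecture (this calculation is not in the original notebook), we can reproduce the parameter count the training log reports below (116,906 parameters in 10 tensors). With `pad=True` the convolutions keep the 32x32 spatial size, and each `MaxPooling((3,3), strides=(2,2))` shrinks it to 15x15, then 7x7, then 3x3, so the last convolution outputs 64 x 3 x 3 = 576 features:

```python
# Weights + biases per layer of create_basic_model (3-channel 32x32 inputs)
conv1  = 5*5*3*32  + 32    #   2,432
conv2  = 5*5*32*32 + 32    #  25,632
conv3  = 5*5*32*64 + 64    #  51,264
dense1 = 576*64    + 64    #  36,928  (64 maps of 3x3 flattened to 576)
dense2 = 64*10     + 10    #     650
print(conv1 + conv2 + conv3 + dense1 + dense2)  # 116906
```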
To train the above model we need two things:
* Read the training images and their corresponding labels.
* Define a cost function, compute the cost for each mini-batch and update the model weights according to the cost value.

To read the data in CNTK, we will use CNTK readers, which handle data augmentation and can fetch data in parallel. Example of a map text file:

    S:\data\CIFAR-10\train\00001.png 9
    S:\data\CIFAR-10\train\00002.png 9
    S:\data\CIFAR-10\train\00003.png 4
    S:\data\CIFAR-10\train\00004.png 1
    S:\data\CIFAR-10\train\00005.png 1

| # model dimensions
image_height = 32
image_width = 32
num_channels = 3
num_classes = 10
#
# Define the reader for both training and evaluation action.
#
def create_reader(map_file, mean_file, train):
if not os.path.exists(map_file) or not os.path.exists(mean_file):
raise RuntimeError("This tutorials depends 201A tutorials, please run 201A first.")
# transformation pipeline for the features has jitter/crop only when training
transforms = []
if train:
transforms += [
xforms.crop(crop_type='randomside', side_ratio=0.8) # train uses data augmentation (translation only)
]
transforms += [
xforms.scale(width=image_width, height=image_height, channels=num_channels, interpolations='linear'),
xforms.mean(mean_file)
]
# deserializer
return MinibatchSource(ImageDeserializer(map_file, StreamDefs(
features = StreamDef(field='image', transforms=transforms), # first column in map file is referred to as 'image'
labels = StreamDef(field='label', shape=num_classes) # and second as 'label'
))) | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Now let us write the training and validation loop. | #
# Train and evaluate the network.
#
def train_and_evaluate(reader_train, reader_test, max_epochs, model_func):
# Input variables denoting the features and label data
input_var = input_variable((num_channels, image_height, image_width))
label_var = input_variable((num_classes))
# Normalize the input
feature_scale = 1.0 / 256.0
input_var_norm = element_times(feature_scale, input_var)
# apply model to input
z = model_func(input_var_norm, out_dims=10)
#
# Training action
#
# loss and metric
ce = cross_entropy_with_softmax(z, label_var)
pe = classification_error(z, label_var)
# training config
epoch_size = 50000
minibatch_size = 64
# Set training parameters
lr_per_minibatch = learning_rate_schedule([0.01]*10 + [0.003]*10 + [0.001], UnitType.minibatch, epoch_size)
momentum_time_constant = momentum_as_time_constant_schedule(-minibatch_size/np.log(0.9))
l2_reg_weight = 0.001
# trainer object
learner = momentum_sgd(z.parameters,
lr = lr_per_minibatch, momentum = momentum_time_constant,
l2_regularization_weight=l2_reg_weight)
progress_printer = ProgressPrinter(tag='Training', num_epochs=max_epochs)
trainer = Trainer(z, (ce, pe), [learner], [progress_printer])
# define mapping from reader streams to network inputs
input_map = {
input_var: reader_train.streams.features,
label_var: reader_train.streams.labels
}
log_number_of_parameters(z) ; print()
# perform model training
batch_index = 0
plot_data = {'batchindex':[], 'loss':[], 'error':[]}
for epoch in range(max_epochs): # loop over epochs
sample_count = 0
while sample_count < epoch_size: # loop over minibatches in the epoch
data = reader_train.next_minibatch(min(minibatch_size, epoch_size - sample_count), input_map=input_map) # fetch minibatch.
trainer.train_minibatch(data) # update model with it
sample_count += data[label_var].num_samples # count samples processed so far
# For visualization...
plot_data['batchindex'].append(batch_index)
plot_data['loss'].append(trainer.previous_minibatch_loss_average)
plot_data['error'].append(trainer.previous_minibatch_evaluation_average)
batch_index += 1
trainer.summarize_training_progress()
#
# Evaluation action
#
epoch_size = 10000
minibatch_size = 16
# process minibatches and evaluate the model
metric_numer = 0
metric_denom = 0
sample_count = 0
minibatch_index = 0
while sample_count < epoch_size:
current_minibatch = min(minibatch_size, epoch_size - sample_count)
# Fetch next test min batch.
data = reader_test.next_minibatch(current_minibatch, input_map=input_map)
# minibatch data to be trained with
metric_numer += trainer.test_minibatch(data) * current_minibatch
metric_denom += current_minibatch
# Keep track of the number of samples processed so far.
sample_count += data[label_var].num_samples
minibatch_index += 1
print("")
print("Final Results: Minibatch[1-{}]: errs = {:0.1f}% * {}".format(minibatch_index+1, (metric_numer*100.0)/metric_denom, metric_denom))
print("")
# Visualize training result:
window_width = 32
loss_cumsum = np.cumsum(np.insert(plot_data['loss'], 0, 0))
error_cumsum = np.cumsum(np.insert(plot_data['error'], 0, 0))
# Moving average.
plot_data['batchindex'] = np.insert(plot_data['batchindex'], 0, 0)[window_width:]
plot_data['avg_loss'] = (loss_cumsum[window_width:] - loss_cumsum[:-window_width]) / window_width
plot_data['avg_error'] = (error_cumsum[window_width:] - error_cumsum[:-window_width]) / window_width
plt.figure(1)
plt.subplot(211)
plt.plot(plot_data["batchindex"], plot_data["avg_loss"], 'b--')
plt.xlabel('Minibatch number')
plt.ylabel('Loss')
plt.title('Minibatch run vs. Training loss ')
plt.show()
plt.subplot(212)
plt.plot(plot_data["batchindex"], plot_data["avg_error"], 'r--')
plt.xlabel('Minibatch number')
plt.ylabel('Label Prediction Error')
plt.title('Minibatch run vs. Label Prediction Error ')
plt.show()
return softmax(z)
data_path = os.path.join('data', 'CIFAR-10')
reader_train = create_reader(os.path.join(data_path, 'train_map.txt'), os.path.join(data_path, 'CIFAR-10_mean.xml'), True)
reader_test = create_reader(os.path.join(data_path, 'test_map.txt'), os.path.join(data_path, 'CIFAR-10_mean.xml'), False)
pred = train_and_evaluate(reader_train, reader_test, max_epochs=5, model_func=create_basic_model) | Training 116906 parameters in 10 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.062444 * 50000, metric = 75.3% * 50000 13.316s (3754.8 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.675133 * 50000, metric = 61.7% * 50000 13.772s (3630.5 samples per second);
Finished Epoch[3 of 300]: [Training] loss = 1.520789 * 50000, metric = 55.4% * 50000 13.674s (3656.7 samples per second);
Finished Epoch[4 of 300]: [Training] loss = 1.421881 * 50000, metric = 51.4% * 50000 13.668s (3658.2 samples per second);
Finished Epoch[5 of 300]: [Training] loss = 1.338381 * 50000, metric = 48.0% * 50000 13.675s (3656.3 samples per second);
Final Results: Minibatch[1-626]: errs = 43.3% * 10000
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Although this model is very simple, it still has too much code; we can do better. Here is the same model in a more terse format: | def create_basic_model_terse(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((5,5), [32,32,64][i], init=glorot_uniform(), pad=True),
MaxPooling((3,3), strides=(2,2))
]),
Dense(64, init=glorot_uniform()),
Dense(out_dims, init=glorot_uniform(), activation=None)
])
return model(input)
pred_basic_model = train_and_evaluate(reader_train, reader_test, max_epochs=10, model_func=create_basic_model_terse) | Training 116906 parameters in 10 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.054147 * 50000, metric = 75.0% * 50000 13.674s (3656.6 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.695077 * 50000, metric = 62.6% * 50000 14.271s (3503.7 samples per second);
Finished Epoch[3 of 300]: [Training] loss = 1.542115 * 50000, metric = 56.3% * 50000 13.872s (3604.3 samples per second);
Finished Epoch[4 of 300]: [Training] loss = 1.450798 * 50000, metric = 52.3% * 50000 13.823s (3617.3 samples per second);
Finished Epoch[5 of 300]: [Training] loss = 1.373555 * 50000, metric = 49.2% * 50000 13.857s (3608.4 samples per second);
Finished Epoch[6 of 300]: [Training] loss = 1.300828 * 50000, metric = 46.6% * 50000 13.965s (3580.3 samples per second);
Finished Epoch[7 of 300]: [Training] loss = 1.232516 * 50000, metric = 43.7% * 50000 13.827s (3616.0 samples per second);
Finished Epoch[8 of 300]: [Training] loss = 1.189415 * 50000, metric = 42.0% * 50000 13.885s (3600.9 samples per second);
Finished Epoch[9 of 300]: [Training] loss = 1.134052 * 50000, metric = 39.9% * 50000 13.871s (3604.6 samples per second);
Finished Epoch[10 of 300]: [Training] loss = 1.098405 * 50000, metric = 38.9% * 50000 13.961s (3581.3 samples per second);
Final Results: Minibatch[1-626]: errs = 36.2% * 10000
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Now that we have a trained model, let's classify the following image: | # Figure 6
Image(url="https://cntk.ai/jup/201/00014.png", width=64, height=64)
import PIL
def eval(pred_op, image_path):
label_lookup = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
image_mean = 133.0
image_data = np.array(PIL.Image.open(image_path), dtype=np.float32)
image_data -= image_mean
image_data = np.ascontiguousarray(np.transpose(image_data, (2, 0, 1)))
result = np.squeeze(pred_op.eval({pred_op.arguments[0]:[image_data]}))
# Return top 3 results:
top_count = 3
result_indices = (-np.array(result)).argsort()[:top_count]
print("Top 3 predictions:")
for i in range(top_count):
print("\tLabel: {:10s}, confidence: {:.2f}%".format(label_lookup[result_indices[i]], result[result_indices[i]] * 100))
eval(pred_basic_model, "data/CIFAR-10/test/00014.png") | Top 3 predictions:
Label: truck , confidence: 98.95%
Label: ship , confidence: 0.46%
Label: automobile, confidence: 0.26%
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Adding a dropout layer, with a dropout rate of 0.25, before the last dense layer: | def create_basic_model_with_dropout(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((5,5), [32,32,64][i], init=glorot_uniform(), pad=True),
MaxPooling((3,3), strides=(2,2))
]),
Dense(64, init=glorot_uniform()),
Dropout(0.25),
Dense(out_dims, init=glorot_uniform(), activation=None)
])
return model(input)
pred_basic_model_dropout = train_and_evaluate(reader_train, reader_test, max_epochs=5, model_func=create_basic_model_with_dropout) | Training 116906 parameters in 10 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.123667 * 50000, metric = 78.7% * 50000 16.391s (3050.5 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.817045 * 50000, metric = 67.9% * 50000 16.894s (2959.5 samples per second);
Finished Epoch[3 of 300]: [Training] loss = 1.678272 * 50000, metric = 62.2% * 50000 17.006s (2940.1 samples per second);
Finished Epoch[4 of 300]: [Training] loss = 1.583182 * 50000, metric = 58.1% * 50000 16.644s (3004.1 samples per second);
Finished Epoch[5 of 300]: [Training] loss = 1.514311 * 50000, metric = 55.3% * 50000 16.790s (2977.9 samples per second);
Final Results: Minibatch[1-626]: errs = 49.2% * 10000
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Add batch normalization after each convolution and before the last dense layer: | def create_basic_model_with_batch_normalization(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((5,5), [32,32,64][i], init=glorot_uniform(), pad=True),
BatchNormalization(map_rank=1),
MaxPooling((3,3), strides=(2,2))
]),
Dense(64, init=glorot_uniform()),
BatchNormalization(map_rank=1),
Dense(out_dims, init=glorot_uniform(), activation=None)
])
return model(input)
pred_basic_model_bn = train_and_evaluate(reader_train, reader_test, max_epochs=5, model_func=create_basic_model_with_batch_normalization) | Training 117290 parameters in 18 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 1.512835 * 50000, metric = 54.1% * 50000 15.499s (3226.1 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.206524 * 50000, metric = 42.8% * 50000 16.071s (3111.2 samples per second);
Finished Epoch[3 of 300]: [Training] loss = 1.087695 * 50000, metric = 38.3% * 50000 16.160s (3094.1 samples per second);
Finished Epoch[4 of 300]: [Training] loss = 1.008182 * 50000, metric = 35.4% * 50000 16.057s (3113.8 samples per second);
Finished Epoch[5 of 300]: [Training] loss = 0.953168 * 50000, metric = 33.4% * 50000 16.247s (3077.4 samples per second);
Final Results: Minibatch[1-626]: errs = 30.8% * 10000
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Let's implement a VGG-inspired network using the layers API. Here is the architecture:

| VGG9 |
| ------------- |
| conv3-64 |
| conv3-64 |
| max3 |
| |
| conv3-96 |
| conv3-96 |
| max3 |
| |
| conv3-128 |
| conv3-128 |
| max3 |
| |
| FC-1024 |
| FC-1024 |
| |
| FC-10 |

| def create_vgg9_model(input, out_dims):
with default_options(activation=relu):
model = Sequential([
For(range(3), lambda i: [
Convolution((3,3), [64,96,128][i], init=glorot_uniform(), pad=True),
Convolution((3,3), [64,96,128][i], init=glorot_uniform(), pad=True),
MaxPooling((3,3), strides=(2,2))
]),
For(range(2), lambda : [
Dense(1024, init=glorot_uniform())
]),
Dense(out_dims, init=glorot_uniform(), activation=None)
])
return model(input)
pred_vgg = train_and_evaluate(reader_train, reader_test, max_epochs=5, model_func=create_vgg9_model) | Training 2675978 parameters in 18 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 2.253115 * 50000, metric = 83.6% * 50000 46.007s (1086.8 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.931100 * 50000, metric = 71.8% * 50000 46.236s (1081.4 samples per second);
Finished Epoch[3 of 300]: [Training] loss = 1.706618 * 50000, metric = 63.3% * 50000 46.271s (1080.6 samples per second);
Finished Epoch[4 of 300]: [Training] loss = 1.576171 * 50000, metric = 58.1% * 50000 46.348s (1078.8 samples per second);
Finished Epoch[5 of 300]: [Training] loss = 1.473403 * 50000, metric = 53.7% * 50000 46.386s (1077.9 samples per second);
Final Results: Minibatch[1-626]: errs = 51.2% * 10000
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Residual Network (ResNet). One of the main problems of a deep neural network is how to propagate the error all the way back to the first layer. For a deep network, the gradient keeps getting smaller until it has no effect on the network weights. [ResNet](https://arxiv.org/abs/1512.03385) was designed to overcome this problem by defining a block with an identity path, as shown below: | # Figure 7
Image(url="https://cntk.ai/jup/201/ResNetBlock2.png") | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
The idea of the above block is twofold:
* During back propagation the gradient has a path that doesn't affect its magnitude.
* The network needs to learn only the residual mapping (the delta to x).

So let's implement the ResNet blocks using CNTK:

            ResNetNode                      ResNetNodeInc
                |                                 |
         +------+------+                +---------+---------+
         |             |                |                   |
         V             |                V                   V
    +----------+       |         +--------------+   +----------------+
    | Conv, BN |       |         | Conv x 2, BN |   | SubSample, BN  |
    +----------+       |         +--------------+   +----------------+
         |             |                |                   |
         V             |                V                   |
     +-------+         |            +-------+               |
     | ReLU  |         |            | ReLU  |               |
     +-------+         |            +-------+               |
         |             |                |                   |
         V             |                V                   |
    +----------+       |           +----------+             |
    | Conv, BN |       |           | Conv, BN |             |
    +----------+       |           +----------+             |
         |             |                |                   |
         |    +---+    |                |      +---+        |
         +--->| + |<---+                +----->| + |<-------+
              +---+                            +---+
                |                                |
                V                                V
            +-------+                        +-------+
            | ReLU  |                        | ReLU  |
            +-------+                        +-------+
                |                                |
                V                                V

| from cntk.ops import combine, times, element_times, AVG_POOLING
def convolution_bn(input, filter_size, num_filters, strides=(1,1), init=he_normal(), activation=relu):
if activation is None:
activation = lambda x: x
r = Convolution(filter_size, num_filters, strides=strides, init=init, activation=None, pad=True, bias=False)(input)
r = BatchNormalization(map_rank=1)(r)
r = activation(r)
return r
def resnet_basic(input, num_filters):
c1 = convolution_bn(input, (3,3), num_filters)
c2 = convolution_bn(c1, (3,3), num_filters, activation=None)
p = c2 + input
return relu(p)
def resnet_basic_inc(input, num_filters):
c1 = convolution_bn(input, (3,3), num_filters, strides=(2,2))
c2 = convolution_bn(c1, (3,3), num_filters, activation=None)
s = convolution_bn(input, (1,1), num_filters, strides=(2,2), activation=None)
p = c2 + s
return relu(p)
def resnet_basic_stack(input, num_filters, num_stack):
assert (num_stack > 0)
r = input
for _ in range(num_stack):
r = resnet_basic(r, num_filters)
return r | _____no_output_____ | RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
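To connect the code back to the diagram (this note is not in the original notebook): `resnet_basic` computes $y = \mathrm{ReLU}(F(x) + x)$, where $F$ is the two-convolution branch (`c2`) and $x$ is the untouched input. Because $\partial y / \partial x$ contains an identity term from the skip connection, the gradient reaching earlier layers does not have to shrink through every convolution, which is exactly the "path that doesn't affect its magnitude" mentioned above. In `resnet_basic_inc` the input is downsampled by a strided 1x1 convolution (`s`) so that its shape matches the strided branch before the addition.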
Let's write the full model: | def create_resnet_model(input, out_dims):
conv = convolution_bn(input, (3,3), 16)
r1_1 = resnet_basic_stack(conv, 16, 3)
r2_1 = resnet_basic_inc(r1_1, 32)
r2_2 = resnet_basic_stack(r2_1, 32, 2)
r3_1 = resnet_basic_inc(r2_2, 64)
r3_2 = resnet_basic_stack(r3_1, 64, 2)
# Global average pooling
pool = AveragePooling(filter_shape=(8,8), strides=(1,1))(r3_2)
net = Dense(out_dims, init=he_normal(), activation=None)(pool)
return net
pred_resnet = train_and_evaluate(reader_train, reader_test, max_epochs=5, model_func=create_resnet_model) | Training 272474 parameters in 65 parameter tensors.
Finished Epoch[1 of 300]: [Training] loss = 1.859668 * 50000, metric = 69.3% * 50000 47.499s (1052.7 samples per second);
Finished Epoch[2 of 300]: [Training] loss = 1.583096 * 50000, metric = 58.7% * 50000 48.541s (1030.0 samples per second);
Finished Epoch[3 of 300]: [Training] loss = 1.453993 * 50000, metric = 53.4% * 50000 48.982s (1020.8 samples per second);
Finished Epoch[4 of 300]: [Training] loss = 1.347815 * 50000, metric = 49.2% * 50000 48.704s (1026.6 samples per second);
Finished Epoch[5 of 300]: [Training] loss = 1.269185 * 50000, metric = 45.8% * 50000 48.155s (1038.3 samples per second);
Final Results: Minibatch[1-626]: errs = 44.6% * 10000
| RSA-MD | Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb | StillKeepTry/CNTK |
Objective
* 20190815:
  * Given stock returns for the last N days, we predict returns for the next H days (days N+1 to N+H), where H is the forecast horizon
  * We use double exponential smoothing to predict | %matplotlib inline
import math
import matplotlib
import numpy as np
import pandas as pd
import seaborn as sns
import time
from collections import defaultdict
from datetime import date, datetime, time, timedelta
from matplotlib import pyplot as plt
from pylab import rcParams
from sklearn.metrics import mean_squared_error
from tqdm import tqdm_notebook
#### Input params ##################
stk_path = "./data/VTI_20130102_20181231.csv"
H = 21
train_size = 252*3 # Use 3 years of data as train set. Note there are about 252 trading days in a year
val_size = 252 # Use 1 year of data as validation set
# alpha - smoothing coeff
alphaMax = 0.999
alphaMin = 0.001
alphaStep = 0.001
# beta - trend coeff
betaMax = 0.999
betaMin = 0.001
betaStep = 0.001
fontsize = 14
ticklabelsize = 14
####################################
train_val_size = train_size + val_size # Size of train+validation set
print("No. of days in train+validation set = " + str(train_val_size))
print("We will start forecasting on day %d" % (train_val_size+1)) | We will start forecasting on day 1009
| Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Common functions | def get_smape(y_true, y_pred):
"""
Compute symmetric mean absolute percentage error
"""
y_true, y_pred = np.array(y_true), np.array(y_pred)
return 100/len(y_true) * np.sum(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))
def get_mape(y_true, y_pred):
"""
Compute mean absolute percentage error (MAPE)
"""
y_true, y_pred = np.array(y_true), np.array(y_pred)
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def get_mae(a, b):
"""
Comp mean absolute error e_t = E[|a_t - b_t|]. a and b can be lists.
Returns a vector of len = len(a) = len(b)
"""
return np.mean(abs(np.array(a)-np.array(b)))
def get_rmse(a, b):
"""
Comp RMSE. a and b can be lists.
Returns a scalar.
"""
return math.sqrt(np.mean((np.array(a)-np.array(b))**2))
def double_exponential_smoothing(series, H, alpha=0.5, beta=0.5, return_all=False):
"""
Given a series and alpha, return series of smoothed points
Initialization:
S_1 = y_1,
b_1 = y_2 - y_1,
F_1 = 0, F_2 = y_1
level, S_t = alpha*y_t + (1-alpha)*(S_t-1 + b_t-1)
trend, b_t = beta*(S_t - S_t-1) + (1-beta)*b_t-1
forecast, F_t+1 = S_t + b_t
forecast, F_t+m = S_t + m*b_t
result[len(series)] is the estimate of series[len(series)]
Inputs
series: series to forecast
H : forecast horizon
alpha : smoothing constant.
When alpha is close to 1, dampening is quick.
When alpha is close to 0, dampening is slow
beta : smoothing constant for trend
return_all : if 1 return both original series + predictions, if 0 return predictions only
Outputs
the predictions of length H
"""
result = [0, series[0]]
for n in range(1, len(series)+H-1):
if n == 1:
level, trend = series[0], series[1] - series[0]
if n >= len(series): # we are forecasting
m = n - len(series) + 2
result.append(level + m*trend) # result[len(series)+1] is the estimate of series[len(series)+1]
else:
value = series[n]
last_level, level = level, alpha*value + (1-alpha)*(level+trend)
trend = beta*(level-last_level) + (1-beta)*trend
result.append(level+trend)
# e.g. result[2] uses series[1]
# ie. result[2] is the estimate of series[2]
# e.g. result[len(series)] uses series[len(series)-1]
# ie. result[len(series)] is the estimate of series[len(series)]
if return_all == True:
return result
else:
return result[len(series):len(series)+H]
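# Hand-traced example (not part of the original notebook), useful as a sanity check:
#   double_exponential_smoothing([1, 2, 4], H=2, alpha=0.5, beta=0.5)
#   n=1: init level=1, trend=2-1=1; then value=2 -> level=0.5*2+0.5*(1+1)=2.0,
#        trend=0.5*(2.0-1)+0.5*1=1.0, one-step estimate=3.0
#   n=2: value=4 -> level=0.5*4+0.5*(2.0+1.0)=3.5, trend=0.5*(3.5-2.0)+0.5*1.0=1.25, estimate=4.75
#   n=3: forecasting, m=2 -> 3.5 + 2*1.25 = 6.0
#   returned H=2 forecasts: [4.75, 6.0]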
def get_error_metrics(series, train_size, H, alpha, beta):
"""
Given a series consisting of both train+validation, do predictions of forecast horizon H on the validation set,
at H/2 intervals.
Inputs
series : series to forecast, with length = (train_size + val_size)
train_size : length of series to use as train ie. train set is series[:train_size]
H : forecast horizon
Outputs
mean of rmse, mean of mape, mean of mae
"""
# Predict using single exponential smoothing, and compute error metrics also
rmse = [] # root mean square error
mape = [] # mean absolute percentage error
mae = [] # mean absolute error
smape = [] # symmetric mean absolute percentage error
preds_dict = {}
for i in range(train_size, len(series)-H, int(H/2)):
preds_list = double_exponential_smoothing(series[i-train_size:i], H, alpha, beta)
rmse.append(get_rmse(series[i:i+H], preds_list))
mape.append(get_mape(series[i:i+H], preds_list))
mae.append(get_mae(series[i:i+H], preds_list))
smape.append(get_smape(series[i:i+H], preds_list))
preds_dict[i] = preds_list
return np.mean(rmse), np.mean(mape), np.mean(mae), np.mean(smape), preds_dict
def hyperpram_tune_alpha_beta(series, train_size, H):
"""
Given a series, tune hyperparameters alpha and beta, fit and predict
Inputs
series : series to forecast, with length = (train_size + val_size)
train_size : length of series to use as train ie. train set is series[:train_size]
H : forecast horizon
Outputs
optimum hyperparameters, error metrics dataframe
"""
err_dict = defaultdict(list)
alpha = alphaMin
while alpha <= alphaMax:
# reset beta at the start of every alpha iteration so the full (alpha, beta) grid is searched
beta = betaMin
while beta <= betaMax:
rmse_mean, mape_mean, mae_mean, smape_mean, _ = get_error_metrics(series, train_size, H, alpha, beta)
# Append alpha and beta
err_dict['alpha'].append(alpha)
err_dict['beta'].append(beta)
# Compute error metrics
err_dict['rmse'].append(rmse_mean)
err_dict['mape'].append(mape_mean)
err_dict['mae'].append(mae_mean)
err_dict['smape'].append(smape_mean)
# Increase beta by one step
beta = beta + betaStep
# Increase alpha by one step
alpha = alpha + alphaStep
# Convert to dataframe
err_df = pd.DataFrame(err_dict)
# Get min RMSE
rmse_min = err_df['rmse'].min()
return err_df[err_df['rmse'] == rmse_min]['alpha'].values[0], err_df[err_df['rmse'] == rmse_min]['beta'].values[0], err_df | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Load data | df = pd.read_csv(stk_path, sep = ",")
# Convert Date column to datetime
df.loc[:, 'Date'] = pd.to_datetime(df['Date'],format='%Y-%m-%d')
# Change all column headings to be lower case, and remove spacing
df.columns = [str(x).lower().replace(' ', '_') for x in df.columns]
# Sort by datetime
df.sort_values(by='date', inplace=True, ascending=True)
df.head(10)
df['date'].min(), df['date'].max()
# Plot adjusted close over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='adj_close', style='b-', grid=True)
ax.set_xlabel("date")
ax.set_ylabel("USD") | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Get Stock Returns | df['returns'] = df['adj_close'].pct_change() * 100
df.loc[0, 'returns'] = 0 # set the first value of returns to be 0 for simplicity
df.head()
# Plot returns over time
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='returns', style='b-', grid=True)
ax.set_xlabel("date")
ax.set_ylabel("returns")
# Plot distribution of returns
plt.figure(figsize=(12, 8), dpi=80)
ax = sns.distplot(df['returns'][1:])
ax.grid()
ax.set_xlabel('daily returns', fontsize = 14)
ax.set_ylabel("probability density function", fontsize = 14)
matplotlib.rcParams.update({'font.size': 14})
| _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon) and a specific date | i = train_val_size # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# Predict
preds_list = double_exponential_smoothing(df['returns'][i-train_val_size:i].values, H)
print("For forecast horizon %d, predicting on day %d, date %s, the RMSE is %f" % (H, i, df['date'][i], get_rmse(df[i:i+H]['returns'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the MAPE is %f" % (H, i, df['date'][i], get_mape(df[i:i+H]['returns'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the SMAPE is %f" % (H, i, df['date'][i], get_smape(df[i:i+H]['returns'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the MAE is %f" % (H, i, df['date'][i], get_mae(df[i:i+H]['returns'], preds_list)))
# Plot the predictions
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = df.plot(x='date', y='returns', style='bx-', grid=True)
# Plot the predictions
ax.plot(df['date'][i:i+H], preds_list, marker='x')
ax.set_xlabel("date")
# ax.set_ylabel("daily returns")
ax.legend(['daily returns', 'predictions'])
# ax.set_ylim([105, 120])
ax.set_xlim([date(2016, 11, 1), date(2017, 2, 28)]) | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon) and a specific date, with hyperparameter tuning - alpha, beta | i = train_val_size # Predict for day i, for the next H-1 days. Note indexing of days start from 0.
print("Predicting on day %d, date %s, with forecast horizon H = %d" % (i, df.iloc[i]['date'], H))
# Get optimum hyperparams
alpha_opt, beta_opt, err_df = hyperpram_tune_alpha_beta(df['returns'][i-train_val_size:i].values, train_size, H)
print("alpha_opt = " + str(alpha_opt))
print("beta_opt = " + str(beta_opt))
# print("rmse opt = " + str(err_df[(err_df['alpha']==alpha_opt) & (err_df['beta']==beta_opt)]['rmse'].values[0]))
print(err_df[(err_df['alpha']==alpha_opt) & (err_df['beta']==beta_opt)])
err_df
# Predict
preds_list = double_exponential_smoothing(df['returns'][i-train_val_size:i].values, H, alpha_opt, beta_opt)
print("For forecast horizon %d, predicting on day %d, date %s, the RMSE is %f" % (H, i, df['date'][i], get_rmse(df[i:i+H]['returns'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the MAPE is %f" % (H, i, df['date'][i], get_mape(df[i:i+H]['returns'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the SMAPE is %f" % (H, i, df['date'][i], get_smape(df[i:i+H]['returns'], preds_list)))
print("For forecast horizon %d, predicting on day %d, date %s, the MAE is %f" % (H, i, df['date'][i], get_mae(df[i:i+H]['returns'], preds_list)))
# Plot the predictions
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
matplotlib.rcParams.update({'font.size': 14})
ax = df.plot(x='date', y='returns', style='bx-', grid=True)
# Plot the predictions
ax.plot(df['date'][i:i+H], preds_list, marker='x')
ax.set_xlabel("date")
# ax.set_ylabel("USD")
ax.legend(['returns', 'predictions'])
# ax.set_ylim([105, 120])
ax.set_xlim([date(2016, 11, 1), date(2017, 2, 28)]) | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon), and various dates, using model trained in previous step | print("alpha_opt = " + str(alpha_opt))
print("beta_opt = " + str(beta_opt))
# Predict and compute error metrics also
rmse = [] # root mean square error
mape = [] # mean absolute percentage error
mae = [] # mean absolute error
smape = [] # symmetric mean absolute percentage error
preds_dict = {}
i_list = range(train_val_size, train_val_size+84*5+42+1, 42)
for i in i_list:
print("Predicting on day %d, date %s" % (i, df.iloc[i]['date']))
preds_list = double_exponential_smoothing(df['returns'][i-train_val_size:i].values, H, alpha_opt, beta_opt)
# Collect the predictions
preds_dict[i] = preds_list
# Compute error metrics
rmse.append(get_rmse(df[i:i+H]['returns'], preds_list))
mape.append(get_mape(df[i:i+H]['returns'], preds_list))
mae.append(get_mae(df[i:i+H]['returns'], preds_list))
smape.append(get_smape(df[i:i+H]['returns'], preds_list))
print("Altogether we made %d forecasts, each of length %d days" % (len(rmse), H))
print("For forecast horizon %d, the mean RMSE is %f" % (H, np.mean(rmse)))
print("For forecast horizon %d, the mean MAPE is %f" % (H, np.mean(mape)))
print("For forecast horizon %d, the mean SMAPE is %f" % (H, np.mean(smape)))
print("For forecast horizon %d, the mean MAE is %f" % (H, np.mean(mae)))
results_final_no_tune = pd.DataFrame({'day': i_list,
'alpha_opt': [alpha_opt]*len(i_list),
'beta_opt': [beta_opt]*len(i_list),
'rmse': rmse,
'mape': mape,
'mae': mae,
'smape': smape})
results_final_no_tune
# Plot the predictions, and zoom in
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='returns', style='b-', grid=True)
# Plot the predictions
for key in preds_dict:
ax.plot(df['date'][key:key+H], preds_dict[key])
ax.set_xlabel("date")
# ax.set_ylabel("USD")
ax.legend(['returns', 'predictions'])
# ax.set_ylim([105, 150])
ax.set_xlim([date(2017, 1, 1), date(2018, 12, 31)]) | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Predict for a specific H (forecast horizon), and various dates, tuning model for every prediction | # Predict and compute error metrics also
preds_dict = {}
results_final = defaultdict(list)
i_list = range(train_val_size, train_val_size+84*5+42+1, 42)
for i in i_list:
print("Predicting on day %d, date %s" % (i, df.iloc[i]['date']))
# Get optimum hyperparams
alpha_opt, beta_opt, err_df = hyperpram_tune_alpha_beta(df['returns'][i-train_val_size:i].values, train_size, H)
preds_list = double_exponential_smoothing(df['returns'][i-train_val_size:i].values, H, alpha_opt, beta_opt)
# Collect the predictions
preds_dict[i] = preds_list
# Compute error metrics
results_final['rmse'].append(get_rmse(df[i:i+H]['returns'], preds_list))
results_final['mape'].append(get_mape(df[i:i+H]['returns'], preds_list))
results_final['mae'].append(get_mae(df[i:i+H]['returns'], preds_list))
results_final['smape'].append(get_smape(df[i:i+H]['returns'], preds_list))
results_final['alpha_opt'].append(alpha_opt)
results_final['beta_opt'].append(beta_opt)
results_final = pd.DataFrame(results_final)
print("Altogether we made %d forecasts, each of length %d days" % (len(rmse), H))
print("For forecast horizon %d, the mean RMSE is %f" % (H, results_final['rmse'].mean()))
print("For forecast horizon %d, the mean MAPE is %f" % (H, results_final['mape'].mean()))
print("For forecast horizon %d, the mean SMAPE is %f" % (H, results_final['smape'].mean()))
print("For forecast horizon %d, the mean MAE is %f" % (H, results_final['mae'].mean()))
# results
results_final
# Plot the predictions, and zoom in
rcParams['figure.figsize'] = 10, 8 # width 10, height 8
ax = df.plot(x='date', y='returns', style='b-', grid=True)
# Plot the predictions
for key in preds_dict:
ax.plot(df['date'][key:key+H], preds_dict[key])
ax.set_xlabel("date")
# ax.set_ylabel("USD")
ax.legend(['returns', 'predictions'])
# ax.set_ylim([105, 150])
ax.set_xlim([date(2017, 1, 1), date(2018, 12, 31)])
# Plot scatter plot of actual values vs. predictions
for key in preds_dict:
plt.plot(df['returns'][key:key+H], preds_dict[key], 'x')
plt.plot(range(-3, 4, 1), range(-3, 4, 1), 'b-')
plt.xlabel('returns')
plt.ylabel('predictions')
plt.grid() | _____no_output_____ | Apache-2.0 | StockReturnsPrediction_fh21/StockReturnsPrediction_v2_DExpSmoothing.ipynb | clairvoyant/Stocks |
Notebook generates plots of activation functions. Figures generated include:
- Fig. 1a
- Supp Fig. 7 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-5,5,50)
x_neg = np.linspace(-5,0,50)
x_pos = np.linspace(0, 5,50)
x_elu = np.concatenate([x_neg, x_pos])
elu = np.concatenate([.5*(np.exp(x_neg)-1), x_pos])
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
plt.plot(x, np.exp(x), linewidth=2, alpha=0.7)
plt.plot(x, np.maximum(x,0), linewidth=2, alpha=0.7)
plt.plot(x, 1/(1+np.exp(-x)), linewidth=2, alpha=0.7)
plt.plot(x, np.tanh(x), linewidth=2, alpha=0.7)
plt.plot(x, np.log(1+np.exp(x)), linewidth=2, alpha=0.7)
plt.plot(x, x, linewidth=2, alpha=0.7)
plt.plot(x_elu, elu, linewidth=2, alpha=0.7)
plt.xticks([-4, -2, 0, 2, 4], fontsize=14)
plt.yticks([-4, -2, 0, 2, 4], fontsize=14)
plt.plot([-5,5], [0, 0], 'k', alpha=0.1)
plt.plot([-0,0], [-5, 5], 'k', alpha=0.1)
plt.axis('tight')
ax.set_ybound([-5, 5])
ax.set_xbound([-5, 5])
plt.xlabel('x', fontsize=14)
plt.ylabel('f(x)', fontsize=14)
plt.legend(['Exp', 'Relu', 'Sigmoid', 'Tanh', 'Softplus', 'Linear', 'Elu'], frameon=False, fontsize=12)
outfile = os.path.join('../results', 'activations.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
x = np.linspace(-5,5,20)
fig = plt.figure(figsize=(5,5))
ax = plt.subplot()
plt.plot(x, np.exp(x), linewidth=2)
plt.plot(x, np.maximum((x-.2)**3,0), linewidth=2)
plt.plot(x, 1/(1+np.exp(-(x-8)))*4000, linewidth=2)
plt.plot(x, np.tanh(x-5.0)*500+500, linewidth=2)
plt.xticks([-4, -2, 0, 2, 4], fontsize=14)
plt.yticks([0, 50, 100, 150], fontsize=14)
plt.plot([-5,5], [0, 0], 'k', alpha=0.1)
plt.plot([-0,0], [-5, 200], 'k', alpha=0.1)
ax.set_ybound([-5, 100])
ax.set_xbound([-5, 5])
plt.xlabel('x', fontsize=14)
plt.ylabel('f(x)', fontsize=14)
plt.legend(['Exp', 'Modified-Relu', 'Modified-Sigmoid', 'Modified-Tanh'], frameon=False, fontsize=12)
outfile = os.path.join('../results', 'modified-activations_zoom.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') | _____no_output_____ | MIT | code/plot_activation_functions.ipynb | p-koo/exponential_activations |
Developing an AI application. Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.

In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories; you can see a few examples below.

The project is broken down into multiple steps:
* Load and preprocess the image dataset
* Train the image classifier on your dataset
* Use the trained classifier to predict image content

We'll lead you through each part, which you'll implement in Python. When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.

First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here. Please make sure if you are running this notebook in the workspace that you have chosen GPU rather than CPU mode. | # Imports here
import numpy as np
import torch
import data_utils
import train_f as train
from utils import get_saved_model, get_device, get_checkpoints_path, evaluate_model
import predict_f as predict
import matplotlib.pyplot as plt | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Load the data. Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts: training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize, leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.

The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.

The pre-trained networks you'll use were trained on the ImageNet dataset, where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1. | data_dir = 'flowers'
train_dir = data_dir + '/train'
valid_dir = data_dir + '/valid'
test_dir = data_dir + '/test'
# TODO: Define your transforms for the training, validation, and testing sets
dataloaders, image_datasets, data_transforms = data_utils.get_data(data_dir, train_dir, valid_dir, test_dir) | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
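The actual pipeline lives in `data_utils.get_data`, which is not shown in this notebook. As a hedged sketch of what such a pipeline typically looks like with `torchvision` (the exact transforms and batch size in `data_utils` may differ), using the normalization values quoted above:

```python
import torch
from torchvision import datasets, transforms

means, stds = [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]

# Training: random augmentation, then normalize
train_tf = transforms.Compose([
    transforms.RandomRotation(30),
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(means, stds),
])

# Validation / test: deterministic resize + center crop
eval_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(means, stds),
])

train_ds = datasets.ImageFolder(train_dir, transform=train_tf)
valid_ds = datasets.ImageFolder(valid_dir, transform=eval_tf)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=64, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_ds, batch_size=64)
```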
Label mapping. You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers. | import json
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f) | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Building and training the classifier. Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.

We're going to leave this part up to you. Refer to [the rubric](https://review.udacity.com/!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (if you need a starting point, the VGG networks work great and are straightforward to use)
* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
* Train the classifier layers using backpropagation, using the pre-trained network to get the features
* Track the loss and accuracy on the validation set to determine the best hyperparameters

We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!

When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc.) to find the best model. Save those hyperparameters to use as default values in the next part of the project.

One last important tip if you're using the workspace to run your code: to avoid having your workspace disconnect during the long-running tasks in this notebook, please read the earlier page in this lesson called Intro to GPU Workspaces about Keeping Your Session Active. You'll want to include code from the workspace_utils.py module. | # arch = 'resnet34'
# arch = 'inception_v3' -> Expected tensor for argument #1 'input' to have the same dimension as tensor for 'result'; but 4 does not equal 2 (while checking arguments for cudnn_convolution)
# arch = 'densenet161'
arch = 'vgg16'
train.train(data_dir, cat_to_name, './', max_epochs=1, arch=arch) | - Loaded model from a checkpoint -
- Input features: - 25088
- Continuing training from a previous state -
- Starting from epoch: 0
- End epoch: 1
- Training ... -
Epoch: 1/1 ... Steps: 50 ... Train loss: 1.3225 ... Train accuracy: 65%
Epoch: 1/1 ... Steps: 100 ... Train loss: 1.3011 ... Train accuracy: 66%
Evaluating epoch 1 ...
- Calculating accuracy and loss ... -
Validation loss: 0.0261 ... Validation accuracy: 65 %
Saving model
Path: ./state.pt
Training complete
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
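The `train.train` helper called above is part of the project repository and is not shown in this notebook. As a rough, hedged sketch of the steps it is assumed to perform (a VGG16 backbone, matching the "Input features: 25088" line above; the classifier sizes and learning rate are illustrative only):
```python
import torch
from torch import nn, optim
from torchvision import models

model = models.vgg16(pretrained=True)
for param in model.parameters():
    param.requires_grad = False            # freeze the pre-trained feature extractor

model.classifier = nn.Sequential(          # new, untrained feed-forward classifier
    nn.Linear(25088, 512), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(512, 102), nn.LogSoftmax(dim=1))

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)  # update classifier weights only

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
for images, labels in dataloaders['train']:  # one training pass; repeat per epoch and validate
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```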
Testing your network
It's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well. | model = get_saved_model(arch=arch)
model.to(get_device())
model.eval()
acc, _ = evaluate_model(dataloaders['test'], model)
print('Accuracy on the test dataset: %d %%' % (acc)) | - Loaded model from a checkpoint -
- Input features: - 25088
- Calculating accuracy and loss ... -
Accuracy on the test dataset: 65 %
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Save the checkpoint
Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
```python
model.class_to_idx = image_datasets['train'].class_to_idx
```
Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now. | # See utils.save_checkpoint | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
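`utils.save_checkpoint` is defined outside this notebook; a possible shape for it, covering the pieces the text above says a checkpoint should carry (the function name and keys are assumptions):
```python
import torch

def save_checkpoint(model, optimizer, image_datasets, epoch, path='checkpoint.pth'):
    # attach the class-to-index mapping so inference can recover class labels later
    model.class_to_idx = image_datasets['train'].class_to_idx
    torch.save({'classifier': model.classifier,
                'state_dict': model.state_dict(),
                'class_to_idx': model.class_to_idx,
                'optimizer_state': optimizer.state_dict(),
                'epoch': epoch}, path)
```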
Loading the checkpoint
At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network. | # See utils.get_saved_model | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
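And the matching load step that `utils.get_saved_model` is assumed to perform (again only a sketch):
```python
import torch
from torchvision import models

def load_checkpoint(path='checkpoint.pth'):
    checkpoint = torch.load(path, map_location='cpu')
    model = models.vgg16(pretrained=True)        # rebuild the same architecture
    model.classifier = checkpoint['classifier']  # restore the custom classifier head
    model.load_state_dict(checkpoint['state_dict'])
    model.class_to_idx = checkpoint['class_to_idx']
    return model
```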
Inference for classification
Now you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
```
First you'll need to handle processing the input image such that it can be used in your network.
Image Preprocessing
You'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training.
First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.htmlPIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.htmlPIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image.
Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.
As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation.
And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions. | # See predict.process_image | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
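`predict.process_image` itself is not shown in this notebook; a hedged sketch that follows the steps described above (resize the shortest side to 256, center-crop to 224x224, scale to 0-1, normalize, move the color channel first):
```python
from PIL import Image
import numpy as np

def process_image(image_path):
    img = Image.open(image_path)
    # resize so the shortest side is 256 pixels, keeping the aspect ratio
    w, h = img.size
    if w < h:
        img = img.resize((256, int(256 * h / w)))
    else:
        img = img.resize((int(256 * w / h), 256))
    # crop out the center 224x224 portion
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    # 0-255 integers -> 0-1 floats, then normalize with the ImageNet statistics
    np_image = np.array(img) / 255.0
    np_image = (np_image - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    # color channel first for PyTorch
    return np_image.transpose((2, 0, 1))
```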
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions). | def imshow(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension
# but matplotlib assumes is the third dimension
image = image.numpy().transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
#img = process_image('./flowers/valid/59/image_05034.jpg')
#imshow(torch.from_numpy(img).float()) | _____no_output_____ | MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Class Prediction
Once you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.
To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.htmltorch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.
Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.
```python
probs, classes = predict(image_path, model)
print(probs)
print(classes)
> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]
> ['70', '3', '45', '62', '55']
``` |
predict.predict('./flowers/valid/59/image_05034.jpg', get_saved_model(arch=arch)) | - Loaded model from a checkpoint -
- Input features: - 25088
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
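For context, `predict.predict` presumably follows the top-$K$ recipe described above. A minimal sketch (it reuses the `process_image` idea sketched earlier and assumes the model stores `class_to_idx`):
```python
import torch

def predict(image_path, model, topk=5):
    model.eval()
    device = next(model.parameters()).device
    img = torch.from_numpy(process_image(image_path)).float().unsqueeze(0).to(device)
    with torch.no_grad():
        probs, indices = torch.exp(model(img)).topk(topk, dim=1)  # model outputs log-probabilities
    # invert class_to_idx so model output indices map back to class labels
    idx_to_class = {v: k for k, v in model.class_to_idx.items()}
    classes = [idx_to_class[i] for i in indices.squeeze().tolist()]
    return probs.squeeze().tolist(), classes
```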
Sanity Checking
Now that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:
You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above. | # TODO: Display an image along with the top 5 classes
model = get_saved_model(arch=arch)
model.eval()
checkpoint = torch.load(get_checkpoints_path(), map_location=str(get_device()))
img_path = './flowers/valid/76/image_02458.jpg'
real_class = checkpoint['cat_to_name'].get(str(img_path.split('/')[3]))
print(real_class)
img = predict.process_image(img_path)
imshow(torch.from_numpy(img).float())
probs, classes = predict.predict(img_path, model)
idx_to_class = {v: k for k, v in checkpoint['class_to_idx'].items()}
categories = [checkpoint['cat_to_name'].get(str(idx_to_class.get(x.item()))) for x in classes]
fig, ax = plt.subplots()
ax.barh(categories, probs, align='center')
plt.show()
| - Loaded model from a checkpoint -
- Input features: - 25088
morning glory
| MIT | Image Classifier Project.ipynb | stoykostanchev/aipnd-project |
Generating basic metrics with Spark
> This example explains the process of continuously improving the most intuitive approach to building basic metrics, rather than just following a fixed template. Since this is the first example, the complexity of the metrics is kept low: the service open date is 2020/10/25 and the metrics are aggregated as of 2020/10/26.
* Reading the raw data as-is
* Using the DataFrame API
* Using spark.sql
* Hands-on extraction of basic metrics (DAU, PU)
* Adding a date filter
* Applying the date filter at the data source
* Hands-on extraction of basic metrics (ARPU, ARPPU)
* Fetching a scalar value and using it in the next query
* A simple way to compute a cumulative amount
* Building a dimension table for the service open date
* Handling null values
* Saving the generated DataFrame
* Loading the previous day's data
* A simple way to build summary metrics
* Using a fact table | from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark = SparkSession \
.builder \
.appName("Data Engineer Basic Day3") \
.config("spark.dataengineer.basic.day3", "tutorial-1") \
.getOrCreate()
spark.read.option("inferSchema", "true").option("header", "true").json("access/20201026") \
.withColumn("gmt_time", expr("from_unixtime(a_time, 'yyyy-MM-dd HH:mm:ss')")) \
    .withColumn("localtime", expr("from_utc_timestamp(from_unixtime(a_time), 'Asia/Seoul')")) \
.show()
# from_utc_timestamp(from_unixtime(epoch_time), tz_name) as local_time
# spark.conf.unset("spark.sql.session.timeZone")
spark.conf.get("spark.sql.session.timeZone") # 'Etc/UTC' => this was the cause ... the initial TimeZone setting was not configured properly
spark.conf.set("spark.sql.session.timeZone", "Asia/Seoul")
spark.conf.get("spark.sql.session.timeZone")
spark.read.option("inferSchema", "true").option("header", "true").json("access/20201026") \
.withColumn("gmt_time", expr("from_unixtime(a_time, 'yyyy-MM-dd HH:mm:ss')")) \
    .withColumn("localtime", expr("from_utc_timestamp(from_unixtime(a_time), 'Asia/Seoul')")) \
.show()
sc = spark.sparkContext
spark.read.option("inferSchema", "true").option("header", "true").parquet("user/20201025").createOrReplaceTempView("user")
pWhere=""
spark.read.option("inferSchema", "true").option("header", "true").parquet("purchase/20201025").withColumn("p_time", expr("from_unixtime(p_time)")).createOrReplaceTempView("purchase")
aWhere=""
spark.read.option("inferSchema", "true").option("header", "true").json("access/20201026").withColumn("a_time", expr("from_unixtime(a_time)")).createOrReplaceTempView("access")
spark.sql("desc user").show()
spark.sql("desc purchase").show()
spark.sql("desc access").show() | +--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
| u_id| int| null|
| u_name| string| null|
|u_gender| string| null|
|u_signup| int| null|
+--------+---------+-------+
+--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
| p_time| string| null|
| p_uid| int| null|
| p_id| int| null|
| p_name| string| null|
| p_amount| int| null|
+--------+---------+-------+
+-----------+---------+-------+
| col_name|data_type|comment|
+-----------+---------+-------+
| a_id| string| null|
| a_tag| string| null|
| a_time| string| null|
|a_timestamp| string| null|
| a_uid| string| null|
+-----------+---------+-------+
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 1. Using the given data, compute the DAU and PU as of 2020/10/25
* DAU: Daily Active User, the number of daily active users - the count of unique a_uid values from log_access
* PU: Purchase User, the number of daily purchasing users - the count of unique p_uid values from tbl_purchase
> Before computing the values, create temp views with [createOrReplaceTempView](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=createorreplacepyspark.sql.DataFrame.createOrReplaceTempView) so that Spark SQL can be used instead of the Spark API | # DAU - access
spark.sql("select a_time as a_time, a_uid from access").show()
dau = spark.sql("select count(distinct a_uid) as DAU from access where a_time >= '2020-10-25 00:00:00' and a_time < '2020-10-26 00:00:00'")
dau.show()
# PU - purchase
spark.sql("select p_time, p_uid from purchase").show()
pu = spark.sql("select count(distinct p_uid) as PU from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'")
pu.show()
v_dau = dau.collect()[0]["DAU"]
v_pu = pu.collect()[0]["PU"] | _____no_output_____ | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 2. Using the given data, compute the ARPU and ARPPU as of 2020/10/25
* ARPU: Average Revenue Per User - the day's total revenue (Total Purchase Amount) / the number of users active that day (DAU)
* ARPPU: Average Revenue Per Purchase User - the day's total revenue (Total Purchase Amount) / the number of purchasing users that day (PU) | # ARPU - total purchase amount, dau
query="select sum(p_amount) / {} from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'".format(v_dau)
print(query)
total_purchase_amount = spark.sql("select sum(p_amount) as total_purchase_amount from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'")
total_purchase_amount.show()
spark.sql("select sum(p_amount) / 5 from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'").show()
spark.sql("select sum(p_amount) / {} as ARPU from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00'".format(v_dau)).show()
# ARPPU - total purchase amount, pu
v_amt = total_purchase_amount.collect()[0]["total_purchase_amount"]
print("| ARPPU | {} |".format(v_amt / v_pu)) | | ARPPU | 3000000.0 |
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 3. Using the given data, compute the "cumulative purchase amount" and "cumulative number of active users" as of 2020/10/26
* Cumulative purchase amount: 10/25 (open) ~ now - read the entire log and sum the purchase amounts - or accumulate per-user purchase data in storage and reuse it
* Cumulative active users: 10/25 (open) ~ now - read the entire log and count the distinct users - or accumulate per-user access data in storage and reuse it | # cumulative purchase amount
spark.sql("select sum(p_amount) from purchase ").show()
# cumulative number of active users
spark.sql("select count(distinct a_uid) from access").show() | +-------------+
|sum(p_amount)|
+-------------+
| 16700000|
+-------------+
+---------------------+
|count(DISTINCT a_uid)|
+---------------------+
| 9|
+---------------------+
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 4. Design and create a dimension table that accumulates per-user information

User Dimension table design

| Column name | Column type | Column description |
| :- | :-: | :- |
| d_uid | int | user id |
| d_name | string | customer name |
| d_pamount | int | cumulative purchase amount |
| d_pcount | int | cumulative purchase count |
| d_acount | int | cumulative access count | | # For the opening day we exceptionally write a separate program
#
# 1. Access logs have the largest number of records, so extract each user's access count for the day
#    The number of 'login' events is treated as the access count - rows with only a logout mean the login was lost or belongs to the previous day, so those cases are excluded
spark.sql("describe access").show()
spark.sql("select * from access where a_id = 'login' and a_time >= '2020-10-25 00:00:00' and a_time < '2020-10-26 00:00:00'").show()
uids = spark.sql("select a_uid, count(a_uid) as acount from access where a_id = 'login' and a_time >= '2020-10-25 00:00:00' and a_time < '2020-10-26 00:00:00' group by a_uid")
uids.show()
# 2. Extract each user's total purchase amount and purchase count for the day
spark.sql("describe purchase").show()
amts = spark.sql("select p_uid, sum(p_amount) as pamount, count(p_uid) as pcount from purchase where p_time >= '2020-10-25 00:00:00' and p_time < '2020-10-26 00:00:00' group by p_uid")
amts.show()
# 3. Per-user access count + total purchase amount + purchase count (uids + amts)
uids.printSchema()
amts.printSchema()
dim1 = uids.join(amts, uids["a_uid"] == amts["p_uid"], how="left").sort(uids["a_uid"].asc())
dim2 = dim1.withColumnRenamed("a_uid", "d_uid") \
.withColumnRenamed("acount", "d_acount") \
.drop("p_uid") \
.withColumnRenamed("pamount", "d_pamount") \
.withColumnRenamed("pcount", "d_pcount")
dim2.show()
# 4. Attach the user profile information
user = spark.sql("select * from user")
user.show()
dim3 = dim2.join(user, dim2["d_uid"] == user["u_id"], "left")
dim4 = dim3.withColumnRenamed("u_name", "d_name") \
.withColumnRenamed("u_gender", "d_gender")
dim5 = dim4.select("d_uid", "d_name", "d_gender", "d_acount", "d_pamount", "d_pcount")
dimension = dim5.na.fill({"d_pamount":0, "d_pcount":0})
dimension.show()
# 4. Save the data under a per-date path so it can be used the next day
# - ./users/dt=20201025/
target="./users/dt=20201025"
dimension.write.mode("overwrite").parquet(target) | _____no_output_____ | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 5. Use the previous day's dimension data to build cumulative access and revenue metrics | # Keep and reuse each customer's state as of the previous day
yesterday = spark.read.parquet(target)
yesterday.sort(yesterday["d_uid"].asc()).show()
# 5. On the next day, build the same metrics, but accumulated on top of the previous day's data
# Build a full data set that contains both the customers already in the table and today's new customers
yesterday.show()
# Since new users must be added, extract only the uid values covering the full population (note: accs, today's login counts, is computed in a later cell)
uid = yesterday.select("d_uid").join(accs.select("a_uid"), yesterday.d_uid == accs.a_uid, "full_outer") \
.withColumn("uid", when(yesterday.d_uid.isNull(), accs.a_uid).otherwise(yesterday.d_uid)) \
.select("uid")
uid.show()
# Join the name and gender onto each uid
user.show()
dim1 = uid.join(user, uid.uid == user.u_id).select(uid.uid, user.u_name, user.u_gender)
dim1.show()
# Join yesterday's dimension to bring in the cumulative access count, cumulative purchase amount and cumulative purchase count
print("dim2")
dim2 = dim1.join(yesterday, dim1.uid == yesterday.d_uid, "left") \
.select(dim1.uid, dim1.u_name, dim1.u_gender, yesterday.d_acount, yesterday.d_pamount, yesterday.d_pcount) \
.na.fill({"d_acount":0, "d_pamount":0, "d_pcount":0})
dim2.show()
# 6. Add today's access counts, revenue and purchase counts
accs = spark.sql("select a_uid, count(a_uid) as acount from access where a_id = 'login' and a_time >= '2020-10-26 00:00:00' and a_time < '2020-10-27 00:00:00' group by a_uid")
accs.show()
print("dim3")
dim3 = dim2.join(accs, dim2.uid == accs.a_uid, "left") \
.withColumn("total_amount", dim2.d_acount + when(accs.acount.isNull(), 0).otherwise(accs.acount)) \
.select("uid", "u_name", "u_gender", "total_amount", "d_pamount", "d_pcount") \
.withColumnRenamed("total_amount", "d_acount")
dim3.show()
# Add the revenue generated today
dim3.show()
amts = spark.sql("select p_uid, sum(p_amount) as pamount, count(p_uid) as pcount from purchase where p_time >= '2020-10-26 00:00:00' and p_time < '2020-10-27 00:00:00' group by p_uid")
amts.show()
print("dim4")
dim4 = dim3.join(amts, dim3.uid == amts.p_uid, "left") \
.withColumn("total_pamount", dim3.d_pamount + when(amts.pamount.isNull(), 0).otherwise(amts.pamount)) \
.withColumn("total_pcount", dim3.d_acount + when(amts.pcount.isNull(), 0).otherwise(amts.pcount)) \
.drop("d_pamount", "d_pcount") \
.withColumnRenamed("uid", "d_uid") \
.withColumnRenamed("u_name", "d_name") \
.withColumnRenamed("u_gender", "d_gender") \
.withColumnRenamed("total_pamount", "d_pamount") \
.withColumnRenamed("total_pcount", "d_pcount") \
.select("d_uid", "d_name", "d_gender", "d_acount", "d_pamount", "d_pcount")
dimension = dim4.sort(dim4.d_uid.asc()).coalesce(1)
dimension.show()
# 7. Save the generated dimension under the 20201026 path
target="./users/dt=20201026"
dimension.write.mode("overwrite").parquet(target) | _____no_output_____ | Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
Task 6. Write practice examples for inner, left_outer, right_outer and full_outer joins | valuesA = [('A',1),('B',2),('C',3),('D',4)]
A = spark.createDataFrame(valuesA,['a_id','a_value'])
valuesB = [('C',10),('D',20),('E',30),('F',40)]
B = spark.createDataFrame(valuesB,['b_id','b_value'])
A.join(B, A.a_id == B.b_id, "inner").sort(A.a_id.asc()).show() # C, D
# A.join(B, A.a_id == B.b_id, "left").sort(A.a_id.asc()).show() # A, B, C, D
# A.join(B, A.a_id == B.b_id, "right").sort(B.b_id.asc()).show() # C, D, E, F
A.join(B, A.a_id == B.b_id, "left_outer").sort(A.a_id.asc()).show() # A, B, C, D
A.join(B, A.a_id == B.b_id, "right_outer").sort(B.b_id.asc()).show() # C, D, E, F
A.join(B, A.a_id == B.b_id, "full_outer").sort(A.a_id.asc_nulls_last(), B.b_id.asc_nulls_last()).show() # A, B, C, D, E, F
# Building the result of a full outer join
A.join(B, A.a_id == B.b_id, "full_outer").withColumn("id", expr("case when a_id is null then b_id else a_id end")).select("id").sort("id").show()
# F.when(df.age > 4, 1).when(df.age < 3, -1).otherwise(0)
A.join(B, A.a_id == B.b_id, "full_outer").show()
A.join(B, A.a_id == B.b_id, "full_outer").withColumn("id", when(A.a_id.isNull(), B.b_id).otherwise(A.a_id)).select("id").sort("id").show() | +---+
| id|
+---+
| A|
| B|
| C|
| D|
| E|
| F|
+---+
| Apache-2.0 | day1/notebooks/.ipynb_checkpoints/lgde_basic-checkpoint.ipynb | sw-woo/data-engineer-basic-training |
AI for Medicine Course 1 Week 1 lecture exercises

Counting Labels
As you saw in the lecture videos, one way to avoid having class imbalance impact the loss function is to weight the losses differently. To choose the weights, you first need to calculate the class frequencies.
For this exercise, you'll just get the count of each label. Later on, you'll use the concepts practiced here to calculate frequencies in the assignment! | # Import the necessary packages
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Read csv file containing training datadata
train_df = pd.read_csv("nih/train-small.csv")
# Count up the number of instances of each class (drop non-class columns from the counts)
class_counts = train_df.sum().drop(['Image','PatientId'])
for column in class_counts.keys():
print(f"The class {column} has {train_df[column].sum()} samples")
# Plot up the distribution of counts
sns.barplot(class_counts.values, class_counts.index, color='b')
plt.title('Distribution of Classes for Training Dataset', fontsize=15)
plt.xlabel('Number of Patients', fontsize=15)
plt.ylabel('Diseases', fontsize=15)
plt.show() | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
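As a small preview of how these counts turn into the frequencies used for weighting later on (a sketch building on the `class_counts` and `train_df` variables above, not part of the original exercise):
```python
# fraction of positive labels per class; the negative frequency is its complement
freq_pos = class_counts.values / len(train_df)
freq_neg = 1 - freq_pos
# the weighted loss balances contributions by swapping the frequencies
pos_weights, neg_weights = freq_neg, freq_pos
```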
Weighted Loss function
Below is an example of calculating weighted loss. In the assignment, you will calculate a weighted loss function. This sample code will give you some intuition for what the weighted loss function is doing, and also help you practice some syntax you will use in the graded assignment.
For this example, you'll first define a hypothetical set of true labels and then a set of predictions.
Run the next cell to create the 'ground truth' labels. | # Generate an array of 4 binary label values, 3 positive and 1 negative
y_true = np.array(
[[1],
[1],
[1],
[0]])
print(f"y_true: \n{y_true}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Two models
To better understand the loss function, you will pretend that you have two models.
- Model 1 always outputs a 0.9 for any example that it's given.
- Model 2 always outputs a 0.1 for any example that it's given. | # Make model predictions that are always 0.9 for all examples
y_pred_1 = 0.9 * np.ones(y_true.shape)
print(f"y_pred_1: \n{y_pred_1}")
print()
y_pred_2 = 0.1 * np.ones(y_true.shape)
print(f"y_pred_2: \n{y_pred_2}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Problems with the regular loss function
The learning goal here is to notice that with a regular loss function (not a weighted loss), the model that always outputs 0.9 has a smaller loss (performs better) than model 2.
- This is because there is a class imbalance, where 3 out of the 4 labels are 1.
- If the data were perfectly balanced, (two labels were 1, and two labels were 0), model 1 and model 2 would have the same loss. Each would get two examples correct and two examples incorrect.
- However, since the data is not balanced, the regular loss function implies that model 1 is better than model 2.
Notice the shortcomings of a regular non-weighted loss
See what loss you get from these two models (model 1 always predicts 0.9, and model 2 always predicts 0.1), see what the regular (unweighted) loss function is for each model. | loss_reg_1 = -1 * np.sum(y_true * np.log(y_pred_1)) + \
-1 * np.sum((1 - y_true) * np.log(1 - y_pred_1))
print(f"loss_reg_1: {loss_reg_1:.4f}")
loss_reg_2 = -1 * np.sum(y_true * np.log(y_pred_2)) + \
-1 * np.sum((1 - y_true) * np.log(1 - y_pred_2))
print(f"loss_reg_2: {loss_reg_2:.4f}")
print(f"When the model 1 always predicts 0.9, the regular loss is {loss_reg_1:.4f}")
print(f"When the model 2 always predicts 0.1, the regular loss is {loss_reg_2:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Notice that the loss function gives a greater loss when the predictions are always 0.1, because the data is imbalanced, and has three labels of `1` but only one label for `0`.
Given a class imbalance with more positive labels, the regular loss function implies that the model with the higher prediction of 0.9 performs better than the model with the lower prediction of 0.1.
How a weighted loss treats both models the same
With a weighted loss function, you will get the same weighted loss when the predictions are all 0.9 versus when the predictions are all 0.1.
- Notice how a prediction of 0.9 is 0.1 away from the positive label of 1.
- Also notice how a prediction of 0.1 is 0.1 away from the negative label of 0.
- So model 1 and 2 are "symmetric" along the midpoint of 0.5, if you plot them on a number line between 0 and 1.
Weighted Loss Equation
Calculate the loss for the zero-th label (column at index 0)
- The loss is made up of two terms. To make it easier to read the code, you will calculate each of these terms separately. We are giving each of these two terms a name for explanatory purposes, but these are not officially called $loss_{pos}$ or $loss_{neg}$
 - $loss_{pos}$: we'll use this to refer to the loss where the actual label is positive (the positive examples).
 - $loss_{neg}$: we'll use this to refer to the loss where the actual label is negative (the negative examples).

$$ loss^{(i)} = loss_{pos}^{(i)} + loss_{neg}^{(i)} $$

$$loss_{pos}^{(i)} = -1 \times weight_{pos}^{(i)} \times y^{(i)} \times log(\hat{y}^{(i)})$$

$$loss_{neg}^{(i)} = -1 \times weight_{neg}^{(i)} \times (1- y^{(i)}) \times log(1 - \hat{y}^{(i)})$$

Since this sample dataset is small enough, you can calculate the positive weight to be used in the weighted loss function. To get the positive weight, count how many NEGATIVE labels are present, divided by the total number of examples.
In this case, there is one negative label, and four total examples.
Similarly, the negative weight is the fraction of positive labels.
Run the next cell to define positive and negative weights. | # calculate the positive weight as the fraction of negative labels
w_p = 1/4
# calculate the negative weight as the fraction of positive labels
w_n = 3/4
print(f"positive weight w_p: {w_p}")
print(f"negative weight w_n {w_n}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Model 1 weighted lossRun the next two cells to calculate the two loss terms separately.Here, `loss_1_pos` and `loss_1_neg` are calculated using the `y_pred_1` predictions. | # Calculate and print out the first term in the loss function, which we are calling 'loss_pos'
loss_1_pos = -1 * np.sum(w_p * y_true * np.log(y_pred_1 ))
print(f"loss_1_pos: {loss_1_pos:.4f}")
# Calculate and print out the second term in the loss function, which we're calling 'loss_neg'
loss_1_neg = -1 * np.sum(w_n * (1 - y_true) * np.log(1 - y_pred_1 ))
print(f"loss_1_neg: {loss_1_neg:.4f}")
# Sum positive and negative losses to calculate total loss
loss_1 = loss_1_pos + loss_1_neg
print(f"loss_1: {loss_1:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Model 2 weighted lossNow do the same calculations for when the predictions are from `y_pred_2'. Calculate the two terms of the weighted loss function and add them together. | # Calculate and print out the first term in the loss function, which we are calling 'loss_pos'
loss_2_pos = -1 * np.sum(w_p * y_true * np.log(y_pred_2))
print(f"loss_2_pos: {loss_2_pos:.4f}")
# Calculate and print out the second term in the loss function, which we're calling 'loss_neg'
loss_2_neg = -1 * np.sum(w_n * (1 - y_true) * np.log(1 - y_pred_2))
print(f"loss_2_neg: {loss_2_neg:.4f}")
# Sum positive and negative losses to calculate total loss when the prediction is y_pred_2
loss_2 = loss_2_pos + loss_2_neg
print(f"loss_2: {loss_2:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Compare model 1 and model 2 weighted loss | print(f"When the model always predicts 0.9, the total loss is {loss_1:.4f}")
print(f"When the model always predicts 0.1, the total loss is {loss_2:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
What do you notice?Since you used a weighted loss, the calculated loss is the same whether the model always predicts 0.9 or always predicts 0.1. You may have also noticed that when you calculate each term of the weighted loss separately, there is a bit of symmetry when comparing between the two sets of predictions. | print(f"loss_1_pos: {loss_1_pos:.4f} \t loss_1_neg: {loss_1_neg:.4f}")
print()
print(f"loss_2_pos: {loss_2_pos:.4f} \t loss_2_neg: {loss_2_neg:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Even though there is a class imbalance, where there are 3 positive labels but only one negative label, the weighted loss accounts for this by giving more weight to the negative label than to the positive label. Weighted Loss for more than one classIn this week's assignment, you will calculate the multi-class weighted loss (when there is more than one disease class that your model is learning to predict). Here, you can practice working with 2D numpy arrays, which will help you implement the multi-class weighted loss in the graded assignment.You will work with a dataset that has two disease classes (two columns) | # View the labels (true values) that you will practice with
y_true = np.array(
[[1,0],
[1,0],
[1,0],
[1,0],
[0,1]
])
y_true | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Choosing axis=0 or axis=1
You will use `numpy.sum` to count the number of times column `0` has the value 0. First, notice the difference when you set axis=0 versus axis=1 | # See what happens when you set axis=0
print(f"using axis = 0 {np.sum(y_true,axis=0)}")
# Compare this to what happens when you set axis=1
print(f"using axis = 1 {np.sum(y_true,axis=1)}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Notice that if you choose `axis=0`, the sum is taken for each of the two columns. This is what you want to do in this case. If you set `axis=1`, the sum is taken for each row.
Calculate the weights
Previously, you visually inspected the data to calculate the fraction of negative and positive labels. Here, you can do this programmatically. | # set the positive weights as the fraction of negative labels (0) for each class (each column)
w_p = np.sum(y_true == 0,axis=0) / y_true.shape[0]
w_p
# set the negative weights as the fraction of positive labels (1) for each class
w_n = np.sum(y_true == 1, axis=0) / y_true.shape[0]
w_n | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
In the assignment, you will train a model to try and make useful predictions. In order to make this example easier to follow, you will pretend that your model always predicts the same value for every example. | # Set model predictions where all predictions are the same
y_pred = np.ones(y_true.shape)
y_pred[:,0] = 0.3 * y_pred[:,0]
y_pred[:,1] = 0.7 * y_pred[:,1]
y_pred | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
As before, calculate the two terms that make up the loss function. Notice that you are working with more than one class (represented by columns). In this case, there are two classes.
Start by calculating the loss for class `0`.

$$ loss^{(i)} = loss_{pos}^{(i)} + loss_{neg}^{(i)} $$

$$loss_{pos}^{(i)} = -1 \times weight_{pos}^{(i)} \times y^{(i)} \times log(\hat{y}^{(i)})$$

$$loss_{neg}^{(i)} = -1 \times weight_{neg}^{(i)} \times (1- y^{(i)}) \times log(1 - \hat{y}^{(i)})$$

View the zero column for the weights, true values, and predictions that you will use to calculate the loss from the positive predictions. | # Print and view column zero of the weight
print(f"w_p[0]: {w_p[0]}")
print(f"y_true[:,0]: {y_true[:,0]}")
print(f"y_pred[:,0]: {y_pred[:,0]}")
# calculate the loss from the positive predictions, for class 0
loss_0_pos = -1 * np.sum(w_p[0] *
y_true[:, 0] *
np.log(y_pred[:, 0])
)
print(f"loss_0_pos: {loss_0_pos:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
View the zero column for the weights, true values, and predictions that you will use to calculate the loss from the negative predictions. | # Print and view column zero of the weight
print(f"w_n[0]: {w_n[0]}")
print(f"y_true[:,0]: {y_true[:,0]}")
print(f"y_pred[:,0]: {y_pred[:,0]}")
# Calculate the loss from the negative predictions, for class 0
loss_0_neg = -1 * np.sum(
w_n[0] *
(1 - y_true[:, 0]) *
np.log(1 - y_pred[:, 0])
)
print(f"loss_0_neg: {loss_0_neg:.4f}")
# add the two loss terms to get the total loss for class 0
loss_0 = loss_0_neg + loss_0_pos
print(f"loss_0: {loss_0:.4f}") | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Now you are familiar with the array slicing that you would use when there are multiple disease classes stored in a two-dimensional array. Now it's your turn!
* Can you calculate the loss for class (column) `1`? | # calculate the loss from the positive predictions, for class 1
loss_1_pos = None | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Expected output
```CPP
loss_1_pos: 0.2853
``` | # Calculate the loss from the negative predictions, for class 1
loss_1_neg = None | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
Expected output
```CPP
loss_1_neg: 0.9632
``` | # add the two loss terms to get the total loss for class 0
loss_1 = None | _____no_output_____ | MIT | Week1/Counting Labels and weight loss function.ipynb | Armos05/Ai-for-Medical-Diagnosis-Specialization |
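For reference, one possible solution for class `1`, using exactly the same slicing pattern as class `0` (it reproduces the expected outputs above):
```python
# loss from the positive predictions, for class 1
loss_1_pos = -1 * np.sum(w_p[1] * y_true[:, 1] * np.log(y_pred[:, 1]))
# loss from the negative predictions, for class 1
loss_1_neg = -1 * np.sum(w_n[1] * (1 - y_true[:, 1]) * np.log(1 - y_pred[:, 1]))
# total loss for class 1
loss_1 = loss_1_pos + loss_1_neg
print(f"loss_1_pos: {loss_1_pos:.4f}  loss_1_neg: {loss_1_neg:.4f}  loss_1: {loss_1:.4f}")
```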
___ ___ NumPy Exercises - Solutions
Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.
Import NumPy as np | import numpy as np | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 10 zeros | np.zeros(10) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 10 ones | np.ones(10) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 10 fives | np.ones(10) * 5 | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of the integers from 10 to 50 | np.arange(10,51) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of all the even integers from 10 to 50 | np.arange(10,51,2) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create a 3x3 matrix with values ranging from 0 to 8 | np.arange(9).reshape(3,3) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create a 3x3 identity matrix | np.eye(3) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Use NumPy to generate a random number between 0 and 1 | np.random.rand(1) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution | np.random.randn(25) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create the following matrix: | np.arange(1,101).reshape(10,10) / 100 | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Create an array of 20 linearly spaced points between 0 and 1: | np.linspace(0,1,20) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs: | mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3,1:2]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[4,:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3:5,:] | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Now do the following Get the sum of all the values in mat | np.sum(mat) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Get the standard deviation of the values in mat | np.std(mat) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Get the sum of all the columns in mat | mat.sum(axis=0) | _____no_output_____ | Apache-2.0 | 1-NumPy/Numpy Exercises - Solutions.ipynb | BhavyaSree/pythonForDataScience |
Opioids VA - Nolan Reilly | import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
opiodsva = pd.read_csv('OpidsVA.csv') #importing data
opiodsva.head() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
Do opioid overdoses tend to be associated with less affluent areas—that is, areas where families have lower incomes? | plt.scatter(opiodsva['MedianHouseholdIncome'], opiodsva['FPOO-Rate'])
plt.xlabel('Median Household Income ($)')
plt.ylabel('Opiod Overdoses')
plt.suptitle("Median Household Income vs Opiod Overdoses")
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
I used a scatterplot because it can easily show the relationship between 2 variables based on the grouping of the data points. It appears that as median household income rises, expected overdoses go down. Some people who start with opioid addictions are reported to transition to heroin use. What is the relationship in Virginia counties between opioid overdoses and heroin overdoses? | plt.scatter(opiodsva['FFHO-Rate'], opiodsva['FPOO-Rate'])
plt.xlabel('Fentanyl/Heroin Overdoses')
plt.ylabel('Opiod Overdoses')
plt.suptitle('VA Opiod Overdoes vs Fentanyl Overdoses')
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
There is a relationship, with high fentanyl/heroin overdoses increasing the number of opioid overdoses. The relationship is not as strong as I expected; I would like to see the reporting methods.
Presidents
Which states are associated with the greatest number of United States presidents in terms of the presidents’ birthplaces? | presidents = pd.read_csv('presidents.csv')
presidents.head()
presidents['State'].value_counts().plot(kind = 'bar')
plt.xlabel('State')
plt.ylabel('Presidents born')
plt.suptitle('Presidential Birthplaces')
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
A bar chart appropriately shows the values for each state. Virginia and Ohio are the two most common states for a US president to be born in.
Total NSA
How have vehicle sales in the United States varied over time? | cars = pd.read_csv('TOTALNSA.csv')
cars.head()
plt.plot(cars['DATE'], cars['TOTALNSA'])
plt.xlabel('Date')
plt.ylabel('Car Sales')
plt.suptitle('Monthly US Car Sales Since Jan 1 1970')
plt.xticks(cars['DATE'])
plt.show() | _____no_output_____ | CC0-1.0 | graphing/BasicGraphAssignment.ipynb | nolanpreilly/nolanpreilly.github.io |
--- | # Read the CSV file from the Resources folder into a Pandas DataFrame
loans_df = pd.read_csv(Path('Resources/lending_data.csv'))
# Review the DataFrame
display(loans_df.head())
display(loans_df.tail())
# Separate the data into labels and features
# Separate the y variable, the labels
y = loans_df['loan_status']
# Separate the X variable, the features
X = loans_df.drop(columns=['loan_status'])
# Review the y variable Series
display(y.head())
display(y.tail())
# Review the X variable DataFrame
display(X.head())
display(X.tail())
# Check the balance of our target values
y.value_counts()
# Import the train_test_learn module
from sklearn.model_selection import train_test_split
# Split the data using train_test_split
# Assign a random_state of 1 to the function
train_X, test_X, train_y, test_y = train_test_split(X, y, random_state=1) | _____no_output_____ | MIT | credit_risk_resampling.ipynb | talibkateeb/Logistic-Regression-Credit-Risk-Analysis |
--- | # Import the LogisticRegression module from SKLearn
from sklearn.linear_model import LogisticRegression
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
logistic_regression_model = LogisticRegression(random_state=1)
# Fit the model using training data
logistic_regression_model.fit(train_X, train_y)
# Make a prediction using the testing data
testing_predictions = logistic_regression_model.predict(test_X)
# Print the balanced_accuracy score of the model
balanced_accuracy_score(test_y, testing_predictions)
# Generate a confusion matrix for the model
confusion_matrix(test_y, testing_predictions)
# Print the classification report for the model
print(classification_report_imbalanced(test_y, testing_predictions)) | pre rec spe f1 geo iba sup
0 1.00 0.99 0.91 1.00 0.95 0.91 18765
1 0.85 0.91 0.99 0.88 0.95 0.90 619
avg / total 0.99 0.99 0.91 0.99 0.95 0.91 19384
| MIT | credit_risk_resampling.ipynb | talibkateeb/Logistic-Regression-Credit-Risk-Analysis |
**Question:** How well does the logistic regression model predict both the `0` (healthy loan) and `1` (high-risk loan) labels?
**Answer:** The model appears to predict both of them really well. It predicts the healthy loans almost perfectly and the high-risk loans a little less accurately, but still very well. Both classes have high precision and recall as well as high F-1 scores: the healthy loan class is perfect on two of the three metrics and scores 0.99 on recall, while the high-risk loan class has 0.85 precision, 0.91 recall, and a 0.88 F-1 score. However, because of the class imbalance we cannot be sure this holds in general; the results may be skewed by the low value counts of the high-risk loans. | # Import the RandomOverSampler module from imbalanced-learn
from imblearn.over_sampling import RandomOverSampler
# Instantiate the random oversampler model
# Assign a random_state parameter of 1 to the model
random_oversampler = RandomOverSampler(random_state=1)
# Fit the original training data to the random_oversampler model
X_resampled, y_resampled = random_oversampler.fit_resample(train_X, train_y)
# Count the distinct values of the resampled labels data
y_resampled.value_counts()
# Instantiate the Logistic Regression model
# Assign a random_state parameter of 1 to the model
new_logistic_regression_model = LogisticRegression(random_state=1)
# Fit the model using the resampled training data
new_logistic_regression_model.fit(X_resampled, y_resampled)
# Make a prediction using the testing data
oversampled_predictions = new_logistic_regression_model.predict(test_X)
# Print the balanced_accuracy score of the model
balanced_accuracy_score(test_y, oversampled_predictions)
# Generate a confusion matrix for the model
confusion_matrix(test_y, oversampled_predictions)
# Print the classification report for the model
print(classification_report_imbalanced(test_y, oversampled_predictions)) | pre rec spe f1 geo iba sup
0 1.00 0.99 0.99 1.00 0.99 0.99 18765
1 0.84 0.99 0.99 0.91 0.99 0.99 619
avg / total 0.99 0.99 0.99 0.99 0.99 0.99 19384
| MIT | credit_risk_resampling.ipynb | talibkateeb/Logistic-Regression-Credit-Risk-Analysis |
Introduction to Python
> presented by Loïc Messal

Introduction to control flow

Tests
They allow statements to be executed under certain conditions. | age = 17
if age < 18:
    print("Mineur") # executed if and only if the condition is true
age = 19
if age < 18:
    print("Mineur") # executed if and only if the condition is true
else:
    print("Majeur") # executed if and only if the condition is false
employeur = "JLR"
# employeur = "Jakarto"
# employeur = "Une autre entreprise"
# Comment or uncomment the values of the employeur variable to test.
if employeur == "JLR":
    # executed if and only if the condition employeur == "JLR" is true
richesse_statut = "riche"
elif employeur == "Jakarto":
    # executed if and only if the condition employeur == "Jakarto" is true
    # and no previous condition has been met
richesse_statut = "ça s'en vient bientôt"
else:
    # executed if and only if no previous condition has been met
richesse_statut = "probablement pas"
print("Richesse d'un employé de {} : {}".format(employeur, richesse_statut)) | Richesse d'un employé de JLR : riche
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
Loops
Loops allow you to iterate over iterables (objects made up of several elements). | un_iterable = []
un_iterable.append({"nom": "Messal", "prénom": "Loïc", "employeur": "Jakarto", "age": 23})
un_iterable.append({"nom": "Lassem", "prénom": "Ciol", "employeur": "Otrakaj", "age": 17})
un_iterable.append({"nom": "Alssem", "prénom": "Icol", "employeur": "Torakaj", "age": 20})
un_iterable
for item in un_iterable:
print("{} {} travaille chez {}.".format(item["prénom"], item["nom"], item["employeur"])) | Loïc Messal travaille chez Jakarto.
Ciol Lassem travaille chez Otrakaj.
Icol Alssem travaille chez Torakaj.
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
It is possible to generate sequences with the `range()` function. | for compteur in range(5): # range(5) generates a sequence from 0 to 5 (excluded)
print(compteur)
for compteur in range(1, 5+1): # range(1, 5+1) generates a sequence from 1 to 5+1 (excluded)
print(compteur)
for index in range(len(un_iterable)):
    item = un_iterable[index] # access the item from its index
print("Item {} : {} {} travaille chez {}.".format(index, item["prénom"], item["nom"], item["employeur"]))
for index, item in enumerate(un_iterable): # enumerate lets you iterate while getting both the index AND the item
print("Item {} : {} {} travaille chez {}.".format(index, item["prénom"], item["nom"], item["employeur"]))
compteur = 0
stop = 5
while compteur < stop: # will execute the following statements as long as the condition is true
print(compteur)
compteur = compteur + 1 | 0
1
2
3
4
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
It is possible to control loops with certain keywords:
- `continue` will skip to the next iteration without executing the statements that follow
- `break` will exit the loop early | for index, item in enumerate(un_iterable):
if item["age"] < 18:
        continue # If the condition is true, skip to the next iteration.
print("Item {} : {} {} (majeur) travaille chez {}.".format(index, item["prénom"], item["nom"], item["employeur"]))
for index, item in enumerate(un_iterable):
print("Item {} : {} {} travaille chez {}.".format(index, item["prénom"], item["nom"], item["employeur"]))
if item["prénom"] == "Loïc":
        break # Stop the loop if the condition is true
| MIT | 03_Tests_et_boucles.ipynb | Tofull/introduction_python |
Logistic Regression | import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split,KFold
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,\
recall_score,roc_curve,auc
import expectation_reflection as ER
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from function import split_train_test,make_data_balance
np.random.seed(1) | _____no_output_____ | MIT | 2020.07.2400_classification/.ipynb_checkpoints/LR_knn9-checkpoint.ipynb | danhtaihoang/classification |
First of all, the processed data are imported. | #data_list = ['1paradox','2peptide','3stigma']
#data_list = np.loadtxt('data_list.txt',dtype='str')
data_list = np.loadtxt('data_list_30sets.txt',dtype='str')
#data_list = ['9coag']
print(data_list)
def read_data(data_id):
data_name = data_list[data_id]
print('data_name:',data_name)
Xy = np.loadtxt('../classification_data/%s/data_processed_knn7.dat'%data_name)
X = Xy[:,:-1]
y = Xy[:,-1]
#print(np.unique(y,return_counts=True))
X,y = make_data_balance(X,y)
print(np.unique(y,return_counts=True))
X, y = shuffle(X, y, random_state=1)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state = 1)
sc = MinMaxScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
return X_train,X_test,y_train,y_test
def measure_performance(X_train,X_test,y_train,y_test):
#model = LogisticRegression(max_iter=100)
model = SGDClassifier(loss='log',max_iter=1000,tol=0.001) # 'log' for logistic regression, 'hinge' for SVM
# regularization penalty space
#penalty = ['l1','l2']
penalty = ['elasticnet']
# solver
#solver=['saga']
#solver=['liblinear']
# regularization hyperparameter space
#C = np.logspace(0, 4, 10)
#C = [0.001,0.1,1.0,10.0,100.0]
alpha = [0.001,0.01,0.1,1.0,10.,100.]
# l1_ratio
#l1_ratio = [0.1,0.5,0.9]
l1_ratio = [0.,0.2,0.4,0.6,0.8,1.0]
# Create hyperparameter options
#hyperparameters = dict(penalty=penalty,solver=solver,C=C,l1_ratio=l1_ratio)
#hyper_parameters = dict(penalty=penalty,solver=solver,C=C)
hyper_parameters = dict(penalty=penalty,alpha=alpha,l1_ratio=l1_ratio)
# Create grid search using cross validation
clf = GridSearchCV(model, hyper_parameters, cv=4, iid='deprecated')
# Fit grid search
best_model = clf.fit(X_train, y_train)
# View best hyperparameters
#print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
#print('Best C:', best_model.best_estimator_.get_params()['C'])
#print('Best alpha:', best_model.best_estimator_.get_params()['alpha'])
#print('Best l1_ratio:', best_model.best_estimator_.get_params()['l1_ratio'])
# best hyper parameters
print('best_hyper_parameters:',best_model.best_params_)
# performance:
y_test_pred = best_model.best_estimator_.predict(X_test)
acc = accuracy_score(y_test,y_test_pred)
#print('Accuracy:', acc)
p_test_pred = best_model.best_estimator_.predict_proba(X_test) # prob of [0,1]
p_test_pred = p_test_pred[:,1] # prob of 1
fp,tp,thresholds = roc_curve(y_test, p_test_pred, drop_intermediate=False)
roc_auc = auc(fp,tp)
#print('AUC:', roc_auc)
precision = precision_score(y_test,y_test_pred)
#print('Precision:',precision)
recall = recall_score(y_test,y_test_pred)
#print('Recall:',recall)
f1_score = 2*precision*recall/(precision+recall)
return acc,roc_auc,precision,recall,f1_score
n_data = len(data_list)
roc_auc = np.zeros(n_data) ; acc = np.zeros(n_data)
precision = np.zeros(n_data) ; recall = np.zeros(n_data)
f1_score = np.zeros(n_data)
#data_id = 0
for data_id in range(n_data):
X_train,X_test,y_train,y_test = read_data(data_id)
acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id] =\
measure_performance(X_train,X_test,y_train,y_test)
print(data_id,acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id])
print('acc_mean:',acc.mean())
print('roc_mean:',roc_auc.mean())
print('precision:',precision.mean())
print('recall:',recall.mean())
print('f1_score:',f1_score.mean())
np.savetxt('result_knn7_LR.dat',(roc_auc,acc,precision,recall,f1_score),fmt='%f') | _____no_output_____ | MIT | 2020.07.2400_classification/.ipynb_checkpoints/LR_knn9-checkpoint.ipynb | danhtaihoang/classification |
Data import
* One CSV per campus
* dates: from 2019-02-18 (second week of classes) to 2019-06-28 (second-to-last week of classes)
* Granularity: 1 h (power aggregated by the mean)
* Weather data obtained from the yr platform
* Columns
  * active power of phase A (kW)
  * Temperature (ºC)
  * Pressure (hPa) | raw = pd.read_csv ('../../datasets/2019-1 Fpolis.csv', sep=',')
raw.describe()
(ax1, ax2,ax3) = raw.plot(subplots=True)
ax1.legend(loc='upper left')
ax2.legend(loc='upper left')
ax3.legend(loc='upper left')
raw['pa'].plot.kde().set_xlabel("Potência Ativa (KW)")
raw['temp_celsius'].plot.kde().set_xlabel("Temperatura (ºC)")
raw['pressao'].plot.kde().set_xlabel("Pressão (hPa)") | _____no_output_____ | MIT | artificial_intelligence/01 - ConsumptionRegression/All campus/Fpolis.ipynb | LeonardoSanBenitez/LorisWeb |