markdown | code | path | repo_name | license |
---|---|---|---|---|
Quality distribution per chromosome | with size_controller(FULL_FIG_W, FULL_FIG_H):
bqc = gffData.boxplot(column='Confidence', by='RefContigID') | opticalmapping/xmap_reader.ipynb | sauloal/ipython | mit |
Position distribution | with size_controller(FULL_FIG_W, FULL_FIG_H):
hs = gffData['RefStartPos'].hist() | opticalmapping/xmap_reader.ipynb | sauloal/ipython | mit |
Position distribution per chromosome | hsc = gffData['RefStartPos'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1)) | opticalmapping/xmap_reader.ipynb | sauloal/ipython | mit |
Length distribution | with size_controller(FULL_FIG_W, FULL_FIG_H):
hl = gffData['qry_match_len'].hist() | opticalmapping/xmap_reader.ipynb | sauloal/ipython | mit |
Length distribution per chromosome | hlc = gffData['qry_match_len'].hist(by=gffData['RefContigID'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1)) | opticalmapping/xmap_reader.ipynb | sauloal/ipython | mit |
Create a new dataset for practice | # Select three centroids
cluster_center_1 = np.array([2,3])
cluster_center_2 = np.array([6,6])
cluster_center_3 = np.array([10,1])
# Generate random samples around the chosen centroids
cluster_data_1 = np.random.randn(100, 2) + cluster_center_1
cluster_data_2 = np.random.randn(100,2) + cluster_center_2
cluster_data_3 = np.random.randn(100,2) + cluster_center_3
new_dataset = np.concatenate((cluster_data_1, cluster_data_2,
cluster_data_3), axis = 0)
plt.scatter(new_dataset[:,0], new_dataset[:,1], s=10)
plt.show() | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
1. Implement the K-means algorithm
In this step you will implement the functions that make up the K-means algorithm one by one. It is important to understand and read the documentation of each function, especially the expected dimensions of the data in the output.
1.1 Initialize the centroids
The first step of the algorithm is to initialize the centroids at random. This is one of the most important steps of the algorithm, and a good initialization can greatly reduce the convergence time.
To initialize the centroids you can take prior knowledge about the data into account, even without knowing the number of groups or their distribution.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html | def calculate_initial_centers(dataset, k):
"""
Initialize the centroids in an arbitrary way
Arguments:
dataset -- Data set - [m,n]
k -- Desired number of centroids
Returns:
centroids -- List with the computed centroids - [k,n]
"""
#### CODE HERE ####
minimum = np.min(dataset, axis=0)
maximum = np.max(dataset, axis=0)
shape = [k, dataset.shape[1]]
centroids = np.random.uniform(minimum, maximum, size=shape)
### END OF CODE ###
return centroids | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
1.2 Define the clusters
In the second step of the algorithm, the cluster of each data point is determined according to the computed centroids.
1.2.1 Distance function
Code the Euclidean distance function between two points (a, b).
It is defined by the equation:
$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$
$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$ | def euclidean_distance(a, b):
"""
Compute the Euclidean distance between points a and b
Arguments:
a -- A point in space - [1,n]
b -- A point in space - [1,n]
Returns:
distance -- Euclidean distance between the points
"""
#### CODE HERE ####
distance = np.sqrt(np.sum(np.square(a-b)))
### END OF CODE ###
return distance | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
1.2.2 Compute the nearest centroid
Using the distance function coded above, complete the function below to compute the centroid nearest to an arbitrary point.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html | def nearest_centroid(a, centroids):
"""
Compute the index of the centroid nearest to point a
Arguments:
a -- A point in space - [1,n]
centroids -- List with the centroids - [k,n]
Returns:
nearest_index -- Index of the nearest centroid
"""
#### CODE HERE ####
distance_zeros = np.zeros(centroids.shape[0])
for index, centroid in enumerate(centroids):
distance = euclidean_distance(a, centroid)
distance_zeros[index] = distance
nearest_index = np.argmin(distance_zeros)
### END OF CODE ###
return nearest_index | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
1.2.3 Compute the nearest centroid for each point in the dataset
Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for each point in the dataset. | def all_nearest_centroids(dataset, centroids):
"""
Compute the index of the nearest centroid for each
point in the dataset
Arguments:
dataset -- Data set - [m,n]
centroids -- List with the centroids - [k,n]
Returns:
nearest_indexes -- Indices of the nearest centroids - [m,1]
"""
#### CODE HERE ####
nearest_indexes = np.zeros(dataset.shape[0])
for index, a in enumerate(dataset):
nearest_indexes[index] = nearest_centroid(a, centroids)
### END OF CODE ###
return nearest_indexes | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
1.3 Evaluation metric
After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric.
The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the data points of a cluster and their centroid. This metric is known as inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks:
Inertia assumes that clusters are convex and isotropic, which is not always the case. It may therefore represent elongated clusters, or manifolds with irregular shapes, poorly.
Inertia is not a normalized metric: we only know that lower values are better and zero is optimal. In very high-dimensional spaces, Euclidean distances tend to become inflated (an instance of the so-called "curse of dimensionality"). Running a dimensionality-reduction algorithm such as PCA beforehand can alleviate this problem and speed up the computations.
Source: https://scikit-learn.org/stable/modules/clustering.html
To evaluate our clusters, code the inertia metric below; you can use the Euclidean distance function built earlier.
$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$ | def inertia(dataset, centroids, nearest_indexes):
"""
Sum of the squared distances of the samples to the
center of the nearest cluster.
Arguments:
dataset -- Data set - [m,n]
centroids -- List with the centroids - [k,n]
nearest_indexes -- Indices of the nearest centroids - [m,1]
Returns:
inertia -- Total sum of squared distances between
the data points of a cluster and their centroid
"""
#### CODE HERE ####
inertia = 0
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for a in dataframe:
inertia += np.square(euclidean_distance(a,centroid))
### END OF CODE ###
return inertia | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
1.4 Update the clusters
In this step the centroids are recomputed. The new value of each centroid is the mean of all the data points assigned to that cluster. | def update_centroids(dataset, centroids, nearest_indexes):
"""
Update the centroids
Arguments:
dataset -- Data set - [m,n]
centroids -- List with the centroids - [k,n]
nearest_indexes -- Indices of the nearest centroids - [m,1]
Returns:
centroids -- List with the updated centroids - [k,n]
"""
#### CODE HERE ####
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
if(dataframe.size != 0):
centroids[index] = np.mean(dataframe, axis=0)
### END OF CODE ###
return centroids | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
2. K-means
2.1 Complete algorithm
Using the functions coded above, complete the K-means algorithm class! | class KMeans():
def __init__(self, n_clusters=8, max_iter=300):
self.n_clusters = n_clusters
self.max_iter = max_iter
def fit(self,X):
# Initialize the centroids
self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)
# Compute the cluster of each sample
self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
# Compute the initial inertia
old_inertia = inertia(X, self.cluster_centers_, self.labels_)
for index in range(self.max_iter):
#### CODE HERE ####
self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)
self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)
if(old_inertia == self.inertia_):
break
else:
old_inertia = self.inertia_
### END OF CODE ###
return self
def predict(self, X):
return all_nearest_centroids(X, self.cluster_centers_) | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
2.2 Compare with the Scikit-Learn algorithm
Use scikit-learn's K-means implementation on the same dataset. Show the inertia value and the clusters generated by the model. You can use the same structure as the previous code cell.
Hint: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans | from sklearn.cluster import KMeans as scikit_KMeans
scikit_kmeans = scikit_KMeans(n_clusters=3)
scikit_kmeans.fit(dataset)
print("Inércia = ", scikit_kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=scikit_kmeans.labels_)
plt.scatter(scikit_kmeans.cluster_centers_[:,0],
scikit_kmeans.cluster_centers_[:,1], c='red')
plt.show() | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
3. Elbow method
Implement the elbow method and show the best K for the dataset. | n_clusters_test = 8
n_sequence = np.arange(1, n_clusters_test+1)
inertia_vec = np.zeros(n_clusters_test)
for index, n_cluster in enumerate(n_sequence):
inertia_vec[index] = KMeans(n_clusters=n_cluster).fit(dataset).inertia_
plt.plot(n_sequence, inertia_vec, 'ro-')
plt.show() | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | InsightLab/data-science-cookbook | mit |
Calculating Molar Fluorescence (MF) of Free Ligand
1. Maximum likelihood curve-fitting
Find the maximum likelihood estimate, $\theta^*$, i.e. the curve that minimizes the squared error $\theta^* = \text{argmin}_\theta \sum_i |y_i - f_\theta(x_i)|^2$ (assuming i.i.d. Gaussian noise)
Y = MF*L + BKG
Y: Fluorescence read (Flu unit)
L: Total ligand concentration (uM)
BKG: background fluorescence without ligand (Flu unit)
MF: molar fluorescence of free ligand (Flu unit/ uM) | import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
%matplotlib inline
def model(x,slope,intercept):
''' 1D linear model in the format scipy.optimize.curve_fit expects: '''
return x*slope + intercept
# generate some data
#X = np.random.rand(1000)
#true_slope=1.0
#true_intercept=0.0
#noise = np.random.randn(len(X))*0.1
#Y = model(X,slope=true_slope,intercept=true_intercept) + noise
#ligand titration
lig1=np.array([200.0000,86.6000,37.5000,16.2000,7.0200, 3.0400, 1.3200, 0.5700, 0.2470, 0.1070, 0.0462, 0.0200])
lig1
# Since I have 4 replicates
L=np.concatenate((lig1, lig1, lig1, lig1))
len(L)
# Fluorescence read
df_topread.loc[:,("B - Buffer", "D - Buffer", "F - Buffer", "H - Buffer")]
B=df_topread.loc[:,("B - Buffer")]
D=df_topread.loc[:,("D - Buffer")]
F=df_topread.loc[:,("F - Buffer")]
H=df_topread.loc[:,("H - Buffer")]
Y = np.concatenate((B.values, D.values, F.values, H.values))
(MF,BKG),_ = optimize.curve_fit(model,L,Y)
print('MF: {0:.3f}, BKG: {1:.3f}'.format(MF,BKG))
print('y = {0:.3f} * L + {1:.3f}'.format(MF, BKG))
| examples/ipynbs/data-analysis/hsa/analyzing_FLU_hsa_lig1_20150922.ipynb | sonyahanson/assaytools | lgpl-2.1 |
The following are the scripts for Sentiment Analysis with NLP. | score_file = 'reviews_score.csv'
review_file = 'reviews.csv'
def read_score_review(score_file, review_file):
"""Read score and review data."""
score_df = pd.read_csv(score_file)
review_df = pd.read_csv(review_file)
return score_df, review_df
def groupby_agg_data(df, gkey='gkey', rid='rid'):
"""Group-by aggregate data."""
agg_df = (df.groupby(gkey)[rid]
.count()
.reset_index())
nan_count = df[gkey].isnull().sum()
nan_df = pd.DataFrame({gkey: [np.nan], rid: [nan_count]})
agg_df = agg_df.append(nan_df)[[gkey, rid]]
agg_df['percent'] = agg_df[rid] / agg_df[rid].sum()
return agg_df
def count_missing_data(df, cols='cols'):
"""Count missing records w.r.t. columns."""
print('Missing rows:')
for col in cols:
nan_rows = df[col].isnull().sum()
print('For {0}: {1}'.format(col, nan_rows))
def slice_abnormal_id(df, rid='hotel_review_id'):
"""View abnormal records with column"""
abnorm_bool_arr = (df[rid] == 0)
abnorm_count = abnorm_bool_arr.sum()
print('abnorm_count: {}'.format(abnorm_count))
abnorm_df = df[abnorm_bool_arr]
return abnorm_df
def remove_missing_abnormal_data(score_raw_df, review_raw_df,
rid='hotel_review_id',
score_col='rating_overall'):
"""Remove missing / abnormal data."""
filter_score_bool_arr = (score_raw_df[rid].notnull() &
score_raw_df[score_col].notnull())
score_df = score_raw_df[filter_score_bool_arr]
filter_review_bool_arr = review_raw_df[rid].notnull()
review_df = review_raw_df[filter_review_bool_arr]
return score_df, review_df
def join_score_review(score_df, review_df, on='hotel_review_id', how='left'):
"""Join score and review datasets."""
score_review_df = pd.merge(score_df, review_df, on=on, how=how)
score_review_count = score_review_df.shape[0]
print('score_review_count: {}'.format(score_review_count))
return score_review_df
def concat_review_title_comments(score_review_df,
concat_cols=['review_title', 'review_comments'],
concat_2col='review_title_comments'):
"""Concat review title and review comments."""
concat_text_col = ''
for concat_col in concat_cols:
concat_text_col += score_review_df[concat_col]
if concat_col != concat_cols[len(concat_cols) - 1]:
concat_text_col += '. '
score_review_df[concat_2col] = concat_text_col
return score_review_df
def lower_review_title_comments(score_review_df,
lower_col='review_title_comments'):
"""Lower sentences."""
score_review_df[lower_col] = score_review_df[lower_col].str.lower()
return score_review_df
def _tokenize_sen(sen):
"""Tokenize one sentence."""
from nltk.tokenize import word_tokenize
sen_token = word_tokenize(str(sen))
return sen_token
def _remove_nonstop_words_puncs(sen):
"""Remove nonstop words and meaningless punctuations in one sentence."""
from nltk.corpus import stopwords
sen_clean = [
word for word in sen
if word not in stopwords.words('english') and
word not in [',', '.', '(', ')', '&']]
return sen_clean
def tokenize_clean_sentence(sen):
"""Tokenize and clean one sentence."""
sen_token = _tokenize_sen(sen)
sen_token_clean = _remove_nonstop_words_puncs(sen_token)
return sen_token_clean
# def preprocess_sentence(df, sen_cols=['review_title', 'review_comments']):
# """Preprocess sentences (deprecated due to slow performance)."""
# for sen_col in sen_cols:
# print('Start tokenizing "{}"'.format(sen_col))
# sen_token_col = '{}_token'.format(sen_col)
# df[sen_token_col] = df[sen_col].apply(tokenize_clean_sentence)
# print('Finish tokenizing "{}"'.format(sen_col))
# return df
def preprocess_sentence_par(df, sen_col='review_title_comments',
sen_token_col='review_title_comments_token', num_proc=32):
"""Preporecess sentences in parallel.
Note: We apply multiprocessing with 32 cores; adjust `num_proc` by your computing environment.
"""
import multiprocessing as mp
pool = mp.Pool(num_proc)
df[sen_token_col] = pool.map_async(tokenize_clean_sentence , df[sen_col]).get()
return df
def get_bag_of_words(w_ls):
"""Get bag of words in word list."""
w_bow = dict([(w, True) for w in w_ls])
return w_bow
def get_bag_of_words_par(df, sen_token_col='review_title_comments_token',
bow_col='review_title_comments_bow', num_proc=32):
"""Get bag of words in parallel for sentences."""
import multiprocessing as mp
pool = mp.Pool(num_proc)
df[bow_col] = pool.map_async(get_bag_of_words , df[sen_token_col]).get()
return df
def label_review(df, scores_ls=None, label='negative',
score_col='rating_overall',
review_col='review_title_comments_bow'):
"""Label review by positive or negative."""
df_label = df[df[score_col].isin(scores_ls)]
label_review_ls = (df_label[review_col]
.apply(lambda bow: (bow, label))
.tolist())
return label_review_ls
def permutate(data_ls):
"""Randomly permutate data."""
np.random.shuffle(data_ls)
def create_train_test_sets(pos_review_ls, neg_review_ls, train_percent=0.75):
"""Create the training and test sets."""
neg_num = np.int(np.ceil(len(neg_review_ls) * train_percent))
pos_num = np.int(np.ceil(len(pos_review_ls) * train_percent))
train_set = neg_review_ls[:neg_num] + pos_review_ls[:pos_num]
permutate(train_set)
test_set = neg_review_ls[neg_num:] + pos_review_ls[pos_num:]
permutate(test_set)
return train_set, test_set
def train_naive_bayes(train_set):
from nltk.classify import NaiveBayesClassifier
nb_clf = NaiveBayesClassifier.train(train_set)
return nb_clf
def eval_naive_bayes(test_set, nb_clf):
import collections
from nltk.metrics.scores import precision
from nltk.metrics.scores import recall
ref_sets = {'positive': set(),
'negative': set()}
pred_sets = {'positive': set(),
'negative': set()}
for i, (bow, label) in enumerate(test_set):
ref_sets[label].add(i)
pred_label = nb_clf.classify(bow)
pred_sets[pred_label].add(i)
print('Positive precision:', precision(ref_sets['positive'], pred_sets['positive']))
print('Positive recall:', recall(ref_sets['positive'], pred_sets['positive']))
print('Negative precision:', precision(ref_sets['negative'], pred_sets['negative']))
print('Negative recall:', recall(ref_sets['negative'], pred_sets['negative']))
def pred_labels(df, clf,
bow_col='review_title_comments_bow',
pred_col='pred_label',
sel_cols=['rating_overall',
'review_title_comments_bow',
'pred_label']):
"""Predict labels for bag of words."""
df[pred_col] = df[bow_col].apply(clf.classify)
df_pred = df[sel_cols]
return df_pred
def get_boxplot_data(pred_label_df,
pred_col='pred_label', score_col='rating_overall'):
pos_data = pred_label_df[pred_label_df[pred_col] == 'positive'][score_col].values
neg_data = pred_label_df[pred_label_df[pred_col] == 'negative'][score_col].values
box_data = [pos_data, neg_data]
return box_data
def plot_box(d_ls, title='Box Plot', xlab='xlab', ylab='ylab',
xticks=None, xlim=None, ylim=None, figsize=(15, 10)):
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib
matplotlib.style.use('ggplot')
%matplotlib inline
plt.figure()
fig, ax = plt.subplots(figsize=figsize)
plt.boxplot(d_ls)
plt.title(title)
plt.xlabel(xlab)
plt.ylabel(ylab)
if xticks:
ax.set_xticklabels(xticks)
if xlim:
plt.xlim(xlim)
if ylim:
plt.ylim(ylim)
# plt.axis('auto')
plt.show() | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Collect Data
We first read score and review raw datasets.
Score dataset: two columns
hotel_review_id: hotel review sequence ID
rating_overall: overal accommodation rating
Review dataset: three columns
hotel_review_id: hotel review sequence ID
review_title: review title
review_comments: detailed review comments | score_raw_df, review_raw_df = read_score_review(score_file, review_file)
print(len(score_raw_df))
print(len(review_raw_df))
score_raw_df.head(5)
review_raw_df.head(5) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
EDA with Datasets
Check missing / abnormal data | count_missing_data(score_raw_df,
cols=['hotel_review_id', 'rating_overall'])
score_raw_df[score_raw_df.rating_overall.isnull()]
count_missing_data(review_raw_df,
cols=['hotel_review_id', 'review_title', 'review_comments'])
abnorm_df = slice_abnormal_id(score_raw_df, rid='hotel_review_id')
abnorm_df
abnorm_df = slice_abnormal_id(review_raw_df, rid='hotel_review_id')
abnorm_df | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Group-by aggregate score distributions
From the following results we can observe that
the rating_overall scores are imbalanced. Specifically, only about $1\%$ of records have low scores $\le 5$, and about $99\%$ of records have scores $\ge 6$.
some records have missing score. | score_raw_df.rating_overall.unique()
score_agg_df = groupby_agg_data(
score_raw_df, gkey='rating_overall', rid='hotel_review_id')
score_agg_df | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Pre-process Datasets
Remove missing / abnormal data
Since there are few records (only 27) having missing hotel_review_id and rating_overall score, we just ignore them. | score_df, review_df = remove_missing_abnormal_data(
score_raw_df, review_raw_df,
rid='hotel_review_id',
score_col='rating_overall')
score_df.head(5)
review_df.head(5) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Join score & review datasets
To leverage fast vectorized operation with Pandas DataFrame, we joint score and review datasets. | score_review_df_ = join_score_review(score_df, review_df)
score_review_df_.head(5) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
The following is the procedure for processing the natural language text.
Concat review_title and review_comments
Following Occam's Razor, since review_title and review_comments are both natural language, we can simply concatenate them into one sentence for further natural language processing. | score_review_df = concat_review_title_comments(
score_review_df_,
concat_cols=['review_title', 'review_comments'],
concat_2col='review_title_comments')
score_review_df.head(5) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Lower review_title_comments | score_review_df = lower_review_title_comments(
score_review_df,
lower_col='review_title_comments')
score_review_df.head(5) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Tokenize and remove stopwords
Tokenizing is an important technique by which we split a sentence into a vector of individual words. However, natural language text contains many stopwords that carry little meaning, for example: he, is, at, which, and on. We therefore remove them from the vector of tokenized words.
Note that since the tokenizing and stopword-removal tasks are time-consuming, we use Python's built-in multiprocessing package for parallel computing to improve performance. | start_token_time = time.time()
score_review_token_df = preprocess_sentence_par(
score_review_df,
sen_col='review_title_comments',
sen_token_col='review_title_comments_token', num_proc=32)
end_token_time = time.time()
print('Time for tokenizing: {}'.format(end_token_time - start_token_time))
score_review_token_df.head(5)
score_review_token_df.review_title_comments_token[1] | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Get bag of words
The tokenized words may contain duplicates, and for simplicity we apply the Bag of Words representation, which represents the sentence as a bag (multiset) of its words, ignoring grammar and even word order. Here, following Occam's Razor again, we do not keep word frequencies; instead we use binary (presence/absence, True/False) weights. | start_bow_time = time.time()
score_review_bow_df = get_bag_of_words_par(
score_review_token_df,
sen_token_col='review_title_comments_token',
bow_col='review_title_comments_bow', num_proc=32)
end_bow_time= time.time()
print('Time for bag of words: {}'.format(end_bow_time - start_bow_time))
score_review_bow_df.review_title_comments_bow[:5] | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Sentiment Analysis
Label data
Since we would like to polarize the data while accounting for the class-imbalance problem mentioned before, we decide to label
ratings 2, 3 and 4 by "negative",
ratings 9 and 10 by "positive". | neg_review_ls = label_review(
score_review_bow_df,
scores_ls=[2, 3, 4], label='negative',
score_col='rating_overall',
review_col='review_title_comments_bow')
pos_review_ls = label_review(
score_review_bow_df,
scores_ls=[9, 10], label='positive',
score_col='rating_overall',
review_col='review_title_comments_bow')
neg_review_ls[1]
pos_review_ls[1] | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Split training and test sets
We split the data into training and test sets using a $75\%$ / $25\%$ rule. | train_set, test_set = create_train_test_sets(
pos_review_ls, neg_review_ls, train_percent=0.75)
train_set[10] | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Naive Bayes Classification
We first apply a Naive Bayes classifier to learn positive or negative sentiment. | nb_clf = train_naive_bayes(train_set) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Model evaluation
We evaluate our model by positive / negative precision and recall. From the results we can observe that our model performs fairly well. | eval_naive_bayes(test_set, nb_clf) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Measure Real-World Performance
Predict label based on bag of words | start_pred_time = time.time()
pred_label_df = pred_labels(
score_review_bow_df, nb_clf,
bow_col='review_title_comments_bow',
pred_col='pred_label')
end_pred_time = time.time()
print('Time for prediction: {}'.format(end_pred_time - start_pred_time))
pred_label_df.head(5) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
Compare the two labels' score distributions
From the following boxplot, we can observe that our model performs reasonably well in the real world, even with our surprisingly simple machine learning modeling.
We can further apply divergence measures, such as the Kullback-Leibler divergence, to quantify the rating_overall distribution distance between the two label groups, if needed (see the sketch after the plot below). | box_data = get_boxplot_data(
pred_label_df,
pred_col='pred_label', score_col='rating_overall')
plot_box(box_data, title='Box Plot for rating_overall by Sentiment Classes',
xlab='class', ylab='rating_overall',
xticks=['positive', 'negative'], figsize=(12, 7)) | notebook/sentiment_nlp.ipynb | bowen0701/data_science | bsd-2-clause |
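The short sketch below is an addition for illustration: it estimates the Kullback-Leibler divergence mentioned above between the rating_overall distributions of the 'positive' and 'negative' groups, assuming the pred_label_df DataFrame from the previous cells and using scipy.stats.entropy. The histogram bins and the small smoothing constant are illustrative choices, not part of the original analysis.

```python
# Hedged sketch (an addition, not part of the original notebook): estimate the
# Kullback-Leibler divergence between the rating_overall distributions of the
# two predicted sentiment groups. Assumes `pred_label_df` from above exists.
import numpy as np
from scipy.stats import entropy

def rating_distribution(df, label, pred_col='pred_label', score_col='rating_overall'):
    """Normalized histogram of rating_overall for one predicted label."""
    vals = df[df[pred_col] == label][score_col].values
    counts, _ = np.histogram(vals, bins=np.arange(0.5, 11.5, 1.0))  # bins for ratings 1..10
    probs = counts.astype(float) + 1e-9  # smooth empty bins so the KL divergence stays finite
    return probs / probs.sum()

p_pos = rating_distribution(pred_label_df, 'positive')
p_neg = rating_distribution(pred_label_df, 'negative')

# KL divergence is asymmetric, so report both directions.
print('KL(pos || neg): {0:.4f}'.format(entropy(p_pos, p_neg)))
print('KL(neg || pos): {0:.4f}'.format(entropy(p_neg, p_pos)))
```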
Loading and visualizing training data
The training data consists of 5000 images of handwritten digits, each of size 20x20. We will display a random selection of 25 of them. | ex3data1 = scipy.io.loadmat("./ex3data1.mat")
X = ex3data1['X']
y = ex3data1['y'][:,0]
y[y==10] = 0
m, n = X.shape
m, n
fig = plt.figure(figsize=(5,5))
fig.subplots_adjust(wspace=0.05, hspace=0.15)
import random
display_rows, display_cols = (5, 5)
for i in range(display_rows * display_cols):
ax = fig.add_subplot(display_rows, display_cols, i+1)
ax.set_axis_off()
image = X[random.randint(0, m-1)].reshape(20, 20).T
image /= np.max(image)
ax.imshow(image, cmap=plt.cm.Greys_r)
X = np.insert(X, 0, np.ones(m), 1) | ex3/ml-ex3-onevsall.ipynb | noammor/coursera-machinelearning-python | mit |
Part 2: Vectorize Logistic Regression
In this part of the exercise, you will reuse your logistic regression
code from the last exercise. You task here is to make sure that your
regularized logistic regression implementation is vectorized. After
that, you will implement one-vs-all classification for the handwritten
digit dataset. | def sigmoid(z):
return 1 / (1 + np.exp(-z))
def h(theta, x):
return sigmoid(x.dot(theta))
#LRCOSTFUNCTION Compute cost and gradient for logistic regression with
#regularization
# J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
# theta as the parameter for regularized logistic regression and the
# gradient of the cost w.r.t. to the parameters.
def cost(X, y, theta, lambda_=None):
# You need to return the following variables correctly
J = 0
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the cost of a particular choice of theta.
# You should set J to the cost.
# Compute the partial derivatives and set grad to the partial
# derivatives of the cost w.r.t. each parameter in theta
#
# Hint: The computation of the cost function and gradients can be
# efficiently vectorized. For example, consider the computation
#
# sigmoid(X * theta)
#
# Each row of the resulting matrix will contain the value of the
# prediction for that example. You can make use of this to vectorize
# the cost function and gradient computations.
#
# =============================================================
return J
def gradient(X, y, theta, lambda_=None):
# You need to return the following variables correctly
grad = np.zeros(theta.shape)
# ====================== YOUR CODE HERE ======================
# Hint: When computing the gradient of the regularized cost function,
# there're many possible vectorized solutions, but one solution
# looks like:
# grad = (unregularized gradient for logistic regression)
# temp = theta;
# temp[0] = 0; # because we don't add anything for j = 0
# grad = grad + YOUR_CODE_HERE (using the temp variable)
# =============================================================
return grad
initial_theta = np.zeros(n + 1)
lambda_ = 0.1
cost(X, y, initial_theta, lambda_)
gradient(X, y, initial_theta, lambda_).shape
def one_vs_all(X, y, num_labels, lambda_):
#ONEVSALL trains multiple logistic regression classifiers and returns all
#the classifiers in a matrix all_theta, where the i-th row of all_theta
#corresponds to the classifier for label i
# [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
# logisitc regression classifiers and returns each of these classifiers
# in a list all_theta, where the i-th item of all_theta corresponds
# to the classifier for label i
# You need to return the following variables correctly
all_theta = [None] * num_labels
# ====================== YOUR CODE HERE ======================
# Instructions: You should complete the following code to train num_labels
# logistic regression classifiers with regularization
# parameter lambda.
#
# Hint: You can use y == c to obtain a vector of True's and False's
#
# Note: For this assignment, we recommend using scipy.optimize.minimize with method='L-BFGS-B'
# to optimize the cost function.
# It is okay to use a for-loop (for i in range(num_labels)) to
# loop over the different classes.
#
# Example Code for scipy.optimize.minimize:
#
# result = scipy.optimize.minimize(lambda t: cost(X, y==digit, t, lambda_),
# initial_theta,
# jac=lambda t: gradient(X, y==digit, t, lambda_),
# method='L-BFGS-B')
# theta = result.x
# =========================================================================
num_labels = 10
thetas = one_vs_all(X, y, num_labels, lambda_)
fig = plt.figure(figsize=(10,10))
for d in range(10):
ax = fig.add_subplot(5, 2, d+1)
ax.scatter(range(m), h(thetas[d], X), s=1)
def predict_one_vs_all(X, thetas):
#PREDICT Predict the label for a trained one-vs-all classifier. The labels
#are in the range 1..K, where K = len(thetas)
# p = PREDICTONEVSALL(all_theta, X) will return a vector of predictions
# for each example in the matrix X. Note that X contains the examples in
# rows. all_theta is a list where the i-th entry is a trained logistic
# regression theta vector for the i-th class. You should set p to a vector
# of values from 1..K (e.g., p = [1; 3; 1; 2] predicts classes 1, 3, 1, 2
# for 4 examples)
# You need to return the following variables correctly
p = np.zeros(X.shape[0]);
# ====================== YOUR CODE HERE ======================
# Instructions: Complete the following code to make predictions using
# your learned logistic regression parameters (one-vs-all).
# You should set p to a vector of predictions (from 1 to
# num_labels).
#
# Hint: This code can be done all vectorized using the max function.
# In particular, the max function can also return the index of the
# max element, for more information see 'help max'. If your examples
# are in rows, then, you can use max(A, [], 2) to obtain the max
# for each row.
#
# =========================================================================
return p
predictions = predict_one_vs_all(X, thetas)
plt.scatter(range(m), predictions, s=1) | ex3/ml-ex3-onevsall.ipynb | noammor/coursera-machinelearning-python | mit |
Training set accuracy: | (predictions == y).mean() | ex3/ml-ex3-onevsall.ipynb | noammor/coursera-machinelearning-python | mit |
<h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://console.cloud.google.com/bigquery?project=nyc-tlc&p=nyc-tlc&d=yellow&t=trips&page=table">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format. | %%bigquery
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
FROM
`nyc-tlc.yellow.trips` # TODO 1
LIMIT 10 | courses/machine_learning/deepdive2/launching_into_ml/solutions/explore_data.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
<h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering. | # TODO 2
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8) | courses/machine_learning/deepdive2/launching_into_ml/solutions/explore_data.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
Note the extra WHERE clauses. | %%bigquery trips
SELECT
FORMAT_TIMESTAMP(
"%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
# TODO 3
AND trip_distance > 0
AND fare_amount >= 2.5
print(len(trips))
ax = sns.regplot(
x="trip_distance", y="fare_amount",
fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8) | courses/machine_learning/deepdive2/launching_into_ml/solutions/explore_data.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model simply divides the mean fare_amount by the mean trip_distance to come up with a rate and uses that rate to predict. Let's compute the RMSE of such a model. | def distance_between(lat1, lon1, lat2, lon2):
# Haversine formula to compute distance "as the crow flies".
lat1_r = np.radians(lat1)
lat2_r = np.radians(lat2)
lon_diff_r = np.radians(lon2 - lon1)
sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)
minimum = np.minimum(1, sin_prod + cos_prod)
dist = np.degrees(np.arccos(minimum)) * 60 * 1.515 * 1.609344
return dist
def estimate_distance(df):
return distance_between(
df["pickuplat"], df["pickuplon"], df["dropofflat"], df["dropofflon"])
def compute_rmse(actual, predicted):
return np.sqrt(np.mean((actual - predicted) ** 2))
def print_rmse(df, rate, name):
print ("{1} RMSE = {0}".format(
compute_rmse(df["fare_amount"], rate * estimate_distance(df)), name))
# TODO 4
FEATURES = ["pickuplon", "pickuplat", "dropofflon", "dropofflat", "passengers"]
TARGET = "fare_amount"
columns = list([TARGET])
columns.append("pickup_datetime")
columns.extend(FEATURES) # in CSV, target is first column, after the features
columns.append("key")
df_train = pd.read_csv("taxi-train.csv", header=None, names=columns)
df_valid = pd.read_csv("taxi-valid.csv", header=None, names=columns)
df_test = pd.read_csv("taxi-test.csv", header=None, names=columns)
rate = df_train["fare_amount"].mean() / estimate_distance(df_train).mean()
print ("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, "Train")
print_rmse(df_valid, rate, "Valid")
print_rmse(df_test, rate, "Test") | courses/machine_learning/deepdive2/launching_into_ml/solutions/explore_data.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Here, we visualize the generated multi- and singly-connected BBNs. | with warnings.catch_warnings():
warnings.simplefilter('ignore')
plt.figure(figsize=(10, 5))
plt.subplot(121)
nx.draw(nx_multi_bbn, with_labels=True, font_weight='bold')
plt.title('Multi-connected BBN')
plt.subplot(122)
nx.draw(nx_singly_bbn, with_labels=True, font_weight='bold')
plt.title('Singly-connected BBN') | jupyter/generate-bbn.ipynb | vangj/py-bbn | apache-2.0 |
Now, let's print out the probabilities of each node for the multi- and singly-connected BBNs. | join_tree = InferenceController.apply(m_bbn)
for node in join_tree.get_bbn_nodes():
potential = join_tree.get_bbn_potential(node)
print(node)
print(potential)
print('>')
join_tree = InferenceController.apply(s_bbn)
for node in join_tree.get_bbn_nodes():
potential = join_tree.get_bbn_potential(node)
print(node)
print(potential)
print('>') | jupyter/generate-bbn.ipynb | vangj/py-bbn | apache-2.0 |
Generate a lot of graphs and visualize them | def generate_graphs(n=10, prog='neato', multi=True):
d = {}
for i in range(n):
max_nodes = np.random.randint(3, 8)
max_iter = np.random.randint(10, 100)
if multi is True:
g, p = generate_multi_bbn(max_nodes, max_iter=max_iter)
else:
g, p = generate_singly_bbn(max_nodes, max_iter=max_iter)
bbn = convert_for_exact_inference(g, p)
pos = nx.nx_agraph.graphviz_layout(g, prog=prog)
d[i] = {
'g': g,
'p': p,
'bbn': bbn,
'pos': pos
}
return d
def draw_graphs(graphs, prefix):
fig, axes = plt.subplots(5, 2, figsize=(15, 20))
for i, ax in enumerate(np.ravel(axes)):
graph = graphs[i]
nx.draw(graph['g'], pos=graph['pos'], with_labels=True, ax=ax)
ax.set_title('{} Graph {}'.format(prefix, i + 1))
multi_graphs = generate_graphs(multi=True)
singly_graphs = generate_graphs(multi=False)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
draw_graphs(multi_graphs, 'Multi-connected')
draw_graphs(singly_graphs, 'Singly-connected') | jupyter/generate-bbn.ipynb | vangj/py-bbn | apache-2.0 |
Efficient Computation of Powers
The function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and needs $n$ multiplications to compute $m^n$. | def power(m, n):
r = 1
for i in range(n):
r *= m
return r
power(2, 3), power(3, 2)
%%time
p = power(3, 500000)
p | Python/Chapter-02/Power.ipynb | karlstroetmann/Algorithms | gpl-2.0 |
Next, we try a recursive implementation that is based on the following two equations:
1. $m^0 = 1$
2. $m^n = \begin{cases} m^{n//2} \cdot m^{n//2} & \text{if } n \text{ is even}; \\ m^{n//2} \cdot m^{n//2} \cdot m & \text{if } n \text{ is odd}. \end{cases}$ | def power(m, n):
if n == 0:
return 1
p = power(m, n // 2)
if n % 2 == 0:
return p * p
else:
return p * p * m
%%time
p = power(3, 500000) | Python/Chapter-02/Power.ipynb | karlstroetmann/Algorithms | gpl-2.0 |
LDA Classifier Object & Fit
Now we are in a position to run the LDA classifier. This, as you can see from the three lines below, is as easy as it gets. | from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
# Create LDA object and run classifier
lda = LDA(solver='lsqr')
lda = lda.fit(X_train, y_train)
lda | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
The parameter solver='lsqr' specifies the method by which the covariance matrix is estimated. lsqr follows the approach introduced in the preceding subsection. Others such as svd or eigen are available. See Scikit-learn's guide or the function description.
Every function in sklearn has different attributes and methods. Sklearn's convention is to store anything that is derived from the data in attributes that end with a trailing underscore. That is to separate them from parameters that are set by the user (Mueller and Guido (2017)). For example, the estimated covariance matrix can be printed with this command. | lda.covariance_
In a Jupyter notebook, to see all options you can simply type lda. and hit tab.
LDA Performance
Here are some basic metrics on how the LDA classifier performed on the training data. | print('default-rate: {0: .4f}'.format(np.sum(y_train)/len(y_train)))
print('score: {0: .4f}'.format(lda.score(X_train, y_train)))
print('error-rate: {0: .4f}'.format(1-lda.score(X_train, y_train))) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Overall, 3.33% of all observations defaulted. If we simply labeled each entry as 'non-default' we would have an error rate of this magnitude. So, in comparison to this naive classifier, LDA seems to have some skill in predicting default.
IMPORTANT NOTE: In order to be in line with James et al. (2015), the textbook for this course, we have not performed any train/test split of the data. Therefore we will use the same full matrix X_train and response vector y_train as test data. Performance metrics might be applied to both test and training data but in the end the results on the test set are those that we are ultimately interested in. To drive this point home, I have relabeled the X_train and y_train to X_test, y_test. Nevertheless, be aware that in this unique case it is the same data!
Let us print the confusion matrix introduced in the previous chapter to see the class-wise performance. For reference the confusion matrix is also printed as DataFrame but moving forward be sure to know that row represent the true values and columns predicted labels. | # Relabel variables as discussed
X_test = X_train
y_test = y_train
# Predict labels
y_pred = lda.predict(X_test)
# Sklearn's confusion matrix
print(metrics.confusion_matrix(y_test, y_pred))
# Manual confusion matrix as pandas DataFrame
confm = pd.DataFrame({'Predicted default status': y_pred,
'True default status': y_test})
confm.replace(to_replace={0:'No', 1:'Yes'}, inplace=True)
print(confm.groupby(['True default status','Predicted default status']).size().unstack('Predicted default status')) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
The confusion matrix tells us that for the non-defaulters, LDA only misclassified 22 of them. This is an excellent rate. However, out of the 333 (=253 + 80) people who actually defaulted, LDA classified only 80 correctly. This means our classifier missed out on 76.0% of those who actually defaulted! For a credit card applicant with a bad credit score this is good news. For a credit card company, not so much.
Varying the Threshold Levels
Why does LDA miss all these 'defaulters'? Implicitly, Bayes classifier minimizes the overall error rate, meaning that it yields the smallest possible total number of misclassified observations - irrespective of the class-specific error rate. Bayes classifier works by assigning an observation to class 'default' for which the posterior probability $Pr(\text{default = Yes}|X=x) > 0.5$. For a credit card company who seeks to have as few defaults as possible, this threshold might be too large. Instead, such a company might decide to label any customer with a posterior probability of default above 20% to the 'default' class ($Pr(\text{default = Yes}|X=x) > 0.2$). Let us investigate how the results in such a case would look like. | # Calculated posterior probabilities
posteriors = lda.predict_proba(X_test)
posteriors[:5, :] | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
The function lda.predict_proba() provides the posterior probabilities of $\Pr(\text{default = 0}|X=x)$ in the first column and $\Pr(\text{default = 1}|X=x)$ in the second. The latter column is what we are interested in. Out of convenience we use sklearn's binarize function to classify all probabilities above the threshold of 0.2 as 1 (=default) and generate the confusion matrix. | from sklearn.preprocessing import binarize
# Set threshold and get classes
thresh = 0.2
y_pred020 = binarize([posteriors[:, 1]], thresh)[0]
# new confusion matrix (threshold of 0.2)
print(metrics.confusion_matrix(y_test, y_pred020)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Now LDA misclassifies only 140 out of 333 defaults, or 42.0%. Thats a sharp improvement over the 76.0% from before. But this comes at a price: Before, of those who did not default LDA mislabeled only 22 (or 0.2%) incorrectly. This number increased now to 232 (or 2.4%). Combined, the total error rate increased from 2.75% to 3.72%. For a credit card company, this might be a price they are willing to pay to have a more accurate identification of individuals who default.
Below code snippet calculates and plots the overall error rate, the proportion of missed defaulting customers and the fraction of error among the non-defaulting customers as a function of the threshold value for the posterior probability that is used to assign classes. | # Array of thresholds
thresh = np.linspace(0, 0.5, num=100)
er = [] # Total error rate
der = [] # Defaults error rate
nder = [] # Non-Defaults error rate
for t in thresh:
# Sort/arrange data
y_pred_class = binarize([posteriors[:, 1]], t)[0]
confm = metrics.confusion_matrix(y_test, y_pred_class)
# Calculate error rates
er = np.append(er, (confm[0, 1] + confm[1, 0]) / len(posteriors))
der = np.append(der, confm[1, 0] / np.sum(confm[1, :]))
nder = np.append(nder, confm[0, 1] / np.sum(confm[0, :]))
# Plot
plt.figure(figsize=(12, 6))
plt.plot(thresh, er, label='Total error rate')
plt.plot(thresh, der, label='Missed defaults')
plt.plot(thresh, nder, label='Missed non-defaults')
plt.xlim(0, 0.5)
plt.xlabel('Threshold')
plt.ylabel('Error Rate')
plt.legend(); | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
How do we know what threshold value is best? Unfortunately there's no formula for it. "Such a decision must be based on domain knowledge, such as detailed information about costs associated with defaults" (James et al. (2013, p.147)) and it will always be a trade-off: if we increase the threshold we reduce the missed non-defaults but at the same time increase the missed defaults.
Performance Metrics
This is now the perfect opportunity to refresh our memory on a few classification performance measures introduced in the previous chapters and add a few more to have a full bag of performance metrics. The following table will help in doing this.
<img src="Graphics/0208_ConfusionMatrixDefault.png" alt="ConfusionMatrixDefault" style="width: 800px;"/>
We will use the following abbreviations (Markham (2016)):
True Positives (TP): correctly predicted defaults
True Negatives (TN): correctly predicted non-defaults
False Positives (FP): incorrectly predicted defaults ("Type I error")
False Negatives (FN): incorrectly predicted non-defaults ("Type II error") | # Assign confusion matrix values to variables
confm = metrics.confusion_matrix(y_test, y_pred)
print(confm)
TP = confm[1, 1] # True positives
TN = confm[0, 0] # True negatives
FP = confm[0, 1] # False positives
FN = confm[1, 0] # False negatives | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
So far we've encountered the following performance metrics:
Score
Error rate
Sensitivity and
Specificity.
We briefly recap their meaning, how they are calculated, and how to call them in Scikit-learn. We will make use of the functions in the metrics sublibrary of sklearn.
Score
Score = (TN + TP) / (TN + TP + FP + FN)
Fraction of (overall) correctly predicted classes | print((TN + TP) / (TN + TP + FP + FN))
print(metrics.accuracy_score(y_test, y_pred))
print(lda.score(X_test, y_test)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Error rate
Error rate = 1 - Score or Error rate = (FP + FN) / (TN + TP + FP + FN)
Fraction of (overall) incorrectly predicted classes
Also known as "Misclassification Rate" | print((FP + FN) / (TN + TP + FP + FN))
print(1 - metrics.accuracy_score(y_test, y_pred))
print(1 - lda.score(X_test, y_test)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Specificity
Specificity = TN / (TN + FP)
Fraction of correctly predicted negatives (e.g. 'non-defaults') | print(TN / (TN + FP)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Sensitivity or Recall
Sensitivity = TP / (TP + FN)
Fraction of correctly predicted 'positives' (e.g. 'defaults'). Basically asks the question: "When the actual value is positive, how often is the prediction correct?"
Also known as True positive rate
Counterpart to Precision | print(TP / (TP + FN))
print(metrics.recall_score(y_test, y_pred)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
The above four classification performance metrics we already encountered. There are two more metrics we want to cover: Precision and the F-Score.
Precision
Precision = TP / (TP + FP)
Refers to the accuracy of a positive ('default') prediction. Basically asks the question: "When a positive value is predicted, how often is the prediction correct?"
Counterpart to Recall | print(TP / (TP + FP))
print(metrics.precision_score(y_test, y_pred)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
<img src="Graphics/0208_ConfusionMatrixDefault.png" alt="ConfusionMatrixDefault" style="width: 800px;"/>
F-Score
Van Rijsbergen (1979) introduced a measure that is still widely used to evaluate the accuracy of predictions in two-class (binary) classification problems: the F-Score. It combines Precision and Recall (aka Sensitivity) in one metric and tells us something about the relations between data's positive labels and those given by a classifier. It is a single measure of a classification procedure's usefullness and in general the rule is that the higher the F-Score, the better the predictive power of the classification procedure. It is defined as:
\begin{align}
F_{\beta} &= \frac{(1 + \beta^2) \cdot \text{precision} \cdot \text{recall}}{\beta^2 \cdot \text{precision} + \text{recall}} \\[2ex]
&= \frac{(1+\beta^2) \cdot TP}{(1+\beta^2) \cdot TP + \beta^2 \cdot FN + FP}
\end{align}
This measure employs a parameter $\beta$ that captures a user's preference (Guggenbuehler (2015)). The most common value for $\beta$ is 1. This $F_1$-score weights both precision and recall evenly (simple harmonic mean). In rare cases the $F_2$-score is used, which puts twice as much weight on recall as on precision (Hripcsak and Rotschild (2005)); a short sketch of this follows after the code below. | print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.f1_score(y_test, y_pred))
print(((1+1**2) * TP)/((1+1**2) * TP + FN + FP))
print(metrics.classification_report(y_test, y_pred)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
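As a small illustrative addition, the sketch below uses sklearn's fbeta_score to compute both the F1- and the recall-weighted F2-score for the same predictions; y_test and y_pred are assumed to be the objects defined above.

```python
# Hedged sketch (addition): sklearn's general F-beta score. beta=1 reproduces
# the F1-score above; beta=2 weights recall twice as heavily as precision.
# Assumes y_test and y_pred from the cells above.
from sklearn import metrics

print('F1: {0:.4f}'.format(metrics.fbeta_score(y_test, y_pred, beta=1)))
print('F2: {0:.4f}'.format(metrics.fbeta_score(y_test, y_pred, beta=2)))
```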
Let us compare this to the situation where we set the posterior probability threshold for 'default' at 20%. | # Confusion matrix & clf-report for cut-off
# value Pr(default=yes | X = x) > 0.20
print(metrics.confusion_matrix(y_test, y_pred020))
print(metrics.classification_report(y_test, y_pred020)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
We see that by reducing the cut-off level from $\Pr(\text{default} = 1| X=x) > 0.5$ to $\Pr(\text{default} = 1| X=x) > 0.2$ precision decreases but recall improves. This changes the $F_1$-score.
Does this mean that a threshold of 20% is more appropriate? In general, one could argue for a 'yes'. Yet, as mentioned before, this boils down to domain knowledge. Where the $F_1$-score is of help, together with the other metrics introduced, is when we compare models against each other and want to determine which one performed best. For example if we compare results from logistic regression and LDA (and both used a cut-off level of 50%) the F-score would suggest that the one with the higher value performed better.
For a summary on performance metrics the following two ressources are recommended:
For the interested reader an excellent and easily accessible summary on performance metrics is the article by Sokolova and Lapalme (2009).
For further details and examples please also consider the scikit-learn discription.
Precision-Recall Curve
If one is interested in understanding how precision and recall vary at different threshold levels, there is a function for this. | # Extract data displayed in above plot
precision, recall, threshold = metrics.precision_recall_curve(y_test, posteriors[:, 1])
print('Precision: ', precision)
print('Recall: ', recall)
print('Threshold: ', threshold) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
This we can easily visualize - done in the next code snippet. We also add some more information to the plot by displaying the Average Precision (AP) and the Area under the Curve (AUC). The former summarizes the plot in that it calculates the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight (see further description here). The latter calculates the area under the curve using the trapezoidal rule. Notice that ideally the function hugs the top-right corner. | # Calculate the average precisions score
y_dec_bry = lda.decision_function(X_test)
average_precision = metrics.average_precision_score(y_test, y_dec_bry)
# Calculate AUC
prec_recall_auc = metrics.auc(recall, precision)
# Plot Precision/Recall variations given different
# levels of thresholds
plt.plot(recall, precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('2-class Precision-Recall curve: \n AP={0:0.2f} / AUC={1:0.2f}'.format(
average_precision, prec_recall_auc)); | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
ROC Curve
Having introduced the major performance measures, let us now discuss the so-called ROC curve (short for "receiver operating characteristic"). This is a very popular way of visualizing the performance of binary classifiers. Its origins are in signal detection theory during WWII (Flaach (2017)), but it has since found application in medical decision making and machine learning (Fawcett (2006)). ROC investigates the relationship between sensitivity and specificity of a binary classifier. Sensitivity (or true positive rate) measures the proportion of positives (defaults) correctly classified. Specificity (or true negative rate) measures the proportion of negatives (non-defaults) correctly classified.
Above we calculated that if we use $\Pr(\text{default = Yes}|X=x) > 0.5$ to classify posterior probabilities as defaults, LDA has its best overall error rate but misses 76.0% of the customers who actually defaulted. By decreasing the threshold to 0.2 we improved the accuracy of detecting defaults, but this came at the cost of a higher overall error rate. This was the trade-off we faced. The ROC curve serves to visualize a variation of this trade-off. It varies the cut-off threshold from 0 to 1 and calculates for each threshold the true positive rate (aka sensitivity) and false positive rate (equals 1 - specificity). These values are then plotted with the former on the vertical and the latter on the horizontal axis.
Though this might feel a bit abstract if one is not familiar with all these technical terms, the interpretation is fortunately fairly simple. The ideal ROC curve will hug the top left corner. In that case, the area under the curve (AUC) is biggest. The bigger the AUC, the better the classifier. A perfect classifier has an AUC of 1.
Here's how we calculate the ROC numbers, the corresponding area under the curve and how we plot it. | # Compute ROC curve and ROC area (AUC) for each class
fpr, tpr, thresholds = metrics.roc_curve(y_test, posteriors[:, 1])
roc_auc = metrics.auc(fpr, tpr)
plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, lw=2, label='ROC curve (area = {0: 0.2f})'.format(roc_auc))
plt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--')
plt.xlim([-0.01, 1.0])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.title('Receiver operating characteristic (ROC)', fontweight='bold', fontsize=18)
plt.legend(loc="lower right"); | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
An AUC value of 0.95 is close to the maximum of 1 and should be deemed very good. The dashed black line puts this in perspective: it represents the "no information" classifier; this is what we would expect if the probability of default is not associated with 'student' status and 'balance'. Such a classifier, that performs no better than chance, is expected to have an AUC of 0.5.
Quadratic Discriminant Analysis
Underlying Assumptions
For LDA we assume that observations within each class are drawn from a multivariate normal distribution with a class-specific mean vector and a common covariance matrix: $X \sim N(\mu_k, \Sigma)$. Quadratic discriminant analysis (QDA) relaxes these assumptions somewhat. The basic assumption is still that the observations follow a multivariate normal distribution, however, QDA allows for class-specific means and covariance matrices: $X \sim N(\mu_k, \Sigma_k)$, where $\Sigma_k$ is a covariance matrix for the $k$th class. With that, the Bayes classifier assigns an observation to the class for which
\begin{equation}
\delta_k(x) = - \frac{1}{2} \ln(|\Sigma_k|) - \frac{1}{2} (x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) + \ln(\Pr(k))
\end{equation}
is highest. For a derivation of this result see again the appendix of the script. As was the case for LDA, the parameters $\mu_k$, $\Sigma_k$, and $\Pr(k)$ are again estimated from the training data with the same formulas introduced in this notebook.
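To make the formula concrete, here is a minimal sketch - not part of the original notebook - of how one could evaluate the quadratic discriminant score for a single class with NumPy. The names mu_k, Sigma_k and prior_k are hypothetical and are assumed to hold the class mean vector, class covariance matrix and class prior estimated from the training data.

import numpy as np

def qda_score(x, mu_k, Sigma_k, prior_k):
    # delta_k(x) = -0.5*ln|Sigma_k| - 0.5*(x - mu_k)' Sigma_k^{-1} (x - mu_k) + ln(Pr(k))
    diff = x - mu_k
    return (-0.5 * np.log(np.linalg.det(Sigma_k))
            - 0.5 * diff @ np.linalg.solve(Sigma_k, diff)
            + np.log(prior_k))

An observation is then assigned to the class with the highest score.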
The figure below depicts both LDA and QDA. Both classifiers were trained on the same data. Due to the different variability of the two classes, the QDA algorithm seems to perform slightly better in this case.
<img src="Graphics/0208_QDABayesDescBoundary2d.png" alt="QDABayesDescBoundary2d" style="width: 1000px;"/>
Under what circumstances should we prefer QDA over LDA? As always, there's no straight answer to this question. Obviously, performance should be king. However, it is said that LDA tends to be a better bet than QDA if the training set is small. In contrast, if the training set is very large or the assumption of a common covariance matrix is clearly incorrect, then QDA is recommended. Beyond that, we have to keep in mind that QDA estimates $K p(p+1)/2$ parameters. So if the number of predictors $p$ is large, QDA might take some time to process (James et al. (2013)).
Naive Bayes
Naive Bayes is the name for a family of popular ML algorithms that are often used in text mining. Text mining is a field of ML that deals with extracting quantitative information from text. A simple example of it is the analysis of Twitter feeds in order to predict stock market reactions. There exist different variations of Naive Bayes applications. One is called 'Gaussian Naive Bayes' and works similarly to QDA - with the exception that contrary to QDA the covariance matrices $\Sigma$ are assumed to be diagonal. This means $\Sigma_k$ only contains the variances of the different features for class $k$. Its covariance terms (the off-diagonal elements) are assumed to be zero. Because of its popularity, Naive Bayes is well documented in text books and on the web. A good starting point is Scikit-learn's tutorial on Naive Bayes, Collins (2013) or Russell and Norvig (2009, p.499). To apply the algorithm in Python you want to use sklearn.naive_bayes.GaussianNB() or (for text mining preferably) sklearn.naive_bayes.MultinomialNB().
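As a brief illustration - a sketch only, assuming the same X_train, y_train and X_test objects used throughout this notebook - Gaussian Naive Bayes follows exactly the same fit/predict pattern as LDA and QDA:

from sklearn.naive_bayes import GaussianNB

# Fit Gaussian Naive Bayes on the training data and score it on the test set
gnb = GaussianNB().fit(X_train, y_train)
y_pred_gnb = gnb.predict(X_test)
print(metrics.confusion_matrix(y_test, y_pred_gnb))
print(gnb.score(X_test, y_test))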
QDA in Python
The application of QDA follows the one detailed for LDA. Therefore we let the code speak for itself. | from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
# Run qda on training data
qda = QDA().fit(X_train, y_train)
qda
# Predict classes for qda
y_pred_qda = qda.predict(X_test)
posteriors_qda = qda.predict_proba(X_test)[:, 1]
# Print performance metrics
print(metrics.confusion_matrix(y_test, y_pred_qda))
print(qda.score(X_test, y_test))
print(metrics.classification_report(y_test, y_pred_qda)) | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
The performance seems to be slightly better than with LDA. Let's plot the ROC curve for both LDA and QDA. | # Compute ROC curve and ROC area (AUC) for each class
fpr_qda, tpr_qda, _ = metrics.roc_curve(y_test, posteriors_qda)
roc_auc_qda = metrics.auc(fpr_qda, tpr_qda)
plt.figure(figsize=(6, 6))
plt.plot(fpr, tpr, lw=2, label='LDA ROC (AUC = {0: 0.2f})'.format(roc_auc))
plt.plot(fpr_qda, tpr_qda, lw=2, label='QDA ROC (AUC = {0: 0.2f})'.format(roc_auc_qda))
plt.plot([0, 1], [0, 1], lw=2, c = 'k', linestyle='--')
plt.xlim([-0.01, 1.0])
plt.ylim([-0.01, 1.01])
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.title('ROC Curve', fontweight='bold', fontsize=18)
plt.legend(loc="lower right"); | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
With respect to Sensitivity (Recall) and Specificity, LDA and QDA perform virtually identically. Therefore, one might give the edge here to QDA because of its slightly better Recall and $F_1$-Score.
Reality and the Gaussian Assumption for LDA & QDA
Despite the rather strict assumptions regarding normal distribution, LDA and QDA perform well on an amazingly large and diverse set of classification tasks. Friedman et al. (2001, p. 111) put it this way:
"Both techniques are widely used, and entire books are devoted to LDA. It seems that whatever exotic tools are the rage of the day, we should always have available these two simple tools. The question arises why LDA and QDA have such a good track record. The reason is not likely to be that the data are approximately Gaussian, and in addition for LDA that the covariances are approximately equal. More likely a reason is that the data can only support simple decision boundaries such as linear or quadratic, and the estimates provided via the Gaussian models are stable. This is a bias variance tradeoff - we can put up with the bias of a linear decision boundary because it can be estimated with much lower variance than more exotic alternatives. This argument is less believable for QDA, since it can have many parameters itself - although perhaps fewer than the non-parametric alternatives."
Whether LDA or QDA should be applied to categorical/binary features warrants a separate note. It is true that discriminant analysis was designed for continuous features (Ronald A. Fisher (1936)) where the underlying assumption is that the values are normally distributed. However, as the above quote shows, studies have proven the robustness of the model even in light of violations of the rather rigid normality assumption. This is not only true for continuous features but also for categorical/binary features. For more details see Huberty et al. (1986). It follows that applying LDA and QDA is possible, though the user should carefully check the output. We will discuss appropriate cross validation methods to do so in the next chapter.
Further Ressources
In writing this notebook, many resources were consulted. For internet resources the links are provided within the text above and will therefore not be listed again. Beyond these links, the following resources were consulted and are recommended as further reading on the discussed topics:
Collins, Michael, 2013, The Naive Bayes Model, Maximum-Likelihood Estimation, and the EM Algorithm, Technical report, Columbia University, New York.
Fawcett, Tom, 2006, An introduction to ROC analysis, Pattern Recognition Letters 27, 861–874.
Fisher, Ronald A., 1936, The Use of Multiple Measurements in Taxonomic Problems, Annals of Human Genetics 7, 179-188.
Flach, Peter A., 2017, ROC analysis, in Claude Sammut, and Geoffrey I. Webb, eds., Encyclopedia of Machine Learning and Data Mining, 1109–1116 (Springer Science & Business Media, New York, NY).
Friedman, Jerome, Trevor Hastie, and Robert Tibshirani, 2001, The Elements of Statistical Learning (Springer, New York, NY).
Guggenbuehler, Jan P., 2015, Predicting Net New Money Using Machine Learning Algorithms and Newspaper Articles, Technical report, University of Zurich, Zurich.
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani, 2013, An Introduction to Statistical Learning: With Applications in R (Springer Science & Business Media, New York, NY).
Jobson, J. David, and Bob Korkie, 1980, Estimation for Markowitz Efficient Portfolios, Journal of the American Statistical Association 75, 544–554.
Hripcsak, George, and Adam S Rothschild, 2005, Agreement, the F-measure, and Reliability in Information Retrieval, Journal of the American Medical Informatics Association 12, 296–298.
Huberty, Carl J., Joseph M. Wisenbaker, Jerry D. Smith, and Janet C. Smith, 1986, Using Categorical Variables in Discriminant Analysis, Multivariate Behavioral Research 21, 479-496.
Ledoit, Olivier, and Michael Wolf, 2004, Honey, i shrunk the sample covariance matrix, The Journal of Portfolio Management 30, 110–119.
Müller, Andreas C., and Sarah Guido, 2017, Introduction to Machine Learning with Python (O’Reilly Media, Sebastopol, CA).
Raschka, Sebastian, 2014, Naive Bayes and Text Classification I - Introduction and Theory from website, http://sebastianraschka.com/Articles/2014_naive_bayes_1.html, 08/31/2017
Russell, Stuart, and Peter Norvig, 2009, Artificial Intelligence: A Modern Approach (Prentice Hall Press, Upper Saddle River, NJ).
Sokolova, Marina, and Guy Lapalme, 2009, A systematic analysis of performance measures for classification tasks, Information Processing & Management 45, 427–437.
Van Rijsbergen, Cornelis Joost, 1979, Information Retrieval (Butterworths, London).
Addendum
predict, predict_proba, and decision_function
Let us quickly discuss the difference between the
* classifier.predict(),
* classifier.predict_proba(), and
* classifier.decision_function().
classifier.predict() we already know: it simply predicts the label given the trained classifier and a feature matrix X (preferably a test set). | lda.predict(X_test)[:10] | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
classifier.predict_proba() we have also introduced above: it provides probabilities of $\Pr(y = 0|X=x)$ in the first column and $\Pr(y = 1|X=x)$ in the second. | lda.predict_proba(X_test)[:10] | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Finally, classifier.decision_function() predicts confidence scores given the feature matrix. The confidence score for a sample is the signed distance of that sample to the separating hyperplane. What this exactly means should become clearer once we have discussed the support vector classifier (SVC). | lda.decision_function(X_test)[:10] | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
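For a binary linear classifier such as LDA, predict_proba is essentially a logistic (sigmoid) transform of decision_function. A quick empirical check - a sketch, assuming numpy is available as np - could look like this:

from scipy.special import expit  # logistic sigmoid

# If the relationship holds for this fitted model, this should print True
print(np.allclose(expit(lda.decision_function(X_test)),
                  lda.predict_proba(X_test)[:, 1]))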
ROC & Precision-Recall Curve in Sklearn Version 0.22.1
Starting with Scikit-learn version 0.22.1, the plotting of the ROC and Precision-Recall curves was integrated into Scikit-learn and there's now a function available to cut the plotting work a bit short. Below are two code snippets that show how to do it. | # Plot Precision-Recall Curve
disp = metrics.plot_precision_recall_curve(lda, X_test, y_test);
disp = metrics.plot_roc_curve(lda, X_test, y_test); | 0208_LDA-QDA.ipynb | bMzi/ML_in_Finance | mit |
Question 1 | df['temperature'].hist() | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
No, this sample isn't normal; it is definitely skewed. However, "this is a condition for the CLT... to apply" is just wrong. The whole power of the CLT is that it says that the distribution of sample means (not the sample distribution) tends to a normal distribution regardless of the distribution of the population or sample. What we do care about for the CLT is that our data is independent, which, assuming the data was gathered in a traditional manner, should be the case.
Question 2 | m=df['temperature'].mean()
m | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
With 130 data points, it really doesn't matter if we use the normal or t distribution. A t distribution with 129 degrees of freedom is essentially a normal distribution, so the results should not be very different. However, in this day and age I don't see the purpose of even bothering with the normal distribution. Looking up t distribution tables is awfully annoying, so it once had a purpose, however nowadays I'm just going to let a computer calculate either for me, and both are equally simple. | from scipy.stats import t, norm
from math import sqrt
patients=df.shape[0]
n=patients-1
patients
SE=df['temperature'].std()/sqrt(n)
SE | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
Our null hypothesis is that the true average body temperature is $98.6^\circ F$. We'll be calculating the probability of finding a value less than or equal to the mean we obtained in this data given that this null hypothesis is true, i.e. our alternative hypothesis is that the true average body temperature is less than $98.6^\circ F$. | t.cdf((m-98.6)/SE,n)
norm.cdf((m-98.6)/SE) | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
Regardless of what distribution we assume we are drawing our sample means from, the probability of seeing this data, or a sample mean even lower, is basically zero if the true average body temperature were 98.6.
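As a cross-check - a sketch, assuming scipy is available - the same one-sample test can be run with scipy's built-in function. Note that it reports a two-sided p-value, so the one-sided value is half of it when the sample mean lies below 98.6:

from scipy import stats

t_stat, p_two_sided = stats.ttest_1samp(df['temperature'], 98.6)
print(t_stat, p_two_sided / 2)  # one-sided p-value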
Question 3 | print(m+t.ppf(0.95,n)*SE)
print(m-t.ppf(0.95,n)*SE)
t.ppf(0.95,n)*SE | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
Our estimate of the true average human body temperature is thus $98.2^\circ F \pm 0.1$.
This confidence interval, however, does not answer the question 'At what temperature should we consider someone's temperature to be "abnormal"?'. We can look at the population distribution, and see right away that the majority of our test subjects would be considered abnormal if we used this interval, which makes no sense.
The confidence intervals only say something about what we can expect of sample means, not about individual values. Unfortunately, we would not expect the percentiles of this data to be drawn from a normal distribution, so I, at least, am not currently equipped to do confidence/hypothesis testing on them. However, I can report the percentiles, which should give a good estimate of what should be considered normal, even though I can't say how confident we can be in these values. | df['temperature'].quantile([.1,.9]) | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
This range, 97.29-99.10 degrees F, includes 80% of the patients in our sample.
This shows the dramatic difference between the sample distribution of the mean and the population distribution: the confidence interval told us that the mean lies within a $\pm 0.1^\circ$ range (with 90% confidence), while the population distribution gives a $\pm 0.9^\circ$ range that covers only 80% of individuals.
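One hedged way around that limitation - a sketch, not part of the original analysis, assuming numpy is imported as np - is to bootstrap the percentiles: resample the data with replacement many times, recompute the 10th and 90th percentiles each time, and read off an empirical confidence interval for each:

boot_low, boot_high = [], []
for _ in range(2000):
    sample = df['temperature'].sample(n=len(df), replace=True)
    boot_low.append(sample.quantile(0.1))
    boot_high.append(sample.quantile(0.9))
print(np.percentile(boot_low, [5, 95]))   # 90% CI for the 10th percentile
print(np.percentile(boot_high, [5, 95]))  # 90% CI for the 90th percentile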
Question 4 | males=df[df['gender']=='M']
males.describe()
females=df[df['gender']=='F']
females.describe()
SEgender = sqrt(females['temperature'].var()/females.shape[0] + males['temperature'].var()/males.shape[0])  # SE of the difference in means uses variances, not standard deviations
SEgender
mgender=females['temperature'].mean()-males['temperature'].mean()
mgender
2*(1-t.cdf(mgender/SEgender,21)) | statistics project 1/sliderule_dsi_inferential_statistics_exercise_1.ipynb | ThomasProctor/Slide-Rule-Data-Intensive | mit |
Now we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import, and how it should be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API. | import matplotlib.pyplot as plt | vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb | paulovn/ml-vm-notebook | bsd-3-clause |
Matplotlib allows extensive customization of the graph aspect. Some of these customizations come together in "styles". Let's see which styles are available: | from __future__ import print_function
print(plt.style.available)
# Let's choose one style. And while we are at it, define thicker lines and big graphic sizes
plt.style.use('bmh')
plt.rcParams['lines.linewidth'] = 1.5
plt.rcParams['figure.figsize'] = (15, 5) | vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb | paulovn/ml-vm-notebook | bsd-3-clause |
Simple plots
Without much more ado, let's display a simple graphic. For that we define a vector variable and a function of that vector to be plotted (the plot call itself follows below). | import numpy as np
x = np.arange( -10, 11 )
y = x*x | vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb | paulovn/ml-vm-notebook | bsd-3-clause |
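To actually draw the parabola just defined, the simplest route is the declarative (Matlab-style) interface - a minimal sketch using the plt prefix imported above:

plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("y = x^2")
plt.title("A simple plot")
plt.show()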
Matplotlib syntax
Matplotlib commands have two variants:
* A declarative syntax, with direct plotting commands. It is inspired by Matlab graphics syntax, so if you know Matlab it will be easy. It is the one used above.
* An object-oriented syntax, more complicated but somehow more powerful
The next cell shows an example of the object-oriented syntax | # Create a figure object
fig = plt.figure()
# Add a graph to the figure. We get an axes object
ax = fig.add_subplot(1, 1, 1) # specify (nrows, ncols, axnum)
# Create two vectors: x, y
x = np.linspace(0, 10, 1000)
y = np.sin(x)
# Plot those vectors on the axes we have
ax.plot(x, y)
# Add another plot to the same axes
y2 = np.cos(x)
ax.plot(x, y2)
# Modify the axes
ax.set_ylim(-1.5, 1.5)
# Add labels
ax.set_xlabel("$x$")
ax.set_ylabel("$f(x)$")
ax.set_title("Sinusoids")
# Add a legend
ax.legend(['sine', 'cosine']); | vmfiles/IPNB/Examples/a Basic/03 Matplotlib essentials.ipynb | paulovn/ml-vm-notebook | bsd-3-clause |
Tabular data with independent (Shapley value) masking | # build a Permutation explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Permutation(model.predict_proba, X)
shap_values = explainer(X[:100])
# get just the explanations for the positive class
shap_values = shap_values[...,1] | notebooks/api_examples/explainers/Permutation.ipynb | slundberg/shap | mit |
Tabular data with partition (Owen value) masking
While Shapley values result from treating each feature independently of the other features, it is often useful to enforce a structure on the model inputs. Enforcing such a structure produces a structure game (i.e. a game with rules about valid input feature coalitions), and when that structure is a nested set of feature groupings we get the Owen values as a recursive application of Shapley values to the groups. In SHAP, we take the partitioning to the limit and build a binary hierarchical clustering tree to represent the structure of the data. This structure could be chosen in many ways, but for tabular data it is often helpful to build the structure from the redundancy of information between the input features about the output label. This is what we do below: | # build a clustering of the features based on shared information about y
clustering = shap.utils.hclust(X, y)
# above we implicitly used shap.maskers.Independent by passing a raw dataframe as the masker
# now we explicitly use a Partition masker that uses the clustering we just computed
masker = shap.maskers.Partition(X, clustering=clustering)
# build a Permutation explainer and explain the model predictions on the given dataset
explainer = shap.explainers.Permutation(model.predict_proba, masker)
shap_values2 = explainer(X[:100])
# get just the explanations for the positive class
shap_values2 = shap_values2[...,1] | notebooks/api_examples/explainers/Permutation.ipynb | slundberg/shap | mit |
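To compare the two masking strategies, one option is to look at the global feature importances implied by each explanation. A short sketch, assuming the standard shap plotting API:

shap.plots.bar(shap_values)   # independent (Shapley value) masking
shap.plots.bar(shap_values2)  # partition (Owen value) masking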
Now, we create a TranslationModel instance: | nmt_model = TranslationModel(params,
model_type='GroundHogModel',
model_name='tutorial_model',
vocabularies=dataset.vocabulary,
store_path='trained_models/tutorial_model/',
verbose=True) | examples/2_training_tutorial.ipynb | Sasanita/nmt-keras | mit |
Now, we must define the inputs and outputs mapping from our Dataset instance to our model | inputMapping = dict()
for i, id_in in enumerate(params['INPUTS_IDS_DATASET']):
pos_source = dataset.ids_inputs.index(id_in)
id_dest = nmt_model.ids_inputs[i]
inputMapping[id_dest] = pos_source
nmt_model.setInputsMapping(inputMapping)
outputMapping = dict()
for i, id_out in enumerate(params['OUTPUTS_IDS_DATASET']):
pos_target = dataset.ids_outputs.index(id_out)
id_dest = nmt_model.ids_outputs[i]
outputMapping[id_dest] = pos_target
nmt_model.setOutputsMapping(outputMapping) | examples/2_training_tutorial.ipynb | Sasanita/nmt-keras | mit |
We can add some callbacks for controlling the training (e.g. sampling every N updates, early stopping, learning rate annealing...). For instance, let's build an Early-Stop callback. Every 2 epochs, it will compute the 'coco' scores on the development set. If the metric 'Bleu_4' doesn't improve for more than 5 checks, training will stop. We need to pass some variables to the callback (in the extra_vars dictionary): | extra_vars = {'language': 'en',
'n_parallel_loaders': 8,
'tokenize_f': eval('dataset.' + 'tokenize_none'),
'beam_size': 12,
'maxlen': 50,
'model_inputs': ['source_text', 'state_below'],
'model_outputs': ['target_text'],
'dataset_inputs': ['source_text', 'state_below'],
'dataset_outputs': ['target_text'],
'normalize': True,
'alpha_factor': 0.6,
'val': {'references': dataset.extra_variables['val']['target_text']}
}
vocab = dataset.vocabulary['target_text']['idx2words']
callbacks = []
callbacks.append(PrintPerformanceMetricOnEpochEndOrEachNUpdates(nmt_model,
dataset,
gt_id='target_text',
metric_name=['coco'],
set_name=['val'],
batch_size=50,
each_n_epochs=2,
extra_vars=extra_vars,
reload_epoch=0,
is_text=True,
index2word_y=vocab,
sampling_type='max_likelihood',
beam_search=True,
save_path=nmt_model.model_path,
start_eval_on_epoch=0,
write_samples=True,
write_type='list',
verbose=True)) | examples/2_training_tutorial.ipynb | Sasanita/nmt-keras | mit |
Now we are almost ready to train. We set up some training parameters... | training_params = {'n_epochs': 100,
'batch_size': 40,
'maxlen': 30,
'epochs_for_save': 1,
'verbose': 0,
'eval_on_sets': [],
'n_parallel_loaders': 8,
'extra_callbacks': callbacks,
'reload_epoch': 0,
'epoch_offset': 0} | examples/2_training_tutorial.ipynb | Sasanita/nmt-keras | mit |
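With the parameters in place, training is launched through the model wrapper. In the nmt-keras tutorials this is typically a single call (shown here as the assumed next step):

nmt_model.trainNet(dataset, training_params)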
TicTaeToe Game | from IPython.display import Image
Image(filename='images/TicTaeToe.png') | midterm/kookmin_midterm_정인환.ipynb | initialkommit/kookmin | mit |
This is a simple version of the TicTaeToe game in which the user moves first and plays against the computer.
In the future I plan to extend it with machine learning to improve its playing strength. | # %load TicTaeToe.py
import sys
import random
# Explain how to play the game
print("출처: http://www.practicepython.org")
print("==================================")
print("가로, 세로, 대각선 방향으로 ")
print("세점을 먼저 이어 놓으면 이기는")
print("게임으로 사용자(U)와 Computer(C)가")
print("번갈아 놓습니다.")
print("==================================\n")
# Declare storage for the 3 x 3 board state
# 0 means an empty cell
# 1 means a cell chosen by the user
# 2 means a cell chosen by the computer
dim=3
list4 = [0,0,0,0,0,0,0,0,0]
# Draw the guide board and put the cell numbers inside it
def graph():
k = 1
for i in range(dim+1):
print(" ---"*dim)
for j in range(dim):
if (i < dim):
print("| "+str(k), end=" ")
k = k + 1
if (i != 3):
print("|")
# After each move by the user or the computer,
# check whether somebody has won
def game_wins(list4):
#print(list4)
for i in range(dim):
#checks to see if you win in a column
if list4[i] == list4[i+3] == list4[i+6] == 1:
print("You Won")
elif list4[i] == list4[i+3] == list4[i+6] == 2:
print("You Lost")
#checks to see if you win in a row
if list4[dim*i] == list4[dim*i+1] == list4[dim*i+2] == 1:
print ("You Won")
elif list4[dim*i] == list4[dim*i+1] == list4[dim*i+2] == 2:
print("You Lost")
#checks to see if you win in a diagonal
if list4[0] == list4[4] == list4[8] == 1:
print ("You Won")
elif list4[0] == list4[4] == list4[8] == 2:
print("You Lost")
if list4[2] == list4[4] == list4[6] == 1:
print ("You Won")
elif list4[2] == list4[4] == list4[6] == 2:
print("You Lost")
# Draw the board showing cell numbers or the moves already played
def graph_pos(list4):
for idx in range(len(list4)):
if (idx % 3 == 0):
print(" ---"*dim)
if (list4[idx] == 0):
print("| "+str(idx+1), end=" ")
elif (list4[idx] == 1):
print("| "+"U", end=" ")
else:
print("| "+"C", end=" ")
if (idx % 3 == 2):
print("|")
print("\n")
# Start the game
go = input("Play TicTaeToe? Enter, or eXit?")
if (go == 'x' or go == 'X'):
sys.exit(0)
graph()
print("\n")
while(1): # loop until the game is decided
    # let the user pick an empty cell
pos = int(input("You : ")) - 1
while (pos < 0 or pos > 8 or list4[pos] != 0):
pos = int(input("Again : ")) - 1
list4[pos] = 1
    # Redraw the board and check for a winner
graph_pos(list4)
game_wins(list4)
    # Computer's turn: pick a random empty cell and store it in the list
pos = random.randrange(9)
while (list4[pos] != 0):
pos = random.randrange(9)
print("Computer : " + str(pos+1))
list4[pos] = 2
    # Redraw the board and check for a winner
graph_pos(list4)
game_wins(list4) | midterm/kookmin_midterm_정인환.ipynb | initialkommit/kookmin | mit |
Problem 1.1) Make a simple mean coadd
Simulate N observations of a star, and coadd them by taking the mean of the N observations. (We can only do this because they are already astrometrically and photometrically aligned and have the same background value.) | MU = 35
S = 100
F = 100
FWHM = 5
x = np.arange(100)
# simulate a single observation of the star and plot:
y = # complete
pixel_plot(x, y)
# Write a simulateN function that returns an array of size (N, x)
# representing N realizations of your simulated star
# This will stand in as a stack of multiple observations of one star
def simulateN(x, mu, fwhm, S, F, N):
"""simulate a noisy stellar signal
Parameters
----------
x : array-like
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
S : float or array-like of len(x)
Sky background for each pixel
F : float
Total stellar flux
N: int
Number of images to simulate
Returns
-------
noisy_counts : array-like of shape (N, x)
the (noisy) number of counts in each pixel
"""
# complete
return noisy_counts
# simulate N=50 images with the same star
x = np.arange(100)
stack = # complete
# where stack is an array of size (50, 100) representing a pile of 50 images with 100 pixels
# coadd by taking the mean and plot the result
coadd = # complete
pixel_plot(x, coadd)
# Try a few different N to see how it affects the S/N of your result
# Plot the coadds of N=[1, 10, and 100] on the same plot:
# complete
| Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 1.2) SNR vs N
Now compute the observed SNR of the simulated star on each coadd and compare to the expected SNR in the idealized case. The often repeated mnemonic for SNR inscrease as a function of number of images, $N$, is that noise decreses like $\sqrt{N}$. This is of course idealized case where the noise in each observation is identical.
Using your simulateN function, simulate a series of mean coadds with increasing N.
First, plot the empirical noise/uncertainty/stdev as a function of N. and overplot the expected uncertainty given the input sky level. You can use an area you know isn't touched by the star.
Next, plot the empirical SNR of the star (measured flux/fluxErr) as a function of N. Overplot the expected SNR. You can assume you know the sky level.
Your expected scaling with N should roughly track your empirical estimate. | # complete
# hint. One way to start this
# std = []
# flux = []
# Ns = np.arange(1, 1000, 5)
# for N in Ns:
# y = simulateN(...)
# complete
# plt.plot(Ns, ..., label="coadd")
# plt.plot(Ns, ..., label="expected")
# plt.xlabel('N')
# plt.ylabel('pixel noise')
# plt.legend()
# complete
# plt.plot(Ns, ..., label="coadd")
# plt.plot(Ns, ..., label="expected")
# plt.xlabel('N')
# plt.ylabel('PSF Flux SNR')
# plt.legend() | Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 2) PSFs and Image weights in coadds
Problem (1) pretends that the input images are identical in quality; however, this is never the case in practice. In practice, adding another image does not necessarily increase the SNR. For example, imagine you have two exposures, but in one the dome light was accidentally left on. A coadd with these two images weighted equally will have a worse SNR than the first image alone. Therefore the images should be aggregated with a weighted mean, so that images of poor quality don't degrade the quality of the coadd. What weights do we pick?
Weights can be chosen to either minimize the variance on the coadd or maximize the SNR of point sources on the coadd.
Some background:
Assuming that all noise sources are independent, the SNR of the measurement of flux from a star is:
\begin{equation}
SNR \propto {{N_{\rm photons}}\over{ \sigma_{\rm sky}} \sqrt{A} },
\end{equation}
where $N_{\rm photons}$ is the number of photons detected from the star,
$A$ is the area in pixels covered by the star.
The per-pixel sky noise $\sigma_{sky}$ includes all sources of noise: dark current, read noise and sky background, and is encoded in the variance plane of the image. For the epoch $i$, $\sigma^2_{i, {\rm sky}}$ is the average of the variance plane.
The $N_{\rm photons}$ is proportional to transparency $T$, and the area that the stellar photons cover is determined by the seeing: $A \propto {\rm FWHM}^2$.
Therefore, a coadd optimized for point-source detection would weight each image by its SNR$^2$, using the following weights, which prefer good-seeing epochs taken when the sky is transparent and dark:
\begin{align}
w_i & = {\rm SNR}^2 \propto {{T_i^2}\over{{\rm FWHM_i}^2 \sigma_i^2}}.
\end{align}
The usual inverse-variance weighting which produces the minimum-variance co-add, is given by,
\begin{align}
w_i & =T_i^2/\sigma_i^2
\end{align}
In practice, the factor of $T_i^2$ is incorporated into the variance when flux-scaling the single-epoch images to a common zeropoint. This step multiplies the image by a scale-factor, which increases the variance of the image by the square of the scale factor. The scale factor is inversely proportional to the transparency, so that
$\sigma_{scaled}= \sigma/T$.
For this problem assume the images are all on the same zeropoint (like problem 1) i.e. T=1.
Problem 2.1 Weighting images in Variable Sky
Now simulate 50 observations of stars with Sky S ranging from 100 to 1000. Remember to subtract this background off before stacking this time! Plot the plain (unweighted) mean coadd vs. the minimum variance coadd. Weights should add up to 1. What's the empirical noise estimate of the coadd? | # complete | Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 2.2 Weighting images in Variable Seeing
Simulate 50 observations with FWHMs ranging from 2 to 10 pixels. Keep the flux amplitude F and the sky noise S both fixed.
Generate two coadds, (1) with the weights that minimize variance and (2) with the weights that maximize point source SNR. Weights should add up to 1. Plot both coadds. | # complete | Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 2.3 Image variance vs per pixel variance (Challenge Problem)
Why do we use per image variances instead of per pixel variances? Let's see! Start tracking the per-pixel variance when you simulate the star. Make a coadd of 200 observations with FWHM's ranging from 2 to 20 pixels. Make a coadd weighted by the per-pixel inverse variance. How does the profile of the star look in this coadd compared to an unweighted coadd and compared to the coadd with the $w_i = \frac{1}{{\rm FWHM_i}^2 \sigma_i^2}$? (You may have to plot the difference to see the change). | # complete | Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 3.2) What if we have amazing astrometric registration
and shrink the astrometric offset by a factor of a thousand. Is there a star sufficiently bright to produce the same dipole? What is its FLUX SCALE? | ASTROM_OFFSET = 0.0001
PSF = 1.
# complete
# Plot both dipoles (for the offset=0.1 and the offset=0.0001 in the same figure.
# Same or different subplots up to you. | Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Problem 3.3) Distance between peaks.
Does the distance between the dipole's positive and negative peaks depend on the astrometric offset? If not, what does it depend on? You can answer this by visualizing the dipoles vs offsets. But for a challenge, measure the distance between peaks and plot it as a function of astrometric offset or another factor. | # complete | Sessions/Session11/Day4/CoadditionAndSubtraction.ipynb | LSSTC-DSFP/LSSTC-DSFP-Sessions | mit |
Matrix creation | B = np.arange(9).reshape(3, 3)
print(B)
A = np.array([
[-1, 42, 10],
[12, 0, 9]
])
print(A)
# inspecting the matrices
print(A.shape) # 2 x 3
print(B.shape) # 3 x 3
# We have 2 dimensions `X1` and `X2`
print(A.ndim)
print(B.ndim)
Zeros = np.zeros((2, 3))
print(Zeros)
Ones = np.ones((3, 3))
print(Ones)
Empty = np.empty((4, 4))
print(Empty) | disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb | jhonatancasale/graduation-pool | apache-2.0 |
Vector creation | print(np.arange(5, 30, 7))
print(np.arange(10, 13, .3))
print(np.linspace(0, 2, 13)) | disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb | jhonatancasale/graduation-pool | apache-2.0 |
np.arange behavior with large numbers | print(np.arange(10000))
print(np.arange(10000).reshape(100,100)) | disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb | jhonatancasale/graduation-pool | apache-2.0 |
Basic Operations
$$A_{mxn} \pm B_{mxn} \mapsto C_{mxn}$$
$$u_{1xn} \pm v_{1xn} \mapsto w_{1xn} \quad (u_n \pm v_n \mapsto w_n)$$ | A = np.array([10, 20, 30, 40, 50, -1])
B = np.linspace(0, 1, A.size)
print("{} + {} -> {}".format(A, B, A + B))
print("{} - {} -> {}".format(A, B, A - B)) | disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb | jhonatancasale/graduation-pool | apache-2.0 |
$$f:M_{mxn} \to M_{mxn}$$
$$a_{ij} \mapsto a_{ij}^2$$ | print("{} ** 2 -> {}".format(A, A ** 2)) | disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb | jhonatancasale/graduation-pool | apache-2.0 |
$$f:M_{mxn} \to M_{mxn}$$
$$a_{ij} \mapsto 2\sin(a_{ij})$$ | print("2 * sin({}) -> {}".format(A, 2 * np.sin(A))) | disciplines/SME0819 - Matrices for Applied Statistics/0x00_Fundamentals/Matrices - Fundamentals.ipynb | jhonatancasale/graduation-pool | apache-2.0 |
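Note that all of the operations above act elementwise. The matrix product is a different operation; a brief sketch with fresh example arrays (so the A and B defined earlier are left untouched):

M = np.arange(6).reshape(2, 3)
N = np.arange(9).reshape(3, 3)
print(M * np.ones((2, 3)))  # elementwise product (shapes must match)
print(M @ N)                # matrix product: (2, 3) @ (3, 3) -> (2, 3)
print(M.dot(N))             # equivalent spelling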