markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
Use a classification metric: accuracy. [Classification metrics are different from regression metrics!](https://scikit-learn.org/stable/modules/model_evaluation.html) - Don't use _regression_ metrics to evaluate _classification_ tasks. - Don't use _classification_ metrics to evaluate _regression_ tasks. [Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction? | from sklearn.metrics import accuracy_score
# `train`, `val`, `target`, and `y_train` are assumed to come from earlier cells of this notebook
y_pred = [0] * len(y_train)  # majority-class guess (did not survive) for every training row
accuracy_score(y_train, y_pred)
train[target].value_counts(normalize=True)
# Repeat the majority-class baseline on the validation set
y_val = val[target]
y_val
y_pred = [0] * len(y_val)
accuracy_score(y_val, y_pred) | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
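As an optional aside (not in the original lesson), scikit-learn's DummyClassifier computes the same majority-class baseline without building the prediction list by hand: | from sklearn.dummy import DummyClassifier
import numpy as np
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(np.zeros((len(y_train), 1)), y_train)  # features are ignored by this strategy
print('Baseline accuracy:', baseline.score(np.zeros((len(y_val), 1)), y_val))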
Challenge In your assignment, your Sprint Challenge, and your upcoming Kaggle challenge, you'll begin with the majority class baseline. How quickly can you beat this baseline? Express and explain the intuition and interpretation of Logistic Regression Overview To help us get an intuition for *Logistic* Regression, let's start by trying *Linear* Regression instead, and see what happens... Follow Along Linear Regression? | train.describe()
# 1. Import estimator class
from sklearn.linear_model import LinearRegression
# 2. Instantiate this class
linear_reg = LinearRegression()
# 3. Arrange X feature matrices (already did y target vectors)
features = ['Pclass', 'Age', 'Fare']
X_train = train[features]
X_val = val[features]
# Impute missing values
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train)
X_val_imputed = imputer.transform(X_val)
# 4. Fit the model
linear_reg.fit(X_train_imputed, y_train)
# 5. Apply the model to new data.
# The predictions look like this ...
linear_reg.predict(X_val_imputed)
# Get coefficients
pd.Series(linear_reg.coef_, features)
test_case = [[1, 5, 500]] # 1st class, 5-year old, Rich
linear_reg.predict(test_case) | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
Logistic Regression! | from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Validation Accuracy', log_reg.score(X_val_imputed, y_val))
# The predictions look like this
log_reg.predict(X_val_imputed)
log_reg.predict(test_case)
log_reg.predict_proba(test_case)
# What's the math?
log_reg.coef_
log_reg.intercept_
# The logistic sigmoid "squishing" function, implemented to accept numpy arrays
import numpy as np
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
log_reg.coef_
test_case
sigmoid(0)  # sigmoid(0) == 0.5, i.e., the 50% probability decision boundary
So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study). Use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models Overview Now that we have more intuition and interpretation of Logistic Regression, let's use it within a realistic, complete scikit-learn workflow, with more features and transformations. Follow Along Select these features: `['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']` (Why shouldn't we include the `Name` or `Ticket` features? What would happen here?) Fit this sequence of transformers & estimator: - [category_encoders.one_hot.OneHotEncoder](https://contrib.scikit-learn.org/categorical-encoding/onehot.html) - [sklearn.impute.SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) - [sklearn.preprocessing.StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) - [sklearn.linear_model.LogisticRegressionCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html). Get validation accuracy. | features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
target = 'Survived'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_train.shape, y_train.shape, X_val.shape, y_val.shape
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegressionCV
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
imputer = SimpleImputer(strategy='mean')
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_val_imputed = imputer.transform(X_val_encoded)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
model = LogisticRegressionCV()
model.fit(X_train_scaled, y_train)
y_pred = model.predict(X_val_scaled)
accuracy_score(y_val, y_pred)
print('Validation Accuracy:', model.score(X_val_scaled, y_val)) | Validation Accuracy: 0.7802690582959642
| MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
Plot coefficients: | %matplotlib inline
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
coefficients.sort_values().plot.barh(); | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
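As an optional extra (not part of the original lesson), the same sequence of transformers and estimator can be wrapped in a single scikit-learn Pipeline, which should reproduce the workflow above: | from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='mean'),
    StandardScaler(),
    LogisticRegressionCV()
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy:', pipeline.score(X_val, y_val))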
!pip install -qqq torchtext -qqq pytorch-transformers dgl
!pip install -qqqU git+https://github.com/harvardnlp/pytorch-struct
import torchtext
import torch
from torch_struct import SentCFG
from torch_struct.networks import NeuralCFG
import torch_struct.data
# Download and load the default data.
WORD = torchtext.data.Field(include_lengths=True)
UD_TAG = torchtext.data.Field(init_token="<bos>", eos_token="<eos>", include_lengths=True)
# Download and load the default data.
train, val, test = torchtext.datasets.UDPOS.splits(
fields=(('word', WORD), ('udtag', UD_TAG), (None, None)),
filter_pred=lambda ex: 5 < len(ex.word) < 30
)
WORD.build_vocab(train.word, min_freq=3)
UD_TAG.build_vocab(train.udtag)
train_iter = torch_struct.data.TokenBucket(train,
batch_size=200,
device="cuda:0")
H = 256
T = 30
NT = 30
model = NeuralCFG(len(WORD.vocab), T, NT, H)
model.cuda()
opt = torch.optim.Adam(model.parameters(), lr=0.001, betas=[0.75, 0.999])
def train():
#model.train()
losses = []
for epoch in range(10):
for i, ex in enumerate(train_iter):
opt.zero_grad()
words, lengths = ex.word
N, batch = words.shape
words = words.long()
params = model(words.cuda().transpose(0, 1))
dist = SentCFG(params, lengths=lengths)
loss = dist.partition.mean()
(-loss).backward()
losses.append(loss.detach())
torch.nn.utils.clip_grad_norm_(model.parameters(), 3.0)
opt.step()
if i % 100 == 1:
print(-torch.tensor(losses).mean(), words.shape)
losses = []
train()
for i, ex in enumerate(train_iter):
opt.zero_grad()
words, lengths = ex.word
N, batch = words.shape
words = words.long()
    # NOTE (assumption): the original cell used internal helpers (terms/rules/roots, CKY,
    # MaxSemiring) from an older torch-struct API; the equivalent with the objects defined
    # above is to take the MAP (argmax) spans from the SentCFG distribution.
    params = model(words.cuda().transpose(0, 1))
    dist = SentCFG(params, lengths=lengths)
    spans = dist.argmax[-1]  # MAP span chart; consumed by split() below
    print(spans)
break
def split(spans):
batch, N = spans.shape[:2]
splits = []
for b in range(batch):
cover = spans[b].nonzero()
left = {i: [] for i in range(N)}
right = {i: [] for i in range(N)}
batch_split = {}
for i in range(cover.shape[0]):
i, j, A = cover[i].tolist()
left[i].append((A, j, j - i + 1))
right[j].append((A, i, j - i + 1))
for i in range(cover.shape[0]):
i, j, A = cover[i].tolist()
B = None
for B_p, k, a_span in left[i]:
for C_p, k_2, b_span in right[j]:
if k_2 == k + 1 and a_span + b_span == j - i + 1:
B, C = B_p, C_p
k_final = k
break
if j > i:
                batch_split[(i, j)] = k_final  # split point found in the search above
splits.append(batch_split)
return splits
splits = split(spans) | _____no_output_____ | BSD-3-Clause | torchbenchmark/models/pytorch_struct/notebooks/Unsupervised_CFG.ipynb | ramiro050/benchmark |
|
Sklearn sklearn.linear_model | from matplotlib.colors import ListedColormap
from sklearn import datasets, linear_model, metrics, model_selection  # sklearn.cross_validation was removed; model_selection is its modern replacement
import numpy as np
%pylab inline | Populating the interactive namespace from numpy and matplotlib
| MIT | LearningOnMarkedData/week2/sklearn.linear_model_part2.ipynb | ishatserka/MachineLearningAndDataAnalysisCoursera |
Linear Regression Data generation | data, target, coef = datasets.make_regression(n_features = 2, n_informative = 1, n_targets = 1,
noise = 5., coef = True, random_state = 2)
pylab.scatter(list(map(lambda x:x[0], data)), target, color='r')
pylab.scatter(list(map(lambda x:x[1], data)), target, color='b')
train_data, test_data, train_labels, test_labels = model_selection.train_test_split(data, target,
test_size = 0.3) | _____no_output_____ | MIT | LearningOnMarkedData/week2/sklearn.linear_model_part2.ipynb | ishatserka/MachineLearningAndDataAnalysisCoursera |
LinearRegression | linear_regressor = linear_model.LinearRegression()
linear_regressor.fit(train_data, train_labels)
predictions = linear_regressor.predict(test_data)
print(test_labels)
print(predictions)
metrics.mean_absolute_error(test_labels, predictions)
linear_scoring = model_selection.cross_val_score(linear_regressor, data, target, scoring = 'neg_mean_absolute_error',  # modern scorer name; values are negated MAE
cv=10)
print('mean: {}, std: {}'.format(linear_scoring.mean(), linear_scoring.std()))
scorer = metrics.make_scorer(metrics.mean_absolute_error, greater_is_better=True)
linear_scoring = model_selection.cross_val_score(linear_regressor, data, target, scoring=scorer,
cv = 10)
print('mean: {}, std: {}'.format(linear_scoring.mean(), linear_scoring.std()))
coef
linear_regressor.coef_
# not mentioned in the lecture: the fitted model's equation also includes an intercept (free term)
linear_regressor.intercept_
print("y = {:.2f}*x1 + {:.2f}*x2".format(coef[0], coef[1]))
print("y = {:.2f}*x1 + {:.2f}*x2 + {:.2f}".format(linear_regressor.coef_[0],
linear_regressor.coef_[1],
linear_regressor.intercept_)) | y = 38.15*x1 + -0.16*x2 + -0.88
| MIT | LearningOnMarkedData/week2/sklearn.linear_model_part2.ipynb | ishatserka/MachineLearningAndDataAnalysisCoursera |
Lasso | lasso_regressor = linear_model.Lasso(random_state = 3)
lasso_regressor.fit(train_data, train_labels)
lasso_predictions = lasso_regressor.predict(test_data)
lasso_scoring = model_selection.cross_val_score(lasso_regressor, data, target, scoring = scorer, cv = 10)
print('mean: {}, std: {}'.format(lasso_scoring.mean(), lasso_scoring.std()))
print(lasso_regressor.coef_)
print("y = {:.2f}*x1 + {:.2f}*x2".format(coef[0], coef[1]))
print("y = {:.2f}*x1 + {:.2f}*x2".format(lasso_regressor.coef_[0], lasso_regressor.coef_[1])) | y = 37.31*x1 + -0.00*x2
| MIT | LearningOnMarkedData/week2/sklearn.linear_model_part2.ipynb | ishatserka/MachineLearningAndDataAnalysisCoursera |
Question 1 Solve the linear system $Ax = b$, where $A =\begin{bmatrix}9. & -4. & 1. & 0. & 0. & 0. & 0. \\-4. & 6. & -4. & 1. & 0. & 0. & 0. \\1. & -4. & 6. & -4. & 1. & 0. & 0. \\0. & 1. & -4. & 6. & -4. & 1. & 0. \\\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\0. & 0. & 1. & -4. & 6. & -4. & 1. \\0. & 0. & 0. & 1. & -4. & 5. & -2. \\0. & 0. & 0. & 0. & 1. & -2. & 1.\end{bmatrix} \in R^{\; 200 \times 200}$ and $b =\begin{bmatrix}1 \\1\\1\\1\\\vdots\\1\\1\\1\\\end{bmatrix} \in R^{\; 200}$, using Gaussian elimination (with partial pivoting) and, where possible, the iterative Gauss-Jacobi and Gauss-Seidel methods. Compare the performance of the methods for solving the linear system in terms of execution time. | # Imports assumed by the cells below (not shown in this excerpt of the notebook)
import functools
import time
import numpy as np
import matplotlib.pyplot as plt

def timer(f):
@functools.wraps(f)
def wrapper_timer(*args, **kwargs):
tempo_inicio = time.perf_counter()
retorno = f(*args, **kwargs)
tempo_fim = time.perf_counter()
tempo_exec = tempo_fim - tempo_inicio
print(f"Tempo de Execução: {tempo_exec:0.4f} segundos")
return retorno
return wrapper_timer
def FatoracaoLUPivot(A):
U = np.copy(A)
n = A.shape[0]
L = np.zeros_like(A)
P = np.eye(n)
m = 0
for j in range(n):
        # Partial pivoting: pick the row with the largest absolute value in column j
k = np.argmax(np.abs(U[j:,j])) + j
U[j], U[k] = np.copy(U[k]), np.copy(U[j])
P[j], P[k] = np.copy(P[k]), np.copy(P[j])
L[j], L[k] = np.copy(L[k]), np.copy(L[j])
m += 1
for i in range(j + 1, n):
L[i][j] = U[i][j]/U[j][j]
for k in range(j + 1, n):
U[i][k] -= L[i][j] * U[j][k]
U[i][j] = 0
m += 1
L += np.eye(n)
return L, U, P
def SubstituicaoRegressiva(U, c): # U triangular superior
x = np.copy(c)
n = U.shape[0]
for i in range(n-1, -1, -1):
for j in range(i + 1, n):
x[i] -= (U[i,j] * x[j])
x[i] /= U[i,i]
return x
def SubstituicaoDireta(U, c): #U triangular inferior
x = np.copy(c)
n = U.shape[0]
for i in range(n):
for j in range(i):
x[i] -= (U[i,j] * x[j])
x[i] /= U[i,i]
return x
@timer
def EliminacaoGaussLUPivot(A, b):
L, U, P = FatoracaoLUPivot(A)
    # Solve Ly = b and then Ux = y (with pivoting this is Ly = Pb; here Pb = b since b is a vector of ones)
y = SubstituicaoDireta(L, b)
x = SubstituicaoRegressiva(U, y)
return P, x
def buildA():
A = np.zeros((200, 200))
A[0,0:3] = np.array([9, -4, 1])
A[1,0:4] = np.array([-4, 6, -4, 1])
A[198,196:200] = np.array([1, -4, 5, -2])
A[199,197:200] = np.array([1, -2, 1])
for i in range(2, 198):
A[i, i-2:i+3] = np.array([1, -4, 6, -4, 1])
return A
A = buildA()
A
b = np.ones((200,1))
b | _____no_output_____ | MIT | Atividade 04.ipynb | Lucas-Otavio/MS211K-2s21 |
Solution by Gaussian Elimination with Partial Pivoting | P, x = EliminacaoGaussLUPivot(A, b)
x
print(np.max(np.abs(P @ A @ x - b))) | 2.384185791015625e-07
| MIT | Atividade 04.ipynb | Lucas-Otavio/MS211K-2s21 |
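As an extra sanity check (not part of the original assignment), the residual of NumPy's built-in dense solver can be compared with the one obtained above: | x_np = np.linalg.solve(A, b)
print("Residual of np.linalg.solve:", np.max(np.abs(A @ x_np - b)))
print("Max difference between the two solutions:", np.max(np.abs(x_np - x)))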
Gauss-Jacobi Method | @timer
def GaussJacobi(A, b):
n = A.shape[0]
x_history = list()
x_old = np.zeros(n)
x_new = np.zeros(n)
k_limite = 200
k = 0
tau = 1E-4
Dr = 1
while (k < k_limite and Dr > tau):
for i in range(n):
soma = 0
for j in range(n):
if (i == j):
continue
soma += A[i,j]*x_old[j]
x_new[i] = (b[i] - soma) / A[i,i]
k += 1
Dr = np.max(np.abs(x_new - x_old)) / np.max(np.abs(x_new))
x_history.append(x_old)
x_old = np.copy(x_new)
return x_history, x_new
history, x = GaussJacobi(A, b)
erros = []
for i in range(len(history)):
erro = np.max(np.abs(A @ history[i] - b))
if (erro != np.inf):
erros.append(erro)
plt.semilogy(erros) | _____no_output_____ | MIT | Atividade 04.ipynb | Lucas-Otavio/MS211K-2s21 |
Gauss-Seidel Method | @timer
def GaussSeidel(A, b, k_limite=200):
n = A.shape[0]
x_history = list()
x_old = np.zeros(n)
x_new = np.zeros(n)
k = 0
tau = 1E-4
Dr = 1
while (k < k_limite and Dr > tau):
for i in range(n):
soma = 0
for j in range(n):
if (i == j):
continue
soma += A[i,j]*x_new[j]
x_new[i] = (b[i] - soma) / A[i,i]
Dr = np.max(np.abs(x_new - x_old)) / np.max(np.abs(x_new))
x_history.append(x_old)
x_old = np.copy(x_new)
k += 1
if (Dr > tau):
print("NÃO CONVERGIU!")
return x_history, x_new
history, x = GaussSeidel(A, b)
print(np.max(np.abs(A @ x - b)))
erros = []
for i in range(len(history)):
erro = np.max(np.abs(A @ history[i] - b))
if (erro != np.inf):
erros.append(erro)
plt.semilogy(erros) | _____no_output_____ | MIT | Atividade 04.ipynb | Lucas-Otavio/MS211K-2s21 |
hello> API details. | #hide
from nbdev.showdoc import *
from nbdev_tutorial.core import *
%load_ext autoreload
%autoreload 2
#export
class HelloSayer:
"Say hello to `to` using `say_hello`"
def __init__(self, to): self.to = to
def say(self):
"Do the saying"
return say_hello(self.to)
show_doc(HelloSayer)
Card(suit=2, rank=11)
show_doc(Card) | _____no_output_____ | Apache-2.0 | 01_hello.ipynb | hannesloots/nbdev-tutorial |
Assignment 9: Implement Dynamic Programming In this exercise, we will begin to explore the concept of dynamic programming and how it relates to various object containers with respect to computational complexity. Deliverables: 1) Choose and implement a Dynamic Programming algorithm in Python; make sure you are using a Dynamic Programming solution (not another one). 2) Use the algorithm to solve a range of scenarios. 3) Explain what is being done in the implementation. That is, write up a walk-through of the algorithm and explain how it is a Dynamic Programming solution. Prepare an executive summary of your results, referring to the table and figures you have generated. Explain how your results relate to big O notation. Describe your results in language that management can understand. This summary should be included as text paragraphs in the Jupyter notebook. Explain how the algorithm works and why it is useful to data engineers. A. The Dynamic programming problem: Longest Increasing Sequence The Longest Increasing Subsequence (LIS) problem is to find the length of the longest subsequence of a given sequence such that all elements of the subsequence are sorted in increasing order. For example, the length of LIS for {10, 22, 9, 33, 21, 50, 41, 60, 80} is 6 and LIS is {10, 22, 33, 50, 60, 80}. A. Setup: Library imports and Algorithm | import numpy as np
import pandas as pd
import seaborn as sns
import time
#import itertools
import random
import matplotlib.pyplot as plt
#import networkx as nx
#import pydot
#from networkx.drawing.nx_pydot import graphviz_layout
#from collections import deque
# Dynamic Programming Approach of Finding LIS by reducing the problem to longest common Subsequence
def lis(a):
n=len(a) #get the length of the list
b=sorted(list(set(a))) #removes duplicates, and sorts list
m=len(b) #gets the length of the truncated and sorted list
    dp=[[-1 for i in range(m+1)] for j in range(n+1)] # instantiates an (n+1) x (m+1) table filled with -1; columns index the sorted array, rows the original array
for i in range(n+1): # for every column in the table at each row:
for j in range(m+1):
if i==0 or j==0: #if at first element in either a row or column set the table row,index to zero
dp[i][j]=0
elif a[i-1]==b[j-1]: #else if the sorted array value matches the original array:
dp[i][j]=1+dp[i-1][j-1]#sets dp[i][j] to 1+prveious cell of the dyanmic table
else:
dp[i][j]=max(dp[i-1][j],dp[i][j-1]) #else record the max of the row or column for that cell in the cell
return dp[-1][-1] # This will return the max running sequence.
# Driver program to test above function
arr1 = [10, 22, 9, 33, 21, 50, 41, 60]
len_arr1 = len(arr1)
print("Longest increaseing sequence has a length of:", lis(arr1))
# additional comments included from the original code contributed by Dheeraj Khatri (https://www.geeksforgeeks.org/longest-increasing-subsequence-dp-3/)
def Container(arr, fun): ### I'm glad I was able to reuse this from assignment 3 and 4. Useful function.
objects = [] #instantiates an empty list to collect the returns
times = [] #instantiates an empty list to collect times for each computation
for t in arr:
start= time.perf_counter() #collects the start time
obj = fun(t) # applies the function to the arr object
end = time.perf_counter() # collects end time
duration = (end-start)* 1E3 #converts to milliseconds
objects.append(obj)# adds the returns of the functions to the objects list
times.append(duration) # adds the duration for computation to list
return objects, times
| Longest increasing sequence has a length of: 5
| MIT | Assignment9- Dynamic.ipynb | bblank70/MSDS432 |
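As an optional aside (not part of the assignment), the same answer can be cross-checked with the classic O(n log n) patience-sorting approach, which is useful later when relating run times to Big O notation: | import bisect
def lis_nlogn(a):
    tails = []  # tails[k] holds the smallest possible tail of an increasing subsequence of length k+1
    for x in a:
        k = bisect.bisect_left(tails, x)  # bisect_left keeps the subsequence strictly increasing
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

print("O(n log n) check:", lis_nlogn(arr1))  # should also report 5 for arr1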
B. Test Array Generation | RANDOM_SEED = 300
np.random.seed(RANDOM_SEED)
arr100 = list(np.random.randint(low=1, high= 5000, size=100))
np.random.seed(RANDOM_SEED)
arr200 = list(np.random.randint(low=1, high= 5000, size=200))
np.random.seed(RANDOM_SEED)
arr400 = list(np.random.randint(low=1, high= 5000, size=400))
np.random.seed(RANDOM_SEED)
arr600 = list(np.random.randint(low=1, high= 5000, size=600))
np.random.seed(RANDOM_SEED)
arr800 = list(np.random.randint(low=1, high= 5000, size=800))
print(len(arr100), len(arr200), len(arr400), len(arr600), len(arr800))
arr_list = [arr100, arr200, arr400, arr600, arr800]
metrics = Container(arr_list, lis) | _____no_output_____ | MIT | Assignment9- Dynamic.ipynb | bblank70/MSDS432 |
Table1. Performance Summary | summary = {
'ArraySize' : [len(arr100), len(arr200), len(arr400), len(arr600), len(arr800)],
'SequenceLength' : [metrics[0][0],metrics[0][1], metrics[0][2], metrics[0][3], metrics[0][4]],
'Time(ms)' : [metrics[1][0],metrics[1][1], metrics[1][2], metrics[1][3], metrics[1][4]]
}
df =pd.DataFrame(summary)
df | _____no_output_____ | MIT | Assignment9- Dynamic.ipynb | bblank70/MSDS432 |
Figure 1. Performance | sns.scatterplot(data=df, x='Time(ms)', y='ArraySize')
| _____no_output_____ | MIT | Assignment9- Dynamic.ipynb | bblank70/MSDS432 |
Chapter 3 Questions 3.1 Form dollar bars for E-mini S&P 500 futures:1. Apply a symmetric CUSUM filter (Chapter 2, Section 2.5.2.1) where the threshold is the standard deviation of daily returns (Snippet 3.1).2. Use Snippet 3.4 on a pandas series t1, where numDays=1.3. On those sampled features, apply the triple-barrier method, where ptSl=[1,1] and t1 is the series you created in point 1.b.4. Apply getBins to generate the labels. | import numpy as np
import pandas as pd
import timeit
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, classification_report, confusion_matrix
from mlfinlab.corefns.core_functions import CoreFunctions
from mlfinlab.fracdiff.fracdiff import frac_diff_ffd
import matplotlib.pyplot as plt
%matplotlib inline
# Read in data
data = pd.read_csv('official_data/dollar_bars.csv', nrows=40000)
data.index = pd.to_datetime(data['date_time'])
data = data.drop('date_time', axis=1)
data.head() | _____no_output_____ | MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
**Apply a symmetric CUSUM filter (Chapter 2, Section 2.5.2.1) where the threshold is the standard deviation of daily returns (Snippet 3.1).** | # Compute daily volatility
vol = CoreFunctions.get_daily_vol(close=data['close'], lookback=50)
vol.plot(figsize=(14, 7), title='Volatility as caclulated by de Prado')
plt.show()
# Apply Symmetric CUSUM Filter and get timestamps for events
# Note: Only the CUSUM filter needs a point estimate for volatility
cusum_events = CoreFunctions.get_t_events(data['close'], threshold=vol.mean()) | 1%|▏ | 592/39998 [00:00<00:06, 5912.41it/s] | MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
**Use Snippet 3.4 on a pandas series t1, where numDays=1.** | # Compute vertical barrier
vertical_barriers = CoreFunctions.add_vertical_barrier(cusum_events, data['close'])
vertical_barriers.head() | _____no_output_____ | MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
**On those sampled features, apply the triple-barrier method, where ptSl=[1,1] and t1 is the series you created in point 1.b.** | triple_barrier_events = CoreFunctions.get_events(close=data['close'],
t_events=cusum_events,
pt_sl=[1, 1],
target=vol,
min_ret=0.01,
num_threads=1,
vertical_barrier_times=vertical_barriers,
side=None)
triple_barrier_events.head()
labels = CoreFunctions.get_bins(triple_barrier_events, data['close'])
labels.head()
labels['bin'].value_counts() | _____no_output_____ | MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
--- 3.2 From exercise 1, use Snippet 3.8 to drop rare labels. | clean_labels = CoreFunctions.drop_labels(labels)
print(labels.shape)
print(clean_labels.shape) | (660, 3)
(660, 3)
| MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
--- 3.3 Adjust the getBins function (Snippet 3.5) to return a 0 whenever the vertical barrier is the one touched first. This change was made inside the module CoreFunctions (a rough sketch of the idea is shown after this cell). --- 3.4 Develop a trend-following strategy based on a popular technical analysis statistic (e.g., crossing moving averages). For each observation, the model suggests a side, but not a size of the bet. 1. Derive meta-labels for pt_sl = [1,2] and t1 where num_days=1. Use as trgt the daily standard deviation as computed by Snippet 3.1. 2. Train a random forest to decide whether to trade or not. Note: The decision is whether to trade or not, {0, 1}, since the underlying model (the crossing moving average) has decided the side, {-1, 1}. | # This question is answered in the notebook: 2019-03-06_JJ_Trend-Following-Question | _____no_output_____ | MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
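A rough, purely illustrative sketch of the 3.3 adjustment follows; this is not the CoreFunctions/mlfinlab source, and it assumes the events frame returned by get_events has a 't1' column and reuses the vertical_barriers series built earlier: | # Hypothetical sketch only -- not the library implementation
def zero_vertical_touches(bins_df, triple_barrier_events, vertical_barriers):
    out = bins_df.copy()
    t1 = triple_barrier_events['t1']
    # events whose recorded first-touch time equals their vertical-barrier time
    vertical_first = t1 == vertical_barriers.reindex(t1.index)
    idx = out.index.intersection(vertical_first[vertical_first].index)
    out.loc[idx, 'bin'] = 0  # label 0 when the vertical barrier was touched first
    return out

labels_zeroed = zero_vertical_touches(labels, triple_barrier_events, vertical_barriers)
labels_zeroed['bin'].value_counts()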
---- 3.5 Develop a mean-reverting strategy based on Bollinger bands. For each observation, the model suggests a side, but not a size of the bet. * (a) Derive meta-labels for ptSl = [0, 2] and t1 where numDays = 1. Use as trgt the daily standard deviation as computed by Snippet 3.1. * (b) Train a random forest to decide whether to trade or not. Use as features: volatility, serial correlation, and the crossing moving averages. * (c) What is the accuracy of prediction from the primary model (i.e. if the secondary model does not filter the bets)? What are the precision, recall and F1-scores? * (d) What is the accuracy of prediction from the secondary model? What are the precision, recall and F1-scores? | # This question is answered in the notebook: 2019-03-07_BBand-Question | _____no_output_____ | MIT | Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb | alexanu/research |
Import Libraries | from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms | _____no_output_____ | MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
Data Transformations We first start with defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images it might not otherwise see. | # Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.RandomRotation((-7.0, 7.0), fill=(1,)),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
| _____no_output_____ | MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
Dataset and Creating Train/Test Split | train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms) | _____no_output_____ | MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
Dataloader Arguments & Test/Train Dataloaders | SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'll fetch these from cmdprmt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args) | CUDA Available? True
| MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
The modelLet's start with the model we first saw | import torch.nn.functional as F
dropout_value = 0.1
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(32),
nn.Dropout(dropout_value)
) # output_size = 24
# TRANSITION BLOCK 1
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
) # output_size = 24
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 12
# CONVOLUTION BLOCK 2
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 10
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 8
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 6
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=16, kernel_size=(3, 3), padding=1, bias=False),
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout(dropout_value)
) # output_size = 6
# OUTPUT BLOCK
self.gap = nn.Sequential(
nn.AvgPool2d(kernel_size=6)
) # output_size = 1
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=16, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
# nn.BatchNorm2d(10),
# nn.ReLU(),
# nn.Dropout(dropout_value)
)
self.dropout = nn.Dropout(dropout_value)
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.convblock7(x)
x = self.gap(x)
x = self.convblock8(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1) | _____no_output_____ | MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
Model Params We can't emphasize enough how important viewing the model summary is. Unfortunately, there is no in-built model visualizer, so we have to take external help. | !pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28)) | Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1)
cuda
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 16, 26, 26] 144
ReLU-2 [-1, 16, 26, 26] 0
BatchNorm2d-3 [-1, 16, 26, 26] 32
Dropout-4 [-1, 16, 26, 26] 0
Conv2d-5 [-1, 32, 24, 24] 4,608
ReLU-6 [-1, 32, 24, 24] 0
BatchNorm2d-7 [-1, 32, 24, 24] 64
Dropout-8 [-1, 32, 24, 24] 0
Conv2d-9 [-1, 10, 24, 24] 320
MaxPool2d-10 [-1, 10, 12, 12] 0
Conv2d-11 [-1, 16, 10, 10] 1,440
ReLU-12 [-1, 16, 10, 10] 0
BatchNorm2d-13 [-1, 16, 10, 10] 32
Dropout-14 [-1, 16, 10, 10] 0
Conv2d-15 [-1, 16, 8, 8] 2,304
ReLU-16 [-1, 16, 8, 8] 0
BatchNorm2d-17 [-1, 16, 8, 8] 32
Dropout-18 [-1, 16, 8, 8] 0
Conv2d-19 [-1, 16, 6, 6] 2,304
ReLU-20 [-1, 16, 6, 6] 0
BatchNorm2d-21 [-1, 16, 6, 6] 32
Dropout-22 [-1, 16, 6, 6] 0
Conv2d-23 [-1, 16, 6, 6] 2,304
ReLU-24 [-1, 16, 6, 6] 0
BatchNorm2d-25 [-1, 16, 6, 6] 32
Dropout-26 [-1, 16, 6, 6] 0
AvgPool2d-27 [-1, 16, 1, 1] 0
Conv2d-28 [-1, 10, 1, 1] 160
================================================================
Total params: 13,808
Trainable params: 13,808
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 1.06
Params size (MB): 0.05
Estimated Total Size (MB): 1.12
----------------------------------------------------------------
| MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
Training and Testing All right, so the summary above shows only about 13.8k parameters for this model. The purpose of this notebook is to set things right for our future experiments. Looking at logs can be boring, so we'll introduce the **tqdm** progress bar to get cleaner logs. Let's write the train and test functions | from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
# In PyTorch, we need to set the gradients to zero before starting to do backpropragation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
        train_losses.append(loss.item())  # store a plain float instead of a graph-attached tensor
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
from torch.optim.lr_scheduler import StepLR
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# scheduler = StepLR(optimizer, step_size=6, gamma=0.1)
EPOCHS = 20
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
# scheduler.step()
test(model, device, test_loader) |
0%| | 0/469 [00:00<?, ?it/s] | MIT | Prasant_Kumar/Assignments/F9.ipynb | ks1320/Traffic-Surveillance-System |
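A small optional cell (not in the original notebook) to visualise the metric lists collected by train() and test() above: | import matplotlib.pyplot as plt
fig, axs = plt.subplots(2, 2, figsize=(12, 8))
axs[0, 0].plot([float(l) for l in train_losses]); axs[0, 0].set_title("Training loss")
axs[0, 1].plot(train_acc); axs[0, 1].set_title("Training accuracy")
axs[1, 0].plot(test_losses); axs[1, 0].set_title("Test loss")
axs[1, 1].plot(test_acc); axs[1, 1].set_title("Test accuracy")
plt.show()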
Document AI Specialized Parser with HITL This notebook shows you how to use Document AI's specialized parsers (e.g., Invoice, Receipt, W2, W9) and also shows Human in the Loop (HITL) output for supported parsers. | # Install necessary Python libraries and restart your kernel after.
!python -m pip install -r ../requirements.txt
from google.cloud import documentai_v1beta3 as documentai
from PIL import Image, ImageDraw
import os
import pandas as pd | _____no_output_____ | Apache-2.0 | specialized/specialized_form_parser.ipynb | jlehga1/documentai-notebooks |
Set your processor variables | # TODO(developer): Fill these variables with your values before running the sample
PROJECT_ID = "YOUR_PROJECT_ID_HERE"
LOCATION = "us" # Format is 'us' or 'eu'
PROCESSOR_ID = "PROCESSOR_ID" # Create processor in Cloud Console
PDF_PATH = "../resources/procurement/invoices/invoice.pdf" # Update to path of target document | _____no_output_____ | Apache-2.0 | specialized/specialized_form_parser.ipynb | jlehga1/documentai-notebooks |
The following code calls the synchronous API and parses the form fields and values. | def process_document_sample():
# Instantiates a client
client_options = {"api_endpoint": "{}-documentai.googleapis.com".format(LOCATION)}
client = documentai.DocumentProcessorServiceClient(client_options=client_options)
# The full resource name of the processor, e.g.:
# projects/project-id/locations/location/processor/processor-id
# You must create new processors in the Cloud Console first
name = f"projects/{PROJECT_ID}/locations/{LOCATION}/processors/{PROCESSOR_ID}"
with open(PDF_PATH, "rb") as image:
image_content = image.read()
# Read the file into memory
document = {"content": image_content, "mime_type": "application/pdf"}
# Configure the process request
request = {"name": name, "document": document}
# Recognizes text entities in the PDF document
result = client.process_document(request=request)
document = result.document
entities = document.entities
print("Document processing complete.\n\n")
# For a full list of Document object attributes, please reference this page: https://googleapis.dev/python/documentai/latest/_modules/google/cloud/documentai_v1beta3/types/document.html#Document
types = []
values = []
confidence = []
# Grab each key/value pair and their corresponding confidence scores.
for entity in entities:
types.append(entity.type_)
values.append(entity.mention_text)
confidence.append(round(entity.confidence,4))
# Create a Pandas Dataframe to print the values in tabular format.
df = pd.DataFrame({'Type': types, 'Value': values, 'Confidence': confidence})
display(df)
if result.human_review_operation:
print ("Triggered HITL long running operation: {}".format(result.human_review_operation))
return document
def get_text(doc_element: dict, document: dict):
"""
Document AI identifies form fields by their offsets
in document text. This function converts offsets
to text snippets.
"""
response = ""
# If a text segment spans several lines, it will
# be stored in different text segments.
for segment in doc_element.text_anchor.text_segments:
start_index = (
int(segment.start_index)
if segment in doc_element.text_anchor.text_segments
else 0
)
end_index = int(segment.end_index)
response += document.text[start_index:end_index]
return response
doc = process_document_sample() | _____no_output_____ | Apache-2.0 | specialized/specialized_form_parser.ipynb | jlehga1/documentai-notebooks |
Draw the bounding boxes We will now use the spatial data returned by the processor to mark our values on the invoice pdf file that we first converted into a jpg. | JPG_PATH = "../resources/procurement/invoices/invoice.jpg" # Update to path of a jpg of your sample document.
document_image = Image.open(JPG_PATH)
draw = ImageDraw.Draw(document_image)
for entity in doc.entities:
# Draw the bounding box around the entities
vertices = []
for vertex in entity.page_anchor.page_refs[0].bounding_poly.normalized_vertices:
vertices.append({'x': vertex.x * document_image.size[0], 'y': vertex.y * document_image.size[1]})
draw.polygon([
vertices[0]['x'], vertices[0]['y'],
vertices[1]['x'], vertices[1]['y'],
vertices[2]['x'], vertices[2]['y'],
vertices[3]['x'], vertices[3]['y']], outline='blue')
document_image | _____no_output_____ | Apache-2.0 | specialized/specialized_form_parser.ipynb | jlehga1/documentai-notebooks |
Human in the loop (HITL) Operation **Only complete this section if a HITL Operation is triggered.** | lro = "LONG_RUNNING_OPERATION" # LRO printed in the previous cell ex. projects/660199673046/locations/us/operations/174674963333130330
client = documentai.DocumentProcessorServiceClient()
operation = client._transport.operations_client.get_operation(lro)
if operation.done:
print("HITL location: {} ".format(str(operation.response.value)[5:-1]))
else:
print('Waiting on human review.')
!gsutil cp "HITL_LOCATION" response.json # Location printed above ex. gs://gcs_bucket/receipt-output/174674963333130330/data-00001-of-00001.json
with open("response.json", "r") as file:
import json
entities = {}
data = json.load(file)
for entity in data['entities']:
if 'mentionText' in entity:
entities[entity['type']] = entity['mentionText']
else:
entities[entity['type']] = ""
for t in entities:
print("{} : {}\n ".format(t, entities[t])) | _____no_output_____ | Apache-2.0 | specialized/specialized_form_parser.ipynb | jlehga1/documentai-notebooks |
Notebook which focuses on randomly generated data sets and the performance comparison of the NMF algorithms on them | from IPython.core.display import display, HTML
display(HTML('<style>.container {width:100% !important;}</style>'))
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import torch
from itertools import product, chain
import nmf.mult
import nmf.pgrad
import nmf.nesterov
import nmf_torch.mult
import nmf_torch.pgrad
import nmf_torch.nesterov
import nmf_torch.norms
import matplotlib
import pickle
from performance.performance_eval_func import get_random_lowrank_matrix, get_time_ratio,\
compare_performance, plot_errors_dict,\
torch_algo_wrapper,\
plot_ratios_gpu_algo, plot_ratios_cpu_gpu, plot_ratios_cpu_algo,\
plot_errors_dict, errors_at_time_t_over_inner_dim
algo_dict_to_test = {
"mult": nmf.mult.factorize_Fnorm,
"pgrad": nmf.pgrad.factorize_Fnorm_subproblems,
"nesterov": nmf.nesterov.factorize_Fnorm,
"mult_torch": torch_algo_wrapper(nmf_torch.mult.factorize_Fnorm,
device="cuda"),
"pgrad_torch": torch_algo_wrapper(nmf_torch.pgrad.factorize_Fnorm_subproblems,
device="cuda"),
"nesterov_torch": torch_algo_wrapper(nmf_torch.nesterov.factorize_Fnorm,
device="cuda")
}
# NOTE: `errors_over_r_random` is not defined in this excerpt; it is assumed to be a
# precomputed errors-over-inner-dimension dict (the "site3" title appears to be carried
# over from a different data set).
f, ax = plt.subplots()
plot_errors_dict(errors_over_r_random, ax, log=True, x_lbl="Inner dim", title="site3")
f, ax = plt.subplots()
plot_errors_dict(errors_over_r_random, ax, log=False, x_lbl="Inner dim", title="site3")
shapes = [(5 * a, a) for a in [30, 100, 300, 1000, 3000]]
shapes
inner_dims_small = [sh[1] // 10 for sh in shapes]
inner_dims_small
inner_dims_big = [8 * sh[1] // 10 for sh in shapes]
inner_dims_big
shapes_all = shapes + shapes
inner_dims = inner_dims_small + inner_dims_big
times = [5, 25, 200, 1200, 8000]
times = times + [t * 2 for t in times]
print(len(shapes_all))
# Load cached results from an earlier run and drop one stale entry ...
errors_dict = pickle.load(open("random_data_errors_dict.pkl", "rb"))
del errors_dict[(3, (150, 30))]
# ... but start from an empty dict here; comment the next line out to reuse the cached results
errors_dict = {}
for inner_dim, shape, t in zip(inner_dims, shapes_all, times):
print((inner_dim, shape))
if (inner_dim, shape) in errors_dict.keys():
continue
V = get_random_lowrank_matrix(shape[0], inner_dim, shape[1]) + np.random.rand(*shape) * 0.1
W_init = np.random.rand(shape[0], inner_dim)
H_init = np.random.rand(inner_dim, shape[1])
errors = compare_performance(V=V, inner_dim=inner_dim, time_limit=t,
W_init=W_init, H_init=H_init,
algo_dict_to_test=algo_dict_to_test)
errors_dict[(inner_dim, shape)] = errors
pickle.dump(errors_dict, open("random_data_errors_dict.pkl","wb"))
pickle.dump(errors_dict, open("random_data_errors_dict.pkl","wb"))
keys = zip(inner_dims, shapes_all)
keys = sorted(keys, key=lambda k: k[0])
keys = sorted(keys, key=lambda k: k[1][0])
keys
for k in keys:
r, shape = k
M = np.random.rand(shape[0], r) @ np.random.rand(r, shape[1])
errros_dict_particular_data = errors_dict[k]
f, axes = plt.subplots(3, 2, figsize=(8, 7), dpi=100, gridspec_kw=dict(hspace=0.45, top=0.92, bottom=0.08,
left=0.08, right=0.99))
f.suptitle("Comparison, time ratio for {}, {:.2f}KB, {:.2f}MB".format(k, M.nbytes / 2**10, M.nbytes / 2**20))
plot_errors_dict(errros_dict_particular_data, axes[0, 0], log=True, title="Objective function", x_lbl="time [s]")
plot_ratios_cpu_gpu(errros_dict_particular_data, axes[0, 1])
plot_ratios_cpu_algo(errros_dict_particular_data, axes[1:, 0], selected_algs=["mult", "pgrad", "nesterov"])
plot_ratios_gpu_algo(errros_dict_particular_data, axes[1:, 1],
selected_algs=["mult_torch", "pgrad_torch", "nesterov_torch"])
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 14}
matplotlib.rc('font', **font)
figsize = (9, 10)
gridspec_kw = dict(wspace=0.4, hspace=0.9,
top=0.85,
bottom=0.1,
left=0.1, right=0.95)
plt.close("all")
f, axes1 = plt.subplots(3, 2, figsize=figsize, dpi=100,
gridspec_kw=gridspec_kw)
f.suptitle("Ratio between time required\nto reach particular cost function value on CPU and on GPU")
f, axes2 = plt.subplots(3, 2, figsize=figsize, dpi=100,
gridspec_kw=gridspec_kw)
f.suptitle("Ratio between time required\nto reach particular cost function value on CPU and on GPU")
axes1[0,0].get_shared_y_axes().join(*axes1[0, :], *axes1[1, :], *axes1[2, :])
axes2[0,0].get_shared_y_axes().join(*axes2[0, :], *axes2[1, :], *axes2[2, :])
axes1[2,1].set_axis_off()
axes2[2,1].set_axis_off()
axes1 = list(axes1.ravel())
axes2 = list(axes2.ravel())
legend_is = False
for k, a in zip(keys, chain.from_iterable(zip(axes1, axes2))):
print(a)
r, shape = k
plot_ratios_cpu_gpu(errors_dict[k], a)
M = np.random.rand(shape[0], r) @ np.random.rand(r, shape[1])
kb = M.nbytes / 2**10
mb = M.nbytes / 2**20
if mb < 1:
size = "{:.1f}KB".format(kb)
else:
size = "{:.1f}MB".format(mb)
a.set_title("Factorization of size {}\nmat. dim. {}, {}".format(k[0], k[1], size))
if legend_is:
a.get_legend().remove()
else:
legend_is = True
plt.close("all")
f, axes1 = plt.subplots(3, 2, figsize=figsize, dpi=100,
gridspec_kw=gridspec_kw)
f.suptitle("Ratio between time required to reach a particular"+
"cost \n function value for multiplicative algorithms and gradient algorithms")
f, axes2 = plt.subplots(3, 2, figsize=figsize, dpi=100,
gridspec_kw=gridspec_kw)
f.suptitle("Ratio between time required to reach a particular cost \n function value for projecitve and Nesterov gradient algorithms")
axes1 = list(axes1.ravel())
axes2 = list(axes2.ravel())
axes1[-1].set_axis_off()
axes2[-1].set_axis_off()
# axes1[0].get_shared_y_axes().join(*axes1)
axes2[0].get_shared_y_axes().join(*axes2)
print(keys)
print(len(axes1))
print(len(axes2))
legend_is = False
for k, a1, a2 in zip(keys[::2], axes1, axes2):
r, shape = k
if r != 0.1 * shape[1]:
print(k)
continue
plot_ratios_gpu_algo(errors_dict[k], [a1, a2],
selected_algs=["mult_torch",
"pgrad_torch",
"nesterov_torch"])
M = np.random.rand(shape[0], r) @ np.random.rand(r, shape[1])
kb = M.nbytes / 2**10
mb = M.nbytes / 2**20
if mb < 1:
size = "{:.2f}KB".format(kb)
else:
size = "{:.2f}MB".format(mb)
a1.set_title("factorization of size {}\nmat. shape {} {}".format(k[0], k[1], size))
a2.set_title("factorization of size {}\nmat. shape {} {}".format(k[0], k[1], size))
if legend_is:
a1.get_legend().remove()
a2.get_legend().remove()
else:
legend_is = True | _____no_output_____ | MIT | performance_on_random.ipynb | ninextycode/finalYearProjectNMF |
02: Fitting Power Spectrum Models. Introduction to the module, beginning with the FOOOF object. | # Import the FOOOF object
from fooof import FOOOF
# Import utility to download and load example data
from fooof.utils.download import load_fooof_data
# Download examples data files needed for this example
freqs = load_fooof_data('freqs.npy', folder='data')
spectrum = load_fooof_data('spectrum.npy', folder='data') | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
FOOOF Object------------At the core of the module, which is object oriented, is the :class:`~fooof.FOOOF` object,which holds relevant data and settings as attributes, and contains methods to run thealgorithm to parameterize neural power spectra.The organization is similar to sklearn:- A model object is initialized, with relevant settings- The model is used to fit the data- Results can be extracted from the object Calculating Power Spectra~~~~~~~~~~~~~~~~~~~~~~~~~The :class:`~fooof.FOOOF` object fits models to power spectra. The module itself does notcompute power spectra, and so computing power spectra needs to be done prior to usingthe FOOOF module.The model is broadly agnostic to exactly how power spectra are computed. Commonmethods, such as Welch's method, can be used to compute the spectrum.If you need a module in Python that has functionality for computing power spectra, try`NeuroDSP `_.Note that FOOOF objects require frequency and power values passed in as inputs tobe in linear spacing. Passing in non-linear spaced data (such logged values) mayproduce erroneous results. Fitting an Example Power Spectrum---------------------------------The following example demonstrates fitting a power spectrum model to a single power spectrum. | # Initialize a FOOOF object
fm = FOOOF()
# Set the frequency range to fit the model
freq_range = [2, 40]
# Report: fit the model, print the resulting parameters, and plot the reconstruction
fm.report(freqs, spectrum, freq_range) | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
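As noted above, FOOOF expects a precomputed, linearly spaced power spectrum. Here is a minimal sketch of how one could be computed from a raw signal with Welch's method using scipy (the `raw_signal` and `fs` values below are placeholders, not part of this tutorial's data): | from scipy.signal import welch
import numpy as np
fs = 500                               # placeholder sampling rate, in Hz
raw_signal = np.random.randn(60 * fs)  # placeholder: one minute of simulated data
freqs_welch, spectrum_welch = welch(raw_signal, fs=fs, nperseg=2 * fs)
# freqs_welch and spectrum_welch could then be passed to FOOOF just like freqs and spectrum above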
Fitting Models with 'Report' The above method, 'report', is a convenience method that calls a series of methods: - :meth:`~fooof.FOOOF.fit`: fits the power spectrum model - :meth:`~fooof.FOOOF.print_results`: prints out the results - :meth:`~fooof.FOOOF.plot`: plots the data and model fit. Each of these methods can also be called individually. | # Alternatively, just fit the model with FOOOF.fit() (without printing anything)
fm.fit(freqs, spectrum, freq_range)
# After fitting, plotting and parameter fitting can be called independently:
# fm.print_results()
# fm.plot() | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
Model Parameters~~~~~~~~~~~~~~~~Once the power spectrum model has been calculated, the model fit parameters are storedas object attributes that can be accessed after fitting.Following the sklearn convention, attributes that are fit as a result ofthe model have a trailing underscore, for example:- ``aperiodic_params_``- ``peak_params_``- ``error_``- ``r2_``- ``n_peaks_`` Access model fit parameters from FOOOF object, after fitting: | # Aperiodic parameters
print('Aperiodic parameters: \n', fm.aperiodic_params_, '\n')
# Peak parameters
print('Peak parameters: \n', fm.peak_params_, '\n')
# Goodness of fit measures
print('Goodness of fit:')
print(' Error - ', fm.error_)
print(' R^2 - ', fm.r_squared_, '\n')
# Check how many peaks were fit
print('Number of fit peaks: \n', fm.n_peaks_) | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
Selecting Parameters~~~~~~~~~~~~~~~~~~~~You can also select parameters using the :meth:`~fooof.FOOOF.get_params`method, which can be used to specify which parameters you want to extract. | # Extract a model parameter with `get_params`
err = fm.get_params('error')
# Extract parameters, indicating sub-selections of parameter
exp = fm.get_params('aperiodic_params', 'exponent')
cfs = fm.get_params('peak_params', 'CF')
# Print out a custom parameter report
template = ("With an error level of {error:1.2f}, FOOOF fit an exponent "
"of {exponent:1.2f} and peaks of {cfs:s} Hz.")
print(template.format(error=err, exponent=exp,
cfs=' & '.join(map(str, [round(cf, 2) for cf in cfs])))) | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
For a full description of how you can access data with :meth:`~fooof.FOOOF.get_params`,check the method's documentation.As a reminder, you can access the documentation for a function using '?' in aJupyter notebook (ex: `fm.get_params?`), or more generally with the `help` functionin general Python (ex: `help(get_params)`). Notes on Interpreting Peak Parameters-------------------------------------Peak parameters are labeled as:- CF: center frequency of the extracted peak- PW: power of the peak, over and above the aperiodic component- BW: bandwidth of the extracted peakNote that the peak parameters that are returned are not exactly the same as theparameters of the Gaussians used internally to fit the peaks.Specifically:- CF is the exact same as mean parameter of the Gaussian- PW is the height of the model fit above the aperiodic component [1], which is not necessarily the same as the Gaussian height- BW is 2 * the standard deviation of the Gaussian [2][1] Since the Gaussians are fit together, if any Gaussians overlap,than the actual height of the fit at a given point can only be assessedwhen considering all Gaussians. To be better able to interpret heightsfor single peak fits, we re-define the peak height as above, and label itas 'power', as the units of the input data are expected to be units of power.[2] Gaussian standard deviation is '1 sided', where as the returned BW is '2 sided'. The underlying gaussian parameters are also available from the FOOOF object,in the ``gaussian_params_`` attribute. | # Compare the 'peak_params_' to the underlying gaussian parameters
print(' Peak Parameters \t Gaussian Parameters')
for peak, gauss in zip(fm.peak_params_, fm.gaussian_params_):
print('{:5.2f} {:5.2f} {:5.2f} \t {:5.2f} {:5.2f} {:5.2f}'.format(*peak, *gauss)) | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
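A quick numeric check (an optional aside) of the relationships stated above, namely that CF equals the Gaussian mean and BW equals twice the Gaussian standard deviation: | import numpy as np
print('CF equals Gaussian mean: ',
      np.allclose(fm.peak_params_[:, 0], fm.gaussian_params_[:, 0]))
print('BW equals 2 * Gaussian std:',
      np.allclose(fm.peak_params_[:, 2], 2 * fm.gaussian_params_[:, 2]))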
FOOOFResults~~~~~~~~~~~~There is also a convenience method to return all model fit results::func:`~fooof.FOOOF.get_results`.This method returns all the model fit parameters, including the underlying Gaussianparameters, collected together into a FOOOFResults object.The FOOOFResults object, which in Python terms is a named tuple, is a standard dataobject used with FOOOF to organize and collect parameter data. | # Grab each model fit result with `get_results` to gather all results together
# Note that this returns a FOOOFResult object
fres = fm.get_results()
# You can also unpack all fit parameters when using `get_results`
ap_params, peak_params, r_squared, fit_error, gauss_params = fm.get_results()
# Print out the FOOOFResults
print(fres, '\n')
# From FOOOFResults, you can access the different results
print('Aperiodic Parameters: \n', fres.aperiodic_params)
# Check the r^2 and error of the model fit
print('R-squared: \n {:5.4f}'.format(fm.r_squared_))
print('Fit error: \n {:5.4f}'.format(fm.error_)) | _____no_output_____ | Apache-2.0 | doc/auto_tutorials/plot_02-FOOOF.ipynb | varman-m/eeg_notebooks_doc |
Discretisation Discretisation is the process of transforming continuous variables into discrete variables by creating a set of contiguous intervals that span the range of the variable's values. Discretisation is also called **binning**, where bin is an alternative name for interval. Discretisation helps handle outliers and may improve value spread in skewed variables. Discretisation helps handle outliers by placing these values into the lower or higher intervals, together with the remaining inlier values of the distribution. Thus, these outlier observations no longer differ from the rest of the values at the tails of the distribution, as they are now all together in the same interval / bucket. In addition, by creating appropriate bins or intervals, discretisation can help spread the values of a skewed variable across a set of bins with an equal number of observations. Discretisation approaches There are several approaches to transform continuous variables into discrete ones. Discretisation methods fall into 2 categories: **supervised and unsupervised**. Unsupervised methods do not use any information, other than the variable distribution, to create the contiguous bins in which the values will be placed. Supervised methods typically use target information in order to create the bins or intervals. Unsupervised discretisation methods - Equal width discretisation - Equal frequency discretisation - K-means discretisation Supervised discretisation methods - Discretisation using decision trees In this lecture, I will describe **equal width discretisation**. Equal width discretisation Equal width discretisation divides the scope of possible values into N bins of the same width. The width is determined by the range of values in the variable and the number of bins we wish to use to divide the variable: width = (max value - min value) / N, where N is the number of bins or intervals. For example, if the values of the variable vary between 0 and 100, we create 5 bins like this: width = (100 - 0) / 5 = 20. The bins thus are 0-20, 20-40, 40-60, 60-80 and 80-100. The first and final bins (0-20 and 80-100) can be expanded to accommodate outliers (that is, values under 0 or greater than 100 would be placed in those bins as well). There is no rule of thumb to define N; that is something to determine experimentally. In this demo We will learn how to perform equal width binning using the Titanic dataset with - pandas and NumPy - Feature-engine - Scikit-learn | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer
from feature_engine.discretisers import EqualWidthDiscretiser
# load the numerical variables of the Titanic Dataset
data = pd.read_csv('../titanic.csv',
usecols=['age', 'fare', 'survived'])
data.head()
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data[['age', 'fare']],
data['survived'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
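Before applying this to the Titanic variables, here is a tiny toy illustration (not part of the original demo) of the width = (max value - min value) / N formula with pandas: | toy = pd.Series([0, 7, 15, 42, 66, 100])
n_bins = 5
width = (toy.max() - toy.min()) / n_bins  # (100 - 0) / 5 = 20
print('bin width:', width)
# passing an integer to pd.cut creates n_bins equal-width intervals automatically
print(pd.cut(toy, bins=n_bins))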
The variables age and fare contain missing data, which I will fill by extracting a random sample of the variable. | def impute_na(data, variable):
df = data.copy()
# random sampling
df[variable + '_random'] = df[variable]
# extract the random sample to fill the na
random_sample = X_train[variable].dropna().sample(
df[variable].isnull().sum(), random_state=0)
# pandas needs to have the same index in order to merge datasets
random_sample.index = df[df[variable].isnull()].index
df.loc[df[variable].isnull(), variable + '_random'] = random_sample
return df[variable + '_random']
# replace NA in both train and test sets
X_train['age'] = impute_na(data, 'age')
X_test['age'] = impute_na(data, 'age')
X_train['fare'] = impute_na(data, 'fare')
X_test['fare'] = impute_na(data, 'fare')
# let's explore the distribution of age
data[['age', 'fare']].hist(bins=30, figsize=(8,4))
plt.show() | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
Equal width discretisation with pandas and NumPy. First we need to determine the intervals' edges or limits. | # let's capture the range of the variable age
age_range = X_train['age'].max() - X_train['age'].min()
age_range
# let's divide the range into 10 equal width bins
age_range / 10 | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
The range or width of our intervals will be 7 years. | # now let's capture the lower and upper boundaries
min_value = int(np.floor( X_train['age'].min()))
max_value = int(np.ceil( X_train['age'].max()))
# let's round the bin width
inter_value = int(np.round(age_range / 10))
min_value, max_value, inter_value
# let's capture the interval limits, so we can pass them to the pandas cut
# function to generate the bins
intervals = [i for i in range(min_value, max_value+inter_value, inter_value)]
intervals
# let's make labels to label the different bins
labels = ['Bin_' + str(i) for i in range(1, len(intervals))]
labels
# create binned age / discretise age
# create one column with labels
X_train['Age_disc_labels'] = pd.cut(x=X_train['age'],
bins=intervals,
labels=labels,
include_lowest=True)
# and one with bin boundaries
X_train['Age_disc'] = pd.cut(x=X_train['age'],
bins=intervals,
include_lowest=True)
X_train.head(10) | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
We can see in the above output how, by discretising using equal width, we placed each Age observation within one interval / bin. For example, age=13 was placed in the 7-14 interval, whereas age 30 was placed into the 28-35 interval. When performing equal width discretisation, we guarantee that the intervals are all of the same length; however, there won't necessarily be the same number of observations in each of the intervals. See below: | X_train.groupby('Age_disc')['age'].count()
X_train.groupby('Age_disc')['age'].count().plot.bar()
plt.xticks(rotation=45)
plt.ylabel('Number of observations per bin') | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
The majority of people on the Titanic were between 14 and 42 years of age. Now, we can discretise Age in the test set, using the same interval boundaries that we calculated for the train set: | X_test['Age_disc_labels'] = pd.cut(x=X_test['age'],
bins=intervals,
labels=labels,
include_lowest=True)
X_test['Age_disc'] = pd.cut(x=X_test['age'],
bins=intervals,
include_lowest=True)
X_test.head()
# if the distributions in train and test set are similar, we should expect similar propotion of
# observations in the different intervals in the train and test set
# let's see that below
t1 = X_train.groupby(['Age_disc'])['age'].count() / len(X_train)
t2 = X_test.groupby(['Age_disc'])['age'].count() / len(X_test)
tmp = pd.concat([t1, t2], axis=1)
tmp.columns = ['train', 'test']
tmp.plot.bar()
plt.xticks(rotation=45)
plt.ylabel('Number of observations per bin') | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
Equal width discretisation with Feature-Engine | # Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data[['age', 'fare']],
data['survived'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# replace NA in both train and test sets
X_train['age'] = impute_na(data, 'age')
X_test['age'] = impute_na(data, 'age')
X_train['fare'] = impute_na(data, 'fare')
X_test['fare'] = impute_na(data, 'fare')
# with feature engine we can automate the process for many variables
# in one line of code
disc = EqualWidthDiscretiser(bins=10, variables = ['age', 'fare'])
disc.fit(X_train)
# in the binner dict, we can see the limits of the intervals. For age
# the value increases aproximately 7 years from one bin to the next.
# for fare it increases in around 50 dollars from one interval to the
# next, but it increases always the same value, aka, same width.
disc.binner_dict_
# transform train and text
train_t = disc.transform(X_train)
test_t = disc.transform(X_test)
train_t.head()
t1 = train_t.groupby(['age'])['age'].count() / len(train_t)
t2 = test_t.groupby(['age'])['age'].count() / len(test_t)
tmp = pd.concat([t1, t2], axis=1)
tmp.columns = ['train', 'test']
tmp.plot.bar()
plt.xticks(rotation=0)
plt.ylabel('Number of observations per bin')
t1 = train_t.groupby(['fare'])['fare'].count() / len(train_t)
t2 = test_t.groupby(['fare'])['fare'].count() / len(test_t)
tmp = pd.concat([t1, t2], axis=1)
tmp.columns = ['train', 'test']
tmp.plot.bar()
plt.xticks(rotation=0)
plt.ylabel('Number of observations per bin') | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
We can see quite clearly that equal width discretisation does not improve the value spread. The original variable Fare was skewed, and the discretised variable is also skewed. Equal width discretisation with Scikit-learn | # Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
data[['age', 'fare']],
data['survived'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# replace NA in both train and test sets
X_train['age'] = impute_na(data, 'age')
X_test['age'] = impute_na(data, 'age')
X_train['fare'] = impute_na(data, 'fare')
X_test['fare'] = impute_na(data, 'fare')
disc = KBinsDiscretizer(n_bins=10, encode='ordinal', strategy='uniform')
disc.fit(X_train[['age', 'fare']])
disc.bin_edges_
train_t = disc.transform(X_train[['age', 'fare']])
train_t = pd.DataFrame(train_t, columns = ['age', 'fare'])
train_t.head()
test_t = disc.transform(X_test[['age', 'fare']])
test_t = pd.DataFrame(test_t, columns = ['age', 'fare'])
t1 = train_t.groupby(['age'])['age'].count() / len(train_t)
t2 = test_t.groupby(['age'])['age'].count() / len(test_t)
tmp = pd.concat([t1, t2], axis=1)
tmp.columns = ['train', 'test']
tmp.plot.bar()
plt.xticks(rotation=0)
plt.ylabel('Number of observations per bin')
t1 = train_t.groupby(['fare'])['fare'].count() / len(train_t)
t2 = test_t.groupby(['fare'])['fare'].count() / len(test_t)
tmp = pd.concat([t1, t2], axis=1)
tmp.columns = ['train', 'test']
tmp.plot.bar()
plt.xticks(rotation=0)
plt.ylabel('Number of observations per bin') | _____no_output_____ | BSD-3-Clause | Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb | cym3509/FeatureEngineering |
Obligatory imports | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12,8)
matplotlib.rcParams['font.size']=20
matplotlib.rcParams['lines.linewidth']=4
matplotlib.rcParams['xtick.major.size'] = 10
matplotlib.rcParams['ytick.major.size'] = 10
matplotlib.rcParams['xtick.major.width'] = 2
matplotlib.rcParams['ytick.major.width'] = 2 | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
We use the MNIST Dataset again | import IPython
url = 'http://yann.lecun.com/exdb/mnist/'
iframe = '<iframe src=' + url + ' width=80% height=400px></iframe>'
IPython.display.HTML(iframe) | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
Fetch the data | from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original', data_home='../day4/data/')
allimages = mnist.data
allimages.shape
all_image_labels = mnist.target
set(all_image_labels) | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
check out the data | digit1 = mnist.data[0,:].reshape(28,-1) # arr.reshape(4, -1) is equivalent to arr.reshape(4, 7) if arr has size 28
fig, ax = plt.subplots(figsize=(1.5, 1.5))
ax.imshow(digit1, vmin=0, vmax=1) | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
Theoretical background. **Warning: math ahead** Taking logistic regression a step further: neural networks. How do (artificial) neural networks predict a label from features? * The *input layer* has **dimension = number of features.** * For each training example, each feature value is "fed" into the input layer. * Each "neuron" in the hidden layer receives a weighted sum of the features: the weights are initialized to random values in the beginning, and the network "learns" from the dataset and tunes these weights. Each hidden neuron then produces an output from this weighted input via an "activation function", e.g. the logistic function. * The output is, again, a weighted sum of the values at each hidden neuron. * There can be *more than one hidden layer*, in which case the output of the first hidden layer becomes the input of the second hidden layer. Regularization: like logistic regression and SVM, neural networks can also be improved with regularization. For scikit-learn, the relevant tunable parameter is `alpha` (as opposed to `gamma` for LR and SVM). Furthermore, it has default value 0.0001, unlike gamma, for which it is 1. Separate the data into training data and test data | len(allimages) | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
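To make the bullet points above concrete, here is a minimal NumPy sketch of a forward pass through a single hidden layer with a logistic activation. The layer sizes, random weights and fake input are illustrative assumptions; they are not the values that scikit-learn's MLPClassifier will learn below.
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.RandomState(0)
n_features, n_hidden, n_classes = 784, 50, 10             # illustrative sizes for MNIST
W1 = rng.normal(scale=0.01, size=(n_features, n_hidden))  # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.01, size=(n_hidden, n_classes))   # hidden -> output weights
b2 = np.zeros(n_classes)

x = rng.rand(n_features)                   # one fake flattened 28x28 image
hidden = logistic(x.dot(W1) + b1)          # weighted sum + activation at each hidden neuron
output = hidden.dot(W2) + b2               # weighted sum at the output layer
predicted_class = output.argmax()          # the class with the largest output wins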
Sample the data, 70000 is too many images to handle on a single PC | len(allimages)
size_desired_dataset = 2000
sample_idx = np.random.choice(len(allimages), size_desired_dataset)
images = allimages[sample_idx, :]
image_labels = all_image_labels[sample_idx]
set(image_labels)
image_labels.shape | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
Partition into training and test set *randomly*. **As a rule of thumb, an 80/20 split between training/test dataset is often recommended.** See below for cross validation and how that changes this rule of thumb. | from scipy.stats import itemfreq
from sklearn.model_selection import train_test_split
training_data, test_data, training_labels, test_labels = train_test_split(images, image_labels, train_size=0.8) | /home/dmanik/venvs/teaching/lib/python3.5/site-packages/sklearn/model_selection/_split.py:2026: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.
FutureWarning)
| CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
**Importance of normalization** If Feature A is in the range [0,1] and Feature B is in [10000,50000], SVM (in fact, most classifiers) will suffer in accuracy. The solution is to *normalize* (AKA "feature scaling") each feature to the same interval, e.g. [0,1] or [-1, 1]. **scikit-learn provides a standard class for this:** | from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Fit only to the training data: IMPORTANT
scaler.fit(training_data)
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter = 5000)
clf.fit(scaler.transform(training_data), training_labels)
clf.score(scaler.transform(training_data), training_labels), clf.score(scaler.transform(test_data), test_labels) | /home/dmanik/venvs/teaching/lib/python3.5/site-packages/sklearn/utils/validation.py:475: DataConversionWarning: Data with input dtype uint8 was converted to float64 by StandardScaler.
warnings.warn(msg, DataConversionWarning)
| CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
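For intuition, StandardScaler's transform is essentially a per-column z-score. A small sketch of the equivalent NumPy computation on a made-up two-feature array (the numbers are illustrative only):
import numpy as np
toy = np.array([[0.2, 12000.0],
                [0.8, 45000.0],
                [0.5, 30000.0]])                           # one small-scale and one large-scale feature
toy_scaled = (toy - toy.mean(axis=0)) / toy.std(axis=0)    # what fit + transform computes per column
print(toy_scaled.round(3))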
Visualize the hidden layer: | # source:
#
#http://scikit-learn.org/stable/auto_examples/neural_networks/plot_mnist_filters.html
fig, axes = plt.subplots(4, 4, figsize=(15,15))
# use global min / max to ensure all weights are shown on the same scale
vmin, vmax = clf.coefs_[0].min(), clf.coefs_[0].max()
for coef, ax in zip(clf.coefs_[0].T, axes.ravel()):
ax.matshow(coef.reshape(28, 28), cmap=plt.cm.gray, vmin=.5 * vmin,
vmax=.5 * vmax)
ax.set_xticks(())
ax.set_yticks(())
plt.show() | _____no_output_____ | CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
Not bad, but is it better than Logistic regression? Check out with Learning curves: | from sklearn.model_selection import learning_curve
import pandas as pd
curve = learning_curve(clf, scaler.transform(images), image_labels)
train_sizes, train_scores, test_scores = curve
train_scores = pd.DataFrame(train_scores)
train_scores.loc[:,'train_size'] = train_sizes
test_scores = pd.DataFrame(test_scores)
test_scores.loc[:,'train_size'] = train_sizes
train_scores = pd.melt(train_scores, id_vars=['train_size'], value_name = 'CrossVal score')
test_scores = pd.melt(test_scores, id_vars=['train_size'], value_name = 'CrossVal score')
matplotlib.rcParams['figure.figsize'] = (12,8)
sns.tsplot(train_scores, time = 'train_size', unit='variable', value = 'CrossVal score')
sns.tsplot(test_scores, time = 'train_size', unit='variable', value = 'CrossVal score', color='g')
plt.ylim(0,1.1) | /home/dmanik/venvs/teaching/lib/python3.5/site-packages/seaborn/timeseries.py:183: UserWarning: The tsplot function is deprecated and will be removed or replaced (in a substantially altered version) in a future release.
warnings.warn(msg, UserWarning)
| CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
Not really, we can try to improve it with parameter space search. Parameter space search with `GridSearchCV` | from sklearn.model_selection import GridSearchCV
clr = MLPClassifier()
clf = GridSearchCV(clr, {'alpha':np.logspace(-8, -1, 2)})
clf.fit(scaler.transform(images), image_labels)
clf.best_params_
clf.best_score_
nn_tuned = clf.best_estimator_
nn_tuned.fit(scaler.transform(training_data), training_labels)
curve = learning_curve(nn_tuned, scaler.transform(images), image_labels)
train_sizes, train_scores, test_scores = curve
train_scores = pd.DataFrame(train_scores)
train_scores.loc[:,'train_size'] = train_sizes
test_scores = pd.DataFrame(test_scores)
test_scores.loc[:,'train_size'] = train_sizes
train_scores = pd.melt(train_scores, id_vars=['train_size'], value_name = 'CrossVal score')
test_scores = pd.melt(test_scores, id_vars=['train_size'], value_name = 'CrossVal score')
matplotlib.rcParams['figure.figsize'] = (12,8)
sns.tsplot(train_scores, time = 'train_size', unit='variable', value = 'CrossVal score')
sns.tsplot(test_scores, time = 'train_size', unit='variable', value = 'CrossVal score', color='g')
plt.ylim(0,1.1)
plt.legend() | /home/dmanik/venvs/teaching/lib/python3.5/site-packages/seaborn/timeseries.py:183: UserWarning: The tsplot function is deprecated and will be removed or replaced (in a substantially altered version) in a future release.
warnings.warn(msg, UserWarning)
No handles with labels found to put in legend.
| CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
The increase in accuracy is minuscule. Multi-layered NNs | from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(images)
images_normed = scaler.transform(images)
clr = MLPClassifier(hidden_layer_sizes=(25,25))
clf = GridSearchCV(clr, {'alpha':np.logspace(-80, -1, 3)})
clf.fit(images_normed, image_labels)
clf.best_score_
clf.best_params_
nn_tuned = clf.best_estimator_
nn_tuned.fit(scaler.transform(training_data), training_labels)
curve = learning_curve(nn_tuned, images_normed, image_labels)
train_sizes, train_scores, test_scores = curve
train_scores = pd.DataFrame(train_scores)
train_scores.loc[:,'train_size'] = train_sizes
test_scores = pd.DataFrame(test_scores)
test_scores.loc[:,'train_size'] = train_sizes
train_scores = pd.melt(train_scores, id_vars=['train_size'], value_name = 'CrossVal score')
test_scores = pd.melt(test_scores, id_vars=['train_size'], value_name = 'CrossVal score')
matplotlib.rcParams['figure.figsize'] = (12, 8)
sns.tsplot(train_scores, time = 'train_size', unit='variable', value = 'CrossVal score')
sns.tsplot(test_scores, time = 'train_size', unit='variable', value = 'CrossVal score', color='g')
plt.ylim(0,1.1)
plt.legend() | /home/dmanik/venvs/teaching/lib/python3.5/site-packages/seaborn/timeseries.py:183: UserWarning: The tsplot function is deprecated and will be removed or replaced (in a substantially altered version) in a future release.
warnings.warn(msg, UserWarning)
No handles with labels found to put in legend.
| CC-BY-4.0 | day5/02-NN.ipynb | JanaLasser/data-science-course |
Numerical Methods -- Assignment 5. Problem 1 -- Energy density. The matter and radiation density of the universe at redshift $z$ are $$\Omega_m(z) = \Omega_{m,0}(1+z)^3$$ $$\Omega_r(z) = \Omega_{r,0}(1+z)^4$$ where $\Omega_{m,0}=0.315$ and $\Omega_{r,0} = 9.28656 \times 10^{-5}$. (a) Plot | %config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
z = np.linspace(-1000,4000,10000)
O_m0 = 0.315
O_r0 = 9.28656e-5
O_m = O_m0*np.power(z+1,3)
O_r = O_r0*np.power(z+1,4)
#define where the roots are
x1 = -1; x2 = O_m0/O_r0
y1 = O_m0*np.power(x1+1,3)
y2 = O_m0*np.power(x2+1,3)
x = np.array([x1,x2])
y = np.array([y1,y2])
#plot the results
plt.figure(figsize=(8,8))
plt.plot(z,O_m,'-',label="matter density")
plt.plot(z,O_r,'-',label="radiation density")
plt.plot(x,y,'h',label=r"$z_{eq}$")
plt.xlabel("redshift(z)")
plt.ylabel("energy density")
plt.legend()
plt.show() | _____no_output_____ | MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
(b) Analytical solution. An analytical solution can be found by equating the two expressions. Since $z$ denotes the redshift and has a physical meaning, it must take a real value. Thus\begin{align*}\Omega_m(z) &= \Omega_r(z)\\\Omega_{m,0}(1+z)^3 &= \Omega_{r,0}(1+z)^4\\(1+z)^3\left(0.315-9.28656 \times 10^{-5}\,(1+z)\right)&=0\end{align*}so either $(1+z)^3 = 0$ or $0.315-9.28656 \times 10^{-5}\,(1+z)=0$, giving $z_1 = -1$ or $z_2 = \Omega_{m,0}/\Omega_{r,0}-1 \approx 3391.0$. (c) Bisection method. The bisection method is a root-finding method that repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow. scipy.optimize.bisect finds a root of a given function, but for it to work $f(a)$ and $f(b)$ must have different signs (so that a root is guaranteed to exist in $[a,b]$). A bare-bones sketch of the same bracketing idea is shown after the scipy example below. | from scipy.optimize import bisect
def f(z):
O_m0 = 0.315
O_r0 = 9.28656e-5
O_m = O_m0*np.power(z+1,3)
O_r = O_r0*np.power(z+1,4)
return O_m -O_r
z1 = bisect(f,-1000,0,xtol=1e-10)
z2 = bisect(f,0,4000,xtol=1e-10)
print "The roots are found to be:",z1,z2 | The roots are found to be: -1.00000000003 3390.9987595
| MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
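For comparison with scipy's routine, the sketch below hand-rolls the same bracketing idea; it assumes f(a) and f(b) have opposite signs and uses a simplified stopping rule, so it is an illustration rather than a drop-in replacement for scipy.optimize.bisect.
def bisection(f, a, b, eps):
    # keep halving [a, b] while preserving a sign change inside the bracket
    fa = f(a)
    iteration_counter = 0
    while (b - a) / 2.0 > eps and iteration_counter < 200:
        c = (a + b) / 2.0
        fc = f(c)
        if fa * fc <= 0:   # root lies in [a, c]
            b = c
        else:              # root lies in [c, b]
            a, fa = c, fc
        iteration_counter += 1
    return (a + b) / 2.0, iteration_counter

# example: bisection(f, 0, 4000, 1e-10) approaches z_eq ~ 3391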
(d) Secant method. The $\textit{secant method}$ uses secant lines to find the root. A secant line is a straight line that passes through two points of a curve. In the secant method, the line through the points $(a, f(a))$ and $(b, f(b))$ is extended until it intersects the $x$ axis. Along this line$$y = \frac{f(b)-f(a)}{b-a}(x-b)+f(b)$$and setting $y=0$ at the intersection point $x=c$ gives$$c = b-f(b)\frac{b-a}{f(b)-f(a)}$$The two most recent points then define the next secant line, and the process is repeated until convergence. |
def secant(f, x0, x1, eps):
f_x0 = f(x0)
f_x1 = f(x1)
iteration_counter = 0
while abs(f_x1) > eps and iteration_counter < 100:
try:
denominator = float(f_x1 - f_x0)/(x1 - x0)
x = x1 - float(f_x1)/denominator
except ZeroDivisionError:
print "Error! - denominator zero for x = ", x
sys.exit(1) # Abort with error
x0 = x1
x1 = x
f_x0 = f_x1
f_x1 = f(x1)
iteration_counter += 1
# Here, either a solution is found, or too many iterations
if abs(f_x1) > eps:
iteration_counter = -1
return x, iteration_counter
#find the roots in the nearby region, with an accuracy of 1e-10
z1 = secant(f,-10,-0.5,1e-10)[0]
z2 = secant(f,3000,4000,1e-10)[0]
print "The roots are found to be:",z1,z2
| The roots are found to be: -0.999466618551 3390.9987595
| MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
(e) Newton-Raphson method. In numerical methods, the $\textit{Newton-Raphson method}$ is a method for finding successively better approximations to the roots of a real-valued function. The algorithm is as follows:* Starting with a function $f$ defined over the real numbers, its derivative $f'$, and an initial guess $x_0$ for a root of the function $f$, a better approximation $x_1$ is:$$x_1 = x_0 -\frac{f(x_0)}{f'(x_0)}$$* The process is then repeated as$$x_{n+1} = x_n-\frac{f(x_n)}{f'(x_n)}$$until a sufficiently accurate value is reached. | def fprime(z):
O_m0 = 0.315
O_r0 = 9.28656e-5
O_m = O_m0*np.power(z+1,2)
O_r = O_r0*np.power(z+1,3)
return 3*O_m -4*O_r
def Newton(f, dfdx, x, eps):
f_value = f(x)
iteration_counter = 0
while abs(f_value) > eps and iteration_counter < 100:
try:
x = x - float(f_value)/dfdx(x)
except ZeroDivisionError:
print "Error! - derivative zero for x = ", x
sys.exit(1) # Abort with error
f_value = f(x)
iteration_counter += 1
# Here, either a solution is found, or too many iterations
if abs(f_value) > eps:
iteration_counter = -1
return x, iteration_counter
z1 = Newton(f,fprime,0,1e-10)[0]
z2 = Newton(f,fprime,3000,1e-10)[0]
print "The roots are found to be:",z1,z2
| The roots are found to be: -0.9993234602 3390.9987595
| MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
Now, change the initial guesses to values far from those obtained in (b), and test how the three algorithms perform. | #test how the bisection method performs
import time
start1 = time.time()
z1 = bisect(f,-1000,1000,xtol=1e-10)
end1 = time.time()
start2 = time.time()
z2 = bisect(f,3000,10000,xtol=1e-10)
end2 = time.time()
err1 = abs((z1-(-1))/(-1))
err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1))
print "The roots are found to be:",z1,z2
print "With a deviation of:",err1,err2
print "Time used are:",end1-start1,end2-start2
#test how the secant method perform
start1 = time.time()
z1 = secant(f,-1000,1000,1e-10)[0]
end1 = time.time()
start2 = time.time()
z2 = secant(f,3000,10000,1e-10)[0]
end2 = time.time()
err1 = abs((z1-(-1))/(-1))
err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1))
print "The roots are found to be:",z1,z2
print "With a deviation of:",err1,err2
print "Time used are:",end1-start1,end2-start2
print "Roots found after",secant(f,-10,-0.5,1e-10)[1],"and",secant(f,3000,4000,1e-10)[1],"loops"
#test how the newton-Raphson method perform
start1 = time.time()
z1 = Newton(f,fprime,-1000,1e-10)[0]
end1 = time.time()
start2 = time.time()
z2 = Newton(f,fprime,10000,1e-10)[0]
end2 = time.time()
err1 = abs((z1-(-1))/(-1))
err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1))
print "The roots are found to be:",z1,z2
print "With a deviation of:",err1,err2
print "Time used are:",end1-start1,end2-start2
print "Roots found after",Newton(f,fprime,0,1e-10)[1],"and",Newton(f,fprime,3000,1e-10)[1],"loops" | The roots are found to be: -1.00051824126 3390.9987595
With a deviation of: 0.000518241260632 0.0
Time used are: 0.000991821289062 0.000278949737549
Roots found after 18 and 7 loops
| MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
Tested with the given function, the bisection method is the fastest and most reliable in finding the first root; however, for the second root both the secant method and Newton's method performed better, with zero deviation from the true value and a much shorter run time. In general, for more complicated calculations the bisection method is relatively slow, and within a given tolerance Newton's method and the secant method will usually perform better. Problem 2 -- Potential. The $\textit{Navarro-Frenk-White}$ and $\textit{Hernquist}$ potentials can be expressed as $$\Phi_{NFW}(r) = -\Phi_0\frac{r_s}{r}\,\ln(1+r/r_s)$$ $$\Phi_{Hernquist}(r) = -\Phi_0\,\frac{1}{2(1+r/r_s)}$$ with $\Phi_0 = 1.659 \times 10^4 \ km^2/s^2$ and $r_s = 15.61 \ kpc$. The apocentre and pericentre can be found by solving the equation \begin{align*}E_{tot} &= \frac{1}{2}\left(v_t^2+v_r^2\right)+\Phi\\\end{align*} where $L = rv_t$ is the specific angular momentum, and $E_{tot}$ is the total energy of the orbit, which can be computed from $(r,v_t,v_r)$ of a given star. Define the residue function $$R\equiv E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$$ so that the pericentre and apocentre are found where $R=0$. Then the radial action $J_r$ is defined as $\textit{(Jason L. Sanders, 2015)}$ $$J_r = \frac{1}{\pi}\int_{r_p}^{r_a}dr\sqrt{2E-2\Phi-\frac{L^2}{r^2}}$$ where $r_p$ is the pericentric radius and $r_a$ is the apocentric radius. | import matplotlib.pyplot as plt
import numpy as np
from scipy.optimize import newton
from scipy.integrate import quad
from math import *
r = np.array([7.80500, 15.6100,31.2200,78.0500,156.100]) #r in kpc
vt = np.array([139.234,125.304,94.6439,84.5818,62.8640]) # vt in km/s
vr = np.array([-15.4704,53.7018,-283.932,-44.5818,157.160]) # vr in km/s
#NFW profile potential
def NFW(r):
phi0 = 1.659e4
rs = 15.61
ratio = rs/r
return -phi0*ratio*np.log(1+1/ratio)
#Hernquist profile potential
def H(r):
phi0 = 1.659e4
rs = 15.61
ratio = r/rs
return -phi0/(2*(1+ratio))
#1st derivative of Hernquist profile potential
def H_d(r):
phi0 = 1.659e4
rs = 15.61
ratio = r/rs
return phi0*0.5/rs*((1+ratio)**(-2))
#1st derivative of NFW profile potential
def NFW_d(r):
phi0 = 1.659e4
rs = 15.61
ratio = rs/r
return -phi0*rs*((-1/r**2)*np.log(1+1/ratio)+1/(r*rs)*(1+1/ratio)**(-1))
#total energy, NFW profile
def E_NFW(r,vt,vr):
E = 0.5*(vt**2+vr**2)+NFW(r)
return E
#total energy, Hernquist profile
def E_H(r,vt,vr):
E = 0.5*(vt**2+vr**2)+H(r)
return E
#Residue function
def Re(r,Energy,momentum,p):
return Energy - 0.5*(momentum/r)**2-p
#Residue function for NFW profile
def R_NFW(r,Energy,momentum):
return Energy - 0.5*(momentum/r)**2-NFW(r)
#Residue function for Hernquist profile
def R_H(r,Energy,momentum):
return Energy - 0.5*(momentum/r)**2-H(r)
#derivative of residue of NFW profile
def R_dNFW(r,Energy,momentum):
return Energy*0+momentum**2*r**(-3)-NFW_d(r)
#derivative of residue of Hernquist profile
def R_dH(r,Energy,momentum):
return Energy*0+momentum**2*r**(-3)-H_d(r)
#second derivative of residue of Hernquist profile, come handy if the
#calculated value for pericentre for Hernquist profile is too far off
#from the value calculated for NFW profile
def R_ddH(r,Energy,momentum):
phi0 = 1.659e4
rs = 15.61
ratio = r/rs
return Energy*0-3*momentum**2*r**(-4)+phi0*0.5/rs**2*((1+ratio)**(-3))
#function that defines the radial action
def r_actionNFW(r,Energy,momentum):
return np.sqrt(2*(Energy-NFW(r))-(momentum/r)**2)/pi
def r_actionH(r,Energy,momentum):
return np.sqrt(2*(Energy-H(r))-(momentum/r)**2)/pi
R1 = np.linspace(7,400,1000)
R2 = np.linspace(10,500,1000)
R3 = np.linspace(7,600,1000)
R4 = np.linspace(50,800,1000)
R5 = np.linspace(50,1500,1000)
Momentum = r*vt
Energy_nfw = E_NFW(r,vt,vr)
Energy_h = E_H(r,vt,vr)
#plot results for 5 stars
#1st star
i = 0
R_nfw = Re(R1,Energy_nfw[i],Momentum[i],NFW(R1))
R_h = Re(R1,Energy_h[i],Momentum[i],H(R1))
plt.figure(figsize=(15,10))
plt.plot(R1,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R1,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"1st star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
z2 = newton(R_NFW,100,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
e2 = R_NFW(z2,Energy_nfw[i],Momentum[i])
print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
plt.plot(z2,e2,marker='d',label='apocentre-NFW')
z3 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH)
e3 = Re(z1,Energy_h[i],Momentum[i],H(z1))
print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile"
plt.plot(z1,e1,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"
#2nd star
i = 1
R_nfw = Re(R2,Energy_nfw[i],Momentum[i],NFW(R2))
R_h = Re(R2,Energy_h[i],Momentum[i],H(R2))
plt.figure(figsize=(15,10))
plt.plot(R2,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R2,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"2nd star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
z2 = newton(R_NFW,400,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
e2 = R_NFW(z2,Energy_nfw[i],Momentum[i])
print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
plt.plot(z2,e2,marker='d',label='apocentre-NFW')
z3 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH)
e3 = Re(z1,Energy_h[i],Momentum[i],H(z1))
print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile"
plt.plot(z3,e3,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"
#3rd star
i = 2
R_nfw = Re(R3,Energy_nfw[i],Momentum[i],NFW(R3))
R_h = Re(R3,Energy_h[i],Momentum[i],H(R3))
plt.figure(figsize=(15,10))
plt.plot(R3,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R3,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"3rd star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
print "The pericentre is found to be:",z1,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
z2 = newton(R_H,10,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH)
e2 = R_H(z2,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z2,"kpc","for the Hernquist profile"
plt.plot(z2,e2,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,np.inf,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z2,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"
#4th star
i = 3
R_nfw = Re(R4,Energy_nfw[i],Momentum[i],NFW(R4))
R_h = Re(R4,Energy_h[i],Momentum[i],H(R4))
plt.figure(figsize=(15,10))
plt.plot(R4,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R4,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"4th star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
z2 = newton(R_NFW,400,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
e2 = R_NFW(z2,Energy_nfw[i],Momentum[i])
print "The pericentre and apocentre are found to be:",z1,"kpc","and",z2,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
plt.plot(z2,e2,marker='d',label='apocentre-NFW')
z3 = newton(R_H,50,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH)
e3 = R_H(z3,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z3,"kpc","for the Hernquist profile"
plt.plot(z1,e1,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,z2,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z3,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc"
#5th star
i = 4
R_nfw = Re(R5,Energy_nfw[i],Momentum[i],NFW(R5))
R_h = Re(R5,Energy_h[i],Momentum[i],H(R5))
plt.figure(figsize=(15,10))
plt.plot(R5,R_nfw,ls='-',label="NFW",color='#9370db',lw=2)
plt.plot(R5,R_h,ls='--',label="Hernquist",color='salmon',alpha=0.75,lw=2)
plt.rc('xtick', labelsize=15) # fontsize of the tick labels
plt.rc('ytick', labelsize=15)
plt.axhline(y=0,color='#b22222',lw=3)
plt.title(r"5th star, $R= E_{tot} - \frac{1}{2}\left(\frac{L}{r}\right)^2-\Phi$",fontsize=20)
plt.xlabel("r(kpc)",fontsize=15)
plt.ylabel("Residue",fontsize=15)
z1 = newton(R_NFW,10,args=(Energy_nfw[i],Momentum[i]),fprime=R_dNFW)
e1 = R_NFW(z1,Energy_nfw[i],Momentum[i])
print "The pericentre is found to be:",z1,"kpc","for the NFW profile"
plt.plot(z1,e1,marker='d',label='pericentre-NFW')
z2 = newton(R_H,50,args=(Energy_h[i],Momentum[i]),fprime=R_dH,fprime2=R_ddH)
e2 = R_H(z2,Energy_h[i],Momentum[i])
print "The pericentre is found to be:",z2,"kpc","for the Hernquist profile"
plt.plot(z1,e1,marker='o',label='pericentre-H')
plt.legend(fontsize=15)
plt.show()
J_NFW = quad(r_actionNFW,z1,np.inf,args=(Energy_nfw[i],Momentum[i]))
print "The radial action for NFW profile is:",J_NFW[0],"km/s kpc"
J_H = quad(r_actionH,z2,np.inf,args=(Energy_h[i],Momentum[i]))
print "The radial action for Hernquist profile is:",J_H[0],"km/s kpc" | The pericentre is found to be: 52.2586723359 kpc for the NFW profile
The pericentre is found to be: 55.9497763757 kpc for the Hernquist profile
| MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
The table below lists all the parameters of the five stars. Problem 3 -- System of equations $$f(x,y) = x^2+y^2-50=0$$$$g(x,y) = x \times y -25 = 0$$ (a) Analytical solution. First, from $f(x,y)-2g(x,y)$ we find:\begin{align*}x^2+y^2-2xy &=0\\(x-y)^2 &= 0\\x&=y\end{align*}Then, from $f(x,y)+2g(x,y)$ we find:\begin{align*}x^2+y^2+2xy &=100\\(x+y)^2 &= 100\\x+y &= \pm 10\end{align*}Combined with $x=y$, the solutions are $(x,y) = (5,5)$ or $(-5,-5)$. (b) Newton's method. The Newton-Raphson method can also be applied to solve multivariate systems. The algorithm is as follows:* Suppose we have an N-D multivariate system of the form:\begin{cases}f_1(x_1,...,x_N)=f_1(\mathbf{x})=0\\f_2(x_1,...,x_N)=f_2(\mathbf{x})=0\\\vdots \\f_N(x_1,...,x_N)=f_N(\mathbf{x})=0\\\end{cases}where we have defined $$\mathbf{x}=[x_1,...,x_N]^T$$Define a vector function$$\mathbf{f}(\mathbf{x})=[f_1(\mathbf{x}),...,f_N(\mathbf{x})]^T$$so that the equation system above can be written as$$\mathbf{f}(\mathbf{x})=\mathbf{0}$$* $\mathbf{J}_{\mathbf{f}}(\mathbf{x})$ is the $\textit{Jacobian matrix}$ of the function vector $\mathbf{f}(\mathbf{x})$: $$\mathbf{J}_{\mathbf{f}}(\mathbf{x})=\begin{bmatrix}\frac{\partial f_1}{\partial x_1} & \dots & \frac{\partial f_1}{\partial x_N} \\\vdots & \ddots & \vdots \\\frac{\partial f_N}{\partial x_1} & \dots & \frac{\partial f_N}{\partial x_N}\end{bmatrix}$$* To first order (exactly, if all equations are linear) we have$$\mathbf{f}(\mathbf{x}+\delta \mathbf{x})=\mathbf{f}(\mathbf{x})+\mathbf{J}(\mathbf{x})\delta\mathbf{x}$$* By setting $\mathbf{f}(\mathbf{x}+\delta \mathbf{x})=0$, we can estimate the roots as $\mathbf{x}+\delta \mathbf{x}$, where$$\delta \mathbf{x} = -\mathbf{J}(\mathbf{x})^{-1}\mathbf{f}(\mathbf{x})$$* The approximation is then improved iteratively:$$\mathbf{x}_{n+1} = \mathbf{x}_n +\delta \mathbf{x}_n = \mathbf{x}_n-\mathbf{J}(\mathbf{x}_n)^{-1}\mathbf{f}(\mathbf{x}_n)$$ | from scipy.optimize import fsolve
import numpy as np
f1 = lambda x: [x[0]**2+x[1]**2-50,x[0]*x[1]-25]
#the Jacobian needed to implement Newton's method
fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2)
#define the domain where we want to find the solution (x,y)
a = np.linspace(-10,10,100)
b = a
#for every point (a,b), pass on to fsolve and append the result
#then round the result and see how many pairs of solutions there are
i = 0
result = np.array([[5,5]])
#print result
for a,b in zip(a,b):
x = fsolve(f1,[a,b],fprime=fd)
x = np.round(x)
result = np.append(result,[x],axis=0)
print "The sets of solutions are found to be:",np.unique(result,axis=0)
| The sets of solutions are found to be: [[-5. -5.]
[ 5. 5.]]
| MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
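The fsolve calls above hide the iteration itself. The sketch below hand-rolls the update $\mathbf{x}_{n+1} = \mathbf{x}_n-\mathbf{J}^{-1}\mathbf{f}(\mathbf{x}_n)$ for this particular system; the starting point and tolerance are illustrative. Note that the Jacobian $[[2x, 2y],[y, x]]$ is singular on the line $x=y$, which includes both solutions, so the starting point should have $x_0 \neq y_0$ and convergence near the roots is slower than the usual quadratic rate.
import numpy as np

def newton_system(x0, y0, tol=1e-8, max_iter=200):
    # Newton iteration for f = x^2 + y^2 - 50 and g = x*y - 25
    p = np.array([float(x0), float(y0)])
    for n_iter in range(1, max_iter + 1):
        F = np.array([p[0]**2 + p[1]**2 - 50.0, p[0]*p[1] - 25.0])
        if np.max(np.abs(F)) < tol:
            break
        J = np.array([[2.0*p[0], 2.0*p[1]],
                      [p[1], p[0]]])
        p = p - np.linalg.solve(J, F)      # x_{n+1} = x_n - J^{-1} f(x_n)
    return p, n_iter

# example: newton_system(1.0, 7.0) converges towards (5, 5)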
From the above we see that the only solutions found are indeed $(x,y) = (5,5)$ or $(x,y) = (-5,-5)$. (c) Convergence | %config InlineBackend.figure_format = 'retina'
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
def f(x, y):
return x**2+y**2-50;
def g(x, y):
return x*y-25
x = np.linspace(-6, 6, 500)
@np.vectorize
def fy(x):
x0 = 0.0
def tmp(y):
return f(x, y)
y1, = fsolve(tmp, x0)
return y1
@np.vectorize
def gy(x):
x0 = 0.0
def tmp(y):
return g(x, y)
y1, = fsolve(tmp, x0)
return y1
plt.plot(x, fy(x), x, gy(x))
plt.xlabel('x')
plt.ylabel('y')
plt.rc('xtick', labelsize=10) # fontsize of the tick labels
plt.rc('ytick', labelsize=10)
plt.legend(['fy', 'gy'])
plt.show()
#print fy(x)
i =1
I = np.array([])
F = np.array([])
G = np.array([])
X_std = np.array([])
Y_std = np.array([])
while i<50:
x_result = fsolve(f1,[-100,-100],maxfev=i)
f_result = f(x_result[0],x_result[1])
g_result = g(x_result[0],x_result[1])
x1_std = abs(x_result[0]+5.0)
x2_std = abs(x_result[1]+5.0)
F = np.append(F,f_result)
G = np.append(G,g_result)
I = np.append(I,i)
X_std = np.append(X_std,x1_std)
Y_std = np.append(Y_std,x2_std)
i+=1
xtol = 1.49012e-08
plt.loglog(I,np.abs(F),I,np.abs(G))
plt.title("converge of f and g")
plt.xlabel("iterations")
plt.ylabel("function values")
plt.legend(['f','g'])
plt.show()
plt.loglog(I,X_std,I,Y_std)
plt.axhline(y=xtol,color='#b22222',lw=3)
plt.title(r"$converge \ of \ \Delta_x \ and \ \Delta_y$")
plt.xlabel("iterations")
plt.ylabel("Deviation values")
plt.legend([r'$\Delta x$',r'$\Delta y$','tolerance'])
plt.show() | _____no_output_____ | MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
(d) Maximum iterations. Now also apply the Jacobian. The Jacobian of the system of equations is$$\mathbf{J} = \begin{bmatrix}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\end{bmatrix}=\begin{bmatrix}2x & 2y \\y & x\end{bmatrix}$$ | fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2)
i =1
I = np.array([])
F = np.array([])
G = np.array([])
X_std = np.array([])
Y_std = np.array([])
while i<50:
x_result = fsolve(f1,[-100,-100],fprime=fd,maxfev=i)
f_result = f(x_result[0],x_result[1])
g_result = g(x_result[0],x_result[1])
x1_std = abs(x_result[0]+5.0)
x2_std = abs(x_result[1]+5.0)
F = np.append(F,f_result)
G = np.append(G,g_result)
I = np.append(I,i)
X_std = np.append(X_std,x1_std)
Y_std = np.append(Y_std,x2_std)
i+=1
xtol = 1.49012e-08
plt.loglog(I,np.abs(F),I,np.abs(G))
plt.title("converge of f and g")
plt.xlabel("iterations")
plt.ylabel("function values")
plt.legend(['f','g'])
plt.show()
plt.loglog(I,X_std,I,Y_std)
plt.axhline(y=xtol,color='#b22222',lw=3)
plt.title(r"$converge \ of \ \Delta_x \ and \ \Delta_y$")
plt.xlabel("iterations")
plt.ylabel("Deviation values")
plt.legend([r'$\Delta x$',r'$\Delta y$','tolerance'])
plt.show() | _____no_output_____ | MIT | numerical5.ipynb | fatginger1024/NumericalMethods |
Exploratory Data Analysis | from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql import functions as F
spark = SparkSession.builder.master('local[1]').appName("Jupyter").getOrCreate()
sc = spark.sparkContext
#test if this works
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy
import datetime
# the more advanced python visualization library
import seaborn as sns
# apply style to all the charts
sns.set_style('whitegrid')
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# create sparksession
spark = SparkSession \
.builder \
.appName("Pysparkexample") \
.config("spark.some.config.option", "some-value") \
.getOrCreate() | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
Load Data | #collisions = spark.read.csv('data/accidents.csv', header='true', inferSchema = True)
#collisions.show(2)
df_new = spark.read.csv('data/accidents_new.csv', header='true', inferSchema = True) | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
Data Perspective_____ * One variable: * Numeric variables: * continuous * discrete * Categorical variables: * ordinal * nominal * Multiple variables: * Numeric x Numeric * Categorical x Numeric * Categorical x Categorical ____________________ Overview | print('The total number of rows : ', df_new.count(),
'\nThe total number of columns :', len(df_new.columns)) | The total number of rows : 128647
The total number of columns : 40
| MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
Data Schema. Print the data schema for our dataset - SAAQ Accident Information | df_new.printSchema()
# Create temporary table query with SQL
df_new.createOrReplaceTempView('AccidentData')
accidents_limit_10 = spark.sql(
'''
SELECT * FROM AccidentData
LIMIT 10
'''
).toPandas()
accidents_limit_10 | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
One Variable__________ a. Numeric - Data Totals. Totals for various accident records | from pyspark.sql import functions as func
#df_new.agg(func.sum("NB_BLESSES_VELO").alias('Velo'),func.sum("NB_VICTIMES_MOTO"),func.sum("NB_VEH_IMPLIQUES_ACCDN")).show()
df_new.agg(func.sum("NB_VEH_IMPLIQUES_ACCDN").alias('Ttl Cars In Accidents')).show()
df_new.agg(func.sum("NB_VICTIMES_TOTAL").alias('Ttl Victims')).show()
df_new.agg(func.sum("NB_MORTS").alias('Ttl Deaths')).show()
df_new.agg(func.sum("NB_BLESSES_GRAVES").alias('Ttl Severe Injuries')).show()
df_new.agg(func.sum("NB_BLESS_LEGERS").alias('Ttl Light Injuries')).show()
df_new.agg(func.sum("NB_DECES_PIETON").alias('Ttl Pedestrian Deaths')).show()
df_new.agg(func.sum("NB_BLESSES_PIETON").alias('Ttl Pedestrian Injuries')).show()
df_new.agg(func.sum("NB_VICTIMES_PIETON").alias('Ttl Pedestrian Victims')).show()
df_new.agg(func.sum("NB_DECES_MOTO").alias('Ttl Moto Deaths')).show()
df_new.agg(func.sum("NB_BLESSES_MOTO").alias('Ttl Moto Injuries')).show()
df_new.agg(func.sum("NB_VICTIMES_MOTO").alias('Ttl Moto Victims')).show()
df_new.agg(func.sum("NB_DECES_VELO").alias('Ttl Bike Deaths')).show()
df_new.agg(func.sum("NB_BLESSES_VELO").alias('Ttl Bike Injuries')).show()
df_new.agg(func.sum("NB_VICTIMES_VELO").alias('Ttl Bike Victims')).show()
df_new.agg(func.sum("nb_automobile_camion_leger").alias('Ttl Car - Light Trucks')).show()
df_new.agg(func.sum("nb_camionLourd_tractRoutier").alias('Ttl Heavy Truck - Tractor')).show()
df_new.agg(func.sum("nb_outil_equipement").alias('Ttl Equipment - Tools')).show()
df_new.agg(func.sum("nb_tous_autobus_minibus").alias('Ttl Bus')).show()
df_new.agg(func.sum("nb_bicyclette").alias('Ttl Bikes')).show()
df_new.agg(func.sum("nb_cyclomoteur").alias('Ttl Motorized Bike')).show()
df_new.agg(func.sum("nb_motocyclette").alias('Ttl Motorcycle')).show()
df_new.agg(func.sum("nb_taxi").alias('Ttl Taxi')).show()
df_new.agg(func.sum("nb_urgence").alias('Ttl Emergency')).show()
df_new.agg(func.sum("nb_motoneige").alias('Ttl Snowmobile')).show()
df_new.agg(func.sum("nb_VHR").alias('Ttl Motorhome')).show()
df_new.agg(func.sum("nb_autres_types").alias('Ttl Other Types')).show()
df_new.agg(func.sum("nb_veh_non_precise").alias('Ttl Non Specified Vehicles')).show()
df_totals = pd.DataFrame(columns=['Attr','Total'])
#df_totals.append({'Attr':'NB_VEH_IMPLIQUES_ACCDN','Total':df_new.agg(func.sum("NB_VEH_IMPLIQUES_ACCDN"))},ignore_index=True)
#df_totals | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
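As a design note, the column-by-column agg calls above can be collapsed into a single aggregation, so Spark scans the data once instead of once per show() call. A sketch assuming the same df_new and a subset of the column names used above:
total_cols = ['NB_VEH_IMPLIQUES_ACCDN', 'NB_VICTIMES_TOTAL', 'NB_MORTS',
              'NB_BLESSES_GRAVES', 'NB_BLESS_LEGERS', 'NB_VICTIMES_PIETON',
              'NB_VICTIMES_MOTO', 'NB_VICTIMES_VELO']
totals_row = df_new.agg(*[func.sum(c).alias(c) for c in total_cols]).collect()[0]
df_totals = pd.DataFrame(list(totals_row.asDict().items()), columns=['Attr', 'Total'])
df_totals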
b. Categorical GRAVITE - severity of the accident | gravite_levels = spark.sql(
'''
SELECT GRAVITE, COUNT(*) as Total FROM AccidentData
GROUP BY GRAVITE
ORDER BY Total DESC
'''
).toPandas()
gravite_levels
# Pie Chart
fig,ax = plt.subplots(1,1,figsize=(12,6))
wedges, texts, autotexts = ax.pie(gravite_levels['Total'], radius=2, #labeldistance=2, pctdistance=1.1,
autopct='%1.2f%%', startangle=90)
ax.legend(wedges, gravite_levels['GRAVITE'],
title="GRAVITE",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
plt.setp(autotexts, size=12, weight="bold")
ax.axis('equal')
plt.tight_layout()
plt.savefig('figures/gravite_levels.png')
plt.show() | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
METEO - Weather Conditions | meteo_conditions = spark.sql(
'''
SELECT METEO, COUNT(*) as Total FROM AccidentData
GROUP BY METEO
ORDER BY Total DESC
'''
).toPandas()
meteo_conditions['METEO'] = meteo_conditions['METEO'].replace( {11:'Clear',12:'Overcast: cloudy/dark',13:'Fog/mist',
14:'Rain/bruine',15:'Heavy rain',16:'Strong wind',
17:'Snow/storm',18:'Blowing snow/blizzard',
19:'Ice',99:'Other..'})
meteo_conditions
fig,ax = plt.subplots(1,1,figsize=(10,6))
plt.bar(meteo_conditions['METEO'], meteo_conditions['Total'],
align='center', alpha=0.7, width=0.7, color='purple')
plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right')
fig.tight_layout()
plt.savefig('figures/meteo_conditions.png')
plt.show() | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
Multiple Variables____________ Numeric X Categorical 1. Accident Victims by Municipality | victims_by_municipality = spark.sql(
'''
SELECT MUNCP, SUM(NB_VICTIMES_TOTAL) as Total FROM AccidentData
GROUP BY MUNCP
ORDER BY Total DESC
'''
).toPandas()
victims_by_municipality
fig,ax = plt.subplots(1,1,figsize=(10,6))
victims_by_municipality.plot(x = 'MUNCP', y = 'Total', kind = 'barh', color = 'C0', ax = ax, legend = False)
ax.set_xlabel('Total Victims', size = 16)
ax.set_ylabel('Municipality', size = 16)
plt.savefig('figures/victims_by_municipality.png')
plt.show()
victims_by_region = spark.sql(
'''
SELECT MUNCP, SUM(NB_VICTIMES_TOTAL) as Total FROM AccidentData
GROUP BY MUNCP
'''
).toPandas()
plt.figure(figsize = (10,6))
sns.distplot(np.log(victims_by_region['Total']))
plt.title('Total Victims Histogram by Region', size = 16)
plt.ylabel('Density', size = 16)
plt.xlabel('Log Total', size = 16)
plt.savefig('figures/distplot.png')
plt.show() | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
2. Total Collisions by Day of Week | collisions_by_day = spark.sql(
'''
SELECT WEEK_DAY, COUNT(WEEK_DAY) as Number_of_Collisions FROM AccidentData
GROUP BY WEEK_DAY
ORDER BY Number_of_Collisions DESC
'''
).toPandas()
collisions_by_day
fig,ax = plt.subplots(1,1,figsize=(10,6))
collisions_by_day.plot(x = 'WEEK_DAY', y = 'Number_of_Collisions', kind = 'barh', color = 'C0', ax = ax, legend = False)
ax.set_xlabel('Number_of_Collisions', size = 16)
ax.set_ylabel('WEEK_DAY', size = 16)
plt.savefig('figures/collisions_by_day.png')
plt.show() | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
"VE", Friday has the highest number of collisions. 3. Top 10 Accidents by street | accidents_by_street = spark.sql(
'''
SELECT STREET, COUNT(STREET) as Number_of_Accidents FROM AccidentData
GROUP BY STREET
ORDER BY Number_of_Accidents DESC
LIMIT 10
'''
).toPandas()
fig,ax = plt.subplots(1,1,figsize=(10,6))
#accidents_by_street.plot(x = 'STREET', y = 'Number_of_Accidents', kind = 'barh', color = 'C0', ax = ax, legend = False)
sns.barplot(x=accidents_by_street['Number_of_Accidents'], y=accidents_by_street['STREET'], orient='h')
ax.set_xlabel('Number of Accidents', size = 16)
ax.set_ylabel('Street', size = 16)
plt.savefig('figures/accidents_by_street.png')
plt.show() | _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
Numeric X Numeric Correlation Heatmap. Illustrates the correlation between numeric variables of the dataset. | plot_df = spark.sql(
'''
SELECT METEO, SURFACE, LIGHT, TYPE_ACCDN,
NB_MORTS, NB_BLESSES_GRAVES, NB_VEH_IMPLIQUES_ACCDN, NB_VICTIMES_TOTAL
FROM AccidentData
'''
).toPandas()
corrmat = plot_df.corr()
f, ax = plt.subplots(figsize=(10, 7))
sns.heatmap(corrmat, vmax=.8, square=True)
plt.savefig('figures/heatmap.png')
plt.show()
| _____no_output_____ | MIT | code/data_EDA.ipynb | ArwaSheraky/Montreal-Collisions |
Hist plot for all values (even though 0 is actually useless) | fig, ax = plt.subplots(figsize=(10,10))
ax.hist(expressions, alpha=0.5, label=expressions.columns)
ax.legend() | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Filter out all values that are equal to 0 | expressions.value_counts(sort=True) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
===> columns contempt, unkown and NF can be dropped | expressions_drop = expressions.drop(columns=["unknown", "contempt", "NF"])
exp_nan = expressions_drop.replace(0, np.NaN)
exp_stacked = exp_nan.stack(dropna=True)
exp_unstacked = exp_stacked.reset_index(level=1)
expressions_single = exp_unstacked.rename(columns={"level_1": "expression"}).drop(columns=[0])
expressions_single.head() | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Append expressions to expw | expw["expression"] = expressions_single["expression"]
expw.head() | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Remove unnecessary columns | expw_minimal = expw.drop(expw.columns[1:-1], axis=1)
expw_minimal.loc[:, "Image name"] = data_dir + "/" + expw_minimal["Image name"].astype(str)
expw_minimal.shape | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Histogram of expression distribution | x_ticks = [f"{idx} = {expr}, count: {count}" for idx, (expr, count) in enumerate(zip(list(expressions_single.value_counts().index.get_level_values(0)), expressions_single.value_counts().values))]
x_ticks
ax = expressions_single.value_counts().plot(kind='barh')
ax.set_yticklabels(x_ticks) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Create a csv file with all absolute image paths for annotating with FairFace | col_name = "img_path"
image_names = expw[["Image name"]]
image_names.head()
image_names.rename(columns={"Image name": "img_path"}, inplace=True)
image_names.loc[:, "img_path"] = data_dir + "/" + image_names["img_path"].astype(str)
save_path = "/home/steffi/dev/independent_study/FairFace/expw_image_paths.csv"
image_names.to_csv(save_path, index=False) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Filter only img_paths which contain "black", "African", "chinese", "asian" | black = image_names.loc[image_names.img_path.str.contains('(black)'), :]
african = image_names.loc[image_names.img_path.str.contains('(African)'), :]
asian = image_names.loc[image_names.img_path.str.contains('(asian)'), :]
chinese = image_names.loc[image_names.img_path.str.contains('(chinese)'), :]
filtered = pd.concat([black, african, asian, chinese]) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Filter and save subgroups: Anger | black_angry_annoyed = black.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :]
black_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/black_angry_annoyed.csv", index=False)
black_angry_annoyed.head()
african_angry_annoyed = african.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :]
african_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/african_angry_annoyed.csv", index=False)
african_angry_annoyed.shape
asian_angry_annoyed = asian.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :]
asian_angry_annoyed
asian_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/asian_angry_annoyed.csv", index=False)
chinese_angry_annoyed = chinese.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :]
chinese_angry_annoyed.head()
chinese_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/chinese_angry_annoyed.csv", index=False) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Surprise | black_awe_astound_amazed = black.loc[image_names.img_path.str.contains('(awe)|(astound)|(amazed)'), :]
black_awe_astound_amazed
black_awe_astound_amazed.to_csv("/home/steffi/dev/independent_study/FairFace/black_awe_astound_amazed.csv", index=False)
african_awe = african.loc[image_names.img_path.str.contains('(awe)'), :]
african_awe
african_awe.to_csv("/home/steffi/dev/independent_study/FairFace/african_awe.csv", index=False)
asian_astound = asian.loc[image_names.img_path.str.contains('(astound)'), :]
asian_astound.to_csv("/home/steffi/dev/independent_study/FairFace/asian_astound.csv", index=False)
asian_astound
chinese_astound = chinese.loc[image_names.img_path.str.contains('(astound)'), :]
chinese_astound.to_csv("/home/steffi/dev/independent_study/FairFace/chinese_astound.csv", index=False)
chinese_astound | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Fear | black_fear = black.loc[image_names.img_path.str.contains('(fear)|(frightened)|(anxious)|(shocked)'), :]
black_fear.shape
african_fear = african.loc[image_names.img_path.str.contains('(fear)|(frightened)|(anxious)|(shocked)'), :]
black_african_fear = pd.concat([african_fear, black_fear])
black_african_fear.shape
black_african_fear.to_csv("/home/steffi/dev/independent_study/FairFace/black_african_fear.csv", index=False) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Disgust | black_disgust = black.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :]
african_digsust = african.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :]
african_digsust.shape
black_african_disgust = pd.concat([black_disgust, african_digsust])
pd.set_option('display.max_colwidth', -1)
black_african_disgust
black_african_disgust.to_csv("/home/steffi/dev/independent_study/FairFace/black_african_disgust.csv", index=False)
disgust_all = image_names.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :]
disgust_all.shape
disgust_all | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
Saving all filtered to csv | filtered_save_path = "/home/steffi/dev/independent_study/FairFace/filtered_expw_image_paths.csv"
filtered.to_csv(filtered_save_path, index=False) | _____no_output_____ | MIT | notebooks/ExpW EDA.ipynb | StefanieStoppel/InferFace |
TMVA_Tutorial_Classification_Tmva_App: TMVA example for classification, with the following objectives: * Apply a BDT with TMVA. **Author:** Lailin XU. This notebook tutorial was automatically generated with ROOTBOOK-izer from the macro found in the ROOT repository on Tuesday, April 27, 2021 at 01:21 AM. | from ROOT import TMVA, TFile, TTree, TCut, TH1F, TCanvas, gROOT, TLegend
from subprocess import call
from os.path import isfile
from array import array
gROOT.SetStyle("ATLAS") | Welcome to JupyROOT 6.22/07
| CC-BY-4.0 | MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb | LailinXu/hepstat-tutorial |