markdown | code | path | repo_name | license |
---|---|---|---|---|
What makes composite plates special is that they are typically not isotropic. This is handled by the 6x6 ABD matrix, which defines the composite's properties axially, in bending, and the coupling between the two (the partitioned relation is sketched after the code below). | # composite properties
A11,A12,A16,A22,A26,A66 = symbols('A11,A12,A16,A22,A26,A66')
B11,B12,B16,B22,B26,B66 = symbols('B11,B12,B16,B22,B26,B66')
D11,D12,D16,D22,D26,D66 = symbols('D11,D12,D16,D22,D26,D66')
## constants of integration when solving differential equation
C1,C2,C3,C4,C5,C6 = symbols('C1,C2,C3,C4,C5,C6')
# plate and composite parameters
th,a,b = symbols('th,a,b')
# displacement functions
u0 = Function('u0')(x,y)
v0 = Function('v0')(x,y)
w0 = Function('w0')(x,y) | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
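For reference, this is the standard classical lamination theory relation (not computed in the code above): the force resultants $N$ and moment resultants $M$ relate to the midplane strains $\varepsilon^0$ and curvatures $\kappa$ through the partitioned ABD matrix:
$$
\begin{bmatrix} N \\ M \end{bmatrix} =
\begin{bmatrix} A & B \\ B & D \end{bmatrix}
\begin{bmatrix} \varepsilon^0 \\ \kappa \end{bmatrix}
$$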
Let's compute our 6 displacement conditions, which is where our PDEs show up | Nxf = A11*diff(u0,x) + A12*diff(v0,y) + A16*(diff(u0,y) + diff(v0,x)) - B11*diff(w0,x,2) - B12*diff(w0,y,2) - 2*B16*diff(w0,x,y)
Eq(Nx, Nxf)
Nyf = A12*diff(u0,x) + A22*diff(v0,y) + A26*(diff(u0,y) + diff(v0,x)) - B12*diff(w0,x,2) - B22*diff(w0,y,2) - 2*B26*diff(w0,x,y)
Eq(Ny,Nyf)
Nxyf = A16*diff(u0,x) + A26*diff(v0,y) + A66*(diff(u0,y) + diff(v0,x)) - B16*diff(w0,x,2) - B26*diff(w0,y,2) - 2*B66*diff(w0,x,y)
Eq(Nxy,Nxyf)
Mxf = B11*diff(u0,x) + B12*diff(v0,y) + B16*(diff(u0,y) + diff(v0,x)) - D11*diff(w0,x,2) - D12*diff(w0,y,2) - 2*D16*diff(w0,x,y)
Eq(Mx,Mxf)
Myf = B12*diff(u0,x) + B22*diff(v0,y) + B26*(diff(u0,y) + diff(v0,x)) - D12*diff(w0,x,2) - D22*diff(w0,y,2) - 2*D26*diff(w0,x,y)
Eq(My,Myf)
Mxyf = B16*diff(u0,x) + B26*diff(v0,y) + B66*(diff(u0,y) + diff(v0,x)) - D16*diff(w0,x,2) - D26*diff(w0,y,2) - 2*D66*diff(w0,x,y)
Eq(Mxy,Mxyf) | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
Now, combine our 6 displacement conditions with our 3 equilibrium equations to get three governing equations | eq1 = diff(Nxf,x) + diff(Nxyf,y)
eq1
eq2 = diff(Nxyf,x) + diff(Nyf,y)
eq2
eq3 = diff(Mxf,x,2) + 2*diff(Mxyf,x,y) + diff(Myf,y,2) + q
eq3 | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
Yikes, I do not want to solve that (at least right now). If we assume the displacements do not vary in the y direction (i.e., u0, v0, and w0 are functions of x only), then we can simplify things a lot! These simplifications are valid for cross-ply unsymmetric laminated plates, Hyer pg 616. This is applied by setting some of our material properties to zero. $ A16=A26=D16=D26=B16=B26=B12=B66=0 $
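One way to apply these zeroed terms programmatically, rather than retyping the simplified expressions, is SymPy's `subs` (a minimal sketch; `Nxf` is the full expression from the cells above, and the notebook instead re-derives the simplified forms by hand below):

```python
# Illustrative only: substitute zero for the coupling terms in the full expression.
zeroed = {A16: 0, A26: 0, D16: 0, D26: 0, B16: 0, B26: 0, B12: 0, B66: 0}
Nxf_simple = Nxf.subs(zeroed)
```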
Almost like magic, we now have some equations that aren't so scary. | u0 = Function('u0')(x)
v0 = Function('v0')(x)
w0 = Function('w0')(x)
Nxf = A11*diff(u0,x) + A12*diff(v0,y) - B11*diff(w0,x,2)
Eq(Nx, Nxf)
Nyf = A12*diff(u0,x) + A22*diff(v0,y) - B22*diff(w0,y,2)
Eq(Ny,Nyf)
Nxyf = A66*(diff(u0,y) + diff(v0,x))
Eq(Nxy,Nxyf)
Mxf = B11*diff(u0,x) - D11*diff(w0,x,2) - D12*diff(w0,y,2)
Eq(Mx,Mxf)
Myf = B22*diff(v0,y) - D12*diff(w0,x,2) - D22*diff(w0,y,2)
Eq(My,Myf)
Mxyf = 0
Eq(Mxy,Mxyf) | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
Now we are getting somewhere. Finally we can solve the differential equations | dsolve(diff(Nx(x)))
dsolve(diff(Mx(x),x,2)+q) | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
Now solve for u0 and w0 with some pixie dust | eq4 = (Nxf-C1)
eq4
eq5 = Mxf -( -q*x**2 + C2*x + C3 )
eq5
eq6 = Eq(solve(eq4,diff(u0,x))[0] , solve(eq5, diff(u0,x))[0])
eq6
w0f = dsolve(eq6, w0)
w0f
eq7 = Eq(solve(eq6, diff(w0,x,2))[0] , solve(eq4,diff(w0,x,2))[0])
eq7
u0f = dsolve(eq7)
u0f | tutorials/Composite_Plate_Mechanics_with_Python_Theory.ipynb | nagordon/mechpy | mit |
Step 0 - hyperparams
vocab_size (all the potential words you could have; the classification targets in the translation case)
and max sequence length are the SAME thing here
decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but in our case there does not really seem to be such a relationship; we can experiment and find out later, it is not a priority right now | num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 64
with_EOS = False
total_size = 57994
train_size = 46400
test_size = 11584 | 04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Generate data once | data_folder = '../../../../Dropbox/data'
ph_data_path = '../data/price_history'
npz_full = ph_data_path + '/price_history_dp_60to30_57994.npz'
npz_train = ph_data_path + '/price_history_dp_60to30_57994_46400_train.npz'
npz_test = ph_data_path + '/price_history_dp_60to30_57994_11584_test.npz' | 04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Step 1 - collect data | # dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
# dp.inputs.shape, dp.targets.shape
# aa, bb = dp.next()
# aa.shape, bb.shape | 04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Step 2 - Build model | model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
# graph = model.getGraph(batch_size=batch_size,
# num_units=num_units,
# input_len=input_len,
# target_len=target_len)
#show_graph(graph) | 04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Step 3 training the network | best_params = [500,
tf.nn.tanh,
0.0001,
0.62488034788862112,
0.001]
num_units, activation, lamda2, keep_prob_input, learning_rate = best_params
batch_size
def experiment():
return model.run(npz_path=npz_train,
npz_test = npz_test,
epochs=100,
batch_size = batch_size,
num_units = num_units,
input_len=input_len,
target_len=target_len,
learning_rate = learning_rate,
preds_gather_enabled=True,
batch_norm_enabled = True,
activation = activation,
decoder_first_input = PriceHistorySeq2SeqDynDecIns.DECODER_FIRST_INPUT.ZEROS,
keep_prob_input = keep_prob_input,
lamda2 = lamda2,
)
#%%time
dyn_stats, preds_dict, targets = get_or_run_nn(experiment, filename='024_seq2seq_60to30_002',
nn_runs_folder= data_folder + '/nn_runs') | 04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
One epoch takes approximately 268 secs
If we want to let it run for ~8 hours = 8 * 3600 / 268 ~= 107 epochs
So let it run for 100 epochs and see how it behaves | dyn_stats.plotStats()
plt.show()
data_len = len(targets)
mses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
mses[ii] = mean_squared_error(pred, target)
np.mean(mses)
huber_losses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
huber_losses[ii] = np.mean(huber_loss(pred, target))
np.mean(huber_losses)
targets_arr = np.array(targets.values())
targets_arr.shape
preds_arr = np.array(preds_dict.values())
preds_arr.shape
np.mean(huber_loss(y_true=targets_arr, y_pred=preds_arr))
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show() | 04_time_series_prediction/24_price_history_seq2seq-full_dataset_testing.ipynb | pligor/predicting-future-product-prices | agpl-3.0 |
Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews. | products = graphlab.SFrame('amazon_baby_subset.gl/') | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment. | products['sentiment'] | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews. | products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1]) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of the 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file: | import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows: | def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted. | for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word)) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews. | products['perfect'] | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
Quiz Question. How many reviews contain the word perfect?
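A minimal sketch following the hint (assumes the `products` SFrame built above; GraphLab SArrays support element-wise `apply` and `sum`):

```python
# 1 if the review mentions 'perfect' at least once, else 0.
products['contains_perfect'] = products['perfect'].apply(lambda count: 1 if count >= 1 else 0)
print products['contains_perfect'].sum()
```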
Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import. | import numpy as np | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term. | def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Let us convert the data into NumPy arrays. | # Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment') | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-3-assignment-numpy-arrays.npz')
feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment'] | feature_matrix.shape | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz Question: How many features are there in the feature_matrix?
Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
Now, let us see what the sentiment column looks like: | sentiment | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function: | '''
produces a probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = 1. / (1. + np.exp(-score))
# return predictions
return predictions | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match: | dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block: | def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset. | def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match. | dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent: | from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
derivative = feature_derivative(errors, feature_matrix[:, j])
# add the step size times the derivative to the current coefficient
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Now, let us run the logistic regression solver. | coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease?
Predicting sentiments
Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
Step 1 can be implemented as follows: | # Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
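A minimal sketch for Step 2 (the name `class_predictions` is introduced here for illustration; any equivalent thresholding works):

```python
# +1 where the score is positive, -1 otherwise, per the decision rule above.
class_predictions = np.where(scores > 0, +1, -1)
```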
Quiz question: How many reviews were predicted to have positive sentiment?
Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{\# correctly classified data points}}{\mbox{\# total data points}}
$$
Complete the following code block to compute the accuracy of the model. | num_mistakes = ... # YOUR CODE HERE
accuracy = ... # YOUR CODE HERE
print "-----------------------------------------------------"
print '# Reviews correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
Which words contribute most to positive & negative sentiments?
Recall that in Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order. | coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True) | machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb | tuanavu/coursera-university-of-washington | mit |
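To inspect the extremes of the sorted list (a quick sketch; the slicing assumes the list built in the cell above):

```python
print word_coefficient_tuples[:10]   # words most associated with positive sentiment
print word_coefficient_tuples[-10:]  # words most associated with negative sentiment
```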
Visualizing data | mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(batch_xs[0].reshape(28, 28))
batch_ys[0]
plt.imshow(batch_xs[10].reshape(28, 28))
batch_ys[10]
plt.imshow(batch_xs[60].reshape(28, 28))
batch_ys[60] | MNIST_for_beginners_noNN_noCONV_0.12.0-rc1.ipynb | gtesei/DeepExperiments | apache-2.0 |
The current state of the art in classifying these digits can be found here: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354
Model | def main(_):
# Import data
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#sess = tf.InteractiveSession()
with tf.Session() as sess:
tf.global_variables_initializer().run()
#init = tf.initialize_all_variables()
#sess.run(init)
# Train
for _ in range(iterations):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(">>> Test Accuracy::"+str(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
main(_) | MNIST_for_beginners_noNN_noCONV_0.12.0-rc1.ipynb | gtesei/DeepExperiments | apache-2.0 |
TensorBoard: Visualizing Learning | from tensorflow.contrib.tensorboard.plugins import projector
def variable_summaries(var):
"""Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
with tf.name_scope('summaries'):
mean = tf.reduce_mean(var)
tf.summary.scalar(var.name+'_mean', mean)
#tf.scalar_summary(var.name+'_mean', mean)
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.summary.scalar(var.name+'_stddev', stddev)
#tf.scalar_summary(var.name+'_stddev', stddev)
tf.summary.scalar(var.name+'_max', tf.reduce_max(var))
#tf.scalar_summary(var.name+'_max', tf.reduce_max(var))
tf.summary.scalar(var.name+'_min', tf.reduce_min(var))
#tf.histogram_summary( var.name, var)
tf.summary.histogram( var.name, var)
def main2(_):
# Import data
mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
#config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig()
# input images
with tf.name_scope('input'):
# None -> batch size can be any size, 784 -> flattened mnist image
x = tf.placeholder(tf.float32, shape=[None, 784], name="x-input")
xs = tf.Variable(tf.zeros([batch_size, 784]) , name="x-input-slice1")
xs = tf.slice(x, [0, 0], [batch_size, 784] , name="x-input-slice2")
variable_summaries(xs)
#emb1 = config.embeddings.add()
#emb1.tensor_name = xs.name
#emb1.metadata_path = os.path.join(FLAGS.data_dir + '/_logs', 'metadata.tsv')
# target 10 output classes
y_ = tf.placeholder(tf.float32, shape=[None, 10], name="y-input")
#variable_summaries(y_)
with tf.name_scope('input_image'):
image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
tf.summary.image('image', image_shaped_input, 10)
with tf.name_scope('W'):
W = tf.Variable(tf.zeros([784, 10]))
variable_summaries(W)
with tf.name_scope('b'):
b = tf.Variable(tf.zeros([10]))
variable_summaries(b)
with tf.name_scope('y'):
y = tf.matmul(x, W) + b
variable_summaries(y)
with tf.name_scope('cross_entropy'):
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_))
tf.summary.scalar('cross_entropy', cross_entropy)
#tf.scalar_summary('cross_entropy', cross_entropy)
with tf.name_scope('train_step'):
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Test trained model
with tf.name_scope('accuracy-scope'):
with tf.name_scope('correct_prediction'):
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
with tf.name_scope('accuracy'):
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('accuracy-val', accuracy)
#tf.scalar_summary('accuracy', accuracy)
#######
#init = tf.initialize_all_variables()
# Create a saver for writing training checkpoints.
saver = tf.train.Saver()
#sess = tf.InteractiveSession()
with tf.Session() as sess:
tf.global_variables_initializer().run()
# Merge all the summaries and write them out to ./logs (by default)
#merged = tf.merge_all_summaries()
merged = tf.summary.merge_all()
#writer = tf.train.SummaryWriter(FLAGS.data_dir + '/_logs',sess.graph)
writer = tf.summary.FileWriter(FLAGS.data_dir + '/_logs',sess.graph)
#projector.visualize_embeddings(writer, config)
#sess.run(init)
# Train
for i in range(iterations):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
if i % 100 == 0 or i == (iterations-1):
summary = sess.run(merged, feed_dict={x: batch_xs, y_: batch_ys})
writer.add_summary(summary, i)
summary, acc = sess.run([merged, accuracy], feed_dict={x: mnist.test.images,y_: mnist.test.labels})
writer.add_summary(summary, i)
writer.flush()
checkpoint_file = os.path.join(FLAGS.data_dir + '/_logs', 'checkpoint')
saver.save(sess, checkpoint_file, global_step=i)
print('>>> Test Accuracy [%s/%s]: %s' % (i,iterations,acc))
main2(_) | MNIST_for_beginners_noNN_noCONV_0.12.0-rc1.ipynb | gtesei/DeepExperiments | apache-2.0 |
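To inspect the summaries and checkpoints written above, TensorBoard can be pointed at the same log directory from a shell (standard TensorBoard usage; substitute your actual `FLAGS.data_dir`): `tensorboard --logdir=<FLAGS.data_dir>/_logs`, then open the printed URL in a browser.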
Load Data
For this notebook, we'll be using a sample set of timeseries data of BART ridership on the 5 most commonly traveled stations in San Francisco. This subsample of data was selected and processed from Pyro's examples http://docs.pyro.ai/en/stable/_modules/pyro/contrib/examples/bart.html | import os
import urllib.request
smoke_test = ('CI' in os.environ)
if not smoke_test and not os.path.isfile('../BART_sample.pt'):
print('Downloading \'BART\' sample dataset...')
urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1A6LqCHPA5lHa5S3lMH8mLMNEgeku8lRG', '../BART_sample.pt')
torch.manual_seed(1)
if smoke_test:
train_x, train_y, test_x, test_y = torch.randn(2, 100, 1), torch.randn(2, 100), torch.randn(2, 100, 1), torch.randn(2, 100)
else:
train_x, train_y, test_x, test_y = torch.load('../BART_sample.pt', map_location='cpu')
if torch.cuda.is_available():
train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
print(train_x.shape, train_y.shape, test_x.shape, test_y.shape)
train_x_min = train_x.min()
train_x_max = train_x.max()
train_x = train_x - train_x_min
test_x = test_x - train_x_min
train_y_mean = train_y.mean(dim=-1, keepdim=True)
train_y_std = train_y.std(dim=-1, keepdim=True)
train_y = (train_y - train_y_mean) / train_y_std
test_y = (test_y - train_y_mean) / train_y_std | examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb | jrg365/gpytorch | mit |
Define a Model
The only thing of note here is the use of the kernel. For this example, we'll learn a kernel with 1500 deltas in the mixture (matching `num_deltas` below), and initialize by sampling directly from the empirical spectrum of the data. | class SpectralDeltaGP(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, num_deltas, noise_init=None):
likelihood = gpytorch.likelihoods.GaussianLikelihood(noise_constraint=gpytorch.constraints.GreaterThan(1e-11))
likelihood.register_prior("noise_prior", gpytorch.priors.HorseshoePrior(0.1), "noise")
likelihood.noise = 1e-2
super(SpectralDeltaGP, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
base_covar_module = gpytorch.kernels.SpectralDeltaKernel(
num_dims=train_x.size(-1),
num_deltas=num_deltas,
)
base_covar_module.initialize_from_data(train_x[0], train_y[0])
self.covar_module = gpytorch.kernels.ScaleKernel(base_covar_module)
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
model = SpectralDeltaGP(train_x, train_y, num_deltas=1500)
if torch.cuda.is_available():
model = model.cuda() | examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb | jrg365/gpytorch | mit |
Train | model.train()
mll = gpytorch.mlls.ExactMarginalLogLikelihood(model.likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[40])
num_iters = 1000 if not smoke_test else 4
with gpytorch.settings.max_cholesky_size(0): # Ensure we don't try to use Cholesky
for i in range(num_iters):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
if train_x.dim() == 3:
loss = loss.mean()
loss.backward()
optimizer.step()
if i % 10 == 0:
print(f'Iteration {i} - loss = {loss:.2f} - noise = {model.likelihood.noise.item():e}')
scheduler.step()
# Get into evaluation (predictive posterior) mode
model.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.max_cholesky_size(0), gpytorch.settings.fast_pred_var():
test_x_f = torch.cat([train_x, test_x], dim=-2)
observed_pred = model.likelihood(model(test_x_f))
varz = observed_pred.variance | examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb | jrg365/gpytorch | mit |
Plot Results | from matplotlib import pyplot as plt
%matplotlib inline
_task = 3
plt.subplots(figsize=(15, 15), sharex=True, sharey=True)
for _task in range(2):
ax = plt.subplot(3, 1, _task + 1)
with torch.no_grad():
# Initialize plot
# f, ax = plt.subplots(1, 1, figsize=(16, 12))
# Get upper and lower confidence bounds
lower = observed_pred.mean - varz.sqrt() * 1.98
upper = observed_pred.mean + varz.sqrt() * 1.98
lower = lower[_task] # + weight * test_x_f.squeeze()
upper = upper[_task] # + weight * test_x_f.squeeze()
# Plot training data as black stars
ax.plot(train_x[_task].detach().cpu().numpy(), train_y[_task].detach().cpu().numpy(), 'k*')
ax.plot(test_x[_task].detach().cpu().numpy(), test_y[_task].detach().cpu().numpy(), 'r*')
# Plot predictive means as blue line
ax.plot(test_x_f[_task].detach().cpu().numpy(), (observed_pred.mean[_task]).detach().cpu().numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x_f[_task].detach().cpu().squeeze().numpy(), lower.detach().cpu().numpy(), upper.detach().cpu().numpy(), alpha=0.5)
# ax.set_ylim([-3, 3])
ax.legend(['Training Data', 'Test Data', 'Mean', '95% Confidence'], fontsize=16)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.tick_params(axis='both', which='minor', labelsize=16)
ax.set_ylabel('Passenger Volume (Normalized)', fontsize=16)
ax.set_xlabel('Hours (Zoomed to Test)', fontsize=16)
ax.set_xticks([])
plt.xlim([1250, 1680])
plt.tight_layout() | examples/01_Exact_GPs/Spectral_Delta_GP_Regression.ipynb | jrg365/gpytorch | mit |
Generate Poisson process data and exponential inter-event times
For each interval choose $n$ events from a Poisson. Then draw from a uniform the location in the interval for each of the events. | np.random.seed(8675309)
nT = 400
cts = np.random.poisson(20, size=nT)
edata = []
for i in range(nT):
edata.extend(i + np.sort(np.random.uniform(low=0, high=1, size=cts[i])))
edata = np.asarray(edata)
edata.shape
plt.plot(edata, np.arange(len(edata)))
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title("Modeled underlying data")
with mc.Model() as model:
lam = mc.Uniform('lambda', 0, 1000) # this is the exponential parameter
meas = mc.Exponential('meas', lam, observed=np.diff(edata))
lam2 = mc.Uniform('lam2', 0, 1000)
poi = mc.Poisson('Poisson', lam2, observed=cts)
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, lines={'lambda':20, 'lam2':20})
mc.summary(trace)
fig, ax = plt.subplots(ncols=1, nrows=2, sharex=True)
sns.distplot(trace['lambda'], ax=ax[0])
sns.distplot(trace['lam2'], ax=ax[1])
plt.xlabel('Lambda')
ax[0].set_ylabel('Exp')
ax[1].set_ylabel('Poisson')
ax[0].axvline(20, c='r', lw=1)
ax[1].axvline(20, c='r', lw=1)
plt.tight_layout() | Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
This is consistent with a Poisson of parameter 20! But there seems to be an underprediction going on; wonder why?
Go through Posterior Predictive Checks (http://docs.pymc.io/notebooks/posterior_predictive.html) and see if we are reproducing the mean and variance. | ppc = mc.sample_ppc(trace, samples=500, model=model, size=100)
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.mean())
ax.set(title='Posterior predictive of the mean (Poisson)', xlabel='mean(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.var() for n in ppc['Poisson']], kde=False, ax=ax)
ax.axvline(cts.var())
ax.set(title='Posterior predictive of the variance (Poisson)', xlabel='var(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['meas']], kde=False, ax=ax)
ax.axvline(np.diff(edata).mean())
ax.set(title='Posterior predictive of the mean (Exponential)', xlabel='mean(x)', ylabel='Frequency');
ax = plt.subplot()
sns.distplot([n.var() for n in ppc['meas']], kde=False, ax=ax)
ax.axvline(np.diff(edata).var())
ax.set(title='Posterior predictive of the variance (Exponential)', xlabel='var(x)', ylabel='Frequency'); | Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
We are reproducing the data well.
Given the data we generated, which will be treated as truth, what would we measure with various dead times, and does the correction match what we think it should?
Correction should look like $n_1 = \frac{R_1}{1-R_1 \tau}$ where $n_1$ is the real rate, $R_1$ is the observed rate, and $\tau$ is the dead time.
Take edata from above and step through from beginning to end, only keeping points that are at least a dead time away from the previous point. | deadtime1 = 0.005 # small dead time
deadtime2 = 0.1 # large dead time
edata_td1 = []
edata_td1.append(edata[0])
edata_td2 = []
edata_td2.append(edata[0])
for ii, v in enumerate(edata[1:], 1): # stop one shy to not run over the end, start enumerate at 1
if v - edata_td1[-1] >= deadtime1:
edata_td1.append(v)
if v - edata_td2[-1] >= deadtime2:
edata_td2.append(v)
edata_td1 = np.asarray(edata_td1)
edata_td2 = np.asarray(edata_td2)
plt.figure(figsize=(8,6))
plt.plot(edata, np.arange(len(edata)), label='Real data')
plt.plot(edata_td1, np.arange(len(edata_td1)), label='Small dead time')
plt.plot(edata_td2, np.arange(len(edata_td2)), label='Large dead time')
plt.xlabel('Time of event')
plt.ylabel('Event number')
plt.title("Modeled underlying data")
plt.legend(bbox_to_anchor=(1, 1)) | Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
And plot the rates per unit time | plt.figure(figsize=(8,6))
h1, b1 = np.histogram(edata, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b1), h1, label='Real data', c='k')
h2, b2 = np.histogram(edata_td1, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b2), h2, label='Small dead time', c='r')
h3, b3 = np.histogram(edata_td2, np.arange(1000))
plt.plot(tb.bin_edges_to_center(b3), h3, label='Large dead time')
plt.legend(bbox_to_anchor=(1, 1))
plt.xlim((0,400))
plt.ylabel('Rate')
plt.xlabel('Time') | Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
Can we use $n_1 = \frac{R_1}{1-R_1 \tau}$ to derive the relation and spread in the distribution of R?
Algebra rearranges this to: $R_1=\frac{n_1}{1+n_1\tau}$
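A quick numeric sanity check of this rearrangement (values chosen for illustration, with the true rate of 20 used to generate the data above):

```python
n1, tau = 20.0, 0.005
R1 = n1 / (1 + n1 * tau)     # observed rate implied by the true rate
print(R1 / (1 - R1 * tau))   # correction recovers the true rate, 20.0
```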
Use the small dead time | # assume R1 is Poisson
with mc.Model() as model:
tau = deadtime1
obsRate = mc.Uniform('obsRate', 0, 1000, shape=1)
obsData = mc.Poisson('obsData', obsRate, observed=h2[:400], shape=1)
realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, varnames=('obsRate', ))
mc.summary(trace, varnames=('obsRate', ))
sns.distplot(trace['realRate'].mean(axis=0), bins=10)
plt.xlabel('realRate')
plt.ylabel('Density')
dt1_bounds = np.percentile(trace['realRate'], (2.5, 50, 97.5))
print('The estimate of the real rate given that we know the dead time is:', dt1_bounds,
(dt1_bounds[2]-dt1_bounds[0])/dt1_bounds[1])
dat_bounds = np.percentile(h1[:400], (2.5, 50, 97.5))
print("This compares with if we measured without dead time as:", dat_bounds,
(dat_bounds[2]-dat_bounds[0])/dat_bounds[1])
| Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
Use the large dead time | # assume R1 is Poisson
with mc.Model() as model:
tau = deadtime2
obsRate = mc.Uniform('obsRate', 0, 1000)
obsData = mc.Poisson('obsData', obsRate, observed=h3[:400])
realRate = mc.Deterministic('realRate', obsData/(1-obsData*tau))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=8)
mc.traceplot(trace, combined=True, varnames=('obsRate', ))
mc.summary(trace, varnames=('obsRate', ))
sns.distplot(trace['realRate'].mean(axis=0))
plt.xlabel('realRate')
plt.ylabel('Density')
dt2_bounds = np.percentile(trace['realRate'], (2.5, 50, 97.5))
print('The estimate of the real rate given that we know the dead time is:', dt2_bounds,
(dt2_bounds[2]-dt2_bounds[0])/dt2_bounds[1])
dat_bounds = np.percentile(h1[:400], (2.5, 50, 97.5))
print("This compares with if we measured without dead time as:", dat_bounds,
(dat_bounds[2]-dat_bounds[0])/dat_bounds[1])
| Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
But this is totally broken!!!
Output data files for each | real = pd.Series(edata)
td1 = pd.Series(edata_td1)
td2 = pd.Series(edata_td2)
real.to_csv('no_deadtime_times.csv')
td1.to_csv('small_deadtime_times.csv')
td2.to_csv('large_deadtime_times.csv')
real = pd.Series(h1[h1>0])
td1 = pd.Series(h2[h2>0])
td2 = pd.Series(h3[h3>0])
real.to_csv('no_deadtime_rates.csv')
td1.to_csv('small_deadtime_rates.csv')
td2.to_csv('large_deadtime_rates.csv') | Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
Work on the random thoughts | with mc.Model() as model:
BoundedExp = mc.Bound(mc.Exponential, lower=deadtime2, upper=None)
# we observe the following time between counts
lam = mc.Uniform('lam', 0, 1000)
time_between = BoundedExp('tb_ob', lam, observed=np.diff(edata_td2))
start = mc.find_MAP()
trace = mc.sample(10000, njobs=8, start=start)
| Counting/Poisson and exponential.ipynb | balarsen/pymc_learning | bsd-3-clause |
Synthetic Features and Outliers
Learning Objectives:
* Create a synthetic feature that is the ratio of two other features
* Use this new feature as an input to a linear regression model
* Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data
Let's revisit our model from the previous First Steps with TensorFlow exercise.
First, we'll import the California housing data into a pandas DataFrame:
Setup | from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.metrics as metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Next, we'll set up our input function, and define the function for model training: | def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(learning_rate, steps, batch_size, input_feature):
"""Trains a linear regression model.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
Returns:
A Pandas `DataFrame` containing targets and the corresponding predictions done
after training the model.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = california_housing_dataframe[[my_feature]].astype('float32')
my_label = "median_house_value"
targets = california_housing_dataframe[my_label].astype('float32')
# Create input functions.
training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = california_housing_dataframe.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, root_mean_squared_error))
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Create a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
return calibration_data | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Task 1: Try a Synthetic Feature
Both the total_rooms and population features count totals for a given city block.
But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of total_rooms and population.
In the cell below, create a feature called rooms_per_person, and use that as the input_feature to train_model().
What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lower
the final RMSE should be.)
NOTE: You may find it helpful to add a few code cells below so you can try out several different learning rates and compare the results. To add a new code cell, hover your cursor directly below the center of this cell, and click CODE. | #
# YOUR CODE HERE
#
california_housing_dataframe["rooms_per_person"] =
calibration_data = train_model(
learning_rate=0.00005,
steps=500,
batch_size=5,
input_feature="rooms_per_person"
) | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
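Following the NOTE above, a sketch for comparing several learning rates in one cell (assumes `rooms_per_person` has been created as Task 1 asks; the rates are illustrative):

```python
for lr in [0.005, 0.05, 0.5]:
    print("learning_rate = %s" % lr)
    _ = train_model(learning_rate=lr, steps=500, batch_size=5,
                    input_feature="rooms_per_person")
```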
Solution
Click below for a solution. | california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person") | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Task 2: Identify Outliers
We can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line.
Use Pyplot's scatter() to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1.
Do you see any oddities? Trace these back to the source data by looking at the distribution of values in rooms_per_person. | # YOUR CODE HERE | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Solution
Click below for the solution. | plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"]) | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We notice that they are relatively few in number.
If we plot a histogram of rooms_per_person, we find that we have a few outliers in our input data: | plt.subplot(1, 2, 2)
_ = california_housing_dataframe["rooms_per_person"].hist() | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Task 3: Clip Outliers
See if you can further improve the model fit by setting the outlier values of rooms_per_person to some reasonable minimum or maximum.
For reference, here's a quick example of how to apply a function to a Pandas Series:
clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))
The above clipped_feature will have no values less than 0. | # YOUR CODE HERE | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Solution
Click below for the solution.
The histogram we created in Task 2 shows that the majority of values are less than 5. Let's clip rooms_per_person to 5, and plot a histogram to double-check the results. | california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5))
_ = california_housing_dataframe["rooms_per_person"].hist() | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
To verify that clipping worked, let's train again and print the calibration data once more: | calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person")
_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"]) | ml_notebooks/synthetic_features_and_outliers.ipynb | bt3gl/Machine-Learning-Resources | gpl-2.0 |
Feedforward Neural Network | # import feedforward neural net
from mlnn import neural_net | .ipynb_checkpoints/mlnn-checkpoint.ipynb | ishank26/nn_from_scratch | gpl-3.0 |
Let's build a 4-layer neural network. Our network has one input layer, two hidden layers, and one output layer. Our model can be represented as a directed acyclic graph wherein each node in a layer is connected to all nodes in its successive layer. The neural net is shown below-
Each node in the hidden layers uses a nonlinear activation function $f(x)$, which computes outputs from its inputs and transfers them to the successive layer. Here we've used $f(x)= tanh(x)$ as our nonlinear activation. Its derivative is given by $f'(x)= 1-tanh(x)^2$.
Our network graph can be represented as:
| Layer No. | Notation | Value | Variable |
|----------:|-----------:|---------------------------------------------:|----------:|
| 1 | X | $X$| X|
| 2 | W1(~)+b1 | $W1X+b1$| pre_act1|
| 2 | tanh | $tanh(W1X+b1)$| act1|
| 3 | W2(~)+b2 | $W2(tanh(W1X+b1))+b2$| pre_act2|
| 3 | tanh | $tanh(W2(tanh(W1X+b1))+b2)$| act2|
| 4 | W3(~)+b3 | $W3(tanh(W2(tanh(W1X+b1))+b2))+b3$ | pre_act3|
| 4 | softmax | $softmax(W3(tanh(W2(tanh(W1X+b1))+b2))+b3)$| act3|
Backpropagation
Now we formulate the backpropagation algorithm or backprop for training the network. For derivation of the backprop, please see Dr. Hugo Larochelle's excellent course on neural networks.
$ \large\frac{\partial L}{\partial \text{Pred}} = \frac{\partial L}{\partial L} * \frac{\partial L}{\partial \text{Pred}} $
$ \large\frac{\partial L}{\partial \text{act3}} = \frac{\partial L}{\partial \text{Pred}} * \frac{\partial \text{Pred}}{\partial \text{act3}} $
$ \large\frac{\partial L}{\partial \text{pre\_act3}} = \frac{\partial L}{\partial \text{act3}} * \frac{\partial \text{act3}}{\partial \text{pre\_act3}} = \delta_4 $
$ \large\frac{\partial L}{\partial \text{act2}} = \frac{\partial L}{\partial \text{pre\_act3}} * \frac{\partial \text{pre\_act3}}{\partial \text{act2}} $
$ \large\frac{\partial L}{\partial \text{pre\_act2}} = \frac{\partial L}{\partial \text{act2}} * \frac{\partial \text{act2}}{\partial \text{pre\_act2}} = \delta_3 $
$ \large\frac{\partial L}{\partial \text{act1}} = \frac{\partial L}{\partial \text{pre\_act2}} * \frac{\partial \text{pre\_act2}}{\partial \text{act1}} $
$ \large\frac{\partial L}{\partial \text{pre\_act1}} = \frac{\partial L}{\partial \text{act1}} * \frac{\partial \text{act1}}{\partial \text{pre\_act1}} = \delta_2 $
$ \large\frac{\partial L}{\partial W3} = \delta_4 * \frac{\partial \text{pre\_act3}}{\partial W3} $
$ \large\frac{\partial L}{\partial W2} = \delta_3 * \frac{\partial \text{pre\_act2}}{\partial W2} $
$ \large\frac{\partial L}{\partial W1} = \delta_2 * \frac{\partial \text{pre\_act1}}{\partial W1} $ | # Visualize tanh and its derivative
x = np.linspace(-np.pi, np.pi, 120)
plt.figure(figsize=(8, 3))
plt.subplot(1, 2, 1)
plt.plot(x, np.tanh(x))
plt.title("tanh(x)")
plt.xlim(-3, 3)
plt.subplot(1, 2, 2)
plt.plot(x, 1 - np.square(np.tanh(x)))
plt.xlim(-3, 3)
plt.title("tanh\'(x)")
plt.show() | .ipynb_checkpoints/mlnn-checkpoint.ipynb | ishank26/nn_from_scratch | gpl-3.0 |
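To make the chain-rule steps above concrete, here is a minimal NumPy sketch of one backward pass for this 4-layer network. It assumes a softmax output with cross-entropy loss (so $\delta_4$ reduces to $act3 - y$) and a column-vector convention; these choices are assumptions for illustration, not the mlnn implementation itself.

```python
import numpy as np

def backward_pass(X, y_onehot, W1, b1, W2, b2, W3, b3):
    # Forward pass (mirrors the table above).
    pre_act1 = W1 @ X + b1;    act1 = np.tanh(pre_act1)
    pre_act2 = W2 @ act1 + b2; act2 = np.tanh(pre_act2)
    pre_act3 = W3 @ act2 + b3
    e = np.exp(pre_act3 - pre_act3.max(axis=0))  # numerically stable softmax
    act3 = e / e.sum(axis=0)
    # Backward pass: the deltas defined above.
    d4 = act3 - y_onehot                 # dL/d pre_act3 (softmax + cross-entropy)
    d3 = (W3.T @ d4) * (1 - act2 ** 2)   # dL/d pre_act2 (tanh' = 1 - tanh^2)
    d2 = (W2.T @ d3) * (1 - act1 ** 2)   # dL/d pre_act1
    # Weight gradients, matching the last three equations.
    return d4 @ act2.T, d3 @ act1.T, d2 @ X.T  # dL/dW3, dL/dW2, dL/dW1
```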
It can be seen from the above figure that as we increase our input, the activation starts to saturate, which can in turn kill gradients. This can be mitigated using rectified activation functions. Another problem we encounter when training deep neural networks with backpropagation is vanishing gradients and gradient explosion. Observe that the derivative of our nth activation, $\large\frac{\partial \text{act}_n}{\partial \text{pre\_act}_n}$, is largest near zero. Let's assume the weights are $< 1$; this will usually satisfy $|w_{i} \cdot tanh'(x)| < 1$. The successive product of such values across layers shrinks exponentially, leading to vanishing gradients. This is not a rigorous treatment of the vanishing gradient problem; for more information refer to this article.
Similarly, if the weights are large (say 100 or 40), we get the gradient explosion problem; a small numeric sketch after the next cell illustrates both effects. | # Training the neural network
my_nn = neural_net([2, 4, 2]) # [2,4,2] = [input nodes, hidden nodes, output nodes]
my_nn.train(X, y, 0.001, 0.0001) # weights regularization lambda= 0.001 , epsilon= 0.0001
### visualize predictions
my_nn.visualize_preds(X ,y) | .ipynb_checkpoints/mlnn-checkpoint.ipynb | ishank26/nn_from_scratch | gpl-3.0 |
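As a rough illustration of the vanishing/exploding gradient discussion above, here is a toy calculation; the weight values and depth are arbitrary assumptions chosen to make the effect visible:

```python
import numpy as np

factor = 1 - np.tanh(0.5) ** 2   # tanh'(0.5), a typical mid-range derivative
grad_small, grad_large = 1.0, 1.0
for _ in range(10):              # a 10-layer-deep chain of products
    grad_small *= 0.5 * factor   # |w| < 1: the signal shrinks toward zero
    grad_large *= 100.0 * factor # |w| >> 1: the signal blows up
print(grad_small, grad_large)
```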
Animate Training: | X_, y_ = sklearn.datasets.make_circles(n_samples=400, noise=0.18, factor=0.005, random_state=1)
plt.figure(figsize=(7, 5))
plt.scatter(X_[:, 0], X_[:, 1], s=15, c=y_, cmap=plt.cm.Spectral)
plt.show()
'''
Uncomment the code below to see classification process for above data.
To stop training early reduce no. of iterations.
'''
#new_nn = neural_net([2, 6, 2])
#new_nn.animate_preds(X_, y_, 0.001, 0.0001) # max iterations = 35000 | .ipynb_checkpoints/mlnn-checkpoint.ipynb | ishank26/nn_from_scratch | gpl-3.0 |
We can segment the income data into 50 buckets, and plot it as a histogram: | %matplotlib inline
# %config InlineBackend.figure_format='retina'
# import seaborn as sns
# sns.set_context("paper")
# sns.set_style("white")
# sns.set()
import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show() | handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb | vadim-ivlev/STUDY | mit |
Now compute the median - since we have a nice, even distribution it too should be close to 27,000: | np.median(incomes) | handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb | vadim-ivlev/STUDY | mit |
Now we'll add Donald Trump into the mix. Darn income inequality! | incomes = np.append(incomes, [1000000000]) | handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb | vadim-ivlev/STUDY | mit |
The median won't change much, but the mean does: | np.median(incomes)
np.mean(incomes) | handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb | vadim-ivlev/STUDY | mit |
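A trimmed mean is another outlier-resistant summary: it discards a fraction of the extreme values before averaging. A quick sketch with SciPy (the 1% cut is an arbitrary choice for illustration):

```python
from scipy import stats
stats.trim_mean(incomes, 0.01)  # drop the top and bottom 1%, then average
```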
Mode
Next, let's generate some fake age data for 500 people: | ages = np.random.randint(18, high=90, size=500)
ages
from scipy import stats
stats.mode(ages) | handson-data-science-python/DataScience-Python3/.ipynb_checkpoints/MeanMedianMode-checkpoint.ipynb | vadim-ivlev/STUDY | mit |
Data extraction and clean up
My first data source is the World Bank. We will access World Bank data using 'wbdata', a simple Python interface for finding and requesting information from the World Bank's various databases, either as a dictionary containing full metadata or as a pandas DataFrame. Currently, wbdata wraps most of the World Bank API and also adds some convenience functions for searching and retrieving information.
Documentation is available at http://wbdata.readthedocs.org/
We install it with 'pip install wbdata'
Credits go to:
Sherouse, Oliver (2014). Wbdata. Arlington, VA. Available from http://github.com/OliverSherouse/wbdata.
Let's get to it. | wb.search('gdp.*capita.*const') # we use this function to search for GDP related indicators
wb.search('employment') # we use this function to search for employment related indicators
wb.search('unemployment') # we use this function to search for unemployment related indicators
#I have identified the relevant variables in the three fields
#To download data for multiple indicators, I specify them as a list
#ESP is the ISO code for Spain
#I equalize the start and end dates
wb.download( indicator=['NY.GDP.PCAP.CD','SL.UEM.TOTL.ZS','SL.UEM.1524.ZS',
'SL.UEM.PRIM.ZS', 'SL.UEM.SECO.ZS','SL.UEM.TERT.ZS','SL.UEM.NEET.MA.ZS'],
country=['ESP'], start=1990, end=2015)
#Construct the dataframe
data = wb.download(indicator=['NY.GDP.PCAP.CD','SL.UEM.TOTL.ZS','SL.UEM.1524.ZS',
'SL.UEM.PRIM.ZS', 'SL.UEM.SECO.ZS','SL.UEM.TERT.ZS','SL.UEM.NEET.MA.ZS'],
country=['ESP'], start=1990, end=2015)
esplbr = pd.DataFrame(data)
#Rename the columns for clarity
esplbr.columns = ["GDP/capita(US$ 2016)", "UnemploymentRate", "YouthUnempRate", "UnempW/PrimEd.", "UnempW/SecEd","UnempW/TertEd", "Ni-nis"]
esplbr
#What on earth are Ni-nis? A Spanish neologism for "ni estudia, ni trabaja": percentage of youth "not working, not studying"
#A cultural and socioeconomic phenomenon
# Wbdata renders a complex multi-index, which I convert to old-school columns that are easier to work with
esplbr.reset_index(inplace=True)
esplbr
esplbr.columns
# housekeeping for column names
esplbr.columns = ["Country", "Year", "GDP/capita(US$ 2016)", "UnemploymentRate", "YouthUnempRate", "UnempW/PrimEd.", "UnempW/SecEd","UnempW/TertEd", "Ni-nis"]
esplbr
# we know we are dealing exclusively with Spain, so we drop the redundant 'Country' column
esplbr.drop('Country', axis=1, inplace=True)
esplbr
# what do I have in my hands?
esplbr.dtypes
esplbr.index | UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb | NYUDataBootcamp/Projects | mit |
Plotting the data | # with a clean and orthodox Dataframe, I can start to do some graphics
import matplotlib.pyplot as plt
%matplotlib inline
# we invert the x axis. Never managed to make 'Year' the X axis, lost a lot of hair in the process :(
plt.gca().invert_xaxis() # Came up with this solution
# and add the indicators
plt.plot(esplbr.index, esplbr['UnemploymentRate'])
plt.plot(esplbr.index, esplbr['YouthUnempRate'])
plt.plot(esplbr.index, esplbr['Ni-nis'])
# and modify the plot
plt.title('Labor Market in Spain', fontsize=14, loc='left') # add title
plt.ylabel('Percentage Unemployed') # y axis label
plt.legend(['UnemploymentRate', 'YouthUnempRate','Ni-nis'], fontsize=8, loc=0) | UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb | NYUDataBootcamp/Projects | mit |
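As an aside, the axis inversion above works around wbdata returning years as strings in descending order. One way to get 'Year' onto the x-axis directly is to cast it to integers and sort — a sketch, not part of the original analysis:

```python
# Cast the string years to integers, sort ascending, and plot directly.
esplbr["Year"] = esplbr["Year"].astype(int)
esplbr.sort_values("Year").plot(
    x="Year", y=["UnemploymentRate", "YouthUnempRate", "Ni-nis"],
    title="Labor Market in Spain")
```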
Observations
Spain has recently lived through a depression without precedent, yet unemployment rates above 20% are nothing new: there is a large structural component in addition to the demand-deficient factor.
Youth unemployment is particularly bad, which is the norm elsewhere too, but the spread is accentuated in Spain. Deductively, this hints at labor market duality between bullet-proof contracts and part-time or 'indefinite' contracts.
import matplotlib.pyplot as plt
%matplotlib inline
# we invert the x axis
plt.gca().invert_xaxis()
#we add the variables
plt.plot(esplbr.index, esplbr['UnempW/PrimEd.'])
plt.plot(esplbr.index, esplbr['UnempW/SecEd'])
plt.plot(esplbr.index, esplbr['UnempW/TertEd'])
plt.plot(esplbr.index, esplbr['Ni-nis'])
# we modify the plot
plt.title('Education and Employment Outcomes', fontsize=14, loc='left')
plt.ylabel('Percentage Unemployed')
plt.legend(['UnempW/PrimEd.', 'UnempW/SecEd','UnempW/TertEd', 'Ni-nis'], fontsize=7, loc=0) | UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb | NYUDataBootcamp/Projects | mit |
Observations
Those unemployed with only primary education completed and ni-nis start to rise hand in hand ten years ago, when the crisis hits. This suggests overlap between the two groups.
The elephant in the room is a massive construction bubble that made Spain's variant of the crisis particularly brutal. For decades, a debt-fueled bubble in real estate signaled youngsters to drop the books and pick up the bricks.
The labor market now faces the painful readjustment of the economy's productive model, from "deuda y ladrillo" (debt and brick) to exports, which account for Spain's recent growth
P.S.: if you ever need to investigate (how not to execute) a Keynesian stimulus plan, check out how the government's Plan E added fuel to malinvestments http://www.economist.com/node/13611650
Digging for more
I'm interested in measuring structural unemployment. Ideally, I would build an unemployment model myself based on separation and accesion rates to arrive at the Natural Rate of Unemployment, as we see in one of my three bibles:
http://www.stern.nyu.edu/sites/default/files/assets/documents/The_Global_Economy_Amazon_Digital%20%282%29.pdf
In the interest of time, I sought an indicator that acts as a proxy for structural unemployment. The NAIRU and NAWRU come to mind, but they are not reported by the World Bank.
And so I became acquainted with Quandl's API, proceeded to dig through several economic databases, and landed on the notorious OECD database: I suspect Quandl and I are going to become good friends moving forward.
Load the modules | # Don't forget the the DMV paperwork
import quandl # Quandl package
quandl.ApiConfig.api_key = '3w_GYBRfX3ZxG7my_vhs' # register for a key and unlimited number of requests
# Playing it safe
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np
%matplotlib inline
| UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb | NYUDataBootcamp/Projects | mit |
Data extraction and clean up
We're going to be comparing Spain's NAIRU to that of Denmark. Don't tell Sanders, but Denmark is well known for having one of the most 'flexible' labor markets in Europe. | # We extract the indicators and print the dataframe
NAIRU = quandl.get((['OECD/EO91_INTERNET_ESP_NAIRU_A','OECD/EO91_INTERNET_DNK_NAIRU_A']), #We call for both
start_date = "1990-12-31", end_date = "2013-12-31") # And limit the time horizon
NAIRU
# What do we have here?
type(NAIRU)
NAIRU.columns
# Dataframe housekeeping
NAIRU.columns = ['NAIRU Spain', 'NAIRU Denmark']
NAIRU
# Nice and polished
NAIRU.columns
plt.style.available #Take a look at the menu
# We are ready to plot
import matplotlib.pyplot as plt
%matplotlib inline
#we add the variables
plt.plot(NAIRU.index, NAIRU['NAIRU Spain'])
plt.plot(NAIRU.index, NAIRU['NAIRU Denmark'])
#We modify the plot
plt.title('Measuring Structural Unemployment ESP v DEN', fontsize=15, loc='left') # add title
plt.ylabel('Percentage Unemployed') # y axis label
plt.legend(['NAIRU Spain', 'NAIRU Denmark'], fontsize=8, loc=2) # label each series descriptively
plt.style.use("bmh") | UG_F16/RodriguezBallve-Spain's_Labor_Market.ipynb | NYUDataBootcamp/Projects | mit |
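To put a single number on the structural gap between the two labor markets, we can difference the two series — a quick sketch on the same dataframe:

```python
# Spread between the two NAIRU series, as a rough structural-gap measure.
NAIRU["Spread"] = NAIRU["NAIRU Spain"] - NAIRU["NAIRU Denmark"]
NAIRU["Spread"].plot(title="NAIRU gap: Spain minus Denmark")
```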
VIB + DoSE
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this example, we train a deep variational information bottleneck model (VIB) on the MNIST dataset. We then use density of states estimation to turn our VIB model into an out-of-distribution (OOD) detector. Our current implementation achieves near-SOTA performance on both OOD detection and classification simultaneously, without any exposure to OOD data during training.
References
The VIB paper (Alemi et al., 2016) can be found here
The DoSE paper (Morningstar et al., 2020) can be found here
1 Imports | import functools
import sys
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
# Globally Enable XLA.
# tf.config.optimizer.set_jit(True)
try:
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
2 Load Dataset | [train_dataset, eval_dataset], datasets_info = tfds.load(
name='mnist',
split=['train', 'test'],
with_info=True,
shuffle_files=True)
def _preprocess(sample):
return (tf.cast(sample['image'], tf.float32) * 2 / 255. - 1.,
tf.cast(sample['label'], tf.int32))
train_size = datasets_info.splits['train'].num_examples
batch_size = 32
train_dataset = tfn.util.tune_dataset(
train_dataset,
batch_size=batch_size,
shuffle_size=int(train_size / 7),
preprocess_fn=_preprocess)
eval_dataset = tfn.util.tune_dataset(
eval_dataset,
repeat_count=1,
preprocess_fn=_preprocess)
x = next(iter(eval_dataset.batch(10)))[0]
tfn.util.display_imgs(x) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
3 Define Model | input_shape = datasets_info.features['image'].shape
encoded_size = 16
base_depth = 32
prior = tfd.MultivariateNormalDiag(
loc=tf.zeros(encoded_size),
scale_diag=tf.ones(encoded_size))
Conv = functools.partial(
tfn.Convolution,
init_bias_fn=tf.zeros_initializer(),
init_kernel_fn=tf.initializers.he_uniform()) # Better for leaky_relu.
encoder = tfn.Sequential([
lambda x: 2. * tf.cast(x, tf.float32) - 1., # Center.
Conv(1, 1 * base_depth, 5, strides=1, padding='same'),
tf.nn.leaky_relu,
Conv(1 * base_depth, 1 * base_depth, 5, strides=2, padding='same'),
tf.nn.leaky_relu,
Conv(1 * base_depth, 2 * base_depth, 5, strides=1, padding='same'),
tf.nn.leaky_relu,
Conv(2 * base_depth, 2 * base_depth, 5, strides=2, padding='same'),
tf.nn.elu,
Conv(2 * base_depth, 4 * encoded_size, 7, strides=1, padding='valid'),
tf.nn.leaky_relu,
tfn.util.flatten_rightmost(ndims=3),
tfn.Affine(4*encoded_size, encoded_size + encoded_size * (encoded_size + 1) // 2),
lambda x: tfd.MultivariateNormalTriL(
loc=x[..., :encoded_size],
scale_tril=tfb.FillScaleTriL()(x[..., encoded_size:]))
], name='encoder')
print(encoder.summary())
DeConv = functools.partial(
tfn.ConvolutionTranspose,
init_kernel_fn=tf.initializers.he_uniform()) # Better for leaky_relu.
Affine = functools.partial(
tfn.Affine,
init_kernel_fn=tf.initializers.he_uniform())
decoder = tfn.Sequential([
Affine(encoded_size, 10),
lambda x: tfd.Categorical(logits=x)])
print(decoder.summary()) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
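As a quick smoke test of the plumbing (not part of the original notebook), we can push a few prior samples through the decoder and confirm we get a 10-class categorical back:

```python
# Hypothetical sanity check: latent samples -> class distribution.
z = prior.sample(3)          # shape [3, encoded_size]
print(decoder(z))            # a Categorical over the 10 MNIST classes
print(decoder(z).sample())   # one sampled label per latent draw
```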
4 Loss / Eval | def compute_loss(x, y, beta=1.):
q = encoder(x)
z = q.sample()
p = decoder(z)
kl = tf.reduce_mean(q.log_prob(z) - prior.log_prob(z), axis=-1)
# Note: we could use exact KL divergence, eg:
# kl = tf.reduce_mean(tfd.kl_divergence(q, prior))
# however we generally find that using the Monte Carlo approximation has
# lower variance.
nll = -tf.reduce_mean(p.log_prob(y), axis=-1)
loss = nll + beta * kl
return loss, (nll, kl), (q, z, p)
train_iter = iter(train_dataset)
def loss():
x, y = next(train_iter)
loss, (nll, kl), _ = compute_loss(x, y, beta=0.075)
return loss, (nll, kl)
opt = tf.optimizers.Adam(learning_rate=1e-3, decay=0.00005)
fit = tfn.util.make_fit_op(
loss,
opt,
decoder.trainable_variables + encoder.trainable_variables,
grad_summary_fn=lambda gs: tf.nest.map_structure(tf.norm, gs))
eval_iter = iter(eval_dataset.batch(5000).repeat())
@tfn.util.tfcompile
def eval():
x, y = next(eval_iter)
loss, (nll, kl), _ = compute_loss(x, y, beta=0.05)
return loss, (nll, kl) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
5 Train | DEBUG_MODE = False
tf.config.experimental_run_functions_eagerly(DEBUG_MODE)
num_train_epochs = 25. # @param { isTemplate: true}
num_evals = 200 # @param { isTemplate: true}
dur_sec = dur_num = 0
num_train_steps = int(num_train_epochs * train_size // batch_size)
for i in range(num_train_steps):
start = time.time()
trn_loss, (trn_nll, trn_kl), g = fit()
stop = time.time()
dur_sec += stop - start
dur_num += 1
if i % int(num_train_steps / num_evals) == 0 or i == num_train_steps - 1:
tst_loss, (tst_nll, tst_kl) = eval()
f, x = zip(*[
('it:{:5}', opt.iterations),
('ms/it:{:6.4f}', dur_sec / max(1., dur_num) * 1000.),
('trn_loss:{:6.4f}', trn_loss),
('tst_loss:{:6.4f}', tst_loss),
('tst_nll:{:6.4f}', tst_nll),
('tst_kl:{:6.4f}', tst_kl),
('sum_norm_grad:{:6.4f}', sum(g)),
])
print(' '.join(f).format(*[getattr(x_, 'numpy', lambda: x_)()
for x_ in x]))
sys.stdout.flush()
dur_sec = dur_num = 0
# if i % 1000 == 0 or i == maxiter - 1:
# encoder.save('/tmp/encoder.npz')
# decoder.save('/tmp/decoder.npz') | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
6 Evaluate Classification Accuracy | def evaluate_accuracy(dataset, encoder, decoder):
"""Evaluate the accuracy of your model on a dataset.
"""
this_it = iter(dataset)
num_correct = 0
num_total = 0
attempts = 0
for xin, xout in this_it:
e = encoder(xin)
z = e.sample(10000) # 10K samples should have low variance.
d = decoder(z)
yhat = d.sample()
confidence = tf.reduce_mean(d.probs_parameter(), axis=0)
most_likely = tf.cast(tf.math.argmax(confidence, axis=-1), tf.int32)
num_correct += np.sum(most_likely == xout, axis=0)
num_total += xout.shape[0]
attempts +=1
return num_correct, num_total
nc, nt = evaluate_accuracy(eval_dataset.batch(100), encoder, decoder)
print("Accuracy: %.4f"%(nc/nt)) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
The accuracy of one training run with this particular model and training setup was 99.15%, which is within a half of a percent of the state of the art, and comparable to the mnist accuracy reported in Alemi et al. (2016).
OOD detection using DoSE
From the previous section, we have trained a variational classifier. However, this classifier was trained assuming that all of the inputs are from the distribution which generated the training set. In general, we may not always receive images drawn from this distribution. In these situations, our model prediction is unreliable. We want to be able to identify when this may be the case to avoid serving these flawed predictions.
In this section, we turn the VIB classifier into an OOD detector using DoSE.
1 Get statistics | def get_statistics(encoder, decoder, prior):
"""Setup a function to evaluate statistics given model components.
Args:
encoder: Callable neural network which takes in an image and
returns a tfp.distributions.Distribution object.
decoder: Callable neural network which takes in a vector and
returns a tfp.distributions.Distribution object.
prior: A tfp.distributions.Distribution object which operates
on the same spaces as the encoder.
Returns:
T: A function, which takes in a tensor containing an image (or
batch of images) and evaluates statistics on the model.
Optionally it also returns the prediction, under the assumption
that the DoSE model will only dress an actual classifier.
"""
def T(x, return_pred=False):
"""Evaluate statistics on an input image or batch of images.
Given an input tensor `x` containing either an image or a batch of
images, this function evaluates 4 statistics on a VIB model; the
kl-divergence between the posterior and prior, the expected entropy
of the decoder computed using samples from the posterior, the
posterior entopy, and the cross-entropy between the posterior and
the prior. We also allow for the prediction to be optionally
returned.
Args:
x: rank 4 tensor containing a batch of images
return_pred: Bool indicating whether to return the model
prediction.
Returns:
tf.tensor containing the 4 statistics evaluated on the input.
pred (optional): The prediction of the model.
"""
pzgx = encoder(x)
z = pzgx.sample(100, seed=42) # Seed is fixed for determinism.
pxgz = decoder(z)
kl = pzgx.kl_divergence(prior)[tf.newaxis,...]
dent = tf.reduce_mean(pxgz.entropy(), axis=0)[tf.newaxis,...]
eent = pzgx.entropy()[tf.newaxis,...]
xent = pzgx.cross_entropy(prior)[tf.newaxis,...]
if return_pred:
pred = tf.math.argmax(
tf.reduce_mean(pxgz.probs_parameter(), axis=0),
axis=-1)
return tf.concat([kl, dent, eent, xent], axis=0), pred
else:
return tf.concat([kl, dent, eent, xent], axis=0)
return T
T = get_statistics(encoder, decoder, prior) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
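A small usage sketch for orientation (the batch size is an arbitrary assumption): `T` returns one row per statistic and one column per example.

```python
x_batch, _ = next(iter(eval_dataset.batch(32)))
tx = T(x_batch)
print(tx.shape)  # expected: (4, 32) -- [num_statistics, batch_size]
```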
2 Define DoSE helper classes and functions | def get_DoSE_KDE(T, dataset):
"""Get a distribution and decision rule for OOD detection using DoSE.
Given a tensor of statistics tx, compute a Kernel Density Estimate (KDE) of
the statistics. This uses a quantiles trick to cut down the number of
samples used in the KDE (to lower the cost of evaluating a trial point).
Args:
T: A function which takes an input image and returns a vector of
statistics evaluated using the model.
dataset: A tensorflow_datasets `Dataset` which will be used to evaluate
statistics to construct the estimator.
Returns:
is_ood: A function which takes a new point `x` and `threshold`, and
computes the decision rule KDE.log_prob(T(x)) < threshold
dose_kde: A tfd.MixtureSameFamily object. The distribution used as the KDE
from which the log_prob of a batch of statistics can be computed.
"""
# First we should evaluate the statistics on the training set.
it = iter(dataset)
for x, y in it:
if not "tx" in locals():
tx = T(x)
else:
tx = tf.concat([tx, T(x)], axis=-1)
n = tf.cast(tf.shape(tx)[-1], tx.dtype)
num_quantiles = int(25)
q = tfp.stats.quantiles(tx, num_quantiles, axis=-1)
q = tf.transpose(q, tf.roll(tf.range(tf.rank(q)), shift=-1, axis=0))
# Scott's Rule:
h = 3.49 * tf.math.reduce_std(tx, axis=-1, keepdims=True) * (n)**(-1./3.)
h *= n / num_quantiles
dose_kde = tfd.Independent(
tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=tf.zeros(num_quantiles + 1)),
components_distribution=tfd.Normal(loc=q, scale=h)),
reinterpreted_batch_ndims=1)
is_ood = lambda x, threshold: dose_kde.log_prob(tf.transpose(T(x), [1, 0])) < tf.math.log(threshold)
dose_log_prob = lambda x: dose_kde.log_prob(tf.transpose(T(x), [1, 0]))
# T(x) returns shape [T, N], but dose_kde works on shape [N, T]
return is_ood, dose_kde, dose_log_prob, tx
class DoSE_administrator(object):
def __init__(self, T, train_dataset, eval_dataset):
"""Administrate DoSE for model evaluation in a more efficient way.
This high level object just calls the lower level DoSE methods, but
also evaluates the DoSE log-probabilities on the evaluation dataset.
Using these, we can do things like compute auroc much more efficiently.
"""
dose_build = get_DoSE_KDE(T, train_dataset)
# Call DoSE on an image/batch x for lp threshold `threshold`
self.is_ood = dose_build[0]
# Actual dose distribution
self.dose_dist = dose_build[1]
self.dose_lp = lambda t: self.dose_dist.log_prob(tf.transpose(t, [1, 0]))
# Get the log-probability of a batch from dose
self.dose_log_prob = dose_build[2]
# This helps us evaluate auroc more reliably.
self.training_stats = dose_build[3]
# Get training_log probs efficiently
train_size = self.training_stats.shape[-1]
bs = train_size // 1000
for i in range(1000):
tlp = self.dose_lp(self.training_stats[..., bs*i:bs*(i+1)])
if not hasattr(self, 'training_lp'):
self.training_lp = tlp
else:
self.training_lp = tf.concat([self.training_lp, tlp], axis=0)
# Get log_probs, images, labels, and statistics
# on the evaluation dataset.
eval_it = iter(eval_dataset)
for x, y in eval_it:
if not hasattr(self, 'eval_lp'):
self.eval_stats = T(x)
self.eval_lp = self.dose_lp(self.eval_stats)
self.eval_label = y
self.eval_ims = x
else:
tx = T(x)
self.eval_stats = tf.concat([self.eval_stats,
tx],
axis=0)
self.eval_lp = tf.concat([self.eval_lp,
self.dose_lp(tx)],
axis=0)
self.eval_label = tf.concat([self.eval_label, y], axis=0)
self.eval_ims = tf.concat([self.eval_ims, x], axis=0)
def get_acc(self, threshold):
"""Evaluate the OOD accuracy for a certain threshold probability.
This computes the decision rule: `log q(x) < tf.math.log(thresh)`
on the eval dataset. It uses this decision rule to evaluate the
number of correct predictions, along with the 4 components of the
confusion matrix.
Args:
threshold: A threshold on the DoSE probability density.
Returns:
nc: Number of correct predictions
nt: Number of total predictions
tp: Number of true positives
tn: Number of true negatives
fp: Number of false positives
fn: Number of false negatives
"""
yhat = self.eval_lp < tf.math.log(threshold)
fp = tf.reduce_sum(tf.cast(
tf.logical_and(tf.math.not_equal(yhat, self.eval_label),
tf.equal(self.eval_label, False)),
tf.int32),axis=0)
fn = tf.reduce_sum(tf.cast(
tf.logical_and(tf.math.not_equal(yhat, self.eval_label),
tf.equal(self.eval_label, True)),
tf.int32),axis=0)
tp = tf.reduce_sum(tf.cast(
tf.logical_and(tf.equal(yhat, self.eval_label),
tf.equal(self.eval_label, True)),
tf.int32),axis=0)
tn = tf.reduce_sum(tf.cast(
tf.logical_and(tf.equal(yhat, self.eval_label),
tf.equal(self.eval_label, False)),
tf.int32),axis=0)
nc = tp+tn
nt = tf.cast(tf.size(self.eval_label), tf.float32)
return nc, nt, tp, tn, fp, fn
def roc_curve(self, nbins):
"""Get the roc curve for the model."""
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, 0.))))
fpr = [fp.numpy() / (fp.numpy()+tn.numpy())]
tpr = [tp.numpy()/ (tp.numpy() + fn.numpy())]
for i in range(1, nbins+1):
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, i/float(nbins)*100.))))
fpr.append(fp.numpy()/ (fp.numpy() + tn.numpy()))
tpr.append(tp.numpy()/ (tp.numpy() + fn.numpy()))
return fpr, tpr
def precision_recall_curve(self, nbins):
"""Get the precision-recall curve for the model."""
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, 0.))))
precision = [tp.numpy()/ (tp.numpy() + fp.numpy())]
recall = [tp.numpy() / (tp.numpy() + fn.numpy())]
for i in range(1, nbins+1):
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, i/float(nbins)*100.))))
precision.append(tp.numpy()/ (tp.numpy() + fp.numpy()))
recall.append(tp.numpy() / (tp.numpy() + fn.numpy()))
return precision, recall | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
3 Setup OOD dataset | # For evaluating statistics on the training set, we need to perform a
# pass through the dataset.
train_one_pass = tfds.load('mnist')['train']
train_one_pass = tfn.util.tune_dataset(train_one_pass,
batch_size=1000,
repeat_count=None,
preprocess_fn=_preprocess)
# OOD dataset is Fashion_MNIST
ood_data = tfds.load('fashion_mnist')['test'].map(_preprocess).map(
lambda x, y: (x, tf.ones_like(y, dtype=tf.bool)))
# In-distribution data is the MNIST test set.
ind_data = tfds.load('mnist')['test'].map(_preprocess).map(
lambda x, y: (x, tf.zeros_like(y, dtype=tf.bool)))
# Our trial dataset is a 50-50 split of the two.
hybrid_data = ind_data.concatenate(ood_data)
hybrid_data = tfn.util.tune_dataset(hybrid_data, batch_size=100,
shuffle_size=20000,repeat_count=None) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
4 Administer DoSE | DoSE_admin = DoSE_administrator(T, train_one_pass, hybrid_data) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
5 Evaluate OOD performance | fp, tp = DoSE_admin.roc_curve(10000)
precision, recall = DoSE_admin.precision_recall_curve(10000)
plt.figure(figsize=[10,5])
plt.subplot(121)
plt.plot(fp, tp, 'b-')
plt.xlim(0, 1.)
plt.ylim(0., 1.)
plt.xlabel('FPR', fontsize=12)
plt.ylabel('TPR', fontsize=12)
plt.title("AUROC: %.4f"%np.trapz(tp, fp), fontsize=12)
plt.subplot(122)
plt.plot(recall, precision, 'b-')
plt.xlim(0, 1.)
plt.ylim(0., 1.)
plt.xlabel('Recall', fontsize=12)
plt.ylabel('Precision', fontsize=12)
plt.title("AUPRC: %.4f"%np.trapz(precision[1:], recall[1:]), fontsize=12)
Sorted_ims = tf.gather(DoSE_admin.eval_ims, tf.argsort(DoSE_admin.eval_lp))
Sorted_labels = tf.gather(DoSE_admin.eval_label, tf.argsort(DoSE_admin.eval_lp))
sorted_ind = tf.gather(Sorted_ims, tf.where(Sorted_labels == False))[:,0]
sorted_ood = tf.gather(Sorted_ims, tf.where(Sorted_labels == True))[:,0]
print("Most False Positive")
tfn.util.display_imgs(sorted_ind[:20])
print("Most True Negative")
tfn.util.display_imgs(sorted_ind[-20:])
print("Most False Negative")
tfn.util.display_imgs(sorted_ood[-20:])
print("Most True Positive")
tfn.util.display_imgs(sorted_ood[:20]) | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | tensorflow/probability | apache-2.0 |
Source reconstruction using an LCMV beamformer
This tutorial gives an overview of the beamformer method and shows how to
reconstruct source activity using an LCMV beamformer. | # Authors: Britta Westner <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample, fetch_fsaverage
from mne.beamformer import make_lcmv, apply_lcmv | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Introduction to beamformers
A beamformer is a spatial filter that reconstructs source activity by
scanning through a grid of pre-defined source points and estimating activity
at each of those source points independently. A set of weights is
constructed for each defined source location which defines the contribution
of each sensor to this source.
Beamformers are often used for their focal reconstructions and their ability
to reconstruct deeper sources. They can also suppress external noise sources.
The beamforming method applied in this tutorial is the linearly constrained
minimum variance (LCMV) beamformer :footcite:VanVeenEtAl1997, which operates
on time series.
Frequency-resolved data can be reconstructed with the dynamic imaging of
coherent sources (DICS) beamforming method :footcite:GrossEtAl2001.
As we will see in the following, the spatial filter is computed from two
ingredients: the forward model solution and the covariance matrix of the
data.
Data processing
We will use the sample data set for this tutorial and reconstruct source
activity on the trials with left auditory stimulation. | data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
# Read the raw data
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443'] # bad MEG channel
# Set up the epoching
event_id = 1 # those are the trials with left-ear auditory stimuli
tmin, tmax = -0.2, 0.5
events = mne.find_events(raw)
# pick relevant channels
raw.pick(['meg', 'eog']) # pick channels of interest
# Create epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
# for speed purposes, cut to a window of interest
evoked = epochs.average().crop(0.05, 0.15)
# Visualize averaged sensor space data
evoked.plot_joint()
del raw # save memory | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Computing the covariance matrices
Spatial filters use the data covariance to estimate the filter
weights. The data covariance matrix will be inverted during the spatial
filter computation, so it is valuable to plot the covariance matrix and its
eigenvalues to gauge whether matrix inversion will be possible.
Also, because we want to combine different channel types (magnetometers and
gradiometers), we need to account for the different amplitude scales of these
channel types. To do this we will supply a noise covariance matrix to the
beamformer, which will be used for whitening.
The data covariance matrix should be estimated from a time window that
includes the brain signal of interest and incorporates enough samples for a
stable estimate. A rule of thumb is to
use more samples than there are channels in the data set; see
:footcite:BrookesEtAl2008 for more detailed advice on covariance estimation
for beamformers. Here, we use a time
window incorporating the expected auditory response at around 100 ms post
stimulus and extend the period to account for a low number of trials (72) and
low sampling rate of 150 Hz. | data_cov = mne.compute_covariance(epochs, tmin=0.01, tmax=0.25,
method='empirical')
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0,
method='empirical')
data_cov.plot(epochs.info)
del epochs | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
When looking at the covariance matrix plots, we can see that our data is
slightly rank-deficient as the rank is not equal to the number of channels.
Thus, we will have to regularize the covariance matrix before inverting it
in the beamformer calculation. This can be achieved by setting the parameter
reg=0.05 when calculating the spatial filter with
:func:~mne.beamformer.make_lcmv. This corresponds to loading the diagonal
of the covariance matrix with 5% of the sensor power.
The forward model
The forward model is the other important ingredient for the computation of a
spatial filter. Here, we will load the forward model from disk; more
information on how to create a forward model can be found in this tutorial:
tut-forward.
Note that beamformers are usually computed in a :class:volume source space
<mne.VolSourceEstimate>, because estimating only cortical surface
activation can misrepresent the data. | # Read forward model
fwd_fname = meg_path / 'sample_audvis-meg-vol-7-fwd.fif'
forward = mne.read_forward_solution(fwd_fname) | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Handling depth bias
The forward model solution is inherently biased toward superficial sources.
When analyzing single conditions it is best to mitigate the depth bias
somehow. There are several ways to do this:
- :func:mne.beamformer.make_lcmv has a depth parameter that normalizes
  the forward model prior to computing the spatial filters. See the docstring
  for details.
- Unit-noise gain beamformers handle depth bias by normalizing the
  weights of the spatial filter. Choose this by setting
  weight_norm='unit-noise-gain'.
- When computing the Neural activity index, the depth bias is handled by
  normalizing both the weights and the estimated noise (see
  :footcite:VanVeenEtAl1997). Choose this by setting weight_norm='nai'
  (see the sketch after this list).
Note that when comparing conditions, the depth bias will cancel out and it is
possible to set both parameters to None.
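For instance, the NAI variant mentioned above could be constructed as follows — a sketch using the same ingredients as the filters computed below (`filters_nai` is a hypothetical name):

```python
# Sketch: a neural-activity-index-normalized LCMV filter.
filters_nai = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
                        noise_cov=noise_cov, pick_ori='max-power',
                        weight_norm='nai', rank=None)
```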
Compute the spatial filter
Now we can compute the spatial filter. We'll use a unit-noise gain beamformer
to deal with depth bias, and will also optimize the orientation of the
sources such that output power is maximized.
This is achieved by setting pick_ori='max-power'.
This gives us one source estimate per source (i.e., voxel), which is known
as a scalar beamformer. | filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='unit-noise-gain', rank=None)
# You can save the filter for later use with:
# filters.save('filters-lcmv.h5') | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
It is also possible to compute a vector beamformer, which gives back three
estimates per voxel, corresponding to the three direction components of the
source. This can be achieved by setting
pick_ori='vector' and will yield a :class:volume vector source estimate
<mne.VolVectorSourceEstimate>. So we will compute another set of filters
using the vector beamformer approach: | filters_vec = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='vector',
weight_norm='unit-noise-gain', rank=None)
# save a bit of memory
src = forward['src']
del forward | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Apply the spatial filter
The spatial filter can be applied to different data types: raw, epochs,
evoked data or the data covariance matrix to gain a static image of power.
The function to apply the spatial filter to :class:~mne.Evoked data is
:func:~mne.beamformer.apply_lcmv which is
what we will use here. The other functions are
:func:~mne.beamformer.apply_lcmv_raw,
:func:~mne.beamformer.apply_lcmv_epochs, and
:func:~mne.beamformer.apply_lcmv_cov. | stc = apply_lcmv(evoked, filters)
stc_vec = apply_lcmv(evoked, filters_vec)
del filters, filters_vec | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Visualize the reconstructed source activity
We can visualize the source estimate in different ways, e.g. as a volume
rendering, an overlay onto the MRI, or as an overlay onto a glass brain.
The plots for the scalar beamformer show brain activity in the right temporal
lobe around 100 ms post stimulus. This is expected given the left-ear
auditory stimulation of the experiment. | lims = [0.3, 0.45, 0.6]
kwargs = dict(src=src, subject='sample', subjects_dir=subjects_dir,
initial_time=0.087, verbose=True) | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
On MRI slices (orthoview; 2D) | stc.plot(mode='stat_map', clim=dict(kind='value', pos_lims=lims), **kwargs) | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
On MNI glass brain (orthoview; 2D) | stc.plot(mode='glass_brain', clim=dict(kind='value', lims=lims), **kwargs) | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Volumetric rendering (3D) with vectors
These plots can also be shown using a volumetric rendering via
:meth:~mne.VolVectorSourceEstimate.plot_3d. Let's try visualizing the
vector beamformer case. Here we get three source time courses out per voxel
(one for each component of the dipole moment: x, y, and z), which appear
as small vectors in the visualization (in the 2D plotters, only the
magnitude can be shown): | brain = stc_vec.plot_3d(
clim=dict(kind='value', lims=lims), hemi='both', size=(600, 600),
views=['sagittal'],
# Could do this for a 3-panel figure:
# view_layout='horizontal', views=['coronal', 'sagittal', 'axial'],
brain_kwargs=dict(silhouette=True),
**kwargs) | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Visualize the activity of the maximum voxel with all three components
We can also visualize all three components in the peak voxel. For this, we
will first find the peak voxel and then plot the time courses of this voxel. | peak_vox, _ = stc_vec.get_peak(tmin=0.08, tmax=0.1, vert_as_index=True)
ori_labels = ['x', 'y', 'z']
fig, ax = plt.subplots(1)
for ori, label in zip(stc_vec.data[peak_vox, :, :], ori_labels):
ax.plot(stc_vec.times, ori, label='%s component' % label)
ax.legend(loc='lower right')
ax.set(title='Activity per orientation in the peak voxel', xlabel='Time (s)',
ylabel='Amplitude (a. u.)')
mne.viz.utils.plt_show()
del stc_vec | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Morph the output to fsaverage
We can also use volumetric morphing to get the data to fsaverage space. This
is for example necessary when comparing activity across subjects. Here, we
will use the scalar beamformer example.
We pass a :class:mne.SourceMorph as the src argument to
mne.VolSourceEstimate.plot. To save some computational load when applying
the morph, we will crop the stc: | fetch_fsaverage(subjects_dir) # ensure fsaverage src exists
fname_fs_src = subjects_dir / 'fsaverage' / 'bem' / 'fsaverage-vol-5-src.fif'
src_fs = mne.read_source_spaces(fname_fs_src)
morph = mne.compute_source_morph(
src, subject_from='sample', src_to=src_fs, subjects_dir=subjects_dir,
niter_sdr=[5, 5, 2], niter_affine=[5, 5, 2], zooms=7, # just for speed
verbose=True)
stc_fs = morph.apply(stc)
del stc
stc_fs.plot(
src=src_fs, mode='stat_map', initial_time=0.085, subjects_dir=subjects_dir,
clim=dict(kind='value', pos_lims=lims), verbose=True) | dev/_downloads/c6baf7c1a2f53fda44e93271b91f45b8/50_beamformer_lcmv.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
If you don't see four lines of output above, you might be rendering this on Github. If you want to see the output, same as the Python output below, cut and paste the Github URL to nbviewer.jupyter.org, which will do a more thorough rendering job.
Now let's do the same thing in Python. Yes, Python has its own collections.deque, or we could use a list object as a queue, but the point here is to show off similarities, so let's stick with a dict-based implementation, mirroring the JavaScript. | class Queue:
def __init__(self):
self._storage = {}
self._start = -1 # replicating 0 index used for arrays
self._end = -1 # replicating 0 index used for arrays
def size(self):
return self._end - self._start
def enqueue(self, val):
self._end += 1
self._storage[self._end] = val
def dequeue(self):
if self.size():
self._start += 1
nextUp = self._storage[self._start]
del self._storage[self._start]
if not self.size():
self._start = -1
self._end = -1
return nextUp
microsoftQueue = Queue()
microsoftQueue.enqueue("{user: [email protected]}")
microsoftQueue.enqueue("{user: [email protected]}")
microsoftQueue.enqueue("{user: [email protected]}")
microsoftQueue.enqueue("{user: [email protected]}")
def sendTo(recipient):
print(recipient, "gets a Surface Studio")
# Function to send everyone their Surface Studio!
def sendSurface(recepient):
sendTo(recepient)
# When your server is ready to handle this queue, execute this:
while microsoftQueue.size() > 0:
sendSurface(microsoftQueue.dequeue()) | Comparing JavaScript with Python.ipynb | 4dsolutions/Python5 | mit |
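For completeness, here is the same FIFO behavior using the standard library's collections.deque mentioned above — a sketch reusing the sendSurface helper:

```python
from collections import deque

q = deque()
q.append("{user: [email protected]}")   # enqueue at the right end
q.append("{user: [email protected]}")
while q:
    sendSurface(q.popleft())            # dequeue from the left end (FIFO)
```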
Another example of features JavaScript is acquiring with ES6 (Sixth Edition we might call it), are rest and default parameters. A "rest" parameter has nothing to do with RESTful, and everything to do with "the rest" as in "whatever is left over."
For example, in the function below, we pass in more ingredients than some recipe requires, yet because of the rest argument, which has to be the last, the extra ingredients are kept. Pre ES6, JavaScript had no simple mechanism for allowing parameters to "rise to the occasion." Instead they would match up, or stay undefined. | %%javascript
var sendTo = function(s){
element.append(s + "<br />");
}
//Function to send everyone their Surface Studio!
let sendSurface = recepient => {
sendTo(recepient);
}
function recipe(ingredient0, ingre1, ing2, ...more){
sendSurface(ingredient0 + " is one ingredient.");
sendSurface(more[1] + " is another.");
}
recipe("shrimp", "avocado", "tomato", "potato", "squash", "peanuts"); | Comparing JavaScript with Python.ipynb | 4dsolutions/Python5 | mit |
In Python we have both sequence and dictionary parameters, which we could say are both rest parameters, one for scooping up positionals, the other for gathering the named. Here's how that looks: | def recipe(ingr0, *more, ingr1, meat="turkey", **others):
print(more)
print(others)
recipe("avocado", "tomato", "potato", ingr1="squash", dessert="peanuts", meat = "shrimp") | Comparing JavaScript with Python.ipynb | 4dsolutions/Python5 | mit |
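The default parameters mentioned alongside rest parameters above also map directly to Python — meat="turkey" in the recipe signature is exactly that. A minimal standalone sketch:

```python
def greet(name, greeting="Hello"):   # greeting falls back to a default
    print(greeting + ", " + name)

greet("World")           # Hello, World
greet("World", "Howdy")  # Howdy, World
```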