markdown stringlengths 0-37k | code stringlengths 1-33.3k | path stringlengths 8-215 | repo_name stringlengths 6-77 | license stringclasses 15 values
---|---|---|---|---
Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset. | import numpy as np
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape) | assignment1/two_layer_net.ipynb | zlpure/CS231n | mit |
Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate. | input_size = 32 * 32 * 3
hidden_size = 100
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=10000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.4, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
| assignment1/two_layer_net.ipynb | zlpure/CS231n | mit |
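The exponential schedule described above is simple to state in code. A minimal sketch of the idea (the actual decay is applied inside net.train; the variable names here are illustrative):

```python
learning_rate = 1e-4
learning_rate_decay = 0.95

for epoch in range(10):
    # ... run one epoch of SGD-with-momentum updates at the current rate ...
    learning_rate *= learning_rate_decay  # multiply by the decay rate after each epoch
```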
Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized. | import matplotlib.pyplot as plt
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net) | assignment1/two_layer_net.ipynb | zlpure/CS231n | mit |
Tune your hyperparameters
What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
Experiment: Your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.). | best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
best_net = net  # placeholder -- see the hyperparameter sweep sketch after this cell
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net) | assignment1/two_layer_net.ipynb | zlpure/CS231n | mit |
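As a starting point for the TODO above, here is a minimal sweep sketch. It assumes the TwoLayerNet constructor and the train/predict interface used earlier in this notebook; the grid values are illustrative, not tuned:

```python
best_val_acc = -1
for hidden_size in [50, 100, 200]:
    for learning_rate in [1e-4, 5e-4, 1e-3]:
        for reg in [0.1, 0.4, 1.0]:
            candidate = TwoLayerNet(input_size, hidden_size, num_classes)
            candidate.train(X_train, y_train, X_val, y_val,
                            num_iters=2000, batch_size=200,
                            learning_rate=learning_rate,
                            learning_rate_decay=0.95, reg=reg)
            val_acc = (candidate.predict(X_val) == y_val).mean()
            if val_acc > best_val_acc:
                best_val_acc, best_net = val_acc, candidate
print('Best validation accuracy found by the sweep: ', best_val_acc)
```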
Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
We will give you extra bonus point for every 1% of accuracy above 52%. | test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc) | assignment1/two_layer_net.ipynb | zlpure/CS231n | mit |
It's most common to pass a list into the sorted() function, but in fact it can take as input any sort of iterable collection. The older list.sort() method is an alternative detailed below. The sorted() function seems easier to use compared to sort(), so I recommend using sorted().
The sorted() function can be customized through optional arguments. The sorted() optional argument reverse=True, e.g. sorted(list, reverse=True), makes it sort backwards. | strs = ['aa', 'BB', 'zz', 'CC']
print(sorted(strs))
print(sorted(strs, reverse=True)) | 4 - Sorting.ipynb | sastels/Onboarding | mit |
Custom Sorting With key
For more complex custom sorting, sorted() takes an optional "key=" specifying a "key" function that transforms each element before comparison. The key function takes in 1 value and returns 1 value, and the returned "proxy" value is used for the comparisons within the sort.
For example, with a list of strings, specifying key=len (the built-in len() function) sorts the strings by length, from shortest to longest. The sort calls len() for each string to get the list of proxy length values, and then sorts with those proxy values. | strs = ['ccc', 'aaaa', 'd', 'bb']
print(sorted(strs, key=len)) | 4 - Sorting.ipynb | sastels/Onboarding | mit |
As another example, specifying "str.lower" as the key function is a way to force the sorting to treat uppercase and lowercase the same: | strs = ['aa', 'BB', 'zz', 'CC']
print(sorted(strs, key=str.lower)) | 4 - Sorting.ipynb | sastels/Onboarding | mit |
You can also pass in your own MyFn as the key function. Say we have a list of strings we want to sort by the last letter of the string. | strs = ['xc', 'zb', 'yd' ,'wa'] | 4 - Sorting.ipynb | sastels/Onboarding | mit |
A little function that takes a string, and returns its last letter.
This will be the key function (takes in 1 value, returns 1 value). | def MyFn(s):
return s[-1] | 4 - Sorting.ipynb | sastels/Onboarding | mit |
Now pass key=MyFn to sorted() to sort by the last letter. | print(sorted(strs, key=MyFn)) | 4 - Sorting.ipynb | sastels/Onboarding | mit |
To use key= custom sorting, remember that you provide a function that takes one value and returns the proxy value to guide the sorting. There is also an optional argument "cmp=cmpFn" to sorted() that specifies a traditional two-argument comparison function that takes two values from the list and returns negative/0/positive to indicate their ordering. The built-in comparison function for strings, ints, ... is cmp(a, b), so often you want to call cmp() in your custom comparator. Note that cmp= exists only in Python 2; it was removed in Python 3, where the one-argument key= sorting is the preferred approach.
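If you do have a two-argument comparator in Python 3, it can be adapted with functools.cmp_to_key. A short sketch (the comparator here is illustrative):

```python
from functools import cmp_to_key

def compare_len_then_alpha(a, b):
    # Returns negative/0/positive, following the classic cmp() convention
    if len(a) != len(b):
        return len(a) - len(b)
    return (a > b) - (a < b)

strs = ['ccc', 'aaaa', 'd', 'bb']
print(sorted(strs, key=cmp_to_key(compare_len_then_alpha)))
```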
sort() method
As an alternative to sorted(), the sort() method on a list sorts that list into ascending order, e.g. list.sort(). The sort() method changes the underlying list and returns None, so use it like this: | alist = [1,5,9,2,5]
alist.sort()
alist | 4 - Sorting.ipynb | sastels/Onboarding | mit |
Incorrect (returns None): | blist = alist.sort()
blist | 4 - Sorting.ipynb | sastels/Onboarding | mit |
The above is a very common misunderstanding with sort() -- it does not return the sorted list. The sort() method must be called on a list; it does not work on any enumerable collection (but the sorted() function above works on anything). The sort() method predates the sorted() function, so you will likely see it in older code. The sort() method does not need to create a new list, so it can be a little faster in the case that the elements to sort are already in a list.
Tuples
A tuple is a fixed size grouping of elements, such as an (x, y) co-ordinate. Tuples are like lists, except they are immutable and do not change size (tuples are not strictly immutable since one of the contained elements could be mutable). Tuples play a sort of "struct" role in Python -- a convenient way to pass around a little logical, fixed size bundle of values. A function that needs to return multiple values can just return a tuple of the values. For example, if I wanted to have a list of 3-d coordinates, the natural python representation would be a list of tuples, where each tuple is size 3 holding one (x, y, z) group.
To create a tuple, just list the values within parentheses separated by commas. The "empty" tuple is just an empty pair of parentheses. Accessing the elements in a tuple is just like a list -- len(), [ ], for, in, etc. all work the same. | tuple = (1, 2, 'hi')
print(len(tuple))
print(tuple[2]) | 4 - Sorting.ipynb | sastels/Onboarding | mit |
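As mentioned above, a function that needs to return multiple values can simply return a tuple. A small illustrative example:

```python
def min_max(values):
    # Return two values at once as a tuple
    return (min(values), max(values))

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)  # 1 5
```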
Tuples are immutable, i.e. they cannot be changed. | tuple[2] = 'bye' | 4 - Sorting.ipynb | sastels/Onboarding | mit |
If you want to change a tuple variable, you must reassign it to a new tuple: | tuple = (1, 2, 'bye')
tuple | 4 - Sorting.ipynb | sastels/Onboarding | mit |
To create a size-1 tuple, the lone element must be followed by a comma. | tuple = ('hi',)
tuple | 4 - Sorting.ipynb | sastels/Onboarding | mit |
It's a funny case in the syntax, but the comma is necessary to distinguish the tuple from the ordinary case of putting an expression in parentheses. In some cases you can omit the parentheses and Python will see from the commas that you intend a tuple.
Assigning a tuple to an identically sized tuple of variable names assigns all the corresponding values. If the tuples are not the same size, it throws an error. This feature works for lists too. | (err_string, err_code) = ('uh oh', 666)
print(err_code, ':', err_string) | 4 - Sorting.ipynb | sastels/Onboarding | mit |
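The same unpacking works with lists; a mismatch in sizes raises an error. For example:

```python
[a, b, c] = [1, 2, 3]
print(a, b, c)  # 1 2 3
# [x, y] = [1, 2, 3] would raise "ValueError: too many values to unpack"
```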
List Comprehensions
A list comprehension is a compact way to write an expression that expands to a whole list. Suppose we have a list nums [1, 2, 3], here is the list comprehension to compute a list of their squares [1, 4, 9]: | nums = [1, 2, 3, 4]
squares = [ n * n for n in nums ]
squares | 4 - Sorting.ipynb | sastels/Onboarding | mit |
The syntax is [ expr for var in list ] -- the for var in list looks like a regular for-loop, but without the colon (:). The expr to its left is evaluated once for each element to give the values for the new list. Here is an example with strings, where each string is changed to upper case with '!!!' appended: | strs = ['hello', 'and', 'goodbye']
shouting = [ s.upper() + '!!!' for s in strs ]
shouting | 4 - Sorting.ipynb | sastels/Onboarding | mit |
You can add an if test to the right of the for-loop to narrow the result. The if test is evaluated for each element, including only the elements where the test is true. | ## Select values <= 2
nums = [2, 8, 1, 6]
small = [ n for n in nums if n <= 2 ]
small
## Select fruits containing 'a', change to upper case
fruits = ['apple', 'cherry', 'banana', 'lemon']
afruits = [ s.upper() for s in fruits if 'a' in s ]
afruits | 4 - Sorting.ipynb | sastels/Onboarding | mit |
This shows us where the competition data is stored, so that we can load the files into the notebook. We'll do that next.
Load the data
The second code cell in your notebook now appears below the three lines of output with the file locations.
Type the two lines of code below into your second code cell. Then, once you're done, either click on the blue play button, or hit [Shift] + [Enter]. | train_data = pd.read_csv("../input/titanic/train.csv")
train_data.head() | notebooks/machine_learning/raw/tut_titanic.ipynb | Kaggle/learntools | apache-2.0 |
Your code should return the output above, which corresponds to the first five rows of the table in train.csv. It's very important that you see this output in your notebook before proceeding with the tutorial!
If your code does not produce this output, double-check that your code is identical to the two lines above. And, make sure your cursor is in the code cell before hitting [Shift] + [Enter].
The code that you've just written is in the Python programming language. It uses a Python "module" called pandas (abbreviated as pd) to load the table from the train.csv file into the notebook. To do this, we needed to plug in the location of the file (which we saw was /kaggle/input/titanic/train.csv).
If you're not already familiar with Python (and pandas), the code shouldn't make sense to you -- but don't worry! The point of this tutorial is to (quickly!) make your first submission to the competition. At the end of the tutorial, we suggest resources to continue your learning.
At this point, you should have at least three code cells in your notebook.
Copy the code below into the third code cell of your notebook to load the contents of the test.csv file. Don't forget to click on the play button (or hit [Shift] + [Enter])! | test_data = pd.read_csv("../input/titanic/test.csv")
test_data.head() | notebooks/machine_learning/raw/tut_titanic.ipynb | Kaggle/learntools | apache-2.0 |
As before, make sure that you see the output above in your notebook before continuing.
Once all of the code runs successfully, all of the data (in train.csv and test.csv) is loaded in the notebook. (The code above shows only the first 5 rows of each table, but all of the data is there -- all 891 rows of train.csv and all 418 rows of test.csv!)
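If you want to verify those row counts yourself, a quick check (using the DataFrames loaded above; the expected shapes assume the standard Titanic CSVs) is:

```python
print(train_data.shape)  # expected: (891, 12)
print(test_data.shape)   # expected: (418, 11) -- no Survived column
```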
Part 3: Improve your score
Remember our goal: we want to find patterns in train.csv that help us predict whether the passengers in test.csv survived.
It might initially feel overwhelming to look for patterns, when there's so much data to sort through. So, we'll start simple.
Explore a pattern
Remember that the sample submission file in gender_submission.csv assumes that all female passengers survived (and all male passengers died).
Is this a reasonable first guess? We'll check if this pattern holds true in the data (in train.csv).
Copy the code below into a new code cell. Then, run the cell. | women = train_data.loc[train_data.Sex == 'female']["Survived"]
rate_women = sum(women)/len(women)
print("% of women who survived:", rate_women) | notebooks/machine_learning/raw/tut_titanic.ipynb | Kaggle/learntools | apache-2.0 |
Before moving on, make sure that your code returns the output above. The code above calculates the percentage of female passengers (in train.csv) who survived.
Then, run the code below in another code cell: | men = train_data.loc[train_data.Sex == 'male']["Survived"]
rate_men = sum(men)/len(men)
print("% of men who survived:", rate_men) | notebooks/machine_learning/raw/tut_titanic.ipynb | Kaggle/learntools | apache-2.0 |
The code above calculates the percentage of male passengers (in train.csv) who survived.
From this you can see that almost 75% of the women on board survived, whereas only 19% of the men lived to tell about it. Since gender seems to be such a strong indicator of survival, the submission file in gender_submission.csv is not a bad first guess, and it makes sense that it performed reasonably well!
But at the end of the day, this gender-based submission bases its predictions on only a single column. As you can imagine, by considering multiple columns, we can discover more complex patterns that can potentially yield better-informed predictions. Since it is quite difficult to consider several columns at once (or, it would take a long time to consider all possible patterns in many different columns simultaneously), we'll use machine learning to automate this for us.
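As a small taste of looking at two columns at once, a pandas groupby can compute the survival rate by sex and passenger class together. A sketch using the same train_data frame:

```python
# Survival rate for each (Sex, Pclass) combination
print(train_data.groupby(['Sex', 'Pclass'])['Survived'].mean())
```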
Your first machine learning model
We'll build a random forest model. This model is constructed of several "trees" (there are three trees in the picture below, but we'll construct 100!) that will individually consider each passenger's data and vote on whether the individual survived. Then, the random forest model makes a democratic decision: the outcome with the most votes wins!
The code cell below looks for patterns in four different columns ("Pclass", "Sex", "SibSp", and "Parch") of the data. It constructs the trees in the random forest model based on patterns in the train.csv file, before generating predictions for the passengers in test.csv. The code also saves these new predictions in a CSV file my_submission.csv.
Copy this code into your notebook, and run it in a new code cell. | from sklearn.ensemble import RandomForestClassifier
y = train_data["Survived"]
features = ["Pclass", "Sex", "SibSp", "Parch"]
X = pd.get_dummies(train_data[features])
X_test = pd.get_dummies(test_data[features])
model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=1)
model.fit(X, y)
predictions = model.predict(X_test)
output = pd.DataFrame({'PassengerId': test_data.PassengerId, 'Survived': predictions})
output.to_csv('my_submission.csv', index=False)
print("Your submission was successfully saved!") | notebooks/machine_learning/raw/tut_titanic.ipynb | Kaggle/learntools | apache-2.0 |
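Before submitting, you may also want a rough estimate of how well the model generalizes. One possible sketch, using scikit-learn's cross-validation on the same features (the 5-fold split is an arbitrary choice):

```python
from sklearn.model_selection import cross_val_score

scores = cross_val_score(model, X, y, cv=5)
print("Cross-validation accuracy: %0.3f (+/- %0.3f)" % (scores.mean(), scores.std()))
```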
These may look like trivial tasks, but they illustrate a very important concept. By drawing the separating line, we have learned a model that can generalize to new data. If you were to drop a new, unlabeled point onto this plot, the algorithm could now predict whether it should be a red point or a blue point.
If you would like to see the source code used to generate this, you can open the code in the fig_code folder, or you can load the code using the %load command.
As the next simple example, we will look at a regression algorithm: fitting a best-fit line to a set of data. | from fig_code import plot_linear_regression
plot_linear_regression() | notebooks/02.1-Machine-Learning-Intro.ipynb | palandatarxcom/sklearn_tutorial_cn | bsd-3-clause |
This is again an example of building a model from data, so the model can be used to generalize to new data. The model is learned from the training data, and can be used to predict the result for test data: given the x value of a point, the model lets us predict the corresponding y value. Again, this might seem like a trivial example, but it is a basic operation underlying machine learning algorithms.
Data representation in Scikit-learn
Machine learning is about building models from data, so we will start with how to represent data in a way the computer can understand. Along the way, we will use matplotlib examples to show how to display data graphically.
In Scikit-learn, the data for most machine learning algorithms is stored in a two-dimensional array or matrix. These may be numpy arrays, or in some cases scipy.sparse matrices. The size of the array should be [n_samples, n_features].
n_samples: the number of samples. Each sample is an independent item to be processed (e.g., classified). A sample may be a document, an image, a sound clip, a video, an astronomical measurement, a row in a database or CSV file, or any fixed collection of numeric values.
n_features: the number of features. A feature is a numeric representation describing a sample. Features are usually real-valued, though in some cases they may be boolean or discrete-valued.
The number of features must be fixed in advance. For a given sample, however, the feature vector can be very large (millions of entries) with most entries being zero. In that case scipy.sparse matrices come in handy: they store such data much more efficiently than numpy arrays. A short sketch of this follows below.
(Figure: layout of the [n_samples, n_features] data array, from the Python Data Science Handbook)
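A minimal sketch of the dense-versus-sparse storage mentioned above (the sizes are illustrative only):

```python
import numpy as np
from scipy import sparse

# A mostly-zero feature matrix: 10 samples x 20 features
X_dense = np.random.random((10, 20))
X_dense[X_dense < 0.9] = 0  # zero out roughly 90% of the entries

X_sparse = sparse.csr_matrix(X_dense)  # compressed sparse row format
print(X_sparse.nnz, 'stored non-zero entries instead of', X_dense.size)
```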
A simple example: the Iris dataset
As an example of a simple dataset, we will take a look at the iris dataset stored with scikit-learn. The data consists of measurements of three different species of iris flowers. These are the three species in the dataset, which we can display with the following code: | from IPython.core.display import Image, display
display(Image(filename='images/iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename='images/iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename='images/iris_virginica.jpg'))
print("Iris Virginica") | notebooks/02.1-Machine-Learning-Intro.ipynb | palandatarxcom/sklearn_tutorial_cn | bsd-3-clause |
Question:
If we wanted to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of shape [n_samples x n_features].
What would the samples refer to?
What would the features refer to?
Remember that the number of features must be fixed for every sample, and feature number i must be a similar kind of numeric quantity for each sample.
Loading the Iris data with scikit-learn
Scikit-learn has a very straightforward representation of the iris data, as follows:
Features in the Iris dataset:
sepal length (cm)
sepal width (cm)
petal length (cm)
petal width (cm)
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays: | from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[0])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names) | notebooks/02.1-Machine-Learning-Intro.ipynb | palandatarxcom/sklearn_tutorial_cn | bsd-3-clause |
The data is four-dimensional, but we can visualize two of the dimensions at a time using a simple scatter plot: | import numpy as np
import matplotlib.pyplot as plt
x_index = 0
y_index = 1
# This code uses the iris target names to label the colorbar
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.scatter(iris.data[:, x_index], iris.data[:, y_index],
c=iris.target, cmap=plt.cm.get_cmap('RdYlBu', 3))
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.clim(-0.5, 2.5)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index]); | notebooks/02.1-Machine-Learning-Intro.ipynb | palandatarxcom/sklearn_tutorial_cn | bsd-3-clause |
Quick exercise:
Change x_index and y_index in the script above and find a combination of the two that maximally separates the three classes.
This exercise is a preview of dimensionality reduction, which we will see later.
Other available data
They come in three flavors:
Packaged data: these small datasets ship with the scikit-learn installation and can be loaded with sklearn.datasets.load_*
Downloadable data: these larger datasets are available for download, and scikit-learn includes tools that streamline the download. They can be found in sklearn.datasets.fetch_*
Generated data: several datasets can be generated from models based on a random seed. They are available in sklearn.datasets.make_*
You can explore the available dataset loaders, fetchers, and generators using IPython's TAB completion. After importing datasets from sklearn,
type
datasets.load_ + TAB
or
datasets.fetch_ + TAB
or
datasets.make_ + TAB
to see a list of available functions. | from sklearn import datasets
# Type datasets.fetch_<TAB> or datasets.load_<TAB> in IPython to see all possibilities
# datasets.fetch_
# datasets.load_ | notebooks/02.1-Machine-Learning-Intro.ipynb | palandatarxcom/sklearn_tutorial_cn | bsd-3-clause |
3.8.1 Sorting out the metadata
The main decision to make when building a supermatrix is what metadata will be used to indicate that sequences of several genes belong to the same OTU in the tree. Obvious candidates would be the species name (stored as 'source_organism' if we read a GenBank file), or sample ID, voucher specimen and so on. Often, we would be required to modify the metadata in our Project, in a way that will correctly reflect the relationship between sequences that emerged from the same sample.
In the case of the Tetillidae.gb example file, sample IDs are stored either under 'source_specimen_voucher' or 'source_isolate'. In addition, identical voucher numbers are sometimes formatted differently for different genes.
In the file 'data/Tetillida_otus_corrected.csv', I have unified the columns 'source_specimen_voucher' and 'source_isolate' in a single column called 'source_otu' and also made sure to uniformly format all the voucher specimens: | from IPython.display import Image
Image('images/fix_otus.png', width = 400) | notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb | szitenberg/ReproPhyloVagrant | mit |
Our Project has to be updated with the recent changes to the spreadsheet: | pj.correct_metadata_from_file('data/Tetillida_otus_corrected.csv') | notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb | szitenberg/ReproPhyloVagrant | mit |
Such fixes can also be done programmatically (see section 3.4), as in the sketch below.
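For example, a programmatic version of the spreadsheet fix might look like the following pandas sketch. The column names ('source_specimen_voucher', 'source_isolate', 'source_otu') follow the text above, but the input filename and the normalization rule are only assumptions about what the actual cleanup required:

```python
import pandas as pd

# Hypothetical input file holding the raw, per-record metadata
df = pd.read_csv('data/Tetillida_otus.csv')

# Unify the two sample-ID columns into a single 'source_otu' column
df['source_otu'] = df['source_specimen_voucher'].fillna(df['source_isolate'])

# Illustrative normalization: uppercase and strip spaces so the same voucher
# formatted differently for different genes becomes identical
df['source_otu'] = df['source_otu'].str.upper().str.replace(' ', '', regex=False)

df.to_csv('data/Tetillida_otus_corrected.csv', index=False)
```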
3.8.2 Designing the supermatrix
Supermatrices are configured with objects of the class Concatenation. In a Concatenation object we can indicate the following:
The name of the concatenation
The loci it includes (here we pass locus objects rather than just Locus names)
The qualifier or metadata that stores the relationships among the records
What loci all the OTUs must have
Groups of loci from which each OTU must have at least one
Which trimmed alignment to use, if we have more than one for each locus in our Project
Here is an example: | concat = Concatenation('large_concat', # Any unique string
pj.loci, # This is a list of Locus objects
'source_otu', # The values of this qualifier
# flag sequences that belong to the same
# sample
otu_must_have_all_of=['MT-CO1'], # All the OTUS must have a cox1 sequence
otu_must_have_one_of=[['18s','28s']], # All the OTUs must have either 18s or 28s or both
define_trimmed_alns=[] # We only have one alignment per gene
# so the list is empty (default value)
) | notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb | szitenberg/ReproPhyloVagrant | mit |
If we print this Concatenation object we get this message: | print concat | notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb | szitenberg/ReproPhyloVagrant | mit |
3.8.3 Building the supermatrix
Building the suprematrix has two steps. First we need to mount the Concatenation object onto the Project where it will be stored in the list pj.concatenations. Second, we need to construct the MultipleSeqAlignment object, which will be stored in the pj.trimmed_alignments dictionary, under the key 'large_concat' in this case: | pj.add_concatenation(concat)
pj.make_concatenation_alignments()
pickle_pj(pj, 'outputs/my_project.pkpj') | notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb | szitenberg/ReproPhyloVagrant | mit |
Now that this supermatrix is stored as a trimmed alignment in the pj.trimmed_alignments dictionary, we can write it to a file or fetch the MultipleSeqAlignment object, as shown in section 3.7.
3.8.4 Quick reference | # Design a supermatrix
concat = Concatenation('concat_name', loci_list, 'otu_qualifier', **kwargs)
# Add it to a project
pj.add_concatenation(concat)
# Build supermatrices based on the Concatenation
# objects in pj.concatenations
pj.make_concatenation_alignments() | notebooks/Tutorials/Basic/3.8 Building a supermatrix.ipynb | szitenberg/ReproPhyloVagrant | mit |
1. The dataset
Next, we describe the regression task that we will use in this session. The dataset is an adaptation of the <a href=http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html> STOCK dataset</a>, taken originally from the <a href=http://lib.stat.cmu.edu/> StatLib Repository</a>. The goal of this problem is to predict the values of the stocks of a given airplane company, given the values of another 9 companies in the same day.
<small> If you are reading this text from the python notebook with its full functionality, you can explore the results of the regression experiments using two alternative datasets:
The <a href=https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength>CONCRETE dataset</a>, taken from the <a href=https://archive.ics.uci.edu/ml/index.html>Machine Learning Repository at the University of California Irvine</a>. The goal of the CONCRETE task is to predict the compressive strength of cement mixtures based on eight observed variables related to the composition of the mixture and the age of the material.
The Advertising dataset, taken from the book <a href= http://www-bcf.usc.edu/~gareth/ISL/data.html> An Introduction to Statistical Learning with applications in R</a>, with permission from the authors: G. James, D. Witten, T. Hastie and R. Tibshirani. The goal of this problem is to predict the sales of a given product, knowing the investment in different advertising sectors. More specifically, the input and output variables can be described as follows:
Input features:
TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
Radio: advertising dollars spent on Radio
Newspaper: advertising dollars spent on Newspaper
Response variable:
Sales: sales of a single product in a given market (in thousands of widgets)
To do so, just replace stock by concrete or advertising in the next cell. Remember that you must run the cells again to see the changes.
</small> | import sys
import numpy as np
import pandas as pd
import scipy.io
import matplotlib.pyplot as plt
# SELECT dataset
# Available options are 'stock', 'concrete' or 'advertising'
ds_name = 'stock'
# Let us start by loading the data into the workspace, and visualizing the dimensions of all matrices
if ds_name == 'stock':
# STOCK DATASET
data = scipy.io.loadmat('datasets/stock.mat')
X_tr = data['xTrain']
S_tr = data['sTrain']
X_tst = data['xTest']
S_tst = data['sTest']
elif ds_name == 'concrete':
# CONCRETE DATASET.
data = scipy.io.loadmat('datasets/concrete.mat')
X_tr = data['X_tr']
S_tr = data['S_tr']
X_tst = data['X_tst']
S_tst = data['S_tst']
elif ds_name == 'advertising':
# ADVERTISING DATASET
df = pd.read_csv('datasets/Advertising.csv', header=0)
X_tr = df.values[:150, 1:4]
S_tr = df.values[:150, [-1]] # The brackets around -1 is to make sure S_tr is a column vector, as in the other datasets
X_tst = df.values[150:, 1:4]
S_tst = df.values[150:, [-1]]
else:
print('Unknown dataset')
# Print the data dimension and the dataset sizes
print("SELECTED DATASET: " + ds_name)
print("---- The size of the training set is {0}, that is: {1} samples with dimension {2}.".format(
X_tr.shape, X_tr.shape[0], X_tr.shape[1]))
print("---- The target variable of the training set contains {0} samples with dimension {1}".format(
S_tr.shape[0], S_tr.shape[1]))
print("---- The size of the test set is {0}, that is: {1} samples with dimension {2}.".format(
X_tst.shape, X_tst.shape[0], X_tst.shape[1]))
print("---- The target variable of the test set contains {0} samples with dimension {1}".format(
S_tst.shape[0], S_tst.shape[1])) | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
1.1. Scatter plots
We can get a first rough idea about the regression task representing the scatter plot of each of the one-dimensional variables against the target data. | pylab.subplots_adjust(hspace=0.2)
for idx in range(X_tr.shape[1]):
ax1 = plt.subplot(3,3,idx+1)
ax1.plot(X_tr[:,idx],S_tr,'.')
ax1.get_xaxis().set_ticks([])
ax1.get_yaxis().set_ticks([])
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
2. Baseline estimation. Using the average of the training set labels
A first very simple method to build the regression model is to use the average of all the target values in the training set as the output of the model, discarding the value of the observation input vector.
This approach can be considered a baseline, given that any other method making effective use of the observation variables statistically related to $s$ should improve on its performance.
The prediction is thus given by | # Mean of all target values in the training set
s_hat = np.mean(S_tr)
print(s_hat) | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
for any input ${\bf x}$.
Exercise 1
Compute the mean square error over training and test sets, for the baseline estimation method. | # We start by defining a function that calculates the average square error
def square_error(s, s_est):
# Squeeze is used to make sure that s and s_est have the appropriate dimensions.
y = np.mean(np.power((s - s_est), 2))
# y = np.mean(np.power((np.squeeze(s) - np.squeeze(s_est)), 2))
return y
# Mean square error of the baseline prediction over the training data
# MSE_tr = <FILL IN>
# Mean square error of the baseline prediction over the test data
# MSE_tst = <FILL IN>
print('Average square error in the training set (baseline method): {0}'.format(MSE_tr))
print('Average square error in the test set (baseline method): {0}'.format(MSE_tst)) | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
Note that in the previous piece of code, function 'square_error' can be used when the second argument is a number instead of a vector with the same length as the first argument. The value will be subtracted from each of the components of the vector provided as the first argument. | if sys.version_info.major == 2:
Test.assertTrue(np.isclose(MSE_tr, square_error(S_tr, s_hat)),'Incorrect value for MSE_tr')
Test.assertTrue(np.isclose(MSE_tst, square_error(S_tst, s_hat)),'Incorrect value for MSE_tst') | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
3. Unidimensional regression with the $k$-nn method
The principles of the $k$-nn method are the following:
For each point where a prediction is to be made, find the $k$ closest neighbors to that point (in the training set)
Obtain the estimation averaging the labels corresponding to the selected neighbors
The number of neighbors is a hyperparameter that plays an important role in the performance of the method. You can test its influence by changing $k$ in the following piece of code. In particular, you can start with $k=1$ and observe the effect of increasing the value of $k$. | # We implement unidimensional regression using the k-nn method
# In other words, the estimations are to be made using only one variable at a time
from scipy import spatial
var = 0 # pick a variable (e.g., any value from 0 to 8 for the STOCK dataset)
k = 1 # Number of neighbors
n_points = 1000 # Number of points in the 'x' axis (for representational purposes)
# For representational purposes, we will compute the output of the regression model
# in a series of equally spaced-points along the x-axis
grid_min = np.min([np.min(X_tr[:,var]), np.min(X_tst[:,var])])
grid_max = np.max([np.max(X_tr[:,var]), np.max(X_tst[:,var])])
X_grid = np.linspace(grid_min,grid_max,num=n_points)
def knn_regression(X1, S1, X2, k):
""" Compute the k-NN regression estimate for the observations contained in
the rows of X2, for the training set given by the rows in X1 and the
components of S1. k is the number of neighbours of the k-NN algorithm
"""
if X1.ndim == 1:
X1 = np.asmatrix(X1).T
if X2.ndim == 1:
X2 = np.asmatrix(X2).T
distances = spatial.distance.cdist(X1,X2,'euclidean')
neighbors = np.argsort(distances, axis=0, kind='quicksort', order=None)
closest = neighbors[range(k),:]
est_values = np.zeros([X2.shape[0],1])
for idx in range(X2.shape[0]):
est_values[idx] = np.mean(S1[closest[:,idx]])
return est_values
est_tst = knn_regression(X_tr[:,var], S_tr, X_tst[:,var], k)
est_grid = knn_regression(X_tr[:,var], S_tr, X_grid, k)
plt.plot(X_tr[:,var], S_tr,'b.',label='Training points')
plt.plot(X_tst[:,var], S_tst,'rx',label='Test points')
plt.plot(X_grid, est_grid,'g-',label='Regression model')
plt.axis('tight')
plt.legend(loc='best')
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
3.1. Evolution of the error with the number of neighbors ($k$)
We see that a small $k$ results in a regression curve that exhibits many large oscillations. The curve captures any noise that may be present in the training data, and <i>overfits</i> the training set. On the other hand, picking too large a $k$ (e.g., 200) makes the regression curve too smooth, averaging out the values of the labels in the training set over large intervals of the observation variable.
The next code illustrates this effect by plotting the average training and test square errors as a function of $k$. | var = 0
k_max = 60
k_max = np.minimum(k_max, X_tr.shape[0]) # k_max cannot be larger than the number of samples
#Be careful with the use of range, e.g., range(3) = [0,1,2] and range(1,3) = [1,2]
MSEk_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:,var],k))
for k in range(1, k_max+1)]
MSEk_tst = [square_error(S_tst,knn_regression(X_tr[:,var], S_tr, X_tst[:,var],k))
for k in range(1, k_max+1)]
kgrid = np.arange(1, k_max+1)
plt.plot(kgrid, MSEk_tr,'bo', label='Training square error')
plt.plot(kgrid, MSEk_tst,'ro', label='Test square error')
plt.xlabel('$k$')
plt.ylabel('Square Error')
plt.axis('tight')
plt.legend(loc='best')
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
As we can see, the error initially decreases, achieving a minimum (in the test set) for some finite value of $k$ ($k\approx 10$ for the STOCK dataset). Increasing the value of $k$ beyond that point results in poorer performance.
Exercise 2
Analyze the training MSE for $k=1$. Why is it smaller than for any other $k$? Under which conditions will it be exactly zero?
Exercise 3
Modify the code above to visualize the square error from $k=1$ up to $k$ equal to the number of training instances. Can you relate the square error of the $k$-NN method with that of the baseline method for certain value of $k$?
3.1. Influence of the input variable
Having a look at the scatter plots, we can observe that some observation variables seem to have a clearer relationship with the target value. Thus, we can expect that not all variables are equally useful for the regression task. In the following plot, we carry out a study of the performance that can be achieved with each variable.
Note that, in practice, the test labels are not available for the selection of hyperparameter $k$, so we should be careful about the conclusions of this experiment. A more realistic approach will be studied later when we introduce the concept of model validation. | k_max = 20
var_performance = []
k_values = []
for var in range(X_tr.shape[1]):
MSE_tr = [square_error(S_tr, knn_regression(X_tr[:,var], S_tr, X_tr[:, var], k))
for k in range(1, k_max+1)]
MSE_tst = [square_error(S_tst, knn_regression(X_tr[:,var], S_tr, X_tst[:, var], k))
for k in range(1, k_max+1)]
MSE_tr = np.asarray(MSE_tr)
MSE_tst = np.asarray(MSE_tst)
# We select the variable associated to the value of k for which the training error is minimum
pos = np.argmin(MSE_tr)
k_values.append(pos + 1)
var_performance.append(MSE_tst[pos])
plt.stem(range(X_tr.shape[1]), var_performance, use_line_collection=True)
plt.title('Results of unidimensional regression ($k$NN)')
plt.xlabel('Variable')
plt.ylabel('Test MSE')
plt.figure(2)
plt.stem(range(X_tr.shape[1]), k_values, use_line_collection=True)
plt.xlabel('Variable')
plt.ylabel('$k$')
plt.title('Selection of the hyperparameter')
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
4. Multidimensional regression with the $k$-nn method
In the previous subsection, we have studied the performance of the $k$-nn method when using only one variable. Doing so was convenient, because it allowed us to plot the regression curves in a 2-D plot, and to get some insight about the consequences of modifying the number of neighbors.
For completeness, we evaluate now the performance of the $k$-nn method in this dataset when using all variables together. In fact, when designing a regression model, we should proceed in this manner, using all available information to make as accurate an estimation as possible. In this way, we can also account for correlations that might be present among the different observation variables, and that may carry very relevant information for the regression task.
For instance, in the STOCK dataset, it may be that the combination of the stock values of two airplane companies is more informative about the price of the target company, while the value for a single company is not enough.
<small> Also, in the CONCRETE dataset, it may be that for the particular problem at hand the combination of a large proportion of water and a small proportion of coarse grain is a clear indication of certain compressive strength of the material, while the proportion of water or coarse grain alone are not enough to get to that result.</small> | k_max = 20
MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)]
MSE_tst = [square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k)) for k in range(1, k_max+1)]
plt.plot(np.arange(k_max)+1, MSE_tr,'bo',label='Training square error')
plt.plot(np.arange(k_max)+1, MSE_tst,'ro',label='Test square error')
plt.xlabel('k')
plt.ylabel('Square error')
plt.legend(loc='best')
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
In this case, we can check that the average test square error is much lower than the error that was achieved when using only one variable, and also far better than the baseline method. It is also interesting to note that in this particular case the best performance is achieved for a small value of $k$, with the error increasing for larger values of the hyperparameter.
Nevertheless, as we discussed previously, these results should be taken carefully. How would we select the value of $k$, if test labels are (obvioulsy) not available for model validation?
5. Hyperparameter selection via cross-validation
5.1. Generalization
An inconvenience of the $k$-nn method is that the selection of $k$ influences the final error of the algorithm. In the previous experiments, we kept the value of $k$ that minimized the square error on the training set. However, we also noticed that the location of the minimum is not necessarily the same from the perspective of the test data. Ideally, we would like the designed regression model to work as well as possible on future unlabeled patterns that are not available during the training phase. This property is known as <b>generalization</b>.
Fitting the training data is only pursued in the hope that we are also indirectly obtaining a model that generalizes well. In order to achieve this goal, there are some strategies that try to guarantee a correct generalization of the model. One of such approaches is known as <b>cross-validation</b>
5.2. Cross-validation
Since using the test labels during the training phase is not allowed (they should be kept aside to simulate the future application of the regression model on unseen patterns), we need to figure out some way to improve our estimation of the hyperparameter that requires only training data. Cross-validation allows us to do so by following these steps:
Split the training data into several (generally non-overlapping) subsets. If we use $M$ subsets, the method is referred to as $M$-fold cross-validation. If we consider each pattern a different subset, the method is usually referred to as leave-one-out (LOO) cross-validation.
Carry out the training of the system $M$ times. For each run, use a different partition as a <i>validation</i> set, and use the remaining partitions as the training set. Evaluate the performance for different choices of the hyperparameter (i.e., for different values of $k$ for the $k$-NN method).
Average the validation error over all partitions, and pick the hyperparameter that provided the minimum validation error.
Rerun the algorithm using all the training data, keeping the value of the parameter that came out of the cross-validation process.
<img src="https://chrisjmccormick.files.wordpress.com/2013/07/10_fold_cv.png"> | ### This fragment of code runs k-nn with M-fold cross validation
# Parameters:
M = 5 # Number of folds for M-cv
k_max = 40 # Maximum value of the k-nn hyperparameter to explore
# First we compute the train error curve, that will be useful for comparative visualization.
MSE_tr = [square_error(S_tr, knn_regression(X_tr, S_tr, X_tr, k)) for k in range(1, k_max+1)]
## M-CV
# Obtain the indices for the different folds
n_tr = X_tr.shape[0]
permutation = np.random.permutation(n_tr)
# Split the indices in M subsets with (almost) the same size.
set_indices = {i: [] for i in range(M)}
i = 0
for pos in range(n_tr):
set_indices[i].append(permutation[pos])
i = (i+1) % M
# Obtain the validation errors
MSE_val = np.zeros((1,k_max))
for i in range(M):
val_indices = set_indices[i]
# Take out the val_indices from the set of indices.
tr_indices = list(set(permutation) - set(val_indices))
MSE_val_iter = [square_error(S_tr[val_indices],
knn_regression(X_tr[tr_indices, :], S_tr[tr_indices],
X_tr[val_indices, :], k))
for k in range(1, k_max+1)]
MSE_val = MSE_val + np.asarray(MSE_val_iter).T
MSE_val = MSE_val/M
# Select the best k based on the validation error
k_best = np.argmin(MSE_val) + 1
# Compute the final test MSE for the selected k
MSE_tst = square_error(S_tst, knn_regression(X_tr, S_tr, X_tst, k_best))
plt.plot(np.arange(k_max)+1, MSE_tr, 'bo', label='Training square error')
plt.plot(np.arange(k_max)+1, MSE_val.T, 'go', label='Validation square error')
plt.plot([k_best, k_best], [0, MSE_tst],'r-')
plt.plot(k_best, MSE_tst,'ro',label='Test error')
plt.legend(loc='best')
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
Exercise 4
Modify the previous code to use only one of the variables in the input dataset
- Following a cross-validation approach, select the best value of $k$ for the $k$-nn based in variable 0 only.
- Compute the test error for the selected value of $k$.
6. Scikit-learn implementation
In practice, most well-known machine learning methods are implemented and available for python. Probably, the most complete module for machine learning tools is <a href=http://scikit-learn.org/stable/>Scikit-learn</a>. The following piece of code uses the method
KNeighborsRegressor
available in Scikit-learn. The example has been taken from <a href=http://scikit-learn.org/stable/auto_examples/neighbors/plot_regression.html>here</a>. As you can check, this routine allows us to build the estimation for a particular point using a weighted average of the targets of the neighbors:
To obtain the estimation at a point ${\bf x}$:
Find $k$ closest points to ${\bf x}$ in the training set
Average the corresponding targets, weighting each value according to the distance of each point to ${\bf x}$, so that closer points have a larger influence in the estimation. | # Author: Alexandre Gramfort <[email protected]>
# Fabian Pedregosa <[email protected]>
#
# License: BSD 3 clause (C) INRIA
###############################################################################
# Generate sample data
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))
###############################################################################
# Fit regression model
n_neighbors = 5
for i, weights in enumerate(['uniform', 'distance']):
knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
y_ = knn.fit(X, y).predict(T)
plt.subplot(2, 1, i + 1)
plt.scatter(X, y, c='k', label='data')
plt.plot(T, y_, c='g', label='prediction')
plt.axis('tight')
plt.legend()
plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors,
weights))
plt.show() | R2.kNN_Regression/regression_knn_student.ipynb | ML4DS/ML4all | mit |
GINI Water Vapor Imagery
Use MetPy's support for GINI files to read in a water vapor satellite image and plot the
data using CartoPy. | import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
from metpy.cbook import get_test_data
from metpy.io import GiniFile
from metpy.plots import add_metpy_logo, add_timestamp, colortables
# Open the GINI file from the test data
f = GiniFile(get_test_data('WEST-CONUS_4km_WV_20151208_2200.gini'))
print(f) | v1.1/_downloads/5f6dfc4b913dc349eba9f04f6161b5f1/GINI_Water_Vapor.ipynb | metpy/MetPy | bsd-3-clause |
Get a Dataset view of the data (essentially a NetCDF-like interface to the
underlying data). Pull out the data and (x, y) coordinates. We use metpy.parse_cf to
handle parsing some netCDF Climate and Forecasting (CF) metadata to simplify working with
projections. | ds = xr.open_dataset(f)
x = ds.variables['x'][:]
y = ds.variables['y'][:]
dat = ds.metpy.parse_cf('WV') | v1.1/_downloads/5f6dfc4b913dc349eba9f04f6161b5f1/GINI_Water_Vapor.ipynb | metpy/MetPy | bsd-3-clause |
Plot the image. We use MetPy's xarray/cartopy integration to automatically handle parsing
the projection information. | fig = plt.figure(figsize=(10, 12))
add_metpy_logo(fig, 125, 145)
ax = fig.add_subplot(1, 1, 1, projection=dat.metpy.cartopy_crs)
wv_norm, wv_cmap = colortables.get_with_range('WVCIMSS', 100, 260)
wv_cmap.set_under('k')
im = ax.imshow(dat[:], cmap=wv_cmap, norm=wv_norm,
extent=(x.min(), x.max(), y.min(), y.max()), origin='upper')
ax.add_feature(cfeature.COASTLINE.with_scale('50m'))
add_timestamp(ax, f.prod_desc.datetime, y=0.02, high_contrast=True)
plt.show() | v1.1/_downloads/5f6dfc4b913dc349eba9f04f6161b5f1/GINI_Water_Vapor.ipynb | metpy/MetPy | bsd-3-clause |
Create Fake Index Data | names = ['foo','bar','rf']
dates = pd.date_range(start='2015-01-01',end='2018-12-31', freq=pd.tseries.offsets.BDay())
n = len(dates)
rdf = pd.DataFrame(
np.zeros((n, len(names))),
index = dates,
columns = names
)
np.random.seed(1)
rdf['foo'] = np.random.normal(loc = 0.1/252,scale=0.2/np.sqrt(252),size=n)
rdf['bar'] = np.random.normal(loc = 0.04/252,scale=0.05/np.sqrt(252),size=n)
rdf['rf'] = 0.
pdf = 100*np.cumprod(1+rdf)
pdf.plot() | examples/PTE.ipynb | pmorissette/bt | mit |
Build and run Target Strategy
I will first run a strategy that rebalances every day.
Then I will use those weights as targets to rebalance to whenever the PTE (predicted tracking error) is too high. | selectTheseAlgo = bt.algos.SelectThese(['foo','bar'])
# algo to set the weights to 1/vol contributions from each asset
# with data over the last 3 months excluding yesterday
weighInvVolAlgo = bt.algos.WeighInvVol(
lookback=pd.DateOffset(months=3),
lag=pd.DateOffset(days=1)
)
# algo to rebalance the current weights to weights set in target.temp
rebalAlgo = bt.algos.Rebalance()
# a strategy that rebalances daily to 1/vol weights
strat = bt.Strategy(
'Target',
[
selectTheseAlgo,
weighInvVolAlgo,
rebalAlgo
]
)
# set integer_positions=False when positions are not required to be integers(round numbers)
backtest = bt.Backtest(
strat,
pdf,
integer_positions=False
)
res_target = bt.run(backtest)
res_target.get_security_weights().plot() | examples/PTE.ipynb | pmorissette/bt | mit |
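Before wiring up the PTE-triggered strategy below, it may help to state the predicted tracking error explicitly. In its usual definition (a sketch; bt's internal computation may differ in details):

$$\text{PTE} = \sqrt{(\mathbf{w} - \mathbf{w}_{\text{target}})^\top \Sigma\,(\mathbf{w} - \mathbf{w}_{\text{target}})}$$

where $\mathbf{w}$ is the current weight vector, $\mathbf{w}_{\text{target}}$ the target weights, and $\Sigma$ the annualized covariance matrix of asset returns. A rebalance fires when this quantity exceeds the threshold (1% below).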
Now use the PTE rebalance algo to trigger a rebalance whenever predicted tracking error is greater than 1%. | # algo to fire whenever predicted tracking error is greater than 1%
wdf = res_target.get_security_weights()
PTE_rebalance_Algo = bt.algos.PTE_Rebalance(
0.01,
wdf,
lookback=pd.DateOffset(months=3),
lag=pd.DateOffset(days=1),
covar_method='standard',
annualization_factor=252
)
selectTheseAlgo = bt.algos.SelectThese(['foo','bar'])
# algo to set the weights to the target weights saved from the
# daily 1/vol strategy above
weighTargetAlgo = bt.algos.WeighTarget(
wdf
)
rebalAlgo = bt.algos.Rebalance()
# a strategy that rebalances monthly to specified weights
strat = bt.Strategy(
'PTE',
[
PTE_rebalance_Algo,
selectTheseAlgo,
weighTargetAlgo,
rebalAlgo
]
)
# set integer_positions=False when positions are not required to be integers(round numbers)
backtest = bt.Backtest(
strat,
pdf,
integer_positions=False
)
res_PTE = bt.run(backtest)
fig, ax = plt.subplots(nrows=1,ncols=1)
res_target.get_security_weights().plot(ax=ax)
realized_weights_df = res_PTE.get_security_weights()
realized_weights_df['PTE foo'] = realized_weights_df['foo']
realized_weights_df['PTE bar'] = realized_weights_df['bar']
realized_weights_df = realized_weights_df.loc[:,['PTE foo', 'PTE bar']]
realized_weights_df.plot(ax=ax)
ax.set_title('Target Weights vs PTE Weights')
ax.plot()
trans_df = pd.DataFrame(
index=res_target.prices.index,
columns=['Target','PTE']
)
transactions = res_target.get_transactions()
transactions = (transactions['quantity'] * transactions['price']).reset_index()
bar_mask = transactions.loc[:,'Security'] == 'bar'
foo_mask = transactions.loc[:,'Security'] == 'foo'
trans_df.loc[trans_df.index[4:],'Target'] = np.abs(transactions[bar_mask].iloc[:,2].values) + np.abs(transactions[foo_mask].iloc[:,2].values)
transactions = res_PTE.get_transactions()
transactions = (transactions['quantity'] * transactions['price']).reset_index()
bar_mask = transactions.loc[:,'Security'] == 'bar'
foo_mask = transactions.loc[:,'Security'] == 'foo'
trans_df.loc[transactions[bar_mask].iloc[:,0],'PTE'] = np.abs(transactions[bar_mask].iloc[:,2].values)
trans_df.loc[transactions[foo_mask].iloc[:,0],'PTE'] += np.abs(transactions[foo_mask].iloc[:,2].values)
trans_df = trans_df.fillna(0)
fig, ax = plt.subplots(nrows=1,ncols=1)
trans_df.cumsum().plot(ax=ax)
ax.set_title('Cumulative sum of notional traded')
ax.plot() | examples/PTE.ipynb | pmorissette/bt | mit |
If we plot the total risk contribution of each asset class and divide by the total volatility, then we can see that both strategies contribute roughly similar amounts of volatility from both of the securities. | weights_target = res_target.get_security_weights()
rolling_cov_target = pdf.loc[:,weights_target.columns].pct_change().rolling(window=3*20).cov()*252
weights_PTE = res_PTE.get_security_weights().loc[:,weights_target.columns]
rolling_cov_PTE = pdf.loc[:,weights_target.columns].pct_change().rolling(window=3*20).cov()*252
trc_target = pd.DataFrame(
np.nan,
index = weights_target.index,
columns = weights_target.columns
)
trc_PTE = pd.DataFrame(
np.nan,
index = weights_PTE.index,
columns = [x + " PTE" for x in weights_PTE.columns]
)
for dt in pdf.index:
trc_target.loc[dt,:] = weights_target.loc[dt,:].values*(rolling_cov_target.loc[dt,:].values@weights_target.loc[dt,:].values)/np.sqrt(weights_target.loc[dt,:].values@rolling_cov_target.loc[dt,:].values@weights_target.loc[dt,:].values)
trc_PTE.loc[dt,:] = weights_PTE.loc[dt,:].values*(rolling_cov_PTE.loc[dt,:].values@weights_PTE.loc[dt,:].values)/np.sqrt(weights_PTE.loc[dt,:].values@rolling_cov_PTE.loc[dt,:].values@weights_PTE.loc[dt,:].values)
fig, ax = plt.subplots(nrows=1,ncols=1)
trc_target.plot(ax=ax)
trc_PTE.plot(ax=ax)
ax.set_title('Total Risk Contribution')
ax.plot() | examples/PTE.ipynb | pmorissette/bt | mit |
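In math form, the quantity computed in the loop above is the total risk contribution of asset $i$:

$$\text{TRC}_i = \frac{w_i\,(\Sigma \mathbf{w})_i}{\sqrt{\mathbf{w}^\top \Sigma\, \mathbf{w}}}$$

so that the contributions sum to the portfolio volatility, $\sum_i \text{TRC}_i = \sqrt{\mathbf{w}^\top \Sigma\, \mathbf{w}}$.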
Looking at the Target strategy's and the PTE strategy's total risk, the two are very similar.
trc_target.sum(axis=1).plot(ax=ax,label='Target')
trc_PTE.sum(axis=1).plot(ax=ax,label='PTE')
ax.legend()
ax.set_title('Total Risk')
ax.plot()
transactions = res_PTE.get_transactions()
transactions = (transactions['quantity'] * transactions['price']).reset_index()
bar_mask = transactions.loc[:,'Security'] == 'bar'
dates_of_PTE_transactions = transactions[bar_mask].iloc[:,0]
dates_of_PTE_transactions
fig, ax = plt.subplots(nrows=1,ncols=1)
ax.set_title('Absolute difference in risk contributions')
ax.plot(
trc_target.index,
np.sum(np.abs(trc_target.values - trc_PTE.values),axis=1),
label='PTE'
)
for i,dt in enumerate(dates_of_PTE_transactions):
if i == 0:
ax.axvline(x=dt,color='red',label='PTE Transaction')
else:
ax.axvline(x=dt,color='red')
ax.legend()
| examples/PTE.ipynb | pmorissette/bt | mit |
data | categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train',
shuffle=True,
categories=categories)
print(f'number of training samples: {len(newsgroups_train.data)}')
example_sample_data = "\n".join(newsgroups_train.data[0].split("\n")[10:15])
example_sample_category = categories[newsgroups_train.target[0]]
print(f'\nexample training sample of category {example_sample_category}:'
f'\n\n{example_sample_data}') | Keras_CNN_newsgroups_text_classification.ipynb | wdbm/Psychedelic_Machine_Learning_in_the_Cenozoic_Era | gpl-3.0 |
data preparation | labels = newsgroups_train.target
texts = newsgroups_train.data
max_sequence_length = 1000
max_words = 20000
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
#print(sequences[0][:10])
print(f'{len(word_index)} unique tokens found')
labels = to_categorical(np.array(labels))
data = pad_sequences(sequences, maxlen=max_sequence_length)
print(f'data tensor shape: {data.shape}\n'
f'targets tensor shape: {labels.shape}')
indices = np.arange(data.shape[0]); np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
cross_validation_split = 0.3
nb_validation_samples = int(cross_validation_split * data.shape[0])
x_train = data[:-nb_validation_samples]
y_train = labels[:-nb_validation_samples]
x_val = data[-nb_validation_samples:]
y_val = labels[-nb_validation_samples:]
print(f'training samples shape: {x_train.shape}\n'
      f'validation samples shape: {x_val.shape}\n\n'
      f'training samples per-class counts: {y_train.sum(axis=0)}\n'
      f'validation samples per-class counts: {y_val.sum(axis=0)}')
embeddings_index = {}
with open('glove.6B.100d.txt') as f:
for line in f:
values = line.split(' ')
word = values[0]
embeddings_index[word] = np.asarray(values[1:], dtype='float32')
print(f'word vectors: {len(embeddings_index)}')
word_vector_dimensionality = 100
embedding_matrix = np.random.random(
(len(word_index) + 1, word_vector_dimensionality))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# Words missing from the embedding index keep their random initialisation.
embedding_matrix[i] = embedding_vector
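# Optional, illustrative check: fraction of the corpus vocabulary covered by GloVe.
coverage = sum(1 for w in word_index if w in embeddings_index) / len(word_index)
print(f'GloVe coverage of vocabulary: {coverage:.1%}')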
print(f'embedding matrix shape: {embedding_matrix.shape}') | Keras_CNN_newsgroups_text_classification.ipynb | wdbm/Psychedelic_Machine_Learning_in_the_Cenozoic_Era | gpl-3.0 |
model: convolutional neural network | embedding_layer = Embedding(len(word_index) + 1,
word_vector_dimensionality,
weights=[embedding_matrix],
input_length=max_sequence_length,
trainable=False)
inputs = Input(shape=(max_sequence_length,), dtype='int32') # inputs
x = embedding_layer(inputs) # embedded sequences
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(35)(x) # global max pooling
x = Flatten()(x)
x = Dense(300, activation='relu')(x)
x = Dropout(rate=0.5)(x)
preds = Dense(2, activation='softmax', name='preds')(x)
model = Model(inputs=inputs, outputs=preds)
model.compile(loss='categorical_crossentropy',
optimizer='nadam',
metrics=['acc'])
summary_and_diagram(model)
%%time
history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=60, batch_size=32, verbose=False)
model_training_plot(history)
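# Hedged aside, not part of the original notebook: the fixed 60 epochs could
# be replaced with early stopping on the validation metric, e.g. (assuming
# keras.callbacks is importable in this environment):
# from keras.callbacks import EarlyStopping
# early_stop = EarlyStopping(monitor='val_acc', patience=5)
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
#                     epochs=60, batch_size=32, callbacks=[early_stop])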
print(f'max. validation accuracy observed: {max(model.history.history["val_acc"])}')
print(f'max. validation accuracy history index: {model.history.history["val_acc"].index(max(model.history.history["val_acc"]))}') | Keras_CNN_newsgroups_text_classification.ipynb | wdbm/Psychedelic_Machine_Learning_in_the_Cenozoic_Era | gpl-3.0 |
model: convolutional neural network with multiple towers of varying kernel sizes | embedding_layer = Embedding(len(word_index) + 1,
word_vector_dimensionality,
weights=[embedding_matrix],
input_length=max_sequence_length,
trainable=False)
inputs = Input(shape=(max_sequence_length,), dtype='int32')
x = embedding_layer(inputs)
convolutional_layer_towers = []
for kernel_size in [3, 4, 5]:
_x = Conv1D(filters=128, kernel_size=kernel_size, activation='relu')(x)
_x = Dropout(rate=0.1)(_x)
_x = MaxPooling1D(5)(_x)
convolutional_layer_towers.append(_x)
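# Each tower applies filters with a different receptive field (3-, 4- and
# 5-token n-grams) to the same embedded sequence; concatenating along the
# time axis below lets the deeper layers mix these multi-scale features.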
x = Concatenate(axis=1)(convolutional_layer_towers)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(5)(x)
x = Conv1D(128, 5, activation='relu')(x)
x = MaxPooling1D(30)(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
x = Dropout(rate=0.5)(x)
preds = Dense(2, activation='softmax', name='preds')(x)
model = Model(inputs=inputs, outputs=preds)
model.compile(loss='categorical_crossentropy',
optimizer='nadam',
metrics=['acc'])
summary_and_diagram(model)
%%time
history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=100, batch_size=32, verbose=False)
model_training_plot(history)
print(f'max. validation accuracy observed: {max(model.history.history["val_acc"])}')
print(f'max. validation accuracy history index: {model.history.history["val_acc"].index(max(model.history.history["val_acc"]))}') | Keras_CNN_newsgroups_text_classification.ipynb | wdbm/Psychedelic_Machine_Learning_in_the_Cenozoic_Era | gpl-3.0 |
Here the class statement did not create any object; it just defined the blueprint used to create "Person" objects. To create an object we need to instantiate the "Person" class. | P1 = Person()
print(type(P1)) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Now we created a "Person" object and assigned it to "P1". We can create any number of objects but please note there will be only one "Person" class. | # Doc string for class
class Person:
'''Simple Person Class'''
pass
print(Person.__doc__) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Attributes & Methods
Classes contain attributes (also called fields, members, etc.) and methods (a.k.a. functions). Attributes define the characteristics of the object and methods perform actions on the object. For example, the class definition below has firstname and lastname attributes and fullname is a method. | class Person:
'''Simple Person Class
Attributes:
firstname: String representing first name of the person
lastname: String representing last name of the person
'''
def __init__(self,firstname,lastname):
'''Initialiser method for Person'''
self.firstname = firstname
self.lastname = lastname
def fullname(self):
'''Returns the full name of the person'''
return self.firstname + ' ' + self.lastname | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Inside the class body, we define two functions – these are our object’s methods. The first is called __init__, which is a special method. When we call the class object, a new instance of the class is created, and the __init__ method on this new object is immediately executed with all the parameters that we passed to the class object. The purpose of this method is thus to set up a new object using data that we have provided.
The second method is a custom method which derives the fullname of the person using the firstname and the lastname.
__init__ is sometimes called the object’s constructor, because it is used similarly to the way that constructors are used in other languages, but that is not technically correct – it’s better to call it the initialiser. There is a different method called __new__ which is more analogous to a constructor, but it is hardly ever used.
You may have noticed that both of these method definitions have self as the first parameter, and we use this variable inside the method bodies – but we don’t appear to pass this parameter in. This is because whenever we call a method on an object, the object itself is automatically passed in as the first parameter (as self). This gives us a way to access the object’s properties from inside the object’s methods.
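For example (an illustrative snippet reusing the Person class defined above), the following two calls are equivalent:
p = Person('Jane', 'Smith')
print(p.fullname())        # p is passed in as self implicitly
print(Person.fullname(p))  # the same call, with self passed explicitly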
Instance Attributes
All the attributes that are defined on the Person instance are called instance attributes. They are added to the instance when the __init__ method is executed.
Class Attributes
We can, however, also define attributes which are set on the class. These attributes will be shared by all instances of that class. In many ways they behave just like instance attributes, but there are some caveats that you should be aware of.
We define class attributes in the body of a class, at the same indentation level as method definitions (one level up from the insides of methods). | class Person:
'''Simple Person Class
Attributes:
firstname: String representing first name of the person
lastname: String representing last name of the person
'''
TITLES = ['Mr','Mrs','Master']
def __init__(self,title,firstname,lastname):
'''Initialiser method for Person'''
if title not in self.TITLES:
    raise ValueError("%s is not a valid title." % title)
self.title = title  # keep the validated title on the instance
self.firstname = firstname
self.lastname = lastname
def fullname(self):
'''Returns the full name of the person'''
return self.firstname + ' ' + self.lastname
John = Person('Mister','John','Doe') # raises ValueError: 'Mister' is not a valid title
class Employee:
'''Common base class for all employees'''
empCount = 0
def __init__(self, name, salary):
self.name = name
self.salary = salary
Employee.empCount += 1
def displayCount(self):
print("Total Employee %d",Employee.empCount)
def displayEmployee(self):
print("Name : ", self.name, ", Salary: ", self.salary)
"This would create first object of Employee class"
emp1 = Employee("Zara", 2000)
"This would create second object of Employee class"
emp2 = Employee("Manni", 5000)
emp1.displayEmployee()
emp2.displayEmployee()
print("Total Employee ", Employee.empCount) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Please note that when we set an attribute on an instance which has the same name as a class attribute, we are overriding the class attribute with an instance attribute, which will take precedence over it.
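For example, a minimal, hypothetical class that illustrates this shadowing:
class Counter:
    count = 0                # class attribute shared by all instances
c = Counter()
c.count = 5                  # creates an instance attribute that shadows it
print(c.count)               # 5 -- the instance attribute takes precedence
print(Counter.count)         # 0 -- the class attribute is unchanged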
Class Decorators
Class Methods
Just like we can define class attributes, which are shared between all instances of a class, we can define class methods. We do this by using the @classmethod decorator to decorate an ordinary method.
A class method still has its calling object as the first parameter, but by convention we rename this parameter from self to cls. If we call the class method from an instance, this parameter will contain the instance object, but if we call it from the class it will contain the class object. By calling the parameter cls we remind ourselves that it is not guaranteed to have any instance attributes.
Class methods exist primarily for two reasons:
Sometimes there are tasks associated with a class which we can perform using constants and other class attributes, without needing to create any class instances. If we had to use instance methods for these tasks, we would need to create an instance for no reason, which would be wasteful.
Sometimes it is useful to write a class method which creates an instance of the class after processing the input so that it is in the right format to be passed to the class constructor. This allows the constructor to be straightforward and not have to implement any complicated parsing or clean-up code. | class ClassGrades:
def __init__(self, grades):
self.grades = grades
@classmethod
def from_csv(cls, grade_csv_str):
grades = grade_csv_str.split(', ')
return cls(grades)
class_grades = ClassGrades.from_csv('92, -15, 99, 101, 77, 65, 100')
print(class_grades.grades) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Static Methods
A static method doesn’t have the calling object passed into it as the first parameter. This means that it doesn’t have access to the rest of the class or instance at all. We can call them from an instance or a class object, but they are most commonly called from class objects, like class methods.
If we are using a class to group together related methods which don’t need to access each other or any other data on the class, we may want to use this technique.
The advantage of using static methods is that we eliminate unnecessary cls or self parameters from our method definitions.
The disadvantage is that if we do occasionally want to refer to another class method or attribute inside a static method we have to write the class name out in full, which can be much more verbose than using the cls variable which is available to us inside a class method. | class ClassGrades:
def __init__(self, grades):
self.grades = grades
@classmethod
def from_csv(cls, grade_csv_str):
grades = grade_csv_str.split(', ')
cls.validate(grades)
return cls(grades)
@staticmethod
def validate(grades):
for g in grades:
if int(g) < 0 or int(g) > 100:
raise Exception()
try:
# Try out some valid grades
class_grades_valid = ClassGrades.from_csv('90, 80, 85, 94, 70')
print('Got grades:', class_grades_valid.grades)
# Should fail with invalid grades
class_grades_invalid = ClassGrades.from_csv('92, -15, 99, 101, 77, 65, 100')
print(class_grades_invalid.grades)
except:
print('Invalid!') | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
The difference between a static method and a class method is:
Static method knows nothing about the class and just deals with the parameters.
Class method works with the class since its parameter is always the class itself.
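A minimal, hypothetical class showing the contrast:
class Demo:
    @classmethod
    def which_class(cls):
        return cls.__name__  # receives the class object as cls
    @staticmethod
    def add(a, b):
        return a + b         # receives only its own parameters
print(Demo.which_class())    # 'Demo'
print(Demo.add(2, 3))        # 5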
Property
Sometimes we use a method to generate a property of an object dynamically, calculating it from the object’s other properties. Sometimes you can simply use a method to access a single attribute and return it. You can also use a different method to update the value of the attribute instead of accessing it directly. Methods like this are called getters and setters, because they “get” and “set” the values of attributes, respectively.
The @property decorator lets us make a method behave like an attribute. | class Person:
'''Simple Person Class
Attributes:
firstname: String representing first name of the person
lastname: String representing last name of the person
'''
def __init__(self,firstname,lastname):
'''Initialiser method for Person'''
self.firstname = firstname
self.lastname = lastname
@property
def fullname(self):
'''Returns the full name of the person'''
return self.firstname + ' ' + self.lastname
p1 = Person('John','Doe')
print(p1.fullname) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
There are also decorators which we can use to define a setter and a deleter for our attribute (a deleter will delete the attribute from our object). The getter, setter and deleter methods must all have the same name. | class Person:
'''Simple Person Class
Attributes:
firstname: String representing first name of the person
lastname: String representing last name of the person
'''
def __init__(self,firstname,lastname):
'''Initialiser method for Person'''
self.firstname = firstname
self.lastname = lastname
@property
def fullname(self):
'''Returns the full name of the person'''
return self.firstname + ' ' + self.lastname
@fullname.setter
def fullname(self,value):
firstname,lastname = value.split(" ")
self.firstname = firstname
self.lastname = lastname
@fullname.deleter
def fullname(self):
del self.firstname
del self.lastname
p1 = Person('John','Doe')
print(p1.fullname)
p1.fullname = 'Jack Daniels'
print(p1.fullname) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Inspecting an Object | class Person:
def __init__(self, name, surname):
self.name = name
self.surname = surname
def fullname(self):
return "%s %s" % (self.name, self.surname)
jane = Person("Jane", "Smith")
print(dir(jane)) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Built In Class Attributes | class Employee:
'Common base class for all employees'
empCount = 0
def __init__(self, name, salary):
self.name = name
self.salary = salary
Employee.empCount += 1
def displayCount(self):
print("Total Employee", Employee.empCount)
def displayEmployee(self):
print("Name : ", self.name, ", Salary: ", self.salary)
print ("Employee.__doc__:", Employee.__doc__)
print ("Employee.__name__:", Employee.__name__)
print ("Employee.__module__:", Employee.__module__)
print ("Employee.__bases__:", Employee.__bases__)
print ("Employee.__dict__:", Employee.__dict__) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Overriding Magic Methods | import datetime
class Person:
def __init__(self, name, surname, birthdate, address, telephone, email):
self.name = name
self.surname = surname
self.birthdate = birthdate
self.address = address
self.telephone = telephone
self.email = email
def __str__(self):
return "%s %s, born %s\nAddress: %s\nTelephone: %s\nEmail:%s" % (self.name, self.surname, self.birthdate, self.address, self.telephone, self.email)
jane = Person(
"Jane",
"Doe",
datetime.date(1992, 3, 12), # year, month, day
"No. 12 Short Street, Greenville",
"555 456 0987",
"[email protected]"
)
print(jane) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Create a Class Using Keyword Arguments | class Student:
def __init__(self, **kwargs):
for k,v in kwargs.items():
setattr(self, k, v)
def __str__(self):
attrs = ["{}={}".format(k, v) for (k, v) in self.__dict__.items()]
return str(attrs)
#classname = self.__class__.__name__
#return "{}: {}".format((classname, " ".join(attrs)))
s1 = Student(firstname="John",lastname="Doe")
print(s1.firstname)
print(s1.lastname)
print(s1)
def print_values(**kwargs):
for key, value in kwargs.items():
print("The value of {} is {}".format(key, value))
print_values(my_name="Sammy", your_name="Casey") | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Class Inheritance
Inheritance is a way of arranging objects in a hierarchy from the most general to the most specific. An object which inherits from another object is considered to be a subtype of that object.
We also often say that a class is a subclass or child class of a class from which it inherits, or that the other class is its superclass or parent class. We can refer to the most generic class at the base of a hierarchy as a base class.
Inheritance is also a way of reusing existing code easily. If we already have a class which does almost what we want, we can create a subclass in which we partially override some of its behaviour, or perhaps add some new functionality. | # Simple Example of Inheritance
class Person:
pass
# The parent class is specified inside the parentheses
class Employee(Person):
pass
e1 = Employee()
print(dir(e1))
class Person:
def __init__(self,firstname,lastname):
self.firstname = firstname
self.lastname = lastname
def __str__(self):
return "[{},{}]".format(self.firstname,self.lastname)
class Employee(Person):
pass
john = Employee('John','Doe')
print(john)
class Person:
def __init__(self, firstname, lastname):
self.firstname = firstname
self.lastname = lastname
def __str__(self):
return "{},{}".format(self.firstname, self.lastname)
class Employee(Person):
def __init__(self, firstname, lastname, staffid):
super().__init__(firstname, lastname)
self.staffid = staffid
def __str__(self):
return super().__str__() + ",{}".format(self.staffid)
john = Employee('Jack','Doe','12345')
print(john)
| Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Abstract Classes and Interfaces
Abstract classes are not intended to be instantiated because all the method definitions are empty – all the insides of the methods must be implemented in a subclass.
They serve as a template for suitable objects by defining a list of methods that these objects must implement. | # Abstract Classes
class shape2D:
def area(self):
raise NotImplementedError()
class shape3D:
def volume(self):
raise NotImplementedError()
sh1 = shape2D()
sh1.area()
class shape2D:
def area(self):
raise NotImplementedError()
class shape3D:
def volume(self):
raise NotImplementedError()
class Square(shape2D):
def __init__(self,width):
self.width = width
def area(self):
return self.width ** 2
s1 = Square(2)
s1.area() | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Multiple Inheritance | class Person:
pass
class Company:
pass
class Employee(Person,Company):
pass
print(Employee.mro()) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Diamond Problem
Multiple inheritance isn’t too difficult to understand if a class inherits from multiple classes which have completely different properties, but things get complicated if two parent classes implement the same method or attribute.
If classes B and C inherit from A and class D inherits from B and C, and both B and C have a method do_something, which do_something will D inherit? This ambiguity is known as the diamond problem, and different languages resolve it in different ways. In our Tutor class we would encounter this problem with the __init__ method. | class X: pass
class Y: pass
class Z: pass
class A(X,Y): pass
class B(Y,Z): pass
class M(B,A,Z): pass
# Output:
# [<class '__main__.M'>, <class '__main__.B'>,
# <class '__main__.A'>, <class '__main__.X'>,
# <class '__main__.Y'>, <class '__main__.Z'>,
# <class 'object'>]
print(M.mro()) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Method Resolution Order (MRO)
In the multiple inheritance scenario, any specified attribute is searched first in the current class. If not found, the search continues into parent classes in depth-first, left-to-right fashion without searching the same class twice.
So, in the above example the search order for class M is [M, B, A, X, Y, Z, object]. This order is also called the linearization of the M class, and the set of rules used to find this order is called the Method Resolution Order (MRO).
MRO must preserve local precedence ordering and also provide monotonicity. It ensures that a class always appears before its parents and, in the case of multiple parents, that the order matches the tuple of base classes.
The MRO of a class can be viewed via the __mro__ attribute or the mro() method. The former returns a tuple while the latter returns a list. | class Person:
def __init__(self):
print('Person')
class Company:
def __init__(self):
print('Company')
class Employee(Person,Company):
def __init__(self):
super(Employee,self).__init__()
print('Employee')
e1=Employee() | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
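For instance, with the Employee class just defined, both forms show the same linearization:
print(Employee.__mro__)   # returns a tuple
print(Employee.mro())     # returns a list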
Mixins
If we use multiple inheritance, it is often a good idea for us to design our classes in a way which avoids the kind of ambiguity described above. One way of doing this is to split up optional functionality into mix-ins. A Mix-in is a class which is not intended to stand on its own – it exists to add extra functionality to another class through multiple inheritance. | class Person:
def __init__(self, name, surname, number):
self.name = name
self.surname = surname
self.number = number
class LearnerMixin:
def __init__(self):
self.classes = []
def enrol(self, course):
self.classes.append(course)
class TeacherMixin:
def __init__(self):
self.courses_taught = []
def assign_teaching(self, course):
self.courses_taught.append(course)
class Tutor(Person, LearnerMixin, TeacherMixin):
def __init__(self, *args, **kwargs):
super(Tutor, self).__init__(*args, **kwargs)
jane = Tutor("Jane", "Smith", "SMTJNX045")
#jane.enrol(a_postgrad_course)
#jane.assign_teaching(an_undergrad_course) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Now Tutor inherits from one “main” class, Person, and two mix-ins which are not related to Person. Each mix-in is responsible for providing a specific piece of optional functionality. Our mix-ins still have __init__ methods, because each one has to initialise a list of courses (we saw in the previous chapter that we can’t do this with a class attribute). Many mix-ins just provide additional methods and don’t initialise anything.
Composition
Composition is a way of aggregating objects together by making some objects attributes of other objects. Relationships like this can be one-to-one, one-to-many or many-to-many, and they can be unidirectional or bidirectional, depending on the specifics of the the roles which the objects fulfil.
The term composition implies that the two objects are quite strongly linked – one object can be thought of as belonging exclusively to the other object. If the owner object ceases to exist, the owned object will probably cease to exist as well. If the link between two objects is weaker, and neither object has exclusive ownership of the other, it can also be called aggregation. | class Student:
def __init__(self, name, student_number):
self.name = name
self.student_number = student_number
self.classes = []
def enrol(self, course_running):
self.classes.append(course_running)
course_running.add_student(self)
class Department:
def __init__(self, name, department_code):
self.name = name
self.department_code = department_code
self.courses = {}
def add_course(self, description, course_code, credits):
self.courses[course_code] = Course(description, course_code, credits, self)
return self.courses[course_code]
class Course:
def __init__(self, description, course_code, credits, department):
self.description = description
self.course_code = course_code
self.credits = credits
self.department = department
#self.department.add_course(self)
self.runnings = []
def add_running(self, year):
self.runnings.append(CourseRunning(self, year))
return self.runnings[-1]
class CourseRunning:
def __init__(self, course, year):
self.course = course
self.year = year
self.students = []
def add_student(self, student):
self.students.append(student)
maths_dept = Department("Mathematics and Applied Mathematics", "MAM")
mam1000w = maths_dept.add_course("Mathematics 1000", "MAM1000W", 1)
mam1000w_2013 = mam1000w.add_running(2013)
bob = Student("Bob", "Smith")
bob.enrol(mam1000w_2013) | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
A student can be enrolled in several courses (CourseRunning objects), and a course (CourseRunning) can have multiple students enrolled in it in a particular year, so this is a many-to-many relationship. A student knows about all his or her courses, and a course has a record of all enrolled students, so this is a bidirectional relationship. These objects aren’t very strongly coupled – a student can exist independently of a course, and a course can exist independently of a student.
A department offers multiple courses (Course objects), but in our implementation a course can only have a single department – this is a one-to-many relationship. It is also bidirectional. Furthermore, these objects are more strongly coupled – you can say that a department owns a course. The course cannot exist without the department.
A similar relationship exists between a course and its “runnings”: it is also bidirectional, one-to-many and strongly coupled – it wouldn’t make sense for “MAM1000W run in 2013” to exist on its own in the absence of “MAM1000W”.
Inheritance Methods | class Person:
pass
class Employee(Person):
pass
class Tutor(Employee):
pass
emp = Employee()
print(isinstance(emp, Tutor)) # False
print(isinstance(emp, Person)) # True
print(isinstance(emp, Employee)) # True
print(issubclass(Tutor, Person)) # True | Classes+and+Objects.ipynb | vravishankar/Jupyter-Books | mit |
Import convention
You can import explicitly from statsmodels.formula.api | from statsmodels.formula.api import ols | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Alternatively, you can just use the formula namespace of the main statsmodels.api. | sm.formula.ols | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Or you can use the following convention | import statsmodels.formula.api as smf | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance | sm.OLS.from_formula | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
dir(sm.formula) will print a list of available models.
Formula-compatible models have the following generic call signature: (formula, data, subset=None, *args, **kwargs)
OLS regression using formulas
To begin, we fit the linear model described on the Getting Started page. Download the data, subset columns, and list-wise delete to remove missing observations: | dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
df.head() | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Fit the model: | mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df)
res = mod.fit()
print(res.summary()) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Categorical variables
Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.
If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator: | res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit()
print(res.params) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Patsy's more advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables
Operators
We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix.
Removing variables
The "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by: | res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit()
print(res.params) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Multiplicative interactions
":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together: | res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit()
res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit()
print(res1.params, '\n')
print(res2.params) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Many other things are possible with operators. Please consult the patsy docs to learn more.
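One such feature (shown here purely as an illustration) is patsy's I() function, which protects arithmetic inside a formula from being interpreted as formula operators:
res = ols(formula='Lottery ~ I(Literacy + Wealth)', data=df).fit()
print(res.params)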
Functions
You can apply vectorized functions to the variables in your model: | res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit()
print(res.params) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Define a custom function: | def log_plus_1(x):
return np.log(x) + 1.
res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit()
print(res.params) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Any function that is in the calling namespace is available to the formula.
Using formulas with models that do not (yet) support them
Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices
can then be fed to the fitting function as endog and exog arguments.
To generate numpy arrays: | import patsy
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='matrix')
print(y[:5])
print(X[:5]) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
To generate pandas data frames: | f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='dataframe')
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary()) | examples/notebooks/formulas.ipynb | jseabold/statsmodels | bsd-3-clause |
Maps with Natural Earth backgrounds
I got the background image from Natural Earth; it is the 10 m, Cross Blended Hypso with Relief, Water, Drains, and Ocean Bottom. I changed the colour curves slightly in Gimp, to make the image darker.
Adjustment for Natural Earth: | from IPython.display import Image
Image(filename='./data/TravelMap/HYP_HR_SR_OB_DR/Adjustment.jpg') | MX_BarrancasDelCobre.ipynb | prisae/blog-notebooks | cc0-1.0 |
Profile from viewpoint down to Urique
Not used in the blog; added later. | import numpy as np
import matplotlib.pyplot as plt
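# Note: tm is a helper module defined earlier in the author's notebooks
# (assumption: cm2in converts centimetres to inches for the figsize).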
fig_p,ax = plt.subplots(figsize=(tm.cm2in([10.8, 5])))
# Switch off axis and ticks
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('none')
# Get data
pdat = np.loadtxt('./data/Mexico/ProfileUrique.txt', skiprows=1)
# Ticks, hlines, axis
plt.yticks(np.arange(1,6)*500, ('500 m', '1000 m', '1500 m', '2000 m', '2500 m'))
plt.hlines([1000, 2000], -.5, 17, colors='.8')
plt.hlines([500, 1500, 2500], -.5, 17, colors='.8', lw=.5)
plt.axis([-.5, 17, 200, 2500])
# Sum the per-segment distance differences (column 4, assumed to be in
# metres) to get the cumulative distance along the profile in kilometres
distance = np.cumsum(pdat[:,4])/1000
# Plot data
plt.plot(distance, pdat[:, 2])
plt.show() | MX_BarrancasDelCobre.ipynb | prisae/blog-notebooks | cc0-1.0 |
Find MEG reference channel artifacts
Use ICA decompositions of MEG reference channels to remove intermittent noise.
Many MEG systems have an array of reference channels which are used to detect
external magnetic noise. However, standard techniques that use reference
channels to remove noise from standard channels often fail when noise is
intermittent. The technique described here (using ICA on the reference
channels) often succeeds where the standard techniques do not.
There are two algorithms to choose from: separate and together (default). In
the "separate" algorithm, two ICA decompositions are made: one on the reference
channels, and one on reference + standard channels. The reference + standard
channel components which correlate with the reference channel components are
removed.
In the "together" algorithm, a single ICA decomposition is made on reference +
standard channels, and those components whose weights are particularly heavy
on the reference channels are removed.
This technique is fully described and validated in :footcite:`HannaEtAl2020`. | # Authors: Jeff Hanna <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import refmeg_noise
from mne.preprocessing import ICA
import numpy as np
print(__doc__)
data_path = refmeg_noise.data_path() | 0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Read raw data, cropping to 5 minutes to save memory | raw_fname = data_path + '/sample_reference_MEG_noise-raw.fif'
raw = io.read_raw_fif(raw_fname).crop(300, 600).load_data() | 0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Note that even though standard noise removal has already
been applied to these data, much of the noise in the reference channels
(bottom of the plot) can still be seen in the standard channels. | select_picks = np.concatenate(
(mne.pick_types(raw.info, meg=True)[-32:],
mne.pick_types(raw.info, meg=False, ref_meg=True)))
plot_kwargs = dict(
duration=100, order=select_picks, n_channels=len(select_picks),
scalings={"mag": 8e-13, "ref_meg": 2e-11})
raw.plot(**plot_kwargs) | 0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
The PSD of these data show the noise as clear peaks. | raw.plot_psd(fmax=30) | 0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Run the "together" algorithm. | raw_tog = raw.copy()
ica_kwargs = dict(
method='picard',
fit_params=dict(tol=1e-4), # use a high tol here for speed
)
all_picks = mne.pick_types(raw_tog.info, meg=True, ref_meg=True)
ica_tog = ICA(n_components=60, allow_ref_meg=True, **ica_kwargs)
ica_tog.fit(raw_tog, picks=all_picks)
# low threshold (2.0) here because of cropped data, entire recording can use
# a higher threshold (2.5)
bad_comps, scores = ica_tog.find_bads_ref(raw_tog, threshold=2.0)
# Plot scores with bad components marked.
ica_tog.plot_scores(scores, bad_comps)
# Examine the properties of removed components. It's clear from the time
# courses and topographies that these components represent external,
# intermittent noise.
ica_tog.plot_properties(raw_tog, picks=bad_comps)
# Remove the components.
raw_tog = ica_tog.apply(raw_tog, exclude=bad_comps) | 0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
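# Hedged note: the introduction also describes a "separate" algorithm; it
# should be selectable via the method argument of find_bads_ref (e.g.
# method='separate'), but consult this MNE version's API docs to confirm.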