We need to calculate the arc length of the function $f(x)$ over $[0, 4\pi]$: $$L = \int_0^{4 \pi} \sqrt{1 + |f'(x)|^2}\, dx$$ In general this requires numerical quadrature. Non-linear population growth: the Lotka-Volterra predator-prey model $$\frac{d R}{dt} = R \cdot (a - b \cdot F)$$ $$\frac{d F}{dt} = F \cdot (c \cdot R + d)$$ Where are the steady states? How do we solve the initial value problem? How do we understand the non-linear dynamics? How do we evaluate whether this is a good model? Interpolation and Data Fitting Finding trends in real data that has no closed (analytical) form. Sunspot counts
import numpy
import matplotlib.pyplot as plt

data = numpy.loadtxt("./data/sunspot.dat")
data.shape
plt.plot(data[:, 0], data[:, 1])
plt.xlabel("Year")
plt.ylabel("Number")
plt.title("Number of Sunspots")
plt.show()
0_intro_numerical_methods.ipynb
btw2111/intro-numerical-methods
mit
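As a side note, here is a minimal sketch of the arc-length quadrature mentioned above (an illustration added here, not part of the original notebook; it assumes SciPy is available and uses f(x) = sin x as a stand-in for f):

import numpy as np
from scipy.integrate import quad

# For f(x) = sin(x), f'(x) = cos(x); any differentiable f works the same way.
def integrand(x):
    return np.sqrt(1.0 + np.cos(x)**2)

# Arc length of sin(x) over [0, 4*pi] via adaptive quadrature.
L, err = quad(integrand, 0.0, 4.0 * np.pi)
print(L, err)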
Part A For now, we still use the default, quick get_results function, but this time we specify merge_type as no merging (which has no effect here, as the calculations are independent; the default is to merge using UUIDs), the analyser as hybrid [3] (not blocking [4], the default), and, while we don't specify the analysis start MC iterations, we specify that the MSER find-starting-iteration function should be used to find them automatically (the default is 'blocking', the blocking find-starting-iteration function).
results = get_results(["data/0.01_ccsd.out.gz", "data/0.002_ccsd.out.gz"], merge_type='no', analyser='hybrid', start_its='mser')
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
The summary table shows the data analysed by the analyser. The hybrid analyser analyses the instantaneous projected energy (as prepared by the preparator object).
results.summary
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
The hybrid analyser's output can be viewed.
results.analyser.opt_block
print(results.analyser.start_its)  # Used starting iterations, found using MSER find starting iteration function.
print(results.analyser.end_its)  # Used end iterations, the last iteration by default.
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Part B Now, we don't use get_results to get the results object but define the extractor, preparator and analyser objects ourselves. Even though it has no effect here, as there is no calculation to merge, we state that we want to merge the 'legacy' way, i.e. don't use UUIDs for merging but simply determine whether iterations from one output file to the next (order matters here) are consecutive. If the shift is already varying across that continuation, don't merge if 'shift_damping' differs from one output file to the next ('md_shift' specifies that this restriction only applies when the shift is already varying; use 'md_always' for the restriction to always hold). Since no merge is possible, these options are ignored and only shown here for demonstration purposes.
extra = Extractor(merge={'type': 'legacy', 'md_shift': ['qmc:shift_damping'], 'shift_key': 'Shift'})
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Define the preparator object. It contains the hard-coded mapping from column-name meaning to column name, e.g. 'ref_key': 'N_0', for the case of HANDE CCMC/FCIQMC. If you use a different package, you'll need to create your own preparator class.
prep = PrepHandeCcmcFciqmc()
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Define the analyser. Use the class method inst_hande_ccmc_fciqmc to pre-set what should be analysed (inst. projected energy), the name of the iteration key ('iterations'), etc. Use the 'blocking' start-iteration finder and specify that a graph should be shown by the start-iteration finder.
ana = HybridAna.inst_hande_ccmc_fciqmc(start_its='blocking', find_start_kw_args={'show_graph': True})
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Now we can execute those three objects. analyse_data is a handy helper that calls their .exe() methods. For each calculation, a graph is shown by the find-starting-iteration method.
results2 = analyse_data(["data/0.01_ccsd.out.gz", "data/0.002_ccsd.out.gz"], extra, prep, ana)
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
We have used a different starting-iteration finder, so these will be different.
results2.analyser.start_its
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
But results are comparable.
results2.summary_pretty
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
But what if we want to analyse the shift instead of the instantaneous projected energy with hybrid analysis? BEWARE: this is untested and only used for illustration here! We don't use the class method for analyser instantiation anymore, and we keep the default settings (find start iterations using 'mser', etc.). Note that when doing blocking [4], not hybrid [3], the argument order is a bit different: the columns to be analysed are 'cols' for blocking [4] and 'hybrid_col' for hybrid analysis [3]. You might need to define both for a given analyser if you are using the starting-iteration function of the other type ('blocking' with start_its='mser' or 'hybrid' with start_its='blocking'). Consult the docstring.
ana2 = HybridAna('iterations', 'Shift', 'replica id')
results3 = analyse_data(["data/0.01_ccsd.out.gz", "data/0.002_ccsd.out.gz"], extra, prep, ana2)
results3.summary_pretty
tools/pyhande/tutorials/3_custom_get_results_ccmc.ipynb
hande-qmc/hande
lgpl-2.1
Function 2: deciding if an agent is happy Write a function that takes the game board generated by the function you wrote above and determines whether an agent of a specified type at position i in the game board is happy, for a game board of any size and a neighborhood of size N (i.e., from position i-N to i+N), and returns that information. Make sure to check that position i is actually inside the game board (i.e., make sure the request makes sense), and ensure that it behaves correctly for agents near the edges of the game board. Show that your function is behaving correctly by having it check every position in the game board you generated previously, and decide whether the agent in each spot is happy or not. Verify by eye that it's behaving correctly. One possible sketch appears after the code cell below. (Hint: You're going to use this later, when you're trying to decide where to put an agent. Should you write the function assuming that the agent is already in the board, or assuming you're testing whether to put it there?)
# Put your code here, using additional cells if necessary.
past-semesters/fall_2016/day-by-day/day15-Schelling-1-dimensional-segregation-day2/Day_15_Pre_Class_Notebook.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
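One possible sketch of such a happiness check (an illustrative solution, not the official one; all names here are hypothetical, and it assumes a board of 0/1 agent types where an agent is happy when at least half of its neighbors share its type):

import numpy as np

def is_happy(board, i, agent_type, N=1, threshold=0.5):
    """Return True if an agent of agent_type placed at position i would be happy.

    Hypothetical happiness rule: at least `threshold` of the neighbors
    within distance N share the agent's type.
    """
    if i < 0 or i >= len(board):
        raise ValueError("position i is outside the game board")
    # Clip the neighborhood at the board edges.
    lo, hi = max(0, i - N), min(len(board), i + N + 1)
    neighbors = [board[j] for j in range(lo, hi) if j != i]
    if not neighbors:
        return True  # no neighbors, nothing to be unhappy about
    same = sum(1 for n in neighbors if n == agent_type)
    return same / len(neighbors) >= threshold

# Check every position in an example board and verify by eye.
board = np.random.randint(0, 2, 20)
print(board)
print([is_happy(board, i, board[i]) for i in range(len(board))])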
Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
from IPython.display import HTML
HTML(
"""
<iframe
  src="https://goo.gl/forms/M7YCyE1OLzyOK7gH3?embedded=true"
  width="80%"
  height="1200px"
  frameborder="0"
  marginheight="0"
  marginwidth="0">
Loading...
</iframe>
"""
)
past-semesters/fall_2016/day-by-day/day15-Schelling-1-dimensional-segregation-day2/Day_15_Pre_Class_Notebook.ipynb
ComputationalModeling/spring-2017-danielak
agpl-3.0
Unit Tests Overview and Principles Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test. There are two parts to writing tests. 1. invoking the code under test so that it is exercised in a particular way; 1. evaluating the results of executing code under test to determine if it behaved as expected. The collection of tests performed is referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage. For dynamic languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed. Test cases can be of several types. Below are listed some common classifications of test cases. - Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation. - One-shot test. In this case, you call the code under test with arguments for which you know the expected result. - Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs. - Pattern test. Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned. Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course. A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do. Examples of Test Cases This section presents examples of test cases. The code under test is the calculation of entropy. Entropy of a set of probabilities $$ H = -\sum_i p_i \log(p_i) $$ where $\sum_i p_i = 1$.
import numpy as np

# Code Under Test
def entropy(ps):
    items = ps * np.log(ps)
    return np.abs(-np.sum(items))

# Smoke test
entropy([0.2, 0.8])
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1. What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $-\log(1)$, which is 0. This means that we have a test case where we know the result!
# One-shot test. Need to know the correct answer.
entries = [
    [0, [1]],
]

for entry in entries:
    ans = entry[0]
    prob = entry[1]
    if not np.isclose(entropy(prob), ans):
        print("Test failed!")
print("Test completed!")
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Question: What is an example of another one-shot test? (Hint: You need to know the expected result.) One edge test of interest is to provide an input that is not a valid distribution, here a single negative "probability" (which also doesn't sum to 1).
# Edge test. This is something that should cause an exception. entropy([-0.5])
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
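One possible answer to the one-shot question above (an illustrative addition, not from the original notebook): the fair coin, whose entropy is log 2.

# Another one-shot test: a fair coin has entropy log(2).
assert np.isclose(entropy([0.5, 0.5]), np.log(2))
print("Fair-coin one-shot test passed!")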
Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$. $$ H = -\sum_{i=1}^{n} p_i \log(p_i) = -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n}) = n (-\frac{1}{n} \log(\frac{1}{n}) ) = -\log(\frac{1}{n}) $$ For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
# Pattern test
def test_equal_probabilities(n):
    prob = 1.0 / n
    ps = np.repeat(prob, n)
    if np.isclose(entropy(ps), -np.log(prob)):
        print("Worked!")
    else:
        import pdb; pdb.set_trace()
        print("Bad result.")

# Run a test
test_equal_probabilities(100000)
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better. Unittest Infrastructure There are several reasons to use a test infrastructure: - If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code. - The infrastructure provides a uniform way to report test results, and to handle test failures. - A test infrastructure can tell you about coverage so you know what tests to add. We'll be using the unittest framework, which is part of the Python standard library. Using this infrastructure requires the following: 1. import the unittest module 1. define a class that inherits from unittest.TestCase 1. write methods that run the code to be tested and check the outcomes. The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test". Second, the "test methods" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.
import unittest

# Define a class in which the tests will run
class UnitTests(unittest.TestCase):

    # Each method in the class to execute a test
    def test_success(self):
        self.assertEqual(1, 1)

    def test_success1(self):
        self.assertTrue(1 == 1)

    def test_failure(self):
        self.assertLess(2, 1)  # deliberately fails: 2 is not less than 1

suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)

# Function that handles test loading
#def test_setup(argument ?):
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Code for homework or your work should use test files. In this lesson, we'll show how to write test code in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach. As expected, the two test_success methods pass, while test_failure fails. Exercise Rewrite the above one-shot test for entropy using the unittest infrastructure. A possible solution sketch follows the code cell below.
# Implementing a pattern test. Use functions in the test.
import unittest

# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):

    def test_equal_probability(self):
        def test(count):
            """
            Invokes the entropy function for a number of values equal
            to count that have the same probability.
            :param int count:
            """
            raise RuntimeError("Not implemented.")
        #
        test(2)
        test(20)
        test(200)

#test_setup(TestEntropy)

import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
    """Write the full set of tests."""
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
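A possible solution to the one-shot exercise above (a sketch, not the official answer; it reuses the entropy function defined earlier):

import unittest
import numpy as np

class TestEntropyOneShot(unittest.TestCase):

    def test_two_headed_coin(self):
        # All probability at one point: the entropy should be 0.
        self.assertTrue(np.isclose(entropy([1]), 0.0))

suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropyOneShot)
_ = unittest.TextTestRunner().run(suite)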
Testing For Exceptions Edge test cases often involve handling exceptions. One approach is to code this directly.
import unittest

# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):

    def test_invalid_probability(self):
        try:
            entropy([0.1, 0.5])
            self.assertTrue(False)
        except ValueError:
            self.assertTrue(True)

#test_setup(TestEntropy)
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
unittest provides help with testing exceptions.
import unittest

# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):

    def test_invalid_probability(self):
        with self.assertRaises(ValueError):
            entropy([0.1, 0.5])

suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Test Files Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py. The structure of the test file will be very similar to the cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example. Discussion Question: What tests would you write for a plotting function? Test Driven Development Start by writing the tests. Then write the code. We illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.
import unittest

# Define a class in which the tests will run
class TestGeomean(unittest.TestCase):

    def test_oneshot(self):
        self.assertEqual(geomean([1, 1]), 1)

    def test_oneshot2(self):
        self.assertEqual(geomean([3, 3, 3]), 3)

#test_setup(TestGeomean)

#def geomean(argument?):
#    return ?
Fall2018/09_UnitTests/unit-tests.ipynb
UWSEDS/LectureNotes
bsd-2-clause
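A geomean implementation that would satisfy the spirit of the tests above might look like this (a sketch, assuming a non-empty list of positive numbers; exact equality may require assertAlmostEqual because of floating point):

import numpy as np

def geomean(xs):
    # Geometric mean via exp(mean(log(x))): numerically stabler than
    # multiplying all the values together.
    xs = np.asarray(xs, dtype=float)
    return np.exp(np.mean(np.log(xs)))

print(geomean([1, 1]))     # 1.0
print(geomean([3, 3, 3]))  # ~3.0, up to floating-point roundoff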
Running code with other kernels In a code cell it is also possible to execute code in other languages. Below are some magic commands for running commands from other languages: %%bash %%HTML %%python2 %%python3 %%ruby %%perl
%%bash
ls -lah
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
Loading data
import pandas as pd

df = pd.read_csv('data/kaggle-titanic.csv')
df.head()
df.info()
df.describe()
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
Plots
from matplotlib import pyplot as plt

df.Survived.value_counts().plot(kind='bar')
plt.show()

import pixiedust
display(df)
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
Widgets
import numpy as np

π = np.pi

def show_wave(A, f, φ):
    ω = 2*π*f
    t = np.linspace(0, 1, 10000)
    y = A*np.sin(ω*t + φ)  # renamed from f to avoid shadowing the frequency parameter
    plt.grid(True)
    plt.plot(t, y)
    plt.show()

show_wave(A=5, f=5, φ=2)

import ipywidgets as widgets
from IPython.display import display

params = dict(value=1, min=1, max=100, step=1, continuous_update=False)
wA = widgets.IntSlider(**params)
wf = widgets.IntSlider(**params)
wφ = widgets.IntSlider(value=0, min=0, max=10, step=1, continuous_update=False)

widgets.interact(show_wave, A=wA, f=wf, φ=wφ);
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
For more information about ipywidgets, see the user manual [6]. Help To see the documentation of a given function or class you can run the command: ?str.replace() This command opens a pane on the page with the requested documentation. Another way to view the documentation is the help function, e.g.: help(str.replace)
?str.replace()
help(str.replace)
jupyter/Introducción.ipynb
xmnlab/notebooks
mit
OLS estimation Artificial data:
# Imports assumed from the notebook's setup cell (added so the examples below run standalone).
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std

nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Our model needs an intercept so we add a column of 1s:
X = sm.add_constant(X)
y = np.dot(X, beta) + e
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
OLS non-linear curve but linear in parameters We simulate artificial data with a non-linear relationship between x and y:
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]

y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
res = sm.OLS(y, X).fit()
print(res.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Extract other quantities of interest:
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the wls_prediction_std command.
prstd, iv_l, iv_u = wls_prediction_std(res)

fig, ax = plt.subplots(figsize=(8,6))

ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
OLS with dummy variables We generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
#dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = sm.categorical(groups, drop=True)
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)

beta = [1., 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Inspect the data:
print(X[:5,:])
print(y[:5])
print(groups)
print(dummy[:5,:])
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
res2 = sm.OLS(y, X).fit()
print(res2.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Draw a plot to compare the true relationship to OLS predictions:
prstd, iv_l, iv_u = wls_prediction_std(res2)

fig, ax = plt.subplots(figsize=(8,6))

ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Joint hypothesis test F test We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constants across the 3 groups:
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
You can also use formula-like syntax to test hypotheses
print(res2.f_test("x2 = x3 = 0"))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Small group effects If we generate artificial data with smaller group effects, the F test can no longer reject the null hypothesis:
beta = [1., 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)

res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Multicollinearity The Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
from statsmodels.datasets.longley import load_pandas

y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Fit and summary:
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Condition number One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
norm_x = X.values
for i, name in enumerate(X):
    if name == "const":
        continue
    norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T, norm_x)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
Then, we take the square root of the ratio of the biggest to the smallest eigenvalues.
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
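As a cross-check (an addition for illustration, not part of the original notebook), NumPy can compute the 2-norm condition number of the normalized design matrix directly; it agrees with the eigenvalue-ratio calculation above:

# np.linalg.cond returns the ratio of the largest to smallest singular value
# of norm_x, which equals the square root of the eigenvalue ratio of norm_x.T @ norm_x.
print(np.linalg.cond(norm_x))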
Dropping an observation Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
# .ix is deprecated in modern pandas; .iloc selects rows by position.
ols_results2 = sm.OLS(y.iloc[:14], X.iloc[:14]).fit()
print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
infl = ols_results.get_influence()
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
In general we may consider DFBETAS with absolute value greater than $2/\sqrt{N}$ to be influential observations
2./len(X)**.5
print(infl.summary_frame().filter(regex="dfb"))
examples/notebooks/ols.ipynb
yl565/statsmodels
bsd-3-clause
The data The Census Income Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/census/data/. Training file is adult.data.csv Evaluation file is adult.test.csv (not used in this notebook) Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS. Disclaimer This dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset. Part 1: Create your python model file First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a XGBoost model. However, there are two key differences: 1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data. 1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions. The code in this file loads the data into a pandas DataFrame and pre-processes the data with scikit-learn. This data is then loaded into a DMatrix and used to train a model. Lastly, the model is saved to a file that can be uploaded to AI Platform's prediction service. REPLACE the BUCKET_ID = '...' assignment near the top of the script below with your GCS BUCKET_ID. Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.
%%writefile ./census_training/train.py
# [START setup]
import datetime
import os
import subprocess

from sklearn.preprocessing import LabelEncoder
import pandas as pd
from google.cloud import storage
import xgboost as xgb

# TODO: REPLACE 'BUCKET_CREATED_ABOVE' with your GCS BUCKET_ID
BUCKET_ID = 'torryyang-xgb-models'
# [END setup]

# ---------------------------------------
# 1. Add code to download the data from GCS (in this case, using the publicly hosted data).
# AI Platform will then be able to use the data when training your model.
# ---------------------------------------
# [START download-data]
census_data_filename = 'adult.data.csv'

# Public bucket holding the census data
bucket = storage.Client().bucket('cloud-samples-data')

# Path to the data inside the public bucket
data_dir = 'ml-engine/census/data/'

# Download the data
blob = bucket.blob(''.join([data_dir, census_data_filename]))
blob.download_to_filename(census_data_filename)
# [END download-data]

# ---------------------------------------
# This is where your model code would go. Below is an example model using the census dataset.
# ---------------------------------------
# [START define-and-load-data]
# these are the column labels from the census data files
COLUMNS = (
    'age', 'workclass', 'fnlwgt', 'education', 'education-num',
    'marital-status', 'occupation', 'relationship', 'race', 'sex',
    'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
    'income-level'
)

# categorical columns contain data that need to be turned into numerical values before being used by XGBoost
CATEGORICAL_COLUMNS = (
    'workclass', 'education', 'marital-status', 'occupation',
    'relationship', 'race', 'sex', 'native-country'
)

# Load the training census dataset
with open(census_data_filename, 'r') as train_data:
    raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)

# remove column we are trying to predict ('income-level') from features list
train_features = raw_training_data.drop('income-level', axis=1)
# create training labels list
train_labels = (raw_training_data['income-level'] == ' >50K')
# [END define-and-load-data]

# [START categorical-feature-conversion]
# Since the census data set has categorical features, we need to convert
# them to numerical values.
# convert data in categorical columns to numerical values
encoders = {col: LabelEncoder() for col in CATEGORICAL_COLUMNS}
for col in CATEGORICAL_COLUMNS:
    train_features[col] = encoders[col].fit_transform(train_features[col])
# [END categorical-feature-conversion]

# [START load-into-dmatrix-and-train]
# load data into DMatrix object
dtrain = xgb.DMatrix(train_features, train_labels)

# train model
bst = xgb.train({}, dtrain, 20)
# [END load-into-dmatrix-and-train]

# ---------------------------------------
# 2. Export and save the model to GCS
# ---------------------------------------
# [START export-to-gcs]
# Export the model to a file
model = 'model.bst'
bst.save_model(model)

# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_ID)
blob = bucket.blob('{}/{}'.format(
    datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
    model))
blob.upload_from_filename(model)
# [END export-to-gcs]
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Part 2: Create Trainer Package Before you can run your trainer application with AI Platform, your code and any dependencies must be placed in a Google Cloud Storage location that your Google Cloud Platform project can access. You can find more info here
%%writefile ./census_training/__init__.py
# Note that __init__.py can be an empty file.
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Part 3: Submit Training Job Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags: job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: census_training_$(date +"%Y%m%d_%H%M%S") job-dir - The path to a Google Cloud Storage location to use for job output. package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated. module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name. region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'. runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information. python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7. scale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use. Note: Check to make sure gcloud is set to the current PROJECT_ID
! gcloud config set project $PROJECT_ID
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Submit the training job.
! gcloud ml-engine jobs submit training census_training_$(date +"%Y%m%d_%H%M%S") \ --job-dir $JOB_DIR \ --package-path $TRAINER_PACKAGE_PATH \ --module-name $MAIN_TRAINER_MODULE \ --region $REGION \ --runtime-version=$RUNTIME_VERSION \ --python-version=$PYTHON_VERSION \ --scale-tier BASIC
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
[Optional] StackDriver Logging You can view the logs for your training job: 1. Go to https://console.cloud.google.com/ 1. Select "Logging" in left-hand pane 1. Select "Cloud ML Job" resource from the drop-down 1. In filter by prefix, use the value of $JOB_NAME to view the logs [Optional] Verify Model File in GCS View the contents of the destination model folder to verify that model file has indeed been uploaded to GCS. Note: The model can take a few minutes to train and show up in GCS.
! gsutil ls gs://$BUCKET_ID/census_*
notebooks/xgboost/TrainingWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Selecting evaluation dataset
#@title Paths to evaluation datasets
base_path = '/content/'

kolmogorov_re_1000 = {
    f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_64x64.nc')
    for i in [64, 128, 256, 512, 1024, 2048]
}
decaying = {
    f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_64x64.nc')
    for i in [64, 128, 256, 512, 1024, 2048]
}
kolmogorov_re_4000 = {
    f'baseline_{i}x{i}': os.path.join(base_path, f'eval_{i}x{i}_128x128.nc')
    for i in [128, 256, 512, 1024, 2048, 4096]
}

all_datasets = {
    'kolmogorov_re_1000': kolmogorov_re_1000,
    'decaying': decaying,
    'kolmogorov_re_4000': kolmogorov_re_4000,
}

reference_names = {
    'kolmogorov_re_1000': 'baseline_2048x2048',
    'decaying': 'baseline_2048x2048',
    'kolmogorov_re_4000': 'baseline_4096x4096',
}

! ls /content/

#@title Loading evaluation dataset {run: "auto"}
dataset_paths = all_datasets[dataset_name]
datasets = {k: xarray_open(v) for k, v in dataset_paths.items()}
reference_ds = datasets[reference_names[dataset_name]]
grid = cfd_data.xarray_utils.grid_from_attrs(reference_ds.attrs)

#@title Selecting initial conditions and baseline trajectories.
sample_id = 0
time_id = 0
length = 200  # length of the trajectory.
inner_steps = 10  # since we deal with subsampled datasets

initial_conditions = tuple(
    reference_ds[velocity_name].isel(
        sample=slice(sample_id, sample_id + 1),
        time=slice(time_id, time_id + 1)
    ).values
    for velocity_name in cfd_data.xarray_utils.XR_VELOCITY_NAMES[:grid.ndim]
)

target_ds = reference_ds.isel(
    sample=slice(sample_id, sample_id + 1),
    time=slice(time_id, time_id + length))

datasets = {
    k: v.isel(sample=slice(sample_id, sample_id + 1),
              time=slice(time_id, time_id + length))
    for k, v in datasets.items()
}
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Selecting model checkpoint to load
class CheckpointState:
    """Object to package up the state we load and restore."""

    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            setattr(self, name, value)

checkpoint_paths = {
    'LI': "/content/LI_ckpt.pkl",
    'LC': "/content/LC_ckpt.pkl",
    'EPD': "/content/EPD_ckpt.pkl",
}

#@title selecting model to evaluate {run: "auto"}
model_name = "LI"  #@param ['LI', 'LC', 'EPD',] {type: "string"}

#@title Loading the checkpoint
ckpt_path = checkpoint_paths[model_name]
with open(ckpt_path, 'rb') as f:
    ckpt = pickle.load(f)

params = ckpt.eval_params
shape_structure(params)
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Model inference
#@title Setting up model configuration from the checkpoint;
gin.clear_config()
gin.parse_config(ckpt.model_config_str)
gin.parse_config(strip_imports(reference_ds.attrs['physics_config_str']))
dt = ckpt.model_time_step
physics_specs = physics_specifications.get_physics_specs()
model_cls = model_builder.get_model_cls(grid, dt, physics_specs)

def compute_trajectory_fwd(x):
    solver = model_cls()
    x = solver.encode(x)
    final, trajectory = solver.trajectory(
        x, length, inner_steps, start_with_input=True,
        post_process_fn=solver.decode)
    return trajectory

model = hk.without_apply_rng(hk.transform(compute_trajectory_fwd))
trajectory_fn = functools.partial(model.apply, params)
trajectory_fn = jax.vmap(trajectory_fn)  # predict a batch of trajectories;

#@title Running inference;
prediction = trajectory_fn(initial_conditions)
prediction_ds = cfd_data.xarray_utils.velocity_trajectory_to_xarray(
    prediction, grid, samples=True)

# roundoff error in coordinates sometimes leads to wrong alignment results;
prediction_ds.coords['x'] = target_ds.coords['x']
prediction_ds.coords['y'] = target_ds.coords['y']
prediction_ds.coords['time'] = target_ds.coords['time']

datasets[model_name] = prediction_ds
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Computing summaries Note: Evaluations in this notebook are demonstrative and performed over a single sample and shorter times than those used in the paper;
summary = xarray.concat([
    cfd_data.evaluation.compute_summary_dataset(ds, target_ds)
    for ds in datasets.values()
], dim='model')
summary.coords['model'] = list(datasets.keys())

correlation = summary.vorticity_correlation.compute()
spectrum = summary.energy_spectrum_mean.mean('time').compute()

baseline_palette = seaborn.color_palette('YlGnBu', n_colors=7)[1:]
models_color = seaborn.xkcd_palette(['burnt orange', 'taupe', 'greenish blue'])
palette = baseline_palette + models_color[:(len(datasets.keys()) - 6)]

#@title Vorticity correlation as a function of time
plt.figure(figsize=(7, 6))
for color, model in zip(palette, summary['model'].data):
    style = '-' if 'baseline' in model else '--'
    correlation.sel(model=model).plot.line(
        color=color, linestyle=style, label=model, linewidth=3);
plt.axhline(y=0.95, xmin=0, xmax=20, color='gray')
plt.legend();
plt.title('')
plt.xlim(0, 15)

#@title Energy spectrum
plt.figure(figsize=(10, 6))
for color, model in zip(palette, summary['model'].data):
    style = '-' if 'baseline' in model else '--'
    (spectrum.k ** 5 * spectrum).sel(model=model).plot.line(
        color=color, linestyle=style, label=model, linewidth=3);
plt.legend();
plt.yscale('log')
plt.xscale('log')
plt.title('')
plt.xlim(3.5, None)
if dataset_name == 'kolmogorov_re_4000':
    plt.ylim(5e8, None)
elif dataset_name == 'kolmogorov_re_1000':
    plt.ylim(1e9, None)
elif dataset_name == 'decaying':
    plt.ylim(2e8, None)
else:
    raise ValueError('Unrecognized dataset')

vorticities = xarray.concat(
    [cfd_data.xarray_utils.vorticity_2d(ds) for ds in datasets.values()],
    dim='model'
).to_dataset()
vorticities.coords['model'] = list(datasets.keys())

#@title Visualizing model unrolls { form-width: "30%", run: "auto"}
time_range = {'min': 0, 'max': vorticities.sizes['time'], 'step': 1}
last_step_to_plot = 200  #@param {type: "slider", min: 1, max: 200 , step: 5}
num_to_show = 5  #@param {type: "slider", min: 1, max: 10, step: 1}

time_slice = slice(None, last_step_to_plot, last_step_to_plot // num_to_show)
(vorticities.isel({'time': time_slice, 'sample': 0})['vorticity']
 .plot.imshow(row='model', col='time', cmap=seaborn.cm.icefire, robust=True))
notebooks/ml_model_inference_demo.ipynb
google/jax-cfd
apache-2.0
Review In the a_sample_explore_clean notebook we came up with the following query to extract a repeatable and clean sample: <pre> #standardSQL SELECT (tolls_amount + fare_amount) AS fare_amount, -- label pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude FROM `nyc-tlc.yellow.trips` WHERE -- Clean Data trip_distance > 0 AND passenger_count > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 -- repeatable 1/5000th sample AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1 </pre> We will use the same query with one change. Instead of using pickup_datetime as is, we will extract dayofweek and hourofday from it. This gives us some categorical features in our dataset so we can illustrate how to deal with them when we get to feature engineering. The new query will be: <pre> SELECT (tolls_amount + fare_amount) AS fare_amount, -- label EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek, EXTRACT(HOUR from pickup_datetime) AS hourofday, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude -- rest same as before </pre> Split into train, evaluation, and test sets For ML modeling we need not just one, but three datasets. Train: This is what our model learns on. Evaluation (aka Validation): We shouldn't evaluate our model on the same data we trained on, because then we couldn't know whether it was memorizing the input data or whether it was generalizing. Therefore we evaluate on the evaluation dataset, aka validation dataset. Test: We use our evaluation dataset to tune our hyperparameters (we'll cover hyperparameter tuning in a future lesson). We need to know that our chosen set of hyperparameters will work well for data we haven't seen before, because in production that will be the case. For this reason, we create a third dataset that we never use during the model development process. We only evaluate on this once our model development is finished. Data scientists don't always create a test dataset (aka holdout dataset), but to be thorough you should. We can divide our existing 1/5000th sample three ways, 70%/15%/15% (or whatever split we like), with some modulo math demonstrated below. Because we are using a hash function, these results are deterministic: we'll get the same exact split every time the query is run (assuming the underlying data hasn't changed). Exercise 1 The create_query function below returns a query string that we will pass to BigQuery to collect our data. It takes as arguments the phase (TRAIN, VALID, or TEST) and the sample_size (relating to the fraction of the data we wish to sample). Complete the code below so that when the phase is set to VALID or TEST, a new 15% split of the data is created. A possible completion is sketched after the code cell.
def create_query(phase, sample_size):
    basequery = """
    SELECT
        (tolls_amount + fare_amount) AS fare_amount,
        EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,
        EXTRACT(HOUR from pickup_datetime) AS hourofday,
        pickup_longitude AS pickuplon,
        pickup_latitude AS pickuplat,
        dropoff_longitude AS dropofflon,
        dropoff_latitude AS dropofflat
    FROM
        `nyc-tlc.yellow.trips`
    WHERE
        trip_distance > 0
        AND fare_amount >= 2.5
        AND pickup_longitude > -78
        AND pickup_longitude < -70
        AND dropoff_longitude > -78
        AND dropoff_longitude < -70
        AND pickup_latitude > 37
        AND pickup_latitude < 45
        AND dropoff_latitude > 37
        AND dropoff_latitude < 45
        AND passenger_count > 0
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1
    """

    if phase == "TRAIN":
        subsample = """
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)
        AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)
        """
    elif phase == "VALID":
        subsample = """
        # TODO: Your code goes here
        """
    elif phase == "TEST":
        subsample = """
        # TODO: Your code goes here
        """

    query = basequery + subsample
    return query.replace("EVERY_N", sample_size)
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
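One possible completion of the VALID and TEST branches above (a sketch, not the official solution; it continues the hash-modulo pattern of the TRAIN branch to carve out two 15% slices):

# Hypothetical completion: these clauses would replace the TODOs in the
# VALID and TEST branches of create_query (the 70-85% and 85-100% slices).
valid_subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 85)
"""
test_subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 100)
"""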
Write to CSV Now let's execute a query for train/valid/test and write the results to disk in csv format. We use Pandas's .to_csv() method to do so. Exercise 2 The for loop below will generate the TRAIN/VALID/TEST sampled subsets of our dataset. Complete the code in the cell below to 1) create the BigQuery query_string using the create_query function you completed above, taking our original 1/5000th of the dataset and 2) load the BigQuery results of that query_string to a DataFrame labeled df. The remaining lines of code write that DataFrame to a csv file with the appropriate naming.
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)

for phase in ["TRAIN", "VALID", "TEST"]:
    # 1. Create query string
    query_string = # TODO: Your code goes here

    # 2. Load results into DataFrame
    df = # TODO: Your code goes here

    # 3. Write DataFrame to CSV
    df.to_csv("taxi-{}.csv".format(phase.lower()), index_label=False, index=False)
    print("Wrote {} lines to {}".format(len(df), "taxi-{}.csv".format(phase.lower())))
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
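A possible way to fill in the two TODOs above (a sketch; it assumes the create_query function from the previous exercise and the standard google-cloud-bigquery client API):

# 1. Create query string: sample 1/5000th of the data.
query_string = create_query(phase, "5000")

# 2. Load results into DataFrame.
df = bq.query(query_string).to_dataframe()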
Note that even with a 1/5000th sample we have a good amount of data for ML. 150K training examples and 30K validation. <h3> Verify that datasets exist </h3>
!ls -l *.csv
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Preview one of the files
!head taxi-train.csv
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Looks good! We now have our ML datasets and are ready to train ML models, validate them and test them. Establish rules-based benchmark Before we start building complex ML models, it is a good idea to come up with a simple rules-based model and use that as a benchmark. After all, there's no point using ML if it can't beat the traditional rules-based approach! Our rule is going to be to divide the mean fare_amount by the mean estimated distance to come up with a rate and use that to predict. Recall we can't use the actual trip_distance because we won't have that available at prediction time (it depends on the route taken); however, we do know the user's pickup and dropoff locations, so we can use the Euclidean distance between those coordinates. Exercise 3 In the code below, we create a rules-based benchmark and measure the Root Mean Squared Error against the label. The function euclidean_distance takes as input a Pandas DataFrame and should measure the straight-line distance between the pickup location and the dropoff location. Complete the code so that the function returns the Euclidean distance between the pickup and dropoff locations. The compute_rmse function takes the actual (label) value and the predicted value and computes the Root Mean Squared Error between the two. Complete the code below for the compute_rmse function. A possible solution sketch follows the code cell.
import pandas as pd

def euclidean_distance(df):
    return # TODO: Your code goes here

def compute_rmse(actual, predicted):
    return # TODO: Your code goes here

def print_rmse(df, rate, name):
    # Note: arguments to format() reordered so the name prints before the value.
    print("{} RMSE = {}".format(name, compute_rmse(df["fare_amount"], rate * euclidean_distance(df))))

df_train = pd.read_csv("taxi-train.csv")
df_valid = pd.read_csv("taxi-valid.csv")

rate = df_train["fare_amount"].mean() / euclidean_distance(df_train).mean()

print_rmse(df_train, rate, "Train")
print_rmse(df_valid, rate, "Valid")
courses/machine_learning/deepdive/01_bigquery/labs/c_extract_and_benchmark.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
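One possible completion of the two stubs above (a sketch, not the official solution; the Euclidean distance here is in raw degrees, which is fine for a relative benchmark, and the column names match the query aliases from Exercise 1):

import numpy as np

def euclidean_distance(df):
    # Straight-line distance between pickup and dropoff coordinates, in degrees.
    return np.sqrt((df["dropofflon"] - df["pickuplon"])**2 +
                   (df["dropofflat"] - df["pickuplat"])**2)

def compute_rmse(actual, predicted):
    # Root Mean Squared Error between label and prediction.
    return np.sqrt(np.mean((actual - predicted)**2))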
Decoding sensor space data Decoding, a.k.a. MVPA or supervised machine learning applied to MEG data in sensor space. Here the classifier is applied to every time point.
import numpy as np
import matplotlib.pyplot as plt

from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import StratifiedKFold

import mne
from mne.datasets import sample
from mne.decoding import TimeDecoding, GeneralizationAcrossTime

data_path = sample.data_path()

plt.close('all')
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.2, 0.5 event_id = dict(aud_l=1, vis_l=3) # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.filter(2, None) # replace baselining with high-pass events = mne.read_events(event_fname) # Set up pick list: EEG + MEG - bad channels (modify to your needs) raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True, reject=dict(grad=4000e-13, eog=150e-6)) epochs_list = [epochs[k] for k in event_id] mne.epochs.equalize_epoch_counts(epochs_list) data_picks = mne.pick_types(epochs.info, meg=True, exclude='bads')
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Temporal decoding We'll use the default classifier for a binary classification problem, which is a linear Support Vector Machine (SVM).
td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)

# Fit
td.fit(epochs)

# Compute accuracy
td.score(epochs)

# Plot scores across time
td.plot(title='Sensor space decoding')
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Generalization Across Time This runs the analysis used in [1] and further detailed in [2]. Here we'll use a stratified cross-validation scheme.
# make response vector
y = np.zeros(len(epochs.events), dtype=int)
y[epochs.events[:, 2] == 3] = 1

cv = StratifiedKFold(y=y)  # do a stratified cross-validation

# define the GeneralizationAcrossTime object
gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1,
                               cv=cv, scorer=roc_auc_score)

# fit and score
gat.fit(epochs, y=y)
gat.score(epochs)

# let's visualize now
gat.plot()
gat.plot_diagonal()
0.14/_downloads/plot_sensors_decoding.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Polynomial Logistic Regression
import numpy as np
import pandas as pd
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The data we want to investigate is stored in the file 'fake-data.csv'. It is data that I have found somewhere. I am not sure whether this data is real or fake. Therefore, I won't discuss the attributes of the data. The point of the data is that it is a classification problem that cannot be solved with ordinary logistic regression. We will introduce <em style="color:blue;">polynomial logistic regression</em> to solve this problem.
DF = pd.read_csv('fake-data.csv')
DF.head()
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We extract the features from the data frame and convert them into a NumPy <em style="color:blue;">feature matrix</em>.
X = np.array(DF[['x','y']])
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We extract the target column and convert it into a NumPy array.
Y = np.array(DF['class'])
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
In order to plot the instances according to their class we divide the feature matrix $X$ into two parts. $\texttt{X_pass}$ contains those examples that have class $1$, while $\texttt{X_fail}$ contains those examples that have class $0$.
X_pass = X[Y == 1.0]
X_fail = X[Y == 0.0]
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Let us plot the data.
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We want to split the data into a training set and a test set. The training set will be used to compute the parameters of our model, while the testing set is only used to check the accuracy. SciKit-Learn has a predefined method train_test_split that can be used to randomly split data into a training set and a test set.
from sklearn.model_selection import train_test_split
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We will split the data at a ratio of $4:1$, i.e. $80\%$ of the data will be used for training, while the remaining $20\%$ is used to test the accuracy.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=1)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
In order to build a <em style="color:blue;">logistic regression</em> classifier, we import the module linear_model from SciKit-Learn.
import sklearn.linear_model as lm
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The function $\texttt{logistic_regression}(\texttt{X_train}, \texttt{Y_train}, \texttt{X_test}, \texttt{Y_test})$ takes a feature matrix $\texttt{X_train}$ and a corresponding vector $\texttt{Y_train}$ and computes a logistic regression model $M$ that best fits these data. Then, the accuracy of the model is computed using the test data $\texttt{X_test}$ and $\texttt{Y_test}$.
def logistic_regression(X_train, Y_train, X_test, Y_test, reg=10000):
    M = lm.LogisticRegression(C=reg, tol=1e-6)
    M.fit(X_train, Y_train)
    train_score = M.score(X_train, Y_train)
    yPredict = M.predict(X_test)
    accuracy = np.sum(yPredict == Y_test) / len(Y_test)
    return M, train_score, accuracy
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We use this function to build a model for our data. Initially, we will take all the available data to create the model.
M, score, accuracy = logistic_regression(X, Y, X, Y)
score, accuracy
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Given that there are only two classes, the accuracy of our first model is quite poor. Let us extract the coefficients so we can plot the <em style="color:blue;">decision boundary</em>.
ϑ0 = M.intercept_[0]
ϑ1, ϑ2 = M.coef_[0]

plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
H = np.arange(-0.8, 1.0, 0.05)
P = -(ϑ0 + ϑ1 * H)/ϑ2
plt.plot(H, P, color='green')
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Clearly, pure logistic regression is not working for this example. The reason is that a linear decision boundary is not able to separate the positive examples from the negative examples. Let us add polynomial features. This enables us to create more complex decision boundaries. The function $\texttt{extend}(X)$ takes a feature matrix $X$ that is supposed to contain two features $x$ and $y$. It creates the new features $x^2$, $y^2$ and $x\cdot y$ and returns a new feature matrix that also contains these additional features.
def extend(X):
    n = len(X)
    fx = np.reshape(X[:,0], (n, 1))  # extract first column
    fy = np.reshape(X[:,1], (n, 1))  # extract second column
    return np.hstack([fx, fy, fx*fx, fy*fy, fx*fy])  # stack everything horizontally

X_train_quadratic = extend(X_train)
X_test_quadratic = extend(X_test)

M, score, accuracy = logistic_regression(X_train_quadratic, Y_train,
                                         X_test_quadratic, Y_test)
score, accuracy
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
This seems to work better. Let us compute the decision boundary and plot it.
ϑ0 = M.intercept_[0]
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5 = M.coef_[0]
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The decision boundary is now given by the following equation: $$ \vartheta_0 + \vartheta_1 \cdot x + \vartheta_2 \cdot y + \vartheta_3 \cdot x^2 + \vartheta_4 \cdot y^2 + \vartheta_5 \cdot x \cdot y = 0$$ This is the equation of an ellipse. Let us plot the decision boundary with the data.
a    = np.arange(-1.0, 1.0, 0.005)
b    = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a,b)
A
B
Z    = ϑ0 + ϑ1 * A + ϑ2 * B + ϑ3 * A * A + ϑ4 * B * B + ϑ5 * A * B
Z
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
CS = plt.contour(A, B, Z, 0, colors='green')
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
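Whether this conic really is an ellipse depends on the fitted coefficients: a curve $\vartheta_3 \cdot x^2 + \vartheta_5 \cdot x \cdot y + \vartheta_4 \cdot y^2 + \ldots = 0$ is an ellipse exactly when the discriminant $\vartheta_5^2 - 4 \cdot \vartheta_3 \cdot \vartheta_4$ is negative. A quick check (a sketch, reusing the coefficients unpacked above):

# Sketch: the conic A·x² + B·x·y + C·y² + … = 0 is an ellipse iff B² − 4·A·C < 0.
discriminant = ϑ5**2 - 4 * ϑ3 * ϑ4
print('ellipse' if discriminant < 0 else 'not an ellipse:', discriminant)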
Let us try to add <em style="color:blue;">quartic features</em> next. These are features like $x^4$, $x^2\cdot y^2$, etc. Luckily, scikit-learn has a class, $\texttt{PolynomialFeatures}$, that can automate this process.
from sklearn.preprocessing import PolynomialFeatures

quartic         = PolynomialFeatures(4, include_bias=False)
X_train_quartic = quartic.fit_transform(X_train)
X_test_quartic  = quartic.transform(X_test)   # transform only: the test set must not influence fitting
print(quartic.get_feature_names(['x', 'y']))  # in newer scikit-learn versions this is get_feature_names_out
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
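The number of features grows quickly with the degree: in two variables there are $\frac{(n+1)(n+2)}{2} - 1$ monomials of degree at most $n$ (excluding the bias term), so degree $4$ yields $14$ features. A quick check (a sketch):

# Sketch: count the monomials x^i · y^j with 1 ≤ i + j ≤ n.
n = 4
print((n + 1) * (n + 2) // 2 - 1)   # expected: 14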
Let us fit the quartic model.
M, score, accuracy = logistic_regression(X_train_quartic, Y_train, X_test_quartic, Y_test)
score, accuracy
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The accuracy on the training set has increased, but we observe that the accuracy on the test set is actually not improving. Again, we proceed to plot the decision boundary.
ϑ0 = M.intercept_[0]
ϑ1, ϑ2, ϑ3, ϑ4, ϑ5, ϑ6, ϑ7, ϑ8, ϑ9, ϑ10, ϑ11, ϑ12, ϑ13, ϑ14 = M.coef_[0]
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Plotting the decision boundary starts to get tedious.
a    = np.arange(-1.0, 1.0, 0.005)
b    = np.arange(-1.0, 1.0, 0.005)
A, B = np.meshgrid(a,b)
Z    = ϑ0 + ϑ1 * A + ϑ2 * B + \
       ϑ3 * A**2 + ϑ4 * A * B + ϑ5 * B**2 + \
       ϑ6 * A**3 + ϑ7 * A**2 * B + ϑ8 * A * B**2 + ϑ9 * B**3 + \
       ϑ10 * A**4 + ϑ11 * A**3 * B + ϑ12 * A**2 * B**2 + ϑ13 * A * B**3 + ϑ14 * B**4
plt.figure(figsize=(15, 10))
sns.set(style='darkgrid')
plt.title('A Classification Problem')
plt.axvline(x=0.0, c='k')
plt.axhline(y=0.0, c='k')
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.xticks(np.arange(-0.9, 1.1, step=0.1))
plt.yticks(np.arange(-0.8, 1.2, step=0.1))
plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
CS = plt.contour(A, B, Z, 0, colors='green')
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The decision boundary looks strange. Let's get bold and try to add features of a higher power. However, in order to understand what is happening, we will only plot the training data.
X_pass_train = X_train[Y_train == 1.0]
X_fail_train = X_train[Y_train == 0.0]
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
In order to automate the process, we define some auxiliary functions. $\texttt{polynomial}(n)$ creates a polynomial in the variables A and B that contains all terms of the form $\Theta[k] \cdot A^i \cdot B^j$ where $i+j \leq n$.
def polynomial(n):
    sum = 'Θ[0]'
    cnt = 0
    for k in range(1, n+1):
        for i in range(0, k+1):
            cnt += 1
            sum += f' + Θ[{cnt}] * A**{k-i} * B**{i}'
    print('number of features:', cnt)
    return sum
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
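As a small concrete example, $n = 2$ should yield the five terms $A$, $B$, $A^2$, $A \cdot B$, $B^2$ after the constant (a sketch of the expected behaviour):

# Sketch: polynomial(2) prints 'number of features: 5' and returns the string
# 'Θ[0] + Θ[1] * A**1 * B**0 + Θ[2] * A**0 * B**1 + Θ[3] * A**2 * B**0 + Θ[4] * A**1 * B**1 + Θ[5] * A**0 * B**2'
polynomial(2)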
Let's check this out for $n=4$. It should report $14$ features, matching the $14$ coefficients $\vartheta_1, \ldots, \vartheta_{14}$ unpacked above.
polynomial(4)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The function $\texttt{polynomial_grid}(n, M)$ takes a number $n$ and a model $M$. It returns the meshgrid coordinates together with the values of the model's decision function on that grid, which can be used to plot the decision boundary of the model.
def polynomial_grid(n, M):
    Θ    = [M.intercept_[0]] + list(M.coef_[0])
    a    = np.arange(-1.0, 1.0, 0.005)
    b    = np.arange(-1.0, 1.0, 0.005)
    A, B = np.meshgrid(a,b)
    # eval sees the local names Θ, A and B; returning the grid as well avoids
    # silently relying on the global A and B defined in an earlier cell
    return A, B, eval(polynomial(n))
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The function $\texttt{plot_nth_degree_boundary}(n, C)$ creates a polynomial logistic regression model of degree $n$, where the optional parameter $C$ is passed on as the regularization parameter. It plots both the training data and the decision boundary.
def plot_nth_degree_boundary(n, C=10000):
    poly         = PolynomialFeatures(n, include_bias=False)
    X_train_poly = poly.fit_transform(X_train)
    X_test_poly  = poly.transform(X_test)   # transform only on the test set
    M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)
    print('The accuracy on the training set is:', score)
    print('The accuracy on the test set is:', accuracy)
    A, B, Z = polynomial_grid(n, M)         # grid and values returned together
    plt.figure(figsize=(15, 10))
    sns.set(style='darkgrid')
    plt.title('A Classification Problem')
    plt.axvline(x=0.0, c='k')
    plt.axhline(y=0.0, c='k')
    plt.xlabel('x axis')
    plt.ylabel('y axis')
    plt.xticks(np.arange(-0.9, 1.11, step=0.1))
    plt.yticks(np.arange(-0.8, 1.21, step=0.1))
    plt.scatter(X_pass_train[:,0], X_pass_train[:,1], color='b')
    plt.scatter(X_fail_train[:,0], X_fail_train[:,1], color='r')
    CS = plt.contour(A, B, Z, 0, colors='green')
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Let us test this for the polynomial logistic regression model of degree $4$.
plot_nth_degree_boundary(4)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
This seems to be the same shape that we have seen earlier. It looks like the function $\texttt{plot_nth_degree_boundary}(n)$ is working. Let's try higher degree polynomials.
plot_nth_degree_boundary(5)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The score on the training set has improved. What happens if we try still higher degrees?
plot_nth_degree_boundary(6)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
We captured one more of the training examples. Let's get bold: we want $100\%$ accuracy on the training set.
plot_nth_degree_boundary(14)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
The model is getting more complicated, but it is not getting better, as the accuracy on the test set has not improved.
from sklearn.model_selection import train_test_split   # assumed import; it may already have happened earlier in the notebook

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=2)
X_pass_train = X_train[Y_train == 1.0]
X_fail_train = X_train[Y_train == 0.0]
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
Let us check whether regularization can help. In scikit-learn's $\texttt{LogisticRegression}$, the parameter $C$ is the inverse of the regularization strength, so smaller values of $C$ regularize more strongly. Below, the regularization prevents the decision boundary from becoming too wiggly and thus the accuracy on the test set can increase. The function below plots all the data.
def plot_nth_degree_boundary_all(n, C):
    poly         = PolynomialFeatures(n, include_bias=False)
    X_train_poly = poly.fit_transform(X_train)
    X_test_poly  = poly.transform(X_test)   # transform only on the test set
    M, score, accuracy = logistic_regression(X_train_poly, Y_train, X_test_poly, Y_test, C)
    print('The accuracy on the training set is:', score)
    print('The accuracy on the test set is:', accuracy)
    A, B, Z = polynomial_grid(n, M)
    plt.figure(figsize=(15, 10))
    sns.set(style='darkgrid')
    plt.title('A Classification Problem')
    plt.axvline(x=0.0, c='k')
    plt.axhline(y=0.0, c='k')
    plt.xlabel('x axis')
    plt.ylabel('y axis')
    plt.xticks(np.arange(-0.9, 1.11, step=0.1))
    plt.yticks(np.arange(-0.8, 1.21, step=0.1))
    plt.scatter(X_pass[:,0], X_pass[:,1], color='b')
    plt.scatter(X_fail[:,0], X_fail[:,1], color='r')
    CS = plt.contour(A, B, Z, 0, colors='green')

plot_nth_degree_boundary_all(14, 100.0)
plot_nth_degree_boundary_all(20, 100000.0)
Python/6 Classification/Polynomial-Logistic-Regression.ipynb
karlstroetmann/Artificial-Intelligence
gpl-2.0
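To get a feeling for the effect of $C$, one can sweep over several values (a sketch; the particular values of $C$ below are illustrative assumptions, not tuned choices):

# Sketch: smaller C means stronger regularization, so the boundary should get smoother.
for C in [0.1, 10.0, 1000.0, 100000.0]:
    plot_nth_degree_boundary_all(14, C)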
Load Digits Dataset Digits is a dataset of handwritten digits. Each feature is the intensity of one pixel of an 8 x 8 image.
# Load scikit-learn's datasets module (assumed import; it may already have happened earlier in the notebook)
from sklearn import datasets

# Load digits dataset
digits = datasets.load_digits()

# Create feature matrix
X = digits.data

# Create target vector
y = digits.target

# View the first observation's feature values
X[0]
machine-learning/.ipynb_checkpoints/loading_scikit-learns_digits-dataset-checkpoint.ipynb
tpin3694/tpin3694.github.io
mit
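Since each image is 8 x 8, the feature matrix should have 64 columns. A quick check (a sketch; the sample count is a property of the bundled dataset):

# Sketch: one row per image, one column per pixel.
print(X.shape)   # expected: (1797, 64)
print(y.shape)   # expected: (1797,)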
The observation's feature values are presented as a vector. However, by using the images attribute we can load the same feature values as a matrix and then visualize the actual handwritten character:
# View the first observation's feature values as a matrix
digits.images[0]

# Visualize the first observation's feature values as an image
plt.gray()
plt.matshow(digits.images[0])
plt.show()
machine-learning/.ipynb_checkpoints/loading_scikit-learns_digits-dataset-checkpoint.ipynb
tpin3694/tpin3694.github.io
mit
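The matrix view is just the flat feature vector reshaped to 8 x 8; the following check (a sketch, assuming numpy is available as np) should pass:

# Sketch: digits.images[0] and X[0] contain the same pixel intensities.
import numpy as np
assert np.array_equal(digits.images[0], X[0].reshape(8, 8))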
Simple TFX Pipeline Tutorial using Penguin dataset

A short tutorial to run a simple TFX pipeline.

Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".

View on TensorFlow.org: https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
Run in Google Colab: https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/penguin_simple.ipynb
View source on GitHub: https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/penguin_simple.ipynb
Download notebook: https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/penguin_simple.ipynb

In this notebook-based tutorial, we will create and run a TFX pipeline for a simple classification model. The pipeline will consist of three essential TFX components: ExampleGen, Trainer and Pusher. The pipeline covers the most minimal ML workflow: importing data, training a model and exporting the trained model.

Please see Understanding TFX Pipelines to learn more about the various concepts in TFX.

Set Up

We first need to install the TFX Python package and download the dataset which we will use for our model.

Upgrade Pip

To avoid upgrading Pip in a system when running locally, check to make sure that we are running in Colab. Local systems can of course be upgraded separately.
try:
  import colab
  !pip install --upgrade pip
except:
  pass
site/en-snapshot/tfx/tutorials/tfx/penguin_simple.ipynb
tensorflow/docs-l10n
apache-2.0
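The TFX package itself would then be installed in a further notebook cell (a sketch; whether the tutorial pins a specific version is not shown here, so the plain, unpinned install below is an assumption):

# Sketch: install the TFX Python package (named 'tfx' on PyPI).
!pip install -U tfx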