markdown | code | path | repo_name | license
---|---|---|---|---
<a id=groupby></a>
Grouping data
Next up: group data by some variable. As an example, how would we compute the average rating of each movie? If you think about it for a minute, you might come up with these steps:
1. Group the data by movie: put all the "Pulp Fiction" ratings in one bin, all the "Shawshank" ratings in another. We do that with the groupby method.
2. Compute a statistic (the mean, for example) for each group.
Pandas has tools that make that relatively easy. | # group
g = ml[['title', 'rating']].groupby('title')
type(g) | Code/notebooks/bootcamp_pandas-summarize.ipynb | NYUDataBootcamp/Materials | mit |
Now that we have a groupby object, what can we do with it? | # the number in each category
g.count().head(10)
# what type of object have we created?
type(g.count()) | Code/notebooks/bootcamp_pandas-summarize.ipynb | NYUDataBootcamp/Materials | mit |
Comment. Note that the combination of groupby and count created a dataframe with:
- Its index is the variable we grouped by. If we group by more than one variable, we get a multi-index.
- Its columns are the other variables.
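To see the structure concretely, a quick sketch (assuming the `ml` ratings frame used above):

```python
counts = ml[['title', 'rating']].groupby('title').count()
print(counts.index.name)     # 'title' -- the grouping variable
print(list(counts.columns))  # ['rating'] -- the other variables
```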
Exercise. Take the code
```python
counts = ml.groupby(['title', 'movieId'])
```
Without running it, what is the index of counts? What are its columns? | counts = ml.groupby(['title', 'movieId']).count()
gm = g.mean()
gm.head()
# we can put them together
grouped = g.count()
grouped = grouped.rename(columns={'rating': 'Number'})
grouped['Mean'] = g.mean()
grouped.head(10)
grouped.plot.scatter(x='Number', y='Mean') | Code/notebooks/bootcamp_pandas-summarize.ipynb | NYUDataBootcamp/Materials | mit |
Create a model
We first create a toy model to simulate the groundwater levels in southeastern Austria. We will use this model to illustrate how the different methods for uncertainty quantification can be used. | gwl = pd.read_csv("data_wagna/head_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2).loc["2006":].iloc[0::10]
evap = pd.read_csv("data_wagna/evap_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2)
prec = pd.read_csv("data_wagna/rain_wagna.csv", index_col=0, parse_dates=True,
squeeze=True, skiprows=2)
# Model settings
tmin = pd.Timestamp("2007-01-01") # Needs warmup
tmax = pd.Timestamp("2016-12-31")
ml = ps.Model(gwl)
sm = ps.RechargeModel(prec, evap, recharge=ps.rch.FlexModel(),
rfunc=ps.Exponential, name="rch")
ml.add_stressmodel(sm)
# Add the ARMA(1,1) noise model and solve the Pastas model
ml.add_noisemodel(ps.ArmaModel())
ml.solve(tmin=tmin, tmax=tmax, noise=True) | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Diagnostic Checks
Before we perform the uncertainty quantification, we should check if the underlying statistical assumptions are met. We refer to the notebook on Diagnostic checking for more details on this. | ml.plots.diagnostics(); | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Confidence intervals
After the model is calibrated, a fit attribute is added to the Pastas Model object (ml.fit). This object contains information about the optimization (e.g., the Jacobian matrix) and a number of methods that can be used to quantify uncertainties. | ci = ml.fit.ci_simulation(alpha=0.05, n=1000)
ax = ml.plot(figsize=(10,3));
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Observations", "Simulation", "95% Confidence interval"], ncol=3, loc=2) | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Prediction interval | ci = ml.fit.prediction_interval(n=1000)
ax = ml.plot(figsize=(10,3));
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Observations", "Simulation", "95% Prediction interval"], ncol=3, loc=2) | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Uncertainty of step response | ci = ml.fit.ci_step_response("rch")
ax = ml.plots.step_response(figsize=(6,2))
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Simulation", "95% Prediction interval"], ncol=3, loc=4) | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Uncertainty of block response | ci = ml.fit.ci_block_response("rch")
ax = ml.plots.block_response(figsize=(6,2))
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Simulation", "95% Prediction interval"], ncol=3, loc=1) | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Uncertainty of the contributions | ci = ml.fit.ci_contribution("rch")
r = ml.get_contribution("rch")
ax = r.plot(figsize=(10,3))
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color="lightgray")
ax.legend(["Simulation", "95% Prediction interval"], ncol=3, loc=1)
plt.tight_layout() | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Custom Confidence intervals
It is also possible to compute the confidence intervals manually, for example to estimate the uncertainty in the recharge or statistics (e.g., SGI, NSE). We can call ml.fit.get_parameter_sample to obtain random parameter samples from a multivariate normal distribution using the optimal parameters and the covariance matrix. Next, we use the parameter sets to obtain multiple simulations of 'something', here the recharge. | params = ml.fit.get_parameter_sample(n=1000, name="rch")
data = {}
# Here we run the model n times with different parameter samples
for i, param in enumerate(params):
data[i] = ml.stressmodels["rch"].get_stress(p=param)
df = pd.DataFrame.from_dict(data, orient="columns").loc[tmin:tmax].resample("A").sum()
ci = df.quantile([0.025, .975], axis=1).transpose()
r = ml.get_stress("rch").resample("A").sum()
ax = r.plot.bar(figsize=(10,2), width=0.5, yerr=[r-ci.iloc[:,0], ci.iloc[:,1]-r])
ax.set_xticklabels(labels=r.index.year, rotation=0, ha='center')
ax.set_ylabel("Recharge [mm a$^{-1}$]")
ax.legend(ncol=3); | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
Uncertainty of the NSE
The code pattern shown above can be used for many types of uncertainty analyses. Another example is provided below, where we compute the uncertainty of the Nash-Sutcliffe efficiency (NSE). | params = ml.fit.get_parameter_sample(n=1000)
data = []
# Here we run the model n times with different parameter samples
for i, param in enumerate(params):
sim = ml.simulate(p=param)
data.append(ps.stats.nse(obs=ml.observations(), sim=sim))
fig, ax = plt.subplots(1,1, figsize=(4,3))
plt.hist(data, bins=50, density=True)
ax.axvline(ml.stats.nse(), linestyle="--", color="k")
ax.set_xlabel("NSE [-]")
ax.set_ylabel("frequency [-]")
from scipy.stats import norm
import numpy as np
mu, std = norm.fit(data)
# Plot the PDF.
xmin, xmax = ax.set_xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
ax.plot(x, p, 'k', linewidth=2) | examples/notebooks/16_uncertainty.ipynb | pastas/pasta | mit |
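A nonparametric alternative to the fitted normal is to take empirical quantiles of the sampled NSE values directly; a small sketch using the `data` list computed above:

```python
# Empirical 95% interval of the sampled NSE values.
lower, upper = np.percentile(data, [2.5, 97.5])
print("95% interval of the NSE: [{:.3f}, {:.3f}]".format(lower, upper))
```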
Regular tables
Solid lines are original tables, dashed lines are the new tables. | z = np.linspace(0, 50, 100)
plot_rates(z, ['CloudyData_UVB=HM2012.h5',
'CloudyData_HM2012_highz.h5'],
'Chemistry', ['k24', 'k25', 'k26', 'k29', 'k30'])
pyplot.ylim(1e-29, 1e-11)
z = np.linspace(0, 50, 100)
plot_rates(z, ['CloudyData_UVB=HM2012.h5',
'CloudyData_HM2012_highz.h5'],
'Photoheating', ['piHI', 'piHeI', 'piHeII'])
pyplot.ylim(1e-26, 1e-11)
z = np.linspace(0, 50, 100)
pyplot.plot(z, k31_RFT14(z), color='red', label='k31 JHW')
pyplot.plot(z, k31_JW2012(z), color='red', ls = ':', label='k31 JW2012')
pyplot.plot(z, k31_Qin2020(z), color='red', ls = '-.', label='k31 Qin2020')
plot_rates(z, ['CloudyData_UVB=HM2012.h5',
'CloudyData_HM2012_highz.h5'],
'Chemistry', ['k27', 'k28', 'k31'])
pyplot.ylim(1e-19, 1e-7) | physics_data/UVB/grackle_tables/photo.ipynb | aemerick/galaxy_analysis | mit |
Self-shielded tables
Solid lines are original tables, dashed lines are the new tables. | z = np.linspace(0, 50, 100)
plot_rates(z, ['CloudyData_UVB=HM2012_shielded.h5',
'CloudyData_HM2012_highz_shielded.h5'],
'Chemistry', ['k24', 'k25', 'k26', 'k29', 'k30'])
pyplot.ylim(1e-29, 1e-11)
z = np.linspace(0, 50, 100)
plot_rates(z, ['CloudyData_UVB=HM2012_shielded.h5',
'CloudyData_HM2012_highz_shielded.h5'],
'Photoheating', ['piHI', 'piHeI', 'piHeII'])
pyplot.ylim(1e-26, 1e-11)
z = np.linspace(0, 50, 100)
plot_rates(z, ['CloudyData_UVB=HM2012_shielded.h5',
'CloudyData_HM2012_highz_shielded.h5'],
'Chemistry', ['k27', 'k28', 'k31'])
pyplot.ylim(1e-19, 1e-7)
pyplot.plot(z, k31_RFT14(z), color='red', label='k31 JHW')
pyplot.plot(z, k31_JW2012(z), color='red', ls = ':', label='k31 JW2012')
pyplot.plot(z, k31_Qin2020(z), color='red', ls = '-.', label='k31 Qin2020')
| physics_data/UVB/grackle_tables/photo.ipynb | aemerick/galaxy_analysis | mit |
Question 2
Use the matrix-vector multiplication and apply the Map function to this matrix and vector:
$$M = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}, \qquad v = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix}$$
Then, identify the key-value pairs that are the output of Map. | import numpy as np
import itertools
M = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16],])
v = np.array([1, 2, 3, 4])
def mr(M, v):
t = []
nrows, ncols = M.shape  # avoid shadowing the function name `mr`
# Map: emit a (row index, partial product) pair for every matrix entry
for i in range(nrows):
for j in range(ncols):
t.append((i, M[i, j] * v[j]))
t = sorted(t, key=lambda x: x[0])
for x in t:
print(x[0] + 1, x[1])
r = np.zeros((nrows, 1))
# Reduce: sum the partial products that share a row key
for key, vals in itertools.groupby(t, key=lambda x: x[0]):
vals = [x[1] for x in vals]
r[key] = sum(vals)
print('%s, %s' % (key, sum(vals)))
return r.transpose()
#print(np.dot(M, v.transpose()))
print(mr(M, v)) | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
Question 3
Suppose we have the following relations: | from IPython.display import Image
Image(filename='relations.jpeg') | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
and we take their natural join. Apply the Map function to the tuples of these relations. Then, construct the elements that are input to the Reduce function. Identify these elements. | import numpy as np
import itertools
R = [(0, 1),
(1, 2),
(2, 3)]
S = [(0, 1),
(1, 2),
(2, 3)]
def hash_join(R, S):
h = {}
for a, b in R:
h.setdefault(b, []).append(a)
j = []
for b, c in S:
if b not in h:
continue
for r in h[b]:
j.append( (r, b, c) )
return j
def mr(R, S):
m = []
for a, b in R:
m.append( (b, ('R', a)) )
for b, c in S:
m.append( (b, ('S', c)) )
m = sorted(m, key=lambda x:x[0])
r = []
for key, vals in itertools.groupby(m, key=lambda x:x[0]):
vals = [x[1] for x in vals]
print(key, vals)
rs = [x for x in vals if x[0] == 'R']
ss = [x for x in vals if x[0] == 'S']
for ri in rs:
for si in ss:
r.append( (ri[1], key, si[1]) )
return r
print(hash_join(R, S))
print(mr(R, S)) | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
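Tracing the Map phase by hand for these relations: R emits (1, ('R',0)), (2, ('R',1)), (3, ('R',2)) and S emits (0, ('S',1)), (1, ('S',2)), (2, ('S',3)). Grouping by key, the inputs to Reduce are therefore 0: [('S',1)], 1: [('R',0), ('S',2)], 2: [('R',1), ('S',3)], and 3: [('R',2)]. Only keys 1 and 2 receive tuples from both relations, yielding the join results (0,1,2) and (1,2,3).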
Question 4
The figure below shows two positive points (purple squares) and two negative points (green circles): | from IPython.display import Image
Image(filename='svm1.jpeg') | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
That is, the training data set consists of:
- (x1,y1) = ((5,4),+1)
- (x2,y2) = ((8,3),+1)
- (x3,y3) = ((7,2),-1)
- (x4,y4) = ((3,3),-1)
Our goal is to find the maximum-margin linear classifier for this data. In easy cases, the shortest line between a positive and negative point has a perpendicular bisector that separates the points. If so, the perpendicular bisector is surely the maximum-margin separator. Alas, in this case, the closest pair of positive and negative points, x2 and x3, have a perpendicular bisector that misclassifies x1 as negative, so that won't work.
The next-best possibility is that we can find a pair of points on one side (i.e., either two positive or two negative points) such that a line parallel to the line through these points is the maximum-margin separator. In these cases, the limit to how far from the two points the parallel line can get is determined by the closest (to the line between the two points) of the points on the other side. For our simple data set, this situation holds.
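As a quick numeric check of the claim above, a small sketch (plain NumPy, using only the four training points):

```python
import numpy as np

x1, x2, x3 = np.array([5, 4]), np.array([8, 3]), np.array([7, 2])
mid = (x2 + x3) / 2.0   # midpoint of the closest +/- pair
d = x2 - x3             # direction between them
# The perpendicular bisector is the set of points p with (p - mid).d = 0,
# i.e. x + y = 10; the positive side is where the dot product is > 0.
side = lambda p: np.dot(p - mid, d)
print(side(x2), side(x3), side(x1))  # 1.0, -1.0, -1.0 -> x1 lands on the negative side
```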
Consider all possibilities for boundaries of this type, and express the boundary as w.x+b=0, such that w.x+b≥1 for positive points x and w.x+b≤-1 for negative points x. Assuming that w = (w1,w2), identify the value of w1, w2, and b. | import math
import numpy as np
P = [((5, 4), 1),
((8, 3), 1),
((3, 3), -1),
((7, 2), -1)]
def line(pl0, pl1, p):
dx, dy = pl1[0] - pl0[0], pl1[1] - pl0[1]
a = abs((pl1[1] - pl0[1]) * p[0] - (pl1[0] - pl0[0]) * p[1] + pl1[0]*pl0[1] - pl0[0]*pl1[1])
return a / math.sqrt(dx*dx + dy*dy)
def closest(L, pts):
dist = [line(L[0][0], L[1][0], x[0]) for x in pts]
ix = np.argmin(dist)
return pts[ix], dist[ix]
def solve(A, B):
# find the point in B closest to the line through both points in A
p, d = closest(A, B)
M = np.hstack((
np.array([list(x[0]) for x in A] + [list(p[0])]),
np.ones((3, 1))))
b = np.array([x[1] for x in A] + [p[1]])
x = np.linalg.solve(M, b)
return x, d
S = [solve([a for a in P if a[1] == 1], [a for a in P if a [1] == -1]),
solve([a for a in P if a[1] == -1], [a for a in P if a [1] == 1])]
ix = np.argmax([x[1] for x in S])
x = S[ix][0]
print('w1 = %0.2f' % x[0])
print('w2 = %0.2f' % x[1])
print('b = %0.2f' % x[2]) | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
Question 5
Consider the following training set of 16 points. The eight purple squares are positive examples, and the eight green circles are negative examples. | Image(filename='newsvm4.jpeg') | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
We propose to use the diagonal line with slope +1 and intercept +2 as a decision boundary, with positive examples above and negative examples below. However, like any linear boundary for this training set, some examples are misclassified. We can measure the goodness of the boundary by computing all the slack variables that exceed 0, and then using them in one of several objective functions. In this problem, we shall only concern ourselves with computing the slack variables, not an objective function. For a point x with label y, the slack is 1 - y(w.x+b) when that quantity is positive, and 0 otherwise.
To be specific, suppose the boundary is written in the form w.x+b=0, where w = (-1,1) and b = -2. Note that we can scale the three numbers involved as we wish, and so doing changes the margin around the boundary. However, we want to consider this specific boundary and margin. Determine the slack for each of the 16 points. | import numpy as np
pos = [(5, 10),
(7, 10),
(1, 8),
(3, 8),
(7, 8),
(1, 6),
(3, 6),
(3, 4)]
neg = [(5, 8),
(5, 6),
(7, 6),
(1, 4),
(5, 4),
(7, 4),
(1, 2),
(3, 2)]
C = [(x, 1) for x in pos] + [(x, -1) for x in neg]
w, b = np.array([-1, 1]), -2
d = np.dot(np.array([list(x[0]) for x in C]), w) + b
print("Points"+"\t"+"Slack")
for i, m in enumerate(np.sign(d) == np.array([x[1] for x in C])):
if C[i][1] == 1:
slack = 1 - d
else:
slack = 1 + d
#print "%s %d %0.2f %0.2f" % (C[i][0], C[i][1], d[i], slack[i])
print "%s\t%0.2f" % (C[i][0], slack[i]) | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
Question 6
Below we see a set of 20 points and a decision tree for classifying the points. | Image(filename='gold.jpeg')
Image(filename='dectree1.jpeg') | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
To be precise, the 20 points represent (Age,Salary) pairs of people who do or do not buy gold jewelry. Age (abbreviated A in the decision tree) is the x-axis, and Salary (S in the tree) is the y-axis. Those that do are represented by gold points, and those that do not by green points. The 10 points of gold-jewelry buyers are:
(28,145), (38,115), (43,83), (50,130), (50,90), (50,60), (50,30), (55,118), (63,88), and (65,140).
The 10 points of those that do not buy gold jewelry are:
(23,40), (25,125), (29,97), (33,22), (35,63), (42,57), (44, 105), (55,63), (55,20), and (64,37).
Some of these points are correctly classified by the decision tree and some are not. Determine the classification of each point, and then indicate the points that are misclassified. | A = 0
S = 1
pos = [(28,145),
(38,115),
(43,83),
(50,130),
(50,90),
(50,60),
(50,30),
(55,118),
(63,88),
(65,140)]
neg = [(23,40),
(25,125),
(29,97),
(33,22),
(35,63),
(42,57),
(44, 105),
(55,63),
(55,20),
(64,37)]
def classify(p):
if p[A] < 45:
return p[S] >= 110
else:
return p[S] >= 75
e = [p for p, v in zip(pos, [classify(x) for x in pos]) if not v] + \
[p for p, v in zip(neg, [classify(x) for x in neg]) if v]
print(e) | Mining massive datasets/MapReduce SVM.ipynb | shngli/Data-Mining-Python | gpl-3.0 |
Defining a custom Dataset
The datasets in this library consist of iterators which yield batches of the corresponding data. For the provided tasks, these datasets have 4 splits of data rather than the traditional 3. We have "train", which is data used by the task to train a model; "inner_valid", which contains validation data for use when inner-training (training an instance of a task), e.g. for picking hparams; "outer_valid", which is used to meta-train with -- this is unseen in inner training and thus serves as a basis to train learned optimizers against; and "test", which can be used to test the learned optimizer.
To make a dataset, simply write 4 iterators with these splits.
For performance reasons, creating these iterators must not be slow.
The existing datasets make extensive use of caching to share iterators across tasks which use the same data.
To account for this reuse, it is expected that these iterators always randomly sample data and have a large shuffle buffer so as to not run into any sampling issues. | import numpy as np
# Assumed import, matching earlier cells of this notebook:
from learned_optimization.tasks.datasets import base as datasets_base
def data_iterator():
bs = 3
while True:
batch = {"data": np.zeros([bs, 5])}
yield batch
@datasets_base.dataset_lru_cache
def get_datasets():
return datasets_base.Datasets(
train=data_iterator(),
inner_valid=data_iterator(),
outer_valid=data_iterator(),
test=data_iterator())
ds = get_datasets()
next(ds.train) | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
Defining a custom Task
To define a custom task, one simply writes a subclass of Task. Let's look at a simple task consisting of a quadratic with noisy targets. | # First we construct data iterators.
# Assumed imports, matching earlier cells of this notebook:
import jax
import jax.numpy as jnp
from learned_optimization.tasks import base as tasks_base
def noise_datasets():
def _fn():
while True:
yield np.random.normal(size=[4, 2]).astype(dtype=np.float32)
return datasets_base.Datasets(
train=_fn(), inner_valid=_fn(), outer_valid=_fn(), test=_fn())
class MyTask(tasks_base.Task):
datasets = noise_datasets()
def loss(self, params, rng, data):
return jnp.sum(jnp.square(params - data))
def init(self, key):
return jax.random.normal(key, shape=(4, 2))
task = MyTask()
key = jax.random.PRNGKey(0)
key1, key = jax.random.split(key)
params = task.init(key)
task.loss(params, key1, next(task.datasets.train)) | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
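Because the loss is a pure JAX function of the parameters, we can also differentiate it directly; a small sketch using the task defined above:

```python
batch = next(task.datasets.train)
grads = jax.grad(task.loss)(params, key1, batch)
print(grads.shape)  # same shape as params: (4, 2)
```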
Meta-training on multiple tasks: TaskFamily
What we have shown previously was meta-training on a single task instance.
While sometimes this is sufficient for a given situation, in many situations we seek to meta-train a meta-learning algorithm such as a learned optimizer on a mixture of different tasks.
One path to do this is to simply run more than one meta-gradient computation, each with different tasks, average the gradients, and perform one meta-update.
This works great when the tasks are quite different -- e.g. meta-gradients when training a convnet vs. an MLP.
A big negative to this is that these meta-gradient calculations happen sequentially, and thus make poor use of hardware accelerators like GPUs or TPUs.
As a solution to this problem, we have the TaskFamily abstraction to enable better use of hardware. A TaskFamily represents a distribution over a set of tasks and specifies particular samples from this distribution as a pytree of jax types.
The function to sample these configurations is called sample, and the function to get a task from the sampled config is task_fn. A TaskFamily also optionally contains datasets which are shared by all the Tasks it creates.
As a simple example, let's consider a family of quadratics parameterized by the mean squared error to some target point which is itself sampled. | PRNGKey = jnp.ndarray
TaskParams = jnp.ndarray
class FixedDimQuadraticFamily(tasks_base.TaskFamily):
"""A simple TaskFamily with a fixed dimensionality but sampled target."""
def __init__(self, dim: int):
super().__init__()
self._dim = dim
self.datasets = None
def sample(self, key: PRNGKey) -> TaskParams:
# Sample the target for the quadratic task.
return jax.random.normal(key, shape=(self._dim,))
def task_fn(self, task_params: TaskParams) -> tasks_base.Task:
dim = self._dim
class _Task(tasks_base.Task):
def loss(self, params, rng, _):
# Compute MSE to the target task.
return jnp.sum(jnp.square(task_params - params))
def init(self, key):
return jax.random.normal(key, shape=(dim,))
return _Task() | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
With this task family defined, we can create instances by sampling a configuration and creating a task. This task acts like any other task in that it has an init and a loss function. | task_family = FixedDimQuadraticFamily(10)
key = jax.random.PRNGKey(0)
task_cfg = task_family.sample(key)
task = task_family.task_fn(task_cfg)
key1, key = jax.random.split(key)
params = task.init(key)
batch = None
task.loss(params, key, batch) | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
To achieve speedups, we can now leverage jax.vmap to train multiple task instances in parallel! Depending on the task, this can be considerably faster than serially executing them. | # Assumed import, matching earlier cells of this notebook:
from learned_optimization.optimizers import base as opt_base

def train_task(cfg, key):
task = task_family.task_fn(cfg)
key1, key = jax.random.split(key)
params = task.init(key1)
opt = opt_base.Adam()
opt_state = opt.init(params)
for i in range(4):
params = opt.get_params(opt_state)
loss, grad = jax.value_and_grad(task.loss)(params, key, None)
opt_state = opt.update(opt_state, grad, loss=loss)
loss = task.loss(params, key, None)
return loss
task_cfg = task_family.sample(key)
print("single loss", train_task(task_cfg, key))
keys = jax.random.split(key, 32)
task_cfgs = jax.vmap(task_family.sample)(keys)
losses = jax.vmap(train_task)(task_cfgs, keys)
print("multiple losses", losses) | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
Because of this ability to apply vmap over task families, TaskFamily is the main building block for a number of the higher-level libraries in this package. A single task can always be converted to a task family with: | # Assumed import, matching earlier cells of this notebook:
from learned_optimization.tasks.fixed import image_mlp

single_task = image_mlp.ImageMLP_FashionMnist8_Relu32()
task_family = tasks_base.single_task_to_family(single_task) | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
This wrapper task family has no configurable values and always returns the base task. | cfg = task_family.sample(key)
print("config only contains a dummy value:", cfg)
task = task_family.task_fn(cfg)
# Tasks are the same
assert task == single_task | docs/notebooks/Part2_CustomTasks.ipynb | google/learned_optimization | apache-2.0 |
The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section. | import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape)) | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Visualize Data
View a sample from the dataset.
You do not need to modify this section. | import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index]) | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Preprocess Data
Shuffle the training data.
You do not need to modify this section. | from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train) | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section. | import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128 | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
- Layer 1: Convolutional. The output shape should be 28x28x6.
- Activation. Your choice of activation function.
- Pooling. The output shape should be 14x14x6.
- Layer 2: Convolutional. The output shape should be 10x10x16.
- Activation. Your choice of activation function.
- Pooling. The output shape should be 5x5x16.
- Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
- Layer 3: Fully Connected. This should have 120 outputs.
- Activation. Your choice of activation function.
- Layer 4: Fully Connected. This should have 84 outputs.
- Activation. Your choice of activation function.
- Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the final fully connected (logits) layer. | from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
dropout = 0.75  # (defined but not used in this implementation)
# TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
# Initialize weights using the mu and sigma hyperparameters defined above.
weights = {
'wc1': tf.Variable(tf.random_normal([5,5,1,6], mean=mu, stddev=sigma)),
'wc2': tf.Variable(tf.random_normal([5,5,6,16], mean=mu, stddev=sigma)),
'wd1': tf.Variable(tf.random_normal([400, 120], mean=mu, stddev=sigma)),
'wd2': tf.Variable(tf.random_normal([120, 84], mean=mu, stddev=sigma)),
'wd3': tf.Variable(tf.random_normal([84, 10], mean=mu, stddev=sigma))}
biases = {
'bc1': tf.Variable(tf.zeros(6)),
'bc2': tf.Variable(tf.zeros(16)),
'bd1': tf.Variable(tf.zeros(120)),
'bd2': tf.Variable(tf.zeros(84)),
'bd3': tf.Variable(tf.zeros(10))}
conv1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding='VALID')
conv1 = tf.nn.bias_add(conv1, biases['bc1'])
# TODO: Activation.
conv1 = tf.nn.relu(conv1)
# TODO: Pooling. Input = 28x28x6. Output = 14x14x6.
ksize = [1,2,2,1]
strides = [1,2,2,1]
padding = 'VALID'
conv1 = tf.nn.max_pool(conv1, ksize, strides, padding)
# TODO: Layer 2: Convolutional. Output = 10x10x16.
conv2 = tf.nn.conv2d(conv1, weights['wc2'], strides=[1, 1, 1, 1], padding='VALID')
conv2 = tf.nn.bias_add(conv2, biases['bc2'])
# TODO: Activation.
conv2 = tf.nn.relu(conv2)
# TODO: Pooling. Input = 10x10x16. Output = 5x5x16.
ksize = [1,2,2,1]
strides = [1,2,2,1]
padding = 'VALID'
conv2 = tf.nn.max_pool(conv2, ksize, strides, padding)
# TODO: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# TODO: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1 = tf.add(tf.matmul(fc0, weights['wd1']), biases['bd1'])
# TODO: Activation.
fc1 = tf.nn.relu(fc1)
# TODO: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2 = tf.add(tf.matmul(fc1, weights['wd2']), biases['bd2'])
# TODO: Activation.
fc2 = tf.nn.relu(fc2)
# TODO: Layer 5: Fully Connected. Input = 84. Output = 10.
logits = tf.add(tf.matmul(fc2, weights['wd3']), biases['bd3'])
return logits | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section. | x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10) | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section. | rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation) | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section. | correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section. | with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, 'lenet')
print("Model saved") | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy)) | CarND-LeNet-Lab/LeNet-Lab.ipynb | rajeshb/SelfDrivingCar | mit |
ENDF: Resonance Covariance Data
Let's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in: | # Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'
filename, headers = urllib.request.urlretrieve(url, 'gd157.endf')
# Load into memory
gd157_endf = openmc.data.IncidentNeutron.from_endf(filename, covariance=True)
gd157_endf | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
We can access the parameters contained within File 32 in a similar manner to the File 2 parameters from before. | gd157_endf.resonance_covariance.ranges[0].parameters[:5] | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
The newly created object will contain multiple resonance regions within gd157_endf.resonance_covariance.ranges. We can access the full covariance matrix from File 32 for a given range by: | covariance = gd157_endf.resonance_covariance.ranges[0].covariance | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
This covariance matrix currently only stores the upper triangular portion as covariance matrices are symmetric. Plotting the covariance matrix: | plt.imshow(covariance, cmap='seismic',vmin=-0.008, vmax=0.008)
plt.colorbar() | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
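Since only the upper triangle is stored, computations that need the full matrix can symmetrize it first; a small sketch in plain NumPy (assuming `covariance` is the upper-triangular array from above):

```python
# Recover the full symmetric matrix from the stored upper triangle.
full_cov = covariance + covariance.T - np.diag(covariance.diagonal())
```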
The correlation matrix can be constructed from the covariance matrix and also gives some insight into the relations among the parameters. | corr = np.zeros([len(covariance),len(covariance)])
for i in range(len(covariance)):
for j in range(len(covariance)):
corr[i, j]=covariance[i, j]/covariance[i, i]**(0.5)/covariance[j, j]**(0.5)
plt.imshow(corr, cmap='seismic',vmin=-1.0, vmax=1.0)
plt.colorbar()
| examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
Sampling and Reconstruction
The covariance module also has the ability to sample a new set of parameters using the covariance matrix. Currently the sampling uses numpy.random.multivariate_normal(). Because parameters are assumed to have a multivariate normal distribution, this method does not currently guarantee that sampled parameters will be positive. | rm_resonance = gd157_endf.resonances.ranges[0]
n_samples = 5
samples = gd157_endf.resonance_covariance.ranges[0].sample(n_samples)
type(samples[0])
| examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
The sampling routine relies on the openmc.data.ResonanceRange object for the same resonance range. This allows each sample to itself be an openmc.data.ResonanceRange with a new set of parameters. Looking at some of the sampled parameters below: | print('Sample 1')
samples[0].parameters[:5]
print('Sample 2')
samples[1].parameters[:5] | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
We can reconstruct the cross section from the sampled parameters using the reconstruct method of openmc.data.ResonanceRange. For more on reconstruction see the Nuclear Data example notebook. | gd157_endf.resonances.ranges
energy_range = [rm_resonance.energy_min, rm_resonance.energy_max]
energies = np.logspace(np.log10(energy_range[0]),
np.log10(energy_range[1]), 10000)
for sample in samples:
xs = sample.reconstruct(energies)
elastic_xs = xs[2]
plt.loglog(energies, elastic_xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)') | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
Subset Selection
Another capability of the covariance module is selecting a subset of the resonance parameters and the corresponding subset of the covariance matrix. We can do this by specifying the value we want to discriminate and the bounds within one energy region. Selecting only resonances with J=2: | lower_bound = 2  # inclusive
upper_bound = 2  # inclusive
rm_res_cov_sub = gd157_endf.resonance_covariance.ranges[0].subset('J',[lower_bound,upper_bound])
rm_res_cov_sub.file2res.parameters[:5] | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
The subset method will also store the corresponding subset of the covariance matrix. | rm_res_cov_sub.covariance
gd157_endf.resonance_covariance.ranges[0].covariance.shape
| examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
Checking the size of the new covariance matrix to be sure the subset was taken properly: | old_n_parameters = gd157_endf.resonance_covariance.ranges[0].parameters.shape[0]
old_shape = gd157_endf.resonance_covariance.ranges[0].covariance.shape
new_n_parameters = rm_res_cov_sub.file2res.parameters.shape[0]
new_shape = rm_res_cov_sub.covariance.shape
print('Number of parameters\nOriginal: '+str(old_n_parameters)+'\nSubset: '+str(new_n_parameters)+'\nCovariance Size\nOriginal: '+str(old_shape)+'\nSubset: '+str(new_shape))
| examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
And finally, we can sample from the subset as well | samples_sub = rm_res_cov_sub.sample(n_samples)
samples_sub[0].parameters[:5] | examples/jupyter/nuclear-data-resonance-covariance.ipynb | mit-crpg/openmc | mit |
Controlling for Random Negative vs. Sans Random in Imbalance Techniques using K Acetylation
Training data is from the CUCKOO group and benchmarks are from dbPTM. | par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/k_acetylation.csv")
y.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=0)
y.supervised_training("svc")
y.benchmark("Data/Benchmarks/acet.csv", "K")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/k_acetylation.csv")
x.process_data(vector_function="sequence", amino_acid="K", imbalance_function=i, random_data=1)
x.supervised_training("svc")
x.benchmark("Data/Benchmarks/acet.csv", "K")
del x
| .ipynb_checkpoints/Lysine Acetylation -svc-checkpoint.ipynb | vzg100/Post-Translational-Modification-Prediction | mit |
Chemical Vector | par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/k_acetylation.csv")
y.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=0)
y.supervised_training("svc")
y.benchmark("Data/Benchmarks/acet.csv", "K")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/k_acetylation.csv")
x.process_data(vector_function="chemical", amino_acid="K", imbalance_function=i, random_data=1)
x.supervised_training("svc")
x.benchmark("Data/Benchmarks/acet.csv", "K")
del x
| .ipynb_checkpoints/Lysine Acetylation -svc-checkpoint.ipynb | vzg100/Post-Translational-Modification-Prediction | mit |
Simple TFX Pipeline for Vertex Pipelines
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_simple">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
<td><a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_simple.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Run in Google Cloud Vertex AI Workbench</a></td>
</table></div>
This notebook-based tutorial will create a simple TFX pipeline and run it using
Google Cloud Vertex Pipelines. This notebook is based on the TFX pipeline
we built in
Simple TFX Pipeline Tutorial.
If you are not familiar with TFX and you have not read that tutorial yet, you
should read it before proceeding with this notebook.
Google Cloud Vertex Pipelines helps you to automate, monitor, and govern
your ML systems by orchestrating your ML workflow in a serverless manner. You
can define your ML pipelines using Python with TFX, and then execute your
pipelines on Google Cloud. See
Vertex Pipelines introduction
to learn more about Vertex Pipelines.
This notebook is intended to be run on
Google Colab or on
AI Platform Notebooks. If you
are not using one of these, you can simply click the "Run in Google Colab" button
above.
Set up
Before you run this notebook, ensure that you have the following:
- A Google Cloud Platform project.
- A Google Cloud Storage bucket. See
the guide for creating buckets.
- Enable
Vertex AI and Cloud Storage API.
Please see
Vertex documentation
to configure your GCP project further.
Install python packages
We will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines. | # Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2" | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Did you restart the runtime?
If you are using Google Colab, the first time that you run
the cell above, you must restart the runtime by clicking
above "RESTART RUNTIME" button or using "Runtime > Restart
runtime ..." menu. This is because of the way that Colab
loads packages.
If you are not on Colab, you can restart runtime with following cell. | # docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True) | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Log in to Google for this notebook
If you are running this notebook on Colab, authenticate with your user account: | import sys
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user() | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
If you are on AI Platform Notebooks, authenticate with Google Cloud before
running the next section, by running
sh
gcloud auth login
in the Terminal window (which you can open via File > New in the
menu). You only need to do this once per notebook instance.
Check the package versions. | import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__)) | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Set up variables
We will set up some variables used to customize the pipelines below. The following
information is required:
GCP Project id. See
Identifying your project id.
GCP Region to run pipelines. For more information about the regions that
Vertex Pipelines is available in, see the
Vertex AI locations guide.
Google Cloud Storage Bucket to store pipeline outputs.
Enter required values in the cell below before running it. | GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS
GOOGLE_CLOUD_REGION = '' # <--- ENTER THIS
GCS_BUCKET_NAME = '' # <--- ENTER THIS
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.') | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Set gcloud to use your project. | !gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-vertex-pipelines'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for input data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT)) | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Prepare example data
We will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy the dataset to it
on GCS. | !gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/ | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Take a quick look at the CSV file. | !gsutil cat {DATA_ROOT}/penguins_processed.csv | head | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Create a pipeline
TFX pipelines are defined using Python APIs. We will define a pipeline which
consists of three components: CsvExampleGen, Trainer and Pusher. The pipeline
and model definition are almost the same as in the
Simple TFX Pipeline Tutorial.
The only difference is that we don't need to set metadata_connection_config,
which is used to locate the
ML Metadata database. Because
Vertex Pipelines uses a managed metadata service, users don't need to take care
of it, and we don't need to specify the parameter.
Before actually defining the pipeline, we need to write the model code for the
Trainer component first.
Write model code.
We will use the same model code as in the
Simple TFX Pipeline Tutorial. | _trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf') | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Copy the module file to GCS, from which it can be accessed by the pipeline components.
Because model training happens on GCP, we need to upload this model definition.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline. | !gsutil cp {_trainer_module_file} {MODULE_ROOT}/ | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
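If you take the container route, the Kubeflow V2 runner config accepts a default image; a minimal sketch (the image URI below is a hypothetical placeholder you would build and push yourself, and the runner mirrors the one constructed later in this notebook):

```python
# Hypothetical image containing penguin_trainer.py -- substitute your own build.
PIPELINE_IMAGE = 'gcr.io/{}/penguin-pipeline'.format(GOOGLE_CLOUD_PROJECT)

runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(
        default_image=PIPELINE_IMAGE),
    output_filename='pipeline.json')
```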
Write a pipeline definition
We will define a function to create a TFX pipeline. | # Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified because we don't need `metadata_path` argument.
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, serving_model_dir: str,
) -> tfx.dsl.Pipeline:
"""Creates a three component penguin pipeline with TFX."""
# Brings data into the pipeline.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a filesystem destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
# Following three components will be included in the pipeline.
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components) | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
Run the pipeline on Vertex Pipelines.
In the Simple TFX Pipeline Tutorial, we used
LocalDagRunner, which runs in a local environment.
TFX provides multiple orchestrators to run your pipeline. In this tutorial we
will use Vertex Pipelines together with the Kubeflow V2 dag runner.
We need to define a runner to actually run the pipeline. You will compile
your pipeline into our pipeline definition format using TFX APIs. | # docs_infra: no_execute
import os
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
# Following function will write the pipeline definition to PIPELINE_DEFINITION_FILE.
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR)) | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
The generated definition file can be submitted using the Google Cloud aiplatform client. | # docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
display_name=PIPELINE_NAME)
job.submit() | docs/tutorials/tfx/gcp/vertex_pipelines_simple.ipynb | tensorflow/tfx | apache-2.0 |
We achieved 100% accuracy. Incredible, this model never makes a mistake! We should use it to play the lottery and see if we win a few million; or maybe not? Let's see how it behaves on the evaluation data. | # model accuracy on the evaluation data.
print("evaluation accuracy: {0: .2f}".format(
arbol.score(x_eval, y_eval))) | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
Ah, now our model no longer looks so accurate. This is because it is most likely overfitted, since we let the tree grow until every leaf was pure (that is, containing data from only one of the classes to predict). One alternative for reducing overfitting, to see whether we can get the model to generalize better and therefore be more accurate on unseen data, is to try to reduce the model's complexity by controlling the depth the Decision Tree can reach. | # depth of the decision tree.
arbol.tree_.max_depth | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
In this case our model has a depth of 22 nodes; let's see whether reducing that number improves the accuracy on the evaluation data. For example, let's set a maximum depth of just 5 nodes. | # second model, with depth limited to 5 nodes
arbol2 = DecisionTreeClassifier(criterion='entropy', max_depth=5)
# fit the model
arbol2.fit(x_train, y_train)
# model accuracy on the training data.
print("training accuracy: {0: .2f}".format(
arbol2.score(x_train, y_train)))
Now we can see that we no longer have a model with 100% accuracy on the training data; its accuracy is considerably lower, 92%. However, if we now measure the accuracy on the evaluation data, we see that it is 90%, 3 points above what we achieved with the first model, the one that never made a mistake on the training data. | # model accuracy on the evaluation data.
print("evaluation accuracy: {0: .2f}".format(
arbol2.score(x_eval, y_eval)))
This difference arises because we reduced the model's complexity in an attempt to gain generalization. We should also keep in mind that if we keep reducing the complexity, we can end up with a model that is too simple, which rather than being overfitted may perform far below its potential; we could say such a model is underfitted and has a high level of bias. To help us find the middle ground between the model's complexity and its fit to the data, we can use graphical tools. For example, we could build several models with different degrees of complexity and then plot accuracy as a function of complexity. | # fitting graph for the decision tree
train_prec = []
eval_prec = []
max_deep_list = list(range(3, 23))
for deep in max_deep_list:
arbol3 = DecisionTreeClassifier(criterion='entropy', max_depth=deep)
arbol3.fit(x_train, y_train)
train_prec.append(arbol3.score(x_train, y_train))
eval_prec.append(arbol3.score(x_eval, y_eval))
# plot the results.
plt.plot(max_deep_list, train_prec, color='r', label='training')
plt.plot(max_deep_list, eval_prec, color='b', label='evaluation')
plt.title('Decision tree fitting graph')
plt.legend()
plt.ylabel('accuracy')
plt.xlabel('number of nodes')
plt.show() | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
El gráfico que acabamos de construir se llama gráfico de ajuste y muestra la precisión del modelo en función de su complejidad. En nuestro ejemplo, podemos ver que el punto con mayor precisión, en los datos de evaluación, lo obtenemos con un nivel de profundidad de aproximadamente 5 nodos; a partir de allí el modelo pierde en generalización y comienza a estar sobreajustado. También podemos crear un gráfico similar con la ayuda de Scikit-learn, utilizando validation_curve. | # utilizando validation curve de sklearn
from sklearn.learning_curve import validation_curve
train_prec, eval_prec = validation_curve(estimator=arbol, X=x_train,
y=y_train, param_name='max_depth',
param_range=max_deep_list, cv=5)
train_mean = np.mean(train_prec, axis=1)
train_std = np.std(train_prec, axis=1)
test_mean = np.mean(eval_prec, axis=1)
test_std = np.std(eval_prec, axis=1)
# plot the curves
plt.plot(max_deep_list, train_mean, color='r', marker='o', markersize=5,
         label='training')
plt.fill_between(max_deep_list, train_mean + train_std,
                 train_mean - train_std, alpha=0.15, color='r')
plt.plot(max_deep_list, test_mean, color='b', linestyle='--',
         marker='s', markersize=5, label='evaluation')
plt.fill_between(max_deep_list, test_mean + test_std,
                 test_mean - test_std, alpha=0.15, color='b')
plt.grid()
plt.legend(loc='center right')
plt.xlabel('Max depth')
plt.ylabel('Accuracy')
plt.show() | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
In this graph we can also see that our model has quite a bit of variance, represented by the shaded area.
Methods for reducing overfitting
Some of the techniques we can use to reduce overfitting are:
Use cross-validation.
Collect more data.
Penalize complexity with some regularization technique (a quick sketch follows this list).
Optimize the model's parameters with grid search.
Reduce the dimensionality of the data.
Apply feature selection techniques.
Use ensemble models.
Let's look at some examples.
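The sections below walk through cross-validation, learning curves and grid search. As a quick, illustrative sketch of the regularization item (my addition, assuming the x_train/x_eval split created earlier in this notebook), scikit-learn's LogisticRegression exposes C, the inverse of the regularization strength:

```python
from sklearn.linear_model import LogisticRegression

# smaller C = stronger L2 penalty = simpler model, less prone to overfitting
for C in [0.01, 1.0, 100.0]:
    lr = LogisticRegression(C=C)
    lr.fit(x_train, y_train)
    print("C={0}: train={1:.2f} eval={2:.2f}".format(
        C, lr.score(x_train, y_train), lr.score(x_eval, y_eval)))
```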
Cross-validation
Cross-validation starts by splitting a dataset into a number $k$ of partitions (usually between 5 and 10) called folds. Cross-validation then iterates between the evaluation and training data $k$ times, in a particular way. On each iteration of cross-validation, a different fold is chosen as the evaluation data. In that iteration, the other $k-1$ folds are combined to form the training data. Therefore, on each iteration we have $(k-1)/k$ of the data used for training and $1/k$ used for evaluation.
Each iteration produces a model, and therefore an estimate of generalization performance, for example an estimate of accuracy. Once cross-validation is finished, every example has been used exactly once for evaluation but $k-1$ times for training. At that point we have performance estimates from all folds and can compute the mean and standard deviation of the model's accuracy. Let's look at an example.
<img alt="Validacion cruzada" title="Validacion cruzada" src="https://relopezbriega.github.io/images/validacion_cruzada.png"> | # Ejemplo cross-validation
from sklearn import cross_validation  # note: replaced by sklearn.model_selection in scikit-learn >= 0.18
# create the folds
kpliegues = cross_validation.StratifiedKFold(y=y_train, n_folds=10,
                                             random_state=2016)
# iterate over the folds
precision = []
for k, (train, test) in enumerate(kpliegues):
    arbol2.fit(x_train[train], y_train[train])
    score = arbol2.score(x_train[test], y_train[test])
    precision.append(score)
    print('Fold: {0:}, Class dist: {1:}, Acc: {2:.3f}'.format(k+1,
          np.bincount(y_train[train]), score))
# print the mean and standard deviation
print('Mean accuracy: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision))) | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
In this example we used the <a href="https://es.wikipedia.org/wiki/Iterador_(patr%C3%B3n_de_dise%C3%B1o)">iterator</a> StratifiedKFold provided by Scikit-learn. This <a href="https://es.wikipedia.org/wiki/Iterador_(patr%C3%B3n_de_dise%C3%B1o)">iterator</a> is an improved version of cross-validation, since each fold is stratified to preserve the class proportions of the original dataset, which usually yields better estimates of the model's bias and variance. We could also use cross_val_score, which directly returns the accuracy the model achieved on each fold. | # example with cross_val_score
precision = cross_validation.cross_val_score(estimator=arbol2,
X=x_train, y=y_train,
cv=10, n_jobs=-1)
print('accuracies: {}'.format(precision))
print('Mean accuracy: {0: .3f} +/- {1: .3f}'.format(np.mean(precision),
np.std(precision))) | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
More data and learning curves
Often, reducing overfitting is as simple as getting more data: "give me more data and I will predict the future!" In real life, though, obtaining more data is rarely that easy. Another analytical tool that helps us understand how more data reduces overfitting is the learning curve, which plots accuracy as a function of the training set size. Let's see how to plot learning curves with the help of Python.
<img alt="Curva de aprendizaje" title="Curva de aprendizaje" src="https://relopezbriega.github.io/images/curva_aprendizaje.png" width="600px" height="600px" > | # Ejemplo Curvas de aprendizaje
from sklearn.learning_curve import learning_curve
train_sizes, train_scores, test_scores = learning_curve(estimator=arbol2,
X=x_train, y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10), cv=10,
n_jobs=-1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# plot the curves
plt.plot(train_sizes, train_mean, color='r', marker='o', markersize=5,
         label='training')
plt.fill_between(train_sizes, train_mean + train_std,
                 train_mean - train_std, alpha=0.15, color='r')
plt.plot(train_sizes, test_mean, color='b', linestyle='--',
         marker='s', markersize=5, label='evaluation')
plt.fill_between(train_sizes, test_mean + test_std,
                 test_mean - test_std, alpha=0.15, color='b')
plt.grid()
plt.title('Learning curve')
plt.legend(loc='upper right')
plt.xlabel('Number of training examples')
plt.ylabel('Accuracy')
plt.show() | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
In this graph we can clearly see that with little data the accuracies on the training and evaluation data are very different, and that as the amount of data grows the model generalizes much better and the two accuracies begin to converge. This kind of graph also helps when deciding whether to invest in obtaining more data: here, for example, it tells us that beyond roughly 2500 samples the model no longer gains much accuracy from additional data.
Parameter optimization with Grid Search
Most Machine Learning models have several parameters for tuning their behavior, so another option we have for reducing overfitting is to optimize these parameters through a process known as grid search, trying to find the combination that yields the highest accuracy. The approach grid search uses is quite simple: it is an exhaustive, brute-force search in which we specify a list of values for different parameters, and the computer evaluates the model's performance for every combination of those parameters to find the optimal set, the one that gives the best performance.
Let's look at an example using an SVM, or Support Vector Machine, model; the idea is to optimize this model's gamma and C parameters. The gamma parameter defines how far the influence of a single training example reaches, with low values meaning "far" and high values meaning "close". The C parameter sets the penalty for classification errors: a low value makes the decision surface smoother, while a high value aims to classify all examples correctly, giving the model more freedom to select more examples as support vectors. Keep in mind that, like any brute-force process, it can take quite a while depending on how many parameters we optimize over. | # grid search example with SVM
from sklearn.grid_search import GridSearchCV  # note: moved to sklearn.model_selection in scikit-learn >= 0.18
from sklearn.svm import SVC
# create the model
svm = SVC(random_state=1982)
# parameter ranges
rango_C = np.logspace(-2, 10, 10)
rango_gamma = np.logspace(-9, 3, 10)
param_grid = dict(gamma=rango_gamma, C=rango_C)
# create the grid search
gs = GridSearchCV(estimator=svm, param_grid=param_grid, scoring='accuracy',
                  cv=5, n_jobs=-1)
# start the fit
gs = gs.fit(x_train, y_train)
# print the results
print(gs.best_score_)
print(gs.best_params_)
# use the best model
mejor_modelo = gs.best_estimator_
mejor_modelo.fit(x_train, y_train)
print('Accuracy: {0:.3f}'.format(mejor_modelo.score(x_eval, y_eval))) | content/notebooks/MachineLearningOverfitting.ipynb | relopezbriega/mi-python-blog | gpl-2.0 |
We see that $\alpha$ shifts the function on the x axis. $\beta$ alters the steepness of the function. As already mentioned, $\gamma$ and $\delta$ determine the floor and the ceiling of the function.
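To make the parameterization concrete, here is a minimal sketch (my addition, not code from the original analysis) of the four-parameter logistic described above:

```python
import numpy as np

def logistic4(x, alpha, beta, gamma, delta):
    # gamma = floor, 1 - delta = ceiling;
    # alpha shifts the curve along x, beta controls the steepness
    return gamma + (1 - gamma - delta) / (1 + np.exp(-(alpha + beta * x)))
```

For instance, the first hand-fitted curve below corresponds to logistic4(x, -4.5, 1.2, 0.4, 0.1), since its acceptance band 1 - gamma - delta is 0.5.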
Next let's perform the analysis. We first load the data. | from urllib import urlopen  # Python 2; in Python 3 use: from urllib.request import urlopen
f=urlopen('http://journal.sjdm.org/12/12817/data.csv')
D=np.loadtxt(f,skiprows=3,delimiter=',')[:,7:]
f.close()
D.shape
# predictors
vfair=np.array(([range(1,8)]*6)).flatten() # in cents
vmale=np.ones(42,dtype=int); vmale[21:]=0
vface=np.int32(np.concatenate([np.zeros(7),np.ones(7),np.ones(7)*2]*2))
# anova format
sid=[];face=[];fair=[]
for i in range(D.shape[0]):
    for j in range(D.shape[1]):
        sid.append(i)
        #face.append(['angry','neutral','smile'][vface[j]])
        face.append(vface[j])
        fair.append(vfair[j])
coop=D.flatten()
sid=np.array(sid)
face=np.array(face)
fair=np.array(fair)
assert np.all(coop[:42]==D[0,:])
print coop.size,len(sid),len(face),len(fair)
print D.shape | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
It is good to do some manual fitting to see just how the logistic curve behaves, but also to assure ourselves that we can get a shape similar to the pattern of our data. | D[D==2]=np.nan
R=np.zeros((3,7))
for i in np.unique(vface).tolist():
    for j in np.unique(vfair).tolist():
        sel=np.logical_and(i==vface,j==vfair)
        R[i,j-1]=np.nansum(D[:,sel])/(~np.isnan(D[:,sel])).sum()
for i in range(3):
    y=1/(1+exp(4.5-1.2*np.arange(1,8)))*0.5+[0.4,0.44,0.47][i]
    plt.plot(range(1,8),y,':',color=['b','r','g'][i])
    plt.plot(range(1,8),R[i,:],color=['b','r','g'][i])
plt.legend(['Data Angry','Model Angry','Data Neutral',
'Model Neutral','Data Smile','Model Smile'],loc=4); | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
Above I fitted the logistic curve to the data. I got the parameter values by several iterations of trial-and-error. We want to obtain precise estimates and also we wish to get an idea about the uncertainty of the estimate. We implement the model in STAN. | import pystan
model = """
data {
int<lower=0> N;
int<lower=0,upper=1> coop[N]; // acceptance
int<lower=0,upper=8> fair[N]; // fairness
int<lower=1,upper=3> face[N]; // face expression
}
parameters {
real<lower=-20,upper=20> alpha[3];
real<lower=0,upper=10> beta[3];
simplex[3] gamm[3];
}
transformed parameters{
vector[N] x;
for (i in 1:N)
x[i]<-inv_logit(alpha[face[i]]+beta[face[i]]*fair[i])
*gamm[face[i]][3]+gamm[face[i]][1];
}
model {
coop ~ bernoulli(x);
}
"""
#inpar=[{'alpha':[-4.5,-4.5,-4.5],'beta':[1.2,1.2,1.2],
# 'gamma':[0.4,0.44,0.47],'delta':[0.1,0.06,0.03]}]*4
sm = pystan.StanModel(model_code=model) | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
Run it! | dat = {'N': coop.size,'coop':np.int32(coop),'fair':fair,'face':face+1,'sid':sid}
seed=np.random.randint(2**16)
fit=sm.sampling(data=dat,iter=6000,chains=4,thin=5,warmup=2000,n_jobs=4,seed=seed)
print pystan.misc._print_stanfit(fit,pars=['alpha','beta','gamm'],digits_summary=2)
w= fit.extract()
np.save('alpha',w['alpha'])
np.save('gamm',w['gamm'])
np.save('beta',w['beta'])
del w
del fit | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
Here are the results. | #w=fit.summary(pars=['alpha','beta','gamm'])
#np.save('logregSummary.fit',w)
#w=np.load('logregSummary.fit.npy')
#w=w.tolist()
a=np.load('m1alpha.npy')
b=np.load('m1beta.npy')
g=np.load('m1gamm.npy')[:,:,0]
d=np.load('m1gamm.npy')[:,:,1]
#D[D==2]=np.nan
for i in range(3):
    x=np.linspace(1,7,101)
    y=1/(1+exp(-np.median(a[:,i])-np.median(b[:,i])*x))*np.median(1-g[:,i]-d[:,i])+np.median(g[:,i])
    plt.plot(x,y,':',color=['b','r','g'][i])
    plt.plot(range(1,8),R[i,:],color=['b','r','g'][i])
plt.legend(['Data Angry','Model Angry','Data Neutral',
            'Model Neutral','Data Smile','Model Smile'],loc=4);
#for j in range(lp.size): print '%.3f [%.3f, %.3f]'%(prs[j],lp[j],up[j]) | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
The model fits quite well. We now look at the estimated values for different face conditions. | D=np.concatenate([a,b,g,1-d,1-g-d],1)
print D.shape
# note: sap = scipy.stats.scoreatpercentile (imported in a later cell);
# clr is presumably a plot color defined elsewhere in the original notebook
for n in range(D.shape[1]):
    plt.subplot(2,3,[1,2,4,5,6][n/3])
    k=n%3
    plt.plot([k,k],[sap(D[:,n],2.5),sap(D[:,n],97.5)],color=clr)
    plt.plot([k,k],[sap(D[:,n],25),sap(D[:,n],75)],color=clr,lw=3,solid_capstyle='round')
    plt.plot([k],[np.median(D[:,n])],mfc=clr,mec=clr,ms=8,marker='_',mew=2)
    plt.xlim([-0.5,2.5])
    plt.grid(b=False,axis='x')
    plt.title(['alpha','beta','gamma','delta','1-gamma-delta'][n/3])
    plt.gca().set_xticks([0,1,2])
    plt.gca().set_xticklabels(['angry','neutral','smile']) | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
The estimates show what we already more or less inferred from the graph. The 95% intervals for the $\alpha$ and $\beta$ coefficients overlap, so we should consider a model with the same horizontal shift and steepness in each of the face conditions. We see that $\gamma$ and $\delta$ vary between the conditions. To better understand what is happening, consider the width of the acceptance band in each condition, given by $1-\gamma-\delta$ and shown in the bottom-right panel. From the figure it looks like all three curves occupy a band of the same width. The estimation confirms this for the neutral and smile conditions, whose estimates overlap almost perfectly. In the angry condition it is not clear where the floor of the logit curve is located; the curve is still linear for low offers. This means that a) the $1-\gamma-\delta$ estimate is larger in the angry condition and b) the estimate is more uncertain. We can reasonably argue that $1-\gamma-\delta$ should be equal across conditions and that the discrepant estimate for the angry condition is due to error or some strange money-face interaction that we are not interested in. We end up with the following model.
$$\mathrm{coop}_{i,j} \sim \mathrm{Bern}(\pi_{i,j})$$
$$\pi_{i,j} =\mathrm{logit}^{-1}(\alpha+\beta\cdot\mathrm{fair}[i,j])\cdot \nu+\gamma_{\mathrm{face}[i,j]}$$ where $\nu=1-\gamma-\delta$ is the width of the acceptance band.
model = """
data {
int<lower=0> N;
int<lower=0,upper=1> coop[N]; // acceptance
int<lower=0,upper=8> fair[N]; // fairness
int<lower=1,upper=3> face[N]; // face expression
}
parameters {
real<lower=-20,upper=20> alpha;
real<lower=0,upper=10> beta;
real<lower=0,upper=1> gamm[3];
real<lower=0,upper=1> delt[3];
}
transformed parameters{
vector[N] x;
vector[3] gamma[3];
for (i in 1:3){
gamma[i][1]<-gamm[i];
gamma[i][2]<-delt[i];
gamma[i][3]<-1-gamm[i]-delt[i];
}
for (i in 1:N)
x[i]<-inv_logit(alpha+beta*fair[i])
*gamma[face[i]][3]+gamma[face[i]][1];
}
model {
coop ~ bernoulli(x);
}
"""
sm = pystan.StanModel(model_code=model)
dat = {'N': coop.size,'coop':np.int32(coop),'fair':fair,'face':face+1,'sid':sid}
seed=np.random.randint(2**16)
fit=sm.sampling(data=dat,iter=5000,chains=4,thin=5,warmup=2000,n_jobs=4,seed=seed)
outpars=['alpha','beta','gamm','delt']
print pystan.misc._print_stanfit(fit,pars=outpars,digits_summary=2)
w= fit.extract()
for op in outpars: np.save(op,w[op])
del w
del fit
a=np.load('alpha.npy')
b=np.load('beta.npy')
g=np.load('gamm.npy')
d=np.load('delt.npy')
#D[D==2]=np.nan
for i in range(3):
    x=np.linspace(1,7,101)
    y=1/(1+exp(-np.median(a)-np.median(b)*x))*np.median(1-g[:,i]-d[:,i])+np.median(g[:,i])
    plt.plot(x,y,':',color=['b','r','g'][i])
    plt.plot(range(1,8),R[i,:],color=['b','r','g'][i])
plt.legend(['Data Angry','Model Angry','Data Neutral',
            'Model Neutral','Data Smile','Model Smile'],loc=4); | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
from scipy.stats import scoreatpercentile as sap
print g.T.shape, np.atleast_2d(d).shape
D=np.concatenate([np.atleast_2d(a),np.atleast_2d(b),np.atleast_2d(b),g.T,1-d.T,1-g.T-d.T],0).T
print D.shape
for n in range(D.shape[1]):
    plt.subplot(2,3,[1,2,4,5,6][n/3])
    k=n%3
    plt.plot([k,k],[sap(D[:,n],2.5),sap(D[:,n],97.5)],color=clr)
    plt.plot([k,k],[sap(D[:,n],25),sap(D[:,n],75)],color=clr,lw=3,solid_capstyle='round')
    plt.plot([k],[np.median(D[:,n])],mfc=clr,mec=clr,ms=8,marker='_',mew=2)
    plt.xlim([-0.5,2.5])
    plt.grid(b=False,axis='x')
    plt.title(['alpha-beta','gamma','delta','1-gamma-delta'][n/3])
    plt.gca().set_xticks([0,1,2])
    plt.gca().set_xticklabels(['angry','neutral','smile']) | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
Furthermore, we are concerned about the fact that the comparison across conditions is done within-subject and that the observed values are not independent. We extend the model by fitting a separate logistic model to each subject. In particular, we estimate a separate $\gamma$ parameter for each subject, i.e. $\gamma_{i,\mathrm{face}[i,j]}$. We use a hierarchical prior that pools the estimates across subjects and also takes care of the correlation between conditions.
$$ \begin{bmatrix}
\gamma_{i,s} \\ \gamma_{i,n} \\ \gamma_{i,a}
\end{bmatrix}
\sim \mathcal{N} \Bigg(
\begin{bmatrix}
\mu_s \\ \mu_n \\ \mu_a
\end{bmatrix}
,\Sigma \Bigg)$$
where
$$
\Sigma=
\begin{pmatrix}
\sigma_s^2 & \sigma_s r_{sn} \sigma_n & \sigma_s r_{sa} \sigma_a \\
\sigma_s r_{sn} \sigma_n & \sigma_n^2 & \sigma_n r_{na} \sigma_a \\
\sigma_s r_{sa} \sigma_a & \sigma_n r_{na} \sigma_a & \sigma_a^2
\end{pmatrix}
$$
For each condition we estimate a population mean $\mu$ and a population variance $\sigma^2$. Furthermore, we estimate a correlation $r$ for each pair of conditions. As a consequence, the $\mu$ estimates are not confounded by the correlation. | import pystan
model = """
data {
int<lower=0> N;
int<lower=0> M; // number of subjects
int sid[N]; // subject identifier
int<lower=0,upper=1> coop[N]; // acceptance
int<lower=0,upper=8> fair[N]; // fairness
int<lower=1,upper=3> face[N]; // face expression
}
parameters {
real<lower=-20,upper=20> alpha;
real<lower=0,upper=10> beta;
vector<lower=0,upper=1>[3] gamm[M];
real<lower=0,upper=1> delt;
vector<lower=0,upper=1>[3] mu;
vector<lower=0,upper=1>[3] sigma;
vector<lower=-1,upper=1>[3] r;
}
transformed parameters{
vector[N] x;
vector[3] gammt[3,M];
matrix[3,3] S;
for (i in 1:3) S[i,i]<-square(sigma[i]);
S[1,2]<- sigma[1]*r[1]*sigma[2];S[2,1]<-S[1,2];
S[1,3]<- sigma[1]*r[2]*sigma[3];S[3,1]<-S[1,3];
S[2,3]<- sigma[3]*r[3]*sigma[2];S[3,2]<-S[2,3];
for (m in 1:M){
for (i in 1:3){
gammt[i][m][1]<-gamm[m][i];
gammt[i][m][3]<-delt;
gammt[i][m][2]<- 1- gammt[i][m][1]-gammt[i][m][3];
}}
for (i in 1:N)
x[i]<-inv_logit(alpha+beta*fair[i])
*gammt[face[i]][sid[i]][3]+gammt[face[i]][sid[i]][1];
}
model {
for (i in 1:M) gamm[i]~multi_normal(mu,S);
coop ~ bernoulli(x);
}
"""
sm = pystan.StanModel(model_code=model)
dat = {'N': coop.size,'coop':np.int32(coop),'fair':fair,'face':face+1,'sid':sid+1,'M':1326}
seed=np.random.randint(2**16)
fit=sm.sampling(data=dat,iter=5000,chains=4,thin=5,warmup=2000,n_jobs=4,seed=seed)
outpars=['alpha','beta','delt','mu','sigma','r']
print pystan.misc._print_stanfit(fit,pars=outpars,digits_summary=2)
w= fit.extract()
for op in outpars: np.save(op,w[op])
del w
del fit | _ipynb/No Way Anova - Logistic Regression truimphs Anova.ipynb | simkovic/simkovic.github.io | mit |
An M-estimator minimizes the function
$$Q(e_i, \rho) = \sum_i~\rho \left (\frac{e_i}{s}\right )$$
where $\rho$ is a symmetric function of the residuals
The effect of $\rho$ is to reduce the influence of outliers
$s$ is an estimate of scale.
The robust estimates $\hat{\beta}$ are computed by the iteratively re-weighted least squares algorithm
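Statsmodels' RLM runs this loop internally; as a rough sketch (my addition, not the library's actual code), one IRLS iteration computes weights from scaled residuals and then refits a weighted least squares problem, here with Huber's t weights and the MAD as the scale estimate:

```python
import numpy as np

def huber_weights(u, t=1.345):
    # w(u) = psi(u)/u for Huber's t: 1 inside [-t, t], t/|u| outside
    au = np.abs(u)
    return np.where(au <= t, 1.0, t / np.maximum(au, 1e-12))

def irls(X, y, weight_fn=huber_weights, tol=1e-8, max_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # start from OLS
    for _ in range(max_iter):
        resid = y - X @ beta
        s = np.median(np.abs(resid - np.median(resid))) / 0.6745  # MAD scale
        w = weight_fn(resid / s)
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta
```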
We have several choices available for the weighting functions to be used | norms = sm.robust.norms
def plot_weights(support, weights_func, xlabels, xticks):
    fig = plt.figure(figsize=(12, 8))
    ax = fig.add_subplot(111)
    ax.plot(support, weights_func(support))
    ax.set_xticks(xticks)
    ax.set_xticklabels(xlabels, fontsize=16)
    ax.set_ylim(-0.1, 1.1)
    return ax | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Andrew's Wave | help(norms.AndrewWave.weights)
a = 1.339
support = np.linspace(-np.pi * a, np.pi * a, 100)
andrew = norms.AndrewWave(a=a)
plot_weights(
support, andrew.weights, ["$-\pi*a$", "0", "$\pi*a$"], [-np.pi * a, 0, np.pi * a]
) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Hampel's 17A | help(norms.Hampel.weights)
c = 8
support = np.linspace(-3 * c, 3 * c, 1000)
hampel = norms.Hampel(a=2.0, b=4.0, c=c)
plot_weights(support, hampel.weights, ["-3*c", "0", "3*c"], [-3 * c, 0, 3 * c]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Huber's t | help(norms.HuberT.weights)
t = 1.345
support = np.linspace(-3 * t, 3 * t, 1000)
huber = norms.HuberT(t=t)
plot_weights(support, huber.weights, ["-3*t", "0", "3*t"], [-3 * t, 0, 3 * t]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Least Squares | help(norms.LeastSquares.weights)
support = np.linspace(-3, 3, 1000)
lst_sq = norms.LeastSquares()
plot_weights(support, lst_sq.weights, ["-3", "0", "3"], [-3, 0, 3]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Ramsay's Ea | help(norms.RamsayE.weights)
a = 0.3
support = np.linspace(-3 * a, 3 * a, 1000)
ramsay = norms.RamsayE(a=a)
plot_weights(support, ramsay.weights, ["-3*a", "0", "3*a"], [-3 * a, 0, 3 * a]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Trimmed Mean | help(norms.TrimmedMean.weights)
c = 2
support = np.linspace(-3 * c, 3 * c, 1000)
trimmed = norms.TrimmedMean(c=c)
plot_weights(support, trimmed.weights, ["-3*c", "0", "3*c"], [-3 * c, 0, 3 * c]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Tukey's Biweight | help(norms.TukeyBiweight.weights)
c = 4.685
support = np.linspace(-3 * c, 3 * c, 1000)
tukey = norms.TukeyBiweight(c=c)
plot_weights(support, tukey.weights, ["-3*c", "0", "3*c"], [-3 * c, 0, 3 * c]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Scale Estimators
Robust estimates of the location | x = np.array([1, 2, 3, 4, 500]) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
The mean is not a robust estimator of location | x.mean() | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
The median, on the other hand, is a robust estimator with a breakdown point of 50% | np.median(x) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Analogously for the scale
The standard deviation is not robust | x.std() | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
Median Absolute Deviation
$$ \operatorname{median}_i \left( |X_i - \operatorname{median}_j(X_j)| \right) $$
The standardized Median Absolute Deviation is a consistent estimator of $\sigma$:
$$\hat{\sigma}=K \cdot MAD$$
where $K$ depends on the distribution. For the normal distribution, for example,
$$K = 1/\Phi^{-1}(.75) \approx 1.4826.$$ The code below computes the reciprocal, $\Phi^{-1}(.75) \approx 0.6745$, which statsmodels uses as the divisor. | stats.norm.ppf(0.75)
print(x)
sm.robust.scale.mad(x)
np.array([1, 2, 3, 4, 5.0]).std() | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
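As a quick sanity check (an illustrative sketch, not part of the original example), the standardized MAD can be computed by hand and should match the statsmodels value above:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 500])
K = 1 / 0.6744897501960817            # 1 / Phi^{-1}(0.75)
print(K * np.median(np.abs(x - np.median(x))))  # ~1.4826, same as sm.robust.scale.mad(x)
```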
Another robust estimator of scale is the Interquartile Range (IQR)
$$\hat{X}_{0.75} - \hat{X}_{0.25},$$
where $\hat{X}_{p}$ is the sample $p$-th quantile.
The standardized IQR, given by $K \cdot \text{IQR}$ for
$$K = \frac{1}{\Phi^{-1}(.75) - \Phi^{-1}(.25)} \approx 0.74,$$
is a consistent estimator of the standard deviation for normal data. | sm.robust.scale.iqr(x) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
The IQR is less robust than the MAD in the sense that it has a lower breakdown point: it can withstand 25\% outlying observations before being completely ruined, whereas the MAD can withstand 50\% outlying observations. However, the IQR is better suited for asymmetric distributions.
Yet another robust estimator of scale is the $Q_n$ estimator, introduced in Rousseeuw & Croux (1993), 'Alternatives to the Median Absolute Deviation'. Then $Q_n$ estimator is given by
$$
Q_n = K \left\lbrace \vert X_{i} - X_{j}\vert : i<j\right\rbrace_{(h)}
$$
where $h\approx (1/4){{n}\choose{2}}$ and $K$ is a given constant. In words, the $Q_n$ estimator is the normalized $h$-th order statistic of the absolute differences of the data. The normalizing constant $K$ is usually chosen as 2.219144, to make the estimator consistent for the standard deviation in the case of normal data. The $Q_n$ estimator has a 50\% breakdown point and a 82\% asymptotic efficiency at the normal distribution, much higher than the 37\% efficiency of the MAD. | sm.robust.scale.qn_scale(x) | v0.13.1/examples/notebooks/generated/robust_models_1.ipynb | statsmodels/statsmodels.github.io | bsd-3-clause |
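For intuition, here is a brute-force sketch of the statistic (my addition; statsmodels additionally applies finite-sample correction factors, so values on small samples like our five-point x will not match exactly):

```python
import numpy as np

def qn_brute(a, K=2.219144):
    n = len(a)
    # sorted absolute pairwise differences |x_i - x_j| for i < j
    diffs = np.sort(np.abs(a[:, None] - a[None, :])[np.triu_indices(n, k=1)])
    h = n // 2 + 1                 # h = floor(n/2) + 1
    k = h * (h - 1) // 2           # index of the choose(h, 2)-th order statistic
    return K * diffs[k - 1]

print(qn_brute(np.array([1., 2., 3., 4., 500.])))
```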