markdown | code | path | repo_name | license
---|---|---|---|---|
As expected, the first time we try to set the value for california, the key doesn't exist in the dictionary, so the right-hand side of the equals sign raises an error. That's easy to fix like this: | summed = dict()
for row in data:
key, value = row
if key not in summed:
summed[key] = int()
summed[key] = summed[key] + value
summed | python-tutorials/defaultdict.ipynb | Pinafore/ds-hw | mit |
Let's see one more example where, instead of summing the numbers, we want to collect everything into a list. We replace int() with list(), since we want to make an empty list, and we also need to change the summing line to use append instead: | merged = dict()
for row in data:
key, value = row
if key not in merged:
merged[key] = list()
merged[key].append(value)
merged | python-tutorials/defaultdict.ipynb | Pinafore/ds-hw | mit |
It's inconvenient to do this check every time, so Python has a nice way to make this pattern simpler. This is what collections.defaultdict was designed for. It does the following:
Takes a single argument, a function, which we will call func
When a key is accessed (for example with merged[key]), it checks whether the key exists. If it doesn't, then instead of raising an error it initializes the key to the return value of func and proceeds as normal
Let's see both examples from above using this: | from collections import defaultdict
summed = defaultdict(int)
for row in data:
key, value = row
summed[key] = summed[key] + value
summed
merged = defaultdict(list)
for row in data:
key, value = row
merged[key].append(value)
merged
def myinit():
return -100
summed = defaultdict(myinit)
for row in data:
key, value = row
summed[key] += value
summed | python-tutorials/defaultdict.ipynb | Pinafore/ds-hw | mit |
As expected, the results are exactly the same; the behavior depends only on the initializer function you pass in. This function is called a factory, since each time a key needs to be initialized you can imagine the function acting as a factory that creates new values. Let's cover one of the common mistakes with default dictionaries before concluding. The source of this mistake is that any time a non-existent key is accessed, it is initialized. | d = defaultdict(str)
# initially this is empty so all of these should be false
print('pedro in dictionary:', 'pedro' in d)
print('jordan in dictionary:', 'jordan' in d)
# Lets set something in the dictionary now and check that again
d['jordan'] = 'professor'
print('jordan is in dictionary:', 'jordan' in d)
print('pedro is in dictionary:', 'pedro' in d)
# Lets accidentally access 'pedro' before setting it then see what happens
pedro_job = d['pedro']
print('pedro is in dictionary:', 'pedro' in d)
print(d)
print('-->', d['pedro'], '<--', type(d['pedro'])) | python-tutorials/defaultdict.ipynb | Pinafore/ds-hw | mit |
So this is odd! You never set a key (only accessed it), but nonetheless pedro is in the dictionary. This is because when the 'pedro' key was accessed and not there, Python set it to the return value of str, which is an empty string. Let's set this to the real value and be done: | d['pedro'] = 'PhD Student'
print('pedro is in dictionary:', 'pedro' in d)
print(d)
print('-->', d['pedro'], '<--', type(d['pedro'])) | python-tutorials/defaultdict.ipynb | Pinafore/ds-hw | mit |
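A related tip (a small sketch, not part of the original notebook): membership tests with in and lookups with .get() never trigger the default factory, so you can inspect a defaultdict without accidentally growing it; only indexing with d[key] inserts the default.
d = defaultdict(str)
print('pedro' in d)       # False -- `in` never inserts a key
print(d.get('pedro'))     # None -- .get() also leaves the dict untouched
print(repr(d['pedro']))   # '' -- indexing inserts the default value
print('pedro' in d)       # True now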
This notebook reproduces both MAML and the similar Reptile.
The Problem
https://towardsdatascience.com/fun-with-small-image-data-sets-8c83d95d0159
The goal of both of these algorithms is to learn to do well at the K-shot learning problem.
In K-shot learning, we need to train a neural network to generalize based on a very small number of examples (often on the order of 10 or so) instead of the often thousands of examples we see in datasets like ImageNet. However, in preparation for K-shot learning, you are allowed to train on many similar K-shot problems to learn the best way to generalize based on only K examples.
This is learning to learn or metalearning. We have already seen metalearning in my post on "Learning to Learn by Gradient Descent by Gradient Descent", which you can find here:
https://becominghuman.ai/paper-repro-learning-to-learn-by-gradient-descent-by-gradient-descent-6e504cc1c0de
The metalearning approach of both Reptile and MAML is to come up with an initialization for neural networks that is easily generalizable to similar tasks. This is different to "Learning to Learn by Gradient Descent by Gradient Descent" in which we weren't learning an initialization but rather an optimizer.
This approach is very similar to transfer learning, in which we train a network on, say, ImageNet, and it later turns out that fine-tuning this network makes it easy to learn another image dataset, with much less data. Indeed, transfer learning can be seen as a form of metalearning.
The difference here is that the initial network was trained with the explicit purpose of being easily generalizable, whereas transfer learning just "accidentally" happens to work, and thus might not work optimally.
Indeed, it is fairly easy to find a case in which transfer learning fails to learn a good initialization. For this we need to look at the 1D sine wave regression problem.
Sine Wave Regression
In this K-shot problem, each task consists of learning a modified sine function. Specifically, for each task, the underlying function will be of the form y = a sin(x + b), with both a and b chosen randomly, and the goal of our neural network is to learn to find y given x based on only 10 (x, y) pairs.
Let's write our sine wave task and plot a couple of examples: | class SineWaveTask:
def __init__(self):
self.a = np.random.uniform(0.1, 5.0)
self.b = np.random.uniform(0, 2*np.pi)
self.train_x = None
def f(self, x):
return self.a * np.sin(x + self.b)
def training_set(self, size=10, force_new=False):
if self.train_x is None and not force_new:
self.train_x = np.random.uniform(-5, 5, size)
x = self.train_x
elif not force_new:
x = self.train_x
else:
x = np.random.uniform(-5, 5, size)
y = self.f(x)
return torch.Tensor(x), torch.Tensor(y)
def test_set(self, size=50):
x = np.linspace(-5, 5, size)
y = self.f(x)
return torch.Tensor(x), torch.Tensor(y)
def plot(self, *args, **kwargs):
x, y = self.test_set(size=100)
return plt.plot(x.numpy(), y.numpy(), *args, **kwargs)
SineWaveTask().plot()
SineWaveTask().plot()
SineWaveTask().plot()
plt.show() | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
To understand why this is going to be a problem for transfer learning, let's plot 1,000 of them: | for _ in range(1000):
SineWaveTask().plot(color='black') | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
Looks like there is a lot of overlap at each x value, to say the least...
Since there are multiple possible values for each x across multiple tasks, if we train a single neural net to deal with multiple tasks at the same time, its best bet will simply be to return the average y value across all tasks for each x. What does that look like? | all_x, all_y = [], []
for _ in range(10000):
curx, cury = SineWaveTask().test_set(size=100)
all_x.append(curx.numpy())
all_y.append(cury.numpy())
avg, = plt.plot(all_x[0], np.mean(all_y, axis=0))
rand, = SineWaveTask().plot()
plt.legend([avg, rand], ['Average', 'Random'])
plt.show() | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
The average is basically 0, which means a neural network trained on a lot of tasks would simply return 0 everywhere! It is unclear that this will actually help very much, and yet this is the transfer learning approach in this case...
Let's see how well it does by actually implementing the model: | TRAIN_SIZE = 10000
TEST_SIZE = 1000
class ModifiableModule(nn.Module):
def params(self):
return [p for _, p in self.named_params()]
def named_leaves(self):
return []
def named_submodules(self):
return []
def named_params(self):
subparams = []
for name, mod in self.named_submodules():
for subname, param in mod.named_params():
subparams.append((name + '.' + subname, param))
return self.named_leaves() + subparams
def set_param(self, name, param):
if '.' in name:
n = name.split('.')
module_name = n[0]
rest = '.'.join(n[1:])
for name, mod in self.named_submodules():
if module_name == name:
mod.set_param(rest, param)
break
else:
setattr(self, name, param)
def copy(self, other, same_var=False):
for name, param in other.named_params():
if not same_var:
param = V(param.data.clone(), requires_grad=True)
self.set_param(name, param)
class GradLinear(ModifiableModule):
def __init__(self, *args, **kwargs):
super().__init__()
ignore = nn.Linear(*args, **kwargs)
self.weights = V(ignore.weight.data, requires_grad=True)
self.bias = V(ignore.bias.data, requires_grad=True)
def forward(self, x):
return F.linear(x, self.weights, self.bias)
def named_leaves(self):
return [('weights', self.weights), ('bias', self.bias)]
class SineModel(ModifiableModule):
def __init__(self):
super().__init__()
self.hidden1 = GradLinear(1, 40)
self.hidden2 = GradLinear(40, 40)
self.out = GradLinear(40, 1)
def forward(self, x):
x = F.relu(self.hidden1(x))
x = F.relu(self.hidden2(x))
return self.out(x)
def named_submodules(self):
return [('hidden1', self.hidden1), ('hidden2', self.hidden2), ('out', self.out)]
SINE_TRAIN = [SineWaveTask() for _ in range(TRAIN_SIZE)]
SINE_TEST = [SineWaveTask() for _ in range(TEST_SIZE)]
ONE_SIDED_EXAMPLE = None
while ONE_SIDED_EXAMPLE is None:
cur = SineWaveTask()
x, _ = cur.training_set()
x = x.numpy()
if np.max(x) < 0 or np.min(x) > 0:
ONE_SIDED_EXAMPLE = cur
SINE_TRANSFER = SineModel()
def sine_fit1(net, wave, optim=None, get_test_loss=False, create_graph=False, force_new=False):
net.train()
if optim is not None:
optim.zero_grad()
x, y = wave.training_set(force_new=force_new)
loss = F.mse_loss(net(V(x[:, None])), V(y).unsqueeze(1))
loss.backward(create_graph=create_graph, retain_graph=True)
if optim is not None:
optim.step()
if get_test_loss:
net.eval()
x, y = wave.test_set()
loss_test = F.mse_loss(net(V(x[:, None])), V(y))
return loss.data.cpu().numpy()[0], loss_test.data.cpu().numpy()[0]
return loss.data.cpu().numpy()#[0]
def fit_transfer(epochs=1):
optim = torch.optim.Adam(SINE_TRANSFER.params())
for _ in range(epochs):
for t in random.sample(SINE_TRAIN, len(SINE_TRAIN)):
sine_fit1(SINE_TRANSFER, t, optim)
fit_transfer()
def copy_sine_model(model):
m = SineModel()
m.copy(model)
return m
def eval_sine_test(model, test, fits=(0, 1), lr=0.01):
xtest, ytest = test.test_set()
xtrain, ytrain = test.training_set()
model = copy_sine_model(model)
# Not sure if this should be Adam or SGD.
optim = torch.optim.SGD(model.params(), lr)
def get_loss(res):
return F.mse_loss(res, V(ytest[:, None])).cpu().data.numpy()#[0]
fit_res = []
if 0 in fits:
results = model(V(xtest[:, None]))
fit_res.append((0, results, get_loss(results)))
for i in range(np.max(fits)):
sine_fit1(model, test, optim)
if i + 1 in fits:
results = model(V(xtest[:, None]))
fit_res.append(
(
i + 1,
results,
get_loss(results)
)
)
return fit_res
def plot_sine_test(model, test, fits=(0, 1), lr=0.01):
xtest, ytest = test.test_set()
xtrain, ytrain = test.training_set()
fit_res = eval_sine_test(model, test, fits, lr)
train, = plt.plot(xtrain.numpy(), ytrain.numpy(), '^')
ground_truth, = plt.plot(xtest.numpy(), ytest.numpy())
plots = [train, ground_truth]
legend = ['Training Points', 'True Function']
for n, res, loss in fit_res:
cur, = plt.plot(xtest.numpy(), res.cpu().data.numpy()[:, 0], '--')
plots.append(cur)
legend.append(f'After {n} Steps')
plt.legend(plots, legend)
plt.show()
plot_sine_test(SINE_TRANSFER, SINE_TEST[0], fits=[0, 1, 10], lr=0.02) | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
Basically it looks like our transfer model learns a constant function, and it is really hard to fine-tune it into anything better than a constant function. It's not even clear that our transfer learning is any better than random initialization... | def plot_sine_learning(models, fits=(0, 1), lr=0.01, marker='s', linestyle='--'):
data = {'model': [], 'fits': [], 'loss': [], 'set': []}
for name, models in models:
if not isinstance(models, list):
models = [models]
for n_model, model in enumerate(models):
for n_test, test in enumerate(SINE_TEST):
n_test = n_model * len(SINE_TEST) + n_test
fit_res = eval_sine_test(model, test, fits, lr)
for n, _, loss in fit_res:
data['model'].append(name)
data['fits'].append(n)
data['loss'].append(loss)
data['set'].append(n_test)
ax = sbs.tsplot(pd.DataFrame(data), condition='model', value='loss',
time='fits', unit='set', marker=marker, linestyle=linestyle)
plot_sine_learning(
[('Transfer', SINE_TRANSFER), ('Random', SineModel())],
list(range(100)),
marker='',
linestyle='-'
) | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
MAML
We now come to MAML, the first of the two algorithms we will look at today.
As mentioned before, we are trying to find a set of weights such that running gradient descent on similar tasks makes progress as quickly as possible. MAML takes this extremely literally by running one iteration of gradient descent and then updating the initial weights based on how much progress that one iteration made towards the true task. More concretely it:
* Creates a copy of the initialization weights
* Runs an iteration of gradient descent for a random task on the copy
* Backpropagates the loss on a test set through the iteration of gradient descent and back to the initial weights, so that we can update the initial weights in a direction in which they would have been easier to update.
We thus need to take a gradient of a gradient, i.e. a second-order derivative, in this process. Fortunately this is something that PyTorch now supports; unfortunately PyTorch makes it a bit awkward to update the parameters of a model in a way that still lets us run gradient descent through them (we already saw this in "Learning to Learn by Gradient Descent by Gradient Descent"), which explains the unusual way in which the model is written.
Because we are going to use second derivatives, we need to make sure that the computational graph that allowed us to compute the original gradients stays around, which is why we pass create_graph=True to .backward().
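Written out for a single task $\tau$, with inner learning rate $\alpha$ and meta learning rate $\beta$, the update being implemented is
$$\theta \leftarrow \theta - \beta \, \nabla_{\theta} \, L_{\tau}\left(\theta - \alpha \nabla_{\theta} L_{\tau}(\theta)\right)$$
where the outer $\nabla_{\theta}$ is exactly what requires differentiating through the inner gradient step.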
The code below also implements first order MAML, which we explain later: | def maml_sine(model, epochs, lr_inner=0.01, batch_size=1, first_order=False):
optimizer = torch.optim.Adam(model.params())
for _ in tqdm(range(epochs)):
# Note: the paper doesn't specify the meta-batch size for this task,
# so I just use 1 for now.
for i, t in enumerate(random.sample(SINE_TRAIN, len(SINE_TRAIN))):
new_model = SineModel()
new_model.copy(model, same_var=True)
loss = sine_fit1(new_model, t, create_graph=not first_order)
for name, param in new_model.named_params():
grad = param.grad
if first_order:
grad = V(grad.detach().data)
new_model.set_param(name, param - lr_inner * grad)
sine_fit1(new_model, t, force_new=True)
if (i + 1) % batch_size == 0:
optimizer.step()
optimizer.zero_grad()
SINE_MAML = [SineModel() for _ in range(5)]
for m in SINE_MAML:
maml_sine(m, 4)
plot_sine_test(SINE_MAML[0], SINE_TEST[0], fits=[0, 1, 10], lr=0.01)
plt.show()
plot_sine_learning(
[('Transfer', SINE_TRANSFER), ('MAML', SINE_MAML[0]), ('Random', SineModel())],
list(range(10)),
)
plt.show()
plot_sine_test(SINE_MAML[0], ONE_SIDED_EXAMPLE, fits=[0, 1, 10], lr=0.01)
plt.show() | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
So MAML works much better than transfer learning or random initialization for this problem. Yay!
However, it is a bit annoying that we have to use second order derivatives for this... it forces the code to be complicated and it also makes things a fair bit slower (around 33% according to the paper, which matches what we shall see here).
Is there an approximation of MAML that doesn't use the second order derivatives? Of course, we can simply pretend that the gradients that we used for the inner gradient descent just came out of nowhere, and thus just improve the initial parameters without taking into account these second order derivatives, which is what we did before by handling the first_order parameter.
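Concretely, if $\theta' = \theta - \alpha \nabla_{\theta} L_{\tau}(\theta)$ are the adapted weights, full MAML backpropagates through the dependence of $\theta'$ on $\theta$, while the first-order variant treats that dependence as constant and simply applies the gradient evaluated at $\theta'$ directly:
$$\theta \leftarrow \theta - \beta \, \nabla_{\theta'} L_{\tau}(\theta')$$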
So how good is this first order approximation? Almost as good as the original MAML, as it turns out! | SINE_MAML_FIRST_ORDER = [SineModel() for _ in range(5)]
for m in SINE_MAML_FIRST_ORDER:
maml_sine(m, 4, first_order=True)
plot_sine_test(SINE_MAML_FIRST_ORDER[0], SINE_TEST[0], fits=[0, 1, 10], lr=0.01)
plt.show()
plot_sine_learning(
[('MAML', SINE_MAML), ('MAML First Order', SINE_MAML_FIRST_ORDER)],
list(range(10)),
)
plt.show()
plot_sine_test(SINE_MAML_FIRST_ORDER[0], ONE_SIDED_EXAMPLE, fits=[0, 1, 10], lr=0.01)
plt.show() | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
Reptile
The first order approximation for MAML tells us that something interesting is going on: after all, it seems like how the gradients were generated should be relevant for a good initialization, and yet it apparently isn't so much.
Reptile takes this idea even further by telling us to do the following: run SGD for a few iterations on a given task, and then move your initialization weights a little bit in the direction of the weights you obtained after your k iterations of SGD. An algorithm so simple, it takes only a couple lines of pseudocode:
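(The paper's pseudocode figure is not reproduced here; the following is a plain-text rendering of it:)
Initialize Φ, the vector of initial parameters
for iteration = 1, 2, 3, ... do
    Sample task τ, corresponding to loss L_τ
    Compute W = SGD(L_τ, Φ, k)   # k steps of SGD starting from Φ
    Update Φ ← Φ + ε (W − Φ)
end for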
When I first read this, I was quite consternated: isn't this the same as training your weights alternatively on each task, just like in transfer learning? How would this ever work?
Indeed, the Reptile paper anticipates this very reaction:
You might be thinking “isn’t this the same as training on the expected loss Eτ [Lτ]?” and then checking if the date is April 1st.
As it happens, I am writing this on April 2nd, so this is all serious. So what's going on?
Well, indeed if we had run SGD for a single iteration, we would have something equivalent to the transfer learning described above. But we aren't: we are using a few iterations, and so the weights we update towards each time actually depend indirectly on the second derivatives of the loss, similar to MAML.
Ok, but still, why would this work? Well Reptile provides a compelling intuition for this: for each task, there are weights that are optimal. Indeed, there are probably many sets of weights that are optimal. This means that if you take several tasks, there should be a set of weights for which the distance to at least one optimal set of weights for each task is minimal. This set of weights is where we want to initialize our networks, since it is likely to be the one for which the least work is necessary to reach the optimum for any task. This is the set of weights that Reptile finds.
We can see this expressed visually in the following image: the two black lines represent the sets of optimal weights for two different tasks, while the gray line represents the initialization weights. Reptile tries to get the initialization weights closer and closer to the point where the optimal weights are nearest to each other.
Let's now implement Reptile and compare it to MAML: | def reptile_sine(model, epochs, lr_inner=0.01, lr_outer=0.001, k=32, batch_size=32):
optimizer = torch.optim.Adam(model.params(), lr=lr_outer)
name_to_param = dict(model.named_params())
for _ in tqdm(range(epochs)):
for i, t in enumerate(random.sample(SINE_TRAIN, len(SINE_TRAIN))):
new_model = SineModel()
new_model.copy(model)
inner_optim = torch.optim.SGD(new_model.params(), lr=lr_inner)
for _ in range(k):
sine_fit1(new_model, t, inner_optim)
for name, param in new_model.named_params():
cur_grad = (name_to_param[name].data - param.data) / k / lr_inner
if name_to_param[name].grad is None:
name_to_param[name].grad = V(torch.zeros(cur_grad.size()))
name_to_param[name].grad.data.add_(cur_grad / batch_size)
# if (i + 1) % 500 == 0:
# print(name_to_param[name].grad)
if (i + 1) % batch_size == 0:
to_show = name_to_param['hidden1.bias']
optimizer.step()
optimizer.zero_grad()
SINE_REPTILE = [SineModel() for _ in range(5)]
for m in SINE_REPTILE:
reptile_sine(m, 4, k=3, batch_size=1)
plot_sine_test(SINE_REPTILE[0], SINE_TEST[0], fits=[0, 1, 10], lr=0.01)
plt.show()
plot_sine_learning(
[('MAML', SINE_MAML), ('MAML First Order', SINE_MAML_FIRST_ORDER), ('Reptile', SINE_REPTILE)],
list(range(32)),
)
plt.show()
plot_sine_test(SINE_REPTILE[0], ONE_SIDED_EXAMPLE, fits=[0, 1, 10], lr=0.01)
plt.show() | pytorch/ANIML.ipynb | vermouth1992/tf-playground | apache-2.0 |
<h2 style='color:green'>Reviewing XML Parsing</h2>
See if you can use the pattern displayed above to read in and then print the text within "Rom.xml". Note that this file does not contain an "eebo" tag.
Filtering Selections
Sometimes an HTML selection returns a mixture of elements we wish to process and others we wish to skip altogether. For example, suppose a web page has multiple div1 tags, and we only wish to parse some of them. In that case, we can use a conditional to ensure we only process the ones we care about. Let's see this in action: | import bs4
# read in the xml file
soup = bs4.BeautifulSoup(open('Ode.xml'), 'html.parser')
# get a list of the div1 tags
elems = soup.find_all('div1')
# iterate over the div1 tags in soup
for i in elems:
# only proceed if the current tag has the attribute type="ode"
if i['type'] == 'ode':
# print the text content of this div1 element
print(i.get_text()) | beautifulsoup/next-steps-with-html-parsing.ipynb | YaleDHLab/lab-workshops | mit |
The relationship between the number of clusters and the total number of ideas as the threshold $r$ is varied. The horizontal axis is $r$ and the vertical axis is $1- (\text{number of clusters})/(\text{total number of ideas})$, shown as a normal plot (top) and a log-log plot (bottom). | trial = 100
r = np.logspace(-2, np.log10(0.2), num=50)
phi1 = []
for _r in r:
_phi = 0.
for t in range(trial):
meeting = Meeting(K=50, N=6, r=_r, draw=False)
meeting.init()
_phi += len(uniq_list([x[1][1] for x in meeting.ideas]))/float(len(meeting.ideas))
phi1.append(1 - _phi/trial)
def myplot1(x, y, xfit=np.array([]), yfit=np.array([]), param=None,
scale=['linear', 'linear', 'log', 'log']):
"""my plot function
x: {'label_x': xdata}
y: {'label_y': ydata}
param: {'a': 10, 'b': 20}
"""
if param:
s = [r'$%s = %f$' % (k, v) for k, v in param.items()]
label = s[0]
for _s in s[1:]:
label += ", " + _s
label_x, xdata = x.items()[0]
label_y, ydata = y.items()[0]
fig = plt.figure(figsize=(8, 12))
ax1 = fig.add_subplot(211)
ax1.plot(xdata, ydata)
if len(xfit):
ax1.plot(xfit, yfit, label=label)
ax1.legend(loc='best')
ax1.set_xlabel(label_x)
ax1.set_ylabel(label_y)
ax1.set_xscale(scale[0])
ax1.set_yscale(scale[1])
ax2 = fig.add_subplot(212)
ax2.plot(xdata, ydata)
if len(xfit):
ax2.plot(xfit, yfit, label=label)
ax2.legend(loc='best')
ax2.set_xlabel(label_x)
ax2.set_ylabel(label_y)
ax2.set_xscale(scale[2])
ax2.set_yscale(scale[3])
plt.show() | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Normal (linear-scale) plot | myplot1({r'$r$': r}, {r'$\phi$': phi1}) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Fitting helper function | def myfit(fit_func, parameter, x, y, xmin, xmax):
"""my fitting and plotting function.
fit_func: function (parameter(type:list), x)
parameter: list of tuples: [('param1', param1), ('param2', param2), ...]
x, y: dict
xmin, xmax: float
"""
xkey, xdata = x.items()[0]
ykey, ydata = y.items()[0]
def fit(parameter, x, y):
return y - fit_func(parameter, x)
# use x : xmin < x < xmax
i = 0
while xdata[i] < xmin:
i += 1
imin, imax = i, i
while xdata[i] < xmax:
i += 1
imax = i - 1
paramdata = [b for a, b in parameter]
paramkey = [a for a, b in parameter]
res = leastsq(fit, paramdata, args=(xdata[imin:imax], ydata[imin:imax]))
for key, p in zip(paramkey, res[0]):
print key + ": " + str(p)
fitted = fit_func(res[0], xdata[imin:imax])
fittedparam = dict([(k, v) for k, v in zip(paramkey, res[0])])
myplot1(x, y, xdata[imin:imax], fitted, param=fittedparam) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Fitting by least squares with $\phi(r) = 10^{b}r^{a}$ | param = [('a', 1.5), ('b', 0.)]
xmin, xmax = 0., 0.07
x = {r'$r$': r}
y = {r'$\phi$': phi1}
def fit_func(parameter, x):
a = parameter[0]
b = parameter[1]
return np.power(x, a)*np.power(10, b)
myfit(fit_func, param, x, y, xmin, xmax) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Now fit a straight line with both variables on a logarithmic scale; the fitted function obtained from the resulting parameters is plotted after transforming back to the original scale. Compared with the direct power-law fit, the slope of the line in the small-$r$ region seems to agree better. | a = 1.5
b = 0.
param = [a, b]
rmin, rmax = 0., 0.07
def fit_func(parameter, x):
a = parameter[0]
b = parameter[1]
return a*np.log10(x) + b
def fit(parameter, x, y):
return np.log10(y) - fit_func(parameter, x)
i = 0
while r[i] < rmin:
i += 1
imin, imax = i, i
while r[i] < rmax:
i += 1
imax = i - 1
res = leastsq(fit, param, args=(r[imin:imax], phi1[imin:imax]))
print u"傾き: " + str(res[0][0])
print u"切片: " + str(res[0][1])
R1 = np.power(10, fit_func(res[0], r[imin:imax]))
myplot1({r'$r$': r}, {r'$\phi$': phi1}, r[imin:imax], R1, param={'a': res[0][0], 'b': res[0][1]}) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Since the curve is S-shaped, we fit
$$\phi (r) = 1 - \exp \left[ - \left( \frac{r}{\omega} \right)^{a} \right]$$
by least squares with respect to the parameter $\omega$. | omega = 0.06
a = 2.0
param = [omega, a]
rmin, rmax = 0.01, 0.2
def fit_func(parameter, x):
omega = parameter[0]
a = parameter[1]
return 1 - np.exp(-(x/omega)**a)
def fit(parameter, x, y):
return y - fit_func(parameter, x)
i = 0
while r[i] < rmin:
i += 1
imin, imax = i, i
while r[i] < rmax:
i += 1
imax = i - 1
res = leastsq(fit, param, args=(r[imin:imax], phi1[imin:imax]))
print u"omega: " + str(res[0][0])
print u"a: " + str(res[0][1])
R3 = fit_func(res[0], r[imin:imax])
myplot1({r'$r$': r}, {r'$\phi$': phi1}, r[imin:imax], R3, param={'\omega': res[0][0], 'a': res[0][1]}) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Relationship between the number of clusters and the total number of points when $N$ is varied with $r$ fixed
Let us plot the graph with the number of $X_{i}$, $N$, on the horizontal axis and $1-(\text{number of clusters}/\text{total number of points})$ on the vertical axis. | trial = 100
N = np.arange(1, 20)
phi6 = []
for _N in N:
_phi = 0.
for t in range(trial):
meeting = Meeting(K=50, N=_N, r=0.07, draw=False)
meeting.init()
_phi += len(uniq_list([x[1][1] for x in meeting.ideas]))/float(len(meeting.ideas))
phi6.append(1 - _phi/trial)
myplot1({r'$N$': N}, {r'$\phi$': phi6}) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
Here the total number of ideas is proportional to the number of participants and to the number of ideas per person; the result depends only on this total, and it does not matter which of the two we vary. We therefore look at the case where the number of ideas per person, $S$, is varied instead, since it allows a finer sweep. | trial = 100
S = np.arange(10, 70)
phi7 = []
for _S in S:
_phi = 0.
for t in range(trial):
meeting = Meeting(K=50, S=_S, N=6, r=0.07, draw=False)
meeting.init()
_phi += len(uniq_list([x[1][1] for x in meeting.ideas]))/float(len(meeting.ideas))
phi7.append(1 - _phi/trial)
myplot1({r'$S$': S}, {r'$\phi$': phi7}) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
From the shape of the graph, we try fitting
$$\phi(S) = 1- \exp\left[- \left( \frac{S}{\omega} \right)^{a}\right]$$
to the data. | omega = 20.
a = 1.
param = [omega, a]
def fit_func(parameter, x):
omega = parameter[0]
a = parameter[1]
return 1. - np.exp(-(x/omega)**a)
def fit(parameter, x, y):
return y - fit_func(parameter, x)
res = leastsq(fit, param, args=(S, phi7))
print u"omega: " + str(res[0][0])
print u"a: " + str(res[0][1])
R5 = fit_func(res[0], S)
myplot1({r'$S$': S}, {r'$\phi$': phi7}, S, R5, param={r'\omega': res[0][0], r'a': res[0][1]}) | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
For a given threshold $r$, if we pick a point uniformly at random in the region $\Omega$, the expected probability that another point lies within distance $r$ of it is, by analytic calculation,
$$p'(r) = \frac{1}{2}r^{4} -\frac{8}{3}r^{3} + \pi r^{2}$$
For a given $r$, if the total number of points is $M$, the expected degree $l$ of a single point is
$$l = p'(r)(M-1) = \left( \frac{1}{2}r^{4} -\frac{8}{3}r^{3} + \pi r^{2} \right)(M-1)$$
We check this against the actual simulation results. | trial = 100
r = np.linspace(0.01, 0.5, num=50)
phi3 = []
for _r in r:
_phi = 0.
for t in range(trial):
meeting = Meeting(K=50, N=6, r=_r, draw=False)
meeting.init()
_phi += meeting.ave_l
phi3.append(_phi/trial)
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111)
r = np.linspace(0.01, 0.5, num=50)
def func(x):
return (1./2*x**4 - 8/3.*x**3 + np.pi*x**2)*(120-1)
y = func(r)
def func2(x):
return np.sqrt((-0.25*x**8 + 8/3.*x**7 - (64/9.+np.pi)*x**6 + 16/3.*np.pi*x**5
+ (0.5-np.pi**2)*x**4 - 8/3.*x**3 + np.pi*x**2)*(120-1)/(trial))
delta = func2(r)
y1 = y + delta
y2 = y - delta
y3 = np.zeros(50)
y3[y2>0] = y2[y2>0]
ax.fill_between(r, y1, y3, facecolor='green', alpha=0.2)
ax.plot(r, phi3)
ax.plot(r, y)
ax.set_xlabel(r'$r$')
ax.set_ylabel(r"Average number of edges for each time: $l$")
plt.show() | 07_model_3_4_1.ipynb | ssh0/sotsuron_for_public | mit |
It looks like we can use culmen length to identify Adelie penguins.
Exercise: Use make_kdeplots to display the distributions of one of the other two features:
'Body Mass (g)'
'Culmen Depth (mm)' | # Solution goes here | notebooks/clustering.ipynb | AllenDowney/ThinkBayes2 | mit |
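A possible solution (hypothetical — it assumes make_kdeplots takes the DataFrame and a column name, matching how it was used above):
# Hypothetical solution; assumes the make_kdeplots helper defined earlier.
make_kdeplots(df, 'Body Mass (g)')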
Exercise: Make a scatter plot using any other pair of variables. | # Solution goes here | notebooks/clustering.ipynb | AllenDowney/ThinkBayes2 | mit |
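One possible solution, as a sketch — it assumes the penguin DataFrame is named df and has a 'Species' column:
# Hypothetical solution: scatter plot of two other features, colored by species.
import matplotlib.pyplot as plt

for species, group in df.groupby('Species'):
    plt.plot(group['Culmen Depth (mm)'], group['Body Mass (g)'],
             'o', alpha=0.4, label=species)
plt.xlabel('Culmen Depth (mm)')
plt.ylabel('Body Mass (g)')
plt.legend()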
Summary
The k-means algorithm does unsupervised clustering, which means that we don't tell it where the clusters are; we just provide the data and ask it to find a given number of clusters.
In this notebook, we asked it to find clusters in a group of penguins based on two features, flipper length and culmen length. The clusters it finds reflect the species in the dataset, especially if we standardize the data.
In this example we used only two features, because that makes it easy to visualize the results. But k-means extends easily to any number of dimensions (see the exercise below).
So, what is this good for?
Well, Wikipedia provides this list of applications. Looking over those applications, I see a few general ideas:
From an engineering point of view, clustering can be used to automate some kinds of analysis people do, which might be faster, more accurate, or less expensive. And it can work with large datasets and high numbers of dimensions that people can't handle.
From a scientific point of view, clustering provides a way to test whether the patterns we see are in the data or in our minds.
This second point is related to old philosophical questions about the nature of categories. Putting things into categories seems to be a natural part of how humans think, but we have to wonder whether the categories we find truly "carve nature at its joints", as Plato put it.
If a clustering algorithm finds the same "joints" we do, we might have more confidence they are not entirely in our minds.
Exercise: Use the scikit-learn implementation of k-means to find clusters using all four features (flipper length, culmen length and depth, body mass). How do the results compare to what we got with just two features? | # Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here | notebooks/clustering.ipynb | AllenDowney/ThinkBayes2 | mit |
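One way to approach this exercise, as a sketch — the two column names not quoted above ('Flipper Length (mm)' and 'Culmen Length (mm)') are assumptions:
# Hypothetical solution: k-means on all four standardized features.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

cols = ['Flipper Length (mm)', 'Culmen Length (mm)',
        'Culmen Depth (mm)', 'Body Mass (g)']   # assumed column names
X = StandardScaler().fit_transform(df[cols].dropna())
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])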
Reverchon Model
Mathematical Modeling of Supercritical Extraction of Sage Oil | P = 9 #MPa
T = 323 # K
Q = 8.83 #g/min
e = 0.4
rho = 285 #kg/m3
miu = 2.31e-5 # Pa*s
dp = 0.75e-3 # m
Dl = 0.24e-5 #m2/s
De = 8.48e-12 # m2/s
Di = 6e-13
u = 0.455e-3 #m/s
kf = 1.91e-5 #m/s
de = 0.06 # m
W = 0.160 # kg
kp = 0.2
r = 0.31 #m
n = 10
V = 12
#C = kp * qE
C = 0.1
qE = C / kp
Cn = 0.05
Cm = 0.02
t = np.linspace(0,10, 1)
ti = (r ** 2) / (15 * Di)
def reverchon(x,t):
#Differential equations of the Reverchon model
#dCdt = - (n/(e * V)) * (W * (Cn - Cm) / rho + (1 - e) * V * dqdt)
#dqdt = - (1 / ti) * (q - qE)
q = x[0]
C = x[1]
qE = C / kp
dqdt = - (1 / ti) * (q - qE)
dCdt = - (n/(e * V)) * (W * (C - Cm) / rho + (1 - e) * V * dqdt)
return [dqdt, dCdt]
reverchon([1, 2], 0)
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, qR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C solid–fluid interface [=] $kg/m^3$")
print(CR)
r = 0.31 #m
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
r = 0.231 #m
x0 = [0, 0]
t = np.linspace(0, 3000, 500)
resultado = odeint(reverchon, x0, t)
qR = resultado[:, 0]
CR = resultado[:, 1]
plt.plot(t, CR)
plt.title("Modelo Reverchon")
plt.xlabel("t [=] min")
plt.ylabel("C [=] $kg/m^3$")
fig,axes=plt.subplots(2,2)
axes[0,0].plot(t,CR)
axes[1,0].plot(t,qR)
| Modelo de impregnacion/modelo2/Activité 10_Viernes.ipynb | pysg/pyther | mit |
Future work
Modify the parameters to observe how they affect the model's behavior.
Work through a parameter-optimization example using the Reverchon model.
References
[1] E. Reverchon, Mathematical modelling of supercritical extraction of sage oil, AIChE J. 42 (1996) 1765–1771.
https://onlinelibrary.wiley.com/doi/pdf/10.1002/aic.690420627
[2] Amit Rai, Kumargaurao D.Punase, Bikash Mohanty, Ravindra Bhargava, Evaluation of models for supercritical fluid extraction, International Journal of Heat and Mass Transfer Volume 72, May 2014, Pages 274-287. https://www.sciencedirect.com/science/article/pii/S0017931014000398
Parameter fitting with ODEs: the Reverchon model |
#Experimental data
x_data = np.linspace(0,9,10)
y_data = np.array([0.000,0.416,0.489,0.595,0.506,0.493,0.458,0.394,0.335,0.309])
def f(y, t, k):
""" sistema de ecuaciones diferenciales ordinarias """
return (-k[0]*y[0], k[0]*y[0]-k[1]*y[1], k[1]*y[1])
def my_ls_func(x,teta):
f2 = lambda y, t: f(y, t, teta)
# evaluate the ODE solution at each point
r = integrate.odeint(f2, y0, x)
return r[:,1]
def f_resid(p):
# define the least-squares residual for each value of y
return y_data - my_ls_func(x_data,p)
#solve the optimization problem
guess = [0.2, 0.3] #initial guesses for the parameters
y0 = [1,0,0] #initial values for the ODE system
(c, kvg) = optimize.leastsq(f_resid, guess) #get params
print("parameter values are ",c)
# interpolate the ODE values using splines
xeval = np.linspace(min(x_data), max(x_data),30)
gls = interpolate.UnivariateSpline(xeval, my_ls_func(xeval,c), k=3, s=0)
xeval = np.linspace(min(x_data), max(x_data), 200)
#Plot the results
pp.plot(x_data, y_data,'.r',xeval,gls(xeval),'-b')
pp.xlabel('t [=] min',{"fontsize":16})
pp.ylabel("C",{"fontsize":16})
pp.legend(('Data','Model'),loc=0)
pp.show() | Modelo de impregnacion/modelo2/Activité 10_Viernes.ipynb | pysg/pyther | mit |
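The same leastsq pattern carries over to the Reverchon model itself. A minimal sketch, under stated assumptions: t_exp and C_exp are hypothetical measured times and extract concentrations (not provided in this notebook), and only the internal diffusion coefficient Di is fitted, with all other constants taken from the cells above:
def reverchon_resid(theta, t_exp, C_exp):
    Di = theta[0]
    ti = (r ** 2) / (15 * Di)  # internal diffusion time for this candidate Di
    def rhs(x, t):
        q, C = x
        dqdt = -(1 / ti) * (q - C / kp)
        dCdt = -(n / (e * V)) * (W * (C - Cm) / rho + (1 - e) * V * dqdt)
        return [dqdt, dCdt]
    return C_exp - odeint(rhs, [0, 0], t_exp)[:, 1]

# (Di_fit,), flag = optimize.leastsq(reverchon_resid, [6e-13], args=(t_exp, C_exp))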
Using the awesome pandas library, we can parse the .csv file of the training set and hold the data in a table | def getTrainingData():
print("Get training data ...\n")
trainingData = pnd.read_csv("./train.csv")
trainingData['id'] = range(1, len(trainingData) + 1) #For 1-base index
return trainingData | Easy/PokerRuleInduction/PokerRuleInduction.ipynb | AhmedHani/Kaggle-Machine-Learning-Competitions | mit |
Second, we need to extract the features and the labels from the table | trainingData = getTrainingData()
labels = trainingData['hand']
features = trainingData.drop(['id', 'hand'], axis=1) | Easy/PokerRuleInduction/PokerRuleInduction.ipynb | AhmedHani/Kaggle-Machine-Learning-Competitions | mit |
When dealing with machine learning algorithms, you need to measure how well the algorithm performs on the data. This can be done using several techniques, such as K-fold cross-validation https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation and precision and recall https://en.wikipedia.org/wiki/Precision_and_recall.
I've used K-fold cross validation for this problem. | def kFoldCrossValidation(kFold):
trainingData = getTrainingData()
label = trainingData['hand']
features = trainingData.drop(['id'], axis=1)
crossValidationResult = dict()
print("Start Cross Validation ...\n")
randomForest = RandomForestClassifier(n_estimators=100)
kNearestNeighbour = KNeighborsClassifier(n_neighbors=100)
crossValidationResult['RF'] = cross_val_score(randomForest, trainingData, label, cv=kFold).mean()
crossValidationResult['KNN'] = cross_val_score(kNearestNeighbour, trainingData, label, cv=kFold).mean()
print("KNN: %s\n" % str(crossValidationResult['KNN']))
print("RF: %s\n" % str(crossValidationResult['RF']))
print("\n")
return crossValidationResult['KNN'], crossValidationResult['RF'] | Easy/PokerRuleInduction/PokerRuleInduction.ipynb | AhmedHani/Kaggle-Machine-Learning-Competitions | mit |
I've decided to use K Nearest Neighbour and Random Forest based on the problem's recommendations and benchmark. Above, I created instances of the Random Forest and K Nearest Neighbour classifiers, then got the score of each one to help decide which is better. | if __name__ == '__main__':
trainingData = getTrainingData()
labels = trainingData['hand']
features = trainingData.drop(['id', 'hand'], axis=1)
KNN, RF = kFoldCrossValidation(5)
classifier = None
if KNN > RF:
classifier = KNeighborsClassifier(n_neighbors=100)
else:
classifier = RandomForestClassifier(n_estimators=10, n_jobs=-1)
testData, result = getTestData()
print("Classification in progress ...\n")
classifier.fit(features, labels)
result.insert(1, 'hand', classifier.predict(testData))
result.to_csv("./results.csv", index=False)
print("Classification Ends ...\n") | Easy/PokerRuleInduction/PokerRuleInduction.ipynb | AhmedHani/Kaggle-Machine-Learning-Competitions | mit |
Read the CSV
We use pandas read_csv(path/to/csv) method to read the csv file. Next, replace the missing values with np.NaN i.e. Not a Number. This way we can count the number of missing values per column. | df = pd.read_csv('../datasets/UCIrvineCrimeData.csv');
df = df.replace('?',np.NAN)
features = [x for x in df.columns if x not in ['state', 'community', 'communityname', 'county'
, 'ViolentCrimesPerPop']] | exploratory_data_analysis/.ipynb_checkpoints/UCIrvine_Crime_data_analysis-checkpoint.ipynb | WenboTien/Crime_data_analysis | mit |
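To count the missing values per column, and to produce the imputed_data array used in the next section (that cell is not shown here, so this is a sketch assuming simple mean imputation with the era-appropriate sklearn Imputer):
# Convert the string columns to numeric and count missing values per column.
X_num = df[features].apply(pd.to_numeric)
print(X_num.isnull().sum().sort_values(ascending=False).head())

# Assumed mean imputation producing the imputed_data used below.
from sklearn.preprocessing import Imputer
imputed_data = Imputer(strategy='mean').fit_transform(X_num)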
Sklearn fundamentals
A convenient way to randomly partition the dataset into a separate test & training dataset is to use the train_test_split function from scikit-learn's cross_validation submodule | #df = df.drop(["communityname", "state", "county", "community"], axis=1)
X, y = imputed_data, df['ViolentCrimesPerPop']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0); | exploratory_data_analysis/.ipynb_checkpoints/UCIrvine_Crime_data_analysis-checkpoint.ipynb | WenboTien/Crime_data_analysis | mit |
First, we assigned the NumPy array representation of the feature columns to the variable X, and we assigned the target variable to the variable y. Then we used the train_test_split function to randomly split X and y into separate training & test datasets. By setting test_size=0.3 we assigned 30 percent of samples to X_test and the remaining 70 percent to X_train.
Sequential Feature Selection algorithm: Sequential Backward Selection (SBS)
Sequential feature selection algorithms are a family of greedy search algorithms that can reduce an initial d-dimensional feature space to a k-dimensional feature subspace where k < d. The idea is to select the most relevant subset of features to improve computational efficiency and reduce generalization error | class SBS():
def __init__(self, estimator, features,
scoring=r2_score, test_size=0.25,
random_state=1):
self.scoring = scoring
self.estimator = estimator
self.features = features
self.test_size = test_size
self.random_state = random_state
def fit(self, X, y):
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size = self.test_size,
random_state = self.random_state)
dim = X_train.shape[1]
self.indices_ = tuple(range(dim))
self.subsets_ = [self.indices_]
score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_)
self.scores_ = [score]
while dim > self.features:
scores = []
subsets = []
for p in combinations(self.indices_, r=dim-1):
score = self._calc_score(X_train, y_train, X_test, y_test, p)
scores.append(score)
subsets.append(p)
best = np.argmax(scores)
self.indices_ = subsets[best]
self.subsets_.append(self.indices_)
dim -= 1
self.scores_.append(scores[best])
print self.scores_
self.k_score_ = self.scores_[-1]
return self
def transform(self, X):
return X[:, self.indices_]
def _calc_score(self, X_train, y_train, X_test, y_test, indices):
self.estimator.fit(X_train[:, indices], y_train)
y_pred = self.estimator.predict(X_test[:, indices])
score = self.scoring(y_test, y_pred)
return score
clf = LinearRegression()
sbs = SBS(clf, features=1)
sbs.fit(X_train, y_train)
k_feat = [len(k) for k in sbs.subsets_]
plt.plot(k_feat, sbs.scores_, marker='o')
plt.ylim([-1, 1])
plt.ylabel('Accuracy')
plt.xlabel('Number of Features')
plt.grid()
plt.show() | exploratory_data_analysis/.ipynb_checkpoints/UCIrvine_Crime_data_analysis-checkpoint.ipynb | WenboTien/Crime_data_analysis | mit |
I want to import Vgg16 as well because I'll want its low-level features | # import os, sys
# sys.path.insert(1, os.path.join('../utils/')) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Actually, looks like Vgg's ImageNet weights won't be needed. | # from vgg16 import Vgg16
# vgg = Vgg16() | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
II. Load Data | (x_train, y_train), (x_test, y_test) = mnist.load_data() | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
III. Preprocessing
Keras convolutional layers expect a color-channel dimension, so we expand an empty dimension in the input data to stand in for the single grayscale channel. | x_train = np.expand_dims(x_train, 1) # can also enter <axis=1> for <1>
x_test = np.expand_dims(x_test, 1)
x_train.shape | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
One-Hot Encoding the outputs: | y_train, y_test = to_categorical(y_train), to_categorical(y_test) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Since this notebook's models are all mimicking Vgg16, the input data should be preprocessed in the same way: in this case normalized by subtracting the mean and dividing by the standard deviation. It turns out this is a good idea generally. | x_mean = x_train.mean().astype(np.float32)
x_stdv = x_train.std().astype(np.float32)
def norm_input(x): return (x - x_mean) / x_stdv | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Create Data Batch Generator
ImageDataGenerator with no arguments will return a generator. Later, when data is augmented, it'll be told how to do so. I don't know what batch-size should be set to: in Lecture it was 64. | gen = image.ImageDataGenerator()
trn_batches = gen.flow(x_train, y_train, batch_size=64)
tst_batches = gen.flow(x_test, y_test, batch_size=64) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
General workflow, going forward:
* Define the model's architecture.
* Run 1 Epoch at default learning rate (0.01 ~ 0.001 depending on optimizer) to get it started.
* Jack up the learning rate to 0.1 (as high as you'll ever want to go) and run 1 Epoch, possibly more if you can get away with it.
* Lower the learning rate by a factor of 10 and run for a number of Epochs -- repeat until the model begins to overfit (acc > val_acc). A small helper capturing this schedule is sketched below, after the architecture notes.
Points on internal architecture:
* Each model will have a data-preprocessing Lambda layer, which normalizes the input and assigns a shape of (1 color-channel x 28 pixels x 28 pixels)
* Weights are flattened before entering FC layers
* Convolutional Layers will come in 2 pairs (because this is similar to the Vgg model).
* Convol layer-pairs will start with 32 3x3 filters and double to 64 3x3 layers
* A MaxPooling Layer comes after each Convol-pair.
* When Batch-Normalization is applied, it is done after every layer but last (excluding MaxPooling).
* Final layer is always an FC softmax layer with 10 outputs for our 10 digits.
* Dropout, when applied, should increase toward later layers.
* Optimizer used in Lecture was Adam(), all layers but last use a ReLU activation, loss function is categorical cross-entropy.
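To make the schedule above concrete, here is a tiny helper capturing it (a sketch only — the cells below spell each step out explicitly):
# Hypothetical convenience wrapper for the raise-then-decay LR workflow.
def fit_lr(model, lr, nb_epoch):
    model.optimizer.lr = lr
    model.fit_generator(trn_batches, trn_batches.n, nb_epoch=nb_epoch,
                        validation_data=tst_batches, nb_val_samples=tst_batches.n)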
1. Linear Model
aka 'Dense', 'Fully-Connected' | def LinModel():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
Linear_model = LinModel()
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Linear_model.optimizer.lr=0.1
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Linear_model.optimizer.lr=0.01
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Linear_model.optimizer.lr=0.001
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=8,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
2. Single Dense Layer
This is what people in the 80s & 90s thought of as a 'Neural Network': a single Fully-Connected hidden layer. I don't yet know why the hidden layer is outputting 512 units. For natural-image recognition it's 4096. I'll see whether a ReLU or Softmax hidden layer works better.
By the way, the training and hyper-parameter tuning process should be automated. I want to use a NN to figure out how to do that for me. | def FCModel():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28)),
Dense(512, activation='relu'),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
FC_model = FCModel()
FC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
FC_model.optimizer.lr=0.1
FC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
FC_model.optimizer.lr=0.01
FC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
With an accuracy of 0.9823 and validation accuracy of 0.9664, the model's starting to overfit significantly and hit its limits, so it's time to go on to the next technique.
3. Basic 'VGG' style Convolutional Neural Network
I'm specifying an output shape equal to the input shape, to suppress the warnings keras was giving me; and it stated it was defaulting to that anyway. Or maybe I should've written output_shape=input_shape
Aha: yes it's as I thought. See this thread -- output_shape warnings were added to Keras, and neither vgg16.py (nor I until now) were specifying output_shape. It's fine.
The first time I ran this, I forgot to have 2 pairs of Conv layers. At the third lr=0.01 epoch I had acc/val of 0.9964, 0.9878
Also noticing: in lecture JH was using a GPU which I think was an NVidia Titan X. I'm using an Intel Core i5 CPU on a MacBook Pro. His epochs took on average 6 seconds, mine are taking 180~190. Convolutions are also the most computationally-intensive part of the NN being built here.
Interestingly, the model with 2 Conv-layer pairs is taking avg 160s. Best Acc/Val: 0.9968/0.9944
Final: 0.9975/0.9918 - massive overfitting | def ConvModel():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),
Convolution2D(32, 3, 3, activation='relu'),
Convolution2D(32, 3, 3, activation='relu'),
MaxPooling2D(),
Convolution2D(64, 3, 3, activation='relu'),
Convolution2D(64, 3, 3, activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
CNN_model = ConvModel()
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
CNN_model.optimizer.lr=0.1
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
CNN_model.optimizer.lr=0.01
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# Running again until validation accuracy stops increasing
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
4. Data Augmentation | gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
trn_batches = gen.flow(x_train, y_train, batch_size=64)
tst_batches = gen.flow(x_test, y_test, batch_size=64)
CNN_Aug_model = ConvModel()
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# upping LR
print("Learning Rate, η = 0.1")
CNN_Aug_model.optimizer.lr=0.1
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# brining LR back down for more epochs
print("Learning Rate, η = 0.01")
CNN_Aug_model.optimizer.lr=0.01
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# 4 more epochs at η=0.01
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
5. Batch Normalization + Data Augmentation
See this thread for info on BatchNorm axis. | def ConvModelBN():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32, 3, 3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
CNN_BNAug_model = ConvModelBN()
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.1")
CNN_BNAug_model.optimizer.lr=0.1
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=2, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.01")
CNN_BNAug_model.optimizer.lr=0.01
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# some more training at 0.1 and 0.01:
print("Learning Rate, η = 0.1")
CNN_BNAug_model.optimizer.lr=0.1
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.01")
CNN_BNAug_model.optimizer.lr=0.01
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
6. Dropout + Batch Normalization + Data Augmentation | def ConvModelBNDo():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32, 3, 3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
CNN_BNDoAug_model = ConvModelBNDo()
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.1")
CNN_BNDoAug_model.optimizer.lr=0.1
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.01")
CNN_BNDoAug_model.optimizer.lr=0.01
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# 6 more epochs at 0.01
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate η = 0.001")
CNN_BNDoAug_model.optimizer.lr=0.001
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=12, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
7. Ensembling
Define a function to automatically train a model: | # I'll set it to display progress at the start of each LR-change
def train_model():
model = ConvModelBNDo()
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.optimizer.lr=0.1
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=3, verbose=0,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.optimizer.lr=0.01
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=11, verbose=0,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.optimizer.lr=0.001
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=11, verbose=0,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
return model
# Running a little test on the GPU now
testmodel = ConvModelBNDo()
testmodel.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
I finally got my GPU running on my workstation. Decided to leave the ghost of Bill Gates alone and put Ubuntu Linux on the second harddrive. This nvidia GTX 870M takes 17 seconds to get through the 60,000 images. The Core i5 on my Mac took an average of 340. A 20x speed up. This also means, at those numbers, a 6-strong ensemble running the regime in train_model() will take about 49 minutes and 18 seconds, instead of 16 hours and 26 minutes. You can see what the motivation was, for me to spend ~9 hours today and get the GPU working. It's a warm feeling, knowing your computer isn't just good for playing DOOM, but'll be doing its share of work real soon.
So, onward:
Create an array of models | # this'll take some time
models = [train_model() for m in xrange(6)] | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Save the models' weights -- because this wasn't computationally cheap | from os import getcwd
path = getcwd() + '/data/mnist/'
model_path = path + 'models/'
for i,m in enumerate(models):
m.save_weights(model_path + 'MNIST_CNN' + str(i) + '.pkl') | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Create an array of predictions from the models on the test-set. I'm using a batch size of 256 because that's what was done in lecture, and prediction is such an easier task that I think the large size just helps things go faster. | ensemble_preds = np.stack([m.predict(x_test, batch_size=256) for m in models]) | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Finally, take the average of the predictions: | avg_preds = ensemble_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval() | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Boom. 0.99699.. ~ 99.7% accuracy. Same as achieved in lecture; took roughly 50 minutes to train. Unfortunately I didn't have the h5py module installed when I ran this, so the weights couldn't be saved easily -- simple fix of rerunning after install.
Trying the above again, this time having h5py installed. | # this'll take some time
models = [train_model() for m in xrange(6)]
from os import getcwd
import os
path = getcwd() + '/data/mnist/'
model_path = path + 'models/'
if not os.path.exists(path):
os.mkdir('data')
os.mkdir('data/mnist')
if not os.path.exists(model_path): os.mkdir(model_path)
for i,m in enumerate(models):
m.save_weights(model_path + 'MNIST_CNN' + str(i) + '.pkl')
ensemble_preds = np.stack([m.predict(x_test, batch_size=256) for m in models])
avg_preds = ensemble_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval() | FAI_old/lesson3/L3HW_MNIST.ipynb | WNoxchi/Kaukasos | mit |
Setup
This installs a few dependencies: gdown (for downloading precomputed CLIP features), sentence-transformers (which pulls in PyTorch, for RoBERTa), and the OpenAI client for the GPT-3 API. | !pip install -U --no-cache-dir gdown --pre
!pip install -U sentence-transformers
!pip install openai ftfy
!nvidia-smi # Show GPU info.
import json
import os
import numpy as np
import openai
import pandas as pd
import pickle
from sentence_transformers import SentenceTransformer
from sentence_transformers import util as st_utils
import torch
openai.api_key = openai_api_key
# From: https://github.com/Deferf/CLIP_Video_Representation
if not os.path.exists('MSRVTT_test_dict_CLIP_text.pt'):
!gdown 1-3tpfZzo1_D18WdrioQzc-iogEl-KSnA -O "MSRVTT_test_dict_CLIP_text.pt"
if not os.path.exists('MSRVTT_test_dict_CLIP_visual.pt'):
!gdown 1Gp3_I_OvcKwjOQmn334-T4wfwQk29TCp -O "MSRVTT_test_dict_CLIP_visual.pt"
if not os.path.exists('test_videodatainfo.json'):
!gdown 1BzTt1Bf-XJSUXxBfJVxLL3mYWLZ6odsw -O "test_videodatainfo.json"
if not os.path.exists('JS_test_dict_CLIP_text.pt'):
!gdown --id 15mvFQxrWLNvBvFg4_9rr_Kqyzsy9dudj -O "JS_test_dict_CLIP_text.pt"
# Load generated video transcriptions from the Google Cloud speech-to-text API.
if not os.path.exists('video_id_to_gcloud_transcription_full.json'):
!gdown 1LTmvtf9zzw61O7D8YUqdS2mbql76nO6E -O "video_id_to_gcloud_transcription_full.json"
# Load generated summaries from LM (comment this out to generate your own with GPT-3).
if not os.path.exists('msr_full_summaries.pkl'):
!gdown 1ESXkRv3-3Kz1jZTNtkIhBXME6k1Jr9SW -O "msr_full_summaries.pkl"
# Import helper functions from Portillo-Quintero et al. 2021
!git clone https://github.com/Deferf/Experiments
%cd Experiments
from metrics import rank_at_k_precomputed,stack_encoded_dict,generate_sim_tensor,tensor_video_to_text_sim,tensor_text_to_video_metrics,normalize_matrix,pad_dict,list_recall
%cd "/content" | socraticmodels/SocraticModels_MSR_VTT.ipynb | google-research/google-research | apache-2.0 |
Load RoBERTa (masked LM) | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
roberta_model = SentenceTransformer('stsb-roberta-large').to(device) | socraticmodels/SocraticModels_MSR_VTT.ipynb | google-research/google-research | apache-2.0 |
Wrap GPT-3 (causal LM) | gpt_version = "text-davinci-002"
def prompt_llm(prompt, max_tokens=64, temperature=0, stop=None):
response = openai.Completion.create(engine=gpt_version, prompt=prompt, max_tokens=max_tokens, temperature=temperature, stop=stop)
return response["choices"][0]["text"].strip() | socraticmodels/SocraticModels_MSR_VTT.ipynb | google-research/google-research | apache-2.0 |
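A quick smoke test of the wrapper with a throwaway prompt (the prompt here is just an illustrative assumption, and note this makes a real, billed API call): | print(prompt_llm('Q: What is the capital of France? A:', stop='\n'))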
Evaluate on MSR-Full | # Load raw text captions from MSR-Full.
with open('test_videodatainfo.json', 'r') as j:
msr_full_info = json.loads(j.read())
msr_full_vid_id_to_captions = {}
for info in msr_full_info['sentences']:
if info['video_id'] not in msr_full_vid_id_to_captions:
msr_full_vid_id_to_captions[info['video_id']] = []
msr_full_vid_id_to_captions[info['video_id']].append(info['caption'])
# Reproduce original results with original eval code.
msr_full_vid_id_to_clip_vid_feats = torch.load("/content/MSRVTT_test_dict_CLIP_visual.pt", map_location="cpu")
msr_full_vid_ids_to_clip_text_feats = torch.load("/content/MSRVTT_test_dict_CLIP_text.pt", map_location="cpu")
msr_full_vid_ids = list(msr_full_vid_ids_to_clip_text_feats.keys())
msr_full_sim_tensor = generate_sim_tensor(msr_full_vid_ids_to_clip_text_feats, msr_full_vid_id_to_clip_vid_feats, msr_full_vid_ids)
msr_full_vid_text_sim = tensor_video_to_text_sim(msr_full_sim_tensor)
msr_full_metrics_vtt = rank_at_k_precomputed(msr_full_vid_text_sim)
print(msr_full_metrics_vtt)
# Transcription results from gCloud API.
with open('video_id_to_gcloud_transcription_full.json', 'r') as j:
msr_full_vid_id_to_transcript = json.loads(j.read())
# Sort video IDs by transcription length.
num_transcripts = 0
transcript_lengths = []
for i in msr_full_vid_ids:
if msr_full_vid_id_to_transcript[i] is None:
transcript_lengths.append(0)
else:
num_transcripts += 1
transcript_lengths.append(len(msr_full_vid_id_to_transcript[i]))
msr_full_sorted_vid_ids = [msr_full_vid_ids[i] for i in np.argsort(transcript_lengths)[::-1]]
# Summarize transcriptions with LLM.
if os.path.exists('msr_full_summaries.pkl'):
msr_full_vid_id_to_summary = pickle.load(open('msr_full_summaries.pkl', 'rb'))
else:
# Zero-shot LLM: summarize transcriptions.
msr_full_vid_id_to_summary = {}
for vid_id in msr_full_sorted_vid_ids:
transcript = msr_full_vid_id_to_transcript[vid_id]
print('Video ID:', vid_id)
print('Transcript:', transcript)
if transcript is not None:
transcript = transcript.strip()
prompt = 'I am an intelligent video captioning bot.'
prompt += f'\nI hear a person saying: "{transcript}".'
prompt += f"\nQ: What's a short video caption for this video? A: In this video,"
print('Prompt:', prompt)
summary = prompt_llm(prompt, temperature=0, stop='.')
print('Summary:', summary)
msr_full_vid_id_to_summary[vid_id] = summary
pickle.dump(msr_full_vid_id_to_summary, open(f'msr_full_summaries.pkl', 'wb'))
# Compute RoBERTa features for all captions.
msr_full_vid_id_to_roberta_feats = {}
for vid_id in msr_full_sorted_vid_ids:
msr_full_vid_id_to_roberta_feats[vid_id] = roberta_model.encode(msr_full_vid_id_to_captions[vid_id], convert_to_tensor=True, device=device)
topk = 100 # Pre-rank with top-100 from Portillo.
combine_clip_roberta = True # Combine CLIP (text-video) x RoBERTa (text-text) scores?
portillo_vid_id_to_topk_vid_ids = {}
socratic_vid_id_to_topk_vid_ids = {}
msr_full_all_clip_text_feats = torch.cat([msr_full_vid_ids_to_clip_text_feats[i] for i in msr_full_sorted_vid_ids], dim=0).cpu().numpy()
for vid_id in msr_full_sorted_vid_ids:
# Get Portillo top-K captions.
vid_feats = msr_full_vid_id_to_clip_vid_feats[vid_id] # CLIP features for all frames of the video
vid_feat = normalize_matrix(torch.mean(vid_feats, dim = 0, keepdim = True)).cpu().numpy()
clip_scores = msr_full_all_clip_text_feats @ vid_feat.T
clip_scores = clip_scores.squeeze()
clip_scores = clip_scores.reshape(-1, 20)
clip_scores = np.max(clip_scores, axis=1)
sorted_idx = np.argsort(clip_scores).squeeze()[::-1]
portillo_topk_vid_ids = [msr_full_sorted_vid_ids[i] for i in sorted_idx[:topk]]
portillo_vid_id_to_topk_vid_ids[vid_id] = portillo_topk_vid_ids
# If no LLM summary, default to Portillo ranking.
socratic_vid_id_to_topk_vid_ids[vid_id] = portillo_topk_vid_ids
if vid_id not in msr_full_vid_id_to_summary:
continue
# Get RoBERTa scores between LLM summary and captions.
summary = msr_full_vid_id_to_summary[vid_id]
summary_feat = roberta_model.encode([summary], convert_to_tensor=True, device=device)
caption_feats = torch.cat([msr_full_vid_id_to_roberta_feats[i] for i in portillo_topk_vid_ids], dim=0)
roberta_scores = st_utils.pytorch_cos_sim(caption_feats, summary_feat).detach().cpu().numpy().squeeze()
roberta_scores = roberta_scores.reshape(-1, 20)
roberta_scores = np.max(roberta_scores, axis=1)
# Re-rank top-K with RoBERTa scores.
sort_idx = np.argsort(roberta_scores, kind='stable').squeeze()[::-1]
socratic_vid_id_to_topk_vid_ids[vid_id] = [portillo_topk_vid_ids[i] for i in sort_idx]
# Combine CLIP (text-video) x RoBERTa (text-text) scores.
if combine_clip_roberta:
clip_scores = np.sort(clip_scores, kind='stable').squeeze()[::-1][:topk]
scores = clip_scores * roberta_scores
sort_idx = np.argsort(scores, kind='stable').squeeze()[::-1]
socratic_vid_id_to_topk_vid_ids[vid_id] = [portillo_topk_vid_ids[i] for i in sort_idx] # Override ranking from only LLM
# Return R@1, R@5, R@10.
def get_recall(vid_ids, socratic_subset, k=[1, 5, 10]):
recall = []
rank = []
for vid_id in vid_ids:
sorted_vid_ids = portillo_vid_id_to_topk_vid_ids[vid_id]
if vid_id in socratic_subset:
sorted_vid_ids = socratic_vid_id_to_topk_vid_ids[vid_id]
recall.append([(vid_id in sorted_vid_ids[:i]) for i in k])
rank.append(sorted_vid_ids.index(vid_id) + 1 if vid_id in sorted_vid_ids else len(sorted_vid_ids))
mdr = np.median(rank)
return np.mean(np.float32(recall) * 100, axis=0), mdr
subset_size = 1007 # Subset of long transcripts.
# Portillo only.
recall, mdr = get_recall(msr_full_sorted_vid_ids, msr_full_sorted_vid_ids[:0])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
# Socratic + Portillo.
recall, mdr = get_recall(msr_full_sorted_vid_ids, msr_full_sorted_vid_ids[:subset_size])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
# Portillo only on long transcripts.
recall, mdr = get_recall(msr_full_sorted_vid_ids[:subset_size], msr_full_sorted_vid_ids[:0])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}')
# Socratic + Portillo on long transcripts.
recall, mdr = get_recall(msr_full_sorted_vid_ids[:subset_size], msr_full_sorted_vid_ids[:subset_size])
print(f'R@1: {recall[0]:.1f}\tR@5: {recall[1]:.1f}\tR@10: {recall[2]:.1f}\tMdR: {mdr}') | socraticmodels/SocraticModels_MSR_VTT.ipynb | google-research/google-research | apache-2.0 |
We use the same setup here as we do in the 'Simulating Experimental Fluorescence Binding Data' notebook. | # We define a Kd,
Kd = 2e-9 # M
# a protein concentration,
Ptot = 1e-9 * np.ones([12],np.float64) # M
# and a gradient of ligand concentrations for our experiment.
Ltot = 20.0e-6 / np.array([10**(float(i)/2.0) for i in range(12)]) # M
def two_component_binding(Kd, Ptot, Ltot):
"""
Parameters
----------
Kd : float
Dissociation constant
Ptot : float
Total protein concentration
Ltot : float
Total ligand concentration
Returns
-------
P : float
Free protein concentration
L : float
Free ligand concentration
PL : float
Complex concentration
"""
PL = 0.5 * ((Ptot + Ltot + Kd) - np.sqrt((Ptot + Ltot + Kd)**2 - 4*Ptot*Ltot)) # complex concentration (uM)
P = Ptot - PL; # free protein concentration in sample cell after n injections (uM)
L = Ltot - PL; # free ligand concentration in sample cell after n injections (uM)
return [P, L, PL]
[P, L, PL] = two_component_binding(Kd, Ptot, Ltot)  # unpack in the same order the function returns
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot,PL, 'o')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$[PL]$ / M')
plt.ylim(0,1.3e-9)
plt.axhline(Ptot[0],color='0.75',linestyle='--',label='$[P]_{tot}$')
plt.legend(); | examples/direct-fluorescence-assay/2c Bayesian fit for two component binding - simulated data- WITH EMCEE.ipynb | choderalab/assaytools | lgpl-2.1 |
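For reference, the closed form used in two_component_binding follows from mass balance and the definition of the dissociation constant: substituting $[P] = P_{tot} - [PL]$ and $[L] = L_{tot} - [PL]$ into $K_d = [P][L]/[PL]$ gives a quadratic in $[PL]$,
$$[PL]^2 - (P_{tot} + L_{tot} + K_d)\,[PL] + P_{tot} L_{tot} = 0,$$
whose physically meaningful (smaller) root is the expression computed above:
$$[PL] = \frac{(P_{tot} + L_{tot} + K_d) - \sqrt{(P_{tot} + L_{tot} + K_d)^2 - 4 P_{tot} L_{tot}}}{2}.$$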
Now make this a fluorescence experiment | # Making max 1400 relative fluorescence units, and scaling all of PL (complex concentration)
# to that, adding some random noise
npoints = len(Ltot)
sigma = 10.0 # size of noise
F_PL_i = (1400/1e-9)*PL + sigma * np.random.randn(npoints)
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot,F_PL_i, 'ro', label='complex')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('Fluorescence')
plt.legend();
#Let's add an F_background just so we don't ever go below zero
F_background = 40
#We also need to model fluorescence for our ligand
F_L_i = F_background + (.4/1e-8)*Ltot + sigma * np.random.randn(npoints)
#Let's also add these to our complex fluorescence readout
F_PL_i = F_background + ((1400/1e-9)*PL + sigma * np.random.randn(npoints)) + ((.4/1e-8)*L + sigma * np.random.randn(npoints))
# y will be complex concentration
# x will be total ligand concentration
plt.semilogx(Ltot,F_PL_i, 'ro', label='complex')
plt.semilogx(Ltot,F_L_i, 'ko', label='ligand')
plt.xlabel('$[L]_{tot}$ / M')
plt.ylabel('$Fluorescence$')
plt.legend();
# We know errors from our pipetting instruments.
P_error = 0.35
L_error = 0.08
assay_volume = 100e-6 # assay volume, L
dPstated = P_error * Ptot
dLstated = L_error * Ltot
# Now we'll use our Bayesian modeling scheme from assaytools.
from assaytools import pymcmodels
pymc_model = pymcmodels.make_model(Ptot, dPstated, Ltot, dLstated,
top_complex_fluorescence=F_PL_i,
top_ligand_fluorescence=F_L_i,
use_primary_inner_filter_correction=True,
use_secondary_inner_filter_correction=True,
assay_volume=assay_volume, DG_prior='uniform')
mcmc = pymcmodels.run_mcmc(pymc_model)
import matplotlib.patches as mpatches #this is for plotting with color patches
def mcmc_three_plots(pymc_model,mcmc,Lstated):
sns.set(style='white')
sns.set_context('talk')
import pymbar
[t,g,Neff_max] = pymbar.timeseries.detectEquilibration(mcmc.DeltaG.trace())
interval= np.percentile(a=mcmc.DeltaG.trace()[t:], q=[2.5, 50.0, 97.5])
[hist,bin_edges] = np.histogram(mcmc.DeltaG.trace()[t:],bins=40,normed=True)
binwidth = np.abs(bin_edges[0]-bin_edges[1])
#set colors for 95% interval
clrs = [(0.7372549019607844, 0.5098039215686274, 0.7411764705882353) for xx in bin_edges]
idxs = bin_edges.argsort()
idxs = idxs[::-1]
gray_before = idxs[bin_edges[idxs] < interval[0]]
gray_after = idxs[bin_edges[idxs] > interval[2]]
for idx in gray_before:
clrs[idx] = (.5,.5,.5)
for idx in gray_after:
clrs[idx] = (.5,.5,.5)
plt.clf();
plt.figure(figsize=(12,3));
plt.subplot(131)
property_name = 'top_complex_fluorescence'
complex = getattr(pymc_model, property_name)
property_name = 'top_ligand_fluorescence'
ligand = getattr(pymc_model, property_name)
for top_complex_fluorescence_model in mcmc.top_complex_fluorescence_model.trace()[::10]:
plt.semilogx(Lstated, top_complex_fluorescence_model, marker='.',color='silver')
for top_ligand_fluorescence_model in mcmc.top_ligand_fluorescence_model.trace()[::10]:
plt.semilogx(Lstated, top_ligand_fluorescence_model, marker='.',color='lightcoral', alpha=0.2)
plt.semilogx(Lstated, complex.value, 'ko',label='complex')
plt.semilogx(Lstated, ligand.value, marker='o',color='firebrick',linestyle='None',label='ligand')
#plt.xlim(.5e-8,5e-5)
plt.xlabel('$[L]_T$ (M)');
plt.yticks([])
plt.ylabel('fluorescence');
plt.legend(loc=0);
plt.subplot(132)
plt.bar(bin_edges[:-1]+binwidth/2,hist,binwidth,color=clrs, edgecolor = "white");
sns.kdeplot(mcmc.DeltaG.trace()[t:],bw=.4,color=(0.39215686274509803, 0.7098039215686275, 0.803921568627451),shade=False)
plt.axvline(x=interval[0],color=(0.5,0.5,0.5),linestyle='--')
plt.axvline(x=interval[1],color=(0.5,0.5,0.5),linestyle='--')
plt.axvline(x=interval[2],color=(0.5,0.5,0.5),linestyle='--')
plt.axvline(x=np.log(Kd),color='k')
plt.xlabel('$\Delta G$ ($k_B T$)',fontsize=16);
plt.ylabel('$P(\Delta G)$',fontsize=16);
#plt.xlim(-15,-8)
hist_legend = mpatches.Patch(color=(0.7372549019607844, 0.5098039215686274, 0.7411764705882353),
label = '$\Delta G$ = %.3g [%.3g,%.3g] $k_B T$'
%(interval[1],interval[0],interval[2]) )
plt.legend(handles=[hist_legend],fontsize=10,loc=0,frameon=True);
plt.subplot(133)
plt.plot(range(0,t),mcmc.DeltaG.trace()[:t], 'g.',label='equil. at %s'%t)
plt.plot(range(t,len(mcmc.DeltaG.trace())),mcmc.DeltaG.trace()[t:], '.')
plt.xlabel('MCMC sample');
plt.ylabel('$\Delta G$ ($k_B T$)');
plt.legend(loc=2);
plt.tight_layout();
return [t,interval,hist,bin_edges,binwidth]
Kd
print 'Real Kd is 2 nM, or %s k_B T.' % np.log(Kd)
[t,interval,hist,bin_edges,binwidth] = mcmc_three_plots(pymc_model,mcmc,Ltot) | examples/direct-fluorescence-assay/2c Bayesian fit for two component binding - simulated data- WITH EMCEE.ipynb | choderalab/assaytools | lgpl-2.1 |
That works, but the equilibration seems to happen quite late in our sampling! Let's look at some of the other parameters. | well_area = 0.1586 # well area, cm^2 # half-area wells were used here
path_length = assay_volume / well_area
from assaytools import plots
plots.plot_mcmc_results(Ltot, Ptot, path_length, mcmc) | examples/direct-fluorescence-assay/2c Bayesian fit for two component binding - simulated data- WITH EMCEE.ipynb | choderalab/assaytools | lgpl-2.1 |
Now let's see if we can get better results using the newly implemented emcee option.
Following instructions as described here: http://twiecki.github.io/blog/2013/09/23/emcee-pymc/ | mcmc_emcee = pymcmodels.run_mcmc_emcee(pymc_model) | examples/direct-fluorescence-assay/2c Bayesian fit for two component binding - simulated data- WITH EMCEE.ipynb | choderalab/assaytools | lgpl-2.1 |
Load image
The code expects a local image filepath through the image_path variable below. | """User Parameters"""
# The train image will be scaled to a square of dimensions `train_size x train_size`
train_size = 32
# When generating the image, the network will generate for an image of
# size `test_size x test_size`
test_size = 2048
# Path to load the image you want upscaled
image_path = '../img/colors.jpg'
if not image_path:
print('Please specify an image for training the network')
else:
image = transform.resize(io.imread(image_path), (train_size, train_size))
# Just a quick line to get rid of the alpha channel if it exists
# (e.g. for transparent png files)
image = image if len(image.shape) < 3 or image.shape[2] == 3 else image[:,:,:3]
io.imshow(image) | notebooks/super-resolution_coordinates.ipynb | liviu-/notebooks | mit |
Model
For simplicity, the model below is an MLP created with TF.
Input
The input is just a matrix of floats of shape (None, 2):
- 2 refers to the 2 x, y coordinates
- and None is just a placeholder that allows for training multiple coordinates at one time for speed (i.e. using batches of unknown size) | X = tf.placeholder('float32', (None, 2)) | notebooks/super-resolution_coordinates.ipynb | liviu-/notebooks | mit |
Architecture
An MLP with several fully connected layers. The architecture was inspired from here. | def model(X, w):
h1 = tf.nn.tanh(tf.matmul(X, w['h1']))
h2 = tf.nn.tanh(tf.matmul(h1, w['h2']))
h3 = tf.nn.tanh(tf.matmul(h2, w['h3']))
h4 = tf.nn.tanh(tf.matmul(h3, w['h4']))
    h5 = tf.nn.tanh(tf.matmul(h4, w['h5']))
    h6 = tf.nn.tanh(tf.matmul(h5, w['h6']))
    h7 = tf.nn.tanh(tf.matmul(h6, w['h7']))
    h8 = tf.nn.tanh(tf.matmul(h7, w['h8']))
return tf.nn.sigmoid(tf.matmul(h8, w['out']))
def init_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.1))
# (None, None) refers to (batch_size, n_colors)
Y = tf.placeholder("float32", (None, None))
w = {
'h1': init_weights([2, 20]),
'h2': init_weights([20, 20]),
'h3': init_weights([20, 20]),
'h4': init_weights([20, 20]),
'h5': init_weights([20, 20]),
'h6': init_weights([20, 20]),
'h7': init_weights([20, 20]),
'h8': init_weights([20, 20]),
'out': init_weights([20, 3]),
}
out = model(X, w) | notebooks/super-resolution_coordinates.ipynb | liviu-/notebooks | mit |
Training
The model is trained to minimise MSE (common loss for regression problems) and uses Adam as an optimiser (any other optimiser will likely also work). | cost = tf.reduce_mean(tf.squared_difference(out, Y))
train_op = tf.train.AdamOptimizer().minimize(cost)
# Feel free to adjust the number of epochs to your liking.
n_epochs = 5e+4
# Create function to generate a coordinate matrix (i.e. matrix of normalised coordinates)
# Pardon my lambda
generate_coord = lambda size: (
np.array(list(itertools.product(np.linspace(0,1,size),np.linspace(0,1,size)))).reshape(size ** 2, 2))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Training data
x = generate_coord(train_size)
# Labels
reshaped_image = np.array(image.reshape(train_size ** 2, -1))
for epoch in range(int(n_epochs + 1)):
_, c = sess.run([train_op, cost], feed_dict={X: x, Y: reshaped_image})
# Print progress
if epoch % (n_epochs/10) == 0:
print('{:0.0%} \t Loss: {}'.format(epoch/n_epochs, c).expandtabs(7))
# Generate
new_image = sess.run(out, feed_dict={X: generate_coord(test_size)}) | notebooks/super-resolution_coordinates.ipynb | liviu-/notebooks | mit |
Evaluation
aka plotting the generated image and carefully considering whether it meets the desired standards or there's a need for readjusting either the hyperparameters or the expectations. | plt.imshow(new_image.reshape(test_size, test_size, -1)) | notebooks/super-resolution_coordinates.ipynb | liviu-/notebooks | mit |
Passive Linear Time Delay Networks
The first component of the project takes as inputs a description of a model, which can be thought of as a graph where the nodes and edges have some special properties. These properties are outlined below. | G = nx.DiGraph(selfloops=True) | graph_to_matrices.ipynb | tabakg/potapov_interpolation | gpl-3.0 |
We use a directed graph with various properties along the nodes and edges. The direction describes the propagation of signals in the system.
There are three kinds of nodes: input nodes, internal nodes, and output nodes. There is the same number of input and output nodes (say n). The number of internal nodes may be different. Each internal node has an associated matrix describing the relationship between its incoming and outgoing signals. It suffices for now to take $2 \times 2$ matrices of the form $\begin{pmatrix} t & -r \\ r & t \end{pmatrix}$ corresponding to a beamsplitter, where $r$ and $t$ are the reflectivity and transmissivity of the beamsplitter, respectively. These satisfy $r^2+t^2 = 1$.
In general we may want other matrices, but it's not really necessary.
If the signal along several edges is thought of as a vector, multiplying by the matrix from the left represents the signal traveling through the element. This formalism works only for linear networks.
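As a quick check of this convention, we can build one such beamsplitter matrix and verify that it is orthogonal, so the element conserves signal energy (the value of r here is just an illustrative assumption): | r = 0.6
t = np.sqrt(1. - r**2)  # r**2 + t**2 = 1
B = np.matrix([[t, -r], [r, t]])
print(np.allclose(B.T * B, np.eye(2)))  # True: the beamsplitter matrix is orthogonal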
Let's make an example graph: | rs = np.asarray([0.9,0.5,0.8]) ## some sample values
ts = np.sqrt(1.-rs**2) ## ts are determined from rs
N = 2 ## number of input nodes
for i in range(N): ## make the input and output nodes
G.add_node(i*2,label='x_in_'+str(i))
G.add_node(i*2+1,label='x_out_'+str(i))
for i,(r,t) in enumerate(zip(rs,ts)): ## make the remaining nodes
G.add_node(2*N+i,label='x_'+str(i),M=np.matrix([[t,-r],[r,t]]))
G.nodes(data=True) ## display the nodes
num_nodes = len(G.nodes(data=True)) | graph_to_matrices.ipynb | tabakg/potapov_interpolation | gpl-3.0 |
Each (directed) edge $j$ has a time delay $\tau_j$. In general a delay line may have an additional phase shift $\exp(i\theta_j)$ which is determined by a number $\theta_j$.
We will also include a pair of indices for each edge. The first index corresponds to the previous node and the second index corresponds to the next node. The indices indicate enumerations of the edges with respect to the input and output nodes, respectively. If the previous or next node is an input or output node of the graph, the index will be $0$.
For now, let's assume that only internal edges have nonzero delays.
For the visualization, it would be nice if for a given node, the incoming and outgoing edges with the same index value would appear as a straight line, since this physically means the signal is being transmitted without reflecting. | ## edges to inputs
G.add_edge(0,4,delay=0.,indices=(0,0),theta=0.,edge_type = 'input',edge_num=0)
G.add_edge(2,6,delay=0.,indices=(0,1),theta=0.,edge_type = 'input',edge_num=1)
## edges to outputs
G.add_edge(4,1,delay=0.,indices=(1,0),theta=0.,edge_type = 'output',edge_num=2)
G.add_edge(6,3,delay=0.,indices=(0,0),theta=0.,edge_type = 'output',edge_num=3)
## internal edges
G.add_edge(4,5,delay=1.,indices=(0,0),theta=0.,edge_type = 'internal',edge_num=4)
G.add_edge(5,4,delay=1.,indices=(1,1),theta=0.,edge_type = 'internal',edge_num=5)
G.add_edge(5,6,delay=1.,indices=(0,0),theta=0.,edge_type = 'internal',edge_num=6)
G.add_edge(6,5,delay=1.,indices=(1,1),theta=0.,edge_type = 'internal',edge_num=7)
G.edges(data=True)
## I can make a diagram for the graph, output to file
A=nx.to_agraph(G)
A.draw('file.ps',prog='neato') | graph_to_matrices.ipynb | tabakg/potapov_interpolation | gpl-3.0 |
Convert the network of nodes and edges to the framework used in the paper.
This would take the graph structure above and generate matrices $M_1, M_2, M_3, M_4$ in the notation used in Potapov_Code.Time_Delay_Network.py. This would allow generating an instance of Time_Delay_Network. | internal_edges = {(edge[0],edge[1]):edge[2] for edge in G.edges(data=True) if edge[2]['edge_type'] == 'internal'}
m = len(internal_edges)
# input_edges = [edge for edge in G.edges(data=True) if edge[2]['edge_type'] == 'input']
# output_edges = [edge for edge in G.edges(data=True) if edge[2]['edge_type'] == 'output']
M1 = np.zeros((m,m))
internal_node_range = range(2*N,num_nodes)
internal_connections = []
for i in internal_node_range: ## internal nodes
outgoing_connections = nx.edges(G,[i])
internal_connections += [connection for connection in outgoing_connections if connection[1] in internal_node_range]
for i in internal_connections:
for j in internal_connections:
if i[1] == j[0]:
matrix_indices = G.edge[i[0]][i[1]]['indices'][0], G.edge[j[0]][j[1]]['indices'][1]
M1[internal_edges[j]['edge_num']-2*N,internal_edges[i]['edge_num']-2*N] = G.node[i[1]]['M'][matrix_indices]
M1
all_connections = []
for i in range(num_nodes): ## all nodes, not just internal ones
outgoing_connections = nx.edges(G,[i])
all_connections += [connection for connection in outgoing_connections if connection[1] in range(num_nodes)]
all_edges = {(edge[0],edge[1]):edge[2] for edge in G.edges(data=True)}
m_all = len(all_edges)
U = np.zeros((m_all,m_all))
for i in all_connections:
for j in all_connections:
if i[1] == j[0]:
matrix_indices = G.edge[i[0]][i[1]]['indices'][0], G.edge[j[0]][j[1]]['indices'][1]
U[all_edges[j]['edge_num'],all_edges[i]['edge_num']] = G.node[i[1]]['M'][matrix_indices]
## should coincide with M1
M1 = U[4:8,4:8]
M2 = U[:4,:4]
M3 = U[8:16,4:8]
M4 = U[8:16,8:16] | graph_to_matrices.ipynb | tabakg/potapov_interpolation | gpl-3.0 |
Usage Description
Using the run_Potapov function of this module generates the variables that will be used for the first part of the visualization. Those are contained in an instance of Time_Delay_Network. Specifically, the outputs we will want to plot are (1) Time_Delay_Network.roots and (2) Time_Delay_Network.spatial_modes.
The roots $r_1,...,r_n$ are a list of complex numbers corresponding to the modes indexed by $1,...,n$. The imaginary part of root $r_k$ corresponds to the frequency of mode $k$, and the real part of $r_k$ indicates the decay coefficient of mode $k$.
The spatial_modes are a list $v_1,...,v_n$ of complex-valued vectors. Each vector $v_k$ in the list corresponds to a mode $k$, in the same order as the roots. Each vector has the same length as the number of time delays of the network, $\tau_1,...,\tau_m$. The $l^{th}$ component $v_{k,l}$ of vector $v_k$ indicates the spatially normalized amplitude of mode $k$ along the delay $\tau_l$.
What would be cool is to be able to select one or many modes $1,...,k,...,n$ and to illustrate the spatial component of the signal of the selected modes along the graph. Specifically, the frequency of the root could correspond to a color or a periodic sinusoidal shape (higher frequency would be more blue or a shorter period), or both. The absolute value of the spatial mode component could be indicated by the thickness of the signal along each time delay. A phase shift could be indicated by a horizontal offset of the sinusoidal signal. | import Potapov_Code
Network = Potapov_Code.Time_Delay_Network.Example3() ## an example network with hardcoded values
Network.run_Potapov(commensurate_roots=True) ## run the analysis
roots = Network.roots ## roots
plt.scatter(map(lambda z: z.real, roots), map(lambda z: z.imag, roots))
Network.spatial_modes ## the spatial modes | graph_to_matrices.ipynb | tabakg/potapov_interpolation | gpl-3.0 |
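As a quick illustration of how each root encodes a mode's properties (a small sketch using the Network computed above): | for r in Network.roots[:3]:  # first few modes
    print('frequency: %g, decay coefficient: %g' % (r.imag, r.real))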
First thing is to read in your data. This example uses the Australian Geofabric V2 and V3 data. Other datasets would need their own customised data prep code.
In the next 2 steps, ignore the duplicate catchment warnings when testing the code.
I haven't dealt with all the minor details of the geofabric quite right. | DG2 = rc.read_geofabric_data(netGDB2)
rc.remove_geofabric_catch_duplicates(DG2)
nx.write_gpickle(DG2, os.path.join(pkl_path, 'PG_conflation2.p'))
DG2_idx = rc.build_index(DG2)
DG1 = rc.read_geofabric_data(netGDB1,DG2_idx.bounds)
rc.remove_geofabric_catch_duplicates(DG1)
nx.write_gpickle(DG1, os.path.join(pkl_path, 'PG_conflation.p')) | conflationExample.ipynb | artttt/RiverConflation | mit |
You can start from here by loading in the pickles that were created with the code above.
Run the imports and global variables code at the top first, though. | DG1 = nx.read_gpickle(os.path.join(pkl_path, 'PG_conflation.p'))
DG2 = nx.read_gpickle(os.path.join(pkl_path, 'PG_conflation2.p'))
DG2_idx = rc.build_index(DG2)
# starting from pickles = 1 minute 2GB
#starting from scratch = 12 minutes
#This is done separate from finding matches because it takes a while, so it's nice to split it out for debugging
#%%timeit -r1 -n1
rc.build_overlaps(DG1,DG2,DG2_idx)
# 15 minutes 1GB
rc.catch_area(DG1)
rc.catch_area(DG2)
# 1 min | conflationExample.ipynb | artttt/RiverConflation | mit |
The next step is to sum up areas to find the catchment overlap for every combination. We are only interested in the best overlap, or a short list of the overlaps that match well.
The simple approach is a brute-force exhaustive test of all combinations. This works well for a few thousand sub-catchments in each graph (75 minutes for 17k x 17k); however, it would not scale well as network sizes increase.
There are a few ways to reduce the set of catchments to test for a match. One issue to keep in mind is not to make assumptions about how similar the two networks' topology might be. The approach taken is a tunable spatial proximity limit, set to a size expected to ensure the best matches are found within that radius; setting it too small would cause missed matches, while too large would just take longer.
There is also a limit on the pairs of catchments searched, based on area similarity. This works well because good matches have to, by definition, be of a similar size.
Generally the results from this method are not very sensitive to these parameters if they are set conservatively; the main effect is processing time. | #%%timeit -n1 -r1
rc.upstream_edge_set(DG2)
#<10sec approx 2GB
sizeRatio=0.5
matches = rc.find_all_matches(DG1,DG2,DG2_idx,searchRadius,sizeRatio,maxMatchKeep)
# depends on the search radius used.
# 8 minutes with searchRadius=0.005 (500m) and a sizeRatio=0
# 7.5 minutes with searchRadius=0.01 (1km) and a sizeRatio=0.5
best = rc.best_match(matches)
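# To make the area-similarity idea concrete, a minimal sketch of the kind of
# prefilter described above -- an illustration only, not the library's actual
# implementation (the function name and comparison are assumptions):
def similar_area(area_a, area_b, size_ratio):
    # Keep a candidate pair only if the smaller catchment's area is at least
    # size_ratio times the larger one's -- good matches must be of similar size.
    return min(area_a, area_b) >= size_ratio * max(area_a, area_b)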
# simple outputs
# more complete outputs still to be re-implemented... stay tuned.
rc.write_debug_lines(DG1,DG2,best,os.path.join(pkl_path, 'debug_lines.shp'))
# a more refined match of nodes considering each inflow to a confluence.
#find_all_matches needs to have saved a shortlist of matches by setting maxMatchKeep to something like 10
node_matches = rc.confluence_matches(DG1,matches)
rc.write_debug_lines_confluence_matches(DG1,DG2,node_matches,os.path.join(pkl_path, 'debug_lines_nodes.shp'))
rc.write_catch(DG1,os.path.join(pkl_path, 'catch1.shp'))
rc.write_catch(DG2,os.path.join(pkl_path, 'catch2.shp'))
rc.write_stream(DG1,os.path.join(pkl_path, 'stream1.shp'))
rc.write_stream(DG2,os.path.join(pkl_path, 'stream2.shp')) | conflationExample.ipynb | artttt/RiverConflation | mit |
To kick us off, I will draw this landscape, using a matplotlib heatmap function that shows the highest altitude in red, down to the lowest altitudes in blue: | %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.title('heights')
plt.show() | python_notebooks/Watershed Problem.ipynb | learn1do1/learn1do1.github.io | mit |
We have to make a decision. Should the water always flow down the steepest slope? Let's assume yes, even though it may upset Ian Malcolm from Jurassic Park:
Should it pool together when the slope is 0? This describes the 3 adjacent blocks of height Zero in the heights matrix, drawn above. I'd argue yes. In order to guarantee that adjacent blocks of equal height pool together, I will initialize the slopes array to have a slope of -1. | watersheds = [[None] * len(heights) for x in range(len(heights))]
slopes = [[-1] * len(heights) for x in range(len(heights))] | python_notebooks/Watershed Problem.ipynb | learn1do1/learn1do1.github.io | mit |
The watershed matrix stores an integer for each cell. Cells in that matrix that share the same integer belong to the same watershed.
The slopes matrix stores, for each cell, the steepest slope down which the water can flow. | import operator
def initialize_positions(heights):
positions = []
for i in range(len(heights)):
for j in range(len(heights)):
positions.append(position((i,j), heights[i][j]))
positions.sort(key=operator.attrgetter('height'))
return positions
positions = initialize_positions(heights) | python_notebooks/Watershed Problem.ipynb | learn1do1/learn1do1.github.io | mit |
Our strategy is to sort positions from deepest to highest. Starting at the deepest, let's find all adjacent positions that would flow into it. We determine those positions by using the flow_up function. We continue this search from each of the new positions we have just moved up to, until every cell in the slopes array has been visited. | # Will return all neighbors where the slope to the current position is steeper than we have yet seen.
def flow_up(heights, (i, j)):
up_coordinates = set()
neighbor_coordinates = set()
local_height = heights[i][j]
# look up, down, left, right
neighbor_coordinates.add((max(i - 1, 0),j))
neighbor_coordinates.add((min(i + 1, len(heights) - 1),j))
neighbor_coordinates.add((i,max(j - 1, 0)))
neighbor_coordinates.add((i,min(j + 1, len(heights) - 1)))
for c in neighbor_coordinates:
slope = heights[c[0]][c[1]] - local_height
if slope > slopes[c[0]][c[1]]:
slopes[c[0]][c[1]] = slope
up_coordinates.add(c)
return up_coordinates
def main():
for k, position in enumerate(positions):
if watersheds[position.coordinates[0]][position.coordinates[1]] == None:
new_coordinates = [position.coordinates]
while len(new_coordinates) > 0:
for (i, j) in new_coordinates:
watersheds[i][j] = k
past_coordinates = list(new_coordinates)
new_coordinates = set()
for coordinates in past_coordinates:
new_coordinates.update(flow_up(heights, coordinates))
main()
print watersheds
print slopes
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.title('heights')
plt.figure()
plt.imshow(slopes , interpolation='nearest', cmap='jet')
plt.title('slopes')
plt.figure()
plt.imshow(watersheds , interpolation='nearest', cmap='jet')
plt.title('watersheds') | python_notebooks/Watershed Problem.ipynb | learn1do1/learn1do1.github.io | mit |
Let's do a simple test of our functions. Let's give it a landscape that looks like a wide staircase and make sure the output is just a single watershed. | n = 10
heights = [[x] * n for x in range(n)]
watersheds = [[None] * len(heights) for x in range(len(heights))]
slopes = [[-1] * len(heights) for x in range(len(heights))]
positions = initialize_positions(heights)
positions.sort(key=operator.attrgetter('height'))
main()
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.title('heights')
plt.figure()
plt.imshow(slopes , interpolation='nearest', cmap='jet')
plt.title('slopes')
plt.figure()
plt.imshow(watersheds , interpolation='nearest', cmap='jet')
plt.title('watersheds') | python_notebooks/Watershed Problem.ipynb | learn1do1/learn1do1.github.io | mit |
It's interesting in this single-watershed case to see how simple the slopes object becomes. Either water is spreading in a flat basin (slope of 0) or it is flowing down the staircase (slope of 1).
Now we can showcase the watershed code on a random input landscape | import random
heights = [[random.randint(0, n) for x in range(n)] for x in range(n)]
watersheds = [[None] * len(heights) for x in range(len(heights))]
slopes = [[-1] * len(heights) for x in range(len(heights))]
positions = initialize_positions(heights)
main()
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.title('heights')
plt.figure()
plt.imshow(slopes , interpolation='nearest', cmap='jet')
plt.title('slopes')
plt.figure()
plt.imshow(watersheds , interpolation='nearest', cmap='jet')
plt.title('watersheds') | python_notebooks/Watershed Problem.ipynb | learn1do1/learn1do1.github.io | mit |
What happens in this program?
The program starts out with a list of desserts, and one dessert is identified as a favorite.
The for loop runs through all the desserts.
Inside the for loop, each item in the list is tested.
If the current value of dessert is equal to the value of favorite_dessert, a message is printed that this is my favorite.
If the current value of dessert is not equal to the value of favorite_dessert, a message is printed that I just like the dessert.
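For reference, here is a minimal sketch of the kind of program being described (the exact list contents are an assumption): | desserts = ['ice cream', 'chocolate', 'apple crisp', 'cookies']
favorite_dessert = 'apple crisp'
# Print the desserts, identifying the favorite.
for dessert in desserts:
    if dessert == favorite_dessert:
        # This dessert is my favorite, let everyone know!
        print("%s is my favorite dessert!" % dessert.title())
    else:
        # I like these desserts, but they are not my favorite.
        print("I like %s." % dessert)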
You can test as many conditions as you want in an if statement, as you will see in a little bit.
top
Logical Tests
Every if statement evaluates to True or False. True and False are Python keywords, which have special meanings attached to them. You can test for the following conditions in your if statements:
equality (==)
inequality (!=)
other inequalities
greater than (>)
greater than or equal to (>=)
less than (<)
less than or equal to (<=)
You can test if an item is in a list.
Whitespace
Remember learning about PEP 8? There is a section of PEP 8 that tells us it's a good idea to put a single space on either side of all of these comparison operators. If you're not sure what this means, just follow the style of the examples you see below.
Equality
Two items are equal if they have the same value. You can test for equality between numbers, strings, and a number of other objects which you will learn about later. Some of these results may be surprising, so take a careful look at the examples below.
In Python, as in many programming languages, two equals signs test for equality.
Watch out! Be careful of accidentally using one equals sign, which can really throw things off because that one equals sign actually sets your item to the value you are testing for! | 5 == 5
3 == 5
5 == 5.0
'eric' == 'eric'
'Eric' == 'eric'
'Eric'.lower() == 'eric'.lower()
'5' == 5
'5' == str(5) | notebooks/if_statements.ipynb | nntisapeh/intro_programming | mit |
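The other comparison operators from the list above work the same way; a few quick illustrations: | print(3 != 5)  # True -- inequality
print(5 > 3)   # True
print(3 <= 3)  # True
print('eric' in ['eric', 'willie'])  # True -- testing membership in a list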
Data Analysis
In this section we will run a Cross Validation routine | from tpot import TPOTClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = preprocess()
tpot = TPOTClassifier(generations=5, population_size=20,
verbosity=2,max_eval_time_mins=20,
max_time_mins=100,scoring='f1_micro',
random_state = 17)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('FinalPipeline.py')
import numpy as np
from scipy.signal import medfilt  # used by train_and_test below for per-well smoothing
from sklearn import preprocessing  # used by train_and_test below for RobustScaler
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import FunctionTransformer
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
# Train and test a classifier
def train_and_test(X_tr, y_tr, X_v, well_v):
# Feature normalization
scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)
X_tr = scaler.transform(X_tr)
X_v = scaler.transform(X_v)
# Train classifier
#clf = make_pipeline(make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="gini", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))
#clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights="distance") )
#clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion="entropy", max_features=0.0001, n_estimators=500))
# * clf = make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))
clf = make_pipeline ( XGBClassifier(learning_rate=0.12, max_depth=3, min_child_weight=10, n_estimators=150, seed = 17, colsample_bytree = 0.9) )
clf.fit(X_tr, y_tr)
# Test classifier
y_v_hat = clf.predict(X_v)
# Clean isolated facies for each well
for w in np.unique(well_v):
y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)
return y_v_hat | LA_Team/Facies_classification_LA_TEAM_05.ipynb | seg/2016-ml-contest | apache-2.0 |
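A sketch of how train_and_test might be invoked on the split created above; well_test is a stand-in array of per-sample well labels (in the real data this would come from the well-name column), needed because the median filter is applied per well: | # Hypothetical well labels -- replace with the actual well column for the test rows.
well_test = np.array(['DUMMY_WELL'] * len(y_test))
y_test_hat = train_and_test(X_train, y_train, X_test, well_test)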
Working with Python Classes
Encapsulation is seen as the bundling of data with the methods that operate on that data. It is often accomplished by providing two kinds of methods for attributes: The methods for retrieving or accessing the values of attributes are called getter methods. Getter methods do not change the values of attributes, they just return the values. The methods used for changing the values of attributes are called setter methods.
Public, Private, Protected
There are two ways to restrict the access to class attributes:
protected. First, we can prefix an attribute name with a leading underscore "_". This marks the attribute as protected. It tells users of the class not to use this attribute unless they are writing a subclass.
private. Second, we can prefix an attribute name with two leading underscores "__". The attribute is now inaccessible and invisible from outside. It's neither possible to read nor write to those attributes except inside of the class definition itself. | class A:
def __init__(self):
self.__priv = "I am private"
self._prot = "I am protected"
self.pub = "I am public"
x = A()
print(x.pub)
# Whenever we assign or retrieve any object attribute
# Python searches it in the object's __dict__ dictionary
print(x.__dict__) | python/class.ipynb | ethen8181/machine-learning | mit |
When the Python compiler sees a private attribute, it transforms the name to _[Class name]__[private attribute name]. However, this still does not prevent the end-user from accessing the attribute. Thus in Python land, it is more common to use public and protected attributes, write proper docstrings, and assume that everyone is a consenting adult, i.e. won't touch the protected members unless they know what they are doing.
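We can see the mangled name directly, using the class A defined above: | x = A()
print(x._A__priv)  # the mangled name works
# print(x.__priv)  # ...but this would raise an AttributeError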
Class Decorators
@property The Pythonic way to introduce attributes is to make them public, and not introduce getters and setters to retrieve or change them.
@classmethod To add additional constructor to the class.
@staticmethod To attach functions to classes so people won't misuse them in wrong places.
@Property
Let's assume one day we decide to make a class that could store the temperature in degrees Celsius. The temperature will be a non-public attribute, so our end-users won't have direct access to it.
The class will also implement a method to convert the temperature into degrees Fahrenheit. We also want to enforce a value constraint on the temperature, so that it cannot go below -273 degrees Celsius. One way of doing this is to define getter and setter interfaces to manipulate it. | class Celsius:
def __init__(self, temperature = 0):
self.set_temperature(temperature)
def to_fahrenheit(self):
return (self.get_temperature() * 1.8) + 32
def get_temperature(self):
return self._temperature
def set_temperature(self, value):
if value < -273:
raise ValueError('Temperature below -273 is not possible')
self._temperature = value
# c = Celsius(-277) # this returns an error
c = Celsius(37)
c.get_temperature() | python/class.ipynb | ethen8181/machine-learning | mit |
Now the property way, where we define @property for the getter and @[attribute name].setter for the setter. | class Celsius:
def __init__(self, temperature = 0):
self._temperature = temperature
def to_fahrenheit(self):
return (self.temperature * 1.8) + 32
# have access to the value like it is an attribute instead of a method
@property
def temperature(self):
return self._temperature
# like accessing the attribute with an extra layer of error checking
@temperature.setter
def temperature(self, value):
if value < -273:
raise ValueError('Temperature below -273 is not possible')
print('Setting value')
self._temperature = value
c = Celsius(37)
# much easier to access than the getter, setter way
print(c.temperature)
# note that you can still access the private attribute
# and violate the temperature checking,
# but then it's the user's fault, not yours
c._temperature = -300
print(c._temperature)
# trying to set the attribute below -273 will raise the ValueError
# c.temperature = -300 | python/class.ipynb | ethen8181/machine-learning | mit |
@classmethod and @staticmethod
The @classmethod decorator creates alternative constructors for the class. An example of this behavior: there are several different ways to construct a dictionary. | print(dict.fromkeys(['raymond', 'rachel', 'mathew']))
import time
class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
# Primary
a = Date(2012, 12, 21)
print(a.__dict__)
# Alternate
b = Date.today()
print(b.__dict__) | python/class.ipynb | ethen8181/machine-learning | mit |
The cls argument is critical: it holds the class itself, which is what makes classmethods work with inheritance. | class NewDate(Date):
pass
# Creates an instance of Date (cls=Date)
c = Date.today()
print(c.__dict__)
# Creates an instance of NewDate (cls=NewDate)
d = NewDate.today()
print(d.__dict__) | python/class.ipynb | ethen8181/machine-learning | mit |
The purpose of @staticmethod is to attach functions to classes. We do this to improve the findability of the function and to make sure that people are using the function in the appropriate context. | class Date:
# Primary constructor
def __init__(self, year, month, day):
self.year = year
self.month = month
self.day = day
# Alternate constructor
@classmethod
def today(cls):
t = time.localtime()
return cls(t.tm_year, t.tm_mon, t.tm_mday)
# the logic belongs with the date class
@staticmethod
def show_tomorrow_date():
        t = time.localtime()
        # naive sketch: simply adds one to the day, ignoring month/year rollover
        return t.tm_year, t.tm_mon, t.tm_mday + 1
Date.show_tomorrow_date() | python/class.ipynb | ethen8181/machine-learning | mit |
Simple Dataset
Usually when working with data we have one or more independent variables, taking the form of categories, labels, discrete sample coordinates, or bins. These variables are what we refer to as key dimensions (or kdims for short) in HoloViews. The observer or dependent variables, on the other hand, are referred to as value dimensions (vdims), and are ordinarily measured or calculated given the independent variables. The simplest useful form of a Dataset object is therefore a column 'x' and a column 'y' corresponding to the key dimensions and value dimensions respectively. An obvious visual representation of this data is a Table: | xs = range(10)
ys = np.exp(xs)
table = hv.Table((xs, ys), kdims=['x'], vdims=['y'])
table | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
However, this data has many more meaningful visual representations, and therefore the first important concept is that Dataset objects are interchangeable as long as their dimensionality allows it, meaning that you can easily create the different objects from the same data (and cast between the objects once created): | hv.Scatter(table) + hv.Curve(table) + hv.Bars(table) | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
Each of these three plots uses the same data, but represents a different assumption about the semantic meaning of that data -- the Scatter plot is appropriate if that data consists of independent samples, the Curve plot is appropriate for samples chosen from an underlying smooth function, and the Bars plot is appropriate for independent categories of data. Since all these plots have the same dimensionality, they can easily be converted to each other, but there is normally only one of these representations that is semantically appropriate for the underlying data. For this particular data, the semantically appropriate choice is Curve, since the y values are samples from the continuous function exp.
As a guide to which Elements can be converted to each other, those of the same dimensionality here should be interchangeable, because of the underlying similarity of their columnar representation:
0D: BoxWhisker, Spikes, Distribution*,
1D: Scatter, Curve, ErrorBars, Spread, Bars, BoxWhisker, Regression*
2D: Points, HeatMap, Bars, BoxWhisker, Bivariate*
3D: Scatter3D, Trisurface, VectorField, BoxWhisker, Bars
* - requires Seaborn
This categorization is based only on the kdims, which define the space in which the data has been sampled or defined. An Element can also have any number of value dimensions (vdims), which may be mapped onto various attributes of a plot such as the color, size, and orientation of the plotted items. For a reference of how to use these various Element types, see the Elements Tutorial.
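As a small illustration of value dimensions, we can declare an extra data column as a vdim on a Points element, which plotting backends can then map onto visual attributes such as color or size (a brief sketch reusing the xs and ys from above): | hv.Points((xs, ys, ys*2), vdims=['z'])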
Data types and Constructors
As discussed above, Dataset provide an extensible interface to store and operate on data in different formats. All interfaces support a number of standard constructors.
Storage formats
Dataset types can be constructed using one of three supported formats, (a) a dictionary of columns, (b) an NxD array with N rows and D columns, or (c) pandas dataframes: | print(repr(hv.Scatter({'x': xs, 'y': ys}) +
hv.Scatter(np.column_stack([xs, ys])) +
hv.Scatter(pd.DataFrame({'x': xs, 'y': ys})))) | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
Literals
In addition to the main storage formats, Dataset Elements support construction from three Python literal formats: (a) An iterator of y-values, (b) a tuple of columns, and (c) an iterator of row tuples. | print(repr(hv.Scatter(ys) + hv.Scatter((xs, ys)) + hv.Scatter(zip(xs, ys)))) | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
For these inputs, the data will need to be copied to a new data structure, having one of the three storage formats above. By default Dataset will try to construct a simple array, falling back to either pandas dataframes (if available) or the dictionary-based format if the data is not purely numeric. Additionally, the interfaces will try to maintain the provided data's type, so numpy arrays and pandas DataFrames will therefore always be parsed by the array and dataframe interfaces first respectively. | df = pd.DataFrame({'x': xs, 'y': ys, 'z': ys*2})
print(type(hv.Scatter(df).data)) | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
Dataset will attempt to parse the supplied data, falling back to each consecutive interface if the previous could not interpret the data. The default list of fallbacks and simultaneously the list of allowed datatypes is: | hv.Dataset.datatype | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
To select a particular storage format explicitly, supply one or more allowed datatypes: | print(type(hv.Scatter((xs, ys), datatype=['array']).data))
print(type(hv.Scatter((xs, ys), datatype=['dictionary']).data))
print(type(hv.Scatter((xs, ys), datatype=['dataframe']).data)) | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
Sharing Data
Since the formats with labelled columns do not require any specific order, each Element can effectively become a view into a single set of data. By specifying different key and value dimensions, many Elements can show different values, while sharing the same underlying data source. | overlay = hv.Scatter(df, kdims='x', vdims='y') * hv.Scatter(df, kdims='x', vdims='z')
overlay | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
We can quickly confirm that the data is actually shared: | overlay.Scatter.I.data is overlay.Scatter.II.data | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |
For columnar data, this approach is much more efficient than creating copies of the data for each Element, and allows for some advanced features like linked brushing in the Bokeh backend.
Converting to raw data
Column types make it easy to export the data to the three basic formats: arrays, dataframes, and a dictionary of columns.
Array | table.array() | doc/Tutorials/Columnar_Data.ipynb | vascotenner/holoviews | bsd-3-clause |