markdown (stringlengths 0–37k) | code (stringlengths 1–33.3k) | path (stringlengths 8–215) | repo_name (stringlengths 6–77) | license (stringclasses, 15 values)
---|---|---|---|---|
Did it work on the first try? Making a perfect square is not easy, and you will most likely need to adjust a couple of things:
the 90-degree turn: if the robot turns too far, decrease the sleep time; if it turns too little, increase it (decimal values are allowed)
if it does not drive straight: it is normal for one of the motors to run slightly faster than the other; you can adjust the speed of each motor individually, between 0 (minimum) and 100 (maximum), for example:
forward(speed_B=90,speed_C=75)
Change the values and try again until you get a decent square (perfection is impossible).
Pro version
Programming languages have constructs for repeating a block of instructions without having to write it out over and over. This is what is called a loop (in Python, a for loop).
In Python, a loop that repeats a block of instructions four times is written like this: | for i in range(4):
# move forward
# turn
# stop | task/quadrat.ipynb | ecervera/mindstorms-nb | mit |
It is important that the instructions inside the loop are shifted to the right, that is, indented.
Replace the comments with the actual instructions and try it out; one possible sketch is shown below.
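One way the loop might be filled in is sketched below. Only forward() and its speed arguments appear earlier in this exercise; the turn and stop helpers (called left() and stop() here) and the sleep times are assumptions that depend on your robot and on the functions provided by this notebook's library, so replace them with the names and values you used for your own square.

```python
from time import sleep

for i in range(4):   # four sides, four 90-degree turns
    forward()        # move forward along one side
    sleep(2)         # tune this to set the side length
    left()           # hypothetical turn helper; use the turn function you used before
    sleep(0.6)       # tune this until the turn is close to 90 degrees
    stop()           # hypothetical stop helper; halt the motors before the next side
```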
Let's recap
To finish the exercise, and before moving on to the next page, disconnect the robot: | disconnect()
next_notebook('sensors') | task/quadrat.ipynb | ecervera/mindstorms-nb | mit |
TOC and elevation corrections
Some further changes to the ICPW trends analysis are required:
1. Heleen has discovered some strange results for TOC for some of the Canadian sites (see e-mail received 14/03/2017 at 17.45)
2. We now have elevation data for the remaining sites (see e-mail received 15/03/2017 at 08.37)
3. Heleen would like a "grid cell ID" added to the climate processing output (see e-mail received 15/03/2017 at 13.33)
Having made the above changes, the whole climate data and trends analysis needs re-running. This notebook deals with points 1 and 2 above; point 3 requires a small modification to the existing climate code.
1. Correct TOC
This is a bit more complicated than it first appears. It looks as though a lot of duplicate data was uploaded to the database at some point, and some of the duplicates have incorrect method names. For the Ontario lakes, the same values have been uploaded both as DOC (in mg-C/l) and as "DOCx", which is in umol-C/l. Since carbon has a molar mass of 12 g/mol, 1 umol-C/l corresponds to 0.012 mg-C/l, so the conversion factor from DOCx to DOC is 0.012, which is very close to Heleen's estimated correction factor of dividing by 100. The problem is that the database appears to be selecting which values to display more-or-less at random. This is illustrated below. | # Create db connection
r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py'
resa2 = imp.load_source('useful_resa2_code', r2_func_path)
engine, conn = resa2.connect_to_resa2()
# Get example data
sql = ("SELECT * FROM resa2.water_chemistry_values2 "
"WHERE sample_id = (SELECT water_sample_id "
"FROM resa2.water_samples "
"WHERE station_id = 23466 "
"AND sample_date = DATE '2000-05-23') "
"AND method_id IN (10313, 10294)")
df = pd.read_sql_query(sql, engine)
df | correct_toc_elev.ipynb | JamesSample/icpw | mit |
method_id=10294 is DOC in mg-C/l, whereas method_id=10313 is DOCx in umol-C/l. Both were uploaded within the space of a few weeks back in 2006. I assume that the values with method_id=10313 are correct, and those with method_id=10294 are wrong.
It seems as though, when both methods are present, RESA2 preferentially chooses method_id=10313, which is why most of the data look OK. However, if method_id=10313 is not available, the database uses the values for method_id=10294 instead, and these values are wrong. The problem is that this selection isn't deliberate: the database only prefers method_id=10313 because it appears lower in the table than method_id=10294. Essentially, it's just a fluke that most of the data turn out OK - it could easily have been the other way around.
To fix this, I need to:
1. Go through all the samples from the Ontario sites and see whether there are values for both method_id=10313 and method_id=10294
2. If yes, see whether the raw values are the same. If so, delete the value for method_id=10294
3. If values are only entered with method_id=10294, check to see whether they are too large and, if so, switch the method_id to 10313
This is done below. | # Get a list of all water samples associated with
# stations in the 'ICPW_TOCTRENDS_2015_CA_ICPW' project
sql = ("SELECT water_sample_id FROM resa2.water_samples "
"WHERE station_id IN ( "
"SELECT station_id FROM resa2.stations "
"WHERE station_id IN ( "
"SELECT station_id FROM resa2.projects_stations "
"WHERE project_id IN ( "
"SELECT project_id FROM resa2.projects "
"WHERE project_name = 'ICPW_TOCTRENDS_2015_CA_ICPW')))")
samp_df = pd.read_sql_query(sql, engine)
# Loop over samples and check whether both method_ids are present
for samp_id in samp_df['water_sample_id'].values:
# Get data for this sample
sql = ("SELECT method_id, value "
"FROM resa2.water_chemistry_values2 "
"WHERE sample_id = %s "
"AND method_id IN (10294, 10313)" % samp_id)
df = pd.read_sql_query(sql, engine)
df.index = df['method_id']
del df['method_id']
# How many entries for DOC?
if len(df) == 1:
# We have just one of the two methods
if df.index[0] == 10294:
# Should be DOC in mg-C/l and values should be <50
if df['value'].values[0] > 50:
# Method_ID must be wrong
sql = ('UPDATE resa2.water_chemistry_values2 '
'SET method_id = 10313 '
'WHERE sample_id = %s '
'AND method_id = 10294' % samp_id)
result = conn.execute(sql)
# Otherwise we have both methods
elif len(df) == 2:
# Are they the same and large?
if (df.loc[10313].value == df.loc[10294].value) and (df.loc[10313].value > 50):
# Delete record for method_id=10294
sql = ('DELETE FROM resa2.water_chemistry_values2 '
'WHERE sample_id = %s '
'AND method_id = 10294' % samp_id)
result = conn.execute(sql)
print 'Finished.' | correct_toc_elev.ipynb | JamesSample/icpw | mit |
2. Update station elevations
Heleen has provided the missing elevation data, which I copied here:
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\CRU_Climate_Data\missing_elev_data.xlsx | # Read elev data
in_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\CRU_Climate_Data\missing_elev_data.xlsx')
elev_df = pd.read_excel(in_xlsx)
elev_df.index = elev_df['station_id']
# Loop over stations and update info
for stn_id in elev_df['station_id'].values:
# Get elev
elev = elev_df.loc[stn_id]['altitude']
# Update rows
sql = ('UPDATE resa2.stations '
'SET altitude = %s '
'WHERE station_id = %s' % (elev, stn_id))
result = conn.execute(sql) | correct_toc_elev.ipynb | JamesSample/icpw | mit |
Next, we'll load our data set. | df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",") | courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Examine the data
It's a good idea to get to know your data a little bit before you work with it.
We'll print out a quick summary of a few useful statistics on each column.
This will include things like mean, standard deviation, max, min, and various quantiles. | df.head()
df.describe() | courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb | turbomanage/training-data-analyst | apache-2.0 |
This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Let's create some different, more appropriate features. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well. | df['num_rooms'] = df['total_rooms'] / df['households']
df['num_bedrooms'] = df['total_bedrooms'] / df['households']
df['persons_per_house'] = df['population'] / df['households']
df.describe()
df.drop(['total_rooms', 'total_bedrooms', 'population', 'households'], axis = 1, inplace = True)
df.describe() | courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Build a neural network model
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use the remaining columns as our input features.
To train our model, we'll first use the LinearRegressor interface. Then, we'll change to a DNNRegressor. | featcols = {
colname : tf.feature_column.numeric_column(colname) \
for colname in 'housing_median_age,median_income,num_rooms,num_bedrooms,persons_per_house'.split(',')
}
# Bucketize lat, lon so it's not so high-res; California is mostly N-S, so more lats than lons
featcols['longitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('longitude'),
np.linspace(-124.3, -114.3, 5).tolist())
featcols['latitude'] = tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'),
np.linspace(32.5, 42, 10).tolist())
featcols.keys()
# Split into train and eval
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
SCALE = 100000
BATCH_SIZE= 100
OUTDIR = './housing_trained'
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[list(featcols.keys())],
y = traindf["median_house_value"] / SCALE,
num_epochs = None,
batch_size = BATCH_SIZE,
shuffle = True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[list(featcols.keys())],
y = evaldf["median_house_value"] / SCALE, # note the scaling
num_epochs = 1,
batch_size = len(evaldf),
shuffle=False)
# Linear Regressor
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = featcols.values(),
optimizer = myopt)
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE)
# DNN Regressor
def train_and_evaluate(output_dir, num_train_steps):
myopt = tf.train.FtrlOptimizer(learning_rate = 0.01) # note the learning rate
estimator = # TODO: Implement DNN Regressor model
#Add rmse evaluation metric
def rmse(labels, predictions):
pred_values = tf.cast(predictions['predictions'],tf.float64)
return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
estimator = tf.contrib.estimator.add_metrics(estimator,rmse)
train_spec=tf.estimator.TrainSpec(
input_fn = train_input_fn,
max_steps = num_train_steps)
eval_spec=tf.estimator.EvalSpec(
input_fn = eval_input_fn,
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = (100 * len(traindf)) / BATCH_SIZE) | courses/machine_learning/deepdive/05_artandscience/labs/c_neuralnetwork.ipynb | turbomanage/training-data-analyst | apache-2.0 |
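The cell above leaves the estimator as a TODO. One possible way to complete it, sketched below, is to swap the LinearRegressor for tf.estimator.DNNRegressor from the TF 1.x Estimator API, reusing output_dir, featcols and myopt from the surrounding train_and_evaluate function. The hidden_units and dropout values are arbitrary illustrative choices, not values prescribed by the lab.

```python
# Possible completion of the TODO inside train_and_evaluate (TF 1.x Estimator API).
estimator = tf.estimator.DNNRegressor(
    model_dir = output_dir,
    hidden_units = [100, 50, 20],         # arbitrary example architecture
    feature_columns = featcols.values(),
    optimizer = myopt,
    dropout = 0.1)                        # arbitrary example dropout rate
```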
Basic rich display
Find a Physics-related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px. | Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/6/6d/Particle2D.svg/320px-Particle2D.svg.png', embed=True, width = 600, height = 600)
assert True # leave this to grade the image display | assignments/assignment06/DisplayEx01.ipynb | nproctor/phys202-2015-work | mit |
Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. | %%html
<table>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge ($e$)</th>
<th>Mass (MeV/$c^2$)</th>
</tr>
<tr>
<th>Up</th>
<td>u</td>
<td>$\bar{u}$</td>
<td>+2/3</td>
<td>1.5-3.3</td>
</tr>
<tr>
<th>Down</th>
<td>d</td>
<td>$\bar{d}$</td>
<td>-1/3</td>
<td>3.5-6.0</td>
</tr>
<tr>
<th>Charm</th>
<td>c</td>
<td>$\bar{c}$</td>
<td>+2/3</td>
<td>1,160-1,340</td>
</tr>
<tr>
<th>Strange</th>
<td>s</td>
<td>$\bar{s}$</td>
<td>-1/3</td>
<td>70-130</td>
</tr>
<tr>
<th>Top</th>
<td>t</td>
<td>$\bar{t}$</td>
<td>+2/3</td>
<td>169,100-173,300</td>
</tr>
<tr>
<th>Bottom</th>
<td>b</td>
<td>$\bar{b}$</td>
<td>-1/3</td>
<td>4,130-4,370</td>
</tr>
</table>
assert True # leave this here to grade the quark table | assignments/assignment06/DisplayEx01.ipynb | nproctor/phys202-2015-work | mit |
1. Setup and dataset download
Download data required for this exercise.
get_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.
download_model_binary.py to download the pretrained reference model
finetune_flickr_style/assemble_data.py downloads the style training and testing data
We'll download just a small subset of the full dataset for this exercise: just 2000 of the 80K images, from 5 of the 20 style categories. (To download the full dataset, set full_dataset = True in the cell below.) | # Download just a small subset of the data for this exercise.
# (2000 of 80K images, 5 of 20 labels.)
# To download the entire dataset, set `full_dataset = True`.
full_dataset = False
if full_dataset:
NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1
else:
NUM_STYLE_IMAGES = 2000
NUM_STYLE_LABELS = 5
# This downloads the ilsvrc auxiliary data (mean file, etc),
# and a subset of 2000 images for the style recognition task.
import os
os.chdir(caffe_root) # run scripts from caffe root
!data/ilsvrc12/get_ilsvrc_aux.sh
!scripts/download_model_binary.py models/bvlc_reference_caffenet
!python examples/finetune_flickr_style/assemble_data.py \
--workers=-1 --seed=1701 \
--images=$NUM_STYLE_IMAGES --label=$NUM_STYLE_LABELS
# back to examples
os.chdir('examples') | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists. | import os
weights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
assert os.path.exists(weights) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt. | # Load ImageNet labels to imagenet_labels
imagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
imagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\t'))
assert len(imagenet_labels) == 1000
print 'Loaded ImageNet labels:\n', '\n'.join(imagenet_labels[:10] + ['...'])
# Load style labels to style_labels
style_label_file = caffe_root + 'examples/finetune_flickr_style/style_names.txt'
style_labels = list(np.loadtxt(style_label_file, str, delimiter='\n'))
if NUM_STYLE_LABELS > 0:
style_labels = style_labels[:NUM_STYLE_LABELS]
print '\nLoaded style labels:\n', ', '.join(style_labels) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
2. Defining and running the nets
We'll start by defining caffenet, a function which initializes the CaffeNet architecture (a minor variant on AlexNet), taking arguments specifying the data and number of output classes. | from caffe import layers as L
from caffe import params as P
weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]
frozen_param = [dict(lr_mult=0)] * 2
def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
param=learned_param,
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0.1)):
conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
num_output=nout, pad=pad, group=group,
param=param, weight_filler=weight_filler,
bias_filler=bias_filler)
return conv, L.ReLU(conv, in_place=True)
def fc_relu(bottom, nout, param=learned_param,
weight_filler=dict(type='gaussian', std=0.005),
bias_filler=dict(type='constant', value=0.1)):
fc = L.InnerProduct(bottom, num_output=nout, param=param,
weight_filler=weight_filler,
bias_filler=bias_filler)
return fc, L.ReLU(fc, in_place=True)
def max_pool(bottom, ks, stride=1):
return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)
def caffenet(data, label=None, train=True, num_classes=1000,
classifier_name='fc8', learn_all=False):
"""Returns a NetSpec specifying CaffeNet, following the original proto text
specification (./models/bvlc_reference_caffenet/train_val.prototxt)."""
n = caffe.NetSpec()
n.data = data
param = learned_param if learn_all else frozen_param
n.conv1, n.relu1 = conv_relu(n.data, 11, 96, stride=4, param=param)
n.pool1 = max_pool(n.relu1, 3, stride=2)
n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)
n.conv2, n.relu2 = conv_relu(n.norm1, 5, 256, pad=2, group=2, param=param)
n.pool2 = max_pool(n.relu2, 3, stride=2)
n.norm2 = L.LRN(n.pool2, local_size=5, alpha=1e-4, beta=0.75)
n.conv3, n.relu3 = conv_relu(n.norm2, 3, 384, pad=1, param=param)
n.conv4, n.relu4 = conv_relu(n.relu3, 3, 384, pad=1, group=2, param=param)
n.conv5, n.relu5 = conv_relu(n.relu4, 3, 256, pad=1, group=2, param=param)
n.pool5 = max_pool(n.relu5, 3, stride=2)
n.fc6, n.relu6 = fc_relu(n.pool5, 4096, param=param)
if train:
n.drop6 = fc7input = L.Dropout(n.relu6, in_place=True)
else:
fc7input = n.relu6
n.fc7, n.relu7 = fc_relu(fc7input, 4096, param=param)
if train:
n.drop7 = fc8input = L.Dropout(n.relu7, in_place=True)
else:
fc8input = n.relu7
# always learn fc8 (param=learned_param)
fc8 = L.InnerProduct(fc8input, num_output=num_classes, param=learned_param)
# give fc8 the name specified by argument `classifier_name`
n.__setattr__(classifier_name, fc8)
if not train:
n.probs = L.Softmax(fc8)
if label is not None:
n.label = label
n.loss = L.SoftmaxWithLoss(fc8, n.label)
n.acc = L.Accuracy(fc8, n.label)
# write the net to a temporary file and return its filename
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(n.to_proto()))
return f.name | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Now, let's create a CaffeNet that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts. | dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Define a function style_net which calls caffenet on data from the Flickr style dataset.
The new network will also have the CaffeNet architecture, with differences in the input and output:
the input is the Flickr style data we downloaded, provided by an ImageData layer
the output is a distribution over 20 style classes (5 in the reduced subset used here) rather than the original 1000 ImageNet classes
the classification layer is renamed from fc8 to fc8_flickr to tell Caffe not to load the original classifier (fc8) weights from the ImageNet-pretrained model | def style_net(train=True, learn_all=False, subset=None):
if subset is None:
subset = 'train' if train else 'test'
source = caffe_root + 'data/flickr_style/%s.txt' % subset
transform_param = dict(mirror=train, crop_size=227,
mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')
style_data, style_label = L.ImageData(
transform_param=transform_param, source=source,
batch_size=50, new_height=256, new_width=256, ntop=2)
return caffenet(data=style_data, label=style_label, train=train,
num_classes=NUM_STYLE_LABELS,
classifier_name='fc8_flickr',
learn_all=learn_all) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model.
Call forward on untrained_style_net to get a batch of style training data. | untrained_style_net = caffe.Net(style_net(train=False, subset='train'),
weights, caffe.TEST)
untrained_style_net.forward()
style_data_batch = untrained_style_net.blobs['data'].data.copy()
style_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network to view its top 5 predicted classes from the 1000 ImageNet classes.
Below we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and "sandbar" and "seashore" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the batch_index variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the above cell to load a fresh batch of data into style_net.) | def disp_preds(net, image, labels, k=5, name='ImageNet'):
input_blob = net.blobs['data']
net.blobs['data'].data[0, ...] = image
probs = net.forward(start='conv1')['probs'][0]
top_k = (-probs).argsort()[:k]
print 'top %d predicted %s labels =' % (k, name)
print '\n'.join('\t(%d) %5.2f%% %s' % (i+1, 100*probs[p], labels[p])
for i, p in enumerate(top_k))
def disp_imagenet_preds(net, image):
disp_preds(net, image, imagenet_labels, name='ImageNet')
def disp_style_preds(net, image):
disp_preds(net, image, style_labels, name='style')
batch_index = 8
image = style_data_batch[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[style_label_batch[batch_index]]
disp_imagenet_preds(imagenet_net, image) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet.
In fact, since we zero-initialized the classifier (see caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class. | disp_style_preds(untrained_style_net, image) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers. | diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]
error = (diff ** 2).sum()
assert error < 1e-8 | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.) | del untrained_style_net | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
3. Training the style classifier
Now, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here! | from caffe.proto import caffe_pb2
def solver(train_net_path, test_net_path=None, base_lr=0.001):
s = caffe_pb2.SolverParameter()
# Specify locations of the train and (maybe) test networks.
s.train_net = train_net_path
if test_net_path is not None:
s.test_net.append(test_net_path)
s.test_interval = 1000 # Test after every 1000 training iterations.
s.test_iter.append(100) # Test on 100 batches each time we test.
# The number of iterations over which to average the gradient.
# Effectively boosts the training batch size by the given factor, without
# affecting memory utilization.
s.iter_size = 1
s.max_iter = 100000 # # of times to update the net (training iterations)
# Solve using the stochastic gradient descent (SGD) algorithm.
# Other choices include 'Adam' and 'RMSProp'.
s.type = 'SGD'
# Set the initial learning rate for SGD.
s.base_lr = base_lr
# Set `lr_policy` to define how the learning rate changes during training.
# Here, we 'step' the learning rate by multiplying it by a factor `gamma`
# every `stepsize` iterations.
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 20000
# Set other SGD hyperparameters. Setting a non-zero `momentum` takes a
# weighted average of the current gradient and previous gradients to make
# learning more stable. L2 weight decay regularizes learning, to help prevent
# the model from overfitting.
s.momentum = 0.9
s.weight_decay = 5e-4
# Display the current training loss and accuracy every 1000 iterations.
s.display = 1000
# Snapshots are files used to store networks we've trained. Here, we'll
# snapshot every 10K iterations -- ten times during training.
s.snapshot = 10000
s.snapshot_prefix = caffe_root + 'models/finetune_flickr_style/finetune_flickr_style'
# Train on the GPU. Using the CPU to train large networks is very slow.
s.solver_mode = caffe_pb2.SolverParameter.GPU
# Write the solver to a temporary file and return its filename.
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(s))
return f.name | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Now we'll invoke the solver to train the style net's classification layer.
For the record, if you want to train the network using only the command line tool, this is the command:
<code>
build/tools/caffe train \
-solver models/finetune_flickr_style/solver.prototxt \
-weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
-gpu 0
</code>
However, we will train using Python in this example.
We'll first define run_solvers, a function that takes a list of solvers and steps each one in a round robin manner, recording the accuracy and loss values each iteration. At the end, the learned weights are saved to a file. | def run_solvers(niter, solvers, disp_interval=10):
"""Run solvers for niter iterations,
returning the loss and accuracy recorded each iteration.
`solvers` is a list of (name, solver) tuples."""
blobs = ('loss', 'acc')
loss, acc = ({name: np.zeros(niter) for name, _ in solvers}
for _ in blobs)
for it in range(niter):
for name, s in solvers:
s.step(1) # run a single SGD step in Caffe
loss[name][it], acc[name][it] = (s.net.blobs[b].data.copy()
for b in blobs)
if it % disp_interval == 0 or it + 1 == niter:
loss_disp = '; '.join('%s: loss=%.3f, acc=%2d%%' %
(n, loss[n][it], np.round(100*acc[n][it]))
for n, _ in solvers)
print '%3d) %s' % (it, loss_disp)
# Save the learned weights from both nets.
weight_dir = tempfile.mkdtemp()
weights = {}
for name, s in solvers:
filename = 'weights.%s.caffemodel' % name
weights[name] = os.path.join(weight_dir, filename)
s.net.save(weights[name])
return loss, acc, weights | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialized net.
During training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net. | niter = 200 # number of iterations to train
# Reset style_solver as before.
style_solver_filename = solver(style_net(train=True))
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(weights)
# For reference, we also create a solver that isn't initialized from
# the pretrained ImageNet weights.
scratch_style_solver_filename = solver(style_net(train=True))
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained', style_solver),
('scratch', scratch_style_solver)]
loss, acc, weights = run_solvers(niter, solvers)
print 'Done.'
train_loss, scratch_train_loss = loss['pretrained'], loss['scratch']
train_acc, scratch_train_acc = acc['pretrained'], acc['scratch']
style_weights, scratch_style_weights = weights['pretrained'], weights['scratch']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer. | plot(np.vstack([train_loss, scratch_train_loss]).T)
xlabel('Iteration #')
ylabel('Loss')
plot(np.vstack([train_acc, scratch_train_acc]).T)
xlabel('Iteration #')
ylabel('Accuracy') | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see. | def eval_style_net(weights, test_iters=10):
test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)
accuracy = 0
for it in xrange(test_iters):
accuracy += test_net.forward()['acc']
accuracy /= test_iters
return test_net, accuracy
test_net, accuracy = eval_style_net(style_weights)
print 'Accuracy, trained from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights)
print 'Accuracy, trained from random initialization: %3.1f%%' % (100*scratch_accuracy, ) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
4. End-to-end finetuning for style
Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input image. We pass the argument learn_all=True to the style_net function defined earlier in this notebook, which tells the function to apply a positive (non-zero) lr_mult value for all parameters. Under the default, learn_all=False, all parameters in the pretrained layers (conv1 through fc7) are frozen (lr_mult = 0), and we learn only the classifier layer fc8_flickr.
Note that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure without the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself! | end_to_end_net = style_net(train=True, learn_all=True)
# Set base_lr to 1e-3, the same as last time when learning only the classifier.
# You may want to play around with different values of this or other
# optimization parameters when fine-tuning. For example, if learning diverges
# (e.g., the loss gets very large or goes to infinity/NaN), you should try
# decreasing base_lr (e.g., to 1e-4, then 1e-5, etc., until you find a value
# for which learning does not diverge).
base_lr = 0.001
style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(style_weights)
scratch_style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
scratch_style_solver.net.copy_from(scratch_style_weights)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained, end-to-end', style_solver),
('scratch, end-to-end', scratch_style_solver)]
_, _, finetuned_weights = run_solvers(niter, solvers)
print 'Done.'
style_weights_ft = finetuned_weights['pretrained, end-to-end']
scratch_style_weights_ft = finetuned_weights['scratch, end-to-end']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights). | test_net, accuracy = eval_style_net(style_weights_ft)
print 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)
print 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, ) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
We'll first look back at the image we started with and check our end-to-end trained model's predictions. | plt.imshow(deprocess_net_image(image))
disp_style_preds(test_net, image) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.
Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it. | batch_index = 1
image = test_net.blobs['data'].data[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]
disp_style_preds(test_net, image) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net. | disp_style_preds(scratch_test_net, image) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Of course, we can again look at the ImageNet model's predictions for the above image: | disp_imagenet_preds(imagenet_net, image) | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | wy1iu/sphereface | mit |
Here we construct a filter, $F$, such that
$$F\left(x\right) = e^{-|x|} \cos{\left(2\pi x\right)} $$
We want to show that if $F$ is used to generate sample calibration
data for the MKS, then the calculated influence coefficients are in
fact just $F$. | x0 = -10.
x1 = 10.
x = np.linspace(x0, x1, 1000)
def F(x):
return np.exp(-abs(x)) * np.cos(2 * np.pi * x)
p = plt.plot(x, F(x), color='#1a9850')
| notebooks/filter.ipynb | XinyiGong/pymks | mit |
Next we generate the sample data (X, y) using
scipy.ndimage.convolve. This performs the convolution
$$ p\left[ s \right] = \sum_r F\left[r\right] X\left[r - s\right] $$
for each sample. | import scipy.ndimage
n_space = 101
n_sample = 50
np.random.seed(201)
x = np.linspace(x0, x1, n_space)
X = np.random.random((n_sample, n_space))
y = np.array([scipy.ndimage.convolve(xx, F(x), mode='wrap') for xx in X])
| notebooks/filter.ipynb | XinyiGong/pymks | mit |
For this problem, a basis is unnecessary as no discretization is required in order to reproduce the convolution with the MKS localization. Using the PrimitiveBasis with n_states=2 is the equivalent of a non-discretized convolution in space. | from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSLocalizationModel(basis=prim_basis)
| notebooks/filter.ipynb | XinyiGong/pymks | mit |
Fit the model using the data generated by $F$. | model.fit(X, y)
| notebooks/filter.ipynb | XinyiGong/pymks | mit |
To check for internal consistency, we can compare the predicted
output with the original for a few values | y_pred = model.predict(X)
print y[0, :4]
print y_pred[0, :4]
| notebooks/filter.ipynb | XinyiGong/pymks | mit |
With a slight linear manipulation of the coefficients, they agree perfectly with the shape of the filter, $F$. | plt.plot(x, F(x), label=r'$F$', color='#1a9850')
plt.plot(x, -model.coeff[:,0] + model.coeff[:, 1],
'k--', label=r'$\alpha$')
l = plt.legend() | notebooks/filter.ipynb | XinyiGong/pymks | mit |
Verify tables exist
Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them. | %%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0 | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Lab Task #1: Model 4: Increase complexity of model using DNN_REGRESSOR
DNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs.
MODEL_TYPE="DNN_REGRESSOR"
hidden_units: List of hidden units per layer; all layers are fully connected. The number of elements in the array is the number of hidden layers. The default value for hidden_units is [min(128, N / (α(Ni + No)))] (one hidden layer), where N is the training data size and Ni, No are the numbers of input and output units, respectively; α is a constant with value 10. This upper bound helps keep the model from overfitting. Note that there is currently a model size limit of 256 MB.
dropout: Probability of dropping a given coordinate during training; dropout is a very common technique to avoid overfitting in DNNs. The default value is zero, which means no coordinates are dropped during training.
batch_size: Number of samples served to train the network in each sub-iteration. The default value is min(1024, num_examples), to balance training speed and convergence. Serving all training data in each sub-iteration may lead to convergence issues and is not advised.
Create DNN_REGRESSOR model
Change model type to use DNN_REGRESSOR, add a list of integer HIDDEN_UNITS, and add an integer BATCH_SIZE.
* Hint: Create a model_4. | %%bigquery
CREATE OR REPLACE MODEL
babyweight.model_4
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
FROM
babyweight.babyweight_data_train | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
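For reference, one possible way the TODOs above might be filled in is sketched here. This is not the official lab solution: the hidden unit sizes and batch size are arbitrary illustrative choices, and the SQL is submitted through the google.cloud.bigquery Python client, which is equivalent to running the %%bigquery cell.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative completion of the TODOs: DNN options plus the base features and label.
sql = """
CREATE OR REPLACE MODEL babyweight.model_4
OPTIONS (
    MODEL_TYPE="DNN_REGRESSOR",
    HIDDEN_UNITS=[64, 32],               -- arbitrary example architecture
    BATCH_SIZE=32,                       -- arbitrary example batch size
    INPUT_LABEL_COLS=["weight_pounds"],
    DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    babyweight.babyweight_data_train
"""
client.query(sql).result()  # blocks until the training job finishes
```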
Get training information and evaluate
Let's first look at our training statistics. | %%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Now let's evaluate our trained model on our eval dataset. | %%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
)) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Let's use our evaluation's mean_squared_error to calculate our model's RMSE. | %%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
)) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Lab Task #2: Final Model: Apply the TRANSFORM clause
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries.
Let's apply the TRANSFORM clause to the final model and run the query. | %%bigquery
CREATE OR REPLACE MODEL
babyweight.final_model
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
# TODO: Add FEATURE CROSS of:
# is_male, bucketed_mother_age, plurality, and bucketed_gestation_weeks
OPTIONS (
# TODO: Add DNN options
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Let's first look at our training statistics. | %%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Now let's evaluate our trained model on our eval dataset. | %%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
)) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Let's use our evaluation's mean_squared_error to calculate our model's RMSE. | %%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
)) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Lab Task #3: Predict with final model.
Now that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using the BigQuery ML.PREDICT function.
Predict from final model using an example from original dataset | %%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from original dataset
)) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Modify above prediction query using example from simulated dataset
Use the feature values you made up above, but set is_male to "Unknown" and plurality to "Multiple(2+)". This simulates not knowing the baby's gender or the exact plurality. One possible version of the query is sketched after the cell below. | %%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
# TODO Add base features example from simulated dataset
)) | courses/machine_learning/deepdive2/structured/labs/3c_bqml_dnn_babyweight.ipynb | turbomanage/training-data-analyst | apache-2.0 |
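A possible way to fill in the TODO above is sketched here. Only the "Unknown" and "Multiple(2+)" settings come from the instructions; the mother_age and gestation_weeks values are invented for illustration. As before, the SQL is submitted through the google.cloud.bigquery client, equivalent to the %%bigquery cell.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical example row for the simulated case.
sql = """
SELECT
    *
FROM
    ML.PREDICT(MODEL babyweight.final_model,
    (
    SELECT
        "Unknown" AS is_male,
        28 AS mother_age,          -- illustrative value
        "Multiple(2+)" AS plurality,
        38 AS gestation_weeks      -- illustrative value
    ))
"""
print(client.query(sql).to_dataframe())
```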
Pipeline
Constants | # Required Parameters
project_id = '<ADD GCP PROJECT HERE>'
output = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash
# Optional Parameters
REGION = 'us-central1'
RUNTIME_VERSION = '1.13'
PACKAGE_URIS=json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])
TRAINER_OUTPUT_GCS_PATH = output + '/train/output/' + str(int(time.time())) + '/'
DATA_GCS_PATH = output + '/reports.csv'
PYTHON_MODULE = 'trainer.task'
PIPELINE_NAME = 'Chicago Crime Prediction'
PIPELINE_FILENAME_PREFIX = 'chicago'
PIPELINE_DESCRIPTION = ''
MODEL_NAME = 'chicago_pipeline_model' + str(int(time.time()))
MODEL_VERSION = 'chicago_pipeline_model_v1' + str(int(time.time())) | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Download data
Define a download function that uses the BigQuery component | bigquery_query_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')
QUERY = """
SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day
FROM `bigquery-public-data.chicago_crime.crime`
GROUP BY day
ORDER BY day
"""
def download(project_id, data_gcs_path):
return bigquery_query_op(
query=QUERY,
project_id=project_id,
output_gcs_path=data_gcs_path
) | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Train the model
Run training code that will pre-process the data and then submit a training job to the AI Platform. | mlengine_train_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/train/component.yaml')
def train(project_id,
trainer_args,
package_uris,
trainer_output_gcs_path,
gcs_working_dir,
region,
python_module,
runtime_version):
return mlengine_train_op(
project_id=project_id,
python_module=python_module,
package_uris=package_uris,
region=region,
args=trainer_args,
job_dir=trainer_output_gcs_path,
runtime_version=runtime_version
) | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Deploy model
Deploy the model with the ID given from the training step | mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')
def deploy(
project_id,
model_uri,
model_id,
model_version,
runtime_version):
return mlengine_deploy_op(
model_uri=model_uri,
project_id=project_id,
model_id=model_id,
version_id=model_version,
runtime_version=runtime_version,
replace_existing_version=True,
set_default=True) | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Define pipeline | @dsl.pipeline(
name=PIPELINE_NAME,
description=PIPELINE_DESCRIPTION
)
def pipeline(
data_gcs_path=DATA_GCS_PATH,
gcs_working_dir=output,
project_id=project_id,
python_module=PYTHON_MODULE,
region=REGION,
runtime_version=RUNTIME_VERSION,
package_uris=PACKAGE_URIS,
trainer_output_gcs_path=TRAINER_OUTPUT_GCS_PATH,
):
download_task = download(project_id,
data_gcs_path)
train_task = train(project_id,
json.dumps(
['--data-file-url',
'%s' % download_task.outputs['output_gcs_path'],
'--job-dir',
output]
),
package_uris,
trainer_output_gcs_path,
gcs_working_dir,
region,
python_module,
runtime_version)
deploy_task = deploy(project_id,
train_task.outputs['job_dir'],
MODEL_NAME,
MODEL_VERSION,
runtime_version)
return True
# Reference for invocation later
pipeline_func = pipeline | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Submit the pipeline for execution | pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.x. if using AI Platform Notebooks)
# pipeline = kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(pipeline, arguments={}) | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Wait for the pipeline to finish | run_detail = pipeline.wait_for_run_completion(timeout=1800)
print(run_detail.run.status) | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Use the deployed model to predict (online prediction) | import os
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Create normalized input representing 14 days prior to prediction day. | %%writefile test.json
{"lstm_input": [[-1.24344569, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387, -0.71910112, -0.86641698, -0.91635456, -1.04868914, -1.01373283, -0.7690387 , -0.90387016]]}
!gcloud ai-platform predict --model=$MODEL_NAME --version=$MODEL_VERSION --json-instances=test.json | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Examine cloud services invoked by the pipeline
BigQuery query: https://console.cloud.google.com/bigquery?page=queries (click on 'Project History')
AI Platform training job: https://console.cloud.google.com/ai-platform/jobs
AI Platform model serving: https://console.cloud.google.com/ai-platform/models
Clean models | # !gcloud ai-platform versions delete $MODEL_VERSION --model $MODEL_NAME
# !gcloud ai-platform models delete $MODEL_NAME | samples/core/ai_platform/ai_platform.ipynb | kubeflow/kfp-tekton-backend | apache-2.0 |
Import all python modules that we'll need: | import phoebe
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Pull a set of Sun-like emergent intensities as a function of $\mu = \cos \theta$ from the Castelli and Kurucz database of model atmospheres (the necessary file can be downloaded from here): | wl = np.arange(900., 39999.501, 0.5)/1e10
with fits.open('T06000G40P00.fits') as hdu:
Imu = 1e7*hdu[0].data | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Grab only the normal component for testing purposes: | Inorm = Imu[-1,:] | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Now let's load a Johnson V passband and the transmission function $P(\lambda)$ contained within: | pb = phoebe.get_passband('Johnson:V') | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Tessellate the wavelength interval to the range covered by the passband: | keep = (wl >= pb.ptf_table['wl'][0]) & (wl <= pb.ptf_table['wl'][-1])
Inorm = Inorm[keep]
wl = wl[keep] | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Calculate $S(\lambda) P(\lambda)$ and plot it, to make sure everything so far makes sense: | plt.plot(wl, Inorm*pb.ptf(wl), 'b-')
plt.show() | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Now let's compute the term $\mathrm{d}(\mathrm{ln}\, I_\lambda) / \mathrm{d}(\mathrm{ln}\, \lambda)$. First we will compute $\mathrm{ln}\,\lambda$ and $\mathrm{ln}\,I_\lambda$ and plot them: | lnwl = np.log(wl)
lnI = np.log(Inorm)
plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.show() | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
Per equation above, $B(\lambda)$ is then the slope of this curve (plus 5). Herein lies the problem: what part of this graph do we fit a line to? In versions 2 and 2.1, PHOEBE used a 5th order Legendre polynomial to fit the spectrum and then sigma-clipping to get to the continuum. Finally, it computed an average derivative of that Legendrian and proclaimed that $B(\lambda)$. The order of the Legendre polynomial and the values of sigma for sigma-clipping have been set ad-hoc and kept fixed for every single spectrum. | envelope = np.polynomial.legendre.legfit(lnwl, lnI, 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI-continuum
sigma = np.std(diff)
clipped = (diff > -sigma)
while True:
Npts = clipped.sum()
envelope = np.polynomial.legendre.legfit(lnwl[clipped], lnI[clipped], 5)
continuum = np.polynomial.legendre.legval(lnwl, envelope)
diff = lnI-continuum
clipped = clipped & (diff > -sigma)
if clipped.sum() == Npts:
break
plt.xlabel(r'$\mathrm{ln}\,\lambda$')
plt.ylabel(r'$\mathrm{ln}\,I_\lambda$')
plt.plot(lnwl, lnI, 'b-')
plt.plot(lnwl, continuum, 'r-')
plt.show() | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
It is clear that there are fairly strong systematics here that we are sweeping under the rug. Thus, we need to revise the way we compute the spectral index and make it robust before we claim that we support boosting.
For fun, this is what would happen if we tried to estimate $B(\lambda)$ at each $\lambda$: | dlnwl = lnwl[1:]-lnwl[:-1]
dlnI = lnI[1:]-lnI[:-1]
B = dlnI/dlnwl
plt.plot(0.5*(wl[1:]+wl[:-1]), B, 'b-')
plt.show() | development/tutorials/beaming_boosting.ipynb | phoebe-project/phoebe2-docs | gpl-3.0 |
General terminology and notations
The notation here follows Chapter 2 of the deep learning online book. Note that different sources use different notations, all of which refer to the same mathematical constructs.
We consider $L$ layers marked by $l=1,\ldots,L$ where $l=1$ denotes the input layer
Layer $l$ has $s_l$ units referred to by $a^{l}_j, j = 1,\ldots,s_l$.
The matrix $W^l$ of size $s_{l} \times s_{l-1}$, $l=2,\ldots,L$, controls the mapping from layer $l-1$ to layer $l$. The vector $\mathbf{b}^l$ of size $s_{l}$ is the bias term of layer $l$. The weight $w^l_{ij}$ is associated with the connection from neuron $j$ in layer $l-1$ to neuron $i$ in layer $l$.
Forward propagation: $\mathbf{a}^l = \sigma(\mathbf{z}^l)$ where $\mathbf{z}^l = W^l \mathbf{a}^{l-1} + \mathbf{b}^l$, $l = 2,\ldots,L$, and the activation function $\sigma \equiv \sigma_l$ is applied to each component of its argument vector. For simplicity of notation we often write $\sigma$ instead of $\sigma_l$. A minimal code sketch of this recursion is given below.
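The following is a small, self-contained numpy sketch of the forward-propagation recursion just described. The layer sizes, random initialization and choice of the logistic sigmoid are arbitrary, purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(a1, weights, biases):
    """Apply a^l = sigma(W^l a^(l-1) + b^l) for l = 2, ..., L and return a^L."""
    a = a1
    for W, b in zip(weights, biases):
        a = sigmoid(W.dot(a) + b)
    return a

# Example with layer sizes s = [3, 4, 2], i.e. L = 3 layers (sizes chosen arbitrarily).
np.random.seed(0)
sizes = [3, 4, 2]
weights = [np.random.randn(s_out, s_in) for s_in, s_out in zip(sizes[:-1], sizes[1:])]
biases = [np.random.randn(s_out) for s_out in sizes[1:]]

print(forward(np.array([1.0, 0.0, 0.0]), weights, biases))
```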
Synonyms
Neuron - inspired by the analogy with biology
Unit - It’s one component of a large network
Feature - It implements a feature detector that’s looking at the input and will turn on iff the sought feature is present in the input
A note about dot product
The activation function operates on the outcome of the dot product. One way to think about this is in terms of correlations, which are normalized dot products: what we really measure is the degree of correlation, or dependence, between the input vector and the weight vector. We can view the dot product as (see the short sketch after this list):
* A correlation filter - fires if the correlation between the input and the weights exceeds a threshold
* A feature detector - detects whether a specific pattern occurs in the input
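A small numpy illustration of this view, with made-up vectors and threshold: normalizing the dot product by the vector norms gives a cosine similarity in $[-1, 1]$, and thresholding it turns the unit into a simple pattern detector.

```python
import numpy as np

def cosine_similarity(x, w):
    # normalized dot product, i.e. the correlation-like quantity discussed above
    return x.dot(w) / (np.linalg.norm(x) * np.linalg.norm(w))

pattern = np.array([1.0, -1.0, 1.0])   # the "feature" the weights are tuned to
x_match = np.array([0.9, -1.1, 1.0])   # input containing the pattern
x_other = np.array([1.0, 1.0, 1.0])    # input without the pattern

threshold = 0.8                        # arbitrary detection threshold
for x in (x_match, x_other):
    c = cosine_similarity(x, pattern)
    print(c, "-> fires" if c > threshold else "-> silent")
```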
Output unit
The output values are computed by similarly multiplying the values of the last hidden layer, $\mathbf{a}^{L-1}$, by another weight matrix:
$\mathbf{a}^L = \sigma_L(\mathbf{W^L} \cdot \mathbf{a}^{(L-1)} + \mathbf{b}^L) = \sigma_L(\mathbf{z^L}) $
Linear regression network
Defined when $\sigma_L=I$. In that case the output is in a form suitable for linear regression.
Softmax function
For classification problems we want to convert the outputs into probabilities. This is achieved by using the softmax function.
$\sigma_L(\mathbf{z}^L) = \frac{1}{\alpha}e^{\mathbf{z}^L}$ where $\alpha = \sum_i e^{z^L_i}$, which produces an output vector $\mathbf{a}^L \equiv \mathbf{y} = (y_1,\ldots,y_{s_L})$ with $y_i = \frac{e^{z^L_i}}{\sum_j e^{z^L_j}} , i = 1,\ldots, s_L$
The element $y_i$ is the probability that the label of the output is $i$. This is the same expression used by logistic regression for multi-class classification. The label $i^*$ that corresponds to a given input vector $\mathbf{a}^1$ is selected as the index $i^*$ for which $y_i$ is maximal.
Popular types of activation functions
Example of some popular activation functions:
* Sigmoid: Transfor inner product into an S shaped curve. There are several popular alternatives for a Sigmoid activation function:
* The logistic function: $\sigma(z) = \frac{1}{1+ e^{-z}}$ has values in [0,1] and thus can be interpreted as a probability.
* Hyperbolic tangent: $\sigma(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$ with values in $(-1,1)$
* [Rectifier](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)): $\sigma(z) = \max(0,z)$. A unit that uses a rectifier function is called a rectified linear unit (ReLU).
* [Softplus](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)): $\sigma(z) = \ln (1+e^z)$ is a smooth approximation to the rectifier function.
Synonyms for the term "unit activation"
Unit's value: View it as a function of the input
Activation: Emphasizes that the unit may be responding or not, or to an extent; it’s most appropriate for logistic units
Output
Python example : some activation functions | def sigmoid(x):
return 1./(1.+np.exp(-x))
def rectifier(x):
return np.array([max(xv,0.0) for xv in x])
def softplus(x):
return np.log(1.0 + np.exp(x))
x = np.array([1.0,0,0])
w = np.array([0.2,-0.03,0.14])
print ' Scalar product between unit and weights ',x.dot(w)
print ' Values of Sigmoid activation function ',sigmoid(x.dot(w))
print ' Values of tanh activation function ',np.tanh(x.dot(w))
print ' Values of softplus activation function ',softplus(x.dot(w))
import pylab
z = np.linspace(-2,2,100) # 100 linearly spaced numbers
s = sigmoid(z)     # computing the values of the sigmoid
th = np.tanh(z)    # computing the values of tanh
re = rectifier(z)  # computing the values of the rectifier
sp = softplus(z)   # computing the values of softplus
# compose plot
pylab.plot(z,s)
pylab.plot(z,s,'co',label='Sigmoid') # Sigmoid
pylab.plot(z,th,label='tanh') # tanh
pylab.plot(z,re,label='rectifier') # rectifier
pylab.plot(z,sp,label='softplus') # softplus
pylab.legend()
pylab.show() # show the plot | NN playground.ipynb | srippa/nn_deep | mit |
Python example : Simple feed forward classification NN | def softmax(z):
alpha = np.sum(np.exp(z))
return np.exp(z)/alpha
# Input
a0 = np.array([1.,0,0])
# First layer
W1 = np.array([[0.2,0.15,-0.01],[0.01,-0.1,-0.06],[0.14,-0.2,-0.03]])
b1 = np.array([1.,1.,1.])
z1 = W1.dot(a0) + b1
a1 = np.tanh(z1)
# Output layer
W2 = np.array([[0.08,0.11,-0.3],[0.1,-0.15,0.08],[0.1,0.1,-0.07]])
b2 = np.array([0.,1.,0.])
z2 = W2.dot(a1) + b2
a2 = y = softmax(z2)
imax = np.argmax(y)
print ' z1 ',z1
print ' a1 ',np.tanh(z1)
print ' z2 ',z2
print ' y ',y
print ' Input vector {0} is classified to label {1} '.format(a0,imax)
print '\n'
for i in [0,1,2]:
print 'The probablity for classifying to label ',i,' is ',y[i] | NN playground.ipynb | srippa/nn_deep | mit |
Cost (or error) functions
Suppose that the expected output for an input vector ${\bf x} \equiv {\bf a^1}$ is ${\bf y} = {\bf y_x}^\ast = (0,1,0)$; we can then compute the error vector ${\bf e}= {\bf e_x}= {\bf a_x}^L-{\bf y_x}^\ast$. With this error we can compute a cost (also called loss) $C=C_x$ associated with the output ${\bf y_x}$ of the input vector ${\bf x}$. For convenience of notation we will frequently omit the subscript $x$.
Popular loss functions are:
* Absolute cost $C = C({\bf a}^L)=\sum_i |e_i|$
* Square cost $C= C({\bf a}^L) = \sum_i e_i^2$
* Cross entropy loss $C=C({\bf a}^L) = -\sum_i y_i^\ast\log{a^L_i} \equiv -\sum_i y_i^\ast\log{y_i}$. The rationale here is that the output of the softmax function is a probability distribution and we can also view the real label vector $y^\ast$ as a probability distribution (1 for the correct label and 0 for all other labels). The cross entropy function is a common way to measure the difference between distributions.
The total error from all $N$ data vectors is computed as the average of the individual error terms associated with each input vector ${\bf x}$, that is:$\frac{1}{N} \sum_x C_x$ | def abs_loss(e):
return np.sum(np.abs(e))
def sqr_loss(e):
return np.sum(e**2)
def cross_entropy_loss(y_estimated,y_real):
return -np.sum(y_real*np.log(y_estimated))
y_real = np.array([0.,1.,0])
err = a2 - y_real   # error vector: network output minus the expected (real) label vector
print ' Error ',err
print ' Absolute loss ',abs_loss(err)
print ' Square loss ',sqr_loss(err)
print ' Cross entropy loss ',cross_entropy_loss(a2,y_real)
| NN playground.ipynb | srippa/nn_deep | mit |
The Shyft Environment
This next step is highly specific on how and where you have installed Shyft. If you have followed the guidelines at github, and cloned the three shyft repositories: i) shyft, ii) shyft-data, and iii) shyft-doc, then you may need to tell jupyter notebooks where to find shyft. Uncomment the relevant lines below.
If you have a 'system' shyft, or used conda install -c sigbjorn shyft to install shyft, then you probably will want to make sure you have set the SHYFT_DATA directory correctly, as otherwise Shyft will assume the above structure and fail. This has to be done before import shyft. In that case, uncomment the relevant lines below.
note: it is most likely that you'll need to do one or the other. | # try to auto-configure the path, -will work in all cases where doc and data
# are checked out at same level
shyft_data_path = path.abspath("../../../shyft-data")
if path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ:
os.environ['SHYFT_DATA']=shyft_data_path
# shyft should be available either by it's install in python
# or by PYTHONPATH set by user prior to starting notebook.
# This is equivalent to the two lines below
# shyft_path=path.abspath('../../../shyft')
# sys.path.insert(0,shyft_path)
from shyft import api
import shyft
print(shyft.__path__) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
2. A Shyft simulation
The purpose of this notebook is to demonstrate setting up a Shyft simulation using existing repositories. Eventually, you will want to learn to write your own repositories, but once you understand what is presented herein, you'll be well on your way to working with Shyft.
If you prefer to take a high level approach, you can start by looking at the Run Nea Nidelva notebook. We recommend taking the time to understand the lower level functionality of Shyft, however, as it will be of value later if you want to use your own data and create your own repositories.
Orchestration and Repositories
A core philosophy of Shyft is that "Data should live at the source". What this means, is that we prefer datasets to either remain in their original format or even come directly from the data provider. To accomplish this, we use "repositories". You can read more about repositories at the Shyft Documentation.
Interfaces
Because it is our hope that users will create their own repositories to meet the specifications of their own datasets, we provide 'interfaces'. This is a programming concept that you may not be familiar with. The idea is that it is a basic example, or template, of how the class should work. You can use these and your own class can inherit from them, allowing you to override methods to meet your own specifications. We'll explore this as we move through this tutorial. A nice explanation of interfaces with python is available here.
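To make the idea concrete, here is a minimal, hypothetical sketch of what an interface and a user-written implementation can look like in Python (the class and method names below are invented for illustration; Shyft's real templates live in the shyft.repository.interfaces module):

from abc import ABC, abstractmethod

class RegionModelRepositoryInterface(ABC):
    """Template: any region-model repository must provide this method."""

    @abstractmethod
    def get_region_model(self, region_id):
        """Return a region model for the given identifier."""

class MyGISRepository(RegionModelRepositoryInterface):
    """A user-written repository only needs to fill in the abstract methods,
    reading from whatever source it likes (files, a database, a GIS service)."""

    def get_region_model(self, region_id):
        # ... read your own data source and build the region model here ...
        raise NotImplementedError("illustrative sketch only")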
Initial Configuration
What is required to set up a simulation? In the following we'll package some basic information into dictionaries that may be used to configure our simulation. We'll start by creating a couple of dictionaries that will be used to instantiate an existing repository class that was created for demonstration purposes, CFRegionModelRepository.
If it hasn't been said enough, there is a lot of functionality in the repositories! You can write a repository to suit your own use case, and it is encouraged to look at this source code. | # we need to import the repository to use it in a dictionary:
from shyft.repository.netcdf.cf_region_model_repository import CFRegionModelRepository | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
region specification
The first dictionary essentially establishes the domain of the simulation. We also specify a repository that is used to read the data that will provide Shyft a region_model (discussed below), based on geographic data. The geographic data consists of properties of the catchment, e.g. "forest fraction", "lake fraction", etc. | # next, create the simulation dictionary
RegionDict = {'region_model_id': 'demo', #a unique name identifier of the simulation
'domain': {'EPSG': 32633,
'nx': 400,
'ny': 80,
'step_x': 1000,
'step_y': 1000,
'lower_left_x': 100000,
'lower_left_y': 6960000},
'repository': {'class': shyft.repository.netcdf.cf_region_model_repository.CFRegionModelRepository,
'params': {'data_file': 'netcdf/orchestration-testdata/cell_data.nc'}},
} | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
The first keys are probably quite clear (note that the time-related keys are configured in a separate dictionary further below):
start_datetime: a string in the format: "2013-09-01T00:00:00"
run_time_step: an integer representing the time step of the simulation (in seconds), so for a daily step: 86400
number_of_steps: an integer for how long the simulation should run: 365 (for a year long simulation)
region_model_id: a string to name the simulation: 'neanidelva-ptgsk'
We also need to know where the simulation is taking place. This information is contained in the domain:
EPSG: an EPSG string to identify the coordinate system
nx: number of 'cells' in the x direction
ny: number of 'cells' in the y direction
step_x: size of cell in x direction (m)
step_y: size of cell in y direction (m)
lower_left_x: where (x) in the EPSG system the cells begin
lower_left_y: where (y) in the EPSG system the cells begin
repository: a repository that can read the file containing data for the cells (in this case it will read a netcdf file)
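To make the domain keys concrete, here is a small sketch (plain Python on the RegionDict defined above, added for illustration) that derives the grid extent and cell count from these values:

dom = RegionDict['domain']
n_cells = dom['nx'] * dom['ny']
upper_right_x = dom['lower_left_x'] + dom['nx'] * dom['step_x']
upper_right_y = dom['lower_left_y'] + dom['ny'] * dom['step_y']
print('EPSG:{0}, {1} cells of {2} m x {3} m'.format(dom['EPSG'], n_cells, dom['step_x'], dom['step_y']))
print('x extent: {0} to {1} m, y extent: {2} to {3} m'.format(
    dom['lower_left_x'], upper_right_x, dom['lower_left_y'], upper_right_y))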
Model specification
The next dictionary provides information about the model that we would like to use in Shyft, or the 'Model Stack' as it is generally referred to. In this case, we are going to use the PTGSK model, and the rest of the dictionary provides the parameter values. | ModelDict = {'model_t': shyft.api.pt_gs_k.PTGSKModel, # model to construct
'model_parameters': {
'ae':{
'ae_scale_factor': 1.5},
'gs':{
'calculate_iso_pot_energy': False,
'fast_albedo_decay_rate': 6.752787747748934,
'glacier_albedo': 0.4,
'initial_bare_ground_fraction': 0.04,
'max_albedo': 0.9,
'max_water': 0.1,
'min_albedo': 0.6,
'slow_albedo_decay_rate': 37.17325702015658,
'snow_cv': 0.4,
'tx': -0.5752881492890207,
'snowfall_reset_depth': 5.0,
'surface_magnitude': 30.0,
'wind_const': 1.0,
'wind_scale': 1.8959672005350063,
'winter_end_day_of_year': 100},
'kirchner':{
'c1': -3.336197322290274,
'c2': 0.33433661533385695,
'c3': -0.12503959620315988},
'p_corr': {
'scale_factor': 1.0},
'pt':{'albedo': 0.2,
'alpha': 1.26},
}
} | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
In this dictionary we define two variables:
model_t: the shyft 'model stack' class to construct (referenced directly as a class, not as an import path string)
model_parameters: a dictionary containing specific parameter values for a particular model class
Specifics of the model_parameters dictionary will vary based on which class is used.
Okay, so far we have two dictionaries. One which provides information regarding our simulation domain, and a second which provides information on the model that we wish to run over the domain (e.g. in each of the cells). The next step, then, is to map these together and create a region_repo class.
This is achieved by using a repository, in this case, the CFRegionModelRepository we imported above. | region_repo = CFRegionModelRepository(RegionDict, ModelDict) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
The region_model
<div class="alert alert-info">
**TODO:** a notebook documenting the CFRegionModelRepository
</div>
The first step in conducting a hydrologic simulation is to define the domain of the simulation and the model type which we would like to simulate. To do this we create a region_model object. Above we created dictionaries that contain this information, and we instantiated a class called the region_repo. In this next step, we put it together so that we have a single object which we can work with "at our fingertips". You'll note above that we have pointed to a 'data_file' earlier when we defined the RegionDict. This data file contains all the required elements to fill the cells of our domain. The information is contained in a single netcdf file.
Before we go further, let's look briefly at the contents of this file: | from netCDF4 import Dataset  # needed below; it may already be imported earlier in the notebook
cell_data_file = os.path.join(os.environ['SHYFT_DATA'], 'netcdf/orchestration-testdata/cell_data.nc')
cell_data = Dataset(cell_data_file)
print(cell_data) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
You might be surprised to see the dimensions are 'cells', but recall that in Shyft everything is vectorized. Each 'cell' is an element within a domain, and each cell has associated variables:
location: x, y, z
characteristics: forest-fraction, reservoir-fraction, lake-fraction, glacier-fraction, catchment-id
We'll bring this data into our workspace via the region_model. Note that we have instantiated a region_repo class using one of the existing Shyft repositories, in this case one that was built for reading in the data as it is contained in the example shyft-data netcdf files: CFRegionModelRepository.
Next, we'll use the region_repo.get_region_model method to get the region_model. Note that the name 'demo' in this case is arbitrary. However, depending on how you create your repository, you can specify which region model to return using this string.
<div class="alert alert-info">
**note:** *you are strongly encouraged to learn how to create repositories. This particular repository is just for demonstration purposes. In practice, one may use a repository that connects directly to a GIS service, a database, or some other data sets that contain the data required for simulations.*
<div class="alert alert-warning">
**warning**: *also, please note that below we call the 'get_region_model' method as we instantiate the class. This behavior may change in the future.*
</div>
</div> | region_model = region_repo.get_region_model('demo') | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Exploring the region_model
So we now have created a region_model, but what is it actually? This is a very fundamental class in Shyft. It is actually one of the "model stacks", such as 'PTGSK', or 'PTHSK'. Essentially, the region_model contains all the information regarding the simulation type and domain. There are many methods associated with the region_model and it will take time to understand all of them. For now, let's just explore a few key methods:
bounding_region: provides information regarding the domain of interest for the simulation
catchment_id_map: indices of the various catchments within the domain
cells: an instance of PTGSKCellAllVector that holds the individual cells for the simulation (note that this is type-specific to the model type)
ncore: an integer that sets the numbers of cores to use during simulation (Shyft is very greedy if you let it!)
time_axis: a shyft.api.TimeAxisFixedDeltaT class (basically contains information regarding the timing of the simulation)
Keep in mind that many of these methods are more 'C++'-like than 'Pythonic'. This means, that in some cases, you'll have to 'call' the method. For example: region_model.bounding_region.epsg() returns a string. You can use tab-completion to explore the region_model further: | region_model.bounding_region.epsg() | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
You'll likely note that there are a number of intriguing functions, e.g. initialize_cell_environment or interpolate. But before we can go further, we need some more information. Perhaps you are wondering about forcing data. So far, we haven't said anything about model input or the time of the simulation; we've only set up a container that holds all the domain and model type information about our simulation.
Still, we have made some progress. Let's look for instance at the cells: | cell_0 = region_model.cells[0]
print(cell_0.geo) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
So you can see that, so far, each of the cells in the region_model contains information regarding its LandTypeFractions, geolocation, catchment_id, and area.
A particularly important attribute is region_model.region_env. This is a container for each cell that holds the "environmental timeseries", or forcing data, for the simulation. By "tabbing" from cell. you can see that each cell also has an env_ts attribute. These are containers customized to provide timeseries as required by the model type we selected, but there is no data yet. In this case we used the PTGSKModel (see the ModelDict). So for every cell in your simulation, there is a container prepared to accept the forcing data, as the next cell shows. | #just so we don't see 'private' attributes
print([d for d in dir(cell_0.env_ts) if '_' not in d[0]])
region_model.size() | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Adding forcing data to the region_model
Clearly the next step is to add forcing data to our region_model object. Let's start by thinking about what kind of data we need. From above, where we looked at the env_ts attribute, it's clear that this particular model stack, PTGSKModel, requires:
precipitation
radiation
relative humidity (rel_hum)
temperature
wind speed
We have stored this information in separate netcdf files, each of which contains the observational series for a number of different stations.
<div class="alert alert-warning">
Again, these files **do not represent the recommended practice**, but are *only for demonstration purposes*. The idea here is just to demonstrate with an example repository, but *you should create your own to match **your** data*.
</div>
Our goal now is to populate the region_env.
"Sources"
We use the term sources to define a location data may be coming from. You may also come across destinations. In both cases, it just means a file, database, service of some kind, etc. that is capable of providing data. Repositories are written to connect to sources. Following our earlier approach, we'll create another dictionary to define our data sources, but first we need to import another repository: | from shyft.repository.netcdf.cf_geo_ts_repository import CFDataRepository
ForcingData = {'sources': [
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'filename': 'netcdf/orchestration-testdata/precipitation.nc'},
'types': ['precipitation']},
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'filename': 'netcdf/orchestration-testdata/temperature.nc'},
'types': ['temperature']},
{'params': {'epsg': 32633,
'filename': 'netcdf/orchestration-testdata/wind_speed.nc'},
'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'types': ['wind_speed']},
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'filename': 'netcdf/orchestration-testdata/relative_humidity.nc'},
'types': ['relative_humidity']},
{'repository': shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository,
'params': {'epsg': 32633,
'filename': 'netcdf/orchestration-testdata/radiation.nc'},
'types': ['radiation']}]
}
| notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Data Repositories
In another notebook, further information will be provided regarding the repositories. For the time being, let's look at this configuration dictionary that was created. It essentially just contains a list, keyed by the name "sources". This key is known in some of the tools that are built in the Shyft orchestration, so it is recommended to use it.
Each item in the list is a dictionary for each of the source types, the keys in the dictionaries are: repository, params, and types. The general idea and concept is that in orchestration, the object keyed by repository is a class that is instantiated by passing the objects contained in params.
Let's repeat that. From our Datasets dictionary, we get a list of "sources". Each of these sources contains a class (a repository) that is capable of getting the source data into Shyft. Whatever parameters that are required for the class to work, will be included in the "sources" dictionary. In our case, the params are quite simple, just a path to a netcdf file. But suppose our repository required credentials or other information for a database? This information could also be included in the params stanza of the dictionary.
You should explore the above referenced netcdf files that are available at the shyft-data git repository. These files contain the forcing data that will be used in the example simulation. Each one contains observational data from some stations in our catchment. Depending on how you write your repository, this data may be provided to Shyft in many different formats.
Let's explore this concept further by getting the 'temperature' data: | # get the temperature sources:
tmp_sources = [source for source in ForcingData['sources'] if 'temperature' in source['types']]
# in this example there is only one
t0 = tmp_sources[0]
# We will now instantiate the repository with the parameters that are provided
# in the dictionary.
# Note the 'call' structure expects params to contain keyword arguments, and these
# can be anything you want depending on how you create your repository
tmp_repo = t0['repository'](**t0['params'])
| notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
tmp_repo is now an instance of the Shyft CFDataRepository, and this will provide Shyft with the data when it sets up a simulation by reading the data directly out of the file referenced in the 'source'. But that is just one repository, and we defined many in fact. Furthermore, you may have a heterogeneous collection of data sources -- if for example you want to get your temperature from station data, but radiation from model output. You could define different repositories in the ForcingData dictionary.
Ultimately, we bundle all these repositories up into a new class called a GeoTsRepositoryCollection that we can use to populate the region_model.region_env with data. | # we'll actually create a collection of repositories, as we have different input types.
from shyft.repository.geo_ts_repository_collection import GeoTsRepositoryCollection
def construct_geots_repo(datasets_config, epsg=None):
""" iterates over the different sources that are provided
and prepares the repository to read the data for each type"""
geo_ts_repos = []
src_types_to_extract = []
for source in datasets_config['sources']:
if epsg is not None:
source['params'].update({'epsg': epsg})
# note that here we are instantiating the different source repositories
# to place in the geo_ts list
geo_ts_repos.append(source['repository'](**source['params']))
src_types_to_extract.append(source['types'])
return GeoTsRepositoryCollection(geo_ts_repos, src_types_per_repo=src_types_to_extract)
# instantiate the repository
geots_repo = construct_geots_repo(ForcingData) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
geots_repo is now a "geographic timeseries repository", meaning that the timeseries it holds are spatially aware of their x,y,z coordinates (see CFDataRepository for details). It also has several methods. One in particular we are interested in is the get_timeseries method. However, before we can proceed, we need to define the period for the simulation.
Shyft TimeAxis
Time in Shyft is handled with specialized C++ types for computational efficiency. These are custom built objects that are 'calendar' aware. But since in python, most like to use datetime objects, we create a function: | # next, create the simulation dictionary
TimeDict = {'start_datetime': "2013-09-01T00:00:00",
'run_time_step': 86400, # seconds, daily
'number_of_steps': 360 # ~ one year
}
def time_axis_from_dict(t_dict)->api.TimeAxis:
utc = api.Calendar()
sim_start = dt.datetime.strptime(t_dict['start_datetime'], "%Y-%m-%dT%H:%M:%S")
utc_start = utc.time(sim_start.year, sim_start.month, sim_start.day,\
sim_start.hour, sim_start.minute, sim_start.second)
tstep = t_dict['run_time_step']
nstep = t_dict['number_of_steps']
time_axis = api.TimeAxis(utc_start, tstep, nstep)
return time_axis
ta_1 = time_axis_from_dict(TimeDict)
print(f'1. {ta_1} \n {ta_1.total_period()}')
# or shyft-wise, ready tested, precise and less effort, two lines
utc = api.Calendar() # 'Europe/Oslo' can be passed to calendar for time-zone
ta_2 = api.TimeAxis(utc.time(2013, 9, 1), api.deltahours(24), 365)
print(f'2. {ta_2} \n {ta_2.total_period()}') | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
We now have an object that defines the time dimension for the simulation, and we will use this to initialize the region_model with the "environmental timeseries" or env_ts data. These containers will be given data from the appropriate repositories using the get_timeseries function. Following the templates in the shyft.repository.interfaces module, you'll see that the repositories should provide the capability to "screen" data based on time criteria and optionally geo_location criteria. | # we can extract our "bounding box" based on the `region_model` we set up
bbox = region_model.bounding_region.bounding_box(region_model.bounding_region.epsg())
period = ta_1.total_period() #just defined above
# required forcing data sets we want to retrieve
geo_ts_names = ("temperature", "wind_speed", "precipitation",
"relative_humidity", "radiation")
sources = geots_repo.get_timeseries( geo_ts_names, period) #, geo_location_criteria=bbox ) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Now we have a new dictionary, called 'sources' that contains specialized Shyft api types specific to each forcing data type. You can look at one for example: | prec = sources['precipitation']
print(len(prec))
| notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting.
Let's plot the precip of each of the sources: | fig, ax = plt.subplots(figsize=(15,10))
for pr in prec:
t,p = [dt.datetime.utcfromtimestamp(t_.start) for t_ in pr.ts.time_axis], pr.ts.values
ax.plot(t,p, label=pr.mid_point().x) #uid is empty now, but we reserve for later use
fig.autofmt_xdate()
ax.legend(title="Precipitation Input Sources")
ax.set_ylabel("precip[mm/hr]") | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Finally, the next step will take the data from the sources and connect it to our region_model.region_env class: | def get_region_environment(sources):
region_env = api.ARegionEnvironment()
region_env.temperature = sources["temperature"]
region_env.precipitation = sources["precipitation"]
region_env.radiation = sources["radiation"]
region_env.wind_speed = sources["wind_speed"]
region_env.rel_hum = sources["relative_humidity"]
return region_env
region_model.region_env = get_region_environment(sources) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
And now our forcing data is connected to the region_model. We are almost ready to run a simulation. There is just one more step. We've connected the sources to the model, but remember that Shyft is a distributed modeling framework, and we've connected point data sources (in this case). So we need to get the data from the observed points to each cell. This is done through interpolation.
Shyft Interpolation
In Shyft there are predefined routines for interpolation. In the interp_config class below one quickly recognizes the same input source type keywords that are used as keys to the params dictionary. params is simply a dictionary of dictionaries which contains the parameters used by the interpolation model that is specific for each source type. | from shyft.repository.interpolation_parameter_repository import InterpolationParameterRepository
class interp_config(object):
""" a simple class to provide the interpolation parameters """
def __init__(self):
self.interp_params = {'precipitation': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10,
'scale_factor': 1.02}},
'radiation': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10}},
'relative_humidity': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10}},
'temperature': {'method': 'btk',
'params': {'nug': 0.5,
'range': 200000.0,
'sill': 25.0,
'temperature_gradient': -0.6,
'temperature_gradient_sd': 0.25,
'zscale': 20.0}},
'wind_speed': {'method': 'idw',
'params': {'distance_measure_factor': 1.0,
'max_distance': 600000.0,
'max_members': 10}}}
def interpolation_parameters(self):
return self.interp_params
ip_conf = interp_config()
ip_repo = InterpolationParameterRepository(ip_conf)
region_model.interpolation_parameter = ip_repo.get_parameters(0) #just a '0' for now | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
The next step is to set the initial states of the model using our last repository. This one, the GeneratedStateRepository, will set empty default values.
Now we are nearly ready to conduct a simulation. We just need to run a few methods to prepare the model and cells for the simulation. The region_model has a method called initialize_cell_environment that takes a time_axis type as input. We defined the time_axis above, so now we'll use it to initialize the model. At the same time, we'll set the initial_state. Then we can actually run a simulation! | from shyft.repository.generated_state_repository import GeneratedStateRepository
init_values = {'gs': {'acc_melt': 0.0,
'albedo': 0.65,
'alpha': 6.25,
'iso_pot_energy': 0.0,
'lwc': 0.1,
'sdc_melt_mean': 0.0,
'surface_heat': 30000.0,
'temp_swe': 0.0},
'kirchner': {'q': 0.01}}
state_generator = GeneratedStateRepository(region_model)#, init_values=init_values)
# we need the state_repository to have the same size as the model
#state_repo.n = region_model.size()
# there is only 1 state (indexed '0')
s0 = state_generator.get_state(0)
not_applied_list=region_model.state.apply_state( # apply state set the current state according to arguments
cell_id_state_vector=s0, # ok, easy to get
cids=[] # empty means apply all, if we wanted to only apply state for specific catchment-ids, this is where to put them
)
assert len(not_applied_list)==0, 'Ensure all states was matched and applied to the model'
region_model.initial_state=region_model.current_state # now we stash the current state to the initial state | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Conduct the simulation
We now have a region_model that is ready for simulation. As we discussed before, we still need to get the data from our point observations interpolated to the cells, and we need to get the env_ts of each cell populated. But all the machinery is now in place to make this happen.
To summarize, we've created:
region_repo, a region repository that contains information related to region of simulation and the model to be used in the simulation. From this we get a region_model
geots_repo, a geo-timeseries repository that provides a mechanism to pull the data we require from our 'sources'.
time_axis, created from the TimeAxisFixedDeltaT class of shyft to provide the period of simulation.
ip_repo, an interpolation repository which provides all the required parameters for interpolating our data to the distributed cells -- following variable specific protocols/models.
state_repo, a GeneratedStateRepository used to provide our simulation an initial state.
The next step is simply to initialize the cell environment and run the interpolation. As good practice, before simulation we reset to the initial state (we're there already, but it is something you have to do before a new simulation), and then run the cells. First we'll initialize the cell environment: | region_model.initialize_cell_environment(ta_1) | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
As a habit, we have a quick "sanity check" function to see if the model is runnable. It is recommended to have this function when you create 'run scripts'. | def runnable(reg_mod):
""" returns True if model is properly configured
**note** this is specific depending on your model's input data requirements """
return all((reg_mod.initial_state.size() > 0, reg_mod.time_axis.size() > 0,
all([len(getattr(reg_mod.region_env, attr)) > 0 for attr in
("temperature", "wind_speed", "precipitation", "rel_hum", "radiation")])))
# run the model, e.g. as you may configure it in a script:
if runnable(region_model):
region_model.interpolate(region_model.interpolation_parameter, region_model.region_env)
region_model.revert_to_initial_state()
region_model.run_cells()
else:
print('Something wrong with model configuration.')
| notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Okay, so the simulation was run. Now we may be interested in looking at some of the output. We'll take a brief summary glance in the next section, and save a deeper dive into the simulation results for another notebook.
3. Simulation results
The first step will be simply to look at the discharge results for each subcatchment within our simulation domain. For simplicity, we can use a pandas.DataFrame to collect the data from each catchment. | # Here we are going to extract data from the simulation.
# We start by creating a dict to hold discharge for each of the subcatchments.
# Then we'll get the data from the region_model object
# mapping of internal catch ID to catchment
catchment_id_map = region_model.catchment_id_map
# First get the time-axis which we'll use as the index for the data frame
ta = region_model.time_axis
# and convert it to datetimes
index = [dt.datetime.utcfromtimestamp(p.start) for p in ta]
# Now we'll add all the discharge series for each catchment
data = {}
for cid in catchment_id_map:
# get the discharge time series for the subcatchment
q_ts = region_model.statistics.discharge([int(cid)])
data[cid] = q_ts.values.to_numpy()
df = pd.DataFrame(data, index=index)
# we can simply use:
ax = df.plot(figsize=(20,15))
ax.legend(title="Catch. ID")
ax.set_ylabel("discharge [m3 s-1]") | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. This doesn't necessarily show anything about the simulation, per se, but rather results from the interpolation step. | from matplotlib.cm import jet as jet
from matplotlib.colors import Normalize
# get all the cells for one sub-catchment with 'id' == 1228
c1228 = [c for c in region_model.cells if c.geo.catchment_id() == 1228]
# for plotting, create an mpl normalizer based on min,max elevation
elv = [c.geo.mid_point().z for c in c1228]
norm = Normalize(min(elv), max(elv))
#plot with line color a function of elevation
fig, ax = plt.subplots(figsize=(15,10))
# here we are cycling through each of the cells in c1228
for dat,elv in zip([c.env_ts.temperature.values for c in c1228], [c.mid_point().z for c in c1228]):
ax.plot(dat, color=jet(norm(elv)), label=int(elv))
# the following is just to plot the legend entries and not related to Shyft
handles, labels = ax.get_legend_handles_labels()
# sort by labels
import operator
hl = sorted(zip(handles, labels),
key=operator.itemgetter(1))
handles2, labels2 = zip(*hl)
# show legend, but only every fifth entry
ax.legend(handles2[::5], labels2[::5], title='Elevation [m]') | notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot.
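A hedged sketch of that exercise (it reuses the c1228 list and the plt module from the previous cell, and is not part of the original notebook): plot each cell's mean interpolated temperature against its elevation:

# mean of the interpolated temperature forcing per cell vs. cell elevation
mean_temp = [sum(c.env_ts.temperature.values) / len(c.env_ts.temperature.values) for c in c1228]
elev = [c.geo.mid_point().z for c in c1228]

fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(elev, mean_temp)
ax.set_xlabel('cell elevation [m]')
ax.set_ylabel('mean temperature [deg C]')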
Now we're going to create a function that will read initial states from the initial_state_repo. In practice, this is already done by the ConfigSimulator, but to demonstrate lower level functions, we'll reset the states of our region_model: | state_generator.find_state?
# create a function to read the states from the state repository
def get_init_state_from_repo(initial_state_repo_, region_model_id_=None, timestamp=None):
state_id = 0
if hasattr(initial_state_repo_, 'n'): # No stored state, generated on-the-fly
initial_state_repo_.n = region_model.size()
else:
states = initial_state_repo_.find_state(
region_model_id_criteria=region_model_id_,
utc_timestamp_criteria=timestamp)
if len(states) > 0:
state_id = states[0].state_id # most_recent_state i.e. <= start time
else:
raise Exception('No initial state matching criteria.')
return initial_state_repo_.get_state(state_id)
init_state = get_init_state_from_repo(state_generator, timestamp=region_model.time_axis.start)
| notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual state object, we'll see init_state contains, for each cell in our simulation, the state variables for each 'method' of the method stack.
Let's look more closely: | def print_pub_attr(obj):
#only public attributes
print(f'{obj.__class__.__name__}:\t',[attr for attr in dir(obj) if attr[0] != '_'])
print(len(init_state))
init_state_cell0 = init_state[0]
# the identifier
print_pub_attr(init_state_cell0.id)
# gam snow states
print_pub_attr(init_state_cell0.state.gs)
#init_state_cell0.kirchner states
print_pub_attr(init_state_cell0.state.kirchner)
| notebooks/repository/repositories-intro.ipynb | statkraft/shyft-doc | lgpl-3.0 |
Breaking it down...
The while statement on line 2 starts the loop. The code indented beneath the while (lines 3-4) will repeat, in a linear fashion until the Boolean expression on line 2 i <= 3 is False, at which time the program continues with line 5.
Some Terminology
We call i <=3 the loop's exit condition. The variable i inside the exit condition is the only thing that we can change to make the exit condition False, therefore it is the loop control variable. On line 4 we change the loop control variable by adding one to it, this is called an increment.
Furthermore, we know how many times this loop will execute before it actually runs: 3. Even if we allowed the user to enter a number, and looped that many times, we would still know. We call this a definite loop. Whenever we iterate over a fixed number of values, regardless of whether those values are determined at run-time or not, we're using a definite loop.
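For example, here's a sketch (not part of the original lab) of a definite loop where the number of repetitions comes from the user at run-time, yet we still know the count before the loop starts: | times = int(input("How many Mississippis? "))
count = 1
while count <= times:
    print(count, "Mississippi...")
    count = count + 1   # increment the loop control variable
print("Blitz!")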
If the loop control variable never forces the exit condition to be False, we have an infinite loop. As the name implies, an Infinite loop never ends and typically causes our computer to crash or lock up. | ## WARNING!!! INFINITE LOOP AHEAD
## IF YOU RUN THIS CODE YOU WILL NEED TO STOP OR RESTART THE KERNEL AFTER RUNNING THIS!!!
i = 1
while i <= 3:
print(i,"Mississippi...")
print("Blitz!") | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
For loops
To prevent an infinite loop when the loop is definite, we use the for statement. Here's the same program using for: | for i in range(1,4):
print(i,"Mississippi...")
print("Blitz!") | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |