markdown | code | path | repo_name | license
---|---|---|---|---|
One confusing aspect of this loop is range(1,4): why does it loop from 1 to 3 and not from 1 to 4? It has to do with the fact that computers start counting at zero, so range() stops just before the second argument. An easier way to understand it: subtract the two arguments and you get the number of times the loop will run. For example, 4-1 == 3.
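To see why, it helps to print the values that range(1,4) actually produces; a minimal illustration:

```python
# range(1, 4) produces 1, 2, 3 -- the stop value 4 is excluded,
# so the loop body runs 4 - 1 == 3 times.
for i in range(1, 4):
    print(i, "Mississippi")
```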
1.1 You Code
In the space below, re-write the above program to count Mississippi from 10 to 15. You need practice writing loops, so make sure you do NOT copy the code.
Note: How many times will that loop? | # TODO Write code here
| content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
Indefinite loops
With indefinite loops we do not know how many times the program will execute. This is typically based on user action, and therefore our loop is subject to the whims of whoever interacts with it. Most applications like spreadsheets, photo editors, and games use indefinite loops. They'll run on your computer, seemingly forever, until you choose to quit the application.
The classic indefinite loop pattern involves getting input from the user inside the loop. We then inspect the input and based on that input we might exit the loop. Here's an example: | name = ""
while name != 'mike':
name = input("Say my name! : ")
print(f"Nope, my name is not {name}!") | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
In the above example, the loop will keep on looping until we enter mike. The value mike is called the sentinel value - a value we look out for, and when it appears we stop the loop. For this reason indefinite loops are also known as sentinel-controlled loops.
The classic problem with indefinite/sentinel-controlled loops is that it's really difficult to get the application's logic to line up with the exit condition. For example, we need to set name = "" on line 1 so that the condition on line 2 starts out as True. We also have the wonky logic where, when we finally say 'mike', it still prints Nope, my name is not mike! before exiting.
Break statement
The solution to this problem is to use the break statement. break tells Python to exit the loop immediately. We then re-structure all of our indefinite loops to look like this:
while True:
if sentinel-controlled-exit-condition:
break
Here's our program re-written with the break statement. This is the recommended way to write indefinite loops in this course.
NOTE: We always check for the sentinel value immediately AFTER the input() function. | while True:
name = input("Say my name!: ")
if name == 'mike':
break
print("Nope, my name is not %s!" %(name)) | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
1.2 You Code: Debug This loop
This program should count the number of times you input the value ni. As soon as you enter a value other than ni the program will stop looping and print the count of ni's.
Example Run:
What say you? ni
What say you? ni
What say you? ni
What say you? nay
You said 'ni' 3 times.
The problem, of course, is that this code wasn't written correctly. It's up to you to get it working! | #TODO Debug this code
nicount=0
while True:
say = input "What say you? ")
if say == 'ni':
break
nicount = 1
print(f"You said 'ni' P {nicount} times.") | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
Multiple exit conditions
This indefinite loop pattern makes it easy to add additional exit conditions. For example, here's the program again, but it now stops when you say my name or type in 3 wrong names.
Make sure to run this program a couple of times to understand what is happening:
First enter mike to exit the program,
Next enter the wrong name 3 times. | times = 0
while True:
name = input("Say my name!: ")
times = times + 1
if name == 'mike': # sentinal 1
print("You got it!")
break
if times == 3: # sentinal 2
print("Game over. Too many tries!")
break
print(f"Nope, my name is not {name}") | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
Counting Characters in Text
Let's conclude the lab with you writing your own program that uses both definite and indefinite loops. This program should input some text and then a character, and count the number of times that character appears in the text. This process will repeat until the text entered is empty.
The program should work as follows. Example run:
Enter a text, or press ENTER to quit: mississippi
Which character are you searching for? i
There are 4 i's in mississippi
Enter a text, or press ENTER to quit: port-au-prince
Which character are you searching for? -
There are 4 -'s in port-au-prince
Enter a text, or press ENTER to quit:
Goodbye!
This seems complicated, so let's break the problem up using the problem simplification approach.
First, write code to count the number of occurrences of a character in any text. Here is the algorithm:
set count to 0
input the text
input the search character
for ch in text
if ch equals the search character
increment the count
print there are {count} {search characters} in {text}
1.3 You Code
Implement the algorithm above in code in the cell below. | # TODO Write code here
| content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
Next, we surround the code we wrote in 1.3 with a sentinel-controlled indefinite loop. The sentinel (the condition that exits the loop) is when the text is empty (text==""). The algorithm is:
loop
set count to 0
input the text
if text is empty quit loop
input the search character
for ch in text
if ch equals the search character
increment the count
print there are {count} {search characters} in {text}
1.4 You Code
Implement the algorithm above in code. | # TODO Write Code here:
| content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==-- | # run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit() | content/lessons/04-Iterations/LAB-Iterations.ipynb | IST256/learn-python | mit |
H2O init | h2o.init(max_mem_size = 20) #uses all cores by default
h2o.remove_all() | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
import xy_train, x_test | xy_tr = h2o.import_file(path = os.path.realpath("../daielee/xy_tr.csv"))
x_test = h2o.import_file(path = os.path.realpath("../daielee/x_test.csv"))
xy_tr_df = xy_tr.as_data_frame(use_pandas=True)
x_test_df = x_test.as_data_frame(use_pandas=True)
print(xy_tr_df.shape, x_test_df.shape) | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
27-AUG-2017 dl_model
Model Details
dl_model = H2ODeepLearningEstimator(epochs=1000)
dl_model.train(X, y, xy_tr)
=============
* H2ODeepLearningEstimator : Deep Learning
* Model Key: DeepLearning_model_python_1503841734286_1
ModelMetricsRegression: deeplearning
Reported on train data.
MSE: 0.02257823450695032
RMSE: 0.15026055539279204
MAE: 0.06853673758752012
RMSLE: NaN
Mean Residual Deviance: 0.02257823450695032 | X = xy_tr.col_names[0:57]
y = xy_tr.col_names[57]
dl_model = H2ODeepLearningEstimator(epochs=1000)
dl_model.train(X, y, xy_tr)
dl_model.summary
sh = dl_model.score_history()
sh = pd.DataFrame(sh)
print(sh.columns)
sh.plot(x='epochs',y = ['training_deviance', 'training_mae'])
dl_model.default_params
dl_model.model_performance(test_data=xy_tr)
pd.DataFrame(dl_model.varimp())
y_test = dl_model.predict(test_data=x_test)
print(y_test.shape) | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
28-AUG-2017 dl_model_list 1 | nuron_cnts = [40,80,160]
layer_cnts = [1,2,3,4,5]
acts = ["Tanh","Maxout","Rectifier","RectifierWithDropout"]
models_list = []
m_names_list = []
i = 0
# N 3 * L 5 * A 4 = 60n
for act in acts:
for layer_cnt in layer_cnts:
for nuron_cnt in nuron_cnts:
m_names_list.append("N:"+str(nuron_cnt)+"L:"+str(layer_cnt)+"A:"+act)
print(m_names_list[i])
models_list.append(H2ODeepLearningEstimator(
model_id=m_names_list[i],
hidden=[nuron_cnt]*layer_cnt, # more hidden layers -> more complex interactions
activation = act,
epochs=10, # to keep it short enough
score_validation_samples=10000,
overwrite_with_best_model=True,
adaptive_rate=True,
l1=0.00001, # add some L1/L2 regularization
l2=0.00001,
max_w2=10.0 # helps stability for Rectifier
))
models_list[i].train(x=X,y=y,training_frame=xy_tr,
validation_frame=xy_tr)
i+=1
for i in range(len(models_list)):
try:
sh = models_list[i].score_history()
sh = pd.DataFrame(sh)
perform = sh['validation_deviance'].tolist()[-1]
print(models_list[i].model_id,end=" ")
print(perform)
except:
print(end="") | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
split the data 3 ways:
60% for training
20% for validation (hyper parameter tuning)
20% for final testing
We will train the model on one set and use the others to test its validity, by ensuring that it can predict accurately on data the model has not been shown.
The second set will be used for validation most of the time.
The third set will be withheld until the end, to ensure that our validation accuracy is consistent with data we have never seen during the iterative process.
Decision
Use RectifierWithDropout. | train, valid, test = xy_tr.split_frame([0.6, 0.2], seed=1234) | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
28-AUG-2017 dl_model_list 2 | nuron_cnts = [40,80,160]
layer_cnts = [1,2,3,4,5]
acts = ["RectifierWithDropout"] #"Tanh","Maxout","Rectifier",
models_list = []
m_names_list = []
time_tkn_wall =[]
time_tkn_clk=[]
i = 0
# N 3 * L 5 * A 1 = 15n
for act in acts:
for layer_cnt in layer_cnts:
for nuron_cnt in nuron_cnts:
m_names_list.append("N: "+str(nuron_cnt)+" L: "+str(layer_cnt)+" A: "+act)
print(m_names_list[i])
models_list.append(H2ODeepLearningEstimator(
model_id=m_names_list[i],
hidden=[nuron_cnt]*layer_cnt, # more hidden layers -> more complex interactions
activation = act,
epochs=10, # to keep it short enough
score_validation_samples=10000,
overwrite_with_best_model=True,
adaptive_rate=True,
l1=0.00001, # add some L1/L2 regularization
l2=0.00001,
max_w2=10.0 # helps stability for Rectifier
))
str_time_clk = time.clock()
str_time_wall = time.time()
models_list[i].train(x=X,y=y,training_frame=train,
validation_frame=valid)
time_tkn_clk.append(time.clock()-str_time_clk)
time_tkn_wall.append(time.time()-str_time_wall)
i+=1 | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
time.time() measures wall-clock time (real elapsed time), while time.clock() measures the CPU time spent by the current process; time.clock() also typically has a higher resolution than time.time(). Note that time.clock() is deprecated and was removed in Python 3.8 in favour of time.perf_counter() and time.process_time(); a small sketch of those follows after this cell. | for i in range(len(models_list)):
try:
sh = models_list[i].score_history()
sh = pd.DataFrame(sh)
perform = sh['validation_deviance'].tolist()[-1]
print(models_list[i].model_id,end=" ")
print(" clk "+str(time_tkn_clk[i])+" wall "+str(time_tkn_wall[i]),end=" ")
print(perform)
except:
print(end="") | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
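As a quick illustration of the modern replacements for time.clock(): time.perf_counter() tracks elapsed wall-clock-style time while time.process_time() only advances while the process is actually using the CPU. A minimal sketch:

```python
import time

start_wall = time.perf_counter()   # elapsed (wall-clock style) timer
start_cpu = time.process_time()    # CPU time of the current process

time.sleep(1)                      # sleeping consumes wall time but almost no CPU time

print("wall:", time.perf_counter() - start_wall)   # roughly 1.0 second
print("cpu: ", time.process_time() - start_cpu)    # close to 0
```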
28-AUG-2017 dl_model_list 3
30, 40, 50 neurons; 4, 5 layers | nuron_cnts = [30,40,50]
layer_cnts = [4,5]
acts = ["RectifierWithDropout"] #"Tanh","Maxout","Rectifier",
dout=0.5
models_list = []
m_names_list = []
time_tkn_wall =[]
time_tkn_clk=[]
i = 0
# N 3 * L 2 * A 1 = 6 models
for act in acts:
for layer_cnt in layer_cnts:
for nuron_cnt in nuron_cnts:
m_names_list.append("N: "+str(nuron_cnt)+" L: "+str(layer_cnt)+" A: "+act)
print(m_names_list[i])
models_list.append(H2ODeepLearningEstimator(
model_id=m_names_list[i],
hidden=[nuron_cnt]*layer_cnt, # more hidden layers -> more complex interactions
hidden_dropout_ratios=[dout]*layer_cnt,
activation = act,
epochs=500, # to keep it short enough
train_samples_per_iteration=300,
score_validation_samples=10000,
loss="absolute",
overwrite_with_best_model=True,
adaptive_rate=True,
l1=0.00001, # add some L1/L2 regularization
l2=0.0001,
max_w2=10.0, # helps stability for Rectifier
variable_importances=True
))
str_time_clk = time.clock()
str_time_wall = time.time()
models_list[i].train(x=X,y=y,training_frame=train,
validation_frame=valid)
time_tkn_clk.append(time.clock()-str_time_clk)
time_tkn_wall.append(time.time()-str_time_wall)
i+=1 | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
tests | dl_pref=dl_model.model_performance(test_data=test)
dl_model.mean
dl_pref.mae()
train.shape
models_list[0].model_id
for i in range(len(models_list)):
try:
sh = models_list[i].score_history()
sh = pd.DataFrame(sh)
sh.plot(x='epochs',y = ['training_mae', 'validation_mae'])
tr_perform = sh['training_mae'].tolist()[-1]
val_perform = sh['validation_mae'].tolist()[-1]
ts_perform= models_list[i].model_performance(test_data=test).mae()
print(models_list[i].model_id,end=" ")
print("clk "+str(round(time_tkn_clk[i],2))+"\twall "+str(round(time_tkn_wall[i]/60,2)),end="\t")
print(
"tr " + str(round(tr_perform,6)) +"\tval " + str(round(val_perform,6)) + "\tts " + str(round(ts_perform,6))
)
except:
print(end="") | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
Predict test_h2o & combine
Predict x_test & combine | import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder
import lightgbm as lgb
import gc
from sklearn.linear_model import LinearRegression
import random
import datetime as dt
np.random.seed(17)
random.seed(17)
train = pd.read_csv("../input/train_2016_v2.csv", parse_dates=["transactiondate"])
properties = pd.read_csv("../input/properties_2016.csv")
submission = pd.read_csv("../input/sample_submission.csv")
print(len(train),len(properties),len(submission))
def get_features(df):
df["transactiondate"] = pd.to_datetime(df["transactiondate"])
df["transactiondate_year"] = df["transactiondate"].dt.year
df["transactiondate_month"] = df["transactiondate"].dt.month
df['transactiondate'] = df['transactiondate'].dt.quarter
df = df.fillna(-1.0)
return df
def MAE(y, ypred):
#logerror=log(Zestimate)−log(SalePrice)
return np.sum([abs(y[i]-ypred[i]) for i in range(len(y))]) / len(y)
train = pd.merge(train, properties, how='left', on='parcelid')
y = train['logerror'].values
test = pd.merge(submission, properties, how='left', left_on='ParcelId', right_on='parcelid')
properties = [] #memory
exc = [train.columns[c] for c in range(len(train.columns)) if train.dtypes[c] == 'O'] + ['logerror','parcelid']
col = [c for c in train.columns if c not in exc]
train = get_features(train[col])
test['transactiondate'] = '2016-01-01' #should use the most common training date
test = get_features(test[col])
reg = LinearRegression(n_jobs=-1)
reg.fit(train, y); print('fit...')
print(MAE(y, reg.predict(train)))
train = []; y = [] #memory
test_dates = ['2016-10-01','2016-11-01','2016-12-01','2017-10-01','2017-11-01','2017-12-01']
test_columns = ['201610','201611','201612','201710','201711','201712']
pred0 = models_list[1].predict(test_data=x_test).as_data_frame(use_pandas=True)
pred0.head(n=5)
OLS_WEIGHT = 0.0856
print( "\nPredicting with OLS and combining with XGB/LGB/baseline predicitons: ..." )
for i in range(len(test_dates)):
test['transactiondate'] = test_dates[i]
pred = OLS_WEIGHT * reg.predict(get_features(test)) + (1-OLS_WEIGHT)*pred0.values[:,0]
submission[test_columns[i]] = [float(format(x, '.4f')) for x in pred]
print('predict...', i)
print( "\nCombined XGB/LGB/baseline/OLS predictions:" )
print( submission.head() )
from datetime import datetime
submission.to_csv('sub{}.csv'.format(datetime.now().strftime('%Y%m%d_%H%M%S')), index=False)
# h2o.model.regression.h2o_mean_absolute_error(y_actual=..., y_predicted=...)  # incomplete call, left as a placeholder | zillow2017/H2Opy_v0.ipynb | minesh1291/Practicing-Kaggle | gpl-3.0 |
Writing your own callbacks
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/guide/keras/custom_callback"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_callback.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/guide/keras/custom_callback.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/guide/keras/custom_callback.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Introduction
Callbacks are a powerful tool to customize the behavior of a Keras model during training, evaluation, or inference. Examples include tf.keras.callbacks.TensorBoard, which visualizes training progress and results with TensorBoard, and tf.keras.callbacks.ModelCheckpoint, which periodically saves your model during training.
In this guide, you will learn what a Keras callback is, what it can do, and how you can build your own. We provide a few demos of simple callback applications to get you started.
Setup | import tensorflow as tf
from tensorflow import keras | site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Overview of Keras callbacks
All callbacks subclass the keras.callbacks.Callback class, and override a set of methods called at various stages of training, testing, and predicting. Callbacks are useful to get a view on the internal states and statistics of the model during training.
You can pass a list of callbacks (as the keyword argument callbacks) to the following model methods:
keras.Model.fit()
keras.Model.evaluate()
keras.Model.predict()
An overview of callback methods
Global methods
on_(train|test|predict)_begin(self, logs=None)
Called at the beginning of fit/evaluate/predict.
on_(train|test|predict)_end(self, logs=None)
Called at the end of fit/evaluate/predict.
Batch-level methods for training/testing/predicting
on_(train|test|predict)_batch_begin(self, batch, logs=None)
Called right before processing a batch during training/testing/predicting.
on_(train|test|predict)_batch_end(self, batch, logs=None)
Called at the end of training/testing/predicting a batch. Within this method, logs is a dict containing the metrics results.
Epoch-level methods (training only)
on_epoch_begin(self, epoch, logs=None)
Called at the beginning of an epoch during training.
on_epoch_end(self, epoch, logs=None)
Called at the end of an epoch during training.
A basic example
Let's take a look at a concrete example. To get started, let's import TensorFlow and define a simple Sequential Keras model: | # Define the Keras model to add callbacks to
def get_model():
model = keras.Sequential()
model.add(keras.layers.Dense(1, input_dim=784))
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=0.1),
loss="mean_squared_error",
metrics=["mean_absolute_error"],
)
return model
| site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Then, load the MNIST data for training and testing from the Keras datasets API: | # Load example MNIST data and pre-process it
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
# Limit the data to 1000 samples
x_train = x_train[:1000]
y_train = y_train[:1000]
x_test = x_test[:1000]
y_test = y_test[:1000] | site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Next, define a simple custom callback that logs:
When fit/evaluate/predict starts and ends
When each epoch starts and ends
When each training batch starts and ends
When each evaluation (test) batch starts and ends
When each inference (prediction) batch starts and ends | class CustomCallback(keras.callbacks.Callback):
def on_train_begin(self, logs=None):
keys = list(logs.keys())
print("Starting training; got log keys: {}".format(keys))
def on_train_end(self, logs=None):
keys = list(logs.keys())
print("Stop training; got log keys: {}".format(keys))
def on_epoch_begin(self, epoch, logs=None):
keys = list(logs.keys())
print("Start epoch {} of training; got log keys: {}".format(epoch, keys))
def on_epoch_end(self, epoch, logs=None):
keys = list(logs.keys())
print("End epoch {} of training; got log keys: {}".format(epoch, keys))
def on_test_begin(self, logs=None):
keys = list(logs.keys())
print("Start testing; got log keys: {}".format(keys))
def on_test_end(self, logs=None):
keys = list(logs.keys())
print("Stop testing; got log keys: {}".format(keys))
def on_predict_begin(self, logs=None):
keys = list(logs.keys())
print("Start predicting; got log keys: {}".format(keys))
def on_predict_end(self, logs=None):
keys = list(logs.keys())
print("Stop predicting; got log keys: {}".format(keys))
def on_train_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: start of batch {}; got log keys: {}".format(batch, keys))
def on_train_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Training: end of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: start of batch {}; got log keys: {}".format(batch, keys))
def on_test_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Evaluating: end of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_begin(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: start of batch {}; got log keys: {}".format(batch, keys))
def on_predict_batch_end(self, batch, logs=None):
keys = list(logs.keys())
print("...Predicting: end of batch {}; got log keys: {}".format(batch, keys))
| site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Let's try it out: | model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=1,
verbose=0,
validation_split=0.5,
callbacks=[CustomCallback()],
)
res = model.evaluate(
x_test, y_test, batch_size=128, verbose=0, callbacks=[CustomCallback()]
)
res = model.predict(x_test, batch_size=128, callbacks=[CustomCallback()]) | site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Usage of the logs dict
The logs dict contains the loss value, and all the metrics at the end of a batch or epoch. Examples include the loss and the mean absolute error. | class LossAndErrorPrintingCallback(keras.callbacks.Callback):
def on_train_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_test_batch_end(self, batch, logs=None):
print(
"Up to batch {}, the average loss is {:7.2f}.".format(batch, logs["loss"])
)
def on_epoch_end(self, epoch, logs=None):
print(
"The average loss for epoch {} is {:7.2f} "
"and mean absolute error is {:7.2f}.".format(
epoch, logs["loss"], logs["mean_absolute_error"]
)
)
model = get_model()
model.fit(
x_train,
y_train,
batch_size=128,
epochs=2,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
)
res = model.evaluate(
x_test,
y_test,
batch_size=128,
verbose=0,
callbacks=[LossAndErrorPrintingCallback()],
) | site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Usage of the self.model attribute
In addition to receiving log information when one of their methods is called, callbacks have access to the model associated with the current round of training/evaluation/inference: self.model.
Here are a few of the things you can do with self.model in a callback:
Set self.model.stop_training = True to immediately interrupt training.
Mutate hyperparameters of the optimizer (available as self.model.optimizer), such as self.model.optimizer.learning_rate.
Save the model at periodic intervals.
Record the output of model.predict() on a few test samples at the end of each epoch, to use as a sanity check during training.
Extract visualizations of intermediate features at the end of each epoch, to monitor what the model is learning over time.
Etc.
Let's see this in action in a couple of examples.
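As a minimal sketch of the periodic-saving idea listed above (the save path my_model_epoch_{n} is just a placeholder), a callback can save the model with Model.save():

```python
class PeriodicSaver(keras.callbacks.Callback):
    """Saves the model every `every` epochs via self.model."""

    def __init__(self, every=5):
        super().__init__()
        self.every = every

    def on_epoch_end(self, epoch, logs=None):
        # self.model refers to the model currently being trained.
        if (epoch + 1) % self.every == 0:
            self.model.save(f"my_model_epoch_{epoch + 1}")
```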
Examples of Keras callback applications
Early stopping at minimum loss
This first example shows how to create a Callback that stops training when the minimum of the loss has been reached, by setting the attribute self.model.stop_training (boolean). Optionally, you can provide an argument patience to specify how many epochs to wait after the local minimum is reached before stopping.
tf.keras.callbacks.EarlyStopping provides a more complete and general implementation. | import numpy as np
class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
"""Stop training when the loss is at its min, i.e. the loss stops decreasing.
Arguments:
patience: Number of epochs to wait after min has been hit. After this
number of no improvement, training stops.
"""
def __init__(self, patience=0):
super(EarlyStoppingAtMinLoss, self).__init__()
self.patience = patience
# best_weights to store the weights at which the minimum loss occurs.
self.best_weights = None
def on_train_begin(self, logs=None):
# The number of epoch it has waited when loss is no longer minimum.
self.wait = 0
# The epoch the training stops at.
self.stopped_epoch = 0
# Initialize the best as infinity.
self.best = np.Inf
def on_epoch_end(self, epoch, logs=None):
current = logs.get("loss")
if np.less(current, self.best):
self.best = current
self.wait = 0
# Record the best weights if current results is better (less).
self.best_weights = self.model.get_weights()
else:
self.wait += 1
if self.wait >= self.patience:
self.stopped_epoch = epoch
self.model.stop_training = True
print("Restoring model weights from the end of the best epoch.")
self.model.set_weights(self.best_weights)
def on_train_end(self, logs=None):
if self.stopped_epoch > 0:
print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=30,
verbose=0,
callbacks=[LossAndErrorPrintingCallback(), EarlyStoppingAtMinLoss()],
) | site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Learning rate scheduling
In this example, we show how a custom Callback can be used to dynamically change the learning rate of the optimizer during training.
See callbacks.LearningRateScheduler for a more general implementation. | class CustomLearningRateScheduler(keras.callbacks.Callback):
"""Learning rate scheduler which sets the learning rate according to schedule.
Arguments:
schedule: a function that takes an epoch index
(integer, indexed from 0) and current learning rate
as inputs and returns a new learning rate as output (float).
"""
def __init__(self, schedule):
super(CustomLearningRateScheduler, self).__init__()
self.schedule = schedule
def on_epoch_begin(self, epoch, logs=None):
if not hasattr(self.model.optimizer, "lr"):
raise ValueError('Optimizer must have a "lr" attribute.')
# Get the current learning rate from model's optimizer.
lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
# Call schedule function to get the scheduled learning rate.
scheduled_lr = self.schedule(epoch, lr)
# Set the value back to the optimizer before this epoch starts
tf.keras.backend.set_value(self.model.optimizer.lr, scheduled_lr)
print("\nEpoch %05d: Learning rate is %6.4f." % (epoch, scheduled_lr))
LR_SCHEDULE = [
# (epoch to start, learning rate) tuples
(3, 0.05),
(6, 0.01),
(9, 0.005),
(12, 0.001),
]
def lr_schedule(epoch, lr):
"""Helper function to retrieve the scheduled learning rate based on epoch."""
if epoch < LR_SCHEDULE[0][0] or epoch > LR_SCHEDULE[-1][0]:
return lr
for i in range(len(LR_SCHEDULE)):
if epoch == LR_SCHEDULE[i][0]:
return LR_SCHEDULE[i][1]
return lr
model = get_model()
model.fit(
x_train,
y_train,
batch_size=64,
steps_per_epoch=5,
epochs=15,
verbose=0,
callbacks=[
LossAndErrorPrintingCallback(),
CustomLearningRateScheduler(lr_schedule),
],
) | site/zh-cn/guide/keras/custom_callback.ipynb | tensorflow/docs-l10n | apache-2.0 |
Adding Boundary Pores
When performing transport simulations it is often useful to have 'boundary' pores attached to the surface(s) of the network where boundary conditions can be applied. When using the Cubic class, two methods are available for doing this: add_boundaries, which is specific to the Cubic class, and add_boundary_pores, which is a generic method that can also be used on other network types and which is inherited from GenericNetwork. The first method automatically adds boundaries to ALL six faces of the network and offsets them from the network by 1/2 of the value provided as the network spacing. The second method provides total control over which boundary pores are created and where they are positioned, but requires the user to specify to which pores the boundary pores should be attached. Let's explore these two options: | pn.add_boundary_pores(labels=['top', 'bottom'])
Let's quickly visualize this network with the added boundaries: | #NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10]) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Adding and Removing Pores and Throats
OpenPNM uses a list-based data storage scheme for all properties, including topological connections. One of the benefits of this approach is that adding and removing pores and throats from the network is essentially as simple as adding or removing rows from the data arrays. The one exception to this 'simplicity' is that the 'throat.conns' array must be treated carefully when trimming pores, so OpenPNM provides the extend and trim functions for adding and removing, respectively. To demonstrate, let's reduce the coordination number of the network to create a more random structure: | Ts = np.random.rand(pn.Nt) < 0.1 # Create a mask with ~10% of throats labeled True
op.topotools.trim(network=pn, throats=Ts) # Use mask to indicate which throats to trim | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
When the trim function is called, it automatically checks the health of the network afterwards, so logger messages might appear on the command line if problems were found such as isolated clusters of pores or pores with no throats. This health check is performed by calling the Network's check_network_health method which returns a HealthDict containing the results of the checks: | a = pn.check_network_health()
print(a) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
The HealthDict contains several lists including things like duplicate throats and isolated pores, but also a suggestion of which pores to trim to return the network to a healthy state. Also, the HealthDict has a health attribute that is False if any checks fail. | op.topotools.trim(network=pn, pores=a['trim_pores'])
Let's take another look at the network to see the trimmed pores and throats: | #NBVAL_IGNORE_OUTPUT
fig = op.topotools.plot_connections(pn, c='r')
fig = op.topotools.plot_coordinates(pn, c='b', fig=fig)
fig.set_size_inches([10, 10]) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Define Geometry Objects
The boundary pores we've added to the network should be treated a little bit differently. Specifically, they should have no volume or length (as they are not physically representative of real pores). To do this, we create two separate Geometry objects, one for internal pores and one for the boundaries: | Ps = pn.pores('*boundary', mode='not')
Ts = pn.throats('*boundary', mode='not')
geom = op.geometry.StickAndBall(network=pn, pores=Ps, throats=Ts, name='intern')
Ps = pn.pores('*boundary')
Ts = pn.throats('*boundary')
boun = op.geometry.Boundary(network=pn, pores=Ps, throats=Ts, name='boun') | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
The StickAndBall class is preloaded with the pore-scale models to calculate all the necessary size information (pore.diameter, pore.volume, throat.length, throat.diameter, etc). The Boundary class is special and is only used for the boundary pores. In this class, geometrical properties are set to small fixed values such that they don't affect the simulation results.
Define Multiple Phase Objects
In order to simulate relative permeability of air through a partially water-filled network, we need to create each Phase object. OpenPNM includes pre-defined classes for each of these common fluids: | air = op.phases.Air(network=pn)
water = op.phases.Water(network=pn)
water['throat.contact_angle'] = 110
water['throat.surface_tension'] = 0.072 | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Aside: Creating a Custom Phase Class
In many cases you will want to create your own fluid, such as an oil or brine, which may be commonly used in your research. OpenPNM cannot predict all the possible scenarios, but luckily it is easy to create a custom Phase class as follows: | from openpnm.phases import GenericPhase
class Oil(GenericPhase):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.add_model(propname='pore.viscosity',
model=op.models.misc.polynomial,
prop='pore.temperature',
a=[1.82082e-2, 6.51E-04, -3.48E-7, 1.11E-10])
self['pore.molecular_weight'] = 116 # g/mol | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Creating a Phase class basically involves placing a series of self.add_model commands within the __init__ section of the class definition. This means that when the class is instantiated, all the models are added to itself (i.e. self).
**kwargs is a Python trick that captures all arguments in a dict called kwargs and passes them to another function that may need them. In this case they are passed to the __init__ method of Oil's parent by the super function. Specifically, things like name and network are expected.
The above code block also stores the molecular weight of the oil as a constant value
Adding models and constant values in this way could just as easily be done in a run script, but the advantage of defining a class is that it can be saved in a file (i.e. 'my_custom_phases') and reused in any project. | oil = Oil(network=pn)
print(oil) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Define Physics Objects for Each Geometry and Each Phase
In tutorial #2 we created two Physics objects, one for each of the two Geometry objects used to handle the stratified layers. In this tutorial, the internal pores and the boundary pores each have their own Geometry, but there are two Phases, which also each require a unique Physics: | phys_water_internal = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom)
phys_air_internal = op.physics.GenericPhysics(network=pn, phase=air, geometry=geom)
phys_water_boundary = op.physics.GenericPhysics(network=pn, phase=water, geometry=boun)
phys_air_boundary = op.physics.GenericPhysics(network=pn, phase=air, geometry=boun) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
To reiterate, one Physics object is required for each Geometry AND each Phase, so the number can grow to become annoying very quickly. Some useful tips for easing this situation are given below.
Create a Custom Pore-Scale Physics Model
Perhaps the most distinguishing feature between pore-network modeling papers is the pore-scale physics models employed. Accordingly, OpenPNM was designed to allow for easy customization in this regard, so that you can create your own models to augment or replace the ones included in the OpenPNM models libraries. For demonstration, let's implement the capillary pressure model proposed by Mason and Morrow in 1994. They studied the entry pressure of non-wetting fluid into a throat formed by spheres, and found that the converging-diverging geometry increased the capillary pressure required to penetrate the throat. As a simple approximation they proposed $P_c = -2 \sigma \cdot cos(2/3 \theta) / R_t$
Pore-scale models are written as basic function definitions: | def mason_model(target, diameter='throat.diameter', theta='throat.contact_angle',
sigma='throat.surface_tension', f=0.6667):
proj = target.project
network = proj.network
phase = proj.find_phase(target)
Dt = network[diameter]
theta = phase[theta]
sigma = phase[sigma]
Pc = 4*sigma*np.cos(f*np.deg2rad(theta))/Dt
return Pc[phase.throats(target.name)] | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Let's examine the components of above code:
The function receives a target object as an argument. This indicates which object the results will be returned to.
The f value is a scale factor that is applied to the contact angle. Mason and Morrow suggested a value of 2/3 as a decent fit to the data, but we'll make this an adjustable parameter with 2/3 as the default.
Note that throat.diameter is actually a Geometry property, but it is retrieved via the network using the data exchange rules outlined in the second tutorial.
All of the calculations are done for every throat in the network, but this pore-scale model may be assigned to a target like a Physics object, that is a subset of the full domain. As such, the last line extracts values from the Pc array for the location of target and returns just the subset.
The actual values of the contact angle, surface tension, and throat diameter are NOT sent in as numerical arrays, but rather as dictionary keys to the arrays. There is one very important reason for this: if arrays had been sent, then re-running the model would use the same arrays and hence not use any updated values. By having access to dictionary keys, the model actually looks up the current values in each of the arrays whenever it is run.
It is good practice to include the dictionary keys as arguments, such as theta = 'throat.contact_angle'. This way the user can control where the contact angle is stored on the target object.
Copy Models Between Physics Objects
As mentioned above, the need to specify a separate Physics object for each Geometry and Phase can become tedious. It is possible to copy the pore-scale models assigned to one object onto another object. First, let's assign the models we need to phys_water_internal: | mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys_water_internal.add_model(propname='throat.hydraulic_conductance',
model=mod)
phys_water_internal.add_model(propname='throat.entry_pressure',
model=mason_model) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Now make a copy of the models on phys_water_internal and apply it to all the other water Physics objects: | phys_water_boundary.models = phys_water_internal.models
The only 'gotcha' with this approach is that each of the Physics objects must be regenerated in order to place numerical values for all the properties into the data arrays: | phys_water_boundary.regenerate_models()
phys_air_internal.regenerate_models()
phys_air_boundary.regenerate_models() | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Adjust Pore-Scale Model Parameters
The pore-scale models are stored in a ModelsDict object that is itself stored under the models attribute of each object. This arrangement is somewhat convoluted, but it enables integrated storage of the models on the objects to which they apply. The models on an object can be inspected with print(phys_water_internal), which shows a list of all the pore-scale properties that are computed by a model, and some information about the model's regeneration mode.
Each model in the ModelsDict can be individually inspected by accessing it using the dictionary key corresponding to the pore-property that it calculates, i.e. print(phys_water_internal.models['throat.entry_pressure']). This shows a list of all the parameters associated with that model. It is possible to edit these parameters directly: | phys_water_internal.models['throat.entry_pressure']['f'] = 0.75 # Change value
phys_water_internal.regenerate_models() # Regenerate model with new 'f' value | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
More details about the ModelsDict and ModelWrapper classes can be found in the documentation on models.
Perform Multiphase Transport Simulations
Use the Built-In Drainage Algorithm to Generate an Invading Phase Configuration | inv = op.algorithms.Porosimetry(network=pn)
inv.setup(phase=water)
inv.set_inlets(pores=pn.pores(['top', 'bottom']))
inv.run() | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
The inlet pores were set to both 'top' and 'bottom' using the pn.pores method. The algorithm applies to the entire network so the mapping of network pores to the algorithm pores is 1-to-1.
The run method automatically generates a list of 25 capillary pressure points to test, but you can also specify more points, or exactly which points to test. See the method's documentation for the details.
Once the algorithm has been run, the resulting capillary pressure curve can be viewed with plot_drainage_curve. If you'd prefer a table of data for plotting in your software of choice you can use get_drainage_data which prints a table in the console.
Set Pores and Throats to Invaded
After running, the inv object possesses arrays containing the pressure at which each pore and throat was invaded, stored as 'pore.invasion_pressure' and 'throat.invasion_pressure'. These arrays can be used to obtain a list of which pores and throats are invaded by water, using Boolean logic: | Pi = inv['pore.invasion_pressure'] < 5000
Ti = inv['throat.invasion_pressure'] < 5000 | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
The resulting Boolean masks can be used to manually adjust the hydraulic conductance of pores and throats based on their phase occupancy. The following lines set the throats that are NOT filled with water to near-zero hydraulic conductance for the water phase, so that water can only flow through the invaded throats: | Ts = phys_water_internal.map_throats(~Ti, origin=water)
phys_water_internal['throat.hydraulic_conductance'][Ts] = 1e-20 | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
The logic of these statements implicitly assumes that transport between two pores is only blocked if the throat is filled with the other phase, meaning that both pores could be filled and transport is still permitted. Another option would be to set the transport to near-zero if either or both of the pores are filled as well.
The above approach can get complicated if there are several Geometry objects, and it is also a bit laborious. There is a pore-scale model for this under Physics.models.multiphase called conduit_conductance. The term conduit refers to the path between two pores that includes 1/2 of each pore plus the connecting throat.
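To illustrate the conduit idea, here is a minimal sketch (not the library's conduit_conductance model) that re-uses the Pi/Ti masks and the map_throats pattern from above to block water flow through any conduit whose throat or either neighboring pore is not water-filled:

```python
conns = pn['throat.conns']  # pore indices at either end of every throat
# A conduit is open to water only if the throat AND both neighboring pores are invaded.
open_conduit = Ti & Pi[conns[:, 0]] & Pi[conns[:, 1]]
Ts_blocked = phys_water_internal.map_throats(~open_conduit, origin=water)
phys_water_internal['throat.hydraulic_conductance'][Ts_blocked] = 1e-20
```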
Calculate Relative Permeability of Each Phase
We are now ready to calculate the relative permeability of the domain under partially flooded conditions. Instantiate a StokesFlow object: | water_flow = op.algorithms.StokesFlow(network=pn, phase=water)
water_flow.set_value_BC(pores=pn.pores('left'), values=200000)
water_flow.set_value_BC(pores=pn.pores('right'), values=100000)
water_flow.run()
Q_partial, = water_flow.rate(pores=pn.pores('right')) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
The relative permeability is the ratio of the water flow through the partially water-saturated medium versus through the fully water-saturated medium; hence we need to find the absolute permeability of water. This can be accomplished by regenerating the phys_water_internal object, which will recalculate the 'throat.hydraulic_conductance' values and overwrite our manually entered near-zero values from the inv simulation, using phys_water_internal.regenerate_models(). We can then re-use the water_flow algorithm: | phys_water_internal.regenerate_models()
water_flow.run()
Q_full, = water_flow.rate(pores=pn.pores('right')) | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
And finally, the relative permeability can be found from: | K_rel = Q_partial/Q_full
print(f"Relative permeability: {K_rel:.5f}") | examples/tutorials/Intro to OpenPNM - Advanced.ipynb | TomTranter/OpenPNM | mit |
Data: | # Get pricing data for an energy (XLE) and industrial (XLI) ETF
xle = get_pricing('XLE', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')
xli = get_pricing('XLI', fields = 'price', start_date = '2016-01-01', end_date = '2017-01-01')
# Compute returns
xle_returns = xle.pct_change()[1:]
xli_returns = xli.pct_change()[1:] | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | quantopian/research_public | apache-2.0 |
Exercise 1 : Hypothesis Testing on Variances.
Plot the histogram of the returns of XLE and XLI
Check to see if each return stream is normally distributed
If the assets are normally distributed, use the F-test to perform a hypothesis test and decide whether the two assets have the same variance.
If the assets are not normally distributed, use the Levene test (in the scipy library) to perform a hypothesis test on variance. | xle = plt.hist(xle_returns, bins=30)
xli = plt.hist(xli_returns, bins=30, color='r')
plt.xlabel('returns')
plt.ylabel('Frequency')
plt.title('Histogram of the returns of XLE and XLI')
plt.legend(['XLE returns', 'XLI returns']);
# Checking for normality using function above.
print 'XLE'
normal_test(xle_returns)
print 'XLI'
normal_test(xli_returns)
# Because the data is not normally distributed, we must use the levene and not the F-test of variance.
stats.levene(xle_returns, xli_returns) | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | quantopian/research_public | apache-2.0 |
Since the p-value of the Levene test is less than our $\alpha$ level (0.05), we can reject the null hypothesis that the variability of the two groups is equal, implying that the variances are unequal.
Exercise 2 : Hypothesis Testing on Means.
Since we know that the variances are not equal, we must use Welch's t-test.
- Calculate the mean returns of XLE and XLI.
- Find the difference between the two means.
- Calculate the standard deviation of the returns of XLE and XLI
- Using the formula given above, calculate the t-test statistic (Using $\alpha = 0.05$) for Welch's t-test to test whether the mean returns of XLE and XLI are different.
- Consult the Hypothesis Testing Lecture to calculate the p-value for this test. Are the mean returns of XLE and XLI the same?
Now use the t-test function for two independent samples from the scipy library. Compare the results. | # Manually calculating the t-statistic
N1 = len(xle_returns)
N2 = len(xli_returns)
m1 = xle_returns.mean()
m2 = xli_returns.mean()
s1 = xle_returns.std()
s2 = xli_returns.std()
test_statistic = (m1 - m2) / (s1**2 / N1 + s2**2 / N2)**0.5
print 't-test statistic:', test_statistic
# Alternative form, using the scipy library on python.
stats.ttest_ind(xle_returns, xli_returns, equal_var=False) | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | quantopian/research_public | apache-2.0 |
Exercise 3 : Skewness
Calculate the mean and median of the two assets
Calculate the skewness using the scipy library | # Calculate the mean and median of xle and xli using the numpy library
xle_mean = np.mean(xle_returns)
xle_median = np.median(xle_returns)
print 'Mean of XLE returns = ', xle_mean, '; median = ', xle_median
xli_mean = np.mean(xli_returns)
xli_median = np.median(xli_returns)
print 'Mean of XLI returns = ', xli_mean, '; median = ', xli_median
# Print values of Skewness for xle and xli returns
print 'Skew of XLE returns:', stats.skew(xle_returns)
print 'Skew of XLI returns:', stats.skew(xli_returns) | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | quantopian/research_public | apache-2.0 |
A skewness greater than 0 means that there is more weight in the right tail of the distribution; this holds for both the XLE and the XLI returns, whose skewness values are both positive.
Exercise 4 : Kurtosis
Check the kurtosis of the two assets, using the scipy library. (Note that scipy.stats.kurtosis returns the excess kurtosis by default, so a normal distribution scores 0 rather than 3.)
Using the seaborn library, plot the distribution of XLE and XLI returns.
Recall:
- Kurtosis > 3 is leptokurtic, a highly peaked, narrow deviation from the mean
- Kurtosis = 3 is mesokurtic. The most significant mesokurtic distribution is the normal distribution family.
- Kurtosis < 3 is platykurtic, a lower-peaked, broad deviation from the mean | # Print value of Kurtosis for xle and xli returns
print 'kurtosis:', stats.kurtosis(xle_returns)
print 'kurtosis:', stats.kurtosis(xli_returns)
# Distribution plot of XLE returns in red (for Kurtosis of 1.6).
# Distribution plot of XLI returns in blue (for Kurtosis of 2.0).
xle = sns.distplot(xle_returns, color = 'r', axlabel = 'xle')
xli = sns.distplot(xli_returns, axlabel = 'xli'); | notebooks/lectures/Case_Study_Comparing_ETFs/answers/notebook.ipynb | quantopian/research_public | apache-2.0 |
Dijkstra's Shortest Path Algorithm
The notebook Set.ipynb implements <em style="color:blue">sets</em> as
<a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>.
The class Set provides the following API:
- Set() creates an empty set.
- S.isEmpty() checks whether the set S is empty.
- S.member(x) checks whether x is an element of the given set S.
- S.insert(x) inserts x into the set S.
This does not return a new set but rather modifies the given set S.
- S.delete(x) deletes x from the set S.
This does not return a new set but rather modifies the set S.
- S.pop() returns the <em style="color:blue">smallest element</em> of the set S.
Furthermore, this element is removed from the given set S.
Since sets are implemented as ordered binary trees, the elements of a set need to be comparable, i.e. if
x and y are inserted into a set, then the expression x < y has to be defined and has to return a
Boolean value. Furthermore, the relation < has to be a
<a href="https://en.wikipedia.org/wiki/linear_order">linear order</a>.
The class Set can be used to implement a priority queue that supports the
<em style="color:blue">removal</em> of elements. | %run Set.ipynb | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
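A small usage sketch, assuming Set.ipynb has been run: pairs of the form (priority, node) are compared lexicographically, so pop() always returns the pair with the smallest priority.

```python
S = Set()
S.insert((7, 'a'))
S.insert((2, 'b'))
S.insert((5, 'c'))
print(S.pop())       # (2, 'b') -- the pair with the smallest priority
print(S.isEmpty())   # False, two elements remain
```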
The function shortest_path takes two arguments, a node source and a dictionary Edges.
- source is the start node.
- Edges is a dictionary that encodes the set of edges of the graph. For every node x the value of Edges[x] has the form
$$ \bigl[ (y_1, l_1), \cdots, (y_n, l_n) \bigr]. $$
This list is interpreted as follows: For every $i = 1,\cdots,n$ there is an edge
$(x, y_i)$ pointing from $x$ to $y_i$ and this edge has the length $l_i$.
The function returns the dictionary Distance. For every node u such that there is a path from source to
u, Distance[u] is the length of the shortest path from source to u. The implementation uses
<a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm">Dijkstra's algorithm</a> and proceeds as follows:
Distance is a dictionary mapping nodes to their estimated distance from the node
source. If d = Distance[x], then we know that there is a path of length d leading
from source to x. However, in general we do not know whether there is a path shorter
than d that also connects the source to the node x.
The function shortest_path maintains an additional variable called Visited.
This variable contains the set of those nodes that have been <em style="color:blue">visited</em>
by the algorithm.
To be more precise, Visited contains those nodes u that have been removed from the
Fringe and for which all neighboring nodes, i.e. those nodes y such that
there is an edge (u,y), have been examined. It can be shown that once a node u is added to
Visited, Distance[u] is the length of the shortest path from source to u.
Fringe is a priority queue that contains pairs of the form (d, x), where x is a node and d
is the distance that x has from the node source. This priority queue is implemented as a set,
which in turn is represented by an ordered binary tree. The fact that we store the node x and the
distance d as a pair (d,x) implies that the distances are used as priorities because pairs are
compared lexicographically.
Initially the only node that is known to be
reachable from source is the node source. Hence Fringe is initialized as the
set { (0, source) }.
As long as the set Fringe is not empty, line 7 of the implementation removes that node u
from the set Fringe that has the smallest distance d from the node source.
Next, all edges leading away from u are visited. If there is an edge (u, v) that has length l,
then we check whether the node v has already a distance assigned. If the node v already has the
distance dv assigned but the value d + l is less than dv, then we have found a
shorter path from source to v. This path leads from source to u and then proceeds
to v via the edge (u,v).
If v had already been visited before and hence dv=Distance[v] is defined, we
have to update the priority of the v in the Fringe. The easiest way to do this is to remove
the old pair (dv, v) from the Fringe and replace this pair by the new pair
(d+l, v), because d+l is the new estimate of the distance between source and v and
d+l is the new priority of v.
Once we have inspected all neighbours of the node u, u is added to the set of those nodes that have
been Visited.
When the Fringe has been exhausted, the dictionary Distance contains the distances of every node that is reachable from the node source. | def shortest_path(source, Edges):
every node that is reachable from the node source | def shortest_path(source, Edges):
Distance = { source: 0 }
Visited = { source }
Fringe = Set()
Fringe.insert( (0, source) )
while not Fringe.isEmpty():
d, u = Fringe.pop() # get and remove smallest element
for v, l in Edges[u]:
dv = Distance.get(v, None)
if dv == None or d + l < dv:
if dv != None:
Fringe.delete( (dv, v) )
Distance[v] = d + l
Fringe.insert( (d + l, v) )
Visited.add(u)
return Distance | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
The version of shortest_path given below provides a graphical animation of the algorithm. | def shortest_path(source, Edges):
Distance = { source: 0 }
Visited = { source } # set only needed for visualization
Fringe = Set()
Fringe.insert( (0, source) )
while not Fringe.isEmpty():
d, u = Fringe.pop()
display(toDot(source, u, Edges, Fringe, Distance, Visited))
print('_' * 80)
for v, l in Edges[u]:
dv = Distance.get(v, None)
if dv == None or d + l < dv:
if dv != None:
Fringe.delete( (dv, v) )
Distance[v] = d + l
Fringe.insert( (d + l, v) )
Visited.add(u)
display(toDot(source, None, Edges, Fringe, Distance, Visited))
return Distance | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
Code to Display the Directed Graph | import graphviz as gv | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
The function $\texttt{toDot}(\texttt{source}, \texttt{p}, \texttt{Edges}, \texttt{Fringe}, \texttt{Distance}, \texttt{Visited})$ takes a graph that is represented by its Edges, the node p that is currently being processed, a set of nodes Fringe, a dictionary Distance that has the distance of each node from the node source, and a set Visited of nodes that have already been visited. | def toDot(source, p, Edges, Fringe, Distance, Visited):
V = set()
for x in Edges.keys():
V.add(x)
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
dot.attr(rankdir='LR', size='8,5')
for x in V:
if x == source:
dot.node(str(x), color='blue', shape='doublecircle')
else:
d = str(Distance.get(x, ''))
if x == p:
dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='magenta')
elif x in Distance and Fringe.member( (Distance[x], x) ):
dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='red')
elif x in Visited:
dot.node(str(x), label='{' + str(x) + '|' + d + '}', color='blue')
else:
dot.node(str(x), label='{' + str(x) + '|' + d + '}')
for u in V:
for v, l in Edges[u]:
dot.edge(str(u), str(v), label=str(l))
return dot | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
Code for Testing | Edges = { 'a': [ ('c', 2), ('b', 9)],
'b': [('d', 1)],
'c': [('e', 5), ('g', 3)],
'd': [('f', 2), ('e', 4)],
'e': [('f', 1), ('b', 2)],
'f': [('h', 5)],
'g': [('e', 1)],
'h': []
}
s = 'a'
sp = shortest_path(s, Edges)
sp | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
Crossing the Tunnel
Four persons, Alice, Britney, Charly and Daniel have to cross a tunnel.
The tunnel is so narrow, that at most two persons can cross it together.
In order to cross the tunnel, a torch is needed. Together, they only
have a single torch.
1. Alice is the fastest and can cross the tunnel in 1 minute.
2. Britney needs 2 minutes to cross the tunnel.
3. Charly is slower and needs 4 minutes.
4. Daniel is slowest and takes 5 minutes to cross the tunnel.
What is the fastest plan to cross the tunnel?
We will model this problem as a graph theoretical problem. The nodes of the graph will be sets
of people. In particular, it will be the set of people at the entrance of the tunnel. In order to model the torch, the torch can also be a member of these sets. | All = frozenset({ 'Alice', 'Britney', 'Charly', 'Daniel', 'Torch' }) | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
The timining is modelled by a dictionary. | Time = { 'Alice': 1, 'Britney': 2, 'Charly': 4, 'Daniel': 5, 'Torch': 0 } | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
The function $\texttt{power}(M)$ defined below computes the power set of the set $M$, i.e. we have:
$$ \texttt{power}(M) = 2^M = \bigl\{ A \mid A \subseteq M \bigr\} $$ | def power(M):
if M == set():
return { frozenset() }
else:
C = set(M) # C is a copy of M as we don't want to change the set M
x = C.pop() # pop removes the element x from the set C
P1 = power(C)
P2 = { A | {x} for A in P1 }
return P1 | P2 | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
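As a quick sanity check (a hypothetical usage example, not part of the original notebook), the power set of a two-element set has exactly four elements:

# Hypothetical usage example for power(); not part of the original notebook.
print(power({1, 2}))
# Expected output (the order of the frozensets may vary):
# {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}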
If $B$ is a set of persons, then $\texttt{duration}(B)$ is the time that this group needs to cross the tunnel.
$B$ also contains 'Torch'. | def duration(B):
return max(Time[x] for x in B) | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
$\texttt{left_right}(S)$ describes a crossing of the tunnel from the entrance at the left side to the exit at the right side of the tunnel. | def left_right(S):
return [(S - B, duration(B)) for B in power(S) if 'Torch' in B and 2 <= len(B) <= 3] | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
$\texttt{right_left}(S)$ describes a crossing of the tunnel from right to left. | def right_left(S):
return [(S | B, duration(B)) for B in power(All - S) if 'Torch' in B and 2 <= len(B) <= 3]
Edges = { S: left_right(S) + right_left(S) for S in power(All) }
len(Edges) | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
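To make the encoding concrete, here is a small hedged check of one edge (an illustration, not part of the original notebook): starting from All, Alice and Britney can cross together with the torch, which takes duration({'Alice', 'Britney', 'Torch'}) = 2 minutes and leaves Charly and Daniel at the entrance.

# Hypothetical check of a single edge produced by left_right(All):
# choosing B = {'Alice', 'Britney', 'Torch'} yields the pair
# (frozenset({'Charly', 'Daniel'}), 2), i.e. Charly and Daniel stay on the
# left side of the tunnel and the crossing costs 2 minutes.
assert (frozenset({'Charly', 'Daniel'}), 2) in left_right(All)
# There are 2**5 = 32 nodes in total, one for every subset of All.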
The function shortest_path is Dijkstra's algorithm. It returns both a dictionary Parent containing
the parent nodes and a dictionary Distance with the distances. The dictionary Parent can be used to
compute the shortest path leading from the node source to some other node. | def shortest_path(source, Edges):
Distance = { source: 0 }
Parent = {}
Fringe = Set()
Fringe.insert( (0, source) )
while not Fringe.isEmpty():
d, u = Fringe.pop()
for v, l in Edges[u]:
dv = Distance.get(v, None)
if dv == None or d + l < dv:
if dv != None:
Fringe.delete( (dv, v) )
Distance[v] = d + l
Fringe.insert( (d + l, v) )
Parent[v] = u
return Parent, Distance
Parent, Distance = shortest_path(frozenset(All), Edges) | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
Let us see whether the goal was reachable and how long it takes to reach the goal. | goal = frozenset()
Distance[goal] | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
Given two nodes source and goal and a dictionary Parent containing the parent of every reached node, the function
find_path returns the path from source to goal. | def find_path(source, goal, Parent):
p = Parent.get(goal)
if p == None:
return [source]
return find_path(source, p, Parent) + [goal]
Path = find_path(frozenset(All), frozenset(), Parent)
def print_path():
total = 0
print("_" * 81);
for i in range(len(Path)):
Left = set(Path[i])
Right = set(All) - set(Left)
if Left == set() or Right == set():
print(Left, " " * 25, Right)
else:
print(Left, " " * 30, Right)
print("_" * 81);
if i < len(Path) - 1:
if "Torch" in Path[i]:
Diff = set(Path[i]) - set(Path[i+1])
time = duration(Diff)
total += time
print(" " * 20, ">>> ", Diff, ':', time, " >>>")
else:
Diff = set(Path[i+1]) - set(Path[i])
time = duration(Diff)
total += time
print(" " * 20, "<<< ", Diff, ':', time, " <<<")
print("_" * 81)
print('Total time:', total, 'minutes.')
print_path() | Python/Chapter-09/Dijkstra.ipynb | Danghor/Algorithms | gpl-2.0 |
Create transformers | import pyspark.ml.feature as ft
births = births \
.withColumn( 'BIRTH_PLACE_INT',
births['BIRTH_PLACE'] \
.cast(typ.IntegerType())) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Having done this, we can now create our first Transformer. | encoder = ft.OneHotEncoder(
inputCol='BIRTH_PLACE_INT',
outputCol='BIRTH_PLACE_VEC') | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Let's now create a single column with all the features collated together. | featuresCreator = ft.VectorAssembler(
inputCols=[
col[0]
for col
in labels[2:]] + \
[encoder.getOutputCol()],
outputCol='features'
) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Create an estimator
In this example we will (once again) use the Logistic Regression model. | import pyspark.ml.classification as cl | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Once loaded, let's create the model. | logistic = cl.LogisticRegression(
maxIter=10,
regParam=0.01,
labelCol='INFANT_ALIVE_AT_REPORT') | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Create a pipeline
All that is left now is to create a Pipeline and fit the model. First, let's load the Pipeline from the package. | from pyspark.ml import Pipeline
pipeline = Pipeline(stages=[
encoder,
featuresCreator,
logistic
]) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Fit the model
Conveniently, the DataFrame API has the .randomSplit(...) method. | births_train, births_test = births \
.randomSplit([0.7, 0.3], seed=666) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Now run our pipeline and estimate our model. | model = pipeline.fit(births_train)
test_model = model.transform(births_test) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Here's what the test_model looks like. | test_model.take(1) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Model performance
Obviously, we would now like to test how well our model did. | import pyspark.ml.evaluation as ev
evaluator = ev.BinaryClassificationEvaluator(
rawPredictionCol='probability',
labelCol='INFANT_ALIVE_AT_REPORT')
print(evaluator.evaluate(test_model,
{evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test_model, {evaluator.metricName: 'areaUnderPR'})) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Saving the model
PySpark allows you to save the Pipeline definition for later use. | pipelinePath = './infant_oneHotEncoder_Logistic_Pipeline'
pipeline.write().overwrite().save(pipelinePath) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
So, you can load it up later and use it straight away to .fit(...) and predict. | loadedPipeline = Pipeline.load(pipelinePath)
loadedPipeline \
.fit(births_train)\
.transform(births_test)\
.take(1) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
You can also save the whole model. | from pyspark.ml import PipelineModel
modelPath = './infant_oneHotEncoder_Logistic_PipelineModel'
model.write().overwrite().save(modelPath)
loadedPipelineModel = PipelineModel.load(modelPath)
test_loadedModel = loadedPipelineModel.transform(births_test) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
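As a quick consistency check (a minimal sketch, not part of the original chapter), the reloaded model's predictions can be scored with the evaluator defined in the model-performance section above; the metrics should match those of the original model.

# Hedged sanity check: reuses the evaluator created earlier in this chapter.
print(evaluator.evaluate(test_loadedModel,
    {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(test_loadedModel,
    {evaluator.metricName: 'areaUnderPR'}))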
Parameter hyper-tuning
Grid search
Load the .tuning part of the package. | import pyspark.ml.tuning as tune | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Next let's specify our model and the list of parameters we want to loop through. | logistic = cl.LogisticRegression(
labelCol='INFANT_ALIVE_AT_REPORT')
grid = tune.ParamGridBuilder() \
.addGrid(logistic.maxIter,
[2, 10, 50]) \
.addGrid(logistic.regParam,
[0.01, 0.05, 0.3]) \
.build() | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Next, we need some way of comparing the models. | evaluator = ev.BinaryClassificationEvaluator(
rawPredictionCol='probability',
labelCol='INFANT_ALIVE_AT_REPORT') | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Create the logic that will do the validation work for us. | cv = tune.CrossValidator(
estimator=logistic,
estimatorParamMaps=grid,
evaluator=evaluator
) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Create a purely transforming Pipeline. | pipeline = Pipeline(stages=[encoder,featuresCreator])
data_transformer = pipeline.fit(births_train) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Having done this, we are ready to find the optimal combination of parameters for our model. | cvModel = cv.fit(data_transformer.transform(births_train)) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
The cvModel will return the best estimated model. We can now use it to see if it performed better than our previous model. | data_train = data_transformer \
.transform(births_test)
results = cvModel.transform(data_train)
print(evaluator.evaluate(results,
{evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(results,
{evaluator.metricName: 'areaUnderPR'})) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
What parameters does the best model have? The answer is a little convoluted, but here's how you can extract them. | results = [
(
[
{key.name: paramValue}
for key, paramValue
in zip(
params.keys(),
params.values())
], metric
)
for params, metric
in zip(
cvModel.getEstimatorParamMaps(),
cvModel.avgMetrics
)
]
sorted(results,
key=lambda el: el[1],
reverse=True)[0] | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Train-Validation splitting
Use the ChiSqSelector to select only the top five features, thus limiting the complexity of our model. | selector = ft.ChiSqSelector(
numTopFeatures=5,
featuresCol=featuresCreator.getOutputCol(),
outputCol='selectedFeatures',
labelCol='INFANT_ALIVE_AT_REPORT'
)
logistic = cl.LogisticRegression(
labelCol='INFANT_ALIVE_AT_REPORT',
featuresCol='selectedFeatures'
)
pipeline = Pipeline(stages=[encoder,featuresCreator,selector])
data_transformer = pipeline.fit(births_train) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
The TrainValidationSplit object gets created in the same fashion as the CrossValidator model. | tvs = tune.TrainValidationSplit(
estimator=logistic,
estimatorParamMaps=grid,
evaluator=evaluator
) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
As before, we fit our data to the model, and calculate the results. | tvsModel = tvs.fit(
data_transformer \
.transform(births_train)
)
data_train = data_transformer \
.transform(births_test)
results = tvsModel.transform(data_train)
print(evaluator.evaluate(results,
{evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(results,
{evaluator.metricName: 'areaUnderPR'})) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Other features of PySpark ML in action
Feature extraction
NLP-related feature extractors
Simple dataset. | text_data = spark.createDataFrame([
['''Machine learning can be applied to a wide variety
of data types, such as vectors, text, images, and
structured data. This API adopts the DataFrame from
Spark SQL in order to support a variety of data types.'''],
['''DataFrame supports many basic and structured types;
see the Spark SQL datatype reference for a list of
supported types. In addition to the types listed in
the Spark SQL guide, DataFrame can use ML Vector types.'''],
['''A DataFrame can be created either implicitly or
explicitly from a regular RDD. See the code examples
below and the Spark SQL programming guide for examples.'''],
['''Columns in a DataFrame are named. The code examples
below use names such as "text," "features," and "label."''']
], ['input']) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
First, we need to tokenize this text. | tokenizer = ft.RegexTokenizer(
inputCol='input',
outputCol='input_arr',
    pattern=r'\s+|[,."]')
The output of the tokenizer looks similar to this. | tok = tokenizer \
.transform(text_data) \
.select('input_arr')
tok.take(1) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Use the StopWordsRemover(...). | stopwords = ft.StopWordsRemover(
inputCol=tokenizer.getOutputCol(),
outputCol='input_stop') | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
The output of the method looks as follows | stopwords.transform(tok).select('input_stop').take(1) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Build the NGram model and the Pipeline. | ngram = ft.NGram(n=2,
inputCol=stopwords.getOutputCol(),
outputCol="nGrams")
pipeline = Pipeline(stages=[tokenizer, stopwords, ngram]) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
Now that we have the pipeline, we proceed in a very similar fashion as before. | data_ngram = pipeline \
.fit(text_data) \
.transform(text_data)
data_ngram.select('nGrams').take(1) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
That's it. We got our n-grams and we can then use them in further NLP processing.
Discretize continuous variables
It is sometimes useful to band the values into discrete buckets. | import numpy as np
x = np.arange(0, 100)
x = x / 100.0 * np.pi * 4
y = x * np.sin(x / 1.764) + 20.1234
schema = typ.StructType([
typ.StructField('continuous_var',
typ.DoubleType(),
False
)
])
data = spark.createDataFrame([[float(e), ] for e in y], schema=schema) | Chapter06/LearningPySpark_Chapter06.ipynb | drabastomek/learningPySpark | gpl-3.0 |
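One way to band these values into buckets is a quantile-based discretizer. The following is a minimal sketch assuming the Spark 2.x ML API and an arbitrary choice of five buckets; the chapter's own choice of discretizer and bucket count may differ.

# Hypothetical example: split 'continuous_var' into 5 quantile-based buckets.
discretizer = ft.QuantileDiscretizer(
    numBuckets=5,
    inputCol='continuous_var',
    outputCol='discretized')
data_discretized = discretizer.fit(data).transform(data)
data_discretized.select('continuous_var', 'discretized').show(5)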