Train a model for the competition
The code cell above trains a Random Forest model on train_X and train_y.
Use the code cell below to build a Random Forest model and train it on all of X and y. | # To improve accuracy, create a new Random Forest model which you will train on all training data
rf_model_on_full_data = ____
# fit rf_model_on_full_data on all data from the training data
____ | notebooks/machine_learning/raw/ex7.ipynb | Kaggle/learntools | apache-2.0 |
Now, read the file of "test" data, and apply your model to make predictions. | # path to file you will use for predictions
test_data_path = '../input/test.csv'
# read test data file using pandas
test_data = ____
# create test_X which comes from test_data but includes only the columns you used for prediction.
# The list of columns is stored in a variable called features
test_X = ____
# make predictions which we will submit.
test_preds = ____ | notebooks/machine_learning/raw/ex7.ipynb | Kaggle/learntools | apache-2.0 |
Before submitting, run a check to make sure your test_preds have the right format. | # Check your answer (To get credit for completing the exercise, you must get a "Correct" result!)
step_1.check()
# step_1.solution()
#%%RM_IF(PROD)%%
rf_model_on_full_data = RandomForestRegressor()
rf_model_on_full_data.fit(X, y)
test_data_path = '../input/test.csv'
test_data = pd.read_csv(test_data_path)
test_X = test_data[features]
test_preds = rf_model_on_full_data.predict(test_X)
step_1.assert_check_passed() | notebooks/machine_learning/raw/ex7.ipynb | Kaggle/learntools | apache-2.0 |
Generate a submission
Run the code cell below to generate a CSV file with your predictions that you can use to submit to the competition. | # Run the code to save predictions in the format used for competition scoring
output = pd.DataFrame({'Id': test_data.Id,
'SalePrice': test_preds})
output.to_csv('submission.csv', index=False) | notebooks/machine_learning/raw/ex7.ipynb | Kaggle/learntools | apache-2.0 |
Examples of plugins usage in folium
In this notebook we show a few illustrations of folium's plugin extensions.
This is a development notebook
Adds a button to enable/disable zoom scrolling.
ScrollZoomToggler | import os

import folium
from folium import plugins
m = folium.Map([45, 3], zoom_start=4)
plugins.ScrollZoomToggler().add_to(m)
m.save(os.path.join('results', 'Plugins_0.html'))
m | examples/Plugins.ipynb | shankari/folium | mit |
Fullscreen | m = folium.Map(location=[41.9, -97.3], zoom_start=4)
plugins.Fullscreen(
position='topright',
title='Expand me',
titleCancel='Exit me',
forceSeparateButton=True).add_to(m)
m.save(os.path.join('results', 'Plugins_4.html'))
m # Click on the top right button. | examples/Plugins.ipynb | shankari/folium | mit |
Shape error | import tensorflow as tf

# Note: this first version intentionally raises a shape error,
# because a has shape (4, 2) while c has shape (4,).
def some_method(data):
a = data[:,0:2]
c = data[:,1]
s = (a + c)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_data = tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
[2.8, 4.2, 5.6],
[2.9, 8.3, 7.3]
])
print(sess.run(some_method(fake_data)))
def some_method(data):
a = data[:,0:2]
print(a.get_shape())
c = data[:,1]
print(c.get_shape())
s = (a + c)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_data = tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
[2.8, 4.2, 5.6],
[2.9, 8.3, 7.3]
])
print(sess.run(some_method(fake_data)))
def some_method(data):
a = data[:,0:2]
print(a.get_shape())
c = data[:,1:3]
print(c.get_shape())
s = (a + c)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_data = tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
[2.8, 4.2, 5.6],
[2.9, 8.3, 7.3]
])
print(sess.run(some_method(fake_data)))
import tensorflow as tf
x = tf.constant([[3, 2],
[4, 5],
[6, 7]])
print("x.shape", x.shape)
expanded = tf.expand_dims(x, 1)
print("expanded.shape", expanded.shape)
sliced = tf.slice(x, [0, 1], [2, 1])
print("sliced.shape", sliced.shape)
with tf.Session() as sess:
print("expanded: ", expanded.eval())
print("sliced: ", sliced.eval()) | courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Vector vs scalar | # Note: this first call intentionally fails, because fake_data is a
# 1-D vector and cannot be sliced as data[:,0:2].
def some_method(data):
print(data.get_shape())
a = data[:,0:2]
print(a.get_shape())
c = data[:,1:3]
print(c.get_shape())
s = (a + c)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_data = tf.constant([5.0, 3.0, 7.1])
print(sess.run(some_method(fake_data)))
def some_method(data):
print(data.get_shape())
a = data[:,0:2]
print(a.get_shape())
c = data[:,1:3]
print(c.get_shape())
s = (a + c)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_data = tf.constant([5.0, 3.0, 7.1])
fake_data = tf.expand_dims(fake_data, 0)
print(sess.run(some_method(fake_data))) | courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Type error | # Note: this first version intentionally fails with a type error,
# because fake_a is float32 while fake_b is int32.
def some_method(a, b):
s = (a + b)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_a = tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
])
fake_b = tf.constant([
[2, 4, 5],
[2, 8, 7]
])
print(sess.run(some_method(fake_a, fake_b)))
def some_method(a, b):
b = tf.cast(b, tf.float32)
s = (a + b)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_a = tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
])
fake_b = tf.constant([
[2, 4, 5],
[2, 8, 7]
])
print(sess.run(some_method(fake_a, fake_b))) | courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
TensorFlow debugger
Wrap your normal Session object with tf_debug.LocalCLIDebugWrapperSession | import tensorflow as tf
from tensorflow.python import debug as tf_debug
def some_method(a, b):
b = tf.cast(b, tf.float32)
s = (a / b)
s2 = tf.matmul(s, tf.transpose(s))
return tf.sqrt(s2)
with tf.Session() as sess:
fake_a = [
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
]
fake_b = [
[2, 0, 5],
[2, 8, 7]
]
a = tf.placeholder(tf.float32, shape=[2, 3])
b = tf.placeholder(tf.int32, shape=[2, 3])
k = some_method(a, b)
# Note: won't work without the ui_type="readline" argument because
# Datalab is not an interactive terminal and doesn't support the default "curses" ui_type.
# If you are running this as a standalone program, omit the ui_type parameter and add --debug
# when invoking the TensorFlow program
# --debug (e.g: python debugger.py --debug )
sess = tf_debug.LocalCLIDebugWrapperSession(sess, ui_type="readline")
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
print(sess.run(k, feed_dict = {a: fake_a, b: fake_b})) | courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
In the tfdbg> window that comes up, try the following:
* run -f has_inf_or_nan
* Notice that several tensors are dumped once the filter criterion is met
* List the inputs to a specific tensor:
* li transpose:0
* Print the value of a tensor
* pt transpose:0
* Where is the inf?
Visit https://www.tensorflow.org/programmers_guide/debugger for usage details of tfdbg
tf.Print()
Create a python script named debugger.py with the contents shown below. | %%writefile debugger.py
import tensorflow as tf
def some_method(a, b):
b = tf.cast(b, tf.float32)
s = (a / b)
print_ab = tf.Print(s, [a, b])
s = tf.where(tf.is_nan(s), print_ab, s)
return tf.sqrt(tf.matmul(s, tf.transpose(s)))
with tf.Session() as sess:
fake_a = tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8],
])
fake_b = tf.constant([
[2, 0, 5],
[2, 8, 7]
])
print(sess.run(some_method(fake_a, fake_b))) | courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Execute the Python script | %%bash
python debugger.py | courses/machine_learning/deepdive/03_tensorflow/debug_demo.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Alpha-Beta Pruning with Progressive Deepening, Move Ordering, and Memoization
The function pd_evaluate takes three arguments:
- State is the current state of the game,
- time_limit is the maximum number of seconds the search may take,
- f is either the function maxValue or the function minValue.
The function pd_evaluate uses progressive deepening to compute the value of State. The given State is evaluated for a depth of $0$, $1$, $\cdots$ until either the time limit is exceeded or the value of the state is decided. The values calculated for a depth of $l$ are stored and used to sort the states when State is next evaluated for a depth of $l+1$. This is beneficial for alpha-beta pruning because alpha-beta pruning can cut off more branches from the search tree if we start by evaluating the best moves first. | import time
def pd_evaluate(State, time_limit, f):
start = time.time()
limit = 0
while True:
value = evaluate(State, limit, f)
stop = time.time()
if value in [-1, 1] or stop - start > time_limit:
print(f'searched to depth {limit}, using {round(stop - start, 3)} seconds')
return value, limit
limit += 1 | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The function evaluate takes five arguments:
- State is the current state of the game,
- limit determines the lookahead. To be more precise, it is the number of half-moves that are investigated to compute the value. If limit is 0 and the game has not ended, the game is evaluated via the function heuristic. This function is supposed to be defined in the notebook defining the game.
- f is either the function maxValue or the function minValue.
f = maxValue if it's the maximizing player's turn in State. Otherwise,
f = minValue.
- alpha and beta are the parameters from alpha-beta pruning.
The function evaluate returns the value that the given State has if both players play their optimal game.
- If the maximizing player can force a win, the return value is 1.
- If the maximizing player can at best force a draw, the return value is 0.
- If the maximizing player might lose even when playing optimally, the return value is -1.
Otherwise, the value is calculated according to a heuristic.
For reasons of efficiency, the function evaluate is memoized using the global variable gCache. This works in the same way as described in the notebook Alpha-Beta-Pruning-Memoization.ipynb. | def evaluate(State, limit, f, alpha=-1, beta=1):
global gCache
if (State, limit) in gCache:
flag, v = gCache[(State, limit)]
if flag == '=':
return v
if flag == '≤':
if v <= alpha:
return v
elif alpha < v < beta:
w = f(State, limit, alpha, v)
store_cache(State, limit, alpha, v, w)
return w
else: # beta <= v:
w = f(State, limit, alpha, beta)
store_cache(State, limit, alpha, beta, w)
return w
if flag == '≥':
if beta <= v:
return v
elif alpha < v < beta:
w = f(State, limit, v, beta)
store_cache(State, limit, v, beta, w)
return w
else: # v <= alpha
w = f(State, limit, alpha, beta)
store_cache(State, limit, alpha, beta, w)
return w
else:
v = f(State, limit, alpha, beta)
store_cache(State, limit, alpha, beta, v)
return v | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
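The helper store_cache used above is defined elsewhere in the notebook. The following is a minimal sketch that is consistent with how gCache is read in evaluate: the flag records whether the stored value is exact ('='), an upper bound ('≤'), or a lower bound ('≥').
def store_cache(State, limit, alpha, beta, v):
    global gCache
    if v <= alpha:
        gCache[(State, limit)] = ('≤', v)  # v is only an upper bound on the true value
    elif v < beta:  # alpha < v < beta
        gCache[(State, limit)] = ('=', v)  # v is the exact value
    else:  # beta <= v
        gCache[(State, limit)] = ('≥', v)  # v is only a lower bound on the true value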
The function maxValue satisfies the following specification:
- $\alpha \leq \texttt{value}(s) \leq \beta \;\rightarrow\;\texttt{maxValue}(s, l, \alpha, \beta) = \texttt{value}(s)$
- $\texttt{value}(s) \leq \alpha \;\rightarrow\; \texttt{maxValue}(s, l, \alpha, \beta) \leq \alpha$
- $\beta \leq \texttt{value}(s) \;\rightarrow\; \beta \leq \texttt{maxValue}(s, l, \alpha, \beta)$
It assumes that gPlayers[0] is the maximizing player. This function implements alpha-beta pruning. After searching up to a depth of limit, the value is approximated using the function heuristic. | import heapq

def maxValue(State, limit, alpha=-1, beta=1):
if finished(State):
return utility(State)
if limit == 0:
return heuristic(State)
value = alpha
NextStates = next_states(State, gPlayers[0])
if len(NextStates) == 1: # singular value extension
return evaluate(NextStates[0], limit, minValue, value, beta)
Moves = [] # empty priority queue
for ns in NextStates:
# heaps are sorted ascendingly, hence the minus
heapq.heappush(Moves, (-value_cache(ns, limit-2), ns))
while Moves:
_, ns = heapq.heappop(Moves)
value = max(value, evaluate(ns, limit-1, minValue, value, beta))
if value >= beta:
return value
return value | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
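The helper value_cache is likewise assumed to be defined elsewhere; the following sketch matches its use for move ordering above, returning a neutral default when no value has been cached:
def value_cache(State, limit):
    # Look up a previously computed value of State at the given depth; 0 if unknown.
    flag, v = gCache.get((State, limit), ('=', 0))
    return v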
The function minValue satisfies the following specification:
- $\alpha \leq \texttt{value}(s) \leq \beta \;\rightarrow\;\texttt{minValue}(s, l, \alpha, \beta) = \texttt{value}(s)$
- $\texttt{value}(s) \leq \alpha \;\rightarrow\; \texttt{minValue}(s, l, \alpha, \beta) \leq \alpha$
- $\beta \leq \texttt{value}(s) \;\rightarrow\; \beta \leq \texttt{minValue}(s, l, \alpha, \beta)$
It assumes that gPlayers[1] is the minimizing player. This function implements alpha-beta pruning. After searching up to a depth of limit, the value is approximated using the function heuristic. | def minValue(State, limit, alpha=-1, beta=1):
if finished(State):
return utility(State)
if limit == 0:
return heuristic(State)
value = beta
NextStates = next_states(State, gPlayers[1])
if len(NextStates) == 1:
return evaluate(NextStates[0], limit, maxValue, alpha, value)
Moves = [] # empty priority queue
for ns in NextStates:
heapq.heappush(Moves, (value_cache(ns, limit-2), ns))
while Moves:
_, ns = heapq.heappop(Moves)
value = min(value, evaluate(ns, limit-1, maxValue, alpha, value))
if value <= alpha:
return value
return value
%%capture
%run Connect-Four.ipynb | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
In the state shown below, Red can force a win by pushing his stones in the 6th row. Due to this fact, alpha-beta pruning is able to prune large parts of the search tree and hence the evaluation is fast. | canvas = create_canvas()
draw(gTestState, canvas, '?')
gCache = {}
%%time
value, limit = pd_evaluate(gTestState, 10, maxValue)
value
len(gCache)
gCache = {}
%%time
value, limit = pd_evaluate(gStart, 5, maxValue)
value
len(gCache) | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
In order to evaluate the effect of progressive deepening, we reset the cache and can then evaluate the test state without progressive deepening. | gCache = {}
%%time
value = evaluate(gTestState, 8, maxValue)
value
len(gCache) | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
Playing the Game
The function best_move takes two arguments:
- State is the current state of the game,
- time_limit is the time limit for the progressive deepening search.
The function best_move returns a pair of the form $(v, s)$ where $s$ is a state and $v$ is the value of this state. The state $s$ is a state that is reached from State if the player makes one of her optimal moves. In order to have some variation in the game, the function randomly chooses any of the optimal moves. | import random

def best_move(State, time_limit):
NextStates = next_states(State, gPlayers[0])
if len(NextStates) == 1:
return pd_evaluate(State, time_limit, maxValue), NextStates[0]
bestValue, limit = pd_evaluate(State, time_limit, maxValue)
BestMoves = [s for s in NextStates
if evaluate(s, limit-1, minValue) == bestValue
]
BestState = random.choice(BestMoves)
return bestValue, BestState | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
The function play_game plays on the given canvas. The game played is specified indirectly by specifying the following:
- gStart is a global variable defining the start state of the game.
- next_states is a function such that $\texttt{next_states}(s, p)$ computes the set of all possible states that can be reached from state $s$ if player $p$ is next to move.
- finished is a function such that $\texttt{finished}(s)$ is true for a state $s$ if the game is over in state $s$.
- utility is a function such that $\texttt{utility}(s, p)$ returns either -1, 0, or 1 in the terminal state $s$. We have that
- $\texttt{utility}(s, p)= -1$ iff the game is lost for player $p$ in state $s$,
- $\texttt{utility}(s, p)= 0$ iff the game is drawn, and
- $\texttt{utility}(s, p)= 1$ iff the game is won for player $p$ in state $s$. | import IPython.display

def play_game(canvas, time_limit):
global gCache, gMoveCounter
State = gStart
while (True):
gCache = {}
firstPlayer = gPlayers[0]
val, State = best_move(State, time_limit)
draw(State, canvas, f'value = {round(val, 2)}.')
if finished(State):
final_msg(State)
break
IPython.display.clear_output(wait=True)
State = get_move(State)
draw(State, canvas, '')
if finished(State):
IPython.display.clear_output(wait=True)
final_msg(State)
break
canvas = create_canvas()
draw(gStart, canvas, f'Current value of game for "X": {round(0, 2)}')
play_game(canvas, 2)
len(gCache) | Python/3 Games/Game.ipynb | karlstroetmann/Artificial-Intelligence | gpl-2.0 |
What are the metrics for "holding the position"? | print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
import pickle
with open('../../data/dyna_10000_states_full_training.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent) | notebooks/prod/n09_dyna_10000_states_full_training.ipynb | mtasende/Machine-Learning-Nanodegree-Capstone | mit |
Artistic Style Transfer with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/examples/style_transfer/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/examples/style_transfer/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/examples/style_transfer/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/examples/style_transfer/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
One of the most exciting recent developments in deep learning is artistic style transfer, also known as pastiche, which creates a new image based on two input images: one representing the artistic style and one representing the content.
Using this technique, you can generate beautiful new artworks in a range of styles.
If you are new to TensorFlow Lite and working with Android, check out the following example application that can help you get started:
<a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/examples/style_transfer/android">Android example</a>
If you are using a platform other than Android or iOS, or if you are already familiar with the <a href="https://www.tensorflow.org/api_docs/python/tf/lite">TensorFlow Lite APIs</a>, you can follow this tutorial to learn how to apply style transfer to any pair of content and style images with a pre-trained TensorFlow Lite model. You can use the model to add style transfer to your own mobile applications.
The model is open-sourced on GitHub. You can retrain it with different parameters (for example, increasing the content layers' weights to make the output image look more like the content image).
Understand the model architecture
This artistic style transfer model consists of two submodels:
Style prediction model: a MobilenetV2-based neural network that transforms an input style image into a 100-dimension style bottleneck vector.
Style transform model: a neural network that applies a style bottleneck vector to a content image and creates a stylized image.
If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance and exclude the style prediction model from your app's binary.
Setup
Import dependencies. | import tensorflow as tf
print(tf.__version__)
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
Download the content and style images, and the pre-trained TensorFlow Lite models. | content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite') | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
Pre-process the inputs
The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].
The style image size must be (1, 256, 256, 3). You centrally crop the image and resize it.
The content image must be (1, 384, 384, 3). You centrally crop the image and resize it. | # Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process an image by resizing and centrally cropping it.
def preprocess_image(image, target_dim):
# Resize the image so that the shorter dimension becomes target_dim pixels.
shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
image = tf.image.resize(image, new_shape)
# Central crop the image.
image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
return image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_image(content_image, 384)
preprocessed_style_image = preprocess_image(style_image, 256)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape) | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
Visualize the inputs | def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image') | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
Run style transfer with TensorFlow Lite
Style prediction | # Function to run style prediction on a preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape) | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
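As noted in the model architecture section, if your app only needs a fixed set of styles, you can precompute their bottleneck vectors once and ship only those with the app. A sketch using the helpers above (the saved file names here are hypothetical):
import numpy as np
# Hypothetical list of style image files bundled with the app.
my_style_paths = [style_path]
for path in my_style_paths:
    bottleneck = run_style_predict(preprocess_image(load_img(path), 256))
    np.save(path + '.bottleneck.npy', bottleneck)  # load this at runtime instead of running the prediction model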
Style transform | # Function to run style transform on a preprocessed content image.
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Get input details and allocate tensors.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image') | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
Style blending
You can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image. | # Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_image(content_image, 256)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracts from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image') | site/ja/lite/examples/style_transfer/overview.ipynb | tensorflow/docs-l10n | apache-2.0 |
Description
A 100-MVA, 14.4-kV, 0.8-PF-lagging, 50-Hz, two-pole, Y-connected synchronous generator has a per-unit synchronous reactance of 1.1 and a per-unit armature resistance of 0.011. | from numpy import sqrt, arccos, arctan, cos, sin, pi

Vl = 14.4e3 # [V]
S = 100e6 # [VA]
ra = 0.011 # [pu]
xs = 1.1 # [pu]
PF = 0.8
p = 2
fse = 50 # [Hz] | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
(a)
What are its synchronous reactance and armature resistance in ohms?
(b)
What is the magnitude of the internal generated voltage $E_A$ at the rated conditions?
What is its torque angle $\delta$ at these conditions?
(c)
Ignoring losses in this generator
What torque must be applied to its shaft by the prime mover at full load?
SOLUTION
The base phase voltage of this generator is: | Vphase_base = Vl / sqrt(3)
print('Vphase_base = {:.0f} V'.format(Vphase_base)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
Therefore, the base impedance of the generator is:
$$Z_\text{base} = \frac{3V^2_{\phi_\text{base}}}{S_\text{base}}$$ | Zbase = 3*Vphase_base**2 / S
print('Zbase = {:.3f} Ω'.format(Zbase)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
(a)
The generator impedances in ohms are: | Ra = ra * Zbase
Xs = xs * Zbase
print('''
Ra = {:.4f} Ω Xs = {:.3f} Ω
==============================='''.format(Ra, Xs)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
(b)
The rated armature current is:
$$I_A = I_L = \frac{S}{\sqrt{3}V_T}$$ | Ia_amp = S / (sqrt(3) * Vl)
print('Ia_amp = {:.0f} A'.format(Ia_amp)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
The power factor is 0.8 lagging, so: | Ia_angle = -arccos(PF)
Ia = Ia_amp * (cos(Ia_angle) + sin(Ia_angle)*1j)
print('Ia = {:.0f} ∠{:.2f}° A'.format(abs(Ia), Ia_angle / pi *180)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
It is very often the case, especially in larger machines, that the armature resistance $R_A$ is simply neglected and one calculates the armature voltage simply as:
$$\vec{E}_A = \vec{V}_\phi + jX_S\vec{I}_A$$
But since in this case we were given the armature resistance explicitly, we should also use it.
Therefore, the internal generated voltage is
$$\vec{E}_A = \vec{V}_\phi + (R_A + jX_S)\vec{I}_A$$ | EA = Vphase_base + (Ra + Xs*1j) * Ia
EA_angle = arctan(EA.imag/EA.real)
print('EA = {:.1f} V ∠{:.1f}°'.format(abs(EA), EA_angle/pi*180)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
Therefore, the magnitude of the internal generated voltage $E_A$ is: | abs(EA) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
V, and the torque angle $\delta$ is: | EA_angle/pi*180 | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
degrees.
(c)
Ignoring losses, the input power would equal the output power. Since | Pout = PF * S
print('Pout = {:.1F} MW'.format(Pout/1e6)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
and,
$$n_\text{sync} = \frac{120f_{se}}{P}$$ | n_sync = 120*fse / p
print('n_sync = {:.0F} r/min'.format(n_sync)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
the applied torque would be:
$$\tau_\text{app} = \tau_\text{ind} = \frac{P_\text{out}}{\omega_\text{sync}}$$ | w_sync = n_sync * (2*pi/60.0)
tau_app = Pout / w_sync
print('''
τ_app = {:.0f} Nm
================='''.format(tau_app)) | Chapman/Ch4-Problem_4-07.ipynb | dietmarw/EK5312_ElectricalMachines | unlicense |
To access an element in a nested list, first index to the inner list, then index to the item.
Example:
list_of_lists = [[1,2], [3,4], []]
Access the inner list first, then index to the item:
python
inner_list = list_of_lists[1] # [3,4]
print(inner_list[0]) # 3
Or even quicker:
python
list_of_lists[1][0] # 3
TRY IT
1) To get dragon roll from the sushi order, first we get the second element (index 1), then we get the second item (index 1)
2) Print california roll from the list everyones_order:
3) Print all items from the second person's order
Mutable Lists
Lists are mutable, which means that you can change their elements.
To assign a new value to an element
my_list = [1, 2, 3]
my_list[0] = 100 | sushi_order[0] = 'caterpillar roll'
print(sushi_order) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Update the last element in prices to be 21.00 and print out the new result | prices[-1] = 21.00
print(prices) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Operators and Lists
The in operator allows you to see if an element is contained in a list | sushi_order
print(('hamachi' in sushi_order))
if 'otoro' in sushi_order:
print("Big spender!") | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
You can use some arithmatic operators on lists
The + operator concatenates two lists
The * operator duplicates a list that many times | print((sushi_order * 3))
exprep = ['rep'+str(i) for i in range(5)]
exprep
print((prices + sushi_order)) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Note: You can only concatenate lists with lists! If you want to add a "non-list" element you can use the append() function. | newprices = prices.copy()
newprices.append(22)
print(newprices)
prices | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Don't forget, you can use the for and in keywords to loop through a list | for item in sushi_order:
print(("I'd like to order the {}.".format(item)))
print("And hold the wasabi!")
for ind, item in enumerate(sushi_order):
print(("I'd like to order the {0} for {1}.".format(item, prices[ind]))) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Create a variable called lots_of_sushi that repeats the inexpensive list two times | lots_of_sushi = inexpensive*2
print(lots_of_sushi) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Adding and deleting elements
To add an element to a list, you have a few options
the append method adds a single element to the end of a list; if you pass it a list, that entire list becomes the next element (making a list of lists)
the extend method takes a list of elements and adds them all to the end, not creating a list of lists
use the + operator like you saw before | my_sushis = ['maguro', 'rock n roll']
my_sushis.append('avocado roll')
print(my_sushis)
my_sushis.append(['hamachi', 'california roll'])
print(my_sushis)
my_sushis = ['maguro', 'rock n roll']
my_sushis.extend(['hamachi', 'california roll'])
print(my_sushis) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Add 'rock n roll' to sushi_order then delete the first element of sushi_order
List Functions
max returns the maximum value of a list
min returns the minimum value of a list
sum returns the sum of the values in a list
len returns the number of elements in a list # Just a reminder | numbers = [1, 1, 2, 3, 5, 8]
print((max(numbers)))
print((min(numbers)))
print((sum(numbers)))
print((len(numbers)))
| Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Find the average of numbers using list functions (and not a loop!) | sum(numbers)/len(numbers) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Aliasing
If you assign a list to another variable, it will still refer to the same list. This can cause trouble if you change one list because the other will change too. | cooked_rolls = ['unagi roll', 'shrimp tempura roll']
my_order = cooked_rolls
my_order.append('hamachi')
print(my_order)
print(cooked_rolls) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
To check this, you can use the is operator to see if both variable refer to the same object | print((my_order is cooked_rolls)) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Tuples
Tuples are very similar to lists. The major difference is that tuples are immutable, meaning that you cannot add, remove, or assign new values to a tuple.
A tuple is created by the comma, but by convention people usually surround tuples with parentheses. | noodles = ('soba', 'udon', 'ramen', 'lo mein', 'somen', 'rice noodle')
print((type(noodles))) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
To create a single element tuple, you need to add a comma to the end of that element (it looks kinda weird) | single_element_tuple = (1,)
print(single_element_tuple)
print((type(single_element_tuple))) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
You can use the indexing and slicing you learned for lists in the same way with tuples.
But, because tuples are immutable, you cannot use the append, pop, del, extend, or remove methods or even assign new values to indexes | print((noodles[0]))
print((noodles[4:]))
# This should throw an error
noodles[0] = 'spaghetti' | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
You can loop through tuples the same way you loop through lists, using for in | for noodle in noodles:
print(("Yummy, yummy {0} and {1}".format(noodle, 'sushi'))) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Create a tuple containing 'soy sauce' 'ginger' and 'wasabi' and save it in a variable called accompaniments
Zip
the zip function takes any number of lists of the same length and returns an iterator of tuples (wrapped in list() below for display), where the i-th tuple contains the i-th element from each of the lists.
This is really useful when combining lists that are related (especially for looping) | print((list(zip([1,2,3], [4,5,6]))))
sushi = ['salmon', 'tuna', 'sea urchin']
prices = [5.5, 6.75, 8]
sushi_and_prices = list(zip(sushi, prices))
sushi_and_prices
for sushi, price in sushi_and_prices:
print(("The {0} costs ${1}".format(sushi, price))) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Enumerate
While the zip function iterates over multiple lists, the built-in function enumerate loops through the indices and elements of a single list. It yields tuples containing the index and value of each element.
for index, value in enumerate(list):
... | exotic_sushi = ['tako', 'toro', 'uni', 'hirame']
for index, item in enumerate(exotic_sushi):
print((index, item)) | Lesson07_ListsandTuples/Lists and Tuples - after class.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Embarked feature | sns.countplot(data=df, hue="Survived", x="Embarked")
sns.barplot(data=df, x="Embarked", y="Survived")
sns.countplot(data=df, x="Age")
sns.boxplot(data=df, x="Survived", y="Age")
sns.stripplot(
x="Survived", y="Age", data=df, jitter=True, edgecolor="gray", alpha=0.25)
sns.FacetGrid(df, hue="Survived", size=6).map(sns.kdeplot, "Age").add_legend() | titanic-data-exploration.ipynb | muatik/dm | mit |
The chart above corrects the guess: unfortunately, passenger class plays a crucial role. | sns.countplot(data=df[df['Pclass'] == 3], hue="Survived", x="Sex")
sns.barplot(x="Sex", y="Survived", hue="Pclass", data=df);
# Note: cross_validation and grid_search are modules from older scikit-learn
# releases (in modern scikit-learn, use model_selection instead).
from sklearn import svm, cross_validation, grid_search

def titanicFit(df):
X = df[["Sex", "Age", "Pclass", "Embarked"]]
y = df["Survived"]
X.Age.fillna(X.Age.mean(), inplace=True)
X.Sex.replace(to_replace="male", value=1, inplace=True)
X.Sex.replace(to_replace="female", value=0, inplace=True)
X.Embarked.replace(to_replace="S", value=1, inplace=True)
X.Embarked.replace(to_replace="C", value=2, inplace=True)
X.Embarked.replace(to_replace="Q", value=3, inplace=True)
X_train, X_test, y_train, y_test = cross_validation.train_test_split(
X, y, test_size=0.3, random_state=0)
parameters = [
{
"kernel" :["linear"]
}, {
"kernel" :["rbf"], "C":[1, 10, 100], "gamma":[0.001, 0.002, 0.01]}
]
clf = grid_search.GridSearchCV(
svm.SVC(), param_grid=parameters, cv=5).fit(X, y)
return clf
# print(clf.score(X_test, y_test))
clf = titanicFit(df[df.Embarked.isnull() == False])
clf.grid_scores_ | titanic-data-exploration.ipynb | muatik/dm | mit |
Let's now create a model based on this dataset. We will use the decision tree algorithm to do this. | from sklearn import tree
clf = tree.DecisionTreeClassifier() | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
clf is the decision-tree-based classifier. We need to train it with the training dataset. | clf = clf.fit(features, labels) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Note that the classifier receives the features and the labels as parameters. This is a supervised classifier, so it needs to know the "answer key" of the instances being passed in.
Once the model is built, we can use it to classify an unknown instance. | # Weight 160 and bumpy texture. Note that this kind of fruit is not present in the dataset.
print(clf.predict([[160, 0]])) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
It classified this fruit as an orange.
HelloWorld++
Let's extend this HelloWorld a bit. Of course, the previous example only served to convey the idea of how such a system works. However, our program is not learning much, since the number of examples given to it is very small. Let's work with a slightly larger example.
For this example, we will use the Iris dataset, a classic dataset in machine learning. Its purpose is mostly didactic, and the task is to classify 3 species of a type of flower (Iris). The classification is based on 4 characteristics of the plant: sepal length, sepal width, petal length, and petal width.
<img src="http://5047-presscdn.pagely.netdna-cdn.com/wp-content/uploads/2015/04/iris_petal_sepal.png" />
The flowers are classified into 3 types: Iris Setosa, Iris Versicolor, and Iris Virginica.
Let's get to the code ;)
The first step is to load the dataset. Its files are available in the UCI Machine Learning Repository. However, since it is a widely used dataset, scikit-learn allows us to import it directly from the library. | from sklearn.datasets import load_iris
dataset_iris = load_iris() | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Printing the features: | print(dataset_iris.feature_names) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Printing the labels: | print(dataset_iris.target_names) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Printing the data: | print(dataset_iris.data)
# In this list, 0 = setosa, 1 = versicolor and 2 = virginica
print(dataset_iris.target) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Before we continue, it is worth pointing out that scikit-learn imposes some requirements for working with data. This tutorial does not aim to be a detailed study of the library, but it is important to be aware of these requirements in order to understand some of the examples shown later. They are:
Features and labels must be stored in separate objects
Both must be numeric
Both must be represented as NumPy arrays
Both must have specific sizes
Let's check this information in the Iris dataset. | # Check the types of the features and the classes
print(type(dataset_iris.data))
print(type(dataset_iris.target))
# Check the size of the features (first dimension = number of instances, second dimension = number of attributes)
print(dataset_iris.data.shape)
# Check the size of the labels
print(dataset_iris.target.shape) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
When we import the dataset directly from scikit-learn, the features and labels already come in separate objects. Just to simplify the names, I will rename them. | X = dataset_iris.data
Y = dataset_iris.target | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Building and testing a training model
Now that we have our dataset, the next step is to build a machine learning model capable of using it. However, before building the model we need to know which model to develop, and for that we need to define the purpose of the training task.
There are several types of tasks within machine learning. As mentioned before, we will work with the classification task. Classification consists of creating a model from data that is already labeled in some way. The resulting model is able to determine which class an instance belongs to based on the input data.
In the presentation of the Iris dataset we saw that each instance is labeled with a type (in this case, the species the plant belongs to). Therefore, we will treat this problem as a classification problem. There are other tasks within machine learning, such as clustering and grouping, among others. More details on each of them will be presented in the machine learning class.
The next step is to build the model. To do so, we will follow 4 steps:
Step 1: Import the classifier you want to use
Step 2: Instantiate the model
Step 3: Train the model
Step 4: Make predictions for new values
In this presentation, we will keep using the decision tree model. The reason for using it at this stage is that it is easy to visualize what the model does with the data.
For our example, we will train the model with one set of data and then test it with data that was not used for training. To do this, we remove some instances from the training set and use them later for testing. We call this splitting the data into a training set and a test set. It is easy to see that it makes no sense to test our model with a pattern it already knows, which is why this separation is necessary. | import numpy as np
# Choosing the indices that will be removed from the training set to form the test set
test_idx = [0, 50, 100] # instances 0, 50 and 100 of the dataset
# Creating the training set
train_target = np.delete(dataset_iris.target, test_idx)
train_data = np.delete(dataset_iris.data, test_idx, axis=0)
# Creating the test set
test_target = dataset_iris.target[test_idx]
test_data = dataset_iris.data[test_idx]
print("Size of the original data: ", dataset_iris.data.shape) # np.delete does not modify the original data
print("Training set size: ", train_data.shape)
print("Test set size: ", test_data.shape) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Now that our dataset is split, let's create the classifier and train it with the training data. | clf = tree.DecisionTreeClassifier()
clf.fit(train_data, train_target) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
The classifier has been trained; now let's use it to classify the instances of the test set. | print(clf.predict(test_data)) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Since we are working with supervised learning, we can compare the predictions with the targets we already know for the test set. | print(test_target) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Note that in this case our classifier achieved an accuracy of 100%, getting every given instance right. Of course, this is just an example, and we usually work with accuracy values below 100%. It is worth noting, however, that for some tasks, such as image recognition, accuracy rates are very close to 100%.
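We can also compute the accuracy explicitly with scikit-learn, for example:
from sklearn.metrics import accuracy_score
print(accuracy_score(test_target, clf.predict(test_data)))  # 1.0 for this test set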
Visualizing our model
The advantage of working with a decision tree is that we can visualize exactly what the model does. In general, a decision tree is a tree that separates the dataset. Each internal node of the tree is "a question" that routes an instance along the tree, and the classes are found at the leaf nodes. This type of model will be covered in more detail later in our course.
To do this, we will use code that visualizes the generated tree. | from IPython.display import Image
import pydotplus
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=dataset_iris.feature_names,
class_names=dataset_iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png(), width=800) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Note that each internal node asks a yes/no question about some feature. For example, at the root node the question is "petal width less than or equal to 0.8". This means that if the instance I want to classify has a petal width of at most 0.8, it is classified as setosa. If not, it is routed to another node that checks another feature. This process continues until a leaf node is reached. As an exercise, classify the test instances by following along the tree with the table below. | print(test_data)
print(test_target) | Introduction/Tutorial01_HelloWorld.ipynb | adolfoguimaraes/machinelearning | mit |
Vertex Training: Distributed Hyperparameter Tuning
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Overview
This notebook demonstrates how to run a hyperparameter tuning job with Vertex Training to discover optimal hyperparameter values for an ML model. To speed up the training process, MirroredStrategy from the tf.distribute module is used to distribute training across multiple GPUs on a single machine.
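For reference, inside the training application MirroredStrategy is typically used as in the sketch below. This is a generic illustration, not the exact trainer packaged in the container image used later; the model layers are placeholders.
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
print('Number of devices:', strategy.num_replicas_in_sync)
with strategy.scope():
    # Variables created inside the scope are mirrored across the available GPUs.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])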
Dataset
The dataset used for this tutorial is the horses or humans dataset from TensorFlow Datasets. The trained model predicts if an image is of a horse or a human.
Objective
In this notebook, you create a custom-trained model from a Python script in a Docker container. You learn how to modify training application code for hyperparameter tuning and submit a Vertex Training hyperparameter tuning job with the Python SDK.
The steps performed include:
Create a Vertex AI custom job for training a model.
Launch hyperparameter tuning job with the Python SDK.
Cleanup resources.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install the latest version of Vertex SDK for Python. | import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI. | BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
print(BUCKET_URI) | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
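Only if your bucket does not already exist: run the following cell to create it (the usual step at this point in these notebooks; skip it if the bucket exists).
! gsutil mb -l $REGION $BUCKET_URI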
Import libraries and define constants | import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create and run hyperparameter tuning job on Vertex AI
Once your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.
You define the following specifications:
* worker_pool_specs: Dictionary specifying the machine type and Docker image. This example defines a single node cluster with one n1-standard-4 machine with two NVIDIA_TESLA_T4 GPUs.
* parameter_spec: Dictionary specifying the parameters to optimize. The dictionary key is the string assigned to the command line argument for each hyperparameter in your training application code, and the dictionary value is the parameter specification. The parameter specification includes the type, min/max values, and scale for the hyperparameter.
* metric_spec: Dictionary specifying the metric to optimize. The dictionary key is the hyperparameter_metric_tag that you set in your training application code, and the value is the optimization goal. | worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": "NVIDIA_TESLA_T4",
"accelerator_count": 2,
},
"replica_count": 1,
"container_spec": {"image_uri": IMAGE_URI},
}
]
metric_spec = {"accuracy": "maximize"}
parameter_spec = {
"learning_rate": hpt.DoubleParameterSpec(min=0.001, max=1, scale="log"),
"momentum": hpt.DoubleParameterSpec(min=0, max=1, scale="linear"),
"units": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None),
} | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
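For context, the training application reports the metric named in metric_spec using the cloudml-hypertune helper. A sketch of the reporting call (the metric value and step here are placeholders):
import hypertune
hpt_reporter = hypertune.HyperTune()
hpt_reporter.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='accuracy',  # must match the key in metric_spec
    metric_value=0.95,  # e.g. the validation accuracy computed in training
    global_step=1000)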
Create a CustomJob. | print(BUCKET_URI)
# Create a CustomJob
JOB_NAME = "horses-humans-hyperparam-job" + TIMESTAMP
my_custom_job = aiplatform.CustomJob(
display_name=JOB_NAME,
project=PROJECT_ID,
worker_pool_specs=worker_pool_specs,
staging_bucket=BUCKET_URI,
) | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Then, create and run a HyperparameterTuningJob.
There are a few arguments to note:
max_trial_count: Sets an upper bound on the number of trials the service will run. The recommended practice is to start with a smaller number of trials and get a sense of how impactful your chosen hyperparameters are before scaling up.
parallel_trial_count: If you use parallel trials, the service provisions multiple training processing clusters. The worker pool spec that you specify when creating the job is used for each individual training cluster. Increasing the number of parallel trials reduces the amount of time the hyperparameter tuning job takes to run; however, it can reduce the effectiveness of the job overall. This is because the default tuning strategy uses results of previous trials to inform the assignment of values in subsequent trials.
search_algorithm: The available search algorithms are grid, random, or default (None). The default option applies Bayesian optimization to search the space of possible hyperparameter values and is the recommended algorithm. | # Create and run HyperparameterTuningJob
hp_job = aiplatform.HyperparameterTuningJob(
display_name=JOB_NAME,
custom_job=my_custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=15,
parallel_trial_count=3,
project=PROJECT_ID,
search_algorithm=None,
)
hp_job.run() | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial: | # Set this to true only if you'd like to delete your bucket
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI | notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Time stepping | # Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
kx=np.arange(5,nx-4) # (unused here; kx is redefined inside the time-stepping loops below)
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Update velocity
for kx in range(6,nx-5):
# Calculating spatial derivative (8th-order accurate staggered-grid stencil)
p_x = (  i_dx*(1225.0/1024.0)*(p[kx+1]-p[kx])
       + i_dx*(-245.0/3072.0)*(p[kx+2]-p[kx-1])
       + i_dx*(49.0/5120.0)*(p[kx+3]-p[kx-2])
       + i_dx*(-5.0/7168.0)*(p[kx+4]-p[kx-3]) )
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*p_x
# Update pressure
for kx in range(6,nx-5):
# Calculating spatial derivative (8th-order accurate staggered-grid stencil)
vx_x = (  i_dx*(1225.0/1024.0)*(vx[kx]-vx[kx-1])
        + i_dx*(-245.0/3072.0)*(vx[kx+1]-vx[kx-2])
        + i_dx*(49.0/5120.0)*(vx[kx+2]-vx[kx-3])
        + i_dx*(-5.0/7168.0)*(vx[kx+3]-vx[kx-4]) )
# Update pressure
p[kx]=p[kx]-l[kx]*dt*vx_x
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!") | JupyterNotebook/1D/FD_1D_DX8_DT2.ipynb | florianwittkamp/FD_ACOUSTIC | gpl-3.0 |
Save seismograms | ## Save seismograms
np.save("Seismograms/FD_1D_DX8_DT2",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
| JupyterNotebook/1D/FD_1D_DX8_DT2.ipynb | florianwittkamp/FD_ACOUSTIC | gpl-3.0 |
Then we can set up the horsetail matching object, using TP2 from the demo problems as our quantity of interest. Recall that this is a function that takes two inputs: values of the design variables, x, and values of the uncertain parameters, u, and returns the quantity of interest, q.
Interval uncertainties are given as the third argument to a horsetail matching object, or through the int_uncertainties keyword. So the following two objects are equivalent: | from horsetailmatching.demoproblems import TP2
def my_target(h):
return 1
theHM = HorsetailMatching(TP2, u_prob, u_int,
ftarget=(my_target, my_target), samples_prob=n_samples, samples_int=50)
theHM = HorsetailMatching(TP2, prob_uncertainties=[u_prob_alternative], int_uncertainties=[u_int],
ftarget=(my_target, my_target), samples_prob=n_samples, samples_int=50) | notebooks/MixedUncertainties.ipynb | lwcook/horsetail-matching | mit |
Note that under mixed uncertainties we can set separate targets for the upper and lower bounds on the CDF (the two horsetail curves) by passing a tuple of (target_for_upper_bound, target_for_lower_bound) to the ftarget argument.
Note also that here we also specified the number of samples to take from the probabilistic uncertainties and how many to take from the interval uncertainties using the arguments samples_prob and samples_int. A nested structure is used to evaluate the metric under mixed uncertainties and so the total number of samples taken will be (samples_prob) x (samples_int).
If specifying uncertainties using a sampling function, the number of samples returned by this function needs to be the same as the number specified in the samples_prob attribute.
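The nested evaluation can be pictured with the following illustrative sketch. This is an outline of the idea only, not horsetailmatching's internal code; the qoi function and the uniform distributions are hypothetical placeholders.

import numpy as np

def qoi(x, u):  # hypothetical stand-in for the quantity-of-interest function
    return x[0]*u[0] + x[1]*u[1]**2

x = [2, 3]
samples_int, samples_prob = 50, 500
cdfs = []
for _ in range(samples_int):
    u_i = np.random.uniform(-1, 1)                # one draw of the interval uncertainty
    u_p = np.random.uniform(-1, 1, samples_prob)  # draws of the probabilistic uncertainty
    # sorted samples define one empirical CDF per interval draw
    cdfs.append(np.sort([qoi(x, [up, u_i]) for up in u_p]))
# (samples_prob) x (samples_int) evaluations in total; the pointwise
# envelope of q over the CDFs gives the two horsetail curves.
q_min, q_max = np.min(cdfs, axis=0), np.max(cdfs, axis=0)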
We can use the getHorsetail() method to visualize the horsetail plot, which can then be plotted using matplotlib.
This time, because we are dealing with mixed uncertainties, we get a CDF at each value of the sampled interval uncertainties (the third returned argument from getHorsetail() gives a list of these CDFs), of which the envelope gives the upper and lower bounds - the horsetail plot - which is highlighted in blue here.
upper, lower, CDFs = theHM.getHorsetail()
(q1, h1, t1) = upper
(q2, h2, t2) = lower
for CDF in CDFs:
plt.plot(CDF[0], CDF[1], c='grey', lw=0.5)
plt.plot(q1, h1, 'b')
plt.plot(q2, h2, 'b')
plt.plot(t1, h1, 'k--')
plt.plot(t2, h2, 'k--')
plt.xlim([0, 15])
plt.ylim([0, 1])
plt.xlabel('Quantity of Interest')
plt.show() | notebooks/MixedUncertainties.ipynb | lwcook/horsetail-matching | mit |
Since this problem is highly non-linear, we obtain an interestingly shaped horsetail plot with CDFs that cross. Note that the target is plotted in dashed lines. Now to optimize the horsetail matching metric, we simply use the evalMetric method in an optimizer as before: | from scipy.optimize import minimize
solution = minimize(theHM.evalMetric, x0=[1,1], method='Nelder-Mead')
print(solution) | notebooks/MixedUncertainties.ipynb | lwcook/horsetail-matching | mit |
Now we can inspect the horsetail plot of the optimum design by using the getHorsetail method again: | upper, lower, CDFs = theHM.getHorsetail()
for CDF in CDFs:
plt.plot(CDF[0], CDF[1], c='grey', lw=0.5)
plt.plot(upper[0], upper[1], 'r')
plt.plot(lower[0], lower[1], 'r')
plt.plot([theHM.ftarget[0](y) for y in upper[1]], upper[1], 'k--')
plt.plot([theHM.ftarget[1](y) for y in lower[1]], lower[1], 'k--')
plt.xlim([0, 15])
plt.ylim([0, 1])
plt.xlabel('Quantity of Interest')
plt.show() | notebooks/MixedUncertainties.ipynb | lwcook/horsetail-matching | mit |
Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
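For reference, reading one of the pickled files and accessing that structure looks like the sketch below (the file path is hypothetical):

import pickle

# Minimal sketch of loading one split (path is an assumption).
with open('traffic-signs-data/train.p', 'rb') as f:
    train = pickle.load(f)
X_train_ori, y_train_ori = train['features'], train['labels']  # 4D pixel array, 1D labels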
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Basic Summary of the Data Set | #Number of original training examples
n_train_ori = X_train_ori.shape[0]
print("Number of original training examples =", n_train_ori)
# Number of training examples after image agumentation
n_train = X_train.shape[0]
print("Number of training examples =", n_train)
# Number of validation examples
n_validation = X_valid.shape[0]
print("Number of validation examples =", n_validation)
# Number of testing examples.
n_test = X_test.shape[0]
print("Number of testing examples =", n_test)
# Shape of an traffic sign image
image_shape = X_train.shape[1:]
print("Image data shape =", image_shape)
# Unique classes/labels there are in the dataset.
n_classes = len(set(y_train_ori))
print("Number of classes =", n_classes) | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
Include an exploratory visualization of the dataset | ### Data exploration visualization
# Visualizations will be shown in the notebook.
%matplotlib inline
def plotTrafficSign(n_rows, n_cols):
"""
This function displays random images from the training data set.
"""
fig, axes = plt.subplots(nrows = n_rows, ncols = n_cols, figsize=(60,30))
for row in axes:
for col in row:
index = randint(0, n_train_ori - 1)  # random.randint is inclusive of both endpoints
col.imshow(X_train_ori[index,:,:,:])
col.set_title(y_train_ori[index])
#Plot traffic signs for visualization
plotTrafficSign(10, 5)
#Plot distribution of data
sns.distplot(y_train_ori, kde=False, bins=n_classes)
sns.distplot(y_valid, kde=False, bins=n_classes)
sns.distplot(y_test, kde=False, bins=n_classes) | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
The histogram of the data shows that the training data is unevenly distributed. This might affect the training of the CNN model.
Comparing the distribution across the 3 sets (training/validation/test), it seems that the distribution is similar in all the sets.
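One way to quantify that visual impression is to compare normalized class frequencies across the splits, as in this sketch:

import numpy as np

# Sketch: class proportions per split, and the largest gap between them.
train_dist = np.bincount(y_train_ori, minlength=n_classes) / len(y_train_ori)
valid_dist = np.bincount(y_valid, minlength=n_classes) / len(y_valid)
test_dist = np.bincount(y_test, minlength=n_classes) / len(y_test)
print("Max train/valid gap:", np.abs(train_dist - valid_dist).max())
print("Max train/test gap:", np.abs(train_dist - test_dist).max())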
Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
Neural network architecture (is the network over or underfitting?)
Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, (pixel - 128)/ 128 is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. | ### Preprocess the data.
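For reference, the quick approximate normalization mentioned above is a one-liner (a sketch; the pipeline below uses min-max scaling to [0.1, 0.9] instead):

import numpy as np

def quick_normalize(X):
    # Roughly zero mean and unit-ish variance for 8-bit pixel data.
    return (X.astype(np.float32) - 128.0) / 128.0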
def dataGeneration():
"""
This function augments the training data by creating new data (via image rotation)
"""
global X_train
global y_train
global y_train_ori
global X_train_ori
global n_train_ori
#Create new data by rotating the images by +/-10 degrees
X_train[0:n_train_ori,:,:,:] = X_train_ori[:,:,:,:]
y_train[0:n_train_ori] = y_train_ori[:]
width = X_train.shape[1]
height = X_train.shape[2]
center = (width/ 2, height/ 2)
for index in range(n_train_ori):
#Rotate by 10 degrees
rotation = cv2.getRotationMatrix2D(center, 10, 1.0)
X_train[n_train_ori+index,:,:,:] = cv2.warpAffine(X_train_ori[index,:,:,:], rotation, (width, height))
y_train[n_train_ori+index] = y_train_ori[index]
#Rotate by -10 degrees
rotation = cv2.getRotationMatrix2D(center, -10, 1.0)
X_train[2*n_train_ori+index,:,:,:] = cv2.warpAffine(X_train_ori[index,:,:,:], rotation, (width, height))
y_train[2*n_train_ori+index] = y_train_ori[index]
def normalize(X_input):
"""
This function normalizes the data
"""
#Min-Max normalization of data
range_min = 0.1
range_max = 0.9
data_min = 0
data_max = 255
X_input = range_min + (((X_input - data_min)*(range_max - range_min) )/(data_max - data_min))
return X_input
def randomize(X_input, y_input):
"""
This function randomizes the data.
"""
#Randomize the data
X_input, y_input = shuffle(X_input, y_input)
return X_input, y_input
dataGeneration()
X_train = normalize(X_train)
X_valid = normalize(X_valid)
X_test = normalize(X_test)
X_train, y_train = randomize(X_train, y_train) | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
Model Architecture | def LeNet(x, keep_prob=1.0):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
global n_classes
# Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#Dropout
conv1 = tf.nn.dropout(conv1, keep_prob)
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
#Dropout
conv2 = tf.nn.dropout(conv2, keep_prob)
# Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 400. Output = 300.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 300), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(300))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
#Dropout
fc1 = tf.nn.dropout(fc1, keep_prob)
# Layer 4: Fully Connected. Input = 300. Output = 200.
fc2_W = tf.Variable(tf.truncated_normal(shape=(300, 200), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(200))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
#Dropout
fc2 = tf.nn.dropout(fc2, keep_prob)
# Layer 5: Fully Connected. Input = 200. Output = n_classes.
fc3_W = tf.Variable(tf.truncated_normal(shape=(200, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
class CModel:
def __init__(self, input_conv, target, learning_rate = 0.001,
epochs = 10, batch_size = 128, keep_prob=1.0, debug_logging = False):
"""
This is the ctor for the class CModel.
It initializes various hyper parameters required for training.
"""
self.learning_rate = learning_rate
self.epoch = epochs
self.batch_size = batch_size
self.debug_logging = debug_logging
self.input_conv = input_conv
self.target = target
self.logits = None
self.one_hot_out_class = None
self.keep_prob = keep_prob
def __loss(self):
"""
This function calculates the loss.
"""
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=self.one_hot_out_class, logits=self.logits)
loss_operation = tf.reduce_mean(cross_entropy)
return loss_operation
def __optimize(self, loss_operation):
"""
This function runs the optimizer to train the weights.
"""
optimizer = tf.train.AdamOptimizer(learning_rate = self.learning_rate)
minimize_loss = optimizer.minimize(loss_operation)
return minimize_loss
def trainLeNet(self):
"""
This function trains the LeNet network.
"""
print("n_classes ",n_classes)
self.logits = LeNet(self.input_conv,self.keep_prob)
self.one_hot_out_class = tf.one_hot(self.target, n_classes)
loss_operation = self.__loss()
minimize_loss = self.__optimize(loss_operation)
return minimize_loss
def accuracy(self):
"""
This function calculates the accuracy of the model.
"""
prediction, _ = self.prediction()
correct_prediction = tf.equal(prediction, tf.argmax(self.one_hot_out_class, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return accuracy_operation
def prediction(self):
return tf.argmax(self.logits, 1), tf.nn.top_k(tf.nn.softmax(self.logits), k=5) | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
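That diagnostic rule can be codified in a few lines (a sketch with hypothetical accuracy values and thresholds):

# Sketch: rough over/underfitting diagnosis from two accuracy numbers.
train_acc, valid_acc = 0.99, 0.93  # hypothetical values
if train_acc < 0.9 and valid_acc < 0.9:
    print("Low accuracy on both sets -> likely underfitting")
elif train_acc - valid_acc > 0.05:
    print("Large train/validation gap -> likely overfitting")
else:
    print("Reasonable generalization")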
class CEvaluate:
def __init__(self, learning_rate=0.001, epoch=10, batch_size=128):
self.input_conv = tf.placeholder(tf.float32, (None, 32, 32, 3))
self.target = tf.placeholder(tf.int32, (None))
self.keep_prob = tf.placeholder(tf.float32)
self.model = CModel(self.input_conv, self.target, learning_rate, epoch, batch_size, self.keep_prob)
self.train = self.model.trainLeNet()
self.accuracy_operation = self.model.accuracy()
self.epoch = epoch
self.batch_size = batch_size
self.saver = tf.train.Saver()
self.prediction = self.model.prediction()
def __evaluate(self, X_data, y_data, keep_prob=1):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, self.batch_size):
batch_x, batch_y = X_data[offset:offset+self.batch_size], y_data[offset:offset+self.batch_size]
accuracy = sess.run(self.accuracy_operation, feed_dict={self.input_conv: batch_x, \
self.target: batch_y, self.keep_prob: 1.0})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
def test(self):
global X_test
global y_test
with tf.Session() as sess:
self.saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = self.__evaluate(X_test, y_test)
print("Test Accuracy = ", test_accuracy)
def predictions(self, test_images):
with tf.Session() as sess:
self.saver.restore(sess, './lenet')
predict, top_k_softmax = sess.run(self.prediction, feed_dict={self.input_conv: test_images, self.keep_prob: 1.0})
return predict, top_k_softmax
def run(self):
global X_train
global y_train
global X_valid
global y_valid
validation_accuracy = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
for i in range(self.epoch):
print("Epoch == ", i)
for offset in range(0, num_examples, self.batch_size):
end = offset + self.batch_size
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(self.train, feed_dict={self.input_conv: batch_x, self.target: batch_y, self.keep_prob: 0.9})
validation_accuracy.append(self.__evaluate(X_valid, y_valid))
print("Validation Accuracy == ", validation_accuracy[i])
self.saver.save(sess, './lenet')
plt.plot(validation_accuracy)
plt.xlabel("Epoch")
plt.ylabel("Validation Accuracy")
plt.title("Tracking of validation accuracy")
plt.show()
learning_rate = 0.001
epoch = 30
batch_size = 128
eval_model = CEvaluate(learning_rate, epoch, batch_size)
eval_model.run()
eval_model.test() | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images | ### Load the images and plot them.
import os
test_images = os.listdir('test_images')
num_test_images = 5
X_new_test = np.empty((num_test_images, 32, 32, 3))
y_new_test = np.empty(num_test_images)
dic = {"60.jpg":3, "70.jpg":4, "roadwork.jpg":25, "stop.jpg":14, "yield.jpg":13}
for index, image_name in enumerate(test_images):
image_path = os.path.join('test_images', image_name)
original_image = mpimg.imread(image_path)
X_new_test[index,:,:,:] = cv2.resize(original_image,(32,32),interpolation=cv2.INTER_AREA)
y_new_test[index] = dic[image_name]
plt.imshow(X_new_test[index,:,:,:].astype(np.uint8))  # cast so imshow renders 0-255 data correctly
plt.show() | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
Predict the Sign Type for Each Image/Analyze Performance/ Output Soft Max | with open('signnames.csv', mode='r') as file:
reader = csv.reader(file)
next(reader)  # skip the header row
sign_mapping = {rows[0]: rows[1] for rows in reader}
X_new_test = normalize(X_new_test)
predict, top_k_softmax = eval_model.predictions(X_new_test)
for output,expected in zip(predict,y_new_test):
print("Expected {} ...... Output {}".format(sign_mapping[str(int(expected))], sign_mapping[str(output)]))
### Calculate the accuracy for these 5 new images.
count = 0
for result, expectation in zip(predict, y_new_test):
if result == expectation:
count = count+1
accuracy = count/num_test_images
print("accuracy of the prediction of new test images", accuracy) | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
Output Top 5 Softmax Probabilities For Each Image Found on the Web | print("top_k_softmax == ", top_k_softmax) | Traffic_Sign_Classifier.ipynb | rohitbahl1986/TrafficSignClassifier | mit |
2) What are all the different book categories the NYT ranked on June 6, 2009? How about June 6, 2015?
bookcat_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=0c3ba2a8848c44eea6a3443a17e57448')
bookcat_data = bookcat_response.json()
print(type(bookcat_data))
print(bookcat_data.keys())
bookcat = bookcat_data['results']
print(type(bookcat))
print(bookcat[0])
# STEP 2: Writing a loop that runs the same code for both dates (no function, as only one variable)
dates = ['2009-06-06', '2015-06-06']  # the two dates from the question
for date in dates:
bookcatN_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=' + date + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')
bookcatN_data = bookcatN_response.json()
bookcatN = bookcatN_data['results']
category_listN = []
for category in bookcatN:
category_listN.append(category['display_name'])
print(" ")
print("THESE WERE THE DIFFERENT BOOK CATEGORIES THE NYT RANKED ON", date)
for cat in category_listN:
print(cat) | foundations-homework/05/.ipynb_checkpoints/homework-05-gruen-nyt-checkpoint.ipynb | gcgruen/homework | mit |
3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy. | # STEP 1a: EXPLORING THE DATA
test_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')
test_data = test_response.json()
print(type(test_data))
print(test_data.keys())
test_hits = test_data['response']
print(type(test_hits))
print(test_hits.keys())
# STEP 1b: EXPLORING THE META DATA
test_hits_meta = test_data['response']['meta']
print("The meta data of the search request is a", type(test_hits_meta))
print("The dictionary despot_hits_meta has the following keys:", test_hits_meta.keys())
print("The search requests with the TEST URL yields total:")
test_hit_count = test_hits_meta['hits']
print(test_hit_count)
# STEP 2: BUILDING THE CODE TO LOOP THROUGH DIFFERENT SPELLINGS
despot_names = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
for name in despot_names:
despot_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + name +'+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')
despot_data = despot_response.json()
despot_hits_meta = despot_data['response']['meta']
despot_hit_count = despot_hits_meta['hits']
print("The NYT has referred to the Libyan despot", despot_hit_count, "times using the spelling", name) | foundations-homework/05/.ipynb_checkpoints/homework-05-gruen-nyt-checkpoint.ipynb | gcgruen/homework | mit |
1.1 Subset the dataset into the moderator variable levels
In order to verify whether the moderator variable, urbanrate, plays a role in the interaction between incomeperperson and lifeexpectancy, we'll subset our dataset into two groups: one group for countries below 50% urban population rate, and the other group for countries at or above 50% urban population rate.
df_low = df[df.urbanrate < 50]
# Dataset with high urban rate.
df_high = df[df.urbanrate >= 50] | Week_4.ipynb | srodriguex/coursera_data_analysis_tools | mit |
1.2 Pearson correlation $r$
For each subset, we'll conduct the Pearson correlation analysis and verify the results. | r_low = pearsonr(df_low.incomeperperson, df_low.lifeexpectancy)
r_high = pearsonr(df_high.incomeperperson, df_high.lifeexpectancy)
print('Correlation in LOW urban rate: {}'.format(r_low))
print('Correlation in HIGH urban rate: {}'.format(r_high))
print('Percentage of variability LOW urban rate: {:2}%'.
format(round(r_low[0]**2*100,2)))
print('Percentage of variability HIGH urban rate: {:2}%'.
format(round(r_high[0]**2*100,2)))
# Silent matplotlib warning.
import warnings
warnings.filterwarnings('ignore',category=FutureWarning)
# Setting an apropriate size for the graph.
f,a = subplots(1, 2)
f.set_size_inches(12,6)
# Plot the graph.
sn.regplot(df_low.incomeperperson, df_low.lifeexpectancy, ax=a[0]);
a[0].set_title('Countries with LOW urbanrate', fontweight='bold');
sn.regplot(df_high.incomeperperson, df_high.lifeexpectancy, ax=a[1]);
a[1].set_title('Countries with HIGH urbanrate', fontweight='bold');
| Week_4.ipynb | srodriguex/coursera_data_analysis_tools | mit |