markdown | code | path | repo_name | license |
---|---|---|---|---|
Visualizing the histogram of the contrast-enhanced image
Note that the narrower the window, the more pixels will take the values 0 and 255.
When you visualize the histogram, a large peak appears at these two values,
which are the extremes of the histogram. To keep these values out of the plot,
the histogram is sliced from the second bin to the next-to-last one: h[1:-1].
Below we show the histogram including the values 0 and 255, and then
without them: | hg = ia.histogram(g)
plt.figure(1)
plt.plot(hg), plt.title('Histogram of the contrast-enhanced image')
plt.show()
plt.figure(2)
plt.plot(hg[1:-1]), plt.title('Same, but without the values 0 and 255')
plt.show() | master/tutorial_contraste_iterativo_2.ipynb | robertoalotufo/ia898 | mit |
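If the ia toolbox isn't available in your environment, here is a minimal NumPy-only sketch of the same idea; the synthetic image below is illustrative, not part of the tutorial:

```
import numpy as np

# A hypothetical 8-bit image whose values saturate at 0 and 255.
g = np.clip(np.random.normal(128, 120, size=(256, 256)), 0, 255).astype(np.uint8)

h = np.bincount(g.ravel(), minlength=256)  # 256-bin histogram
print(h[0], h[255])   # large peaks at the two extremes
trimmed = h[1:-1]     # drop bins 0 and 255 before plotting, as in the text
print(trimmed.shape)  # (254,)
```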
Loading a network is easy. caffe.Classifier takes care of everything. Note the arguments for configuring input preprocessing: mean subtraction switched on by giving a mean array, input channel swapping takes care of mapping RGB into the reference ImageNet model's BGR order, and raw scaling multiplies the feature scale from the input [0,1] to the ImageNet model's [0,255].
We will set the phase to test since we are doing testing, and will first use CPU for the computation. | caffe.set_mode_cpu()
net = caffe.Classifier(MODEL_FILE, PRETRAINED,
mean=np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1),
channel_swap=(2,1,0),
raw_scale=255,
image_dims=(256, 256)) | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
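The arguments above are easier to remember if you see what they imply. Below is a rough NumPy sketch of the preprocessing these options configure; it is not the actual Caffe code, and the mean values used here are illustrative:

```
import numpy as np

def preprocess_sketch(rgb01, mean_bgr, raw_scale=255.0):
    # rgb01: H x W x 3 float image in [0, 1], RGB order
    # (this is what caffe.io.load_image returns)
    img = rgb01[:, :, ::-1]        # channel_swap=(2,1,0): RGB -> BGR
    img = img * raw_scale          # raw_scale: rescale [0, 1] to [0, 255]
    img = img - mean_bgr           # per-channel mean subtraction
    return img.transpose(2, 0, 1)  # HWC -> CHW, the layout Caffe expects

example = np.random.rand(256, 256, 3)
print(preprocess_sketch(example, mean_bgr=np.array([104.0, 117.0, 123.0])).shape)  # (3, 256, 256)
```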
Let's take a look at our example image with Caffe's image loading helper. | input_image = caffe.io.load_image(IMAGE_FILE)
plt.imshow(input_image) | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
Time to classify. The default is to actually do 10 predictions, cropping the center and corners of the image as well as their mirrored versions, and average over the predictions: | prediction = net.predict([input_image]) # predict takes any number of images, and formats them for the Caffe net automatically
print 'prediction shape:', prediction[0].shape
plt.plot(prediction[0])
print 'predicted class:', prediction[0].argmax() | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
You can see that the prediction is 1000-dimensional, and is pretty sparse.
The predicted class 281 is "Tabby cat." Our pretrained model uses the synset ID ordering of the classes, as listed in ../data/ilsvrc12/synset_words.txt if you fetch the auxiliary imagenet data by ../data/ilsvrc12/get_ilsvrc_aux.sh. If you look at the top indices that maximize the prediction score, they are cats, foxes, and other cute mammals. Not unreasonable predictions, right?
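To inspect those top classes yourself, a short snippet along these lines should work (a sketch in the notebook's Python 2 style; it assumes you fetched the auxiliary data so synset_words.txt exists):

```
labels = [line.strip() for line in open(caffe_root + 'data/ilsvrc12/synset_words.txt')]
top_k = prediction[0].argsort()[::-1][:5]  # indices of the 5 highest scores
for i in top_k:
    print prediction[0][i], labels[i]
```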
Now let's classify by the center crop alone by turning off oversampling. Note that this makes a single input, although if you inspect the model definition prototxt you'll see the network has a batch size of 10. The Python wrapper handles batching and padding for you! | prediction = net.predict([input_image], oversample=False)
print 'prediction shape:', prediction[0].shape
plt.plot(prediction[0])
print 'predicted class:', prediction[0].argmax() | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
Now, why don't we see how long it takes to perform the classification end to end? This result was obtained on an Intel i5 CPU, so you may observe different performance on your machine. | %timeit net.predict([input_image]) | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
It may look a little slow, but note that the time is spent on cropping, Python interfacing, and running all 10 crops. If you really need fast predictions, you can code the pipeline in C++ and overlap operations. For experimenting and prototyping, the current speed is fine.
Let's time classifying a single image with input preprocessed: | # Resize the image to the standard (256, 256) and oversample net input sized crops.
input_oversampled = caffe.io.oversample([caffe.io.resize_image(input_image, net.image_dims)], net.crop_dims)
# 'data' is the input blob name in the model definition, so we preprocess for that input.
caffe_input = np.asarray([net.transformer.preprocess('data', in_) for in_ in input_oversampled])
# forward() takes keyword args for the input blobs with preprocessed input arrays.
%timeit net.forward(data=caffe_input) | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
OK, so how about the GPU? It is actually pretty easy: | caffe.set_mode_gpu() | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
Voila! Now we are in GPU mode. Let's see if the code gives the same result: | prediction = net.predict([input_image])
print 'prediction shape:', prediction[0].shape
plt.plot(prediction[0]) | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
Good, everything is the same. And how about time consumption? The following benchmark is obtained on the same machine with a GTX 770 GPU: | # Full pipeline timing.
%timeit net.predict([input_image])
# Forward pass timing.
%timeit net.forward(data=caffe_input) | examples/classification.ipynb | gogartom/caffe-textmaps | mit |
If you've set up your environment properly, this cell should run without problems: | import math
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from datascience import *
from client.api.notebook import Notebook
ok = Notebook('hw1.ok') | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment. | ok.auth(inline=True) | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https://docs.python.org/3/tutorial/ is a good place to
start. | 2 + 2
# This is a comment.
# In Python, the ** operator performs exponentiation.
math.e**(-2)
print("Hello" + ",", "world!")
"Hello, cell output!"
def add2(x):
"""This docstring explains what this function does: it adds 2 to a number."""
return x + 2
def makeAdder(amount):
"""Make a function that adds the given amount to a number."""
def addAmount(x):
return x + amount
return addAmount
add3 = makeAdder(3)
add3(4)
# add4 is very similar to add2, but it's been created using a lambda expression.
add4 = lambda x: x + 4
add4(5)
sameAsMakeAdder = lambda amount: lambda x: x + amount
add5 = sameAsMakeAdder(5)
add5(6)
def fib(n):
if n <= 1:
return 1
# Functions can call themselves recursively.
return fib(n-1) + fib(n-2)
fib(4)
# A for loop repeats a block of code once for each
# element in a given collection.
for i in range(5):
if i % 2 == 0:
print(2**i)
else:
print("Odd power of 2")
# A list comprehension is a convenient way to apply a function
# to each element in a given collection.
# The string method join concatenates the elements of its iterable
# argument, separated by the string it's called on. So we join the
# elements produced by the list comprehension with newlines ("\n").
print("\n".join([str(2**i) if i % 2 == 0 else "Odd power of 2" for i in range(5)])) | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example:
>>> nums_reversed(5)
'5 4 3 2 1'
Note: The ellipsis (...) indicates something you should fill in. It doesn't necessarily imply you should replace it with only one line of code. | def nums_reversed(n):
...
_ = ok.grade('q01a')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example:
>>> string_splosion('Code')
'CCoCodCode'
>>> string_splosion('data!')
'ddadatdatadata!'
>>> string_splosion('hi')
'hhi' | def string_splosion(string):
...
_ = ok.grade('q01b')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True | def double100(nums):
...
_ = ok.grade('q01c')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25 | def median(number_list):
...
_ = ok.grade('q01d')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays: | array1 = np.array([2, 3, 4, 5])
array2 = np.arange(4)
array1, array2 | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Math operations on arrays happen element-wise. Here's what we mean: | array1 * 2
array1 * array2
array1 ** array2 | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
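To make the speed claim concrete, here is an illustrative micro-benchmark you can run in a notebook cell (exact timings will vary by machine):

```
big_list = list(range(1000000))
big_array = np.arange(1000000)

%timeit sum(big_list)    # pure-Python iteration
%timeit big_array.sum()  # vectorized loop in C
```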
Jupyter pro-tip: Pull up the docs for any function in Jupyter by running a cell with
the function name and a ? at the end: | np.arange? | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Another Jupyter pro-tip: Pull up the docs for any function in Jupyter by typing the function
name, then <Shift>-<Tab> on your keyboard. Super convenient when you forget the order
of the arguments to a function. You can press <Shift>-<Tab> multiple times to expand the docs.
Try it on the function below: | np.linspace | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint: Use the np.sin function. (You should be able to define each variable with one line of code.) | xs = ...
ys = ...
_ = ok.grade('q02')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question: | plt.plot(xs, ys) | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
This is a useful recipe for plotting any function:
1. Use linspace or arange to make a range of x-values.
2. Apply the function to each point to produce y-values.
3. Plot the points.
You might remember from calculus that the derivative of the sin function is the cos function. That means that the slope of the curve you plotted above at any point xs[i] is given by cos(xs[i]). You can try verifying this by plotting cos in the next cell. | # Try plotting cos here. | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is:
(ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
If the difference between xs[i+1] and xs[i] were infinitesimal, we'd have exactly the derivative. In numerical differentiation we take advantage of the fact that it's often good enough to use "really small" differences instead.
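For instance, assuming you've already defined xs and ys above, the formula at a single point looks like this (purely illustrative):

```
i = 0
approx_slope = (ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
print(approx_slope, np.cos(xs[i]))  # both close to 1.0 near x = 0
```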
Question 3
Define a function called derivative that takes in an array of x-values and their
corresponding y-values and computes the slope of the line connecting each point to the next point.
>>> derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))
np.array([2., 2.])
>>> derivative(np.arange(5), np.arange(5) ** 2)
np.array([1., 3., 5., 7.])
Notice that the output array has one less element than the inputs since we can't
find the slope for the last point.
It's possible to do this in one short line using slicing, but feel free to use whatever method you know.
Then, use your derivative function to compute the slopes for each point in xs, ys.
Store the slopes in an array called slopes. | def derivative(xvals, yvals):
...
slopes = ...
slopes[:5]
_ = ok.grade('q03')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note: Since we have only 99 slopes, you'll need to take off the last x-value before plotting to avoid an error. | ...
... | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend. | plt.plot(xs[:-1], slopes, label="Numerical derivative")
plt.plot(xs[:-1], np.cos(xs[:-1]), label="True derivative")
# You can just call plt.legend(), but the legend will cover up
# some of the graph. Use bbox_to_anchor=(x,y) to set the x-
# and y-coordinates of the center-left point of the legend,
# where, for example, (0, 0) is the bottom-left of the graph
# and (1, .5) is all the way to the right and halfway up.
plt.legend(bbox_to_anchor=(1, .5), loc="center left"); | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers. | # The zeros function creates an array with the given shape.
# For a 2-dimensional array like this one, the first
# coordinate says how far the array goes *down*, and the
# second says how far it goes *right*.
array3 = np.zeros((4, 5))
array3
# The shape attribute returns the dimensions of the array.
array3.shape
# You can think of array3 as an array containing 4 arrays, each
# containing 5 zeros. Accordingly, we can set or get the third
# element of the second array in array 3 using standard Python
# array indexing syntax twice:
array3[1][2] = 7
array3
# This comes up so often that there is special syntax provided
# for it. The comma syntax is equivalent to using multiple
# brackets:
array3[1, 2] = 8
array3 | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Arrays allow you to assign to multiple places at once. The special character : means "everything." | array4 = np.zeros((3, 5))
array4[:, 2] = 5
array4 | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works. | array5 = np.zeros((3, 5))
rows = np.array([1, 0, 2])
cols = np.array([3, 1, 4])
# Indices (1,3), (0,1), and (2,4) will be set.
array5[rows, cols] = 3
array5 | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
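One more behavior worth knowing (it comes up again in Question 10): an array of indices may contain repeats, and NumPy simply looks up each position in order:

```
lookup = np.array([10, 20, 30])
print(lookup[[0, 1, 0, 2, 2]])  # array([10, 20, 10, 30, 30])
```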
Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.) | twice_identity = ...
...
twice_identity
_ = ok.grade('q05')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
Write your answer here, replacing this text.
Question 7
Create a function called read_file_lines that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if 1.txt contains:
1 2 3
3 4 5
7 8 9
the return value should be: ['1 2 3\n', '3 4 5\n', '7 8 9\n'].
Then, use the read_file_lines function on the file 1.txt, reading the contents
into a variable called file1.
Hint: Check out this Stack Overflow page on reading lines of files. | def read_file_lines(filename):
...
...
file1 = ...
file1[:5]
_ = ok.grade('q07')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Each file begins with a line containing two numbers. If you check the length of
a file, you'll notice that the product of these two numbers equals the number of
lines in the file (excluding the first line).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels: red, green, and blue.
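To see how the header and the flat list of triplets relate, here is a toy reshape; it is illustrative only, not the file-reading code you will write below:

```
flat = np.arange(18)         # 6 pixels x 3 channels, in row-major order
toy = flat.reshape(2, 3, 3)  # (n_rows, n_cols, 3) for a 2 x 3 "image"
print(toy.shape)             # (2, 3, 3)
print(toy[0, 1])             # pixel at row 0, column 1: array([3, 4, 5])
```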
Question 8
Define a function called lines_to_image that takes in the contents of a
file as a list (such as file1). It should return an array containing integers of
shape (n_rows, n_cols, 3). That is, it contains the pixel triplets organized in the
correct number of rows and columns.
For example, if the file originally contained:
4 2
0 0 0
10 10 10
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
The resulting array should be a 3-dimensional array that looks like this:
array([
[ [0,0,0], [10,10,10] ],
[ [2,2,2], [3,3,3] ],
[ [4,4,4], [5,5,5] ],
[ [6,6,6], [7,7,7] ]
])
The string method split and the function np.reshape might be useful.
Important note: You must call .astype(np.uint8) on the final array before
returning so that numpy will recognize the array represents an image.
Once you've defined the function, set image1 to the result of calling
lines_to_image on file1. | def lines_to_image(file_lines):
...
image_array = ...
# Make sure to call astype like this on the 3-dimensional array
# you produce, before returning it.
return image_array.astype(np.uint8)
image1 = ...
image1.shape
_ = ok.grade('q08')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 9
Images in NumPy are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is? | def show_images(images, ncols=2, figsize=(10, 7), **kwargs):
"""
Shows one or more color images.
images: Image or list of images. Each image is a 3-dimensional
array, where dimension 1 indexes height and dimension 2
the width. Dimension 3 indexes the 3 color values red,
green, and blue (so it always has length 3).
"""
def show_image(image, axis=plt):
plt.imshow(image, **kwargs)
if not (isinstance(images, list) or isinstance(images, tuple)):
images = [images]
images = [image.astype(np.uint8) for image in images]
nrows = math.ceil(len(images) / ncols)
ncols = min(len(images), ncols)
plt.figure(figsize=figsize)
for i, image in enumerate(images):
axis = plt.subplot2grid(
(nrows, ncols),
(i // ncols, i % ncols),
)
axis.tick_params(bottom='off', left='off', top='off', right='off',
labelleft='off', labelbottom='off')
axis.grid(False)
show_image(image, axis)
# Show image1 here:
... | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation:
old value | new value
========= | =========
0 | 12
1 | 37
2 | 65
3 | 89
4 | 114
5 | 137
6 | 162
7 | 187
8 | 214
9 | 240
10 | 250
This expands the color range of the image. For example, a pixel that previously
had the value [5 5 5] (almost-black) will now have the value [137 137 137]
(gray).
Set expanded1 to the expanded image1, then display it with show_images.
This page
from the numpy docs has some useful information that will allow you
to use indexing instead of for loops.
However, the slickest implementation uses one very short line of code.
Hint: If you index an array with another array or list as in question 5, your
array (or list) of indices can contain repeats, as in array1[[0, 1, 0]].
Investigate what happens in that case. | # This array is provided for your convenience.
transformed = np.array([12, 37, 65, 89, 114, 137, 162, 187, 214, 240, 250])
def expand_image_range(image):
...
expanded1 = ...
show_images(expanded1)
_ = ok.grade('q10')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images. | def reveal_file(filename):
...
filenames = ['1.txt', '2.txt', '3.txt', '4.txt', '5.txt',
'6.txt', '7.txt', '8.txt', '9.txt', '10.txt']
expanded_images = ...
show_images(expanded_images, ncols=5) | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel: red, green, or blue. Then
the function returns an array of length three containing the proportion of
pixels categorized as red, the proportion categorized as green, and the
proportion categorized as blue (respectively). (Again, don't count pixels
that are tied between 2 or 3 colors as any category, but do count them
in the denominator when you're computing proportions.)
For example:
```
>>> test_im = np.array([
...     [ [5, 2, 2], [2, 5, 10] ]
... ])
>>> proportion_by_channel(test_im)
array([ 0.5, 0, 0.5 ])

# If tied, count neither as the highest:
>>> test_im = np.array([
...     [ [5, 2, 5], [2, 50, 50] ]
... ])
>>> proportion_by_channel(test_im)
array([ 0, 0, 0 ])
```
Then, set image_proportions to the result of proportion_by_channel called
on each image in expanded_images as a 2d array.
Hint: It's fine to use a for loop, but for a difficult challenge, try
avoiding it. (As a side benefit, your code will be much faster.) Our solution
uses the NumPy functions np.reshape, np.sort, np.argmax, and np.bincount. | def proportion_by_channel(image):
...
image_proportions = ...
image_proportions
_ = ok.grade('q12')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
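If np.bincount from the hint above is unfamiliar, here is a quick illustration of how it counts occurrences, shown on a made-up label array rather than the image data:

```
channel_labels = np.array([0, 2, 1, 2, 2])
counts = np.bincount(channel_labels, minlength=3)
print(counts)                        # array([1, 1, 3])
print(counts / len(channel_labels))  # proportions: array([ 0.2,  0.2,  0.6])
```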
Let's plot the proportions you computed above on a bar chart: | # You'll learn about Pandas and DataFrames soon.
import pandas as pd
pd.DataFrame({
'red': image_proportions[:, 0],
'green': image_proportions[:, 1],
'blue': image_proportions[:, 2]
}, index=pd.Series(['Image {}'.format(n) for n in range(1, 11)], name='image'))\
.iloc[::-1]\
.plot.barh(); | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (e.g.,
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function. | def summer_or_winter(image):
...
_ = ok.grade('q13')
_ = ok.backup() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
Write your answer here, replacing this text.
Final note: While our approach here is simplistic, skin color segmentation
-- figuring out which parts of the image belong to a human body -- is a
key step in many algorithms such as face detection.
Optional: Our code to encode images
Here are the functions we used to generate the text files for this assignment.
Feel free to send not-so-secret messages to your friends if you'd like. | import skimage as sk
import skimage.io as skio
def read_image(filename):
'''Reads in an image from a filename'''
return skio.imread(filename)
def compress_image(im):
'''Takes an image as an array and compresses it to look black.'''
res = im / 25
return res.astype(np.uint8)
def to_text_file(im, filename):
'''
Takes in an image array and a filename for the resulting text file.
Creates the encoded text file for later decoding.
'''
h, w, c = im.shape
to_rgb = ' '.join
to_row = '\n'.join
to_lines = '\n'.join
rgb = [[to_rgb(triplet) for triplet in row] for row in im.astype(str)]
lines = to_lines([to_row(row) for row in rgb])
with open(filename, 'w') as f:
f.write('{} {}\n'.format(h, w))
f.write(lines)
f.write('\n')
summers = skio.imread_collection('orig/summer/*.jpg')
winters = skio.imread_collection('orig/winter/*.jpg')
len(summers)
sum_nums = np.array([ 5, 6, 9, 3, 2, 11, 12])
win_nums = np.array([ 10, 7, 8, 1, 4, 13, 14])
for im, n in zip(summers, sum_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
for im, n in zip(winters, win_nums):
to_text_file(compress_image(im), '{}.txt'.format(n)) | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work. | _ = ok.grade_all() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
# Tell git to commit all the changes so far
git add -A

# Tell git to make the commit
git commit -m "hw1 finished"

# Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. | # Now, we'll submit to okpy
_ = ok.submit() | sp17/hw/hw1/hw1.ipynb | DS-100/sp17-materials | gpl-3.0 |
First: load and "featurize"
Featurization refers to the process of converting the conformational
snapshots from your MD trajectories into vectors in some space $\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent
mixture of multivariate Gaussians.
In general, featurization is somewhat of an art. For this example, we're using Mixtape's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example) and then measures the distance from each
atom to its position in the reference conformation as the 'feature'. | print(AlanineDipeptide.description())
dataset = AlanineDipeptide().get()
trajectories = dataset.trajectories
topology = trajectories[0].topology
indices = [atom.index for atom in topology.atoms if atom.element.symbol in ['C', 'O', 'N']]
featurizer = SuperposeFeaturizer(indices, trajectories[0][0])
sequences = featurizer.transform(trajectories) | examples/hmm-and-msm.ipynb | dotsdl/msmbuilder | lgpl-2.1 |
Now sequences is our featurized data. | lag_times = [1, 10, 20, 30, 40]
hmm_ts0 = {}
hmm_ts1 = {}
n_states = [3, 5]
for n in n_states:
hmm_ts0[n] = []
hmm_ts1[n] = []
for lag_time in lag_times:
strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]
hmm = GaussianFusionHMM(n_states=n, n_features=sequences[0].shape[1], n_init=1).fit(strided_data)
timescales = hmm.timescales_ * lag_time
hmm_ts0[n].append(timescales[0])
hmm_ts1[n].append(timescales[1])
print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales))
print()
figure(figsize=(14,3))
for i, n in enumerate(n_states):
subplot(1,len(n_states),1+i)
plot(lag_times, hmm_ts0[n])
plot(lag_times, hmm_ts1[n])
if i == 0:
ylabel('Relaxation Timescale')
xlabel('Lag Time')
title('%d states' % n)
show()
msmts0, msmts1 = {}, {}
lag_times = [1, 10, 20, 30, 40]
n_states = [4, 8, 16, 32, 64]
for n in n_states:
msmts0[n] = []
msmts1[n] = []
for lag_time in lag_times:
assignments = KCenters(n_clusters=n).fit_predict(sequences)
msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)
timescales = msm.timescales_
msmts0[n].append(timescales[0])
msmts1[n].append(timescales[1])
print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales[0:2]))
print()
figure(figsize=(14,3))
for i, n in enumerate(n_states):
subplot(1,len(n_states),1+i)
plot(lag_times, msmts0[n])
plot(lag_times, msmts1[n])
if i == 0:
ylabel('Relaxation Timescale')
xlabel('Lag Time')
title('%d states' % n)
show() | examples/hmm-and-msm.ipynb | dotsdl/msmbuilder | lgpl-2.1 |
Vertex AI Pipelines: TPU model train, upload, and deploy using google-cloud-pipeline-components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components in conjunction with an experimental run_as_aiplatform_custom_job method to build a Vertex AI Pipelines workflow that trains a custom model using TPUs, uploads the model as a Model resource, creates an Endpoint resource, and deploys the Model resource to the Endpoint resource.
Note: TPU VM training is currently an opt-in feature. Your GCP project must first be added to the feature allowlist. Please email your project information (project ID/number) to [email protected] for the allowlist. You will receive an email as soon as your project is ready.
Dataset
The dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, or truck.
Objective
In this tutorial, you create a custom model using a pipeline with components from google_cloud_pipeline_components and a custom pipeline component you build.
In addition, you use the kfp.v2.google.experimental.run_as_aiplatform_custom_job method to train a custom model leveraging TPUs.
The steps performed include:
Build a custom container for the custom model.
Train the custom model with TPUs.
Upload the trained model as a Model resource.
Create an Endpoint resource.
Deploy the Model resource to the Endpoint resource.
The components are documented here.
(From that page, see also the CustomPythonPackageTrainingJobRunOp and CustomContainerTrainingJobRunOp components, which similarly run 'custom' training and, like the related google.cloud.aiplatform.CustomContainerTrainingJob and google.cloud.aiplatform.CustomPythonPackageTrainingJob methods from the Vertex AI SDK, also upload the trained model.)
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python. | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Install the latest GA version of the google-cloud-pipeline-components library as well. | ! pip3 install $USER_FLAG kfp google-cloud-pipeline-components --upgrade | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. | BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Service Account
If you don't know your service account, try getting it with the gcloud command by executing the cell below. | SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
SERVICE_ACCOUNT = SERVICE_ACCOUNT.replace("*", "")
SERVICE_ACCOUNT = SERVICE_ACCOUNT.replace(" ", "")
print(SERVICE_ACCOUNT) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account. | ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines: | PIPELINE_ROOT = "{}/pipeline_root/tpu_cifar10_pipeline".format(BUCKET_URI) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Additional imports. | import kfp
from google_cloud_pipeline_components import aiplatform as gcc_aip
from kfp.v2.dsl import component
from kfp.v2.google import experimental | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket. | aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Set up variables
Next, set up some variables used throughout the tutorial.
Set hardware accelerators
You can set hardware accelerators for both training and prediction.
Set the variables TRAIN_TPU/TRAIN_NTPU to use a container training image supporting a TPU and the number of TPUs allocated, and DEPLOY_GPU/DEPLOY_NGPU to use a container deployment image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance.
Currently, while TPU support is experimental, use the following numbers to represent the two TPU types available. Both have 8 accelerators:
6 = TPU_V2
7 = TPU_V3
For example, to use a TPU_V3 training container image, you would specify:
(7, 8)
See the locations where accelerators are available.
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TensorFlow releases earlier than 2.3 for GPU support fail to load the custom model in this tutorial. This issue is caused by static graph operations that are generated in the serving function. This is a known issue, which is fixed in TensorFlow 2.3. If you encounter this issue with your own custom models, use a container image for TensorFlow 2.3 or later with GPU support. | from google.cloud.aiplatform import gapic
# Use TPU Accelerators. Temporarily using numeric codes, until types are added to the SDK
# 6 = TPU_V2
# 7 = TPU_V3
TRAIN_TPU, TRAIN_NTPU = (7, 8) # Using TPU_V3 with 8 accelerators
DEPLOY_GPU, DEPLOY_NGPU = (gapic.AcceleratorType.NVIDIA_TESLA_K80, 1) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Set pre-built containers
Vertex AI provides pre-built containers to run training and prediction.
For the latest list, see Pre-built containers for training and Pre-built containers for prediction | DEPLOY_VERSION = "tf2-gpu.2-6"
DEPLOY_IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/{}:latest".format(
DEPLOY_VERSION
)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Set machine types
Next, set the machine types to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure your compute resources for training and prediction.
machine type
cloud-tpu : used for TPU training. See the TPU Architecture site for details
n1-standard: 3.75GB of memory per vCPU
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: one of [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. | MACHINE_TYPE = "cloud-tpu"
# TPU VMs do not require VCPU definition
TRAIN_COMPUTE = MACHINE_TYPE
print("Train machine type", TRAIN_COMPUTE)
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
if not TRAIN_NTPU or TRAIN_NTPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "tpu"
EPOCHS = 20
STEPS = 10000
TRAINER_ARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
# create working dir to pass to job spec
WORKING_DIR = f"{PIPELINE_ROOT}/model"
MODEL_DISPLAY_NAME = f"tpu_train_deploy_{TIMESTAMP}"
print(TRAINER_ARGS, WORKING_DIR, MODEL_DISPLAY_NAME) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create a custom container
We will create a directory and write all of our container build artifacts into that folder. | CONTAINER_ARTIFACTS_DIR = "tpu-container-artifacts"
!mkdir {CONTAINER_ARTIFACTS_DIR}
import os | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Write the Dockerfile | dockerfile = """FROM python:3.8
WORKDIR /root
# Copies the trainer code to the docker image.
COPY train.py /root/train.py
RUN pip3 install tensorflow-datasets
# Install TPU Tensorflow and dependencies.
# libtpu.so must be under the '/lib' directory.
RUN wget https://storage.googleapis.com/cloud-tpu-tpuvm-artifacts/libtpu/20210525/libtpu.so -O /lib/libtpu.so
RUN chmod 777 /lib/libtpu.so
RUN wget https://storage.googleapis.com/cloud-tpu-tpuvm-artifacts/tensorflow/20210525/tf_nightly-2.6.0-cp38-cp38-linux_x86_64.whl
RUN pip3 install tf_nightly-2.6.0-cp38-cp38-linux_x86_64.whl
RUN rm tf_nightly-2.6.0-cp38-cp38-linux_x86_64.whl
ENTRYPOINT ["python3", "train.py"]
"""
with open(os.path.join(CONTAINER_ARTIFACTS_DIR, "Dockerfile"), "w") as f:
f.write(dockerfile) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Training script
In the next cell, you write the contents of the training script, train.py. In summary:
Get the directory where to save the model artifacts from the environment variable AIP_MODEL_DIR. This variable is set by the training service.
Loads CIFAR10 dataset from TF Datasets (tfds).
Builds a model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs and steps according to the arguments args.epochs and args.steps
Saves the trained model (save(MODEL_DIR)) to the specified model directory.
The TPU-specific changes are listed below:
- Added a section that finds the TPU cluster, connects to it, and sets the training strategy to TPUStrategy
- Added a section that saves the trained TPU model to the local device, so that it can then be saved to the AIP_MODEL_DIR
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple TPU devices
elif args.distribute == 'tpu':
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
MODEL_DIR = os.getenv("AIP_MODEL_DIR")
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
if args.distribute=="tpu":
save_locally = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(MODEL_DIR, options=save_locally)
else:
model.save(MODEL_DIR) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Build Container
Run these Artifact Registry and Docker steps once. | !gcloud services enable artifactregistry.googleapis.com
!sudo usermod -a -G docker ${USER}
REPOSITORY = "tpu-training-repository"
IMAGE = "tpu-train"
!gcloud auth configure-docker us-central1-docker.pkg.dev --quiet
!gcloud artifacts repositories create $REPOSITORY --repository-format=docker \
--location=us-central1 --description="Vertex TPU training repository" | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Build the training image | TRAIN_IMAGE = f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}:latest"
print(TRAIN_IMAGE)
%cd $CONTAINER_ARTIFACTS_DIR
# Use quiet flag as the output is fairly large
!docker build --quiet \
--tag={TRAIN_IMAGE} \
.
!docker push {TRAIN_IMAGE}
%cd .. | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Define custom model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
The experimental.run_as_aiplatform_custom_job method takes as arguments the previously defined component and the list of worker_pool_specs (in this case, just one) with which the custom training job is configured.
Then, google_cloud_pipeline_components components are used to define the rest of the pipeline: upload the model, create an endpoint, and deploy the model to the endpoint.
Note: While not shown in this example, the model deploy will create an endpoint if one is not provided. | WORKER_POOL_SPECS = [
{
"containerSpec": {
"args": TRAINER_ARGS,
"env": [{"name": "AIP_MODEL_DIR", "value": WORKING_DIR}],
"imageUri": TRAIN_IMAGE,
},
"replicaCount": "1",
"machineSpec": {
"machineType": TRAIN_COMPUTE,
"accelerator_type": TRAIN_TPU,
"accelerator_count": TRAIN_NTPU,
},
}
] | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Define pipeline components
The following example defines a custom pipeline component for this tutorial:
This component doesn't do anything (except run a print statement). | @component
def tpu_training_task_op(input1: str):
print("training task: {}".format(input1)) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
The pipeline has four main steps:
1) run_as_aiplatform_custom_job runs the Docker container that executes the training task
2) ModelUploadOp uploads the trained model to Vertex AI
3) EndpointCreateOp creates the model endpoint
4) Finally, ModelDeployOp deploys the model to the endpoint | @kfp.dsl.pipeline(name="train-endpoint-deploy" + TIMESTAMP)
def pipeline(
project: str = PROJECT_ID,
model_display_name: str = MODEL_DISPLAY_NAME,
serving_container_image_uri: str = DEPLOY_IMAGE,
):
train_task = tpu_training_task_op("tpu model training")
experimental.run_as_aiplatform_custom_job(
train_task,
worker_pool_specs=WORKER_POOL_SPECS,
)
model_upload_op = gcc_aip.ModelUploadOp(
project=project,
display_name=model_display_name,
artifact_uri=WORKING_DIR,
serving_container_image_uri=serving_container_image_uri,
)
model_upload_op.after(train_task)
endpoint_create_op = gcc_aip.EndpointCreateOp(
project=project,
display_name="tpu-pipeline-created-endpoint",
)
gcc_aip.ModelDeployOp(
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=model_display_name,
dedicated_resources_machine_type=DEPLOY_COMPUTE,
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=1,
dedicated_resources_accelerator_type=DEPLOY_GPU.name,
dedicated_resources_accelerator_count=DEPLOY_NGPU,
) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Compile the pipeline
Next, compile the pipeline. | from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tpu train cifar10_pipeline.json".replace(" ", "_"),
) | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Run the pipeline
Next, run the pipeline. | DISPLAY_NAME = "tpu_cifar10_training_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tpu train cifar10_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
)
job.run()
! rm tpu_train_cifar10_pipeline.json | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Click on the generated link to see your run in the Cloud Console.
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial. (This list is auto-generated, so not every resource applies to this tutorial.)
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket | delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
    try:
        datasets = aip.TabularDataset.list(
            filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
        )
        dataset = datasets[0]
        aip.TabularDataset.delete(dataset.resource_name)
        print("Deleted dataset:", dataset)
    except Exception as e:
        print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_URI" in globals():
! gsutil rm -r $BUCKET_URI | notebooks/community/pipelines/google_cloud_pipeline_components_TPU_model_train_upload_deploy.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Loading Data
First, we want to create our word vectors. For simplicity, we're going to be using a pretrained model.
As one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300.
In an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50.
We're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values. | # data from: http://ai.stanford.edu/~amaas/data/sentiment/
TRAIN_INPUT = 'data/train.csv'
TEST_INPUT = 'data/test.csv'
# data manually generated
MY_TEST_INPUT = 'data/mytest.csv'
# wordtovec
# https://nlp.stanford.edu/projects/glove/
# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.
word_list = np.load('word_list.npy')
word_list = word_list.tolist() # originally loaded as numpy array
word_list = [word.decode('UTF-8') for word in word_list] # decode words from UTF-8 bytes to strings
print('Loaded the word list, length:', len(word_list))
word_vector = np.load('word_vector.npy')
print ('Loaded the word vector, shape:', word_vector.shape) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
We can also search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix. | baseball_index = word_list.index('baseball')
print('Example: baseball')
print(word_vector[baseball_index]) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Now that we have our vectors, our first step is taking an input sentence and then constructing its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use TensorFlow's embedding lookup function. This function takes in two arguments: one for the embedding matrix (the word_vector matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set, which is basically just the row index of each of the words. Let's look at a quick example to make this concrete. | max_seq_length = 10 # maximum length of sentence
num_dims = 50 # dimensions for each word vector
first_sentence = np.zeros((max_seq_length), dtype='int32')
first_sentence[0] = word_list.index("i")
first_sentence[1] = word_list.index("thought")
first_sentence[2] = word_list.index("the")
first_sentence[3] = word_list.index("movie")
first_sentence[4] = word_list.index("was")
first_sentence[5] = word_list.index("incredible")
first_sentence[6] = word_list.index("and")
first_sentence[7] = word_list.index("inspiring")
# first_sentence[8] = 0
# first_sentence[9] = 0
print(first_sentence.shape)
print(first_sentence) # shows the row index for each word | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
The 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence. | with tf.Session() as sess:
print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review. | from os import listdir
from os.path import isfile, join
positiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]
negativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]
numWords = []
for pf in positiveFiles:
with open(pf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Positive files finished')
for nf in negativeFiles:
with open(nf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Negative files finished')
numFiles = len(numWords)
print('The total number of files is', numFiles)
print('The total number of words in the files is', sum(numWords))
print('The average number of words in the files is', sum(numWords)/len(numWords)) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
We can also use the Matplot library to visualize this data in a histogram format. | import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(numWords, 50)
plt.xlabel('Sequence Length')
plt.ylabel('Frequency')
plt.axis([0, 1200, 0, 8000])
plt.show() | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set. | max_seq_len = 250 | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
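As a quick sanity check, we can compute what fraction of reviews actually fit within this limit, reusing the numWords list from above: | # Fraction of reviews with at most max_seq_len words
print(np.mean(np.asarray(numWords) <= max_seq_len)) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |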
Data | ids_matrix = np.load('ids_matrix.npy').tolist() | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Parameters | # Parameters for training
STEPS = 100000
BATCH_SIZE = 64
# Parameters for data processing
REVIEW_KEY = 'review'
SEQUENCE_LENGTH_KEY = 'sequence_length' | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Separating train and test data
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews.
Let's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews. | POSITIVE_REVIEWS = 12500
# copying sequences
data_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]
# generating labels
data_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]
# also creating a length column, this will be used by the Dynamic RNN
# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
data_length = [max_seq_len for i in range(len(ids_matrix))] | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing. | data = list(zip(data_sequences, data_labels, data_length))
random.shuffle(data) # shuffle
data = np.asarray(data)
# separating train and test data
limit = int(len(data) * 0.9)
train_data = data[:limit]
test_data = data[limit:] | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Verifying if the train and test data have enough positive and negative examples | LABEL_INDEX = 1
def _number_of_pos_labels(df):
pos_labels = 0
for value in df:
if value[LABEL_INDEX] == [1, 0]:
pos_labels += 1
return pos_labels
pos_labels_train = _number_of_pos_labels(train_data)
total_labels_train = len(train_data)
pos_labels_test = _number_of_pos_labels(test_data)
total_labels_test = len(test_data)
print('Total number of positive labels:', pos_labels_train + pos_labels_test)
print('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)
print('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Input functions | def get_input_fn(df, batch_size, num_epochs=1, shuffle=True):
    def input_fn():
        # https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
        sequences = np.asarray([v for v in df[:, 0]], dtype=np.int32)
        labels = np.asarray([v for v in df[:, 1]], dtype=np.int32)
        length = np.asarray(df[:, 2], dtype=np.int32)
        # reading data from memory
        dataset = tf.contrib.data.Dataset.from_tensor_slices(
            (sequences, labels, length))
        # shuffle before batching, so that individual examples (not whole
        # batches) are shuffled; for our "manual" test we don't want to
        # shuffle the data
        if shuffle:
            dataset = dataset.shuffle(buffer_size=100000)
        dataset = (
            dataset.repeat(num_epochs)  # repeat dataset the number of epochs
            .batch(batch_size)
        )
        # create iterator
        review, label, length = dataset.make_one_shot_iterator().get_next()
        features = {
            REVIEW_KEY: review,
            SEQUENCE_LENGTH_KEY: length,
        }
        return features, label
    return input_fn
features, label = get_input_fn(train_data, 2)()
with tf.Session() as sess:
items = sess.run(features)
print(items[REVIEW_KEY])
print()
items = sess.run(features)
print(items[REVIEW_KEY])
print()
train_input_fn = get_input_fn(train_data, BATCH_SIZE, None)
test_input_fn = get_input_fn(test_data, BATCH_SIZE) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Creating the Estimator model | def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01,
embed_dim=128):
def model_fn(features, labels, mode):
review = features[REVIEW_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)
        # Look up the pretrained GloVe embedding for each word id in the
        # review; the cast guards against float64 vectors on disk
        data = tf.nn.embedding_lookup(tf.cast(word_vector, tf.float32), review)
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs, sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
predictions_softmax = tf.nn.softmax(predictions)
        loss = None
        train_op = None
        eval_op = None
        if mode != tf.estimator.ModeKeys.PREDICT:
            loss = tf.losses.softmax_cross_entropy(labels, predictions)
            # labels are only available in TRAIN and EVAL modes, so the
            # accuracy metric must be built inside this guard (otherwise
            # predict() would fail with labels=None)
            eval_op = {
                "accuracy": tf.metrics.accuracy(
                    tf.argmax(input=predictions_softmax, axis=1),
                    tf.argmax(input=labels, axis=1))
            }
        if mode == tf.estimator.ModeKeys.TRAIN:
            train_op = tf.contrib.layers.optimize_loss(
                loss,
                tf.contrib.framework.get_global_step(),
                optimizer=optimizer,
                learning_rate=learning_rate)
        return model_fn_lib.EstimatorSpec(mode,
                                          predictions=predictions_softmax,
                                          loss=loss,
                                          train_op=train_op,
                                          eval_metric_ops=eval_op)
return model_fn
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=2, # since are just 2 classes
dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001,
embed_dim=512) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Create and Run Experiment | # create experiment
def generate_experiment_fn():
"""
Create an experiment function given hyperparameters.
Returns:
A function (output_dir) -> Experiment where output_dir is a string
representing the location of summaries, checkpoints, and exports.
this function is used by learn_runner to create an Experiment which
executes model code provided in the form of an Estimator and
input functions.
All listed arguments in the outer function are used to create an
Estimator, and input functions (training, evaluation, serving).
Unlisted args are passed through to Experiment.
"""
def _experiment_fn(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=test_input_fn,
train_steps=STEPS
)
return _experiment_fn
# run experiment
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2')) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Making Predictions
Let's generate our own sentences to see how the model classifies them. Below is a minimal sketch: it encodes a sentence into word ids (out-of-vocabulary words are mapped to index 0, an assumption not in the original) and reuses the input pipeline and training checkpoints. | def generate_data_row(sentence, label):
    # use the same fixed length (250) that the model was trained with
    length = max_seq_len
    sequence = np.zeros((length), dtype='int32')
    for i, word in enumerate(sentence):
        # words missing from the vocabulary are mapped to index 0
        sequence[i] = word_list.index(word) if word in word_list else 0
    return sequence, label, length
# Build a small, manually-written test set in memory. This is a sketch:
# the two sentences below are illustrative examples (not read from
# MY_TEST_INPUT), and the labels are placeholders since we only care
# about the predictions.
my_sentences = [
    "i thought the movie was incredible and inspiring".split(),
    "the movie was boring and the acting was terrible".split(),
]
my_test_data = np.asarray(
    [generate_data_row(s, [1, 0]) for s in my_sentences])

my_test_input_fn = get_input_fn(my_test_data, batch_size=1, shuffle=False)

# Rebuild the estimator from the checkpoints written during training
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='testing2')
preds = estimator.predict(input_fn=my_test_input_fn)

print()
for p, s in zip(preds, my_sentences):
    print('sentence:', ' '.join(s))
    # index 0 is the "good review" class ([1, 0]) and index 1 the
    # "bad review" class ([0, 1]), matching the label encoding above
    print('good review:', p[0], 'bad review:', p[1])
    print('-' * 10) | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-batch_64-checkpoint.ipynb | mari-linhares/tensorflow-workshop | apache-2.0 |
Let's add a color column. | def addcolor(addition):
return additionToColor[addition]
wtb['color'] = np.vectorize(addcolor)(wtb['addition']) | notebooks/Colors.ipynb | jamesnw/wtb-data | mit |
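As an aside, pandas can do this lookup directly: assuming the same additionToColor mapping dict defined earlier, Series.map avoids the np.vectorize wrapper entirely. | # Equivalent lookup with pandas' built-in Series.map
wtb['color'] = wtb['addition'].map(additionToColor) | notebooks/Colors.ipynb | jamesnw/wtb-data | mit |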
Now group by the new color column, get the mean, and sort the values high to low. | wtb.groupby(by='color').mean().sort_values('vote',ascending=False) | notebooks/Colors.ipynb | jamesnw/wtb-data | mit |
There we have it. Blue is the best tasting color.
But brown is awfully close. I wonder how the ranges compare. Let's take a look at a box plot. | %matplotlib inline
wtb.groupby(by='color').boxplot(subplots=False,rot=45) | notebooks/Colors.ipynb | jamesnw/wtb-data | mit |
Create and convert a TensorFlow model
This notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite. The model created in this notebook is used in the hello_world sample for TensorFlow Lite for Microcontrollers.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Import dependencies
Our first task is to import the dependencies we need. Run the following cell to do so: | # TensorFlow is an open source machine learning library
import tensorflow as tf
# Numpy is a math library
import numpy as np
# Matplotlib is a graphing library
import matplotlib.pyplot as plt
# math is Python's math library
import math | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Generate data
Deep learning networks learn to model patterns in underlying data. In this notebook, we're going to train a network to model data generated by a sine function. This will result in a model that can take a value, x, and predict its sine, y.
In a real world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to do this, we can demonstrate the basic principles of machine learning.
In the hello_world sample for TensorFlow Lite for Microcontrollers, we'll use this model to control LEDs that light up in a sequence.
The code in the following cell will generate a set of random x values, calculate their sine values, and display them on a graph: | # We'll generate this many sample datapoints
SAMPLES = 1000
# Set a "seed" value, so we get the same random numbers each time we run this
# notebook
np.random.seed(1337)
# Generate a uniformly distributed set of random numbers in the range from
# 0 to 2π, which covers a complete sine wave oscillation
x_values = np.random.uniform(low=0, high=2*math.pi, size=SAMPLES)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)
# Calculate the corresponding sine values
y_values = np.sin(x_values)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(x_values, y_values, 'b.')
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Add some noise
Since it was generated directly by the sine function, our data fits a nice, smooth curve.
However, machine learning models are good at extracting underlying meaning from messy, real world data. To demonstrate this, we can add some noise to our data to approximate something more life-like.
In the following cell, we'll add some random noise to each value, then draw a new graph: | # Add a small random number to each y value
y_values += 0.1 * np.random.randn(*y_values.shape)
# Plot our data
plt.plot(x_values, y_values, 'b.')
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Split our data
We now have a noisy dataset that approximates real world data. We'll be using this to train our model.
To evaluate the accuracy of the model we train, we'll need to compare its predictions to real data and check how well they match up. This evaluation happens during training (where it is referred to as validation) and after training (referred to as testing). It's important in both cases that we use fresh data that was not already used to train the model.
To ensure we have data to use for evaluation, we'll set some aside before we begin training. We'll reserve 20% of our data for validation, and another 20% for testing. The remaining 60% will be used to train the model. This is a typical split used when training models.
The following code will split our data and then plot each set as a different color: | # We'll use 60% of our data for training and 20% for testing. The remaining 20%
# will be used for validation. Calculate the indices of each section.
TRAIN_SPLIT = int(0.6 * SAMPLES)
TEST_SPLIT = int(0.2 * SAMPLES + TRAIN_SPLIT)
# Use np.split to chop our data into three parts.
# The second argument to np.split is an array of indices where the data will be
# split. We provide two indices, so the data will be divided into three chunks.
x_train, x_test, x_validate = np.split(x_values, [TRAIN_SPLIT, TEST_SPLIT])
y_train, y_test, y_validate = np.split(y_values, [TRAIN_SPLIT, TEST_SPLIT])
# Double check that our splits add up correctly
assert (x_train.size + x_validate.size + x_test.size) == SAMPLES
# Plot the data in each partition in different colors:
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
| tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Design a model
We're going to build a model that will take an input value (in this case, x) and use it to predict a numeric output value (the sine of x). This type of problem is called a regression.
To achieve this, we're going to create a simple neural network. It will use layers of neurons to attempt to learn any patterns underlying the training data, so it can make predictions.
To begin with, we'll define two layers. The first layer takes a single input (our x value) and runs it through 16 neurons. Based on this input, each neuron will become activated to a certain degree based on its internal state (its weight and bias values). A neuron's degree of activation is expressed as a number.
The activation numbers from our first layer will be fed as inputs to our second layer, which is a single neuron. It will apply its own weights and bias to these inputs and calculate its own activation, which will be output as our y value.
Note: To learn more about how neural networks function, you can explore the Learn TensorFlow codelabs.
The code in the following cell defines our model using Keras, TensorFlow's high-level API for creating deep learning networks. Once the network is defined, we compile it, specifying parameters that determine how it will be trained: | # We'll use Keras to create a simple model architecture
from tensorflow.keras import layers
model_1 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_1.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# Final layer is a single neuron, since we want to output a single value
model_1.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_1.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
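Before training, it's worth a quick look at the network's structure. Keras models expose a summary() method that prints each layer and its parameter count; for this architecture we'd expect 1×16+16 = 32 parameters in the first layer and 16×1+1 = 17 in the output layer, 49 in total: | # Print the model's layers and parameter counts
model_1.summary() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |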
Train the model
Once we've defined the model, we can use our data to train it. Training involves passing an x value into the neural network, checking how far the network's output deviates from the expected y value, and adjusting the neurons' weights and biases so that the output is more likely to be correct the next time.
Training runs this process on the full dataset multiple times, and each full run-through is known as an epoch. The number of epochs to run during training is a parameter we can set.
During each epoch, data is run through the network in multiple batches. Each batch, several pieces of data are passed into the network, producing output values. These outputs' correctness is measured in aggregate and the network's weights and biases are adjusted accordingly, once per batch. The batch size is also a parameter we can set.
The code in the following cell uses the x and y values from our training data to train the model. It runs for 1000 epochs, with 16 pieces of data in each batch. We also pass in some data to use for validation. As you will see when you run the cell, training can take a while to complete: | # Train the model on our training data while validating on our validation set
history_1 = model_1.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate)) | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
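As a quick sanity check on the batching arithmetic described above, we can compute how many weight updates happen per epoch (a sketch; the final, smaller batch still counts as one update): | # 600 training examples / batch size 16 -> 38 updates per epoch
batches_per_epoch = math.ceil(len(x_train) / 16)
print(batches_per_epoch)  # 38 | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |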
Check the training metrics
During training, the model's performance is constantly being measured against both our training data and the validation data that we set aside earlier. Training produces a log of data that tells us how the model's performance changed over the course of the training process.
The following cells will display some of that data in a graphical form: | # Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_1.history['loss']
val_loss = history_1.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Look closer at the data
The graph shows the loss (or the difference between the model's predictions and the actual data) for each epoch. There are several ways to calculate loss, and the method we have used is mean squared error. There is a distinct loss value given for the training and the validation data.
As we can see, the amount of loss rapidly decreases over the first 25 epochs, before flattening out. This means that the model is improving and producing more accurate predictions!
Our goal is to stop training when either the model is no longer improving, or when the training loss is less than the validation loss, which would mean that the model has learned to predict the training data so well that it can no longer generalize to new data.
To make the flatter part of the graph more readable, let's skip the first 50 epochs: | # Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
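Before digging further into these numbers, here is a tiny worked example of the two metrics we are tracking, using made-up values rather than our model's output: | # Mean squared error averages squared differences; mean absolute error
# averages absolute differences. Made-up numbers for illustration only.
y_true = np.array([0.0, 0.5, 1.0])
y_pred = np.array([0.1, 0.4, 1.2])
print(np.mean((y_true - y_pred) ** 2))   # MSE: 0.06 / 3 = 0.02
print(np.mean(np.abs(y_true - y_pred)))  # MAE: 0.4 / 3 ≈ 0.133 | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |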
Further metrics
From the plot, we can see that loss continues to reduce until around 600 epochs, at which point it is mostly stable. This means that there's no need to train our network beyond 600 epochs.
However, we can also see that the lowest loss value is still around 0.155. This means that our network's predictions are off by an average of ~15%. In addition, the validation loss values jump around a lot, and are sometimes even higher than the training loss.
To gain more insight into our model's performance we can plot some more data. This time, we'll plot the mean absolute error, which is another way of measuring how far the network's predictions are from the actual numbers: | plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_1.history['mae']
val_mae = history_1.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
This graph of mean absolute error tells another story. We can see that training data shows consistently lower error than validation data, which means that the network may have overfit, or learned the training data so rigidly that it can't make effective predictions about new data.
In addition, the mean absolute error values are quite high, ~0.305 at best, which means some of the model's predictions are at least 30% off. A 30% error means we are very far from accurately modelling the sine wave function.
To get more insight into what is happening, we can plot our network's predictions for the training data against the expected values: | # Use the model to make predictions from our training data
predictions = model_1.predict(x_train)
# Plot the predictions along with the test data
plt.clf()
plt.title('Training data predicted vs actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_train, predictions, 'r.', label='Predicted')
plt.legend()
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Oh dear! The graph makes it clear that our network has learned to approximate the sine function in a very limited way. For 0 <= x <= 1.1 the line mostly fits, but for the rest of our x values it is a rough approximation at best.
The rigidity of this fit suggests that the model does not have enough capacity to learn the full complexity of the sine wave function, so it's only able to approximate it in an overly simplistic way. By making our model bigger, we should be able to improve its performance.
Change our model
To make our model bigger, let's add an additional layer of neurons. The following cell redefines our model in the same way as earlier, but with an additional layer of 16 neurons in the middle: | model_2 = tf.keras.Sequential()
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
We'll now train the new model. To save time, we'll train for only 600 epochs: | history_2 = model_2.fit(x_train, y_train, epochs=600, batch_size=16,
validation_data=(x_validate, y_validate)) | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Evaluate our new model
Each training epoch, the model prints out its loss and mean absolute error for training and validation. You can read this in the output above (note that your exact numbers may differ):
Epoch 600/600
600/600 [==============================] - 0s 109us/sample - loss: 0.0124 - mae: 0.0892 - val_loss: 0.0116 - val_mae: 0.0845
You can see that we've already got a huge improvement - validation loss has dropped from 0.15 to 0.015, and validation MAE has dropped from 0.31 to 0.1.
The following cell will print the same graphs we used to evaluate our original model, but showing our new training history: | # Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history_2.history['loss']
val_loss = history_2.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.clf()
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history_2.history['mae']
val_mae = history_2.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Great results! From these graphs, we can see several exciting things:
Our network has reached its peak accuracy much more quickly (within 200 epochs instead of 600)
The overall loss and MAE are much better than our previous network
Metrics are better for validation than training, which means the network is not overfitting
The reason the metrics for validation are better than those for training is that validation metrics are calculated at the end of each epoch, while training metrics are calculated throughout the epoch, so validation happens on a model that has been trained slightly longer.
This all means our network seems to be performing well! To confirm, let's check its predictions against the test dataset we set aside earlier: | # Calculate and print the loss on our test dataset
loss = model_2.evaluate(x_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(x_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(x_test, y_test, 'b.', label='Actual')
plt.plot(x_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show() | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |
Much better! The evaluation metrics we printed show that the model has a low loss and MAE on the test data, and the predictions line up visually with our data fairly well.
The model isn't perfect; its predictions don't form a smooth sine curve. For instance, the line is almost straight when x is between 4.2 and 5.2. If we wanted to go further, we could try further increasing the capacity of the model, perhaps using some techniques to defend from overfitting.
However, an important part of machine learning is knowing when to quit, and this model is good enough for our use case - which is to make some LEDs blink in a pleasing pattern.
Convert to TensorFlow Lite
We now have an acceptably accurate model in-memory. However, to use this with TensorFlow Lite for Microcontrollers, we'll need to convert it into the correct format and download it as a file. To do this, we'll use the TensorFlow Lite Converter. The converter outputs a file in a special, space-efficient format for use on memory-constrained devices.
Since this model is going to be deployed on a microcontroller, we want it to be as tiny as possible! One technique for reducing the size of models is called quantization. It reduces the precision of the model's weights, which saves memory, often without much impact on accuracy. Quantized models also run faster, since the calculations required are simpler.
The TensorFlow Lite Converter can apply quantization while it converts the model. In the following cell, we'll convert the model twice: once with quantization, once without: | # Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
tflite_model = converter.convert()
# Save the model to disk
open("sine_model.tflite", "wb").write(tflite_model)
# Convert the model to the TensorFlow Lite format with quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
# Save the model to disk
open("sine_model_quantized.tflite", "wb").write(tflite_model) | tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb | gunan/tensorflow | apache-2.0 |