markdown stringlengths 0-37k | code stringlengths 1-33.3k | path stringlengths 8-215 | repo_name stringlengths 6-77 | license stringclasses 15 values
---|---|---|---|---|
Equivalently, we could explicitly set the equality operator: | ag.apps.list(search={'id.eq': app.id}) | content/notebooks/Python SDK.ipynb | agaveapi/SC17-container-tutorial | bsd-3-clause |
Typically, the list of available search terms is identical to the attributes included in the JSON returned when requesting the full resource description. Operators include 'like', 'lt', 'gt', 'lte', 'gte', etc. See the official Agave documentation for the complete list.
Here we retrieve all apps with a name "like" opensees: | ag.apps.list(search={'name.like': 'opensees'}) | content/notebooks/Python SDK.ipynb | agaveapi/SC17-container-tutorial | bsd-3-clause |
Two results were returned, both with name "opensees".
You can include multiple search expressions in the form of additional key:value pairs to build a more restrictive query. Here we restrict the result to opensees apps with revision at least 25: | ag.apps.list(search={'name.like': 'opensees', 'revision.gte': 25}) | content/notebooks/Python SDK.ipynb | agaveapi/SC17-container-tutorial | bsd-3-clause |
Preparation
To get started, we first load all the required imports. Please make sure you have installed dist-keras and seaborn. Furthermore, we assume that you have access to an installation which provides Apache Spark.
Before you start this notebook, place the MNIST dataset (which is provided as a zip in examples/data within this repository) on HDFS. If HDFS is not available, place it on the local filesystem instead, and make sure the path to the file is identical on all computing nodes. | %matplotlib inline
import numpy as np
import seaborn as sns
import time
from pyspark import SparkContext
from pyspark import SparkConf
from matplotlib import pyplot as plt
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.feature import StringIndexer
from distkeras.transformers import *
from distkeras.utils import * | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
In the following cell, adapt the parameters to fit your personal requirements. | # Modify these variables according to your needs.
application_name = "MNIST Preprocessing"
using_spark_2 = False
local = False
path_train = "data/mnist_train.csv"
path_test = "data/mnist_test.csv"
if local:
# Tell master to use local resources.
master = "local[*]"
num_processes = 3
num_executors = 1
else:
# Tell master to use YARN.
master = "yarn-client"
num_executors = 20
num_processes = 1
# This variable is derived from the number of cores and executors, and will be used to assign the number of model trainers.
num_workers = num_executors * num_processes
print("Number of desired executors: " + `num_executors`)
print("Number of desired processes / executor: " + `num_processes`)
print("Total number of workers: " + `num_workers`)
import os
# Use the DataBricks CSV reader, this has some nice functionality regarding invalid values.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell'
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_processes`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.executor.memory", "20g")
conf.set("spark.yarn.executor.memoryOverhead", "2")
conf.set("spark.locality.wait", "0")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
# Check if the user is running Spark 2.0 +
if using_spark_2:
# SparkSession lives in pyspark.sql and is only needed for Spark 2+.
from pyspark.sql import SparkSession
sc = SparkSession.builder.config(conf=conf) \
.appName(application_name) \
.getOrCreate()
else:
# Create the Spark context.
sc = SparkContext(conf=conf)
# Add the missing imports
from pyspark import SQLContext
sqlContext = SQLContext(sc)
# Record time of starting point.
time_start = time.time()
# Check if we are using Spark 2.0
if using_spark_2:
reader = sc
else:
reader = sqlContext
# Read the training set.
raw_dataset_train = reader.read.format('com.databricks.spark.csv') \
.options(header='true', inferSchema='true') \
.load(path_train)
# Read the test set.
raw_dataset_test = reader.read.format('com.databricks.spark.csv') \
.options(header='true', inferSchema='true') \
.load(path_test)
# Repartition the datasets.
raw_dataset_train = raw_dataset_train.repartition(num_workers)
raw_dataset_test = raw_dataset_test.repartition(num_workers) | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
As shown in the output of the cell above, we see that every pixel is associated with a separate column. In order to ensure compatibility with Apache Spark, we vectorize the columns and add the resulting vectors as a separate column. However, in order to achieve this, we first need a list of the required columns. This is shown in the cell below. | # First, we would like to extract the desired features from the raw dataset.
# We do this by constructing a list with all desired columns.
features = raw_dataset_train.columns
features.remove('label') | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Once we have a list of column names, we can pass this to Spark's VectorAssembler. This VectorAssembler will take a list of features, vectorize them, and place them in a column defined in outputCol. | # Next, we use Spark's VectorAssembler to "assemble" (create) a vector of all desired features.
# http://spark.apache.org/docs/latest/ml-features.html#vectorassembler
vector_assembler = VectorAssembler(inputCols=features, outputCol="features")
# This transformer will take all columns specified in features, and create an additional column "features" which will contain all the desired features aggregated into a single vector.
training_set = vector_assembler.transform(raw_dataset_train)
test_set = vector_assembler.transform(raw_dataset_test) | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Once we have the inputs for our Neural Network (the features column) after applying the VectorAssembler, we should also define the outputs. Since we are dealing with a classification task, the output of our Neural Network should be a one-hot encoded vector with 10 elements. For this, we provide a OneHotTransformer, which accomplishes exactly this task. | # Define the number of output classes.
nb_classes = 10
encoder = OneHotTransformer(nb_classes, input_col="label", output_col="label_encoded")
training_set = encoder.transform(training_set)
test_set = encoder.transform(test_set) | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
MNIST
MNIST is a dataset of handwritten digits. Every image is a 28 by 28 pixel grayscale image. This means that every pixel has a value between 0 and 255. Some examples of instances within this dataset are shown in the cells below.
Normalization
In this section, we normalize the feature vectors to the [0, 1] range. | # Clear the datasets in case you ran this cell before.
training_set = training_set.select("features", "label", "label_encoded")
test_set = test_set.select("features", "label", "label_encoded")
# Allocate a MinMaxTransformer using Distributed Keras.
# o_min -> original_minimum
# n_min -> new_minimum
transformer = MinMaxTransformer(n_min=0.0, n_max=1.0, \
o_min=0.0, o_max=250.0, \
input_col="features", \
output_col="features_normalized")
# Transform the datasets.
training_set = transformer.transform(training_set)
test_set = transformer.transform(test_set) | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Convolutions
In order to make the dense vectors compatible with convolution operations in Keras, we add another column which contains the matrix form of these images. We provide a utility transformer (ReshapeTransformer) which helps you with this. | reshape_transformer = ReshapeTransformer("features_normalized", "matrix", (28, 28, 1))
training_set = reshape_transformer.transform(training_set)
test_set = reshape_transformer.transform(test_set) | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Dense Transformation
At the moment, dist-keras does not support SparseVectors due to the numpy dependency. As a result, we have to convert the SparseVector to a DenseVector. We added a simple utility transformer which does this for you. | dense_transformer = DenseTransformer(input_col="features_normalized", output_col="features_normalized_dense")
training_set = dense_transformer.transform(training_set)
test_set = dense_transformer.transform(test_set) | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Artificial Enlargement
We artificially enlarge the training set (here by roughly a factor of ten, controlled by the expansion variable below) to simulate larger datasets and to evaluate optimizer performance. | df = training_set
expansion = 10
for i in range(0, expansion):
df = df.unionAll(training_set)
training_set = df
training_set.cache() | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Writing to HDFS
In order to avoid repeated preprocessing and to ensure optimizer performance, we write the data to HDFS in Parquet format. | training_set.write.parquet("data/mnist_train.parquet")
test_set.write.parquet("data/mnist_test.parquet")
# Record end of transformation.
time_end = time.time()
dt = time_end - time_start
print("Took " + str(dt) + " seconds.")
!hdfs dfs -rm -r data/mnist_test.parquet
!hdfs dfs -rm -r data/mnist_train.parquet | examples/mnist_preprocessing.ipynb | ad960009/dist-keras | gpl-3.0 |
Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson! | import math
import cv2
import numpy as np
from scipy import stats
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
#return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
# to store the slopes & intercepts from previous frame
previous_lslope = 1
previous_lintercept = 0
previous_rslope = 1
previous_rintercept = 0
def draw_lines(img, lines, color=[255, 0, 0], thickness=8):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# to store our x,y co-ordinate of left & right lane lines
left_x = []
left_y = []
right_x = []
right_y = []
for line in lines:
for x1,y1,x2,y2 in line:
# skip vertical segments to avoid division by zero
if x2 == x1:
continue
# calculate the slope
slope = (y2-y1)/(x2-x1)
# if positive slope, then right line because (0,0) is top left corner
if (slope > 0.5) and (slope < 0.65):
# store the points
right_x.append(x1)
right_x.append(x2)
right_y.append(y1)
right_y.append(y2)
# draw the actual detected hough lines as well to visually compare the error
# cv2.line(img, (x1, y1), (x2, y2), [0,255,0], 2)
# else its a left line
elif (slope < -0.5) and (slope > -0.7):
# store the points
left_x.append(x1)
left_x.append(x2)
left_y.append(y1)
left_y.append(y2)
# draw the actual detected hough lines as well to visually compare the error
# cv2.line(img, (x1, y1), (x2, y2), [0,255,0], 2)
global previous_lslope
global previous_lintercept
global previous_rslope
global previous_rintercept
# use linear regression to find the slope & intercepts of our left & right lines
if left_x and left_y:
previous_lslope, previous_lintercept, lr_value, lp_value, lstd_err = stats.linregress(left_x,left_y)
if right_x and right_y:
previous_rslope, previous_rintercept, rr_value, rp_value, rstd_err = stats.linregress(right_x,right_y)
# else in all other conditions use the slope & intercept from the previous frame, presuming the next
# frames will result in correct slope & intercepts for lane lines
# FIXME: this logic will fail in conditions when lane lines are not detected in consecutive next frames
# better to not show/detect false lane lines?
# extrapolate the lines in the lower half of the image using our detected slope & intercepts
x = img.shape[1]
y = img.shape[0]
# left line
l_y1 = int(round(y))
l_y2 = int(round(y*0.6))
l_x1_lr = int(round((l_y1-previous_lintercept)/previous_lslope))
l_x2_lr = int(round((l_y2-previous_lintercept)/previous_lslope))
# right line
r_y1 = int(round(y))
r_y2 = int(round(y*0.6))
r_x1_lr = int(round((r_y1-previous_rintercept)/previous_rslope))
r_x2_lr = int(round((r_y2-previous_rintercept)/previous_rslope))
# draw the extrapolated lines onto the image
cv2.line(img, (l_x1_lr, l_y1), (l_x2_lr, l_y2), color, thickness)
cv2.line(img, (r_x1_lr, r_y1), (r_x2_lr, r_y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ) | P1.ipynb | nitheeshkl/Udacity_CarND_LaneLines_P1 | mit |
Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. | # TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
def showImage(img,cmap=None):
# create a new figure to show each image
plt.figure()
# show image
plt.imshow(img,cmap=cmap)
def detectLaneLines(image):
# get image sizes. X=columns & Y=rows
x = image.shape[1]
y = image.shape[0]
# convert the image to gray scale
grayImage = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# blur the image
blurImage = gaussian_blur(grayImage,3)
# perform edge detection
cannyImage = canny(blurImage,50,150)
# define the co-ordinates for the interested section of the image.
# we're only interested in the bottom half where there is road
vertices = np.array([[(x*0.15,y),(x*0.45,y*0.6),(x*0.55,y*0.6),(x*0.85,y)]],dtype=np.int32)
# create a masked image of only the interested region
maskedImage = region_of_interest(cannyImage, vertices)
# detect & draw lines using hough algo on our masked image
rho = 1 # distance resolution in pixels of the Hough grid
theta = (np.pi/180)*1 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 3 #minimum number of pixels making up a line
max_line_gap = 3 # maximum gap in pixels between connectable line segments
houghImage = hough_lines(maskedImage,rho, theta, threshold, min_line_length, max_line_gap)
# merge the mask layer onto the original image and return it
return weighted_img(houghImage,image)
# read a sample image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
# show detected lanes in the sample
showImage(detectLaneLines(image),cmap='gray')
| P1.ipynb | nitheeshkl/Udacity_CarND_LaneLines_P1 | mit |
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems. | # Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
return detectLaneLines(image) | P1.ipynb | nitheeshkl/Udacity_CarND_LaneLines_P1 | mit |
Load wiki page view and stock price data into Spark DataFrames.
wiki_obs is a Spark dataframe of (timestamp, page, views) of types (Timestamp, String, Double). ticker_obs is a Spark dataframe of (timestamp, symbol, price) of types (Timestamp, String, Double). | wiki_obs = ld.load_wiki_df(sqlCtx, '/user/srowen/wiki.tsv')
ticker_obs = ld.load_ticker_df(sqlCtx, '/user/srowen/ticker.tsv') | ds-for-ws-student.ipynb | cdalzell/ds-for-wall-street | apache-2.0 |
Display the first 5 elements of the wiki_obs RDD.
wiki_obs contains Row objects with the fields (timestamp, page, views).
Display the first 5 elements of the tickers_obs RDD.
ticker_obs contains Row objects with the fields (timestamp, symbol, price).
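The display cells are left empty in this student notebook; a minimal sketch for inspecting the data, assuming the two DataFrames loaded above, could be:

```python
# Peek at the first 5 rows of each DataFrame.
wiki_obs.show(5)
ticker_obs.show(5)
```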
Create datetime index.
Create time series RDD from observations and index. Remove time instants with NaNs.
Cache the tsrdd.
Examine the first element in the RDD.
Time series have values and a datetime index. We can create a tsrdd for hourly stock prices from an index and a Spark DataFrame of observations. ticker_tsrdd is an RDD of tuples where each tuple has the form (ticker symbol, stock prices) where ticker symbol is a string and stock prices is a 1D np.ndarray. We create a nicely formatted string representation of this pair in print_ticker_info(). Notice how we access the two elements of the tuple. | def print_ticker_info(ticker):
print(('The first ticker symbol is: {} \nThe first 20 elements of the associated ' +
'series are:\n {}').format(ticker[0], ticker[1][:20])) | ds-for-ws-student.ipynb | cdalzell/ds-for-wall-street | apache-2.0 |
Create a wiki page view tsrdd and set the index to match the index of ticker_tsrdd.
Linearly interpolate to impute missing values.
wiki_tsrdd is an RDD of tuples where each tuple has the form (page title, wiki views) where page title is a string and wiki views is a 1D np.ndarray. We have cached both RDDs because we will be doing many subsequent operations on them.
Filter out symbols with more than the minimum number of NaNs.
Then filter out instants with NaNs. | def count_nans(vec):
return np.count_nonzero(np.isnan(vec)) | ds-for-ws-student.ipynb | cdalzell/ds-for-wall-street | apache-2.0 |
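The filtering step itself is left to the reader. One possible sketch, treating ticker_tsrdd as an RDD of (symbol, numpy array) pairs as described above (max_nans is an assumed threshold, not from the original notebook):

```python
max_nans = 10  # assumed threshold for how many NaNs a series may contain
ticker_tsrdd = ticker_tsrdd.filter(lambda kv: count_nans(kv[1]) <= max_nans)
ticker_tsrdd.count()
```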
Join together wiki_tsrdd and ticker_tsrdd
First, we use this dict to look up the corresponding stock ticker symbol and rekey the wiki page view time series. We then join the data sets together. The result is an RDD of tuples where each element is of the form (ticker_symbol, (wiki_series, ticker_series)). We count the number of elements in the resulting rdd to see how many matches we have.
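A possible sketch of the re-keying and join, assuming page_symbols is the dict mapping wiki page titles to ticker symbols that is used further below (pages without a known symbol would need to be dropped first):

```python
# Re-key each wiki series by its ticker symbol, then join with the ticker series.
keyed_wiki = wiki_tsrdd.map(lambda kv: (page_symbols[kv[0]], kv[1]))
joined = keyed_wiki.join(ticker_tsrdd)  # (symbol, (wiki_series, ticker_series))
joined.count()
```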
Correlation and Relationships
Define a function for computing the pearson r correlation of the stock price and wiki page traffic associated with a company.
Here we look up a specific stock and corresponding wiki page, and provide an example of
computing the pearson correlation locally. We use scipy.stats.stats.pearsonr to compute the pearson correlation and corresponding two sided p value. wiki_vol_corr and corr_with_offset both return this as a tuple of (corr, p_value). | from scipy.stats.stats import pearsonr
def wiki_vol_corr(page_key):
# lookup individual time series by key.
ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
wiki = wiki_tsrdd.find_series(page_key) # numpy array
return pearsonr(ticker, wiki)
def corr_with_offset(page_key, offset):
"""offset is an integer that describes how many time intervals we have slid
the wiki series ahead of the ticker series."""
ticker = ticker_tsrdd.find_series(page_symbols[page_key]) # numpy array
wiki = wiki_tsrdd.find_series(page_key) # numpy array
return pearsonr(ticker[offset:], wiki[:-offset]) | ds-for-ws-student.ipynb | cdalzell/ds-for-wall-street | apache-2.0 |
Create a plot of the joint distribution of wiki traffic and stock prices for a specific company using seaborn's jointplot function. | def joint_plot(page_key, ticker, wiki, offset=0):
with sns.axes_style("white"):
sns.jointplot(x=ticker, y=wiki, kind="kde", color="b");
plt.xlabel('Stock Price')
plt.ylabel('Wikipedia Page Views')
plt.title(('Joint distribution of {} stock price\n and Wikipedia page views.'
+ '\nWith a {} day offset').format(page_key, offset), y=1.20) | ds-for-ws-student.ipynb | cdalzell/ds-for-wall-street | apache-2.0 |
Find the companies with the highest correlation between their stock price time series and Wikipedia page traffic.
Note that comparing a tuple means you compare the composite object lexicographically.
Add in filtering out less than useful correlation results.
There are a lot of invalid correlations that get computed, so let's filter those out.
Find the top 10 correlations as defined by the ordering on tuples.
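One way this could look, reusing pearsonr and the joined RDD from the join step above (a sketch, not the original solution):

```python
# Correlation per symbol, dropping NaN results, then the 10 strongest.
corrs = joined.mapValues(lambda pair: pearsonr(pair[0], pair[1])[0])
valid_corrs = corrs.filter(lambda kv: not np.isnan(kv[1]))
top_10 = valid_corrs.top(10, key=lambda kv: kv[1])
```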
Create a joint plot of some of the stronger relationships.
Volatility
Compute per-day volatility for each symbol.
Make sure we don't have any NaNs.
Visualize volatility
Plot daily volatility in stocks over time.
What does the distribution of volatility for the whole market look like? Add volatility for individual stocks in a datetime bin.
Find stocks with the highest average daily volatility.
Plot stocks with the highest average daily volatility over time.
We first map over ticker_daily_vol to find the index of the value with the highest volatility. We then relate that back to the index set on the RDD to find the corresponding datetime.
A large number of stock symbols had their most volatile days on August 24th and August 25th of
this year.
Regress volatility against page views
Resample the wiki page view data set so we have total pageviews by day.
Cache the wiki page view RDD.
Resample the wiki page view data set so we have total pageviews by day. This means reindexing the time series and aggregating data together with daily buckets. We use np.nansum to add up numbers while treating nans like zero.
Validate data by checking for nans.
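A quick sanity check along these lines might look like the following sketch, reusing count_nans from above and assuming the daily-resampled RDDs are named wiki_daily_views and ticker_daily_vol as in the regression cell below:

```python
# Count how many series still contain NaNs after resampling.
print(wiki_daily_views.filter(lambda kv: count_nans(kv[1]) > 0).count())
print(ticker_daily_vol.filter(lambda kv: count_nans(kv[1]) > 0).count())
```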
Fit a linear regression model to every pair in the joined wiki-ticker RDD and extract R^2 scores. | def regress(X, y):
model = linear_model.LinearRegression()
model.fit(X, y)
score = model.score(X, y)
return (score, model)
lag = 2
lead = 2
joined = wiki_daily_views.flatMap(get_page_symbol) \
.join(ticker_daily_vol)
models = joined.mapValues(lambda x: regress(tsutil.lead_and_lag(lead, lag, x[0]), x[1][lag:-lead]))
models.cache()
models.count() | ds-for-ws-student.ipynb | cdalzell/ds-for-wall-street | apache-2.0 |
To clone the local repository | $ git clone https://github.com/VandyAstroML/Vanderbilt_Computational_Bootcamp.git | notebooks/Week_02/02_Python_Git_Github_Tutorial.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
Problem 2
Create a generator that yields "n" random numbers between a low and high number (that are inputs). Note: Use the random library. For example: | import random
random.randint(1,10)
def rand_num(low,high,n):
for i in range(n):  # yield exactly n numbers
yield random.randint(low, high)
for num in rand_num(1,10,12):
print num | Iterators and Generators Homework.ipynb | spacedrabbit/PythonBootcamp | mit |
Problem 3
Use the iter() function to convert the string below | s = 'hello'
#code here
for letter in iter(s):
print letter | Iterators and Generators Homework.ipynb | spacedrabbit/PythonBootcamp | mit |
Problem 4
Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement.
A generator, which uses a yield statement, returns an iterator object. The iterator yields one value each time it is asked to advance, and it remembers its state between yields. So in cases where a normal function would build and return an entire list at once, a generator produces only the current element on each iteration, which is useful when the sequence is very large (or infinite) and you do not want to hold it all in memory.
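To make the difference concrete, here is a small illustrative example (not part of the original homework):

```python
def first_squares_list(n):
    # Builds and returns the whole list at once.
    return [i * i for i in range(n)]

def first_squares_gen(n):
    # Yields one value at a time, resuming where it left off.
    for i in range(n):
        yield i * i

squares = first_squares_gen(5)
print(next(squares))  # 0 -- only the first value has been computed so far
print(next(squares))  # 1 -- the generator resumed from its saved state
```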
Extra Credit!
Can you explain what gencomp is in the code below? (Note: We never covered this in lecture! You will have to do some googling/Stack Overflowing!) | my_list = [1,2,3,4,5]
gencomp = (item for item in my_list if item > 3)
for item in gencomp:
print item | Iterators and Generators Homework.ipynb | spacedrabbit/PythonBootcamp | mit |
Dataset description
Datasource: http://yann.lecun.com/exdb/mnist/
The training set consists of 60,000 digits and the test set contains 10,000 samples. The images in the MNIST dataset consist of 28x28 pixels, and each pixel is represented by a grayscale intensity value. Here, we unroll the pixels into 1D row vectors, which represent the rows in our image array (784 values per row, i.e. per image). The second array (labels) returned by the load_mnist function contains the corresponding target variable, the class labels (integers 0-9) of the handwritten digits.
Csv version of the files are available in the following links.
CSV training set http://www.pjreddie.com/media/files/mnist_train.csv
CSV test set http://www.pjreddie.com/media/files/mnist_test.csv | training = pd.read_csv("/data/MNIST/mnist_train.csv", header = None)
testing = pd.read_csv("/data/MNIST/mnist_test.csv", header = None)
X_train, y_train = training.iloc[:, 1:].values, training.iloc[:, 0].values
X_test, y_test = testing.iloc[:, 1:].values, testing.iloc[:, 0].values
print("Shape of X_train: ", X_train.shape, "shape of y_train: ", y_train.shape)
print("Shape of X_test: ", X_test.shape, "shape of y_test: ", y_test.shape) | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Distribution of class frequencies | label_counts = pd.DataFrame({
"train": pd.Series(y_train).value_counts().sort_index(),
"test": pd.Series(y_test).value_counts().sort_index()
})
(label_counts / label_counts.sum()).plot.bar()
plt.xlabel("Class")
plt.ylabel("Frequency (normed)") | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Chisquare test on class frequencies | scipy.stats.chisquare(label_counts.train, label_counts.test) | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Display a few sample images | fig, axes = plt.subplots(5, 5, figsize = (15, 9))
for i, ax in enumerate(fig.axes):
img = X_train[i, :].reshape(28, 28)
ax.imshow(img, cmap = "Greys", interpolation="nearest")
ax.set_title("True: %i" % y_train[i])
plt.tight_layout() | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
View different variations of a digit | fig, axes = plt.subplots(10, 5, figsize = (15, 20))
for i, ax in enumerate(fig.axes):
img = X_train[y_train == 7][i, :].reshape(28, 28)
ax.imshow(img, cmap = "Greys", interpolation="nearest")
plt.tight_layout() | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Feature scaling | scaler = preprocessing.StandardScaler()
X_train_std = scaler.fit_transform(X_train.astype(np.float64))
X_test_std = scaler.transform(X_test.astype(np.float64)) | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Applying logistic regression classifier | %%time
lr = linear_model.LogisticRegression()
lr.fit(X_train_std, y_train)
print("accuracy:", lr.score(X_test_std, y_test)) | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Display wrong predictions | y_test_pred = lr.predict(X_test_std)
miss_indices = (y_test != y_test_pred)
misses = X_test[miss_indices]
print("No of miss: ", misses.shape[0])
fig, axes = plt.subplots(10, 5, figsize = (15, 20))
misses_actual = y_test[miss_indices]
misses_pred = y_test_pred[miss_indices]
for i, ax in enumerate(fig.axes):
img = misses[i].reshape(28, 28)
ax.imshow(img, cmap = "Greys", interpolation="nearest")
ax.set_title("A: %s, P: %d" % (misses_actual[i], misses_pred[i]))
plt.tight_layout() | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
Applying SGD classifier | inits = np.random.randn(10, 784)
inits = inits / np.std(inits, axis=1).reshape(10, -1)
%%time
est = linear_model.SGDClassifier(n_jobs=4, tol=1e-5, eta0 = 0.15,
learning_rate = "invscaling",
alpha = 0.01, max_iter= 100)
est.fit(X_train_std, y_train, inits)
print("accuracy", est.score(X_test_std, y_test), "iterations:", est.n_iter_)
fig, _ = plt.subplots(3, 4, figsize = (15, 10))
for i, ax in enumerate(fig.axes):
if i < est.coef_.shape[0]:
ax.imshow(est.coef_[i, :].reshape(28, 28), cmap = "bwr", interpolation="nearest")
else:
ax.remove()
pd.DataFrame(est.coef_[0, :].reshape(28, 28)) | Scikit - 10 Image Classification (MNIST dataset).ipynb | abulbasar/machine-learning | apache-2.0 |
First, we load all test data. | df = pd.read_csv('stress-ng/third/torpor-results/alltests.csv') | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Define some predicates for machines and limits | machine_is_issdm_6 = df['machine'] == 'issdm-6'
machine_is_t2_micro = df['machine'] == 't2.micro'
machine_is_kv3 = df['machine'] == 'kv3'
limits_is_with = df['limits'] == 'with'
limits_is_without = df['limits'] == 'without' | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Show the number of stress tests on different machines | df_issdm_6_with_limit = df[machine_is_issdm_6 & limits_is_with]
df_t2_micro_with_limit = df[machine_is_t2_micro & limits_is_with]
df_kv3_without_limit = df[machine_is_kv3 & limits_is_without]
print(
len(df_issdm_6_with_limit), # machine issdm-6 with limit
len(df[machine_is_issdm_6 & limits_is_without]), # machine issdm-6 without limit
len(df_t2_micro_with_limit), # machine t2.micro with limit
len(df[machine_is_t2_micro & limits_is_without]), # machine t2.micro without limit
len(df_kv3_without_limit) # machine kv3 without limit
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Because those failed benchmarks are not shown in the result report, we want to know how many common successful stress tests on the target machine and kv3. | issdm_6_with_limit_merge_kv3 = pd.merge(df_issdm_6_with_limit, df_kv3_without_limit, how='inner', on='benchmark')
t2_micro_with_limit_merge_kv3 = pd.merge(df_t2_micro_with_limit, df_kv3_without_limit, how='inner', on='benchmark')
print(
# common successful tests from issdm-6 and kv3
len(issdm_6_with_limit_merge_kv3),
# common successful tests from t2.micro and kv3
len(t2_micro_with_limit_merge_kv3)
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Read the normalized results. | df_normalized = pd.read_csv('stress-ng/third/torpor-results/alltests_with_normalized_results_1.1.csv') | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Show some of the data lines. The normalized value is the speedup relative to kv3. It becomes negative when the benchmark run on the target machine is slower than on kv3 (a slowdown). | df_normalized.head() | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Show the benchmarks that did not complete successfully on both issdm-6 and kv3. | df_issdm_6_with_limit[~df_issdm_6_with_limit['benchmark'].isin(issdm_6_with_limit_merge_kv3['benchmark'])] | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Show the benchmarks that did not complete successfully on both t2.micro and kv3. | df_t2_micro_with_limit[~df_t2_micro_with_limit['benchmark'].isin(t2_micro_with_limit_merge_kv3['benchmark'])] | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
We can find the number of benchmarks that speed up and slow down, respectively. | normalized_limits_is_with = df_normalized['limits'] == 'with'
normalized_limits_is_without = df_normalized['limits'] == 'without'
normalized_machine_is_issdm_6 = df_normalized['machine'] == 'issdm-6'
normalized_machine_is_t2_micro = df_normalized['machine'] == 't2.micro'
normalized_is_speed_up = df_normalized['normalized'] > 0
normalized_is_slow_down = df_normalized['normalized'] < 0
print(
# issdm-6 without CPU restriction
len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 1. speed-up
len(df_normalized[normalized_limits_is_without & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 2. slowdown
# issdm-6 with CPU restriction
len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_speed_up]), # 3. speed-up
len(df_normalized[normalized_limits_is_with & normalized_machine_is_issdm_6 & normalized_is_slow_down]), # 4. slowdown
# t2.micro without CPU restriction
len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 5. speed-up
len(df_normalized[normalized_limits_is_without & normalized_machine_is_t2_micro & normalized_is_slow_down]), # 6. slowdown
# t2.micro with CPU restriction
len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_speed_up]), # 7. speed-up
len(df_normalized[normalized_limits_is_with & normalized_machine_is_t2_micro & normalized_is_slow_down]) # 8. slowdown
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The average of normalized value for results under CPU restriction | print(
# For issdm-6
df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]['normalized'].mean(),
# For t2_micro
df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]['normalized'].mean()
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Experiment Results from issdm-6
Let's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on issdm-6. | df_normalized_issdm_6_without_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_without]
df_normalized_issdm_6_without_limit.normalized.hist(bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 without CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Here is the rank of normalized value from stress tests without CPU restriction | df_normalized_issdm_6_without_limit_sorted = df_normalized_issdm_6_without_limit.sort_values(by='normalized', ascending=0)
df_normalized_issdm_6_without_limit_sorted_head = df_normalized_issdm_6_without_limit_sorted.head()
df_normalized_issdm_6_without_limit_sorted_tail = df_normalized_issdm_6_without_limit_sorted.tail()
df_normalized_issdm_6_without_limit_sorted_head.append(df_normalized_issdm_6_without_limit_sorted_tail) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Now let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on issdm-6. | df_normalized_issdm_6_with_limit = df_normalized[normalized_machine_is_issdm_6 & normalized_limits_is_with]
df_normalized_issdm_6_with_limit.normalized.hist(color='Orange', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 with CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Here is the rank of normalized value from stress tests with CPU restriction | df_normalized_issdm_6_with_limit_sorted = df_normalized_issdm_6_with_limit.sort_values(by='normalized', ascending=0)
df_normalized_issdm_6_with_limit_sorted_head = df_normalized_issdm_6_with_limit_sorted.head()
df_normalized_issdm_6_with_limit_sorted_tail = df_normalized_issdm_6_with_limit_sorted.tail()
df_normalized_issdm_6_with_limit_sorted_head.append(df_normalized_issdm_6_with_limit_sorted_tail) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
We notice that the stressng-cpu-jenkin looks like an outlier. Let's redraw the histogram without this one. | df_normalized_issdm_6_no_outlier = df_normalized_issdm_6_with_limit['benchmark'] != 'stressng-cpu-jenkin'
df_normalized_issdm_6_with_limit[df_normalized_issdm_6_no_outlier].normalized.hist(color='Green', bins=150, figsize=(25,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on issdm-6 with CPU restriction (no outlier)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Summary
We got the boundary of normalized value on issdm-6 from -29.394675 to 54.266945 by using parameters --cpuset-cpus=1 --cpu-quota=7234 --cpu-period=100000, which means the docker container only uses 7.234ms CPU worth of run-time every 100ms on cpu 1 (See cpu for more details).
Experiment Results from t2.micro
Let's have a look at the histogram of frequency of normalized value based on stress tests without CPU restriction running on t2.micro. | df_normalized_t2_micro_without_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_without]
df_normalized_t2_micro_without_limit.normalized.hist(bins=150,figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro without CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Here is the rank of normalized value from stress tests without CPU restriction | df_normalized_t2_micro_without_limit_sorted = df_normalized_t2_micro_without_limit.sort_values(by='normalized', ascending=0)
df_normalized_t2_micro_without_limit_sorted_head = df_normalized_t2_micro_without_limit_sorted.head()
df_normalized_t2_micro_without_limit_sorted_tail = df_normalized_t2_micro_without_limit_sorted.tail()
df_normalized_t2_micro_without_limit_sorted_head.append(df_normalized_t2_micro_without_limit_sorted_tail) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Let's have a look at the histogram of frequency of normalized value based on stress tests with CPU restriction running on t2.micro. | df_normalized_t2_micro_with_limit = df_normalized[normalized_machine_is_t2_micro & normalized_limits_is_with]
df_normalized_t2_micro_with_limit.normalized.hist(color='Orange', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro with CPU restriction', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Here is the rank of normalized value from stress tests with CPU restriction | df_normalized_t2_micro_with_limit_sorted = df_normalized_t2_micro_with_limit.sort_values(by='normalized', ascending=0)
df_normalized_t2_micro_with_limit_sorted_head = df_normalized_t2_micro_with_limit_sorted.head()
df_normalized_t2_micro_with_limit_sorted_tail = df_normalized_t2_micro_with_limit_sorted.tail()
df_normalized_t2_micro_with_limit_sorted_head.append(df_normalized_t2_micro_with_limit_sorted_tail) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
We notice that the stressng-memory-stack looks like an outlier. Let's redraw the histogram without this one. | df_normalized_t2_micro_no_outlier = df_normalized_t2_micro_with_limit['benchmark'] != 'stressng-memory-stack'
df_normalized_t2_micro_with_limit[df_normalized_t2_micro_no_outlier].normalized.hist(color='Green', bins=150, figsize=(30,12), xlabelsize=20, ylabelsize=20)
plt.title('stress tests run on t2.micro with CPU restriction (no outlier)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The stressng-cpu-jenkin benchmark is a collection of (non-cryptographic) hash functions for multi-byte keys. See Jenkins hash function from Wikipedia for more details.
Summary
We got the boundary of normalized value on t2.micro from -198.440535 to 119.904761 by using parameters --cpuset-cpus=0 --cpu-quota=25750 --cpu-period=100000, which means the docker container only uses 25.75ms of CPU run-time every 100ms on cpu 0 (See cpu for more details).
Verification
Now we use 9 other benchmark programs to verify this result. These programs are,
- blogbench: filesystem benchmark.
- compilebench: It tries to age a filesystem by simulating some of the disk IO common in creating, compiling, patching, stating and reading kernel trees.
- fhourstones: This integer benchmark solves positions in the game of connect-4.
- himeno: Himeno benchmark score is affected by the performance of a computer, especially memory band width. This benchmark program takes measurements to proceed major loops in solving the Poisson’s equation solution using the Jacobi iteration method.
- interbench: It is designed to measure the effect of changes in Linux kernel design or system configuration changes such as cpu, I/O scheduler and filesystem changes and options.
- nbench: NBench(Wikipedia) is a synthetic computing benchmark program developed in the mid-1990s by the now defunct BYTE magazine intended to measure a computer's CPU, FPU, and Memory System speed.
- pybench: It is a collection of tests that provides a standardized way to measure the performance of Python implementations.
- ramsmp: RAMspeed is a free open source command line utility to measure cache and memory performance of computer systems.
- stockfish-7: It is a simple benchmark by letting Stockfish analyze a set of positions for a given limit each.
Read verification tests data. | df_verification = pd.read_csv('verification/results/2/alltests_with_normalized_results_1.1.csv') | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Show number of test benchmarks. | len(df_verification) / 2 | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Order the test results by the absolute of normalized value | df_verification_rank = df_verification.reindex(df_verification.normalized.abs().sort_values(ascending=0).index)
df_verification_rank.head(8) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Verification Tests on issdm-6
Histogram of frequency of normalized value. | df_verification_issdm_6 = df_verification[df_verification['machine'] == 'issdm-6']
df_verification_issdm_6.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on issdm-6', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Print the max and min normalized values, | print(
df_verification_issdm_6['normalized'].max(),
df_verification_issdm_6['normalized'].min()
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The average normalized value is, | df_verification_issdm_6['normalized'].mean() | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
If we remove all nbench tests, the frequency histogram changes to | df_verification_issdm_6_no_nbench = df_verification_issdm_6[~df_verification_issdm_6['benchmark'].str.startswith('nbench')]
df_verification_issdm_6_no_nbench.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on issdm-6 (no nbench)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The max and min normalized values change to, | print(
df_verification_issdm_6_no_nbench['normalized'].max(),
df_verification_issdm_6_no_nbench['normalized'].min()
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The average normalized value changes to, | df_verification_issdm_6_no_nbench['normalized'].mean() | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Verification Tests on t2.micro
Histogram of frequency of normalized value. | df_verification_t2_micro = df_verification[df_verification['machine'] == 't2.micro']
df_verification_t2_micro.normalized.hist(color='y', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests run on t2.micro', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The average normalized value of the verification benchmarks is, | df_verification_t2_micro['normalized'].mean() | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Let's see the frequency histogram after removing right-most four outliers. | df_verification_top_benchmakrs = df_verification_rank[df_verification_rank['machine'] == 't2.micro'].head(4)['benchmark']
df_verification_t2_micro_no_outliers = df_verification_t2_micro[~df_verification_t2_micro['benchmark'].isin(df_verification_top_benchmakrs)]
df_verification_t2_micro_no_outliers.normalized.hist(color='greenyellow', bins=150,figsize=(20,10), xlabelsize=20, ylabelsize=20)
plt.title('verification tests on t2.micro (no outliers)', fontsize=30)
plt.xlabel('Normalized Value (re-execution / original)', fontsize=25)
plt.ylabel('Frequency (# of benchmarks)', fontsize=25) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Print the max and min normalized values, | print(
df_verification_t2_micro_no_outliers['normalized'].max(),
df_verification_t2_micro_no_outliers['normalized'].min()
) | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
The average normalized value without the four outliers is, | df_verification_t2_micro_no_outliers['normalized'].mean() | experiments/benchmarking/visualize_2.ipynb | ljishen/kividry | apache-2.0 |
Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch. | def process_decoding_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
td_end_removed = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
td_start_added = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), td_end_removed], 1)
return td_start_added
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_decoding_input(process_decoding_input) | language-translation/Project 4 Submission/dlnd_language_translation.ipynb | tkurfurst/deep-learning | mit |
Identifying source code lines
We can now focus on the changed source code lines. We can identify each line's indentation depth (a simple proxy for complexity) by comparing a line's length before and after stripping leading whitespace. | diff_raw["i"] = diff_raw.raw.str[1:].str.len() - diff_raw.raw.str[1:].str.lstrip().str.len()
diff_raw.head()
%%timeit
diff_raw['added'] = diff_raw.raw.str.extract("^\+( *).*$", expand=True)[0].str.len()
diff_raw['deleted'] = diff_raw.raw.str.extract("^-( *).*$", expand=True)[0].str.len()
diff_raw.head() | prototypes/Complexity over Time (easier way maybe).ipynb | feststelltaste/software-analytics | gpl-3.0 |
Create an input pipeline using tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model.
Here, we create an input pipeline using tf.data. This function is missing two lines. Correct and run the cell. | # A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
# TODO 1 -- Your code here
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds | courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
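For reference, the two missing lines are typically the ones that separate the label column and build the tf.data.Dataset. A possible completion (assuming the label column is median_house_value, as in this dataset) is sketched below:

```python
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    labels = dataframe.pop('median_house_value')  # assumed label column
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
```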
Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable. | # TODO 1
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.
Numeric columns
The output of a feature column becomes the input to the model. A numeric column is the simplest type of column. It is used to represent real valued features. When using this column, your model will receive the column value from the dataframe unchanged.
In the California housing prices dataset, most columns from the dataframe are numeric. Let's create a variable called numeric_cols to hold only the numerical feature columns. | # TODO 1
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
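One possible answer, assuming the standard columns of the California housing CSV (adjust the list if your file differs):

```python
# Numerical feature columns only; the label column is excluded.
numeric_cols = ['longitude', 'latitude', 'housing_median_age', 'total_rooms',
                'total_bedrooms', 'population', 'households', 'median_income']
```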
Scaler function
It is very important for numerical variables to be scaled before they are fed into the neural network. Here we use min-max scaling: we create a function named 'get_scal' which takes the name of a numerical feature and returns a 'minmax' function, to be passed to tf.feature_column.numeric_column() as its normalizer_fn parameter. The 'minmax' function itself takes a numerical value of that feature and returns its scaled value.
Next, we scale the numerical feature columns that we assigned to the variable numeric_cols. | # Scalar def get_scal(feature):
# TODO 1
# TODO 1
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
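A minimal min-max scaler along the lines described above might look like this sketch; train_df is an assumed name for the training dataframe the statistics are taken from:

```python
def get_scal(feature):
    # Returns a closure that min-max scales values of `feature`
    # using min/max statistics from the (assumed) training dataframe.
    def minmax(x):
        mini = train_df[feature].min()
        maxi = train_df[feature].max()
        return (x - mini) / (maxi - mini)
    return minmax

# Wrap each numeric column with the scaler as its normalizer_fn.
feature_columns = []
for col in numeric_cols:
    feature_columns.append(
        tf.feature_column.numeric_column(col, normalizer_fn=get_scal(col)))
```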
Now that we have created an input pipeline using tf.data and compiled a Keras Sequential Model, we now create the input function for the test data and initialize the test_predict variable. | # TODO 1
test_predict = test_input_fn(dict(test_data)) | courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
The returned array contains the predicted values. What do these numbers mean? Let's compare them to the test set.
Go to the test.csv you read in a few cells up. Locate the first line and find the median_house_value - which should be 249,000 dollars near the ocean. What value did your model predict for the median_house_value? Was it a solid model performance? Let's see if we can improve this a bit with feature engineering!
Engineer features to create categorical and numerical features
Now we create a cell that indicates which features will be used in the model.
Note: Be sure to bucketize 'housing_median_age' and ensure that 'ocean_proximity' is one-hot encoded. And, don't forget your numeric values! | # TODO 2
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Categorical Feature
In this dataset, 'ocean_proximity' is represented as a string. We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector.
Next, we create a categorical feature using 'ocean_proximity'. | # TODO 2
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
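A hedged sketch of the one-hot encoding; the vocabulary values are assumed from the standard California housing data and can be checked with dataframe['ocean_proximity'].unique():

```python
ocean = tf.feature_column.categorical_column_with_vocabulary_list(
    'ocean_proximity', ['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN'])
ocean_one_hot = tf.feature_column.indicator_column(ocean)
```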
Bucketized Feature
Often, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider our raw data that represents a home's age. Instead of representing the house age as a numeric column, we could split the home age into several buckets using a bucketized column. Notice the one-hot values below describe which age range each row matches.
Next we create a bucketized column using 'housing_median_age' | # TODO 2
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
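One way to bucketize the age column (the bucket boundaries below are illustrative, not prescribed by the lab):

```python
age = tf.feature_column.numeric_column('housing_median_age')
age_buckets = tf.feature_column.bucketized_column(
    age, boundaries=[10, 20, 30, 40, 50])
```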
Feature Cross
Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features.
Next, we create a feature cross of 'housing_median_age' and 'ocean_proximity'. | # TODO 2
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
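A sketch of the cross, reusing the bucketized age column from the previous step; hash_bucket_size is an assumed value:

```python
age_x_ocean = tf.feature_column.crossed_column(
    [age_buckets, 'ocean_proximity'], hash_bucket_size=100)
age_x_ocean_one_hot = tf.feature_column.indicator_column(age_x_ocean)
```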
Next we create a prediction model. Note: You may use the same values from the previous prediction. | # TODO 2
| courses/machine_learning/deepdive2/feature_engineering/labs/3_keras_basic_feat_eng-lab.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Import data | folder = 'hillary-clinton-emails/'
emails = pd.read_csv(folder + 'Emails.csv', index_col='Id')
emails.head(5) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Analyse Emails | emails.head() | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
The columns ExtractedBodyText is supposed to be the content of the mail but some of the mail have a ExtractedBodyText = NaN but the Rawtext seems to contains something | emails.columns
print('Number of emails: ', len(emails))
bodyNaN = emails.ExtractedBodyText.isnull().sum()
print('Number of emails with ExtractedBodyText=NaN: {} ({:.2f}%)'.format(bodyNaN, bodyNaN / len(emails) * 100)) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
We could also use the subject, since it is usually a summary of the mail | subjectNaN = emails.ExtractedSubject.isnull().sum()
print('Number of emails with ExtractedSubject=NaN: {} ({:.2f}%)'.format(subjectNaN, subjectNaN / len(emails) * 100)) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Now let's combine the subject and the body, and drop the emails where both the subject and the body are NaN | subBodyNan = emails[np.logical_and(emails.ExtractedBodyText.isnull(),emails.ExtractedSubject.isnull())]
print('Number of emails where both subject and body are NaN: {} ({:.2f}%)'.format(len(subBodyNan), len(subBodyNan) / len(emails) * 100)) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Well, that number is small enough to drop every email where both the extracted subject and the extracted body are NaN.
Let's drop them and create a new column SubjectBody that is the concatenation of the two columns ExtractedSubject and ExtractedBodyText. From now on we will work with that column. | emails = emails[~ np.logical_and(emails.ExtractedBodyText.isnull(), emails.ExtractedSubject.isnull())]
len(emails)
emails.ExtractedBodyText.fillna('',inplace=True)
emails.ExtractedSubject.fillna('',inplace=True)
emails['SubjectBody'] = emails.ExtractedBodyText + ' ' + emails.ExtractedSubject
emails.SubjectBody.head() | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Last check to be sure that our column of interest doesn't have any more NaNs | print('Number of NaN in column SubjectBody:', emails.SubjectBody.isnull().sum()) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Keep only emails that mention a country
Structure of a country in pycountry.countries | list(pycountry.countries)[0] | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
First, we create a dataframe with one row per country and count, for each country, its occurrences in the emails.
Since a country can be referenced in many ways (Switzerland, switzerland, CH), we need to consider all the possible forms.
We may have a problem with words that have several meanings, like US (the country) and us (the pronoun), so we can't simply lowercase all country names and all emails and compare.
We first tried the following technique:
1. the country name can appear in lower case, with the first letter in upper case, or entirely in upper case
2. alpha_2 and alpha_3 codes are always used in upper case
But we still have a lot of problems. Indeed, many emails contain sentences written entirely in upper case (see below):
- SUBJECT TO AGREEMENT ON SENSITIVE INFORMATION & REDACTIONS. NO FOIA WAIVER. STATE-SCB0045012
For example, this sentence will match Togo because of the TO and also Norway because of the NO. Another example is Andorra, which appears in 55 emails thanks to AND.
At first we also wanted to keep the upper case, since it can be helpful for the sentiment analysis. Look at these two sentences and their corresponding scores:
- VADER is very smart, handsome, and funny.: compound: 0.8545, neg: 0.0, neu: 0.299, pos: 0.701,
- VADER is VERY SMART, handsome, and FUNNY.: compound: 0.9227, neg: 0.0, neu: 0.246, pos: 0.754,
The scores are not the same. But since we have a lot of upper-case text that has nothing to do with sentiment, we will put all emails in lower case. We will also remove the stopwords.
We know that this removes the occurrences of USA written as 'us', but it also removes 'and', 'can', 'it', ... | emails.SubjectBody.head(100).apply(print) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
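The case-sensitivity effect quoted above can be reproduced directly; the scores come from VADER's own documentation, and this snippet simply reruns them (it assumes the vader_lexicon resource has been downloaded):

```python
# Quick check of VADER's case sensitivity on the two example sentences.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sid = SentimentIntensityAnalyzer()
for sentence in ["VADER is very smart, handsome, and funny.",
                 "VADER is VERY SMART, handsome, and FUNNY."]:
    print(sentence, sid.polarity_scores(sentence))
```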
Tokenize and remove stopwords |
from gensim import corpora, models, utils
from nltk.corpus import stopwords
sw = stopwords.words('english') + ['re', 'fw', 'fvv', 'fwd']
sw = sw + ['pm', "a", "about", "above", "above", "across", "after", "afterwards", "again", "against", "all", "almost", "alone", "along", "already", "also","although","always","am","among", "amongst", "amoungst", "amount", "an", "and", "another", "any","anyhow","anyone","anything","anyway", "anywhere", "are", "around", "as", "at", "back","be","became", "because","become","becomes", "becoming", "been", "before", "beforehand", "behind", "being", "below", "beside", "besides", "between", "beyond", "bill", "both", "bottom","but", "by", "call", "can", "cannot", "cant", "co", "con", "could", "couldnt", "cry", "de", "describe", "detail", "do", "done", "down", "due", "during", "each", "eg", "eight", "either", "eleven","else", "elsewhere", "empty", "enough", "etc", "even", "ever", "every", "everyone", "everything", "everywhere", "except", "few", "fifteen", "fify", "fill", "find", "fire", "first", "five", "for", "former", "formerly", "forty", "found", "four", "from", "front", "full", "further", "get", "give", "go", "had", "has", "hasnt", "have", "he", "hence", "her", "here", "hereafter", "hereby", "herein", "hereupon", "hers", "herself", "him", "himself", "his", "how", "however", "hundred", "ie", "if", "in", "inc", "indeed", "interest", "into", "is", "it", "its", "itself", "keep", "last", "latter", "latterly", "least", "less", "ltd", "made", "many", "may", "me", "meanwhile", "might", "mill", "mine", "more", "moreover", "most", "mostly", "move", "much", "must", "my", "myself", "name", "namely", "neither", "never", "nevertheless", "next", "nine", "no", "nobody", "none", "noone", "nor", "not", "nothing", "now", "nowhere", "of", "off", "often", "on", "once", "one", "only", "onto", "or", "other", "others", "otherwise", "our", "ours", "ourselves", "out", "over", "own","part", "per", "perhaps", "please", "put", "rather", "re", "same", "see", "seem", "seemed", "seeming", "seems", "serious", "several", "she", "should", "show", "side", "since", "sincere", "six", "sixty", "so", "some", "somehow", "someone", "something", "sometime", "sometimes", "somewhere", "still", "such", "system", "take", "ten", "than", "that", "the", "their", "them", "themselves", "then", "thence", "there", "thereafter", "thereby", "therefore", "therein", "thereupon", "these", "they", "thickv", "thin", "third", "this", "those", "though", "three", "through", "throughout", "thru", "thus", "to", "together", "too", "top", "toward", "towards", "twelve", "twenty", "two", "un", "under", "until", "up", "upon", "us", "very", "via", "was", "we", "well", "were", "what", "whatever", "when", "whence", "whenever", "where", "whereafter", "whereas", "whereby", "wherein", "whereupon", "wherever", "whether", "which", "while", "whither", "who", "whoever", "whole", "whom", "whose", "why", "will", "with", "within", "without", "would", "yet", "you", "your", "yours", "yourself", "yourselves", "the"]
# tokenize and lowercase the combined subject+body, then drop stopwords
def fil(row):
t = utils.simple_preprocess(row.SubjectBody)
filt = list(filter(lambda x: x not in sw, t))
return ' '.join(filt)
emails['SubjectBody'] = emails.apply(fil, axis=1)
emails.head(10)
countries = np.array([[country.name.lower(), country.alpha_2.lower(), country.alpha_3.lower()] for country in list(pycountry.countries)])
countries[:5]
countries.shape
countries = pd.DataFrame(countries, columns=['Name', 'Alpha_2', 'Alpha_3'])
countries.head()
countries.shape
countries.isin(['aruba']).any().any()
# True if any token of the email matches a country name, alpha_2, or alpha_3 code
def check_country(row):
return countries.isin(row.SubjectBody.split()).any().any()
emails_country = emails[emails.apply(check_country, axis=1)]
len(emails_country)
| HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Sentiment analysis
We explained our preprocessing above. Now we will run the sentiment analysis on the subject and body only.
So we will only consider the SubjectBody column | sentiments = pd.DataFrame(emails_country.SubjectBody)
sentiments.head()
sentiments.shape | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Analysis
We will do a sentiment analysis on each email and then compute a score for each country.
We will compare two different modules:
- nltk.sentiment.vader, which attributes a score to each sentence
- the liuhu opinion lexicon, which has a set of positive words and a set of negative words; we count the positive and negative words in each email and compute the mean
Vader (That part takes time) | sentiments.head()
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
def sentiment_analysis(row):
score = sid.polarity_scores(row)
return pd.Series({'Pos': score['pos'], 'Neg': score['neg'], 'Compound_':score['compound'] })
sentiments = pd.concat([sentiments, sentiments.SubjectBody.apply(sentiment_analysis)], axis=1)
sentiments.to_csv('mailScore.csv')
sentiments.head() | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Liuhu | from nltk.corpus import opinion_lexicon
sentimentsLihuh = pd.read_csv('mailScore.csv', index_col='Id')
# transform the lists of positive and negative words into a dict for fast lookup
dicPosNeg = dict()
for word in opinion_lexicon.positive():
dicPosNeg[word] = 1
for word in opinion_lexicon.negative():
dicPosNeg[word] = -1
def sentiment_liuhu(sentence):
counter = []
for word in sentence.split():
value = dicPosNeg.get(word, -999)
if value != -999:
counter.append(value)
if len(counter) == 0 :
return pd.Series({'Sum_': int(0), 'Mean_': int(0) })
return pd.Series({'Sum_': np.sum(counter), 'Mean_': np.mean(counter) })
sentimentsLihuh = pd.concat([sentimentsLihuh, sentimentsLihuh.SubjectBody.apply(sentiment_liuhu)], axis=1)
sentimentsLihuh.to_csv('mailScore2.csv')
sentimentsLihuh | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Aggregate by countries
We group by country and compute the mean of each score | sentiments = pd.read_csv('mailScore.csv', index_col='Id')
sentiments.head()
def aggScoreByCountry(country):
condition = sentiments.apply(lambda x: np.any(country.isin(x.SubjectBody.split())), axis=1)
sent = sentiments[condition]
if len(sent) == 0:
print(country.Name, -999)
return pd.Series({'Compound_':-999, 'Mean_':-999, 'Appearance': int(len(sent))})
compound_ = np.mean(sent.Compound_)
mean_ = np.mean(sent.Mean_)
print(country.Name, compound_)
return pd.Series({'Compound_': compound_, 'Mean_': mean_, 'Appearance': int(len(sent))})
countries = pd.concat([countries, countries.apply(lambda x: aggScoreByCountry(x), axis=1)],axis=1)
countries.to_csv('countriesScore.csv') | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Drop all countries that have a score of -999 (they never appear in the emails) | countries = countries[countries.Compound_ != -999]
len(countries) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
That is a lot of countries. We will also use a threshold on the number of appearances: we only keep countries that are mentioned in a minimum number of emails | minimum_appearance = 15
countries_min = countries[countries.Appearance >= minimum_appearance]
len(countries_min) | HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Plot
We plot the two analyses. The first plot shows a bar chart of the Vader score for each country, with the bar color encoding how many emails mention that country.
The second plot shows the Liuhu score, again with the color encoding the number of mentions.
We only consider countries that are mentioned at least 15 times; otherwise we end up with too many countries. | # Set up colors : red to green
countries_sorted = countries_min.sort_values(by='Compound_')
plt.figure(figsize=(16, 6), dpi=80)
appearance = np.array(countries_sorted.Appearance)
colors = cm.RdYlGn(appearance / float(max(appearance)))
plot = plt.scatter(appearance, appearance, c=appearance, cmap = 'RdYlGn')
plt.clf()
colorBar = plt.colorbar(plot)
colorBar.ax.set_title("Appearance")
index = np.arange(len(countries_sorted))
bar_width = 0.95
plt.bar(range(countries_sorted.shape[0]), countries_sorted.Compound_, align='center', tick_label=countries_sorted.Name, color=colors)
plt.xticks(rotation=90, ha='center')
plt.title('Using Vader')
plt.xlabel('Countries')
plt.ylabel('Vader Score')
countries_sorted = countries_min.sort_values(by='Mean_')
plt.figure(figsize=(16, 6), dpi=80)
appearance = np.array(countries_sorted.Appearance)
colors = cm.RdYlGn(appearance / float(max(appearance)))
plot = plt.scatter(appearance, appearance, c=appearance, cmap = 'RdYlGn')
plt.clf()
colorBar = plt.colorbar(plot)
colorBar.ax.set_title("Appearance")
index = np.arange(len(countries_sorted))
bar_width = 0.95
plt.bar(range(countries_sorted.shape[0]), countries_sorted.Mean_, align='center', tick_label=countries_sorted.Name, color=colors)
plt.xticks(rotation=90, ha='center')
plt.title('Liuhu Score')
plt.xlabel('Countries')
plt.ylabel('Liuhu Score')
| HW05-TamingText/Part2.ipynb | christophebertrand/ada-epfl | mit |
Let's try several polynomial fits to the data: | for j,degree in enumerate(degrees):
for i in range(sub):
#create data - sample from sine wave
x = np.random.random((n,1))*2*np.pi
y = np.sin(x) + np.random.normal(mean,std,(n,1))
#TODO
#create features corresponding to degree - ex: 1, x, x^2, x^3...
A =
#TODO:
#fit model using least squares solution (linear regression)
#later include ridge regression/normalization
coeffs =
#store predictions for each sampling
preds[:,i] = poly.fit_transform(Xtest).dot(coeffs)[:,0]
#plot 9 images
if i < 9:
plt.subplot(3,3,i+1)
plt.plot(values,poly.fit_transform(values).dot(coeffs),x,y,'.b')
plt.axis([0,2*np.pi,-2,2])
plt.suptitle('PolyFit = %i' % (degree))
plt.show()
#TODO
#Calculate mean bias, variance, and MSE (UNCOMMENT CODE BELOW!)
#bias[j] =
#variance[j] =
#mse[j] =
| handsOn_lecture10_bias-variance_tradeoff/draft/bias_variance_solutions.ipynb | eecs445-f16/umich-eecs445-f16 | mit |
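One way the TODOs above could be completed — a sketch under the assumption that `poly = PolynomialFeatures(degree)` is (re)built for the current degree and that `Xtest` holds the held-out inputs at which `preds` is evaluated:

```python
# Sketch of the missing pieces (names follow the loop above; assumptions noted inline).
from sklearn.preprocessing import PolynomialFeatures

# inside the degree loop: features 1, x, x^2, ..., x^degree
poly = PolynomialFeatures(degree)
A = poly.fit_transform(x)

# least-squares fit of the polynomial coefficients (plain linear regression)
coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)

# after the inner sampling loop, compare against the noiseless target sin(Xtest)
f_true = np.sin(Xtest)
mean_pred = preds.mean(axis=1, keepdims=True)
bias[j] = np.mean((mean_pred - f_true) ** 2)   # squared bias
variance[j] = np.mean(preds.var(axis=1))
mse[j] = np.mean((preds - f_true) ** 2)
```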
Here's some of the code you've written so far. Start by running it again. | # Import helpful libraries
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
# Load the data, and separate the target
iowa_file_path = '../input/train.csv'
home_data = pd.read_csv(iowa_file_path)
y = home_data.SalePrice
# Create X (After completing the exercise, you can return to modify this line!)
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
# Select columns corresponding to features, and preview the data
X = home_data[features]
X.head()
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Define a random forest model
rf_model = RandomForestRegressor(random_state=1)
rf_model.fit(train_X, train_y)
rf_val_predictions = rf_model.predict(val_X)
rf_val_mae = mean_absolute_error(rf_val_predictions, val_y)
print("Validation MAE for Random Forest Model: {:,.0f}".format(rf_val_mae)) | notebooks/machine_learning/raw/ex7.ipynb | Kaggle/learntools | apache-2.0 |