Let's use apply to calculate a meaningful baseball statistic, slugging percentage: $$SLG = \frac{1B + (2 \times 2B) + (3 \times 3B) + (4 \times HR)}{AB}$$ And just for fun, we will format the resulting estimate.
slg = lambda x: (x['h'] - x['X2b'] - x['X3b'] - x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr']) / (x['ab'] + 1e-6)
baseball.apply(slg, axis=1).apply(lambda x: '%.3f' % x)
Sorting and Ranking
Pandas objects include methods for re-ordering data.
baseball_newind.sort_index().head()
baseball_newind.sort_index(ascending=False).head()
baseball_newind.sort_index(axis=1).head()
We can also use order to sort a Series by value, rather than by label.
baseball.hr.order(ascending=False)
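Note: Series.order was deprecated in pandas 0.17 and later removed; if you are running a recent pandas, the hedged equivalent is sort_values:

```python
# same value-based sort in pandas >= 0.17
baseball.hr.sort_values(ascending=False)
```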
For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_index:
baseball[['player','sb','cs']].sort_index(ascending=[False,True], by=['sb', 'cs']).head(10)
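In recent pandas the by argument likewise moved to sort_values; a hedged equivalent of the call above:

```python
# pandas >= 0.17 spelling of the same multi-column sort
baseball[['player', 'sb', 'cs']].sort_values(by=['sb', 'cs'], ascending=[False, True]).head(10)
```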
Ranking does not re-arrange data, but instead returns a Series of ranks, placing each value relative to the others in the Series.
baseball.hr.rank()
Ties are assigned the mean value of the tied ranks, which may result in decimal values.
pd.Series([100,100]).rank()
Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset:
baseball.hr.rank(method='first')
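rank supports several other tie-breaking strategies besides 'first'; 'average' is the default shown earlier, and 'min', 'max', and 'dense' are also part of the API:

```python
# 'min' gives tied values the lowest rank of the group;
# 'dense' is like 'min' but leaves no gaps in the rank sequence
pd.Series([100, 100, 50]).rank(method='min')
pd.Series([100, 100, 50]).rank(method='dense')
```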
Calling the DataFrame's rank method results in the ranks of all columns:
baseball.rank(ascending=False).head()
baseball[['r','h','hr']].rank(ascending=False).head()
Exercise
Calculate on-base percentage for each player, and return the ordered series of estimates. $$OBP = \frac{H + BB + HBP}{AB + BB + HBP + SF}$$
Hierarchical indexing
In the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields.
baseball_h = baseball.set_index(['year', 'team', 'player'])
baseball_h.head(10)
This index is a MultiIndex object that consists of a sequence of tuples, the elements of which are combinations of the three columns used to create the index. Where there are multiple repeated values, Pandas does not print the repeats, making it easy to identify groups of values.
baseball_h.index[:10]
baseball_h.index.is_unique
baseball_h.ix[(2007, 'ATL', 'francju01')]
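Note that .ix was deprecated in pandas 0.20; on a MultiIndex the hedged modern equivalent is .loc with a tuple key:

```python
# label-based lookup on all three index levels in current pandas
baseball_h.loc[(2007, 'ATL', 'francju01')]
```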
Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:
mb = pd.read_csv("data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head(10)
mb.index
With a hierarchical index, we can select subsets of the data based on a partial index:
mb.ix['Proteobacteria']
Hierarchical indices can be created on either or both axes. Here is a trivial example:
frame = pd.DataFrame(np.arange(12).reshape((4, 3)),
                     index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
                     columns=[['Ohio', 'Ohio', 'Colorado'], ['Green', 'Red', 'Green']])
frame
If you want to get fancy, both the row and column indices themselves can be given names:
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame
With this, we can do all sorts of custom indexing:
frame.ix['a']['Ohio']
frame.ix['b', 2]['Colorado']
Additionally, the order of the set of indices in a hierarchical MultiIndex can be changed by swapping them pairwise:
mb.swaplevel('Patient', 'Taxon').head()
Data can also be sorted by any index level, using sortlevel:
mb.sortlevel('Patient', ascending=False).head()
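sortlevel was later absorbed into sort_index; a hedged equivalent for current pandas:

```python
# level-aware sorting in pandas >= 0.20
mb.sort_index(level='Patient', ascending=False).head()
```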
Missing data
The occurrence of missing data is so prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand. Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy).
foo = pd.Series([NaN, -3, None, 'foobar'])  # NaN is numpy's nan, assumed imported earlier in the notebook
foo
foo.isnull()
Missing values may be dropped or indexed out:
bacteria2
bacteria2.dropna()
bacteria2[bacteria2.notnull()]
By default, dropna drops entire rows in which one or more values are missing.
data
data.dropna()
This can be overridden by passing the how='all' argument, which only drops a row when every field is a missing value.
data.dropna(how='all')
This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument.
data.ix[7, 'year'] = nan  # nan is numpy.nan
data
data.dropna(thresh=4)
This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects. If we want to drop missing values column-wise instead of row-wise, we use axis=1.
data.dropna(axis=1)
Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna method.
bacteria2.fillna(0)
data.fillna({'year': 2013, 'treatment': 2})
Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, we prefer it this way!).
data
We can alter values in-place using inplace=True.
_ = data.year.fillna(2013, inplace=True)
data
Missing values can also be interpolated, using any one of a variety of methods:
bacteria2.fillna(method='bfill')
bacteria2.fillna(bacteria2.mean())
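Series also exposes a dedicated interpolate method; a minimal sketch on a toy series (the values are illustrative):

```python
# default linear interpolation fills the gap between observed values
pd.Series([1.0, None, None, 7.0]).interpolate()
```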
Data summarization
We often wish to summarize data in Series or DataFrame objects, so that they can more easily be understood or compared with similar data. The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.
baseball.sum()
Clearly, sum is more meaningful for some columns than others. For methods like mean, for which application to string variables is not just meaningless but impossible, these columns are automatically excluded:
baseball.mean()
The important difference between NumPy's functions and Pandas' methods is that the latter have built-in support for handling missing data.
bacteria2
bacteria2.mean()
Sometimes we may not want to ignore missing values, and instead allow the nan to propagate.
bacteria2.mean(skipna=False)
Passing axis=1 will summarize over rows instead of columns, which only makes sense in certain situations.
extra_bases = baseball[['X2b','X3b','hr']].sum(axis=1)
extra_bases.order(ascending=False)
A useful summarization that gives a quick snapshot of multiple statistics for a Series or DataFrame is describe:
baseball.describe()
describe can detect non-numeric data and sometimes yield useful information about it.
baseball.player.describe()
We can also calculate summary statistics across multiple columns, for example, correlation and covariance. (Pandas computes the sample covariance, hence the $n-1$ normalization.) $$cov(x,y) = \frac{1}{n-1}\sum_i (x_i - \bar{x})(y_i - \bar{y})$$
baseball.hr.cov(baseball.X2b)
$$corr(x,y) = \frac{cov(x,y)}{s_x s_y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$$
baseball.hr.corr(baseball.X2b)
baseball.ab.corr(baseball.h)
baseball.corr()
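corr computes Pearson's coefficient by default; the method argument also accepts rank-based alternatives:

```python
# Spearman rank correlation, less sensitive to outliers
baseball.hr.corr(baseball.X2b, method='spearman')
```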
If we have a DataFrame with a hierarchical index (or indices), summary statistics can be applied with respect to any of the index levels:
mb.head()
mb.sum(level='Taxon')
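The sum(level=...) form was eventually deprecated in favor of an explicit groupby on the index level; a hedged modern equivalent:

```python
# aggregate over the 'Taxon' level of the hierarchical index
mb.groupby(level='Taxon').sum()
```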
Writing Data to Files
As well as being able to read several data input formats, Pandas can also export data to a variety of storage formats. We will bring your attention to just a couple of these.
mb.to_csv("mb.csv")
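Before looking at the options described next, here is a hedged sketch of a customized export (the file name and delimiter are arbitrary choices, not from the original notebook):

```python
# tab-delimited output, missing values written as 'NA', row index suppressed
mb.to_csv("mb.tsv", sep='\t', na_rep='NA', index=False)
```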
The to_csv method writes a DataFrame to a comma-separated values (csv) file. As sketched above, you can specify custom delimiters (via the sep argument), how missing values are written (via the na_rep argument), whether the index is written (via the index argument), and whether the header is included (via the header argument), among other options. An efficient way of storing data to disk is in binary format. Pandas supports this using Python's built-in pickle serialization.
baseball.to_pickle("baseball_pickle")
The complement to to_pickle is the read_pickle function, which restores the pickle to a DataFrame or Series:
pd.read_pickle("baseball_pickle")
Data Mining
# Reading in the data
allmatches = pd.read_csv("../data/matches.csv")
alldeliveries = pd.read_csv("../data/deliveries.csv")
allmatches.head(10)

# Selecting Seasons 2008 - 2015
matches_seasons = allmatches.loc[allmatches['season'] != 2016]
deliveries_seasons = alldeliveries.loc[alldeliveries['match_id'] < 518]

# Selecting teams DD, KKR, MI, RCB, KXIP, RR, CSK
matches_teams = matches_seasons.loc[(matches_seasons['team1'].isin(['Kolkata Knight Riders',
    'Royal Challengers Bangalore', 'Delhi Daredevils', 'Chennai Super Kings', 'Rajasthan Royals',
    'Mumbai Indians', 'Kings XI Punjab'])) & (matches_seasons['team2'].isin(['Kolkata Knight Riders',
    'Royal Challengers Bangalore', 'Delhi Daredevils', 'Chennai Super Kings', 'Rajasthan Royals',
    'Mumbai Indians', 'Kings XI Punjab']))]
matches_team_matchids = matches_teams.id.unique()
deliveries_teams = deliveries_seasons.loc[deliveries_seasons['match_id'].isin(matches_team_matchids)]
print "Teams selected:\n"
for team in matches_teams.team1.unique():
    print team

# Neglect matches with inconsistencies like 'No Result' or 'D/L Applied'
matches = matches_teams.loc[(matches_teams['result'] == 'normal') & (matches_teams['dl_applied'] == 0)]
matches_matchids = matches.id.unique()
deliveries = deliveries_teams.loc[deliveries_teams['match_id'].isin(matches_matchids)]

# Verifying consistency between datasets
(matches.id.unique() == deliveries.match_id.unique()).all()
Building Features
# Team Strike rates for first 5 batsmen in the team (Higher the better)
def getMatchDeliveriesDF(match_id):
    return deliveries.loc[deliveries['match_id'] == match_id]

def getInningsOneBatsmen(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 1].batsman.unique()[0:5]

def getInningsTwoBatsmen(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 2].batsman.unique()[0:5]

def getBatsmanStrikeRate(batsman, match_id):
    onstrikedeliveries = deliveries.loc[(deliveries['match_id'] < match_id) & (deliveries['batsman'] == batsman)]
    total_runs = onstrikedeliveries['batsman_runs'].sum()
    total_balls = onstrikedeliveries.shape[0]
    if total_balls != 0:
        return (float(total_runs) / total_balls) * 100  # float() avoids Python 2 integer division
    else:
        return None

def getTeamStrikeRate(batsmen, match_id):
    strike_rates = []
    for batsman in batsmen:
        bsr = getBatsmanStrikeRate(batsman, match_id)
        if bsr is not None:
            strike_rates.append(bsr)
    return np.mean(strike_rates)

def getAverageStrikeRates(match_id):
    match_deliveries = getMatchDeliveriesDF(match_id)
    innOneBatsmen = getInningsOneBatsmen(match_deliveries)
    innTwoBatsmen = getInningsTwoBatsmen(match_deliveries)
    teamOneSR = getTeamStrikeRate(innOneBatsmen, match_id)
    teamTwoSR = getTeamStrikeRate(innTwoBatsmen, match_id)
    return teamOneSR, teamTwoSR

# Testing Functionality
getAverageStrikeRates(517)

# Bowler Rating : Wickets/Run (Higher the Better)
# Team 1: Batting First; Team 2: Fielding First
def getInningsOneBowlers(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 1].bowler.unique()[0:4]

def getInningsTwoBowlers(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 2].bowler.unique()[0:4]

def getBowlerWPR(bowler, match_id):
    balls = deliveries.loc[(deliveries['match_id'] < match_id) & (deliveries['bowler'] == bowler)]
    total_runs = balls['total_runs'].sum()
    total_wickets = balls.loc[balls['dismissal_kind'].isin(['caught', 'bowled', 'lbw',
                                                            'caught and bowled', 'stumped'])].shape[0]
    if total_runs != 0:
        return (float(total_wickets) / total_runs) * 100  # float() avoids Python 2 integer division
    else:
        return total_wickets

def getTeamWPR(bowlers, match_id):
    totalWPRs = []
    for bowler in bowlers:
        totalWPRs.append(getBowlerWPR(bowler, match_id))
    return np.mean(totalWPRs)

def getAverageWPR(match_id):
    match_deliveries = getMatchDeliveriesDF(match_id)
    innOneBowlers = getInningsOneBowlers(match_deliveries)
    innTwoBowlers = getInningsTwoBowlers(match_deliveries)
    teamOneWPR = getTeamWPR(innTwoBowlers, match_id)  # team batting first bowls in innings 2
    teamTwoWPR = getTeamWPR(innOneBowlers, match_id)
    return teamOneWPR, teamTwoWPR

# Testing Functionality
getAverageWPR(517)

# Man of the Match Awards for players of both Teams
def getInningsOneAllBatsmen(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 1].batsman.unique()

def getInningsTwoAllBatsmen(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 2].batsman.unique()

def getInningsOneAllBowlers(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 2].bowler.unique()

def getInningsTwoAllBowlers(match_deliveries):
    return match_deliveries.loc[match_deliveries['inning'] == 1].bowler.unique()

def getTeam(batsmen, bowlers):
    p = []
    p = np.append(p, batsmen)
    for i in bowlers:
        if i not in batsmen:
            p = np.append(p, i)
    return p

def getPlayerMVPAwards(player, match_id):
    return matches.loc[(matches["player_of_match"] == player) & (matches['id'] < match_id)].shape[0]

def getTeamMVPAwards(team, match_id):
    mvpAwards = 0
    for player in team:
        mvpAwards = mvpAwards + getPlayerMVPAwards(player, match_id)
    return mvpAwards

def bothTeamMVPAwards(match_id):
    matchDeliveries = getMatchDeliveriesDF(match_id)
    # innings-1 batsmen plus innings-2 bowlers form the team batting first, and vice versa
    innOneBatsmen = getInningsOneAllBatsmen(matchDeliveries)
    innTwoBatsmen = getInningsTwoAllBatsmen(matchDeliveries)
    innOneBowlers = getInningsTwoAllBowlers(matchDeliveries)
    innTwoBowlers = getInningsOneAllBowlers(matchDeliveries)
    team1 = getTeam(innOneBatsmen, innTwoBowlers)
    team2 = getTeam(innTwoBatsmen, innOneBowlers)
    team1Awards = getTeamMVPAwards(team1, match_id)
    team2Awards = getTeamMVPAwards(team2, match_id)
    return team1Awards, team2Awards

# Testing Functionality
bothTeamMVPAwards(517)

# Function to generate squad rating
def generateSquadRating(match_id):
    gameday_teams = deliveries.loc[(deliveries['match_id'] == match_id)].batting_team.unique()
    teamOne = gameday_teams[0]
    teamTwo = gameday_teams[1]
    teamOneSR, teamTwoSR = getAverageStrikeRates(match_id)
    teamOneWPR, teamTwoWPR = getAverageWPR(match_id)
    teamOneMVPs, teamTwoMVPs = bothTeamMVPAwards(match_id)
    print "Comparing squads for {} vs {}".format(teamOne, teamTwo)
    print "\nAverage Strike Rate for Batsmen in {} : {}".format(teamOne, teamOneSR)
    print "\nAverage Strike Rate for Batsmen in {} : {}".format(teamTwo, teamTwoSR)
    print "\nBowler Rating (W/R) for {} : {}".format(teamOne, teamOneWPR)
    print "\nBowler Rating (W/R) for {} : {}".format(teamTwo, teamTwoWPR)
    print "\nNumber of MVP Awards in {} : {}".format(teamOne, teamOneMVPs)
    print "\nNumber of MVP Awards in {} : {}".format(teamTwo, teamTwoMVPs)

# Testing Functionality
generateSquadRating(517)

## 2nd Feature : Previous Encounter
# Won by runs and won by wickets (Higher the better)
def getTeam1(match_id):
    return matches.loc[matches["id"] == match_id].team1.unique()

def getTeam2(match_id):
    return matches.loc[matches["id"] == match_id].team2.unique()

def getPreviousEncDF(match_id):
    team1 = getTeam1(match_id)
    team2 = getTeam2(match_id)
    return matches.loc[(matches["id"] < match_id) &
                       (((matches["team1"].isin(team1)) & (matches["team2"].isin(team2))) |
                        ((matches["team1"].isin(team2)) & (matches["team2"].isin(team1))))]

def getTeamWBR(match_id, team):
    DF = getPreviousEncDF(match_id)
    winnerDF = DF.loc[DF["winner"] == team]
    WBR = winnerDF['win_by_runs'].sum()
    return WBR

def getTeamWBW(match_id, team):
    DF = getPreviousEncDF(match_id)
    winnerDF = DF.loc[DF["winner"] == team]
    WBW = winnerDF['win_by_wickets'].sum()
    return WBW

def getTeamWinPerc(match_id):
    dF = getPreviousEncDF(match_id)
    timesPlayed = dF.shape[0]
    team1 = getTeam1(match_id)[0].strip("[]")
    timesWon = dF.loc[dF["winner"] == team1].shape[0]
    if timesPlayed != 0:
        winPerc = (float(timesWon) / timesPlayed) * 100  # float() avoids Python 2 integer division
    else:
        winPerc = 0
    return winPerc

def getBothTeamStats(match_id):
    DF = getPreviousEncDF(match_id)
    team1 = getTeam1(match_id)[0].strip("[]")
    team2 = getTeam2(match_id)[0].strip("[]")
    timesPlayed = DF.shape[0]
    timesWon = DF.loc[DF["winner"] == team1].shape[0]
    WBRTeam1 = getTeamWBR(match_id, team1)
    WBRTeam2 = getTeamWBR(match_id, team2)
    WBWTeam1 = getTeamWBW(match_id, team1)
    WBWTeam2 = getTeamWBW(match_id, team2)
    print "Out of {} times in the past {} have won {} times({}%) from {}".format(timesPlayed, team1, timesWon, getTeamWinPerc(match_id), team2)
    print "{} won by {} total runs and {} total wickets.".format(team1, WBRTeam1, WBWTeam1)
    print "{} won by {} total runs and {} total wickets.".format(team2, WBRTeam2, WBWTeam2)

# Testing functionality
getBothTeamStats(517)

# 3rd Feature: Recent Form (Win Percentage of 3 previous matches of a team in the same season)
# Higher the better
def getMatchYear(match_id):
    return matches.loc[matches["id"] == match_id].season.unique()

def getTeam1DF(match_id, year):
    team1 = getTeam1(match_id)
    return matches.loc[(matches["id"] < match_id) & (matches["season"] == year) &
                       ((matches["team1"].isin(team1)) | (matches["team2"].isin(team1)))].tail(3)

def getTeam2DF(match_id, year):
    team2 = getTeam2(match_id)
    return matches.loc[(matches["id"] < match_id) & (matches["season"] == year) &
                       ((matches["team1"].isin(team2)) | (matches["team2"].isin(team2)))].tail(3)

def getTeamWinPercentage(match_id):
    year = int(getMatchYear(match_id))
    team1 = getTeam1(match_id)[0].strip("[]")
    team2 = getTeam2(match_id)[0].strip("[]")
    team1DF = getTeam1DF(match_id, year)
    team2DF = getTeam2DF(match_id, year)
    team1TotalMatches = team1DF.shape[0]
    team1WinMatches = team1DF.loc[team1DF["winner"] == team1].shape[0]
    team2TotalMatches = team2DF.shape[0]
    team2WinMatches = team2DF.loc[team2DF["winner"] == team2].shape[0]
    # float() below avoids Python 2 integer division
    if (team1TotalMatches != 0) and (team2TotalMatches != 0):
        winPercTeam1 = (float(team1WinMatches) / team1TotalMatches) * 100
        winPercTeam2 = (float(team2WinMatches) / team2TotalMatches) * 100
    elif (team1TotalMatches != 0) and (team2TotalMatches == 0):
        winPercTeam1 = (float(team1WinMatches) / team1TotalMatches) * 100
        winPercTeam2 = 0
    elif (team1TotalMatches == 0) and (team2TotalMatches != 0):
        winPercTeam1 = 0
        winPercTeam2 = (float(team2WinMatches) / team2TotalMatches) * 100
    else:
        winPercTeam1 = 0
        winPercTeam2 = 0
    return winPercTeam1, winPercTeam2

def displayTeamWin(match_id):
    year = int(getMatchYear(match_id))
    team1 = getTeam1(match_id)[0].strip("[]")
    team2 = getTeam2(match_id)[0].strip("[]")
    P, Q = getTeamWinPercentage(match_id)
    print "In the season of {}, {} has a win percentage of {}% and {} has a win percentage of {}% ".format(year, team1, P, team2, Q)

# Function to implement all features
def getAllFeatures(match_id):
    generateSquadRating(match_id)
    print ("\n")
    getBothTeamStats(match_id)
    print("\n")
    displayTeamWin(match_id)

# Testing Functionality
getAllFeatures(517)
Adding Columns
# Create Column for Team 1 Winning Status (1 = Won, 0 = Lost)
matches['team1Winning'] = np.where(matches['team1'] == matches['winner'], 1, 0)

# New Column for Difference of Average Strike Rates (First Team SR - Second Team SR)
# [Negative value means Second team is better]
firstTeamSR = []
secondTeamSR = []
for i in matches['id'].unique():
    P, Q = getAverageStrikeRates(i)
    firstTeamSR.append(P), secondTeamSR.append(Q)
firstSRSeries = pd.Series(firstTeamSR)
secondSRSeries = pd.Series(secondTeamSR)
matches["Avg_SR_Difference"] = firstSRSeries.values - secondSRSeries.values

# New Column for Difference of Wickets Per Run (First Team WPR - Second Team WPR)
# [Negative value means Second team is better]
firstTeamWPR = []
secondTeamWPR = []
for i in matches['id'].unique():
    R, S = getAverageWPR(i)
    firstTeamWPR.append(R), secondTeamWPR.append(S)
firstWPRSeries = pd.Series(firstTeamWPR)
secondWPRSeries = pd.Series(secondTeamWPR)
matches["Avg_WPR_Difference"] = firstWPRSeries.values - secondWPRSeries.values

# New column for difference of MVP Awards (Negative value means Second team is better)
firstTeamMVP = []
secondTeamMVP = []
for i in matches['id'].unique():
    T, U = bothTeamMVPAwards(i)
    firstTeamMVP.append(T), secondTeamMVP.append(U)
firstMVPSeries = pd.Series(firstTeamMVP)
secondMVPSeries = pd.Series(secondTeamMVP)
matches["Total_MVP_Difference"] = firstMVPSeries.values - secondMVPSeries.values

# New column for win percentage of Team1 in previous encounters
firstTeamWP = []
for i in matches['id'].unique():
    WP = getTeamWinPerc(i)
    firstTeamWP.append(WP)
firstWPSeries = pd.Series(firstTeamWP)
matches["Prev_Enc_Team1_WinPerc"] = firstWPSeries.values

# New column for Recent Form (win percentage in the current season) of 1st Team
# compared to 2nd Team (Negative means 2nd team has the higher win percentage)
firstTeamRF = []
secondTeamRF = []
for i in matches['id'].unique():
    K, L = getTeamWinPercentage(i)
    firstTeamRF.append(K), secondTeamRF.append(L)
firstRFSeries = pd.Series(firstTeamRF)
secondRFSeries = pd.Series(secondTeamRF)
matches["Total_RF_Difference"] = firstRFSeries.values - secondRFSeries.values

# Testing
matches.tail(20)
Visualisation
# Graph for Strike Rate
matches.boxplot(column='Avg_SR_Difference', by='team1Winning', showfliers=False)

# Graph for WPR Difference
matches.boxplot(column='Avg_WPR_Difference', by='team1Winning', showfliers=False)

# Graph for MVP Difference
matches.boxplot(column='Total_MVP_Difference', by='team1Winning', showfliers=False)

# Graph for Previous encounters Win Percentage of Team #1
matches.boxplot(column='Prev_Enc_Team1_WinPerc', by='team1Winning', showfliers=False)

# Graph for Recent form (Win Percentage in the same season)
matches.boxplot(column='Total_RF_Difference', by='team1Winning', showfliers=False)
Predictions for the data
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import train_test_split  # moved to sklearn.model_selection in scikit-learn >= 0.18
from sklearn import metrics
from patsy import dmatrices

y, X = dmatrices('team1Winning ~ 0 + Avg_SR_Difference + Avg_WPR_Difference + Total_MVP_Difference + '
                 'Prev_Enc_Team1_WinPerc + Total_RF_Difference', matches, return_type="dataframe")
y_arr = np.ravel(y)
Training and testing on Entire Data
# instantiate a logistic regression model, and fit with X and y
model = LogisticRegression()
model = model.fit(X, y_arr)

# check the accuracy on the training set
print "Accuracy is", model.score(X, y_arr)*100, "%"
Splitting train and test using train_test_split
# evaluate the model by splitting into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y_arr, random_state=0)

# Logistic Regression on train_test_split
model2 = LogisticRegression()
model2.fit(X_train, y_train)

# predict class labels for the test set
predicted = model2.predict(X_test)

# generate evaluation metrics
print "Accuracy is ", metrics.accuracy_score(y_test, predicted)*100, "%"

# KNN Classification on train_test_split
k_range = list(range(1, 61))
k_score = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    k_score.append(metrics.accuracy_score(y_test, y_pred))
plt.plot(k_range, k_score)

# Best value of k in train_test_split
knn = KNeighborsClassifier(n_neighbors=50)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print "Accuracy is ", metrics.accuracy_score(y_test, y_pred)*100, "%"
Splitting Training Set (2008-2013) and Test Set (2013-2015) based on Seasons
# Splitting
X_timetrain = X.loc[X.index < 398]
Y_timetrain = y.loc[y.index < 398]
Y_timetrain_arr = np.ravel(Y_timetrain)
X_timetest = X.loc[X.index >= 398]
Y_timetest = y.loc[y.index >= 398]
Y_timetest_arr = np.ravel(Y_timetest)

# Logistic Regression on time-based split sets
model3 = LogisticRegression()
model3.fit(X_timetrain, Y_timetrain_arr)
timepredicted = model3.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, timepredicted)*100, "%"

# KNN Classification on time-based split sets
k_range = list(range(1, 61))
k_score = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_timetrain, Y_timetrain_arr)
    y_pred = knn.predict(X_timetest)
    k_score.append(metrics.accuracy_score(Y_timetest_arr, y_pred))
plt.plot(k_range, k_score)

# Best value of k in time-based split data
knn1 = KNeighborsClassifier(n_neighbors=31)
knn1.fit(X_timetrain, Y_timetrain_arr)
y_pred = knn1.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, y_pred)*100, "%"
Support Vector Machines
clf = svm.SVC(gamma=0.001, C=10)
clf.fit(X_timetrain, Y_timetrain_arr)
clf_pred = clf.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, clf_pred)*100, "%"
Random Forests
rfc = RandomForestClassifier(n_jobs=-1, random_state=1)
rfc.fit(X_timetrain, Y_timetrain_arr)
rfc_pred = rfc.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, rfc_pred)*100, "%"

fi = zip(X.columns, rfc.feature_importances_)
print "Feature Importance according to Random Forests Model\n"
for i in fi:
    print i[0], ":", i[1]
Naive Bayes Classifier
gclf = GaussianNB()
gclf.fit(X_timetrain, Y_timetrain_arr)
gclf_pred = gclf.predict(X_timetest)
print "Accuracy is ", metrics.accuracy_score(Y_timetest_arr, gclf_pred)*100, "%"
Setup
We haven't really looked into the detail of how this works yet - so this is provided for self-study for those who are interested. We'll look at it closely next week.
path = get_file('nietzsche.txt', origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt")
text = open(path).read().lower()
print('corpus length:', len(text))

!tail {path} -n25

#path = 'data/wiki/'
#text = open(path+'small.txt').read().lower()
#print('corpus length:', len(text))
#text = text[0:1000000]

chars = sorted(list(set(text)))
vocab_size = len(chars) + 1
print('total chars:', vocab_size)

chars.insert(0, "\0")
''.join(chars[1:-6])

char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

idx = [char_indices[c] for c in text]
idx[:10]
''.join(indices_char[i] for i in idx[:70])
Preprocess and create model
maxlen = 40
sentences = []
next_chars = []
for i in range(0, len(idx) - maxlen + 1):
    sentences.append(idx[i: i + maxlen])
    next_chars.append(idx[i+1: i+maxlen+1])
print('nb sequences:', len(sentences))

sentences = np.concatenate([[np.array(o)] for o in sentences[:-2]])
next_chars = np.concatenate([[np.array(o)] for o in next_chars[:-2]])
sentences.shape, next_chars.shape

n_fac = 24
model = Sequential([
    Embedding(vocab_size, n_fac, input_length=maxlen),
    LSTM(512, input_dim=n_fac, return_sequences=True, dropout_U=0.2, dropout_W=0.2, consume_less='gpu'),
    Dropout(0.2),
    LSTM(512, return_sequences=True, dropout_U=0.2, dropout_W=0.2, consume_less='gpu'),
    Dropout(0.2),
    TimeDistributed(Dense(vocab_size)),
    Activation('softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())
Train
def print_example():
    seed_string = "ethics is a basic foundation of all that"
    for i in range(320):
        x = np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis, :]
        preds = model.predict(x, verbose=0)[0][-1]
        preds = preds / np.sum(preds)
        next_char = choice(chars, p=preds)
        seed_string = seed_string + next_char
    print(seed_string)

model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()

model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()

model.optimizer.lr = 0.001
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()

model.optimizer.lr = 0.0001
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()

model.save_weights('data/char_rnn.h5')

model.optimizer.lr = 0.00001
model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()

model.fit(sentences, np.expand_dims(next_chars, -1), batch_size=64, nb_epoch=1)
print_example()
print_example()

model.save_weights('data/char_rnn.h5')
Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
def polynomial_sframe(feature, degree):
    # assume that degree >= 1
    # initialize the SFrame:
    poly_sframe = graphlab.SFrame()
    # and set poly_sframe['power_1'] equal to the passed feature
    poly_sframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
        for power in range(2, degree+1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # then assign poly_sframe[name] to the appropriate power of feature
            poly_sframe[name] = feature ** power
    return poly_sframe
Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
import graphlab  # assumed available in this course environment
import matplotlib.pyplot as plt
%matplotlib inline

sales = graphlab.SFrame('kc_house_data.gl/')
As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
sales = sales.sort(['sqft_living','price'])
Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
l2_small_penalty = 1e-5
Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.) With the L2 penalty specified above, fit the model and print out the learned weights. Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
poly15_features = poly15_data.column_names()  # get the names of the features
poly15_data['price'] = sales['price']         # add price to the data since it's the target
model1 = graphlab.linear_regression.create(poly15_data, target='price',
                                           features=poly15_features,
                                           l2_penalty=l2_small_penalty,
                                           validation_set=None, verbose=False)
model1.get("coefficients")
QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3. First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
(semi_split1, semi_split2) = sales.random_split(.5, seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model. Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
def get_poly_model(set_data, l2_penalty):
    poly15_data = polynomial_sframe(set_data['sqft_living'], 15)
    poly15_features = poly15_data.column_names()  # get the names of the features
    poly15_data['price'] = set_data['price']      # add price to the data since it's the target
    model15 = graphlab.linear_regression.create(poly15_data, target='price',
                                                features=poly15_features,
                                                l2_penalty=l2_penalty,
                                                validation_set=None, verbose=False)
    return poly15_data, model15

def get_coef(set_data, l2_penalty):
    poly15_data, model15 = get_poly_model(set_data, l2_penalty)
    return model15.get("coefficients")

def plot_fitted_line(set_data, l2_penalty):
    poly15_data, model15 = get_poly_model(set_data, l2_penalty)
    return plt.plot(poly15_data['power_1'], poly15_data['price'], '.',
                    poly15_data['power_1'], model15.predict(poly15_data), '-')

set_1_coef = get_coef(set_1, l2_small_penalty)
print set_1_coef[set_1_coef['name'] == 'power_1']
plot_fitted_line(set_1, l2_small_penalty)

set_2_coef = get_coef(set_2, l2_small_penalty)
print set_2_coef[set_2_coef['name'] == 'power_1']
plot_fitted_line(set_2, l2_small_penalty)

set_3_coef = get_coef(set_3, l2_small_penalty)
print set_3_coef[set_3_coef['name'] == 'power_1']
plot_fitted_line(set_3, l2_small_penalty)

set_4_coef = get_coef(set_4, l2_small_penalty)
print set_4_coef[set_4_coef['name'] == 'power_1']
plot_fitted_line(set_4, l2_small_penalty)
The four curves should differ from one another a lot, as should the coefficients you learned. QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to the rescue
Generally, whenever we see weights change so much in response to changes in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (The weights of model15 looked quite small, but they are not that small because the 'sqft_living' input is on the order of thousands.) With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the change in the l2_penalty parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
l2_new_penalty = 1e5

set_1_coef = get_coef(set_1, l2_new_penalty)
print set_1_coef[set_1_coef['name'] == 'power_1']
plot_fitted_line(set_1, l2_new_penalty)

set_2_coef = get_coef(set_2, l2_new_penalty)
print set_2_coef[set_2_coef['name'] == 'power_1']
plot_fitted_line(set_2, l2_new_penalty)

set_3_coef = get_coef(set_3, l2_new_penalty)
print set_3_coef[set_3_coef['name'] == 'power_1']
plot_fitted_line(set_3, l2_new_penalty)

set_4_coef = get_coef(set_4, l2_new_penalty)
print set_4_coef[set_4_coef['name'] == 'power_1']
plot_fitted_line(set_4, l2_new_penalty)
These curves should vary a lot less, now that you applied a high degree of regularization. QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way. We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:

* Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
* Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
* ...
* Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set

After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data. To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get consistent answers.)
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1. With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
n = len(train_valid_shuffled)
k = 10  # 10-fold cross-validation

for i in xrange(k):
    start = (n*i)/k
    end = (n*(i+1))/k - 1
    print i, (start, end)
Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use a colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
train_valid_shuffled[0:10] # rows 0 to 9
Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above. Extract the fourth segment (segment 3) and assign it to a variable called validation4.
print len(train_valid_shuffled)
# start = (n*i)/k
# end = (n*(i+1))/k - 1
# For segment 3, start = 5818 and end = 7757 (inclusive), so we slice start:end+1
validation4 = train_valid_shuffled[5818:7758]
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to the nearest whole number, the average should be $536,234.
print int(round(validation4['price'].mean(), 0))
After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has an append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first two and last two rows of the train_valid_shuffled dataframe.
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
first_part = train_valid_shuffled[0:5818]  # everything before segment 3 (indices 0..5817)
last_part = train_valid_shuffled[7758:]    # everything after segment 3 (indices 7758..n-1)
train4 = first_part.append(last_part)
print len(train4)
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with the fourth segment excluded. When rounded to the nearest whole number, the average should be $539,450.
print int(round(train4['price'].mean(), 0))
Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets. For each i in [0, 1, ..., k-1]:

* Compute starting and ending indices of segment i and call 'start' and 'end'
* Form validation set by taking a slice (start:end+1) from the data.
* Form training set by appending slice (end+1:n) to the end of slice (0:start).
* Train a linear model using training set just formed, with a given l2_penalty
* Compute validation error using validation set just formed
import numpy as np

def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    rss_sum = 0
    n = len(data)
    for i in xrange(k):
        start = (n*i)/k
        end = (n*(i+1))/k - 1
        validation_set = data[start:end+1]
        training_set = data[0:start].append(data[end+1:n])
        model = graphlab.linear_regression.create(training_set, target=output_name,
                                                  features=features_list, l2_penalty=l2_penalty,
                                                  validation_set=None, verbose=False)
        predictions = model.predict(validation_set)
        residuals = validation_set[output_name] - predictions  # use the output column passed in
        rss = (residuals * residuals).sum()
        rss_sum += rss
    validation_error = rss_sum / k  # average = sum / size; np.mean over a list of errors also works
    return validation_error
Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:

* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use the NumPy function np.logspace(1, 7, num=13))
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.

Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
poly_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
my_features = poly_data.column_names()
poly_data['price'] = train_valid_shuffled['price']

val_err_dict = {}
for l2_penalty in np.logspace(1, 7, num=13):
    val_err = k_fold_cross_validation(10, l2_penalty, poly_data, 'price', my_features)
    print l2_penalty
    val_err_dict[l2_penalty] = val_err
print val_err_dict

import pprint
pprint.pprint(val_err_dict)

print min(val_err_dict.items(), key=lambda x: x[1])
min_val = min(val_err_dict.itervalues())
print min_val
print min(val_err_dict, key=val_err_dict.get)
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation? You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
l2_penalty = graphlab.SArray(val_err_dict.keys())
validation_error = graphlab.SArray(val_err_dict.values())
sf = graphlab.SFrame({'l2_penalty': l2_penalty, 'validation_error': validation_error})
print sf

# Plot the l2_penalty values on the x axis and the cross-validation error on the y axis.
# Using plt.xscale('log') will make the plot more intuitive.
plt.plot(sf['l2_penalty'], sf['validation_error'], 'k.')
plt.xscale('log')
Once you found the best value for the L2 penalty using cross-validation, it is important to retrain a final model on all of the training data using this value of l2_penalty. This way, your final model will be trained on the entire dataset.
poly_data = polynomial_sframe(train_valid_shuffled['sqft_living'], 15)
features_list = poly_data.column_names()
poly_data['price'] = train_valid_shuffled['price']

l2_penalty_best = 1000.0
model = graphlab.linear_regression.create(poly_data, target='price',
                                          features=features_list,
                                          l2_penalty=l2_penalty_best,
                                          validation_set=None)
QUIZ QUESTION: Using the best L2 penalty found above, train a model using all training data. What is the RSS on the TEST data of the model you learn with this L2 penalty?
poly_test = polynomial_sframe(test['sqft_living'], 15)
predictions = model.predict(poly_test)
errors = predictions - test['price']
rss = (errors * errors).sum()
print rss
Users Involved
There were about 1634 users involved in creating the data set; the top 10 users account for 40% of the created data. There is no direct evidence from the user names that any of them are bot-like users. This could be determined by further research. Many users (over 60%) have made fewer than 10 entries.
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_users(...):
pipeline = [
    {"$match": {"created.user": {"$exists": True}}},
    {"$group": {"_id": "$created.user", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
print str(len(l)) + " users were involved:"
pprint.pprint(l[1:5] + ["..."] + l[-5:])
Types of Amenities
The attribute amenity inspired me to do further research into which kinds of buildings / objects / facilities are stored in the Open Street Map data in larger quantities, in order to do more detailed research on those objects. Especially restaurants, pubs and churches / places of worship were investigated further (as can be seen below).
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_amenities(...):
pipeline = [
    {"$match": {"amenity": {"$exists": True}}},
    {"$group": {"_id": "$amenity", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:10] + ['...'])
Popular Leisure Activities
The attribute leisure shows the types of leisure activities one can do in Dresden and inspired me to investigate more on popular sports in the city (leisure=sports_center or leisure=stadium).
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_amenities(...):
pipeline = [
    {"$match": {"leisure": {"$exists": True}}},
    {"$group": {"_id": "$leisure", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:10] + ['...'])
Religions in Places of Worship
Grouping and sorting by the occurrences of the religion attribute for all amenities classified as place_of_worship or community_center gives us an indication of how prevalent religions are in our city: obviously, christian is the most prevalent here.
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_religions(...):
pipeline = [
    {"$match": {"amenity": {"$in": ["place_of_worship", "community_center"]}}},
    {"$group": {"_id": "$religion", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l)
Cuisines in Restaurants
We can list the types of cuisines in restaurants (elements with attribute amenity matching restaurant) and sort them in descending order. We can notice certain inconsistencies or overlaps in the classifications of this data: e.g., a kebab cuisine may very well also be classified as an arab cuisine, or may in fact be a sub- or super-classification of it. One could, e.g., eliminate or cluster together occurrences of the less common cuisines, but without a formal taxonomy of all cuisines I decided it is probably best to leave the data as-is, in order not to sacrifice precision for consistency.
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_cuisines(...):
pipeline = [
    {"$match": {"amenity": "restaurant"}},
    {"$group": {"_id": "$cuisine", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:10] + ['...'])
Beers in Pubs
Germans do love their beers, and the dataset shows that certain pubs, restaurants or bars are sponsored by certain beer brands (often advertised at the pub's entrance). We can analyze the prevalence of beer brands by grouping and sorting by occurrence of the attribute brewery for all the amenities classified as a respective establishment. Most popular are Radeberger, a very popular local beer, Feldschlösschen, a Swiss beer, and Dresdner Felsenkeller, a very local and niche sort of beer.
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_beers(...):
pipeline = [
    {"$match": {"amenity": {"$in": ["pub", "bar", "restaurant"]}}},
    {"$group": {"_id": "$brewery", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l)
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
Popular Sports To investigate which sports are popular, we can group and sort by the (occurrence of the) sport attribute for all elements classified as sports_centre or stadium in their leisure attribute. Unsurprisingly for a German city, we notice that 9pin (bowling) and soccer are the most popular sports, followed by climbing, an activity very much enjoyed by people in Dresden, presumably because of the nearby sandstone mountains of the Sächsische Schweiz national park.
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_sports(...):
pipeline = [
    {"$match": {"leisure": {"$in": ["sports_centre", "stadium"]}}},
    {"$group": {"_id": "$sport", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}}
]
l = list(project_coll.aggregate(pipeline))
pprint.pprint(l[1:5] + ['...'])
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
Where to Dance in Dresden I am a passionate social dancer, so a list of dance schools in Dresden should not be absent from this investigation. We can quickly grab all elements which have the leisure attribute set to dance.
from Project.notebook_stub import project_coll
import pprint

# Query used - see function: Project.audit_stats_map.stats_dances(...):
l = list(project_coll.distinct("name", {"leisure": "dance"}))
pprint.pprint(l[1:10] + ['...'])
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
Problems Encountered in the Map / Data Quality

Try it out: Use python project.py -q to obtain the data from this chapter. See Sample Output in file Project/output_project.py_-q.txt. The script also writes a CSV file to Project/data/audit_buildings.csv, which is also beautified into an Excel file.

Leading Zeros

As already discussed, during the parsing stage we use an optimistic approach of parsing any numerical value as integer or float, if it is parsable as such. However, we noticed that we should not do this if leading zeros are present, as those hold semantics for phone numbers and zip codes. For all other values, this cleaning approach gives us a much smaller representation of the data in MongoDB and in memory (a minimal sketch of such a parser follows after the phone number query below).

Normalizing / Cleaning Cuisines

As hinted in the section Cuisines in Restaurants, the classification of cuisines is inconsistent. There are two problems with this value:

There are multiple values separated by ';', which makes the parameter hard to parse. We overcome this by creating a parameter cuisineTags which stores the cuisine classifications as an array:

```python
db.eval('''db.osmnodes.find({
        "cuisine": {"$exists": true},
        "amenity": "restaurant"
    }).snapshot().forEach(function(val, idx) {
        val.cuisineTags = val.cuisine.split(';');
        db.osmnodes.save(val)
    })
''')
```

Some values are used inconsistently; therefore, we unify them with a mapping table and a subsequent MongoDB update:

```python
cuisines_synonyms = {
    'german': ['regional', 'schnitzel', 'buschenschank'],
    'portuguese': ['Portugiesisches_Restaurant_&_Weinbar'],
    'italian': ['pizza', 'pasta'],
    'mediterranean': ['fish', 'seafood'],
    'japanese': ['sushi'],
    'turkish': ['kebab'],
    'american': ['steak_house']
}

# not mapped:
# greek, asian, chinese, indian, international, vietnamese, thai, spanish, arabic
# sudanese, russian, korean, hungarian, syrian, vegan, soup, croatian, african
# balkan, mexican, french, cuban, lebanese

for target in cuisines_synonyms:
    db.osmnodes.update(
        {
            "cuisine": {"$exists": True},
            "amenity": "restaurant",
            "cuisineTags": {"$in": cuisines_synonyms[target]}
        },
        {
            "$pullAll": {"cuisineTags": cuisines_synonyms[target]},
            "$addToSet": {"cuisineTags": target}
        },
        multi=True
    )
```

This allows us to convert a restaurant with the MongoDB representation
{..., "cuisine": "pizza;kebab", ...}
to the alternative representation
{..., "cuisine": "pizza;kebab", "cuisineTags": ["italian", "turkish"], ...}

Auditing Phone Numbers

Phone numbers are scattered over different attributes (address.phone, phone and mobile_phone) and come in different styles of formatting (like +49 351 123 45 vs. 0049-351-12345). First, we retrieve a list of all phone numbers. With the goal in mind to later store the normalized phone number back into the attribute phone, this value has to be read first, and only if it is empty should mobile_phone or address.phone be used.
from Project.notebook_stub import project_coll

# Query used - see function: Project.audit_quality_map.audit_phone_numbers(...):
pipeline = [
    {"$match": {"$or": [
        {"phone": {"$exists": True}},
        {"mobile_phone": {"$exists": True}},
        {"address.phone": {"$exists": True}}
    ]}},
    {"$project": {
        "_id": 1,
        "phone": {"$ifNull": ["$phone", {"$ifNull": ["$mobile_phone", "$address.phone"]}]}
    }}
]
l = project_coll.aggregate(pipeline)

# Output too long... See the file Project/output_project.py_-q.txt
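For illustration, the optimistic parsing described under Leading Zeros above might look like this minimal sketch (the function name and exact rules are assumptions based on the description, not the project's actual code):

```python
import re

def parse_optimistic(value):
    # Keep values with a semantic leading zero untouched, e.g. zip codes
    # such as "01067" or phone numbers such as "0351...". Assumption: a
    # leading zero followed by another digit marks such a value.
    if re.match(r"^0\d", value):
        return value
    # Otherwise parse as int or float if possible, for a smaller representation.
    try:
        return int(value)
    except ValueError:
        try:
            return float(value)
        except ValueError:
            return value

# e.g. parse_optimistic("42") -> 42, parse_optimistic("01067") -> "01067"
```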
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
Cleaning Phone Numbers

Try it out: Use python project.py -C to clean in debug mode. See Sample Output in file Project/output_project.py_-C.txt. The script also writes a CSV file to Project/data/clean_phones.csv, which is also beautified into an Excel file.

Cleaning the phone numbers involves:

* unifying the different phone attributes (phone, address.phone and mobile_phone) - this is already taken care of by extracting the phone numbers during the audit stage
* if possible, canonicalizing the phone number notations by parsing them using a regular expression:

```python
phone_regex = re.compile(ur'^(\(?([\+|\*]|00) *(?P<country>[1-9][0-9]*)\)?)?' +          # country code
                         ur'[ \/\-\.]*\(?0?\)?[ \/\-\.]*' +                               # separator
                         ur'(\(0?(?P<area1>[1-9][0-9 ]*)\)|0?(?P<area2>[1-9][0-9]*))?' +  # area code
                         ur'[ \/\-\.]*' +                                                 # separator
                         ur'(?P<number>([0-9]+ *[\/\-.]? *)*)$',                          # number
                         re.UNICODE)
```

The regular expression is resilient to various separators ("/", "-", " ", "(0)") and to bracket notation of phone numbers. It is not resilient to some unicode characters or to written lists of phone numbers which are designed to be interpreted by humans (using separators like ",", "/-" or "oder", lit. or).

During the cleaning stage, an output is written showing which phone numbers could not be parsed. These are only a tiny fraction of all phone numbers (9, or 0.5%) and would be easily cleanable by hand. The following objects couldn't be parsed:

```
                                     normalized
55f57294b1c8a72c34523897   +49 35207 81429 or 81469
55f57299b1c8a72c345272cd   +49 351 8386837, +49 176 67032256
55f572c2b1c8a72c34546689   0351 4810426
55f572c3b1c8a72c34546829   +49 351 8902284 or 2525375
55f572fdb1c8a72c34574963   +49 351 4706625, +49 351 0350602
55f573bdb1c8a72c3460bdb3   +49 351 87?44?44?00
55f573bdb1c8a72c3460c066   0162 2648953, 0162 2439168
55f573edb1c8a72c346304b1   03512038973, 03512015831
55f5740eb1c8a72c34649008   0351 4455193 / -118
```

If the phone number was parsable, the country code, area code and the rest of the phone number are separated and subsequently strung together in a canonical form. The data to be transformed is stored in a Pandas DataFrame. By using the option -C instead of -c, the execution of the transformation can be suppressed and the DataFrame instead written to a CSV file, which might be further beautified into an Excel file in order to test or debug the transformation before writing it to the database with the -c option.

Auditing Street Names (Spoiler Alert: No Cleaning Necessary)

The map's street names were audited analogously to how it was done in the Data Wrangling course: check whether 'weird' street names occur which do not end in a suffix like street (in German -straße or Straße, depending on whether or not the name is a compound word). It is assumed that such names would most likely end in an abbreviation like str.. For this we use a regular expression querying all streets <u>not</u> ending with a particular suffix like [Ss]traße (street), [Ww]eg (way) etc. This is accomplished by a chain of "negative lookbehind" expressions ((?<!...)) which must all in sequence evaluate to "true" in order to flag a street name as non-conforming.
from Project.notebook_stub import project_coll

# Query used - see function: Project.audit_quality_map.audit_streets(...):
expectedStreetPattern = \
    u"^.*(?<![Ss]tra\u00dfe)(?<![Ww]eg)(?<![Aa]llee)(?<![Rr]ing)(?<![Bb]erg)" + \
    u"(?<![Pp]ark)(?<![Hh]\u00f6he)(?<![Pp]latz)(?<![Bb]r\u00fccke)(?<![Gg]rund)$"
l = list(project_coll.distinct("name", {
    "type": "way",
    "name": {"$regex": expectedStreetPattern}
}))

# Output too long... See the file Project/output_project.py_-q.txt
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
Skimming through the list, it was noticeable that the nature of the German language (and how street names work in Germany) means there are many small places without a suffix like "street" that are "their own thing" (like Am Hang, lit. 'At The Slope'; Beerenhut, lit. 'Berry Hat'; Im Grunde, lit. 'In The Ground'). The street names can therefore not be processed just by looking at the suffixes - I tried something different...

Cross Auditing Street Names with Street Addresses (Spoiler Alert: No Cleaning Necessary)

I did not want to fully trust the street names of the data set yet. Next, I tried figuring out whether the street names of buildings were consistent with the street names of objects in close proximity. Therefore, a JavaScript query is run directly on the database server, returning all buildings together with the nearby objects that have an address.street parameter. This should allow us to cross-audit whether objects in close proximity have the same street names.
from Project.notebook_stub import project_db

# Query used - see function: Project.audit_quality_map.audit_buildings(...):
buildings_with_streets = project_db.eval('''
    db.osmnodes.ensureIndex({pos: "2dsphere"});
    result = [];
    db.osmnodes.find(
        {"building": {"$exists": true}, "address.street": {"$exists": true}, "pos": {"$exists": true}},
        {"address.street": "", "pos": ""}
    ).forEach(function(val, idx) {
        val.nearby = db.osmnodes.distinct("address.street",
            {"_id": {"$ne": val._id},
             "pos": {"$near": {"$geometry": {"type": "Point", "coordinates": val.pos},
                               "$maxDistance": 50, "$minDistance": 0}}}
        );
        result.push(val);
    })
    return result;
''')

# Output too long... See the file Project/output_project.py_-q.txt
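The next step (described below) compares each building's street name against the nearby street names using the Levenshtein distance. A minimal sketch of that metric, in case no library implementation (such as python-Levenshtein) is at hand:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# e.g. levenshtein("kitten", "sitting") == 3; identical street names yield 0
```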
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
The resulting objects are then iterated through, and the best and worst fitting nearby street names are identified, each using the Levenshtein distance. For each object, a row is created in a DataFrame which is subsequently exported to a CSV file Project/data/audit_buildings.csv that was manually beautified into an Excel file. As can be seen, the street names of nearby objects mostly match those of the building itself (Levenshtein distance is zero). Where they deviate greatly, they are totally different street names in the same area, not just "typos" or non-conforming abbreviations.

Auditing Zip Codes (Spoiler Alert: No Cleaning Necessary)

Try it out: Use python project.py -Z, which runs the auditing script for zipcodes. See Sample Output in file Project/output_project.py_-Z.txt. To be able to run this script correctly, the zipcode data from Geonames.org needs to be downloaded and installed first using the -z option (see output in Project/output_project.py_-Z.txt).

This part of the auditing process makes use of an additional dataset from Geonames.org to resolve and audit the zip codes in the data set. During the "installation process" (option -z), the zipcode data (provided as a tab-separated file) is downloaded and, line by line, stored to a (separate) MongoDB collection. However, we are only interested in the fields "zipcode" (2) and "place" (3).

During the auditing stage (option -Z), we first get a list of all used zipcodes using the following query:

```python
pipeline = [
    {"$match": {"address.postcode": {"$exists": 1}}},
    {"$group": {"_id": "$address.postcode", "count": {"$sum": 1}}},
    {"$sort": {"count": 1}}
]
```

The zipcodes are then all looked up in the zipcode collection using the $in-operator; the data obtained is joined back into the original result (a sketch of this join step follows after the audit call below).

```python
zipcodeObjects = zipcodeColl.find(
    {"zipcode": {"$in": [z["_id"] for z in zipcodeList]}}
)
```

The following output shows that the lesser used zipcodes are from the Dresden metropolitan area, not Dresden itself:
from Project.audit_zipcode_map import audit_zipcode_map
from Project.notebook_stub import project_server, project_port
import pprint

zipcodeJoined = audit_zipcode_map(project_server, project_port, quiet=True)
pprint.pprint(zipcodeJoined[1:10] + ['...'])
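A minimal sketch of the join step mentioned above (the field names zipcode and place follow the Geonames description; the actual implementation lives in Project.audit_zipcode_map):

```python
# Sketch: map each Geonames document to its place name, then attach the
# place to every aggregated zipcode count from the pipeline above.
places_by_zip = {z["zipcode"]: z.get("place") for z in zipcodeObjects}
zipcodeJoined = [dict(z, place=places_by_zip.get(z["_id"])) for z in zipcodeList]
```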
.ipynb_checkpoints/Data Analysis Project 3 - Data Wrangle OpenStreetMaps Data-checkpoint.ipynb
qwertzuhr/2015_Data_Analyst_Project_3
agpl-3.0
Post-training integer quantization with int16 activations

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant_16x8"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table>

Overview

TensorFlow Lite now supports converting activations to 16-bit integer values and weights to 8-bit integer values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. We refer to this mode as the "16x8 quantization mode". This mode can improve accuracy of the quantized model significantly when activations are sensitive to the quantization, while still achieving an almost 3-4x reduction in model size. Moreover, the resulting fully quantized model can be consumed by integer-only hardware accelerators.

Some examples of models that benefit from this mode of post-training quantization include:

* super-resolution,
* audio signal processing such as noise cancelling and beamforming,
* image de-noising,
* HDR reconstruction from a single image.

In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a TensorFlow Lite flatbuffer using this mode. At the end you check the accuracy of the converted model and compare it to the original float32 model. Note that this example demonstrates the usage of this mode and doesn't show benefits over other available quantization techniques in TensorFlow Lite.

Build an MNIST model

Setup
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)

import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Check that the 16x8 quantization mode is available
tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Train and export the model
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input image so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(
    train_images,
    train_labels,
    epochs=1,
    validation_data=(test_images, test_labels)
)
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.

Convert to a TensorFlow Lite model

Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model. First, convert the model using TFLiteConverter into the default float32 format:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Write it out to a .tflite file:
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
To instead quantize the model to 16x8 quantization mode, first set the optimizations flag to use default optimizations. Then specify that 16x8 quantization mode is the required supported operation in the target specification:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting the converter options inference_input_type and inference_output_type to tf.int16 (a short sketch follows after the calibration snippet below). Set the calibration data:
mnist_train, _ = tf.keras.datasets.mnist.load_data()
images = tf.cast(mnist_train[0], tf.float32) / 255.0
mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)

def representative_data_gen():
    for input_value in mnist_ds.take(100):
        # Model has only one input so each data point has one element.
        yield [input_value]

converter.representative_dataset = representative_data_gen
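For completeness, the fully integer variant mentioned above would be requested like this (a sketch, not applied in this tutorial, which keeps float inputs and outputs):

```python
# Optional (sketch): request int16 input/output tensors to get a fully
# integer model. Callers must then feed and receive int16 tensors.
converter.inference_input_type = tf.int16
converter.inference_output_type = tf.int16
```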
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Finally, convert the model as usual. Note that, by default, the converted model will still use float inputs and outputs for invocation convenience.
tflite_16x8_model = converter.convert()
tflite_model_16x8_file = tflite_models_dir/"mnist_model_quant_16x8.tflite"
tflite_model_16x8_file.write_bytes(tflite_16x8_model)
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Note how the resulting file is approximately 1/3 the size.
!ls -lh {tflite_models_dir}
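The same comparison can be made from Python with the paths defined earlier (a small sketch, equivalent to the shell listing above):

```python
# Print the on-disk size of each converted model in KiB.
for path in [tflite_model_file, tflite_model_16x8_file]:
    print(f"{path.name}: {path.stat().st_size / 1024:.1f} KiB")
```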
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Run the TensorFlow Lite models

Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.

Load the model into the interpreters
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()

interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))
interpreter_16x8.allocate_tensors()
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Test the models on one image
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)

import matplotlib.pylab as plt

plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true=str(test_labels[0]),
                              predict=str(np.argmax(predictions[0]))))
plt.grid(False)

test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)

input_index = interpreter_16x8.get_input_details()[0]["index"]
output_index = interpreter_16x8.get_output_details()[0]["index"]

interpreter_16x8.set_tensor(input_index, test_image)
interpreter_16x8.invoke()
predictions = interpreter_16x8.get_tensor(output_index)

plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true=str(test_labels[0]),
                              predict=str(np.argmax(predictions[0]))))
plt.grid(False)
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Evaluate the models
# A helper function to evaluate the TF Lite model using the "test" dataset.
def evaluate_model(interpreter):
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    # Run predictions on every image in the "test" dataset.
    prediction_digits = []
    for test_image in test_images:
        # Pre-processing: add batch dimension and convert to float32 to match
        # the model's input data format.
        test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
        interpreter.set_tensor(input_index, test_image)

        # Run inference.
        interpreter.invoke()

        # Post-processing: remove batch dimension and find the digit with
        # highest probability.
        output = interpreter.tensor(output_index)
        digit = np.argmax(output()[0])
        prediction_digits.append(digit)

    # Compare prediction results with ground truth labels to calculate accuracy.
    accurate_count = 0
    for index in range(len(prediction_digits)):
        if prediction_digits[index] == test_labels[index]:
            accurate_count += 1
    accuracy = accurate_count * 1.0 / len(prediction_digits)

    return accuracy

print(evaluate_model(interpreter))
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0
Repeat the evaluation on the 16x8 quantized model:
# NOTE: This quantization mode is an experimental post-training mode;
# it does not have any optimized kernel implementations or specialized
# machine learning hardware accelerators. Therefore, it could be slower
# than the float interpreter.
print(evaluate_model(interpreter_16x8))
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
davidzchen/tensorflow
apache-2.0