Here the number of clusters was set to 40; the text to the left of each row shows the index of the cluster and the total number of points in the cluster. The clustering seems to have picked up on "dark skinned and smiling," "collared shirt," "smiling woman," "Hussein," and "high forehead." We could also find these highly similar clusters using the dendrogram, if we did a more detailed analysis.

Summary of Clustering Methods

This section has shown that applying and evaluating clustering is a highly qualitative procedure, and often most helpful in the exploratory phase of data analysis. We looked at three clustering algorithms: k-means, DBSCAN, and agglomerative clustering. All three have a way of controlling the granularity of clustering. k-means and agglomerative clustering allow you to specify the number of desired clusters, while DBSCAN lets you define proximity using the eps parameter, which indirectly influences cluster size. All three methods can be used on large, real-world datasets, are relatively easy to understand, and allow for clustering into many clusters.

Each of the algorithms has somewhat different strengths. k-means allows for a characterization of the clusters using the cluster means. It can also be viewed as a decomposition method, where each data point is represented by its cluster center. DBSCAN allows for the detection of "noise points" that are not assigned any cluster, and it can help automatically determine the number of clusters. In contrast to the other two methods, it allows for complex cluster shapes, as we saw in the two_moons example. DBSCAN sometimes produces clusters of very differing size, which can be a strength or a weakness. Agglomerative clustering can provide a whole hierarchy of possible partitions of the data, which can be easily inspected via dendrograms.
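To make the different granularity controls concrete, here is a minimal sketch (my own illustration, not code from the text) that applies all three algorithms to two-moons data generated with scikit-learn's make_moons; the parameter values are illustrative choices, not the ones used in the chapter:

    from sklearn.datasets import make_moons
    from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering

    # two half-moon shapes with a little noise
    X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

    # k-means and agglomerative clustering: granularity via n_clusters
    km_labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)
    agg_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

    # DBSCAN: granularity controlled indirectly via eps (and min_samples);
    # points labeled -1 are "noise points" that belong to no cluster
    db_labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(X)

    print("k-means clusters:", set(km_labels))
    print("agglomerative clusters:", set(agg_labels))
    print("DBSCAN clusters (incl. noise label -1):", set(db_labels))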
Summary and Outlook

This chapter introduced a range of unsupervised learning algorithms that can be applied for exploratory data analysis and preprocessing. Having the right representation of the data is often crucial for supervised or unsupervised learning to succeed, and preprocessing and decomposition methods play an important part in data preparation.

Decomposition, manifold learning, and clustering are essential tools to further your understanding of your data, and can be the only ways to make sense of your data in the absence of supervision information. Even in a supervised setting, exploratory tools are important for a better understanding of the properties of the data. Often it is hard to quantify the usefulness of an unsupervised algorithm, though this shouldn't deter you from using them to gather insights from your data. With these methods under your belt, you are now equipped with all the essential learning algorithms that machine learning practitioners use every day.

We encourage you to try clustering and decomposition methods both on two-dimensional toy data and on real-world datasets included in scikit-learn, like the digits, iris, and cancer datasets.
Let's briefly review the API that we introduced in the previous chapters. All algorithms in scikit-learn, whether preprocessing, supervised learning, or unsupervised learning algorithms, are implemented as classes. These classes are called estimators in scikit-learn. To apply an algorithm, you first have to instantiate an object of the particular class:

In[ ]:
    from sklearn.linear_model import LogisticRegression
    logreg = LogisticRegression()

The estimator class contains the algorithm, and also stores the model that is learned from data using the algorithm. You should set any parameters of the model when constructing the model object. These parameters include regularization, complexity control, number of clusters to find, etc. All estimators have a fit method, which is used to build the model. The fit method always requires as its first argument the data X, represented as a NumPy array or a SciPy sparse matrix, where each row represents a single data point. The data is always assumed to be a NumPy array or SciPy sparse matrix that has continuous (floating-point) entries. Supervised algorithms also require a y argument, which is a one-dimensional NumPy array containing target values for regression or classification (i.e., the known output labels or responses).

There are two main ways to apply a learned model in scikit-learn. To create a prediction in the form of a new output like y, you use the predict method. To create a new representation of the input data X, you use the transform method. The following table summarizes the use cases of the predict and transform methods.

The scikit-learn API summary

estimator.fit(X_train, [y_train])

estimator.predict(X_test)   | estimator.transform(X_test)
Classification              | Preprocessing
Regression                  | Dimensionality reduction
Clustering                  | Feature extraction
                            | Feature selection

Additionally, all supervised models have a score(X_test, y_test) method that allows an evaluation of the model. In the table, X_train and y_train refer to the training data and training labels, while X_test and y_test refer to the test data and test labels (if applicable).
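As a compact illustration of this interface, here is a sketch assembled for this summary (not an example from the text) showing the fit/predict/score pattern for a supervised estimator and the fit/transform pattern for a preprocessing estimator:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # supervised estimator: fit, then predict and score on new data
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print("accuracy: {:.2f}".format(clf.score(X_test, y_test)))

    # preprocessing estimator: fit, then transform
    scaler = StandardScaler().fit(X_train)
    X_train_scaled = scaler.transform(X_train)
    X_test_scaled = scaler.transform(X_test)
    print("scaled shape:", X_train_scaled.shape)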
Representing Data and Engineering Features

So far, we've assumed that our data comes in as a two-dimensional array of floating-point numbers, where each column is a continuous feature that describes the data points. For many applications, this is not how the data is collected. A particularly common type of feature is the categorical features, also known as discrete features. These are usually not numeric. The distinction between categorical features and continuous features is analogous to the distinction between classification and regression, only on the input side rather than the output side. Examples of continuous features that we have seen are pixel brightnesses and size measurements of plant flowers. Examples of categorical features are the brand of a product, the color of a product, or the department (books, clothing, hardware) it is sold in. These are all properties that can describe a product, but they don't vary in a continuous way. A product belongs either in the clothing department or in the books department; there is no middle ground between books and clothing, and no natural order for the different categories (books is not greater or less than clothing, hardware is not between books and clothing, etc.).

Regardless of the types of features your data consists of, how you represent them can have an enormous effect on the performance of machine learning models. We saw in Chapters 2 and 3 that scaling of the data is important. In other words, if you don't rescale your data (say, to unit variance), then it makes a difference whether you represent a measurement in centimeters or inches. We also saw in Chapter 2 that it can be helpful to augment your data with additional features, like adding interactions (products) of features or more general polynomials. The question of how to represent your data best for a particular application is known as feature engineering, and it is one of the main tasks of data scientists and machine learning practitioners.
Representing your data in the right way can have a bigger influence on the performance of a supervised model than the exact parameters you choose. In this chapter, we will first go over the important and very common case of categorical features, and then give some examples of helpful transformations for specific combinations of features and models.

Categorical Variables

As an example, we will use the dataset of adult incomes in the United States, derived from the 1994 census database. The task of the adult dataset is to predict whether a worker has an income of over $50,000 or under $50,000. The features in this dataset include the workers' ages, how they are employed (self employed, private industry employee, government employee, etc.), their education, their gender, their working hours per week, occupation, and more. Table 4-1 shows the first few entries in the dataset.

Table 4-1. The first few entries in the adult dataset

age | workclass        | education    | gender | hours-per-week | occupation        | income
39  | State-gov        | Bachelors    | Male   | 40             | Adm-clerical      | <=50K
50  | Self-emp-not-inc | Bachelors    | Male   | 13             | Exec-managerial   | <=50K
38  | Private          | HS-grad      | Male   | 40             | Handlers-cleaners | <=50K
53  | Private          | 11th         | Male   | 40             | Handlers-cleaners | <=50K
28  | Private          | Bachelors    | Female | 40             | Prof-specialty    | <=50K
37  | Private          | Masters      | Female | 40             | Exec-managerial   | <=50K
49  | Private          | 9th          | Female | 16             | Other-service     | <=50K
52  | Self-emp-not-inc | HS-grad      | Male   | 45             | Exec-managerial   | >50K
31  | Private          | Masters      | Female | 50             | Prof-specialty    | >50K
42  | Private          | Bachelors    | Male   | 40             | Exec-managerial   | >50K
37  | Private          | Some-college | Male   | 80             | Exec-managerial   | >50K

The task is phrased as a classification task with the two classes being income <=50K and >50K. It would also be possible to predict the exact income, and make this a regression task. However, that would be much more difficult, and the 50K division is interesting to understand on its own.

In this dataset, age and hours-per-week are continuous features, which we know how to treat. The workclass, education, gender, and occupation features are categorical, however. All of them come from a fixed list of possible values, as opposed to a range, and denote a qualitative property, as opposed to a quantity.
We know from Chapter 2 that logistic regression makes predictions, ŷ, using the following formula:

    ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b

where w[i] and b are coefficients learned from the training set and x[i] are the input features. This formula makes sense when the x[i] are numbers, but not when x[i] is "Masters" or "Bachelors". Clearly we need to represent our data in some different way when applying logistic regression. The next section will explain how we can overcome this problem.

One-Hot-Encoding (Dummy Variables)

By far the most common way to represent categorical variables is using the one-hot-encoding or one-out-of-N encoding, also known as dummy variables. The idea behind dummy variables is to replace a categorical variable with one or more new features that can have the values 0 and 1. The values 0 and 1 make sense in the formula for linear binary classification (and for all other models in scikit-learn), and we can represent any number of categories by introducing one new feature per category, as described here.

Let's say for the workclass feature we have possible values of "Government Employee", "Private Employee", "Self Employed", and "Self Employed Incorporated". To encode these four possible values, we create four new features, called "Government Employee", "Private Employee", "Self Employed", and "Self Employed Incorporated". A feature is 1 if workclass for this person has the corresponding value and 0 otherwise, so exactly one of the four new features will be 1 for each data point. This is why this is called one-hot or one-out-of-N encoding.

The principle is illustrated in Table 4-2. A single feature is encoded using four new features. When using this data in a machine learning algorithm, we would drop the original workclass feature and only keep the 0-1 features.

Table 4-2. Encoding the workclass feature using one-hot encoding

workclass                  | Government Employee | Private Employee | Self Employed | Self Employed Incorporated
Government Employee        | 1 | 0 | 0 | 0
Private Employee           | 0 | 1 | 0 | 0
Self Employed              | 0 | 0 | 1 | 0
Self Employed Incorporated | 0 | 0 | 0 | 1
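The same encoding can be produced directly in pandas. Here is a minimal sketch on a hypothetical toy column holding the four workclass values discussed above (not the book's data loading, which follows below):

    import pandas as pd

    # hypothetical toy column with the four workclass values
    workclass = pd.Series(["Government Employee", "Private Employee",
                           "Self Employed", "Self Employed Incorporated",
                           "Private Employee"])

    # one new 0/1 column per category; exactly one entry is 1 in each row
    dummies = pd.get_dummies(workclass, prefix="workclass")
    print(dummies)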
The one-hot encoding we use here is quite similar to the dummy encoding used in statistics, but is not identical. For simplicity, we encode each category with a different binary feature. In statistics, it is common to encode a categorical feature with k different possible values into k-1 features (the last one is represented as all zeros). This is done to simplify the analysis (more technically, this will avoid making the data matrix rank-deficient).

There are two ways to convert your data to a one-hot encoding of categorical variables, using either pandas or scikit-learn. At the time of writing, using pandas is slightly easier, so let's go this route. First we load the data using pandas from a comma-separated values (CSV) file:

In[ ]:
    import pandas as pd
    # The file has no headers naming the columns, so we pass header=None
    # and provide the column names explicitly in "names"
    data = pd.read_csv(
        "/home/andy/datasets/adult.data", header=None, index_col=False,
        names=['age', 'workclass', 'fnlwgt', 'education', 'education-num',
               'marital-status', 'occupation', 'relationship', 'race', 'gender',
               'capital-gain', 'capital-loss', 'hours-per-week', 'native-country',
               'income'])
    # For illustration purposes, we only select some of the columns
    data = data[['age', 'workclass', 'education', 'gender', 'hours-per-week',
                 'occupation', 'income']]
    # IPython.display allows nice output formatting within the Jupyter notebook
    display(data.head())

Table 4-3 shows the result.

Table 4-3. The first five rows of the adult dataset

age | workclass        | education | gender | hours-per-week | occupation        | income
39  | State-gov        | Bachelors | Male   | 40             | Adm-clerical      | <=50K
50  | Self-emp-not-inc | Bachelors | Male   | 13             | Exec-managerial   | <=50K
38  | Private          | HS-grad   | Male   | 40             | Handlers-cleaners | <=50K
53  | Private          | 11th      | Male   | 40             | Handlers-cleaners | <=50K
28  | Private          | Bachelors | Female | 40             | Prof-specialty    | <=50K

Checking string-encoded categorical data

After reading a dataset like this, it is often good to first check if a column actually contains meaningful categorical data. When working with data that was input by humans (say, users on a website), there might not be a fixed set of categories, and differences in spelling and capitalization might require preprocessing. For example, it might be that some people specified gender as "male" and some as "man," and we might want to represent these two inputs using the same category.
A good way to check the contents of a column is using the value_counts method of a pandas Series (the type of a single column in a DataFrame), to show us what the unique values are and how often they appear:

In[ ]:
    print(data.gender.value_counts())

Out[ ]:
     Male      21790
     Female    10771
    Name: gender, dtype: int64

We can see that there are exactly two values for gender in this dataset, Male and Female, meaning the data is already in a good format to be represented using one-hot-encoding. In a real application, you should look at all columns and check their values. We will skip this here for brevity's sake.

There is a very simple way to encode the data in pandas, using the get_dummies function. The get_dummies function automatically transforms all columns that have object type (like strings) or are categorical (which is a special pandas concept that we haven't talked about yet):

In[ ]:
    print("Original features:\n", list(data.columns), "\n")
    data_dummies = pd.get_dummies(data)
    print("Features after get_dummies:\n", list(data_dummies.columns))

Out[ ]:
    Original features:
    ['age', 'workclass', 'education', 'gender', 'hours-per-week', 'occupation',
     'income']

    Features after get_dummies:
    ['age', 'hours-per-week', 'workclass_ ?', 'workclass_ Federal-gov',
     'workclass_ Local-gov', 'workclass_ Never-worked', 'workclass_ Private',
     'workclass_ Self-emp-inc', 'workclass_ Self-emp-not-inc',
     'workclass_ State-gov', 'workclass_ Without-pay', 'education_ 10th',
     'education_ 11th', 'education_ 12th', 'education_ 1st-4th',
     ...
     'education_ Preschool', 'education_ Prof-school', 'education_ Some-college',
     'gender_ Female', 'gender_ Male', 'occupation_ ?',
     'occupation_ Adm-clerical', 'occupation_ Armed-Forces',
     'occupation_ Craft-repair', 'occupation_ Exec-managerial',
     'occupation_ Farming-fishing', 'occupation_ Handlers-cleaners',
     ...
     'occupation_ Tech-support', 'occupation_ Transport-moving',
     'income_ <=50K', 'income_ >50K']
You can see that the continuous features age and hours-per-week were not touched, while the categorical features were expanded into one new feature for each possible value:

In[ ]:
    display(data_dummies.head())

Table 4-4 shows the result (abbreviated here): the continuous columns age and hours-per-week, followed by 0/1 indicator columns such as workclass_ Federal-gov, workclass_ Local-gov, ..., occupation_ Tech-support, occupation_ Transport-moving, income_ <=50K, and income_ >50K, for 5 rows x 46 columns in total.

We can now use the values attribute to convert the data_dummies DataFrame into a NumPy array, and then train a machine learning model on it. Be careful to separate the target variable (which is now encoded in two income columns) from the data before training a model. Including the output variable, or some derived property of the output variable, into the feature representation is a very common mistake in building supervised machine learning models.

Be careful: column indexing in pandas includes the end of the range, so 'age':'occupation_ Transport-moving' is inclusive of occupation_ Transport-moving. This is different from slicing a NumPy array, where the end of a range is not included: for example, np.arange(11)[0:10] doesn't include the entry with index 10.

In this case, we extract only the columns containing features, that is, all columns from age to occupation_ Transport-moving. This range contains all the features but not the target:

In[ ]:
    features = data_dummies.loc[:, 'age':'occupation_ Transport-moving']
    # Extract NumPy arrays
    X = features.values
    y = data_dummies['income_ >50K'].values
    print("X.shape: {}  y.shape: {}".format(X.shape, y.shape))
Out[ ]:
    X.shape: (32561, 44)  y.shape: (32561,)

Now the data is represented in a way that scikit-learn can work with, and we can proceed as usual:

In[ ]:
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    logreg = LogisticRegression()
    logreg.fit(X_train, y_train)
    print("Test score: {:.2f}".format(logreg.score(X_test, y_test)))

Out[ ]:
    Test score: ...

In this example, we called get_dummies on a DataFrame containing both the training and the test data. This is important to ensure categorical values are represented in the same way in the training set and the test set.

Imagine we have the training and test sets in two different DataFrames. If the "Private Employee" value for the workclass feature does not appear in the test set, pandas will assume there are only three possible values for this feature and will create only three new dummy features. Now our training and test sets have different numbers of features, and we can't apply the model we learned on the training set to the test set anymore. Even worse, imagine the workclass feature has the values "Government Employee" and "Private Employee" in the training set, and "Self Employed" and "Self Employed Incorporated" in the test set. In both cases, pandas will create two new dummy features, so the encoded DataFrames will have the same number of features. However, the two dummy features have entirely different meanings in the training and test sets: the column that means "Government Employee" for the training set would encode "Self Employed" for the test set.

If we built a machine learning model on this data, it would work very badly, because it would assume the columns mean the same things (because they are in the same position) when in fact they mean very different things. To fix this, either call get_dummies on a DataFrame that contains both the training and the test data points, or make sure that the column names are the same for the training and test sets after calling get_dummies, to ensure they have the same semantics.
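One way to enforce matching columns after separate get_dummies calls, shown here as a sketch of my own rather than the book's recipe, is to reindex the encoded test frame to the training columns:

    import pandas as pd

    # hypothetical separate training and test frames with mismatched categories
    train = pd.DataFrame({"workclass": ["Government Employee", "Private Employee"]})
    test = pd.DataFrame({"workclass": ["Self Employed", "Private Employee"]})

    train_dummies = pd.get_dummies(train)
    test_dummies = pd.get_dummies(test)

    # align the test columns to the training columns; categories unseen in
    # training are dropped, and missing dummy columns are filled with 0
    test_aligned = test_dummies.reindex(columns=train_dummies.columns,
                                        fill_value=0)
    print(train_dummies.columns.tolist())
    print(test_aligned)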
Numbers Can Encode Categoricals

In the example of the adult dataset, the categorical variables were encoded as strings. On the one hand, that opens up the possibility of spelling errors, but on the other hand, it clearly marks a variable as categorical. Often, whether for ease of storage or because of the way the data is collected, categorical variables are encoded as integers. For example, imagine the census data in the adult dataset was collected using a questionnaire, and the answers for workclass were recorded as 0 (first box ticked), 1 (second box ticked), 2 (third box ticked), and so on. Now the column will contain small integers instead of strings like "Private", and it won't be immediately obvious to someone looking at the table representing the dataset whether they should treat this variable as continuous or categorical. Knowing that the numbers indicate employment status, however, it is clear that these are very distinct states and should not be modeled by a single continuous variable.

Categorical features are often encoded using integers. That they are numbers doesn't mean that they should necessarily be treated as continuous features. It is not always clear whether an integer feature should be treated as continuous or discrete (and one-hot-encoded). If there is no ordering between the semantics that are encoded (like in the workclass example), the feature must be treated as discrete. For other cases, like five-star ratings, the better encoding depends on the particular task and data and which machine learning algorithm is used.

The get_dummies function in pandas treats all numbers as continuous and will not create dummy variables for them. To get around this, you can either use scikit-learn's OneHotEncoder, for which you can specify which variables are continuous and which are discrete, or convert numeric columns in the DataFrame to strings. To illustrate, let's create a DataFrame object with two columns, one containing strings and one containing integers:

In[ ]:
    # Create a DataFrame with an integer feature and a categorical string feature
    demo_df = pd.DataFrame({'Integer Feature': [0, 1, 2, 1],
                            'Categorical Feature': ['socks', 'fox', 'socks', 'box']})
    display(demo_df)

Table 4-5 shows the result.
Table 4-5. A DataFrame with a categorical string feature and an integer feature

Categorical Feature | Integer Feature
socks               | 0
fox                 | 1
socks               | 2
box                 | 1

Using get_dummies will only encode the string feature and will not change the integer feature, as you can see in Table 4-6:

In[ ]:
    display(pd.get_dummies(demo_df))

Table 4-6. One-hot-encoded version of the data from Table 4-5, leaving the integer feature unchanged

Integer Feature | Categorical Feature_box | Categorical Feature_fox | Categorical Feature_socks
0               | 0                       | 0                       | 1
1               | 0                       | 1                       | 0
2               | 0                       | 0                       | 1
1               | 1                       | 0                       | 0

If you want dummy variables to be created for the "Integer Feature" column, you can explicitly list the columns you want to encode using the columns parameter. Then, both features will be treated as categorical (see Table 4-7):

In[ ]:
    demo_df['Integer Feature'] = demo_df['Integer Feature'].astype(str)
    display(pd.get_dummies(demo_df, columns=['Integer Feature',
                                             'Categorical Feature']))

Table 4-7. One-hot encoding of the data shown in Table 4-5, encoding the integer and string features

Integer Feature_0 | Integer Feature_1 | Integer Feature_2 | Categorical Feature_box | Categorical Feature_fox | Categorical Feature_socks
1 | 0 | 0 | 0 | 0 | 1
0 | 1 | 0 | 0 | 1 | 0
0 | 0 | 1 | 0 | 0 | 1
0 | 1 | 0 | 1 | 0 | 0
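The chapter takes the pandas route here. As a hedged aside of my own: in newer scikit-learn releases (roughly 0.20 onward), OneHotEncoder also accepts string and integer categories directly, so the same result can be obtained without converting columns to strings. The method name get_feature_names_out below assumes a recent release; older versions expose get_feature_names instead:

    import pandas as pd
    from sklearn.preprocessing import OneHotEncoder

    demo_df = pd.DataFrame({'Integer Feature': [0, 1, 2, 1],
                            'Categorical Feature': ['socks', 'fox', 'socks', 'box']})

    # every passed column is treated as categorical, whether int or string
    ohe = OneHotEncoder()
    encoded = ohe.fit_transform(demo_df)       # sparse matrix of 0/1 indicators
    print(ohe.get_feature_names_out())         # names of the generated columns
    print(encoded.toarray())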
Binning, Discretization, Linear Models, and Trees

The best way to represent data depends not only on the semantics of the data, but also on the kind of model you are using. Linear models and tree-based models (such as decision trees, gradient boosted trees, and random forests), two large and very commonly used families, have very different properties when it comes to how they work with different feature representations. Let's go back to the wave regression dataset that we used in Chapter 2. It has only a single input feature. Here is a comparison of a linear regression model and a decision tree regressor on this dataset (see Figure 4-1):

In[ ]:
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    X, y = mglearn.datasets.make_wave(n_samples=100)
    line = np.linspace(-3, 3, 1000, endpoint=False).reshape(-1, 1)

    reg = DecisionTreeRegressor(min_samples_split=3).fit(X, y)
    plt.plot(line, reg.predict(line), label="decision tree")

    reg = LinearRegression().fit(X, y)
    plt.plot(line, reg.predict(line), label="linear regression")

    plt.plot(X[:, 0], y, 'o', c='k')
    plt.ylabel("Regression output")
    plt.xlabel("Input feature")
    plt.legend(loc="best")

As you know, linear models can only model linear relationships, which are lines in the case of a single feature. The decision tree can build a much more complex model of the data. However, this is strongly dependent on the representation of the data. One way to make linear models more powerful on continuous data is to use binning (also known as discretization) of the feature to split it up into multiple features, as described here.
We imagine a partition of the input range for the feature (in this case, the numbers from -3 to 3) into a fixed number of bins, say 10. A data point will then be represented by which bin it falls into. To determine this, we first have to define the bins. In this case, we'll define 10 bins equally spaced between -3 and 3. We use the np.linspace function for this, creating 11 entries, which will create 10 bins (they are the spaces in between two consecutive boundaries):

In[ ]:
    bins = np.linspace(-3, 3, 11)
    print("bins: {}".format(bins))

Out[ ]:
    bins: [-3.  -2.4 -1.8 -1.2 -0.6  0.   0.6  1.2  1.8  2.4  3. ]

Here, the first bin contains all data points with feature values -3 to -2.4, the second bin contains all points with feature values from -2.4 to -1.8, and so on.

Next, we record for each data point which bin it falls into. This can be easily computed using the np.digitize function:
In[ ]:
    which_bin = np.digitize(X, bins=bins)
    print("\nData points:\n", X[:5])
    print("\nBin membership for data points:\n", which_bin[:5])

Out[ ]:
    Data points:
    [[-0.753]
     [ 2.704]
     [ 1.392]
     [ 0.592]
     [-2.064]]

    Bin membership for data points:
    [[ 4]
     [10]
     [ 8]
     [ 6]
     [ 2]]

What we did here is transform the single continuous input feature in the wave dataset into a categorical feature that encodes which bin a data point is in. To use a scikit-learn model on this data, we transform this discrete feature to a one-hot encoding using the OneHotEncoder from the preprocessing module. The OneHotEncoder does the same encoding as pandas.get_dummies, though it currently only works on categorical variables that are integers:

In[ ]:
    from sklearn.preprocessing import OneHotEncoder
    # transform using the OneHotEncoder
    encoder = OneHotEncoder(sparse=False)
    # encoder.fit finds the unique values that appear in which_bin
    encoder.fit(which_bin)
    # transform creates the one-hot encoding
    X_binned = encoder.transform(which_bin)
    print(X_binned[:5])

Out[ ]:
    [[ 0.  0.  0.  1.  0.  0.  0.  0.  0.  0.]
     [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]
     [ 0.  0.  0.  0.  0.  0.  0.  1.  0.  0.]
     [ 0.  0.  0.  0.  0.  1.  0.  0.  0.  0.]
     [ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]]

Because we specified 10 bins, the transformed dataset X_binned now is made up of 10 features:
17,215 | print("x_binned shape{}format(x_binned shape)out[ ]x_binned shape( now we build new linear regression model and new decision tree model on the one-hot-encoded data the result is visualized in figure - together with the bin boundariesshown as dotted black linesin[ ]line_binned encoder transform(np digitize(linebins=bins)reg linearregression(fit(x_binnedyplt plot(linereg predict(line_binned)label='linear regression binned'reg decisiontreeregressor(min_samples_split= fit(x_binnedyplt plot(linereg predict(line_binned)label='decision tree binned'plt plot( [: ] ' ' =' 'plt vlines(bins- linewidth= alpha plt legend(loc="best"plt ylabel("regression output"plt xlabel("input feature"figure - comparing linear regression and decision tree regression on binned features binningdiscretizationlinear modelsand trees |
The linear regression model and the decision tree make exactly the same predictions for each bin: they predict a constant value. As features are constant within each bin, any model must predict the same value for all points within a bin. Comparing what the models learned before binning the features and after, we see that the linear model became much more flexible, because it now has a different value for each bin, while the decision tree model got much less flexible. Binning features generally has no beneficial effect for tree-based models, as these models can learn to split up the data anywhere. In a sense, that means decision trees can learn whatever binning is most useful for predicting on this data. Additionally, decision trees look at multiple features at once, while binning is usually done on a per-feature basis. However, the linear model benefited greatly in expressiveness from the transformation of the data.

If there are good reasons to use a linear model for a particular dataset, say because it is very large and high-dimensional, but some features have nonlinear relations with the output, binning can be a great way to increase modeling power.

Interactions and Polynomials

Another way to enrich a feature representation, particularly for linear models, is adding interaction features and polynomial features of the original data. This kind of feature engineering is often used in statistical modeling, but it's also common in many practical machine learning applications.

As a first example, look again at Figure 4-2. The linear model learned a constant value for each bin in the wave dataset. We know, however, that linear models can learn not only offsets, but also slopes. One way to add a slope to the linear model on the binned data is to add the original feature (the x-axis in the plot) back in. This leads to an 11-dimensional dataset, as seen in Figure 4-3:

In[ ]:
    X_combined = np.hstack([X, X_binned])
    print(X_combined.shape)

Out[ ]:
    (100, 11)

In[ ]:
    reg = LinearRegression().fit(X_combined, y)

    line_combined = np.hstack([line, line_binned])
    plt.plot(line, reg.predict(line_combined), label='linear regression combined')

    for bin in bins:
        plt.plot([bin, bin], [-3, 3], ':', c='k')
    plt.legend(loc="best")
    plt.ylabel("Regression output")
    plt.xlabel("Input feature")
    plt.plot(X[:, 0], y, 'o', c='k')

Figure 4-3. Linear regression using binned features and a single global slope

In this example, the model learned an offset for each bin, together with a slope. The learned slope is downward, and shared across all the bins; there is a single x-axis feature, which has a single slope. Because the slope is shared across all bins, it doesn't seem to be very helpful. We would rather have a separate slope for each bin! We can achieve this by adding an interaction or product feature that indicates which bin a data point is in and where it lies on the x-axis. This feature is a product of the bin indicator and the original feature. Let's create this dataset:

In[ ]:
    X_product = np.hstack([X_binned, X * X_binned])
    print(X_product.shape)

Out[ ]:
    (100, 20)

The dataset now has 20 features: the indicators for which bin a data point is in, and a product of the original feature and the bin indicator. You can think of the product feature as a separate copy of the x-axis feature for each bin: it is the original feature
within the bin, and zero everywhere else. Figure 4-4 shows the result of the linear model on this new representation:

In[ ]:
    reg = LinearRegression().fit(X_product, y)

    line_product = np.hstack([line_binned, line * line_binned])
    plt.plot(line, reg.predict(line_product), label='linear regression product')

    for bin in bins:
        plt.plot([bin, bin], [-3, 3], ':', c='k')

    plt.plot(X[:, 0], y, 'o', c='k')
    plt.ylabel("Regression output")
    plt.xlabel("Input feature")
    plt.legend(loc="best")

Figure 4-4. Linear regression with a separate slope per bin

As you can see, now each bin has its own offset and slope in this model.
Using binning is one way to expand a continuous feature. Another one is to use polynomials of the original features. For a given feature x, we might want to consider x ** 2, x ** 3, x ** 4, and so on. This is implemented in PolynomialFeatures in the preprocessing module:

In[ ]:
    from sklearn.preprocessing import PolynomialFeatures

    # include polynomials up to x ** 10:
    # the default "include_bias=True" adds a feature that's constantly 1
    poly = PolynomialFeatures(degree=10, include_bias=False)
    poly.fit(X)
    X_poly = poly.transform(X)

Using a degree of 10 yields 10 features:

In[ ]:
    print("X_poly.shape: {}".format(X_poly.shape))

Out[ ]:
    X_poly.shape: (100, 10)

Let's compare the entries of X_poly to those of X:

In[ ]:
    print("Entries of X:\n{}".format(X[:5]))
    print("Entries of X_poly:\n{}".format(X_poly[:5]))

Out[ ]:
    Entries of X:
    [[-0.753]
     [ 2.704]
     [ 1.392]
     [ 0.592]
     [-2.064]]

    Entries of X_poly:
    (a 5 x 10 array: each row contains the powers x, x**2, ..., x**10 of the
    corresponding entry of X)

You can obtain the semantics of the features by calling the get_feature_names method, which provides the exponent for each feature:
17,220 | print("polynomial feature names:\ {}format(poly get_feature_names())out[ ]polynomial feature names[' '' ^ '' ^ '' ^ '' ^ '' ^ '' ^ '' ^ '' ^ '' ^ 'you can see that the first column of x_poly corresponds exactly to xwhile the other columns are the powers of the first entry it' interesting to see how large some of the values can get the second column has entries above , orders of magnitude different from the rest using polynomial features together with linear regression model yields the classical model of polynomial regression (see figure - )in[ ]reg linearregression(fit(x_polyyline_poly poly transform(lineplt plot(linereg predict(line_poly)label='polynomial linear regression'plt plot( [: ] ' ' =' 'plt ylabel("regression output"plt xlabel("input feature"plt legend(loc="best"figure - linear regression with tenth-degree polynomial features representing data and engineering features |
Polynomial features yield a very smooth fit on this one-dimensional data. However, polynomials of high degree tend to behave in extreme ways on the boundaries or in regions with little data.

As a comparison, here is a kernel SVM model learned on the original data, without any transformation (see Figure 4-6):

In[ ]:
    from sklearn.svm import SVR

    for gamma in [1, 10]:
        svr = SVR(gamma=gamma).fit(X, y)
        plt.plot(line, svr.predict(line), label='SVR gamma={}'.format(gamma))

    plt.plot(X[:, 0], y, 'o', c='k')
    plt.ylabel("Regression output")
    plt.xlabel("Input feature")
    plt.legend(loc="best")

Figure 4-6. Comparison of different gamma parameters for an SVM with RBF kernel

Using a more complex model, a kernel SVM, we are able to learn a similarly complex prediction to the polynomial regression without an explicit transformation of the features.
As a more realistic application of interactions and polynomials, let's look again at the Boston Housing dataset. We already used polynomial features on this dataset in Chapter 2. Now let's have a look at how these features were constructed, and at how much the polynomial features help. First we load the data, and rescale it to be between 0 and 1 using MinMaxScaler:

In[ ]:
    from sklearn.datasets import load_boston
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler

    boston = load_boston()
    X_train, X_test, y_train, y_test = train_test_split(
        boston.data, boston.target, random_state=0)

    # rescale data
    scaler = MinMaxScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)

Now, we extract polynomial features and interactions up to a degree of 2:

In[ ]:
    poly = PolynomialFeatures(degree=2).fit(X_train_scaled)
    X_train_poly = poly.transform(X_train_scaled)
    X_test_poly = poly.transform(X_test_scaled)
    print("X_train.shape: {}".format(X_train.shape))
    print("X_train_poly.shape: {}".format(X_train_poly.shape))

Out[ ]:
    X_train.shape: (379, 13)
    X_train_poly.shape: (379, 105)

The data originally had 13 features, which were expanded into 105 interaction features. These new features represent all possible interactions between two different original features, as well as the square of each original feature. degree=2 here means that we look at all features that are the product of up to two original features. The exact correspondence between input and output features can be found using the get_feature_names method:

In[ ]:
    print("Polynomial feature names:\n{}".format(poly.get_feature_names()))

Out[ ]:
    Polynomial feature names:
    ['1', 'x0', 'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9', 'x10',
     'x11', 'x12', 'x0^2', 'x0 x1', 'x0 x2', 'x0 x3', 'x0 x4', 'x0 x5',
     'x0 x6', 'x0 x7', 'x0 x8', 'x0 x9', 'x0 x10', 'x0 x11', 'x0 x12',
     ...
     'x11^2', 'x11 x12', 'x12^2']

The first new feature is a constant feature, called "1" here. The next 13 features are the original features (called "x0" to "x12"). Then follows the first feature squared ("x0^2") and combinations of the first and the other features.

Let's compare the performance using Ridge on the data with and without interactions:

In[ ]:
    from sklearn.linear_model import Ridge
    ridge = Ridge().fit(X_train_scaled, y_train)
    print("Score without interactions: {:.3f}".format(
        ridge.score(X_test_scaled, y_test)))
    ridge = Ridge().fit(X_train_poly, y_train)
    print("Score with interactions: {:.3f}".format(
        ridge.score(X_test_poly, y_test)))

Out[ ]:
    Score without interactions: ...
    Score with interactions: ...

Clearly, the interactions and polynomial features gave us a good boost in performance when using Ridge. When using a more complex model like a random forest, the story is a bit different, though:

In[ ]:
    from sklearn.ensemble import RandomForestRegressor
    rf = RandomForestRegressor(n_estimators=100).fit(X_train_scaled, y_train)
    print("Score without interactions: {:.3f}".format(
        rf.score(X_test_scaled, y_test)))
    rf = RandomForestRegressor(n_estimators=100).fit(X_train_poly, y_train)
    print("Score with interactions: {:.3f}".format(rf.score(X_test_poly, y_test)))

Out[ ]:
    Score without interactions: ...
    Score with interactions: ...
You can see that even without additional features, the random forest beats the performance of Ridge. Adding interactions and polynomials actually decreases performance slightly.

Univariate Nonlinear Transformations

We just saw that adding squared or cubed features can help linear models for regression. There are other transformations that often prove useful for transforming certain features: in particular, applying mathematical functions like log, exp, or sin. While tree-based models only care about the ordering of the features, linear models and neural networks are very tied to the scale and distribution of each feature, and if there is a nonlinear relation between the feature and the target, that becomes hard to model, particularly in regression. The functions log and exp can help by adjusting the relative scales in the data so that they can be captured better by a linear model or a neural network. We saw an application of that in Chapter 2 with the memory price data. The sin and cos functions can come in handy when dealing with data that encodes periodic patterns.

Most models work best when each feature (and in regression also the target) is loosely Gaussian distributed; that is, a histogram of each feature should have something resembling the familiar "bell curve" shape. Using transformations like log and exp is a hacky but simple and efficient way to achieve this. A particularly common case when such a transformation can be helpful is when dealing with integer count data. By count data, we mean features like "how often did user A log in?" Counts are never negative, and often follow particular statistical patterns. We are using a synthetic dataset of counts here that has properties similar to those you can find in the wild. The features are all integer-valued, while the response is continuous:

In[ ]:
    rnd = np.random.RandomState(0)
    X_org = rnd.normal(size=(1000, 3))
    w = rnd.normal(size=3)

    X = rnd.poisson(10 * np.exp(X_org))
    y = np.dot(X_org, w)

Let's look at the first feature. All entries are positive integers, but apart from that it's hard to make out a particular pattern. If we count the appearance of each value, the distribution of values becomes clearer:
17,225 | print("number of feature appearances:\ {}format(np bincount( [: ]))out[ ]number of feature appearances[ the value seems to be the most commonwith appearances (bincount always starts at )and the counts for higher values fall quickly howeverthere are some very high valueslike appearing twice we visualize the counts in figure - in[ ]bins np bincount( [: ]plt bar(range(len(bins))binscolor=' 'plt ylabel("number of appearances"plt xlabel("value"figure - histogram of feature values for [ univariate nonlinear transformations |
This kind of distribution of values (many small ones and a few very large ones) is very common in practice. However, it is something most linear models can't handle very well. Let's try to fit a ridge regression to this model:

In[ ]:
    from sklearn.linear_model import Ridge
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    score = Ridge().fit(X_train, y_train).score(X_test, y_test)
    print("Test score: {:.3f}".format(score))

Out[ ]:
    Test score: ...

As you can see from the relatively low score, Ridge was not able to really capture the relationship between X and y. Applying a logarithmic transformation can help, though. Because the value 0 appears in the data (and the logarithm is not defined at 0), we can't actually just apply log, but we have to compute log(X + 1):

In[ ]:
    X_train_log = np.log(X_train + 1)
    X_test_log = np.log(X_test + 1)

After the transformation, the distribution of the data is less asymmetrical and doesn't have very large outliers anymore (see Figure 4-8):

In[ ]:
    plt.hist(X_train_log[:, 0], bins=25, color='gray')
    plt.ylabel("Number of appearances")
    plt.xlabel("Value")

(The synthetic counts here follow a Poisson distribution, which is quite fundamental to count data.)
Building a ridge model on the new data provides a much better fit:

In[ ]:
    score = Ridge().fit(X_train_log, y_train).score(X_test_log, y_test)
    print("Test score: {:.3f}".format(score))

Out[ ]:
    Test score: ...

Finding the transformation that works best for each combination of dataset and model is somewhat of an art. In this example, all the features had the same properties. This is rarely the case in practice, and usually only a subset of the features should be transformed, or sometimes each feature needs to be transformed in a different way. As we mentioned earlier, these kinds of transformations are irrelevant for tree-based models but might be essential for linear models. Sometimes it is also a good idea to transform the target variable y in regression. Trying to predict counts (say, number of orders) is a fairly common task, and using the log(y + 1) transformation often helps. (This is a very crude approximation of using Poisson regression, which would be the proper solution from a probabilistic standpoint.)
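To make the target-transformation idea concrete, here is a minimal sketch of my own (not code from the text) that fits Ridge on log(y + 1) and automatically inverts the transform when predicting; TransformedTargetRegressor assumes a reasonably recent scikit-learn (0.20 or later):

    import numpy as np
    from sklearn.compose import TransformedTargetRegressor
    from sklearn.linear_model import Ridge

    # hypothetical count-valued target
    rng = np.random.RandomState(0)
    X = rng.normal(size=(200, 3))
    y = rng.poisson(np.exp(X[:, 0]))

    # train on log1p(y) = log(y + 1); predictions are mapped back with expm1
    model = TransformedTargetRegressor(
        regressor=Ridge(), func=np.log1p, inverse_func=np.expm1)
    model.fit(X, y)
    print(model.predict(X[:5]))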
As you saw in the previous examples, binning, polynomials, and interactions can have a huge influence on how models perform on a given dataset. This is particularly true for less complex models like linear models and naive Bayes models. Tree-based models, on the other hand, are often able to discover important interactions themselves, and don't require transforming the data explicitly most of the time. Other models, like SVMs, nearest neighbors, and neural networks, might sometimes benefit from using binning, interactions, or polynomials, but the implications there are usually much less clear than in the case of linear models.

Automatic Feature Selection

With so many ways to create new features, you might get tempted to increase the dimensionality of the data way beyond the number of original features. However, adding more features makes all models more complex, and so increases the chance of overfitting. When adding new features, or with high-dimensional datasets in general, it can be a good idea to reduce the number of features to only the most useful ones, and discard the rest. This can lead to simpler models that generalize better. But how can you know how good each feature is? There are three basic strategies: univariate statistics, model-based selection, and iterative selection. We will discuss all three of them in detail. All of these methods are supervised methods, meaning they need the target for fitting the model. This means we need to split the data into training and test sets, and fit the feature selection only on the training part of the data.

Univariate Statistics

In univariate statistics, we compute whether there is a statistically significant relationship between each feature and the target. Then the features that are related with the highest confidence are selected. In the case of classification, this is also known as analysis of variance (ANOVA). A key property of these tests is that they are univariate, meaning that they only consider each feature individually. Consequently, a feature will be discarded if it is only informative when combined with another feature. Univariate tests are often very fast to compute, and don't require building a model. On the other hand, they are completely independent of the model that you might want to apply after the feature selection.

To use univariate feature selection in scikit-learn, you need to choose a test, usually either f_classif (the default) for classification or f_regression for regression, and a method to discard features based on the p-values determined in the test. All methods for discarding parameters use a threshold to discard all features with too high a p-value (which means they are unlikely to be related to the target). The methods differ in how they compute this threshold, with the simplest ones being SelectKBest, which selects a fixed number k of features, and SelectPercentile, which selects a fixed percentage of features. Let's apply the feature selection for classification to the cancer dataset.
To make the task a bit harder, we'll add some noninformative noise features to the data. We expect the feature selection to be able to identify the features that are noninformative and remove them:

In[ ]:
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectPercentile
    from sklearn.model_selection import train_test_split

    cancer = load_breast_cancer()

    # get deterministic random numbers
    rng = np.random.RandomState(42)
    noise = rng.normal(size=(len(cancer.data), 50))
    # add noise features to the data
    # the first 30 features are from the dataset, the next 50 are noise
    X_w_noise = np.hstack([cancer.data, noise])

    X_train, X_test, y_train, y_test = train_test_split(
        X_w_noise, cancer.target, random_state=0, test_size=.5)
    # use f_classif (the default) and SelectPercentile to select 50% of features
    select = SelectPercentile(percentile=50)
    select.fit(X_train, y_train)
    # transform training set
    X_train_selected = select.transform(X_train)

    print("X_train.shape: {}".format(X_train.shape))
    print("X_train_selected.shape: {}".format(X_train_selected.shape))

Out[ ]:
    X_train.shape: (284, 80)
    X_train_selected.shape: (284, 40)

As you can see, the number of features was reduced from 80 to 40 (50 percent of the original number of features). We can find out which features have been selected using the get_support method, which returns a Boolean mask of the selected features (visualized in Figure 4-9):

In[ ]:
    mask = select.get_support()
    print(mask)
    # visualize the mask -- black is True, white is False
    plt.matshow(mask.reshape(1, -1), cmap='gray_r')
    plt.xlabel("Sample index")

Out[ ]:
    [ True  True  True  True  True  True  True  True  True  True  True  True
      True  True  True  True  True  True  True False  True False  True False
     False  True  True  True  True  True False False False  True False  True
     False  True False  True False False False False False False  True False
      True False False False False  True False  True False False False False
      True  True False  True False False False False]

Figure 4-9. Features selected by SelectPercentile

As you can see from the visualization of the mask, most of the selected features are the original features, and most of the noise features were removed. However, the recovery of the original features is not perfect. Let's compare the performance of logistic regression on all features against the performance using only the selected features:

In[ ]:
    from sklearn.linear_model import LogisticRegression

    # transform test data
    X_test_selected = select.transform(X_test)

    lr = LogisticRegression()
    lr.fit(X_train, y_train)
    print("Score with all features: {:.3f}".format(lr.score(X_test, y_test)))
    lr.fit(X_train_selected, y_train)
    print("Score with only selected features: {:.3f}".format(
        lr.score(X_test_selected, y_test)))

Out[ ]:
    Score with all features: ...
    Score with only selected features: ...

In this case, removing the noise features improved performance, even though some of the original features were lost. This was a very simple synthetic example, and outcomes on real data are usually mixed. Univariate feature selection can still be very helpful, though, if there is such a large number of features that building a model on them is infeasible, or if you suspect that many features are completely uninformative.

Model-Based Feature Selection

Model-based feature selection uses a supervised machine learning model to judge the importance of each feature, and keeps only the most important ones. The supervised model that is used for feature selection doesn't need to be the same model that is used for the final supervised modeling. The feature selection model needs to provide some measure of importance for each feature, so that they can be ranked by this measure. Decision trees and decision tree-based models provide a feature_importances_ attribute, which directly encodes the importance of each feature. Linear models have
coefficients, which can also be used to capture feature importances by considering the absolute values. As we saw in Chapter 2, linear models with L1 penalty learn sparse coefficients, which only use a small subset of features. This can be viewed as a form of feature selection for the model itself, but can also be used as a preprocessing step to select features for another model. In contrast to univariate selection, model-based selection considers all features at once, and so can capture interactions (if the model can capture them). To use model-based feature selection, we need to use the SelectFromModel transformer:

In[ ]:
    from sklearn.feature_selection import SelectFromModel
    from sklearn.ensemble import RandomForestClassifier
    select = SelectFromModel(
        RandomForestClassifier(n_estimators=100, random_state=42),
        threshold="median")

The SelectFromModel class selects all features that have an importance measure of the feature (as provided by the supervised model) greater than the provided threshold. To get a comparable result to what we got with univariate feature selection, we used the median as a threshold, so that half of the features will be selected. We use a random forest classifier with 100 trees to compute the feature importances. This is a quite complex model and much more powerful than using univariate tests. Now let's actually fit the model:

In[ ]:
    select.fit(X_train, y_train)
    X_train_l1 = select.transform(X_train)
    print("X_train.shape: {}".format(X_train.shape))
    print("X_train_l1.shape: {}".format(X_train_l1.shape))

Out[ ]:
    X_train.shape: (284, 80)
    X_train_l1.shape: (284, 40)

Again, we can have a look at the features that were selected (Figure 4-10):

In[ ]:
    mask = select.get_support()
    # visualize the mask -- black is True, white is False
    plt.matshow(mask.reshape(1, -1), cmap='gray_r')
    plt.xlabel("Sample index")

Figure 4-10. Features selected by SelectFromModel using the RandomForestClassifier
This time, most of the original features were selected. Because the threshold was set so as to select 40 features, some of the noise features are also selected. Let's take a look at the performance:

In[ ]:
    X_test_l1 = select.transform(X_test)
    score = LogisticRegression().fit(X_train_l1, y_train).score(X_test_l1, y_test)
    print("Test score: {:.3f}".format(score))

Out[ ]:
    Test score: ...

With the better feature selection, we also gained some improvements here.

Iterative Feature Selection

In univariate testing we used no model, while in model-based selection we used a single model to select features. In iterative feature selection, a series of models are built, with varying numbers of features. There are two basic methods: starting with no features and adding features one by one until some stopping criterion is reached, or starting with all features and removing features one by one until some stopping criterion is reached. Because a series of models are built, these methods are much more computationally expensive than the methods we discussed previously. One particular method of this kind is recursive feature elimination (RFE), which starts with all features, builds a model, and discards the least important feature according to the model. Then a new model is built using all but the discarded feature, and so on until only a prespecified number of features are left. For this to work, the model used for selection needs to provide some way to determine feature importance, as was the case for the model-based selection. Here, we use the same random forest model that we used earlier, and get the results shown in Figure 4-11:

In[ ]:
    from sklearn.feature_selection import RFE
    select = RFE(RandomForestClassifier(n_estimators=100, random_state=42),
                 n_features_to_select=40)

    select.fit(X_train, y_train)
    # visualize the selected features:
    mask = select.get_support()
    plt.matshow(mask.reshape(1, -1), cmap='gray_r')
    plt.xlabel("Sample index")
Figure 4-11. Features selected by recursive feature elimination with the random forest classifier model

The feature selection got better compared to the univariate and model-based selection, but one feature was still missed. Running this code also takes significantly longer than that for the model-based selection, because a random forest model is trained once for each feature that is dropped. Let's test the accuracy of the logistic regression model when using RFE for feature selection:

In[ ]:
    X_train_rfe = select.transform(X_train)
    X_test_rfe = select.transform(X_test)

    score = LogisticRegression().fit(X_train_rfe, y_train).score(X_test_rfe, y_test)
    print("Test score: {:.3f}".format(score))

Out[ ]:
    Test score: ...

We can also use the model used inside the RFE to make predictions. This uses only the feature set that was selected:

In[ ]:
    print("Test score: {:.3f}".format(select.score(X_test, y_test)))

Out[ ]:
    Test score: ...

Here, the performance of the random forest used inside the RFE is the same as that achieved by training a logistic regression model on top of the selected features. In other words, once we've selected the right features, the linear model performs as well as the random forest.

If you are unsure when selecting what to use as input to your machine learning algorithms, automatic feature selection can be quite helpful. It is also great for reducing the amount of features needed, for example, to speed up prediction or to allow for more interpretable models. In most real-world cases, applying feature selection is unlikely to provide large gains in performance. However, it is still a valuable tool in the toolbox of the feature engineer.
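One practical way to apply these selectors, shown here as a sketch of my own rather than an example from the text, is to chain the selector and the final model in a Pipeline, so that the selection is refit only on the training portion of each cross-validation split:

    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectPercentile
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    X, y = load_breast_cancer(return_X_y=True)

    # feature selection and classifier chained together; each CV split
    # fits the selector on its own training fold only
    pipe = make_pipeline(SelectPercentile(percentile=50),
                         LogisticRegression(max_iter=5000))
    scores = cross_val_score(pipe, X, y, cv=5)
    print("Cross-validation accuracy: {:.2f}".format(scores.mean()))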
Utilizing Expert Knowledge

Feature engineering is often an important place to use expert knowledge for a particular application. While the purpose of machine learning in many cases is to avoid having to create a set of expert-designed rules, that doesn't mean that prior knowledge of the application or domain should be discarded. Often, domain experts can help in identifying useful features that are much more informative than the initial representation of the data. Imagine you work for a travel agency and want to predict flight prices. Let's say you have a record of prices together with dates, airlines, start locations, and destinations. A machine learning model might be able to build a decent model from that. Some important factors in flight prices, however, cannot be learned. For example, flights are usually more expensive during peak vacation months and around holidays. While the dates of some holidays (like Christmas) are fixed, and their effect can therefore be learned from the date, others might depend on the phases of the moon (like Hanukkah and Easter) or be set by authorities (like school holidays). These events cannot be learned from the data if each flight is only recorded using the (Gregorian) date. However, it is easy to add a feature that encodes whether a flight was on, preceding, or following a public or school holiday. In this way, prior knowledge about the nature of the task can be encoded in the features to aid a machine learning algorithm. Adding a feature does not force a machine learning algorithm to use it, and even if the holiday information turns out to be noninformative for flight prices, augmenting the data with this information doesn't hurt.

We'll now look at one particular case of using expert knowledge, though in this case it might be more rightfully called "common sense." The task is predicting bicycle rentals in front of Andreas's house.

In New York, Citi Bike operates a network of bicycle rental stations with a subscription system. The stations are all over the city and provide a convenient way to get around. Bike rental data is made public in an anonymized form and has been analyzed in various ways. The task we want to solve is to predict for a given time and day how many people will rent a bike in front of Andreas's house, so he knows if any bikes will be left for him.

We first load the data for August 2015 for this particular station as a pandas DataFrame. We resample the data into three-hour intervals to obtain the main trends for each day:

In[ ]:
    citibike = mglearn.datasets.load_citibike()
17,235 | print("citi bike data:\ {}format(citibike head())out[ ]citi bike datastarttime : : : : : : : : : : freq hnameonedtypefloat the following example shows visualization of the rental frequencies for the whole month (figure - )in[ ]plt figure(figsize=( )xticks pd date_range(start=citibike index min()end=citibike index max()freq=' 'plt xticks(xticksxticks strftime("% % -% ")rotation= ha="left"plt plot(citibikelinewidth= plt xlabel("date"plt ylabel("rentals"figure - number of bike rentals over time for selected citi bike station looking at the datawe can clearly distinguish day and night for each -hour interval the patterns for weekdays and weekends also seem to be quite different when evaluating prediction task on time series like thiswe usually want to learn from the past and predict for the future this means when doing split into training and test setwe want to use all the data up to certain date as the training set and all the data past that date as the test set this is how we would usually use time series predictiongiven everything that we know about rentals in the pastwhat do we think will utilizing expert knowledge |
We will use the first 184 data points, corresponding to the first 23 days, as our training set, and the remaining data points, corresponding to the remaining days, as our test set.

The only feature that we are using in our prediction task is the date and time when a particular number of rentals occurred. So, the input feature is the date and time, say 2015-08-01 00:00:00, and the output is the number of rentals in the following three hours (three in this case, according to our DataFrame).

A (surprisingly) common way that dates are stored on computers is using POSIX time, which is the number of seconds since January 1970 00:00:00 (aka the beginning of Unix time). As a first try, we can use this single integer feature as our data representation:

In[ ]:
    # extract the target values (number of rentals)
    y = citibike.values
    # convert the time to POSIX time using "%s"
    X = citibike.index.strftime("%s").astype("int").reshape(-1, 1)

We first define a function to split the data into training and test sets, build the model, and visualize the result:

In[ ]:
    # use the first 184 data points for training, and the rest for testing
    n_train = 184

    # function to evaluate and plot a regressor on a given feature set
    def eval_on_features(features, target, regressor):
        # split the given features into a training and a test set
        X_train, X_test = features[:n_train], features[n_train:]
        # also split the target array
        y_train, y_test = target[:n_train], target[n_train:]
        regressor.fit(X_train, y_train)
        print("Test-set R^2: {:.2f}".format(regressor.score(X_test, y_test)))
        y_pred = regressor.predict(X_test)
        y_pred_train = regressor.predict(X_train)
        plt.figure(figsize=(10, 3))
        plt.xticks(range(0, len(X), 8), xticks.strftime("%a %m-%d"), rotation=90,
                   ha="left")
        plt.plot(range(n_train), y_train, label="train")
        plt.plot(range(n_train, len(y_test) + n_train), y_test, '-', label="test")
        plt.plot(range(n_train), y_pred_train, '--', label="prediction train")
        plt.plot(range(n_train, len(y_test) + n_train), y_pred, '--',
                 label="prediction test")
        plt.legend(loc=(1.01, 0))
        plt.xlabel("Date")
        plt.ylabel("Rentals")
Random forests require very little preprocessing of the data, which makes this seem like a good model to start with. We use the POSIX time feature and pass a random forest regressor to our eval_on_features function. Figure 4-13 shows the result:

In[ ]:
    from sklearn.ensemble import RandomForestRegressor
    regressor = RandomForestRegressor(n_estimators=100, random_state=0)
    plt.figure()
    eval_on_features(X, y, regressor)

Out[ ]:
    Test-set R^2: ...

Figure 4-13. Predictions made by random forest using only the POSIX time

The predictions on the training set are quite good, as is usual for random forests. However, for the test set, a constant line is predicted. The R^2 is negative, which means that we learned nothing. What happened?

The problem lies in the combination of our feature and the random forest. The value of the POSIX time feature for the test set is outside of the range of the feature values in the training set: the points in the test set have timestamps that are later than all the points in the training set. Trees, and therefore random forests, cannot extrapolate to feature ranges outside the training set. The result is that the model simply predicts the target value of the closest point in the training set, which is the last time it observed any data.

Clearly we can do better than this. This is where our "expert knowledge" comes in. From looking at the rental figures in the training data, two factors seem to be very important: the time of day and the day of the week. So, let's add these two features. We can't really learn anything from the POSIX time, so we drop that feature. First, let's use only the hour of the day. As Figure 4-14 shows, now the predictions have the same pattern for each day of the week:
17,238 | x_hour citibike index hour reshape(- eval_on_features(x_houryregressorout[ ]test-set ^ figure - predictions made by random forest using only the hour of the day the is already much betterbut the predictions clearly miss the weekly pattern now let' also add the day of the week (see figure - )in[ ]x_hour_week np hstack([citibike index dayofweek reshape(- )citibike index hour reshape(- )]eval_on_features(x_hour_weekyregressorout[ ]test-set ^ figure - predictions with random forest using day of week and hour of day features representing data and engineering features |
17,239 | week and time of day it has an of and shows pretty good predictive performance what this model likely is learning is the mean number of rentals for each combination of weekday and time of day from the first days of august this actually does not require complex model like random forestso let' try with simpler modellinearregression (see figure - )in[ ]from sklearn linear_model import linearregression eval_on_features(x_hour_weekylinearregression()out[ ]test-set ^ figure - predictions made by linear regression using day of week and hour of day as features linearregression works much worseand the periodic pattern looks odd the reason for this is that we encoded day of week and time of day using integerswhich are interpreted as categorical variables thereforethe linear model can only learn linear function of the time of day--and it learned that later in the daythere are more rentals howeverthe patterns are much more complex than that we can capture this by interpreting the integers as categorical variablesby transforming them using one hotencoder (see figure - )in[ ]enc onehotencoder(x_hour_week_onehot enc fit_transform(x_hour_weektoarray(utilizing expert knowledge |
17,240 | eval_on_features(x_hour_week_onehotyridge()out[ ]test-set ^ figure - predictions made by linear regression using one-hot encoding of hour of day and day of week this gives us much better match than the continuous feature encoding now the linear model learns one coefficient for each day of the weekand one coefficient for each time of the day that means that the "time of daypattern is shared over all days of the weekthough using interaction featureswe can allow the model to learn one coefficient for each combination of day and time of day (see figure - )in[ ]poly_transformer polynomialfeatures(degree= interaction_only=trueinclude_bias=falsex_hour_week_onehot_poly poly_transformer fit_transform(x_hour_week_onehotlr ridge(eval_on_features(x_hour_week_onehot_polyylrout[ ]test-set ^ representing data and engineering features |
17,241 | and hour of day features this transformation finally yields model that performs similarly well to the random forest big benefit of this model is that it is very clear what is learnedone coefficient for each day and time we can simply plot the coefficients learned by the modelsomething that would not be possible for the random forest firstwe create feature names for the hour and day featuresin[ ]hour ["% : for in range( )day ["mon""tue""wed""thu""fri""sat""sun"features day hour then we name all the interaction features extracted by polynomialfeaturesusing the get_feature_names methodand keep only the features with nonzero coefficientsin[ ]features_poly poly_transformer get_feature_names(featuresfeatures_nonzero np array(features_poly)[lr coef_ ! coef_nonzero lr coef_[lr coef_ ! now we can visualize the coefficients learned by the linear modelas seen in figure - in[ ]plt figure(figsize=( )plt plot(coef_nonzero' 'plt xticks(np arange(len(coef_nonzero))features_nonzerorotation= plt xlabel("feature magnitude"plt ylabel("feature"utilizing expert knowledge |
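As a wrap-up of this feature-engineering recipe, the steps above can also be chained into a single scikit-learn pipeline. The following is a minimal sketch, not from the book: it assumes the x_hour_week array, the target y, and the training-set size n_train defined earlier, and it assumes a scikit-learn version whose PolynomialFeatures accepts the sparse output of OneHotEncoder (otherwise, densify the one-hot features first, as the earlier code does with toarray()).

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures
from sklearn.linear_model import Ridge

# chain one-hot encoding, interaction features, and a ridge model so the
# whole feature-engineering recipe can be fit and evaluated as one estimator
pipe = make_pipeline(
    OneHotEncoder(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    Ridge())

# same time-based split as before: the first n_train points for training
pipe.fit(x_hour_week[:n_train], y[:n_train])
print("Test-set R^2: {:.2f}".format(
    pipe.score(x_hour_week[n_train:], y[n_train:])))

Wrapping the steps in a pipeline also makes it easy to reuse exactly the same preprocessing later, because the encoding steps are refit on each training split automatically when the pipeline is cross-validated.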
17,242 | summary and outlook in this we discussed how to deal with different data types (in particularwith categorical variableswe emphasized the importance of representing data in way that is suitable for the machine learning algorithm--for exampleby one-hotencoding categorical variables we also discussed the importance of engineering new featuresand the possibility of utilizing expert knowledge in creating derived features from your data in particularlinear models might benefit greatly from generating new features via binning and adding polynomials and interactionswhile more complexnonlinear models like random forests and svms might be able to learn more complex tasks without explicitly expanding the feature space in practicethe features that are used (and the match between features and methodis often the most important piece in making machine learning approach work well now that you have good idea of how to represent your data in an appropriate way and which algorithm to use for which taskthe next will focus on evaluating the performance of machine learning models and selecting the right parameter settings representing data and engineering features |
17,243 | model evaluation and improvement having discussed the fundamentals of supervised and unsupervised learningand having explored variety of machine learning algorithmswe will now dive more deeply into evaluating models and selecting parameters we will focus on the supervised methodsregression and classificationas evaluating and selecting models in unsupervised learning is often very qualitative process (as we saw in to evaluate our supervised modelsso far we have split our dataset into training set and test set using the train_test_split functionbuilt model on the training set by calling the fit methodand evaluated it on the test set using the score methodwhich for classification computes the fraction of correctly classified samples here' an example of that processin[ ]from sklearn datasets import make_blobs from sklearn linear_model import logisticregression from sklearn model_selection import train_test_split create synthetic dataset xy make_blobs(random_state= split data and labels into training and test set x_trainx_testy_trainy_test train_test_split(xyrandom_state= instantiate model and fit it to the training set logreg logisticregression(fit(x_trainy_trainevaluate the model on the test set print("test set score{ }format(logreg score(x_testy_test))out[ ]test set score |
17,244 | We are interested in measuring how well our model generalizes to new, previously unseen data. We are not interested in how well our model fit the training set, but rather in how well it can make predictions for data that was not observed during training.

In this chapter we will expand on two aspects of this evaluation. We will first introduce cross-validation, a more robust way to assess generalization performance, and discuss methods to evaluate classification and regression performance that go beyond the default measures of accuracy and R^2 provided by the score method. We will also discuss grid search, an effective method for adjusting the parameters in supervised models for the best generalization performance.

cross-validation

Cross-validation is a statistical method of evaluating generalization performance that is more stable and thorough than using a split into a training and a test set. In cross-validation, the data is instead split repeatedly and multiple models are trained. The most commonly used version of cross-validation is k-fold cross-validation, where k is a user-specified number, usually 5 or 10. When performing five-fold cross-validation, the data is first partitioned into five parts of (approximately) equal size, called folds. Next, a sequence of models is trained. The first model is trained using the first fold as the test set, and the remaining folds (2-5) are used as the training set. The model is built using the data in folds 2-5, and then the accuracy is evaluated on fold 1. Then another model is built, this time using fold 2 as the test set and the data in folds 1, 3, 4, and 5 as the training set. This process is repeated using folds 3, 4, and 5 as test sets. For each of these five splits of the data into training and test sets, we compute the accuracy. In the end, we have collected five accuracy values. The process is illustrated in figure - :

in[ ]:
mglearn.plots.plot_cross_validation()

figure - : data splitting in five-fold cross-validation. Usually, the first fifth of the data is the first fold, the second fifth of the data is the second fold, and so on.
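To make the splitting scheme concrete, here is a small sketch (an addition, not from the book) that uses scikit-learn's KFold splitter, covered in more detail later in this chapter, on a toy array and prints which samples land in the test set of each of the five splits; without shuffling, fold i is simply the i-th contiguous block of samples.

import numpy as np
from sklearn.model_selection import KFold

# ten samples with two features each, just to show the index bookkeeping
X_toy = np.arange(20).reshape(10, 2)

kfold = KFold(n_splits=5)
for i, (train_index, test_index) in enumerate(kfold.split(X_toy)):
    # each sample appears in exactly one test set across the five splits
    print("split {}: test indices {}, train indices {}".format(
        i, test_index, train_index))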
17,245 | cross-validation is implemented in scikit-learn using the cross_val_score function from the model_selection module the parameters of the cross_val_score function are the model we want to evaluatethe training dataand the ground-truth labels let' evaluate logisticregression on the iris datasetin[ ]from sklearn model_selection import cross_val_score from sklearn datasets import load_iris from sklearn linear_model import logisticregression iris load_iris(logreg logisticregression(scores cross_val_score(logregiris datairis targetprint("cross-validation scores{}format(scores)out[ ]cross-validation scores by defaultcross_val_score performs three-fold cross-validationreturning three accuracy values we can change the number of folds used by changing the cv parameterin[ ]scores cross_val_score(logregiris datairis targetcv= print("cross-validation scores{}format(scores)out[ ]cross-validation scores common way to summarize the cross-validation accuracy is to compute the meanin[ ]print("average cross-validation score{ }format(scores mean())out[ ]average cross-validation score using the mean cross-validation we can conclude that we expect the model to be around accurate on average looking at all five scores produced by the five-fold cross-validationwe can also conclude that there is relatively high variance in the accuracy between foldsranging from accuracy to accuracy this could imply that the model is very dependent on the particular folds used for trainingbut it could also just be consequence of the small size of the dataset cross-validation |
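The spread of the fold scores is worth reporting alongside the mean. The following sketch is an addition, reusing the scores, logreg, and iris objects from above; it also shows cross_validate (available in newer scikit-learn versions), which returns fit times and, optionally, training-set scores in addition to the test scores.

import numpy as np
from sklearn.model_selection import cross_validate

# report mean and standard deviation of the fold accuracies
print("accuracy: {:.3f} +/- {:.3f}".format(scores.mean(), scores.std()))

# cross_validate returns a dictionary with per-split timings and scores
res = cross_validate(logreg, iris.data, iris.target, cv=5,
                     return_train_score=True)
print("mean fit time: {:.4f} s".format(np.mean(res["fit_time"])))
print("mean train score: {:.3f}".format(np.mean(res["train_score"])))
print("mean test score: {:.3f}".format(np.mean(res["test_score"])))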
17,246 | there are several benefits to using cross-validation instead of single split into training and test set firstremember that train_test_split performs random split of the data imagine that we are "luckywhen randomly splitting the dataand all examples that are hard to classify end up in the training set in that casethe test set will only contain "easyexamplesand our test set accuracy will be unrealistically high converselyif we are "unlucky,we might have randomly put all the hard-toclassify examples in the test set and consequently obtain an unrealistically low score howeverwhen using cross-validationeach example will be in the training set exactly onceeach example is in one of the foldsand each fold is the test set once thereforethe model needs to generalize well to all of the samples in the dataset for all of the cross-validation scores (and their meanto be high having multiple splits of the data also provides some information about how sensitive our model is to the selection of the training dataset for the iris datasetwe saw accuracies between and this is quite rangeand it provides us with an idea about how the model might perform in the worst case and best case scenarios when applied to new data another benefit of cross-validation as compared to using single split of the data is that we use our data more effectively when using train_test_splitwe usually use of the data for training and of the data for evaluation when using five-fold cross-validationin each iteration we can use four-fifths of the data ( %to fit the model when using -fold cross-validationwe can use nine-tenths of the data ( %to fit the model more data will usually result in more accurate models the main disadvantage of cross-validation is increased computational cost as we are now training models instead of single modelcross-validation will be roughly times slower than doing single split of the data it is important to keep in mind that cross-validation is not way to build model that can be applied to new data cross-validation does not return model when calling cross_val_scoremultiple models are built internallybut the purpose of cross-validation is only to evaluate how well given algorithm will generalize when trained on specific dataset stratified -fold cross-validation and other strategies splitting the dataset into folds by starting with the first one- -th part of the dataas described in the previous sectionmight not always be good idea for examplelet' have look at the iris dataset model evaluation and improvement |
17,247 | from sklearn datasets import load_iris iris load_iris(print("iris labels:\ {}format(iris target)out[ ]iris labels[ as you can seethe first third of the data is the class the second third is the class and the last third is the class imagine doing three-fold cross-validation on this dataset the first fold would be only class so in the first split of the datathe test set would be only class and the training set would be only classes and as the classes in training and test sets would be different for all three splitsthe three-fold cross-validation accuracy would be zero on this dataset that is not very helpfulas we can do much better than accuracy on iris as the simple -fold strategy fails herescikit-learn does not use it for classificationbut rather uses stratified -fold cross-validation in stratified cross-validationwe split the data such that the proportions between classes are the same in each fold as they are in the whole datasetas illustrated in figure - in[ ]mglearn plots plot_stratified_cross_validation(figure - comparison of standard cross-validation and stratified cross-validation when the data is ordered by class label cross-validation |
17,248 | belong to class bthen stratified cross-validation ensures that in each fold of samples belong to class and of samples belong to class it is usually good idea to use stratified -fold cross-validation instead of -fold cross-validation to evaluate classifierbecause it results in more reliable estimates of generalization performance in the case of only of samples belonging to class busing standard -fold cross-validation it might easily happen that one fold only contains samples of class using this fold as test set would not be very informative about the overall performance of the classifier for regressionscikit-learn uses the standard -fold cross-validation by default it would be possible to also try to make each fold representative of the different values the regression target hasbut this is not commonly used strategy and would be surprising to most users more control over cross-validation we saw earlier that we can adjust the number of folds that are used in cross_val_score using the cv parameter howeverscikit-learn allows for much finer control over what happens during the splitting of the data by providing crossvalidation splitter as the cv parameter for most use casesthe defaults of -fold crossvalidation for regression and stratified -fold for classification work wellbut there are some cases where you might want to use different strategy sayfor examplewe want to use the standard -fold cross-validation on classification dataset to reproduce someone else' results to do thiswe first have to import the kfold splitter class from the model_selection module and instantiate it with the number of folds we want to usein[ ]from sklearn model_selection import kfold kfold kfold(n_splits= thenwe can pass the kfold splitter object as the cv parameter to cross_val_scorein[ ]print("cross-validation scores:\ {}formatcross_val_score(logregiris datairis targetcv=kfold))out[ ]cross-validation scores this waywe can verify that it is indeed really bad idea to use three-fold (nonstratifiedcross-validation on the iris dataset model evaluation and improvement |
17,249 | kfold kfold(n_splits= print("cross-validation scores:\ {}formatcross_val_score(logregiris datairis targetcv=kfold))out[ ]cross-validation scores remembereach fold corresponds to one of the classes in the iris datasetand so nothing can be learned another way to resolve this problem is to shuffle the data instead of stratifying the foldsto remove the ordering of the samples by label we can do that by setting the shuffle parameter of kfold to true if we shuffle the datawe also need to fix the random_state to get reproducible shuffling otherwiseeach run of cross_val_score would yield different resultas each time different split would be used (this might not be problembut can be surprisingshuffling the data before splitting it yields much better resultin[ ]kfold kfold(n_splits= shuffle=truerandom_state= print("cross-validation scores:\ {}formatcross_val_score(logregiris datairis targetcv=kfold))out[ ]cross-validation scores leave-one-out cross-validation another frequently used cross-validation method is leave-one-out you can think of leave-one-out cross-validation as -fold cross-validation where each fold is single sample for each splityou pick single data point to be the test set this can be very time consumingparticularly for large datasetsbut sometimes provides better estimates on small datasetsin[ ]from sklearn model_selection import leaveoneout loo leaveoneout(scores cross_val_score(logregiris datairis targetcv=looprint("number of cv iterations"len(scores)print("mean accuracy{ }format(scores mean())out[ ]number of cv iterationsmean accuracy cross-validation |
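As an aside (an addition, not from the book), the effect of stratification described above can be made explicit by counting how many samples of each class end up in every test fold; the sketch below assumes the iris dataset loaded earlier.

import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

# compare the class make-up of the test folds with and without stratification
for cv in [KFold(n_splits=3), StratifiedKFold(n_splits=3)]:
    print(type(cv).__name__)
    for train_index, test_index in cv.split(iris.data, iris.target):
        # counts of class 0, 1, and 2 in this test fold
        print("  test fold class counts: {}".format(
            np.bincount(iris.target[test_index])))

With plain KFold each test fold contains only a single class, which is exactly why the three-fold scores above were zero; StratifiedKFold keeps the three classes in roughly equal proportion in every fold.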
17,250 | anothervery flexible strategy for cross-validation is shuffle-split cross-validation in shuffle-split cross-validationeach split samples train_size many points for the training set and test_size many (disjointpoint for the test set this splitting is repeated n_iter times figure - illustrates running four iterations of splitting dataset consisting of pointswith training set of points and test sets of points each (you can use integers for train_size and test_size to use absolute sizes for these setsor floating-point numbers to use fractions of the whole dataset)in[ ]mglearn plots plot_shuffle_split(figure - shufflesplit with pointstrain_size= test_size= and n_iter= the following code splits the dataset into training set and test set for iterationsin[ ]from sklearn model_selection import shufflesplit shuffle_split shufflesplit(test_size train_size n_splits= scores cross_val_score(logregiris datairis targetcv=shuffle_splitprint("cross-validation scores:\ {}format(scores)out[ ]cross-validation scores shuffle-split cross-validation allows for control over the number of iterations independently of the training and test sizeswhich can sometimes be helpful it also allows for using only part of the data in each iterationby providing train_size and test_size settings that don' add up to one subsampling the data in this way can be useful for experimenting with large datasets there is also stratified variant of shufflesplitaptly named stratifiedshuffles plitwhich can provide more reliable results for classification tasks model evaluation and improvement |
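Here is a minimal sketch of the stratified variant just mentioned; it is an addition rather than book code, it reuses logreg and iris from earlier, and the 50/50 split repeated ten times is an arbitrary choice for illustration.

from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

# each random split preserves the class proportions of the target
stratified_shuffle = StratifiedShuffleSplit(n_splits=10, train_size=0.5,
                                            test_size=0.5, random_state=0)
scores = cross_val_score(logreg, iris.data, iris.target, cv=stratified_shuffle)
print("Cross-validation scores:\n{}".format(scores))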
17,251 | another very common setting for cross-validation is when there are groups in the data that are highly related say you want to build system to recognize emotions from pictures of facesand you collect dataset of pictures of people where each person is captured multiple timesshowing various emotions the goal is to build classifier that can correctly identify emotions of people not in the dataset you could use the default stratified cross-validation to measure the performance of classifier here howeverit is likely that pictures of the same person will be in both the training and the test set it will be much easier for classifier to detect emotions in face that is part of the training setcompared to completely new face to accurately evaluate the generalization to new faceswe must therefore ensure that the training and test sets contain images of different people to achieve thiswe can use groupkfoldwhich takes an array of groups as argument that we can use to indicate which person is in the image the groups array here indicates groups in the data that should not be split when creating the training and test setsand should not be confused with the class label this example of groups in the data is common in medical applicationswhere you might have multiple samples from the same patientbut are interested in generalizing to new patients similarlyin speech recognitionyou might have multiple recordings of the same speaker in your datasetbut are interested in recognizing speech of new speakers the following is an example of using synthetic dataset with grouping given by the groups array the dataset consists of data pointsand for each of the data pointsgroups specifies which group (think patientthe point belongs to the groups specify that there are four groupsand the first three samples belong to the first groupthe next four samples belong to the second groupand so onin[ ]from sklearn model_selection import groupkfold create synthetic dataset xy make_blobs(n_samples= random_state= assume the first three samples belong to the same groupthen the next fouretc groups [ scores cross_val_score(logregxygroupscv=groupkfold(n_splits= )print("cross-validation scores:\ {}format(scores)out[ ]cross-validation scores the samples don' need to be ordered by groupwe just did this for illustration purposes the splits that are calculated based on these labels are visualized in figure - cross-validation |
17,252 | entirely in the test setin[ ]mglearn plots plot_label_kfold(figure - label-dependent splitting with groupkfold there are more splitting strategies for cross-validation in scikit-learnwhich allow for an even greater variety of use cases (you can find these in the scikit-learn user guidehoweverthe standard kfoldstratifiedkfoldand groupkfold are by far the most commonly used ones grid search now that we know how to evaluate how well model generalizeswe can take the next step and improve the model' generalization performance by tuning its parameters we discussed the parameter settings of many of the algorithms in scikit-learn in and and it is important to understand what the parameters mean before trying to adjust them finding the values of the important parameters of model (the ones that provide the best generalization performanceis tricky taskbut necessary for almost all models and datasets because it is such common taskthere are standard methods in scikit-learn to help you with it the most commonly used method is grid searchwhich basically means trying all possible combinations of the parameters of interest consider the case of kernel svm with an rbf (radial basis functionkernelas implemented in the svc class as we discussed in there are two important parametersthe kernel bandwidthgammaand the regularization parameterc say we want to try the values and for the parameter cand the same for gamma because we have six different settings for and gamma that we want to trywe have combinations of parameters in total looking at all possible combinations creates table (or gridof parameter settings for the svmas shown here model evaluation and improvement |
17,253 | svc( = gamma= svc( = gamma= svc( = gamma= gamma= svc( = gamma= svc( = gamma= svc( = gamma= gamma= svc( = gamma= svc( = gamma= svc( = gamma= simple grid search we can implement simple grid search just as for loops over the two parameterstraining and evaluating classifier for each combinationin[ ]naive grid search implementation from sklearn svm import svc x_trainx_testy_trainy_test train_test_splitiris datairis targetrandom_state= print("size of training set{size of test set{}formatx_train shape[ ]x_test shape[ ])best_score for gamma in [ ]for in [ ]for each combination of parameterstrain an svc svm svc(gamma=gammac=csvm fit(x_trainy_trainevaluate the svc on the test set score svm score(x_testy_testif we got better scorestore the score and parameters if score best_scorebest_score score best_parameters {' ' 'gamma'gammaprint("best score{ }format(best_score)print("best parameters{}format(best_parameters)out[ ]size of training set size of test set best score best parameters{' ' 'gamma' the danger of overfitting the parameters and the validation set given this resultwe might be tempted to report that we found model that performs with accuracy on our dataset howeverthis claim could be overly optimistic (or just wrong)for the following reasonwe tried many different parameters and grid search |
17,254 | carry over to new data because we used the test data to adjust the parameterswe can no longer use it to assess how good the model is this is the same reason we needed to split the data into training and test sets in the first placewe need an independent dataset to evaluateone that was not used to create the model one way to resolve this problem is to split the data againso we have three setsthe training set to build the modelthe validation (or developmentset to select the parameters of the modeland the test set to evaluate the performance of the selected parameters figure - shows what this looks likein[ ]mglearn plots plot_threefold_split(figure - threefold split of data into training setvalidation setand test set after selecting the best parameters using the validation setwe can rebuild model using the parameter settings we foundbut now training on both the training data and the validation data this waywe can use as much data as possible to build our model this leads to the following implementationin[ ]from sklearn svm import svc split data into train+validation set and test set x_trainvalx_testy_trainvaly_test train_test_splitiris datairis targetrandom_state= split train+validation set into training and validation sets x_trainx_validy_trainy_valid train_test_splitx_trainvaly_trainvalrandom_state= print("size of training set{size of validation set{size of test set:{}\nformat(x_train shape[ ]x_valid shape[ ]x_test shape[ ])best_score for gamma in [ ]for in [ ]for each combination of parameterstrain an svc svm svc(gamma=gammac=csvm fit(x_trainy_trainevaluate the svc on the test set score svm score(x_validy_validif we got better scorestore the score and parameters if score best_scorebest_score score best_parameters {' ' 'gamma'gamma model evaluation and improvement |
17,255 | and evaluate it on the test set svm svc(**best_parameterssvm fit(x_trainvaly_trainvaltest_score svm score(x_testy_testprint("best score on validation set{ }format(best_score)print("best parameters"best_parametersprint("test set score with best parameters{ }format(test_score)out[ ]size of training set size of validation set size of test set best score on validation set best parameters{' ' 'gamma' test set score with best parameters the best score on the validation set is %slightly lower than beforeprobably because we used less data to train the model (x_train is smaller now because we split our dataset twicehoweverthe score on the test set--the score that actually tells us how well we generalize--is even lowerat so we can only claim to classify new data correctlynot correctly as we thought beforethe distinction between the training setvalidation setand test set is fundamentally important to applying machine learning methods in practice any choices made based on the test set accuracy "leakinformation from the test set into the model thereforeit is important to keep separate test setwhich is only used for the final evaluation it is good practice to do all exploratory analysis and model selection using the combination of training and validation setand reserve the test set for final evaluation--this is even true for exploratory visualization strictly speakingevaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is grid search with cross-validation while the method of splitting the data into traininga validationand test set that we just saw is workableand relatively commonly usedit is quite sensitive to how exactly the data is split from the output of the previous code snippet we can see that gridsearchcv selects ' ' 'gamma' as the best parameterswhile the output of the code in the previous section selects ' ' 'gamma' as the best parameters for better estimate of the generalization performanceinstead of using single split into training and validation setwe can use cross-validation to evaluate the performance of each parameter combination this method can be coded up as followsgrid search |
17,256 | for gamma in [ ]for in [ ]for each combination of parameterstrain an svc svm svc(gamma=gammac=cperform cross-validation scores cross_val_score(svmx_trainvaly_trainvalcv= compute mean cross-validation accuracy score np mean(scoresif we got better scorestore the score and parameters if score best_scorebest_score score best_parameters {' ' 'gamma'gammarebuild model on the combined training and validation set svm svc(**best_parameterssvm fit(x_trainvaly_trainvalto evaluate the accuracy of the svm using particular setting of and gamma using five-fold cross-validationwe need to train models as you can imaginethe main downside of the use of cross-validation is the time it takes to train all these models the following visualization (figure - illustrates how the best parameter setting is selected in the preceding codein[ ]mglearn plots plot_cross_val_selection(figure - results of grid search with cross-validation for each parameter setting (only subset is shown)five accuracy values are computedone for each split in the cross-validation then the mean validation accuracy is computed for each parameter setting the parameters with the highest mean validation accuracy are chosenmarked by the circle model evaluation and improvement |
17,257 | rithm on specific dataset howeverit is often used in conjunction with parameter search methods like grid search for this reasonmany people use the term cross-validation colloquially to refer to grid search with cross-validation the overall process of splitting the datarunning the grid searchand evaluating the final parameters is illustrated in figure - in[ ]mglearn plots plot_grid_search_overview(figure - overview of the process of parameter selection and model evaluation with gridsearchcv because grid search with cross-validation is such commonly used method to adjust parametersscikit-learn provides the gridsearchcv classwhich implements it in the form of an estimator to use the gridsearchcv classyou first need to specify the parameters you want to search over using dictionary gridsearchcv will then perform all the necessary model fits the keys of the dictionary are the names of parameters we want to adjust (as given when constructing the model--in this casec and gamma)and the values are the parameter settings we want to try out trying the values and for and gamma translates to the following dictionaryin[ ]param_grid {' '[ ]'gamma'[ ]print("parameter grid:\ {}format(param_grid)out[ ]parameter grid{' '[ ]'gamma'[ ]grid search |
17,258 | grid to search (param_grid)and the cross-validation strategy we want to use (sayfive-fold stratified cross-validation)in[ ]from sklearn model_selection import gridsearchcv from sklearn svm import svc grid_search gridsearchcv(svc()param_gridcv= gridsearchcv will use cross-validation in place of the split into training and validation set that we used before howeverwe still need to split the data into training and test setto avoid overfitting the parametersin[ ]x_trainx_testy_trainy_test train_test_splitiris datairis targetrandom_state= the grid_search object that we created behaves just like classifierwe can call the standard methods fitpredictand score on it howeverwhen we call fitit will run cross-validation for each combination of parameters we specified in param_gridin[ ]grid_search fit(x_trainy_trainfitting the gridsearchcv object not only searches for the best parametersbut also automatically fits new model on the whole training dataset with the parameters that yielded the best cross-validation performance what happens in fit is therefore equivalent to the result of the in[ code we saw at the beginning of this section the gridsearchcv class provides very convenient interface to access the retrained model using the predict and score methods to evaluate how well the best found parameters generalizewe can call score on the test setin[ ]print("test set score{ }format(grid_search score(x_testy_test))out[ ]test set score choosing the parameters using cross-validationwe actually found model that achieves accuracy on the test set the important thing here is that we did not use the test set to choose the parameters the parameters that were found are scored in the scikit-learn estimator that is created using another estimator is called meta-estimator gridsearchcv is the most commonly used meta-estimatorbut we will see more later model evaluation and improvement |
17,259 | over the different splits for this parameter settingis stored in best_score_in[ ]print("best parameters{}format(grid_search best_params_)print("best cross-validation score{ }format(grid_search best_score_)out[ ]best parameters{' ' 'gamma' best cross-validation score againbe careful not to confuse best_score_ with the generalization performance of the model as computed by the score method on the test set using the score method (or evaluating the output of the predict methodemploys model trained on the whole training set the best_score_ attribute stores the mean cross-validation accuracywith cross-validation performed on the training set sometimes it is helpful to have access to the actual model that was found--for exampleto look at coefficients or feature importances you can access the model with the best parameters trained on the whole training set using the best_estimator_ attributein[ ]print("best estimator:\ {}format(grid_search best_estimator_)out[ ]best estimatorsvc( = cache_size= class_weight=nonecoef = decision_function_shape=nonedegree= gamma= kernel='rbf'max_iter=- probability=falserandom_state=noneshrinking=truetol= verbose=falsebecause grid_search itself has predict and score methodsusing best_estimator_ is not needed to make predictions or evaluate the model analyzing the result of cross-validation it is often helpful to visualize the results of cross-validationto understand how the model generalization depends on the parameters we are searching as grid searches are quite computationally expensive to runoften it is good idea to start with relatively coarse and small grid we can then inspect the results of the cross-validated grid searchand possibly expand our search the results of grid search can be found in the cv_results_ attributewhich is dictionary storing all aspects of the search it grid search |
17,260 | after converting it to pandas dataframein[ ]import pandas as pd convert to dataframe results pd dataframe(grid_search cv_results_show the first rows display(results head()out[ ] param_c param_gamma rank_test_score split _test_score params {' ' 'gamma' {' ' 'gamma' {' ' 'gamma' {' ' 'gamma' {' ' 'gamma' split _test_score mean_test_score split _test_score split _test_score split _test_score std_test_score each row in results corresponds to one particular parameter setting for each settingthe results of all cross-validation splits are recordedas well as the mean and standard deviation over all splits as we were searching two-dimensional grid of parameters ( and gamma)this is best visualized as heat map (figure - first we extract the mean validation scoresthen we reshape the scores so that the axes correspond to and gammain[ ]scores np array(results mean_test_scorereshape( plot the mean cross-validation scores mglearn tools heatmap(scoresxlabel='gamma'xticklabels=param_grid['gamma']ylabel=' 'yticklabels=param_grid[' ']cmap="viridis" model evaluation and improvement |
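As an alternative to reshaping the raw array of scores, the same grid can be arranged with a pandas pivot table, which keeps the parameter values attached as row and column labels. This is an added aside, assuming the results DataFrame built from cv_results_ above.

# rows are C values, columns are gamma values, cells are mean test scores
pivot = results.pivot_table(index="param_C", columns="param_gamma",
                            values="mean_test_score")
display(pivot)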
17,261 | each point in the heat map corresponds to one run of cross-validationwith particular parameter setting the color encodes the cross-validation accuracywith light colors meaning high accuracy and dark colors meaning low accuracy you can see that svc is very sensitive to the setting of the parameters for many of the parameter settingsthe accuracy is around %which is quite badfor other settings the accuracy is around we can take away from this plot several things firstthe parameters we adjusted are very important for obtaining good performance both parameters ( and gammamatter lotas adjusting them can change the accuracy from to additionallythe ranges we picked for the parameters are ranges in which we see significant changes in the outcome it' also important to note that the ranges for the parameters are large enoughthe optimum values for each parameter are not on the edges of the plot now let' look at some plots (shown in figure - where the result is less idealbecause the search ranges were not chosen properlyfigure - heat map visualizations of misspecified search grids grid search |
17,262 | figaxes plt subplots( figsize=( )param_grid_linear {' 'np linspace( )'gamma'np linspace( )param_grid_one_log {' 'np linspace( )'gamma'np logspace(- )param_grid_range {' 'np logspace(- )'gamma'np logspace(- - )for param_gridax in zip([param_grid_linearparam_grid_one_logparam_grid_range]axes)grid_search gridsearchcv(svc()param_gridcv= grid_search fit(x_trainy_trainscores grid_search cv_results_['mean_test_score'reshape( plot the mean cross-validation scores scores_image mglearn tools heatmapscoresxlabel='gamma'ylabel=' 'xticklabels=param_grid['gamma']yticklabels=param_grid[' ']cmap="viridis"ax=axplt colorbar(scores_imageax=axes tolist()the first panel shows no changes at allwith constant color over the whole parameter grid in this casethis is caused by improper scaling and range of the parameters and gamma howeverif no change in accuracy is visible over the different parameter settingsit could also be that parameter is just not important at all it is usually good to try very extreme values firstto see if there are any changes in the accuracy as result of changing parameter the second panel shows vertical stripe pattern this indicates that only the setting of the gamma parameter makes any difference this could mean that the gamma parameter is searching over interesting values but the parameter is not--or it could mean the parameter is not important the third panel shows changes in both and gamma howeverwe can see that in the entire bottom left of the plotnothing interesting is happening we can probably exclude the very small values from future grid searches the optimum parameter setting is at the top right as the optimum is in the border of the plotwe can expect that there might be even better values beyond this borderand we might want to change our search range to include more parameters in this region tuning the parameter grid based on the cross-validation scores is perfectly fineand good way to explore the importance of different parameters howeveryou should not test different parameter ranges on the final test set--as we discussed earliereval model evaluation and improvement |
17,263 | to use search over spaces that are not grids in some casestrying all possible combinations of all parameters as gridsearchcv usually doesis not good idea for examplesvc has kernel parameterand depending on which kernel is chosenother parameters will be relevant if ker nel='linear'the model is linearand only the parameter is used if kernel='rbf'both the and gamma parameters are used (but not other parameters like degreein this casesearching over all possible combinations of cgammaand kernel wouldn' make senseif kernel='linear'gamma is not usedand trying different values for gamma would be waste of time to deal with these kinds of "conditionalparametersgridsearchcv allows the param_grid to be list of dictionaries each dictionary in the list is expanded into an independent grid possible grid search involving kernel and parameters could look like thisin[ ]param_grid [{'kernel'['rbf']' '[ ]'gamma'[ ]}{'kernel'['linear']' '[ ]}print("list of grids:\ {}format(param_grid)out[ ]list of grids[{'kernel'['rbf']' '[ ]'gamma'[ ]}{'kernel'['linear']' '[ ]}in the first gridthe kernel parameter is always set to 'rbf(not that the entry for kernel is list of length one)and both the and gamma parameters are varied in the second gridthe kernel parameter is always set to linearand only is varied now let' apply this more complex parameter searchin[ ]grid_search gridsearchcv(svc()param_gridcv= grid_search fit(x_trainy_trainprint("best parameters{}format(grid_search best_params_)print("best cross-validation score{ }format(grid_search best_score_)out[ ]best parameters{' ' 'kernel''rbf''gamma' best cross-validation score grid search |
17,264 | variedin[ ]results pd dataframe(grid_search cv_results_we display the transposed table so that it better fits on the pagedisplay(results tout[ ]param_c param_gamma nan nan nan nan param_kernel rbf rbf rbf rbf linear linear linear linear params { kernelrbfgamma { kernelrbfgamma { kernelrbfgamma { { kernelrbfkernelgamma linear{ kernellinear{ kernellinear{ kernellinearmean_test_score rank_test_score split _test_score split _test_score split _test_score split _test_score split _test_score std_test_score rows columns using different cross-validation strategies with grid search similarly to cross_val_scoregridsearchcv uses stratified -fold cross-validation by default for classificationand -fold cross-validation for regression howeveryou can also pass any cross-validation splitteras described in "more control over crossvalidationon page as the cv parameter in gridsearchcv in particularto get only single split into training and validation setyou can use shufflesplit or stratifiedshufflesplit with n_iter= this might be helpful for very large datasetsor very slow models nested cross-validation in the preceding exampleswe went from using single split of the data into trainingvalidationand test sets to splitting the data into training and test sets and then performing cross-validation on the training set but when using gridsearchcv as model evaluation and improvement |
17,265 | which might make our results unstable and make us depend too much on this single split of the data we can go step furtherand instead of splitting the original data into training and test sets onceuse multiple splits of cross-validation this will result in what is called nested cross-validation in nested cross-validationthere is an outer loop over splits of the data into training and test sets for each of thema grid search is run (which might result in different best parameters for each split in the outer loopthenfor each outer splitthe test set score using the best settings is reported the result of this procedure is list of scores--not modeland not parameter setting the scores tell us how well model generalizesgiven the best parameters found by the grid as it doesn' provide model that can be used on new datanested crossvalidation is rarely used when looking for predictive model to apply to future data howeverit can be useful for evaluating how well given model works on particular dataset implementing nested cross-validation in scikit-learn is straightforward we call cross_val_score with an instance of gridsearchcv as the modelin[ ]scores cross_val_score(gridsearchcv(svc()param_gridcv= )iris datairis targetcv= print("cross-validation scores"scoresprint("mean cross-validation score"scores mean()out[ ]cross-validation scores mean cross-validation score the result of our nested cross-validation can be summarized as "svc can achieve mean cross-validation accuracy on the iris dataset"--nothing more and nothing less herewe used stratified five-fold cross-validation in both the inner and the outer loop as our param_grid contains combinations of parametersthis results in whopping models being builtmaking nested cross-validation very expensive procedure herewe used the same cross-validation splitter in the inner and the outer loophoweverthis is not necessary and you can use any combination of cross-validation strategies in the inner and outer loops it can be bit tricky to understand what is happening in the single line given aboveand it can be helpful to visualize it as for loopsas done in the following simplified implementationgrid search |
17,266 | def nested_cv(xyinner_cvouter_cvclassifierparameter_grid)outer_scores [for each split of the data in the outer cross-validation (split method returns indicesfor training_samplestest_samples in outer_cv split(xy)find best parameter using inner cross-validation best_parms {best_score -np inf iterate over parameters for parameters in parameter_gridaccumulate score over inner splits cv_scores [iterate over inner cross-validation for inner_traininner_test in inner_cv splitx[training_samples] [training_samples])build classifier given parameters and training data clf classifier(**parametersclf fit( [inner_train] [inner_train]evaluate on inner test set score clf score( [inner_test] [inner_test]cv_scores append(scorecompute mean score over inner folds mean_score np mean(cv_scoresif mean_score best_scoreif better than so farremember parameters best_score mean_score best_params parameters build classifier on best parameters using outer training set clf classifier(**best_paramsclf fit( [training_samples] [training_samples]evaluate outer_scores append(clf score( [test_samples] [test_samples])return np array(outer_scoresnowlet' run this function on the iris datasetin[ ]from sklearn model_selection import parametergridstratifiedkfold scores nested_cv(iris datairis targetstratifiedkfold( )stratifiedkfold( )svcparametergrid(param_grid)print("cross-validation scores{}format(scores)out[ ]cross-validation scores parallelizing cross-validation and grid search while running grid search over many parameters and on large datasets can be computationally challengingit is also embarrassingly parallel this means that building model evaluation and improvement |
17,267 | be done completely independently from the other parameter settings and models this makes grid search and cross-validation ideal candidates for parallelization over multiple cpu cores or over cluster you can make use of multiple cores in grid searchcv and cross_val_score by setting the n_jobs parameter to the number of cpu cores you want to use you can set n_jobs=- to use all available cores you should be aware that scikit-learn does not allow nesting of parallel operations soif you are using the n_jobs option on your model (for examplea random forest)you cannot use it in gridsearchcv to search over this model if your dataset and model are very largeit might be that using many cores uses up too much memoryand you should monitor your memory usage when building large models in parallel it is also possible to parallelize grid search and cross-validation over multiple machines in clusteralthough at the time of writing this is not supported within scikit-learn it ishoweverpossible to use the ipython parallel framework for parallel grid searchesif you don' mind writing the for loop over parameters as we did in "simple grid searchon page for spark usersthere is also the recently developed spark-sklearn packagewhich allows running grid search over an already established spark cluster evaluation metrics and scoring so farwe have evaluated classification performance using accuracy (the fraction of correctly classified samplesand regression performance using howeverthese are only two of the many possible ways to summarize how well supervised model performs on given dataset in practicethese evaluation metrics might not be appropriate for your applicationand it is important to choose the right metric when selecting between models and adjusting parameters keep the end goal in mind when selecting metricyou should always have the end goal of the machine learning application in mind in practicewe are usually interested not just in making accurate predictionsbut in using these predictions as part of larger decisionmaking process before picking machine learning metricyou should think about the high-level goal of the applicationoften called the business metric the consequences of choosing particular algorithm for machine learning application are evaluation metrics and scoring |
17,268 | decreasing the number of hospital admissions it could also be getting more users for your websiteor having users spend more money in your shop when choosing model or adjusting parametersyou should pick the model or parameter values that have the most positive influence on the business metric often this is hardas assessing the business impact of particular model might require putting it in production in real-life system in the early stages of developmentand for adjusting parametersit is often infeasible to put models into production just for testing purposesbecause of the high business or personal risks that can be involved imagine evaluating the pedestrian avoidance capabilities of self-driving car by just letting it drive aroundwithout verifying it firstif your model is badpedestrians will be in troubletherefore we often need to find some surrogate evaluation procedureusing an evaluation metric that is easier to compute for examplewe could test classifying images of pedestrians against nonpedestrians and measure accuracy keep in mind that this is only surrogateand it pays off to find the closest metric to the original business goal that is feasible to evaluate this closest metric should be used whenever possible for model evaluation and selection the result of this evaluation might not be single number--the consequence of your algorithm could be that you have more customersbut each customer will spend less--but it should capture the expected business impact of choosing one model over another in this sectionwe will first discuss metrics for the important special case of binary classificationthen turn to multiclass classification and finally regression metrics for binary classification binary classification is arguably the most common and conceptually simple application of machine learning in practice howeverthere are still number of caveats in evaluating even this simple task before we dive into alternative metricslet' have look at the ways in which measuring accuracy might be misleading remember that for binary classificationwe often speak of positive class and negative classwith the understanding that the positive class is the one we are looking for kinds of errors oftenaccuracy is not good measure of predictive performanceas the number of mistakes we make does not contain all the information we are interested in imagine an application to screen for the early detection of cancer using an automated test if we ask scientifically minded readers to excuse the commercial language in this section not losing track of the end goal is equally important in sciencethough the authors are not aware of similar phrase to "business impactbeing used in that realm model evaluation and improvement |
17,269 | patient will undergo additional screening herewe would call positive test (an indication of cancerthe positive classand negative test the negative class we can' assume that our model will always work perfectlyand it will make mistakes for any applicationwe need to ask ourselves what the consequences of these mistakes might be in the real world one possible mistake is that healthy patient will be classified as positiveleading to additional testing this leads to some costs and an inconvenience for the patient (and possibly some mental distressan incorrect positive prediction is called false positive the other possible mistake is that sick patient will be classified as negativeand will not receive further tests and treatment the undiagnosed cancer might lead to serious health issuesand could even be fatal mistake of this kind--an incorrect negative prediction--is called false negative in statisticsa false positive is also known as type errorand false negative as type ii error we will stick to "false negativeand "false positive,as they are more explicit and easier to remember in the cancer diagnosis exampleit is clear that we want to avoid false negatives as much as possiblewhile false positives can be viewed as more of minor nuisance while this is particularly drastic examplethe consequence of false positives and false negatives are rarely the same in commercial applicationsit might be possible to assign dollar values to both kinds of mistakeswhich would allow measuring the error of particular prediction in dollarsinstead of accuracy this might be much more meaningful for making business decisions on which model to use imbalanced datasets types of errors play an important role when one of two classes is much more frequent than the other one this is very common in practicea good example is click-through predictionwhere each data point represents an "impression,an item that was shown to user this item might be an ador related storyor related person to follow on social media site the goal is to predict whetherif shown particular itema user will click on it (indicating they are interestedmost things users are shown on the internet (in particularadswill not result in click you might need to show user ads or articles before they find something interesting enough to click on this results in dataset where for each "no clickdata pointsthere is "clickeddata pointin other words of the samples belong to the "no clickclass datasets in which one class is much more frequent than the other are often called imbalanced datasetsor datasets with imbalanced classes in realityimbalanced data is the normand it is rare that the events of interest have equal or even similar frequency in the data now let' say you build classifier that is accurate on the click prediction task what does that tell you accuracy sounds impressivebut this doesn' take the evaluation metrics and scoring |
17,270 | machine learning modelby always predicting "no click on the other handeven with imbalanced dataa accurate model could in fact be quite good howeveraccuracy doesn' allow us to distinguish the constant "no clickmodel from potentially good model to illustratewe'll create : imbalanced dataset from the digits datasetby classifying the digit against the nine other classesin[ ]from sklearn datasets import load_digits digits load_digits( digits target = x_trainx_testy_trainy_test train_test_splitdigits datayrandom_state= we can use the dummyclassifier to always predict the majority class (here "not nine"to see how uninformative accuracy can bein[ ]from sklearn dummy import dummyclassifier dummy_majority dummyclassifier(strategy='most_frequent'fit(x_trainy_trainpred_most_frequent dummy_majority predict(x_testprint("unique predicted labels{}format(np unique(pred_most_frequent))print("test score{ }format(dummy_majority score(x_testy_test))out[ ]unique predicted labels[falsetest score we obtained close to accuracy without learning anything this might seem strikingbut think about it for minute imagine someone telling you their model is accurate you might think they did very good job but depending on the problemthat might be possible by just predicting one classlet' compare this against using an actual classifierin[ ]from sklearn tree import decisiontreeclassifier tree decisiontreeclassifier(max_depth= fit(x_trainy_trainpred_tree tree predict(x_testprint("test score{ }format(tree score(x_testy_test))out[ ]test score model evaluation and improvement |
17,271 | constant predictor this could indicate either that something is wrong with how we used decisiontreeclassifieror that accuracy is in fact not good measure here for comparison purposeslet' evaluate two more classifierslogisticregression and the default dummyclassifierwhich makes random predictions but produces classes with the same proportions as in the training setin[ ]from sklearn linear_model import logisticregression dummy dummyclassifier(fit(x_trainy_trainpred_dummy dummy predict(x_testprint("dummy score{ }format(dummy score(x_testy_test))logreg logisticregression( = fit(x_trainy_trainpred_logreg logreg predict(x_testprint("logreg score{ }format(logreg score(x_testy_test))out[ ]dummy score logreg score the dummy classifier that produces random output is clearly the worst of the lot (according to accuracy)while logisticregression produces very good results howevereven the random classifier yields over accuracy this makes it very hard to judge which of these results is actually helpful the problem here is that accuracy is an inadequate measure for quantifying predictive performance in this imbalanced setting for the rest of this we will explore alternative metrics that provide better guidance in selecting models in particularwe would like to have metrics that tell us how much better model is than making "most frequentpredictions or random predictionsas they are computed in pred_most_frequent and pred_dummy if we use metric to assess our modelsit should definitely be able to weed out these nonsense predictions confusion matrices one of the most comprehensive ways to represent the result of evaluating binary classification is using confusion matrices let' inspect the predictions of logisticregres sion from the previous section using the confusion_matrix function we already stored the predictions on the test set in pred_logregevaluation metrics and scoring |
17,272 | from sklearn metrics import confusion_matrix confusion confusion_matrix(y_testpred_logregprint("confusion matrix:\ {}format(confusion)out[ ]confusion matrix[[ ]the output of confusion_matrix is two-by-two arraywhere the rows correspond to the true classes and the columns correspond to the predicted classes each entry counts how often sample that belongs to the class corresponding to the row (here"not nineand "nine"was classified as the class corresponding to the column the following plot (figure - illustrates this meaningin[ ]mglearn plots plot_confusion_matrix_illustration(figure - confusion matrix of the "nine vs restclassification task model evaluation and improvement |
17,273 | cationswhile other entries tell us how many samples of one class got mistakenly classified as another class if we declare " ninethe positive classwe can relate the entries of the confusion matrix with the terms false positive and false negative that we introduced earlier to complete the picturewe call correctly classified samples belonging to the positive class true positives and correctly classified samples belonging to the negative class true negatives these terms are usually abbreviated fpfntpand tn and lead to the following interpretation for the confusion matrix (figure - )in[ ]mglearn plots plot_binary_confusion_matrix(figure - confusion matrix for binary classification now let' use the confusion matrix to compare the models we fitted earlier (the two dummy modelsthe decision treeand the logistic regression)in[ ]print("most frequent class:"print(confusion_matrix(y_testpred_most_frequent)print("\ndummy model:"print(confusion_matrix(y_testpred_dummy)print("\ndecision tree:"print(confusion_matrix(y_testpred_tree)print("\nlogistic regression"print(confusion_matrix(y_testpred_logreg) the main diagonal of two-dimensional array or matrix is [iievaluation metrics and scoring |
17,274 | Out[ ]:
Most frequent class:
[[ ]]

Dummy model:
[[ ]]

Decision tree:
[[ ]]

Logistic regression
[[ ]]

Looking at the confusion matrix, it is quite clear that something is wrong with pred_most_frequent, because it always predicts the same class. pred_dummy, on the other hand, has a very small number of true positives, particularly compared to the number of false negatives and false positives -- there are many more false positives than true positives! The predictions made by the decision tree make much more sense than the dummy predictions, even though the accuracy was nearly the same. Finally, we can see that logistic regression does better than pred_tree in all aspects: it has more true positives and true negatives while having fewer false positives and false negatives. From this comparison, it is clear that only the decision tree and the logistic regression give reasonable results, and that the logistic regression works better than the tree on all accounts. However, inspecting the full confusion matrix is a bit cumbersome, and while we gained a lot of insight from looking at all aspects of the matrix, the process was very manual and qualitative. There are several ways to summarize the information in the confusion matrix, which we will discuss next.

Relation to accuracy

We already saw one way to summarize the result in the confusion matrix -- by computing accuracy, which can be expressed as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

In other words, accuracy is the number of correct predictions (TP and TN) divided by the number of all samples (all entries of the confusion matrix summed up).

Precision, recall, and f-score

There are several other ways to summarize the confusion matrix, with the most common ones being precision and recall. Precision measures how many of the samples predicted as positive are actually positive:
17,275 | Precision = TP / (TP + FP)

Precision is used as a performance metric when the goal is to limit the number of false positives. As an example, imagine a model for predicting whether a new drug will be effective in treating a disease in clinical trials. Clinical trials are notoriously expensive, and a pharmaceutical company will only want to run an experiment if it is very sure that the drug will actually work. Therefore, it is important that the model does not produce many false positives -- in other words, that it has a high precision. Precision is also known as positive predictive value (PPV).

Recall, on the other hand, measures how many of the positive samples are captured by the positive predictions:

Recall = TP / (TP + FN)

Recall is used as a performance metric when we need to identify all positive samples; that is, when it is important to avoid false negatives. The cancer diagnosis example from earlier in this chapter is a good example for this: it is important to find all people that are sick, possibly including healthy patients in the prediction. Other names for recall are sensitivity, hit rate, or true positive rate (TPR).

There is a trade-off between optimizing recall and optimizing precision. You can trivially obtain a perfect recall if you predict all samples to belong to the positive class -- there will be no false negatives, and no true negatives either. However, predicting all samples as positive will result in many false positives, and therefore the precision will be very low. On the other hand, if you find a model that predicts only the single data point it is most sure about as positive and the rest as negative, then precision will be perfect (assuming this data point is in fact positive), but recall will be very bad.

Precision and recall are only two of many classification measures derived from TP, FP, TN, and FN. You can find a great summary of all the measures on Wikipedia. In the machine learning community, precision and recall are arguably the most commonly used measures for binary classification, but other communities might use other related metrics. So, while precision and recall are very important measures, looking at only one of them will not provide you with the full picture. One way to summarize them is the f-score or f-measure, which is the harmonic mean of precision and recall:

F = 2 * (precision * recall) / (precision + recall)

As the f-score takes both precision and recall
17,276 | into accountit can be better measure than accuracy on imbalanced binary classification datasets let' run it on the predictions for the "nine vs restdataset that we computed earlier herewe will assume that the "nineclass is the positive class (it is labeled as true while the rest is labeled as false)so the positive class is the minority classin[ ]from sklearn metrics import _score print(" score most frequent{ }formatf _score(y_testpred_most_frequent))print(" score dummy{ }format( _score(y_testpred_dummy))print(" score tree{ }format( _score(y_testpred_tree))print(" score logistic regression{ }formatf _score(y_testpred_logreg))out[ ] score most frequent score dummy score tree score logistic regression we can note two things here firstwe get an error message for the most_frequent predictionas there were no predictions of the positive class (which makes the denominator in the -score zeroalsowe can see pretty strong distinction between the dummy predictions and the tree predictionswhich wasn' clear when looking at accuracy alone using the -score for evaluationwe summarized the predictive performance again in one number howeverthe -score seems to capture our intuition of what makes good model much better than accuracy did disadvantage of the -scorehoweveris that it is harder to interpret and explain than accuracy if we want more comprehensive summary of precisionrecalland -scorewe can use the classification_report convenience function to compute all three at onceand print them in nice formatin[ ]from sklearn metrics import classification_report print(classification_report(y_testpred_most_frequenttarget_names=["not nine""nine"]) model evaluation and improvement |
17,277 | precision recall -score support not nine nine avg total the classification_report function produces one line per class (heretrue and falseand reports precisionrecalland -score with this class as the positive class beforewe assumed the minority "nineclass was the positive class if we change the positive class to "not nine,we can see from the output of classification_report that we obtain an -score of with the most_frequent model furthermorefor the "not nineclass we have recall of as we classified all samples as "not nine the last column next to the -score provides the support of each classwhich simply means the number of samples in this class according to the ground truth the last row in the classification report shows weighted (by the number of samples in the classaverage of the numbers for each class here are two more reportsone for the dummy classifier and one for the logistic regressionin[ ]print(classification_report(y_testpred_dummytarget_names=["not nine""nine"])out[ ]precision recall -score support not nine nine avg total in[ ]print(classification_report(y_testpred_logregtarget_names=["not nine""nine"])out[ ]precision recall -score support not nine nine avg total evaluation metrics and scoring |
17,278 | models and very good model are not as clear any more picking which class is declared the positive class has big impact on the metrics while the -score for the dummy classification is (vs for the logistic regressionon the "nineclassfor the "not nineclass it is vs which both seem like reasonable results looking at all the numbers together paints pretty accurate picturethoughand we can clearly see the superiority of the logistic regression model taking uncertainty into account the confusion matrix and the classification report provide very detailed analysis of particular set of predictions howeverthe predictions themselves already threw away lot of information that is contained in the model as we discussed in chapter most classifiers provide decision_function or predict_proba method to assess degrees of certainty about predictions making predictions can be seen as thresholding the output of decision_function or predict_proba at certain fixed point--in binary classification we use for the decision function and for predict_proba the following is an example of an imbalanced binary classification taskwith points in the negative class classified against points in the positive class the training data is shown on the left in figure - we train kernel svm model on this dataand the plots to the right of the training data illustrate the values of the decision function as heat map you can see black circle in the plot in the top centerwhich denotes the threshold of the decision_function being exactly zero points inside this circle will be classified as the positive classand points outside as the negative classin[ ]from mglearn datasets import make_blobs xy make_blobs(n_samples=( )centers= cluster_std=[ ]random_state= x_trainx_testy_trainy_test train_test_split(xyrandom_state= svc svc(gamma fit(x_trainy_trainin[ ]mglearn plots plot_decision_threshold( model evaluation and improvement |
17,279 | threshold we can use the classification_report function to evaluate precision and recall for both classesin[ ]print(classification_report(y_testsvc predict(x_test))out[ ]precision recall -score support avg total for class we get fairly small recalland precision is mixed because class is so much largerthe classifier focuses on getting class rightand not the smaller class let' assume in our application it is more important to have high recall for class as in the cancer screening example earlier this means we are willing to risk more false positives (false class in exchange for more true positives (which will increase the recallthe predictions generated by svc predict really do not fulfill this requirementbut we can adjust the predictions to focus on higher recall of class by changing the decision threshold away from by defaultpoints with deci sion_function value greater than will be classified as class we want more points to be classified as class so we need to decrease the thresholdevaluation metrics and scoring |
17,280 | y_pred_lower_threshold svc decision_function(x_test let' look at the classification report for this predictionin[ ]print(classification_report(y_testy_pred_lower_threshold)out[ ]precision recall -score support avg total as expectedthe recall of class went upand the precision went down we are now classifying larger region of space as class as illustrated in the top-right panel of figure - if you value precision over recall or the other way aroundor your data is heavily imbalancedchanging the decision threshold is the easiest way to obtain better results as the decision_function can have arbitrary rangesit is hard to provide rule of thumb regarding how to pick threshold if you do set thresholdyou need to be careful not to do so using the test set as with any other parametersetting decision threshold on the test set is likely to yield overly optimistic results use validation set or cross-validation instead picking threshold for models that implement the predict_proba method can be easieras the output of predict_proba is on fixed to scaleand models probabilities by defaultthe threshold of means that if the model is more than "surethat point is of the positive classit will be classified as such increasing the threshold means that the model needs to be more confident to make positive decision (and less confident to make negative decisionwhile working with probabilities may be more intuitive than working with arbitrary thresholdsnot all models provide realistic models of uncertainty ( decisiontree that is grown to its full depth is always sure of its decisionseven though it might often be wrongthis relates to the concept of calibrationa calibrated model is model that provides an accurate measure of its uncertainty discussing calibration in detail is beyond the scope of this bookbut you can find more details in the paper "predicting good probabilities with supervised learningby alexandru niculescu-mizil and rich caruana model evaluation and improvement |
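To make the warning above concrete, the following is a minimal sketch of how a decision threshold could be chosen on a held-out validation split instead of on the test set. It assumes svc is an already fitted binary classifier with a decision_function, that (X_val, y_val) is a validation set carved out of the training data, and that (X_test, y_test) is the untouched test set; the target recall of 0.95 is an arbitrary illustrative requirement, not a value from the text.

import numpy as np
from sklearn.metrics import recall_score

# scores on the validation set only; the test set is not used for tuning
scores_val = svc.decision_function(X_val)

# recall can only go down as the threshold is raised, so keep the largest
# candidate threshold that still reaches the target recall on validation data
target_recall = 0.95
threshold = scores_val.min() - 1  # fallback: classify everything as positive
for t in np.sort(scores_val):
    if recall_score(y_val, (scores_val > t).astype(int)) >= target_recall:
        threshold = t

print("chosen threshold: {:.2f}".format(threshold))

# only now apply the frozen threshold to the test set
y_pred_test = (svc.decision_function(X_test) > threshold).astype(int)
print("test recall: {:.2f}".format(recall_score(y_test, y_pred_test)))

Because the threshold is selected on data that was not used to evaluate the final model, the test recall remains an honest estimate of how the chosen operating point will behave on new data.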
17,281 | as we just discussedchanging the threshold that is used to make classification decision in model is way to adjust the trade-off of precision and recall for given classifier maybe you want to miss less than of positive samplesmeaning desired recall of this decision depends on the applicationand it should be driven by business goals once particular goal is set--saya particular recall or precision value for class-- threshold can be set appropriately it is always possible to set threshold to fulfill particular targetlike recall the hard part is to develop model that still has reasonable precision with this threshold--if you classify everything as positiveyou will have recallbut your model will be useless setting requirement on classifier like recall is often called setting the operating point fixing an operating point is often helpful in business settings to make performance guarantees to customers or other groups inside your organization oftenwhen developing new modelit is not entirely clear what the operating point will be for this reasonand to understand modeling problem betterit is instructive to look at all possible thresholdsor all possible trade-offs of precision and recalls at once this is possible using tool called the precision-recall curve you can find the function to compute the precision-recall curve in the sklearn metrics module it needs the ground truth labeling and predicted uncertaintiescreated via either decision_function or predict_probain[ ]from sklearn metrics import precision_recall_curve precisionrecallthresholds precision_recall_curvey_testsvc decision_function(x_test)the precision_recall_curve function returns list of precision and recall values for all possible thresholds (all values that appear in the decision functionin sorted orderso we can plot curveas seen in figure - in[ ]use more data points for smoother curve xy make_blobs(n_samples=( )centers= cluster_std=[ ]random_state= x_trainx_testy_trainy_test train_test_split(xyrandom_state= svc svc(gamma fit(x_trainy_trainprecisionrecallthresholds precision_recall_curvey_testsvc decision_function(x_test)find threshold closest to zero close_zero np argmin(np abs(thresholds)plt plot(precision[close_zero]recall[close_zero]' 'markersize= label="threshold zero"fillstyle="none" =' 'mew= plt plot(precisionrecalllabel="precision recall curve"plt xlabel("precision"plt ylabel("recall"evaluation metrics and scoring |
17,282 | each point along the curve in figure - corresponds to possible threshold of the decision_function we can seefor examplethat we can achieve recall of at precision of about the black circle marks the point that corresponds to threshold of the default threshold for decision_function this point is the trade-off that is chosen when calling the predict method the closer curve stays to the upper-right cornerthe better the classifier point at the upper right means high precision and high recall for the same threshold the curve starts at the top-left cornercorresponding to very low thresholdclassifying everything as the positive class raising the threshold moves the curve toward higher precisionbut also lower recall raising the threshold more and morewe get to situation where most of the points classified as being positive are true positivesleading to very high precision but lower recall the more the model keeps recall high as precision goes upthe better looking at this particular curve bit morewe can see that with this model it is possible to get precision of up to around with very high recall if we want much higher precisionwe have to sacrifice lot of recall in other wordson the left the curve is relatively flatmeaning that recall does not go down lot when we require increased precision for precision greater than each gain in precision costs us lot of recall different classifiers can work well in different parts of the curve--that isat different operating points let' compare the svm we trained to random forest trained on the same dataset the randomforestclassifier doesn' have decision_functiononly predict_proba the precision_recall_curve function expects as its second argument certainty measure for the positive class (class )so we pass the probability of sample being class --that isrf predict_proba(x_test)[: the default threshold for predict_proba in binary classification is so this is the point we marked on the curve (see figure - ) model evaluation and improvement |
17,283 | from sklearn ensemble import randomforestclassifier rf randomforestclassifier(n_estimators= random_state= max_features= rf fit(x_trainy_trainrandomforestclassifier has predict_probabut not decision_function precision_rfrecall_rfthresholds_rf precision_recall_curvey_testrf predict_proba(x_test)[: ]plt plot(precisionrecalllabel="svc"plt plot(precision[close_zero]recall[close_zero]' 'markersize= label="threshold zero svc"fillstyle="none" =' 'mew= plt plot(precision_rfrecall_rflabel="rf"close_default_rf np argmin(np abs(thresholds_rf )plt plot(precision_rf[close_default_rf]recall_rf[close_default_rf]'^' =' 'markersize= label="threshold rf"fillstyle="none"mew= plt xlabel("precision"plt ylabel("recall"plt legend(loc="best"figure - comparing precision recall curves of svm and random forest from the comparison plot we can see that the random forest performs better at the extremesfor very high recall or very high precision requirements around the middle (approximately precision= )the svm performs better if we only looked at the -score to compare overall performancewe would have missed these subtleties the -score only captures one point on the precision-recall curvethe one given by the default thresholdevaluation metrics and scoring |
17,284 | print(" _score of random forest{ }formatf _score(y_testrf predict(x_test)))print(" _score of svc{ }format( _score(y_testsvc predict(x_test)))out[ ] _score of random forest _score of svc comparing two precision-recall curves provides lot of detailed insightbut is fairly manual process for automatic model comparisonwe might want to summarize the information contained in the curvewithout limiting ourselves to particular threshold or operating point one particular way to summarize the precision-recall curve is by computing the integral or area under the curve of the precision-recall curvealso known as the average precision you can use the average_precision_score function to compute the average precision because we need to compute the roc curve and consider multiple thresholdsthe result of decision_function or predict_proba needs to be passed to average_precision_scorenot the result of predictin[ ]from sklearn metrics import average_precision_score ap_rf average_precision_score(y_testrf predict_proba(x_test)[: ]ap_svc average_precision_score(y_testsvc decision_function(x_test)print("average precision of random forest{ }format(ap_rf)print("average precision of svc{ }format(ap_svc)out[ ]average precision of random forest average precision of svc when averaging over all possible thresholdswe see that the random forest and svc perform similarly wellwith the random forest even slightly ahead this is quite different from the result we got from _score earlier because average precision is the area under curve that goes from to average precision always returns value between (worstand (bestthe average precision of classifier that assigns decision_function at random is the fraction of positive samples in the dataset receiver operating characteristics (rocand auc there is another tool that is commonly used to analyze the behavior of classifiers at different thresholdsthe receiver operating characteristics curveor roc curve for short similar to the precision-recall curvethe roc curve considers all possible there are some minor technical differences between the area under the precision-recall curve and average precision howeverthis explanation conveys the general idea model evaluation and improvement |
17,285 | thresholds of a given classifier, but instead of reporting precision and recall, it shows the false positive rate (FPR) against the true positive rate (TPR). Recall that the true positive rate is simply another name for recall, while the false positive rate is the fraction of false positives out of all negative samples:

FPR = FP / (FP + TN)

The ROC curve can be computed using the roc_curve function (see Figure - ):

In[ ]:
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_test, svc.decision_function(X_test))

plt.plot(fpr, tpr, label="ROC curve")
plt.xlabel("FPR")
plt.ylabel("TPR (recall)")
# find threshold closest to zero
close_zero = np.argmin(np.abs(thresholds))
plt.plot(fpr[close_zero], tpr[close_zero], 'o', markersize=10,
         label="threshold zero", fillstyle="none", c='k', mew=2)
plt.legend(loc=4)

Figure - . ROC curve for SVM

For the ROC curve, the ideal curve is close to the top left: you want a classifier that produces a high recall while keeping a low false positive rate. Compared to the default threshold of 0, the curve shows that we can achieve a significantly higher recall while only increasing the FPR slightly. The point closest to the top left might be a better operating point than the one chosen by default. Again, be aware that choosing a threshold should not be done on the test set, but on a separate validation set.
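To connect the marked default-threshold point back to the confusion matrix, here is a small sketch (assuming svc and the test split from the running example) that computes FPR and TPR at threshold 0 directly from the matrix entries; the two numbers are the coordinates of the circle marked on the curve.

from sklearn.metrics import confusion_matrix

y_pred_default = (svc.decision_function(X_test) > 0).astype(int)
# for binary labels, ravel() returns the entries in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_default).ravel()
print("FPR at threshold 0: {:.2f}".format(fp / (fp + tn)))
print("TPR (recall) at threshold 0: {:.2f}".format(tp / (tp + fn)))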
17,286 | figure - in[ ]from sklearn metrics import roc_curve fpr_rftpr_rfthresholds_rf roc_curve(y_testrf predict_proba(x_test)[: ]plt plot(fprtprlabel="roc curve svc"plt plot(fpr_rftpr_rflabel="roc curve rf"plt xlabel("fpr"plt ylabel("tpr (recall)"plt plot(fpr[close_zero]tpr[close_zero]' 'markersize= label="threshold zero svc"fillstyle="none" =' 'mew= close_default_rf np argmin(np abs(thresholds_rf )plt plot(fpr_rf[close_default_rf]tpr[close_default_rf]'^'markersize= label="threshold rf"fillstyle="none" =' 'mew= plt legend(loc= figure - comparing roc curves for svm and random forest as for the precision-recall curvewe often want to summarize the roc curve using single numberthe area under the curve (this is commonly just referred to as the aucand it is understood that the curve in question is the roc curvewe can compute the area under the roc curve using the roc_auc_score function model evaluation and improvement |
17,287 | from sklearn metrics import roc_auc_score rf_auc roc_auc_score(y_testrf predict_proba(x_test)[: ]svc_auc roc_auc_score(y_testsvc decision_function(x_test)print("auc for random forest{ }format(rf_auc)print("auc for svc{ }format(svc_auc)out[ ]auc for random forest auc for svc comparing the random forest and svm using the auc scorewe find that the random forest performs quite bit better than the svm recall that because average precision is the area under curve that goes from to average precision always returns value between (worstand (bestpredicting randomly always produces an auc of no matter how imbalanced the classes in dataset are this makes auc much better metric for imbalanced classification problems than accuracy the auc can be interpreted as evaluating the ranking of positive samples it' equivalent to the probability that randomly picked point of the positive class will have higher score according to the classifier than randomly picked point from the negative class soa perfect auc of means that all positive points have higher score than all negative points for classification problems with imbalanced classesusing auc for model selection is often much more meaningful than using accuracy let' go back to the problem we studied earlier of classifying all nines in the digits dataset versus all other digits we will classify the dataset with an svm with three different settings of the kernel bandwidthgamma (see figure - )in[ ] digits target = x_trainx_testy_trainy_test train_test_splitdigits datayrandom_state= plt figure(for gamma in [ ]svc svc(gamma=gammafit(x_trainy_trainaccuracy svc score(x_testy_testauc roc_auc_score(y_testsvc decision_function(x_test)fprtpr_ roc_curve(y_test svc decision_function(x_test)print("gamma { faccuracy { fauc { }formatgammaaccuracyauc)plt plot(fprtprlabel="gamma={ }format(gamma)plt xlabel("fpr"plt ylabel("tpr"plt xlim(- plt ylim( plt legend(loc="best"evaluation metrics and scoring |
17,288 | gamma gamma gamma accuracy accuracy accuracy auc auc auc figure - comparing roc curves of svms with different settings of gamma the accuracy of all three settings of gamma is the same this might be the same as chance performanceor it might not looking at the auc and the corresponding curvehoweverwe see clear distinction between the three models with gamma= the auc is actually at chance levelmeaning that the output of the decision_func tion is as good as random with gamma= performance drastically improves to an auc of finallywith gamma= we get perfect auc of that means that all positive points are ranked higher than all negative points according to the decision function in other wordswith the right thresholdthis model can classify the data perfectly! knowing thiswe can adjust the threshold on this model and obtain great predictions if we had only used accuracywe would never have discovered this for this reasonwe highly recommend using auc when evaluating models on imbalanced data keep in mind that auc does not make use of the default thresholdthoughso adjusting the decision threshold might be necessary to obtain useful classification results from model with high auc metrics for multiclass classification now that we have discussed evaluation of binary classification tasks in depthlet' move on to metrics to evaluate multiclass classification basicallyall metrics for multiclass classification are derived from binary classification metricsbut averaged looking at the curve for gamma= in detailyou can see small kink close to the top left that means that at least one point was not ranked correctly the auc of is consequence of rounding to the second decimal point model evaluation and improvement |
17,289 | of correctly classified examples and againwhen classes are imbalancedaccuracy is not great evaluation measure imagine three-class classification problem with of points belonging to class belonging to class band belonging to class what does being accurate mean on this datasetin generalmulticlass classification results are harder to understand than binary classification results apart from accuracycommon tools are the confusion matrix and the classification report we saw in the binary case in the previous section let' apply these two detailed evaluation methods on the task of classifying the different handwritten digits in the digits datasetin[ ]from sklearn metrics import accuracy_score x_trainx_testy_trainy_test train_test_splitdigits datadigits targetrandom_state= lr logisticregression(fit(x_trainy_trainpred lr predict(x_testprint("accuracy{ }format(accuracy_score(y_testpred))print("confusion matrix:\ {}format(confusion_matrix(y_testpred))out[ ]accuracy confusion matrix[[ ]the model has an accuracy of %which already tells us that we are doing pretty well the confusion matrix provides us with some more detail as for the binary caseeach row corresponds to true labeland each column corresponds to predicted label you can find visually more appealing plot in figure - in[ ]scores_image mglearn tools heatmapconfusion_matrix(y_testpred)xlabel='predicted label'ylabel='true label'xticklabels=digits target_namesyticklabels=digits target_namescmap=plt cm gray_rfmt="% "plt title("confusion matrix"plt gca(invert_yaxis(evaluation metrics and scoring |
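Before reading individual entries off the matrix, it can help to summarize it per class. The following sketch assumes y_test and pred from the code above; dividing the diagonal by the row sums gives per-class recall and dividing it by the column sums gives per-class precision, the same per-class numbers that classification_report prints further below.

import numpy as np
from sklearn.metrics import confusion_matrix

conf = confusion_matrix(y_test, pred)
# diagonal: correctly classified samples per class
# row sums: true class sizes, column sums: predicted class sizes
recall_per_class = np.diag(conf) / conf.sum(axis=1)
precision_per_class = np.diag(conf) / conf.sum(axis=0)
for digit in range(10):
    print("digit {}: recall {:.2f}, precision {:.2f}".format(
        digit, recall_per_class[digit], precision_per_class[digit]))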
17,290 | for the first classthe digit there are samples in the classand all of these samples were classified as class (there are no false negatives for class we can see that because all other entries in the first row of the confusion matrix are we can also see that no other digits were mistakenly classified as because all other entries in the first column of the confusion matrix are (there are no false positives for class some digits were confused with othersthough--for examplethe digit (third row)three of which were classified as the digit (fourth columnthere was also one digit that was classified as (third columnfourth rowand one digit that was classified as (thrid columnfourth rowwith the classification_report functionwe can compute the precisionrecalland -score for each classin[ ]print(classification_report(y_testpred)out[ ] precision recall -score support avg total model evaluation and improvement |
17,291 | sions with this class for class on the other handprecision is because no other class was mistakenly classified as while for class there are no false negativesso the recall is we can also see that the model has particular difficulties with classes and the most commonly used metric for imbalanced datasets in the multiclass setting is the multiclass version of the -score the idea behind the multiclass -score is to compute one binary -score per classwith that class being the positive class and the other classes making up the negative classes thenthese per-class -scores are averaged using one of the following strategies"macroaveraging computes the unweighted per-class -scores this gives equal weight to all classesno matter what their size is "weightedaveraging computes the mean of the per-class -scoresweighted by their support this is what is reported in the classification report "microaveraging computes the total number of false positivesfalse negativesand true positives over all classesand then computes precisionrecalland fscore using these counts if you care about each sample equally muchit is recommended to use the "microaverage -scoreif you care about each class equally muchit is recommended to use the "macroaverage -scorein[ ]print("micro average score{ }format ( _score(y_testpredaverage="micro"))print("macro average score{ }format ( _score(y_testpredaverage="macro"))out[ ]micro average score macro average score regression metrics evaluation for regression can be done in similar detail as we did for classification-for exampleby analyzing overpredicting the target versus underpredicting the target howeverin most applications we've seenusing the default used in the score method of all regressors is enough sometimes business decisions are made on the basis of mean squared error or mean absolute errorwhich might give incentive to tune models using these metrics in generalthoughwe have found to be more intuitive metric to evaluate regression models evaluation metrics and scoring |
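As a brief illustration of these regression metrics, here is a sketch that scores a ridge regression on the diabetes dataset; the choice of dataset and model is arbitrary and only serves to show the three metric functions side by side.

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ridge = Ridge().fit(X_train, y_train)
pred = ridge.predict(X_test)

# ridge.score(X_test, y_test) would report the same R^2 as r2_score
print("R^2: {:.2f}".format(r2_score(y_test, pred)))
print("mean squared error: {:.2f}".format(mean_squared_error(y_test, pred)))
print("mean absolute error: {:.2f}".format(mean_absolute_error(y_test, pred)))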
17,292 | we have discussed many evaluation methods in detailand how to apply them given the ground truth and model howeverwe often want to use metrics like auc in model selection using gridsearchcv or cross_val_score luckily scikit-learn provides very simple way to achieve thisvia the scoring argument that can be used in both gridsearchcv and cross_val_score you can simply provide string describing the evaluation metric you want to use sayfor examplewe want to evaluate the svm classifier on the "nine vs resttask on the digits datasetusing the auc score changing the score from the default (accuracyto auc can be done by providing "roc_aucas the scoring parameterin[ ]default scoring for classification is accuracy print("default scoring{}formatcross_val_score(svc()digits datadigits target = ))providing scoring="accuracydoesn' change the results explicit_accuracy cross_val_score(svc()digits datadigits target = scoring="accuracy"print("explicit accuracy scoring{}format(explicit_accuracy)roc_auc cross_val_score(svc()digits datadigits target = scoring="roc_auc"print("auc scoring{}format(roc_auc)out[ ]default scoring explicit accuracy scoring auc scoring similarlywe can change the metric used to pick the best parameters in grid searchcvin[ ]x_trainx_testy_trainy_test train_test_splitdigits datadigits target = random_state= we provide somewhat bad grid to illustrate the pointparam_grid {'gamma'[ ]using the default scoring of accuracygrid gridsearchcv(svc()param_grid=param_gridgrid fit(x_trainy_trainprint("grid-search with accuracy"print("best parameters:"grid best_params_print("best cross-validation score (accuracy)){ }format(grid best_score_)print("test set auc{ }formatroc_auc_score(y_testgrid decision_function(x_test)))print("test set accuracy{ }format(grid score(x_testy_test)) model evaluation and improvement |
17,293 | grid-search with accuracy best parameters{'gamma' best cross-validation score (accuracy)) test set auc test set accuracy in[ ]using auc scoring insteadgrid gridsearchcv(svc()param_grid=param_gridscoring="roc_auc"grid fit(x_trainy_trainprint("\ngrid-search with auc"print("best parameters:"grid best_params_print("best cross-validation score (auc){ }format(grid best_score_)print("test set auc{ }formatroc_auc_score(y_testgrid decision_function(x_test)))print("test set accuracy{ }format(grid score(x_testy_test))out[ ]grid-search with auc best parameters{'gamma' best cross-validation score (auc) test set auc test set accuracy when using accuracythe parameter gamma= is selectedwhile gamma= is selected when using auc the cross-validation accuracy is consistent with the test set accuracy in both cases howeverusing auc found better parameter setting in terms of auc and even in terms of accuracy the most important values for the scoring parameter for classification are accuracy (the default)roc_auc for the area under the roc curveaverage_precision for the area under the precision-recall curvef _macrof _microand _weighted for the binary -score and the different weighted variants for regressionthe most commonly used values are for the scoremean_squared_error for mean squared errorand mean_absolute_error for mean absolute error you can find full list of supported arguments in the documentation or by looking at the scorer dictionary defined in the metrics scorer module finding higher-accuracy solution using auc is likely consequence of accuracy being bad measure of model performance on imbalanced data evaluation metrics and scoring |
17,294 | from sklearn metrics scorer import scorers print("available scorers:\ {}format(sorted(scorers keys()))out[ ]available scorers['accuracy''adjusted_rand_score''average_precision'' '' _macro'' _micro'' _samples'' _weighted''log_loss''mean_absolute_error''mean_squared_error''median_absolute_error''precision''precision_macro''precision_micro''precision_samples''precision_weighted'' ''recall''recall_macro''recall_micro''recall_samples''recall_weighted''roc_auc'summary and outlook in this we discussed cross-validationgrid searchand evaluation metricsthe cornerstones of evaluating and improving machine learning algorithms the tools described in this together with the algorithms described in and are the bread and butter of every machine learning practitioner there are two particular points that we made in this that warrant repeatingbecause they are often overlooked by new practitioners the first has to do with cross-validation cross-validation or the use of test set allow us to evaluate machine learning model as it will perform in the future howeverif we use the test set or cross-validation to select model or select model parameterswe "use upthe test dataand using the same data to evaluate how well our model will do in the future will lead to overly optimistic estimates we therefore need to resort to split into training data for model buildingvalidation data for model and parameter selectionand test data for model evaluation instead of simple splitwe can replace each of these splits with cross-validation the most commonly used form (as described earlieris training/test split for evaluationand using cross-validation on the training set for model and parameter selection the second point has to do with the importance of the evaluation metric or scoring function used for model selection and model evaluation the theory of how to make business decisions from the predictions of machine learning model is somewhat beyond the scope of this book howeverit is rarely the case that the end goal of machine learning task is building model with high accuracy make sure that the metric you choose to evaluate and select model for is good stand-in for what the model will actually be used for in realityclassification problems rarely have balanced classesand often false positives and false negatives have very different consequences we highly recommend foster provost and tom fawcett' book data science for business ( 'reillyfor more information on this topic model evaluation and improvement |
17,295 | ric accordingly the model evaluation and selection techniques we have described so far are the most important tools in data scientist' toolbox grid search and cross-validation as we've described them in this can only be applied to single supervised model we have seen beforehoweverthat many models require preprocessingand that in some applicationslike the face recognition example in extracting different representation of the data can be useful in the next we will introduce the pipeline classwhich allows us to use grid search and cross-validation on these complex chains of algorithms summary and outlook |
17,296 | algorithm chains and pipelines for many machine learning algorithmsthe particular representation of the data that you provide is very importantas we discussed in this starts with scaling the data and combining features by hand and goes all the way to learning features using unsupervised machine learningas we saw in consequentlymost machine learning applications require not only the application of single algorithmbut the chaining together of many different processing steps and machine learning models in this we will cover how to use the pipeline class to simplify the process of building chains of transformations and models in particularwe will see how we can combine pipeline and gridsearchcv to search over parameters for all processing steps at once as an example of the importance of chaining modelswe noticed that we can greatly improve the performance of kernel svm on the cancer dataset by using the min maxscaler for preprocessing here' code for splitting the datacomputing the minimum and maximumscaling the dataand training the svmin[ ]from sklearn svm import svc from sklearn datasets import load_breast_cancer from sklearn model_selection import train_test_split from sklearn preprocessing import minmaxscaler load and split the data cancer load_breast_cancer(x_trainx_testy_trainy_test train_test_splitcancer datacancer targetrandom_state= compute minimum and maximum on the training data scaler minmaxscaler(fit(x_train |
17,297 | rescale the training data x_train_scaled scaler transform(x_trainsvm svc(learn an svm on the scaled training data svm fit(x_train_scaledy_trainscale the test data and score the scaled data x_test_scaled scaler transform(x_testprint("test score{ }format(svm score(x_test_scaledy_test))out[ ]test score parameter selection with preprocessing now let' say we want to find better parameters for svc using gridsearchcvas discussed in how should we go about doing thisa naive approach might look like thisin[ ]from sklearn model_selection import gridsearchcv for illustration purposes onlydon' use this codeparam_grid {' '[ ]'gamma'[ ]grid gridsearchcv(svc()param_grid=param_gridcv= grid fit(x_train_scaledy_trainprint("best cross-validation accuracy{ }format(grid best_score_)print("best set score{ }format(grid score(x_test_scaledy_test))print("best parameters"grid best_params_out[ ]best cross-validation accuracy best set score best parameters{'gamma' ' ' herewe ran the grid search over the parameters of svc using the scaled data howeverthere is subtle catch in what we just did when scaling the datawe used all the data in the training set to find out how to train it we then use the scaled training data to run our grid search using cross-validation for each split in the cross-validationsome part of the original training set will be declared the training part of the splitand some the test part of the split the test part is used to measure what new data will look like to model trained on the training part howeverwe already used the information contained in the test part of the splitwhen scaling the data remember that the test part in each split in the cross-validation is part of the training setand we used the information from the entire training set to find the right scaling of the data algorithm chains and pipelines |
17,298 | new data (sayin form of our test set)this data will not have been used to scale the training dataand it might have different minimum and maximum than the training data the following example (figure - shows how the data processing during cross-validation and the final evaluation differin[ ]mglearn plots plot_improper_processing(figure - data usage when preprocessing outside the cross-validation loop sothe splits in the cross-validation no longer correctly mirror how new data will look to the modeling process we already leaked information from these parts of the data into our modeling process this will lead to overly optimistic results during cross-validationand possibly the selection of suboptimal parameters to get around this problemthe splitting of the dataset during cross-validation should be done before doing any preprocessing any process that extracts knowledge from the dataset should only ever be applied to the training portion of the datasetso any cross-validation should be the "outermost loopin your processing to achieve this in scikit-learn with the cross_val_score function and the grid searchcv functionwe can use the pipeline class the pipeline class is class that allows "gluingtogether multiple processing steps into single scikit-learn estimaparameter selection with preprocessing |
17,299 | like any other model in scikit-learn the most common use case of the pipeline class is in chaining preprocessing steps (like scaling of the datatogether with supervised model like classifier building pipelines let' look at how we can use the pipeline class to express the workflow for training an svm after scaling the data with minmaxscaler (for now without the grid searchfirstwe build pipeline object by providing it with list of steps each step is tuple containing name (any string of your choosing and an instance of an estimatorin[ ]from sklearn pipeline import pipeline pipe pipeline([("scaler"minmaxscaler())("svm"svc())]herewe created two stepsthe firstcalled "scaler"is an instance of minmaxscalerand the secondcalled "svm"is an instance of svc nowwe can fit the pipelinelike any other scikit-learn estimatorin[ ]pipe fit(x_trainy_trainherepipe fit first calls fit on the first step (the scaler)then transforms the training data using the scalerand finally fits the svm with the scaled data to evaluate on the test datawe simply call pipe scorein[ ]print("test score{ }format(pipe score(x_testy_test))out[ ]test score calling the score method on the pipeline first transforms the test data using the scalerand then calls the score method on the svm using the scaled test data as you can seethe result is identical to the one we got from the code at the beginning of the when doing the transformations by hand using the pipelinewe reduced the code needed for our "preprocessing classificationprocess the main benefit of using the pipelinehoweveris that we can now use this single estimator in cross_val_score or gridsearchcv with one exceptionthe name can' contain double underscore__ algorithm chains and pipelines |
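As a brief preview of where this is heading, here is a sketch of how the pipeline defined above might be plugged into cross_val_score and GridSearchCV. It assumes pipe, cancer, and the X_train/X_test/y_train/y_test split from the cancer example earlier in this chapter; the grid values are illustrative. Parameters of a pipeline step are addressed as the step name, a double underscore, and the parameter name, which is why step names may not contain a double underscore.

from sklearn.model_selection import cross_val_score, GridSearchCV

# the pipeline behaves like a single estimator, so the scaler is refit
# on each training fold only, avoiding the leakage discussed above
scores = cross_val_score(pipe, X_train, y_train, cv=5)
print("cross-validation accuracy: {:.2f}".format(scores.mean()))

# step parameters are addressed as "<step name>__<parameter name>"
param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100],
              'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
print("best cross-validation accuracy: {:.2f}".format(grid.best_score_))
print("test set score: {:.2f}".format(grid.score(X_test, y_test)))
print("best parameters: {}".format(grid.best_params_))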