has a very active user community. It contains a number of state-of-the-art machine learning algorithms, as well as comprehensive documentation about each algorithm. scikit-learn is a very popular tool, and the most prominent Python library for machine learning. It is widely used in industry and academia, and a wealth of tutorials and code snippets are available online. scikit-learn works well with a number of other scientific Python tools, which we will discuss later in this chapter.

While reading this, we recommend that you also browse the scikit-learn user guide and API documentation for additional details on, and many more options for, each algorithm. The online documentation is very thorough, and this book will provide you with all the prerequisites in machine learning to understand it in detail.

Installing scikit-learn

scikit-learn depends on two other Python packages, NumPy and SciPy. For plotting and interactive development, you should also install matplotlib, IPython, and the Jupyter Notebook. We recommend using one of the following prepackaged Python distributions, which will provide the necessary packages:

Anaconda
A Python distribution made for large-scale data processing, predictive analytics, and scientific computing. Anaconda comes with NumPy, SciPy, matplotlib, pandas, IPython, Jupyter Notebook, and scikit-learn. Available on Mac OS, Windows, and Linux, it is a very convenient solution and is the one we suggest for people without an existing installation of the scientific Python packages. Anaconda now also includes the commercial Intel MKL library for free. Using MKL (which is done automatically when Anaconda is installed) can give significant speed improvements for many algorithms in scikit-learn.

Enthought Canopy
Another Python distribution for scientific computing. This comes with NumPy, SciPy, matplotlib, pandas, and IPython, but the free version does not come with scikit-learn. If you are part of an academic, degree-granting institution, you can request an academic license and get free access to the paid subscription version of Enthought Canopy. Enthought Canopy is available for Python 2.7.x, and works on Mac OS, Windows, and Linux.

Python(x,y)
A free Python distribution for scientific computing, specifically for Windows. Python(x,y) comes with NumPy, SciPy, matplotlib, pandas, IPython, and scikit-learn.
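If you go the Anaconda route, one way to get all of these libraries in one step is the conda installer; a minimal sketch (the package list is an assumption matching the libraries used in this book):

$ conda install numpy scipy matplotlib pandas ipython jupyter scikit-learn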
If you already have a Python installation set up, you can use pip to install all of these packages:

$ pip install numpy scipy matplotlib ipython scikit-learn pandas

Essential Libraries and Tools

Understanding what scikit-learn is and how to use it is important, but there are a few other libraries that will enhance your experience. scikit-learn is built on top of the NumPy and SciPy scientific Python libraries. In addition to NumPy and SciPy, we will be using pandas and matplotlib. We will also introduce the Jupyter Notebook, which is a browser-based interactive programming environment. Briefly, here is what you should know about these tools in order to get the most out of scikit-learn.

Jupyter Notebook

The Jupyter Notebook is an interactive environment for running code in the browser. It is a great tool for exploratory data analysis and is widely used by data scientists. While the Jupyter Notebook supports many programming languages, we only need the Python support. The Jupyter Notebook makes it easy to incorporate code, text, and images, and all of this book was in fact written as a Jupyter Notebook. All of the code examples we include can be downloaded from GitHub.

NumPy

NumPy is one of the fundamental packages for scientific computing in Python. It contains functionality for multidimensional arrays, high-level mathematical functions such as linear algebra operations and the Fourier transform, and pseudorandom number generators.

In scikit-learn, the NumPy array is the fundamental data structure. scikit-learn takes in data in the form of NumPy arrays. Any data you're using will have to be converted to a NumPy array. The core functionality of NumPy is the ndarray class, a multidimensional (n-dimensional) array. All elements of the array must be of the same type. A NumPy array looks like this:

In[ ]:
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6]])
print("x:\n{}".format(x))

If you are unfamiliar with NumPy or matplotlib, we recommend reading the first chapter of the SciPy Lecture Notes.
Out[ ]:
x:
[[1 2 3]
 [4 5 6]]

We will be using NumPy a lot in this book, and we will refer to objects of the NumPy ndarray class as "NumPy arrays" or just "arrays."

SciPy

SciPy is a collection of functions for scientific computing in Python. It provides, among other functionality, advanced linear algebra routines, mathematical function optimization, signal processing, special mathematical functions, and statistical distributions. scikit-learn draws from SciPy's collection of functions for implementing its algorithms. The most important part of SciPy for us is scipy.sparse: this provides sparse matrices, which are another representation that is used for data in scikit-learn. Sparse matrices are used whenever we want to store a 2D array that contains mostly zeros:

In[ ]:
from scipy import sparse

# Create a 2D NumPy array with a diagonal of ones, and zeros everywhere else
eye = np.eye(4)
print("NumPy array:\n{}".format(eye))

Out[ ]:
NumPy array:
[[ 1.  0.  0.  0.]
 [ 0.  1.  0.  0.]
 [ 0.  0.  1.  0.]
 [ 0.  0.  0.  1.]]

In[ ]:
# Convert the NumPy array to a SciPy sparse matrix in CSR format
# Only the nonzero entries are stored
sparse_matrix = sparse.csr_matrix(eye)
print("\nSciPy sparse CSR matrix:\n{}".format(sparse_matrix))

Out[ ]:
SciPy sparse CSR matrix:
  (0, 0)    1.0
  (1, 1)    1.0
  (2, 2)    1.0
  (3, 3)    1.0
Usually it is not possible to create dense representations of sparse data (as they would not fit into memory), so we need to create sparse representations directly. Here is a way to create the same sparse matrix as before, using the COO format:

In[ ]:
data = np.ones(4)
row_indices = np.arange(4)
col_indices = np.arange(4)
eye_coo = sparse.coo_matrix((data, (row_indices, col_indices)))
print("COO representation:\n{}".format(eye_coo))

Out[ ]:
COO representation:
  (0, 0)    1.0
  (1, 1)    1.0
  (2, 2)    1.0
  (3, 3)    1.0

More details on SciPy sparse matrices can be found in the SciPy Lecture Notes.

matplotlib

matplotlib is the primary scientific plotting library in Python. It provides functions for making publication-quality visualizations such as line charts, histograms, scatter plots, and so on. Visualizing your data and different aspects of your analysis can give you important insights, and we will be using matplotlib for all our visualizations.

When working inside the Jupyter Notebook, you can show figures directly in the browser by using the %matplotlib notebook and %matplotlib inline commands. We recommend using %matplotlib notebook, which provides an interactive environment (though we are using %matplotlib inline to produce this book). For example, this code produces the plot in Figure 1-1:

In[ ]:
%matplotlib inline
import matplotlib.pyplot as plt

# Generate a sequence of numbers from -10 to 10 with 100 steps in between
x = np.linspace(-10, 10, 100)
# Create a second array using sine
y = np.sin(x)
# The plot function makes a line chart of one array against another
plt.plot(x, y, marker="x")
pandas

pandas is a Python library for data wrangling and analysis. It is built around a data structure called the DataFrame that is modeled after the R DataFrame. Simply put, a pandas DataFrame is a table, similar to an Excel spreadsheet. pandas provides a great range of methods to modify and operate on this table; in particular, it allows SQL-like queries and joins of tables. In contrast to NumPy, which requires that all entries in an array be of the same type, pandas allows each column to have a separate type (for example, integers, dates, floating-point numbers, and strings). Another valuable tool provided by pandas is its ability to ingest from a great variety of file formats and databases, like SQL, Excel files, and comma-separated values (CSV) files. Going into detail about the functionality of pandas is out of the scope of this book. However, Python for Data Analysis by Wes McKinney (O'Reilly) provides a great guide. Here is a small example of creating a DataFrame using a dictionary:

In[ ]:
import pandas as pd

# create a simple dataset of people
data = {'Name': ["John", "Anna", "Peter", "Linda"],
        'Location': ["New York", "Paris", "Berlin", "London"],
        'Age': [24, 13, 53, 33]
       }

data_pandas = pd.DataFrame(data)
# IPython.display allows "pretty printing" of dataframes
# in the Jupyter notebook
display(data_pandas)
Out[ ]:
   Age  Location   Name
0   24  New York   John
1   13     Paris   Anna
2   53    Berlin  Peter
3   33    London  Linda

There are several possible ways to query this table. For example:

In[ ]:
# Select all rows that have an age column greater than 30
display(data_pandas[data_pandas.Age > 30])

This produces the following result:

   Age Location   Name
2   53   Berlin  Peter
3   33   London  Linda

mglearn

This book comes with accompanying code, which you can find on GitHub. The accompanying code includes not only all the examples shown in this book, but also the mglearn library. This is a library of utility functions we wrote for this book, so that we don't clutter up our code listings with details of plotting and data loading. If you're interested, you can look up all the functions in the repository, but the details of the mglearn module are not really important to the material in this book. If you see a call to mglearn in the code, it is usually a way to make a pretty picture quickly, or to get our hands on some interesting data.

Throughout the book we make ample use of NumPy, matplotlib and pandas. All the code will assume the following imports:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import mglearn

We also assume that you will run the code in a Jupyter Notebook with the %matplotlib notebook or %matplotlib inline magic enabled to show plots. If you are not using the notebook or these magic commands, you will have to call plt.show to actually show any of the figures.
Python 2 Versus Python 3

There are two major versions of Python that are widely used at the moment: Python 2 (more precisely, 2.7) and Python 3 (with the latest release being 3.5 at the time of writing). This sometimes leads to some confusion. Python 2 is no longer actively developed, but because Python 3 contains major changes, Python 2 code usually does not run on Python 3. If you are new to Python, or are starting a new project from scratch, we highly recommend using the latest version of Python 3 without changes. If you have a large codebase that you rely on that is written for Python 2, you are excused from upgrading for now. However, you should try to migrate to Python 3 as soon as possible. When writing any new code, it is for the most part quite easy to write code that runs under Python 2 and Python 3. (The six package can be very handy for that.) If you don't have to interface with legacy software, you should definitely use Python 3. All the code in this book is written in a way that works for both versions. However, the exact output might differ slightly under Python 2.

Versions Used in this Book

We are using the following versions of the previously mentioned libraries in this book:

In[ ]:
import sys
print("Python version: {}".format(sys.version))

import pandas as pd
print("pandas version: {}".format(pd.__version__))

import matplotlib
print("matplotlib version: {}".format(matplotlib.__version__))

import numpy as np
print("NumPy version: {}".format(np.__version__))

import scipy as sp
print("SciPy version: {}".format(sp.__version__))

import IPython
print("IPython version: {}".format(IPython.__version__))

import sklearn
print("scikit-learn version: {}".format(sklearn.__version__))
Out[ ]:
Python version: ... |Anaconda ... (...-bit)| (default, Jul ...)
[GCC ... (Red Hat ...)]
pandas version: ...
matplotlib version: ...
NumPy version: ...
SciPy version: ...
IPython version: ...
scikit-learn version: ...

While it is not important to match these versions exactly, you should have a version of scikit-learn that is at least as recent as the one we used.

Now that we have everything set up, let's dive into our first application of machine learning.

This book assumes that you have version 0.18 or later of scikit-learn. The model_selection module was added in 0.18, and if you use an earlier version of scikit-learn, you will need to adjust the imports from this module.

A First Application: Classifying Iris Species

In this section, we will go through a simple machine learning application and create our first model. In the process, we will introduce some core concepts and terms.

Let's assume that a hobby botanist is interested in distinguishing the species of some iris flowers that she has found. She has collected some measurements associated with each iris: the length and width of the petals and the length and width of the sepals, all measured in centimeters (see Figure 1-2).

She also has the measurements of some irises that have been previously identified by an expert botanist as belonging to the species setosa, versicolor, or virginica. For these measurements, she can be certain of which species each iris belongs to. Let's assume that these are the only species our hobby botanist will encounter in the wild.

Our goal is to build a machine learning model that can learn from the measurements of these irises whose species is known, so that we can predict the species for a new iris.
Because we have measurements for which we know the correct species of iris, this is a supervised learning problem. In this problem, we want to predict one of several options (the species of iris). This is an example of a classification problem. The possible outputs (different species of irises) are called classes. Every iris in the dataset belongs to one of three classes, so this problem is a three-class classification problem.

The desired output for a single data point (an iris) is the species of this flower. For a particular data point, the species it belongs to is called its label.

Meet the Data

The data we will use for this example is the Iris dataset, a classical dataset in machine learning and statistics. It is included in scikit-learn in the datasets module. We can load it by calling the load_iris function:

In[ ]:
from sklearn.datasets import load_iris
iris_dataset = load_iris()

The iris object that is returned by load_iris is a Bunch object, which is very similar to a dictionary. It contains keys and values:
print("keys of iris_dataset\ {}format(iris_dataset keys())out[ ]keys of iris_datasetdict_keys(['target_names''feature_names''descr''data''target']the value of the key descr is short description of the dataset we show the beginning of the description here (feel free to look up the rest yourself)in[ ]print(iris_dataset['descr'][: "\ "out[ ]iris plants database ===================notes ---data set characteristics:number of instances ( in each of three classes:number of attributes numericpredictive att ---the value of the key target_names is an array of stringscontaining the species of flower that we want to predictin[ ]print("target names{}format(iris_dataset['target_names'])out[ ]target names['setosa'versicolor'virginica'the value of feature_names is list of stringsgiving the description of each featurein[ ]print("feature names\ {}format(iris_dataset['feature_names'])out[ ]feature names['sepal length (cm)''sepal width (cm)''petal length (cm)''petal width (cm)'the data itself is contained in the target and data fields data contains the numeric measurements of sepal lengthsepal widthpetal lengthand petal width in numpy arraya first applicationclassifying iris species
17,010
print("type of data{}format(type(iris_dataset['data']))out[ ]type of datathe rows in the data array correspond to flowerswhile the columns represent the four measurements that were taken for each flowerin[ ]print("shape of data{}format(iris_dataset['data'shape)out[ ]shape of data( we see that the array contains measurements for different flowers remember that the individual items are called samples in machine learningand their properties are called features the shape of the data array is the number of samples multiplied by the number of features this is convention in scikit-learnand your data will always be assumed to be in this shape here are the feature values for the first five samplesin[ ]print("first five columns of data:\ {}format(iris_dataset['data'][: ])out[ ]first five columns of data[ ]from this datawe can see that all of the first five flowers have petal width of cm and that the first flower has the longest sepalat cm the target array contains the species of each of the flowers that were measuredalso as numpy arrayin[ ]print("type of target{}format(type(iris_dataset['target']))out[ ]type of targettarget is one-dimensional arraywith one entry per flower introduction
17,011
print("shape of target{}format(iris_dataset['target'shape)out[ ]shape of target( ,the species are encoded as integers from to in[ ]print("target:\ {}format(iris_dataset['target'])out[ ]target[ the meanings of the numbers are given by the iris['target_names'array means setosa means versicolorand means virginica measuring successtraining and testing data we want to build machine learning model from this data that can predict the species of iris for new set of measurements but before we can apply our model to new measurementswe need to know whether it actually works--that iswhether we should trust its predictions unfortunatelywe cannot use the data we used to build the model to evaluate it this is because our model can always simply remember the whole training setand will therefore always predict the correct label for any point in the training set this "rememberingdoes not indicate to us whether our model will generalize well (in other wordswhether it will also perform well on new datato assess the model' performancewe show it new data (data that it hasn' seen beforefor which we have labels this is usually done by splitting the labeled data we have collected (hereour flower measurementsinto two parts one part of the data is used to build our machine learning modeland is called the training data or training set the rest of the data will be used to assess how well the model worksthis is called the test datatest setor hold-out set scikit-learn contains function that shuffles the dataset and splits it for youthe train_test_split function this function extracts of the rows in the data as the training settogether with the corresponding labels for this data the remaining of the datatogether with the remaining labelsis declared as the test set deciding first applicationclassifying iris species
17,012
In scikit-learn, data is usually denoted with a capital X, while labels are denoted by a lowercase y. This is inspired by the standard formulation f(x)=y in mathematics, where x is the input to a function and y is the output. Following more conventions from mathematics, we use a capital X because the data is a two-dimensional array (a matrix) and a lowercase y because the target is a one-dimensional array (a vector).

Let's call train_test_split on our data and assign the outputs using this nomenclature:

In[ ]:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    iris_dataset['data'], iris_dataset['target'], random_state=0)

Before making the split, the train_test_split function shuffles the dataset using a pseudorandom number generator. If we just took the last 25% of the data as a test set, all the data points would have the label 2, as the data points are sorted by the label (see the output for iris_dataset['target'] shown earlier). Using a test set containing only one of the three classes would not tell us much about how well our model generalizes, so we shuffle our data to make sure the test data contains data from all classes.

To make sure that we will get the same output if we run the same function several times, we provide the pseudorandom number generator with a fixed seed using the random_state parameter. This will make the outcome deterministic, so this line will always have the same outcome. We will always fix the random_state in this way when using randomized procedures in this book.

The output of the train_test_split function is X_train, X_test, y_train, and y_test, which are all NumPy arrays. X_train contains 75% of the rows of the dataset, and X_test contains the remaining 25%:

In[ ]:
print("X_train shape: {}".format(X_train.shape))
print("y_train shape: {}".format(y_train.shape))

Out[ ]:
X_train shape: (112, 4)
y_train shape: (112,)
print("x_test shape{}format(x_test shape)print("y_test shape{}format(y_test shape)out[ ]x_test shape( y_test shape( ,first things firstlook at your data before building machine learning model it is often good idea to inspect the datato see if the task is easily solvable without machine learningor if the desired information might not be contained in the data additionallyinspecting your data is good way to find abnormalities and peculiarities maybe some of your irises were measured using inches and not centimetersfor example in the real worldinconsistencies in the data and unexpected measurements are very common one of the best ways to inspect data is to visualize it one way to do this is by using scatter plot scatter plot of the data puts one feature along the -axis and another along the -axisand draws dot for each data point unfortunatelycomputer screens have only two dimensionswhich allows us to plot only two (or maybe threefeatures at time it is difficult to plot datasets with more than three features this way one way around this problem is to do pair plotwhich looks at all possible pairs of features if you have small number of featuressuch as the four we have herethis is quite reasonable you should keep in mindhoweverthat pair plot does not show the interaction of all of features at onceso some interesting aspects of the data may not be revealed when visualizing it this way figure - is pair plot of the features in the training set the data points are colored according to the species the iris belongs to to create the plotwe first convert the numpy array into pandas dataframe pandas has function to create pair plots called scatter_matrix the diagonal of this matrix is filled with histograms of each featurein[ ]create dataframe from data in x_train label the columns using the strings in iris_dataset feature_names iris_dataframe pd dataframe(x_traincolumns=iris_dataset feature_namescreate scatter matrix from the dataframecolor by y_train grr pd scatter_matrix(iris_dataframec=y_trainfigsize=( )marker=' 'hist_kwds={'bins' } = alpha cmap=mglearn cm first applicationclassifying iris species
17,014
From the plots, we can see that the three classes seem to be relatively well separated using the sepal and petal measurements. This means that a machine learning model will likely be able to learn to separate them.

Building Your First Model: k-Nearest Neighbors

Now we can start building the actual machine learning model. There are many classification algorithms in scikit-learn that we could use. Here we will use a k-nearest neighbors classifier, which is easy to understand. Building this model only consists of storing the training set. To make a prediction for a new data point, the algorithm finds the point in the training set that is closest to the new point. Then it assigns the label of this training point to the new data point.
The k in k-nearest neighbors signifies that instead of using only the closest neighbor to the new data point, we can consider any fixed number k of neighbors in the training set (for example, the closest three or five neighbors). Then, we can make a prediction using the majority class among these neighbors. We will go into more detail about this in Chapter 2; for now, we'll use only a single neighbor.

All machine learning models in scikit-learn are implemented in their own classes, which are called Estimator classes. The k-nearest neighbors classification algorithm is implemented in the KNeighborsClassifier class in the neighbors module. Before we can use the model, we need to instantiate the class into an object. This is when we will set any parameters of the model. The most important parameter of KNeighborsClassifier is the number of neighbors, which we will set to 1:

In[ ]:
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)

The knn object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data. In the case of KNeighborsClassifier, it will just store the training set.

To build the model on the training set, we call the fit method of the knn object, which takes as arguments the NumPy array X_train containing the training data and the NumPy array y_train of the corresponding training labels:

In[ ]:
knn.fit(X_train, y_train)

Out[ ]:
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
                     metric_params=None, n_jobs=1, n_neighbors=1, p=2,
                     weights='uniform')

The fit method returns the knn object itself (and modifies it in place), so we get a string representation of our classifier. The representation shows us which parameters were used in creating the model. Nearly all of them are the default values, but you can also find n_neighbors=1, which is the parameter that we passed. Most models in scikit-learn have many parameters, but the majority of them are either speed optimizations or for very special use cases. You don't have to worry about the other parameters shown in this representation. Printing a scikit-learn model can yield very long strings, but don't be intimidated by these. We will cover all the important parameters in Chapter 2. In the remainder of this book, we will not show the output of fit because it doesn't contain any new information.
Making Predictions

We can now make predictions using this model on new data for which we might not know the correct labels. Imagine we found an iris in the wild with a sepal length of 5 cm, a sepal width of 2.9 cm, a petal length of 1 cm, and a petal width of 0.2 cm. What species of iris would this be? We can put this data into a NumPy array, again by calculating the shape, that is, the number of samples (1) multiplied by the number of features (4):

In[ ]:
X_new = np.array([[5, 2.9, 1, 0.2]])
print("X_new.shape: {}".format(X_new.shape))

Out[ ]:
X_new.shape: (1, 4)

Note that we made the measurements of this single flower into a row in a two-dimensional NumPy array, as scikit-learn always expects two-dimensional arrays for the data.

To make a prediction, we call the predict method of the knn object:

In[ ]:
prediction = knn.predict(X_new)
print("Prediction: {}".format(prediction))
print("Predicted target name: {}".format(
    iris_dataset['target_names'][prediction]))

Out[ ]:
Prediction: [0]
Predicted target name: ['setosa']

Our model predicts that this new iris belongs to the class 0, meaning its species is setosa. But how do we know whether we can trust our model? We don't know the correct species of this sample, which is the whole point of building the model!

Evaluating the Model

This is where the test set that we created earlier comes in. This data was not used to build the model, but we do know what the correct species is for each iris in the test set. Therefore, we can make a prediction for each iris in the test data and compare it against its label (the known species). We can measure how well the model works by computing the accuracy, which is the fraction of flowers for which the right species was predicted:
In[ ]:
y_pred = knn.predict(X_test)
print("Test set predictions:\n{}".format(y_pred))

Out[ ]:
Test set predictions:
[...]

In[ ]:
print("Test set score: {:.2f}".format(np.mean(y_pred == y_test)))

Out[ ]:
Test set score: 0.97

We can also use the score method of the knn object, which will compute the test set accuracy for us:

In[ ]:
print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))

Out[ ]:
Test set score: 0.97

For this model, the test set accuracy is about 0.97, which means we made the right prediction for 97% of the irises in the test set. Under some mathematical assumptions, this means that we can expect our model to be correct 97% of the time for new irises. For our hobby botanist application, this high level of accuracy means that our model may be trustworthy enough to use. In later chapters we will discuss how we can improve performance, and what caveats there are in tuning a model.

Summary and Outlook

Let's summarize what we learned in this chapter. We started with a brief introduction to machine learning and its applications, then discussed the distinction between supervised and unsupervised learning and gave an overview of the tools we'll be using in this book. Then, we formulated the task of predicting which species of iris a particular flower belongs to by using physical measurements of the flower. We used a dataset of measurements that was annotated by an expert with the correct species to build our model, making this a supervised learning task. There were three possible species, setosa, versicolor, or virginica, which made the task a three-class classification problem. The possible species are called classes in the classification problem, and the species of a single iris is called its label.

The Iris dataset consists of two NumPy arrays: one containing the data, which is referred to as X in scikit-learn, and one containing the correct or desired outputs, called y. The array X is a two-dimensional array of features, with one row per
data point and one column per feature. The array y is a one-dimensional array, which here contains one class label, an integer ranging from 0 to 2, for each of the samples.

We split our dataset into a training set, to build our model, and a test set, to evaluate how well our model will generalize to new, previously unseen data.

We chose the k-nearest neighbors classification algorithm, which makes predictions for a new data point by considering its closest neighbor(s) in the training set. This is implemented in the KNeighborsClassifier class, which contains the algorithm that builds the model as well as the algorithm that makes a prediction using the model. We instantiated the class, setting parameters. Then we built the model by calling the fit method, passing the training data (X_train) and training outputs (y_train) as parameters.

We evaluated the model using the score method, which computes the accuracy of the model. We applied the score method to the test set data and the test set labels and found that our model is about 97% accurate, meaning it is correct 97% of the time on the test set.

This gave us the confidence to apply the model to new data (in our example, new flower measurements) and trust that the model will be correct about 97% of the time.

Here is a summary of the code needed for the whole training and evaluation procedure:

In[ ]:
X_train, X_test, y_train, y_test = train_test_split(
    iris_dataset['data'], iris_dataset['target'], random_state=0)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)

print("Test set score: {:.2f}".format(knn.score(X_test, y_test)))

Out[ ]:
Test set score: 0.97

This snippet contains the core code for applying any machine learning algorithm using scikit-learn. The fit, predict, and score methods are the common interface to supervised models in scikit-learn, and with the concepts introduced in this chapter, you can apply these models to many machine learning tasks. In the next chapter, we will go into more depth about the different kinds of supervised models in scikit-learn and how to apply them successfully.
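Because fit, predict, and score form that common interface, swapping in a different supervised model usually only changes the line that constructs it. A minimal sketch of the same procedure with a different estimator (LogisticRegression is used here purely as an illustration; it is discussed in the next chapter):

In[ ]:
from sklearn.linear_model import LogisticRegression

# same data split as above; only the model construction changes
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
print("Test set score: {:.2f}".format(logreg.score(X_test, y_test)))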
Supervised Learning

As we mentioned earlier, supervised machine learning is one of the most commonly used and successful types of machine learning. In this chapter, we will describe supervised learning in more detail and explain several popular supervised learning algorithms. We already saw an application of supervised machine learning in Chapter 1: classifying iris flowers into several species using physical measurements of the flowers.

Remember that supervised learning is used whenever we want to predict a certain outcome from a given input, and we have examples of input/output pairs. We build a machine learning model from these input/output pairs, which comprise our training set. Our goal is to make accurate predictions for new, never-before-seen data. Supervised learning often requires human effort to build the training set, but afterward automates and often speeds up an otherwise laborious or infeasible task.

Classification and Regression

There are two major types of supervised machine learning problems, called classification and regression.

In classification, the goal is to predict a class label, which is a choice from a predefined list of possibilities. In Chapter 1 we used the example of classifying irises into one of three possible species. Classification is sometimes separated into binary classification, which is the special case of distinguishing between exactly two classes, and multiclass classification, which is classification between more than two classes. You can think of binary classification as trying to answer a yes/no question. Classifying emails as either spam or not spam is an example of a binary classification problem. In this binary classification task, the yes/no question being asked would be "Is this email spam?"
In binary classification we often speak of one class being the positive class and the other class being the negative class. Here, positive doesn't represent having benefit or value, but rather what the object of the study is. So, when looking for spam, "positive" could mean the spam class. Which of the two classes is called positive is often a subjective matter, and specific to the domain.

The iris example, on the other hand, is an example of a multiclass classification problem. Another example is predicting what language a website is in from the text on the website. The classes here would be a pre-defined list of possible languages.

For regression tasks, the goal is to predict a continuous number, or a floating-point number in programming terms (or real number in mathematical terms). Predicting a person's annual income from their education, their age, and where they live is an example of a regression task. When predicting income, the predicted value is an amount, and can be any number in a given range. Another example of a regression task is predicting the yield of a corn farm given attributes such as previous yields, weather, and number of employees working on the farm. The yield again can be an arbitrary number.

An easy way to distinguish between classification and regression tasks is to ask whether there is some kind of continuity in the output. If there is continuity between possible outcomes, then the problem is a regression problem. Think about predicting annual income. There is a clear continuity in the output. Whether a person makes $40,000 or $40,001 a year does not make a tangible difference, even though these are different amounts of money; if our algorithm predicts $39,999 or $40,001 when it should have predicted $40,000, we don't mind that much.

By contrast, for the task of recognizing the language of a website (which is a classification problem), there is no matter of degree. A website is in one language, or it is in another. There is no continuity between languages, and there is no language that is between English and French. [Footnote: We ask linguists to excuse the simplified presentation of languages as distinct and fixed entities.]

Generalization, Overfitting, and Underfitting

In supervised learning, we want to build a model on the training data and then be able to make accurate predictions on new, unseen data that has the same characteristics as the training set that we used. If a model is able to make accurate predictions on unseen data, we say it is able to generalize from the training set to the test set. We want to build a model that is able to generalize as accurately as possible.
Usually, we build a model in such a way that it can make accurate predictions on the training set. If the training and test sets have enough in common, we expect the model to also be accurate on the test set. However, there are some cases where this can go wrong. For example, if we allow ourselves to build very complex models, we can always be as accurate as we like on the training set.

Let's take a look at a made-up example to illustrate this point. Say a novice data scientist wants to predict whether a customer will buy a boat, given records of previous boat buyers and customers who we know are not interested in buying a boat. [Footnote: In the real world, this is actually a tricky problem. While we know that the other customers haven't bought a boat from us yet, they might have bought one from someone else, or they may still be saving and plan to buy one in the future.] The goal is to send out promotional emails to people who are likely to actually make a purchase, but not bother those customers who won't be interested.

Suppose we have the customer records shown in Table 2-1.

Table 2-1. Example data about customers

Age | Number of cars owned | Owns house | Number of children | Marital status | Owns a dog | Bought a boat
... | ... | yes | ... | widowed  | no  | yes
... | ... | yes | ... | married  | no  | yes
... | ... | no  | ... | married  | yes | no
... | ... | no  | ... | single   | no  | no
... | ... | no  | ... | divorced | yes | no
... | ... | yes | ... | married  | yes | no
... | ... | no  | ... | single   | no  | no
... | ... | yes | ... | married  | yes | no
... | ... | yes | ... | divorced | no  | yes
... | ... | yes | ... | divorced | no  | no
... | ... | yes | ... | married  | yes | yes
... | ... | no  | ... | single   | no  | no

After looking at the data for a while, our novice data scientist comes up with the following rule: "If the customer is older than 45, and has less than 3 children or is not divorced, then they want to buy a boat." When asked how well this rule of his does, our data scientist answers, "It's 100 percent accurate!" And indeed, on the data that is in the table, the rule is perfectly accurate. There are many possible rules we could come up with that would explain perfectly if someone in this dataset wants to buy a boat. No age appears twice in the data, so we could say people of exactly the ages that appear for the buyers want to buy a boat, while all others don't.
While we can come up with many rules that work well on this data, remember that we are not interested in making predictions for this dataset; we already know the answers for these customers. We want to know if new customers are likely to buy a boat. We therefore want to find a rule that will work well for new customers, and achieving 100 percent accuracy on the training set does not help us there. We might not expect that the rule our data scientist came up with will work very well on new customers. It seems too complex, and it is supported by very little data. For example, the "or is not divorced" part of the rule hinges on a single customer.

The only measure of whether an algorithm will perform well on new data is the evaluation on the test set. However, intuitively we expect simple models to generalize better to new data. [Footnote: And also provably, with the right math.] If the rule was "People older than 50 want to buy a boat," and this would explain the behavior of all the customers, we would trust it more than the rule involving children and marital status in addition to age. Therefore, we always want to find the simplest model. Building a model that is too complex for the amount of information we have, as our novice data scientist did, is called overfitting. Overfitting occurs when you fit a model too closely to the particularities of the training set and obtain a model that works well on the training set but is not able to generalize to new data. On the other hand, if your model is too simple (say, "Everybody who owns a house buys a boat"), then you might not be able to capture all the aspects of and variability in the data, and your model will do badly even on the training set. Choosing too simple a model is called underfitting.

The more complex we allow our model to be, the better we will be able to predict on the training data. However, if our model becomes too complex, we start focusing too much on each individual data point in our training set, and the model will not generalize well to new data. There is a sweet spot in between that will yield the best generalization performance. This is the model we want to find. The trade-off between overfitting and underfitting is illustrated in Figure 2-1.
Relation of Model Complexity to Dataset Size

It's important to note that model complexity is intimately tied to the variation of inputs contained in your training dataset: the larger variety of data points your dataset contains, the more complex a model you can use without overfitting. Usually, collecting more data points will yield more variety, so larger datasets allow building more complex models. However, simply duplicating the same data points or collecting very similar data will not help.

Going back to the boat selling example, if we saw 10,000 more rows of customer data, and all of them complied with the rule "If the customer is older than 45, and has less than 3 children or is not divorced, then they want to buy a boat," we would be much more likely to believe this to be a good rule than when it was developed using only the 12 rows in Table 2-1.

Having more data and building appropriately more complex models can often work wonders for supervised learning tasks. In this book, we will focus on working with datasets of fixed sizes. In the real world, you often have the ability to decide how much data to collect, which might be more beneficial than tweaking and tuning your model. Never underestimate the power of more data.

Supervised Machine Learning Algorithms

We will now review the most popular machine learning algorithms and explain how they learn from data and how they make predictions. We will also discuss how the concept of model complexity plays out for each of these models, and provide an overview of how each algorithm builds a model.
We will examine the strengths and weaknesses of each algorithm, and what kind of data they can best be applied to. We will also explain the meaning of the most important parameters and options. [Footnote: Discussing all of them is beyond the scope of the book, and we refer you to the scikit-learn documentation for more details.] Many algorithms have a classification and a regression variant, and we will describe both.

It is not necessary to read through the descriptions of each algorithm in detail, but understanding the models will give you a better feeling for the different ways machine learning algorithms can work. This chapter can also be used as a reference guide, and you can come back to it when you are unsure about the workings of any of the algorithms.

Some Sample Datasets

We will use several datasets to illustrate the different algorithms. Some of the datasets will be small and synthetic (meaning made-up), designed to highlight particular aspects of the algorithms. Other datasets will be large, real-world examples.

An example of a synthetic two-class classification dataset is the forge dataset, which has two features. The following code creates a scatter plot (Figure 2-2) visualizing all of the data points in this dataset. The plot has the first feature on the x-axis and the second feature on the y-axis. As is always the case in scatter plots, each data point is represented as one dot. The color and shape of the dot indicates its class:

In[ ]:
# generate dataset
X, y = mglearn.datasets.make_forge()
# plot dataset
mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
plt.legend(["Class 0", "Class 1"], loc=4)
plt.xlabel("First feature")
plt.ylabel("Second feature")
print("X.shape: {}".format(X.shape))

Out[ ]:
X.shape: (26, 2)
As you can see from X.shape, this dataset consists of 26 data points, with 2 features.

To illustrate regression algorithms, we will use the synthetic wave dataset. The wave dataset has a single input feature and a continuous target variable (or response) that we want to model. The plot created here (Figure 2-3) shows the single feature on the x-axis and the regression target (the output) on the y-axis:

In[ ]:
X, y = mglearn.datasets.make_wave(n_samples=40)
plt.plot(X, y, 'o')
plt.ylim(-3, 3)
plt.xlabel("Feature")
plt.ylabel("Target")
Figure 2-3. Plot of the wave dataset, with the x-axis showing the feature and the y-axis showing the regression target

We are using these very simple, low-dimensional datasets because we can easily visualize them: a printed page has two dimensions, so data with more than two features is hard to show. Any intuition derived from datasets with few features (also called low-dimensional datasets) might not hold in datasets with many features (high-dimensional datasets). As long as you keep that in mind, inspecting algorithms on low-dimensional datasets can be very instructive.

We will complement these small synthetic datasets with two real-world datasets that are included in scikit-learn. One is the Wisconsin Breast Cancer dataset (cancer, for short), which records clinical measurements of breast cancer tumors. Each tumor is labeled as "benign" (for harmless tumors) or "malignant" (for cancerous tumors), and the task is to learn to predict whether a tumor is malignant based on the measurements of the tissue.

The data can be loaded using the load_breast_cancer function from scikit-learn:

In[ ]:
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
print("cancer.keys():\n{}".format(cancer.keys()))
Out[ ]:
cancer.keys():
dict_keys(['feature_names', 'data', 'DESCR', 'target', 'target_names'])

Datasets that are included in scikit-learn are usually stored as Bunch objects, which contain some information about the dataset as well as the actual data. All you need to know about Bunch objects is that they behave like dictionaries, with the added benefit that you can access values using a dot (as in bunch.key instead of bunch['key']).

The dataset consists of 569 data points, with 30 features each:

In[ ]:
print("Shape of cancer data: {}".format(cancer.data.shape))

Out[ ]:
Shape of cancer data: (569, 30)

Of these 569 data points, 212 are labeled as malignant and 357 as benign:

In[ ]:
print("Sample counts per class:\n{}".format(
    {n: v for n, v in zip(cancer.target_names, np.bincount(cancer.target))}))

Out[ ]:
Sample counts per class:
{'benign': 357, 'malignant': 212}

To get a description of the semantic meaning of each feature, we can have a look at the feature_names attribute:

In[ ]:
print("Feature names:\n{}".format(cancer.feature_names))

Out[ ]:
Feature names:
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
 'mean smoothness' 'mean compactness' 'mean concavity'
 'mean concave points' 'mean symmetry' 'mean fractal dimension'
 'radius error' 'texture error' 'perimeter error' 'area error'
 'smoothness error' 'compactness error' 'concavity error'
 'concave points error' 'symmetry error' 'fractal dimension error'
 'worst radius' 'worst texture' 'worst perimeter' 'worst area'
 'worst smoothness' 'worst compactness' 'worst concavity'
 'worst concave points' 'worst symmetry' 'worst fractal dimension']
We will also be using a real-world regression dataset, the Boston Housing dataset. The task associated with this dataset is to predict the median value of homes in several Boston neighborhoods in the 1970s, using information such as crime rate, proximity to the Charles River, highway accessibility, and so on. The dataset contains 506 data points, described by 13 features:

In[ ]:
from sklearn.datasets import load_boston
boston = load_boston()
print("Data shape: {}".format(boston.data.shape))

Out[ ]:
Data shape: (506, 13)

Again, you can get more information about the dataset by reading the DESCR attribute of boston. For our purposes here, we will actually expand this dataset by not only considering these 13 measurements as input features, but also looking at all products (also called interactions) between features. In other words, we will not only consider crime rate and highway accessibility as features, but also the product of crime rate and highway accessibility. Including derived features like these is called feature engineering, which we will discuss in more detail in Chapter 4. This derived dataset can be loaded using the load_extended_boston function:

In[ ]:
X, y = mglearn.datasets.load_extended_boston()
print("X.shape: {}".format(X.shape))

Out[ ]:
X.shape: (506, 104)

The resulting 104 features are the 13 original features together with the 91 possible products of two of those 13 features (pairs of features, where a feature may also be paired with itself). [Footnote: The number of combinations of k elements that can be selected from a set of n elements is called the binomial coefficient, often written \binom{n}{k} and spoken as "n choose k". Here, \binom{13}{2} = 78 distinct pairs plus the 13 squared features gives 91.]

We will use these datasets to explain and illustrate the properties of the different machine learning algorithms. But for now, let's get to the algorithms themselves. First, we will revisit the k-nearest neighbors (k-NN) algorithm that we saw in the previous chapter.
k-Nearest Neighbors

The k-NN algorithm is arguably the simplest machine learning algorithm. Building the model consists only of storing the training dataset. To make a prediction for a new data point, the algorithm finds the closest data points in the training dataset, its "nearest neighbors."

k-Neighbors classification

In its simplest version, the k-NN algorithm only considers exactly one nearest neighbor, which is the closest training data point to the point we want to make a prediction for. The prediction is then simply the known output for this training point. Figure 2-4 illustrates this for the case of classification on the forge dataset:

In[ ]:
mglearn.plots.plot_knn_classification(n_neighbors=1)

Figure 2-4. Predictions made by the one-nearest-neighbor model on the forge dataset

Here, we added three new data points, shown as stars. For each of them, we marked the closest point in the training set. The prediction of the one-nearest-neighbor algorithm is the label of that point (shown by the color of the cross).
Instead of considering only the closest neighbor, we can also consider an arbitrary number, k, of neighbors. This is where the name of the k-nearest neighbors algorithm comes from. When considering more than one neighbor, we use voting to assign a label. This means that for each test point, we count how many neighbors belong to class 0 and how many neighbors belong to class 1. We then assign the class that is more frequent: in other words, the majority class among the k-nearest neighbors. The following example (Figure 2-5) uses the three closest neighbors:

In[ ]:
mglearn.plots.plot_knn_classification(n_neighbors=3)

Figure 2-5. Predictions made by the three-nearest-neighbors model on the forge dataset

Again, the prediction is shown as the color of the cross. You can see that the prediction for the new data point at the top left is not the same as the prediction when we used only one neighbor.

While this illustration is for a binary classification problem, this method can be applied to datasets with any number of classes. For more classes, we count how many neighbors belong to each class and again predict the most common class.

Now let's look at how we can apply the k-nearest neighbors algorithm using scikit-learn. First, we split our data into a training and a test set so we can evaluate generalization performance, as discussed in Chapter 1:
In[ ]:
from sklearn.model_selection import train_test_split
X, y = mglearn.datasets.make_forge()

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

Next, we import and instantiate the class. This is when we can set parameters, like the number of neighbors to use. Here, we set it to 3:

In[ ]:
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier(n_neighbors=3)

Now, we fit the classifier using the training set. For KNeighborsClassifier this means storing the dataset, so we can compute neighbors during prediction:

In[ ]:
clf.fit(X_train, y_train)

To make predictions on the test data, we call the predict method. For each data point in the test set, this computes its nearest neighbors in the training set and finds the most common class among these:

In[ ]:
print("Test set predictions: {}".format(clf.predict(X_test)))

Out[ ]:
Test set predictions: [1 0 1 0 1 0 0]

To evaluate how well our model generalizes, we can call the score method with the test data together with the test labels:

In[ ]:
print("Test set accuracy: {:.2f}".format(clf.score(X_test, y_test)))

Out[ ]:
Test set accuracy: 0.86

We see that our model is about 86% accurate, meaning the model predicted the class correctly for 86% of the samples in the test dataset.
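To make the voting explicit, here is a small from-scratch sketch of the same idea (this is only an illustration, not how KNeighborsClassifier is implemented internally, which uses optimized data structures such as trees): for each test point, find the k closest training points by Euclidean distance and predict the majority label among them.

In[ ]:
def knn_predict(X_train, y_train, X_test, n_neighbors=3):
    predictions = []
    for x in X_test:
        # Euclidean distance from x to every training point
        distances = np.sqrt(((X_train - x) ** 2).sum(axis=1))
        # indices of the n_neighbors closest training points
        nearest = np.argsort(distances)[:n_neighbors]
        # majority vote among their labels
        predictions.append(np.bincount(y_train[nearest]).argmax())
    return np.array(predictions)

print("From-scratch predictions: {}".format(knn_predict(X_train, y_train, X_test)))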
Analyzing KNeighborsClassifier

For two-dimensional datasets, we can also illustrate the prediction for all possible test points in the xy-plane. We color the plane according to the class that would be assigned to a point in this region. This lets us view the decision boundary, which is the divide between where the algorithm assigns class 0 versus where it assigns class 1.

The following code produces the visualizations of the decision boundaries for one, three, and nine neighbors shown in Figure 2-6:

In[ ]:
fig, axes = plt.subplots(1, 3, figsize=(10, 3))

for n_neighbors, ax in zip([1, 3, 9], axes):
    # the fit method returns the object self, so we can instantiate
    # and fit in one line
    clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)
    mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=.4)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
    ax.set_title("{} neighbor(s)".format(n_neighbors))
    ax.set_xlabel("feature 0")
    ax.set_ylabel("feature 1")
axes[0].legend(loc=3)

Figure 2-6. Decision boundaries created by the nearest neighbors model for different values of n_neighbors

As you can see on the left in the figure, using a single neighbor results in a decision boundary that follows the training data closely. Considering more and more neighbors leads to a smoother decision boundary. A smoother boundary corresponds to a simpler model. In other words, using few neighbors corresponds to high model complexity (as shown on the right side of Figure 2-1), and using many neighbors corresponds to low model complexity (as shown on the left side of Figure 2-1). If you consider the extreme case where the number of neighbors is the number of all data points in the training set, each test point would have exactly the same neighbors (all training points) and all predictions would be the same: the class that is most frequent in the training set.

Let's investigate whether we can confirm the connection between model complexity and generalization that we discussed earlier. We will do this on the real-world Breast Cancer dataset. We begin by splitting the dataset into a training and a test set. Then we evaluate training and test set performance with different numbers of neighbors.
The results are shown in Figure 2-7:

In[ ]:
from sklearn.datasets import load_breast_cancer

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, stratify=cancer.target, random_state=66)

training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)

for n_neighbors in neighbors_settings:
    # build the model
    clf = KNeighborsClassifier(n_neighbors=n_neighbors)
    clf.fit(X_train, y_train)
    # record training set accuracy
    training_accuracy.append(clf.score(X_train, y_train))
    # record generalization accuracy
    test_accuracy.append(clf.score(X_test, y_test))

plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()

The plot shows the training and test set accuracy on the y-axis against the setting of n_neighbors on the x-axis. While real-world plots are rarely very smooth, we can still recognize some of the characteristics of overfitting and underfitting (note that because considering fewer neighbors corresponds to a more complex model, the plot is horizontally flipped relative to the illustration in Figure 2-1). Considering a single nearest neighbor, the prediction on the training set is perfect. But when more neighbors are considered, the model becomes simpler and the training accuracy drops. The test set accuracy for using a single neighbor is lower than when using more neighbors, indicating that using the single nearest neighbor leads to a model that is too complex. On the other hand, when considering 10 neighbors, the model is too simple and performance is even worse. The best performance is somewhere in the middle, using around six neighbors. Still, it is good to keep the scale of the plot in mind. The worst performance is around 88% accuracy, which might still be acceptable.
k-Neighbors regression

There is also a regression variant of the k-nearest neighbors algorithm. Again, let's start by using the single nearest neighbor, this time using the wave dataset. We've added three test data points as green stars on the x-axis. The prediction using a single neighbor is just the target value of the nearest neighbor. These are shown as blue stars in Figure 2-8:

In[ ]:
mglearn.plots.plot_knn_regression(n_neighbors=1)
Again, we can use more than the single closest neighbor for regression. When using multiple nearest neighbors, the prediction is the average, or mean, of the relevant neighbors (Figure 2-9):

In[ ]:
mglearn.plots.plot_knn_regression(n_neighbors=3)
The k-nearest neighbors algorithm for regression is implemented in the KNeighborsRegressor class in scikit-learn. It's used similarly to KNeighborsClassifier:

In[ ]:
from sklearn.neighbors import KNeighborsRegressor

X, y = mglearn.datasets.make_wave(n_samples=40)

# split the wave dataset into a training and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# instantiate the model and set the number of neighbors to consider to 3
reg = KNeighborsRegressor(n_neighbors=3)
# fit the model using the training data and training targets
reg.fit(X_train, y_train)

Now we can make predictions on the test set:

In[ ]:
print("Test set predictions:\n{}".format(reg.predict(X_test)))
Out[ ]:
Test set predictions:
[...]

We can also evaluate the model using the score method, which for regressors returns the R^2 score. The R^2 score, also known as the coefficient of determination, is a measure of goodness of a prediction for a regression model, and yields a score between 0 and 1. A value of 1 corresponds to a perfect prediction, and a value of 0 corresponds to a constant model that just predicts the mean of the training set responses, y_train:

In[ ]:
print("Test set R^2: {:.2f}".format(reg.score(X_test, y_test)))

Out[ ]:
Test set R^2: 0.83

Here, the score is 0.83, which indicates a relatively good model fit.

Analyzing KNeighborsRegressor

For our one-dimensional dataset, we can see what the predictions look like for all possible feature values (Figure 2-10). To do this, we create a test dataset consisting of many points on the line:

In[ ]:
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
# create 1,000 data points, evenly spaced between -3 and 3
line = np.linspace(-3, 3, 1000).reshape(-1, 1)
for n_neighbors, ax in zip([1, 3, 9], axes):
    # make predictions using 1, 3, or 9 neighbors
    reg = KNeighborsRegressor(n_neighbors=n_neighbors)
    reg.fit(X_train, y_train)
    ax.plot(line, reg.predict(line))
    ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)
    ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)

    ax.set_title(
        "{} neighbor(s)\n train score: {:.2f} test score: {:.2f}".format(
            n_neighbors, reg.score(X_train, y_train),
            reg.score(X_test, y_test)))
    ax.set_xlabel("Feature")
    ax.set_ylabel("Target")
axes[0].legend(["Model predictions", "Training data/target",
                "Test data/target"], loc="best")
Figure: Comparing predictions made by nearest neighbors regression for different values of n_neighbors

As we can see from the plot, using only a single neighbor, each point in the training set has an obvious influence on the predictions, and the predicted values go through all of the data points. This leads to a very unsteady prediction. Considering more neighbors leads to smoother predictions, but these do not fit the training data as well.

Strengths, Weaknesses, and Parameters

In principle, there are two important parameters to the KNeighbors classifier: the number of neighbors and how you measure distance between data points. In practice, using a small number of neighbors like three or five often works well, but you should certainly adjust this parameter. Choosing the right distance measure is somewhat beyond the scope of this book. By default, Euclidean distance is used, which works well in many settings.

One of the strengths of k-NN is that the model is very easy to understand, and it often gives reasonable performance without a lot of adjustments. Using this algorithm is a good baseline method to try before considering more advanced techniques. Building the nearest neighbors model is usually very fast, but when your training set is very large (either in number of features or in number of samples) prediction can be slow. When using the k-NN algorithm, it's important to preprocess your data (see Chapter 3). This approach often does not perform well on datasets with many features (hundreds or more), and it does particularly badly with datasets where most features are 0 most of the time (so-called sparse datasets).

So, while the k-nearest neighbors algorithm is easy to understand, it is not often used in practice, due to prediction being slow and its inability to handle many features. The method we discuss next has neither of these drawbacks.
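Before moving on, one practical note of my own (this is not code from the book): the distance measure is exposed directly through the metric parameter of the nearest neighbors estimators. The sketch below switches to the Manhattan distance; on the one-dimensional wave data the two metrics coincide, so this only changes anything once there is more than one feature:

In:
    import mglearn
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor

    X, y = mglearn.datasets.make_wave(n_samples=40)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # same model as before, but measuring distance with the Manhattan (L1) metric
    reg_l1 = KNeighborsRegressor(n_neighbors=3, metric="manhattan")
    reg_l1.fit(X_train, y_train)
    print("Test set R^2: {:.2f}".format(reg_l1.score(X_test, y_test)))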
Linear Models

Linear models are a class of models that are widely used in practice and have been studied extensively in the last few decades, with roots going back over a hundred years. Linear models make a prediction using a linear function of the input features, which we will explain shortly.

Linear Models for Regression

For regression, the general prediction formula for a linear model looks as follows:

    ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b

Here, x[0] to x[p] denote the features (in this example, the number of features is p + 1) of a single data point, w and b are parameters of the model that are learned, and ŷ is the prediction the model makes. For a dataset with a single feature, this is:

    ŷ = w[0] * x[0] + b

which you might remember from high school mathematics as the equation for a line. Here, w[0] is the slope and b is the y-axis offset. For more features, w contains the slopes along each feature axis. Alternatively, you can think of the predicted response as being a weighted sum of the input features, with weights (which can be negative) given by the entries of w.

Trying to learn the parameters w[0] and b on our one-dimensional wave dataset might lead to the following line (see the figure below):

In:
    mglearn.plots.plot_linear_regression_wave()

Out:
    w[0]: 0.394  b: -0.032
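To make the formula concrete, here is a small sketch of my own (not code from the book) that fits a LinearRegression on the wave data and checks that predict() is nothing more than w[0] * x[0] + b; the variable names are mine:

In:
    import mglearn
    from sklearn.linear_model import LinearRegression

    X, y = mglearn.datasets.make_wave(n_samples=60)
    lr = LinearRegression().fit(X, y)

    # manual prediction for the first data point: w[0] * x[0] + b
    manual = lr.coef_[0] * X[0, 0] + lr.intercept_
    print("manual:  {:.3f}".format(manual))
    print("predict: {:.3f}".format(lr.predict(X[:1])[0]))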
We added a coordinate cross into the plot to make it easier to understand the line. Looking at w[0] we see that the slope should be around 0.4, which we can confirm visually in the plot. The intercept is where the prediction line should cross the y-axis: this is slightly below zero, which you can also confirm in the image.

Linear models for regression can be characterized as regression models for which the prediction is a line for a single feature, a plane when using two features, or a hyperplane in higher dimensions (that is, when using more features).

If you compare the predictions made by the straight line with those made by KNeighborsRegressor earlier, using a straight line to make predictions seems very restrictive. It looks like all the fine details of the data are lost. In a sense, this is true. It is a strong (and somewhat unrealistic) assumption that our target y is a linear combination of the features. But looking at one-dimensional data gives a somewhat skewed perspective.
For datasets with many features, linear models can be very powerful. In particular, if you have more features than training data points, any target y can be perfectly modeled (on the training set) as a linear function. [Footnote: This is easy to see if you know some linear algebra.]

There are many different linear models for regression. The difference between these models lies in how the model parameters w and b are learned from the training data, and how model complexity can be controlled. We will now take a look at the most popular linear models for regression.

Linear Regression (aka Ordinary Least Squares)

Linear regression, or ordinary least squares (OLS), is the simplest and most classic linear method for regression. Linear regression finds the parameters w and b that minimize the mean squared error between predictions and the true regression targets, y, on the training set. The mean squared error is the average of the squared differences between the predictions and the true values. Linear regression has no parameters, which is a benefit, but it also has no way to control model complexity.

Here is the code that produces the model you can see in the earlier figure:

In:
    from sklearn.linear_model import LinearRegression

    X, y = mglearn.datasets.make_wave(n_samples=60)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    lr = LinearRegression().fit(X_train, y_train)

The "slope" parameters (w), also called weights or coefficients, are stored in the coef_ attribute, while the offset or intercept (b) is stored in the intercept_ attribute:

In:
    print("lr.coef_: {}".format(lr.coef_))
    print("lr.intercept_: {}".format(lr.intercept_))

Out:
    lr.coef_: [ 0.394]
    lr.intercept_: -0.032
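As a sanity check on what minimizing the mean squared error means, here is a short sketch of my own (not from the book) that solves the same least-squares problem directly with NumPy and compares the result to scikit-learn; it reuses X_train, y_train, and lr from the cell above:

In:
    import numpy as np

    # append a column of ones so the intercept b is estimated together with w
    X_design = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    coefs, _, _, _ = np.linalg.lstsq(X_design, y_train, rcond=None)

    print("lstsq   w: {:.3f}, b: {:.3f}".format(coefs[0], coefs[1]))
    print("sklearn w: {:.3f}, b: {:.3f}".format(lr.coef_[0], lr.intercept_))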
You might notice the trailing underscore at the end of coef_ and intercept_. scikit-learn always stores anything that is derived from the training data in attributes that end with a trailing underscore. That is to separate them from parameters that are set by the user.

The intercept_ attribute is always a single float number, while the coef_ attribute is a NumPy array with one entry per input feature. As we only have a single input feature in the wave dataset, lr.coef_ only has a single entry.

Let's look at the training set and test set performance:

In:
    print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))

Out:
    Training set score: 0.67
    Test set score: 0.66

An R^2 of around 0.66 is not very good, but we can see that the scores on the training and test sets are very close together. This means we are likely underfitting, not overfitting. For this one-dimensional dataset, there is little danger of overfitting, as the model is very simple (or restricted). However, with higher-dimensional datasets (meaning datasets with a large number of features), linear models become more powerful, and there is a higher chance of overfitting. Let's take a look at how LinearRegression performs on a more complex dataset, like the Boston Housing dataset. Remember that this dataset has 506 samples and 104 derived features. First, we load the dataset and split it into a training and a test set. Then we build the linear regression model as before:

In:
    X, y = mglearn.datasets.load_extended_boston()

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    lr = LinearRegression().fit(X_train, y_train)

When comparing training set and test set scores, we find that we predict very accurately on the training set, but the R^2 on the test set is much worse:

In:
    print("Training set score: {:.2f}".format(lr.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(lr.score(X_test, y_test)))
Out:
    Training set score: 0.95
    Test set score: 0.61

This discrepancy between performance on the training set and the test set is a clear sign of overfitting, and therefore we should try to find a model that allows us to control complexity. One of the most commonly used alternatives to standard linear regression is ridge regression, which we will look into next.

Ridge Regression

Ridge regression is also a linear model for regression, so the formula it uses to make predictions is the same one used for ordinary least squares. In ridge regression, though, the coefficients (w) are chosen not only so that they predict well on the training data, but also to fit an additional constraint. We also want the magnitude of the coefficients to be as small as possible; in other words, all entries of w should be close to zero. Intuitively, this means each feature should have as little effect on the outcome as possible (which translates to having a small slope), while still predicting well. This constraint is an example of what is called regularization. Regularization means explicitly restricting a model to avoid overfitting. The particular kind used by ridge regression is known as L2 regularization.

Ridge regression is implemented in linear_model.Ridge. Let's see how well it does on the extended Boston Housing dataset:

In:
    from sklearn.linear_model import Ridge

    ridge = Ridge().fit(X_train, y_train)
    print("Training set score: {:.2f}".format(ridge.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(ridge.score(X_test, y_test)))

Out:
    Training set score: 0.89
    Test set score: 0.75

As you can see, the training set score of Ridge is lower than for LinearRegression, while the test set score is higher. This is consistent with our expectation. With linear regression, we were overfitting our data. Ridge is a more restricted model, so we are less likely to overfit. A less complex model means worse performance on the training set, but better generalization. As we are only interested in generalization performance, we should choose the Ridge model over the LinearRegression model. Mathematically, Ridge penalizes the L2 norm of the coefficients, or the Euclidean length of w.
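To see the L2 penalty at work, a short sketch of my own (not from the book, reusing X_train and y_train from the extended Boston split above) prints the Euclidean length of the learned coefficient vector for a few settings of alpha; the exact numbers depend on the data split, but the norm shrinks as alpha grows:

In:
    import numpy as np
    from sklearn.linear_model import Ridge

    for alpha in [0.1, 1, 10]:
        ridge_a = Ridge(alpha=alpha).fit(X_train, y_train)
        # Euclidean (L2) length of the coefficient vector w
        print("alpha={:<4}  ||w||_2 = {:.2f}".format(
            alpha, np.sqrt(np.sum(ridge_a.coef_ ** 2))))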
coefficientsand its performance on the training set how much importance the model places on simplicity versus training set performance can be specified by the userusing the alpha parameter in the previous examplewe used the default parameter alpha= there is no reason why this will give us the best trade-offthough the optimum setting of alpha depends on the particular dataset we are using increasing alpha forces coefficients to move more toward zerowhich decreases training set performance but might help generalization for examplein[ ]ridge ridge(alpha= fit(x_trainy_trainprint("training set score{ }format(ridge score(x_trainy_train))print("test set score{ }format(ridge score(x_testy_test))out[ ]training set score test set score decreasing alpha allows the coefficients to be less restrictedmeaning we move right in figure - for very small values of alphacoefficients are barely restricted at alland we end up with model that resembles linearregressionin[ ]ridge ridge(alpha= fit(x_trainy_trainprint("training set score{ }format(ridge score(x_trainy_train))print("test set score{ }format(ridge score(x_testy_test))out[ ]training set score test set score herealpha= seems to be working well we could try decreasing alpha even more to improve generalization for nownotice how the parameter alpha corresponds to the model complexity as shown in figure - we will discuss methods to properly select parameters in we can also get more qualitative insight into how the alpha parameter changes the model by inspecting the coef_ attribute of models with different values of alpha higher alpha means more restricted modelso we expect the entries of coef_ to have smaller magnitude for high value of alpha than for low value of alpha this is confirmed in the plot in figure - supervised learning
17,045
In:
    plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
    plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
    plt.plot(ridge01.coef_, 'v', label="Ridge alpha=0.1")

    plt.plot(lr.coef_, 'o', label="LinearRegression")
    plt.xlabel("Coefficient index")
    plt.ylabel("Coefficient magnitude")
    plt.hlines(0, 0, len(lr.coef_))
    plt.ylim(-25, 25)
    plt.legend()

Figure: Comparing coefficient magnitudes for ridge regression with different values of alpha and linear regression

Here, the x-axis enumerates the entries of coef_: x=0 shows the coefficient associated with the first feature, x=1 the coefficient associated with the second feature, and so on up to the last feature. The y-axis shows the numeric values of the corresponding coefficients. The main takeaway here is that for alpha=10, the coefficients are mostly small in magnitude. The coefficients for the Ridge model with alpha=1 are somewhat larger. The dots corresponding to alpha=0.1 have larger magnitude still, and many of the dots corresponding to linear regression without any regularization (which would be alpha=0) are so large they are outside of the chart.
Another way to understand the influence of regularization is to fix a value of alpha but vary the amount of training data available. For the next figure, we subsampled the Boston Housing dataset and evaluated LinearRegression and Ridge(alpha=1) on subsets of increasing size (plots that show model performance as a function of dataset size are called learning curves):

In:
    mglearn.plots.plot_ridge_n_samples()

Figure: Learning curves for ridge regression and linear regression on the Boston Housing dataset

As one would expect, the training score is higher than the test score for all dataset sizes, for both ridge and linear regression. Because ridge is regularized, the training score of ridge is lower than the training score for linear regression across the board. However, the test score for ridge is better, particularly for small subsets of the data. For the smaller subsets, linear regression is not able to learn anything. As more and more data becomes available to the model, both models improve, and linear regression catches up with ridge in the end. The lesson here is that with enough training data, regularization becomes less important, and given enough data, ridge and linear regression will have the same performance.
(The fact that this happens here when using the full dataset is just by chance.) Another interesting aspect of the learning curves is the decrease in training performance for linear regression: if more data is added, it becomes harder for a model to overfit, or memorize, the data.

Lasso

An alternative to Ridge for regularizing linear regression is Lasso. As with ridge regression, using the lasso also restricts coefficients to be close to zero, but in a slightly different way, called L1 regularization. [Footnote: The lasso penalizes the L1 norm of the coefficient vector, or in other words the sum of the absolute values of the coefficients.] The consequence of L1 regularization is that when using the lasso, some coefficients are exactly zero. This means some features are entirely ignored by the model. This can be seen as a form of automatic feature selection. Having some coefficients be exactly zero often makes a model easier to interpret, and can reveal the most important features of your model.

Let's apply the lasso to the extended Boston Housing dataset:

In:
    from sklearn.linear_model import Lasso

    lasso = Lasso().fit(X_train, y_train)
    print("Training set score: {:.2f}".format(lasso.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(lasso.score(X_test, y_test)))
    print("Number of features used: {}".format(np.sum(lasso.coef_ != 0)))

Out:
    Training set score: 0.29
    Test set score: 0.21
    Number of features used: 4

As you can see, Lasso does quite badly, both on the training and the test set. This indicates that we are underfitting, and we find that it used only 4 of the available features. Similarly to Ridge, the Lasso also has a regularization parameter, alpha, that controls how strongly coefficients are pushed toward zero. In the previous example, we used the default of alpha=1.0. To reduce underfitting, let's try decreasing alpha. When we do this, we also need to increase the default setting of max_iter (the maximum number of iterations to run):
In:
    # we increase the default setting of max_iter;
    # otherwise the model would warn us that we should increase it
    lasso001 = Lasso(alpha=0.01, max_iter=100000).fit(X_train, y_train)
    print("Training set score: {:.2f}".format(lasso001.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(lasso001.score(X_test, y_test)))
    print("Number of features used: {}".format(np.sum(lasso001.coef_ != 0)))

Out:
    Training set score: 0.90
    Test set score: 0.77
    Number of features used: 33

A lower alpha allowed us to fit a more complex model, which worked better on the training and test data. The performance is slightly better than using Ridge, and we are using only 33 of the features. This makes this model potentially easier to understand.

If we set alpha too low, however, we again remove the effect of regularization and end up overfitting, with a result similar to LinearRegression:

In:
    lasso00001 = Lasso(alpha=0.0001, max_iter=100000).fit(X_train, y_train)
    print("Training set score: {:.2f}".format(lasso00001.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(lasso00001.score(X_test, y_test)))
    print("Number of features used: {}".format(np.sum(lasso00001.coef_ != 0)))

Out:
    Training set score: 0.95
    Test set score: 0.64
    Number of features used: 96

Again, we can plot the coefficients of the different models, similarly to the ridge plot. The result is shown in the following figure:

In:
    plt.plot(lasso.coef_, 's', label="Lasso alpha=1")
    plt.plot(lasso001.coef_, '^', label="Lasso alpha=0.01")
    plt.plot(lasso00001.coef_, 'v', label="Lasso alpha=0.0001")

    plt.plot(ridge01.coef_, 'o', label="Ridge alpha=0.1")
    plt.legend(ncol=2, loc=(0, 1.05))
    plt.ylim(-25, 25)
    plt.xlabel("Coefficient index")
    plt.ylabel("Coefficient magnitude")
Figure: Comparing coefficient magnitudes for lasso regression with different values of alpha and ridge regression

For alpha=1, we not only see that most of the coefficients are zero (which we already knew), but that the remaining coefficients are also small in magnitude. Decreasing alpha to 0.01, we obtain the solution shown as the green dots, which still causes most features to be exactly zero. Using alpha=0.0001, we get a model that is quite unregularized, with most coefficients nonzero and of large magnitude. For comparison, the best Ridge solution is shown in teal. The Ridge model with alpha=0.1 has similar predictive performance as the lasso model with alpha=0.01, but using Ridge, all coefficients are nonzero.

In practice, ridge regression is usually the first choice between these two models. However, if you have a large amount of features and expect only a few of them to be important, Lasso might be a better choice. Similarly, if you would like to have a model that is easy to interpret, Lasso will provide a model that is easier to understand, as it will select only a subset of the input features.

scikit-learn also provides the ElasticNet class, which combines the penalties of Lasso and Ridge. In practice, this combination works best, though at the price of having two parameters to adjust: one for the L1 regularization, and one for the L2 regularization.
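As a rough illustration (my own sketch, not code from the book, reusing X_train, y_train, and np from the extended Boston cells above), ElasticNet exposes the mix of the two penalties through the l1_ratio parameter in addition to alpha; the particular values below are arbitrary starting points, not tuned settings:

In:
    from sklearn.linear_model import ElasticNet

    # l1_ratio=0.5 weights the L1 and L2 penalties equally;
    # l1_ratio=1.0 would be pure lasso, l1_ratio=0.0 pure ridge
    enet = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=100000)
    enet.fit(X_train, y_train)
    print("Training set score: {:.2f}".format(enet.score(X_train, y_train)))
    print("Test set score: {:.2f}".format(enet.score(X_test, y_test)))
    print("Number of features used: {}".format(np.sum(enet.coef_ != 0)))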
Linear Models for Classification

Linear models are also extensively used for classification. Let's look at binary classification first. In this case, a prediction is made using the following formula:

    ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b > 0

The formula looks very similar to the one for linear regression, but instead of just returning the weighted sum of the features, we threshold the predicted value at zero. If the function is smaller than zero, we predict the class -1; if it is larger than zero, we predict the class +1. This prediction rule is common to all linear models for classification. Again, there are many different ways to find the coefficients (w) and the intercept (b).

For linear models for regression, the output, ŷ, is a linear function of the features: a line, plane, or hyperplane (in higher dimensions). For linear models for classification, the decision boundary is a linear function of the input. In other words, a (binary) linear classifier is a classifier that separates two classes using a line, a plane, or a hyperplane. We will see examples of that in this section.

There are many algorithms for learning linear models. These algorithms all differ in the following two ways:

- The way in which they measure how well a particular combination of coefficients and intercept fits the training data
- If and what kind of regularization they use

Different algorithms choose different ways to measure what "fitting the training set well" means. For technical mathematical reasons, it is not possible to adjust w and b to minimize the number of misclassifications the algorithms produce, as one might hope. For our purposes, and many applications, the different choices for the first item in the preceding list (called loss functions) are of little significance.

The two most common linear classification algorithms are logistic regression, implemented in linear_model.LogisticRegression, and linear support vector machines (linear SVMs), implemented in svm.LinearSVC (SVC stands for support vector classifier). Despite its name, LogisticRegression is a classification algorithm and not a regression algorithm, and it should not be confused with LinearRegression.

We can apply the LogisticRegression and LinearSVC models to the forge dataset, and visualize the decision boundaries found by the linear models (see the following figure):
In:
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC

    X, y = mglearn.datasets.make_forge()

    fig, axes = plt.subplots(1, 2, figsize=(10, 3))

    for model, ax in zip([LinearSVC(), LogisticRegression()], axes):
        clf = model.fit(X, y)
        mglearn.plots.plot_2d_separator(clf, X, fill=False, eps=0.5,
                                        ax=ax, alpha=.7)
        mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)
        ax.set_title("{}".format(clf.__class__.__name__))
        ax.set_xlabel("Feature 0")
        ax.set_ylabel("Feature 1")
    axes[0].legend()

Figure: Decision boundaries of a linear SVM and logistic regression on the forge dataset with the default parameters

In this figure, we have the first feature of the forge dataset on the x-axis and the second feature on the y-axis, as before. We display the decision boundaries found by LinearSVC and LogisticRegression respectively as straight lines, separating the area classified as one class on the top from the area classified as the other class on the bottom. In other words, any new data point that lies above the black line will be assigned to the first of these classes by the respective classifier, while any point that lies below the black line will be assigned to the other class.

The two models come up with similar decision boundaries. Note that both misclassify two of the points. By default, both models apply an L2 regularization, in the same way that Ridge does for regression.

For LogisticRegression and LinearSVC the trade-off parameter that determines the strength of the regularization is called C, and higher values of C correspond to less regularization.
In other words, when you use a high value for the parameter C, LogisticRegression and LinearSVC try to fit the training set as best as possible, while with low values of the parameter C, the models put more emphasis on finding a coefficient vector (w) that is close to zero.

There is another interesting aspect of how the parameter C acts. Using low values of C will cause the algorithms to try to adjust to the "majority" of data points, while using a higher value of C stresses the importance that each individual data point be classified correctly. Here is an illustration using LinearSVC:

In:
    mglearn.plots.plot_linear_svc_regularization()

Figure: Decision boundaries of a linear SVM on the forge dataset for different values of C

On the lefthand side, we have a very small C corresponding to a lot of regularization. Most of the points of one class are at the top, and most of the points of the other class are at the bottom. The strongly regularized model chooses a relatively horizontal line, misclassifying two points. In the center plot, C is slightly higher, and the model focuses more on the two misclassified samples, tilting the decision boundary. Finally, on the righthand side, the very high value of C in the model tilts the decision boundary a lot, now correctly classifying all points of one class. One point of the other class is still misclassified, as it is not possible to correctly classify all points in this dataset using a straight line. The model illustrated on the righthand side tries hard to correctly classify all points, but might not capture the overall layout of the classes well. In other words, this model is likely overfitting.

Similarly to the case of regression, linear models for classification might seem very restrictive in low-dimensional spaces, only allowing for decision boundaries that are straight lines or planes. Again, in high dimensions, linear models for classification become very powerful, and guarding against overfitting becomes increasingly important when considering more features.
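Coming back to the thresholding rule from the start of this section: scikit-learn's linear classifiers expose the value of w · x + b through decision_function, and predict simply reports which side of zero each point falls on. The following is a small sketch of my own on the forge data (the variable names are not from the book):

In:
    import numpy as np
    import mglearn
    from sklearn.linear_model import LogisticRegression

    X, y = mglearn.datasets.make_forge()
    clf = LogisticRegression().fit(X, y)

    # w . x + b for the first five points
    scores = clf.decision_function(X[:5])
    # thresholding at zero reproduces the class predictions (classes are 0/1 here)
    print("scores:      {}".format(np.round(scores, 2)))
    print("thresholded: {}".format((scores > 0).astype(int)))
    print("predict:     {}".format(clf.predict(X[:5])))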
Let's analyze LogisticRegression in more detail on the Breast Cancer dataset:

In:
    from sklearn.datasets import load_breast_cancer

    cancer = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        cancer.data, cancer.target, stratify=cancer.target, random_state=42)
    logreg = LogisticRegression().fit(X_train, y_train)
    print("Training set score: {:.3f}".format(logreg.score(X_train, y_train)))
    print("Test set score: {:.3f}".format(logreg.score(X_test, y_test)))

Out:
    Training set score: 0.953
    Test set score: 0.958

The default value of C=1 provides quite good performance, with 95% accuracy on both the training and the test set. But as training and test set performance are very close, it is likely that we are underfitting. Let's try to increase C to fit a more flexible model:

In:
    logreg100 = LogisticRegression(C=100).fit(X_train, y_train)
    print("Training set score: {:.3f}".format(logreg100.score(X_train, y_train)))
    print("Test set score: {:.3f}".format(logreg100.score(X_test, y_test)))

Out:
    Training set score: 0.972
    Test set score: 0.965

Using C=100 results in higher training set accuracy, and also a slightly increased test set accuracy, confirming our intuition that a more complex model should perform better. We can also investigate what happens if we use an even more regularized model than the default of C=1, by setting C=0.01:

In:
    logreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)
    print("Training set score: {:.3f}".format(logreg001.score(X_train, y_train)))
    print("Test set score: {:.3f}".format(logreg001.score(X_test, y_test)))

Out:
    Training set score: 0.934
    Test set score: 0.930
As we would expect when using a more regularized model on an already underfit model, both training and test set accuracy decrease relative to the default parameters.

Finally, let's look at the coefficients learned by the models with the three different settings of the regularization parameter C:

In:
    plt.plot(logreg.coef_.T, 'o', label="C=1")
    plt.plot(logreg100.coef_.T, '^', label="C=100")
    plt.plot(logreg001.coef_.T, 'v', label="C=0.01")
    plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
    plt.hlines(0, 0, cancer.data.shape[1])
    plt.ylim(-5, 5)
    plt.xlabel("Coefficient index")
    plt.ylabel("Coefficient magnitude")
    plt.legend()

As LogisticRegression applies an L2 regularization by default, the result looks similar to that produced by Ridge: stronger regularization pushes coefficients more and more toward zero, though coefficients never become exactly zero. Inspecting the plot more closely, we can also see an interesting effect in the third coefficient, for "mean perimeter". For C=100 and C=1 the coefficient is negative, while for C=0.01 the coefficient is positive, with a magnitude that is even larger than for C=1. Interpreting a model like this, one might think the coefficient tells us which class a feature is associated with. For example, one might think that a high "texture error" feature is related to a sample being "malignant". However, the change of sign in the coefficient for "mean perimeter" means that depending on which model we look at, a high "mean perimeter" could be taken as being either indicative of "benign" or indicative of "malignant". This illustrates that interpretations of coefficients of linear models should always be taken with a grain of salt.
Figure: Coefficients learned by logistic regression on the Breast Cancer dataset for different values of C
If we desire a more interpretable model, using L1 regularization might help, as it limits the model to using only a few features. Here are the coefficient plot and classification accuracies for L1 regularization:

In:
    for C, marker in zip([0.001, 1, 100], ['o', '^', 'v']):
        # the liblinear solver supports the L1 penalty
        lr_l1 = LogisticRegression(C=C, penalty="l1",
                                   solver="liblinear").fit(X_train, y_train)
        print("Training accuracy of l1 logreg with C={:.3f}: {:.2f}".format(
            C, lr_l1.score(X_train, y_train)))
        print("Test accuracy of l1 logreg with C={:.3f}: {:.2f}".format(
            C, lr_l1.score(X_test, y_test)))
        plt.plot(lr_l1.coef_.T, marker, label="C={:.3f}".format(C))

    plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)
    plt.hlines(0, 0, cancer.data.shape[1])
    plt.xlabel("Coefficient index")
    plt.ylabel("Coefficient magnitude")
    plt.ylim(-5, 5)
    plt.legend(loc=3)

Out:
    Training accuracy of l1 logreg with C=0.001: 0.91
    Test accuracy of l1 logreg with C=0.001: 0.92
    Training accuracy of l1 logreg with C=1.000: 0.96
    Test accuracy of l1 logreg with C=1.000: 0.96
    Training accuracy of l1 logreg with C=100.000: 0.99
    Test accuracy of l1 logreg with C=100.000: 0.98

As you can see, there are many parallels between linear models for binary classification and linear models for regression. As in regression, the main difference between the models is the penalty parameter, which influences the regularization and whether the model will use all available features or select only a subset.
Figure: Coefficients learned by logistic regression with L1 penalty on the Breast Cancer dataset for different values of C

Linear Models for Multiclass Classification

Many linear classification models are for binary classification only, and don't extend naturally to the multiclass case (with the exception of logistic regression). A common technique to extend a binary classification algorithm to a multiclass classification algorithm is the one-vs.-rest approach. In the one-vs.-rest approach, a binary model is learned for each class that tries to separate that class from all of the other classes, resulting in as many binary models as there are classes. To make a prediction, all binary classifiers are run on a test point. The classifier that has the highest score on its single class "wins," and this class label is returned as the prediction.
Having one binary classifier per class results in having one vector of coefficients (w) and one intercept (b) for each class. The class for which the result of the classification confidence formula given here is highest is the assigned class label:

    w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b

The mathematics behind multiclass logistic regression differ somewhat from the one-vs.-rest approach, but they also result in one coefficient vector and intercept per class, and the same method of making a prediction is applied.

Let's apply the one-vs.-rest method to a simple three-class classification dataset. We use a two-dimensional dataset, where each class is given by data sampled from a Gaussian distribution (see the following figure):

In:
    from sklearn.datasets import make_blobs

    X, y = make_blobs(random_state=42)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")
    plt.legend(["Class 0", "Class 1", "Class 2"])

Figure: Two-dimensional toy dataset containing three classes
Now, we train a LinearSVC classifier on this dataset:

In:
    linear_svm = LinearSVC().fit(X, y)
    print("Coefficient shape: ", linear_svm.coef_.shape)
    print("Intercept shape: ", linear_svm.intercept_.shape)

Out:
    Coefficient shape:  (3, 2)
    Intercept shape:  (3,)

We see that the shape of coef_ is (3, 2), meaning that each row of coef_ contains the coefficient vector for one of the three classes and each column holds the coefficient value for a specific feature (there are two in this dataset). The intercept_ is now a one-dimensional array, storing the intercepts for each class.

Let's visualize the lines given by the three binary classifiers:

In:
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    line = np.linspace(-15, 15)
    for coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,
                                      ['b', 'r', 'g']):
        plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)
    plt.ylim(-10, 15)
    plt.xlim(-10, 8)
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")
    plt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',
                'Line class 2'], loc=(1.01, 0.3))

You can see that all the points belonging to class 0 in the training data are above the line corresponding to class 0, which means they are on the "class 0" side of this binary classifier. The points in class 0 are above the line corresponding to class 2, which means they are classified as "rest" by the binary classifier for class 2. The points belonging to class 0 are to the left of the line corresponding to class 1, which means the binary classifier for class 1 also classifies them as "rest." Therefore, any point in this area will be classified as class 0 by the final classifier (the result of the classification confidence formula for classifier 0 is greater than zero, while it is smaller than zero for the other two classes).

But what about the triangle in the middle of the plot? All three binary classifiers classify points there as "rest." Which class would a point there be assigned to? The answer is the one with the highest value for the classification formula: the class of the closest line.
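The "highest value wins" rule is exactly what predict does internally for a one-vs.-rest classifier. Here is a small sketch of my own, reusing the linear_svm model and X from the cells above, that makes this explicit:

In:
    # one row of class scores (w . x + b) per sample, one column per class
    scores = linear_svm.decision_function(X[:5])
    print("Scores shape:", scores.shape)
    # picking the class with the highest score reproduces predict
    print("argmax:  {}".format(np.argmax(scores, axis=1)))
    print("predict: {}".format(linear_svm.predict(X[:5])))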
The following example shows the predictions for all regions of the 2D space:

In:
    mglearn.plots.plot_2d_classification(linear_svm, X, fill=True, alpha=.7)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    line = np.linspace(-15, 15)
    for coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,
                                      ['b', 'r', 'g']):
        plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)
    plt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',
                'Line class 2'], loc=(1.01, 0.3))
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")
Strengths, Weaknesses, and Parameters

The main parameter of linear models is the regularization parameter, called alpha in the regression models and C in LinearSVC and LogisticRegression. Large values for alpha or small values for C mean simple models. In particular for the regression models, tuning these parameters is quite important. Usually C and alpha are searched for on a logarithmic scale. The other decision you have to make is whether you want to use L1 regularization or L2 regularization. If you assume that only a few of your features are actually important, you should use L1. Otherwise, you should default to L2. L1 can also be useful if interpretability of the model is important. As L1 will use only a few features, it is easier to explain which features are important to the model, and what the effects of these features are.

Linear models are very fast to train, and also fast to predict. They scale to very large datasets and work well with sparse data. If your data consists of hundreds of thousands or millions of samples, you might want to investigate using the solver='sag' option in LogisticRegression and Ridge, which can be faster than the default on large datasets. Other options are the SGDClassifier class and the SGDRegressor class, which implement even more scalable versions of the linear models described here.

Another strength of linear models is that they make it relatively easy to understand how a prediction is made, using the formulas we saw earlier for regression and classification. Unfortunately, it is often not entirely clear why coefficients are the way they are. This is particularly true if your dataset has highly correlated features; in these cases, the coefficients might be hard to interpret.
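As a rough sketch of my own (not from the book) of what switching the solver looks like in practice; the synthetic dataset and its size are arbitrary choices, intended only to give the solver something reasonably large to work on:

In:
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # a larger synthetic classification problem
    X_big, y_big = make_classification(n_samples=50000, n_features=50,
                                       random_state=0)

    # 'sag' (stochastic average gradient) tends to pay off on large datasets;
    # max_iter is raised because sag may need more passes to converge
    logreg_sag = LogisticRegression(solver="sag", max_iter=1000)
    logreg_sag.fit(X_big, y_big)
    print("Training accuracy: {:.2f}".format(logreg_sag.score(X_big, y_big)))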
Linear models often perform well when the number of features is large compared to the number of samples. They are also often used on very large datasets, simply because it's not feasible to train other models. However, in lower-dimensional spaces, other models might yield better generalization performance. We will look at some examples in which linear models fail in the later section on kernelized support vector machines.

Method Chaining

The fit method of all scikit-learn models returns self. This allows you to write code like the following, which we've already used extensively in this chapter:

In:
    # instantiate model and fit it in one line
    logreg = LogisticRegression().fit(X_train, y_train)

Here, we used the return value of fit (which is self) to assign the trained model to the variable logreg. This concatenation of method calls (here __init__ and then fit) is known as method chaining. Another common application of method chaining in scikit-learn is to fit and predict in one line:

In:
    logreg = LogisticRegression()
    y_pred = logreg.fit(X_train, y_train).predict(X_test)

Finally, you can even do model instantiation, fitting, and predicting in one line:

In:
    y_pred = LogisticRegression().fit(X_train, y_train).predict(X_test)

This very short variant is not ideal, though. A lot is happening in a single line, which might make the code hard to read. Additionally, the fitted logistic regression model isn't stored in any variable, so we can't inspect it or use it to predict on any other data.

Naive Bayes Classifiers

Naive Bayes classifiers are a family of classifiers that are quite similar to the linear models discussed in the previous section. However, they tend to be even faster in training. The price paid for this efficiency is that naive Bayes models often provide generalization performance that is slightly worse than that of linear classifiers like LogisticRegression and LinearSVC.

The reason that naive Bayes models are so efficient is that they learn parameters by looking at each feature individually and collect simple per-class statistics from each feature. There are three kinds of naive Bayes classifiers implemented in scikit-learn: GaussianNB, BernoulliNB, and MultinomialNB.
GaussianNB can be applied to any continuous data, while BernoulliNB assumes binary data and MultinomialNB assumes count data (that is, that each feature represents an integer count of something, like how often a word appears in a sentence). BernoulliNB and MultinomialNB are mostly used in text data classification.

The BernoulliNB classifier counts how often every feature of each class is not zero. This is most easily understood with an example:

In:
    X = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 1],
                  [0, 0, 0, 1],
                  [1, 0, 1, 0]])
    y = np.array([0, 1, 0, 1])

Here, we have four data points, with four binary features each. There are two classes, 0 and 1. For class 0 (the first and third data points), the first feature is zero two times and nonzero zero times, the second feature is zero one time and nonzero one time, and so on. These same counts are then calculated for the data points in the second class. Counting the nonzero entries per class in essence looks like this:

In:
    counts = {}
    for label in np.unique(y):
        # iterate over each class
        # count (sum) entries of 1 per feature
        counts[label] = X[y == label].sum(axis=0)
    print("Feature counts:\n{}".format(counts))

Out:
    Feature counts:
    {0: array([0, 1, 0, 2]), 1: array([2, 0, 2, 1])}

The other two naive Bayes models, MultinomialNB and GaussianNB, are slightly different in the kinds of statistics they compute. MultinomialNB takes into account the average value of each feature for each class, while GaussianNB stores the average value as well as the standard deviation of each feature for each class.

To make a prediction, a data point is compared to the statistics for each of the classes, and the best matching class is predicted. Interestingly, for both MultinomialNB and BernoulliNB, this leads to a prediction formula that is of the same form as in the linear models (see "Linear Models for Classification" earlier). Unfortunately, coef_ for the naive Bayes models has a somewhat different meaning than in the linear models, in that coef_ is not the same as w.
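None of the three classifiers needs much configuration to get started. As a quick sketch of my own (not code from the book), GaussianNB can be applied to the Breast Cancer data from earlier in the chapter with no parameters at all:

In:
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    cancer = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        cancer.data, cancer.target, random_state=0)

    nb = GaussianNB().fit(X_train, y_train)
    print("Training set score: {:.3f}".format(nb.score(X_train, y_train)))
    print("Test set score: {:.3f}".format(nb.score(X_test, y_test)))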
Strengths, Weaknesses, and Parameters

MultinomialNB and BernoulliNB have a single parameter, alpha, which controls model complexity. The way alpha works is that the algorithm adds to the data alpha many virtual data points that have positive values for all the features. This results in a "smoothing" of the statistics. A large alpha means more smoothing, resulting in less complex models. The algorithm's performance is relatively robust to the setting of alpha, meaning that setting alpha is not critical for good performance. However, tuning it usually improves accuracy somewhat.

GaussianNB is mostly used on very high-dimensional data, while the other two variants of naive Bayes are widely used for sparse count data such as text. MultinomialNB usually performs better than BernoulliNB, particularly on datasets with a relatively large number of nonzero features (i.e., large documents).

The naive Bayes models share many of the strengths and weaknesses of the linear models. They are very fast to train and to predict, and the training procedure is easy to understand. The models work very well with high-dimensional sparse data and are relatively robust to the parameters. Naive Bayes models are great baseline models and are often used on very large datasets, where training even a linear model might take too long.

Decision Trees

Decision trees are widely used models for classification and regression tasks. Essentially, they learn a hierarchy of if/else questions, leading to a decision.

These questions are similar to the questions you might ask in a game of 20 Questions. Imagine you want to distinguish between the following four animals: bears, hawks, penguins, and dolphins. Your goal is to get to the right answer by asking as few if/else questions as possible. You might start off by asking whether the animal has feathers, a question that narrows down your possible animals to just two. If the answer is "yes," you can ask another question that could help you distinguish between hawks and penguins. For example, you could ask whether the animal can fly. If the animal doesn't have feathers, your possible animal choices are dolphins and bears, and you will need to ask a question to distinguish between these two animals, for example whether the animal has fins.

This series of questions can be expressed as a decision tree, as shown in the following figure:

In:
    mglearn.plots.plot_animal_tree()
In this illustration, each node in the tree either represents a question or a terminal node (also called a leaf) that contains the answer. The edges connect the answers to a question with the next question you would ask.

In machine learning parlance, we built a model to distinguish between four classes of animals (hawks, penguins, dolphins, and bears) using the three features "has feathers," "can fly," and "has fins." Instead of building these models by hand, we can learn them from data using supervised learning.

Building Decision Trees

Let's go through the process of building a decision tree for the 2D classification dataset shown in the figures that follow. The dataset consists of two half-moon shapes, one for each class. We will refer to this dataset as two_moons.

Learning a decision tree means learning the sequence of if/else questions that gets us to the true answer most quickly. In the machine learning setting, these questions are called tests (not to be confused with the test set, which is the data we use to test how generalizable our model is). Usually data does not come in the form of binary yes/no features as in the animal example, but is instead represented as continuous features such as in the 2D dataset shown in the figure. The tests that are used on continuous data are of the form "Is feature i larger than value a?"
To build a tree, the algorithm searches over all possible tests and finds the one that is most informative about the target variable. The first figure shows the first test that is picked: splitting the dataset vertically at a particular threshold on one of the features yields the most information, as it best separates the points of the two classes. The top node, also called the root, represents the whole dataset, containing all points of both classes. The split is done by testing whether the chosen feature is smaller than or equal to the threshold, indicated by a black line. If the test is true, a point is assigned to the left node; otherwise the point is assigned to the right node. These two nodes correspond to the top and bottom regions shown in the figure, and each of them contains mostly, but not exclusively, points of a single class.

Even though the first split did a good job of separating the two classes, the bottom region still contains points of the other class, and so does the top region. We can build a more accurate model by repeating the process of looking for the best test in both regions. The next figure shows the most informative next split for the left and the right region.
Figure: Decision boundary of a tree with depth 2 (left) and corresponding decision tree (right)

This recursive process yields a binary tree of decisions, with each node containing a test. Alternatively, you can think of each test as splitting the part of the data that is currently being considered along one axis. This yields a view of the algorithm as building a hierarchical partition. As each test concerns only a single feature, the regions in the resulting partition always have axis-parallel boundaries.

The recursive partitioning of the data is repeated until each region in the partition (each leaf in the decision tree) only contains a single target value (a single class or a single regression value). A leaf of the tree that contains data points that all share the same target value is called pure. The final partitioning for this dataset is shown in the next figure.
Figure: Decision boundary of the fully grown tree (left) and part of the corresponding tree (right); the full tree is quite large and hard to visualize

A prediction on a new data point is made by checking which region of the partition of the feature space the point lies in, and then predicting the majority target (or the single target in the case of pure leaves) in that region. The region can be found by traversing the tree from the root and going left or right, depending on whether the test is fulfilled or not.

It is also possible to use trees for regression tasks, using exactly the same technique. To make a prediction, we traverse the tree based on the tests in each node and find the leaf the new data point falls into. The output for this data point is the mean target of the training points in this leaf.

Controlling Complexity of Decision Trees

Typically, building a tree as described here and continuing until all leaves are pure leads to models that are very complex and highly overfit to the training data. The presence of pure leaves means that the tree is 100% accurate on the training set; each data point in the training set is in a leaf that has the correct majority class. The overfitting can be seen on the left of the previous figure: there are regions assigned to one class sitting in the middle of points belonging to the other class, and a small strip predicted as the opposite class around a single outlying point to the very right. This is not how one would imagine the decision boundary to look; the decision boundary focuses a lot on single outlier points that are far away from the other points in that class.

There are two common strategies to prevent overfitting: stopping the creation of the tree early (also called pre-pruning), or building the tree but then removing or collapsing nodes that contain little information (also called post-pruning or just pruning). Possible criteria for pre-pruning include limiting the maximum depth of the tree, limiting the maximum number of leaves, or requiring a minimum number of points in a node to keep splitting it; a short sketch of the corresponding scikit-learn parameters follows below.
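In scikit-learn these criteria map onto constructor parameters of the tree estimators. The following is a minimal sketch of my own (not from the book); the parameter values are arbitrary illustrations, not recommendations:

In:
    from sklearn.tree import DecisionTreeClassifier

    # limit the maximum depth of the tree
    shallow_tree = DecisionTreeClassifier(max_depth=4)
    # limit the maximum number of leaves
    small_tree = DecisionTreeClassifier(max_leaf_nodes=10)
    # require a minimum number of points in a node to keep splitting it
    sparse_tree = DecisionTreeClassifier(min_samples_split=20)
    # each of these would then be fit with .fit(X_train, y_train) as usual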
Decision trees in scikit-learn are implemented in the DecisionTreeRegressor and DecisionTreeClassifier classes. scikit-learn only implements pre-pruning, not post-pruning.

Let's look at the effect of pre-pruning in more detail on the Breast Cancer dataset. As always, we import the dataset and split it into a training and a test part. Then we build a model using the default setting of fully developing the tree (growing the tree until all leaves are pure). We fix the random_state in the tree, which is used for tie-breaking internally:

In:
    from sklearn.tree import DecisionTreeClassifier

    cancer = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        cancer.data, cancer.target, stratify=cancer.target, random_state=42)
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X_train, y_train)
    print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
    print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))

Out:
    Accuracy on training set: 1.000
    Accuracy on test set: 0.937

As expected, the accuracy on the training set is 100%; because the leaves are pure, the tree was grown deep enough that it could perfectly memorize all the labels on the training data. The test set accuracy is slightly worse than for the linear models we looked at previously, which had around 95% accuracy.

If we don't restrict the depth of a decision tree, the tree can become arbitrarily deep and complex. Unpruned trees are therefore prone to overfitting and not generalizing well to new data. Now let's apply pre-pruning to the tree, which will stop developing the tree before we perfectly fit to the training data. One option is to stop building the tree after a certain depth has been reached. Here we set max_depth=4, meaning only four consecutive questions can be asked. Limiting the depth of the tree decreases overfitting. This leads to a lower accuracy on the training set, but an improvement on the test set:

In:
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X_train, y_train)

    print("Accuracy on training set: {:.3f}".format(tree.score(X_train, y_train)))
    print("Accuracy on test set: {:.3f}".format(tree.score(X_test, y_test)))
Out:
    Accuracy on training set: 0.988
    Accuracy on test set: 0.951

Analyzing Decision Trees

We can visualize the tree using the export_graphviz function from the tree module. This writes a file in the .dot file format, which is a text file format for storing graphs. We set an option to color the nodes to reflect the majority class in each node, and pass the class and feature names so the tree can be properly labeled:

In:
    from sklearn.tree import export_graphviz
    export_graphviz(tree, out_file="tree.dot", class_names=["malignant", "benign"],
                    feature_names=cancer.feature_names, impurity=False, filled=True)

We can read this file and visualize it, as seen in the following figure, using the graphviz module (or you can use any program that can read .dot files):

In:
    import graphviz

    with open("tree.dot") as f:
        dot_graph = f.read()
    graphviz.Source(dot_graph)

Figure: Visualization of the decision tree built on the Breast Cancer dataset
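If graphviz is not installed, recent versions of scikit-learn can draw the same tree directly with matplotlib via sklearn.tree.plot_tree. This is a small sketch of my own, not the book's code, and it assumes the tree and cancer objects from the cells above:

In:
    from sklearn.tree import plot_tree

    plt.figure(figsize=(20, 10))
    plot_tree(tree, class_names=["malignant", "benign"],
              feature_names=cancer.feature_names, impurity=False, filled=True)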
The visualization of the tree provides a great in-depth view of how the algorithm makes predictions, and it is a good example of a machine learning algorithm that is easily explained to nonexperts. However, even with a tree of depth four, as seen here, the tree can become a bit overwhelming. Deeper trees (a depth of 10 is not uncommon) are even harder to grasp. One method of inspecting the tree that may be helpful is to find out which path most of the data actually takes. The samples count shown in each node gives the number of samples in that node, while value provides the number of samples per class. Following the branches to the right, we see that the test on "worst radius" creates a node that contains only a handful of benign samples among many malignant ones. The rest of this side of the tree then uses some finer distinctions to split off these remaining benign samples. Of the samples that went to the right in the initial split, nearly all of them end up in the leaf to the very right.

Taking a left at the root, for small values of "worst radius", we end up with a node containing mostly benign samples and comparatively few malignant ones. Nearly all of the benign samples end up in the second leaf from the right, with most of the other leaves containing very few samples.

Feature Importance in Trees

Instead of looking at the whole tree, which can be taxing, there are some useful properties that we can derive to summarize the workings of the tree. The most commonly used summary is feature importance, which rates how important each feature is for the decision a tree makes. It is a number between 0 and 1 for each feature, where 0 means "not used at all" and 1 means "perfectly predicts the target." The feature importances always sum to 1:

In:
    print("Feature importances:\n{}".format(tree.feature_importances_))

The output is an array with one entry per feature; for this tree, most of the entries are exactly zero, because only a handful of features are used in the splits.

We can visualize the feature importances in a way that is similar to the way we visualize the coefficients in the linear model:

In:
    def plot_feature_importances_cancer(model):
        n_features = cancer.data.shape[1]
        plt.barh(range(n_features), model.feature_importances_, align='center')
        plt.yticks(np.arange(n_features), cancer.feature_names)
        plt.xlabel("Feature importance")
        plt.ylabel("Feature")

    plot_feature_importances_cancer(tree)
Figure: Feature importances computed from a decision tree learned on the Breast Cancer dataset

Here we see that the feature used in the top split ("worst radius") is by far the most important feature. This confirms our observation in analyzing the tree that the first level already separates the two classes fairly well.

However, if a feature has a low value in feature_importances_, it doesn't mean that this feature is uninformative. It only means that the feature was not picked by the tree, likely because another feature encodes the same information.

In contrast to the coefficients in linear models, feature importances are always positive, and don't encode which class a feature is indicative of. The feature importances tell us that "worst radius" is important, but not whether a high radius is indicative of a sample being benign or malignant. In fact, there might not be such a simple relationship between features and class, as you can see in the following example:

In:
    tree = mglearn.plots.plot_tree_not_monotone()
    display(tree)

Out:
    Feature importances: [ 0.  1.]
Figure: A two-dimensional dataset in which the feature used for splitting has a nonmonotonous relationship with the class label, and the decision boundaries found by a decision tree

Figure: Decision tree learned on the data shown in the preceding figure

The plot shows a dataset with two features and two classes. Here, all the information is contained in X[1], and X[0] is not used at all. But the relation between X[1] and the output class is not monotonous, meaning we cannot say "a high value of X[1] means one class, and a low value means the other" (or vice versa).
While we focused our discussion here on decision trees for classification, all that was said is similarly true for decision trees for regression, as implemented in DecisionTreeRegressor. The usage and analysis of regression trees is very similar to that of classification trees. There is one particular property of using tree-based models for regression that we want to point out, though. The DecisionTreeRegressor (and all other tree-based regression models) is not able to extrapolate, or make predictions outside of the range of the training data.

Let's look into this in more detail, using a dataset of historical computer memory (RAM) prices. The following figure shows the dataset, with the date on the x-axis and the price of one megabyte of RAM in that year on the y-axis:

In:
    import pandas as pd
    ram_prices = pd.read_csv("data/ram_price.csv")

    plt.semilogy(ram_prices.date, ram_prices.price)
    plt.xlabel("Year")
    plt.ylabel("Price in $/Mbyte")

Figure: Historical development of the price of RAM, plotted on a log scale
On a logarithmic scale the relationship seems to be quite linear, and so should be relatively easy to predict, apart from some bumps.

We will make a forecast for the years after 2000 using the historical data up to that point, with the date as our only feature. We will compare two simple models: a DecisionTreeRegressor and LinearRegression. We rescale the prices using a logarithm, so that the relationship is relatively linear. This doesn't make a difference for the DecisionTreeRegressor, but it makes a big difference for LinearRegression (we will discuss this in more depth in a later chapter). After training the models and making predictions, we apply the exponential map to undo the logarithm transform. We make predictions on the whole dataset for visualization purposes here, but for a quantitative evaluation we would only consider the test dataset:

In:
    from sklearn.tree import DecisionTreeRegressor
    # use historical data to forecast prices after the year 2000
    data_train = ram_prices[ram_prices.date < 2000]
    data_test = ram_prices[ram_prices.date >= 2000]

    # predict prices based on date
    X_train = data_train.date.to_numpy()[:, np.newaxis]
    # we use a log-transform to get a simpler relationship of data to target
    y_train = np.log(data_train.price)

    tree = DecisionTreeRegressor().fit(X_train, y_train)
    linear_reg = LinearRegression().fit(X_train, y_train)

    # predict on all data
    X_all = ram_prices.date.to_numpy()[:, np.newaxis]

    pred_tree = tree.predict(X_all)
    pred_lr = linear_reg.predict(X_all)

    # undo log-transform
    price_tree = np.exp(pred_tree)
    price_lr = np.exp(pred_lr)

The following plot compares the predictions of the decision tree and the linear regression model with the ground truth:

In:
    plt.semilogy(data_train.date, data_train.price, label="Training data")
    plt.semilogy(data_test.date, data_test.price, label="Test data")
    plt.semilogy(ram_prices.date, price_tree, label="Tree prediction")
    plt.semilogy(ram_prices.date, price_lr, label="Linear prediction")
    plt.legend()
Figure: Comparison of predictions made by a linear model and predictions made by a regression tree on the RAM price data

The difference between the models is quite striking. The linear model approximates the data with a line, as we knew it would. This line provides quite a good forecast for the test data (the years after 2000), while glossing over some of the finer variations in both the training and the test data. The tree model, on the other hand, makes perfect predictions on the training data; we did not restrict the complexity of the tree, so it learned the whole dataset by heart. However, once we leave the data range for which the model has data, the model simply keeps predicting the last known point. The tree has no ability to generate "new" responses, outside of what was seen in the training data. This shortcoming applies to all models based on trees. [Footnote: It is actually possible to make very good forecasts with tree-based models (for example, when trying to predict whether a price will go up or down). The point of this example was not to show that trees are a bad model for time series, but to illustrate a particular property of how trees make predictions.]

Strengths, Weaknesses, and Parameters

As discussed earlier, the parameters that control model complexity in decision trees are the pre-pruning parameters that stop the building of the tree before it is fully developed. Usually, picking one of the pre-pruning strategies (setting either max_depth, max_leaf_nodes, or min_samples_leaf) is sufficient to prevent overfitting.
Decision trees have two advantages over many of the algorithms we've discussed so far: the resulting model can easily be visualized and understood by nonexperts (at least for smaller trees), and the algorithms are completely invariant to scaling of the data. As each feature is processed separately, and the possible splits of the data don't depend on scaling, no preprocessing like normalization or standardization of features is needed for decision tree algorithms. In particular, decision trees work well when you have features that are on completely different scales, or a mix of binary and continuous features.

The main downside of decision trees is that even with the use of pre-pruning, they tend to overfit and provide poor generalization performance. Therefore, in most applications, the ensemble methods we discuss next are usually used in place of a single decision tree.

Ensembles of Decision Trees

Ensembles are methods that combine multiple machine learning models to create more powerful models. There are many models in the machine learning literature that belong to this category, but there are two ensemble models that have proven to be effective on a wide range of datasets for classification and regression, both of which use decision trees as their building blocks: random forests and gradient boosted decision trees.

Random Forests

As we just observed, a main drawback of decision trees is that they tend to overfit the training data. Random forests are one way to address this problem. A random forest is essentially a collection of decision trees, where each tree is slightly different from the others. The idea behind random forests is that each tree might do a relatively good job of predicting, but will likely overfit on part of the data. If we build many trees, all of which work well and overfit in different ways, we can reduce the amount of overfitting by averaging their results. This reduction in overfitting, while retaining the predictive power of the trees, can be shown using rigorous mathematics.

To implement this strategy, we need to build many decision trees. Each tree should do an acceptable job of predicting the target, and should also be different from the other trees. Random forests get their name from injecting randomness into the tree building to ensure each tree is different. There are two ways in which the trees in a random forest are randomized: by selecting the data points used to build a tree, and by selecting the features in each split test. Let's go into this process in more detail.

Building Random Forests
number of trees to build (the n_estimators parameter of randomforestregressor or randomforestclassifierlet' say we want to build trees these trees will be built completely independently from each otherand the algorithm will make different random choices for each tree to make sure the trees are distinct to build treewe first take what is called bootstrap sample of our data that isfrom our n_samples data pointswe repeatedly draw an example randomly with replacement (meaning the same sample can be picked multiple times)n_samples times this will create dataset that is as big as the original datasetbut some data points will be missing from it (approximately one third)and some will be repeated to illustratelet' say we want to create bootstrap sample of the list [' '' '' '' ' possible bootstrap sample would be [' '' '' '' 'another possible sample would be [' '' '' '' 'nexta decision tree is built based on this newly created dataset howeverthe algorithm we described for the decision tree is slightly modified instead of looking for the best test for each nodein each node the algorithm randomly selects subset of the featuresand it looks for the best possible test involving one of these features the number of features that are selected is controlled by the max_features parameter this selection of subset of features is repeated separately in each nodeso that each node in tree can make decision using different subset of the features the bootstrap sampling leads to each decision tree in the random forest being built on slightly different dataset because of the selection of features in each nodeeach split in each tree operates on different subset of features togetherthese two mechanisms ensure that all the trees in the random forest are different critical parameter in this process is max_features if we set max_features to n_fea turesthat means that each split can look at all features in the datasetand no randomness will be injected in the feature selection (the randomness due to the bootstrapping remainsthoughif we set max_features to that means that the splits have no choice at all on which feature to testand can only search over different thresholds for the feature that was selected randomly thereforea high max_fea tures means that the trees in the random forest will be quite similarand they will be able to fit the data easilyusing the most distinctive features low max_features means that the trees in the random forest will be quite differentand that each tree might need to be very deep in order to fit the data well to make prediction using the random forestthe algorithm first makes prediction for every tree in the forest for regressionwe can average these results to get our final prediction for classificationa "soft votingstrategy is used this means each algorithm makes "softpredictionproviding probability for each possible output supervised learning
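The bootstrap sampling illustrated above with ['a', 'b', 'c', 'd'] is easy to reproduce with numpy. This is a minimal sketch, not code from the book, and the seed is arbitrary:

    import numpy as np

    rng = np.random.RandomState(0)  # arbitrary seed, for reproducibility only
    # draw 4 items from the list with replacement, i.e. one bootstrap sample
    print(rng.choice(['a', 'b', 'c', 'd'], size=4, replace=True))

Running it repeatedly produces different samples, some with repeated entries and some with entries missing, exactly as described above.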
Analyzing random forests

Let's apply a random forest consisting of five trees to the two_moons dataset we studied earlier:

In[ ]:
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import make_moons

    X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

    forest = RandomForestClassifier(n_estimators=5, random_state=2)
    forest.fit(X_train, y_train)

The trees that are built as part of the random forest are stored in the estimators_ attribute. Let's visualize the decision boundaries learned by each tree, together with their aggregate prediction as made by the forest:

In[ ]:
    fig, axes = plt.subplots(2, 3, figsize=(20, 10))
    for i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)):
        ax.set_title("Tree {}".format(i))
        mglearn.plots.plot_tree_partition(X_train, y_train, tree, ax=ax)

    mglearn.plots.plot_2d_separator(forest, X_train, fill=True, ax=axes[-1, -1], alpha=.4)
    axes[-1, -1].set_title("Random Forest")
    mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)

You can clearly see that the decision boundaries learned by the five trees are quite different. Each of them makes some mistakes, as some of the training points that are plotted here were not actually included in the training sets of the trees, due to the bootstrap sampling.

The random forest overfits less than any of the trees individually, and provides a much more intuitive decision boundary. In any real application, we would use many more trees (often hundreds or thousands), leading to even smoother boundaries.
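To make the procedure described above more concrete, here is a from-scratch sketch that mimics what a random forest does: bootstrap sampling, trees restricted to a random subset of features at each split, and soft voting over the trees. It is only an illustration under simplifying assumptions (for example, it assumes every class appears in each bootstrap sample), it is not how scikit-learn implements RandomForestClassifier, and the helper name is made up:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def handmade_forest_predict(X_train, y_train, X_test, n_trees=5, random_state=0):
        rng = np.random.RandomState(random_state)
        probas = []
        for _ in range(n_trees):
            # bootstrap sample: n_samples draws with replacement
            idx = rng.randint(0, X_train.shape[0], X_train.shape[0])
            # max_features limits the features considered at each split
            tree = DecisionTreeClassifier(max_features='sqrt',
                                          random_state=rng.randint(2 ** 31 - 1))
            tree.fit(X_train[idx], y_train[idx])
            probas.append(tree.predict_proba(X_test))
        # soft voting: average the probabilities, then pick the most probable class
        # (returns class indices; here the classes are 0 and 1)
        return np.mean(probas, axis=0).argmax(axis=1)

On the two_moons split above, handmade_forest_predict(X_train, y_train, X_test) can be compared against the five-tree forest, although RandomForestClassifier adds many refinements and optimizations on top of this basic idea.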
Figure: decision boundaries found by five randomized decision trees and the decision boundary obtained by averaging their predicted probabilities.

As another example, let's apply a random forest consisting of 100 trees to the Breast Cancer dataset:

In[ ]:
    X_train, X_test, y_train, y_test = train_test_split(
        cancer.data, cancer.target, random_state=0)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    print("Accuracy on training set: {:.3f}".format(forest.score(X_train, y_train)))
    print("Accuracy on test set: {:.3f}".format(forest.score(X_test, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

The random forest gives us an accuracy that is better than the linear models or a single decision tree, without tuning any parameters. We could adjust the max_features setting, or apply pre-pruning as we did for the single decision tree. However, often the default parameters of the random forest already work quite well.

Similarly to the decision tree, the random forest provides feature importances, which are computed by aggregating the feature importances over the trees in the forest. Typically, the feature importances provided by the random forest are more reliable than the ones provided by a single tree. Take a look at the feature importances plotted below.
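The plot_feature_importances_cancer helper used below was defined earlier in the book and is not shown in this excerpt. A minimal version might look like the following sketch; it assumes the cancer dataset object (from load_breast_cancer) loaded earlier:

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_feature_importances_cancer(model):
        # one horizontal bar per feature, labeled with the dataset's feature names
        n_features = cancer.data.shape[1]
        plt.barh(np.arange(n_features), model.feature_importances_, align='center')
        plt.yticks(np.arange(n_features), cancer.feature_names)
        plt.xlabel("Feature importance")
        plt.ylabel("Feature")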
In[ ]:
    plot_feature_importances_cancer(forest)

Figure: feature importances computed from a random forest that was fit to the Breast Cancer dataset.

As you can see, the random forest gives nonzero importance to many more features than the single tree. Similarly to the single decision tree, the random forest also gives a lot of importance to the "worst radius" feature, but it actually chooses "worst perimeter" to be the most informative feature overall. The randomness in building the random forest forces the algorithm to consider many possible explanations, the result being that the random forest captures a much broader picture of the data than a single tree.

Strengths, weaknesses, and parameters

Random forests for regression and classification are currently among the most widely used machine learning methods. They are very powerful, often work well without heavy tuning of the parameters, and don't require scaling of the data.

Essentially, random forests share all of the benefits of decision trees, while making up for some of their deficiencies. One reason to still use decision trees is if you need a compact representation of the decision-making process. It is basically impossible to interpret tens or hundreds of trees in detail, and trees in random forests tend to be deeper than decision trees (because of the use of feature subsets). Therefore, if you need to summarize the prediction making in a visual way to nonexperts, a single decision tree might be a better choice. While building random forests on large datasets might be somewhat time consuming, it can be parallelized across multiple CPU cores.
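As a small illustration of that parallelism (the n_jobs parameter itself is explained in the next paragraph), a possible sketch:

    from sklearn.ensemble import RandomForestClassifier

    # n_jobs=-1 asks scikit-learn to grow the trees on all available CPU cores
    forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    forest.fit(X_train, y_train)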
If you are using multiple cores (as nearly all modern computers do), you can use the n_jobs parameter to adjust the number of cores to use. Using more CPU cores will result in linear speed-ups (using two cores, the training of the random forest will be twice as fast), but specifying n_jobs larger than the number of cores will not help. You can set n_jobs=-1 to use all the cores in your computer.

You should keep in mind that random forests, by their nature, are random, and setting different random states (or not setting the random_state at all) can drastically change the model that is built. The more trees there are in the forest, the more robust it will be against the choice of random state. If you want to have reproducible results, it is important to fix the random_state.

Random forests don't tend to perform well on very high dimensional, sparse data, such as text data. For this kind of data, linear models might be more appropriate. Random forests usually work well even on very large datasets, and training can easily be parallelized over many CPU cores within a powerful computer. However, random forests require more memory and are slower to train and to predict than linear models. If time and memory are important in an application, it might make sense to use a linear model instead.

The important parameters to adjust are n_estimators, max_features, and possibly pre-pruning options like max_depth. For n_estimators, larger is always better. Averaging more trees will yield a more robust ensemble by reducing overfitting. However, there are diminishing returns, and more trees need more memory and more time to train. A common rule of thumb is to build "as many as you have time/memory for."

As described earlier, max_features determines how random each tree is, and a smaller max_features reduces overfitting. In general, it's a good rule of thumb to use the default values: max_features=sqrt(n_features) for classification and max_features=log2(n_features) for regression. Adding max_features or max_leaf_nodes might sometimes improve performance. It can also drastically reduce space and time requirements for training and prediction.

Gradient Boosted Regression Trees (Gradient Boosting Machines)

The gradient boosted regression tree is another ensemble method that combines multiple decision trees to create a more powerful model. Despite the "regression" in the name, these models can be used for regression and classification. In contrast to the random forest approach, gradient boosting works by building trees in a serial manner, where each tree tries to correct the mistakes of the previous one. By default, there is no randomization in gradient boosted regression trees; instead, strong pre-pruning is used. Gradient boosted trees often use very shallow trees, of depth one to five, which makes the model smaller in terms of memory and makes predictions faster.
The main idea behind gradient boosting is to combine many simple models (in this context known as weak learners), like shallow trees. Each tree can only provide good predictions on part of the data, and so more and more trees are added to iteratively improve performance.

Gradient boosted trees are frequently the winning entries in machine learning competitions, and are widely used in industry. They are generally a bit more sensitive to parameter settings than random forests, but can provide better accuracy if the parameters are set correctly.

Apart from the pre-pruning and the number of trees in the ensemble, another important parameter of gradient boosting is the learning_rate, which controls how strongly each tree tries to correct the mistakes of the previous trees. A higher learning rate means each tree can make stronger corrections, allowing for more complex models. Adding more trees to the ensemble, which can be accomplished by increasing n_estimators, also increases the model complexity, as the model has more chances to correct mistakes on the training set.

Here is an example of using GradientBoostingClassifier on the Breast Cancer dataset. By default, 100 trees of maximum depth 3 and a learning rate of 0.1 are used:

In[ ]:
    from sklearn.ensemble import GradientBoostingClassifier

    X_train, X_test, y_train, y_test = train_test_split(
        cancer.data, cancer.target, random_state=0)

    gbrt = GradientBoostingClassifier(random_state=0)
    gbrt.fit(X_train, y_train)

    print("Accuracy on training set: {:.3f}".format(gbrt.score(X_train, y_train)))
    print("Accuracy on test set: {:.3f}".format(gbrt.score(X_test, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

As the training set accuracy is 100%, we are likely to be overfitting. To reduce overfitting, we could either apply stronger pre-pruning by limiting the maximum depth or lower the learning rate:
In[ ]:
    gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
    gbrt.fit(X_train, y_train)

    print("Accuracy on training set: {:.3f}".format(gbrt.score(X_train, y_train)))
    print("Accuracy on test set: {:.3f}".format(gbrt.score(X_test, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

In[ ]:
    gbrt = GradientBoostingClassifier(random_state=0, learning_rate=0.01)
    gbrt.fit(X_train, y_train)

    print("Accuracy on training set: {:.3f}".format(gbrt.score(X_train, y_train)))
    print("Accuracy on test set: {:.3f}".format(gbrt.score(X_test, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

Both methods of decreasing the model complexity reduced the training set accuracy, as expected. In this case, lowering the maximum depth of the trees provided a significant improvement of the model, while lowering the learning rate only increased the generalization performance slightly.

As for the other decision tree-based models, we can again visualize the feature importances to get more insight into our model. As we used 100 trees, it is impractical to inspect them all, even if they are all of depth 1:

In[ ]:
    gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
    gbrt.fit(X_train, y_train)

    plot_feature_importances_cancer(gbrt)
Figure: feature importances computed from a gradient boosting classifier that was fit to the Breast Cancer dataset.

We can see that the feature importances of the gradient boosted trees are somewhat similar to the feature importances of the random forests, though the gradient boosting completely ignored some of the features.

As both gradient boosting and random forests perform well on similar kinds of data, a common approach is to first try random forests, which work quite robustly. If random forests work well but prediction time is at a premium, or it is important to squeeze out the last percentage of accuracy from the machine learning model, moving to gradient boosting often helps.

If you want to apply gradient boosting to a large-scale problem, it might be worth looking into the xgboost package and its Python interface, which at the time of writing is faster (and sometimes easier to tune) than the scikit-learn implementation of gradient boosting on many datasets.

Strengths, weaknesses, and parameters

Gradient boosted decision trees are among the most powerful and widely used models for supervised learning. Their main drawback is that they require careful tuning of the parameters and may take a long time to train. Similarly to other tree-based models, the algorithm works well without scaling and on a mixture of binary and continuous features. As with other tree-based models, it also often does not work well on high-dimensional sparse data.

The main parameters of gradient boosted tree models are the number of trees, n_estimators, and the learning_rate, which controls the degree to which each tree is allowed to correct the mistakes of the previous trees. These two parameters are highly interconnected, as a lower learning_rate means that more trees are needed to build a model of similar complexity.
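In practice these parameters are usually tuned together with the tree depth. The following is a minimal sketch of such a search, not code from the book; GridSearchCV is covered later, and the grid values are only illustrative:

    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    # fix n_estimators to what the time/memory budget allows,
    # then search over the learning rate and the tree depth
    param_grid = {'learning_rate': [0.01, 0.1, 1], 'max_depth': [1, 3, 5]}
    grid = GridSearchCV(GradientBoostingClassifier(n_estimators=100, random_state=0),
                        param_grid, cv=5)
    grid.fit(X_train, y_train)
    print(grid.best_params_)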
In contrast to random forests, where a higher n_estimators value is always better, increasing n_estimators in gradient boosting leads to a more complex model, which may lead to overfitting. A common practice is to fit n_estimators depending on the time and memory budget, and then search over different learning_rates.

Another important parameter is max_depth (or alternatively max_leaf_nodes), to reduce the complexity of each tree. Usually max_depth is set very low for gradient boosted models, often not deeper than five splits.

Kernelized Support Vector Machines

The next type of supervised model we will discuss is kernelized support vector machines. We explored the use of linear support vector machines for classification in "Linear Models for Classification". Kernelized support vector machines (often just referred to as SVMs) are an extension that allows for more complex models that are not defined simply by hyperplanes in the input space. While there are support vector machines for classification and regression, we will restrict ourselves to the classification case, as implemented in SVC. Similar concepts apply to support vector regression, as implemented in SVR.

The math behind kernelized support vector machines is a bit involved, and is beyond the scope of this book. You can find the details in Hastie, Tibshirani, and Friedman's The Elements of Statistical Learning. However, we will try to give you some sense of the idea behind the method.

Linear models and nonlinear features

As you saw earlier, linear models can be quite limiting in low-dimensional spaces, as lines and hyperplanes have limited flexibility. One way to make a linear model more flexible is by adding more features, for example by adding interactions or polynomials of the input features.

Let's look at the synthetic dataset we used in "Feature Importance in Trees":

In[ ]:
    X, y = make_blobs(centers=4, random_state=8)
    y = y % 2

    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")
A linear model for classification can only separate points using a line, and will not be able to do a very good job on this dataset:

In[ ]:
    from sklearn.svm import LinearSVC
    linear_svm = LinearSVC().fit(X, y)

    mglearn.plots.plot_2d_separator(linear_svm, X)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")

Now let's expand the set of input features, say by also adding feature1 ** 2, the square of the second feature, as a new feature. Instead of representing each data point as a two-dimensional point, (feature0, feature1), we now represent it as a three-dimensional point, (feature0, feature1, feature1 ** 2). This new representation is illustrated in a three-dimensional scatter plot. We picked this particular feature to add for illustration purposes; the choice is not particularly important.
In[ ]:
    # add the squared second feature
    X_new = np.hstack([X, X[:, 1:] ** 2])

    from mpl_toolkits.mplot3d import Axes3D, axes3d
    figure = plt.figure()
    # visualize in 3D
    ax = Axes3D(figure, elev=-152, azim=-26)
    # plot first all the points with y == 0, then all with y == 1
    mask = y == 0
    ax.scatter(X_new[mask, 0], X_new[mask, 1], X_new[mask, 2], c='b',
               cmap=mglearn.cm2, s=60)
    ax.scatter(X_new[~mask, 0], X_new[~mask, 1], X_new[~mask, 2], c='r', marker='^',
               cmap=mglearn.cm2, s=60)
    ax.set_xlabel("feature0")
    ax.set_ylabel("feature1")
    ax.set_zlabel("feature1 ** 2")
Figure: expansion of the dataset, created by adding a third feature derived from feature1.

In the new representation of the data, it is now indeed possible to separate the two classes using a linear model, a plane in three dimensions. We can confirm this by fitting a linear model to the augmented data:

In[ ]:
    linear_svm_3d = LinearSVC().fit(X_new, y)
    coef, intercept = linear_svm_3d.coef_.ravel(), linear_svm_3d.intercept_

    # show linear decision boundary
    figure = plt.figure()
    ax = Axes3D(figure, elev=-152, azim=-26)
    xx = np.linspace(X_new[:, 0].min() - 2, X_new[:, 0].max() + 2, 50)
    yy = np.linspace(X_new[:, 1].min() - 2, X_new[:, 1].max() + 2, 50)

    XX, YY = np.meshgrid(xx, yy)
    ZZ = (coef[0] * XX + coef[1] * YY + intercept) / -coef[2]
    ax.plot_surface(XX, YY, ZZ, rstride=8, cstride=8, alpha=0.3)
    ax.scatter(X_new[mask, 0], X_new[mask, 1], X_new[mask, 2], c='b',
               cmap=mglearn.cm2, s=60)
    ax.scatter(X_new[~mask, 0], X_new[~mask, 1], X_new[~mask, 2], c='r', marker='^',
               cmap=mglearn.cm2, s=60)

    ax.set_xlabel("feature0")
    ax.set_ylabel("feature1")
    ax.set_zlabel("feature1 ** 2")
As a function of the original features, the linear SVM model is not actually linear anymore. It is not a line, but more of an ellipse, as you can see from the plot created here:

In[ ]:
    ZZ = YY ** 2
    dec = linear_svm_3d.decision_function(np.c_[XX.ravel(), YY.ravel(), ZZ.ravel()])
    plt.contourf(XX, YY, dec.reshape(XX.shape), levels=[dec.min(), 0, dec.max()],
                 cmap=mglearn.cm2, alpha=0.5)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")
Figure: the decision boundary of the model above, shown as a function of the original two features.

The kernel trick

The lesson here is that adding nonlinear features to the representation of our data can make linear models much more powerful. However, often we don't know which features to add, and adding many features (like all possible interactions in a high-dimensional feature space) might make computation very expensive. Luckily, there is a clever mathematical trick that allows us to learn a classifier in a higher-dimensional space without actually computing the new, possibly very large representation. This is known as the kernel trick, and it works by directly computing the distance (more precisely, the scalar products) of the data points for the expanded feature representation, without ever actually computing the expansion.

There are two ways to map your data into a higher-dimensional space that are commonly used with support vector machines: the polynomial kernel, which computes all possible polynomials up to a certain degree of the original features (like feature1 ** 2 * feature2 ** 5), and the radial basis function (RBF) kernel, also known as the Gaussian kernel. The Gaussian kernel is a bit harder to explain, as it corresponds to an infinite-dimensional feature space. One way to explain the Gaussian kernel is that it considers all possible polynomials of all degrees, but the importance of the features decreases for higher degrees. (This follows from the Taylor expansion of the exponential map.)
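To see the kernel trick at work, the sketch below (not from the book) fits the same kind of model twice on a small dataset: once on an explicitly computed degree-2 polynomial expansion, and once with a degree-2 polynomial kernel, which works in a comparable feature space without ever materializing it. The exact scores will differ, since the kernel corresponds to a particular scaling of the expanded features:

    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.svm import LinearSVC, SVC

    X2, y2 = make_moons(noise=0.25, random_state=0)
    X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, random_state=0)

    # explicit expansion: compute all polynomial features up to degree 2,
    # then fit a linear SVM in that bigger space
    poly = PolynomialFeatures(degree=2, include_bias=False)
    svm_explicit = LinearSVC(max_iter=10000).fit(poly.fit_transform(X2_train), y2_train)

    # kernel trick: coef0=1 makes the polynomial kernel include the
    # lower-degree terms as well; no expansion is ever computed
    svm_kernel = SVC(kernel='poly', degree=2, coef0=1).fit(X2_train, y2_train)

    print(svm_explicit.score(poly.transform(X2_test), y2_test))
    print(svm_kernel.score(X2_test, y2_test))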
In practice, the mathematical details behind the kernel SVM are not that important, though, and how an SVM with an RBF kernel makes a decision can be summarized quite easily; we'll do so in the next section.

Understanding SVMs

During training, the SVM learns how important each of the training data points is to represent the decision boundary between the two classes. Typically only a subset of the training points matter for defining the decision boundary: the ones that lie on the border between the classes. These are called support vectors and give the support vector machine its name.

To make a prediction for a new point, the distance to each of the support vectors is measured. A classification decision is made based on the distances to the support vectors, and the importance of the support vectors that was learned during training (stored in the dual_coef_ attribute of SVC). The distance between data points is measured by the Gaussian kernel:

    k_rbf(x1, x2) = exp(-gamma * ||x1 - x2||^2)

Here, x1 and x2 are data points, ||x1 - x2|| denotes Euclidean distance, and gamma is a parameter that controls the width of the Gaussian kernel.

The figure produced below shows the result of training a support vector machine on a two-dimensional two-class dataset. The decision boundary is shown in black, and the support vectors are larger points with a wide outline. The following code creates this plot by training an SVM on the forge dataset:

In[ ]:
    from sklearn.svm import SVC
    X, y = mglearn.tools.make_handcrafted_dataset()
    svm = SVC(kernel='rbf', C=10, gamma=0.1).fit(X, y)
    mglearn.plots.plot_2d_separator(svm, X, eps=.5)
    mglearn.discrete_scatter(X[:, 0], X[:, 1], y)
    # plot support vectors
    sv = svm.support_vectors_
    # class labels of support vectors are given by the sign of the dual coefficients
    sv_labels = svm.dual_coef_.ravel() > 0
    mglearn.discrete_scatter(sv[:, 0], sv[:, 1], sv_labels, s=15, markeredgewidth=3)
    plt.xlabel("Feature 0")
    plt.ylabel("Feature 1")
In this case, the SVM yields a very smooth and nonlinear (not a straight line) boundary. We adjusted two parameters here: the C parameter and the gamma parameter, which we will now discuss in detail.

Tuning SVM parameters

The gamma parameter is the one shown in the formula given in the previous section, which controls the width of the Gaussian kernel. It determines the scale of what it means for points to be close together. The C parameter is a regularization parameter, similar to that used in the linear models. It limits the importance of each point (or more precisely, their dual_coef_).

Let's have a look at what happens when we vary these parameters:

In[ ]:
    fig, axes = plt.subplots(3, 3, figsize=(15, 10))

    for ax, C in zip(axes, [-1, 0, 3]):
        for a, gamma in zip(ax, range(-1, 2)):
            mglearn.plots.plot_svm(log_C=C, log_gamma=gamma, ax=a)

    axes[0, 0].legend(["class 0", "class 1", "sv class 0", "sv class 1"],
                      ncol=4, loc=(.9, 1.2))
Figure: decision boundaries and support vectors for different settings of the parameters C and gamma.

Going from left to right, we increase the value of the parameter gamma from 0.1 to 10. A small gamma means a large radius for the Gaussian kernel, which means that many points are considered close by. This is reflected in very smooth decision boundaries on the left, and boundaries that focus more on single points further to the right. A low value of gamma means that the decision boundary will vary slowly, which yields a model of low complexity, while a high value of gamma yields a more complex model.

Going from top to bottom, we increase the C parameter from 0.1 to 1000. As with the linear models, a small C means a very restricted model, where each data point can only have very limited influence. You can see that at the top left the decision boundary looks nearly linear, with the misclassified points barely having any influence on the line. Increasing C, as shown on the bottom right, allows these points to have a stronger influence on the model and makes the decision boundary bend to correctly classify them.
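To connect this back to the kernel formula given above, the following quick check (not from the book) computes k_rbf for one pair of points at several gamma values, by hand and with scikit-learn's rbf_kernel helper. As gamma grows, the kernel value shrinks, meaning the two points stop counting as close:

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    x1 = np.array([[0.0, 0.0]])
    x2 = np.array([[1.0, 1.0]])
    for gamma in [0.1, 1, 10]:
        by_hand = np.exp(-gamma * np.sum((x1 - x2) ** 2))
        # rbf_kernel returns a matrix of kernel values between two sets of points
        print(gamma, by_hand, rbf_kernel(x1, x2, gamma=gamma)[0, 0])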
Let's apply the RBF kernel SVM to the Breast Cancer dataset. By default, C=1 and gamma=1/n_features:

In[ ]:
    X_train, X_test, y_train, y_test = train_test_split(
        cancer.data, cancer.target, random_state=0)

    svc = SVC()
    svc.fit(X_train, y_train)

    print("Accuracy on training set: {:.2f}".format(svc.score(X_train, y_train)))
    print("Accuracy on test set: {:.2f}".format(svc.score(X_test, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

The model overfits quite substantially, with a perfect score on the training set and a far lower accuracy on the test set.

While SVMs often perform quite well, they are very sensitive to the settings of the parameters and to the scaling of the data. In particular, they require all the features to vary on a similar scale. Let's look at the minimum and maximum values for each feature, plotted in log-space:

In[ ]:
    plt.plot(X_train.min(axis=0), 'o', label="min")
    plt.plot(X_train.max(axis=0), '^', label="max")
    plt.legend(loc=4)
    plt.xlabel("Feature index")
    plt.ylabel("Feature magnitude")
    plt.yscale("log")

From this plot we can determine that features in the Breast Cancer dataset are of completely different orders of magnitude. This can be somewhat of a problem for other models (like linear models), but it has devastating effects for the kernel SVM. Let's examine some ways to deal with this issue.
Figure: feature ranges for the Breast Cancer dataset (note that the y axis has a logarithmic scale).

Preprocessing data for SVMs

One way to resolve this problem is by rescaling each feature so that they are all approximately on the same scale. A common rescaling method for kernel SVMs is to scale the data such that all features are between 0 and 1. We will see how to do this using the MinMaxScaler preprocessing method in Chapter 3, where we'll give more details. For now, let's do this "by hand":

In[ ]:
    # compute the minimum value per feature on the training set
    min_on_training = X_train.min(axis=0)
    # compute the range of each feature (max - min) on the training set
    range_on_training = (X_train - min_on_training).max(axis=0)

    # subtract the min, and divide by range
    # afterward, min=0 and max=1 for each feature
    X_train_scaled = (X_train - min_on_training) / range_on_training
    print("Minimum for each feature\n{}".format(X_train_scaled.min(axis=0)))
    print("Maximum for each feature\n{}".format(X_train_scaled.max(axis=0)))
Out[ ]:
    Minimum for each feature
    [0. 0. 0. ...]
    Maximum for each feature
    [1. 1. 1. ...]

In[ ]:
    # use the same transformation on the test set,
    # using min and range of the training set (see Chapter 3 for details)
    X_test_scaled = (X_test - min_on_training) / range_on_training

In[ ]:
    svc = SVC()
    svc.fit(X_train_scaled, y_train)

    print("Accuracy on training set: {:.3f}".format(svc.score(X_train_scaled, y_train)))
    print("Accuracy on test set: {:.3f}".format(svc.score(X_test_scaled, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

Scaling the data made a huge difference! Now we are actually in an underfitting regime, where training and test set performance are quite similar but less close to 100% accuracy. From here, we can try increasing either C or gamma to fit a more complex model. For example:

In[ ]:
    svc = SVC(C=1000)
    svc.fit(X_train_scaled, y_train)

    print("Accuracy on training set: {:.3f}".format(svc.score(X_train_scaled, y_train)))
    print("Accuracy on test set: {:.3f}".format(svc.score(X_test_scaled, y_test)))

Out[ ]:
    Accuracy on training set: ...
    Accuracy on test set: ...

Here, increasing C allows us to improve the model significantly, resulting in a much better test accuracy.
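The same rescaling can be written more compactly with MinMaxScaler, and C and gamma can then be searched together, which is the usual practice (MinMaxScaler, pipelines, and grid search are all introduced in later chapters of the book). A minimal sketch with illustrative grid values:

    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    # the scaler is fit inside each cross-validation split, on the training folds only
    pipe = make_pipeline(MinMaxScaler(), SVC())
    param_grid = {'svc__C': [0.1, 1, 10, 100, 1000],
                  'svc__gamma': [0.01, 0.1, 1, 10]}
    grid = GridSearchCV(pipe, param_grid, cv=5)
    grid.fit(X_train, y_train)
    print(grid.best_params_)
    print("Test accuracy: {:.3f}".format(grid.score(X_test, y_test)))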
Strengths, weaknesses, and parameters

Kernelized support vector machines are powerful models and perform well on a variety of datasets. SVMs allow for complex decision boundaries, even if the data has only a few features. They work well on low-dimensional and high-dimensional data (i.e., few and many features), but don't scale very well with the number of samples. Running an SVM on data with up to 10,000 samples might work well, but working with datasets of size 100,000 or more can become challenging in terms of runtime and memory usage.

Another downside of SVMs is that they require careful preprocessing of the data and tuning of the parameters. This is why, these days, most people instead use tree-based models such as random forests or gradient boosting (which require little or no preprocessing) in many applications. Furthermore, SVM models are hard to inspect; it can be difficult to understand why a particular prediction was made, and it might be tricky to explain the model to a nonexpert.

Still, it might be worth trying SVMs, particularly if all of your features represent measurements in similar units (e.g., all are pixel intensities) and they are on similar scales.

The important parameters in kernel SVMs are the regularization parameter C, the choice of the kernel, and the kernel-specific parameters. Although we primarily focused on the RBF kernel, other choices are available in scikit-learn. The RBF kernel has only one parameter, gamma, which is the inverse of the width of the Gaussian kernel. gamma and C both control the complexity of the model, with large values in either resulting in a more complex model. Therefore, good settings for the two parameters are usually strongly correlated, and C and gamma should be adjusted together.

Neural Networks (Deep Learning)

A family of algorithms known as neural networks has recently seen a revival under the name "deep learning." While deep learning shows great promise in many machine learning applications, deep learning algorithms are often tailored very carefully to a specific use case. Here, we will only discuss some relatively simple methods, namely multilayer perceptrons for classification and regression, that can serve as a starting point for more involved deep learning methods. Multilayer perceptrons (MLPs) are also known as (vanilla) feed-forward neural networks, or sometimes just neural networks.

The neural network model

MLPs can be viewed as generalizations of linear models that perform multiple stages of processing to come to a decision.
Remember that the prediction of a linear regressor is given as:

    ŷ = w[0] * x[0] + w[1] * x[1] + ... + w[p] * x[p] + b

In plain English, ŷ is a weighted sum of the input features x[0] to x[p], weighted by the learned coefficients w[0] to w[p]. We could visualize this graphically as shown in the figure produced here:

In[ ]:
    display(mglearn.plots.plot_logistic_regression_graph())

Figure: visualization of logistic regression, where input features and predictions are shown as nodes, and the coefficients are connections between the nodes.

Here, each node on the left represents an input feature, the connecting lines represent the learned coefficients, and the node on the right represents the output, which is a weighted sum of the inputs.

In an MLP, this process of computing weighted sums is repeated multiple times, first computing hidden units that represent an intermediate processing step, which are again combined using weighted sums to yield the final result:

In[ ]:
    display(mglearn.plots.plot_single_hidden_layer_graph())
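The repeated weighted sums can be written out directly in numpy. This is a minimal sketch, not code from the book; note that a nonlinearity (tanh here) is applied between the two stages, since without it the stacked weighted sums would collapse back into a single linear model:

    import numpy as np

    def mlp_forward(x, W_h, b_h, W_o, b_o):
        # weighted sums of the inputs give the hidden units, with a nonlinearity
        h = np.tanh(np.dot(x, W_h) + b_h)
        # a weighted sum of the hidden units gives the output
        return np.dot(h, W_o) + b_o

    rng = np.random.RandomState(0)
    x = rng.randn(4)                          # 4 input features
    W_h, b_h = rng.randn(4, 3), rng.randn(3)  # 4 inputs -> 3 hidden units
    W_o, b_o = rng.randn(3), rng.randn()      # 3 hidden units -> 1 output
    print(mlp_forward(x, W_h, b_h, W_o, b_o))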