This model has a lot more coefficients (also called weights) to learn: there is one between every input and every hidden unit (which make up the hidden layer), and one between every unit in the hidden layer and the output.

Computing a series of weighted sums is mathematically the same as computing just one weighted sum, so to make this model truly more powerful than a linear model, we need one extra trick. After computing a weighted sum for each hidden unit, a nonlinear function is applied to the result, usually the rectifying nonlinearity (also known as rectified linear unit or relu) or the tangens hyperbolicus (tanh). The result of this function is then used in the weighted sum that computes the output, ŷ. The two functions are visualized in the following plot: the relu cuts off values below zero, while tanh saturates to -1 for low input values and +1 for high input values. Either nonlinear function allows the neural network to learn much more complicated functions than a linear model could:

In[ ]:
line = np.linspace(-3, 3, 100)
plt.plot(line, np.tanh(line), label="tanh")
plt.plot(line, np.maximum(line, 0), label="relu")
plt.legend(loc="best")
plt.xlabel("x")
plt.ylabel("relu(x), tanh(x)")
For the small neural network pictured earlier, the full formula for computing ŷ in the case of regression would be (when using a tanh nonlinearity):

h[0] = tanh(w[0, 0] * x[0] + w[1, 0] * x[1] + w[2, 0] * x[2] + w[3, 0] * x[3] + b[0])
h[1] = tanh(w[0, 1] * x[0] + w[1, 1] * x[1] + w[2, 1] * x[2] + w[3, 1] * x[3] + b[1])
h[2] = tanh(w[0, 2] * x[0] + w[1, 2] * x[1] + w[2, 2] * x[2] + w[3, 2] * x[3] + b[2])

ŷ = v[0] * h[0] + v[1] * h[1] + v[2] * h[2] + b

Here, w are the weights between the input x and the hidden layer h, and v are the weights between the hidden layer h and the output ŷ. The weights v and w are learned from data, x are the input features, ŷ is the computed output, and h are intermediate computations. An important parameter that needs to be set by the user is the number of nodes in the hidden layer. This can be as small as 10 for very small or simple datasets and as big as 10,000 for very complex data. It is also possible to add additional hidden layers, as shown in the following figure.
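To make the forward pass concrete, here is a minimal NumPy sketch of the same computation for one sample with four input features and three hidden units. The weight values are random placeholders for illustration, not taken from any fitted model:

import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(4)        # one sample with four input features
w = rng.randn(4, 3)     # weights from the inputs to the three hidden units
b = rng.randn(3)        # biases of the hidden units
v = rng.randn(3)        # weights from the hidden units to the output
b_out = rng.randn()     # bias of the output

h = np.tanh(x @ w + b)  # one weighted sum per hidden unit, passed through tanh
y_hat = h @ v + b_out   # the output is a weighted sum of the hidden activations
print("hidden activations:", h)
print("prediction:", y_hat)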
In[ ]:
mglearn.plots.plot_two_hidden_layer_graph()

Figure: a multilayer perceptron with two hidden layers.

Having large neural networks made up of many of these layers of computation is what inspired the term "deep learning".

Tuning neural networks

Let's look into the workings of the MLP by applying the MLPClassifier to the two_moons dataset we used earlier in this chapter. The results are shown in the following figure:

In[ ]:
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_moons

X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

mlp = MLPClassifier(algorithm='l-bfgs', random_state=0).fit(X_train, y_train)
mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
Figure: decision boundary learned by a neural network with 100 hidden units on the two_moons dataset.

As you can see, the neural network learned a very nonlinear but relatively smooth decision boundary. We used algorithm='l-bfgs', which we will discuss later. By default, the MLP uses 100 hidden nodes, which is quite a lot for this small dataset. We can reduce the number (which reduces the complexity of the model) and still get a good result:

In[ ]:
mlp = MLPClassifier(algorithm='l-bfgs', random_state=0, hidden_layer_sizes=[10])
mlp.fit(X_train, y_train)
mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
Figure: decision boundary learned by a neural network with 10 hidden units on the two_moons dataset.

With only 10 hidden units, the decision boundary looks somewhat more ragged. The default nonlinearity is relu, shown earlier. With a single hidden layer, this means the decision function will be made up of 10 straight line segments. If we want a smoother decision boundary, we could add more hidden units, add a second hidden layer, or use the tanh nonlinearity:

In[ ]:
# using two hidden layers, with 10 units each
mlp = MLPClassifier(algorithm='l-bfgs', random_state=0,
                    hidden_layer_sizes=[10, 10])
mlp.fit(X_train, y_train)
mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")
In[ ]:
# using two hidden layers, with 10 units each, now with tanh nonlinearity
mlp = MLPClassifier(algorithm='l-bfgs', activation='tanh', random_state=0,
                    hidden_layer_sizes=[10, 10])
mlp.fit(X_train, y_train)
mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=.3)
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
plt.xlabel("Feature 0")
plt.ylabel("Feature 1")

Figure: decision boundary learned using two hidden layers with 10 hidden units each, with the rectified linear activation function.
Figure: decision boundary learned using two hidden layers with 10 hidden units each, with the tanh activation function.

Finally, we can also control the complexity of a neural network by using an l2 penalty to shrink the weights toward zero, as we did in ridge regression and the linear classifiers. The parameter for this in the MLPClassifier is alpha (as in the linear regression models), and it's set to a very low value (little regularization) by default. The following figure shows the effect of different values of alpha on the two_moons dataset, using two hidden layers of 10 or 100 units each:

In[ ]:
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for axx, n_hidden_nodes in zip(axes, [10, 100]):
    for ax, alpha in zip(axx, [0.0001, 0.01, 0.1, 1]):
        mlp = MLPClassifier(algorithm='l-bfgs', random_state=0,
                            hidden_layer_sizes=[n_hidden_nodes, n_hidden_nodes],
                            alpha=alpha)
        mlp.fit(X_train, y_train)
        mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=.3, ax=ax)
        mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
        ax.set_title("n_hidden=[{}, {}]\nalpha={:.4f}".format(
            n_hidden_nodes, n_hidden_nodes, alpha))
Figure: decision functions for different numbers of hidden units and different settings of the alpha parameter.

As you probably have realized by now, there are many ways to control the complexity of a neural network: the number of hidden layers, the number of units in each hidden layer, and the regularization (alpha). There are actually even more, which we won't go into here.

An important property of neural networks is that their weights are set randomly before learning is started, and this random initialization affects the model that is learned. That means that even when using exactly the same parameters, we can obtain very different models when using different random seeds. If the networks are large, and their complexity is chosen properly, this should not affect accuracy too much, but it is worth keeping in mind (particularly for smaller networks). The following figure shows plots of several models, all learned with the same settings of the parameters:

In[ ]:
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for i, ax in enumerate(axes.ravel()):
    mlp = MLPClassifier(algorithm='l-bfgs', random_state=i,
                        hidden_layer_sizes=[100, 100])
    mlp.fit(X_train, y_train)
    mglearn.plots.plot_2d_separator(mlp, X_train, fill=True, alpha=.3, ax=ax)
    mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, ax=ax)
Figure: decision functions learned with the same parameters but different random initializations.

To get a better understanding of neural networks on real-world data, let's apply the MLPClassifier to the Breast Cancer dataset. We start with the default parameters:

In[ ]:
print("Cancer data per-feature maxima:\n{}".format(cancer.data.max(axis=0)))

Out[ ]:
Cancer data per-feature maxima:

In[ ]:
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

mlp = MLPClassifier(random_state=42)
mlp.fit(X_train, y_train)

print("Accuracy on training set: {:.2f}".format(mlp.score(X_train, y_train)))
print("Accuracy on test set: {:.2f}".format(mlp.score(X_test, y_test)))

Out[ ]:
Accuracy on training set:
Accuracy on test set:

The accuracy of the MLP is quite good, but not as good as the other models. As in the earlier SVC example, this is likely due to the scaling of the data. Neural networks also expect all input features to vary in a similar way, and ideally to have a mean of 0 and a variance of 1.
We will do this by hand here, but we'll introduce the StandardScaler to do this automatically in the next chapter:

In[ ]:
# compute the mean value per feature on the training set
mean_on_train = X_train.mean(axis=0)
# compute the standard deviation of each feature on the training set
std_on_train = X_train.std(axis=0)

# subtract the mean, and scale by inverse standard deviation
# afterward, mean=0 and std=1
X_train_scaled = (X_train - mean_on_train) / std_on_train
# use THE SAME transformation (using training mean and std) on the test set
X_test_scaled = (X_test - mean_on_train) / std_on_train

mlp = MLPClassifier(random_state=0)
mlp.fit(X_train_scaled, y_train)

print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))

Out[ ]:
Accuracy on training set:
Accuracy on test set:
ConvergenceWarning: Stochastic Optimizer: Maximum iterations reached and the optimization hasn't converged yet.

The results are much better after scaling, and already quite competitive. We got a warning from the model, though, that tells us that the maximum number of iterations has been reached. This is part of the adam algorithm for learning the model, and tells us that we should increase the number of iterations:

In[ ]:
mlp = MLPClassifier(max_iter=1000, random_state=0)
mlp.fit(X_train_scaled, y_train)

print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))

Out[ ]:
Accuracy on training set:
Accuracy on test set:
Increasing the number of iterations only improved the training set performance, not the generalization performance. Still, the model is performing quite well. As there is some gap between the training and the test performance, we might try to decrease the model's complexity to get better generalization performance. Here, we choose to increase the alpha parameter (quite aggressively, from 0.0001 to 1) to add stronger regularization of the weights:

In[ ]:
mlp = MLPClassifier(max_iter=1000, alpha=1, random_state=0)
mlp.fit(X_train_scaled, y_train)

print("Accuracy on training set: {:.3f}".format(mlp.score(X_train_scaled, y_train)))
print("Accuracy on test set: {:.3f}".format(mlp.score(X_test_scaled, y_test)))

Out[ ]:
Accuracy on training set:
Accuracy on test set:

This leads to performance on par with the best models so far. While it is possible to analyze what a neural network has learned, this is usually much trickier than analyzing a linear model or a tree-based model. One way to introspect what was learned is to look at the weights in the model. You can see an example of this in the scikit-learn example gallery. For the Breast Cancer dataset, this might be a bit hard to understand. The following plot shows the weights that were learned connecting the input to the first hidden layer. The rows in this plot correspond to the 30 input features, while the columns correspond to the 100 hidden units. Light colors represent large positive values, while dark colors represent negative values:

In[ ]:
plt.figure(figsize=(20, 5))
plt.imshow(mlp.coefs_[0], interpolation='none', cmap='viridis')
plt.yticks(range(30), cancer.feature_names)
plt.xlabel("Columns in weight matrix")
plt.ylabel("Input feature")
plt.colorbar()

(Note: you might have noticed at this point that many of the well-performing models achieved exactly the same accuracy. This means that all of the models make exactly the same number of mistakes, which is four. If you compare the actual predictions, you can even see that they make exactly the same mistakes! This might be a consequence of the dataset being very small, or it may be because these points are really different from the rest.)
Figure: heat map of the first layer weights in a neural network learned on the Breast Cancer dataset.

One possible inference we can make is that features that have very small weights for all of the hidden units are "less important" to the model. We can see that "mean smoothness" and "mean compactness", in addition to the features found between "smoothness error" and "fractal dimension error", have relatively low weights compared to other features. This could mean that these are less important features or possibly that we didn't represent them in a way that the neural network could use.

We could also visualize the weights connecting the hidden layer to the output layer, but those are even harder to interpret.

While the MLPClassifier and MLPRegressor provide easy-to-use interfaces for the most common neural network architectures, they only capture a small subset of what is possible with neural networks. If you are interested in working with more flexible or larger models, we encourage you to look beyond scikit-learn into the fantastic deep learning libraries that are out there. For Python users, the most well-established are Keras, Lasagne, and TensorFlow. Lasagne builds on the Theano library, while Keras can use either TensorFlow or Theano. These libraries provide a much more flexible interface to build neural networks and track the rapid progress in deep learning research. All of the popular deep learning libraries also allow the use of high-performance graphics processing units (GPUs), which scikit-learn does not support. Using GPUs allows us to accelerate computations by factors of 10x to 100x, and they are essential for applying deep learning methods to large-scale datasets.

Strengths, weaknesses, and parameters

Neural networks have reemerged as state-of-the-art models in many applications of machine learning. One of their main advantages is that they are able to capture information contained in large amounts of data and build incredibly complex models. Given enough computation time, data, and careful tuning of the parameters, neural networks often beat other machine learning algorithms (for classification and regression tasks).
Neural networks, particularly the large and powerful ones, often take a long time to train. They also require careful preprocessing of the data, as we saw here. Similarly to SVMs, they work best with "homogeneous" data, where all the features have similar meanings. For data that has very different kinds of features, tree-based models might work better. Tuning neural network parameters is also an art unto itself. In our experiments, we barely scratched the surface of possible ways to adjust neural network models and how to train them.

Estimating complexity in neural networks

The most important parameters are the number of layers and the number of hidden units per layer. You should start with one or two hidden layers, and possibly expand from there. The number of nodes per hidden layer is often similar to the number of input features, but rarely higher than in the low to mid-thousands.

A helpful measure when thinking about the model complexity of a neural network is the number of weights or coefficients that are learned. If you have a binary classification dataset with 100 features and you have 100 hidden units, there are 100 * 100 = 10,000 weights between the input and the first hidden layer. There are also 100 weights between the hidden layer and the output, for a total of around 10,100 weights. If you add a second hidden layer with 100 hidden units, there will be another 100 * 100 = 10,000 weights from the first hidden layer to the second hidden layer, resulting in a total of around 20,100 weights. If instead you use one layer with 1,000 hidden units, you are learning 100 * 1,000 = 100,000 weights from the input to the hidden layer and 1,000 weights from the hidden layer to the output, for a total of 101,000. If you add a second hidden layer of 1,000 units, you add another 1,000 * 1,000 = 1,000,000 weights, for a whopping total of around 1,101,000, roughly 50 times larger than the model with two hidden layers of size 100.
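As a quick check of this arithmetic, here is a small sketch (not from the book) that counts the weights for a given layout; bias terms are ignored to match the rough counts above:

def count_weights(n_features, hidden_layer_sizes, n_outputs=1):
    # sizes of all layers from input to output
    layer_sizes = [n_features] + list(hidden_layer_sizes) + [n_outputs]
    # one weight for every pair of units in adjacent layers
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

print(count_weights(100, [100]))          # 10,100
print(count_weights(100, [100, 100]))     # 20,100
print(count_weights(100, [1000]))         # 101,000
print(count_weights(100, [1000, 1000]))   # 1,101,000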
A common way to adjust parameters in a neural network is to first create a network that is large enough to overfit, making sure that the task can actually be learned by the network. Then, once you know the training data can be learned, either shrink the network or increase alpha to add regularization, which will improve generalization performance.

In our experiments, we focused mostly on the definition of the model: the number of layers and nodes per layer, the regularization, and the nonlinearity. These define the model we want to learn. There is also the question of how to learn the model, or the algorithm that is used for learning the parameters, which is set using the algorithm parameter. There are two easy-to-use choices for algorithm. The default is 'adam', which works well in most situations but is quite sensitive to the scaling of the data (so it is important to always scale your data to a mean of 0 and unit variance). The other one is 'l-bfgs', which is quite robust but might take a long time on larger models or larger datasets. There is also the more advanced 'sgd' option, which is what many deep learning researchers use. The 'sgd' option comes with many additional parameters that need to be tuned for best results; you can find their definitions in the user guide. When starting to work with MLPs, we recommend sticking to 'adam' and 'l-bfgs'.

fit resets a model

An important property of scikit-learn models is that calling fit will always reset everything a model previously learned. So if you build a model on one dataset and then call fit again on a different dataset, the model will "forget" everything it learned from the first dataset. You can call fit as often as you like on a model, and the outcome will be the same as calling fit on a "new" model.
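As a minimal illustration of this behavior (a sketch, not from the book), fitting the same estimator object on a second dataset discards what it learned from the first:

from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

# two unrelated toy datasets
X1, y1 = make_blobs(centers=2, random_state=0)
X2, y2 = make_blobs(centers=2, random_state=42)

clf = LogisticRegression()
clf.fit(X1, y1)
coef_after_first_fit = clf.coef_.copy()

clf.fit(X2, y2)  # refitting resets the model; only the second dataset matters now
print("coefficients changed:", (coef_after_first_fit != clf.coef_).any())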
Uncertainty estimates from classifiers

Another useful part of the scikit-learn interface that we haven't talked about yet is the ability of classifiers to provide uncertainty estimates of predictions. Often, you are not only interested in which class a classifier predicts for a certain test point, but also how certain it is that this is the right class. In practice, different kinds of mistakes lead to very different outcomes in real-world applications. Imagine a medical application testing for cancer: making a false positive prediction might lead to a patient undergoing additional tests, while a false negative prediction might lead to a serious disease not being treated. We will go into this topic in more detail in a later chapter.

There are two different functions in scikit-learn that can be used to obtain uncertainty estimates from classifiers: decision_function and predict_proba. Most (but not all) classifiers have at least one of them, and many classifiers have both. Let's look at what these two functions do on a synthetic two-dimensional dataset, when building a GradientBoostingClassifier, which has both a decision_function and a predict_proba method:

In[ ]:
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_blobs, make_circles

X, y = make_circles(noise=0.25, factor=0.5, random_state=1)

# we rename the classes "blue" and "red" for illustration purposes
y_named = np.array(["blue", "red"])[y]

# we can call train_test_split with arbitrarily many arrays;
# all will be split in a consistent manner
X_train, X_test, y_train_named, y_test_named, y_train, y_test = \
    train_test_split(X, y_named, y, random_state=0)

# build the gradient boosting model
gbrt = GradientBoostingClassifier(random_state=0)
gbrt.fit(X_train, y_train_named)

In the binary classification case, the return value of decision_function is of shape (n_samples,), and it returns one floating-point number for each sample:

In[ ]:
print("X_test.shape: {}".format(X_test.shape))
print("Decision function shape: {}".format(gbrt.decision_function(X_test).shape))

Out[ ]:
X_test.shape: (25, 2)
Decision function shape: (25,)

This value encodes how strongly the model believes a data point to belong to the "positive" class, in this case class 1. Positive values indicate a preference for the positive class, and negative values indicate a preference for the "negative" (other) class:

In[ ]:
# show the first few entries of decision_function
print("Decision function:\n{}".format(gbrt.decision_function(X_test)[:6]))

Out[ ]:
Decision function:

We can recover the prediction by looking only at the sign of the decision function:

In[ ]:
print("Thresholded decision function:\n{}".format(
    gbrt.decision_function(X_test) > 0))
print("Predictions:\n{}".format(gbrt.predict(X_test)))

Out[ ]:
Thresholded decision function:
[ True False False False  True  True False  True  True  True False  True
  True False  True False False False  True  True  True  True  True False
 False]
Predictions:
['red' 'blue' 'blue' 'blue' 'red' 'red' 'blue' 'red' 'red' 'red' 'blue'
 'red' 'red' 'blue' 'red' 'blue' 'blue' 'blue' 'red' 'red' 'red' 'red'
 'red' 'blue' 'blue']

For binary classification, the "negative" class is always the first entry of the classes_ attribute, and the "positive" class is the second entry of classes_. So if you want to fully recover the output of predict, you need to make use of the classes_ attribute:
In[ ]:
# make the boolean True/False into 0 and 1
greater_zero = (gbrt.decision_function(X_test) > 0).astype(int)
# use 0 and 1 as indices into classes_
pred = gbrt.classes_[greater_zero]
# pred is the same as the output of gbrt.predict
print("pred is equal to predictions: {}".format(
    np.all(pred == gbrt.predict(X_test))))

Out[ ]:
pred is equal to predictions: True

The range of decision_function can be arbitrary, and depends on the data and the model parameters:

In[ ]:
decision_function = gbrt.decision_function(X_test)
print("Decision function minimum: {:.2f} maximum: {:.2f}".format(
    np.min(decision_function), np.max(decision_function)))

Out[ ]:
Decision function minimum:  maximum:

This arbitrary scaling makes the output of decision_function often hard to interpret. In the following example we plot the decision_function for all points in the 2D plane using a color coding, next to a visualization of the decision boundary, as we saw earlier. We show training points as circles and test data as triangles:

In[ ]:
fig, axes = plt.subplots(1, 2, figsize=(13, 5))
mglearn.tools.plot_2d_separator(gbrt, X, ax=axes[0], alpha=.4, fill=True, cm=mglearn.cm2)
scores_image = mglearn.tools.plot_2d_scores(gbrt, X, ax=axes[1], alpha=.4, cm=mglearn.ReBl)

for ax in axes:
    # plot training and test points
    mglearn.discrete_scatter(X_test[:, 0], X_test[:, 1], y_test, markers='^', ax=ax)
    mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, markers='o', ax=ax)
    ax.set_xlabel("Feature 0")
    ax.set_ylabel("Feature 1")
cbar = plt.colorbar(scores_image, ax=axes.tolist())
axes[0].legend(["Test class 0", "Test class 1", "Train class 0", "Train class 1"],
               ncol=4, loc=(.1, 1.1))
Figure: decision boundary (left) and decision function (right) for a gradient boosting model on a two-dimensional toy dataset.

Encoding not only the predicted outcome but also how certain the classifier is provides additional information. However, in this visualization, it is hard to make out the boundary between the two classes.

Predicting probabilities

The output of predict_proba is a probability for each class, and is often more easily understood than the output of decision_function. It is always of shape (n_samples, 2) for binary classification:

In[ ]:
print("Shape of probabilities: {}".format(gbrt.predict_proba(X_test).shape))

Out[ ]:
Shape of probabilities: (25, 2)

The first entry in each row is the estimated probability of the first class, and the second entry is the estimated probability of the second class. Because it is a probability, the output of predict_proba is always between 0 and 1, and the sum of the entries for both classes is always 1:

In[ ]:
# show the first few entries of predict_proba
print("Predicted probabilities:\n{}".format(gbrt.predict_proba(X_test[:6])))
Out[ ]:
Predicted probabilities:

Because the probabilities for the two classes sum to 1, exactly one of the classes will be above 50% certainty. That class is the one that is predicted. You can see in the previous output that the classifier is relatively certain for most points.

How well the uncertainty actually reflects uncertainty in the data depends on the model and the parameters. A model that is more overfitted tends to make more certain predictions, even if they might be wrong. A model with less complexity usually has more uncertainty in its predictions. A model is called calibrated if the reported uncertainty actually matches how correct it is: in a calibrated model, a prediction made with 70% certainty would be correct 70% of the time.

In the following example we again show the decision boundary on the dataset, next to the class probabilities for class 1:

In[ ]:
fig, axes = plt.subplots(1, 2, figsize=(13, 5))

mglearn.tools.plot_2d_separator(
    gbrt, X, ax=axes[0], alpha=.4, fill=True, cm=mglearn.cm2)
scores_image = mglearn.tools.plot_2d_scores(
    gbrt, X, ax=axes[1], alpha=.5, cm=mglearn.ReBl, function='predict_proba')

for ax in axes:
    # plot training and test points
    mglearn.discrete_scatter(X_test[:, 0], X_test[:, 1], y_test, markers='^', ax=ax)
    mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train, markers='o', ax=ax)
    ax.set_xlabel("Feature 0")
    ax.set_ylabel("Feature 1")
cbar = plt.colorbar(scores_image, ax=axes.tolist())
axes[0].legend(["Test class 0", "Test class 1", "Train class 0", "Train class 1"],
               ncol=4, loc=(.1, 1.1))

(Note: because the probabilities are floating-point numbers, it is unlikely that they will both be exactly 0.500. However, if that happens, the prediction is made at random.)
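The notion of calibration can be checked empirically. The following sketch (not from the book) assumes the gbrt model and the integer labels y_test from above and uses scikit-learn's calibration_curve helper to compare predicted probabilities for the "red" class against the observed fraction of class-1 points in a few probability bins; with only 25 test points this only illustrates the mechanics:

from sklearn.calibration import calibration_curve

# probability of the "red" class (second column of classes_, corresponding to y == 1)
prob_red = gbrt.predict_proba(X_test)[:, 1]

# observed fraction of positives versus mean predicted probability, per bin
frac_positive, mean_predicted = calibration_curve(y_test, prob_red, n_bins=5)
for mp, fp in zip(mean_predicted, frac_positive):
    print("mean predicted: {:.2f}  observed fraction of class 1: {:.2f}".format(mp, fp))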
Figure: decision boundary (left) and predicted probabilities (right) for the gradient boosting model shown in the previous figure.

The boundaries in this plot are much more well-defined, and the small areas of uncertainty are clearly visible.

The scikit-learn website has a great comparison of many models and what their uncertainty estimates look like. We've reproduced the figure here, and we encourage you to go through the example there.

Figure: comparison of several classifiers in scikit-learn on synthetic datasets (image courtesy of the scikit-learn website).

Uncertainty in multiclass classification

So far, we've only talked about uncertainty estimates in binary classification. But the decision_function and predict_proba methods also work in the multiclass setting. Let's apply them on the Iris dataset, which is a three-class classification dataset:
In[ ]:
from sklearn.datasets import load_iris

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=42)

gbrt = GradientBoostingClassifier(learning_rate=0.01, random_state=0)
gbrt.fit(X_train, y_train)

In[ ]:
print("Decision function shape: {}".format(gbrt.decision_function(X_test).shape))
# plot the first few entries of the decision function
print("Decision function:\n{}".format(gbrt.decision_function(X_test)[:6, :]))

Out[ ]:
Decision function shape: (38, 3)
Decision function:

In the multiclass case, the decision_function has the shape (n_samples, n_classes), and each column provides a "certainty score" for each class, where a large score means that a class is more likely and a small score means the class is less likely. You can recover the predictions from these scores by finding the maximum entry for each data point:

In[ ]:
print("Argmax of decision function:\n{}".format(
    np.argmax(gbrt.decision_function(X_test), axis=1)))
print("Predictions:\n{}".format(gbrt.predict(X_test)))

Out[ ]:
Argmax of decision function:
Predictions:

The output of predict_proba has the same shape, (n_samples, n_classes). Again, the probabilities for the possible classes for each data point sum to 1:
In[ ]:
# show the first few entries of predict_proba
print("Predicted probabilities:\n{}".format(gbrt.predict_proba(X_test)[:6]))
# show that sums across rows are one
print("Sums: {}".format(gbrt.predict_proba(X_test)[:6].sum(axis=1)))

Out[ ]:
Predicted probabilities:
Sums: [ 1.  1.  1.  1.  1.  1.]

We can again recover the predictions by computing the argmax of predict_proba:

In[ ]:
print("Argmax of predicted probabilities:\n{}".format(
    np.argmax(gbrt.predict_proba(X_test), axis=1)))
print("Predictions:\n{}".format(gbrt.predict(X_test)))

Out[ ]:
Argmax of predicted probabilities:
Predictions:

To summarize, predict_proba and decision_function always have shape (n_samples, n_classes), apart from decision_function in the special binary case. In the binary case, decision_function only has one column, corresponding to the "positive" class, classes_[1]. This is mostly for historical reasons.

You can recover the prediction when there are n_classes many columns by computing the argmax across columns. Be careful, though, if your classes are strings, or if you use integers that are not consecutive and starting from 0. If you want to compare results obtained with predict to results obtained via decision_function or predict_proba, make sure to use the classes_ attribute of the classifier to get the actual class names:
In[ ]:
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression()

# represent each target by its class name in the iris dataset
named_target = iris.target_names[y_train]
logreg.fit(X_train, named_target)
print("unique classes in training data: {}".format(logreg.classes_))
print("predictions: {}".format(logreg.predict(X_test)[:10]))
argmax_dec_func = np.argmax(logreg.decision_function(X_test), axis=1)
print("argmax of decision function: {}".format(argmax_dec_func[:10]))
print("argmax combined with classes_: {}".format(
    logreg.classes_[argmax_dec_func][:10]))

Out[ ]:
unique classes in training data: ['setosa' 'versicolor' 'virginica']
predictions: ['versicolor' 'setosa' 'virginica' 'versicolor' 'versicolor' 'setosa'
 'versicolor' 'virginica' 'versicolor' 'versicolor']
argmax of decision function: [1 0 2 1 1 0 1 2 1 1]
argmax combined with classes_: ['versicolor' 'setosa' 'virginica' 'versicolor' 'versicolor'
 'setosa' 'versicolor' 'virginica' 'versicolor' 'versicolor']

Summary and outlook

We started this chapter with a discussion of model complexity, then discussed generalization, or learning a model that is able to perform well on new, previously unseen data. This led us to the concepts of underfitting, which describes a model that cannot capture the variations present in the training data, and overfitting, which describes a model that focuses too much on the training data and is not able to generalize to new data very well.

We then discussed a wide array of machine learning models for classification and regression, what their advantages and disadvantages are, and how to control model complexity for each of them. We saw that for many of the algorithms, setting the right parameters is important for good performance. Some of the algorithms are also sensitive to how we represent the input data, and in particular to how the features are scaled. Therefore, blindly applying an algorithm to a dataset without understanding the assumptions the model makes and the meanings of the parameter settings will rarely lead to an accurate model.

This chapter contains a lot of information about the algorithms, and it is not necessary for you to remember all of these details for the following chapters. However, some knowledge of the models described here, and which to use in a specific situation, is important for successfully applying machine learning in practice. Here is a quick summary of when to use each model:
Nearest neighbors
For small datasets, good as a baseline, easy to explain.

Linear models
Go-to as a first algorithm to try, good for very large datasets, good for very high-dimensional data.

Naive Bayes
Only for classification. Even faster than linear models, good for very large datasets and high-dimensional data. Often less accurate than linear models.

Decision trees
Very fast, don't need scaling of the data, can be visualized and easily explained.

Random forests
Nearly always perform better than a single decision tree, very robust and powerful. Don't need scaling of data. Not good for very high-dimensional sparse data.

Gradient boosted decision trees
Often slightly more accurate than random forests. Slower to train but faster to predict than random forests, and smaller in memory. Need more parameter tuning than random forests.

Support vector machines
Powerful for medium-sized datasets of features with similar meaning. Require scaling of data, sensitive to parameters.

Neural networks
Can build very complex models, particularly for large datasets. Sensitive to scaling of the data and to the choice of parameters. Large models need a long time to train.

When working with a new dataset, it is in general a good idea to start with a simple model, such as a linear model or a naive Bayes or nearest neighbors classifier, and see how far you can get. After understanding more about the data, you can consider moving to an algorithm that can build more complex models, such as random forests, gradient boosted decision trees, SVMs, or neural networks.

You should now be in a position where you have some idea of how to apply, tune, and analyze the models we discussed here. In this chapter, we focused on the binary classification case, as this is usually easiest to understand. Most of the algorithms presented have classification and regression variants, however, and all of the classification algorithms support both binary and multiclass classification. Try applying any of these algorithms to the built-in datasets in scikit-learn, like the boston_housing or diabetes datasets for regression, or the digits dataset for multiclass classification. Playing around with the algorithms on different datasets will give you a better feel for how long they need to train, how easy it is to analyze the models, and how sensitive
they are to the representation of the data.

While we analyzed the consequences of different parameter settings for the algorithms we investigated, building a model that actually generalizes well to new data in production is a bit trickier than that. We will see how to properly adjust parameters and how to find good parameters automatically in a later chapter. First, though, we will dive in more detail into unsupervised learning and preprocessing in the next chapter.
Unsupervised learning and preprocessing

The second family of machine learning algorithms that we will discuss is unsupervised learning algorithms. Unsupervised learning subsumes all kinds of machine learning where there is no known output, no teacher to instruct the learning algorithm. In unsupervised learning, the learning algorithm is just shown the input data and asked to extract knowledge from this data.

Types of unsupervised learning

We will look into two kinds of unsupervised learning in this chapter: transformations of the dataset and clustering.

Unsupervised transformations of a dataset are algorithms that create a new representation of the data which might be easier for humans or other machine learning algorithms to understand compared to the original representation of the data. A common application of unsupervised transformations is dimensionality reduction, which takes a high-dimensional representation of the data, consisting of many features, and finds a new way to represent this data that summarizes the essential characteristics with fewer features. A common application for dimensionality reduction is reduction to two dimensions for visualization purposes.

Another application for unsupervised transformations is finding the parts or components that "make up" the data. An example of this is topic extraction on collections of text documents. Here, the task is to find the unknown topics that are talked about in each document, and to learn what topics appear in each document. This can be useful for tracking the discussion of themes like elections, gun control, or pop stars on social media.

Clustering algorithms, on the other hand, partition data into distinct groups of similar items. Consider the example of uploading photos to a social media site. To allow you to organize your pictures, the site might want to group together pictures that show
the same person. However, the site doesn't know which pictures show whom, and it doesn't know how many different people appear in your photo collection. A sensible approach would be to extract all the faces and divide them into groups of faces that look similar. Hopefully, these correspond to the same person, and the images can be grouped together for you.

Challenges in unsupervised learning

A major challenge in unsupervised learning is evaluating whether the algorithm learned something useful. Unsupervised learning algorithms are usually applied to data that does not contain any label information, so we don't know what the right output should be. Therefore, it is very hard to say whether a model "did well." For example, our hypothetical clustering algorithm could have grouped together all the pictures that show faces in profile and all the full-face pictures. This would certainly be a possible way to divide a collection of pictures of people's faces, but it's not the one we were looking for. However, there is no way for us to "tell" the algorithm what we are looking for, and often the only way to evaluate the result of an unsupervised algorithm is to inspect it manually.

As a consequence, unsupervised algorithms are often used in an exploratory setting, when a data scientist wants to understand the data better, rather than as part of a larger automatic system. Another common application for unsupervised algorithms is as a preprocessing step for supervised algorithms. Learning a new representation of the data can sometimes improve the accuracy of supervised algorithms, or can lead to reduced memory and time consumption.

Before we start with "real" unsupervised algorithms, we will briefly discuss some simple preprocessing methods that often come in handy. Even though preprocessing and scaling are often used in tandem with supervised learning algorithms, scaling methods don't make use of the supervised information, making them unsupervised.

Preprocessing and scaling

In the previous chapter we saw that some algorithms, like neural networks and SVMs, are very sensitive to the scaling of the data. Therefore, a common practice is to adjust the features so that the data representation is more suitable for these algorithms. Often, this is a simple per-feature rescaling and shift of the data. The following code shows a simple example:

In[ ]:
mglearn.plots.plot_scaling()
Figure: different kinds of preprocessing and scaling applied to a synthetic dataset.

The first plot shows a synthetic two-class classification dataset with two features. The first feature (the x-axis value) is between 10 and 15. The second feature (the y-axis value) is between around 1 and 9. The following four plots show four different ways to transform the data that yield more standard ranges.

The StandardScaler in scikit-learn ensures that for each feature the mean is 0 and the variance is 1, bringing all features to the same magnitude. However, this scaling does not ensure any particular minimum and maximum values for the features.

The RobustScaler works similarly to the StandardScaler in that it ensures statistical properties for each feature that guarantee that they are on the same scale. However, the RobustScaler uses the median and quartiles instead of mean and variance. (The median of a set of numbers is the number x such that half of the numbers are smaller than x and half of the numbers are larger than x. The lower quartile is the number x such that one-fourth of the numbers are smaller than x, and the upper quartile is the number x such that one-fourth of the numbers are larger than x.) This makes the RobustScaler ignore data points that are very different from the rest (like measurement errors). These odd data points are also called outliers, and can lead to trouble for other scaling techniques.

The MinMaxScaler, on the other hand, shifts the data such that all features are exactly between 0 and 1. For the two-dimensional dataset this means all of the data is contained within the rectangle created by the x-axis between 0 and 1 and the y-axis between 0 and 1.

Finally, the Normalizer does a very different kind of rescaling. It scales each data point such that the feature vector has a Euclidean length of 1. In other words, it projects a data point on the circle (or sphere, in the case of higher dimensions) with a radius of 1. This means every data point is scaled by a different number (by the inverse of its length). This normalization is often used when only the direction (or angle) of the data matters, not the length of the feature vector.
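As a small illustration of how these four options differ in practice (a sketch, not from the book), the following applies each scaler to the same random data and prints the resulting per-feature ranges:

from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler, Normalizer
import numpy as np

rng = np.random.RandomState(0)
# two features on very different scales, roughly like the synthetic example above
X_demo = rng.normal(loc=[12, 5], scale=[1.5, 2.0], size=(100, 2))

for scaler in [StandardScaler(), RobustScaler(), MinMaxScaler(), Normalizer()]:
    X_tr = scaler.fit_transform(X_demo)
    print("{:<16} feature minima: {}  maxima: {}".format(
        type(scaler).__name__, X_tr.min(axis=0).round(2), X_tr.max(axis=0).round(2)))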
Applying data transformations

Now that we've seen what the different kinds of transformations do, let's apply them using scikit-learn. We will use the cancer dataset that we saw in the previous chapter. Preprocessing methods like the scalers are usually applied before applying a supervised machine learning algorithm. As an example, say we want to apply the kernel SVM (SVC) to the cancer dataset, and use the MinMaxScaler for preprocessing the data. We start by loading our dataset and splitting it into a training set and a test set (we need separate training and test sets to evaluate the supervised model we will build after the preprocessing):

In[ ]:
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=1)
print(X_train.shape)
print(X_test.shape)

Out[ ]:
(426, 30)
(143, 30)

As a reminder, the dataset contains 569 data points, each represented by 30 measurements. We split the dataset into 426 samples for the training set and 143 samples for the test set.

As with the supervised models we built earlier, we first import the class that implements the preprocessing, and then instantiate it:

In[ ]:
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
We then fit the scaler using the fit method, applied to the training data. For the MinMaxScaler, the fit method computes the minimum and maximum value of each feature on the training set. In contrast to the classifiers and regressors of the previous chapter, the scaler is only provided with the data (X_train) when fit is called, and y_train is not used:

In[ ]:
scaler.fit(X_train)

Out[ ]:
MinMaxScaler(copy=True, feature_range=(0, 1))

To apply the transformation that we just learned, that is, to actually scale the training data, we use the transform method of the scaler. The transform method is used in scikit-learn whenever a model returns a new representation of the data:

In[ ]:
# transform data
X_train_scaled = scaler.transform(X_train)
# print dataset properties before and after scaling
print("transformed shape: {}".format(X_train_scaled.shape))
print("per-feature minimum before scaling:\n{}".format(X_train.min(axis=0)))
print("per-feature maximum before scaling:\n{}".format(X_train.max(axis=0)))
print("per-feature minimum after scaling:\n{}".format(X_train_scaled.min(axis=0)))
print("per-feature maximum after scaling:\n{}".format(X_train_scaled.max(axis=0)))

Out[ ]:
transformed shape: (426, 30)
per-feature minimum before scaling:
per-feature maximum before scaling:
per-feature minimum after scaling:
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.
  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
per-feature maximum after scaling:
[ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.]
The transformed data has the same shape as the original data; the features are simply shifted and scaled. You can see that all of the features are now between 0 and 1, as desired.

To apply the SVM to the scaled data, we also need to transform the test set. This is again done by calling the transform method, this time on X_test:

In[ ]:
# transform test data
X_test_scaled = scaler.transform(X_test)
# print test data properties after scaling
print("per-feature minimum after scaling:\n{}".format(X_test_scaled.min(axis=0)))
print("per-feature maximum after scaling:\n{}".format(X_test_scaled.max(axis=0)))

Out[ ]:
per-feature minimum after scaling:
per-feature maximum after scaling:

Maybe somewhat surprisingly, you can see that for the test set, after scaling, the minimum and maximum are not 0 and 1. Some of the features are even outside the 0-1 range! The explanation is that the MinMaxScaler (and all the other scalers) always applies exactly the same transformation to the training and the test set. This means the transform method always subtracts the training set minimum and divides by the training set range, which might be different from the minimum and range for the test set.

Scaling training and test data the same way

It is important to apply exactly the same transformation to the training set and the test set for the supervised model to work on the test set. The following example illustrates what would happen if we were to use the minimum and range of the test set instead:

In[ ]:
from sklearn.datasets import make_blobs

# make synthetic data
X, _ = make_blobs(n_samples=50, centers=5, random_state=4, cluster_std=2)
# split it into training and test sets
X_train, X_test = train_test_split(X, random_state=5, test_size=.1)

# plot the training and test sets
fig, axes = plt.subplots(1, 3, figsize=(13, 4))
axes[0].scatter(X_train[:, 0], X_train[:, 1],
                c=mglearn.cm2(0), label="Training set", s=60)
axes[0].scatter(X_test[:, 0], X_test[:, 1], marker='^',
                c=mglearn.cm2(1), label="Test set", s=60)
axes[0].legend(loc='upper left')
axes[0].set_title("Original Data")

# scale the data using MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# visualize the properly scaled data
axes[1].scatter(X_train_scaled[:, 0], X_train_scaled[:, 1],
                c=mglearn.cm2(0), label="Training set", s=60)
axes[1].scatter(X_test_scaled[:, 0], X_test_scaled[:, 1], marker='^',
                c=mglearn.cm2(1), label="Test set", s=60)
axes[1].set_title("Scaled Data")

# rescale the test set separately, so test set min is 0 and test set max is 1
# DO NOT DO THIS! For illustration purposes only.
test_scaler = MinMaxScaler()
test_scaler.fit(X_test)
X_test_scaled_badly = test_scaler.transform(X_test)

# visualize wrongly scaled data
axes[2].scatter(X_train_scaled[:, 0], X_train_scaled[:, 1],
                c=mglearn.cm2(0), label="Training set", s=60)
axes[2].scatter(X_test_scaled_badly[:, 0], X_test_scaled_badly[:, 1],
                marker='^', c=mglearn.cm2(1), label="Test set", s=60)
axes[2].set_title("Improperly Scaled Data")

for ax in axes:
    ax.set_xlabel("Feature 0")
    ax.set_ylabel("Feature 1")

Figure: effect of scaling training and test data shown together (left), scaled consistently (center), and scaled separately (right).
The first panel is an unscaled two-dimensional dataset, with the training set shown as circles and the test set shown as triangles. The second panel is the same data, but scaled using the MinMaxScaler. Here, we called fit on the training set, and then called transform on the training and test sets. You can see that the dataset in the second panel looks identical to the first; only the ticks on the axes have changed. Now all the features are between 0 and 1. You can also see that the minimum and maximum feature values for the test data (the triangles) are not 0 and 1.

The third panel shows what would happen if we scaled the training set and test set separately. In this case, the minimum and maximum feature values for both the training and the test set are 0 and 1. But now the dataset looks different. The test points moved incongruously relative to the training set, as they were scaled differently. We changed the arrangement of the data in an arbitrary way. Clearly this is not what we want to do.

As another way to think about this, imagine your test set is a single point. There is no way to scale a single point correctly to fulfill the minimum and maximum requirements of the MinMaxScaler. But the size of your test set should not change your processing.

Shortcuts and efficient alternatives

Often, you want to fit a model on some dataset, and then transform it. This is a very common task, which can often be computed more efficiently than by simply calling fit and then transform. For this use case, all models that have a transform method also have a fit_transform method. Here is an example using StandardScaler:

In[ ]:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# calling fit and transform in sequence (using method chaining)
X_scaled = scaler.fit(X).transform(X)
# same result, but more efficient computation
X_scaled_d = scaler.fit_transform(X)

While fit_transform is not necessarily more efficient for all models, it is still good practice to use this method when trying to transform the training set.

The effect of preprocessing on supervised learning

Now let's go back to the cancer dataset and see the effect of using the MinMaxScaler on learning the SVC (this is a different way of doing the same scaling we did in the previous chapter). First, let's fit the SVC on the original data again for comparison:
In[ ]:
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

svm = SVC(C=100)
svm.fit(X_train, y_train)
print("Test set accuracy: {:.2f}".format(svm.score(X_test, y_test)))

Out[ ]:
Test set accuracy:

Now, let's scale the data using the MinMaxScaler before fitting the SVC:

In[ ]:
# preprocessing using 0-1 scaling
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# learning an SVM on the scaled training data
svm.fit(X_train_scaled, y_train)

# scoring on the scaled test set
print("Scaled test set accuracy: {:.2f}".format(
    svm.score(X_test_scaled, y_test)))

Out[ ]:
Scaled test set accuracy:

As we saw before, the effect of scaling the data is quite significant. Even though scaling the data doesn't involve any complicated math, it is good practice to use the scaling mechanisms provided by scikit-learn instead of reimplementing them yourself, as it's easy to make mistakes even in these simple computations.

You can also easily replace one preprocessing algorithm with another by changing the class you use, as all of the preprocessing classes have the same interface, consisting of the fit and transform methods:

In[ ]:
# preprocessing using zero mean and unit variance scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# learning an SVM on the scaled training data
svm.fit(X_train_scaled, y_train)

# scoring on the scaled test set
print("SVM test accuracy: {:.2f}".format(svm.score(X_test_scaled, y_test)))

Out[ ]:
SVM test accuracy:

Now that we've seen how simple data transformations for preprocessing work, let's move on to more interesting transformations using unsupervised learning.

Dimensionality reduction, feature extraction, and manifold learning

As we discussed earlier, transforming data using unsupervised learning can have many motivations. The most common motivations are visualization, compressing the data, and finding a representation that is more informative for further processing.

One of the simplest and most widely used algorithms for all of these is principal component analysis. We'll also look at two other algorithms: non-negative matrix factorization (NMF), which is commonly used for feature extraction, and t-SNE, which is commonly used for visualization using two-dimensional scatter plots.

Principal component analysis (PCA)

Principal component analysis is a method that rotates the dataset in a way such that the rotated features are statistically uncorrelated. This rotation is often followed by selecting only a subset of the new features, according to how important they are for explaining the data. The following example illustrates the effect of PCA on a synthetic two-dimensional dataset:

In[ ]:
mglearn.plots.plot_pca_illustration()

The first plot (top left) shows the original data points, colored to distinguish among them. The algorithm proceeds by first finding the direction of maximum variance, labeled "Component 1". This is the direction (or vector) in the data that contains most of the information, or in other words, the direction along which the features are most correlated with each other.

Then, the algorithm finds the direction that contains the most information while being orthogonal (at a right angle) to the first direction. In two dimensions, there is only one possible orientation that is at a right angle, but in higher-dimensional spaces there would be (infinitely) many orthogonal directions. Although the two components are drawn as arrows, it doesn't really matter where the head and the tail are; we could have drawn the first component from the center up to the top left instead of down to the bottom right. The directions found through this
process are called principal components, as they are the main directions of variance in the data. In general, there are as many principal components as original features.

Figure: transformation of data with PCA.

The second plot (top right) shows the same data, but now rotated so that the first principal component aligns with the x-axis and the second principal component aligns with the y-axis. Before the rotation, the mean was subtracted from the data, so that the transformed data is centered around zero. In the rotated representation found by PCA, the two axes are uncorrelated, meaning that the correlation matrix of the data in this representation is zero except for the diagonal.

We can use PCA for dimensionality reduction by retaining only some of the principal components. In this example, we might keep only the first principal component, as shown in the third panel of the figure. This reduces the data from a
two-dimensional dataset to a one-dimensional dataset. Note, however, that instead of keeping only one of the original features, we found the most interesting direction (top left to bottom right in the first panel) and kept this direction, the first principal component.

Finally, we can undo the rotation and add the mean back to the data. This will result in the data shown in the last panel of the figure. These points are in the original feature space, but we kept only the information contained in the first principal component. This transformation is sometimes used to remove noise effects from the data or visualize what part of the information is retained using the principal components.

Applying PCA to the cancer dataset for visualization

One of the most common applications of PCA is visualizing high-dimensional datasets. As we saw earlier, it is hard to create scatter plots of data that has more than two features. For the Iris dataset, we were able to create a pair plot that gave us a partial picture of the data by showing us all the possible combinations of two features. But if we want to look at the Breast Cancer dataset, even using a pair plot is tricky. This dataset has 30 features, which would result in 30 * 14 = 420 scatter plots! We'd never be able to look at all these plots in detail, let alone try to understand them.

There is an even simpler visualization we can use, though: computing histograms of each of the features for the two classes, benign and malignant cancer:

In[ ]:
fig, axes = plt.subplots(15, 2, figsize=(10, 20))
malignant = cancer.data[cancer.target == 0]
benign = cancer.data[cancer.target == 1]

ax = axes.ravel()

for i in range(30):
    _, bins = np.histogram(cancer.data[:, i], bins=50)
    ax[i].hist(malignant[:, i], bins=bins, color=mglearn.cm3(0), alpha=.5)
    ax[i].hist(benign[:, i], bins=bins, color=mglearn.cm3(2), alpha=.5)
    ax[i].set_title(cancer.feature_names[i])
    ax[i].set_yticks(())
ax[0].set_xlabel("Feature magnitude")
ax[0].set_ylabel("Frequency")
ax[0].legend(["malignant", "benign"], loc="best")
fig.tight_layout()
Figure: per-feature histograms of the benign and malignant classes in the Breast Cancer dataset.
Here we create a histogram for each of the features, counting how often a data point appears with a feature value in a certain range (called a bin). Each plot overlays two histograms, one for all of the points in the benign class (blue) and one for all the points in the malignant class (red). This gives us some idea of how each feature is distributed across the two classes, and allows us to venture a guess as to which features are better at distinguishing malignant and benign samples. For example, the feature "smoothness error" seems quite uninformative, because the two histograms mostly overlap, while the feature "worst concave points" seems quite informative, because the histograms are quite disjoint.

However, this plot doesn't show us anything about the interactions between variables and how these relate to the classes. Using PCA, we can capture the main interactions and get a slightly more complete picture. We can find the first two principal components, and visualize the data in this new two-dimensional space with a single scatter plot.

Before we apply PCA, we scale our data so that each feature has unit variance using StandardScaler:

In[ ]:
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()

scaler = StandardScaler()
scaler.fit(cancer.data)
X_scaled = scaler.transform(cancer.data)

Learning the PCA transformation and applying it is as simple as applying a preprocessing transformation. We instantiate the PCA object, find the principal components by calling the fit method, and then apply the rotation and dimensionality reduction by calling transform. By default, PCA only rotates (and shifts) the data, but keeps all principal components. To reduce the dimensionality of the data, we need to specify how many components we want to keep when creating the PCA object:

In[ ]:
from sklearn.decomposition import PCA
# keep the first two principal components of the data
pca = PCA(n_components=2)
# fit PCA model to breast cancer data
pca.fit(X_scaled)

# transform data onto the first two principal components
X_pca = pca.transform(X_scaled)
print("Original shape: {}".format(str(X_scaled.shape)))
print("Reduced shape: {}".format(str(X_pca.shape)))
Out[ ]:
Original shape: (569, 30)
Reduced shape: (569, 2)

We can now plot the first two principal components:

In[ ]:
# plot first vs. second principal component, colored by class
plt.figure(figsize=(8, 8))
mglearn.discrete_scatter(X_pca[:, 0], X_pca[:, 1], cancer.target)
plt.legend(cancer.target_names, loc="best")
plt.gca().set_aspect("equal")
plt.xlabel("First principal component")
plt.ylabel("Second principal component")

Figure: two-dimensional scatter plot of the Breast Cancer dataset using the first two principal components.

It is important to note that PCA is an unsupervised method, and does not use any class information when finding the rotation. It simply looks at the correlations in the data. For the scatter plot shown here, we plotted the first principal component against the second principal component, and then used the class information to color the points.
You can see that the two classes separate quite well in this two-dimensional space. This leads us to believe that even a linear classifier (that would learn a line in this space) could do a reasonably good job at distinguishing the two classes. We can also see that the malignant (red) points are more spread out than the benign (blue) points, something that we could already see a bit from the histograms in the earlier figure.

A downside of PCA is that the two axes in the plot are often not very easy to interpret. The principal components correspond to directions in the original data, so they are combinations of the original features. However, these combinations are usually very complex, as we'll see shortly. The principal components themselves are stored in the components_ attribute of the PCA object during fitting:

In[ ]:
print("PCA component shape: {}".format(pca.components_.shape))

Out[ ]:
PCA component shape: (2, 30)

Each row in components_ corresponds to one principal component, and they are sorted by their importance (the first principal component comes first, etc.). The columns correspond to the original features attribute of the PCA, in this example "mean radius", "mean texture", and so on. Let's have a look at the content of components_:

In[ ]:
print("PCA components:\n{}".format(pca.components_))

Out[ ]:
PCA components:

We can also visualize the coefficients using a heat map, which might be easier to understand:

In[ ]:
plt.matshow(pca.components_, cmap='viridis')
plt.yticks([0, 1], ["First component", "Second component"])
plt.colorbar()
plt.xticks(range(len(cancer.feature_names)),
           cancer.feature_names, rotation=60, ha='left')
plt.xlabel("Feature")
plt.ylabel("Principal components")
17,140 | you can see that in the first componentall features have the same sign (it' negativebut as we mentioned earlierit doesn' matter which direction the arrow points inthat means that there is general correlation between all features as one measurement is highthe others are likely to be high as well the second component has mixed signsand both of the components involve all of the features this mixing of all features is what makes explaining the axes in figure - so tricky eigenfaces for feature extraction another application of pca that we mentioned earlier is feature extraction the idea behind feature extraction is that it is possible to find representation of your data that is better suited to analysis than the raw representation you were given great example of an application where feature extraction is helpful is with images images are made up of pixelsusually stored as redgreenand blue (rgbintensities objects in images are usually made up of thousands of pixelsand only together are they meaningful we will give very simple application of feature extraction on images using pcaby working with face images from the labeled faces in the wild dataset this dataset contains face images of celebrities downloaded from the internetand it includes faces of politicianssingersactorsand athletes from the early we use grayscale versions of these imagesand scale them down for faster processing you can see some of the images in figure - in[ ]from sklearn datasets import fetch_lfw_people people fetch_lfw_people(min_faces_per_person= resize= image_shape people images[ shape fixaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}for targetimageax in zip(people targetpeople imagesaxes ravel())ax imshow(imageax set_title(people target_names[target]dimensionality reductionfeature extractionand manifold learning |
17,141 | there are , imageseach pixels largebelonging to different peoplein[ ]print("people images shape{}format(people images shape)print("number of classes{}format(len(people target_names))out[ ]people images shape( number of classes the dataset is bit skewedhowevercontaining lot of images of george bush and colin powellas you can see herein[ ]count how often each target appears counts np bincount(people targetprint counts next to target names for (countnamein enumerate(zip(countspeople target_names))print("{ : { : }format(namecount)end='if ( = print( unsupervised learning and preprocessing |
17,142 | alejandro toledo amelie mauresmo angelina jolie atal bihari vajpayee carlos menem david beckham george bush gerhard schroeder gray davis hamid karzai hugo chavez laura bush lleyton hewitt mahmoud abbas michael bloomberg nestor kirchner pete sampras ricardo lagos rudolph giuliani serena williams tiger woods tom ridge vicente fox winona ryder alvaro uribe andre agassi arnold schwarzenegger bill clinton colin powell donald rumsfeld george robertson gloria macapagal arroyo guillermo coria hans blix igor ivanov lindsay davenport luiz inacio lula da silva megawati sukarnoputri naomi watts paul bremer recep tayyip erdogan roh moo-hyun saddam hussein silvio berlusconi tom daschle tony blair vladimir putin to make the data less skewedwe will only take up to images of each person (otherwisethe feature extraction would be overwhelmed by the likelihood of george bush)in[ ]mask np zeros(people target shapedtype=np boolfor target in np unique(people target)mask[np where(people target =target)[ ][: ] x_people people data[masky_people people target[maskscale the grayscale values to be between and instead of and for better numeric stability x_people x_people common task in face recognition is to ask if previously unseen face belongs to known person from database this has applications in photo collectionsocial mediaand security applications one way to solve this problem would be to build classifier where each person is separate class howeverthere are usually many different people in face databasesand very few images of the same person ( very few training examples per classthat makes it hard to train most classifiers additionallydimensionality reductionfeature extractionand manifold learning |
17,143 | model simple solution is to use one-nearest-neighbor classifier that looks for the most similar face image to the face you are classifying this classifier could in principle work with only single training example per class let' take look at how well kneighborsclassifier does herein[ ]from sklearn neighbors import kneighborsclassifier split the data into training and test sets x_trainx_testy_trainy_test train_test_splitx_peopley_peoplestratify=y_peoplerandom_state= build kneighborsclassifier using one neighbor knn kneighborsclassifier(n_neighbors= knn fit(x_trainy_trainprint("test set score of -nn{ }format(knn score(x_testy_test))out[ ]test set score of -nn we obtain an accuracy of %which is not actually that bad for -class classification problem (random guessing would give you around / accuracy)but is also not great we only correctly identify person every fourth time this is where pca comes in computing distances in the original pixel space is quite bad way to measure similarity between faces when using pixel representation to compare two imageswe compare the grayscale value of each individual pixel to the value of the pixel in the corresponding position in the other image this representation is quite different from how humans would interpret the image of faceand it is hard to capture the facial features using this raw representation for exampleusing pixel distances means that shifting face by one pixel to the right corresponds to drastic changewith completely different representation we hope that using distances along principal components can improve our accuracy herewe enable the whitening option of pcawhich rescales the principal components to have the same scale this is the same as using standardscaler after the transformation reusing the data from figure - againwhitening corresponds to not only rotating the databut also rescaling it so that the center panel is circle instead of an ellipse (see figure - )in[ ]mglearn plots plot_pca_whitening( unsupervised learning and preprocessing |
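The remark that whitening is the same as applying StandardScaler after the PCA transformation can be checked on a small synthetic dataset. This is only a sketch with made-up data (X_demo is not from the book); the two results agree up to sign flips and a tiny difference in how the standard deviation is normalized:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(300, 5))            # illustrative data only

X_whitened = PCA(n_components=2, whiten=True).fit_transform(X_demo)
X_plain = PCA(n_components=2).fit_transform(X_demo)
X_rescaled = StandardScaler().fit_transform(X_plain)

# whitened components have (roughly) unit variance ...
print("std of whitened components: {}".format(X_whitened.std(axis=0)))
# ... and match PCA followed by StandardScaler up to sign and a small
# n/(n-1) normalization factor
print("matches PCA followed by StandardScaler: {}".format(
    np.allclose(np.abs(X_whitened), np.abs(X_rescaled), rtol=1e-2)))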
17,144 | we fit the pca object to the training data and extract the first principal components then we transform the training and test datain[ ]pca pca(n_components= whiten=truerandom_state= fit(x_trainx_train_pca pca transform(x_trainx_test_pca pca transform(x_testprint("x_train_pca shape{}format(x_train_pca shape)out[ ]x_train_pca shape( the new data has featuresthe first principal components nowwe can use the new representation to classify our images using one-nearest-neighbors classifierin[ ]knn kneighborsclassifier(n_neighbors= knn fit(x_train_pcay_trainprint("test set accuracy{ }format(knn score(x_test_pcay_test))out[ ]test set accuracy our accuracy improved quite significantlyfrom to %confirming our intuition that the principal components might provide better representation of the data dimensionality reductionfeature extractionand manifold learning |
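Why 100 components? The text simply picks that value; a small experiment (not from the book) is to loop over a few settings and compare the resulting one-nearest-neighbor accuracies, reusing the X_train/X_test split of the face data from above. Temporary names (pca_tmp, knn_tmp) are used so the pca and knn objects from the surrounding cells are not overwritten:

from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# assumes X_train, X_test, y_train, y_test from the face-data split above
for n_components in [10, 50, 100, 200]:
    pca_tmp = PCA(n_components=n_components, whiten=True, random_state=0)
    pca_tmp.fit(X_train)
    knn_tmp = KNeighborsClassifier(n_neighbors=1)
    knn_tmp.fit(pca_tmp.transform(X_train), y_train)
    print("n_components={:4d}  test accuracy: {:.2f}".format(
        n_components, knn_tmp.score(pca_tmp.transform(X_test), y_test)))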
17,145 | remember that components correspond to directions in the input space the input space here is -pixel grayscale imagesso directions within this space are also -pixel grayscale images let' look at the first couple of principal components (figure - )in[ ]print("pca components_ shape{}format(pca components_ shape)out[ ]pca components_ shape( in[ ]fixaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}for (componentaxin enumerate(zip(pca components_axes ravel()))ax imshow(component reshape(image_shape)cmap='viridis'ax set_title("{componentformat(( ))while we certainly cannot understand all aspects of these componentswe can guess which aspects of the face images some of the components are capturing the first component seems to mostly encode the contrast between the face and the backgroundthe second component encodes differences in lighting between the right and the left half of the faceand so on while this representation is slightly more semantic than the raw pixel valuesit is still quite far from how human might perceive face as the pca model is based on pixelsthe alignment of the face (the position of eyeschinand noseand the lighting both have strong influence on how similar two images are in their pixel representation but alignment and lighting are probably not what human would perceive first when asking people to rate similarity of facesthey are more likely to use attributes like agegenderfacial expressionand hair stylewhich are attributes that are hard to infer from the pixel intensities it' important to keep in mind that algorithms often interpret data (particularly visual datasuch as imageswhich humans are very familiar withquite differently from how human would unsupervised learning and preprocessing |
17,146 | let' come back to the specific case of pcathough we introduced the pca transformation as rotating the data and then dropping the components with low variance another useful interpretation is to try to find some numbers (the new feature values after the pca rotationso that we can express the test points as weighted sum of the principal components (see figure - figure - schematic view of pca as decomposing an image into weighted sum of components herex and so on are the coefficients of the principal components for this data pointin other wordsthey are the representation of the image in the rotated space dimensionality reductionfeature extractionand manifold learning |
17,147 | the reconstructions of the original data using only some components in figure - after dropping the second component and arriving at the third panelwe undid the rotation and added the mean back to obtain new points in the original space with the second component removedas shown in the last panel we can do similar transformation for the faces by reducing the data to only some principal components and then rotating back into the original space this return to the original feature space can be done using the inverse_transform method herewe visualize the reconstruction of some faces using or , components (figure - )in[ ]mglearn plots plot_pca_faces(x_trainx_testimage_shapefigure - reconstructing three face images using increasing numbers of principal components you can see that when we use only the first principal componentsonly the essence of the picturelike the face orientation and lightingis captured by using more and more principal componentsmore and more details in the image are preserved this unsupervised learning and preprocessing |
17,148 | using as many components as there are pixels would mean that we would not discard any information after the rotationand we would reconstruct the image perfectly we can also try to use pca to visualize all the faces in the dataset in scatter plot using the first two principal components (figure - )with classes given by who is shown in the imagesimilarly to what we did for the cancer datasetin[ ]mglearn discrete_scatter(x_train_pca[: ]x_train_pca[: ]y_trainplt xlabel("first principal component"plt ylabel("second principal component"figure - scatter plot of the faces dataset using the first two principal components (see figure - for the corresponding image for the cancer datasetas you can seewhen we use only the first two principal components the whole data is just big blobwith no separation of classes visible this is not very surprisinggiven that even with componentsas shown earlier in figure - pca only captures very rough characteristics of the faces dimensionality reductionfeature extractionand manifold learning |
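Under the hood, for a PCA fit without whitening, transform and inverse_transform are just a centered matrix product with components_ (whitening adds an extra per-component rescaling, so it is left off here). A minimal sketch on made-up data, not the faces:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(200, 10))           # illustrative data only

pca_demo = PCA(n_components=3).fit(X_demo)
X_proj = pca_demo.transform(X_demo)

# manual equivalents: project the centered data onto the components, and map
# the projections back (an approximation, since components were dropped)
X_proj_manual = np.dot(X_demo - pca_demo.mean_, pca_demo.components_.T)
X_back_manual = np.dot(X_proj, pca_demo.components_) + pca_demo.mean_

print("transform matches manual projection: {}".format(
    np.allclose(X_proj, X_proj_manual)))
print("inverse_transform matches manual reconstruction: {}".format(
    np.allclose(pca_demo.inverse_transform(X_proj), X_back_manual)))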
17,149 | non-negative matrix factorization is another unsupervised learning algorithm that aims to extract useful features it works similarly to pca and can also be used for dimensionality reduction as in pcawe are trying to write each data point as weighted sum of some componentsas illustrated in figure - but whereas in pca we wanted components that were orthogonal and that explained as much variance of the data as possiblein nmfwe want the components and the coefficients to be nonnegativethat iswe want both the components and the coefficients to be greater than or equal to zero consequentlythis method can only be applied to data where each feature is non-negativeas non-negative sum of non-negative components cannot become negative the process of decomposing data into non-negative weighted sum is particularly helpful for data that is created as the addition (or overlayof several independent sourcessuch as an audio track of multiple people speakingor music with many instruments in these situationsnmf can identify the original components that make up the combined data overallnmf leads to more interpretable components than pcaas negative components and coefficients can lead to hard-to-interpret cancellation effects the eigenfaces in figure - for examplecontain both positive and negative partsand as we mentioned in the description of pcathe sign is actually arbitrary before we apply nmf to the face datasetlet' briefly revisit the synthetic data applying nmf to synthetic data in contrast to when using pcawe need to ensure that our data is positive for nmf to be able to operate on the data this means where the data lies relative to the origin ( actually matters for nmf thereforeyou can think of the non-negative components that are extracted as directions from ( toward the data the following example (figure - shows the results of nmf on the twodimensional toy datain[ ]mglearn plots plot_nmf_illustration( unsupervised learning and preprocessing |
17,150 | nents (leftand one component (rightfor nmf with two componentsas shown on the leftit is clear that all points in the data can be written as positive combination of the two components if there are enough components to perfectly reconstruct the data (as many components as there are features)the algorithm will choose directions that point toward the extremes of the data if we only use single componentnmf creates component that points toward the meanas pointing there best explains the data you can see that in contrast with pcareducing the number of components not only removes some directionsbut creates an entirely different set of componentscomponents in nmf are also not ordered in any specific wayso there is no "first non-negative component"all components play an equal part nmf uses random initializationwhich might lead to different results depending on the random seed in relatively simple cases such as the synthetic data with two componentswhere all the data can be explained perfectlythe randomness has little effect (though it might change the order or scale of the componentsin more complex situationsthere might be more drastic changes applying nmf to face images nowlet' apply nmf to the labeled faces in the wild dataset we used earlier the main parameter of nmf is how many components we want to extract usually this is lower than the number of input features (otherwisethe data could be explained by making each pixel separate componentfirstlet' inspect how the number of components impacts how well the data can be reconstructed using nmf (figure - )dimensionality reductionfeature extractionand manifold learning |
17,151 | mglearn plots plot_nmf_faces(x_trainx_testimage_shapefigure - reconstructing three face images using increasing numbers of components found by nmf the quality of the back-transformed data is similar to when using pcabut slightly worse this is expectedas pca finds the optimum directions in terms of reconstruction nmf is usually not used for its ability to reconstruct or encode databut rather for finding interesting patterns within the data as first look into the datalet' try extracting only few components (say figure - shows the result unsupervised learning and preprocessing |
17,152 | from sklearn decomposition import nmf nmf nmf(n_components= random_state= nmf fit(x_trainx_train_nmf nmf transform(x_trainx_test_nmf nmf transform(x_testfixaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}for (componentaxin enumerate(zip(nmf components_axes ravel()))ax imshow(component reshape(image_shape)ax set_title("{componentformat( )figure - the components found by nmf on the faces dataset when using components these components are all positiveand so resemble prototypes of faces much more so than the components shown for pca in figure - for exampleone can clearly see that component shows face rotated somewhat to the rightwhile component shows face somewhat rotated to the left let' look at the images for which these components are particularly strongshown in figures - and - dimensionality reductionfeature extractionand manifold learning |
17,153 | compn sort by rd componentplot first images inds np argsort(x_train_nmf[:compn])[::- figaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}for (indaxin enumerate(zip(indsaxes ravel()))ax imshow(x_train[indreshape(image_shape)compn sort by th componentplot first images inds np argsort(x_train_nmf[:compn])[::- figaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}for (indaxin enumerate(zip(indsaxes ravel()))ax imshow(x_train[indreshape(image_shape)figure - faces that have large coefficient for component unsupervised learning and preprocessing |
17,154 | as expectedfaces that have high coefficient for component are faces looking to the right (figure - )while faces with high coefficient for component are looking to the left (figure - as mentioned earlierextracting patterns like these works best for data with additive structureincluding audiogene expressionand text data let' walk through one example on synthetic data to see what this might look like let' say we are interested in signal that is combination of three different sources (figure - )in[ ] mglearn datasets make_signals(plt figure(figsize=( )plt plot( '-'plt xlabel("time"plt ylabel("signal"figure - original signal sources dimensionality reductionfeature extractionand manifold learning |
17,155 | all three of them we want to recover the decomposition of the mixed signal into the original components we assume that we have many different ways to observe the mixture (say measurement devices)each of which provides us with series of measurementsin[ ]mix data into -dimensional state np random randomstate( uniform(size=( ) np dot(sa tprint("shape of measurements{}format( shape)out[ ]shape of measurements( we can use nmf to recover the three signalsin[ ]nmf nmf(n_components= random_state= s_ nmf fit_transform(xprint("recovered signal shape{}format(s_ shape)out[ ]recovered signal shape( for comparisonwe also apply pcain[ ]pca pca(n_components= pca fit_transform(xfigure - shows the signal activity that was discovered by nmf and pcain[ ]models [xss_hnames ['observations (first three measurements)''true sources''nmf recovered signals''pca recovered signals'figaxes plt subplots( figsize=( )gridspec_kw={'hspace' }subplot_kw={'xticks'()'yticks'()}for modelnameax in zip(modelsnamesaxes)ax set_title(nameax plot(model[:: ]'-' unsupervised learning and preprocessing |
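Before looking at the figure, it is worth spelling out the model NMF fit here: the mixed measurements are approximated as the product of the coefficients returned by fit_transform and the components, X ≈ W·H. A small sketch reusing X, nmf, and S_ from the cells above:

import numpy as np

# NMF models the mixed measurements as X ≈ W H with non-negative factors.
# W (= S_ above) has one row per time step and one column per recovered signal;
# H (= nmf.components_) has one row per signal and one column per measurement.
W = S_
H = nmf.components_
X_approx = np.dot(W, H)

print("W shape: {}, H shape: {}, X shape: {}".format(W.shape, H.shape, X.shape))
print("mean absolute reconstruction error: {:.4f}".format(
    np.mean(np.abs(X - X_approx))))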
17,156 | the figure includes of the measurements from for reference as you can seenmf did reasonable job of discovering the original sourceswhile pca failed and used the first component to explain the majority of the variation in the data keep in mind that the components produced by nmf have no natural ordering in this examplethe ordering of the nmf components is the same as in the original signal (see the shading of the three curves)but this is purely accidental there are many other algorithms that can be used to decompose each data point into weighted sum of fixed set of componentsas pca and nmf do discussing all of them is beyond the scope of this bookand describing the constraints made on the components and coefficients often involves probability theory if you are interested in this kind of pattern extractionwe recommend that you study the sections of the sci kit_learn user guide on independent component analysis (ica)factor analysis (fa)and sparse coding (dictionary learning)all of which you can find on the page about decomposition methods manifold learning with -sne while pca is often good first approach for transforming your data so that you might be able to visualize it using scatter plotthe nature of the method (applying rotation and then dropping directionslimits its usefulnessas we saw with the scatter plot of the labeled faces in the wild dataset there is class of algorithms for visualization called manifold learning algorithms that allow for much more complex mappingsand often provide better visualizations particularly useful one is the -sne algorithm dimensionality reductionfeature extractionand manifold learning |
17,157 | used to generate more than two new features some of themincluding -snecompute new representation of the training databut don' allow transformations of new data this means these algorithms cannot be applied to test setratherthey can only transform the data they were trained for manifold learning can be useful for exploratory data analysisbut is rarely used if the final goal is supervised learning the idea behind -sne is to find two-dimensional representation of the data that preserves the distances between points as best as possible -sne starts with random twodimensional representation for each data pointand then tries to make points that are close in the original feature space closerand points that are far apart in the original feature space farther apart -sne puts more emphasis on points that are close byrather than preserving distances between far-apart points in other wordsit tries to preserve the information indicating which points are neighbors to each other we will apply the -sne manifold learning algorithm on dataset of handwritten digits that is included in scikit-learn each data point in this dataset is an grayscale image of handwritten digit between and figure - shows an example image for each classin[ ]from sklearn datasets import load_digits digits load_digits(figaxes plt subplots( figsize=( )subplot_kw={'xticks':()'yticks'()}for aximg in zip(axes ravel()digits images)ax imshow(img not to be confused with the much larger mnist dataset unsupervised learning and preprocessing |
17,158 | let' use pca to visualize the data reduced to two dimensions we plot the first two principal componentsand color each dot by its class (see figure - )in[ ]build pca model pca pca(n_components= pca fit(digits datatransform the digits data onto the first two principal components digits_pca pca transform(digits datacolors ["# ""# ""#bd ""# ""# ""# ""# ""# ""# ""# "plt figure(figsize=( )plt xlim(digits_pca[: min()digits_pca[: max()plt ylim(digits_pca[: min()digits_pca[: max()for in range(len(digits data))actually plot the digits as text instead of using scatter plt text(digits_pca[ ]digits_pca[ ]str(digits target[ ])color colors[digits target[ ]]fontdict={'weight''bold''size' }plt xlabel("first principal component"plt ylabel("second principal component"herewe actually used the true digit classes as glyphsto show which class is where the digits zerosixand four are relatively well separated using the first two principal componentsthough they still overlap most of the other digits overlap significantly dimensionality reductionfeature extractionand manifold learning |
17,159 | let' apply -sne to the same datasetand compare the results as -sne does not support transforming new datathe tsne class has no transform method insteadwe can call the fit_transform methodwhich will build the model and immediately return the transformed data (see figure - )in[ ]from sklearn manifold import tsne tsne tsne(random_state= use fit_transform instead of fitas tsne has no transform method digits_tsne tsne fit_transform(digits data unsupervised learning and preprocessing |
17,160 | plt figure(figsize=( )plt xlim(digits_tsne[: min()digits_tsne[: max( plt ylim(digits_tsne[: min()digits_tsne[: max( for in range(len(digits data))actually plot the digits as text instead of using scatter plt text(digits_tsne[ ]digits_tsne[ ]str(digits target[ ])color colors[digits target[ ]]fontdict={'weight''bold''size' }plt xlabel(" -sne feature "plt ylabel(" -sne feature "figure - scatter plot of the digits dataset using two components found by -sne dimensionality reductionfeature extractionand manifold learning |
17,161 | the ones and nines are somewhat split upbut most of the classes form single dense group keep in mind that this method has no knowledge of the class labelsit is completely unsupervised stillit can find representation of the data in two dimensions that clearly separates the classesbased solely on how close points are in the original space the -sne algorithm has some tuning parametersthough it often works well with the default settings you can try playing with perplexity and early_exaggerationbut the effects are usually minor clustering as we described earlierclustering is the task of partitioning the dataset into groupscalled clusters the goal is to split up the data in such way that points within single cluster are very similar and points in different clusters are different similarly to classification algorithmsclustering algorithms assign (or predicta number to each data pointindicating which cluster particular point belongs to -means clustering -means clustering is one of the simplest and most commonly used clustering algorithms it tries to find cluster centers that are representative of certain regions of the data the algorithm alternates between two stepsassigning each data point to the closest cluster centerand then setting each cluster center as the mean of the data points that are assigned to it the algorithm is finished when the assignment of instances to clusters no longer changes the following example (figure - illustrates the algorithm on synthetic datasetin[ ]mglearn plots plot_kmeans_algorithm( unsupervised learning and preprocessing |
17,162 | cluster centers are shown as triangleswhile data points are shown as circles colors indicate cluster membership we specified that we are looking for three clustersso the algorithm was initialized by declaring three data points randomly as cluster centers (see "initialization"then the iterative algorithm starts firsteach data point is assigned to the cluster center it is closest to (see "assign points ( )"nextthe cluster centers are updated to be the mean of the assigned points (see "recompute centers ( )"then the process is repeated two more times after the third iterationthe assignment of points to cluster centers remained unchangedso the algorithm stops given new data pointsk-means will assign each to the closest cluster center the next example (figure - shows the boundaries of the cluster centers that were learned in figure - in[ ]mglearn plots plot_kmeans_boundaries(clustering |
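The two alternating steps just described, assigning points to the closest center and recomputing each center as the mean of its points, can be written down in a few lines of NumPy. This is only an illustrative sketch (scikit-learn's KMeans, used next, is the implementation to actually rely on):

import numpy as np
from sklearn.datasets import make_blobs

X, _ = make_blobs(random_state=1)
rng = np.random.RandomState(0)
centers = X[rng.choice(len(X), size=3, replace=False)]   # random initialization

for iteration in range(10):
    # step 1: assign every point to its closest cluster center
    distances = np.linalg.norm(X[:, np.newaxis, :] - centers, axis=2)
    labels = np.argmin(distances, axis=1)
    # step 2: move every center to the mean of the points assigned to it
    # (keep a center in place if it happens to have no points)
    new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(3)])
    if np.allclose(new_centers, centers):
        print("centers stopped moving after {} iterations".format(iteration))
        break
    centers = new_centers

print("cluster sizes: {}".format(np.bincount(labels)))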
17,163 | applying -means with scikit-learn is quite straightforward herewe apply it to the synthetic data that we used for the preceding plots we instantiate the kmeans classand set the number of clusters we are looking for then we call the fit method with the datain[ ]from sklearn datasets import make_blobs from sklearn cluster import kmeans generate synthetic two-dimensional data xy make_blobs(random_state= build the clustering model kmeans kmeans(n_clusters= kmeans fit(xduring the algorithmeach training data point in is assigned cluster label you can find these labels in the kmeans labels_ attribute if you don' provide n_clustersit is set to by default there is no particular reason why you should use this value unsupervised learning and preprocessing |
17,164 | print("cluster memberships:\ {}format(kmeans labels_)out[ ]cluster memberships[ as we asked for three clustersthe clusters are numbered to you can also assign cluster labels to new pointsusing the predict method each new point is assigned to the closest cluster center when predictingbut the existing model is not changed running predict on the training set returns the same result as labels_in[ ]print(kmeans predict( )out[ ][ you can see that clustering is somewhat similar to classificationin that each item gets label howeverthere is no ground truthand consequently the labels themselves have no priori meaning let' go back to the example of clustering face images that we discussed before it might be that the cluster found by the algorithm contains only faces of your friend bela you can only know that after you look at the picturesthoughand the number is arbitrary the only information the algorithm gives you is that all faces labeled as are similar for the clustering we just computed on the two-dimensional toy datasetthat means that we should not assign any significance to the fact that one group was labeled and another one was labeled running the algorithm again might result in different numbering of clusters because of the random nature of the initialization here is plot of this data again (figure - the cluster centers are stored in the cluster_centers_ attributeand we plot them as trianglesin[ ]mglearn discrete_scatter( [: ] [: ]kmeans labels_markers=' 'mglearn discrete_scatterkmeans cluster_centers_[: ]kmeans cluster_centers_[: ][ ]markers='^'markeredgewidth= clustering |
17,165 | clusters we can also use more or fewer cluster centers (figure - )in[ ]figaxes plt subplots( figsize=( )using two cluster centerskmeans kmeans(n_clusters= kmeans fit(xassignments kmeans labels_ mglearn discrete_scatter( [: ] [: ]assignmentsax=axes[ ]using five cluster centerskmeans kmeans(n_clusters= kmeans fit(xassignments kmeans labels_ mglearn discrete_scatter( [: ] [: ]assignmentsax=axes[ ] unsupervised learning and preprocessing |
17,166 | clusters (rightfailure cases of -means even if you know the "rightnumber of clusters for given datasetk-means might not always be able to recover them each cluster is defined solely by its centerwhich means that each cluster is convex shape as result of thisk-means can only capture relatively simple shapes -means also assumes that all clusters have the same "diameterin some senseit always draws the boundary between clusters to be exactly in the middle between the cluster centers that can sometimes lead to surprising resultsas shown in figure - in[ ]x_variedy_varied make_blobs(n_samples= cluster_std=[ ]random_state= y_pred kmeans(n_clusters= random_state= fit_predict(x_variedmglearn discrete_scatter(x_varied[: ]x_varied[: ]y_predplt legend(["cluster ""cluster ""cluster "]loc='best'plt xlabel("feature "plt ylabel("feature "clustering |
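The text leaves open how one might decide between, say, two and five centers. One common heuristic (not covered in the text) is to look at KMeans' inertia_ attribute, the within-cluster sum of squared distances, and watch where its decrease flattens out:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, y = make_blobs(random_state=1)
for k in range(1, 7):
    km = KMeans(n_clusters=k, random_state=0).fit(X)
    # inertia_ always decreases as k grows; the interesting part is where
    # the decrease levels off
    print("n_clusters={}  inertia={:8.1f}".format(k, km.inertia_))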
17,167 | densities one might have expected the dense region in the lower left to be the first clusterthe dense region in the upper right to be the secondand the less dense region in the center to be the third insteadboth cluster and cluster have some points that are far away from all the other points in these clusters that "reachtoward the center -means also assumes that all directions are equally important for each cluster the following plot (figure - shows two-dimensional dataset where there are three clearly separated parts in the data howeverthese groups are stretched toward the diagonal as -means only considers the distance to the nearest cluster centerit can' handle this kind of datain[ ]generate some random cluster data xy make_blobs(random_state= n_samples= rng np random randomstate( transform the data to be stretched transformation rng normal(size=( ) np dot(xtransformation unsupervised learning and preprocessing |
17,168 | kmeans kmeans(n_clusters= kmeans fit(xy_pred kmeans predict(xplot the cluster assignments and cluster centers plt scatter( [: ] [: ] =y_predcmap=mglearn cm plt scatter(kmeans cluster_centers_[: ]kmeans cluster_centers_[: ]marker='^' =[ ] = linewidth= cmap=mglearn cm plt xlabel("feature "plt ylabel("feature "figure - -means fails to identify nonspherical clusters -means also performs poorly if the clusters have more complex shapeslike the two_moons data we encountered in (see figure - )in[ ]generate synthetic two_moons data (with less noise this timefrom sklearn datasets import make_moons xy make_moons(n_samples= noise= random_state= cluster the data into two clusters kmeans kmeans(n_clusters= kmeans fit(xy_pred kmeans predict(xclustering |
17,169 | plt scatter( [: ] [: ] =y_predcmap=mglearn cm = plt scatter(kmeans cluster_centers_[: ]kmeans cluster_centers_[: ]marker='^' =[mglearn cm ( )mglearn cm ( )] = linewidth= plt xlabel("feature "plt ylabel("feature "figure - -means fails to identify clusters with complex shapes herewe would hope that the clustering algorithm can discover the two half-moon shapes howeverthis is not possible using the -means algorithm vector quantizationor seeing -means as decomposition even though -means is clustering algorithmthere are interesting parallels between -means and the decomposition methods like pca and nmf that we discussed earlier you might remember that pca tries to find directions of maximum variance in the datawhile nmf tries to find additive componentswhich often correspond to "extremesor "partsof the data (see figure - both methods tried to express the data points as sum over some components -meanson the other handtries to represent each data point using cluster center you can think of that as each point being represented using only single componentwhich is given by the cluster center this view of -means as decomposition methodwhere each point is represented using single componentis called vector quantization unsupervised learning and preprocessing |
17,170 | nents extracted (figure - )as well as reconstructions of faces from the test set using components (figure - for -meansthe reconstruction is the closest cluster center found on the training setin[ ]x_trainx_testy_trainy_test train_test_splitx_peopley_peoplestratify=y_peoplerandom_state= nmf nmf(n_components= random_state= nmf fit(x_trainpca pca(n_components= random_state= pca fit(x_trainkmeans kmeans(n_clusters= random_state= kmeans fit(x_trainx_reconstructed_pca pca inverse_transform(pca transform(x_test)x_reconstructed_kmeans kmeans cluster_centers_[kmeans predict(x_test)x_reconstructed_nmf np dot(nmf transform(x_test)nmf components_in[ ]figaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}fig suptitle("extracted components"for axcomp_kmeanscomp_pcacomp_nmf in zipaxes tkmeans cluster_centers_pca components_nmf components_)ax[ imshow(comp_kmeans reshape(image_shape)ax[ imshow(comp_pca reshape(image_shape)cmap='viridis'ax[ imshow(comp_nmf reshape(image_shape)axes[ set_ylabel("kmeans"axes[ set_ylabel("pca"axes[ set_ylabel("nmf"figaxes plt subplots( subplot_kw={'xticks'()'yticks'()}figsize=( )fig suptitle("reconstructions"for axorigrec_kmeansrec_pcarec_nmf in zipaxes tx_testx_reconstructed_kmeansx_reconstructed_pcax_reconstructed_nmf)ax[ imshow(orig reshape(image_shape)ax[ imshow(rec_kmeans reshape(image_shape)ax[ imshow(rec_pca reshape(image_shape)ax[ imshow(rec_nmf reshape(image_shape)axes[ set_ylabel("original"axes[ set_ylabel("kmeans"axes[ set_ylabel("pca"axes[ set_ylabel("nmf"clustering |
17,171 | [figures produced by the preceding code: the "extracted components" grids for kmeans, pca, and nmf, and the "reconstructions" grid showing original faces next to their kmeans, pca, and nmf reconstructions]
17,172 | components (or cluster centers)-- -means uses only single cluster center per image an interesting aspect of vector quantization using -means is that we can use many more clusters than input dimensions to encode our data let' go back to the two_moons data using pca or nmfthere is nothing much we can do to this dataas it lives in only two dimensions reducing it to one dimension with pca or nmf would completely destroy the structure of the data but we can find more expressive representation with -meansby using more cluster centers (see figure - )clustering |
17,173 | xy make_moons(n_samples= noise= random_state= kmeans kmeans(n_clusters= random_state= kmeans fit(xy_pred kmeans predict(xplt scatter( [: ] [: ] =y_preds= cmap='paired'plt scatter(kmeans cluster_centers_[: ]kmeans cluster_centers_[: ] = marker='^' =range(kmeans n_clusters)linewidth= cmap='paired'plt xlabel("feature "plt ylabel("feature "print("cluster memberships:\ {}format(y_pred)out[ ]cluster memberships[ figure - using many -means clusters to cover the variation in complex dataset unsupervised learning and preprocessing |
17,174 | between and we can see this as the data being represented using components (that iswe have new features)with all features being apart from the one that represents the cluster center the point is assigned to using this -dimensional representationit would now be possible to separate the two half-moon shapes using linear modelwhich would not have been possible using the original two features it is also possible to get an even more expressive representation of the data by using the distances to each of the cluster centers as features this can be accomplished using the transform method of kmeansin[ ]distance_features kmeans transform(xprint("distance feature shape{}format(distance_features shape)print("distance features:\ {}format(distance_features)out[ ]distance feature shape( distance features[ ] -means is very popular algorithm for clusteringnot only because it is relatively easy to understand and implementbut also because it runs relatively quickly kmeans scales easily to large datasetsand scikit-learn even includes more scalable variant in the minibatchkmeans classwhich can handle very large datasets one of the drawbacks of -means is that it relies on random initializationwhich means the outcome of the algorithm depends on random seed by defaultscikitlearn runs the algorithm times with different random initializationsand returns the best result further downsides of -means are the relatively restrictive assumptions made on the shape of clustersand the requirement to specify the number of clusters you are looking for (which might not be known in real-world applicationnextwe will look at two more clustering algorithms that improve upon these properties in some ways in this case"bestmeans that the sum of variances of the clusters is small clustering |
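A minimal usage sketch for the MiniBatchKMeans variant mentioned above; it follows the same fit/predict interface as KMeans but updates the centers from small random batches, which is what makes it scale to very large datasets. The dataset and parameter values here are only illustrative:

from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=10000, centers=3, random_state=1)

mbk = MiniBatchKMeans(n_clusters=3, batch_size=100, random_state=0)
mbk.fit(X)
print("cluster centers:\n{}".format(mbk.cluster_centers_))
print("labels of the first ten points: {}".format(mbk.predict(X[:10])))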
17,175 | agglomerative clustering refers to collection of clustering algorithms that all build upon the same principlesthe algorithm starts by declaring each point its own clusterand then merges the two most similar clusters until some stopping criterion is satisfied the stopping criterion implemented in scikit-learn is the number of clustersso similar clusters are merged until only the specified number of clusters are left there are several linkage criteria that specify how exactly the "most similar clusteris measured this measure is always defined between two existing clusters the following three choices are implemented in scikit-learnward the default choiceward picks the two clusters to merge such that the variance within all clusters increases the least this often leads to clusters that are relatively equally sized average average linkage merges the two clusters that have the smallest average distance between all their points complete complete linkage (also known as maximum linkagemerges the two clusters that have the smallest maximum distance between their points ward works on most datasetsand we will use it in our examples if the clusters have very dissimilar numbers of members (if one is much bigger than all the othersfor example)average or complete might work better the following plot (figure - illustrates the progression of agglomerative clustering on two-dimensional datasetlooking for three clustersin[ ]mglearn plots plot_agglomerative_algorithm( unsupervised learning and preprocessing |
17,176 | initiallyeach point is its own cluster thenin each stepthe two clusters that are closest are merged in the first four stepstwo single-point clusters are picked and these are joined into two-point clusters in step one of the two-point clusters is extended to third pointand so on in step there are only three clusters remaining as we specified that we are looking for three clustersthe algorithm then stops let' have look at how agglomerative clustering performs on the simple threecluster data we used here because of the way the algorithm worksagglomerative clustering cannot make predictions for new data points thereforeagglomerative clustering has no predict method to build the model and get the cluster memberships on the training setuse the fit_predict method instead the result is shown in figure - in[ ]from sklearn cluster import agglomerativeclustering xy make_blobs(random_state= agg agglomerativeclustering(n_clusters= assignment agg fit_predict(xmglearn discrete_scatter( [: ] [: ]assignmentplt xlabel("feature "plt ylabel("feature " we could also use the labels_ attributeas we did for -means clustering |
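The three linkage criteria described above are selected through the linkage parameter of AgglomerativeClustering (ward is the default). A short sketch comparing them on the same blob data; with such well-separated clusters all three typically find the same grouping:

import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, y = make_blobs(random_state=1)
for linkage in ["ward", "average", "complete"]:
    agg = AgglomerativeClustering(n_clusters=3, linkage=linkage)
    assignment = agg.fit_predict(X)
    print("linkage={:<10} cluster sizes: {}".format(
        linkage, np.bincount(assignment)))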
17,177 | as expectedthe algorithm recovers the clustering perfectly while the scikit-learn implementation of agglomerative clustering requires you to specify the number of clusters you want the algorithm to findagglomerative clustering methods provide some help with choosing the right numberwhich we will discuss next hierarchical clustering and dendrograms agglomerative clustering produces what is known as hierarchical clustering the clustering proceeds iterativelyand every point makes journey from being single point cluster to belonging to some final cluster each intermediate step provides clustering of the data (with different number of clustersit is sometimes helpful to look at all possible clusterings jointly the next example (figure - shows an overlay of all the possible clusterings shown in figure - providing some insight into how each cluster breaks up into smaller clustersin[ ]mglearn plots plot_agglomerative( unsupervised learning and preprocessing |
17,178 | tive clusteringwith numbered data points (cf figure - while this visualization provides very detailed view of the hierarchical clusteringit relies on the two-dimensional nature of the data and therefore cannot be used on datasets that have more than two features there ishoweveranother tool to visualize hierarchical clusteringcalled dendrogramthat can handle multidimensional datasets unfortunatelyscikit-learn currently does not have the functionality to draw dendrograms howeveryou can generate them easily using scipy the scipy clustering algorithms have slightly different interface to the scikit-learn clustering algorithms scipy provides function that takes data array and computes linkage arraywhich encodes hierarchical cluster similarities we can then feed this linkage array into the scipy dendrogram function to plot the dendrogram (figure - )in[ ]import the dendrogram function and the ward clustering function from scipy from scipy cluster hierarchy import dendrogramward xy make_blobs(random_state= n_samples= apply the ward clustering to the data array the scipy ward function returns an array that specifies the distances bridged when performing agglomerative clustering linkage_array ward(xclustering |
17,179 | between clusters dendrogram(linkage_arraymark the cuts in the tree that signify two or three clusters ax plt gca(bounds ax get_xbound(ax plot(bounds[ ]'--' =' 'ax plot(bounds[ ]'--' =' 'ax text(bounds[ ] two clusters'va='center'fontdict={'size' }ax text(bounds[ ] three clusters'va='center'fontdict={'size' }plt xlabel("sample index"plt ylabel("cluster distance"figure - dendrogram of the clustering shown in figure - with lines indicating splits into two and three clusters the dendrogram shows data points as points on the bottom (numbered from to thena tree is plotted with these points (representing single-point clustersas the leavesand new node parent is added for each two clusters that are joined reading from bottom to topthe data points and are joined first (as you could see in figure - nextpoints and are joined into clusterand so on at the top levelthere are two branchesone consisting of points and and the other consisting of points and these correspond to the two largest clusters in the lefthand side of the plot unsupervised learning and preprocessing |
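The same linkage array can also be turned into a flat cluster assignment by cutting the tree, using SciPy's fcluster. A short sketch reusing linkage_array from the dendrogram cell above; the two- and three-cluster cuts correspond to the dashed lines in the plot:

import numpy as np
from scipy.cluster.hierarchy import fcluster

# criterion='maxclust' cuts the tree so that at most t clusters remain;
# fcluster labels start at 1, not 0
for n_clusters in [2, 3]:
    flat_labels = fcluster(linkage_array, t=n_clusters, criterion='maxclust')
    print("cut into {} clusters, sizes: {}".format(
        n_clusters, np.bincount(flat_labels)[1:]))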
17,180 | rithm two clusters get merged the length of each branch also shows how far apart the merged clusters are the longest branches in this dendrogram are the three lines that are marked by the dashed line labeled "three clusters that these are the longest branches indicates that going from three to two clusters meant merging some very far-apart points we see this again at the top of the chartwhere merging the two remaining clusters into single cluster again bridges relatively large distance unfortunatelyagglomerative clustering still fails at separating complex shapes like the two_moons dataset but the same is not true for the next algorithm we will look atdbscan dbscan another very useful clustering algorithm is dbscan (which stands for "densitybased spatial clustering of applications with noise"the main benefits of dbscan are that it does not require the user to set the number of clusters prioriit can capture clusters of complex shapesand it can identify points that are not part of any cluster dbscan is somewhat slower than agglomerative clustering and -meansbut still scales to relatively large datasets dbscan works by identifying points that are in "crowdedregions of the feature spacewhere many data points are close together these regions are referred to as dense regions in feature space the idea behind dbscan is that clusters form dense regions of dataseparated by regions that are relatively empty points that are within dense region are called core samples (or core points)and they are defined as follows there are two parameters in dbscanmin_samples and eps if there are at least min_samples many data points within distance of eps to given data pointthat data point is classified as core sample core samples that are closer to each other than the distance eps are put into the same cluster by dbscan the algorithm works by picking an arbitrary point to start with it then finds all points with distance eps or less from that point if there are less than min_samples points within distance eps of the starting pointthis point is labeled as noisemeaning that it doesn' belong to any cluster if there are more than min_samples points within distance of epsthe point is labeled core sample and assigned new cluster label thenall neighbors (within epsof the point are visited if they have not been assigned cluster yetthey are assigned the new cluster label that was just created if they are core samplestheir neighbors are visited in turnand so on the cluster grows until there are no more core samples within distance eps of the cluster then another point that hasn' yet been visited is pickedand the same procedure is repeated clustering |
17,181 | eps of core points (called boundary points)and noise when the dbscan algorithm is run on particular dataset multiple timesthe clustering of the core points is always the sameand the same points will always be labeled as noise howevera boundary point might be neighbor to core samples of more than one cluster thereforethe cluster membership of boundary points depends on the order in which points are visited usually there are only few boundary pointsand this slight dependence on the order of points is not important let' apply dbscan on the synthetic dataset we used to demonstrate agglomerative clustering like agglomerative clusteringdbscan does not allow predictions on new test dataso we will use the fit_predict method to perform clustering and return the cluster labels in one stepin[ ]from sklearn cluster import dbscan xy make_blobs(random_state= n_samples= dbscan dbscan(clusters dbscan fit_predict(xprint("cluster memberships:\ {}format(clusters)out[ ]cluster memberships[- - - - - - - - - - - - as you can seeall data points were assigned the label - which stands for noise this is consequence of the default parameter settings for eps and min_sampleswhich are not tuned for small toy datasets the cluster assignments for different values of min_samples and eps are shown belowand visualized in figure - in[ ]mglearn plots plot_dbscan(out[ ]min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps min_samples eps cluster[- - - - - cluster[ cluster[ cluster[ cluster[- - - - - cluster[ cluster[ cluster[ cluster[- - - - - - - - - - - - cluster[- - - - - - - cluster[- - - - - - - cluster[ unsupervised learning and preprocessing |
17,182 | min_samples and eps parameters in this plotpoints that belong to clusters are solidwhile the noise points are shown in white core samples are shown as large markerswhile boundary points are displayed as smaller markers increasing eps (going from left to right in the figuremeans that more points will be included in cluster this makes clusters growbut might also lead to multiple clusters joining into one increasing min_samples (going from top to bottom in the figuremeans that fewer points will be core pointsand more points will be labeled as noise the parameter eps is somewhat more importantas it determines what it means for points to be "close setting eps to be very small will mean that no points are core samplesand may lead to all points being labeled as noise setting eps to be very large will result in all points forming single cluster the min_samples setting mostly determines whether points in less dense regions will be labeled as outliers or as their own clusters if you decrease min_samplesanything that would have been cluster with less than min_samples many samples will now be labeled as noise min_samples therefore determines the minimum cluster size you can see this very clearly in figure - when going from min_samples= to min_sam ples= with eps= with min_samples= there are three clustersone of four clustering |
17,183 | smaller clusters (with three and four pointsare now labeled as noiseand only the cluster with five samples remains while dbscan doesn' require setting the number of clusters explicitlysetting eps implicitly controls how many clusters will be found finding good setting for eps is sometimes easier after scaling the data using standardscaler or minmaxscaleras using these scaling techniques will ensure that all features have similar ranges figure - shows the result of running dbscan on the two_moons dataset the algorithm actually finds the two half-circles and separates them using the default settingsin[ ]xy make_moons(n_samples= noise= random_state= rescale the data to zero mean and unit variance scaler standardscaler(scaler fit(xx_scaled scaler transform(xdbscan dbscan(clusters dbscan fit_predict(x_scaledplot the cluster assignments plt scatter(x_scaled[: ]x_scaled[: ] =clusterscmap=mglearn cm = plt xlabel("feature "plt ylabel("feature "as the algorithm produced the desired number of clusters (two)the parameter settings seem to work well if we decrease eps to (from the default of )we will get eight clusterswhich is clearly too many increasing eps to results in single cluster when using dbscanyou need to be careful about handling the returned cluster assignments the use of - to indicate noise might result in unexpected effects when using the cluster labels to index another array unsupervised learning and preprocessing |
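The effect described above, where too small an eps fragments the data into many clusters and too large an eps merges everything into one, can be made explicit with a small sweep. The eps values below are illustrative choices; X_scaled is the scaled two_moons data from the cell above:

import numpy as np
from sklearn.cluster import DBSCAN

for eps in [0.1, 0.2, 0.5, 0.7, 1.0]:
    labels = DBSCAN(eps=eps).fit_predict(X_scaled)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # -1 is noise
    n_noise = np.sum(labels == -1)
    print("eps={:.1f}  clusters: {}  noise points: {}".format(
        eps, n_clusters, n_noise))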
17,184 | comparing and evaluating clustering algorithms one of the challenges in applying clustering algorithms is that it is very hard to assess how well an algorithm workedand to compare outcomes between different algorithms after talking about the algorithms behind -meansagglomerative clusteringand dbscanwe will now compare them on some real-world datasets evaluating clustering with ground truth there are metrics that can be used to assess the outcome of clustering algorithm relative to ground truth clusteringthe most important ones being the adjusted rand index (ariand normalized mutual information (nmi)which both provide quantitative measure between and herewe compare the -meansagglomerative clusteringand dbscan algorithms using ari we also include what it looks like when we randomly assign points to two clusters for comparison (see figure - )clustering |
17,185 | from sklearn metrics cluster import adjusted_rand_score xy make_moons(n_samples= noise= random_state= rescale the data to zero mean and unit variance scaler standardscaler(scaler fit(xx_scaled scaler transform(xfigaxes plt subplots( figsize=( )subplot_kw={'xticks'()'yticks'()}make list of algorithms to use algorithms [kmeans(n_clusters= )agglomerativeclustering(n_clusters= )dbscan()create random cluster assignment for reference random_state np random randomstate(seed= random_clusters random_state randint(low= high= size=len( )plot random assignment axes[ scatter(x_scaled[: ]x_scaled[: ] =random_clusterscmap=mglearn cm = axes[ set_title("random assignment ari{ }formatadjusted_rand_score(yrandom_clusters))for axalgorithm in zip(axes[ :]algorithms)plot the cluster assignments and cluster centers clusters algorithm fit_predict(x_scaledax scatter(x_scaled[: ]x_scaled[: ] =clusterscmap=mglearn cm = ax set_title("{ari{ }format(algorithm __class__ __name__adjusted_rand_score(yclusters))figure - comparing random assignmentk-meansagglomerative clusteringand dbscan on the two_moons dataset using the supervised ari score the adjusted rand index provides intuitive resultswith random cluster assignment having score of and dbscan (which recovers the desired clustering perfectlyhaving score of unsupervised learning and preprocessing |
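Normalized mutual information, mentioned above alongside ARI, is used in exactly the same way: it compares a clustering against the ground-truth labels and is close to 0 for random assignments and 1 for a perfect match. A short sketch reusing y, X_scaled, random_clusters, and the algorithms list from the cell above:

from sklearn.metrics.cluster import normalized_mutual_info_score

print("NMI of random assignment: {:.2f}".format(
    normalized_mutual_info_score(y, random_clusters)))
for algorithm in algorithms:
    clusters = algorithm.fit_predict(X_scaled)
    print("NMI of {}: {:.2f}".format(
        algorithm.__class__.__name__,
        normalized_mutual_info_score(y, clusters)))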
17,186 | instead of adjusted_rand_scorenormalized_mutual_info_scoreor some other clustering metric the problem in using accuracy is that it requires the assigned cluster labels to exactly match the ground truth howeverthe cluster labels themselves are meaningless--the only thing that matters is which points are in the same clusterin[ ]from sklearn metrics import accuracy_score these two labelings of points correspond to the same clustering clusters [ clusters [ accuracy is zeroas none of the labels are the same print("accuracy{ }format(accuracy_score(clusters clusters ))adjusted rand score is as the clustering is exactly the same print("ari{ }format(adjusted_rand_score(clusters clusters ))out[ ]accuracy ari evaluating clustering without ground truth although we have just shown one way to evaluate clustering algorithmsin practicethere is big problem with using measures like ari when applying clustering algorithmsthere is usually no ground truth to which to compare the results if we knew the right clustering of the datawe could use this information to build supervised model like classifier thereforeusing metrics like ari and nmi usually only helps in developing algorithmsnot in assessing success in an application there are scoring metrics for clustering that don' require ground truthlike the silhouette coefficient howeverthese often don' work well in practice the silhouette score computes the compactness of clusterwhere higher is betterwith perfect score of while compact clusters are goodcompactness doesn' allow for complex shapes here is an example comparing the outcome of -meansagglomerative clusteringand dbscan on the two-moons dataset using the silhouette score (figure - )in[ ]from sklearn metrics cluster import silhouette_score xy make_moons(n_samples= noise= random_state= rescale the data to zero mean and unit variance scaler standardscaler(scaler fit(xx_scaled scaler transform(xclustering |
17,187 | subplot_kw={'xticks'()'yticks'()}create random cluster assignment for reference random_state np random randomstate(seed= random_clusters random_state randint(low= high= size=len( )plot random assignment axes[ scatter(x_scaled[: ]x_scaled[: ] =random_clusterscmap=mglearn cm = axes[ set_title("random assignment{ }formatsilhouette_score(x_scaledrandom_clusters))algorithms [kmeans(n_clusters= )agglomerativeclustering(n_clusters= )dbscan()for axalgorithm in zip(axes[ :]algorithms)clusters algorithm fit_predict(x_scaledplot the cluster assignments and cluster centers ax scatter(x_scaled[: ]x_scaled[: ] =clusterscmap=mglearn cm = ax set_title("{{ }format(algorithm __class__ __name__silhouette_score(x_scaledclusters))figure - comparing random assignmentk-meansagglomerative clusteringand dbscan on the two_moons dataset using the unsupervised silhouette score--the more intuitive result of dbscan has lower silhouette score than the assignments found by -means as you can seek-means gets the highest silhouette scoreeven though we might prefer the result produced by dbscan slightly better strategy for evaluating clusters is using robustness-based clustering metrics these run an algorithm after adding some noise to the dataor using different parameter settingsand compare the outcomes the idea is that if many algorithm parameters and many perturbations of the data return the same resultit is likely to be trustworthy unfortunatelythis strategy is not implemented in scikit-learn at the time of writing even if we get very robust clusteringor very high silhouette scorewe still don' know if there is any semantic meaning in the clusteringor whether the clustering unsupervised learning and preprocessing |
17,188 | Even if we get a very robust clustering, or a very high silhouette score, we still don't know if there is any semantic meaning in the clustering, or whether the clustering reflects an aspect of the data we are interested in. Let's go back to the example of face images. We hope to find groups of similar faces: say, men and women, or old people and young people, or people with beards and without. Let's say we cluster the data into two clusters, and all algorithms agree about which points should be clustered together. We still don't know if the clusters that are found correspond in any way to the concepts we are interested in. It could be that they found side views versus front views, or pictures taken at night versus pictures taken during the day, or pictures taken with iPhones versus pictures taken with Android phones. The only way to know whether the clustering corresponds to anything we are interested in is to analyze the clusters manually.

Comparing algorithms on the faces dataset

Let's apply the k-means, DBSCAN, and agglomerative clustering algorithms to the Labeled Faces in the Wild dataset, and see if any of them find interesting structure. We will use the eigenface representation of the data, as produced by PCA(whiten=True), with 100 components:

In[ ]:
# extract eigenfaces from lfw data and transform data
from sklearn.decomposition import PCA
pca = PCA(n_components=100, whiten=True, random_state=0)
pca.fit(X_people)
X_pca = pca.transform(X_people)

We saw earlier that this is a more semantic representation of the face images than the raw pixels. It will also make computation faster. A good exercise would be for you to run the following experiments on the original data, without PCA, and see if you find similar clusters.

Analyzing the faces dataset with DBSCAN

We will start by applying DBSCAN, which we just discussed:

In[ ]:
# apply DBSCAN with default parameters
dbscan = DBSCAN()
labels = dbscan.fit_predict(X_pca)
print("Unique labels: {}".format(np.unique(labels)))

Out[ ]:
Unique labels: [-1]

We see that all the returned labels are -1, so all of the data was labeled as "noise" by DBSCAN. There are two things we can change to help this: we can make eps higher, to expand the neighborhood of each point, and set min_samples lower, to consider smaller groups of points as clusters. Let's try changing min_samples first:
17,189 | 
In[ ]:
dbscan = DBSCAN(min_samples=3)
labels = dbscan.fit_predict(X_pca)
print("Unique labels: {}".format(np.unique(labels)))

Out[ ]:
Unique labels: [-1]

Even when considering groups of three points, everything is labeled as noise. So, we need to increase eps:

In[ ]:
dbscan = DBSCAN(min_samples=3, eps=15)
labels = dbscan.fit_predict(X_pca)
print("Unique labels: {}".format(np.unique(labels)))

Out[ ]:
Unique labels: [-1  0]

Using a much larger eps of 15, we get only a single cluster and noise points. We can use this result to find out what the "noise" looks like compared to the rest of the data. To understand better what's happening, let's look at how many points are noise, and how many points are inside the cluster:

In[ ]:
# count number of points in all clusters and noise.
# bincount doesn't allow negative numbers, so we need to add 1.
# the first number in the result corresponds to noise points
print("Number of points per cluster: {}".format(np.bincount(labels + 1)))

Out[ ]:
Number of points per cluster: [...]

There are very few noise points, so we can look at all of them (see the figure below):

In[ ]:
noise = X_people[labels == -1]

fig, axes = plt.subplots(3, 9, subplot_kw={'xticks': (), 'yticks': ()},
                         figsize=(12, 4))
for image, ax in zip(noise, axes.ravel()):
    ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
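The labels + 1 shift in the bincount call above can be confusing at first. A tiny made-up example (toy_labels is purely illustrative, not part of the faces data) shows how the offset maps the noise points onto the first entry of the count:

In[ ]:
import numpy as np

# two noise points (-1), three points in cluster 0, one point in cluster 1
toy_labels = np.array([-1, -1, 0, 0, 0, 1])
# adding 1 turns the labels into [0, 0, 1, 1, 1, 2], which bincount can handle
print(np.bincount(toy_labels + 1))

Out[ ]:
[2 3 1]

The first entry (2) counts the noise points, and the remaining entries count the clusters in order.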
17,190 | Comparing these images to the random sample of face images we looked at earlier, we can guess why they were labeled as noise: the fifth image in the first row shows a person drinking from a glass, there are images of people wearing hats, and in the last image there's a hand in front of the person's face. The other images contain odd angles, or crops that are too close or too wide.

This kind of analysis, trying to find "the odd one out", is called outlier detection. If this was a real application, we might try to do a better job of cropping the images, to get more homogeneous data. There is little we can do about people in photos sometimes wearing hats, drinking, or holding something in front of their faces, but it's good to know that these are issues in the data that any algorithm we might apply needs to handle.

If we want to find more interesting clusters than just one large one, we need to set eps smaller, somewhere between 15 and 0.5 (the default). Let's have a look at what different values of eps result in:

In[ ]:
for eps in [1, 3, 5, 7, 9, 11, 13]:
    print("\neps={}".format(eps))
    dbscan = DBSCAN(eps=eps, min_samples=3)
    labels = dbscan.fit_predict(X_pca)
    print("Clusters present: {}".format(np.unique(labels)))
    print("Cluster sizes: {}".format(np.bincount(labels + 1)))

Out[ ]:
eps=1
Clusters present: [-1]
Cluster sizes: [2063]

eps=3
Clusters present: [-1]
Cluster sizes: [2063]
17,191 | eps=5
Clusters present: [...]
Cluster sizes: [...]

eps=7
Clusters present: [...]
Cluster sizes: [...]

eps=9
Clusters present: [...]
Cluster sizes: [...]

eps=11
Clusters present: [-1  0]
Cluster sizes: [...]

eps=13
Clusters present: [-1  0]
Cluster sizes: [...]

For the low settings of eps, all points are labeled as noise. For eps=7, we get many noise points and many smaller clusters. For eps=9 we still get many noise points, but we get one big cluster and some smaller clusters. Starting from eps=11, we get only one large cluster and noise.

What is interesting to note is that there is never more than one large cluster. At most, there is one large cluster containing most of the points, and some smaller clusters. This indicates that there are not two or three different kinds of face images in the data that are very distinct, but rather that all images are more or less equally similar to (or dissimilar from) the rest.

The results for eps=7 look most interesting, with many small clusters. We can investigate this clustering in more detail by visualizing all of the points in each of the small clusters (see the figure below):

In[ ]:
dbscan = DBSCAN(min_samples=3, eps=7)
labels = dbscan.fit_predict(X_pca)

for cluster in range(max(labels) + 1):
    mask = labels == cluster
    n_images = np.sum(mask)
    fig, axes = plt.subplots(1, n_images, figsize=(n_images * 1.5, 4),
                             subplot_kw={'xticks': (), 'yticks': ()})
    for image, label, ax in zip(X_people[mask], y_people[mask], axes):
        ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
        ax.set_title(people.target_names[label].split()[-1])
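Alongside the visual inspection, a quick numeric summary can be useful. The loop below is our own addition (it assumes labels, y_people, and people from the code above are still in scope) and reports, for each of the small DBSCAN clusters, how many images it contains and which person appears in it most often:

In[ ]:
for cluster in range(max(labels) + 1):
    mask = labels == cluster
    # count how often each person occurs in this cluster
    counts = np.bincount(y_people[mask], minlength=len(people.target_names))
    most_common = np.argmax(counts)
    print("cluster {}: {} images, most frequent person: {}".format(
        cluster, np.sum(mask), people.target_names[most_common]))

If most clusters are dominated by a single person, that already hints at what DBSCAN is picking up on, without having to scan every image by eye.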
17,192 | Some of the clusters correspond to people with very distinct faces (within this dataset), such as Sharon or Koizumi. Within each cluster, the orientation of the face is also quite consistent, as is the facial expression.
17,193 | Some of the clusters contain faces of multiple people, but they share a similar orientation and expression.

This concludes our analysis of the DBSCAN algorithm applied to the faces dataset. As you can see, we are doing a manual analysis here, different from the much more automatic search approach we could use for supervised learning based on R^2 score or accuracy.

Let's move on to applying k-means and agglomerative clustering.

Analyzing the faces dataset with k-means

We saw that it was not possible to create more than one big cluster using DBSCAN. Agglomerative clustering and k-means are much more likely to create clusters of even size, but we do need to set a target number of clusters. We could set the number of clusters to the known number of people in the dataset, though it is very unlikely that an unsupervised clustering algorithm will recover them. Instead, we can start with a low number of clusters, like 10, which might allow us to analyze each of the clusters:

In[ ]:
# extract clusters with k-means
km = KMeans(n_clusters=10, random_state=0)
labels_km = km.fit_predict(X_pca)
print("Cluster sizes k-means: {}".format(np.bincount(labels_km)))

Out[ ]:
Cluster sizes k-means: [...]

As you can see, k-means clustering partitioned the data into relatively similarly sized clusters. This is quite different from the result of DBSCAN.

We can further analyze the outcome of k-means by visualizing the cluster centers (see the figure below). As we clustered in the representation produced by PCA, we need to rotate the cluster centers back into the original space to visualize them, using pca.inverse_transform:

In[ ]:
fig, axes = plt.subplots(2, 5, subplot_kw={'xticks': (), 'yticks': ()},
                         figsize=(12, 4))
for center, ax in zip(km.cluster_centers_, axes.ravel()):
    ax.imshow(pca.inverse_transform(center).reshape(image_shape),
              vmin=0, vmax=1)
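Before looking at the cluster centers in detail, a quick side check is possible because this particular dataset happens to come with person labels (this is our own addition; in a real unsupervised application these labels would not exist). We can measure how much the ten k-means clusters overlap with the identities, using the supervised clustering metrics from the beginning of this section:

In[ ]:
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

print("ARI vs. people: {:.2f}".format(
    adjusted_rand_score(y_people, labels_km)))
print("NMI vs. people: {:.2f}".format(
    normalized_mutual_info_score(y_people, labels_km)))

We do not reproduce the numbers here; the point is simply that when some ground truth is available, ARI and NMI give a compact summary of how well a clustering matches it, which complements the visual analysis that follows.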
17,194 | The cluster centers found by k-means are very smooth versions of faces. This is not very surprising, given that each center is an average of a large number of face images. Working with a reduced PCA representation adds to the smoothness of the images (compared to the faces reconstructed from 100 PCA dimensions earlier in this chapter). The clustering seems to pick up on different orientations of the face, different expressions (the third cluster center seems to show a smiling face), and the presence of shirt collars (see the second-to-last cluster center).

For a more detailed view, in the following figure we show for each cluster center the five most typical images in the cluster (the images assigned to the cluster that are closest to the cluster center) and the five most atypical images in the cluster (the images assigned to the cluster that are furthest from the cluster center):

In[ ]:
mglearn.plots.plot_kmeans_faces(km, pca, X_pca, X_people,
                                y_people, people.target_names)
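The mglearn helper bundles this plot up for convenience. If you want to see how such a view can be computed yourself, the sketch below is our own code (not the helper's actual implementation): km.transform returns the distance of every image to every cluster center, which lets us pick out the most and least typical images of each cluster; the resulting indices can then be used to plot X_people[closest] and X_people[furthest]:

In[ ]:
# distances has shape (n_images, 10): one distance per image and cluster center
distances = km.transform(X_pca)

for cluster in range(10):
    in_cluster = np.where(labels_km == cluster)[0]
    # sort the images in this cluster by their distance to its own center
    order = np.argsort(distances[in_cluster, cluster])
    closest = in_cluster[order[:5]]     # five most typical images
    furthest = in_cluster[order[-5:]]   # five most atypical images
    print("cluster {}: closest {}, furthest {}".format(
        cluster, closest, furthest))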
17,195 | Figure: Sample images for each cluster found by k-means; the cluster centers are shown on the left, followed by the five closest points to each center and the five points that are assigned to the cluster but are furthest away from the center.
17,196 | The figure confirms our intuition about the smiling face for the third cluster, and also the importance of orientation for the other clusters. The "atypical" points are not very similar to the cluster centers, though, and their assignment seems somewhat arbitrary. This can be attributed to the fact that k-means partitions all the data points and doesn't have a concept of "noise" points, as DBSCAN does. Using a larger number of clusters, the algorithm could find finer distinctions. However, adding more clusters makes manual inspection even harder.

Analyzing the faces dataset with agglomerative clustering

Now, let's look at the results of agglomerative clustering:

In[ ]:
# extract clusters with ward agglomerative clustering
agglomerative = AgglomerativeClustering(n_clusters=10)
labels_agg = agglomerative.fit_predict(X_pca)
print("Cluster sizes agglomerative clustering: {}".format(
    np.bincount(labels_agg)))

Out[ ]:
Cluster sizes agglomerative clustering: [...]

Agglomerative clustering also produces relatively equally sized clusters. These are more uneven than those produced by k-means, but much more even than the ones produced by DBSCAN.

We can compute the ARI to measure whether the two partitions of the data given by agglomerative clustering and k-means are similar:

In[ ]:
print("ARI: {:.2f}".format(adjusted_rand_score(labels_agg, labels_km)))

Out[ ]:
ARI: [...]

The low ARI means that the two clusterings labels_agg and labels_km have little in common. This is not very surprising, given the fact that points further away from the cluster centers seem to have little in common for k-means.

Next, we might want to plot the dendrogram (shown in the following figure). We'll limit the depth of the tree in the plot, as branching down to the individual 2,063 data points would result in an unreadably dense plot:
17,197 | 
In[ ]:
from scipy.cluster.hierarchy import dendrogram, ward

linkage_array = ward(X_pca)
# now we plot the dendrogram for the linkage_array
# containing the distances between clusters
plt.figure(figsize=(20, 5))
dendrogram(linkage_array, p=7, truncate_mode='level', no_labels=True)
plt.xlabel("Sample index")
plt.ylabel("Cluster distance")

Figure: Dendrogram of agglomerative clustering on the faces dataset.

Creating ten clusters means cutting across the tree at the very top, where there are ten vertical lines in the dendrogram. For the toy data we looked at earlier, you could see by the length of the branches that two or three clusters might capture the data appropriately. For the faces data, there doesn't seem to be a very natural cutoff point. There are some branches that represent more distinct groups, but there doesn't appear to be a particular number of clusters that is a good fit. This is not surprising, given the results of DBSCAN, which tried to cluster all points together.

Let's visualize the ten clusters, as we did for k-means earlier (see the figure below). Note that there is no notion of a cluster center in agglomerative clustering (though we could compute the mean), and we simply show the first couple of points in each cluster. We show the number of points in each cluster to the left of the first image:

In[ ]:
n_clusters = 10
for cluster in range(n_clusters):
    mask = labels_agg == cluster
    fig, axes = plt.subplots(1, 10, subplot_kw={'xticks': (), 'yticks': ()},
                             figsize=(15, 2))
    axes[0].set_ylabel(np.sum(mask))
    for image, label, asdf, ax in zip(X_people[mask], y_people[mask],
                                      labels_agg[mask], axes):
        ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
        ax.set_title(people.target_names[label].split()[-1],
                     fontdict={'fontsize': 9})
17,198 | Figure: A few images from each of the ten clusters found by agglomerative clustering; each row corresponds to one cluster, and the number to the left lists the number of images in each cluster.
17,199 | Some of these clusters seem too large to be actually homogeneous. To get more homogeneous clusters, we can run the algorithm again, this time with 40 clusters, and pick out some of the clusters that are particularly interesting:

In[ ]:
# extract clusters with ward agglomerative clustering
agglomerative = AgglomerativeClustering(n_clusters=40)
labels_agg = agglomerative.fit_predict(X_pca)
print("Cluster sizes agglomerative clustering: {}".format(
    np.bincount(labels_agg)))

n_clusters = 40
for cluster in [...]:  # hand-picked "interesting" clusters
    mask = labels_agg == cluster
    fig, axes = plt.subplots(1, 15, subplot_kw={'xticks': (), 'yticks': ()},
                             figsize=(15, 2))
    cluster_size = np.sum(mask)
    axes[0].set_ylabel("#{}: {}".format(cluster, cluster_size))
    for image, label, asdf, ax in zip(X_people[mask], y_people[mask],
                                      labels_agg[mask], axes):
        ax.imshow(image.reshape(image_shape), vmin=0, vmax=1)
        ax.set_title(people.target_names[label].split()[-1],
                     fontdict={'fontsize': 9})
    for i in range(cluster_size, 15):
        axes[i].set_visible(False)

Out[ ]:
Cluster sizes agglomerative clustering: [...]
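As a closing aside (our own addition, not part of the scikit-learn clustering API): instead of repeatedly guessing n_clusters, the ward hierarchy computed earlier can also be cut at a merge-distance threshold with SciPy's fcluster. The threshold below is an arbitrary illustration; in practice you would read a plausible value off the dendrogram plotted above:

In[ ]:
from scipy.cluster.hierarchy import fcluster

# assign a flat clustering by cutting the linkage at a chosen distance
flat_labels = fcluster(linkage_array, t=40, criterion="distance")
print("Number of clusters: {}".format(len(np.unique(flat_labels))))
# fcluster numbers clusters starting at 1, so entry 0 of the bincount stays 0
print("Cluster sizes: {}".format(np.bincount(flat_labels)))

Lowering the threshold produces more, smaller clusters; raising it merges them, which mirrors the trade-off we just explored by hand with 10 and 40 clusters.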