Using a pipeline in a grid search works the same way as using any other estimator: we define a parameter grid to search over, and construct a GridSearchCV from the pipeline and the parameter grid. When specifying the parameter grid, there is a slight change, though. We need to specify for each parameter which step of the pipeline it belongs to. Both parameters that we want to adjust, C and gamma, are parameters of SVC, the second step. We gave this step the name "svm". The syntax to define a parameter grid for a pipeline is to specify for each parameter the step name, followed by __ (a double underscore), followed by the parameter name. To search over the C parameter of SVC we therefore have to use "svm__C" as the key in the parameter grid dictionary, and similarly for gamma:

In[ ]:
param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100],
              'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}

With this parameter grid we can use GridSearchCV as usual:

In[ ]:
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation accuracy: {:.2f}".format(grid.best_score_))
print("Test set score: {:.2f}".format(grid.score(X_test, y_test)))
print("Best parameters: {}".format(grid.best_params_))

In contrast to the grid search we did before, now for each split in the cross-validation, the MinMaxScaler is refit with only the training splits, and no information is leaked from the test split into the parameter search. Compare this figure with the earlier figure in this chapter:

In[ ]:
mglearn.plots.plot_proper_processing()
The impact of leaking information in the cross-validation varies depending on the nature of the preprocessing step. Estimating the scale of the data using the test fold usually doesn't have a terrible impact, while using the test fold in feature extraction and feature selection can lead to substantial differences in outcomes.

Illustrating Information Leakage

A great example of leaking information in cross-validation is given in Hastie, Tibshirani, and Friedman's book The Elements of Statistical Learning, and we reproduce an adapted version here. Let's consider a synthetic regression task with 100 samples and 10,000 features that are sampled independently from a Gaussian distribution. We also sample the response from a Gaussian distribution:

In[ ]:
rnd = np.random.RandomState(seed=0)
X = rnd.normal(size=(100, 10000))
y = rnd.normal(size=(100,))

Given the way we created the dataset, there is no relation between the data, X, and the target, y (they are independent), so it should not be possible to learn anything from this dataset. We will now do the following. First, select the most informative features using SelectPercentile feature selection, and then evaluate a Ridge regressor using cross-validation:
In[ ]:
from sklearn.feature_selection import SelectPercentile, f_regression

select = SelectPercentile(score_func=f_regression, percentile=5).fit(X, y)
X_selected = select.transform(X)
print("X_selected.shape: {}".format(X_selected.shape))

Out[ ]:
X_selected.shape: (100, 500)

In[ ]:
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge

print("Cross-validation accuracy (cv only on ridge): {:.2f}".format(
      np.mean(cross_val_score(Ridge(), X_selected, y, cv=5))))

The mean R² computed by cross-validation is high, indicating a very good model. This clearly cannot be right, as our data is entirely random. What happened here is that our feature selection picked out some features among the 10,000 random features that are (by chance) very well correlated with the target. Because we fit the feature selection outside of the cross-validation, it could find features that are correlated both on the training and the test folds. The information we leaked from the test folds was very informative, leading to highly unrealistic results. Let's compare this to a proper cross-validation using a pipeline:

In[ ]:
pipe = Pipeline([("select", SelectPercentile(score_func=f_regression, percentile=5)),
                 ("ridge", Ridge())])
print("Cross-validation accuracy (pipeline): {:.2f}".format(
      np.mean(cross_val_score(pipe, X, y, cv=5))))

This time, we get a negative R² score, indicating a very poor model. Using the pipeline, the feature selection is now inside the cross-validation loop. This means features can only be selected using the training folds of the data, not the test fold. The feature selection finds features that are correlated with the target on the training set, but because the data is entirely random, these features are not correlated with the target on the test set. In this example, rectifying the data leakage issue in the feature selection makes the difference between concluding that a model works very well and concluding that a model works not at all.
The General Pipeline Interface

The Pipeline class is not restricted to preprocessing and classification, but can in fact join any number of estimators together. For example, you could build a pipeline containing feature extraction, feature selection, scaling, and classification, for a total of four steps. Similarly, the last step could be regression or clustering instead of classification. The only requirement for estimators in a pipeline is that all but the last step need to have a transform method, so they can produce a new representation of the data that can be used in the next step.

Internally, during the call to Pipeline.fit, the pipeline calls fit and then transform on each step in turn (or just fit_transform), with the input given by the output of the transform method of the previous step. For the last step in the pipeline, just fit is called.

Brushing over some finer details, this is implemented as follows. Remember that pipeline.steps is a list of tuples, so pipeline.steps[0][1] is the first estimator, pipeline.steps[1][1] is the second estimator, and so on:

In[ ]:
def fit(self, X, y):
    X_transformed = X
    for name, estimator in self.steps[:-1]:
        # iterate over all but the final step
        # fit and transform the data
        X_transformed = estimator.fit_transform(X_transformed, y)
    # fit the last step
    self.steps[-1][1].fit(X_transformed, y)
    return self

When predicting using Pipeline, we similarly transform the data using all but the last step, and then call predict on the last step:

In[ ]:
def predict(self, X):
    X_transformed = X
    for step in self.steps[:-1]:
        # iterate over all but the final step
        # transform the data
        X_transformed = step[1].transform(X_transformed)
    # predict using the last step
    return self.steps[-1][1].predict(X_transformed)
Figure: Overview of the pipeline training and prediction process

The pipeline is actually even more general than this. There is no requirement for the last step in a pipeline to have a predict function, and we could create a pipeline just containing, for example, a scaler and PCA. Then, because the last step (PCA) has a transform method, we could call transform on the pipeline to get the output of PCA.transform applied to the data that was processed by the previous step. The last step of a pipeline is only required to have a fit method.
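To make this concrete, here is a small sketch of such a transform-only pipeline. The choice of MinMaxScaler, two PCA components, and the breast cancer dataset is ours, made for illustration only:

from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

cancer = load_breast_cancer()
# a pipeline whose last step (PCA) has no predict method
pca_pipe = Pipeline([("scaler", MinMaxScaler()), ("pca", PCA(n_components=2))])
# fit scales the data and then fits PCA on the scaled data
pca_pipe.fit(cancer.data)
# transform applies the scaling followed by the PCA projection
X_pca = pca_pipe.transform(cancer.data)
print("X_pca.shape: {}".format(X_pca.shape))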
Convenient Pipeline Creation with make_pipeline

Creating a pipeline using the syntax described earlier is sometimes a bit cumbersome, and we often don't need user-specified names for each step. There is a convenience function, make_pipeline, that will create a pipeline for us and automatically name each step based on its class. The syntax for make_pipeline is as follows:

In[ ]:
from sklearn.pipeline import make_pipeline
# standard syntax
pipe_long = Pipeline([("scaler", MinMaxScaler()), ("svm", SVC(C=100))])
# abbreviated syntax
pipe_short = make_pipeline(MinMaxScaler(), SVC(C=100))

The pipelines pipe_long and pipe_short do exactly the same thing, but pipe_short has steps that were automatically named. We can see the names of the steps by looking at the steps attribute:

In[ ]:
print("Pipeline steps:\n{}".format(pipe_short.steps))

Out[ ]:
Pipeline steps:
[('minmaxscaler', MinMaxScaler(...)), ('svc', SVC(C=100, ...))]

The steps are named minmaxscaler and svc. In general, the step names are just lowercase versions of the class names. If multiple steps have the same class, a number is appended:

In[ ]:
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

pipe = make_pipeline(StandardScaler(), PCA(n_components=2), StandardScaler())
print("Pipeline steps:\n{}".format(pipe.steps))

Out[ ]:
Pipeline steps:
[('standardscaler-1', StandardScaler(...)),
 ('pca', PCA(n_components=2, ...)),
 ('standardscaler-2', StandardScaler(...))]

As you can see, the first StandardScaler step was named standardscaler-1 and the second standardscaler-2. However, in such settings it might be better to use the Pipeline construction with explicit names, to give more semantic names to each step.

Accessing Step Attributes

Often you will want to inspect attributes of one of the steps of the pipeline, say, the coefficients of a linear model or the components extracted by PCA. The easiest way to access the steps in a pipeline is via the named_steps attribute, which is a dictionary from the step names to the estimators:
In[ ]:
# fit the pipeline defined before to the cancer dataset
pipe.fit(cancer.data)
# extract the first two principal components from the "pca" step
components = pipe.named_steps["pca"].components_
print("components.shape: {}".format(components.shape))

Out[ ]:
components.shape: (2, 30)

Accessing Attributes in a Grid-Searched Pipeline

As we discussed earlier in this chapter, one of the main reasons to use pipelines is for doing grid searches. A common task is to access some of the steps of a pipeline inside a grid search. Let's grid search a LogisticRegression classifier on the cancer dataset, using Pipeline and StandardScaler to scale the data before passing it to the LogisticRegression classifier. First we create a pipeline using the make_pipeline function:

In[ ]:
from sklearn.linear_model import LogisticRegression

pipe = make_pipeline(StandardScaler(), LogisticRegression())

Next, we create a parameter grid. As explained earlier, the regularization parameter to tune for LogisticRegression is the parameter C. We use a logarithmic grid for this parameter, searching between 0.01 and 100. Because we used the make_pipeline function, the name of the LogisticRegression step in the pipeline is the lowercased class name, logisticregression. To tune the parameter C, we therefore have to specify a parameter grid for logisticregression__C:

In[ ]:
param_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]}

As usual, we split the cancer dataset into training and test sets, and fit a grid search:

In[ ]:
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=4)
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)

So how do we access the coefficients of the best LogisticRegression model that was found by GridSearchCV? From the previous chapter we know that the best model found by GridSearchCV, trained on all the training data, is stored in grid.best_estimator_:
17,307 | print("best estimator:\ {}format(grid best_estimator_)out[ ]best estimatorpipeline(steps=('standardscaler'standardscaler(copy=truewith_mean=truewith_std=true))('logisticregression'logisticregression( = class_weight=nonedual=falsefit_intercept=trueintercept_scaling= max_iter= multi_class='ovr'n_jobs= penalty=' 'random_state=nonesolver='liblinear'tol= verbose= warm_start=false))]this best_estimator_ in our case is pipeline with two stepsstandardscaler and logisticregression to access the logisticregression stepwe can use the named_steps attribute of the pipelineas explained earlierin[ ]print("logistic regression step:\ {}formatgrid best_estimator_ named_steps["logisticregression"])out[ ]logistic regression steplogisticregression( = class_weight=nonedual=falsefit_intercept=trueintercept_scaling= max_iter= multi_class='ovr'n_jobs= penalty=' 'random_state=nonesolver='liblinear'tol= verbose= warm_start=falsenow that we have the trained logisticregression instancewe can access the coefficients (weightsassociated with each input featurein[ ]print("logistic regression coefficients:\ {}formatgrid best_estimator_ named_steps["logisticregression"coef_)out[ ]logistic regression coefficients[[- - - - - - - - - - - - - - - - - - - - - - - - ]this might be somewhat lengthy expressionbut often it comes in handy in understanding your models algorithm chains and pipelines |
Grid-Searching Preprocessing Steps and Model Parameters

Using pipelines, we can encapsulate all the processing steps in our machine learning workflow in a single scikit-learn estimator. Another benefit of doing this is that we can now adjust the parameters of the preprocessing using the outcome of a supervised task like regression or classification. In an earlier chapter, we used polynomial features on the Boston Housing dataset before applying a Ridge regressor. Let's model that using a pipeline instead. The pipeline contains three steps: scaling the data, computing polynomial features, and ridge regression:

In[ ]:
from sklearn.datasets import load_boston
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(
    boston.data, boston.target, random_state=0)

from sklearn.preprocessing import PolynomialFeatures
pipe = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(),
    Ridge())

How do we know which degrees of polynomials to choose, or whether to choose any polynomials or interactions at all? Ideally we want to select the degree parameter based on the outcome of the regression. Using our pipeline, we can search over the degree parameter together with the parameter alpha of Ridge. To do this, we define a param_grid that contains both, appropriately prefixed by the step names:

In[ ]:
param_grid = {'polynomialfeatures__degree': [1, 2, 3],
              'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}

Now we can run our grid search again:

In[ ]:
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)
grid.fit(X_train, y_train)

We can visualize the outcome of the cross-validation using a heat map, as we did in the previous chapter:

In[ ]:
plt.matshow(grid.cv_results_['mean_test_score'].reshape(3, -1),
            vmin=0, cmap="viridis")
plt.xlabel("ridge__alpha")
plt.ylabel("polynomialfeatures__degree")
plt.yticks(range(len(param_grid['polynomialfeatures__degree'])),
           param_grid['polynomialfeatures__degree'])
plt.colorbar()

Figure: Heat map of mean cross-validation score as a function of the degree of the polynomial features and the alpha parameter of Ridge

Looking at the results produced by the cross-validation, we can see that using polynomials of degree two helps, but that degree-three polynomials are much worse than either degree one or two. This is reflected in the best parameters that were found:

In[ ]:
print("Best parameters: {}".format(grid.best_params_))

Out[ ]:
Best parameters: {'polynomialfeatures__degree': 2, 'ridge__alpha': 10}

These lead to the following score:

In[ ]:
print("Test-set score: {:.2f}".format(grid.score(X_test, y_test)))

Let's run a grid search without polynomial features for comparison:

In[ ]:
param_grid = {'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}
pipe = make_pipeline(StandardScaler(), Ridge())
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)
print("Score without poly features: {:.2f}".format(grid.score(X_test, y_test)))
As we would expect looking at the grid search results visualized in the heat map, using no polynomial features leads to decidedly worse results.

Searching over preprocessing parameters together with model parameters is a very powerful strategy. However, keep in mind that GridSearchCV tries all possible combinations of the specified parameters. Therefore, adding more parameters to your grid exponentially increases the number of models that need to be built.

Grid-Searching Which Model To Use

You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say, whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC on the cancer dataset. We know that the SVC might need the data to be scaled, so we also search over whether to use StandardScaler or no preprocessing. For the RandomForestClassifier, we know that no preprocessing is necessary. We start by defining the pipeline. Here, we explicitly name the steps. We want two steps, one for the preprocessing and then a classifier. We can instantiate this using SVC and StandardScaler:

In[ ]:
pipe = Pipeline([('preprocessing', StandardScaler()), ('classifier', SVC())])

Now we can define the parameter_grid to search over. We want the classifier to be either RandomForestClassifier or SVC. Because they have different parameters to tune, and need different preprocessing, we can make use of the list of search grids we discussed in "Search over Spaces That Are Not Grids". To assign an estimator to a step, we use the name of the step as the parameter name. When we want to skip a step in the pipeline (for example, because we don't need preprocessing for the RandomForest), we can set that step to None:

In[ ]:
from sklearn.ensemble import RandomForestClassifier

param_grid = [
    {'classifier': [SVC()], 'preprocessing': [StandardScaler(), None],
     'classifier__gamma': [0.001, 0.01, 0.1, 1, 10, 100],
     'classifier__C': [0.001, 0.01, 0.1, 1, 10, 100]},
    {'classifier': [RandomForestClassifier(n_estimators=100)],
     'preprocessing': [None], 'classifier__max_features': [1, 2, 3]}]
In[ ]:
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)

print("Best params:\n{}\n".format(grid.best_params_))
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
print("Test-set score: {:.2f}".format(grid.score(X_test, y_test)))

Out[ ]:
Best params:
{'classifier': SVC(...), 'preprocessing': StandardScaler(...), ...}

The outcome of the grid search is that an SVC with StandardScaler preprocessing gave the best result.

Summary and Outlook

In this chapter we introduced the Pipeline class, a general-purpose tool to chain together multiple processing steps in a machine learning workflow. Real-world applications of machine learning rarely involve an isolated use of a model, and instead are a sequence of processing steps. Using pipelines allows us to encapsulate multiple steps into a single Python object that adheres to the familiar scikit-learn interface of fit, predict, and transform. In particular when doing model evaluation using cross-validation and parameter selection using grid search, using the Pipeline class to capture all the processing steps is essential for proper evaluation. The Pipeline class also allows writing more succinct code, and reduces the likelihood of mistakes that can happen when building processing chains without the Pipeline class (like forgetting to apply all transformers on the test set, or not applying them in the right order). Choosing the right combination of feature extraction, preprocessing, and models is somewhat of an art, and often requires some trial and error. However, using pipelines, this "trying out" of many different processing steps is quite simple.
When doing so, make sure to evaluate whether every component you are including in your model is necessary.

With this chapter we have completed our survey of the general-purpose tools and algorithms provided by scikit-learn. You now possess all the required skills and know the necessary mechanisms to apply machine learning in practice. In the next chapter, we will dive in more detail into one particular type of data that is commonly seen in practice, and that requires some special expertise to handle correctly: text data.
Working with Text Data

In earlier chapters, we talked about two kinds of features that can represent properties of the data: continuous features that describe a quantity, and categorical features that are items from a fixed list. There is a third kind of feature that can be found in many applications, which is text. For example, if we want to classify an email message as either a legitimate email or spam, the content of the email will certainly contain important information for this classification task. Or maybe we want to learn about the opinion of a politician on the topic of immigration. Here, that individual's speeches or tweets might provide useful information. In customer service, we often want to find out if a message is a complaint or an inquiry. We can use the subject line and content of a message to automatically determine the customer's intent, which allows us to send the message to the appropriate department, or even send a fully automatic reply.

Text data is usually represented as strings, made up of characters. In any of the examples just given, the length of the text data will vary. This feature is clearly very different from the numeric features that we've discussed so far, and we will need to process the data before we can apply our machine learning algorithms to it.

Types of Data Represented as Strings

Before we dive into the processing steps that go into representing text data for machine learning, we want to briefly discuss different kinds of text data that you might encounter. Text is usually just a string in your dataset, but not all string features should be treated as text. A string feature can sometimes represent categorical variables, as we discussed earlier. There is no way to know how to treat a string feature before looking at the data.
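A quick first look at a string feature is simply to count its distinct values and how often each occurs. The following is a small sketch with made-up survey responses, used only for illustration:

import pandas as pd

# made-up survey responses, invented for this illustration
favorite_colors = pd.Series(
    ["blue", "green", "blue", "blak", "black", "midnight blue", "blue"])
# a handful of distinct values suggests a categorical feature;
# thousands of distinct free-form strings suggest text or strings needing cleanup
print(favorite_colors.value_counts())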
There are four kinds of string data you might see:

- Categorical data
- Free strings that can be semantically mapped to categories
- Structured string data
- Text data

Categorical data is data that comes from a fixed list. Say you collect data via a survey where you ask people their favorite color, with a drop-down menu that allows them to select from "red", "green", "blue", "yellow", "black", "white", "purple", and "pink". This will result in a dataset with exactly eight different possible values, which clearly encode a categorical variable. You can check whether this is the case for your data by eyeballing it (if you see very many different strings it is unlikely that this is a categorical variable) and confirm it by computing the unique values over the dataset, and possibly a histogram over how often each appears. You also might want to check whether each variable actually corresponds to a category that makes sense for your application. Maybe halfway through the existence of your survey, someone found that "black" was misspelled as "blak" and subsequently fixed the survey. As a result, your dataset contains both "blak" and "black", which correspond to the same semantic meaning and should be consolidated.

Now imagine that instead of providing a drop-down menu, you provide a text field for the users to provide their own favorite colors. Many people might respond with a color name like "black" or "blue". Others might make typographical errors, use different spellings like "gray" and "grey", or use more evocative and specific names like "midnight blue". You will also have some very strange entries. Some good examples come from the xkcd Color Survey, where people had to name colors and came up with names like "velociraptor cloaka" and "my dentist's office orange. I still remember his dandruff slowly wafting into my gaping yaw", which are hard to map to colors automatically (or at all). The responses you can obtain from a text field belong to the second category in the list, free strings that can be semantically mapped to categories. It will probably be best to encode this data as a categorical variable, where you can select the categories either by using the most common entries, or by defining categories that will capture responses in a way that makes sense for your application. You might then have some categories for standard colors, maybe a category "multicolored" for people that gave answers like "green and red stripes", and an "other" category for things that cannot be encoded otherwise. This kind of preprocessing of strings can take a lot of manual effort and is not easily automated. If you are in a position where you can influence data collection, we highly recommend avoiding manually entered values for concepts that are better captured using categorical variables.

Often, manually entered values do not correspond to fixed categories, but still have some underlying structure, like addresses, names of places or people, dates, or telephone
numbers. Their treatment is highly dependent on context and domain, and a systematic treatment of these cases is beyond the scope of this book.

The final category of string data is freeform text data that consists of phrases or sentences. Examples include tweets, chat logs, and hotel reviews, as well as the collected works of Shakespeare, the content of Wikipedia, or the Project Gutenberg collection of ebooks (arguably, the content of websites linked to in tweets contains more information than the text of the tweets themselves). All of these collections contain information mostly as sentences composed of words. For simplicity's sake, let's assume all our documents are in one language, English. Most of what we will talk about in the rest of the chapter also applies to other languages that use the Roman alphabet, and partially to other languages with word boundary delimiters; Chinese, for example, does not delimit word boundaries, and has other challenges that make applying the techniques in this chapter difficult. In the context of text analysis, the dataset is often called the corpus, and each data point, represented as a single text, is called a document. These terms come from the information retrieval (IR) and natural language processing (NLP) community, which both deal mostly in text data.

Example Application: Sentiment Analysis of Movie Reviews

As a running example in this chapter, we will use a dataset of movie reviews from the IMDb (Internet Movie Database) website collected by Stanford researcher Andrew Maas (the dataset is available at http://ai.stanford.edu/~amaas/data/sentiment/). This dataset contains the text of the reviews, together with a label that indicates whether a review is "positive" or "negative". The IMDb website itself contains ratings from 1 to 10. To simplify the modeling, this annotation is summarized as a two-class classification dataset where reviews with a high score are labeled as positive, and the rest as negative. We will leave the question of whether this is a good representation of the data open, and simply use the data as provided by Andrew Maas.

After unpacking the data, the dataset is provided as text files in two separate folders, one for the training data and one for the test data. Each of these in turn has two subfolders, one called pos and one called neg:
In[ ]:
!tree -dL 2 data/aclImdb

Out[ ]:
data/aclImdb
+-- test
|   +-- neg
|   +-- pos
+-- train
    +-- neg
    +-- pos

The pos folder contains all the positive reviews, each as a separate text file, and similarly for the neg folder. There is a helper function in scikit-learn to load files stored in such a folder structure, where each subfolder corresponds to a label, called load_files. We apply the load_files function first to the training data:

In[ ]:
from sklearn.datasets import load_files

reviews_train = load_files("data/aclImdb/train/")
# load_files returns a bunch, containing training texts and training labels
text_train, y_train = reviews_train.data, reviews_train.target
print("type of text_train: {}".format(type(text_train)))
print("length of text_train: {}".format(len(text_train)))
print("text_train[1]:\n{}".format(text_train[1]))

Out[ ]:
type of text_train: <class 'list'>
length of text_train: 25000
text_train[1]:
b'Words can\'t describe how bad this movie is. I can\'t explain it by writing only. You have too see it for yourself to get at grip of how horrible a movie really can be. Not that I recommend you to do that. There are so many clich\xc3\xa9s, mistakes (and all other negative things you can imagine) here that will just make you cry. To start with the technical first, there are a LOT of mistakes regarding the airplane. I won\'t list them here, but just mention the coloring of the plane. They didn\'t even manage to show an airliner in the colors of a fictional airline, but instead used a plane painted in the original Boeing livery. Very bad. The plot is stupid and has been done many times before, only much, much better. There are so many ridiculous moments here that I lost count of it really early. Also, I was on the bad guys\' side all the time in the movie, because the good guys were so stupid. "Executive Decision" should without a doubt be you\'re choice over this one, even the "Turbulence"-movies are better. In fact, every other movie in the world is better than this one.'

You can see that text_train is a list of length 25,000, where each entry is a string containing a review. We printed the review with index 1. You can also see that the review contains some HTML line breaks (<br />). While these are unlikely to have a large impact on our machine learning models, it is better to remove this formatting before we proceed:
In[ ]:
text_train = [doc.replace(b"<br />", b" ") for doc in text_train]

The type of the entries of text_train will depend on your Python version. In Python 3, they will be of type bytes, which represents a binary encoding of the string data. In Python 2, text_train contains strings. We won't go into the details of the different string types in Python here, but we recommend that you read the Python 2 and/or Python 3 documentation regarding strings and Unicode.

The dataset was collected such that the positive class and the negative class are balanced, so that there are as many positive as negative strings:

In[ ]:
print("Samples per class (training): {}".format(np.bincount(y_train)))

Out[ ]:
Samples per class (training): [12500 12500]

We load the test dataset in the same manner:

In[ ]:
reviews_test = load_files("data/aclImdb/test/")
text_test, y_test = reviews_test.data, reviews_test.target
print("Number of documents in test data: {}".format(len(text_test)))
print("Samples per class (test): {}".format(np.bincount(y_test)))
text_test = [doc.replace(b"<br />", b" ") for doc in text_test]

Out[ ]:
Number of documents in test data: 25000
Samples per class (test): [12500 12500]

The task we want to solve is as follows: given a review, we want to assign the label "positive" or "negative" based on the text content of the review. This is a standard binary classification task. However, the text data is not in a format that a machine learning model can handle. We need to convert the string representation of the text into a numeric representation that we can apply our machine learning algorithms to.

Representing Text Data as a Bag of Words

One of the most simple but effective and commonly used ways to represent text for machine learning is using the bag-of-words representation. When using this representation, we discard most of the structure of the input text, like paragraphs, sentences, and formatting, and only count how often each word appears in each text in the corpus.
Discarding the structure and counting only word occurrences leads to the mental image of representing text as a "bag".

Computing the bag-of-words representation for a corpus of documents consists of the following three steps:

1. Tokenization. Split each document into the words that appear in it (called tokens), for example by splitting them on whitespace and punctuation.
2. Vocabulary building. Collect a vocabulary of all words that appear in any of the documents, and number them (say, in alphabetical order).
3. Encoding. For each document, count how often each of the words in the vocabulary appears in this document.

There are some subtleties involved in step 1 and step 2, which we will discuss in more detail later in this chapter. For now, let's look at how we can apply the bag-of-words processing using scikit-learn. The figure below illustrates the process on the string "This is how you get ants." The output is one vector of word counts for each document. For each word in the vocabulary, we have a count of how often it appears in each document. That means our numeric representation has one feature for each unique word in the whole dataset. Note how the order of the words in the original string is completely irrelevant to the bag-of-words feature representation.

Figure: Bag-of-words processing
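Before turning to scikit-learn's implementation, here is a minimal plain-Python sketch of the three steps. The second toy document is made up for illustration, and the tokenization is a deliberately simple regular-expression split:

import re

docs = ["This is how you get ants.", "Ants, ants everywhere!"]

# step 1: tokenization, lowercase and split on non-word characters
tokenized = [re.findall(r"\w+", doc.lower()) for doc in docs]

# step 2: vocabulary building, number all unique words in alphabetical order
vocabulary = sorted(set(word for doc in tokenized for word in doc))

# step 3: encoding, count how often each vocabulary word appears in each document
counts = [[doc.count(word) for word in vocabulary] for doc in tokenized]

print(vocabulary)
print(counts)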
The bag-of-words representation is implemented in CountVectorizer, which is a transformer. Let's first apply it to a toy dataset, consisting of two samples, to see it working:

In[ ]:
bards_words = ["The fool doth think he is wise,",
               "but the wise man knows himself to be a fool"]

We import and instantiate the CountVectorizer and fit it to our toy data as follows:

In[ ]:
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
vect.fit(bards_words)

Fitting the CountVectorizer consists of the tokenization of the training data and building of the vocabulary, which we can access as the vocabulary_ attribute:

In[ ]:
print("Vocabulary size: {}".format(len(vect.vocabulary_)))
print("Vocabulary content:\n{}".format(vect.vocabulary_))

Out[ ]:
Vocabulary size: 13
Vocabulary content:
{'the': 9, 'himself': 5, 'wise': 12, 'he': 4, 'doth': 2, 'to': 11, 'knows': 7, 'man': 8, 'fool': 3, 'is': 6, 'be': 0, 'think': 10, 'but': 1}

The vocabulary consists of 13 words, from "be" to "wise". To create the bag-of-words representation for the training data, we call the transform method:

In[ ]:
bag_of_words = vect.transform(bards_words)
print("bag_of_words: {}".format(repr(bag_of_words)))

Out[ ]:
bag_of_words: <2x13 sparse matrix with 16 stored elements
  in Compressed Sparse Row format>

The bag-of-words representation is stored in a SciPy sparse matrix that only stores the entries that are nonzero. The matrix is of shape 2x13, with one row for each of the two data points and one feature for each of the words in the vocabulary. A sparse matrix is used as most documents only contain a small subset of the words in the vocabulary, meaning most entries in the feature array are 0.
Think of how many different words might appear in a movie review, compared to all the words in the English language (which is what the vocabulary models): storing all those zeros would be prohibitive, and a waste of memory. To look at the actual content of the sparse matrix, we can convert it to a "dense" NumPy array (that also stores all the entries) using the toarray method. This is possible because we are using a small toy dataset that contains only 13 words; for any real dataset, this would result in a MemoryError:

In[ ]:
print("Dense representation of bag_of_words:\n{}".format(
      bag_of_words.toarray()))

Out[ ]:
Dense representation of bag_of_words:
[[0 0 1 1 1 0 1 0 0 1 1 0 1]
 [1 1 0 1 0 1 0 1 1 1 0 1 1]]

We can see that the word counts for each word are either 0 or 1; neither of the two strings in bards_words contains a word twice. Let's take a look at how to read these feature vectors. The first string ("The fool doth think he is wise,") is represented as the first row. It contains the first word in the vocabulary, "be", zero times. It also contains the second word in the vocabulary, "but", zero times. It contains the third word, "doth", once, and so on. Looking at both rows, we can see that the fourth word, "fool", the tenth word, "the", and the thirteenth word, "wise", appear in both strings.

Bag-of-Words for Movie Reviews

Now that we've gone through the bag-of-words process in detail, let's apply it to our task of sentiment analysis for movie reviews. Earlier, we loaded our training and test data from the IMDb reviews into lists of strings (text_train and text_test), which we will now process:

In[ ]:
vect = CountVectorizer().fit(text_train)
X_train = vect.transform(text_train)
print("X_train:\n{}".format(repr(X_train)))
The shape of X_train, the bag-of-words representation of the training data, is 25,000 rows by 74,849 columns, indicating that the vocabulary contains 74,849 entries. Again, the data is stored as a SciPy sparse matrix. Let's look at the vocabulary in a bit more detail. Another way to access the vocabulary is using the get_feature_names method of the vectorizer, which returns a convenient list where each entry corresponds to one feature:

In[ ]:
feature_names = vect.get_feature_names()
print("Number of features: {}".format(len(feature_names)))
print("First 20 features:\n{}".format(feature_names[:20]))
print("Features 20010 to 20030:\n{}".format(feature_names[20010:20030]))
print("Every 2000th feature:\n{}".format(feature_names[::2000]))

Out[ ]:
Number of features: 74849
Features 20010 to 20030:
['dratted', 'draub', 'draught', 'draughts', 'draughtswoman', 'draw', 'drawback', 'drawbacks', 'drawer', 'drawers', 'drawing', 'drawings', 'drawl', 'drawled', 'drawling', 'drawn', 'draws', 'draza', 'dre', 'drea']
Every 2000th feature:
['aesir', 'aquarian', 'barking', 'blustering', 'bete', 'chicanery', 'condensing', 'cunning', 'detox', 'draper', 'enshrined', 'favorit', 'freezer', 'goldman', 'hasan', 'huitieme', 'intelligible', 'kantrowitz', 'lawful', 'maars', 'megalunged', 'mostey', 'norrland', 'padilla', 'pincher', 'promisingly', 'receptionist', 'rivals', 'schnaas', 'shunning', 'sparse', 'subset', 'temptations', 'treatises', 'unproven', 'walkman', 'xylophonist']

As you can see, possibly a bit surprisingly, the first entries in the vocabulary are all numbers. All these numbers appear somewhere in the reviews, and are therefore extracted as words. Most of these numbers don't have any immediate semantic meaning, apart from "007", which in the particular context of movies is likely to refer to the James Bond character. Weeding out the meaningful from the nonmeaningful "words" is sometimes tricky. Looking further along in the vocabulary, we find a collection of English words starting with "dra". You might notice that for "draught", "drawback", and "drawer", both the singular and the plural forms are contained in the vocabulary as distinct words (a quick analysis of the data confirms that this is indeed the case; try confirming it yourself). These words have very closely related semantic meanings, and counting them as different words, corresponding to different features, might not be ideal.
Before we try to improve our feature extraction, let's obtain a quantitative measure of performance by actually building a classifier. We have the training labels stored in y_train and the bag-of-words representation of the training data in X_train, so we can train a classifier on this data. For high-dimensional, sparse data like this, linear models like LogisticRegression often work best. Let's start by evaluating LogisticRegression using cross-validation:

In[ ]:
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

scores = cross_val_score(LogisticRegression(), X_train, y_train, cv=5)
print("Mean cross-validation accuracy: {:.2f}".format(np.mean(scores)))

The mean cross-validation accuracy we obtain indicates reasonable performance for a balanced binary classification task. We know that LogisticRegression has a regularization parameter, C, which we can tune via cross-validation:

In[ ]:
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10]}
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))
print("Best parameters: ", grid.best_params_)

The grid search reports the best cross-validation score for one particular setting of C. We can now assess the generalization performance of this parameter setting on the test set:

In[ ]:
X_test = vect.transform(text_test)
print("Test score: {:.2f}".format(grid.score(X_test, y_test)))

(The attentive reader might notice that we violate our lesson from the previous chapter on cross-validation with preprocessing here. Using the default settings of CountVectorizer, it actually does not collect any statistics, so our results are valid. Using a pipeline from the start would be a better choice for applications, but we defer it for ease of exposition.)
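For reference, a sketch of the pipeline variant alluded to in that note might look like the following; it reuses the same C values as above and is not the approach taken in the text at this point:

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# the CountVectorizer is now refit on the training portion of every split
pipe = make_pipeline(CountVectorizer(), LogisticRegression())
pipe_grid = GridSearchCV(
    pipe, {"logisticregression__C": [0.001, 0.01, 0.1, 1, 10]}, cv=5)
pipe_grid.fit(text_train, y_train)
print("Best cross-validation score: {:.2f}".format(pipe_grid.best_score_))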
Next, let's see whether we can improve the extraction of words. The CountVectorizer extracts tokens using a regular expression. By default, the regular expression that is used is "\b\w\w+\b". If you are not familiar with regular expressions, this means it finds all sequences of characters that consist of at least two letters or numbers (\w) and that are separated by word boundaries (\b). It does not find single-letter words, and it splits up contractions like "doesn't" or "bit.ly", but it matches "h8ter" as a single word. The CountVectorizer then converts all words to lowercase characters, so that "soon", "Soon", and "sOon" all correspond to the same token (and therefore feature). This simple mechanism works quite well in practice, but as we saw earlier, we get many uninformative features (like the numbers). One way to cut back on these is to only use tokens that appear in at least two documents (or at least five documents, and so on). A token that appears only in a single document is unlikely to appear in the test set and is therefore not helpful. We can set the minimum number of documents a token needs to appear in with the min_df parameter:

In[ ]:
vect = CountVectorizer(min_df=5).fit(text_train)
X_train = vect.transform(text_train)
print("X_train with min_df: {}".format(repr(X_train)))

By requiring at least five appearances of each token, we bring the number of features down to 27,271, only about a third of the original features. Let's look at some tokens again:

In[ ]:
feature_names = vect.get_feature_names()

print("First 50 features:\n{}".format(feature_names[:50]))
print("Features 20010 to 20030:\n{}".format(feature_names[20010:20030]))
print("Every 700th feature:\n{}".format(feature_names[::700]))

Out[ ]:
Features 20010 to 20030:
['repentance', 'repercussions', 'repertoire', 'repetition', 'repetitions', 'repetitious', 'repetitive', 'rephrase', 'replace', 'replaced', 'replacement', 'replaces', 'replacing', 'replay', 'replayable', 'replayed', 'replaying', 'replays', 'replete', 'replica']
Every 700th feature:
['affections', 'appropriately', 'barbra', 'blurbs', 'butchered', 'cheese', 'commitment', 'courts', 'deconstructed', 'disgraceful', 'dvds', 'eschews', 'fell', 'freezer', 'goriest', 'hauser', 'hungary', 'insinuate', 'juggle', 'leering', 'maelstrom', 'messiah', 'music', 'occasional', 'parking', 'pleasantville', 'pronunciation', 'recipient', 'reviews', 'sas', 'shea', 'sneers', 'steiger', 'swastika', 'thrusting', 'tvs', 'vampyre', 'westerns']

There are clearly many fewer numbers, and some of the more obscure words or misspellings seem to have vanished. Let's see how well our model performs by doing a grid search again:

In[ ]:
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))

The best validation accuracy of the grid search is unchanged from before. We didn't improve our model, but having fewer features to deal with speeds up processing, and throwing away useless features might make the model more interpretable.

If the transform method of CountVectorizer is called on a document that contains words that were not contained in the training data, these words will be ignored, as they are not part of the dictionary. This is not really an issue for classification, as it's not possible to learn anything about words that are not in the training data. For some applications, like spam detection, it might be helpful to add a feature that encodes how many so-called "out of vocabulary" words there are in a particular document. For this to work, you need to set min_df; otherwise, this feature will never be active during training. A minimal sketch of such a feature is shown below.
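The following rough sketch assumes a CountVectorizer vect that was fit with min_df as above; the helper name is made up:

import numpy as np

# reuse the vectorizer's own preprocessing and tokenization
analyzer = vect.build_analyzer()

def n_out_of_vocabulary(doc):
    # count tokens that the fitted vectorizer would ignore
    return sum(1 for token in analyzer(doc) if token not in vect.vocabulary_)

oov_counts = np.array([n_out_of_vocabulary(doc) for doc in text_train])
print("Documents with at least one unknown token: {}".format(np.sum(oov_counts > 0)))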
Stopwords

Another way that we can get rid of uninformative words is by discarding words that are too frequent to be informative. There are two main approaches: using a language-specific list of stopwords, or discarding words that appear too frequently. scikit-learn has a built-in list of English stopwords in the feature_extraction.text module:

In[ ]:
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
print("Number of stop words: {}".format(len(ENGLISH_STOP_WORDS)))
print("Every 10th stopword:\n{}".format(list(ENGLISH_STOP_WORDS)[::10]))

Out[ ]:
Number of stop words: 318
Every 10th stopword:
['above', 'elsewhere', 'into', 'well', 'rather', 'fifteen', 'had', 'enough', 'herein', 'should', 'third', 'although', 'more', 'this', 'none', 'seemed', 'nobody', 'seems', 'he', 'also', 'fill', 'anyone', 'anything', 'me', 'the', 'yet', 'go', 'seeming', 'front', 'beforehand', 'forty']

Clearly, removing the stopwords in the list can only decrease the number of features by the length of the list (here, 318), but it might lead to an improvement in performance. Let's give it a try:

In[ ]:
# Specifying stop_words="english" uses the built-in list.
# We could also augment it and pass our own.
vect = CountVectorizer(min_df=5, stop_words="english").fit(text_train)
X_train = vect.transform(text_train)
print("X_train with stop words:\n{}".format(repr(X_train)))

The number of features decreased by slightly less than the length of the stopword list, which means that most, but not all, of the stopwords appeared in the data. Let's run the grid search again:

In[ ]:
grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))

The grid search performance decreased slightly using the stopwords, not enough to worry about, but given that excluding such a small fraction of the features is unlikely to change performance or interpretability a lot, it doesn't seem worth using this list. Fixed lists are mostly helpful for small datasets, which might not contain enough information for the model to determine which words are stopwords from the data itself. As an exercise, you can try out the other approach, discarding frequently appearing words by setting the max_df option of CountVectorizer, and see how it influences the number of features and the performance. One possible starting point is sketched below.
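In this sketch, the 0.15 document-frequency cutoff is an arbitrary choice, the variable names are new, and param_grid is the C grid defined earlier:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# discard tokens appearing in more than 15% of documents,
# in addition to the min_df requirement used before
vect_df = CountVectorizer(min_df=5, max_df=0.15).fit(text_train)
X_train_df = vect_df.transform(text_train)
print("X_train with max_df: {}".format(repr(X_train_df)))

grid = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid.fit(X_train_df, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))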
Rescaling the Data with tf-idf

Instead of dropping features that are deemed unimportant, another approach is to rescale features by how informative we expect them to be. One of the most common ways to do this is using the term frequency-inverse document frequency (tf-idf) method. The intuition of this method is to give high weight to any term that appears often in a particular document, but not in many documents in the corpus. If a word appears often in a particular document, but not in very many documents, it is likely to be very descriptive of the content of that document. scikit-learn implements the tf-idf method in two classes: TfidfTransformer, which takes in the sparse matrix output produced by CountVectorizer and transforms it, and TfidfVectorizer, which takes in the text data and does both the bag-of-words feature extraction and the tf-idf transformation. There are several variants of the tf-idf rescaling scheme, which you can read about on Wikipedia. The tf-idf score for word w in document d, as implemented in both the TfidfTransformer and TfidfVectorizer classes, is given by

    tfidf(w, d) = tf(w, d) * (log((N + 1) / (N_w + 1)) + 1)

where N is the number of documents in the training set, N_w is the number of documents in the training set that the word w appears in, and tf (the term frequency) is the number of times that the word w appears in the query document d (the document you want to transform or encode). We provide this formula here mostly for completeness; you don't need to remember it to use the tf-idf encoding. Both classes also apply normalization after computing the tf-idf representation; in other words, they rescale the representation of each document to have Euclidean norm 1. Rescaling in this way means that the length of a document (the number of words) does not change the vectorized representation.
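As a quick sanity check of this formula (not part of the original example), we can compare scikit-learn's unnormalized output with a value computed by hand on a tiny made-up count matrix:

import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer

# two documents, one word: the word appears twice in the first document
# and never in the second, so N = 2, N_w = 1, and tf = 2 for document 0
counts = np.array([[2], [0]])
tfidf = TfidfTransformer(norm=None).fit_transform(counts).toarray()

# the same value computed directly from the formula (log is the natural logarithm)
N, N_w, tf = 2, 1, 2
by_hand = tf * (np.log((N + 1) / (N_w + 1)) + 1)
print(tfidf[0, 0], by_hand)  # both approximately 2.81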
Because tf-idf actually makes use of the statistical properties of the training data, we will use a pipeline, as described in the previous chapter, to ensure the results of our grid search are valid. This leads to the following code:

In[ ]:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

pipe = make_pipeline(TfidfVectorizer(min_df=5, norm=None),
                     LogisticRegression())
param_grid = {'logisticregression__C': [0.001, 0.01, 0.1, 1, 10]}

grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(text_train, y_train)
print("Best cross-validation score: {:.2f}".format(grid.best_score_))

As you can see, there is some improvement when using tf-idf instead of just word counts. We can also inspect which words tf-idf found most important. Keep in mind that the tf-idf scaling is meant to find words that distinguish documents, but it is a purely unsupervised technique. So, "important" here does not necessarily relate to the "positive review" and "negative review" labels we are interested in. First, we extract the TfidfVectorizer from the pipeline:

In[ ]:
vectorizer = grid.best_estimator_.named_steps["tfidfvectorizer"]
# transform the training dataset
X_train = vectorizer.transform(text_train)
# find maximum value for each of the features over the dataset
max_value = X_train.max(axis=0).toarray().ravel()
sorted_by_tfidf = max_value.argsort()
# get feature names
feature_names = np.array(vectorizer.get_feature_names())

print("Features with lowest tfidf:\n{}".format(
      feature_names[sorted_by_tfidf[:20]]))
print("Features with highest tfidf:\n{}".format(
      feature_names[sorted_by_tfidf[-20:]]))

Out[ ]:
Features with lowest tfidf:
['poignant' 'disagree' 'instantly' 'importantly' 'lacked' 'occurred'
 'currently' 'altogether' 'nearby' 'undoubtedly' 'directs' 'fond' 'stinker'
 'avoided' 'emphasis' 'commented' 'disappoint' 'realizing' 'downhill' 'inane']
Features with highest tfidf:
['coop' 'homer' 'dillinger' 'hackenstein' 'gadget' 'taker' 'macarthur'
 'vargas' 'jesse' 'basket' 'dominick' 'the' 'victor' 'bridget' 'victoria'
 'khouri' 'zizek' 'rob' 'timon' 'titanic']
Features with low tf-idf are those that either are very commonly used across documents or are only used sparingly, and only in very long documents. Interestingly, many of the high-tf-idf features actually identify certain shows or movies. These terms only appear in reviews for this particular show or franchise, but tend to appear very often in these particular reviews. This is very clear, for example, for "pokemon", "smallville", and "doodlebops", but "scanners" here actually also refers to a movie title. These words are unlikely to help us in our sentiment classification task (unless maybe some franchises are universally reviewed positively or negatively), but certainly contain a lot of specific information about the reviews.

We can also find the words that have low inverse document frequency, that is, those that appear frequently and are therefore deemed less important. The inverse document frequency values found on the training set are stored in the idf_ attribute:

In[ ]:
sorted_by_idf = np.argsort(vectorizer.idf_)
print("Features with lowest idf:\n{}".format(
      feature_names[sorted_by_idf[:100]]))

Out[ ]:
Features with lowest idf:
['the' 'and' 'of' 'to' 'this' 'is' 'it' 'in' 'that' 'but' 'for' 'with'
 'was' 'as' 'on' 'movie' 'not' 'have' 'one' 'be' 'film' 'are' 'you' 'all'
 'at' 'an' 'by' 'so' 'from' 'like' 'who' 'they' 'there' 'if' 'his' 'out'
 'just' 'about' 'he' 'or' 'has' 'what' 'some' 'good' 'can' 'more' 'when'
 'time' 'up' 'very' 'even' 'only' 'no' 'would' 'my' 'see' 'really' 'story'
 'which' 'well' 'had' 'me' 'than' 'much' 'their' 'get' 'were' 'other'
 'been' 'do' 'most' 'don' 'her' 'also' 'into' 'first' 'made' 'how' 'great'
 'because' 'will' 'people' 'make' 'way' 'could' 'we' 'bad' 'after' 'any'
 'too' 'then' 'them' 'she' 'watch' 'think' 'acting' 'movies' 'seen' 'its'
 'him']

As expected, these are mostly English stopwords like "the" and "no", but some are clearly domain-specific to the movie reviews, like "movie", "film", "time", "story", and so on. Interestingly, "good", "great", and "bad" are also among the most frequent and therefore "least relevant" words according to the tf-idf measure, even though we might expect these to be very important for our sentiment analysis task.

Investigating Model Coefficients

Finally, let's look in a bit more detail into what our logistic regression model actually learned from the data. Because there are so many features (27,271 after removing the infrequent ones) we clearly cannot look at all of the coefficients at the same time. However, we can look at the largest coefficients, and see which words these correspond to. We will use the last model that we trained, based on the tf-idf features. The following bar chart shows the largest and smallest coefficients of the logistic regression model, with the bars showing the size of each coefficient:
In[ ]:
mglearn.tools.visualize_coefficients(
    grid.best_estimator_.named_steps["logisticregression"].coef_,
    feature_names, n_top_features=40)

Figure: Largest and smallest coefficients of logistic regression trained on tf-idf features

The negative coefficients on the left belong to words that, according to the model, are indicative of negative reviews, while the positive coefficients on the right belong to words that, according to the model, indicate positive reviews. Most of the terms are quite intuitive, like "worst", "waste", "disappointment", and "laughable" indicating bad movie reviews, while "excellent", "wonderful", "enjoyable", and "refreshing" indicate positive movie reviews. Some words are slightly less clear, like "bit", "job", and "today", but these might be part of phrases like "good job" or "best today".

Bag-of-Words with More Than One Word (n-Grams)

One of the main disadvantages of using a bag-of-words representation is that word order is completely discarded. Therefore, the two strings "it's bad, not good at all" and "it's good, not bad at all" have exactly the same representation, even though the meanings are inverted. Putting "not" in front of a word is only one example (if an extreme one) of how context matters. Fortunately, there is a way of capturing context when using a bag-of-words representation, by not only considering the counts of single tokens, but also the counts of pairs or triplets of tokens that appear next to each other. Pairs of tokens are known as bigrams, triplets of tokens are known as trigrams, and more generally sequences of tokens are known as n-grams. We can change the range of tokens that are considered as features by changing the ngram_range parameter of CountVectorizer or TfidfVectorizer. The ngram_range parameter is a tuple, consisting of the minimum and the maximum length of the sequences of tokens that are considered.
Here is an example on the toy data we used earlier:

In[ ]:
print("bards_words:\n{}".format(bards_words))

Out[ ]:
bards_words:
['The fool doth think he is wise,', 'but the wise man knows himself to be a fool']

The default is to create one feature per sequence of tokens that is at least one token long and at most one token long, or in other words exactly one token long (single tokens are also called unigrams):

In[ ]:
cv = CountVectorizer(ngram_range=(1, 1)).fit(bards_words)
print("Vocabulary size: {}".format(len(cv.vocabulary_)))
print("Vocabulary:\n{}".format(cv.get_feature_names()))

Out[ ]:
Vocabulary size: 13
Vocabulary:
['be', 'but', 'doth', 'fool', 'he', 'himself', 'is', 'knows', 'man', 'the', 'think', 'to', 'wise']

To look only at bigrams, that is, only at sequences of two tokens following each other, we can set ngram_range to (2, 2):

In[ ]:
cv = CountVectorizer(ngram_range=(2, 2)).fit(bards_words)
print("Vocabulary size: {}".format(len(cv.vocabulary_)))
print("Vocabulary:\n{}".format(cv.get_feature_names()))

Out[ ]:
Vocabulary size: 14
Vocabulary:
['be fool', 'but the', 'doth think', 'fool doth', 'he is', 'himself to', 'is wise', 'knows himself', 'man knows', 'the fool', 'the wise', 'think he', 'to be', 'wise man']

Using longer sequences of tokens usually results in many more features, and in more specific features. There is no common bigram between the two phrases in bards_words:
17,331 | print("transformed data (dense):\ {}format(cv transform(bards_wordstoarray())out[ ]transformed data (dense)[[ [ ]for most applicationsthe minimum number of tokens should be oneas single words often capture lot of meaning adding bigrams helps in most cases adding longer sequences--up to -grams--might help toobut this will lead to an explosion of the number of features and might lead to overfittingas there will be many very specific features in principlethe number of bigrams could be the number of unigrams squared and the number of trigrams could be the number of unigrams to the power of threeleading to very large feature spaces in practicethe number of higher -grams that actually appear in the data is much smallerbecause of the structure of the (englishlanguagethough it is still large here is what using unigramsbigramsand trigrams on bards_words looks likein[ ]cv countvectorizer(ngram_range=( )fit(bards_wordsprint("vocabulary size{}format(len(cv vocabulary_))print("vocabulary:\ {}format(cv get_feature_names())out[ ]vocabulary size vocabulary['be''be fool''but''but the''but the wise''doth''doth think''doth think he''fool''fool doth''fool doth think''he''he is''he is wise''himself''himself to''himself to be''is''is wise''knows''knows himself''knows himself to''man''man knows''man knows himself''the''the fool''the fool doth''the wise''the wise man''think''think he''think he is''to''to be''to be fool''wise''wise man''wise man knows'let' try out the tfidfvectorizer on the imdb movie review data and find the best setting of -gram range using grid searchin[ ]pipe make_pipeline(tfidfvectorizer(min_df= )logisticregression()running the grid search takes long time because of the relatively large grid and the inclusion of trigrams param_grid {"logisticregression__c"[ ]"tfidfvectorizer__ngram_range"[( )( )( )]grid gridsearchcv(pipeparam_gridcv= grid fit(text_trainy_trainprint("best cross-validation score{ }format(grid best_score_)print("best parameters:\ {}format(grid best_params_)bag-of-words with more than one word ( -grams |
Out[ ]:
Best parameters:
{'tfidfvectorizer__ngram_range': (1, 3), 'logisticregression__C': 100}

As you can see from the results, we improved performance by a bit more than a percent by adding bigram and trigram features. We can visualize the cross-validation accuracy as a function of the ngram_range and C parameters as a heat map, as we did in the previous chapter:

In[ ]:
# extract scores from grid_search
scores = grid.cv_results_['mean_test_score'].reshape(-1, 3).T
# visualize heat map
heatmap = mglearn.tools.heatmap(
    scores, xlabel="C", ylabel="ngram_range", cmap="viridis", fmt="%.3f",
    xticklabels=param_grid['logisticregression__C'],
    yticklabels=param_grid['tfidfvectorizer__ngram_range'])
plt.colorbar(heatmap)

Figure: Heat map visualization of mean cross-validation accuracy as a function of the parameters ngram_range and C

From the heat map we can see that using bigrams increases performance quite a bit, while adding trigrams only provides a very small benefit in terms of accuracy. To understand better how the model improved, we can visualize the important coefficients for the best model, which includes unigrams, bigrams, and trigrams
17,333 | figure - )in[ ]extract feature names and coefficients vect grid best_estimator_ named_steps['tfidfvectorizer'feature_names np array(vect get_feature_names()coef grid best_estimator_ named_steps['logisticregression'coef_ mglearn tools visualize_coefficients(coeffeature_namesn_top_features= figure - most important features when using unigramsbigramsand trigrams with tf-idf rescaling there are particularly interesting features containing the word "worththat were not present in the unigram model"not worthis indicative of negative reviewwhile "definitely worthand "well worthare indicative of positive review this is prime example of context influencing the meaning of the word "worth nextwe'll visualize only trigramsto provide further insight into why these features are helpful many of the useful bigrams and trigrams consist of common words that would not be informative on their ownas in the phrases "none of the""the only good""on and on""this is one""of the most"and so on howeverthe impact of these features is quite limited compared to the importance of the unigram featuresas you can see in figure - in[ ]find -gram features mask np array([len(feature split(")for feature in feature_names]= visualize only -gram features mglearn tools visualize_coefficients(coef ravel()[mask]feature_names[mask]n_top_features= bag-of-words with more than one word ( -grams |
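If you prefer a plain-text summary over the coefficient plots, a small sketch like the following works; it reuses the coef and feature_names arrays defined in the preceding code, and the cutoff of 10 features per direction is an arbitrary choice:
in[ ]:
import numpy as np

flat_coef = coef.ravel()
order = np.argsort(flat_coef)
# n-grams with the most negative coefficients (evidence for a negative review)
print("most negative n-grams:")
for idx in order[:10]:
    print("{:>20}  {:.2f}".format(feature_names[idx], flat_coef[idx]))
# n-grams with the most positive coefficients (evidence for a positive review)
print("most positive n-grams:")
for idx in order[-10:][::-1]:
    print("{:>20}  {:.2f}".format(feature_names[idx], flat_coef[idx]))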
17,334 | advanced tokenizationstemmingand lemmatization as mentioned previouslythe feature extraction in the countvectorizer and tfidf vectorizer is relatively simpleand much more elaborate methods are possible one particular step that is often improved in more sophisticated text-processing applications is the first step in the bag-of-words modeltokenization this step defines what constitutes word for the purpose of feature extraction we saw earlier that the vocabulary often contains singular and plural versions of some wordsas in "drawbackand "drawbacks""drawerand "drawers"and "drawingand "drawingsfor the purposes of bag-of-words modelthe semantics of "drawbackand "drawbacksare so close that distinguishing them will only increase overfittingand not allow the model to fully exploit the training data similarlywe found the vocabulary includes words like "replace""replaced""replace ment""replaces"and "replacing"which are different verb forms and noun relating to the verb "to replace similarly to having singular and plural forms of nountreating different verb forms and related words as distinct tokens is disadvantageous for building model that generalizes well this problem can be overcome by representing each word using its word stemwhich involves identifying (or conflatingall the words that have the same word stem if this is done by using rule-based heuristiclike dropping common suffixesit is usually referred to as stemming if instead dictionary of known word forms is used (an explicit and human-verified system)and the role of the word in the sentence is taken into accountthe process is referred to as lemmatization and the standardized form of the word is referred to as the lemma both processing methodslemmatization and stemmingare forms of normalization that try to extract some normal form of word another interesting case of normalization is spelling correctionwhich can be helpful in practice but is outside of the scope of this book working with text data |
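As a quick illustration of the conflation that stemming performs, the following sketch runs nltk's PorterStemmer over the inflected forms mentioned above (this snippet is our own illustration, not part of the imdb analysis); the plural and inflected variants collapse onto a handful of shared stems:
in[ ]:
import nltk

stemmer = nltk.stem.PorterStemmer()
words = ["drawback", "drawbacks", "drawer", "drawers", "drawing", "drawings",
         "replace", "replaced", "replacement", "replaces", "replacing"]
# apply the porter stemmer to each surface form and inspect the resulting stems
print([stemmer.stem(w) for w in words])
To get a better sense of the difference between the two approaches in practice, let's compare a method for stemming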
17,335 | --the porter stemmera widely used collection of heuristics (here imported from the nltk package)--to lemmatization as implemented in the spacy package: in[ ]import spacy import nltk load spacy' english-language models en_nlp spacy load('en'instantiate nltk' porter stemmer stemmer nltk stem porterstemmer(define function to compare lemmatization in spacy with stemming in nltk def compare_normalization(doc)tokenize document in spacy doc_spacy en_nlp(docprint lemmas found by spacy print("lemmatization:"print([token lemma_ for token in doc_spacy]print tokens found by porter stemmer print("stemming:"print([stemmer stem(token norm_ lower()for token in doc_spacy]we will compare lemmatization and the porter stemmer on sentence designed to show some of the differencesin[ ]compare_normalization( "our meeting today was worse than yesterday" ' scared of meeting the clients tomorrow "out[ ]lemmatization['our''meeting''today''be''bad''than''yesterday'','' ''be''scared''of''meet''the''client''tomorrow''stemming['our''meet''today''wa''wors''than''yesterday'','' '"' "'scare''of''meet''the''client''tomorrow''stemming is always restricted to trimming the word to stemso "wasbecomes "wa"while lemmatization can retrieve the correct base verb form"besimilarlylemmatization can normalize "worseto "bad"while stemming produces "worsanother major difference is that stemming reduces both occurrences of "meetingto "meetusing lemmatizationthe first occurrence of "meetingis recognized as for details of the interfaceconsult the nltk and spacy documentation we are more interested in the general principles here advanced tokenizationstemmingand lemmatization |
17,336 | to "meetin generallemmatization is much more involved process than stemmingbut it usually produces better results than stemming when used for normalizing tokens for machine learning while scikit-learn implements neither form of normalizationcountvectorizer allows specifying your own tokenizer to convert each document into list of tokens using the tokenizer parameter we can use the lemmatization from spacy to create callable that will take string and produce list of lemmasin[ ]technicalitywe want to use the regexp-based tokenizer that is used by countvectorizer and only use the lemmatization from spacy to this endwe replace en_nlp tokenizer (the spacy tokenizerwith the regexp-based tokenization import re regexp used in countvectorizer regexp re compile('(? )\\ \\ \\ +\\ 'load spacy language model and save old tokenizer en_nlp spacy load('en'old_tokenizer en_nlp tokenizer replace the tokenizer with the preceding regexp en_nlp tokenizer lambda stringold_tokenizer tokens_from_listregexp findall(string)create custom tokenizer using the spacy document processing pipeline (now using our own tokenizerdef custom_tokenizer(document)doc_spacy en_nlp(documententity=falseparse=falsereturn [token lemma_ for token in doc_spacydefine count vectorizer with the custom tokenizer lemma_vect countvectorizer(tokenizer=custom_tokenizermin_df= let' transform the data and inspect the vocabulary sizein[ ]transform text_train using countvectorizer with lemmatization x_train_lemma lemma_vect fit_transform(text_trainprint("x_train_lemma shape{}format(x_train_lemma shape)standard countvectorizer for reference vect countvectorizer(min_df= fit(text_trainx_train vect transform(text_trainprint("x_train shape{}format(x_train shape) working with text data |
17,337 | x_train_lemma shape( x_train shape( as you can see from the outputlemmatization reduced the number of features from , (with the standard countvectorizer processingto , lemmatization can be seen as kind of regularizationas it conflates certain features thereforewe expect lemmatization to improve performance most when the dataset is small to illustrate how lemmatization can helpwe will use stratifiedshufflesplit for cross-validationusing only of the data as training data and the rest as test datain[ ]build grid search using only of the data as the training set from sklearn model_selection import stratifiedshufflesplit param_grid {' '[ ]cv stratifiedshufflesplit(n_iter= test_size= train_size= random_state= grid gridsearchcv(logisticregression()param_gridcv=cvperform grid search with standard countvectorizer grid fit(x_trainy_trainprint("best cross-validation score "(standard countvectorizer){ }format(grid best_score_)perform grid search with lemmatization grid fit(x_train_lemmay_trainprint("best cross-validation score "(lemmatization){ }format(grid best_score_)out[ ]best cross-validation score (standard countvectorizer) best cross-validation score (lemmatization) in this caselemmatization provided modest improvement in performance as with many of the different feature extraction techniquesthe result varies depending on the dataset lemmatization and stemming can sometimes help in building better (or at least more compactmodelsso we suggest you give these techniques try when trying to squeeze out the last bit of performance on particular task topic modeling and document clustering one particular technique that is often applied to text data is topic modelingwhich is an umbrella term describing the task of assigning each document to one or multiple topicsusually without supervision good example for this is news datawhich might be categorized into topics like "politics,"sports,"finance,and so on if each document is assigned single topicthis is the task of clustering the documentsas discussed in if each document can have more than one topicthe task topic modeling and document clustering |
17,338 | learn then corresponds to one topicand the coefficients of the components in the representation of document tell us how strongly related that document is to particular topic oftenwhen people talk about topic modelingthey refer to one particular decomposition method called latent dirichlet allocation (often lda for short latent dirichlet allocation intuitivelythe lda model tries to find groups of words (the topicsthat appear together frequently lda also requires that each document can be understood as "mixtureof subset of the topics it is important to understand that for the machine learning model "topicmight not be what we would normally call topic in everyday speechbut that it resembles more the components extracted by pca or nmf (which we discussed in )which might or might not have semantic meaning even if there is semantic meaning for an lda "topic"it might not be something we' usually call topic going back to the example of news articleswe might have collection of articles about sportspoliticsand financewritten by two specific authors in politics articlewe might expect to see words like "governor,"vote,"party,etc while in sports article we might expect words like "team,"score,and "season words in each of these groups will likely appear togetherwhile it' less likely thatfor example"teamand "governorwill appear together howeverthese are not the only groups of words we might expect to appear together the two reporters might prefer different phrases or different choices of words maybe one of them likes to use the word "demarcateand one likes the word "polarize other "topicswould then be "words often used by reporter aand "words often used by reporter ,though these are not topics in the usual sense of the word let' apply lda to our movie review dataset to see how it works in practice for unsupervised text document modelsit is often good to remove very common wordsas they might otherwise dominate the analysis we'll remove words that appear in at least percent of the documentsand we'll limit the bag-of-words model to the , words that are most common after removing the top percentin[ ]vect countvectorizer(max_features= max_df vect fit_transform(text_train there is another machine learning model that is also often abbreviated ldalinear discriminant analysisa linear classification model this leads to quite some confusion in this booklda refers to latent dirichlet allocation working with text data |
17,339 | of them similarly to the components in nmftopics don' have an inherent orderingand changing the number of topics will change all of the topics we'll use the "batchlearning methodwhich is somewhat slower than the default ("online"but usually provides better resultsand increase "max_iter"which can also lead to better modelsin[ ]from sklearn decomposition import latentdirichletallocation lda latentdirichletallocation(n_topics= learning_method="batch"max_iter= random_state= we build the model and transform the data in one step computing transform takes some timeand we can save time by doing both at once document_topics lda fit_transform(xlike the decomposition methods we saw in latentdirichletallocation has components_ attribute that stores how important each word is for each topic the size of components_ is (n_topicsn_words)in[ ]lda components_ shape out[ ]( to understand better what the different topics meanwe will look at the most important words for each of the topics the print_topics function provides nice formatting for these featuresin[ ]for each topic ( row in the components_)sort the features (ascendinginvert rows with [:::- to make sorting descending sorting np argsort(lda components_axis= )[:::- get the feature names from the vectorizer feature_names np array(vect get_feature_names()in[ ]print out the topicsmglearn tools print_topics(topics=range( )feature_names=feature_namessorting=sortingtopics_per_chunk= n_words= in factnmf and lda solve quite related problemsand we could also use nmf to extract topics topic modeling and document clustering |
17,340 | topic between young family real performance beautiful work each both director topic war world us our american documentary history new own point topic funny worst comedy thing guy re stupid actually nothing want topic show series episode tv episodes shows season new television years topic didn saw am thought years book watched now dvd got topic horror action effects budget nothing original director minutes pretty doesn topic kids action animation game fun disney children kid old topic cast role john version novel both director played performance mr topic performance role john actor oscar cast plays jack joe performances topic house woman gets killer girl wife horror young goes around judging from the important wordstopic seems to be about historical and war moviestopic might be about bad comediestopic might be about tv series topic seems to capture some very common wordswhile topic appears to be about children' movies and topic seems to capture award-related reviews using only topicseach of the topics needs to be very broadso that they can together cover all the different kinds of reviews in our dataset nextwe will learn another modelthis time with topics using more topics makes the analysis much harderbut makes it more likely that topics can specialize to interesting subsets of the datain[ ]lda latentdirichletallocation(n_topics= learning_method="batch"max_iter= random_state= document_topics lda fit_transform(xlooking at all topics would be bit overwhelmingso we selected some interesting and representative topics working with text data |
17,341 | topics np array([ ]sorting np argsort(lda components_axis= )[:::- feature_names np array(vect get_feature_names()mglearn tools print_topics(topics=topicsfeature_names=feature_namessorting=sortingtopics_per_chunk= n_words= out[ ]topic thriller suspense horror atmosphere mystery house director quite bit de performances dark twist hitchcock tension interesting mysterious murder ending creepy topic worst awful boring horrible stupid thing terrible script nothing worse waste pretty minutes didn actors actually re supposed mean want topic german hitler nazi midnight joe germany years history new modesty cowboy jewish past kirk young spanish enterprise von nazis spock topic car gets guy around down kill goes killed going house away head take another getting doesn now night right woman topic beautiful young old romantic between romance wonderful heart feel year each french sweet boy loved girl relationship saw both simple topic performance role actor cast play actors performances played supporting director oscar roles actress excellent screen plays award work playing gives topic excellent highly amazing wonderful truly superb actors brilliant recommend quite performance performances perfect drama without beautiful human moving world recommended topic war american world soldiers military army tarzan soldier america country americans during men us government jungle vietnam ii political against topic music song songs rock band soundtrack singing voice singer sing musical roll fan metal concert playing hear fans prince especially topic earth space planet superman alien world evil humans aliens human creatures miike monsters apes clark burton tim outer men moon topic modeling and document clustering |
17,342 | scott gary streisand star hart lundgren dolph career sabrina role temple phantom judy melissa zorro gets barbra cast short serial topic money budget actors low worst waste give want nothing terrible crap must reviews imdb director thing believe am actually topic funny comedy laugh jokes humor hilarious laughs fun re funniest laughing joke few moments guy unfunny times laughed comedies isn topic dead zombie gore zombies blood horror flesh minutes body living eating flick budget head gory evil shot low fulci re topic didn thought wasn ending minutes got felt part going seemed bit found though nothing lot saw long interesting few half the topics we extracted this time seem to be more specificthough many are hard to interpret topic seems to be about horror movies and thrillerstopics and seem to capture bad reviewswhile topic mostly seems to be capturing positive reviews of comedies if we want to make further inferences using the topics that were discoveredwe should confirm the intuition we gained from looking at the highestranking words for each topic by looking at the documents that are assigned to these topics for exampletopic seems to be about music let' check which kinds of reviews are assigned to this topicin[ ]sort by weight of "musictopic music np argsort(document_topics [: ])[::- print the five documents where the topic is most important for in music[: ]pshow first two sentences print(bjoin(text_train[isplit( ")[: ] \ "out[ ] ' love this movie and never get tired of watching the music in it is great \nb" enjoyed still crazy more than any film have seen in years successful band from the ' decide to give it another try \nb'hollywood hotel was the last movie musical that busby berkeley directed for warner bros his directing style had changed or evolved to the point that this film does not contain his signature overhead shots or huge production numbers with thousands of extras \nb"what happens to washed up rock- -roll stars in the late 'sthey launch comeback reunion tour at leastthat' what the members of strange fruita (fictional ' stadium rock group do \ working with text data |
17,343 | believe \'ve only just got round to watching "purple rainthe brand new -disc anniversary special edition led me to buy it \nb"this film is worth seeing alone for jared harrisoutstanding portrayal of john lennon it doesn' matter that harris doesn' exactly resemble lennonhis mannerismsexpressionspostureaccent and attitude are pure lennon \nb"the funkyyet strictly second-tier british glam-rock band strange fruit breaks up at the end of the wild' 'wacky excess-ridden ' the individual band members go their separate ways and uncomfortably settle into lackluster middle age in the dull and uneventful 'smorose keyboardist stephen rea winds up penniless and down on his luckvainneuroticpretentious lead singer bill nighy tries (and failsto pursue floundering solo careerparanoid drummer timothy spall resides in obscurity on remote farm so he can avoid paying hefty back taxes debtand surly bass player jimmy nail installs roofs for living \nb" just finished reading book on anita looswork and the photo in tcm magazine of macdonald in her angel costume looked great (impressive wings)so thought ' watch this movie ' never heard of the film beforeso had no preconceived notions about it whatsoever \nb' love this movie!!purple rain came out the year was born and it has had my heart since can remember prince is so tight in this movie \nb"this movie is sort of carrie meets heavy metal it' about highschool guy who gets picked on alot and he totally gets revenge with the help of heavy metal ghost \nas we can seethis topic covers wide variety of music-centered reviewsfrom musicalsto biographical moviesto some hard-to-specify genre in the last review another interesting way to inspect the topics is to see how much weight each topic gets overallby summing the document_topics over all reviews we name each topic by the two most common words figure - shows the topic weights learnedin[ ]figax plt subplots( figsize=( )topic_names ["{:> format(ijoin(wordsfor iwords in enumerate(feature_names[sorting[:: ]])two column bar chartfor col in [ ]start col end (col ax[colbarh(np arange( )np sum(document_topics axis= )[start:end]ax[colset_yticks(np arange( )ax[colset_yticklabels(topic_names[start:end]ha="left"va="top"ax[colinvert_yaxis(ax[colset_xlim( yax ax[colget_yaxis(yax set_tick_params(pad= plt tight_layout(topic modeling and document clustering |
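If you want the same information as a sorted list instead of a bar chart, a short sketch like this prints the topics with the largest total weight; the names document_topics and topic_names follow the preceding code (in the original analysis these belong to the larger 100-topic model), and the cutoff of ten topics is arbitrary:
in[ ]:
import numpy as np

# total weight of each topic, summed over all documents
weights = np.sum(document_topics, axis=0)
# print the ten heaviest topics, using the topic_names built above
for i in np.argsort(weights)[::-1][:10]:
    print("{:<30} {:.1f}".format(topic_names[i], weights[i]))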
17,344 | the most important topics are which seems to consist mostly of stopwordspossibly with slight negative directiontopic which is clearly about bad reviewsfollowed by some genre-specific topics and and both of which seem to contain laudatory words it seems like lda mostly discovered two kind of topicsgenre-specific and ratingspecificin addition to several more unspecific topics this is an interesting discoveryas most reviews are made up of some movie-specific comments and some comments that justify or emphasize the rating topic models like lda are interesting methods to understand large text corpora in the absence of labels--oras hereeven if labels are available the lda algorithm is randomizedthoughand changing the random_state parameter can lead to quite working with text data |
17,345 | draw from an unsupervised model should be taken with grain of saltand we recommend verifying your intuition by looking at the documents in specific topic the topics produced by the lda transform method can also sometimes be used as compact representation for supervised learning this is particularly helpful when few training examples are available summary and outlook in this we talked about the basics of processing textalso known as natural language processing (nlp)with an example application classifying movie reviews the tools discussed here should serve as great starting point when trying to process text data in particular for text classification tasks such as spam and fraud detection or sentiment analysisbag-of-words representations provide simple and powerful solution as is often the case in machine learningthe representation of the data is key in nlp applicationsand inspecting the tokens and -grams that are extracted can give powerful insights into the modeling process in text-processing applicationsit is often possible to introspect models in meaningful wayas we saw in this for both supervised and unsupervised tasks you should take full advantage of this ability when using nlp-based methods in practice natural language and text processing is large research fieldand discussing the details of advanced methods is far beyond the scope of this book if you want to learn morewe recommend the 'reilly book natural language processing with python by steven birdewan kleinand edward loperwhich provides an overview of nlp together with an introduction to the nltk python package for nlp another great and more conceptual book is the standard reference introduction to information retrieval by christopher manningprabhakar raghavanand hinrich schutzewhich describes fundamental algorithms in information retrievalnlpand machine learning both books have online versions that can be accessed free of charge as we discussed earlierthe classes countvectorizer and tfidfvectorizer only implement relatively simple text-processing methods for more advanced text-processing methodswe recommend the python packages spacy ( relatively new but very efficient and welldesigned package)nltk ( very well-established and complete but somewhat dated library)and gensim (an nlp package with an emphasis on topic modelingthere have been several very exciting new developments in text processing in recent yearswhich are outside of the scope of this book and relate to neural networks the first is the use of continuous vector representationsalso known as word vectors or distributed word representationsas implemented in the word vec library the original paper "distributed representations of words and phrases and their compositionalityby thomas mikolov et al is great introduction to the subject both spacy summary and outlook |
17,346 | follow-ups another direction in nlp that has picked up momentum in recent years is the use of recurrent neural networks (rnnsfor text processing rnns are particularly powerful type of neural network that can produce output that is again textin contrast to classification models that can only assign class labels the ability to produce text as output makes rnns well suited for automatic translation and summarization an introduction to the topic can be found in the relatively technical paper "sequence to sequence learning with neural networksby ilya suskeveroriol vinyalsand quoc le more practical tutorial using the tensorflow framework can be found on the tensorflow website working with text data |
17,347 | wrapping up you now know how to apply the important machine learning algorithms for supervised and unsupervised learningwhich allow you to solve wide variety of machine learning problems before we leave you to explore all the possibilities that machine learning offerswe want to give you some final words of advicepoint you toward some additional resourcesand give you suggestions on how you can further improve your machine learning and data science skills approaching machine learning problem with all the great methods that we introduced in this book now at your fingertipsit may be tempting to jump in and start solving your data-related problem by just running your favorite algorithm howeverthis is not usually good way to begin your analysis the machine learning algorithm is usually only small part of larger data analysis and decision-making process to make effective use of machine learningwe need to take step back and consider the problem at large firstyou should think about what kind of question you want to answer do you want to do exploratory analysis and just see if you find something interesting in the dataor do you already have particular goal in mindoften you will start with goallike detecting fraudulent user transactionsmaking movie recommendationsor finding unknown planets if you have such goalbefore building system to achieve ityou should first think about how to define and measure successand what the impact of successful solution would be to your overall business or research goals let' say your goal is fraud detection |
17,348 | How do I measure if my fraud prediction is actually working? Do I have the right data to evaluate an algorithm? If I am successful, what will be the business impact of my solution? As we discussed in Chapter 5, it is best if you can measure the performance of your algorithm directly using a business metric, like increased profit or decreased losses. This is often hard to do, though. A question that can be easier to answer is "what if I built the perfect model?" If perfectly detecting any fraud would save your company only a small amount each month, the possible savings will probably not be enough to warrant the effort of even starting to develop an algorithm. On the other hand, if the model might save your company tens of thousands of dollars every month, the problem might be worth exploring. Say you've defined the problem to solve, you know a solution might have significant impact for your project, and you've ensured that you have the right information to evaluate success. The next steps are usually acquiring the data and building a working prototype. In this book we have talked about many models you can employ, and how to properly evaluate and tune these models. While trying out models, though, keep in mind that this is only a small part of the larger data science workflow, and model building is often part of a feedback loop of collecting new data, cleaning data, building models, and analyzing the models. Analyzing the mistakes a model makes can often be informative about what is missing in the data, what additional data could be collected, or how the task could be reformulated to make machine learning more effective. Collecting more or different data, or changing the task formulation slightly, might provide a much higher payoff than running endless grid searches to tune parameters. Humans in the Loop. You should also consider if and how you should have humans in the loop. Some processes (like pedestrian detection in a self-driving car) need to make immediate decisions. Others might not need immediate responses, and so it can be possible to have humans confirm uncertain decisions. Medical applications, for example, might need very high levels of precision that possibly cannot be achieved by a machine learning algorithm alone. But if an algorithm can make even a sizable fraction of decisions automatically, that might already increase response time or reduce cost. Many applications are dominated by "simple cases," for which an algorithm can make a decision, with relatively few "complicated cases," which can be rerouted to a human.
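To make this triage idea concrete, here is a minimal sketch of routing only low-confidence predictions to a human; model, X_new, and the 0.95 threshold are hypothetical placeholders, and any classifier that implements predict_proba could stand in for model:
in[ ]:
import numpy as np

# class probabilities from a fitted classifier (model and X_new are placeholders)
proba = model.predict_proba(X_new)
confidence = proba.max(axis=1)

# act automatically only on confident predictions; flag the rest for human review
automatic = confidence >= 0.95
needs_human = ~automatic
print("handled automatically: {:.1%}".format(np.mean(automatic)))
print("rerouted to a human: {:.1%}".format(np.mean(needs_human)))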
17,349 | The tools we've discussed in this book are great for many machine learning applications, and allow very quick analysis and prototyping. Python and scikit-learn are also used in production systems in many organizations--even very large ones, like international banks and global social media companies. However, many companies have complex infrastructure, and it is not always easy to include Python in these systems. That is not necessarily a problem. In many companies, the data analytics teams work with languages like Python and R that allow the quick testing of ideas, while production teams work with languages like Go, Scala, C++, and Java to build robust, scalable systems. Data analysis has different requirements from building live services, and so using different languages for these tasks makes sense. A relatively common solution is to reimplement the solution that was found by the analytics team inside the larger framework, using a high-performance language. This can be easier than embedding a whole library or programming language and converting from and to the different data formats. Regardless of whether you can use scikit-learn in a production system or not, it is important to keep in mind that production systems have different requirements from one-off analysis scripts. If an algorithm is deployed into a larger system, software engineering aspects like reliability, predictability, runtime, and memory requirements gain relevance. Simplicity is key in providing machine learning systems that perform well in these areas. Critically inspect each part of your data processing and prediction pipeline and ask yourself how much complexity each step creates, how robust each component is to changes in the data or compute infrastructure, and whether the benefit of each component warrants the complexity. If you are building involved machine learning systems, we highly recommend reading the paper "Machine Learning: The High Interest Credit Card of Technical Debt", published by researchers in Google's machine learning team. The paper highlights the trade-offs in creating and maintaining machine learning software in production at large scale. While the issue of technical debt is particularly pressing in large-scale and long-term projects, the lessons learned can help us build better software even for short-lived and smaller systems. Testing Production Systems. In this book, we covered how to evaluate algorithmic predictions based on a test set that we collected beforehand. This is known as offline evaluation. If your machine learning system is user-facing, this is only the first step in evaluating an algorithm, though. The next step is usually online testing or live testing, where the consequences of employing the algorithm in the overall system are evaluated. Changing the recommendations or search results users are shown by a website can drastically change their behavior and lead to unexpected consequences. To protect against these surprises, most user-facing services employ A/B testing, a form of blind user study in
17,350 | which a selection of users, without their knowledge, will be provided with a website or service using algorithm A, while the rest of the users will be provided with algorithm B. For both groups, relevant success metrics will be recorded for a set period of time. Then the metrics of algorithm A and algorithm B will be compared, and a selection between the two approaches will be made according to these metrics. Using A/B testing enables us to evaluate the algorithms "in the wild", which might help us to discover unexpected consequences when users are interacting with our model. Often, A is a new model, while B is the established system. There are more elaborate mechanisms for online testing that go beyond A/B testing, such as bandit algorithms. A great introduction to this subject can be found in the book Bandit Algorithms for Website Optimization by John Myles White (O'Reilly). Building Your Own Estimator. This book has covered a variety of tools and algorithms implemented in scikit-learn that can be used on a wide range of tasks. However, often there will be some particular processing you need to do for your data that is not implemented in scikit-learn. It may be enough to just preprocess your data before passing it to your scikit-learn model or pipeline. However, if your preprocessing is data dependent, and you want to apply a grid search or cross-validation, things become trickier. In Chapter 6 we discussed the importance of putting all data-dependent processing inside the cross-validation loop. So how can you use your own processing together with the scikit-learn tools? There is a simple solution: build your own estimator! Implementing an estimator that is compatible with the scikit-learn interface, so that it can be used with Pipeline, GridSearchCV, and cross_val_score, is quite easy. You can find detailed instructions in the scikit-learn documentation, but here is the gist. The simplest way to implement a transformer class is by inheriting from BaseEstimator and TransformerMixin, and then implementing the __init__, fit, and transform functions, like this:
17,351 | from sklearn base import baseestimatortransformermixin class mytransformer(baseestimatortransformermixin)def __init__(selffirst_parameter= second_parameter= )all parameters must be specified in the __init__ function self first_parameter self second_parameter def fit(selfxy=none)fit should only take and as parameters even if your model is unsupervisedyou need to accept argumentmodel fitting code goes here print("fitting the model right here"fit returns self return self def transform(selfx)transform takes as parameter only apply some transformation to x_transformed return x_transformed implementing classifier or regressor works similarlyonly instead of transformer mixin you need to inherit from classifiermixin or regressormixin alsoinstead of implementing transformyou would implement predict as you can see from the example given hereimplementing your own estimator requires very little codeand most scikit-learn users build up collection of custom models over time where to go from here this book provides an introduction to machine learning and will make you an effective practitioner howeverif you want to further your machine learning skillshere are some suggestions of books and more specialized resources to investigate to dive deeper theory in this bookwe tried to provide an intuition of how the most common machine learning algorithms workwithout requiring strong foundation in mathematics or computer science howevermany of the models we discussed use principles from probability theorylinear algebraand optimization while it is not necessary to understand all the details of how these algorithms are implementedwe think that where to go from here |
17,352 | tist there have been many good books written about the theory of machine learningand if we were able to excite you about the possibilities that machine learning opens upwe suggest you pick up at least one of them and dig deeper we already mentioned hastietibshiraniand friedman' book the elements of statistical learning in the prefacebut it is worth repeating this recommendation here another quite accessible bookwith accompanying python codeis machine learningan algorithmic perspective by stephen marsland (chapman and hall/crctwo other highly recommended classics are pattern recognition and machine learning by christopher bishop (springer) book that emphasizes probabilistic frameworkand machine learninga probabilistic perspective by kevin murphy (mit press) comprehensive (read , pagesdissertation on machine learning methods featuring in-depth discussions of state-of-the-art approachesfar beyond what we could cover in this book other machine learning frameworks and packages while scikit-learn is our favorite package for machine learning and python is our favorite language for machine learningthere are many other options out there depending on your needspython and scikit-learn might not be the best fit for your particular situation often using python is great for trying out and evaluating modelsbut larger web services and applications are more commonly written in java or ++and integrating into these systems might be necessary for your model to be deployed another reason you might want to look beyond scikit-learn is if you are more interested in statistical modeling and inference than prediction in this caseyou should consider the statsmodel package for pythonwhich implements several linear models with more statistically minded interface if you are not married to pythonyou might also consider using ranother lingua franca of data scientists is language designed specifically for statistical analysis and is famous for its excellent visualization capabilities and the availability of many (often highly specializedstatistical modeling packages another popular machine learning package is vowpal wabbit (often called vw to avoid possible tongue twisting) highly optimized machine learning package written in +with command-line interface vw is particularly useful for large datasets and for streaming data for running machine learning algorithms distributed on clusterone of the most popular solutions at the time of writing is mlliba scala library built on top of the spark distributed computing environment andreas might not be entirely objective in this matter wrapping up |
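To give a flavor of the more statistically minded interface that the statsmodels package mentioned above provides, here is a minimal sketch of an ordinary least squares fit; the data is purely illustrative, and the summary() report of coefficients, standard errors, and p-values is the kind of inference output that scikit-learn's estimators do not expose:
in[ ]:
import numpy as np
import statsmodels.api as sm

# purely illustrative data: three random features and a noisy linear target
rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=100)

# statsmodels expects the intercept as an explicit column of ones
results = sm.OLS(y, sm.add_constant(X)).fit()
print(results.summary())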
17,353 | because this is an introductory bookwe focused on the most common machine learning tasksclassification and regression in supervised learningand clustering and signal decomposition in unsupervised learning there are many more kinds of machine learning out therewith many important applications there are two particularly important topics that we did not cover in this book the first is rankingin which we want to retrieve answers to particular queryordered by their relevance you've probably already used ranking system todaythis is how search engines operate you input search query and obtain sorted list of answersranked by how relevant they are great introduction to ranking is provided in manningraghavanand schutze' book introduction to information retrieval the second topic is recommender systemswhich provide suggestions to users based on their preferences you've probably encountered recommender systems under headings like "people you may know,"customers who bought this item also bought,or "top picks for you there is plenty of literature on the topicand if you want to dive right in you might be interested in the now classic "netflix prize challenge"in which the netflix video streaming site released large dataset of movie preferences and offered prize of $ million to the team that could provide the best recommendations another common application is prediction of time series (like stock prices)which also has whole body of literature devoted to it there are many more machine learning tasks out there--much more than we can list here--and we encourage you to seek out information from booksresearch papersand online communities to find the paradigms that best apply to your situation probabilistic modelinginferenceand probabilistic programming most machine learning packages provide predefined machine learning models that apply one particular algorithm howevermany real-world problems have particular structure thatwhen properly incorporated into the modelcan yield much betterperforming predictions oftenthe structure of particular problem can be expressed using the language of probability theory such structure commonly arises from having mathematical model of the situation for which you want to predict to understand what we mean by structured problemconsider the following example let' say you want to build mobile application that provides very detailed position estimate in an outdoor spaceto help users navigate historical site mobile phone provides many sensors to help you get precise location measurementslike the gpsaccelerometerand compass you also have an exact map of the area this problem is highly structured you know where the paths and points of interest are from your map you also have rough positions from the gpsand the accelerometer and compass in the user' device provide you with very precise relative measurements but throwing these all together into black-box machine learning system to predict positions might not be the best idea this would throw away all the information you where to go from here |
17,354 | you user is going northand the gps is telling you the user is going southyou probably can' trust the gps if your position estimate tells you the user just walked through wallyou should also be highly skeptical it' possible to express this situation using probabilistic modeland then use machine learning or probabilistic inference to find out how much you should trust each measurementand to reason about what the best guess for the location of user is once you've expressed the situation and your model of how the different factors work together in the right waythere are methods to compute the predictions using these custom models directly the most general of these methods are called probabilistic programming languagesand they provide very elegant and compact way to express learning problem examples of popular probabilistic programming languages are pymc (which can be used in pythonand stan ( framework that can be used from several languagesincluding pythonwhile these packages require some understanding of probability theorythey simplify the creation of new models significantly neural networks while we touched on the subject of neural networks briefly in and this is rapidly evolving area of machine learningwith innovations and new applications being announced on weekly basis recent breakthroughs in machine learning and artificial intelligencesuch as the victory of the alpha go program against human champions in the game of gothe constantly improving performance of speech understandingand the availability of near-instantaneous speech translationhave all been driven by these advances while the progress in this field is so fast-paced that any current reference to the state of the art will soon be outdatedthe recent book deep learning by ian goodfellowyoshua bengioand aaron courville (mit pressis comprehensive introduction into the subject scaling to larger datasets in this bookwe always assumed that the data we were working with could be stored in numpy array or scipy sparse matrix in memory (rameven though modern servers often have hundreds of gigabytes (gbof ramthis is fundamental restriction on the size of data you can work with not everybody can afford to buy such large machineor even to rent one from cloud provider in most applicationsthe data that is used to build machine learning system is relatively smallthoughand few machine learning datasets consist of hundreds of gigabites of data or more this makes expanding your ram or renting machine from cloud provider viable solution in many cases if you need to work with terabytes of datahoweveror you need preprint of deep learning can be viewed at wrapping up |
17,355 | out-of-core learning describes learning from data that cannot be stored in main memorybut where the learning takes place on single computer (or even single processor within computerthe data is read from source like the hard disk or the network either one sample at time or in chunks of multiple samplesso that each chunk fits into ram this subset of the data is then processed and the model is updated to reflect what was learned from the data thenthis chunk of the data is discarded and the next bit of data is read out-of-core learning is implemented for some of the models in scikit-learnand you can find details on it in the online user guide because out-of-core learning requires all of the data to be processed by single computerthis can lead to long runtimes on very large datasets alsonot all machine learning algorithms can be implemented in this way the other strategy for scaling is distributing the data over multiple machines in compute clusterand letting each computer process part of the data this can be much faster for some modelsand the size of the data that can be processed is only limited by the size of the cluster howeversuch computations often require relatively complex infrastructure one of the most popular distributed computing platforms at the moment is the spark platform built on top of hadoop spark includes some machine learning functionality within the mllib package if your data is already on hadoop filesystemor you are already using spark to preprocess your datathis might be the easiest option if you don' already have such infrastructure in placeestablishing and integrating spark cluster might be too large an efforthowever the vw package mentioned earlier provides some distributed features and might be better solution in this case honing your skills as with many things in lifeonly practice will allow you to become an expert in the topics we covered in this book feature extractionpreprocessingvisualizationand model building can vary widely between different tasks and different datasets maybe you are lucky enough to already have access to variety of datasets and tasks if you don' already have task in minda good place to start is machine learning competitionsin which dataset with given task is publishedand teams compete in creating the best possible predictions many companiesnonprofit organizationsand universities host these competitions one of the most popular places to find them is kagglea website that regularly holds data science competitionssome of which have substantial prize money attached the kaggle forums are also good source of information about the latest tools and tricks in machine learningand wide range of datasets are available on the site even more datasets with associated tasks can be found on the openml platformwhich where to go from here |
17,356 | ing with these datasets can provide great opportunity to practice your machine learning skills disadvantage of competitions is that they already provide particular metric to optimizeand usually fixedpreprocessed dataset keep in mind that defining the problem and collecting the data are also important aspects of real-world problemsand that representing the problem in the right way might be much more important than squeezing the last percent of accuracy out of classifier conclusion we hope we have convinced you of the usefulness of machine learning in wide variety of applicationsand how easily machine learning can be implemented in practice keep digging into the dataand don' lose sight of the larger picture wrapping up |
17,357 | / testing accuracy acknowledgmentsxi adjusted rand index (ari) agglomerative clustering evaluating and comparing example of hierarchical clustering linkage choices principle of algorithm chains and pipelines - building pipelines building pipelines with make_pipeline - grid search preprocessing steps grid-searching for model selection importance of overview of parameter selection with preprocessing pipeline interface using pipelines in grid searches - algorithm parameter algorithms (see also modelsproblem solvingevaluating minimal code to apply to algorithm sample datasets - scaling minmaxscaler - normalizer robustscaler standardscaler - - supervisedclassification decision trees - gradient boosting - -nearest neighbors - kernelized support vector machines - linear svms logistic regression naive bayes - neural networks - random forests - supervisedregression decision trees - gradient boosting - -nearest neighbors lasso - linear regression (ols) - neural networks - random forests - ridge - - unsupervisedclustering agglomerative clustering - - - dbscan - -means - unsupervisedmanifold learning -sne - unsupervisedsignal decomposition non-negative matrix factorization - principal component analysis - alpha parameter in linear models anaconda |
17,358 | area under the curve (auc) - attributionsx average precision bag-of-words representation applying to movie reviews - applying to toy dataset more than one word ( -grams) - steps in computing bernoullinb bigrams binary classification - binning - bootstrap samples boston housing dataset boundary points bunch objects business metric parameter in svc calibration cancer dataset categorical features categorical datadefined defined encoded as numbers example of representation in training and test sets representing using one-hot-encoding categorical variables (see categorical featureschaining (see algorithm chains and pipelinesclass labels classification problems binary vs multiclass examples of goals for iris classification example -nearest neighbors linear models naive bayes classifiers vs regression problems classifiers decisiontreeclassifier decisiontreeregressor kneighborsclassifier - - kneighborsregressor - index linearsvc - logisticregression - - mlpclassifier - naive bayes - svc - - - uncertainty estimates from - cluster centers clustering algorithms agglomerative clustering - applications for comparing on faces dataset - dbscan - evaluating with ground truth - evaluating without ground truth - goals of -means clustering - summary of code examples downloadingx permission for usex coef_ attribute comments and questionsxi competitions conflation confusion matrices - context continuous features core samples/core points corpus cos function countvectorizer cross-validation analyzing results of - benefits of cross-validation splitters grid search and - in scikit-learn leave-one-out cross-validation nested parallelizing with grid search principle of purpose of shuffle-split cross-validation stratified -fold - with groups cross_val_score function |
17,359 | data pointsdefined data representation - (see also feature extraction/feature engineeringtext dataautomatic feature selection - binning and - categorical features - effect on model performance integer features model complexity vs dataset size overview of table analogy in training vs test sets understanding your data univariate nonlinear transformations - data transformations (see also preprocessingdata-driven research dbscan evaluating and comparing - parameters principle of returned cluster assignments strengths and weaknesses decision boundaries decision function decision trees analyzing building controlling complexity of data representation and - feature importance in if/else structure of parameters vs random forests strengths and weaknesses decision_function deep learning (see neural networksdendrograms dense regions dimensionality reduction discrete features discretization - distributed computing document clustering documentsdefined dual_coef_ attribute eigenfaces embarrassingly parallel encoding ensembles defined gradient boosted regression trees - random forests - enthought canopy estimators estimator_ attribute of rfecv evaluation metrics and scoring for binary classification - for multiclass classification - metric selection model selection and regression metrics testing production systems exp function expert knowledge - ( )= formula facial recognition factor analysis (fa) false positive rate (fpr) false positive/false negative errors feature extraction/feature engineering - (see also data representationtext dataaugmenting data with automatic feature selection - categorical features - continuous vs discrete features defined interaction features - with non-negative matrix factorization overview of polynomial features - with principal component analysis univariate nonlinear transformations - using expert knowledge - feature importance featuresdefined feature_names attribute feed-forward neural networks fit method fit_transform method floating-point numbers index |
17,360 | forge dataset frameworks free string data freeform text data high-dimensional datasets histograms hit rate hold-out sets human involvement/oversight gamma parameter gaussian kernels of svc gaussiannb generalization building models for defined examples of get_dummies function get_support method of feature selection gradient boosted regression trees for feature selection - learning_rate parameter parameters vs random forests strengths and weaknesses training set accuracy graphviz module grid search accessing pipeline attributes alternate strategies for avoiding overfitting model selection with nested cross-validation parallelizing with cross-validation pipeline preprocessing searching non-grid spaces simple example of tuning parameters with using pipelines in - with cross-validation - gridsearchcv best_estimator_ attribute best_params_ attribute best_score_ attribute handcoded rulesdisadvantages of heat maps hidden layers hidden units hierarchical clustering high recall index imbalanced datasets independent component analysis (ica) inference information leakage information retrieval (ir) integer features "intelligentapplications interactions - intercept_ attribute iris classification application data inspection dataset for goals for -nearest neighbors making predictions model evaluation multiclass problem overview of training and testing data iterative feature selection jupyter notebook -fold cross-validation -means clustering applying with scikit-learn vs classification cluster centers complex datasets evaluating and comparing example of failures of strengths and weaknesses vector quantization with -nearest neighbors ( -nnanalyzing kneighborsclassifier analyzing kneighborsregressor building classification - |
17,364 | Download our free Python eBook, How To Code in Python 3, which is available via do.co/python-book. For other programming languages and DevOps engineering articles, our knowledge base of tutorials is available as a Creative Commons-licensed resource via do.co/tutorials.
17,365 | Written by Lisa Tagliaferri

Python is a flexible and versatile programming language suitable for many use cases, with strengths in scripting, automation, data analysis, machine learning, and back-end development. First published in 1991, the Python development team was inspired by the British comedy group Monty Python to make a programming language that was fun to use. Python 3 is the most current version of the language and is considered to be the future of Python.

This tutorial will help get your remote server or local computer set up with a Python 3 programming environment. If you already have Python 3 installed, along with pip and venv, feel free to move on to the next section.

Prerequisites

This tutorial will be based on working with a Linux or Unix-like (*nix) system and use of a command line or terminal environment. Both macOS and specifically the PowerShell program of Windows should be able to achieve similar results.

Step 1 - Installing Python 3

Many operating systems come with Python 3 already installed. You can check to see whether you have Python 3 installed by opening up a terminal window and typing the following:

python3 -V
17,366 | You'll receive output in the terminal window that will let you know the version number. While this number may vary, the output will be similar to this:

Output
Python 3.x.x

If you received alternate output, you can navigate in a web browser to python.org in order to download Python 3 and install it to your machine by following the instructions.

Once you are able to type the python3 -V command above and receive output that states your computer's Python version number, you are ready to continue.

Step 2 - Installing pip

To manage software packages for Python, let's install pip, a tool that will install and manage programming packages we may want to use in our development projects. If you have downloaded Python from python.org, you should have pip already installed. If you are on an Ubuntu or Debian server or computer, you can download pip by typing the following:

sudo apt install -y python3-pip

Now that you have pip installed, you can download Python packages with the following command:

pip3 install package_name
17,367 | Here, package_name can refer to any Python package or library, such as Django for web development or NumPy for scientific computing. So if you would like to install NumPy, you can do so with the command pip3 install numpy.

There are a few more packages and development tools to install to ensure that we have a robust set-up for our programming environment:

sudo apt install build-essential libssl-dev libffi-dev python3-dev

Once Python is set up, and pip and other tools are installed, we can set up a virtual environment for our development projects.

Step 3 - Setting Up a Virtual Environment

Virtual environments enable you to have an isolated space on your server for Python projects, ensuring that each of your projects can have its own set of dependencies that won't disrupt any of your other projects. Setting up a programming environment provides us with greater control over our Python projects and over how different versions of packages are handled. This is especially important when working with third-party packages.

You can set up as many Python programming environments as you want. Each environment is basically a directory or folder on your server that has a few scripts in it to make it act as an environment.

While there are a few ways to achieve a programming environment in Python, we'll be using the venv module here, which is part of the standard Python 3 library. If you have installed Python 3 through the installer available from python.org, you should have venv ready to go.
17,368 | If venv is not already available on a Debian or Ubuntu system, you can install it with the following:

sudo apt install -y python3-venv

With venv installed, we can now create environments. Let's either choose which directory we would like to put our Python programming environments in, or create a new directory with mkdir, as in:

mkdir environments
cd environments

Once you are in the directory where you would like the environments to live, you can create an environment. You should use the version of Python that is installed on your machine as the first part of the command (the output you received when typing python3 -V). If that version was, for example, Python 3.6, you can type the following:

python3.6 -m venv my_env

If, instead, your computer has a different minor version such as Python 3.8 installed, use the matching command:

python3.8 -m venv my_env

Windows machines may allow you to remove the version number entirely.
17,369 | Once you run the appropriate command, you can verify that the environment is set up before continuing.

Essentially, venv sets up a new directory that contains a few items which we can view with the ls command:

ls my_env

Output
bin include lib lib64 pyvenv.cfg share

Together, these files work to make sure that your projects are isolated from the broader context of your local machine, so that system files and project files don't mix. This is good practice for version control and to ensure that each of your projects has access to the particular packages that it needs. Python Wheels, a built-package format for Python that can speed up your software production by reducing the number of times you need to compile, will be in the Ubuntu share directory.

To use this environment, you need to activate it, which you can achieve by typing the following command that calls the activate script:

source my_env/bin/activate

Your command prompt will now be prefixed with the name of your environment, in this case it is called my_env. Depending on what version of Debian Linux you are running, your prefix may appear somewhat differently, but the name of your environment in parentheses should be
17,370 | the first thing you see on your line:

(my_env) sammy@sammy:~/environments$

This prefix lets us know that the environment my_env is currently active, meaning that when we create programs here they will use only this particular environment's settings and packages.

Note: Within the virtual environment, you can use the command python instead of python3, and pip instead of pip3 if you would prefer. If you use Python 3 on your machine outside of an environment, you will need to use the python3 and pip3 commands exclusively.

After following these steps, your virtual environment is ready to use.

Step 4 - Creating a "Hello, World!" Program

Now that we have our virtual environment set up, let's create a traditional "Hello, World!" program. This will let us test our environment and provides us with the opportunity to become more familiar with Python if we aren't already.

To do this, we'll open up a command-line text editor such as nano and create a new file:

(my_env) sammy@sammy:~/environments$ nano hello.py

Once the text file opens up in the terminal window we'll type out our program:

print("Hello, World!")
17,371 | Exit nano by typing CTRL and X, and when prompted to save the file press y. Once you exit out of nano and return to your shell, let's run the program:

(my_env) sammy@sammy:~/environments$ python hello.py

The hello.py program that you just created should cause your terminal to produce the following output:

Output
Hello, World!

To leave the environment, simply type the command deactivate and you will return to your original directory.

Conclusion

At this point you have a Python 3 programming environment set up on your machine and you can now begin a coding project!

If you would like to learn more about Python, you can download our free How To Code in Python 3 eBook via do.co/python-book.
17,372 | Written by Lisa Tagliaferri

Machine learning is a subfield of artificial intelligence (AI). The goal of machine learning generally is to understand the structure of data and fit that data into models that can be understood and utilized by people.

Although machine learning is a field within computer science, it differs from traditional computational approaches. In traditional computing, algorithms are sets of explicitly programmed instructions used by computers to calculate or problem solve. Machine learning algorithms instead allow for computers to train on data inputs and use statistical analysis in order to output values that fall within a specific range. Because of this, machine learning facilitates computers in building models from sample data in order to automate decision-making processes based on data inputs.

Any technology user today has benefitted from machine learning. Facial recognition technology allows social media platforms to help users tag and share photos of friends. Optical character recognition (OCR) technology converts images of text into movable type. Recommendation engines, powered by machine learning, suggest what movies or television shows to watch next based on user preferences. Self-driving cars that rely on machine learning to navigate may soon be available to consumers.

Machine learning is a continuously developing field. Because of this, there are some considerations to keep in mind as you work with machine learning methodologies, or analyze the impact of machine learning processes.
17,373 | In this tutorial, we'll look into the common machine learning methods of supervised and unsupervised learning, and common algorithmic approaches in machine learning, including the k-nearest neighbor algorithm, decision tree learning, and deep learning. We'll explore which programming languages are most used in machine learning, providing you with some of the positive and negative attributes of each. Additionally, we'll discuss biases that are perpetuated by machine learning algorithms, and consider what can be kept in mind to prevent these biases when building algorithms.

Machine Learning Methods

In machine learning, tasks are generally classified into broad categories. These categories are based on how learning is received or how feedback on the learning is given to the system developed.

Two of the most widely adopted machine learning methods are supervised learning, which trains algorithms based on example input and output data that is labeled by humans, and unsupervised learning, which provides the algorithm with no labeled data in order to allow it to find structure within its input data. Let's explore these methods in more detail.

Supervised Learning

In supervised learning, the computer is provided with example inputs that are labeled with their desired outputs. The purpose of this method is for the algorithm to be able to "learn" by comparing its actual output with the "taught" outputs to find errors, and modify the model accordingly. Supervised learning therefore uses patterns to predict label values on additional unlabeled data.
17,374 | For example, a supervised learning algorithm may be fed data with images of sharks labeled as fish and images of oceans labeled as water. By being trained on this data, the supervised learning algorithm should be able to later identify unlabeled shark images as fish and unlabeled ocean images as water.

A common use case of supervised learning is to use historical data to predict statistically likely future events. It may use historical stock market information to anticipate upcoming fluctuations, or be employed to filter out spam emails. In supervised learning, tagged photos of dogs can be used as input data to classify untagged photos of dogs.

Unsupervised Learning

In unsupervised learning, data is unlabeled, so the learning algorithm is left to find commonalities among its input data. As unlabeled data are more abundant than labeled data, machine learning methods that facilitate unsupervised learning are particularly valuable.

The goal of unsupervised learning may be as straightforward as discovering hidden patterns within a dataset, but it may also have a goal of feature learning, which allows the computational machine to automatically discover the representations that are needed to classify raw data.

Unsupervised learning is commonly used for transactional data. You may have a large dataset of customers and their purchases, but as a human you will likely not be able to make sense of what similar attributes can be drawn from customer profiles and their types of purchases. With this data fed into an unsupervised learning algorithm, it may be determined that women of a certain age range who buy unscented soaps are likely to be pregnant, and therefore a marketing campaign related to pregnancy and baby products can be targeted to this
17,375 | audience in order to increase their number of purchases.

Without being told a "correct" answer, unsupervised learning methods can look at complex data that is more expansive and seemingly unrelated in order to organize it in potentially meaningful ways. Unsupervised learning is often used for anomaly detection, including for fraudulent credit card purchases, and recommender systems that recommend what products to buy next. In unsupervised learning, untagged photos of dogs can be used as input data for the algorithm to find likenesses and classify dog photos together.

Approaches

As a field, machine learning is closely related to computational statistics, so having a background knowledge in statistics is useful for understanding and leveraging machine learning algorithms.

For those who may not have studied statistics, it can be helpful to first define correlation and regression, as they are commonly used techniques for investigating the relationship among quantitative variables. Correlation is a measure of association between two variables that are not designated as either dependent or independent. Regression at a basic level is used to examine the relationship between one dependent and one independent variable. Because regression statistics can be used to anticipate the dependent variable when the independent variable is known, regression enables prediction capabilities.

Approaches to machine learning are continuously being developed. For our purposes, we'll go through a few of the popular approaches that are being used in machine learning at the time of writing.
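To make the regression idea above concrete, here is a minimal, hypothetical sketch (not part of the original tutorial) that fits a line to noisy synthetic data with scikit-learn and then predicts the dependent variable for a new input; the data and coefficients are made up purely for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
x = rng.uniform(0, 10, size=(50, 1))        # independent variable
y = 3.0 * x.ravel() + rng.normal(size=50)   # dependent variable with noise

model = LinearRegression().fit(x, y)
print(model.coef_, model.intercept_)        # learned slope and intercept
print(model.predict([[4.0]]))               # predicted y for a new x value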
17,376 | k-Nearest Neighbor

The k-nearest neighbor algorithm is a pattern recognition model that can be used for classification as well as regression. Often abbreviated as k-NN, the k in k-nearest neighbor is a positive integer, which is typically small. In either classification or regression, the input will consist of the k closest training examples within a space.

We will focus on k-NN classification. In this method, the output is class membership. This will assign a new object to the class most common among its k nearest neighbors. In the case of k = 1, the object is assigned to the class of the single nearest neighbor.

Let's look at an example of k-nearest neighbor. In the diagram below, there are blue diamond objects and orange star objects. These belong to two separate classes: the diamond class and the star class.
17,377 | When a new object is added to the space, in this case a green heart, we will want the machine learning algorithm to classify the heart to a certain class.
17,378 | When we choose k = 3, the algorithm will find the three nearest neighbors of the green heart in order to classify it to either the diamond class or the star class.

In our diagram, the three nearest neighbors of the green heart are one diamond and two stars. Therefore, the algorithm will classify the heart with the star class.
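The same idea can be sketched in a few lines with scikit-learn. The points and class labels below are made up to mirror the diamond/star example above (class 0 plays the role of the diamonds, class 1 the stars); with k = 3, the new point is assigned to the class of its three nearest neighbors:

from sklearn.neighbors import KNeighborsClassifier

# Two features per point; class 0 = "diamond", class 1 = "star"
X_train = [[1, 1], [2, 1], [1, 2], [6, 5], [7, 6], [6, 7]]
y_train = [0, 0, 0, 1, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3)   # k = 3
knn.fit(X_train, y_train)
print(knn.predict([[5, 5]]))                # the new "green heart" point is assigned class 1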
17,379 | Among the most basic of machine learning algorithms, k-nearest neighbor is considered to be a type of "lazy learning", as generalization beyond the training data does not occur until a query is made to the system.

Decision Tree Learning

For general use, decision trees are employed to visually represent decisions and show or inform decision making. When working with machine learning and data mining, decision trees are used as a predictive model. These models map observations about data to conclusions about the data's target value.

The goal of decision tree learning is to create a model that will predict the value of a target based on input variables.
17,380 | In the predictive model, the data's attributes that are determined through observation are represented by the branches, while the conclusions about the data's target value are represented in the leaves.

When "learning" a tree, the source data is divided into subsets based on an attribute value test, which is repeated on each of the derived subsets recursively. Once the subset at a node has the equivalent value as its target value has, the recursion process will be complete.

Let's look at an example of various conditions that can determine whether or not someone should go fishing. This includes weather conditions as well as barometric pressure conditions.

[Figure: Fishing decision tree example]

In the simplified decision tree above, an example is classified by sorting it through the tree to the appropriate leaf node. This then returns the classification associated with the particular leaf, which in this case is either a Yes or a No. The tree classifies a day's conditions based on
17,381 | whether or not it is suitable for going fishing.

A true classification tree data set would have a lot more features than what is outlined above, but the relationships should be straightforward to determine. When working with decision tree learning, several determinations need to be made, including what features to choose, what conditions to use for splitting, and understanding when the decision tree has reached a clear ending.

Deep Learning

Deep learning attempts to imitate how the human brain can process light and sound stimuli into vision and hearing. A deep learning architecture is inspired by biological neural networks and consists of multiple layers in an artificial neural network made up of hardware and GPUs.

Deep learning uses a cascade of nonlinear processing unit layers in order to extract or transform features (or representations) of the data. The output of one layer serves as the input of the successive layer. In deep learning, algorithms can be either supervised and serve to classify data, or unsupervised and perform pattern analysis.

Among the machine learning algorithms that are currently being used and developed, deep learning absorbs the most data and has been able to beat humans in some cognitive tasks. Because of these attributes, deep learning has become the approach with significant potential in the artificial intelligence space. Computer vision and speech recognition have both realized significant advances from deep learning approaches. IBM Watson is a well-known example of a system that leverages deep learning.
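As a small, hedged illustration of the layered idea (a shallow stand-in rather than a modern deep network), scikit-learn's MLPClassifier stacks a few fully connected layers in which each layer's output feeds the next layer's input; the dataset and layer sizes below are arbitrary choices for the example:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                  # small 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Two hidden layers; each layer transforms the previous layer's output
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))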
17,382 | Human Biases

Although data and computational analysis may make us think that we are receiving objective information, this is not the case; being based on data does not mean that machine learning outputs are neutral. Human bias plays a role in how data is collected, organized, and ultimately in the algorithms that determine how machine learning will interact with that data.

If, for example, people are providing images for "fish" as data to train an algorithm, and these people overwhelmingly select images of goldfish, a computer may not classify a shark as a fish. This would create a bias against sharks as fish, and sharks would not be counted as fish.

When using historical photographs of scientists as training data, a computer may not properly classify scientists who are also people of color or women. In fact, recent peer-reviewed research has indicated that AI and machine learning programs exhibit human-like biases that include race and gender prejudices. See, for example, "Semantics derived automatically from language corpora contain human-like biases" and "Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints" [PDF].

As machine learning is increasingly leveraged in business, uncaught biases can perpetuate systemic issues that may prevent people from qualifying for loans, from being shown ads for high-paying job opportunities, or from receiving same-day delivery options.

Because human bias can negatively impact others, it is extremely important to be aware of it, and to also work towards eliminating it as much as possible. One way to work towards achieving this is by ensuring that there are diverse people working on a project and that diverse people are testing and reviewing it. Others have called for regulatory
17,383 | third parties to monitor and audit algorithms, building alternative systems that can detect biases, and ethics reviews as part of data science project planning. Raising awareness about biases, being mindful of our own unconscious biases, and structuring equity in our machine learning projects and pipelines can work to combat bias in this field.

Conclusion

This tutorial reviewed some of the use cases of machine learning, common methods and popular approaches used in the field, suitable machine learning programming languages, and also covered some things to keep in mind in terms of unconscious biases being replicated in algorithms.

Because machine learning is a field that is continuously being innovated, it is important to keep in mind that algorithms, methods, and approaches will continue to change.

Currently, Python is one of the most popular programming languages to use with machine learning applications in professional fields. Other languages you may wish to investigate include Java, R, and C++.
17,384 | How To Build a Machine Learning Classifier in Python with Scikit-learn

Written by Michelle Morales
Edited by Brian Hogan

In this tutorial, you'll implement a simple machine learning algorithm in Python using scikit-learn, a machine learning tool for Python. Using a database of breast cancer tumor information, you'll use a Naive Bayes (NB) classifier that predicts whether or not a tumor is malignant or benign. By the end of this tutorial, you'll know how to build your very own machine learning model in Python.

Prerequisites

To complete this tutorial, we'll use Jupyter Notebooks, which are a useful and interactive way to run machine learning experiments. With Jupyter Notebooks, you can run short blocks of code and see the results quickly, making it easy to test and debug your code.

To get up and running quickly, you can open up a web browser and navigate to the Try Jupyter website: jupyter.org/try. From there, click on Try Jupyter with Python, and you will be taken to an interactive Jupyter Notebook where you can start to write Python code.

If you would like to learn more about Jupyter Notebooks and how to set up your own Python programming environment to use with Jupyter, you can read our tutorial on How To Set Up Jupyter Notebook for Python 3.
17,385 | Step 1 - Importing Scikit-learn

Let's begin by installing the Python module scikit-learn, one of the best and most documented machine learning libraries for Python.

To begin our coding project, let's activate our Python programming environment. Make sure you're in the directory where your environment is located, and run the following command:

source my_env/bin/activate

With our programming environment activated, check to see if the scikit-learn module is already installed:

(my_env) $ python -c "import sklearn"

If sklearn is installed, this command will complete with no error. If it is not installed, you will see the following error message:

Output
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'sklearn'

The error message indicates that sklearn is not installed, so download the library using pip:

(my_env) $ pip install scikit-learn[alldeps]

Once the installation completes, launch Jupyter Notebook:

(my_env) $ jupyter notebook
17,386 | In Jupyter, create a new Python Notebook called ML Tutorial. In the first cell of the notebook, import the sklearn module:

ML Tutorial
import sklearn

Your notebook should look like the following figure:

[Figure: Jupyter Notebook with one Python cell, which imports sklearn]

Now that we have sklearn imported in our notebook, we can begin working with the dataset for our machine learning model.

Step 2 - Importing Scikit-learn's Dataset

The dataset we will be working with in this tutorial is the Breast Cancer Wisconsin Diagnostic Database. The dataset includes various information about breast cancer tumors, as well as classification labels of malignant or benign. The dataset has 569 instances, or data, on 569 tumors and includes information on 30 attributes, or features, such as the radius of the tumor, texture, smoothness, and area.

Using this dataset, we will build a machine learning model to use tumor information to predict whether or not a tumor is malignant or benign.
17,387 | Scikit-learn comes installed with various datasets which we can load into Python, and the dataset we want is included. Import and load the dataset:

ML Tutorial
from sklearn.datasets import load_breast_cancer

# Load dataset
data = load_breast_cancer()

The data variable represents a Python object that works like a dictionary. The important dictionary keys to consider are the classification label names (target_names), the actual labels (target), the attribute/feature names (feature_names), and the attributes (data).

Attributes are a critical part of any classifier. Attributes capture important characteristics about the nature of the data. Given the label we are trying to predict (malignant versus benign tumor), possible useful attributes include the size, radius, and texture of the tumor.

Create new variables for each important set of information and assign the data:

ML Tutorial
# Organize our data
17,388 | label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']

We now have lists for each set of information. To get a better understanding of our dataset, let's take a look at our data by printing our class labels, the first data instance's label, our feature names, and the feature values for the first data instance:

ML Tutorial
# Look at our data
print(label_names)
print(labels[0])
print(feature_names[0])
print(features[0])

You'll see the following results if you run the code:

[Figure: Jupyter Notebook with three Python cells, which prints the first instance in our dataset]
17,389 | As the image shows, our class names are malignant and benign, which are then mapped to binary values of 0 and 1, where 0 represents malignant tumors and 1 represents benign tumors. Therefore, our first data instance is a malignant tumor whose mean radius is 1.79900000e+01.

Now that we have our data loaded, we can work with our data to build our machine learning classifier.

Step 3 - Organizing Data into Sets

To evaluate how well a classifier is performing, you should always test the model on unseen data. Therefore, before building a model, split your data into two parts: a training set and a test set.

You use the training set to train and evaluate the model during the development stage. You then use the trained model to make predictions on the unseen test set. This approach gives you a sense of the model's performance and robustness.

Fortunately, sklearn has a function called train_test_split(), which divides your data into these sets. Import the function and then use it to split the data:

ML Tutorial
from sklearn.model_selection import train_test_split

# Split our data
train, test, train_labels, test_labels = train_test_split(features,
                                                           labels,
                                                           test_size=0.33,
                                                           random_state=42)
17,390 | The function randomly splits the data using the test_size parameter. In this example, we now have a test set (test) that represents 33% of the original dataset. The remaining data (train) then makes up the training data. We also have the respective labels for both the train/test variables, i.e. train_labels and test_labels.

We can now move on to training our first model.

Step 4 - Building and Evaluating the Model

There are many models for machine learning, and each model has its own strengths and weaknesses. In this tutorial, we will focus on a simple algorithm that usually performs well in binary classification tasks, namely Naive Bayes (NB).

First, import the GaussianNB module. Then initialize the model with the GaussianNB() function, then train the model by fitting it to the data using gnb.fit():

ML Tutorial
from sklearn.naive_bayes import GaussianNB

# Initialize our classifier
gnb = GaussianNB()

# Train our classifier
model = gnb.fit(train, train_labels)
17,391 | After we train the model, we can then use the trained model to make predictions on our test set, which we do using the predict() function. The predict() function returns an array of predictions for each data instance in the test set. We can then print our predictions to get a sense of what the model determined.

Use the predict() function with the test set and print the results:

ML Tutorial
# Make predictions
preds = gnb.predict(test)
print(preds)

Run the code and you'll see the following results:

[Figure: Jupyter Notebook with Python cell that prints the predicted values of the Naive Bayes classifier on our test data]

As you see in the Jupyter Notebook output, the predict() function returned an array of 0s and 1s, which represent our predicted values for the tumor class (malignant vs. benign).
17,392 | Now that we have our predictions, let's evaluate how well our classifier is performing.

Step 5 - Evaluating the Model's Accuracy

Using the array of true class labels, we can evaluate the accuracy of our model's predicted values by comparing the two arrays (test_labels vs. preds). We will use the sklearn function accuracy_score() to determine the accuracy of our machine learning classifier:

ML Tutorial
from sklearn.metrics import accuracy_score

# Evaluate accuracy
print(accuracy_score(test_labels, preds))

You'll see the following results:

[Figure: Jupyter Notebook with Python cell that prints the accuracy of our NB classifier]

As you see in the output, the NB classifier is 94.15% accurate. This means that 94.15 percent of the time the classifier is able to make the correct prediction as to whether or not the tumor is malignant or benign. These results suggest that our feature set of 30 attributes are good
17,393 | indicators of tumor class.

You have successfully built your first machine learning classifier. Let's reorganize the code by placing all import statements at the top of the notebook or script. The final version of the code should look like this:

ML Tutorial
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load dataset
data = load_breast_cancer()

# Organize our data
label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']

# Look at our data
print(label_names)
print('Class label = ', labels[0])
print(feature_names)
print(features[0])
17,394 | # Split our data
train, test, train_labels, test_labels = train_test_split(features,
                                                           labels,
                                                           test_size=0.33,
                                                           random_state=42)

# Initialize our classifier
gnb = GaussianNB()

# Train our classifier
model = gnb.fit(train, train_labels)

# Make predictions
preds = gnb.predict(test)
print(preds)

# Evaluate accuracy
print(accuracy_score(test_labels, preds))

Now you can continue to work with your code to see if you can make your classifier perform even better. You could experiment with different subsets of features or even try completely different algorithms, as sketched below. Check out scikit-learn's website at scikit-learn.org/stable for more machine learning ideas.
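For instance, a hedged sketch of one such experiment, reusing the train/test split from the code above and swapping the Naive Bayes model for a k-nearest neighbors classifier, might look like this (the choice of k = 5 is arbitrary, not a recommendation):

from sklearn.neighbors import KNeighborsClassifier

# Try a different algorithm on the same split
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train, train_labels)
knn_preds = knn.predict(test)
print(accuracy_score(test_labels, knn_preds))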
17,395 | Conclusion

In this tutorial, you learned how to build a machine learning classifier in Python. Now you can load data, organize data, train, predict, and evaluate machine learning classifiers in Python using scikit-learn. The steps in this tutorial should help you facilitate the process of working with your own data in Python.
17,396 | How To Build a Neural Network to Recognize Handwritten Digits with TensorFlow

Written by Ellie Birbeck
Edited by Brian Hogan

Neural networks are used as a method of deep learning, one of the many subfields of artificial intelligence. They were first proposed around 70 years ago as an attempt at simulating the way the human brain works, though in a much more simplified form. Individual "neurons" are connected in layers, with weights assigned to determine how the neuron responds when signals are propagated through the network. Previously, neural networks were limited in the number of neurons they were able to simulate, and therefore the complexity of learning they could achieve. But in recent years, due to advancements in hardware development, we have been able to build very deep networks, and train them on enormous datasets to achieve breakthroughs in machine intelligence.

These breakthroughs have allowed machines to match and exceed the capabilities of humans at performing certain tasks. One such task is object recognition. Though machines have historically been unable to match human vision, recent advances in deep learning have made it possible to build neural networks which can recognize objects, faces, text, and even emotions.

In this tutorial, you will implement a small subsection of object recognition: digit recognition. Using TensorFlow, an open-source Python library developed by the Google Brain labs for deep learning research, you will take images of hand-drawn digits and build and train a neural
17,397 | network to recognize and predict the correct label for the digit displayed.

While you won't need prior experience in practical deep learning or TensorFlow to follow along with this tutorial, we'll assume some familiarity with machine learning terms and concepts such as training and testing, features and labels, optimization, and evaluation.

Prerequisites

To complete this tutorial, you'll need a local or remote Python 3 development environment that includes pip for installing Python packages, and venv for creating virtual environments.

Step 1 - Configuring the Project

Before you can develop the recognition program, you'll need to install a few dependencies and create a workspace to hold your files. We'll use a Python 3 virtual environment to manage our project's dependencies.

Create a new directory for your project and navigate to the new directory:

mkdir tensorflow-demo
cd tensorflow-demo

Execute the following commands to set up the virtual environment for this tutorial:

python3 -m venv tensorflow-demo
source tensorflow-demo/bin/activate
17,398 | Next, install the libraries you'll use in this tutorial. We'll use specific versions of these libraries by creating a requirements.txt file in the project directory which specifies the requirement and the version we need.

Create the requirements.txt file:

(tensorflow-demo) $ touch requirements.txt

Open the file in your text editor and add the following lines to specify the Image, NumPy, and TensorFlow libraries and their versions:

requirements.txt
image==1.5.20
numpy==1.14.3
tensorflow==1.4.0

Save the file and exit the editor. Then install these libraries with the following command:

(tensorflow-demo) $ pip install -r requirements.txt

With the dependencies installed, we can start working on our project.

Step 2 - Importing the MNIST Dataset

The dataset we will be using in this tutorial is called the MNIST dataset, and it is a classic in the machine learning community. This dataset is made up of images of handwritten digits, 28x28 pixels in size. Here are some examples of the digits included in the dataset:

[Figure: sample handwritten digits from the MNIST dataset]
17,399 | Let's create a Python program to work with this dataset. We will use one file for all of our work in this tutorial. Create a new file called main.py:

(tensorflow-demo) $ touch main.py

Now open this file in your text editor of choice and add this line of code to the file to import the TensorFlow library:

main.py
import tensorflow as tf

Add the following lines of code to your file to import the MNIST dataset and store the image data in the variable mnist:

main.py
from tensorflow.examples.tutorials.mnist import input_data
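To actually store the image data in the mnist variable as described above, a minimal continuation (assuming the input_data helper bundled with TensorFlow 1.x, a local MNIST_data/ download directory, and one-hot encoded labels) would add one more line to main.py:

main.py
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)  # y labels are one-hot encoded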
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.