Determining the word types

Determining the word types is what part-of-speech (POS) tagging is all about. A POS tagger parses a full sentence with the goal of arranging it into a dependence tree, where each node corresponds to a word and the parent-child relationship determines which word it depends on. With this tree, it can then make more informed decisions, for example, whether the word "book" is a noun ("This is a good book.") or a verb ("Could you please book the flight?").

You might have already guessed that NLTK will also play a role in this area. And indeed, it comes readily packaged with all sorts of parsers and taggers. The POS tagger we will use, nltk.pos_tag(), is actually a full-blown classifier trained using manually annotated sentences from the Penn Treebank project (www.cis.upenn.edu/~treebank). It takes as input a list of word tokens and outputs a list of tuples, each element of which contains the part of the original sentence and its part-of-speech tag:

    >>> import nltk
    >>> nltk.pos_tag(nltk.word_tokenize("This is a good book."))
    [('This', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('good', 'JJ'), ('book', 'NN'), ('.', '.')]
    >>> nltk.pos_tag(nltk.word_tokenize("Could you please book the flight?"))
    [('Could', 'MD'), ('you', 'PRP'), ('please', 'VB'), ('book', 'NN'), ('the', 'DT'), ('flight', 'NN'), ('?', '.')]

The POS tag abbreviations are taken from the Penn Treebank project:

    POS tag   Description                               Example
    CC        coordinating conjunction                  or
    CD        cardinal number                           second
    DT        determiner                                the
    EX        existential there                         there are
    FW        foreign word                              kindergarten
    IN        preposition/subordinating conjunction     on, of, like
    JJ        adjective                                 cool
    JJR       adjective, comparative                    cooler
    JJS       adjective, superlative                    coolest
    LS        list marker
    MD        modal                                     could, will
    POS tag   Description                               Example
    NN        noun, singular or mass                    book
    NNS       noun, plural                              books
    NNP       proper noun, singular                     Sean
    NNPS      proper noun, plural                       Vikings
    PDT       predeterminer                             both the boys
    POS       possessive ending                         friend's
    PRP       personal pronoun                          I, he, it
    PRP$      possessive pronoun                        my, his
    RB        adverb                                    however, usually, naturally, here, good
    RBR       adverb, comparative                       better
    RBS       adverb, superlative                       best
    RP        particle                                  give up
    TO        to                                        to go, to him
    UH        interjection                              uhhuhhuhh
    VB        verb, base form                           take
    VBD       verb, past tense                          took
    VBG       verb, gerund/present participle           taking
    VBN       verb, past participle                     taken
    VBP       verb, singular, present, non-3rd person   take
    VBZ       verb, third person singular, present      takes
    WDT       wh-determiner                             which
    WP        wh-pronoun                                who, what
    WP$       possessive wh-pronoun                     whose
    WRB       wh-adverb                                 where, when

With these tags, it is pretty easy to filter the desired words from the output of pos_tag(). We simply have to count all the words whose tags start with NN for nouns, VB for verbs, JJ for adjectives, and RB for adverbs, as sketched next.
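The counting step itself is only a few lines; here is a minimal sketch of it (the function name count_word_types and its return format are our own, not part of the original code):

    import nltk

    def count_word_types(text):
        # tag the tokens and keep only the tag strings
        tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
        # count the tags by their Penn Treebank prefixes
        return dict(
            nouns=sum(tag.startswith("NN") for tag in tags),
            verbs=sum(tag.startswith("VB") for tag in tags),
            adjectives=sum(tag.startswith("JJ") for tag in tags),
            adverbs=sum(tag.startswith("RB") for tag in tags),
        )

    count_word_types("This is a good book")
    # {'nouns': 1, 'verbs': 1, 'adjectives': 1, 'adverbs': 0}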
Successfully cheating using SentiWordNet

While the linguistic information that we discussed earlier will most likely help us, there is something better we can do to harvest it: SentiWordNet (http://sentiwordnet.isti.cnr.it). Simply put, it is a file that assigns most English words a positive and a negative value. Put more precisely, for every synonym set (synset), it records both the positive and the negative sentiment values. Some examples (showing the POS, SynsetTerms, and Description columns) are as follows:

    POS   SynsetTerms                           Description
    a     studious#1                            marked by care and effort; "made a studious attempt to fix the television set"
    a     careless#1                            marked by lack of attention or consideration or forethought or thoroughness; not careful
    n     implant#1                             a prosthesis placed permanently in tissue
    v     kink#2, curve#5, curl#1               form a curl, curve, or kink; "the cigar smoke curled up at the ceiling"

With the information in the POS column, we will be able to distinguish between the noun "book" and the verb "book". PosScore and NegScore together will help us to determine the neutrality of the word, which is 1 - PosScore - NegScore. SynsetTerms lists all the words in the set that are synonyms. The ID and Description columns can be safely ignored for our purposes.

The synset terms have a number appended, because some occur multiple times in different synsets. For example, "fantasize" conveys two quite different meanings, which also leads to different scores:

    POS   SynsetTerms                           Description
    v     fantasize#2, fantasise#2              portray in the mind; "he is fantasizing the ideal wife"
    v     fantasy#1, fantasize#1, fantasise#1   indulge in fantasies; "he is fantasizing when he says that he plans to start his own company"
To find out which of the synsets to take, we would have to really understand the meaning of the tweets, which is beyond the scope of this chapter. The field of research that focuses on this challenge is called word sense disambiguation. For our task, we take the easy route and simply average the scores over all the synsets in which a term is found; for "fantasize", this means averaging the scores of both synsets shown above. The following function, load_sent_word_net(), does all that for us and returns a dictionary where the keys are strings of the form "word type/word", for example "n/implant", and the values are the positive and negative scores:

    import os
    import csv
    import collections
    import numpy as np

    def load_sent_word_net():
        sent_scores = collections.defaultdict(list)

        # the versioned filename depends on the SentiWordNet release you downloaded
        with open(os.path.join(DATA_DIR, "SentiWordNet.txt"), "r") as csvfile:
            reader = csv.reader(csvfile, delimiter='\t', quotechar='"')
            for line in reader:
                if line[0].startswith("#"):
                    continue
                if len(line) == 1:
                    continue

                POS, ID, PosScore, NegScore, SynsetTerms, Gloss = line
                if len(POS) == 0 or len(ID) == 0:
                    continue
                # print POS, PosScore, NegScore, SynsetTerms
                for term in SynsetTerms.split(" "):
                    # drop the number at the end of every term
                    term = term.split("#")[0]
                    term = term.replace("-", " ").replace("_", " ")
                    key = "%s/%s" % (POS, term.split("#")[0])
                    sent_scores[key].append((float(PosScore), float(NegScore)))

        for key, value in sent_scores.iteritems():
            sent_scores[key] = np.mean(value, axis=0)

        return sent_scores
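Once loaded, the dictionary can be queried directly. The exact scores depend on the SentiWordNet release you downloaded, so take the keys below as illustrative usage rather than guaranteed entries:

    sent_word_net = load_sent_word_net()
    # each value is an array of the averaged [PosScore, NegScore] over all synsets
    print(sent_word_net["n/implant"])
    print(sent_word_net["v/fantasize"])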
Our first estimator

Now we have everything in place to create our first vectorizer. The most convenient way to do it is to inherit from BaseEstimator. It requires us to implement the following three methods:

- get_feature_names(): This returns a list of strings of the features that we will return in transform().
- fit(documents, y=None): As we are not implementing a classifier, we can ignore this one and simply return self.
- transform(documents): This returns numpy.array(), containing an array of shape (len(documents), len(get_feature_names())). This means that, for every document in documents, it has to return a value for every feature name in get_feature_names().

Let us now implement these methods:

    from sklearn.base import BaseEstimator

    sent_word_net = load_sent_word_net()

    class LinguisticVectorizer(BaseEstimator):
        def get_feature_names(self):
            return np.array(['sent_neut', 'sent_pos', 'sent_neg', 'nouns',
                             'adjectives', 'verbs', 'adverbs', 'allcaps',
                             'exclamation', 'question', 'hashtag',
                             'mentioning'])

        # we don't fit here but need to return the reference
        # so that it can be used like fit(d).transform(d)
        def fit(self, documents, y=None):
            return self

        def _get_sentiments(self, d):
            sent = tuple(d.split())
            tagged = nltk.pos_tag(sent)

            pos_vals = []
            neg_vals = []

            nouns = 0.
            adjectives = 0.
            verbs = 0.
            adverbs = 0.
            for w, t in tagged:
                p, n = 0, 0
                sent_pos_type = None
                if t.startswith("NN"):
                    sent_pos_type = "n"
                    nouns += 1
                elif t.startswith("JJ"):
                    sent_pos_type = "a"
                    adjectives += 1
                elif t.startswith("VB"):
                    sent_pos_type = "v"
                    verbs += 1
                elif t.startswith("RB"):
                    sent_pos_type = "r"
                    adverbs += 1

                if sent_pos_type is not None:
                    sent_word = "%s/%s" % (sent_pos_type, w)

                    if sent_word in sent_word_net:
                        p, n = sent_word_net[sent_word]
                        pos_vals.append(p)
                        neg_vals.append(n)

            l = len(sent)
            avg_pos_val = np.mean(pos_vals)
            avg_neg_val = np.mean(neg_vals)

            return [1 - avg_pos_val - avg_neg_val, avg_pos_val, avg_neg_val,
                    nouns / l, adjectives / l, verbs / l, adverbs / l]

        def transform(self, documents):
            obj_val, pos_val, neg_val, nouns, adjectives, verbs, adverbs = \
                np.array([self._get_sentiments(d) for d in documents]).T

            allcaps = []
            exclamation = []
            question = []
            hashtag = []
            mentioning = []
            for d in documents:
                allcaps.append(np.sum([t.isupper()
                                       for t in d.split() if len(t) > 2]))

                exclamation.append(d.count("!"))
                question.append(d.count("?"))
                hashtag.append(d.count("#"))
                mentioning.append(d.count("@"))

            result = np.array([obj_val, pos_val, neg_val, nouns, adjectives,
                               verbs, adverbs, allcaps, exclamation, question,
                               hashtag, mentioning]).T

            return result

Putting everything together

Nevertheless, using these linguistic features in isolation, without the words themselves, will not take us very far. Therefore, we have to combine the TfidfVectorizer with the linguistic features. This can be done with scikit-learn's FeatureUnion class. It is initialized the same way as Pipeline, but instead of evaluating the estimators in sequence, each passing the output of the previous one to the next one, FeatureUnion evaluates them in parallel and joins the output vectors afterwards:

    def create_union_model(params=None):
        def preprocessor(tweet):
            tweet = tweet.lower()

            for k in emo_repl_order:
                tweet = tweet.replace(k, emo_repl[k])
            for r, repl in re_repl.iteritems():
                tweet = re.sub(r, repl, tweet)

            return tweet.replace("-", " ").replace("_", " ")

        tfidf_ngrams = TfidfVectorizer(preprocessor=preprocessor,
                                       analyzer="word")
        ling_stats = LinguisticVectorizer()
        all_features = FeatureUnion([('ling', ling_stats),
                                     ('tfidf', tfidf_ngrams)])
        clf = MultinomialNB()
        pipeline = Pipeline([('all', all_features), ('clf', clf)])

        if params:
            pipeline.set_params(**params)

        return pipeline

Training and testing on the combined featurizers gives a further improvement on positive versus negative, and we evaluate the same setup on all four tasks: positive versus negative, positive/negative versus irrelevant/neutral, positive versus rest, and negative versus rest.

With these results, we probably do not want to use the positive versus rest and negative versus rest classifiers. Instead, we would first use the classifier that determines whether the tweet contains sentiment at all ("pos/neg versus irrelevant/neutral") and then, when it does, use the positive versus negative classifier to determine the actual sentiment.

Summary

Congratulations for sticking with us until the end! Together we have learned how Naive Bayes works and why it is not that naive at all. For training sets where we don't have enough data to learn all the niches in the class probability space, Naive Bayes does a great job of generalizing. We learned how to apply it to tweets and found that cleaning the rough tweets' text helps a lot. Finally, we realized that a bit of "cheating" (only after we have done our fair share of work) is OK, especially when it gives another improvement of the classifier's performance, as we experienced with the use of SentiWordNet.
Recommendations

You have probably learned about regression already in your high school mathematics class. It was probably called ordinary least squares (OLS) regression then. This centuries-old technique is fast to run and can be effectively used for many real-world problems. In this chapter, we will start by reviewing OLS regression and showing you how it is available in both NumPy and scikit-learn.

In various modern problems, we run into limitations of the classical methods and start to benefit from more advanced methods, which we will see later in this chapter. This is particularly true when we have many features, including the case where we have more features than examples (which is something that ordinary least squares cannot handle correctly). These techniques are much more modern, with major developments having happened in the last decade. They go by names such as Lasso, Ridge, or elastic nets. We will go into these in detail.

Finally, we will start looking at recommendations. This is an important area, as it provides significant added value to many applications. It is a topic that we will start exploring here and will see in more detail in the next chapter.

Predicting house prices with regression

Let us start with a simple problem: predicting house prices in Boston. We can use a publicly available dataset. We are given several demographic and geographical attributes, such as the crime rate or the pupil-teacher ratio, and the goal is to predict the median value of a house in a particular area. As usual, we have some training data, where the answer is known to us.
We start by using scikit-learn's methods to load the dataset. This is one of the built-in datasets that scikit-learn comes with, so it is very easy:

    from sklearn.datasets import load_boston
    boston = load_boston()

The boston object is a composite object with several attributes; in particular, boston.data and boston.target will be of interest to us.

We will start with a simple one-dimensional regression: trying to regress the price on a single attribute, the average number of rooms per dwelling, which is stored at position 5 (you can consult boston.DESCR and boston.feature_names for detailed information on the data):

    from matplotlib import pyplot as plt
    plt.scatter(boston.data[:, 5], boston.target, color='r')

The boston.target attribute contains the average house price (our target variable). We can use the standard least squares regression you probably first saw in high school. Our first attempt looks like this:

    import numpy as np

We import NumPy, as this basic package is all we need. We will use functions from the np.linalg submodule, which performs basic linear algebra operations:

    x = boston.data[:, 5]
    x = np.array([[v] for v in x])

This may seem strange, but we want x to be two-dimensional: the first dimension is the different examples, while the second dimension is the attributes. In our case, we have a single attribute, the mean number of rooms per dwelling, so the second dimension is 1:

    y = boston.target
    slope, _, _, _ = np.linalg.lstsq(x, y)

Finally, we use least squares regression to obtain the slope of the regression. The np.linalg.lstsq function also returns some internal information on how well the regression fits the data, which we will ignore for the moment.
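The plotting call that draws the fitted line over the scatter plot is not reproduced above; a minimal sketch of it could look as follows (the axis labels and the variable fit_x are our own additions):

    # draw the no-bias fit y = slope * x on top of the scatter plot
    fit_x = np.array([x.min(), x.max()])
    plt.plot(fit_x, slope * fit_x, '-', linewidth=2)
    plt.xlabel("Average number of rooms (RM)")
    plt.ylabel("House price")
    plt.show()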
The preceding graph shows all the points (as dots) and our fit (the solid line). This does not look very good. In fact, using this one-dimensional model, we are saying that the house price is a multiple of the RM variable (the number of rooms). This would mean that, on average, a house with two rooms would be double the price of a single room, and a house with three rooms would be triple the price. We know that these are false assumptions (and are not even approximately true).

One common step is to add a bias term to the previous expression, so that the price is a multiple of RM plus a bias. This bias is the base price for a zero-bedroom apartment. The trick to implement this is to add 1 to every element of x:

    x = boston.data[:, 5]
    # we now use [v, 1] instead of [v]
    x = np.array([[v, 1] for v in x])
    y = boston.target
    (slope, bias), _, _, _ = np.linalg.lstsq(x, y)
In the following screenshot, we can see that visually it looks better (even though a few outliers may be having a disproportionate impact on the result).

Ideally, though, we would like to measure how good a fit this is quantitatively. In order to do so, we can ask how close our prediction is. For this, we now look at one of those other values returned by the np.linalg.lstsq function, the second element:

    (slope, bias), total_error, _, _ = np.linalg.lstsq(x, y)
    rmse = np.sqrt(total_error[0] / len(x))

The np.linalg.lstsq function returns the total squared error: for each element, it computes the error (the difference between the line and the true value), squares it, and returns the sum of all of these. It is more understandable to measure the average error, so we divide by the number of elements. Finally, we take the square root and print out the root mean squared error (RMSE). Adding the bias term reduces the RMSE compared to the first, unbiased regression. Since the target is measured in thousands of dollars, the RMSE tells us directly by roughly how many thousand dollars we can expect a typical prediction to be off.
Root mean squared error and prediction

The root mean squared error corresponds approximately to an estimate of the standard deviation. Since most of the data is at most two standard deviations from the mean, we can double our RMSE to obtain a rough confidence interval. This is only completely valid if the errors are normally distributed, but it is roughly correct even if they are not.

Multidimensional regression

So far, we have only used a single variable for prediction: the number of rooms per dwelling. We will now use all the data we have to fit a model, using multidimensional regression. We now try to predict a single output (the average house price) based on multiple inputs.

The code looks very much like before:

    x = boston.data
    # we still add a bias term, but now we must use np.concatenate,
    # which concatenates two arrays/lists, because we have several
    # input values in v
    x = np.array([np.concatenate((v, [1])) for v in boston.data])
    y = boston.target
    s, total_error, _, _ = np.linalg.lstsq(x, y)

Now, the root mean squared error is smaller than in the one-dimensional case. This is better than what we had before, which indicates that the extra variables did help. Unfortunately, we can no longer easily display the results, as we have a 14-dimensional regression.

Cross-validation for regression

If you remember when we first introduced classification, we stressed the importance of cross-validation for checking the quality of our predictions. In regression, this is not always done. In fact, we have only discussed the training error so far. This is a mistake if you want to confidently infer the generalization ability. Since ordinary least squares is a very simple model, this is often not a very serious mistake (the amount of overfitting is slight). However, we should still test this empirically, which we will now do using scikit-learn. We will also use its linear regression classes, as they will be easier to swap for the more advanced methods later in the chapter:

    from sklearn.linear_model import LinearRegression
The LinearRegression class implements OLS regression as follows:

    lr = LinearRegression(fit_intercept=True)

We set the fit_intercept parameter to True in order to add a bias term. This is exactly what we had done before, but in a more convenient interface:

    lr.fit(x, y)
    p = lr.predict(x)

Learning and prediction are performed just as for classification. To compute the training error, we proceed as follows:

    e = p - y
    # sum of squares
    total_error = np.sum(e * e)
    rmse_train = np.sqrt(total_error / len(p))
    print('RMSE on training: {}'.format(rmse_train))

We have used a different procedure to compute the root mean square error on the training data. Of course, the result is the same as we had before (it is always good to have these sanity checks to make sure we are doing things correctly).

Now, we will use the KFold class to build a 10-fold cross-validation loop and test the generalization ability of linear regression:

    from sklearn.cross_validation import KFold
    kf = KFold(len(x), n_folds=10)
    err = 0
    for train, test in kf:
        lr.fit(x[train], y[train])
        p = lr.predict(x[test])
        e = p - y[test]
        err += np.sum(e * e)

    rmse_10cv = np.sqrt(err / len(x))
    print('RMSE on 10-fold CV: {}'.format(rmse_10cv))

With cross-validation, we obtain a more conservative estimate (that is, the error is greater). As in the case of classification, this is a better estimate of how well we could generalize to predict prices.

Ordinary least squares is fast at learning time and returns a simple model, which is fast at prediction time. For these reasons, it should often be the first model that you try in a regression problem. However, we are now going to see more advanced methods.
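One caveat if you are following along with a recent scikit-learn release: the sklearn.cross_validation module used above has since been removed, and the equivalent class now lives in sklearn.model_selection with a slightly different interface. A sketch of the same loop in the newer API:

    from sklearn.model_selection import KFold

    kf = KFold(n_splits=10)
    err = 0
    for train, test in kf.split(x):
        lr.fit(x[train], y[train])
        e = lr.predict(x[test]) - y[test]
        err += np.sum(e * e)
    rmse_10cv = np.sqrt(err / len(x))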
Penalized regression

The important variations of OLS regression fall under the theme of penalized regression. In ordinary regression, the returned fit is the best fit on the training data, which can lead to overfitting. Penalizing means that we add a penalty for overconfidence in the parameter values.

Penalized regression is about tradeoffs

Penalized regression is another example of the bias-variance tradeoff. When using a penalty, we get a worse fit on the training data, as we are adding bias. On the other hand, we reduce the variance and tend to avoid overfitting. Therefore, the overall result might generalize better.

L1 and L2 penalties

There are two types of penalties that are typically used for regression: L1 and L2 penalties. The L1 penalty means that we penalize the regression by the sum of the absolute values of the coefficients, while the L2 penalty penalizes by the sum of their squares.

Let us now explore these ideas formally. The OLS optimization is given as follows:

    \vec{b}^{*} = \arg\min_{\vec{b}} \sum_i \left( y_i - \vec{x}_i \cdot \vec{b} \right)^2

In the preceding formula, we find the coefficient vector that results in the minimum squared distance to the actual target. When we add an L1 penalty, we instead optimize the following formula:

    \vec{b}^{*} = \arg\min_{\vec{b}} \sum_i \left( y_i - \vec{x}_i \cdot \vec{b} \right)^2 + \lambda \sum_j \left| b_j \right|

Here, we are trying to simultaneously make the error small, but also make the values of the coefficients small (in absolute terms). Using an L2 penalty means that we use the following formula:

    \vec{b}^{*} = \arg\min_{\vec{b}} \sum_i \left( y_i - \vec{x}_i \cdot \vec{b} \right)^2 + \lambda \sum_j b_j^2

The difference is rather subtle: we now penalize by the square of each coefficient rather than by its absolute value. However, the difference in the results is dramatic.
Ridge, Lasso, and elastic nets

These penalized models often go by rather interesting names. The L1 penalized model is often called the Lasso, while an L2 penalized model is known as Ridge regression. Of course, we can combine the two and obtain an elastic net model.

Both the Lasso and the Ridge result in smaller coefficients than unpenalized regression. However, the Lasso has the additional property that it results in more coefficients being set to exactly zero! This means that the final model does not even use some of its input features; the model is sparse. This is often a very desirable property, as the model performs both feature selection and regression in a single step.

You will notice that whenever we add a penalty, we also add a weight λ, which governs how much penalization we want. When λ is close to zero, we are very close to OLS (in fact, if you set λ to zero, you are just performing OLS), and when λ is large, we have a model that is very different from the OLS one.

The Ridge model is older, as the Lasso is hard to compute manually. However, with modern computers, we can use the Lasso as easily as Ridge, or even combine them to form elastic nets. An elastic net has two penalties: one for the absolute values and another for the squares.

Using Lasso or elastic nets in scikit-learn

Let us adapt the preceding example to use elastic nets. Using scikit-learn, it is very easy to swap in the elastic net regressor for the least squares one that we had before:

    from sklearn.linear_model import ElasticNet
    en = ElasticNet(fit_intercept=True, alpha=0.5)   # alpha is the penalty weight

Now we use en, whereas before we had used lr. This is the only change that is needed. The results are exactly what we would have expected: the training error increases compared to unpenalized OLS, but the cross-validation error decreases. We trade a larger error on the training data for better generalization. We could have tried an L1 penalty using the Lasso class, or an L2 penalty using the Ridge class, with the same code (a short sketch follows at the end of this section).

The next plot shows what happens when we switch from unpenalized regression (shown as a dotted line) to Lasso regression, which is closer to a flat line. The benefits of Lasso regression are, however, more apparent when we have many input variables, and we consider this setting next.
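As mentioned above, the Lasso and Ridge classes can be swapped in through exactly the same interface. A minimal sketch (the alpha values here are illustrative, not tuned):

    from sklearn.linear_model import Lasso, Ridge

    las = Lasso(fit_intercept=True, alpha=0.5)    # L1 penalty: may zero out coefficients
    rid = Ridge(fit_intercept=True, alpha=0.5)    # L2 penalty: shrinks coefficients

    for model in (las, rid):
        model.fit(x, y)
        n_zero = np.sum(model.coef_ == 0)
        print(model.__class__.__name__, 'sets', n_zero, 'coefficients to zero')

This makes the sparsity property easy to see: the Lasso will typically drive some coefficients to exactly zero, while Ridge will not.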
P greater than N scenarios

The title of this section is a bit of inside jargon, which you will now learn. Starting in the 1990s, first in the biomedical domain and then on the web, problems started to appear where P was greater than N. What this means is that the number of features, P, was greater than the number of examples, N (these letters are the conventional statistical shorthand for these concepts). These became known as "P greater than N" problems.

For example, if your input is a set of written texts, a simple way to approach it is to consider each possible word in the dictionary as a feature and regress on those (we will later work on one such problem ourselves). In the English language, you have tens of thousands of words (this is if you perform some stemming and only consider common words; it is more than ten times that if you keep trademarks). If you only have a few hundred or a few thousand examples, you will have more features than examples.

In this case, as the number of features is greater than the number of examples, it is possible to have a perfect fit on the training data. This is a mathematical fact: you are, in effect, solving a system of equations with fewer equations than variables. You can find a set of regression coefficients with zero training error (in fact, you can find more than one perfect solution; infinitely many).
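This is easy to demonstrate on synthetic data; the following small sketch (entirely our own, with random inputs and targets) fits ordinary least squares with more features than examples and shows that the training error is essentially zero:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(20, 100)    # 20 examples, 100 features: P > N
    y = rng.randn(20)         # targets that are pure noise

    coef, _, _, _ = np.linalg.lstsq(X, y)
    print(np.abs(X.dot(coef) - y).max())   # essentially zero training error

Even though the targets are random noise, the "model" reproduces them perfectly on the training set, which is exactly the trap discussed next.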
However, and this is a major problem, zero training error does not mean that your solution will generalize well. In fact, it may generalize very poorly. Whereas before regularization could give you a little extra boost, it is now completely required for a meaningful result.

An example based on text

We will now turn to an example that comes from a study performed at Carnegie Mellon University by Prof. Noah Smith's research group. The study was based on mining the so-called "10-K reports" that companies file with the Securities and Exchange Commission (SEC) in the United States. This filing is mandated by law for all publicly-traded companies. The goal is to predict, based on this piece of public information, what the future volatility of the company's stock will be. In the training data, we are actually using historical data for which we already know what happened.

There are roughly 16,000 examples available. The features correspond to different words, around 150,000 in total, which have already been preprocessed for us. Thus, we have many more features than examples.

The dataset is available in SVMLight format from multiple sources, including the book's companion website. This is a format that scikit-learn can read. SVMLight is, as the name says, a support vector machine implementation, which is also available through scikit-learn; right now, we are only interested in the file format:

    from sklearn.datasets import load_svmlight_file
    data, target = load_svmlight_file('E2006.train')

In the preceding code, data is a sparse matrix (that is, most of its entries are zeros and, therefore, only the non-zero entries are saved in memory), while target is a simple one-dimensional vector. We can start by looking at some attributes of target:

    print('Min target value: {}'.format(target.min()))
    print('Max target value: {}'.format(target.max()))
    print('Mean target value: {}'.format(target.mean()))
    print('Std. dev. target: {}'.format(target.std()))

This prints out the minimum, maximum, mean, and standard deviation of the target values.
So, we can see that the target values all lie in a fairly small range of negative numbers. Now that we have a feel for the data, we can check what happens when we use OLS to predict. Note that we can use exactly the same classes and methods as before:

    from sklearn.linear_model import LinearRegression
    lr = LinearRegression(fit_intercept=True)
    lr.fit(data, target)
    p = np.array(map(lr.predict, data))
    # p is a row array; we want to flatten it
    p = p.ravel()
    # e is the 'error': the difference between prediction and reality
    e = p - target
    total_sq_error = np.sum(e * e)
    rmse_train = np.sqrt(total_sq_error / len(p))
    print(rmse_train)

The error is not exactly zero because of rounding errors, but it is very close: much smaller than the standard deviation of the target, which is the natural comparison value.

When we use cross-validation (the code is very similar to what we used before in the Boston example), we get something very different. Remember that the standard deviation of the target is the error we would make if we always "predicted" the mean value. So, with OLS, in training, the error is insignificant. When generalizing, it is very large and the prediction is actually harmful: we would have done better (in terms of root mean square error) by simply predicting the mean value every time!

Training and generalization error

When the number of features is greater than the number of examples, you always get zero training error with OLS, but this is rarely a sign that your model will do well in terms of generalization. In fact, you may get zero training error and have a completely useless model.

One solution, naturally, is to use regularization to counteract the overfitting. We can try the same cross-validation loop with an elastic net learner, after setting the penalty parameter (sketched below). Now, we get an RMSE that is lower than the standard deviation of the target, which is better than just "predicting the mean". In a real-life problem, it is hard to know when we have done all we can, as perfect prediction is almost always impossible.
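A sketch of that regularized cross-validation loop, using the data and target arrays loaded above (the alpha value here is purely illustrative):

    from sklearn.linear_model import ElasticNet
    from sklearn.cross_validation import KFold

    met = ElasticNet(fit_intercept=True, alpha=0.1)   # illustrative penalty weight
    kf = KFold(len(target), n_folds=5)
    err = 0
    for train, test in kf:
        met.fit(data[train], target[train])
        e = met.predict(data[test]) - target[test]
        err += np.sum(e * e)

    rmse_cv = np.sqrt(err / len(target))
    print('RMSE on 5-fold CV: {}'.format(rmse_cv))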
Setting hyperparameters in a smart way

In the preceding example, we set the penalty parameter to a particular value. We could just as well have set it to half that, to double it, or to a million. Naturally, the results vary each time. If we pick an overly large value, we get underfitting; in the extreme case, the learning system will just return every coefficient equal to zero. If we pick a value that is too small, we overfit and are very close to OLS, which generalizes poorly.

How do we choose a good value? This is a general problem in machine learning: setting the parameters of our learning methods. A generic solution is to use cross-validation. We pick a set of possible values and then use cross-validation to choose which one is best. This performs more computation (ten times more if we use 10 folds), but is always applicable and unbiased.

We must be careful, though. In order to obtain an estimate of generalization, we have to use two levels of cross-validation: one level is to estimate the generalization, while the second level is to get good parameters. That is, we split the data into, for example, 10 folds. We start by holding out the first fold and learning on the other nine. Now, we split these nine again in order to choose the parameters. Once we have set our parameters, we test on the first fold. Now, we repeat this nine other times.

The preceding figure shows how you break up a single training fold into subfolds; we would need to repeat it for all the other folds. In this case, we are looking at five outer folds and five inner folds, but there is no reason to use the same number of outer and inner folds; you can use any numbers you want, as long as you keep them separate.

This leads to a lot of computation, but it is necessary in order to do things correctly. The problem is this: if you use a piece of data to make any decisions about your model (including which parameters to set), you have contaminated it, and you can no longer use it to test the generalization ability of your model. This is a subtle point and it may not be immediately obvious. In fact, it is still the case that many users of machine learning get this wrong and overestimate how well their systems are doing, because they do not perform cross-validation correctly.
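Spelled out in code, the two-level procedure described above looks roughly like the following sketch. It is not the book's implementation, and the candidate alpha values are illustrative:

    from sklearn.linear_model import ElasticNet
    from sklearn.cross_validation import KFold

    alphas = [0.125, 0.25, 0.5, 1.0]          # candidate penalty weights (illustrative)
    err = 0
    for train, test in KFold(len(target), n_folds=5):
        # inner loop: evaluate each candidate alpha using only the training portion
        best_alpha, best_err = None, float('inf')
        for alpha in alphas:
            inner_err = 0
            for itrain, itest in KFold(len(train), n_folds=5):
                met = ElasticNet(fit_intercept=True, alpha=alpha)
                met.fit(data[train[itrain]], target[train[itrain]])
                e = met.predict(data[train[itest]]) - target[train[itest]]
                inner_err += np.sum(e * e)
            if inner_err < best_err:
                best_alpha, best_err = alpha, inner_err
        # retrain with the chosen alpha and evaluate on the held-out outer fold
        met = ElasticNet(fit_intercept=True, alpha=best_alpha)
        met.fit(data[train], target[train])
        e = met.predict(data[test]) - target[test]
        err += np.sum(e * e)

    print('nested CV RMSE: {}'.format(np.sqrt(err / len(target))))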
Fortunately, scikit-learn makes it very easy to do the right thing: it has classes named LassoCV, RidgeCV, and ElasticNetCV, all of which encapsulate the cross-validation check for the inner parameter. The code is otherwise exactly like the previous one, except that we do not need to specify any value for alpha:

    from sklearn.linear_model import ElasticNetCV
    met = ElasticNetCV(fit_intercept=True)
    kf = KFold(len(target), n_folds=10)
    err = 0
    for train, test in kf:
        met.fit(data[train], target[train])
        p = map(met.predict, data[test])
        p = np.array(p).ravel()
        e = p - target[test]
        err += np.dot(e, e)

    rmse_10cv = np.sqrt(err / len(target))

This results in a lot of computation, so you may want to get some coffee while you are waiting (depending on how fast your computer is).

Rating prediction and recommendations

If you have used any commercial online system in the last few years, you have probably seen these recommendations. Some are like Amazon's "Customers who bought X also bought Y". These will be dealt with in the next chapter, under the topic of basket analysis. Others are based on predicting the rating of a product, such as a movie.

This last problem was made famous with the Netflix Challenge, a million-dollar machine learning public challenge by Netflix. Netflix (well known in the US and the UK, but not available everywhere) is a movie rental company. Traditionally, you would receive DVDs in the mail; more recently, the business has focused on the online streaming of videos. From the start, one of the distinguishing features of the service was that it gave every user the option of rating the films they had seen, using these ratings to then recommend other films. In this mode, you not only have the information about which films the user saw, but also their impression of them (including negative impressions).

In 2006, Netflix made available a large number of customer ratings of films in its database, and the goal was to improve on their in-house algorithm for ratings prediction. Whoever was able to beat it by 10 percent or more would win 1 million dollars. In 2009, an international team named BellKor's Pragmatic Chaos was able to beat that mark and take the prize. They did so just 20 minutes before another team, The Ensemble, passed the 10 percent mark as well: an exciting photo-finish for a competition that lasted several years.
Unfortunately, for legal reasons, this dataset is no longer available. Although the data was anonymous, there were concerns that it might be possible to discover who the clients were and reveal private details of their movie rentals. However, we can use an academic dataset with similar characteristics. This data comes from GroupLens, a research laboratory at the University of Minnesota.

Machine learning in the real world

Much has been written about the Netflix Prize, and you may learn a lot by reading up on it (this book will have given you enough background to start to understand the issues). The techniques that won were a mix of advanced machine learning and a lot of work put into the preprocessing of the data. For example, some users like to rate everything very highly, while others are always more negative; if you do not account for this in preprocessing, your model will suffer. Other, not so obvious, normalizations were also necessary for a good result: how old the film is, how many ratings it received, and so on. Good algorithms are a good thing, but you always need to "get your hands dirty" and tune your methods to the properties of the data you have in front of you.

We can formulate this as a regression problem and apply the methods that we learned in this chapter. It is not a good fit for a classification approach. We could certainly attempt to learn five class classifiers, one class for each possible grade, but there are two problems with this approach:

- Errors are not all the same. For example, mistaking a 5-star movie for a 4-star one is not as serious a mistake as mistaking a 5-star movie for a 1-star one.
- Intermediate values make sense. Even if our inputs are only integer values, it is perfectly meaningful to say that the prediction is, say, 4.3, and we can see that this is a different prediction than 3.5.

These two factors together mean that classification is not a good fit for the problem. The regression framework is more meaningful.

We have two choices: we can build movie-specific or user-specific models. In our case, we are going to first build user-specific models. This means that, for each user, we take the movies that the user has rated as our target variable. The inputs are the ratings of the other users. The model will give a high value to users who are similar to our user (or a negative value to users who like more or less the same movies that our user dislikes).

The system is just an application of what we have developed so far. You will find a copy of the dataset and the code to load it into Python on the book's companion website. There you will also find pointers to more information, including the original MovieLens website.
The loading of the dataset is just basic Python, so let us jump ahead to the learning. We have a sparse matrix, where there are entries from 1 to 5 whenever we have a rating (most of the entries are zero, to denote that the user has not rated those movies). This time, as a regression method, for variety, we are going to be using the LassoCV class:

    from sklearn.linear_model import LassoCV
    reg = LassoCV(fit_intercept=True, alphas=[.125, .25, .5, 1., 2., 4.])

By passing the constructor an explicit set of alphas, we can constrain the values that the inner cross-validation will use. You may note that the values are multiples of two, starting with 1/8 and going up to 4.

We will now write a function that learns a model for the user i:

    # isolate this user
    u = reviews[i]

We are only interested in the movies that the user rated, so we must build up the index of those. There are a few NumPy tricks in here: u.toarray() converts from a sparse matrix to a regular array. Then, we ravel() that array to convert from a row array (that is, a two-dimensional array whose first dimension is 1) to a simple one-dimensional array. We compare it with zero and ask where this comparison is true. The result, ps, is an array of indices; those indices correspond to movies that the user has rated:

    u = u.toarray().ravel()
    ps, = np.where(u > 0)
    # build an array with indices [0...N] except i
    us = np.delete(np.arange(reviews.shape[0]), i)
    x = reviews[us][:, ps].T

Finally, we select only the movies that the user has rated:

    y = u[ps]

Cross-validation is set up as before. Because we have many users, we are going to use only four folds (more would take a long time, and we have enough training data with just 75 percent of the data):

    err = 0
    kf = KFold(len(y), n_folds=4)
    for train, test in kf:
        # we perform a per-movie normalization,
        # which is explained below
        xc, x1 = movie_norm(x[train])
        reg.fit(xc, y[train] - x1)
        # we need to perform the same normalization while testing
        xc, x1 = movie_norm(x[test])
        p = np.array(map(reg.predict, xc)).ravel()
        e = (p + x1) - y[test]
        err += np.sum(e * e)

We did not explain the movie_norm function. This function performs per-movie normalization: some movies are just generally better and get higher average marks:

    def movie_norm(x):
        xc = x.copy().toarray()
        # We cannot use xc.mean(1) because we do not want to have
        # the zeros counting towards the mean. We only want the
        # mean of the ratings that were actually given:
        x1 = np.array([xi[xi > 0].mean() for xi in xc])
        # In certain cases, there were no ratings and we got a NaN
        # value, so we replace it with zero using np.nan_to_num,
        # which does exactly this task:
        x1 = np.nan_to_num(x1)

        # Now we normalize the input by removing the mean value
        # from the non-zero entries:
        for i in xrange(xc.shape[0]):
            xc[i] -= (xc[i] > 0) * x1[i]

        # Implicitly, this also makes the movies that the user did
        # not rate have a value of zero, which is the average.
        # Finally, we return the normalized array and the means:
        return xc, x1

You might have noticed that we converted to a regular (dense) array. This has the added advantage of making the optimization much faster: while scikit-learn works well with sparse values, dense arrays are much faster (if you can fit them in memory; when you cannot, you are forced to use sparse arrays).

When compared with simply guessing the average value for that user, this approach is noticeably better. The results are not spectacular, but it is a start. On one hand, this is a very hard problem and we cannot expect to be right with every prediction: we do better when the users have given us more reviews. On the other hand, regression is a blunt tool for this job. Note how we learned a completely separate model for each user. In the next chapter, we will look at other methods that go beyond regression for approaching this problem. In those models, we integrate the information from all users and all movies in a more intelligent manner.
Summary

In this chapter, we started with the oldest trick in the book: ordinary least squares. It is still sometimes good enough. However, we also saw that more modern approaches that avoid overfitting can give us better results. We used Ridge, Lasso, and elastic nets; these are the state-of-the-art methods for regression.

We once again saw the danger of relying on training error to estimate generalization: it can be an overly optimistic estimate, to the point where our model has zero training error but we know that it is completely useless. When thinking through these issues, we were led into two-level cross-validation, an important point that many in the field still have not completely internalized. Throughout, we were able to rely on scikit-learn to support all the operations we wanted to perform, including an easy way to achieve correct cross-validation.

At the end of this chapter, we started to shift gears and look at recommendation problems. For now, we approached these problems with the tools we knew: penalized regression. In the next chapter, we will look at new, better tools for this problem. These will improve our results on this dataset.

This recommendation setting also has the disadvantage that it requires users to have rated items on a numeric scale. Only a fraction of users actually perform this operation. There is another type of information that is often easier to obtain: which items were purchased together. In the next chapter, we will also see how to leverage this information in a framework called basket analysis.
Recommendations Improved

At the end of the last chapter, we used a very simple method to build a recommendation engine: we used regression to guess a rating value. In the first part of this chapter, we will continue this work and build a more advanced (and better) rating estimator. We start with a few ideas that are helpful and then combine all of them. When combining, we use regression again to learn the best way to combine them.

In the second part of this chapter, we will look at a different way of learning called basket analysis, where we will learn how to make recommendations. Unlike the case in which we had numeric ratings, in the basket analysis setting all we have is information about shopping baskets, that is, what items were bought together. The goal is to learn recommendations. You have probably already seen features of the form "people who bought X also bought Y" in online shopping. We will develop a similar feature of our own.

Improved recommendations

Remember where we stopped in the previous chapter: with a very basic, but not very good, recommendation system that gave better than random predictions. We are now going to start improving it. First, we will go through a couple of ideas that each capture some part of the problem. Then, we will combine multiple approaches rather than using a single approach, in order to achieve a better final performance.

We will be using the same movie recommendation dataset that we started off with in the last chapter. It consists of a matrix with users on one axis and movies on the other. It is a sparse matrix, as each user has only reviewed a small fraction of the movies.
Using the binary matrix of recommendations

One of the interesting conclusions from the Netflix Challenge was one of those obvious-in-hindsight ideas: we can learn a lot about you just from knowing which movies you rated, even without looking at which rating was given. Even with a binary matrix, where we have a 1 where a user rated a movie and a 0 where they did not, we can make useful predictions. In hindsight, this makes perfect sense: we do not choose movies to watch completely randomly, but instead pick those where we already have an expectation of liking them. We also do not make random choices of which movies to rate, but perhaps only rate those we feel most strongly about (naturally, there are exceptions, but on average this is probably true).

We can visualize the values of the matrix as an image, where each rating is depicted as a little square. Black represents the absence of a rating and the grey levels represent the rating value. We can see that the matrix is sparse: most of the squares are black. We can also see that some users rate a lot more movies than others, and that some movies are the target of many more ratings than others.

The code to visualize the data is very simple (you can adapt it to show a larger fraction of the matrix than is possible to show in this book), as follows:

    from matplotlib import pyplot as plt
    # show only a corner of the matrix; adapt the slice to see more of it
    imagedata = reviews[:200, :200].todense()
    plt.imshow(imagedata, interpolation='nearest')

The following screenshot is the output of this code:
We are now going to use this binary matrix to make predictions of movie ratings. The general algorithm will be (in pseudocode) as follows:

1. For each user, rank every other user in terms of closeness. For this step, we will use the binary matrix and use correlation as the measure of closeness (interpreting the binary matrix as zeros and ones allows us to perform this computation).
2. When we need to estimate a rating for a (user, movie) pair, we look at the neighbors of the user sequentially (as defined in step 1). When we first find a rating for the movie in question, we report it.

Implementing the code, we first write a simple NumPy function. NumPy ships with np.corrcoef, which computes correlations. This is a very generic function and computes n-dimensional correlations even when only a single, traditional correlation is needed. Therefore, to compute the correlation between two users, we need to call the following:

    corr_between_user1_and_user2 = np.corrcoef(user1, user2)[0, 1]

In fact, we will want to compute the correlation between a user and all the other users. This is an operation we will use a few times, so we wrap it in a function named all_correlations:

    import numpy as np

    def all_correlations(bait, target):
        '''
        corrs = all_correlations(bait, target)

        corrs[i] is the correlation between bait and target[i]
        '''
        return np.array(
            [np.corrcoef(bait, c)[0, 1]
             for c in target])
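This loop calls np.corrcoef once per row, which is slow; the book's companion code ships a faster implementation. A vectorized version along the following lines is one way to do it (a sketch of ours, assuming bait and target are dense NumPy arrays):

    def all_correlations_fast(bait, target):
        '''Vectorized equivalent of all_correlations (illustrative sketch).'''
        bait = np.asarray(bait, dtype=float)
        target = np.asarray(target, dtype=float)
        # center both inputs
        bait = bait - bait.mean()
        target = target - target.mean(1)[:, None]
        # per-row covariances with bait, then normalize
        covs = target.dot(bait)
        norms = np.sqrt((bait ** 2).sum() * (target ** 2).sum(1))
        # rows with zero variance would need special handling here
        return covs / norms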
Now we can use this in several ways. A simple one is to select the nearest neighbors of each user. These are the users that most resemble it. We will use the measured correlation discussed earlier:

    def estimate(user, rest):
        '''
        estimate movie ratings for 'user' based on the 'rest' of the universe
        '''
        # binary version of user ratings
        bu = user > 0
        # binary version of rest ratings
        br = rest > 0
        ws = all_correlations(bu, br)
        # select the users with the highest correlation values
        # (the neighborhood size is a tunable choice)
        selected = ws.argsort()[-100:]
        # estimate based on the mean:
        estimates = rest[selected].mean(0)
        # we need to correct the estimates based on the fact that
        # some movies have more ratings than others
        # (the small constant avoids division by zero):
        estimates /= (.1 + br[selected].mean(0))
        return estimates

When compared to the estimate obtained over all the users in the dataset, this reduces the RMSE. As usual, when we look only at those users that have rated more movies, we do better; the error reduction is larger if the user is in the top half of the rating activity.

Looking at the movie neighbors

In the previous section, we looked at the users that were most similar. We can also look at which movies are most similar. We will now build recommendations based on a nearest neighbor rule for movies: when predicting the rating of a movie M for a user U, the system will predict that U will rate M with the same points it gave to the movie most similar to M.

Therefore, we proceed in two steps: first, we compute a similarity matrix (a matrix that tells us which movies are most similar); second, we compute an estimate for each (user, movie) pair. We use the NumPy zeros and ones functions to allocate arrays (initialized to zeros and ones respectively):

    movie_likeness = np.zeros((nmovies, nmovies))
    allms = np.ones(nmovies, bool)
    cs = np.zeros(nmovies)
Now, we iterate over all the movies:

    for i in range(nmovies):
        movie_likeness[i] = all_correlations(reviews[:, i], reviews.T)
        movie_likeness[i, i] = -1

We set the diagonal to -1; otherwise, the most similar movie to any movie would be itself, which is true, but very unhelpful. This is the same trick we used in Learning How to Classify with Real-world Examples, when we first introduced nearest neighbor classification. Based on this matrix, we can easily write a function that estimates a rating:

    def nn_movie(movie_likeness, reviews, uid, mid):
        likes = movie_likeness[mid].argsort()
        # reverse the sorting so that the most similar are at the beginning
        likes = likes[::-1]
        # return the rating for the most similar movie available
        for ell in likes:
            if reviews[uid, ell]:
                return reviews[uid, ell]

How well does the preceding function do? Fairly well.

The preceding code does not show you all of the details of the cross-validation. While it would work well in production as it is written, for testing we need to make sure we have recomputed the likeness matrix afresh, without using the user that we are currently testing on (otherwise, we contaminate the test set and we have an inflated estimate of generalization). Unfortunately, this takes a long time, and we do not need the full matrix for each user; you should compute only what you need. This makes the code slightly more complex than the preceding examples. On the companion website for this book, you will find code with all the hairy details. There you will also find a much faster implementation of the all_correlations function.

Combining multiple methods

We can now combine the methods given in the earlier sections into a single prediction. For example, we could average the predictions. This is normally good enough, but there is no reason to think that both predictions are similarly good and should thus have the exact same weight of 0.5. It might be that one is better than the other.
We can try a weighted average, multiplying each prediction by a given weight before summing it all up. How do we find the best weights, though? We learn them from the data, of course!

Ensemble learning

We are using a general technique in machine learning called ensemble learning; it is not only applicable in regression. We learn an ensemble (that is, a set) of predictors. Then, we combine them. What is interesting is that we can see each prediction as being a new feature, and we are now just combining features based on training data, which is what we have been doing all along. Note that we are doing so for regression here, but the same reasoning is applicable during classification: you learn several classifiers and then a master classifier, which takes the output of all of them and gives a final prediction. Different forms of ensemble learning differ in how you combine the base predictors. In our case, we reuse the training data that learned the predictors.

(The figure at this point shows the ensemble flow: the data feeds several base predictors, and their outputs are combined through learned weights into the final prediction.)

By having a flexible way to combine multiple methods, we can simply try any idea we wish by adding it into the mix of learners and letting the system give it a weight. We can also use the weights to discover which ideas are good: if they get a high weight, this means that they seem to be adding useful information. Ideas with very low weights can even be dropped for better performance.

The code for this is very simple, and is as follows:

    # We import the code we used in the previous examples:
    import similar_movie
    import corrneighbors
    import usermodel
    from sklearn.linear_model import LinearRegression

    es = np.array([
        usermodel.estimate_all(),
        corrneighbors.estimate_all(),
        similar_movie.estimate_all(),
    ])
    reg = LinearRegression()
    coefficients = []

    # We are now going to run a leave-1-out cross-validation loop
    for u in xrange(reviews.shape[0]):  # for all user ids
        es_ = np.delete(es, u, axis=1)  # all estimates except for user u
        r_ = np.delete(reviews, u, axis=0)
        # we only care about actual predictions:
        X, Y = np.where(r_ > 0)
        reg.fit(es_[:, X, Y].T, r_[X, Y])
        coefficients.append(reg.coef_)

        prediction = reg.predict(es[:, u, reviews[u] > 0].T)
        # measure the error as before

We measure the error as before and obtain the RMSE of the combined prediction. We can also analyze the coefficients variable to find out how well our predictors fare:

    # the mean value across all users
    print(np.array(coefficients).mean(0))

The estimate from the most similar movie gets the highest weight (it was the best individual prediction, so this is not surprising), and we can drop the correlation-based method from the learning process, as it has little influence on the final result.

What this setting gives us is that it makes it easy to add a few extra ideas. For example, if the single most similar movie is a good predictor, how about we also use the five most similar movies in the learning process? We can adapt the earlier code to generate the k-th most similar movie and then use the stacked learner to learn the weights:

    es = np.array([
        usermodel.estimate_all(),
        similar_movie.estimate_all(k=1),
        similar_movie.estimate_all(k=2),
        similar_movie.estimate_all(k=3),
        similar_movie.estimate_all(k=4),
        similar_movie.estimate_all(k=5),
    ])

The rest of the code remains as before. We have gained a lot of freedom in generating new machine learning systems. In this case, the final result is not better, but it was easy to test this new idea.
However, we do have to be careful not to overfit our dataset. In fact, if we randomly try too many things, some of them will work well on this dataset but will not generalize. Even though we are using cross-validation, we are not cross-validating our design decisions. In order to have a good estimate, and if data is plentiful, you should leave a portion of the data untouched until you have the final model that is about to go into production. Then, testing your model on that data gives you an unbiased prediction of how well you should expect it to work in the real world.

Basket analysis

The methods we have discussed so far work well when you have numeric ratings of how much a user liked a product. This type of information is not always available.

Basket analysis is an alternative mode of learning recommendations. In this mode, our data consists only of what items were bought together; it does not contain any information on whether individual items were enjoyed or not. It is often easier to get this data than ratings data, as many users will not provide ratings, while the basket data is generated as a side effect of shopping. The following screenshot shows a snippet of Amazon.com's web page for the book War and Peace by Leo Tolstoy, which is a classic way to use these results:

This mode of learning is not only applicable to actual shopping baskets, naturally. It is applicable in any setting where you have groups of objects together and need to recommend another one. For example, recommending additional recipients to a user writing an e-mail is done by Gmail and could be implemented using similar techniques (we do not know what Gmail uses internally; perhaps they combine multiple techniques, as we did earlier). Or, we could use these methods to develop an application that recommends web pages to visit based on your browsing history. Even if we are handling purchases, it may make sense to group all purchases by a customer into a single basket, independently of whether the items were bought together or in separate transactions (this depends on the business context).
The beer and diapers story

One of the stories that is often mentioned in the context of basket analysis is the "diapers and beer" story. It states that when supermarkets first started to look at their data, they found that diapers were often bought together with beer. Supposedly, it was the father who would go out to the supermarket to buy diapers and would then pick up some beer as well. There has been much discussion of whether this is true or just an urban myth. In this case, it seems that it is true: in the early 1990s, Osco Drug did discover that in the early evening, beer and diapers were bought together, and it did surprise the managers, who had, until then, never considered these two products to be similar. What is not true is that this led the store to move the beer display closer to the diaper section. Also, we have no idea whether it was really fathers who were buying beer and diapers together more than mothers (or grandparents).

Obtaining useful predictions

It is not just "customers who bought X also bought Y", even though that is how many online retailers phrase it (see the Amazon.com screenshot given earlier); a real system cannot work like this. Why not? Because such a system would get fooled by very frequently bought items and would simply recommend whatever is popular, without any personalization.

For example, at a supermarket, a large fraction of customers buy bread. So if you focus on any particular item, say dishwasher soap, and look at what is frequently bought with dishwasher soap, you might find that bread is frequently bought with soap. In fact, most of the time someone buys dishwasher soap, they also buy bread. However, bread is frequently bought together with anything else, simply because everybody buys bread very often.

What we are really looking for is this: customers who bought X are statistically more likely to buy Y than the baseline. So if you buy dishwasher soap, you are likely to buy bread, but not more so than the baseline. Similarly, a bookstore that simply recommended bestsellers, no matter which books you had already bought, would not be doing a good job of personalizing recommendations.

Analyzing supermarket shopping baskets

As an example, we will look at a dataset consisting of anonymous transactions at a supermarket in Belgium. This dataset was made available by Tom Brijs at Hasselt University. The data is anonymous, so we only have a number for each product, and a basket is then a set of numbers. The datafile is available from several online sources (including the book's companion website) as retail.dat.
We begin by loading the dataset and looking at some statistics:

    from collections import defaultdict
    from itertools import chain

    # the file format is one line per transaction,
    # listing whitespace-separated product ids
    dataset = [[int(tok) for tok in line.strip().split()]
               for line in open('retail.dat')]

    # count how often each product was purchased:
    counts = defaultdict(int)
    for elem in chain(*dataset):
        counts[elem] += 1

We can plot a small histogram of these counts. (The table at this point in the original bins the products by how often they were bought: just once, twice or thrice, four to seven times, and so on in roughly doubling ranges, up to a final bucket of very frequently bought products.)

There are many products that have only been bought a few times. For example, a sizable fraction of products were bought four or fewer times, yet these account for only a small percentage of all purchases. This phenomenon, that many products are only purchased a small number of times, is sometimes labeled "the long tail", and it has only become more prominent as the Internet made it cheaper to stock and sell niche items. In order to be able to provide recommendations for these products, we would need a lot more data.

There are a few open source implementations of basket analysis algorithms out there, but none that are well integrated with scikit-learn or any of the other packages we have been using. Therefore, we are going to implement one classic algorithm ourselves. This algorithm is called the Apriori algorithm, and it is a bit old (it was published in 1994 by Rakesh Agrawal and Ramakrishnan Srikant), but it still works (algorithms, of course, never stop working; they just get superseded by better ideas).
Formally, Apriori takes a collection of sets (that is, your shopping baskets) and returns sets that are very frequent as subsets (that is, items that together are part of many shopping baskets).

The algorithm works according to a bottom-up approach: starting with the smallest candidates (those composed of one single element), it builds up, adding one element at a time. We need to define the minimum support we are looking for:

    # minimum number of baskets that must contain an itemset
    # (the exact threshold is a tunable choice)
    minsupport = 80

Support is the number of times that a set of products was purchased together. The goal of Apriori is to find itemsets with high support. Logically, any itemset with more than minimal support can only be composed of items that themselves have at least minimal support:

    valid = set(k for k, v in counts.items()
                if (v >= minsupport))

Our initial itemsets are singletons (sets with a single element). In particular, all singletons that have at least minimal support are frequent itemsets:

    itemsets = [frozenset([v]) for v in valid]

Now, one iteration of the candidate-growing step is very simple and is written as follows:

    new_itemsets = []
    for iset in itemsets:
        for v in valid:
            if v not in iset:
                # we create a new candidate set, which is the same
                # as the previous one with the addition of v
                newset = iset | frozenset([v])
                # loop over the dataset to count the number of times
                # newset appears; this step is slow and is not used
                # in a proper implementation
                c_newset = 0
                for d in dataset:
                    if newset.issubset(d):
                        c_newset += 1
                if c_newset > minsupport:
                    new_itemsets.append(newset)
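Wrapped into a loop that keeps growing the candidates until no new frequent itemsets can be found, the whole procedure looks roughly like this. This is a compact sketch of ours, not the optimized implementation from the book's companion code:

    from collections import defaultdict

    def apriori_naive(dataset, minsupport):
        '''Return all itemsets contained in at least minsupport baskets (naive sketch).'''
        baskets = [frozenset(d) for d in dataset]
        counts = defaultdict(int)
        for d in baskets:
            for item in d:
                counts[item] += 1
        valid = set(k for k, v in counts.items() if v >= minsupport)

        itemsets = [frozenset([v]) for v in valid]
        freqsets = []
        while itemsets:
            freqsets.extend(itemsets)
            new_itemsets = set()
            for iset in itemsets:
                for v in valid:
                    if v not in iset:
                        newset = iset | frozenset([v])
                        support = sum(1 for d in baskets if newset.issubset(d))
                        if support >= minsupport:
                            new_itemsets.add(newset)
            itemsets = list(new_itemsets)
        return freqsets

Calling apriori_naive(dataset, minsupport) simply repeats the growth step above until it produces nothing new; as the text notes, the repeated passes over the whole dataset make it slow on anything but small data.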
This works correctly, but it is very slow. A better implementation has more infrastructure, so you can avoid having to loop over the whole dataset to get the count (c_newset). In particular, we keep track of which shopping baskets contain which frequent itemsets. This accelerates the loop, but makes the code harder to follow. Therefore, we will not show it here. As usual, you can find both implementations on the book's companion website. The code there is also wrapped into a function that can be applied to other datasets.

The Apriori algorithm returns frequent itemsets, that is, small baskets that occur in at least a minimal number of transactions (minsupport in the code).

Association rule mining

Frequent itemsets are not very useful by themselves. The next step is to build association rules. Because of this final goal, the whole field of basket analysis is sometimes called association rule mining.

An association rule is a statement of the "if X then Y" form, for example, "if a customer bought War and Peace, they will buy Anna Karenina". Note that the rule is not deterministic (not all customers who buy X will buy Y), but it is rather cumbersome to always spell out "if a customer bought X, he is more likely than the baseline to buy Y". Thus, we say "if X then Y", but we mean it in a probabilistic sense.

Interestingly, both the antecedent and the conclusion may contain multiple objects: customers who bought X, Y, and Z also bought A and B. Multiple antecedents may allow you to make more specific predictions than are possible from a single item.

You can get from a frequent itemset to a rule by just trying all possible combinations of "X implies Y". It is easy to generate many of these rules. However, you only want to have valuable rules. Therefore, we need to measure the value of a rule. A commonly used measure is called the lift. The lift is the ratio between the probability obtained by applying the rule and the baseline:

lift(X -> Y) = P(Y | X) / P(Y)

In the preceding formula, P(Y) is the fraction of all transactions that include Y, while P(Y | X) is the fraction of the transactions including X that also include Y. Using the lift helps you avoid the problem of recommending bestsellers: for a bestseller, both P(Y) and P(Y | X) will be large. Therefore, the lift will be close to one and the rule will be deemed not very relevant. In practice, we wish rules to have lift values well above one, often by an order of magnitude or more.
Refer to the following code:

def rules_from_itemset(itemset, dataset):
    itemset = frozenset(itemset)
    nr_transactions = float(len(dataset))
    for item in itemset:
        consequent = frozenset([item])
        antecedent = itemset - consequent
        base = 0.0
        # acount: number of transactions containing the antecedent
        acount = 0.0
        # ccount: number of transactions containing antecedent and consequent
        ccount = 0.0
        for d in dataset:
            if item in d:
                base += 1
            if d.issuperset(itemset):
                ccount += 1
            if d.issuperset(antecedent):
                acount += 1
        base /= nr_transactions
        p_y_given_x = ccount / acount
        lift = p_y_given_x / base
        print('Rule {0} -> {1} has lift {2}'
              .format(antecedent, consequent, lift))

This is slow-running code: we iterate over the whole dataset repeatedly. A better implementation would cache the counts for speed. You can download such an implementation from the book's website, and it does indeed run much faster.

Some of the results are shown in the following table:

[Table: for each rule, the antecedent, the consequent, the consequent count (with its percentage of all transactions), the antecedent count, the count of baskets containing both the antecedent and the consequent, and the resulting lift.]

The counts are numbers of transactions; they include the following:
the consequent alone (that is, the base rate at which that product is bought)
all the items in the antecedent
all the items in the antecedent and the consequent
We can see, for example, how many transactions contained all the items in an antecedent and how many of these also included the consequent; the estimated conditional probability is the ratio of the two. Compared with the much smaller fraction of all transactions that included the consequent on its own, this gives the rule a high lift.

The need to have a decent number of transactions in these counts, in order to be able to make relatively solid inferences, is why we must first select frequent itemsets. If we were to generate rules from an infrequent itemset, the counts would be very small; due to this, the relative values would be meaningless (or subject to very large error bars).

Note that many more association rules can be discovered from this dataset: the algorithm only requires a minimum number of supporting baskets and a minimum lift. This is still a small dataset when compared to what is now possible with the web. When you perform millions of transactions, you can expect to generate many thousands, even millions, of rules. However, for each customer, only a few of them will be relevant at any given time, so each customer only receives a small number of recommendations.

More advanced basket analysis

There are now other algorithms for basket analysis that run faster than Apriori. The code we saw earlier was simple and was good enough for us, as we only had on the order of a hundred thousand transactions. If you had many millions, it might be worthwhile to use a faster algorithm (although note that, for most applications, learning association rules can be run offline).

There are also methods to work with temporal information, leading to rules that take into account the order in which you made your purchases. To take an extreme example of why this may be useful, consider that someone buying supplies for a large party may come back later for trash bags. Therefore, it may make sense to propose trash bags on the first visit. However, it would not make sense to propose party supplies to everyone who buys a trash bag.

You can find Python open source implementations (under the liberal new BSD license, as with scikit-learn) of some of these algorithms in a package called pymining. This package was developed by Barthelemy Dagenais and is available online.
summary in this we started by improving our rating predictions from the previous we saw couple of different ways in which to do so and then combined them all in single prediction by learning how to use set of weights these techniquesensemble or stacked learningare general techniques that can be used in many situations and not just for regression they allow you to combine different ideas even if their internal mechanics are completely differentyou can combine their final outputs in the second half of the we switched gears and looked at another method of recommendationshopping basket analysis or association rule mining in this modewe try to discover (probabilisticassociation rules of the customers who bought are likely to be interested in form this takes advantage of the data that is generated from sales alone without requiring users to numerically rate items this is not available in scikit-learn (yet)so we wrote our own code (for changeassociation rule mining needs to be careful to not simply recommend bestsellers to every user (otherwisewhat is the point of personalization?in order to do thiswe learned about measuring the value of rules in relation to the baseline as the lift of rule in the next we will build music genre classifier
genre classification so farwe have had the luxury that every training data instance could easily be described by vector of feature values in the iris datasetfor examplethe flowers are represented by vectors containing values for the length and width of certain aspects of flower in the text-based exampleswe could transform the text into bag-of-words representation and manually craft our own features that captured certain aspects of the texts it will be different in this howeverwhen we try to classify songs by their genre or how would wefor instancerepresent three-minute long songshould we take the individual bits of its mp representationprobably notsince treating it like text and creating something such as "bag of sound biteswould certainly be way too complex somehowwe will nevertheless have to convert song into number of values that describes it sufficiently sketching our roadmap this will show us how we can come up with decent classifier in domain that is outside our comfort zone for onewe will have to use sound-based featureswhich are much more complex than the text-based ones that we have used before and then we will have to learn how to deal with multiple classeswhereas we have only encountered binary-classification problems up to now in additionwe will get to know new ways of measuring classification performance let us assume scenario where we find bunch of randomly named mp files on our hard diskwhich are assumed to contain music our task is to sort them according to the music genre into different folders such as jazzclassicalcountrypoprockand metal
fetching the music data we will use the gtzan datasetwhich is frequently used to benchmark music genre classification tasks it is organized into distinct genresof which we will use only six for the sake of simplicityclassicaljazzcountrypoprockand metal the dataset contains the first seconds of songs per genre we can download the dataset at recorded at , hz ( , readings per secondmono in the wav format converting into wave format sure enoughif we would want to test our classifier later on our private mp collectionwe would not be able to extract much meaning this is because mp is lossy music compression format that cuts out parts that the human ear cannot perceive this is nice for storing because with mp you can fit ten times as many songs on your device for our endeavorhoweverit is not so nice for classificationwe will have an easier time with wav filesso we will have to convert our mp files in case we would want to use them with our classifier in case you don' have conversion tool nearbyyou might want to check out soxarmy knife of sound processingand we agree with this bold claim one advantage of having all our music files in the wav format is that it is directly readable by the scipy toolkitsample_ratex scipy io wavfile read(wave_filenameherex contains the samples and sample_rate is the rate at which they were taken let us use this information to peek into some music files to get first impression of what the data looks like looking at music very convenient way to get quick impression of how the songs of the diverse genres "looklike is to draw spectrogram for set of songs of genre spectrogram is visual representation of the frequencies that occur in song it shows the intensity of the frequencies on the axis in the specified time intervals on the axisthat isthe darker the colorthe stronger the frequency is in the particular time window of the song
matplotlib provides the convenient function specgram() that performs most of the under-the-hood calculation and plotting for us:

import scipy.io.wavfile
from matplotlib.pyplot import specgram

sample_rate, X = scipy.io.wavfile.read(wave_filename)
print(sample_rate, X.shape)
specgram(X, Fs=sample_rate, xextent=(0, 30))

The wave file we just read was sampled at a rate of 22,050 Hz and contains about 30 seconds' worth of samples.

If we now plot the spectrogram for the first 30 seconds of diverse wave files, we can see that there are commonalities between songs of the same genre.
Just glancing at it, we immediately see the difference in the spectrum between, for example, metal and classical songs. While metal songs have high intensity over most of the frequency spectrum all the time (energize!), classical songs show a more diverse pattern over time.

It should be possible to train a classifier that discriminates at least between metal and classical songs with an accuracy that is high enough. Other genre pairs, such as country and rock, could pose a bigger challenge, though. This looks like a real challenge to us, as we need to discriminate not just between two classes, but between six; we need to be able to discriminate between all six reasonably well.

Decomposing music into sine wave components

Our plan is to extract individual frequency intensities from the raw sample readings (stored in X earlier) and feed them into a classifier. These frequency intensities can be extracted by applying the fast Fourier transform (FFT). As the theory behind the FFT is outside the scope of this chapter, let us just look at an example to get an intuition of what it accomplishes. Later on, we will treat it as a black box feature extractor.

For example, let us generate two wave files, sine_a.wav and sine_b.wav, which contain the sound of a low-frequency and a high-frequency sine wave, respectively. The "Swiss Army knife", sox, mentioned earlier is one way to achieve this (the duration and frequency are arguments of the synth effect):

sox --null -r 22050 sine_a.wav synth <duration> sine <low frequency>
sox --null -r 22050 sine_b.wav synth <duration> sine <high frequency>

The charts in the following screenshot show the plot of the first fraction of a second of each wave; below the sine waves, we can also see their FFT. Not surprisingly, we see a spike at the corresponding frequency in each plot.

Now, let us mix the two, giving the low-frequency sound half the volume of the high-frequency one:

sox --combine mix --volume 1 sine_b.wav --volume 0.5 sine_a.wav sine_mix.wav

We see two spikes in the FFT plot of the combined sound, of which the high-frequency spike is almost double the size of the low-frequency one.
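If you prefer to stay within Python, the same experiment can be reproduced with numpy alone; this is our own sketch (the 400 Hz and 3,000 Hz frequencies and the half-second duration are arbitrary choices for illustration):

import numpy as np

sample_rate = 22050                        # samples per second
t = np.arange(0, 0.5, 1.0 / sample_rate)   # half a second of audio
low = np.sin(2 * np.pi * 400 * t)          # low-frequency tone
high = np.sin(2 * np.pi * 3000 * t)        # high-frequency tone
mix = 0.5 * low + high                     # the low tone at half volume

half = len(mix) // 2
spectrum = np.abs(np.fft.fft(mix))[:half]  # keep the positive-frequency half
freqs = np.fft.fftfreq(len(mix), d=1.0 / sample_rate)[:half]
# the two largest spectral peaks sit at 400 Hz and 3,000 Hz,
# and the 3,000 Hz peak is twice as high as the 400 Hz one
print(freqs[np.argsort(spectrum)[-2:]])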
for real musicwe can quickly see that the fft looks not as beautiful as in the preceding toy exampleusing fft to build our first classifier neverthelesswe can now create some kind of musical fingerprint of song using fft if we do this for couple of songsand manually assign their corresponding genres as labelswe have the training data that we can feed into our first classifier increasing experimentation agility before we dive into the classifier traininglet us first spend some time on experimentation agility although we have the word "fastin fftit is much slower than the creation of the features in our text-based and because we are still in the experimentation phasewe might want to think about how we could speed up the whole feature-creation process of coursethe creation of the fft for each file will be the same each time we run the classifier we could therefore cache it and read the cached fft representation instead of the wave file we do this with the create_fft(functionwhich in turn uses scipy fft(to create the fft for the sake of simplicity (and speed!)let us fix the number of fft components to the first , in this example with our current knowledgewe do not know whether these are the most important ones with regard to music genre classification--only that they show the highest intensities in the earlier fft example if we would later want to use more or less fft componentswe would of course have to recreate the cached fft files
import os
import glob
import numpy as np
import scipy
import scipy.io.wavfile

NUM_FFT_COMPONENTS = 1000   # number of FFT components we keep per song

def create_fft(fn):
    sample_rate, X = scipy.io.wavfile.read(fn)
    fft_features = abs(scipy.fft(X)[:NUM_FFT_COMPONENTS])
    base_fn, ext = os.path.splitext(fn)
    data_fn = base_fn + ".fft"
    np.save(data_fn, fft_features)

We save the data using numpy's save() function, which always appends .npy to the filename. We only have to do this once for every wave file needed for training or predicting. The corresponding FFT reading function is read_fft():

def read_fft(genre_list, base_dir=GENRE_DIR):
    X = []
    y = []
    for label, genre in enumerate(genre_list):
        genre_dir = os.path.join(base_dir, genre, "*.fft.npy")
        file_list = glob.glob(genre_dir)
        for fn in file_list:
            fft_features = np.load(fn)
            X.append(fft_features[:NUM_FFT_COMPONENTS])
            y.append(label)
    return np.array(X), np.array(y)

In our scrambled music directory, we expect the following music genres:

genre_list = ["classical", "jazz", "country", "pop", "rock", "metal"]

Training the classifier

Let us use the logistic regression classifier, which has already served us well in the chapter on sentiment analysis. The added difficulty is that we are now faced with a multiclass classification problem, whereas up to now we have had to discriminate only between two classes.

One aspect that is surprising the first time one switches from binary to multiclass classification is the evaluation of accuracy rates. In binary classification problems, we have learned that an accuracy of 50 percent is the worst case, as it could have been achieved by mere random guessing. In multiclass settings, much lower values can already be very good: with our six genres, for instance, random guessing would result in only about 16.7 percent accuracy (assuming equal class sizes).
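The actual training call is not shown in the text above; a minimal sketch with scikit-learn could look as follows (the train/test split, the GENRE_DIR constant, and the use of train_test_split are our assumptions; in older scikit-learn versions the function lives in sklearn.cross_validation instead of sklearn.model_selection):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = read_fft(genre_list)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression()
clf.fit(X_train, y_train)
print("test accuracy: %.2f" % clf.score(X_test, y_test))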
Using the confusion matrix to measure accuracy in multiclass problems

With multiclass problems, we should not limit our interest to how well we manage to correctly classify the genres. In addition, we should also look at which genres we actually confuse with each other. This can be done with the so-called confusion matrix:

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
print(cm)

This prints the distribution of labels that the classifier predicted for the test set, for every genre. Since we have six genres, we have a six-by-six matrix. The first row in the matrix says that, for the classical songs in the test set (their total is the sum of the first row), it predicted most to belong to the genre classical, one to be a jazz song, two to belong to the country genre, and two to be metal. The diagonal shows the correct classifications. In the first row, we see that most of the songs have been correctly classified as classical and only a handful were misclassifications. This is actually not that bad. The second row is more sobering: only a minority of the jazz songs have been correctly classified.

Of course, we follow the train/test split setup from the previous chapters, so that we actually have to record the confusion matrices per cross-validation fold. We also have to average and normalize later on, so that we have a range between 0 (total failure) and 1 (everything classified correctly).

A graphical visualization is often much easier to read than numpy arrays. matplotlib's matshow() is our friend:

from matplotlib import pylab

def plot_confusion_matrix(cm, genre_list, name, title):
    pylab.clf()
    pylab.matshow(cm, fignum=False, cmap='Blues', vmin=0, vmax=1.0)
    ax = pylab.axes()
    ax.set_xticks(range(len(genre_list)))
    ax.set_xticklabels(genre_list)
    ax.xaxis.set_ticks_position("bottom")
    ax.set_yticks(range(len(genre_list)))
    ax.set_yticklabels(genre_list)
    pylab.title(title)
    pylab.colorbar()
    pylab.grid(False)
    pylab.xlabel('Predicted class')
    pylab.ylabel('True class')
    pylab.show()

When you create a confusion matrix, be sure to choose a color map (the cmap parameter of matshow()) with an appropriate color ordering, so that it is immediately visible what a lighter or darker color means. Especially discouraged for these kinds of graphs are rainbow color maps, such as matplotlib's default "jet", or even the "Paired" color map.

The final graph looks like the following screenshot. For a perfect classifier, we would have expected a diagonal of dark squares from the upper-left corner to the lower-right one, and light colors for the remaining area. In the graph, we immediately see that our FFT-based classifier is far from perfect. It only predicts classical songs correctly (dark square). For rock, for instance, it prefers the label metal most of the time.
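The averaging and normalization across cross-validation folds mentioned above is not spelled out; this is a small sketch of ours (cms is assumed to be a list with one confusion matrix per fold, and numpy is assumed to be imported as np):

# average the per-fold matrices and normalize each row to sum to one,
# so that a perfect classifier shows 1.0 on the diagonal
avg_cm = np.mean(cms, axis=0)
norm_cm = avg_cm / avg_cm.sum(axis=1)[:, np.newaxis]
plot_confusion_matrix(norm_cm, genre_list, "fft",
                      "Confusion matrix of an FFT-based classifier")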
obviouslyusing fft points to the right direction (the classical genre was not that bad)but it is not enough to get decent classifier surelywe can play with the number of fft components (fixed to , but before we dive into parameter tuningwe should do our research there we find that fft is indeed not bad feature for genre classification--it is just not refined enough shortlywe will see how we can boost our classification performance by using processed version of it before we do thathoweverwe will learn another method of measuring classification performance an alternate way to measure classifier performance using receiver operator characteristic (rocwe have already learned that measuring accuracy is not enough to truly evaluate classifier insteadwe relied on precision-recall curves to get deeper understanding of how our classifiers perform there is sister of precision-recall curvescalled receiver operator characteristic (rocthat measures similar aspects of the classifier' performancebut provides another view on the classification performance the key difference is that / curves are more suitable for tasks where the positive class is much more interesting than the negative oneor where the number of positive examples is much less than the number of negative ones information retrieval or fraud detection are typical application areas on the other handroc curves provide better picture on how well the classifier behaves in general to better understand the differenceslet us consider the performance of the trained classifier described earlier in classifying country songs correctly
On the left-hand side graph, we see the P/R curve. For an ideal classifier, we would have the curve going from the top-left corner directly to the top-right corner and then to the bottom-right corner, resulting in an area under the curve (AUC) of 1.0.

The right-hand side graph depicts the corresponding ROC curve. It plots the true positive rate over the false positive rate. Here, an ideal classifier would have a curve going from the lower-left to the top-left corner and then to the top-right corner. A random classifier would be a straight line from the lower-left to the upper-right corner, as shown by the dashed line, having an AUC of 0.5. Therefore, we cannot compare the AUC of a P/R curve with that of an ROC curve.

When comparing two different classifiers on the same dataset, we are always safe to assume that a higher AUC of the P/R curve for one classifier also means a higher AUC of the corresponding ROC curve, and vice versa. Therefore, we never bother to generate both. More on this can be found in the very insightful paper The Relationship Between Precision-Recall and ROC Curves by Jesse Davis and Mark Goadrich (ICML 2006).

The definitions of the x and y axes of both curves are given in the following table:

             x axis                                   y axis
P/R curve    Recall = TP / (TP + FN)                  Precision = TP / (TP + FP)
ROC curve    False positive rate = FP / (FP + TN)     True positive rate = TP / (TP + FN)

Looking at these definitions, we see that the true positive rate on the ROC curve's y axis is the same as the recall on the P/R graph's x axis. The false positive rate measures the fraction of true negative examples that were falsely identified as positive ones, giving 0 in a perfect case (no false positives) and higher values otherwise. Contrast this with precision, where we track the fraction of examples classified as positive that truly are positive.

Going forward, let us use ROC curves to measure our classifier's performance to get a better feeling for it. The only challenge for our multiclass problem is that both ROC and P/R curves assume a binary classification problem. For our purpose, let us therefore create one chart per genre that shows how the classifier performed in a "one versus rest" classification:

from sklearn.metrics import roc_curve

y_pred = clf.predict(X_test)
for label in labels:
    y_label_test = np.asarray(y_test == label, dtype=int)
    proba = clf.predict_proba(X_test)
    proba_label = proba[:, label]
    fpr, tpr, roc_thresholds = roc_curve(y_label_test, proba_label)
    # plot tpr over fpr ...

The outcome will be the six ROC plots shown in the following screenshot. As we have already found out, our first version of the classifier only performs well on classical songs. Looking at the individual ROC curves, however, tells us that we are really underperforming for most of the other genres. Only jazz and country provide some hope. The remaining genres are clearly not usable.
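Since we compare classifiers by the area under their curves, each per-genre curve can also be summarized as a single number inside the loop shown above; this is our addition (the use of sklearn.metrics.auc and the assumption that the integer label indexes into genre_list are ours):

    from sklearn.metrics import auc
    # summarize the current genre's ROC curve by its area under the curve
    roc_auc = auc(fpr, tpr)
    print("genre %s: AUC = %.2f" % (genre_list[label], roc_auc))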
Improving classification performance with Mel Frequency Cepstral Coefficients

We have already learned that the FFT is pointing us in the right direction, but in itself it will not be enough to finally arrive at a classifier that successfully manages to organize our scrambled directory, containing songs of diverse music genres, into individual genre directories. We somehow need a more advanced version of it.

At this point, it is always wise to acknowledge that we have to do more research. Other people might have had similar challenges in the past and already found new ways that might also help us. And indeed, there is even a yearly conference dedicated only to music genre classification, organized by the International Society for Music Information Retrieval (ISMIR). Apparently, automatic music genre classification (AMGC) is an established subfield of music information retrieval (MIR). Glancing over some of the AMGC papers, we see that there is a bunch of work targeting automatic genre classification that might help us.

One technique that seems to be successfully applied in many of those works is called Mel Frequency Cepstral Coefficients (MFCC). The Mel Frequency Cepstrum (MFC) encodes the power spectrum of a sound. It is calculated as the Fourier transform of the logarithm of the signal's spectrum. If that sounds too complicated, simply remember that the name "cepstrum" originates from "spectrum" with the first four characters reversed. MFC has been successfully used in speech and speaker recognition. Let's see whether it also works in our case.

We are in the lucky situation where someone else has already needed exactly what we need and published an implementation of it as the Talkbox SciKit. Once it is installed, we can use its mfcc() function, which calculates the MFC coefficients, as follows:

from scikits.talkbox.features import mfcc

sample_rate, X = scipy.io.wavfile.read(fn)
ceps, mspec, spec = mfcc(X)
print(ceps.shape)

The data we would want to feed into our classifier is stored in ceps, which contains 13 coefficients (the default value for the nceps parameter of the mfcc() function) for each of the frames of the song with the filename fn. Taking all of the data would overwhelm our classifier. What we can do instead is an averaging per coefficient over all the frames. Assuming that the start and end of each song are possibly less genre-specific than the middle part, we also ignore the first and the last 10 percent:

num_ceps = len(ceps)
x = np.mean(ceps[int(num_ceps * 0.1):int(num_ceps * 0.9)], axis=0)
sure enoughthe benchmark dataset that we will be using contains only the first seconds of each songso that we would not need to cut off the last percent we do it neverthelessso that our code works on other datasets as wellwhich are most likely not truncated similar to our work with fftwe certainly would also want to cache the oncegenerated mfcc features and read them instead of recreating them each time we train our classifier this leads to the following codedef write_ceps(cepsfn)base_fnext os path splitext(fndata_fn base_fn cepsnp save(data_fncepsprint("written %sdata_fndef create_ceps(fn)sample_ratex scipy io wavfile read(fncepsmspecspec mfcc(xwrite_ceps(cepsfndef read_ceps(genre_listbase_dir=genre_dir)xy [][for labelgenre in enumerate(genre_list)for fn in glob glob(os path joinbase_dirgenre"ceps npy"))ceps np load(fnnum_ceps len(cepsx append(np meanceps[int(num_ceps* / ):int(num_ceps* / )]axis= ) append(labelreturn np array( )np array(
We get the following promising results, as shown in the next screenshot, with a classifier that uses only a handful of features per song:
the classification performance for all genres has improved jazz and metal are even at almost auc and indeedthe confusion matrix in the following plot also looks much better now we can clearly see the diagonal showing that the classifier manages to classify the genres correctly in most of the cases this classifier is actually quite usable to solve our initial taskif we would want to improve on thisthis confusion matrix quickly tells us where to focus onthe non-white spots on the non-diagonal places for instancewe have darker spot where we mislabel jazz songs as being rock with considerable probability to fix thiswe would probably need to dive deeper into the songs and extract thingsfor instancedrum patterns and similar genre-specific characteristics alsowhile glancing over the ismir papersyou may have also read about socalled auditory filterbank temporal envelope (aftefeatureswhich seem to outperform the mfcc features in certain situations maybe we should have look at them as wellthe nice thing is that being equipped with only roc curves and confusion matriceswe are free to pull in other expertsknowledge in terms of feature extractorswithout requiring ourselves to fully understand their inner workings our measurement tools will always tell us when the direction is right and when to change it of coursebeing machine learner who is eager to learnwe will always have the dim feeling that there is an exciting algorithm buried somewhere in black box of our feature extractorswhich is just waiting for us to be understood
summary in this we stepped out of our comfort zone when we built music genre classifier not having deep understanding of music theoryat first we failed to train classifier that predicts the music genre of songs with reasonable accuracy using fft but then we created classifier that showed really usable performance using mfc features in both caseswe used features that we understood only so much as to know how and where to put them into our classifier setup the one failedthe other succeeded the difference between them is that in the second casewe relied on features that were created by experts in the field and that is totally ok if we are mainly interested in the resultwe sometimes simply have to take shortcuts--we only have to make sure to take these shortcuts from experts in the specific domains and because we had learned how to correctly measure the performance in this new multiclass classification problemwe took these shortcuts with confidence in the next we will look at how to apply techniques you have learned in the rest of the book to this specific type of data we will learn how to use the mahotas computer vision package to preprocess images using traditional image processing functions
recognition image analysis and computer vision has always been important in industrial applications with the popularization of cell phones with powerful cameras and internet connectionsthey are also increasingly being generated by the users thereforethere are opportunities to make use of this to provide better user experience in this we will look at how to apply techniques you have learned in the rest of the book to this specific type of data in particularwe will learn how to use the mahotas computer vision package to preprocess images using traditional image-processing functions these can be used for preprocessingnoise removalcleanupcontrast stretchingand many other simple tasks we will also look at how to extract features from images these can be used as input to the same classification methods we have learned about in other we will apply these techniques to publicly available datasets of photographs introducing image processing from the point of view of the computeran image is large rectangular array of pixel values we wish to either process this image to generate new or better image (perhaps with less noiseor with different lookthis is typically the area of image processing we may also want to go from this array to decision that is relevant to our applicationwhich is better known as computer vision not everybody agrees with this distinction of the two fieldsbut its description is almost exactly how the terms are typically used
the first step will be to load the image from the diskwhere it is typically stored in an image-specific format such as png or jpegthe former being lossless compression format and the latter lossy compression one that is optimized for subjective appreciation of photographs thenwe may wish to perform preprocessing on the images (for examplenormalizing them for illumination variationswe will have classification problem as driver for this we want to be able to learn support vector machine (or otherclassifier that can learn from images thereforewe will use an intermediate representation for extracting numeric features from the images before applying machine learning finallyat the end of the we will learn about using local features these are relatively new methods (sift (scale-invariant feature transform)the first element in this new familywas introduced in and achieve very good results in many tasks loading and displaying images in order to manipulate imageswe will use package called mahotas this is an open source package (mit licenseso it can be used in any projectthat was developed by one of the authors of the book you are reading fortunatelyit is based on numpy the numpy knowledge you have acquired so far can be used for image processing there are other image packages such as scikit-image (skimage)the ndimage ( -dimensional imagemodule in scipyand the python bindings for opencv all of these work natively with numpyso you can even mix and match functionalities from different packages to get your result we start by importing mahotas with the mh abbreviationwhich we will use throughout this import mahotas as mh now we can load an image file using imreadimage mh imread('imagefile png'if imagefile png contains color image of height and width wthen image will be an array of shape (hw the first dimension is the heightthe second the widthand the third is red/green/blue other systems put the width on the first dimensionbut this is the mathematical convention and is used by all numpy-based packages the type of array will typically be np uint (an unsigned integer of bitsthese are the images that your camera takes or that your monitor can fully display
howeversome specialized equipment (mostly in scientific fieldscan take images with more bit resolution or bits are common mahotas can deal with all these typesincluding floating point images (not all operations make sense with floating point numbersbut when they domahotas supports themin many computationseven if the original data is composed of unsigned integersit is advantageous to convert to floating point numbers in order to simplify handling of rounding and overflow issues mahotas can use variety of different input/output backends unfortunatelynone of them can load all existing image formats (there are hundredswith several variations of eachhoweverloading png and jpeg images is supported by all of them we will focus on these common formats and refer you to the mahotas documentation on how to read uncommon formats the return value of mh imread is numpy array this means that you can use standard numpy functionalities to work with images for exampleit is often useful to subtract the mean value of the image from it this can help to normalize images taken under different lighting conditions and can be accomplished with the standard mean methodimage image image mean(we can display the image on screen using maplotlibthe plotting library we have already used several timesfrom matplotlib import pyplot as plt plt imshow(imageplt show(this shows the image using the convention that the first dimension is the height and the second the width it correctly handles color images as well when using python for numerical computationwe benefit from the whole ecosystem working well together basic image processing we will start with small dataset that was collected especially for this book it has three classesbuildingsnatural scenes (landscapes)and pictures of texts there are images in each categoryand they were all taken using cell phone camera with minimal compositionso the images are similar to those that would be uploaded to modern website this dataset is available from the book' website later in the we will look at harder dataset with more images and more categories
This screenshot of a building is one of the images in the dataset. We will use this screenshot as an example.

As you may be aware, image processing is a large field. Here, we will only be looking at some very basic operations we can perform on our images. Some of the most basic operations can be performed using numpy only, but otherwise we will use mahotas.

Thresholding

Thresholding is a very simple operation: we transform all pixel values above a certain threshold to 1 and all those below it to 0 (or, by using Booleans, transform them to True and False):

binarized = (image > threshold_value)

The value of the threshold (threshold_value in the code) needs to be chosen. If the images are all very similar, we can pick one statically and use it for all images. Otherwise, we must compute a different threshold for each image, based on its pixel values. mahotas implements a few methods for choosing a threshold value. One is called Otsu, after its inventor. The first necessary step is to convert the image to grayscale with rgb2gray.
Instead of rgb2gray, we could also take just the mean value of the red, green, and blue channels by calling image.mean(2). The result, however, will not be the same, because rgb2gray uses different weights for the different colors to give a subjectively more pleasing result; our eyes are not equally sensitive to the three basic colors.

image = mh.colors.rgb2gray(image, dtype=np.uint8)
plt.imshow(image)   # display the image

By default, matplotlib will display this single-channel image as a false color image, using red for high values and blue for low ones. For natural images, grayscale is more appropriate. You can select it with the following:

plt.gray()

Now the screenshot is shown in grayscale. Note that only the way in which the pixel values are interpreted and shown has changed; the image data itself is untouched. We can continue our processing by computing the threshold value:

thresh = mh.thresholding.otsu(image)
print(thresh)
plt.imshow(image > thresh)

When applied to the previous screenshot, this method finds the threshold value that separates the building and the parked cars from the sky above.
The result may be useful on its own (if you are measuring some properties of the thresholded image) or it can be useful for further processing. The result is a binary image that can be used to select a region of interest.

The result is still not very good. We can use operations on this screenshot to further refine it. For example, we can run the close operator to get rid of some of the noise in the upper corners:

otsubin = (image <= thresh)
otsubin = mh.close(otsubin, np.ones((15, 15)))

In this case, we are closing the region that is below the threshold, so we reversed the threshold operator. We could, alternatively, have performed an open operation on the negative of the image:

otsubin = (image > thresh)
otsubin = mh.open(otsubin, np.ones((15, 15)))

In either case, the operator takes a structuring element that defines the type of region we want to close. In our case, we used a square (its size is a tuning choice).
This is still not perfect, as there are a few bright objects in the parking lot that are not picked up. We will improve it a bit later in the chapter.

The Otsu threshold was able to identify the region of the sky as brighter than the building. An alternative thresholding method is the Ridley-Calvard method (also named after its inventors):

thresh = mh.thresholding.rc(image)
print(thresh)

This method returns a smaller threshold and tells apart the building details. Whether this is better or worse depends on what you are trying to distinguish.

Gaussian blurring

Blurring your image may seem odd, but it often serves to reduce noise, which helps with further processing. With mahotas, it is just a function call:

image = mh.colors.rgb2gray(image)   # starting again from the original color image
im8 = mh.gaussian_filter(image, 8)
notice how we did not convert the gray screenshot to unsigned integerswe just made use of the floating point result as it is the second argument to the gaussian_filter function is the size of the filter (the standard deviation of the filterlarger values result in more blurringas can be seen in the following screenshot (shown are filtering with sizes and )we can use the screenshot on the left and threshold it with otsu (using the same code seen previouslynow the result is perfect separation of the building region and the sky while some of the details have been smoothed overthe bright regions in the parking lot have also been smoothed over the result is an approximate outline of the sky without any artifacts by blurringwe got rid of the detail that didn' matter to the broad picture have look at the following screenshot
Filtering for different effects

The use of image processing to achieve pleasing effects in images dates back to the beginning of digital images, but it has recently been the basis of a number of interesting applications, the most well-known of which is probably Instagram.

We are going to use a traditional image in image processing: the Lenna image, which is shown below and can be downloaded from the book's website (or from many other image-processing websites):

lenna = mh.imread('lenna.jpg', as_grey=True)

Adding salt and pepper noise

We can perform many further manipulations on this result if we want to. For example, we will now add a bit of salt and pepper noise to the image to simulate a few scanning artifacts. We generate random arrays of the same width and height as the original image, so that only a small percentage of the values will be True:

salt = np.random.random(lenna.shape) > .975
pepper = np.random.random(lenna.shape) > .975
We now add the salt (which means that some values will be almost white) and the pepper noise (which means that some values will be almost black):

white = 230   # near-white value; the exact number is a stylistic choice
black = 30    # near-black value
lenna = mh.stretch(lenna)
lenna = np.maximum(salt * white, lenna)
lenna = np.minimum(pepper * black + lenna * (~pepper), lenna)

We used values for white and black that are slightly less extreme than 255 and 0; this gives a slightly smoother result. However, all of these are choices that need to be made according to subjective preferences and style.

Putting the center in focus

The final example shows how to mix numpy operators with a tiny bit of filtering to get an interesting result. We start with the Lenna image and split it into the color channels:

im = mh.imread('lenna.jpg')
r, g, b = im.transpose(2, 0, 1)
Now, we filter the three channels separately and build a composite image out of them with mh.as_rgb. This function takes two-dimensional arrays, performs contrast stretching to make each an 8-bit integer array, and then stacks them:

s = 12.    # amount of blurring; the exact value is a matter of taste
rb = mh.gaussian_filter(r, s)
gb = mh.gaussian_filter(g, s)
bb = mh.gaussian_filter(b, s)
imblur = mh.as_rgb(rb, gb, bb)

We then blend the two images from the center away to the edges. First, we need to build a weights array, W, which will contain at each pixel a normalized value based on its distance to the center:

h, w = r.shape            # height and width
Y, X = np.mgrid[:h, :w]

We used the np.mgrid object, which returns arrays of size (h, w), with values corresponding to the y and x coordinates, respectively:

Y = Y - h / 2.            # center the y values on the middle
Y = Y / Y.max()           # normalize to -1 .. +1
X = X - w / 2.
X = X / X.max()

We now use a Gaussian function to give the center region a high value:

W = np.exp(-2. * (X ** 2 + Y ** 2))
# normalize again to the 0 .. 1 range
W = W - W.min()
W = W / W.ptp()
W = W[:, :, None]         # this adds a dummy third dimension to W

Notice how all of these manipulations are performed using numpy arrays and not some mahotas-specific methodology. This is one advantage of the Python numpy ecosystem: the operations you learned to perform when you were learning about pure machine learning now become useful in a completely different context.
Finally, we can combine the two images so that the center is in sharp focus and the edges are softer:

ringed = mh.stretch(im * W + (1 - W) * imblur)

Now that you know some of the basic techniques of filtering images, you can build upon them to generate new filters. It is more of an art than a science after this point.

Pattern recognition

When classifying images, we start with a large rectangular array of numbers (pixel values). Nowadays, millions of pixels are common. We could try to feed all these numbers as features into the learning algorithm. This is not a very good idea, because the relationship of each pixel (or even each small group of pixels) to the final result is very indirect. Instead, a traditional approach is to compute features from the image and use those features for classification.

There are a few methods that do work directly from the pixel values. They have feature computation submodules inside them. They may even attempt to learn what good features are automatically. These are the topics of current research.
We previously used an example of the buildings class. Here are examples of the text and scene classes:

Pattern recognition is just classification of images. For historical reasons, the classification of images has been called pattern recognition. However, this is nothing more than the application of classification methods to images. Naturally, images have their own specific issues, which is what we will be dealing with in this chapter.

Computing features from images

With mahotas, it is very easy to compute features from images. There is a submodule named mahotas.features, where feature computation functions are available.

A commonly used set of features are the Haralick texture features. As with many methods in image processing, this method was named after its inventor. These features are texture-based: they distinguish between images that are smooth and those that are patterned, and between different patterns. With mahotas, it is very easy to compute them:

haralick_features = np.mean(mh.features.haralick(image), 0)

The function mh.features.haralick returns an array whose first dimension refers to the four possible directions in which to compute the features (up, down, left, and right). If we are not interested in the direction, we can use the mean over all directions (as in the code above). Based on these features, it is very easy to build a classification system.
There are a few other feature sets implemented in mahotas. Linear binary patterns are another texture-based feature set, which is very robust against illumination changes. There are other types of features, including local features, which we will discuss later in this chapter.

Features are not just for classification. The feature-based approach of reducing a million-pixel image to a few numbers can also be applied in other machine learning contexts, such as clustering, regression, or dimensionality reduction. By computing a few hundred features and then running a dimensionality reduction algorithm on the result, you will be able to go from an object with a million pixel values to a few dimensions, even to two dimensions, as you build a visualization tool.

With these features, we use a standard classification method such as support vector machines:

from glob import glob

images = glob('simple-dataset/*.jpg')
features = []
labels = []
for fname in images:
    im = mh.imread(fname)
    # Haralick features expect an integer-valued grayscale image
    im = mh.colors.rgb2gray(im, dtype=np.uint8)
    features.append(mh.features.haralick(im).mean(0))
    # the class name is encoded in the filename; strip the trailing digits and extension
    labels.append(fname[:-len('00.jpg')])

features = np.array(features)
labels = np.array(labels)

The three classes have very different textures. Buildings have sharp edges and big blocks where the color is similar (the pixel values are rarely exactly the same, but the variation is slight). Text is made of many sharp dark-light transitions, with small black areas in a sea of white. Natural scenes have smoother variations with fractal-like transitions. Therefore, a classifier based on texture is expected to do well.

Since our dataset is small, we only get a modest accuracy using logistic regression.
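The evaluation step is not spelled out above; a minimal sketch using cross-validation could look as follows (the use of cross_val_score and the five folds are our choices; in older scikit-learn versions the import comes from sklearn.cross_validation):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = LogisticRegression()
scores = cross_val_score(clf, features, labels, cv=5)
print('Accuracy: %.1f%%' % (100 * scores.mean()))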
howeverit is also possible that your particular use case would benefit from few specially designed features for examplewe may think that in order to distinguish text from natural imagesit is an important defining feature of text that it is "edgy we do not mean what the text says (that may be edgy or square)but rather that images of text have many edges thereforewe may want to introduce an "edginess featurethere are few ways in which to do so (infinitely manyone of the advantages of machine learning systems is that we can just write up few of these ideas and let the system figure out which ones are good and which ones are not we start with introducing another traditional image-processing operationedge finding in this casewe will use sobel filtering mathematicallywe filter (convolveour image with two matricesthe vertical one is shown in the following screenshotand the horizontal one is shown herewe then sum up the squared result for an overall measure of edginess at each point (in other usesyou may want to distinguish horizontal from vertical edges and use these in another wayas alwaysthis depends on the underlying applicationmahotas supports sobel filtering as followsfiltered mh sobel(imagejust_filter=true
The just_filter=True argument is necessary; otherwise, thresholding is performed and you get an estimate of where the edges are. The following screenshot shows the result of applying the filter (so that lighter areas are edgier) on the left, and the result of thresholding on the right:

Based on this operator, we may want to define a global feature as the overall edginess of the result:

def edginess_sobel(image):
    edges = mh.sobel(image, just_filter=True)
    edges = edges.ravel()
    return np.sqrt(np.dot(edges, edges))

In the last line, we used a trick to compute the root of the sum of squares: using the inner product function np.dot is equivalent to writing np.sum(edges ** 2), but much faster (we just need to make sure we unravel the array first). Naturally, we could have thought up many different ways to achieve similar results. Using the thresholding operation and counting the fraction of pixels above the threshold would be another obvious example.

We can add this feature to the previous pipeline very easily:

features = []
for fname in images:
    im = mh.imread(fname, as_grey=True)
    features.append(np.concatenate([
        mh.features.haralick(im.astype(np.uint8)).mean(0),
        # build a 1-element list with our feature to match
        # the expectations of np.concatenate
        [edginess_sobel(im)],
    ]))
feature sets may be combined easily using this structure by using all of these featureswe get percent accuracy this is perfect illustration of the principle that good algorithms are the easy part you can always use an implementation of state-of-the-art classification the real secret and added value often comes in feature design and engineering this is where knowledge of your dataset is valuable classifying harder dataset the previous dataset was an easy dataset for classification using texture features in factmany of the problems that are interesting from business point of view are relatively easy howeversometimes we may be faced with tougher problem and need better and more modern techniques to get good results we will now test public dataset that has the same structureseveral photographs of the same class the classes are animalscarstransportationand natural scenes when compared to the three classesproblem we discussed previouslythese classes are harder to tell apart natural scenesbuildingsand texts have very different textures in this datasethoweverthe texture is clear marker of the class the following is an example from the animal classand here is another from the cars class
Both objects are set against natural backgrounds, and with large smooth areas inside the objects. We therefore expect that textures will not be very good here.

When we use the same features as before, we achieve only a moderate accuracy in cross-validation using logistic regression. This is not too bad on four classes, but not spectacular either. Let's see whether we can use a different method to do better. In fact, we will see that we need to combine texture features with other methods to get the best possible results. But, first things first, we look at local features.

Local feature representations

A relatively recent development in the computer vision world has been the emergence of local-feature-based methods. Local features are computed on a small region of the image, unlike the previous features we considered, which had been computed on the whole image. mahotas supports computing one type of these features: Speeded Up Robust Features, also known as SURF (there are several others, the most well-known being the original proposal, the Scale-Invariant Feature Transform, or SIFT). These local features are designed to be robust against rotational or illumination changes (that is, they only change their value slightly when the illumination changes).

When using these features, we have to decide where to compute them. There are three possibilities that are commonly used:

Randomly
In a grid
Detecting interesting areas of the image (a technique known as keypoint detection or interest point detection)

All of these are valid and will, under the right circumstances, give good results. mahotas supports all three. Using interest point detection works best if you have a reason to expect that your interest points will correspond to areas of importance in the image. This depends, naturally, on what your image collection consists of. Typically, this is found to work better for man-made images rather than natural scenes. Man-made scenes have stronger angles, edges, or regions of high contrast, which are the typical regions marked as interesting by these automated detectors.

Since we are using photographs of mostly natural scenes, we are going to use the interest point method. Computing them with mahotas is easy: import the right submodule and call the surf.surf function:

from mahotas.features import surf
descriptors = surf.surf(image, descriptors_only=True)
The descriptors_only=True flag means that we are only interested in the descriptors themselves, and not in their pixel location, size, and other meta-information. Alternatively, we could have used the dense sampling method, using the surf.dense function:

from mahotas.features import surf
descriptors = surf.dense(image, spacing=16)   # sample a point every few pixels

This returns the value of the descriptors computed on points that are spaced a fixed number of pixels apart (the spacing argument). Since the position of the points is fixed, the meta-information on the interest points is not very interesting and is not returned by default. In either case, the result (descriptors) is an n-times-64 array, where n is the number of points sampled. The number of points depends on the size of your images, their content, and the parameters you pass to the functions. We used the defaults previously, and this way we obtain a few hundred descriptors per image.

We cannot directly feed these descriptors to a support vector machine, a logistic regressor, or a similar classification system. In order to use the descriptors from the images, there are several solutions. We could just average them, but the results of doing so are not very good, as averaging throws away all location-specific information. In that case, we would have just another global feature set based on edge measurements.

The solution we will use here is the bag-of-words model, which is a fairly recent idea (it was first published in this form in the early 2000s). It is one of those "obvious in hindsight" ideas: it is very simple and works very well.

It may seem strange to say "words" when dealing with images. It may be easier to understand if you think that you have not written words, which are easy to distinguish from each other, but orally spoken audio. Now, each time a word is spoken, it will sound slightly different, so its waveform will not be identical to the other times it was spoken. However, by using clustering on these waveforms, we can hope to recover most of the structure, so that all the instances of a given word end up in the same cluster. Even if the process is not perfect (and it will not be), we can still talk of grouping the waveforms into words.

This is the same thing we do with visual words: we group together similar-looking regions from all images and call these visual words. Grouping is a form of clustering, which we first encountered in the chapter on clustering (finding related posts).
The number of words used does not usually have a big impact on the final performance of the algorithm. Naturally, if the number is extremely small (ten or twenty, when you have a few thousand images), then the overall system will not perform well. Similarly, if you have too many words (many more than the number of images, for example), the system will not perform well either. However, in between these two extremes, there is often a very large plateau, where you can choose the number of words without a big impact on the result. As a rule of thumb, a few hundred words (or more, if you have very many images) should give you a good result.

We are going to start by computing the features:

alldescriptors = []
for fname in images:
    im = mh.imread(fname, as_grey=True)
    im = im.astype(np.uint8)
    alldescriptors.append(surf.surf(im, descriptors_only=True))

This results in a very large number of local descriptors. Now, we use k-means clustering to obtain the centroids. We could use all the descriptors, but we are going to use a smaller sample for extra speed:

# get all the descriptors into a single array
concatenated = np.concatenate(alldescriptors)
# keep only a subsample of the descriptors for speed
concatenated = concatenated[::32]

from sklearn.cluster import KMeans
k = 256            # number of visual words (a few hundred, per the rule of thumb above)
km = KMeans(k)
km.fit(concatenated)

After this is done (which will take a while), we have km containing information about the centroids. We now go back to the descriptors and build feature vectors:

features = []
for d in alldescriptors:
    c = km.predict(d)
    features.append(
        np.array([np.sum(c == ci) for ci in range(k)]))
features = np.array(features)

The end result of this loop is that features[fi] is a histogram corresponding to the image at position fi. (The same could have been computed faster with the np.histogram function, but getting the arguments just right is a little tricky, and the rest of the code is, in any case, much slower than this simple step.)
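As noted, np.histogram can compute the same per-image histogram; this is our sketch of getting the bins argument right for a single image's descriptors d (one bin per cluster index):

c = km.predict(d)
hist, _ = np.histogram(c, bins=np.arange(k + 1))
# hist is identical to the array built with the list comprehension above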
the result is that each image is now represented by single array of features of the same size (the number of clustersin our case thereforewe can use our standard classification methods using logistic regression againwe now get percenta percent improvement we can combine all of the features together and we obtain percentmore than percent over what was obtained with texturebased methodssummary we learned the classical feature-based approach to handling images in machine learning context by reducing million pixels to few numeric dimensions all the technologies that we learned in the other suddenly become directly applicable to image problems this includes classificationwhich is often referred to as pattern recognition when the inputs are imagesclusteringor dimensionality reduction (even topic modeling can be performed on imagesoften with very interesting resultswe also learned how to use local features in bag-of-words model for classification this is very modern approach to computer vision and achieves good results while being robust to many irrelevant aspects of the imagesuch as illumination and also uneven illumination in the same image we also used clustering as useful intermediate step in classification rather than as an end in itself
We focused on mahotas, which is one of the major computer vision libraries in Python. There are others that are equally well maintained. skimage (scikit-image) is similar in spirit, but has a different set of features. OpenCV is a very good C++ library with a Python interface. All of these can work with NumPy arrays, and you can mix and match functions from different libraries to build complex pipelines.

In the next chapter, you will learn a different form of machine learning: dimensionality reduction. As we saw in several earlier chapters, including when using images in this chapter, it is very easy to computationally generate many features. However, often we want to have a reduced number of features for speed, visualization, or to improve our results. In the next chapter, we will see how to achieve this.
Garbage in, garbage out: that's what we know from real life. Throughout this book, we have seen that this pattern also holds true when applying machine learning methods to training data. Looking back, we realize that the most interesting machine learning challenges always involved some sort of feature engineering, where we tried to use our insight into the problem to carefully craft additional features that the machine learner hopefully picks up.

In this chapter, we will go in the opposite direction with dimensionality reduction, which involves cutting away features that are irrelevant or redundant. Removing features might seem counter-intuitive at first thought, as more information is always better than less information. Shouldn't the unnecessary features simply be ignored, for example, by setting their weights to 0 inside the machine learning algorithm? The following are several good reasons for trimming down the dimensions as much as possible in practice:

- Superfluous features can irritate or mislead the learner. This is not the case with all machine learning methods (for example, support vector machines love high-dimensional spaces), but most models feel safer with fewer dimensions.
- Another argument against high-dimensional feature spaces is that more features mean more parameters to tune and a higher risk of overfitting.
- The data we retrieved to solve our task might just have artificially high dimensions, whereas the real dimension might be small.
- Fewer dimensions mean faster training and more variations to try out, resulting in better end results.
- If we want to visualize the data, we are restricted to two or three dimensions. This is known as visualization.

So, here we will show you how to get rid of the garbage within our data while keeping the valuable part of it.
Sketching our roadmap
Dimensionality reduction can be roughly grouped into feature selection and feature extraction methods. We have already employed some kind of feature selection in almost every chapter, when we invented, analyzed, and then probably dropped some features. In this chapter, we will present some ways that use statistical methods, namely correlation and mutual information, to be able to do feature selection in vast feature spaces. Feature extraction tries to transform the original feature space into a lower-dimensional feature space. This is especially useful when we cannot get rid of features using selection methods, but we still have too many features for our learner. We will demonstrate this using principal component analysis (PCA), linear discriminant analysis (LDA), and multidimensional scaling (MDS).

Selecting features
If we want to be nice to our machine learning algorithm, we will provide it with features that are not dependent on each other, yet highly dependent on the value to be predicted. This means that each feature adds some salient information; removing any of the features will lead to a drop in performance.

If we have only a handful of features, we could draw a matrix of scatter plots, one scatter plot for every feature-pair combination. Relationships between the features could then be easily spotted. For every feature pair showing an obvious dependence, we would then think about whether we should remove one of them or better design a newer, cleaner feature out of both.

Most of the time, however, we have more than a handful of features to choose from. Just think of the classification task where we had a bag-of-words to classify the quality of an answer, which would require a huge scatter plot matrix. In this case, we need a more automated way to detect the overlapping features and a way to resolve them. We will present two general ways to do so in the following subsections, namely filters and wrappers.
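Before diving into those, here is a quick sketch of the scatter-plot matrix idea mentioned above. It is only an illustration; with a recent pandas, and assuming X is an array of shape (n_samples, n_features) with just a handful of columns, it could look like this:

import pandas as pd
from pandas.plotting import scatter_matrix

# one scatter plot per feature-pair combination, densities on the diagonal
df = pd.DataFrame(X, columns=["f%d" % i for i in range(X.shape[1])])
scatter_matrix(df, figsize=(8, 8), diagonal="kde")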
Detecting redundant features using filters
Filters try to clean up the feature forest independent of any machine learning method used later. They rely on statistical methods to find out which of the features are redundant (in which case, we need to keep only one per redundant feature group) or irrelevant. In general, the filter works as depicted in the following workflow: starting from all features, it first selects the features that are not redundant, and from those it then selects the features that are not irrelevant, leaving the resulting features that are handed on to the learner.

Correlation
Using correlation, we can easily see linear relationships between pairs of features, which are relationships that can be modeled using straight lines. In the graphs shown in the following screenshot, we can see different degrees of correlation together with a potential linear dependency plotted as a red dashed line (a fitted one-dimensional polynomial). The correlation coefficient at the top of the individual graphs is calculated using the common Pearson correlation coefficient (the Pearson r value) by means of the pearsonr() function of scipy.stats.

Given two equal-sized data series, it returns a tuple of the correlation coefficient and the p-value, which is the probability that these data series were generated by an uncorrelated system. In other words, the higher the p-value, the less we should trust the correlation coefficient:

>>> from scipy.stats import pearsonr
>>> pearsonr([1, 2, 3], [1, 2, 3.1])
(0.9996..., 0.0175...)
>>> pearsonr([1, 2, 3], [1, 20, 6])
(0.2538..., 0.8366...)

In the first case, we have a clear indication that both series are correlated. In the second one, we still clearly have a non-zero value.
However, the p-value basically tells us that whatever the correlation coefficient is, we should not pay attention to it. The following output in the screenshot illustrates the same: in the first three cases, which have high correlation coefficients, we would probably want to throw out either of the two features, since they seem to convey similar if not the same information. In the last case, however, we should keep both features. In our application, this decision would of course be driven by that p-value.

Although it worked nicely in the previous example, reality is seldom nice to us. One big disadvantage of correlation-based feature selection is that it only detects linear relationships (relationships that can be modeled by a straight line). If we use correlation on non-linear data, we see the problem. In the following example, we have a quadratic relationship.
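A quick check makes the point concrete. This is just an illustrative sketch with made-up data, not the data behind the plots: a perfect quadratic dependency, yet the Pearson coefficient is close to zero and the p-value is large.

import numpy as np
from scipy.stats import pearsonr

x = np.linspace(-1, 1, 100)
y = x ** 2   # y is fully determined by x
print(pearsonr(x, y))  # correlation coefficient near 0, p-value close to 1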
Although the human eye immediately sees the relationship between the two features in all but the bottom-right graph, the correlation coefficient does not. It is obvious that correlation is useful for detecting linear relationships, but fails for everything else. For non-linear relationships, mutual information comes to the rescue.

Mutual information
When looking at feature selection, we should not focus on the type of relationship as we did in the previous section (linear relationships). Instead, we should think in terms of how much information one feature provides, given that we already have another.
To understand that, let us pretend we want to use features from the feature set house_size, number_of_levels, and avg_rent_price to train a classifier that outputs whether the house has an elevator or not. In this example, we intuitively see that knowing house_size means we don't need number_of_levels anymore, since it somehow contains redundant information. With avg_rent_price, it is different, as we cannot infer the value of rental space simply from the size of the house or the number of levels it has. Thus, we would be wise to keep only one of them in addition to the average price of rental space.

Mutual information formalizes the previous reasoning by calculating how much information two features have in common. But unlike correlation, it does not rely on a sequence of data, but on the distribution. To understand how it works, we have to dive a bit into information entropy.

Let's assume we have a fair coin. Before we flip it, we will have maximum uncertainty as to whether it will show heads or tails, as both have an equal probability of 50 percent. This uncertainty can be measured by means of Claude Shannon's information entropy:

H(X) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i)

In our fair coin scenario, we have two cases: let x_1 be the case of heads and x_2 the case of tails, with p(x_1) = p(x_2) = 0.5.

Thus, we get the following:

H(X) = -p(x_1) \log_2 p(x_1) - p(x_2) \log_2 p(x_2) = -0.5 \cdot \log_2 0.5 - 0.5 \cdot \log_2 0.5 = 1.0

For convenience, we can also use scipy.stats.entropy([0.5, 0.5], base=2). We set the base parameter to 2 to get the same result as the previous one. Otherwise, the function will use the natural logarithm via np.log(). In general, the base does not matter as long as you use it consistently.

Now imagine we knew upfront that the coin is actually not that fair, with the heads side having a higher chance of showing up after flipping.
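We can check this directly with scipy.stats.entropy. The concrete probabilities below are just illustrative choices to show the trend, not values taken from the scenario above:

from scipy.stats import entropy

print(entropy([0.5, 0.5], base=2))  # 1.0 bit for a fair coin
print(entropy([0.6, 0.4], base=2))  # roughly 0.97 bits
print(entropy([0.9, 0.1], base=2))  # roughly 0.47 bits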
We can see that this situation is less uncertain. The uncertainty will decrease the farther we get from 0.5, reaching the extreme value of 0 for either a 0 percent or 100 percent chance of heads showing up, as we can see in the following graph.

We will now modify the entropy by applying it to two features instead of one, such that it measures how much uncertainty is removed from X when we learn about Y. Then we can capture how one feature reduces the uncertainty of another.

For example, without having any further information about the weather, we are totally uncertain whether it is raining outside or not. If we now learn that the grass outside is wet, the uncertainty has been reduced (we will still have to check whether the sprinkler had been turned on).

More formally, mutual information is defined as:

I(X;Y) = \sum_{i=1}^{m} \sum_{j=1}^{n} P(X_i, Y_j) \log_2 \frac{P(X_i, Y_j)}{P(X_i)\,P(Y_j)}

This looks a bit intimidating, but is really not more than sums and products. For instance, the calculation of P(X_i) is done by binning the feature values and then calculating the fraction of values in each bin. In the following plots, the feature values have been binned into a small, fixed number of bins.
In order to restrict mutual information to the interval [0, 1], we have to divide it by their added individual entropy, which gives us the normalized mutual information:

NI(X;Y) = \frac{I(X;Y)}{H(X) + H(Y)}

The nice thing about mutual information is that, unlike correlation, it is not looking only at linear relationships, as we can see in the following graphs.
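The whole computation fits in a few lines of NumPy. The following is only a minimal sketch of the formulas above (the function name and the default of ten bins are my own choices, not part of the text):

import numpy as np
from scipy.stats import entropy

def normalized_mutual_info(x, y, bins=10):
    # estimate the joint distribution by binning both features
    counts_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = counts_xy / counts_xy.sum()
    p_x = p_xy.sum(axis=1)   # marginal distribution of x
    p_y = p_xy.sum(axis=0)   # marginal distribution of y
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            if p_xy[i, j] > 0:
                mi += p_xy[i, j] * np.log2(p_xy[i, j] / (p_x[i] * p_y[j]))
    # normalize by the sum of the individual entropies
    return mi / (entropy(p_x, base=2) + entropy(p_y, base=2))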
Hence, we have to calculate the normalized mutual information for every feature pair. For every pair having a very high value (we would have to determine what "very high" means), we would then drop one of them. In case we are doing regression, we could drop features that have a very low mutual information with the desired result value.

This might work for a small set of features. At some point, however, this procedure can get really expensive, as the amount of calculation grows quadratically, since we are computing the mutual information between feature pairs.

Another huge disadvantage of filters is that they drop features that are not useful in isolation. More often than not, there is a handful of features that seem to be totally independent of the target variable, yet when combined together, they rock. To keep these, we need wrappers.
Asking the model about the features using wrappers
While filters can tremendously help in getting rid of useless features, they can go only so far. After all the filtering, there might still be some features that are independent among themselves and show some degree of dependence with the result variable, but that are totally useless from the model's point of view. Just think of the following data, which describes the XOR function. Individually, neither A nor B would show any signs of dependence on Y, whereas together they clearly do:

A    B    Y
0    0    0
0    1    1
1    0    1
1    1    0

So, why not ask the model itself to give its vote on the individual features? This is what wrappers do, as depicted in the following process chart: starting with the current feature set (initialized with all features), a model is trained with it and the importance of the individual features is checked; as long as the feature set is too big, the unimportant features are dropped and the process is repeated; once it is small enough, the remaining set is returned as the resulting features.

Here, we have pushed the calculation of feature importance into the model training process. Unfortunately (but understandably), feature importance is not determined as a binary value, but as a ranking value. So we still have to specify where to make the cut: what part of the features are we willing to take, and what part do we want to drop?
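The XOR point from the table above is easy to verify in code. This is just a small sketch (the array names are mine): a decision tree fit on either column alone can do no better than chance, while with both columns it reconstructs Y perfectly.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# either feature alone carries no information about y ...
print(DecisionTreeClassifier().fit(X[:, [0]], y).score(X[:, [0]], y))  # 0.5
# ... but both together determine it completely
print(DecisionTreeClassifier().fit(X, y).score(X, y))  # 1.0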
Coming back to scikit-learn, we find various excellent wrapper classes in the sklearn.feature_selection package. A real workhorse in this field is RFE, which stands for recursive feature elimination. It takes an estimator and the desired number of features to keep as parameters, and then trains the estimator with various feature sets until it has found a subset of the features that is small enough. The RFE instance itself pretends to be like an estimator, thereby wrapping the provided estimator.

In the following example, we create an artificial classification problem using the convenient make_classification() function of datasets. It lets us specify the creation of 10 features, out of which only three are really valuable to solve the classification problem:

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=100, n_features=10, n_informative=3, random_state=0)
clf = LogisticRegression()
clf.fit(X, y)
selector = RFE(clf, n_features_to_select=3)
selector = selector.fit(X, y)
print(selector.support_)
[False True False True False False False False True False]
print(selector.ranking_)
[4 1 3 1 8 5 7 6 1 2]

The problem in real-world scenarios is, of course, how can we know the right value for n_features_to_select? Truth is, we can't. But most of the time, we can use a sample of the data and play with it using different settings to quickly get a feeling for the right ballpark.

The good thing is that we don't have to be that exact when using wrappers. Let's try different values for n_features_to_select to see how support_ and ranking_ change:

n_features_to_select  support_                                                        ranking_
1                     [False False False True False False False False False False]   [6 3 5 1 10 7 9 8 2 4]
2                     [False False False True False False False False True False]    [5 2 4 1 9 6 8 7 1 3]
3                     [False True False True False False False False True False]     [4 1 3 1 8 5 7 6 1 2]
4                     [False True False True False False False False True True]      [3 1 2 1 7 4 6 5 1 1]
5                     [False True True True False False False False True True]       [2 1 1 1 6 3 5 4 1 1]
6                     [True True True True False False False False True True]        [1 1 1 1 5 2 4 3 1 1]
7                     [True True True True False True False False True True]         [1 1 1 1 4 1 3 2 1 1]
8                     [True True True True False True False True True True]          [1 1 1 1 3 1 2 1 1 1]
9                     [True True True True False True True True True True]           [1 1 1 1 2 1 1 1 1 1]
10                    [True True True True True True True True True True]            [1 1 1 1 1 1 1 1 1 1]

We see that the result is very stable. Features that have been used when requesting smaller feature sets keep on getting selected when letting more features in. Finally, we rely on our train/test set splitting to warn us when we go in the wrong direction.

Other feature selection methods
There are several other feature selection methods that you will discover while reading through machine learning literature. Some don't even look like feature selection methods, as they are embedded into the learning process (not to be confused with the previously mentioned wrappers). Decision trees, for instance, have a feature selection mechanism implanted deep in their core. Other learning methods employ some kind of regularization that punishes model complexity, thus driving the learning process towards models that perform well and are still "simple". They do this by decreasing the less impactful features' importance to zero and then dropping them (L1-regularization).

So watch out: often, the power of machine learning methods has to be attributed to their implanted feature selection method to a great degree.
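The L1-regularization just mentioned is easy to see in action. The following is only a sketch on an artificial dataset similar to the one used for RFE above (with a recent scikit-learn, the l1 penalty requires the liblinear or saga solver); several coefficients of the uninformative features end up at, or very close to, zero:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=10,
                           n_informative=3, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)
print(clf.coef_)  # coefficients of less impactful features are pushed to zero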
Feature extraction
At some point, after we have removed the redundant features and dropped the irrelevant ones, we often still find that we have too many features. No matter what learning method we use, they all perform badly, and given the huge feature space, we understand that they actually cannot do better. We realize that we have to cut away living flesh and get rid of features that all common sense tells us are valuable. Another situation where we need to reduce the dimensions, and where feature selection does not help much, is when we want to visualize data. Then, we need to have at most three dimensions at the end to provide any meaningful graph.

Enter the feature extraction methods. They restructure the feature space to make it more accessible to the model, or simply cut down the dimensions to two or three so that we can show dependencies visually.

Again, we can distinguish between feature extraction methods as being linear or non-linear ones. As before, in the feature selection section, we will present one method for each type: principal component analysis for the linear case and multidimensional scaling for the non-linear version. Although they are widely known and used, they are only representatives of many more interesting and powerful feature extraction methods.

About principal component analysis (PCA)
Principal component analysis is often the first thing to try out if you want to cut down the number of features and do not know what feature extraction method to use. PCA is limited, as it is a linear method, but chances are that it already goes far enough for your model to learn well enough. Add to that the strong mathematical properties it offers, the speed at which it finds the transformed feature space, and its ability to transform between the original and transformed features later, and we can almost guarantee that it will also become one of your frequently used machine learning tools.

Summarizing it: given the original feature space, PCA finds a linear projection of it into a lower-dimensional space that has the following properties:

- The conserved variance is maximized.
- The final reconstruction error (when trying to go back from transformed features to the original ones) is minimized.

As PCA simply transforms the input data, it can be applied both to classification and regression problems. In this section, we will use a classification task to discuss the method.
Sketching PCA
PCA involves a lot of linear algebra, which we do not want to go into. Nevertheless, the basic algorithm can be easily described with the help of the following steps:

1. Center the data by subtracting the mean from it.
2. Calculate the covariance matrix.
3. Calculate the eigenvectors of the covariance matrix.

If we start with N features, the algorithm will again return a transformed feature space with N dimensions, so we have gained nothing so far. The nice thing about this algorithm, however, is that the eigenvalues indicate how much of the variance is described by the corresponding eigenvector. Let us assume we start with N features, and we know that our model does not work well with more than F features. Then we simply pick the F eigenvectors having the highest eigenvalues.

Applying PCA
Let us consider the following artificial dataset, which is visualized in the left plot as follows:

x1 = np.arange(0, 10, .2)
x2 = x1 + np.random.normal(loc=0, scale=1, size=len(x1))
X = np.c_[(x1, x2)]
good = (x1 > 5) | (x2 > 5) # some arbitrary classes
bad = ~good # to make the example look good
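For the curious, the three steps above can be written out directly in NumPy. This is only a minimal sketch of the idea (not how scikit-learn implements it), assuming X is the two-dimensional array just built and NumPy is imported as np as in the rest of the chapter:

Xc = X - X.mean(axis=0)                  # 1. center the data
cov = np.cov(Xc, rowvar=False)           # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # 3. eigenvectors of the covariance matrix
top = eigvecs[:, np.argmax(eigvals)]     # direction with the highest variance
Xtrans_manual = Xc.dot(top)              # project onto that direction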
scikit-learn provides the PCA class in its decomposition package. In this example, we can clearly see that one dimension should be enough to describe the data. We can specify that using the n_components parameter:

from sklearn import linear_model, decomposition, datasets
pca = decomposition.PCA(n_components=1)

Here, we can also use PCA's fit() and transform() methods (or its fit_transform() combination) to analyze the data and project it into the transformed feature space:

Xtrans = pca.fit_transform(X)

Xtrans contains only one dimension, as we have specified. You can see the result in the right graph. The outcome is even linearly separable in this case. We would not even need a complex classifier to distinguish between both classes.

To get an understanding of the reconstruction error, we can have a look at the variance of the data that we have retained in the transformation:

print(pca.explained_variance_ratio_)

This shows that after going from two dimensions to one dimension, we are still left with the vast majority of the variance.

Of course, it is not always that simple. Often, we don't know upfront what number of dimensions is advisable. In that case, we leave the n_components parameter unspecified when initializing PCA to let it calculate the full transformation. After fitting the data, explained_variance_ratio_ contains an array of ratios in decreasing order: the first value is the ratio of the basis vector describing the direction of the highest variance, the second value is the ratio of the direction of the second highest variance, and so on. After plotting this array, we quickly get a feel for how many components we would need: the number of components immediately before the chart has its elbow is often a good guess.

Plots displaying the explained variance over the number of components are called scree plots. A nice example of combining a scree plot with a grid search to find the best setting for the classification problem can be found in the scikit-learn documentation under auto_examples/plot_digits_pipe.html.
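The scree plot itself takes only a couple of lines. The following is a sketch (with matplotlib, fitting a fresh PCA without restricting n_components); on our toy data there are only two components, but on a higher-dimensional dataset this is where you would look for the elbow:

import matplotlib.pyplot as plt

pca_full = decomposition.PCA()   # n_components left unspecified
pca_full.fit(X)
plt.plot(range(1, len(pca_full.explained_variance_ratio_) + 1),
         pca_full.explained_variance_ratio_, 'o-')
plt.xlabel("number of components")
plt.ylabel("explained variance ratio")
plt.show()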
Limitations of PCA and how LDA can help
Being a linear method, PCA has its limitations when we are faced with data that has non-linear relationships. We won't go into details here, but it will suffice to say that there are extensions of PCA, for example kernel PCA, which introduce a non-linear transformation so that we can still use the PCA approach.

Another interesting weakness of PCA that we will cover here appears when it is applied to special classification problems. Let us replace the following:

good = (x1 > 5) | (x2 > 5)

with

good = x1 > x2

to simulate such a special case, and we quickly see the problem.

Here, the classes are not distributed according to the axis with the highest variance, but the one with the second highest variance. Clearly, PCA falls flat on its face. As we don't provide PCA with any cues regarding the class labels, it cannot do any better.

Linear discriminant analysis (LDA) comes to the rescue here. It is a method that tries to maximize the distance between points belonging to different classes while minimizing the distance between points of the same class. We won't give any more details regarding how the underlying theory works in particular, just a quick tutorial on how to use it:

from sklearn import lda
lda_inst = lda.LDA(n_components=1)
Xtrans = lda_inst.fit_transform(X, good)
That's all! Note that in contrast to the previous PCA example, we provide the class labels to the fit_transform() method. Thus, whereas PCA is an unsupervised feature extraction method, LDA is a supervised one. The result looks as expected.

Then why consider PCA in the first place and not use LDA only? Well, it is not that simple. With an increasing number of classes and fewer samples per class, LDA does not look that good any more. Also, PCA seems to be not as sensitive to different training sets as LDA. So when we have to advise which method to use, we can only suggest a clear "it depends".

Multidimensional scaling (MDS)
On one hand, PCA tries to use optimization for retained variance; on the other hand, MDS tries to retain the relative distances as much as possible when reducing the dimensions. This is useful when we have a high-dimensional dataset and want to get a visual impression.

MDS does not care about the data points themselves; instead, it is interested in the dissimilarities between pairs of data points and interprets these as distances. The first thing the MDS algorithm does is, therefore, take all the data points and calculate a distance matrix using a distance function, which measures the (most of the time, Euclidean) distance in the original feature space.
Now, MDS tries to position the individual data points in the lower-dimensional space such that the new distances there resemble the distances in the original space as much as possible. As MDS is often used for visualization, the choice of the lower dimension is most of the time two or three.

Let us have a look at the following simple data consisting of three data points in five-dimensional space. Two of the data points are close by and one is very distinct, and we want to visualize that in three and two dimensions as follows:

X = np.c_[np.ones(5), 2 * np.ones(5), 10 * np.ones(5)].T
print(X)

Using the class MDS in scikit-learn's manifold package, we first specify that we want to transform into a three-dimensional space as follows:

from sklearn import manifold
mds = manifold.MDS(n_components=3)
Xtrans = mds.fit_transform(X)

To visualize it in two dimensions, we would have to say so using n_components=2.

The results can be seen in the following two graphs. The triangle and circle are both close together, whereas the star is far away.
Let us have a look at the slightly more complex Iris dataset. We will use it later to contrast LDA with PCA. The Iris dataset contains four attributes per flower. With the previous code, we would project it into three-dimensional space while keeping the relative distances between the individual flowers as much as possible. In the previous example, we did not specify any metric, so MDS will default to Euclidean. This means that flowers that were different according to their four attributes should also be far away in the MDS-scaled three-dimensional space, and flowers that were similar should be near together, as shown in the following screenshot.

Now, doing the dimensionality reduction to three and two dimensions with PCA instead, we see the expected bigger spread of the flowers belonging to the same class, as shown in the following screenshot.
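The plots just described can be reproduced along the following lines. This is only a sketch (projecting to two dimensions for brevity); coloring the resulting scatter plots by iris.target gives the class-wise view discussed above:

from sklearn import datasets, manifold, decomposition

iris = datasets.load_iris()

X_mds = manifold.MDS(n_components=2).fit_transform(iris.data)
X_pca = decomposition.PCA(n_components=2).fit_transform(iris.data)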
Of course, using MDS requires an understanding of the individual features' units; maybe we are using features that cannot be compared using the Euclidean metric. For instance, a categorical variable, even when encoded as an integer (1 = red circle, 2 = blue star, 3 = green triangle, and so on), cannot be compared using Euclidean distance (is red closer to blue than to green?).

But once we are aware of this issue, MDS is a useful tool that reveals similarities in our data that otherwise would be difficult to see in the original feature space.

Looking a bit deeper into MDS, we realize that it is not a single algorithm, but a family of different algorithms, of which we have used just one. The same was true for PCA, and in case you realize that neither PCA nor MDS solves your problem, just look at the other manifold learning algorithms that are available in the scikit-learn toolkit.

Summary
We learned that sometimes we can get rid of entire features using feature selection methods. We also saw that in some cases this is not enough, and we have to employ feature extraction methods that reveal the real, lower-dimensional structure in our data, hoping that the model has an easier game with it.

We have only scratched the surface of the huge body of available dimensionality reduction methods. Still, we hope that we have got you interested in this whole field, as there are lots of other methods waiting for you to pick up. In the end, feature selection and extraction is an art, just like choosing the right learning method or training model.

The next chapter covers the use of jug, a little Python framework to manage computations in a way that takes advantage of multiple cores or multiple machines. We will also learn about AWS, the Amazon cloud.
While computers keep getting faster and have more memory, the size of the data has grown as well. In fact, data has grown faster than computational speed, and this means that it has grown faster than our ability to process it.

It is not easy to say what is big data and what is not, so we will adopt an operational definition: when data is so large that it becomes too cumbersome to work with, we refer to it as big data. In some areas, this might mean petabytes of data or trillions of transactions: data that will not fit into a single hard drive. In other cases, it may be one hundred times smaller, but still just difficult to work with.

We will first build upon some of the experience of the previous chapters and work with what we can call the medium data setting (not quite big data, but not small either). For this, we will use a package called jug, which allows us to do the following:

- Break up your pipeline into tasks
- Cache (memoize) intermediate results
- Make use of multiple cores, including multiple computers on a grid

The next step is to move on to true "big data", and we will see how to use the cloud (in particular, the Amazon Web Services infrastructure). We will now use another Python package, starcluster, to manage clusters.

Learning about big data
The expression "big data" does not mean a specific amount of data, neither in the number of examples nor in the number of gigabytes, terabytes, or petabytes taken up by the data. It means the following:

- We have had data growing faster than the processing power
- Some of the methods and techniques that worked well in the past now need to be redone, as they do not scale well