The last few lines select the best model. First, we compare the predictions, pred, with the actual labels, virginica. The little trick of computing the mean of the comparisons gives us the fraction of correct results: the accuracy. At the end of the for loop, all possible thresholds for all possible features have been tested, and the best_fi and best_t variables hold our model. To apply it to a new example, we perform the following:

    if example[best_fi] > best_t:
        print 'virginica'
    else:
        print 'versicolor'

What does this model look like? If we run it on the whole data, the best model that we get is a split on the petal length. We can visualize the decision boundary in the following screenshot: we see two regions, one white and the other shaded in grey. Anything that falls in the white region will be called Iris Virginica, and anything that falls on the shaded side will be classified as Iris Versicolor.

In a threshold model, the decision boundary will always be a line that is parallel to one of the axes. The plot in the preceding screenshot shows the decision boundary and the two regions where points are classified as either white or grey. It also shows (as a dashed line) an alternative threshold that achieves exactly the same accuracy. Our method chose the first threshold, but that was an arbitrary choice.
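For reference, here is a minimal sketch of the exhaustive search described above. It assumes that features is a NumPy array of shape (n_examples, n_features) and virginica is a boolean array of labels; the variable names follow the text, but the book's own listing may differ in its details.

    import numpy as np

    def learn_threshold_model(features, virginica):
        # Try every feature and, for each feature, every value occurring in the
        # data as a candidate threshold; keep the most accurate combination.
        best_acc = -1.0
        best_fi, best_t = None, None
        for fi in range(features.shape[1]):
            for t in features[:, fi]:
                pred = features[:, fi] > t
                acc = (pred == virginica).mean()
                if acc > best_acc:
                    best_acc, best_fi, best_t = acc, fi, t
        return best_fi, best_t

Applying the model is then just the two-line comparison shown above.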
Evaluation - holding out data and cross-validation

The model discussed in the preceding section is a simple model; it achieves high accuracy on its training data. However, this evaluation may be overly optimistic. We used the data to define what the threshold would be, and then we used the same data to evaluate the model. Of course, the model will perform better than anything else we have tried on this dataset. The logic is circular.

What we really want to do is estimate the ability of the model to generalize to new instances. We should measure its performance on instances that the algorithm has not seen at training time. Therefore, we are going to do a more rigorous evaluation and use held-out data. For this, we are going to break up the data into two blocks: on one block, we'll train the model, and on the other (the one we held out of training) we'll test it. The program then prints the training and the testing error, and the accuracy on the testing data turns out to be lower than the accuracy on the training data. This may surprise an inexperienced machine learner, but it is expected and typical.

To see why, look back at the plot that showed the decision boundary. Consider what would have happened if some of the examples close to the boundary were not there, or if one of the examples in between the two lines was missing. It is easy to imagine that the boundary would then move a little bit to the right or to the left so as to put them on the "wrong" side of the border.

The error on the training data is called the training error and is always an overly optimistic estimate of how well your algorithm is doing. We should always measure and report the testing error: the error on a collection of examples that were not used for training. These concepts will become more and more important as the models become more complex. In this example, the difference between the two errors is not very large. When using a complex model, it is possible to get 100 percent accuracy in training and do no better than random guessing on testing!

One possible problem with what we did previously, which was to hold out data from training, is that we only used part of the data (in this case, half of it) for training. On the other hand, if we use too little data for testing, the error estimation is performed on a very small number of examples. Ideally, we would like to use all of the data for training and all of the data for testing as well. We can achieve something quite similar with cross-validation. One extreme (but sometimes useful) form of cross-validation is leave-one-out. We take an example out of the training data, learn a model without this example, and then see if the model classifies this example correctly:
    error = 0.0
    for ei in range(len(features)):
        # select all but the one at position 'ei':
        training = np.ones(len(features), bool)
        training[ei] = False
        testing = ~training
        model = learn_model(features[training], virginica[training])
        predictions = apply_model(features[testing], virginica[testing], model)
        error += np.sum(predictions != virginica[testing])

At the end of this loop, we will have tested a series of models on all the examples. However, there is no circularity problem, because each example was tested on a model that was built without taking that example into account. Therefore, the overall estimate is a reliable estimate of how well the models would generalize.

The major problem with leave-one-out cross-validation is that we are now forced to perform many times more work. In fact, we must learn a whole new model for each and every example, and this cost will grow as our dataset grows.

We can get most of the benefits of leave-one-out at a fraction of the cost by using x-fold cross-validation; here, "x" stands for a small number, say, five. In order to perform five-fold cross-validation, we break up the data into five groups, that is, five folds. Then we learn five models, leaving one fold out of each. The resulting code will be similar to the code given earlier in this section, but here we leave 20 percent of the data out instead of just one element. We test each of these models on the left-out fold and average the results.
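A minimal sketch of this x-fold loop, reusing the learn_model and apply_model helpers from the listing above (their exact signatures are an assumption here), could look like the following; a real implementation would also shuffle the data before assigning the folds.

    import numpy as np

    def xfold_error(features, labels, k=5):
        n = len(features)
        errors = 0
        for fold in range(k):
            # Every k-th example, starting at offset 'fold', goes into the testing fold.
            testing = np.zeros(n, bool)
            testing[fold::k] = True
            training = ~testing
            model = learn_model(features[training], labels[training])
            predictions = apply_model(features[testing], labels[testing], model)
            errors += np.sum(predictions != labels[testing])
        return errors / float(n)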
The preceding figure illustrates this process for five blocks: the dataset is split into five pieces. Then, for each fold, you hold out one of the blocks for testing and train on the other four.

You can use any number of folds you wish. Five or ten folds are typical; that corresponds to training with 80 or 90 percent of your data and should already be close to what you would get from using all the data. In an extreme case, if you have as many folds as datapoints, you simply perform leave-one-out cross-validation.

When generating the folds, you need to be careful to keep them balanced. For example, if all of the examples in one fold come from the same class, the results will not be representative. We will not go into the details of how to do this, because the machine learning packages will handle it for you.

We have now generated several models instead of just one. So, what final model do we return and use for new data? The simplest solution is to train a single overall model on all your training data. The cross-validation loop gives you an estimate of how well this model should generalize.

A cross-validation schedule allows you to use all your data to estimate whether your methods are doing well. At the end of the cross-validation loop, you can then use all your data to train a final model.

Although it was not properly recognized when machine learning was starting out, nowadays it is seen as a very bad sign to even discuss the training error of a classification system. This is because the results can be very misleading. We always want to measure and compare either the error on a held-out dataset or the error estimated using a cross-validation schedule.

Building more complex classifiers

In the previous section, we used a very simple model: a threshold on a single feature. Throughout this book, you will see many other types of models, and we are not even going to cover everything that is out there.

What makes up a classification model? We can break it up into three parts:

- The structure of the model: Here, we use a threshold on a single feature.
- The search procedure: Here, we try every possible combination of feature and threshold.
- The loss function: Using the loss function, we decide which of the possibilities is less bad (because we can rarely talk about the perfect solution). We can use the training error, or just define this the other way around and say that we want the best accuracy. Traditionally, people want the loss function to be at a minimum (see the short sketch after this list).
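As a concrete illustration (a sketch, not code from the book), here are a zero-one loss, which simply counts mistakes, and a cost-weighted variant of the kind discussed in the next section; both assume boolean NumPy arrays, and the cost values are made up for illustration.

    import numpy as np

    def zero_one_loss(predictions, labels):
        # Every mistake costs 1, so minimizing this loss maximizes accuracy.
        return np.sum(predictions != labels)

    def weighted_loss(predictions, labels, fn_cost=10.0, fp_cost=1.0):
        # False negatives (predicting False when the truth is True) are
        # penalized ten times more heavily than false positives here.
        fn = np.sum(~predictions & labels)
        fp = np.sum(predictions & ~labels)
        return fn_cost * fn + fp_cost * fp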
We can play around with these parts to get different results. For example, we can attempt to build a threshold that achieves minimal training error, but only test three values for each feature: the mean value of the feature, the mean plus one standard deviation, and the mean minus one standard deviation. This could make sense if testing each value was very costly in terms of computer time (or if we had millions and millions of datapoints). Then the exhaustive search we used would be infeasible, and we would have to perform an approximation like this.

Alternatively, we might have different loss functions. It might be that one type of error is much more costly than another. In a medical setting, false negatives and false positives are not equivalent. A false negative (when the result of a test comes back negative, but that is false) might lead to the patient not receiving treatment for a serious disease. A false positive (when the test comes back positive even though the patient does not actually have that disease) might lead to additional tests for confirmation purposes or unnecessary treatment (which can still have costs, including side effects from the treatment). Therefore, depending on the exact setting, different trade-offs can make sense. At one extreme, if the disease is fatal and treatment is cheap with very few negative side effects, you want to minimize the false negatives as much as you can.

With spam filtering, we may face the same problem: incorrectly deleting a non-spam e-mail can be very dangerous for the user, while letting a spam e-mail through is just a minor annoyance. What the cost function should be is always dependent on the exact problem you are working on. When we present a general-purpose algorithm, we often focus on minimizing the number of mistakes (achieving the highest accuracy). However, if some mistakes are more costly than others, it might be better to accept a lower overall accuracy to minimize overall costs.

Finally, we can also have other classification structures. A simple threshold rule is very limiting and will only work in the very simplest cases, such as with the Iris dataset.

A more complex dataset and a more complex classifier

We will now look at a slightly more complex dataset. This will motivate the introduction of a new classification algorithm and a few other ideas.
Learning about the Seeds dataset

We will now look at another agricultural dataset; it is still small, but now too big to comfortably plot exhaustively as we did with Iris. This is a dataset of measurements of wheat seeds. Seven features are present, as follows:

- Area (A)
- Perimeter (P)
- Compactness
- Length of kernel
- Width of kernel
- Asymmetry coefficient
- Length of kernel groove

There are three classes that correspond to three wheat varieties: Canadian, Kama, and Rosa. As before, the goal is to be able to classify the species based on these morphological measurements.

Unlike the Iris dataset, which was collected in the 1930s, this is a very recent dataset, and its features were automatically computed from digital images. This is how image pattern recognition can be implemented: you can take images in digital form, compute a few relevant features from them, and use a generic classification system. In a later chapter, we will work through the computer vision side of this problem and compute features from images ourselves. For the moment, we will work with the features that are given to us.

UCI machine learning dataset repository

The University of California at Irvine (UCI) maintains an online repository of machine learning datasets (at the time of writing, it lists hundreds of datasets). Both the Iris and the Seeds datasets used in this chapter were taken from there. The repository is available online.
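The Seeds data is distributed by the UCI repository as a small whitespace-separated text file. A minimal loading sketch could look like the following; the file name seeds_dataset.txt and the column order (the seven features above followed by a class label from 1 to 3) are assumptions based on the repository's description.

    import numpy as np

    data = np.loadtxt('seeds_dataset.txt')
    features = data[:, :7]            # area, perimeter, compactness, ...
    labels = data[:, 7].astype(int)   # 1, 2, or 3 for the three varieties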
Features and feature engineering

One interesting aspect of these features is that the compactness feature is not actually a new measurement, but a function of the previous two features, area and perimeter. It is often very useful to derive new combined features. This is a general area normally termed feature engineering; it is sometimes seen as less glamorous than algorithms, but it may matter more for performance (a simple algorithm on well-chosen features will perform better than a fancy algorithm on not-so-good features).

In this case, the original researchers computed the "compactness", which is a typical feature for shapes (also called "roundness"). This feature will have the same value for two kernels, one of which is twice as big as the other, but with the same shape. However, it will have different values for kernels that are very round (when the feature is close to one) as compared to kernels that are elongated (when the feature is close to zero).

The goals of a good feature are to simultaneously vary with what matters and be invariant with what does not. For example, compactness does not vary with size, but varies with the shape. In practice, it might be hard to achieve both objectives perfectly, but we want to approximate this ideal.

You will need to use background knowledge to intuit which will be good features. Fortunately, for many problem domains, there is already a vast literature of possible features and feature types that you can build upon. For images, all of the previously mentioned features are typical, and computer vision libraries will compute them for you. In text-based problems too, there are standard solutions that you can mix and match (we will see this in a later chapter). Often, though, you can use your knowledge of the specific problem to design a specific feature.

Even before you have data, you must decide which data is worthwhile to collect. Then, you hand all your features to the machine to evaluate and compute the best classifier.

A natural question is whether or not we can select good features automatically. This problem is known as feature selection. There are many methods that have been proposed for this problem, but in practice, very simple ideas work best. It does not make sense to use feature selection in these small problems, but if you had thousands of features, throwing out most of them might make the rest of the process much faster.
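Coming back to the compactness feature mentioned at the start of this section: the usual definition of this kind of roundness measure is C = 4*pi*A / P**2 (an assumption here, but it is the standard formula), which equals one for a perfect circle and shrinks toward zero for elongated shapes. Computing such a derived feature is a one-liner:

    import numpy as np

    # area and perimeter are NumPy arrays with one entry per kernel
    compactness = 4 * np.pi * area / perimeter ** 2   # 1.0 for a circle, smaller when elongated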
Nearest neighbor classification

With this dataset, even if we just try to separate two classes using the previous method, we do not get very good results. Let me introduce, therefore, a new classifier: the nearest neighbor classifier.

If we consider that each example is represented by its features (in mathematical terms, as a point in N-dimensional space), we can compute the distance between examples. We can choose different ways of computing the distance, for example:

    def distance(p0, p1):
        'Computes squared euclidean distance'
        return np.sum((p0 - p1)**2)

Now, when classifying, we adopt a simple rule: given a new example, we look in the dataset for the point that is closest to it (its nearest neighbor) and look at its label:

    def nn_classify(training_set, training_labels, new_example):
        dists = np.array([distance(t, new_example) for t in training_set])
        nearest = dists.argmin()
        return training_labels[nearest]

In this case, our model involves saving all of the training data and labels and computing everything at classification time. A better implementation would be to actually index these at learning time to speed up classification, but that implementation is a more complex algorithm.

Now, note that this model performs perfectly on its training data! For each point, its closest neighbor is itself, and so its label matches perfectly (unless two examples have exactly the same features but different labels, which can happen). Therefore, it is essential to test using a cross-validation protocol. Using ten folds of cross-validation for this dataset with this algorithm, we obtain a good, but no longer perfect, accuracy. As we discussed in the earlier section, the cross-validation accuracy is lower than the training accuracy, but it is a more credible estimate of the performance of the model.

We will now examine the decision boundary. For this, we will be forced to simplify and look at only two dimensions (just so that we can plot it on paper).
In the preceding screenshot, the Canadian examples are shown as diamonds, Kama seeds as circles, and Rosa seeds as triangles. Their respective areas are shown as white, black, and grey. You might be wondering why the regions are so horizontal, almost weirdly so. The problem is that the x axis (area) ranges over values in the tens, while the y axis (compactness) ranges over values below one. This means that a small change in x is actually much larger than a small change in y. So, when we compute the distance according to the preceding function, we are, for the most part, only taking the x axis into account.

If you have a physics background, you might have already noticed that we had been summing up lengths, areas, and dimensionless quantities, mixing up our units (which is something you never want to do in a physical system). We need to normalize all of the features to a common scale. There are many solutions to this problem; a simple one is to normalize to z-scores. The z-score of a value is how far away from the mean it is, in terms of units of standard deviation. It comes down to this simple pair of operations:

    # subtract the mean for each feature:
    features -= features.mean(axis=0)
    # divide each feature by its standard deviation:
    features /= features.std(axis=0)
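As a quick illustration (toy numbers, not the actual Seeds data), z-scoring puts features with very different ranges onto a comparable scale:

    import numpy as np

    features = np.array([[12.0, 0.85],
                         [18.0, 0.90],
                         [21.0, 0.80]])   # column 0 is area-like, column 1 is compactness-like
    features -= features.mean(axis=0)
    features /= features.std(axis=0)
    # Both columns now have mean 0 and standard deviation 1, so neither one
    # dominates the Euclidean distance on its own.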
Independent of what the original values were, after z-scoring, a value of zero is the mean, positive values are above the mean, and negative values are below it. Now every feature is in the same units (technically, every feature is now dimensionless; it has no units), and we can mix dimensions more confidently. In fact, if we now run our nearest neighbor classifier, we obtain a clearly better cross-validated accuracy.

Look at the decision space again in two dimensions; it looks as shown in the following screenshot. The boundaries are now much more complex, and there is interaction between the two dimensions. In the full dataset, everything is happening in a seven-dimensional space that is very hard to visualize, but the same principle applies: where before a few dimensions were dominant, now they are all given the same importance.

The nearest neighbor classifier is simple, but sometimes good enough. We can generalize it to a k-nearest neighbor classifier by considering not just the closest point, but the k closest points. All k neighbors vote to select the label. k is typically a small number, such as 5, but it can be larger, particularly if the dataset is very large.
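A minimal sketch of this generalization, reusing the distance function defined earlier (and assuming training_labels supports integer indexing), could look like the following; ties between classes are broken arbitrarily here.

    import numpy as np
    from collections import Counter

    def knn_classify(k, training_set, training_labels, new_example):
        dists = np.array([distance(t, new_example) for t in training_set])
        nearest = dists.argsort()[:k]                       # indices of the k closest examples
        votes = Counter(training_labels[i] for i in nearest)
        return votes.most_common(1)[0][0]                   # the majority label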
Binary and multiclass classification

The first classifier we saw, the threshold classifier, was a simple binary classifier (the result is either one class or the other, as a point is either above the threshold or it is not). The second classifier we used, the nearest neighbor classifier, was naturally a multiclass classifier (the output can be one of several classes).

It is often simpler to define a simple binary method than one that works on multiclass problems. However, we can reduce a multiclass problem to a series of binary decisions. This is what we did earlier with the Iris dataset, in a haphazard way: we observed that it was easy to separate one of the initial classes and focused on the other two, reducing the problem to two binary decisions:

1. Is it an Iris Setosa (yes or no)?
2. If not, check whether it is an Iris Virginica (yes or no).

Of course, we want to leave this sort of reasoning to the computer. As usual, there are several solutions to this multiclass reduction. The simplest is to use a series of "one classifier versus the rest of the classifiers". For each possible label L, we build a classifier of the type "is this L or something else?" When applying the rule, exactly one of the classifiers should say "yes" and we would have our solution. Unfortunately, this does not always happen, so we have to decide how to deal with either multiple positive answers or no positive answers.

(Figure: a small decision tree for the Iris dataset - first checking "Is it Iris Setosa?", then "Is it Iris Virginica?", otherwise Iris Versicolour.)

Alternatively, we can build a classification tree: split the possible labels into two and build a classifier that asks, "should this example go to the left or the right bin?" We can perform this splitting recursively until we obtain a single label. The preceding diagram depicts the tree of reasoning for the Iris dataset. Each diamond is a single binary classifier. It is easy to imagine that we could make this tree larger and encompass more decisions. This means that any classifier that can be used for binary classification can also be adapted to handle any number of classes in a simple way.
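A minimal sketch of the "one versus the rest" reduction just described might look like this; learn_binary and apply_binary stand for any binary learner and are placeholders, labels is assumed to be a NumPy array of class names, and the awkward cases with zero or several "yes" answers are handled by an arbitrary fallback rule.

    def one_vs_rest_train(features, labels, learn_binary):
        # One binary model per class: "is it class c, or something else?"
        return {c: learn_binary(features, labels == c) for c in set(labels)}

    def one_vs_rest_predict(models, apply_binary, example):
        positives = [c for c, m in models.items() if apply_binary(example, m)]
        if len(positives) == 1:
            return positives[0]
        # Zero or multiple "yes" answers: fall back to an arbitrary choice.
        return positives[0] if positives else next(iter(models))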
There are many other possible ways of turning a binary method into a multiclass one. There is no single method that is clearly better in all cases. However, which one you use normally does not make much of a difference to the final result.

Most classifiers are binary systems, while many real-life problems are naturally multiclass. Several simple protocols reduce a multiclass problem to a series of binary decisions and allow us to apply the binary models to our multiclass problem.

Summary

In a sense, this was a very theoretical chapter, as we introduced generic concepts with simple examples. We went over a few operations with a classic dataset. This, by now, is considered a very small problem. However, it has the advantage that we were able to plot it out and see what we were doing in detail. This is something that will be lost when we move on to problems with many dimensions and many thousands of examples. The intuitions we gained here will all still be valid.

Classification means generalizing from examples to build a model (that is, a rule that can automatically be applied to new, unclassified objects). It is one of the fundamental tools in machine learning, and we will see many more examples of this in forthcoming chapters.

We also learned that the training error is a misleading, over-optimistic estimate of how well the model does. We must, instead, evaluate it on testing data that was not used for training. In order not to waste too many examples in testing, a cross-validation schedule can get us the best of both worlds (at the cost of more computation).

We also had a look at the problem of feature engineering. Features are not something that is predefined for you; choosing and designing features is an integral part of designing a machine learning pipeline. In fact, it is often the area where you can get the most improvements in accuracy, as better data beats fancier methods. The chapters on computer vision and text-based classification will provide examples for these specific settings.

In this chapter, we wrote all of our own code (except when we used NumPy, of course). This will not be the case for the next few chapters, but we needed to build up intuitions on simple cases to illustrate the basic concepts.

The next chapter looks at how to proceed when your data does not have predefined classes for classification.
Related posts

In the previous chapter, we learned how to find classes or categories of individual data points. With a handful of training data items that were paired with their respective classes, we learned a model that we can now use to classify future data items. We called this supervised learning, as the learning was guided by a teacher; in our case, the teacher had the form of correct classifications.

Let us now imagine that we do not possess those labels by which we could learn the classification model. This could be, for example, because they were too expensive to collect. What could we have done in that case? Well, of course, we would not be able to learn a classification model. Still, we could find some pattern within the data itself. This is what we will do in this chapter, where we consider the challenge of a "question and answer" website.

When a user browses our site looking for some particular information, the search engine will most likely point him/her to a specific answer. To improve the user experience, we now want to show all related questions with their answers. If the presented answer is not what he/she was looking for, he/she can easily see the other available answers and hopefully stay on our site.

The naive approach would be to take the post, calculate its similarity to all other posts, and display the top most similar posts as links on the page. This will quickly become very costly. Instead, we need a method that quickly finds all related posts.
We will achieve this goal in this chapter using clustering. This is a method of arranging items so that similar items are in one cluster and dissimilar items are in distinct ones. The tricky thing that we have to tackle first is how to turn text into something on which we can calculate similarity. With such a measurement for similarity, we will then proceed to investigate how we can leverage it to quickly arrive at a cluster that contains similar posts. Once there, we will only have to check out those documents that also belong to that cluster. To achieve this, we will introduce the marvelous SciKit library, which comes with diverse machine learning methods that we will also use in the following chapters.

Measuring the relatedness of posts

From the machine learning point of view, raw text is useless. Only if we manage to transform it into meaningful numbers can we feed it into our machine learning algorithms, such as clustering. The same is true for more mundane operations on text, such as similarity measurement.

How not to do it

One text similarity measure is the Levenshtein distance, which also goes by the name edit distance. Let's say we have two words, "machine" and "mchiene". The similarity between them can be expressed as the minimum set of edits that are necessary to turn one word into the other. In this case, the edit distance would be two, as we have to add an "a" after the "m" and delete the first "e". This algorithm is, however, quite costly, as it is bound by the product of the lengths of the first and second words.

Looking at our posts, we could cheat by treating whole words as characters and performing the edit distance calculation on the word level. Let's say we have two posts (and let's concentrate on the titles for the sake of simplicity) called "How to format my hard disk" and "Hard disk format problems"; we would have an edit distance of five (removing "how", "to", "format", "my", and then adding "format" and "problems" at the end). Therefore, one could express the difference between two posts as the number of words that have to be added or deleted so that one text morphs into the other. Although we could speed up the overall approach quite a bit, the time complexity stays the same.

Even if it would have been fast enough, there is another problem. In the post above, the word "format" accounts for an edit distance of two (deleting it first, then adding it). So our distance doesn't seem to be robust enough to take word reordering into account.
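For reference, here is a minimal dynamic-programming sketch of the edit distance just described; it works on strings as well as on lists of words, and the two commented checks correspond to the examples in the text.

    def edit_distance(a, b):
        # Levenshtein distance: insertions, deletions, and substitutions each cost 1.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # delete ca
                               cur[j - 1] + 1,              # insert cb
                               prev[j - 1] + (ca != cb)))   # substitute (free if equal)
            prev = cur
        return prev[-1]

    # edit_distance("machine", "mchiene") == 2
    # edit_distance("how to format my hard disk".split(),
    #               "hard disk format problems".split()) == 5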
How to do it

More robust than edit distance is the so-called bag-of-words approach. It uses simple word counts as its basis. For each word in the post, its occurrence is counted and noted in a vector. Not surprisingly, this step is also called vectorization. The vector is typically huge, as it contains as many elements as there are words occurring in the whole dataset. Take, for instance, two example posts with the following word counts:

    Word        Occurrences in post 1    Occurrences in post 2
    disk        1                        1
    format      1                        1
    how         1                        0
    hard        1                        1
    my          1                        0
    problems    0                        1
    to          1                        0

The columns "post 1" and "post 2" can now be treated as simple vectors. We could simply calculate the Euclidean distance between the vectors of all posts and take the nearest one (too slow, as we have just found out). As such, we can use them later in the form of feature vectors in the following clustering steps:

1. Extract the salient features from each post and store them as a vector per post.
2. Compute clustering on the vectors.
3. Determine the cluster for the post in question.
4. From this cluster, fetch a handful of posts that are different from the post in question. This will increase diversity.

However, there is some more work to be done before we get there, and before we can do that work, we need some data to work on.

Preprocessing - similarity measured as a similar number of common words

As we have seen previously, the bag-of-words approach is both fast and robust. However, it is not without challenges. Let's dive directly into them.
Converting raw text into a bag-of-words

We do not have to write custom code for counting words and representing those counts as a vector. SciKit's CountVectorizer does the job very efficiently. It also has a very convenient interface. SciKit's functions and classes are imported via the sklearn package as follows:

    from sklearn.feature_extraction.text import CountVectorizer
    vectorizer = CountVectorizer(min_df=1)

The parameter min_df determines how CountVectorizer treats words that are not used frequently (minimum document frequency). If it is set to an integer, all words occurring less often than that value will be dropped. If it is a fraction, all words that occur in less than that fraction of the overall dataset will be dropped. The parameter max_df works in a similar manner. If we print the instance, we see what other parameters SciKit provides together with their default values:

    print(vectorizer)
    CountVectorizer(analyzer=word, binary=False, charset=utf-8,
        charset_error=strict, dtype=<type 'long'>, input=content,
        lowercase=True, max_df=1.0, max_features=None, max_n=None,
        min_df=1, min_n=None, ngram_range=(1, 1), preprocessor=None,
        stop_words=None, strip_accents=None,
        token_pattern=(?u)\b\w\w+\b, tokenizer=None, vocabulary=None)

We see that, as expected, the counting is done at the word level (analyzer=word) and the words are determined by the regular expression pattern token_pattern. It would, for example, tokenize "cross-validated" into "cross" and "validated". Let us ignore the other parameters for now:

    content = ["How to format my hard disk", "Hard disk format problems "]
    X = vectorizer.fit_transform(content)
    vectorizer.get_feature_names()
    ['disk', 'format', 'hard', 'how', 'my', 'problems', 'to']

The vectorizer has detected seven words, for which we can fetch the counts individually:

    print(X.toarray().transpose())
    array([[1, 1],
           [1, 1],
           [1, 1],
           [1, 0],
           [1, 0],
           [0, 1],
           [1, 0]], dtype=int64)
This means that the first sentence contains all the words except for "problems", while the second contains all except "how", "my", and "to". In fact, these are exactly the same columns as seen in the previous table. From X, we can extract a feature vector that we can use to compare the two documents with each other.

First, we will start with a naive approach to point out some preprocessing peculiarities we have to account for. So let us pick a random post, for which we will then create the count vector. We will then compare its distance to all the count vectors and fetch the post with the smallest one.

Counting words

Let us play with the toy dataset consisting of the following posts:

    Post filename    Post content
    01.txt           This is a toy post about machine learning. Actually, it contains not much interesting stuff.
    02.txt           Imaging databases provide storage capabilities.
    03.txt           Most imaging databases safe images permanently.
    04.txt           Imaging databases store data.
    05.txt           Imaging databases store data. Imaging databases store data. Imaging databases store data.

In this post dataset, we want to find the most similar post for the short post "imaging databases".

Assuming that the posts are located in the directory DIR, we can feed CountVectorizer with them as follows:

    posts = [open(os.path.join(DIR, f)).read() for f in os.listdir(DIR)]
    from sklearn.feature_extraction.text import CountVectorizer
    vectorizer = CountVectorizer(min_df=1)

We have to notify the vectorizer about the full dataset so that it knows upfront what words are to be expected, as shown in the following code:

    X_train = vectorizer.fit_transform(posts)
    num_samples, num_features = X_train.shape
    print("#samples: %d, #features: %d" % (num_samples, num_features))
    #samples: 5, #features: 25
Unsurprisingly, we have five posts with a total of 25 different words. The following words, which have been tokenized, will be counted:

    print(vectorizer.get_feature_names())
    ['about', 'actually', 'capabilities', 'contains', 'data', 'databases',
     'images', 'imaging', 'interesting', 'is', 'it', 'learning', 'machine',
     'most', 'much', 'not', 'permanently', 'post', 'provide', 'safe',
     'storage', 'store', 'stuff', 'this', 'toy']

Now we can vectorize our new post as follows:

    new_post = "imaging databases"
    new_post_vec = vectorizer.transform([new_post])

Note that the count vectors returned by the transform method are sparse. That is, each vector does not store one count value for each word, as most of those counts would be zero (the post does not contain the word). Instead, it uses the more memory-efficient implementation coo_matrix (for "COOrdinate"). Our new post, for instance, actually contains only two elements:

    print(new_post_vec)
    (0, 5)    1
    (0, 7)    1

Via its member toarray(), we can again access the full ndarray as follows:

    print(new_post_vec.toarray())
    [[0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]

We need to use the full array if we want to use it as a vector for similarity calculations. For the similarity measurement (the naive one), we calculate the Euclidean distance between the count vectors of the new post and all the old posts as follows:

    import scipy as sp
    def dist_raw(v1, v2):
        delta = v1 - v2
        return sp.linalg.norm(delta.toarray())

The norm() function calculates the Euclidean norm (shortest distance). With dist_raw, we just need to iterate over all the posts and remember the nearest one (we keep the distance function in the variable dist so that we can swap it out later):

    dist = dist_raw

    import sys
    best_doc = None
    best_dist = sys.maxint
    best_i = None
    for i in range(0, num_samples):
        post = posts[i]
        if post == new_post:
            continue
        post_vec = X_train.getrow(i)
        d = dist(post_vec, new_post_vec)
        print "== Post %i with dist=%.2f: %s" % (i, d, post)
        if d < best_dist:
            best_dist = d
            best_i = i
    print("Best post is %i with dist=%.2f" % (best_i, best_dist))

    == Post 0 with dist=4.00: This is a toy post about machine learning. Actually, it contains not much interesting stuff.
    == Post 1 with dist=1.73: Imaging databases provide storage capabilities.
    == Post 2 with dist=2.00: Most imaging databases safe images permanently.
    == Post 3 with dist=1.41: Imaging databases store data.
    == Post 4 with dist=5.10: Imaging databases store data. Imaging databases store data. Imaging databases store data.
    Best post is 3 with dist=1.41

Congratulations, we have our first similarity measurement! Post 0 is most dissimilar from our new post. Quite understandably, it does not have a single word in common with the new post. We can also understand that Post 1 is very similar to the new post, but not the winner, as it contains one word more than Post 3 that is not contained in the new post.

Looking at Posts 3 and 4, however, the picture is not so clear any more. Post 4 is the same as Post 3, duplicated three times. So, it should be of the same similarity to the new post as Post 3. Printing the corresponding feature vectors explains why:

    print(X_train.getrow(3).toarray())
    [[0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]]
    print(X_train.getrow(4).toarray())
    [[0 0 0 0 3 3 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0]]

Obviously, using only the counts of the raw words is too simple. We will have to normalize them to get vectors of unit length.
Normalizing the word count vectors

We will have to extend dist_raw to calculate the vector distance not on the raw vectors, but on the normalized ones instead:

    def dist_norm(v1, v2):
        v1_normalized = v1 / sp.linalg.norm(v1.toarray())
        v2_normalized = v2 / sp.linalg.norm(v2.toarray())
        delta = v1_normalized - v2_normalized
        return sp.linalg.norm(delta.toarray())

This leads to the following similarity measurement:

    == Post 0 with dist=1.41: This is a toy post about machine learning. Actually, it contains not much interesting stuff.
    == Post 1 with dist=0.86: Imaging databases provide storage capabilities.
    == Post 2 with dist=0.92: Most imaging databases safe images permanently.
    == Post 3 with dist=0.77: Imaging databases store data.
    == Post 4 with dist=0.77: Imaging databases store data. Imaging databases store data. Imaging databases store data.
    Best post is 3 with dist=0.77

This looks a bit better now. Posts 3 and 4 are calculated as being equally similar. One could argue whether that much repetition would be a delight to the reader, but from the point of view of counting the words in the posts, this seems to be right.

Removing less important words

Let us have another look at Post 2. Of its words that are not in the new post, we have "most", "safe", "images", and "permanently". They are actually quite different in their overall importance to the post. Words such as "most" appear very often in all sorts of different contexts; words such as this are called stop words. They do not carry as much information, and thus should not be weighed as much as words such as "images", which don't occur often in different contexts. The best option would be to remove all words that are so frequent that they do not help to distinguish between different texts. These words are called stop words.

As this is such a common step in text processing, there is a simple parameter in CountVectorizer to achieve this, as follows:

    vectorizer = CountVectorizer(min_df=1, stop_words='english')
If you have a clear picture of what kind of stop words you would want to remove, you can also pass a list of them. Setting stop_words to "english" will use a set of English stop words. To find out which ones they are, you can use get_stop_words():

    sorted(vectorizer.get_stop_words())[0:20]
    ['a', 'about', 'above', 'across', 'after', 'afterwards', 'again',
     'against', 'all', 'almost', 'alone', 'along', 'already', 'also',
     'although', 'always', 'am', 'among', 'amongst', 'amoungst']

The new word list is seven words lighter:

    ['actually', 'capabilities', 'contains', 'data', 'databases', 'images',
     'imaging', 'interesting', 'learning', 'machine', 'permanently', 'post',
     'provide', 'safe', 'storage', 'store', 'stuff', 'toy']

Without stop words, we arrive at the following similarity measurement:

    == Post 0 with dist=1.41: This is a toy post about machine learning. Actually, it contains not much interesting stuff.
    == Post 1 with dist=0.86: Imaging databases provide storage capabilities.
    == Post 2 with dist=0.86: Most imaging databases safe images permanently.
    == Post 3 with dist=0.77: Imaging databases store data.
    == Post 4 with dist=0.77: Imaging databases store data. Imaging databases store data. Imaging databases store data.
    Best post is 3 with dist=0.77

Post 2 is now on par with Post 1. Overall, however, it has not changed much, as our posts are kept short for demonstration purposes. It will become vital when we look at real-world data.

Stemming

One thing is still missing. We count similar words in different variants as different words. Post 2, for instance, contains "imaging" and "images". It would make sense to count them together. After all, it is the same concept they are referring to.

We need a function that reduces words to their specific word stem. SciKit does not contain a stemmer by default. With the Natural Language Toolkit (NLTK), we can download a free software toolkit that provides a stemmer we can easily plug into CountVectorizer.
Installing and using NLTK

How to install NLTK on your operating system is described in detail on the NLTK website. Basically, you will need to install the two packages nltk and pyyaml. To check whether your installation was successful, open a Python interpreter and type the following:

    import nltk

You will find a very nice tutorial for NLTK in the book Python Text Processing with NLTK Cookbook. To play a little bit with a stemmer, you can also visit the accompanying web page.

NLTK comes with different stemmers. This is necessary, because every language has a different set of rules for stemming. For English, we can take SnowballStemmer:

    import nltk.stem
    s = nltk.stem.SnowballStemmer('english')
    s.stem("graphics")
    'graphic'
    s.stem("imaging")
    'imag'
    s.stem("image")
    'imag'
    s.stem("imagination")
    'imagin'
    s.stem("imagine")
    'imagin'

Note that stemming does not necessarily have to result in valid English words.

It also works with verbs, as follows:

    s.stem("buys")
    'buy'
    s.stem("buying")
    'buy'
    s.stem("bought")
    'bought'
Extending the vectorizer with NLTK's stemmer

We need to stem the posts before we feed them into CountVectorizer. The class provides several hooks with which we can customize the preprocessing and tokenization stages. The preprocessor and tokenizer can be set as parameters in the constructor. We do not want to place the stemmer into any of them, because we would then have to do the tokenization and normalization by ourselves. Instead, we overwrite the build_analyzer method as follows:

    import nltk.stem
    english_stemmer = nltk.stem.SnowballStemmer('english')

    class StemmedCountVectorizer(CountVectorizer):
        def build_analyzer(self):
            analyzer = super(StemmedCountVectorizer, self).build_analyzer()
            return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))

    vectorizer = StemmedCountVectorizer(min_df=1, stop_words='english')

This will perform the following steps for each post:

1. Lower-casing the raw post in the preprocessing step (done in the parent class).
2. Extracting all individual words in the tokenization step (done in the parent class).
3. Converting each word into its stemmed version.

As a result, we now have one feature less, because "images" and "imaging" collapsed to one. The set of feature names now looks like the following:

    ['actual', 'capabl', 'contain', 'data', 'databas', 'imag', 'interest',
     'learn', 'machin', 'perman', 'post', 'provid', 'safe', 'storag', 'store',
     'stuff', 'toy']

Running our new stemmed vectorizer over our posts, we see that collapsing "imaging" and "images" reveals that Post 2 is actually the most similar post to our new post, as it contains the concept "imag" twice:

    == Post 0 with dist=1.41: This is a toy post about machine learning. Actually, it contains not much interesting stuff.
    == Post 1 with dist=0.86: Imaging databases provide storage capabilities.
    == Post 2 with dist=0.63: Most imaging databases safe images permanently.
    == Post 3 with dist=0.77: Imaging databases store data.
    == Post 4 with dist=0.77: Imaging databases store data. Imaging databases store data. Imaging databases store data.
    Best post is 2 with dist=0.63

Stop words on steroids

Now that we have a reasonable way to extract a compact vector from a noisy textual post, let us step back for a while to think about what the feature values actually mean.

The feature values simply count occurrences of terms in a post. We silently assumed that higher values for a term also mean that the term is of greater importance to the given post. But what about, for instance, the word "subject", which naturally occurs in each and every single post? Alright, we could tell CountVectorizer to remove it as well by means of its max_df parameter. We could, for instance, set it to 0.9, so that all words occurring in more than 90 percent of all posts would always be ignored. But what about words that appear in 89 percent of all posts? How low would we be willing to set max_df? The problem is that however we set it, there will always be the problem that some terms are just more discriminative than others.

This can only be solved by counting term frequencies for every post and, in addition, discounting those terms that appear in many posts. In other words, we want a high value for a given term in a given post if that term occurs often in that particular post and very rarely anywhere else.

This is exactly what term frequency - inverse document frequency (TF-IDF) does; TF stands for the counting part, while IDF factors in the discounting. A naive implementation would look like the following:

    import math
    import scipy as sp

    def tfidf(term, doc, docset):
        tf = float(doc.count(term)) / sum(doc.count(w) for w in set(doc))
        idf = math.log(float(len(docset)) / (len([d for d in docset if term in d])))
        return tf * idf

For the following document set, docset, consisting of three documents that are already tokenized, we can see how the terms are treated differently, although all appear equally often per document:

    a, abb, abc = ["a"], ["a", "b", "b"], ["a", "b", "c"]
    D = [a, abb, abc]
    print(tfidf("a", a, D))
    0.0
    print(tfidf("b", abb, D))
    0.270310072072
    print(tfidf("a", abc, D))
    0.0
    print(tfidf("b", abc, D))
    0.135155036036
    print(tfidf("c", abc, D))
    0.366204096223

We see that "a" carries no meaning for any document, since it is contained everywhere. The term "b" is more important for the document abb than for abc, as it occurs there twice.

In reality, there are more corner cases to handle than the above example does. Thanks to SciKit, we don't have to think of them, as they are already nicely packaged in TfidfVectorizer, which is inherited from CountVectorizer. Sure enough, we don't want to miss our stemmer:

    from sklearn.feature_extraction.text import TfidfVectorizer

    class StemmedTfidfVectorizer(TfidfVectorizer):
        def build_analyzer(self):
            analyzer = super(StemmedTfidfVectorizer, self).build_analyzer()
            return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))

    vectorizer = StemmedTfidfVectorizer(min_df=1, stop_words='english',
                                        charset_error='ignore')

The resulting document vectors will not contain counts any more. Instead, they will contain the individual TF-IDF values per term.

Our achievements and goals

Our current text preprocessing phase includes the following steps:

1. Tokenizing the text.
2. Throwing away words that occur way too often to be of any help in detecting relevant posts.
3. Throwing away words that occur so seldom that there is only a small chance that they will occur in future posts.
4. Counting the remaining words.
5. Calculating TF-IDF values from the counts, considering the whole text corpus.

Again, we can congratulate ourselves. With this process, we are able to convert a bunch of noisy text into a concise representation of feature values.

But, as simple and as powerful as the bag-of-words approach with its extensions is, it has some drawbacks that we should be aware of. They are as follows:

- It does not cover word relations. With the previous vectorization approach, the texts "Car hits wall" and "Wall hits car" will both have the same feature vector.
- It does not capture negations correctly. For instance, the texts "I will eat ice cream" and "I will not eat ice cream" will look very similar by means of their feature vectors, although they convey quite the opposite meaning. This problem, however, can be easily remedied by not only counting individual words (also called unigrams), but by also considering bigrams (pairs of words) or trigrams (three words in a row).
- It totally fails with misspelled words. Although it is clear to the readers that "database" and "databas" convey the same meaning, our approach will treat them as totally different words.

For brevity's sake, let us nevertheless stick with the current approach, which we can now use to efficiently build clusters from.

Clustering

Finally, we have our vectors, which we believe capture the posts to a sufficient degree. Not surprisingly, there are many ways to group them together. Most clustering algorithms fall into one of two methods: flat and hierarchical clustering.
Flat clustering divides the posts into a set of clusters without relating the clusters to each other. The goal is simply to come up with a partitioning such that all posts in one cluster are most similar to each other, while being dissimilar from the posts in all other clusters. Many flat clustering algorithms require the number of clusters to be specified up front.

In hierarchical clustering, the number of clusters does not have to be specified. Instead, hierarchical clustering creates a hierarchy of clusters. While similar posts are grouped into one cluster, similar clusters are again grouped into one uber-cluster. This is done recursively, until only one cluster is left, which contains everything. In this hierarchy, one can then choose the desired number of clusters. However, this comes at the cost of lower efficiency.

SciKit provides a wide range of clustering approaches in the package sklearn.cluster. You can get a quick overview of the advantages and drawbacks of each of them in SciKit's online documentation. In the following sections, we will use the flat clustering method KMeans and play a bit with the desired number of clusters.

KMeans

KMeans is the most widely used flat clustering algorithm. After it is initialized with the desired number of clusters, num_clusters, it maintains that number of so-called cluster centroids. Initially, it picks any num_clusters posts and sets the centroids to their feature vectors. Then it goes through all other posts and assigns them the nearest centroid as their current cluster. Then it moves each centroid into the middle of all the vectors of that particular class. This changes, of course, the cluster assignment. Some posts are now nearer to another cluster, so it updates the assignments for those changed posts. This is done as long as the centroids move a considerable amount. After some iterations, the movements fall below a threshold and we consider the clustering to be converged.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at www.packtpub.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files e-mailed directly to you.
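Before walking through the toy example, here is a minimal NumPy sketch of the loop just described. It is only meant to make the assign/update steps explicit; it ignores details such as empty clusters and convergence checks, which SciKit's KMeans handles for us, and it assumes points is an array of shape (n_documents, n_features).

    import numpy as np

    def toy_kmeans(points, k, n_iter=10, seed=0):
        rng = np.random.RandomState(seed)
        # Start from k randomly chosen points as the initial centroids.
        centroids = points[rng.choice(len(points), k, replace=False)]
        for _ in range(n_iter):
            # Assignment step: each point gets the label of its nearest centroid.
            dists = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Update step: move every centroid to the mean of its assigned points.
            centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        return labels, centroids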
Let us play this through with a toy example of posts containing only two words. Each point in the following chart represents one document.

After running one iteration of KMeans, that is, taking any two vectors as starting points, assigning the labels to the rest, and updating the cluster centers to be the new center point of all points in that cluster, we get the following clustering.
Because the cluster centers have moved, we have to reassign the cluster labels and recalculate the cluster centers. After the next iteration, we get the following clustering; the arrows show the movements of the cluster centers.

After five iterations in this example, the cluster centers don't move noticeably any more (SciKit's tolerance threshold is 0.0001 by default).

After the clustering has settled, we just need to note down the cluster centers and their identity. When each new document comes in, we have to vectorize it and compare it with all the cluster centers. The cluster center with the smallest distance to our new post vector belongs to the cluster we will assign to the new post.

Getting test data to evaluate our ideas on

In order to test clustering, let us move away from the toy text examples and find a dataset that resembles the data we are expecting in the future, so that we can test our approach. For our purpose, we need documents on technical topics that are already grouped together, so that we can check whether our algorithm works as expected when we apply it later to the posts we hope to receive.
One standard dataset in machine learning is the 20newsgroup dataset, which contains 18,828 posts from 20 different newsgroups. Among the groups' topics are technical ones such as comp.sys.mac.hardware or sci.crypt, as well as more politics- and religion-related ones such as talk.politics.guns or soc.religion.christian. We will restrict ourselves to the technical groups. If we assume each newsgroup to be one cluster, we can nicely test whether our approach of finding related posts works.

The dataset can be downloaded from http://people.csail.mit.edu/jrennie/20Newsgroups. Much simpler, however, is to download it from MLComp at http://mlcomp.org. SciKit already contains custom loaders for that dataset and rewards you with very convenient data loading options. The dataset comes in the form of a ZIP file, dataset-...-20news-..._WJQIG.zip, which we have to unzip to get the folder containing the datasets. We also have to notify SciKit about the path containing that data directory. It contains a metadata file and three directories: test, train, and raw. The test and train directories split the whole dataset into 70 percent of training and 30 percent of testing posts. For convenience, the dataset module also contains the function fetch_20newsgroups, which downloads the data into the desired directory.

MLComp is a website for finding the right dataset to tune your machine learning program and for exploring how other people use a particular dataset. For instance, you can see how well other people's algorithms performed on particular datasets and compare them.

Either you set the environment variable MLCOMP_DATASETS_HOME, or you specify the path directly with the mlcomp_root parameter when loading the dataset, as follows:

    import sklearn.datasets
    MLCOMP_DIR = r"D:\data"
    data = sklearn.datasets.load_mlcomp("20news-18828", mlcomp_root=MLCOMP_DIR)
    print(data.filenames)
    array(['D:\\data\\...\\raw\\comp.graphics\\...',
           'D:\\data\\...\\raw\\comp.graphics\\...',
           'D:\\data\\...\\raw\\alt.atheism\\...',
           'D:\\data\\...\\raw\\rec.sport.hockey\\...',
           'D:\\data\\...\\raw\\sci.crypt\\...',
           'D:\\data\\...\\raw\\comp.os.ms-windows.misc\\...'],
          dtype='|S...')

    print(len(data.filenames))
    18828
    data.target_names
    ['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc',
     'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x',
     'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball',
     'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space',
     'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast',
     'talk.politics.misc', 'talk.religion.misc']

We can choose among training and test sets as follows:

    train_data = sklearn.datasets.load_mlcomp("20news-18828", "train",
                                              mlcomp_root=MLCOMP_DIR)
    print(len(train_data.filenames))
    test_data = sklearn.datasets.load_mlcomp("20news-18828", "test",
                                             mlcomp_root=MLCOMP_DIR)
    print(len(test_data.filenames))

For simplicity's sake, we will restrict ourselves to only some newsgroups, so that the overall experimentation cycle is shorter. We can achieve this with the categories parameter as follows:

    groups = ['comp.graphics', 'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware',
              'comp.windows.x', 'sci.space']
    train_data = sklearn.datasets.load_mlcomp("20news-18828", "train",
                                              mlcomp_root=MLCOMP_DIR,
                                              categories=groups)
    print(len(train_data.filenames))

Clustering posts

You must have already noticed one thing: real data is noisy. The newsgroup dataset is no exception. It even contains invalid characters that will result in UnicodeDecodeError. We have to tell the vectorizer to ignore them:

    vectorizer = StemmedTfidfVectorizer(min_df=10, max_df=0.5,
                                        stop_words='english',
                                        charset_error='ignore')
    vectorized = vectorizer.fit_transform(dataset.data)
    num_samples, num_features = vectorized.shape
    print("#samples: %d, #features: %d" % (num_samples, num_features))

We now have a pool of a few thousand posts and have extracted for each of them a feature vector with several thousand dimensions. That is what KMeans takes as input. We will fix the cluster size to 50 for this chapter, and hope you are curious enough to try out different values as an exercise, as shown in the following code:

    num_clusters = 50
    from sklearn.cluster import KMeans
    km = KMeans(n_clusters=num_clusters, init='random', n_init=1, verbose=1)
    km.fit(vectorized)

That's it. After fitting, we can get the clustering information out of the members of km. For every vectorized post that has been fit, there is a corresponding integer label in km.labels_; its shape is (num_samples,):

    km.labels_
    km.labels_.shape

The cluster centers can be accessed via km.cluster_centers_. In the next section, we will see how we can assign a cluster to a newly arriving post using km.predict.

Solving our initial challenge

We now put everything together and demonstrate our system for the following new post, which we assign to the variable new_post:

    "Disk drive problems. Hi, I have a problem with my hard disk.
    After 1 year it is working only sporadically now.
    I tried to format it, but now it doesn't boot any more.
    Any ideas? Thanks."
As we have learned previously, we first have to vectorize this post before we predict its label, as follows:

    new_post_vec = vectorizer.transform([new_post])
    new_post_label = km.predict(new_post_vec)[0]

Now that we have the clustering, we do not need to compare new_post_vec to all post vectors. Instead, we can focus only on the posts of the same cluster. Let us fetch their indices in the original dataset:

    similar_indices = (km.labels_ == new_post_label).nonzero()[0]

The comparison in the brackets results in a boolean array, and nonzero converts that array into a smaller array containing the indices of the True elements.

Using similar_indices, we then simply have to build a list of posts together with their similarity scores, as follows:

    similar = []
    for i in similar_indices:
        dist = sp.linalg.norm((new_post_vec - vectorized[i]).toarray())
        similar.append((dist, dataset.data[i]))
    similar = sorted(similar)
    print(len(similar))

We found a number of posts in the cluster of our post. To give the user a quick idea of what kind of similar posts are available, we can now present the most similar post (show_at_1), the least similar one (show_at_3), and an in-between post (show_at_2), all of which are from the same cluster, as follows:

    show_at_1 = similar[0]
    show_at_2 = similar[len(similar)/2]
    show_at_3 = similar[-1]
The following excerpts show the posts together with their similarity positions, most similar first.

Position 1, the most similar post:

    BOOT PROBLEM with IDE controller
    Hi, I've got a Multi I/O card (IDE controller + serial/parallel interface) and two floppy drives
    (5 1/4, 3 1/2) and a Quantum ProDrive AT connected to it. I was able to format the hard disk, but
    could not boot from it. I can boot from drive A: (which disk drive does not matter), but if I remove
    the disk from drive A and press the reset switch, the LED of drive A: continues to glow, and the
    hard disk is not accessed at all. I guess this must be a problem of either the Multi I/O card or the
    floppy disk drive settings (jumper configuration?). Does someone have any hint what could be the
    reason for it?

Position 2, an in-between post:

    IDE Cable
    I just bought a new IDE hard drive for my system to go with the one I already had. My problem is
    this. My system only had an IDE cable for one drive, so I had to buy a cable with two drive
    connectors on it, and consequently have to switch cables. The problem is, the new hard drive's
    manual refers to matching pin 1 on the cable with both pin 1 on the drive itself and pin 1 on the
    IDE card. But for the life of me I cannot figure out how to tell which way to plug in the cable to
    align these. Secondly, the cable has like a connector at two ends and one between them. I figure
    one end goes in the controller and then the other two go into the drives. Does it matter which I
    plug into the "master" drive and which into the "slave"? Any help appreciated.
Position 3, the least similar post:

    Conner CP info please
    How to change the cluster size. Wondering if somebody could tell me if we can change the cluster
    size of my IDE drive. Normally I can do it with Norton's Calibrate on MFM/RLL drives, but dunno if
    I can on IDE too.

It is interesting how the posts reflect the similarity measurement score. The first post contains all the salient words from our new post. The second one also revolves around hard disks, but lacks concepts such as formatting. Finally, the third one is only slightly related. Still, for all the posts, we would say that they belong to the same domain as that of the new post.

Another look at noise

We should not expect perfect clustering, in the sense that posts from the same newsgroup (for example, comp.graphics) are also clustered together. An example will give us a quick impression of the noise that we have to expect:

    post_group = zip(dataset.data, dataset.target)
    # a list of (length, post, newsgroup) tuples that we can sort by post length
    z = sorted([(len(post[0]), post[0], dataset.target_names[post[1]]) for post in post_group])
    print(z[5:7])
    [(..., 'From: "Kwansik Kim" <kkim@cs.indiana.edu>\nSubject: Where is FAQ ?\n\nWhere can I find it ?\n\nThanks, Kwansik\n\n', 'comp.graphics'),
     (..., 'From: lioness@maple.circa.ufl.edu\nSubject: What is 3dO?\n\n\nSomeone please fill me in on what 3do.\n\nThanks,\n\nBH\n', 'comp.graphics')]

For both of these posts, there is no real indication that they belong to comp.graphics, considering only the wording that is left after the preprocessing step:

    analyzer = vectorizer.build_analyzer()
    list(analyzer(z[5][1]))
    ['kwansik', 'kim', 'kkim', 'cs', 'indiana', 'edu', 'subject', 'faq', 'thank', 'kwansik']
    list(analyzer(z[6][1]))
    ['lioness', 'mapl', 'circa', 'ufl', 'edu', 'subject', '3do', '3do', 'thank', 'bh']
17,635 | this is only after tokenizationlower casingand stop word removal if we also subtract those words that will be later filtered out via min_df and max_dfwhich will be done later in fit_transformit gets even worselist(set(analyzer( [ ][ ])intersectionvectorizer get_feature_names())[ 'cs' 'faq' 'thank'list(set(analyzer( [ ][ ])intersectionvectorizer get_feature_names())[ 'bh' 'thank'furthermoremost of the words occur frequently in other posts as wellas we can check with the idf scores remember that the higher the tf-idfthe more discriminative term is for given post and as idf is multiplicative factor herea low value of it signals that it is not of great value in generalfor term in ['cs''faq''thank''bh''thank']print('idf(% )= '%(termvectorizer _tfidf idf_[vectorizer vocabulary_[term]]idf(cs)= idf(faq)= idf(thank)= idf(bh)= idf(thank)= soexcept for bhwhich is close to the maximum overall idf value of the terms don' have much discriminative power understandablyposts from different newsgroups will be clustered together for our goalhoweverthis is no big dealas we are only interested in cutting down the number of posts that we have to compare new post to after allthe particular newsgroup from where our training data came from is of no special interest tweaking the parameters so what about all the other parameterscan we tweak them all to get better resultssure we couldof coursetweak the number of clusters or play with the vectorizer' max_features parameter (you should try that!alsowe could play with different cluster center initializations there are also more exciting alternatives to kmeans itself there arefor exampleclustering approaches that also let you use different similarity measurements such as cosine similaritypearsonor jaccard an exciting field for you to play |
17,636 | but before you go thereyou will have to define what you actually mean by "betterscikit has complete package dedicated only to this definition the package is called sklearn metrics and also contains full range of different metrics to measure clustering quality maybe that should be the first place to go nowright into the sources of the metrics package summary that was tough ridefrom preprocessing over clustering to solution that can convert noisy text into meaningful concise vector representation that we can cluster if we look at the efforts we had to do to finally be able to clusterit was more than half of the overall taskbut on the waywe learned quite bit on text processing and how simple counting can get you very far in the noisy real-world data the ride has been made much smoother thoughbecause of scikit and its powerful packages and there is more to explore in this we were scratching the surface of its capabilities in the next we will see more of its powers |
17,637 | In the previous chapter, we clustered texts into groups. This is a very useful tool, but it is not always appropriate. Clustering results in each text belonging to exactly one cluster. This book is about machine learning and Python. Should it be grouped with other Python-related works or with machine-related works? In the paper book age, a bookstore would need to make this decision when deciding where to stock it. In the internet store age, however, the answer is that this book is both about machine learning and Python, and the book can be listed in both sections. We will, however, not list it in the food section.

In this chapter, we will learn methods that do not cluster objects, but put them into a small number of groups called topics. We will also learn how to distinguish between topics that are central to a text and others that are only vaguely mentioned (this book mentions plotting every so often, but it is not a central topic in the way machine learning is). The subfield of machine learning that deals with these problems is called topic modeling.

latent dirichlet allocation (lda)

LDA and LDA: unfortunately, there are two methods in machine learning with the initials LDA: latent Dirichlet allocation, which is a topic modeling method, and linear discriminant analysis, which is a classification method. They are completely unrelated, except for the fact that the initials LDA can refer to either. This can be confusing: scikit-learn has a submodule, sklearn.lda, which implements linear discriminant analysis. At the moment, scikit-learn does not implement latent Dirichlet allocation.

The simplest topic model (on which all others are based) is latent Dirichlet allocation (LDA). The mathematical ideas behind LDA are fairly complex, and we will not go into the details here.
17,638 | for those who are interested and adventurous enougha wikipedia search will provide all the equations behind these algorithms at the following linkhoweverwe can understand that this is at high level and there is sort of fable which underlies these models in this fablethere are topics that are fixed this lacks clarity which documentsfor examplelet' say we have only three topics at presentmachine learning python baking each topic has list of words associated with it this book would be mixture of the first two topicsperhaps percent each thereforewhen we are writing itwe pick half of our words from the machine learning topic and half from the python topic in this modelthe order of words does not matter the preceding explanation is simplification of the realityeach topic assigns probability to each word so that it is possible to use the word "flourwhen the topic is either machine learning or bakingbut more probable if the topic is baking of coursewe do not know what the topics are otherwisethis would be different and much simpler problem our task right now is to take collection of text and reverse engineer this fable in order to discover what topics are out there and also where each document belongs building topic model unfortunatelyscikit-learn does not support latent dirichlet allocation thereforewe are going to use the gensim package in python gensim is developed by radim rehurekwho is machine learning researcher and consultant in the czech republic we must start by installing it we can achieve this by running one of the following commandspip install gensim easy_install gensim we are going to use an associated press (apdataset of news reports this is standard datasetwhich was used in some of the initial work on topic modelsfrom gensim import corporamodelssimilarities corpus corpora bleicorpus(/data/ap/ap dat''/data/ap/vocab txt' |
17,639 | corpus is just the preloaded list of wordsmodel models ldamodel ldamodelcorpusnum_topics= id word=corpus id wordthis one-step process will build topic model we can explore the topics in many ways we can see the list of topics document refers to by using the model[docsyntaxtopics [model[cfor in corpusprint topics[ [( )( )( )( ) elided some of the outputbut the format is list of pairs (topic_indextopic_ weightwe can see that only few topics are used for each document the topic model is sparse modelas although there are many possible topics for each documentonly few of them are used we can plot histogram of the number of topics as shown in the following graph |
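Such a histogram is easy to produce from the sparse topic assignments. The following is a minimal sketch, assuming model and corpus are the objects just built; matplotlib is an additional dependency and the bin range is an arbitrary choice:

import matplotlib.pyplot as plt

# model[doc] only returns topics with non-negligible weight,
# so its length is the number of topics used by that document.
num_topics_used = [len(model[doc]) for doc in corpus]

plt.hist(num_topics_used, bins=range(1, 12))
plt.xlabel('number of topics per document')
plt.ylabel('number of documents')
plt.show()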
17,640 | sparsity means that while you may have large matrices and vectorsin principlemost of the values are zero (or so small that we can round them to zero as good approximationthereforeonly few things are relevant at any given time often problems that seem too big to solve are actually feasible because the data is sparse for exampleeven though one webpage can link to any other webpagethe graph of links is actually very sparse as each webpage will link to very tiny fraction of all other webpages in the previous graphwe can see that about documents have topicswhile the majority deal with around to of them no document talks about more than topics to large extentthis is function of the parameters usednamely the alpha parameter the exact meaning of alpha is bit abstractbut bigger values for alpha will result in more topics per document alpha needs to be positivebut is typically very smallusually smaller than one by defaultgensim will set alpha equal to len (corpus)but you can set it yourself as followsmodel models ldamodel ldamodelcorpusnum_topics= id word=corpus id wordalpha= in this casethis is larger alphawhich should lead to more topics per document we could also use smaller value as we can see in the combined histogram given nextgensim behaves as we expected |
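To see the effect of alpha numerically before plotting anything, we can compare the average number of topics per document under the default setting and under an explicit, larger value. This is only a rough sketch; num_topics=100 and alpha=1.0 are illustrative choices (the number of topics is discussed later in the chapter):

import numpy as np
from gensim import models

model_default = models.ldamodel.LdaModel(corpus, num_topics=100,
                                          id2word=corpus.id2word)
model_alpha1 = models.ldamodel.LdaModel(corpus, num_topics=100,
                                         id2word=corpus.id2word, alpha=1.0)

# A larger alpha should give more topics per document on average.
print(np.mean([len(model_default[doc]) for doc in corpus]))
print(np.mean([len(model_alpha1[doc]) for doc in corpus]))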
17,641 | now we can see that many documents touch upon to different topics what are these topicstechnicallythey are multinomial distributions over wordswhich mean that they give each word in the vocabulary probability words with high probability are more associated with that topic than words with lower probability our brains aren' very good at reasoning with probability distributionsbut we can readily make sense of list of words thereforeit is typical to summarize topics with the list of the most highly weighted words here are the first ten topicsdress military soviet president new state capt carlucci states leader stance government koch zambia lusaka one-party orange kochs party government mayor new political human turkey rights abuses royal thompson threats new state wrote garden president bill employees experiments levin taxation federal measure legislation senate president whistleblowers sponsor ohio july drought jesus disaster percent hartford mississippi crops northern valley virginia united percent billion year president world years states people bush news hughes affidavit states united ounces squarefoot care delaying charged unrealistic bush yeutter dukakis bush convention farm subsidies uruguay percent secretary general told kashmir government people srinagar india dumps city two jammu-kashmir group moslem pakistan workers vietnamese irish wage immigrants percent bargaining last island police hutton |
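Word lists like the ones above can be generated directly from the model. A minimal sketch follows; note that the return format of show_topic has changed between gensim versions (older releases return (weight, word) pairs, newer ones (word, weight)), so the unpacking may need adjusting:

# Print the ten most heavily weighted words for each of the first ten topics.
for topic_id in range(10):
    words = model.show_topic(topic_id, topn=10)
    print(topic_id + 1, " ".join(str(w) for w, _ in words))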
17,642 | although daunting at first glancewe can clearly see that the topics are not just random wordsbut are connected we can also see that these topics refer to older news itemsfrom when the soviet union still existed and gorbachev was its secretary general we can also represent the topics as word cloudsmaking more likely words larger for examplethis is the visualization of topicwhich deals with the middle east and politicswe can also see that some of the words should perhaps be removed (for examplethe word ias they are not so informative (stop wordsin topic modelingit is important to filter out stop wordsas otherwise you might end up with topic consisting entirely of stop wordswhich is not very informative we may also wish to preprocess the text to stems in order to normalize plurals and verb forms this process was covered in the previous and you can refer to it for details if you are interestedyou can download the code from the companion website of the book and try all these variations to draw different pictures building word cloud like the one in the previous screenshot can be done with several different pieces of software for the previous graphici used the online tool wordle (which generates particularly attractive images since only had few examplesi copy and pasted the list of words manuallybut it is possible to use it as web service and call it directly from python comparing similarity in topic space topics can be useful on their own to build small vignettes with words that are in the previous screenshot these visualizations could be used to navigate large collection of documents andin factthey have been used in just this way |
17,643 | howevertopics are often just an intermediate tool to another end now that we have an estimate for each document about how much of that document comes from each topicwe can compare the documents in topic space this simply means that instead of comparing word per wordwe say that two documents are similar if they talk about the same topics this can be very powerfulas two text documents that share few words may actually refer to the same topic they may just refer to it using different constructions (for exampleone may say the president of the united states while the other will use the name barack obamatopic models are useful on their own to build visualizations and explore data they are also very useful as an intermediate step in many other tasks at this pointwe can redo the exercise we performed in the previous and look for the most similar postbut by using the topics whereas previously we compared two documents by comparing their word vectorswe can now compare two documents by comparing their topic vectors for thiswe are going to project the documents to the topic space that iswe want to have vector of topics that summarizes the document since the number of topics ( is smaller than the number of possible wordswe have reduced dimensionality how to perform these types of dimensionality reduction in general is an important task in itselfand we have entirely devoted to this task one additional computational advantage is that it is much faster to compare vectors of topic weights than vectors of the size of the vocabulary (which will contain thousands of termsusing gensimwe saw before how to compute the topics corresponding to all documents in the corpustopics [model[cfor in corpusprint topics[ [( )( )( )( )we will store all these topic counts in numpy arrays and compute all pairwise distancesdense np zeros(len(topics) )floatfor ti, in enumerate(topics) |
17,644 | for tj, in tdense[ti,tjv nowdense is matrix of topics we can use the pdist function in scipy to compute all pairwise distances that iswith single function callwe compute all the values of sum((dense[tidense[tj])** )from scipy spatial import distance pairwise distance squareform(distance pdist(dense)now we employ one last little trickwe set the diagonal elements of the distance matrix to high value (it just needs to be larger than the other values in the matrix)largest pairwise max(for ti in range(len(topics))pairwise[ti,tilargest+ and we are donefor each documentwe can look up the closest element easilydef closest_to(doc_id)return pairwise[doc_idargmin(the previous code would not work if we had not set the diagonal elements to large valuethe function would always return the same element as it is almost similar to itself (except in the weird case where two elements have exactly the same topic distributionwhich is very rare unless they are exactly the samefor examplehere is the second document in the collection (the first document is very uninterestingas the system returns post stating that it is the most similar)fromgeb@cs pitt edu (gordon bankssubjectrerequest for information on "essential tremorand indrolin article sundar@ai mit edu writesessential tremor is progressive hereditary tremor that gets worse when the patient tries to use the effected member all limbsvocal cordsand head can be involved inderal is beta-blocker and is usually effective in diminishing the tremor alcohol and mysoline are also effectivebut alcohol is too toxic to use as treatment gordon banks jxp "skepticism is the chastity of the intellectand geb@cadre dsl pitt edu it is shameful to surrender it too soon if we ask for the most similar documentclosest_to( )we receive the following documentfromgeb@cs pitt edu (gordon banks |
17,645 | subjectrehigh prolactin in article jer @psuvm psu edu (john rodwaywrites>any comments on the use of the drug parlodel for high prolactin in the blood>it can suppress secretion of prolactin is useful in cases of galactorrhea some adenomas of the pituitary secret too much gordon banks jxp "skepticism is the chastity of the intellectand geb@cadre dsl pitt edu it is shameful to surrender it too soon we received post by the same author discussing medications modeling the whole of wikipedia while the initial lda implementations could be slowmodern systems can work with very large collections of data following the documentation of gensimwe are going to build topic model for the whole of the english language wikipedia this takes hoursbut can be done even with machine that is not too powerful with cluster of machineswe could make it go much fasterbut we will look at that sort of processing in later first we download the whole wikipedia dump from org this is large file (currently just over gb)so it may take whileunless your internet connection is very fast thenwe will index it with gensim toolpython - gensim scripts make_wiki enwiki-latest-pages-articles xml bz wiki_en_output run the previous command on the command linenot on the python shell after few hoursthe indexing will be finished finallywe can build the final topic model this step looks exactly like what we did for the small ap dataset we first import few packagesimport logginggensim logging basicconfigformat='%(asctime) %(levelname) %(message) 'level=logging infonowwe load the data that has been preprocessedid word gensim corpora dictionary load_from_text('wiki_en_output_wordids txt'mm gensim corpora mmcorpus('wiki_en_output_tfidf mm' |
17,646 | finallywe build the lda model as beforemodel gensim models ldamodel ldamodelcorpus=mmid word=id wordnum_topics= update_every= chunksize= passes= this will again take couple of hours (you will see the progress on your consolewhich can give you an indication of how long you still have to waitonce it is doneyou can save it to file so you don' have to redo it all the timemodel save('wiki_lda pkl'if you exit your session and come back lateryou can load the model again withmodel gensim models ldamodel ldamodel load('wiki_lda pkl'let us explore some topicstopics [for doc in mmtopics append(model[doc]we can see that this is still sparse model even if we have many more documents than before (over million as we are writing this)import numpy as np lens np array([len(tfor in print np mean(lens print np mean(lens < topics]sothe average document mentions topics and percent of them mention or fewer if you have not seen the idiom beforeit may be odd to take the mean of comparisonbut it is direct way to compute fraction np mean(lens < is taking the mean of an array of booleans the booleans get interpreted as and in numeric context thereforethe result is number between and which is the fraction of ones in this caseit is the fraction of elements of lenswhich are less than or equal to |
17,647 | we can also ask what the most talked about topic in wikipedia is we first collect some statistics on topic usagecounts np zeros( for doc_top in topicsfor ti, in doc_topcounts[ti+ words model show_topic(counts argmax() using the same tool as before to build up visualizationwe can see that the most talked about topic is fiction and storiesboth as books and movies for varietywe chose different color scheme full percent of wikipedia pages are partially related to this topic (or alternatively percent of the words come from this topic)these plots and numbers were obtained when the book was being written in early as wikipedia keeps changingyour results will be different we expect that the trends will be similarbut the details may vary particularlythe least relevant topic is subject to changewhile topic similar to the previous topic is likely to be still high on the list (even if not as the most important |
17,648 | alternativelywe can look at the least talked about topicwords model show_topic(counts argmin() the least talked about are the former french colonies in central africa just percent of documents touch upon itand it represents percent of the words probably if we had performed this exercise using the french wikipediawe would have obtained very different result choosing the number of topics so farwe have used fixed number of topicswhich is this was purely an arbitrary numberwe could have just as well done or topics fortunatelyfor many usersthis number does not really matter if you are going to only use the topics as an intermediate step as we did previouslythe final behavior of the system is rarely very sensitive to the exact number of topics this means that as long as you use enough topicswhether you use topics or the recommendations that result from the process will not be very different one hundred is often good number (while is too few for general collection of text documentsthe same is true of setting the alpha (avalue while playing around with it can change the topicsthe final results are again robust against this change topic modeling is often an end towards goal in that caseit is not always important exactly which parameters you choose different numbers of topics or values for parameters such as alpha will result in systems whose end results are almost identical |
17,649 | if you are going to explore the topics yourself or build visualization toolyou should probably try few values and see which gives you the most useful or most appealing results howeverthere are few methods that will automatically determine the number of topics for you depending on the dataset one popular model is called the hierarchical dirichlet process againthe full mathematical model behind it is complex and beyond the scope of this bookbut the fable we can tell is that instead of having the topics be fixed priori and our task being to reverse engineer the data to get them backthe topics themselves were generated along with the data whenever the writer was going to start new documenthe had the option of using the topics that already existed or creating completely new one this means that the more documents we havethe more topics we will end up with this is one of those statements that is unintuitive at firstbut makes perfect sense upon reflection we are learning topicsand the more examples we havethe more we can break them up if we only have few examples of news articlesthen sports will be topic howeveras we have morewe start to break it up into the individual modalities such as hockeysoccerand so on as we have even more datawe can start to tell nuances apart articles about individual teams and even individual players the same is true for people in group of many different backgroundswith few "computer people"you might put them togetherin slightly larger groupyou would have separate gatherings for programmers and systems managers in the real worldwe even have different gatherings for python and ruby programmers one of the methods for automatically determining the number of topics is called the hierarchical dirichlet process (hdp)and it is available in gensim using it is trivial taking the previous code for ldawe just need to replace the call to gensim models ldamodel ldamodel with call to the hdpmodel constructor as followshdp gensim models hdpmodel hdpmodel(mmid wordthat' it (except it takes bit longer to compute--there are no free lunchesnowwe can use this model as much as we used the lda modelexcept that we did not need to specify the number of topics summary in this we discussed more advanced form of grouping documentswhich is more flexible than simple clustering as we allow each document to be present in more than one group we explored the basic lda model using new packagegensimbut were able to integrate it easily into the standard python scientific ecosystem |
17,650 | topic modeling was first developed and is easier to understand in the case of textbut in computer vision pattern recognitionwe will see how some of these techniques may be applied to images as well topic models are very important in most of modern computer vision research in factunlike the previous this was very close to the cutting edge of research in machine learning algorithms the original lda algorithm was published in scientific journal in but the method that gensim uses to be able to handle wikipedia was only developed in and the hdp algorithm is from the research continues and you can find many variations and models with wonderful names such as the indian buffet process (not to be confused with the chinese restaurant processwhich is different model)or pachinko allocation (pachinko being type of japanese gamea cross between slot-machine and pinballcurrentlythey are still in the realm of research in few yearsthoughthey might make the jump into the real world we have now gone over some of the major machine learning models such as classificationclusteringand topic modeling in the next we go back to classificationbut this time we will be exploring advanced algorithms and approaches |
17,651 | poor answers now that we are able to extract useful features from textwe can take on the challenge of building classifier using real data let' go back to our imaginary website in clustering finding related postswhere users can submit questions and get them answered continuous challenge for owners of these & sites is to maintain decent level of quality in the posted content websites such as stackoverflow com take considerable efforts to encourage users to score questions and answers with badges and bonus points higher quality content is the resultas users are trying to spend more energy on carving out the question or crafting possible answer one particular successful incentive is the possibility for the asker to flag one answer to their question as the accepted answer (againthere are incentives for the asker to flag such answersthis will result in more score points for the author of the flagged answer would it not be very useful for the user to immediately see how good their answer is while they are typing it inthis means that the website would continuously evaluate their work-in-progress answer and provide feedback as to whether the answer shows signs of being poor one or not this will encourage the user to put more effort into writing the answer (for exampleproviding code exampleincluding an imageand so onso finallythe overall system will be improved let us build such mechanism in this |
17,652 | sketching our roadmap we will build system using real data that is very noisy this is not for the faintheartedas we will not arrive at the golden solution for classifier that achieves percent accuracy this is because even humans often disagree whether an answer was good or not (just look at some of the comments on the stackoverflow com websitequite the contrarywe will find out that some problems like this one are so hard that we have to adjust our initial goals on the way but on that waywe will start with the nearest neighbor approachfind out why it is not very good for the taskswitch over to logistic regressionand arrive at solution that will achieve good prediction quality but on smaller part of the answers finallywe will spend some time on how to extract the winner to deploy it on the target system learning to classify classy answers while classifyingwe want to find the corresponding classessometimes also called labelsfor the given data instances to be able to achieve thiswe need to answer the following two questionshow should we represent the data instanceswhich model or structure should our classifier possesstuning the instance in its simplest formin our casethe data instance is the text of the answer and the label is binary value indicating whether the asker accepted this text as an answer or not raw texthoweveris very inconvenient representation to process for most of the machine learning algorithms they want numbers it will be our task to extract useful features from raw textwhich the machine learning algorithm can then use to learn the right label tuning the classifier once we have found or collected enough (text and labelpairswe can train classifier for the underlying structure of the classifierwe have wide range of possibilitieseach of them having advantages and drawbacks just to name some of the more prominent choicesthere is logistic regressionand there are decision treessvmsand naive bayes in this we will contrast the instance-based method from the previous with model-based logistic regression |
17,653 | fetching the data luckily for usthe team behind stackoverflow provides most of the data behind the stackexchange universe to which stackoverflow belongs under cc wiki license while writing thisthe latest data dump can be found at net/torrents/ -aug- most likelythis page will contain pointer to an updated dump when you read it after downloading and extracting itwe have around gb of data in the xml format this is illustrated in the following tablefile size (mbdescription badges xml badges of users comments xml , comments on questions or answers posthistory xml , edit history posts xml , questions and answers--this is what we need users xml general information about users votes xml , information on votes as the files are more or less self-containedwe can delete all of them except posts xmlit contains all the questions and answers as individual row tags within the root tag posts refer to the following code<row id=" posttypeid=" parentid=" creationdate=" : : score=" viewcount="body="< >ianalbut < href="rel="nofollow">this</ >indicates to me that you cannot use the loops in your application:</ >

<blockquote>
<p>However, individual audio loops may
not be commercially or otherwise
distributed on a standalone basis, nor
may they be repackaged in whole or in
part as audio samples, sound effects
or music beds."</p>

<p>So don't worry, you can make
commercial music with GarageBand, you
just can't distribute the loops as
loops.</p>
</blockquote>
owneruserid="…" lastactivitydate="…" commentcount="…" />
17,654 | name type description id integer this is unique identifier posttype integer this describes the category of the post the following values are of interest to usquestion answer other values will be ignored parentid integer this is unique identifier of the question to which this answer belongs (missing for questionscreationdate datetime this is the date of submission score integer this is the score of the post viewcount integer or empty this tells us the number of user views for this post body string this is the complete post as it is encoded in html text owneruserid id this is unique identifier of the poster if it is it is wiki question title string this is the title of the question (missing for answersacceptedanswerid id this is the id of the accepted answer (missing for answerscommentcount integer this tells us the number of comments for the post slimming the data down to chewable chunks to speed up our experimentation phasewe should not try to evaluate our classification ideas on gb file insteadwe should think of how we can trim it down so that we can still keep representable snapshot of it while being able to quickly test our ideas if we filter an xml for row tags that have creationdate of or laterwe still end up with over million posts ( , , questions and , , answers)which should be enough training data for now we also do not operate on the xml format as it will slow us down the simpler the formatthe better it is that' why we parse the remaining xml using python' celementtree and write it out to tab-separated file |
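The conversion itself only needs a streaming XML parser. The following is a minimal sketch of the idea behind so_xml_to_tsv.py, not the actual script: the attribute names are capitalized as they appear in the real Stack Overflow dump (the table above shows them lowercased), the cut-off year is only an example because the value used in the text is not reproduced here, and only Id and Body are written out:

from xml.etree import ElementTree as etree  # the book's script uses cElementTree

def iter_rows(filename, min_year):
    # Stream over the huge XML file instead of loading it all into memory.
    for event, elem in etree.iterparse(filename, events=("end",)):
        if elem.tag == "row":
            if elem.attrib.get("CreationDate", "")[:4] >= min_year:
                yield dict(elem.attrib)
            elem.clear()  # free the element to keep memory usage flat

with open("data.tsv", "w") as out:
    for row in iter_rows("posts.xml", min_year="2011"):  # example cut-off year
        text = row.get("Body", "").replace("\t", " ").replace("\n", " ")
        out.write("%s\t%s\n" % (row["Id"], text))

The attribute selection that follows next decides which of the remaining fields end up in the real output.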
17,655 | preselection and processing of attributes we should also only keep those attributes that we think could help the classifier in determining the good from the not-so-good answers certainlywe need the identification-related attributes to assign the correct answers to the questions read the following attributesthe posttype attributefor exampleis only necessary to distinguish between questions and answers furthermorewe can distinguish between them later by checking for the parentid attribute sowe keep it for questions tooand set it to the creationdate attribute could be interesting to determine the time span between posting the question and posting the individual answersso we keep it the score attribute isof courseimportant as an indicator of the community' evaluation the viewcount attributein contrastis most likely of no use for our task even if it is able to help the classifier distinguish between good and badwe will not have this information at the time when an answer is being submitted we will ignore it the body attribute obviously contains the most important information as it is encoded in htmlwe will have to decode it to plain text the owneruserid attribute is useful only if we will take the user-dependent features into accountwhich we won' although we drop it herewe encourage you to use it (maybe in connection with users xmlto build better classifier the title attribute is also ignored herealthough it could add some more information about the question the commentcount attribute is also ignored similar to viewcountit could help the classifier with posts that were posted while ago (more comments are equal to more ambiguous postsit willhowevernot help the classifier at the time that an answer is posted the acceptedanswerid attribute is similar to the score attributethat isit is an indicator of post' quality as we will access this per answerinstead of keeping this attributewe will create new attributeisacceptedwhich will be or for answers and ignored for questions (parentid |
17,656 | we end up with the following formatid parentid isaccepted timetoanswer score text for concrete parsing detailsplease refer to so_xml_to_tsv py and choose_ instance py it will suffice to say that in order to speed up the processwe will split the data into two files in meta jsonwe store dictionarymapping post' id to its other data (except text in the json formatso that we can read it in the proper format for examplethe score of post would reside at meta[id['score'in data tsvwe store id and textwhich we can easily read with the following methoddef fetch_posts()for line in open("data tsv"" ")post_idtext line split("\ "yield int(post_id)text strip(defining what is good answer before we can train classifier to distinguish between good and bad answerswe have to create the training data so farwe have only bunch of data what we still have to do is to define labels we couldof coursesimply use the isaccepted attribute as label after allit marks the answer that answered the question howeverthat is only the opinion of the asker naturallythe asker wants to have quick answer and accepts the first best answer if more answers are submitted over timesome of them will tend to be better than the already accepted one the askerhoweverseldom gets back to the question and changes his/her mind so we end up with many questions with accepted answers that have not been scored the highest at the other extremewe could take the best and worst scored answer per question as positive and negative examples howeverwhat do we do with questions that have only good answerssayone with two and the other with four pointsshould we really take the answer with two points as negative examplewe should settle somewhere between these extremes if we take all answers that are scored higher than zero as positive and all answers with or less points as negativewe end up with quite reasonable labels as followsall_answers [ for , in meta iteritems(if ['parentid']!=- np asarray([meta[aid]['score']> for aid in all_answers] |
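Before training anything, a quick look at the class balance is worthwhile. A minimal sketch, assuming meta.json was written with the standard json module, that the key names match the ones printed in the text, and that y is the label array just created:

import json
import numpy as np

with open("meta.json") as f:
    meta = json.load(f)   # maps a post id to its attributes

num_answers = sum(1 for v in meta.values() if v['parentid'] != -1)
print("answers: %d" % num_answers)
print("fraction labeled as good: %.2f" % np.mean(y))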
17,657 | creating our first classifier let us start with the simple and beautiful nearest neighbor method from the previous although it is not as advanced as other methodsit is very powerful as it is not model-basedit can learn nearly any data howeverthis beauty comes with clear disadvantagewhich we will find out very soon starting with the -nearest neighbor (knnalgorithm this timewe won' implement it ourselvesbut rather take it from the sklearn toolkit therethe classifier resides in sklearn neighbors let us start with simple -nearest neighbor classifierfrom sklearn import neighbors knn neighbors kneighborsclassifier(n_neighbors= print(knnkneighborsclassifier(algorithm=autoleaf_size= n_neighbors= = warn_on_equidistant=trueweights=uniformit provides the same interface as all the other estimators in sklearn we train it using fit()after which we can predict the classes of new data instances using predict()knn fit([[ ],[ ],[ ],[ ],[ ],[ ]][ , , , , , ]knn predict( array([ ]knn predict( array([ ]knn predict( neighborswarningkneighborsneighbor + and neighbor have the same distanceresults will be dependent on data order neigh_distneigh_ind self kneighbors(xarray([ ]to get the class probabilitieswe can use predict_proba(in this casewhere we have two classes and it will return an array of two elements as in the following codeknn predict_proba( array([ ]]knn predict_proba( array([ ]]knn predict_proba( array([ ]] |
17,658 | Engineering the features

So, what kind of features can we provide to our classifier? What do we think will have the most discriminative power?

The TimeToAnswer attribute is already present in our meta dictionary, but it probably won't provide much value on its own. Then there is only the text, but in its raw form we cannot pass it to the classifier, as the features must be in numerical form. We will have to do the dirty work of extracting features from it.

What we could do is check the number of HTML links in the answer as a proxy for quality. Our hypothesis would be that more hyperlinks in an answer indicate better answers, and thus a higher likelihood of being up-voted. Of course, we want to only count links in normal text and not in code examples:

import re

code_match = re.compile('<pre>(.*?)</pre>',
                        re.MULTILINE | re.DOTALL)
link_match = re.compile('<a href="http://.*?".*?>(.*?)</a>',
                        re.MULTILINE | re.DOTALL)

def extract_features_from_body(s):
    link_count_in_code = 0
    # count links in code to later subtract them
    for match_str in code_match.findall(s):
        link_count_in_code += len(link_match.findall(match_str))
    return len(link_match.findall(s)) - link_count_in_code

For production systems, we should not parse HTML content with regular expressions. Instead, we should rely on excellent libraries such as BeautifulSoup, which does a marvelous job of robustly handling all the weird things that typically occur in everyday HTML.

With this in place, we can generate one feature per answer. But before we train the classifier, let us first have a look at what we will train it with. We can get a first impression with the frequency distribution of our new feature. This can be done by plotting the percentage of how often each value occurs in the data, as shown in the following graph.
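The distribution plot can be produced from the feature values of all answers. A minimal sketch, assuming fetch_posts, all_answers, and extract_features_from_body are defined as above:

import numpy as np
import matplotlib.pyplot as plt

answer_ids = set(all_answers)
link_counts = np.asarray([extract_features_from_body(text)
                          for post_id, text in fetch_posts()
                          if post_id in answer_ids])

# Percentage of answers having 0, 1, 2, ... links.
values, counts = np.unique(link_counts, return_counts=True)
plt.bar(values, 100.0 * counts / counts.sum())
plt.xlabel('number of links in answer')
plt.ylabel('percentage of answers')
plt.show()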
17,659 | with the majority of posts having no link at allwe now know that this feature alone will not make good classifier let us nevertheless try it out to get first estimation of where we are training the classifier we have to pass the feature array together with the previously defined labels to the knn learner to obtain classifierx np asarray([extract_features_from_body(textfor post_idtext in fetch_posts(if post_id in all_answers]knn neighbors kneighborsclassifier(knn fit(xyusing the standard parameterswe just fitted nn (meaning nn with to our data why nnwellwith the current state of our knowledge about the datawe really have no clue what the right should be once we have more insightwe will have better idea of how to set the value for measuring the classifier' performance we have to be clear about what we want to measure the naive but easiest way is to simply calculate the average prediction quality over the test set this will result in value between for incorrectly predicting everything and for perfect prediction accuracy can be obtained through knn score( |
17,660 | but as we learned in the previous we will not do it just oncebut apply cross-validation here using the ready-made kfold class from sklearn cross_ validation finallywe will average the scores on the test set of each fold and see how much it varies using standard deviation refer to the following codefrom sklearn cross_validation import kfold scores [cv kfold( =len( ) = indices=truefor traintest in cvx_trainy_train [train] [trainx_testy_test [test] [testclf neighbors kneighborsclassifier(clf fit(xyscores append(clf score(x_testy_test)print("mean(scores)= \tstddev(scores)= "%(np mean(scoresnp std(scores))the output is as followsmean(scores)= stddev(scores)= this is far from being usable with only percent accuracyit is even worse than tossing coin apparentlythe number of links in post are not very good indicator of the quality of the post we say that this feature does not have much discriminative power--at leastnot for knn with designing more features in addition to using number of hyperlinks as proxies for post' qualityusing number of code lines is possibly another good option too at least it is good indicator that the post' author is interested in answering the question we can find the code embedded in the tag once we have extracted itwe should count the number of words in the post while ignoring the code linesdef extract_features_from_body( )num_code_lines link_count_in_code code_free_s remove source code and count how many lines for match_str in code_match findall( )num_code_lines +match_str count('\ 'code_free_s code_match sub(""code_free_ssometimes source code contain links |
17,661 | which we don' want to count link_count_in_code +len(link_match findall(match_str)links link_match findall(slink_count len(linkslink_count -link_count_in_code html_free_s re sub(+""tag_match sub(''code_free_s)replace("\ """link_free_s html_free_s remove links from text before counting words for anchor in anchorsif anchor lower(startswith("link_free_s link_free_s replace(anchor,''num_text_tokens html_free_s count("return num_text_tokensnum_code_lineslink_count looking at the following graphswe can notice that the number of words in post show higher variabilitytraining on the bigger feature space improves accuracy quite bitmean(scores)= stddev(scores)= |
17,662 | But still, this would mean that we would classify roughly four out of ten answers wrongly. At least we are heading in the right direction. More features lead to higher accuracy, which leads us to adding more features. Therefore, let us extend the feature space with even more features:

- avgsentlen: this feature measures the average number of words in a sentence. Maybe there is a pattern that particularly good posts don't overload the reader's brain with very long sentences.
- avgwordlen: this feature is similar to avgsentlen; it measures the average number of characters in the words of a post.
- numallcaps: this feature measures the number of words that are written in uppercase, which is considered bad style.
- numexclams: this feature measures the number of exclamation marks.

The following charts show the value distributions for average sentence and word lengths, as well as the number of uppercase words and exclamation marks. A small sketch of how these features can be computed follows below.
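These four features need nothing more than plain string operations once the HTML, code, and links have been stripped as before. The following is a rough sketch; the sentence splitting is deliberately crude, and the helper name extract_style_features is ours:

import numpy as np

def extract_style_features(text):
    """Return (avg_sent_len, avg_word_len, num_all_caps, num_exclams) for a
    post whose HTML tags, code, and links have already been removed."""
    sentences = [s for s in text.split('.') if s.strip()]
    words = text.split()
    avg_sent_len = np.mean([len(s.split()) for s in sentences]) if sentences else 0
    avg_word_len = np.mean([len(w) for w in words]) if words else 0
    num_all_caps = sum(1 for w in words if len(w) > 1 and w.isupper())
    num_exclams = text.count('!')
    return avg_sent_len, avg_word_len, num_all_caps, num_exclams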
17,663 | with these four additional featureswe now have seven features representing individual posts let' see how we have progressedmean(scores)= stddev(scores)= now that' interesting we added four more features and got worse classification accuracy how can that be possibleto understand thiswe have to remind ourselves of how knn works our nn classifier determines the class of new post by calculating the preceding seven described featuresnamely linkcountnumtexttokensnumcodelinesavgsentlenavgwordlennumallcapsand numexclamsand then finds the five nearest other posts the new post' class is then the majority of the classes of those nearest posts the nearest posts are determined by calculating the euclidean distance as we did not specify itthe classifier was initialized with the default value which is the parameter in the minkowski distance this means that all seven features are treated similarly knn does not really learn thatfor instancenumtexttokens is good to have but much less important than numlinks let us consider the following two postsa and bwhich only differ in the following featuresand how they compare to new postpost numlinks numtexttokens new although we would think that links provide more value than mere textpost would be considered more similar to the new post than post clearlyknn has hard time correctly using the available data deciding how to improve to improve on thiswe basically have the following optionsadd more datait may be that there is just not enough data for the learning algorithm and that we simply need to add more training data play with the model complexityit may be that the model is not complex enough or is already too complex in this casewe could either decrease so that it would take less nearest neighbors into account and thus would be better at predicting non-smooth dataor we could increase it to achieve the opposite |
17,664 | modify the feature spaceit may be that we do not have the right set of features we couldfor examplechange the scale of our current features or design even more new features or ratherwe could remove some of our current features in case some features are aliasing others change the modelit may be that knn is generally not good fit for our use casesuch that it will never be capable of achieving good prediction performance no matter how complex we allow it to be and how sophisticated the feature space will become in real lifeat this pointpeople often try to improve the current performance by randomly picking one of the preceding options and trying them out in no particular orderhoping to find the golden configuration by chance we could do the same herebut it will surely take longer than making informed decisions let' take the informed routefor which we need to introduce the bias-variance tradeoff bias-variance and its trade-off in getting started with python machine learningwe tried to fit polynomials of different complexities controlled by the dimensionality parameterdto fit the data we realized that two-dimensional polynomiala straight linedid not fit the example data very well because the data was not of linear nature no matter how elaborate our fitting procedure would have beenour two-dimensional model will see everything as straight line we say that it is too biased for the data at handit is under-fitting we played bit with the dimensions and found out that the -dimensional polynomial was actually fitting very well into the data on which it was trained (we did not know about train-test splits at the timehoweverwe quickly found that it was fitting too well we realized that it was over-fitting so badly that with different samples of the data pointswe would have gotten totally different -dimensional polynomials we say that the model has too high variance for the given data or that it is over-fitting these are the extremes between which most of our machine learning problems reside ideallywe want to have both low bias and low variance butwe are in bad world and have to trade off between them if we improve on onewe will likely get worse on the other fixing high bias let us assume that we are suffering from high bias in this caseadding more training data clearly will not help alsoremoving features surely will not help as our model is probably already overly simplistic |
17,665 | the only possibilities we have in this case is to either get more featuresmake the model more complexor change the model fixing high variance ifon the contrarywe suffer from high variance that means our model is too complex for the data in this casewe can only try to get more data or decrease the complexity this would mean to increase so that more neighbors would be taken into account or to remove some of the features high bias or low bias to find out what actually our problem iswe have to simply plot the train and test errors over the data size high bias is typically revealed by the test error decreasing bit at the beginningbut then settling at very high value with the train error approaching growing dataset size high variance is recognized by big gap between both curves plotting the errors for different dataset sizes for nn shows big gap between the train and test errorhinting at high variance problem refer to the following graph |
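A train versus test error curve over growing dataset sizes can be sketched as follows, assuming X and y are the feature matrix and labels built above (older scikit-learn versions ship train_test_split in sklearn.cross_validation instead of sklearn.model_selection):

import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

fractions = np.linspace(0.1, 1.0, 10)
train_errors, test_errors = [], []
for frac in fractions:
    n = int(frac * len(X_train))
    clf = neighbors.KNeighborsClassifier()   # scikit-learn's default is n_neighbors=5
    clf.fit(X_train[:n], y_train[:n])
    train_errors.append(1.0 - clf.score(X_train[:n], y_train[:n]))
    test_errors.append(1.0 - clf.score(X_test, y_test))

plt.plot(fractions, train_errors, label='train error')
plt.plot(fractions, test_errors, '--', label='test error')
plt.xlabel('fraction of training data used')
plt.ylabel('error')
plt.legend()
plt.show()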
17,666 | looking at the previous graphwe immediately see that adding more training data will not helpas the dashed line corresponding to the test error seems to stay above the only option we have is to decrease the complexity either by increasing or by reducing the feature space reducing the feature space does not help here we can easily confirm this by plotting the graph for simplified feature space of linkcount and numtexttokens refer to the following graphwe will get similar graphs for other smaller feature sets as well no matter what subset of features we takethe graph will look similar at least reducing the model complexity by increasing shows some positive impact this is illustrated in the following tablek mean(scoresstddev(scores |
17,667 | but this is not enoughand it comes at the price of lower classification runtime performance takefor instancethe value of where we have very low test error to classify new postwe need to find the nearest other posts to decide whether the new post is good one or notclearlywe seem to be facing an issue with using the nearest neighbor algorithm for our scenario it also has another real disadvantage over timewe will get more and more posts to our system as the nearest neighbor method is an instance-based approachwe will have to store all the posts in our system the more posts we getthe slower the prediction will be this is different with model-based approaches where you try to derive model from the data so here we arewith enough reasons now to abandon the nearest neighbor approach and look for better places in the classification world of coursewe will never know whether there is the one golden feature we just did not happen to think of but for nowlet' move on to another classification method that is known to work great in text-based classification scenarios using logistic regression contrary to its namelogistic regression is classification methodand is very powerful when it comes to text-based classification it achieves this by first performing regression on logistic functionhence the name |
17,668 | A bit of math with a small example

To get an initial understanding of the way logistic regression works, let us first take a look at the following example, where we have artificial feature values on the X axis plotted with the corresponding classes, either 0 or 1. As we can see, the data is so noisy that the classes overlap in a middle range of the feature values. Therefore, it is better to not directly model the discrete classes, but rather the probability that a feature value belongs to class 1, P(X). Once we possess such a model, we could then predict class 1 if P(X) > 0.5, or class 0 otherwise.

Mathematically, it is always difficult to model something that has a finite range, as is the case here with our discrete labels 0 and 1. We can, however, tweak the probabilities a bit so that they always stay between 0 and 1. For this, we will need the odds ratio and its logarithm.

Let's say, for example, that a feature has the probability of 0.9 that it belongs to class 1, that is, P(y=1) = 0.9. The odds ratio is then P(y=1)/P(y=0) = 0.9/0.1 = 9. We could say that the chance is 9:1 that this feature maps to class 1. If P(y=1) = 0.5, we would consequently have a 1:1 chance that the instance is of class 1. The odds ratio is bounded below by 0 but goes to infinity (the left graph in the following screenshot). If we now take the logarithm of it, we can map all probabilities between 0 and 1 to the full range from negative to positive infinity (the right graph in the following screenshot). The best part is that we still maintain the relationship that a higher probability leads to a higher log of odds; it is just no longer limited to 0 or 1.

17,669 | This means that we can now fit linear combinations of our features (OK, we have only one feature and a constant, but that will change soon) to the log(odds) values. Let's consider the linear equation from Getting Started with Python Machine Learning, shown as follows:

y = c0 + c1 * x

This can be replaced with the following equation (by replacing y with log(odds), that is, log(p/(1-p))):

log(pi / (1 - pi)) = c0 + c1 * xi

We can solve this equation for pi, as shown in the following formula:

pi = 1 / (1 + e^-(c0 + c1 * xi))

We simply have to find the right coefficients, such that the formula gives the lowest errors for all our pairs (xi, pi) in the dataset, and that is exactly what scikit-learn will do.

After fitting the data to the class labels, the formula will give the probability for every new data point, x, that it belongs to class 1. Refer to the following code:

from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
print(clf)
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, penalty='l2', tol=0.0001)
17,670 | clf fit(xyprint(np exp(clf intercept_)np exp(clf coef_ ravel()) def lr_model(clfx)return ( np exp(-(clf intercept_ clf coef_* ))print(" ( =- )= \tp( = )= "%(lr_model(clf- )lr_ model(clf )) ( =- )= ( = )= you might have noticed that scikit-learn exposes the first coefficient through the special field intercept_ if we plot the fitted modelwe see that it makes perfect sense given the dataapplying logistic regression to our postclassification problem admittedlythe example in the previous section was created to show the beauty of logistic regression how does it perform on the extremely noisy datacomparing it to the best nearest neighbour classifier ( as baselinewe see that it performs bit betterbut also won' change the situation whole lotmethod mean(scoresstddev(scoreslogreg = logreg = logreg = logreg = |
17,671 | method mean(scoresstddev(scoreslogreg = nn we have seen the accuracy for the different values of the regularization parameter with itwe can control the model complexitysimilar to the parameter for the nearest neighbor method smaller values for result in higher penaltythat isthey make the model more complex quick look at the bias-variance chart for our best candidatec shows that our model has high bias--test and train error curves approach closely but stay at unacceptably high values this indicates that logistic regression with the current feature space is under-fitting and cannot learn model that captures the data correctly so what nowwe switched the model and tuned it as much as we could with our current state of knowledgebut we still have no acceptable classifier it seems more and more that either the data is too noisy for this task or that our set of features is still not appropriate to discriminate the classes that are good enough |
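Accuracies for a whole range of C values, as in the table above, can be collected in one go with a grid search. A minimal sketch, assuming X and y as before; the candidate values are only examples (the ones used in the text are not reproduced here), and older scikit-learn versions expose GridSearchCV under sklearn.grid_search:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {'C': [0.001, 0.01, 0.1, 1.0, 10.0]}   # example candidates
grid = GridSearchCV(LogisticRegression(), param_grid, cv=10, scoring='accuracy')
grid.fit(X, y)

print("best C: %s" % grid.best_params_['C'])
print("best mean accuracy: %.3f" % grid.best_score_)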
17,672 | Looking behind accuracy: precision and recall

Let us step back and think again about what we are trying to achieve here. Actually, we do not need a classifier that perfectly predicts good and bad answers, as we have measured it until now using accuracy. If we can tune the classifier to be particularly good at predicting one class, we could adapt the feedback to the user accordingly. If we had a classifier, for example, that was always right when it predicted an answer to be bad, we would give no feedback until the classifier detected the answer to be bad. Conversely, if the classifier succeeded in always predicting answers to be good, we could show helpful comments to the user at the beginning and remove them when the classifier said that the answer is a good one.

To find out which situation we are in here, we have to understand how to measure precision and recall. To understand this, we have to look into the four distinct classification results, as they are described in the following table:

                              classified as positive     classified as negative
in reality it is positive     true positive (TP)         false negative (FN)
in reality it is negative     false positive (FP)        true negative (TN)

For instance, if the classifier predicts an instance to be positive and the instance indeed is positive in reality, this is a true positive instance. If, on the other hand, the classifier misclassified that instance, saying that it is negative while in reality it was positive, that instance is said to be a false negative.

What we want is a high success rate when we are predicting a post as either good or bad, but not necessarily both. That is, we want as many true positives as possible among our positive predictions. This is what precision captures:

precision = TP / (TP + FP)

17,673 | If instead our goal had been to detect as many good or bad answers as possible, we would be more interested in recall:

recall = TP / (TP + FN)

The next screenshot shows all the good answers and the answers that have been classified as being good ones. In terms of that diagram, precision is the fraction of the intersection relative to the right circle, while recall is the fraction of the intersection relative to the left circle.

So, how can we optimize for precision? Up to now, we have always used 0.5 as the threshold to decide whether an answer is good or not. What we can do now is count the number of TP, FP, and FN instances while varying that threshold between 0 and 1. With these counts, we can then plot precision over recall. The handy function precision_recall_curve() from the metrics module does all the calculations for us, as shown in the following code:

from sklearn.metrics import precision_recall_curve
precision, recall, thresholds = precision_recall_curve(y_test, clf.predict(x_test))
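To compare classifiers at a glance, we can plot the curve and summarize it with a single number. A minimal sketch, assuming y_test, x_test, and the fitted clf from above; we use the predicted probability of the positive class as the score, and average_precision_score as an AUC-like summary:

import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, average_precision_score

scores = clf.predict_proba(x_test)[:, 1]   # probability of the "good answer" class
precision, recall, thresholds = precision_recall_curve(y_test, scores)
auc = average_precision_score(y_test, scores)

plt.plot(recall, precision, label='P/R curve (AUC = %.2f)' % auc)
plt.xlabel('recall')
plt.ylabel('precision')
plt.legend()
plt.show()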
17,674 | predicting one class with acceptable performance does not always mean that the classifier will predict the other classes acceptably this can be seen in the following two graphs where we plot the precision/recall curves for classifying bad (left graph of the next screenshotand good (right graph of the next screenshotanswersin the previous graphswe have also included much better description of classifier' performancethe area under curve (aucthis can be understood as the average precision of the classifier and is great way of comparing different classifiers we see that we can basically forget about predicting bad answers (the left graph of the previous screenshotthis is because the precision for predicting bad answers decreases very quicklyat already very low recall valuesand stays at an unacceptably low percent predicting good answershowevershows that we can get above percent precision at recall of almost percent let us find out what threshold we need for that with the following codethresholds np hstack(([ ],thresholds[medium])idx precisions>= print(" = = thresh= (precision[idx ][ ]recall[idx ][ ]threshold[idx ][ ]) = = thresh= |
17,675 | Setting the threshold accordingly, we see that we can still achieve a precision above 80 percent when detecting good answers if we accept a low recall of roughly a third. This means that we would detect only about one in three good answers, but of those answers that we do manage to detect, we would be reasonably sure. To apply this threshold in the prediction process, we have to use predict_proba(), which returns per-class probabilities, instead of predict(), which returns the class itself:

thresh = thresholds[idx80][0]
probs_for_good = clf.predict_proba(answer_features)[:, 1]
answer_class = probs_for_good > thresh

We can confirm that we are in the desired precision/recall range using classification_report:

from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict_proba(x_test)[:, 1] > thresh,
                            target_names=['not accepted', 'accepted']))

                 precision    recall  f1-score   support
    not accepted       ...       ...       ...       ...
    accepted           ...       ...       ...       ...
    avg / total        ...       ...       ...       ...

Note that using this threshold will not guarantee that we are always above the precision and recall values that we determined previously together with it.
17,676 | Slimming the classifier. It is always worth looking at the actual contributions of the individual features. For logistic regression, we can directly take the learned coefficients (clf.coef_) to get an impression of each feature's impact. The higher the coefficient of a feature is, the more that feature plays a role in determining whether the post is good or not. Consequently, negative coefficients tell us that higher values for the corresponding features indicate a stronger signal for the post to be classified as bad. We see that LinkCount and NumExclams have the biggest impact on the overall classification decision, while NumImages and AvgSentLen play a rather minor role. While the feature importance overall makes sense intuitively, it is surprising that NumImages is basically ignored. Normally, answers containing images are always rated high. In reality, however, answers very rarely have images. So although in principle it is a very powerful feature, it is too sparse to be of any value. We could easily drop this feature and retain the same classification performance.
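If we want to inspect those coefficients programmatically, a small sketch like the following could be used; the feature name list is hypothetical and simply mirrors the features mentioned in this chapter, and clf is assumed to be the trained logistic regression instance.

import numpy as np

feature_names = ['LinkCount', 'NumTextTokens', 'NumCodeLines', 'AvgSentLen',
                 'AvgWordLen', 'NumAllCaps', 'NumExclams', 'NumImages']

coefs = clf.coef_.ravel()   # LogisticRegression stores one coefficient per feature
for name, coef in sorted(zip(feature_names, coefs), key=lambda nc: abs(nc[1]), reverse=True):
    print("%-15s %+.3f" % (name, coef))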
17,677 | Ship it! Let's assume we want to integrate this classifier into our site. What we definitely do not want is to train the classifier each time we start the classification service. Instead, we can simply serialize the classifier after training and then deserialize it on that site:

import pickle
pickle.dump(clf, open("logreg.dat", "wb"))
clf = pickle.load(open("logreg.dat", "rb"))

Congratulations, the classifier is now ready to be used as if it had just been trained. Summary. We made it! For a very noisy dataset, we built a classifier that suits part of our goal. Of course, we had to be pragmatic and adapt our initial goal to what was achievable. But on the way we learned about the strengths and weaknesses of the nearest neighbor and logistic regression algorithms. We learned how to extract features, such as LinkCount, NumTextTokens, NumCodeLines, AvgSentLen, AvgWordLen, NumAllCaps, NumExclams, and NumImages, and how to analyze their impact on the classifier's performance. But what is even more valuable is that we learned an informed way of debugging badly performing classifiers. This will help us in the future to come up with usable systems much faster. After having looked into the nearest neighbor and logistic regression algorithms, in the next chapter we will get familiar with yet another simple yet powerful classification algorithm: Naive Bayes. Along the way, we will also learn how to use some more convenient tools from scikit-learn.
17,678 | Sentiment analysis. For companies, it is vital to closely monitor the public reception of key events, such as product launches or press releases. With real-time access and easy accessibility of user-generated content on Twitter, it is now possible to do sentiment classification of tweets. Sometimes also called opinion mining, it is an active field of research in which several companies are already selling their products. As this shows that a market obviously exists, we have motivation to use our classification muscles, built in the previous chapter, to build our own home-grown sentiment classifier. Sketching our roadmap. Sentiment analysis of tweets is particularly hard because of Twitter's size limitation of 140 characters. This leads to a special syntax, creative abbreviations, and seldom well-formed sentences. The typical approach of analyzing sentences, aggregating their sentiment information per paragraph, and then calculating the overall sentiment of a document therefore does not work here. Clearly, we will not try to build a state-of-the-art sentiment classifier. Instead, we want to:
- Use this scenario as a vehicle to introduce yet another classification algorithm: Naive Bayes
- Explain how part of speech (POS) tagging works and how it can help us
- Show some more tricks from the scikit-learn toolbox that come in handy from time to time
17,679 | Fetching the Twitter data. Naturally, we need tweets and their corresponding labels that tell us whether a tweet contains a positive, negative, or neutral sentiment. In this chapter, we will use the corpus from Niek Sanders, who has done an awesome job of manually labeling several thousand tweets and has granted us permission to use it in this chapter. To comply with Twitter's terms of services, we will not provide any data from Twitter nor show any real tweets in this chapter. Instead, we can use Sanders' hand-labeled data, which contains the tweet IDs and their hand-labeled sentiment, and use his script, install.py, to fetch the corresponding Twitter data. As the script is playing nicely with Twitter's servers, it will take quite some time to download all the data for that many tweets, so it is a good idea to start it now. The data comes with four sentiment labels:

X, Y = load_sanders_data()
classes = np.unique(Y)
for c in classes:
    print("#%s: %i" % (c, sum(Y == c)))
#irrelevant: ...
#negative: ...
#neutral: ...
#positive: ...

We will treat irrelevant and neutral labels together and ignore all non-English tweets, resulting in a smaller set of tweets. These can be easily filtered using the data provided by Twitter. Introducing the Naive Bayes classifier. Naive Bayes is probably one of the most elegant machine learning algorithms out there that is of practical use. Despite its name, it is not that naive when you look at its classification performance. It proves to be quite robust to irrelevant features, which it kindly ignores. It learns fast and predicts equally fast, and it does not require lots of storage. So, why is it then called naive? The "naive" was added to account for one assumption that is required for Bayes to work optimally: all features must be independent of each other. This, however, is rarely the case for real-world applications. Nevertheless, it still returns very good accuracy in practice, even when the independence assumption does not hold.
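The chapter does not show the language filtering itself; the following is only a hedged sketch of how it could be done, assuming the fetching script also stored Twitter's "lang" metadata field for every tweet. The toy arrays and the "lang" handling are assumptions, not part of the original code.

import numpy as np

# toy stand-ins: in reality these would come from the fetching script
tweets = np.array(["great phone!", "que telefono tan malo", "meh"])
labels = np.array(["positive", "negative", "neutral"])
langs = np.array(["en", "es", "en"])     # hypothetical per-tweet "lang" metadata from Twitter

english = (langs == "en")
X, Y = tweets[english], labels[english]
print("kept %i English tweets out of %i" % (english.sum(), len(english)))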
17,680 | Getting to know the Bayes theorem. At its core, Naive Bayes classification is nothing more than keeping track of which feature gives evidence to which class. To ease our understanding, let us assume the following meanings for the variables that we will use to explain Naive Bayes:

    Variable   Possible values           Meaning
    C          "pos", "neg"              Class of a tweet (positive or negative)
    F1         Non-negative integers     Counts the occurrences of "awesome" in the tweet
    F2         Non-negative integers     Counts the occurrences of "crazy" in the tweet

During training, we learn the Naive Bayes model, which is the probability for a class C when we already know the features F1 and F2. This probability is written as P(C|F1, F2). Since we cannot estimate this directly, we apply a trick, which was found out by Bayes:

    P(A|B) = P(B|A) * P(A) / P(B)

If we substitute A with the probability of both features F1 and F2 occurring and think of B as being our class C, we arrive at the relationship that helps us to later retrieve the probability for a data instance belonging to the specified class:

    P(F1, F2|C) = P(C|F1, F2) * P(F1, F2) / P(C)

This allows us to express P(C|F1, F2) by means of the other probabilities:

    P(C|F1, F2) = P(C) * P(F1, F2|C) / P(F1, F2)

We could also say that:

    posterior = prior * likelihood / evidence
17,681 | The prior and the evidence values are easily determined. P(C) is the prior probability of class C without knowing anything about the data. This quantity can be obtained by simply calculating the fraction of all training data instances belonging to that particular class. P(F1, F2) is the evidence, or the probability of the features F1 and F2. This can be retrieved by calculating the fraction of all training data instances having that particular feature value combination. The tricky part is the calculation of the likelihood P(F1, F2|C). It is the value describing how likely it is to see the feature values F1 and F2 if we know that the class of the data instance is C. To estimate this, we need a bit more thinking. Being naive. From probability theory, we also know the following relationship:

    P(F1, F2|C) = P(F1|C) * P(F2|C, F1)

This alone, however, does not help much, since we treat one difficult problem (estimating P(F1, F2|C)) with another one (estimating P(F2|C, F1)). However, if we naively assume that F1 and F2 are independent of each other, P(F2|C, F1) simplifies to P(F2|C) and we can write it as follows:

    P(F1, F2|C) = P(F1|C) * P(F2|C)

Putting everything together, we get this quite manageable formula:

    P(C|F1, F2) = P(C) * P(F1|C) * P(F2|C) / P(F1, F2)

The interesting thing is that, although it is not theoretically correct to simply tweak our assumptions when we are in the mood to do so, in this case it proves to work astonishingly well in real-world applications.
17,682 | Using Naive Bayes to classify. Given a new tweet, the only part left is to calculate the probabilities P(C="pos"|F1, F2) and P(C="neg"|F1, F2) and choose the class having the higher probability. As for both classes the denominator, P(F1, F2), is the same, we can simply ignore it without changing the winner class. Note, however, that we then don't calculate any real probabilities any more. Instead, we are estimating which class is more likely given the evidence. This is another reason why Naive Bayes is so robust: it is not so much interested in the real probabilities, but only in the information of which class is more likely. In short, we can write it as follows:

    c_best = argmax over c in {"pos", "neg"} of  P(C=c) * P(F1|C=c) * P(F2|C=c)

Here we are calculating the part after argmax for all classes of C ("pos" and "neg" in our case) and returning the class that results in the highest value. But for the following example, let us stick to real probabilities and do some calculations to see how Naive Bayes works. For the sake of simplicity, we will assume that Twitter allows only the two words mentioned earlier, "awesome" and "crazy", and that we had already manually classified a handful of tweets:

    Tweet             Class
    "awesome"         Positive
    "awesome"         Positive
    "awesome crazy"   Positive
    "crazy"           Positive
    "crazy"           Negative
    "crazy"           Negative
17,683 | In this case, we have six total tweets, out of which four are positive and two negative, which results in the following priors:

    P(C="pos") = 4/6 ≈ 0.67
    P(C="neg") = 2/6 ≈ 0.33

This means that, without knowing anything about the tweet itself, we would be wise to assume the tweet to be positive. The piece that is still missing is the calculation of the probabilities for the two features F1 and F2 conditioned on class C, which are the likelihoods. They are calculated as the number of tweets in which we have seen the concrete feature value, divided by the number of tweets that have been labeled with the class C. Let's say we want to know the probability of seeing "awesome" occurring once in a tweet, knowing that its class is positive; we would have the following:

    P(F1=1|C="pos") = 3/4 = 0.75

since out of the four positive tweets, three contained the word "awesome". Obviously, the probability for not having "awesome" in a positive tweet is its inverse, as we have seen only tweets with the counts 0 or 1:

    P(F1=0|C="pos") = 1/4 = 0.25

Similarly for the rest (omitting the case that a word is not occurring in a tweet):

    P(F2=1|C="pos") = 2/4 = 0.5
    P(F1=1|C="neg") = 0/2 = 0
    P(F2=1|C="neg") = 2/2 = 1

For the sake of completeness, we will also compute the evidence so that we can see real probabilities in the following example tweets. For two concrete values of F1 and F2, we can calculate the evidence as follows:

    P(F1, F2) = P(F1, F2|C="pos") * P(C="pos") + P(F1, F2|C="neg") * P(C="neg")
17,684 | Evaluating this for the feature combinations that occur in our example tweets gives us the evidence values we need (the sketch after the following table computes them explicitly). Now we have all the data to classify new tweets. The only work left is to parse the tweet and extract its features:

    Tweet             Class probabilities                                   Classification
    "awesome"         P(C="pos"|F1=1, F2=0) vs. P(C="neg"|F1=1, F2=0)       Positive
    "crazy"           P(C="pos"|F1=0, F2=1) vs. P(C="neg"|F1=0, F2=1)       Negative
    "awesome crazy"   P(C="pos"|F1=1, F2=1) vs. P(C="neg"|F1=1, F2=1)       Positive
    "awesome text"    Undefined, because we have never seen the word        Undefined
                      "text" in the training corpus

So far, so good. The classification of trivial tweets makes sense, except for the last one, which results in a division by zero. How can we handle that?
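To make the toy example concrete, here is a small sketch that is not from the book: it computes the priors, likelihoods, and posteriors for the six hand-labeled tweets above, assuming the evidence is expanded with the same naive factorization that is used for the likelihoods.

# each toy tweet as (count of "awesome", count of "crazy", class)
tweets = [(1, 0, "pos"), (1, 0, "pos"), (1, 1, "pos"),
          (0, 1, "pos"), (0, 1, "neg"), (0, 1, "neg")]

def prior(c):
    return sum(1 for t in tweets if t[2] == c) / float(len(tweets))

def likelihood(f_idx, f_val, c):
    in_class = [t for t in tweets if t[2] == c]
    return sum(1 for t in in_class if t[f_idx] == f_val) / float(len(in_class))

def posterior(f1, f2, c):
    # P(C|F1,F2) = P(C) * P(F1|C) * P(F2|C) / P(F1,F2)
    evidence = sum(prior(k) * likelihood(0, f1, k) * likelihood(1, f2, k)
                   for k in ("pos", "neg"))
    return prior(c) * likelihood(0, f1, c) * likelihood(1, f2, c) / evidence

for f1, f2, name in [(1, 0, '"awesome"'), (0, 1, '"crazy"'), (1, 1, '"awesome crazy"')]:
    print("%-17s P(pos)=%.2f  P(neg)=%.2f" % (name,
                                              posterior(f1, f2, "pos"),
                                              posterior(f1, f2, "neg")))

For the tweet "awesome text", the same computation would divide by zero, because the word "text" never occurs in the training data, which is exactly the problem that smoothing addresses next.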
17,685 | Accounting for unseen words and other oddities. When we calculated the preceding probabilities, we actually cheated ourselves. We were not calculating the real probabilities, but only rough approximations by means of the fractions. We assumed that the training corpus would tell us the whole truth about the real probabilities. It did not. A corpus of only six tweets obviously cannot give us all the information about every tweet that has ever been written. For example, there certainly are tweets containing the word "text"; it is just that we have never seen them. Apparently, our approximation is very rough, so we should account for that. This is often done in practice with "add-one smoothing".

Add-one smoothing is sometimes also referred to as additive smoothing or Laplace smoothing. Note that Laplace smoothing has nothing to do with Laplacian smoothing, which is related to the smoothing of polygon meshes. If we do not smooth by one but by an adjustable parameter alpha greater than zero, it is called Lidstone smoothing.

It is a very simple technique: we simply add one to all counts. It has the underlying assumption that even if we have not seen a given word in the whole corpus, there is still a chance that our sample of tweets happened to not include that word. So, with add-one smoothing, we pretend that we have seen every occurrence once more than we actually did. That means that instead of calculating the following:

    P(F1=1|C="pos") = 3/4 = 0.75

we now calculate:

    P(F1=1|C="pos") = (3 + 1) / (4 + 2) ≈ 0.67

Why do we add 2 in the denominator? We have to make sure that the end result is again a probability. Therefore, we have to normalize the counts so that all probabilities sum up to one. As in our current dataset "awesome" can occur either zero or one time, we have two cases. And indeed, we get 1 as the total probability:

    P(F1=1|C="pos") + P(F1=0|C="pos") = (3 + 1) / (4 + 2) + (1 + 1) / (4 + 2) = 1
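Continuing the toy sketch from before (again, not code from the book), the smoothed likelihood could be written like this:

# the six toy tweets as (count of "awesome", count of "crazy", class)
tweets = [(1, 0, "pos"), (1, 0, "pos"), (1, 1, "pos"),
          (0, 1, "pos"), (0, 1, "neg"), (0, 1, "neg")]

def smoothed_likelihood(f_idx, f_val, c, n_values=2):
    # add-one (Laplace) smoothing: add 1 to the count and add the number of possible
    # feature values (here 2: the word occurs once or not at all) to the denominator
    in_class = [t for t in tweets if t[2] == c]
    count = sum(1 for t in in_class if t[f_idx] == f_val)
    return (count + 1) / float(len(in_class) + n_values)

print(smoothed_likelihood(0, 1, "pos"))   # (3+1)/(4+2) = 0.67
print(smoothed_likelihood(0, 0, "pos"))   # (1+1)/(4+2) = 0.33 -> the two sum to 1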
17,686 | Similarly, we do this for the prior probabilities, where the 2 in the denominator now accounts for the two classes:

    P(C="pos") = (4 + 1) / (6 + 2) = 0.625

Accounting for arithmetic underflows. There is yet another roadblock. In reality, we work with probabilities much smaller than the ones we have dealt with in the toy example. In reality, we also have more than two features, which we multiply with each other. This will quickly lead to the point where the accuracy provided by NumPy does not suffice anymore:

import numpy as np
np.set_printoptions(precision=20)  # tell numpy to print out more digits (default is 8)
np.array([2.48e-324])
# array([ 4.94065645841246544177e-324])
np.array([2.47e-324])
# array([ 0.])

So, how probable is it that we will ever hit a number like 2.47e-324? To answer this, we just have to imagine a likelihood for the conditional probabilities of 0.00001 and then multiply 65 of them together (meaning that we have 65 low-probability feature values), and you've been hit by the arithmetic underflow:

x = 0.00001
x**64  # still fine
x**65  # ouch: 0.0

A float in Python is typically implemented using a double in C. To find out whether this is the case for your platform, you can check it as follows:

import sys
sys.float_info
# sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308,
#                min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307,
#                dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)

To mitigate this, you could switch to math libraries such as mpmath (http://code.google.com/p/mpmath/) that allow arbitrary accuracy. However, they are not fast enough to work as a NumPy replacement.
17,687 | Fortunately, there is a better way to take care of this, and it has to do with a nice relationship that we may still know from school:

    log(x * y) = log(x) + log(y)

If we apply it to our case, we get the following:

    log(P(C) * P(F1|C) * P(F2|C)) = log P(C) + log P(F1|C) + log P(F2|C)

As the probabilities are in the interval between 0 and 1, the log of the probabilities lies in the interval between minus infinity and 0. Don't be irritated by that: higher numbers are still a stronger indicator for the correct class; it is only that they are negative now. There is one caveat, though: we actually don't have the log in the formula's numerator (the part preceding the fraction); we only have the product of the probabilities. In our case, luckily, we are not interested in the actual values of the probabilities. We simply want to know which class has the highest posterior probability. We are lucky, because if we find

    c_best = argmax over c of  P(C=c) * P(F1|C=c) * P(F2|C=c)

then we also have the following:

    c_best = argmax over c of  (log P(C=c) + log P(F1|C=c) + log P(F2|C=c))
17,688 | A quick look at the previous graph shows that the curve never goes down when we go from left to right. In short, applying the logarithm does not change the highest value. So, let us stick this into the formula we used earlier; this gives us the formula for two features that will retrieve the best class for the real-world data that we will see in practice:

    c_best = argmax over c in C of  (log P(C=c) + log P(F1|C=c) + log P(F2|C=c))

Of course, we will not be very successful with only two features, so let us rewrite it to allow an arbitrary number of features:

    c_best = argmax over c in C of  (log P(C=c) + sum over k of log P(Fk|C=c))

There we are, ready to use our first classifier from the scikit-learn toolkit. Creating our first classifier and tuning it. The Naive Bayes classifiers reside in the sklearn.naive_bayes package. There are different kinds of Naive Bayes classifiers:
- GaussianNB: This assumes the features to be normally distributed (Gaussian). One use case for it could be the classification of sex given the height and weight of a person. In our case, we are given tweet texts from which we extract word counts; these are clearly not Gaussian distributed.
- MultinomialNB: This assumes the features to be occurrence counts, which is relevant to us since we will be using word counts in the tweets as features. In practice, this classifier also works well with TF-IDF vectors.
- BernoulliNB: This is similar to MultinomialNB, but more suited when using binary word occurrences and not word counts.

As we will mainly look at word counts, for our purpose MultinomialNB is best suited.
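As a quick, hedged illustration that is not from the book, MultinomialNB can already be exercised on the toy word counts from the earlier example; alpha=1.0 corresponds to the add-one smoothing discussed above. Note that MultinomialNB models the word counts themselves, so its numbers will not match the hand calculation exactly.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

# columns: counts of "awesome" and "crazy" in the six toy tweets
X_toy = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 1]])
y_toy = np.array(["pos", "pos", "pos", "pos", "neg", "neg"])

clf = MultinomialNB(alpha=1.0)   # alpha=1.0 is add-one smoothing
clf.fit(X_toy, y_toy)

print(clf.classes_)
print(clf.predict(np.array([[1, 1]])))          # the "awesome crazy" tweet
print(clf.predict_proba(np.array([[1, 1]])))    # per-class probabilities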
17,689 | Solving an easy problem first. As we have seen when we looked at our tweet data, the tweets are not just positive or negative. The majority of tweets actually do not contain any sentiment, but are neutral or irrelevant, containing, for instance, raw information ("New book: Building Machine Learning ..."). To avoid complicating the task too much, let us for now only focus on the positive and negative tweets:

pos_neg_idx = np.logical_or(Y == "positive", Y == "negative")
X = X[pos_neg_idx]
Y = Y[pos_neg_idx]
Y = Y == "positive"

Now, we have in X the raw tweet texts and in Y the binary classification; we assign False for negative and True for positive tweets. As we have learned in the chapters before, we can construct a TfidfVectorizer to convert the raw tweet text into TF-IDF feature values, which we then use together with the labels to train our first classifier. For convenience, we will use the Pipeline class, which allows us to join the vectorizer and the classifier together and provides the same interface:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

def create_ngram_model():
    tfidf_ngrams = TfidfVectorizer(ngram_range=(1, 3),
                                   analyzer="word", binary=False)
    clf = MultinomialNB()
    pipeline = Pipeline([('vect', tfidf_ngrams), ('clf', clf)])
    return pipeline

The Pipeline instance returned by create_ngram_model() can now be used for fit() and predict() as if we had a normal classifier. Since we do not have that much data, we should do cross-validation. This time, however, we will not use KFold, which partitions the data into consecutive folds; instead, we use ShuffleSplit. This shuffles the data for us, but does not prevent the same data instance from being in multiple folds. For each fold, we then keep track of the area under the precision-recall curve and the accuracy.
17,690 | To keep our experimentation agile, let us wrap everything together in a train_model() function, which takes a function as a parameter that creates the classifier:

from sklearn.metrics import precision_recall_curve, auc
from sklearn.cross_validation import ShuffleSplit

def train_model(clf_factory, X, Y):
    # setting random_state to get deterministic behavior
    cv = ShuffleSplit(n=len(X), n_iter=10, test_size=0.3,
                      indices=True, random_state=0)

    scores = []
    pr_scores = []

    for train, test in cv:
        X_train, y_train = X[train], Y[train]
        X_test, y_test = X[test], Y[test]

        clf = clf_factory()
        clf.fit(X_train, y_train)

        train_score = clf.score(X_train, y_train)
        test_score = clf.score(X_test, y_test)

        scores.append(test_score)
        proba = clf.predict_proba(X_test)

        precision, recall, pr_thresholds = precision_recall_curve(y_test, proba[:, 1])
        pr_scores.append(auc(recall, precision))

    summary = (np.mean(scores), np.std(scores),
               np.mean(pr_scores), np.std(pr_scores))
    print("%.3f\t%.3f\t%.3f\t%.3f" % summary)

X, Y = load_sanders_data()
pos_neg_idx = np.logical_or(Y == "positive", Y == "negative")
X = X[pos_neg_idx]
Y = Y[pos_neg_idx]
Y = Y == "positive"

train_model(create_ngram_model, X, Y)
17,691 | With our first try of using Naive Bayes on the vectorized TF-IDF trigram features, we already get encouraging accuracy and P/R AUC values. Looking at the P/R chart shown in the following screenshot, it shows a much more encouraging behavior than the plots we saw in the previous chapter. For the first time, the results are quite encouraging. They get even more impressive when we realize that 100 percent accuracy is probably never achievable in a sentiment classification task: for some tweets, even humans often do not agree on the same classification label. Using all the classes. But again, we simplified our task a bit, since we used only positive or negative tweets. That means we assumed a perfect classifier that classified upfront whether a tweet contains sentiment at all and forwarded that to our Naive Bayes classifier. So, how well do we perform if we also classify whether a tweet contains any sentiment at all? To find that out, let us first write a convenience function that returns a modified class array, given a list of sentiments that we would like to interpret as positive:

def tweak_labels(Y, pos_sent_list):
    pos = Y == pos_sent_list[0]
    for sent_label in pos_sent_list[1:]:
        pos |= Y == sent_label
17,692 |
    Y = np.zeros(Y.shape[0])
    Y[pos] = 1
    Y = Y.astype(int)
    return Y

Note that we are talking about two different positives now. The sentiment of a tweet can be positive, which is to be distinguished from the class of the training data. If, for example, we want to find out how well we can separate the tweets having sentiment from the neutral ones, we could do this as follows:

Y = tweak_labels(Y, ["positive", "negative"])

In Y we now have a 1 (positive class) for all tweets that are either positive or negative and a 0 (negative class) for neutral and irrelevant ones.

train_model(create_ngram_model, X, Y, plot=True)

As expected, the P/R AUC drops considerably. The accuracy is still high, but that is only due to the fact that we have a highly imbalanced dataset: only a small fraction of the total tweets are either positive or negative. This means that if we created a classifier that always classified a tweet as not containing any sentiment, we would already achieve a deceptively high accuracy. This is another example of why you should always look at precision and recall if the training and test data is unbalanced.
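As a small sanity check that is not in the original text, the majority-class baseline that the paragraph above alludes to can be computed directly; it assumes Y is the 0/1 array produced by tweak_labels.

import numpy as np

baseline = max(np.mean(Y == 0), np.mean(Y == 1))
print("accuracy of always predicting the majority class: %.2f" % baseline)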
17,693 | So, how would the Naive Bayes classifier perform on classifying positive tweets versus the rest, and negative tweets versus the rest? In one word: bad.

== Pos vs. rest ==
== Neg vs. rest ==

Pretty unusable, if you ask me. Looking at the P/R curves shown in the following screenshots, we also find no usable precision/recall tradeoff, as we were able to do in the previous chapter. Tuning the classifier's parameters. Certainly, we have not explored the current setup enough and should investigate more. There are roughly two areas where we could play with the knobs: TfidfVectorizer and MultinomialNB. As we have no real intuition as to which area we should explore, let us try to distribute the parameters' values.

TfidfVectorizer:
- Use different settings for n-grams: unigrams (1, 1), bigrams (1, 2), and trigrams (1, 3)
- Play with min_df: 1 or 2
- Explore the impact of IDF within TF-IDF by setting use_idf and smooth_idf to False or True
- Play with the idea of whether to remove stop words or not by setting stop_words to "english" or None
17,694 | - Experiment with whether or not to use the logarithm of the word counts (sublinear_tf)
- Experiment with whether to track word counts or simply track whether words occur or not by setting binary to True or False

MultinomialNB:
- Decide which smoothing method to use by setting alpha: add-one or Laplace smoothing (1), Lidstone smoothing (for example 0.01, 0.05, 0.1, or 0.5), or no smoothing (0)

A simple approach could be to train a classifier for each of those exploration values while keeping the other parameters constant and checking the classifier's results. As we do not know whether those parameters affect each other, doing it right would require that we train a classifier for every possible combination of all parameter values. Obviously, this is too tedious for us to do by hand. Because this kind of parameter exploration occurs frequently in machine learning tasks, scikit-learn has a dedicated class for it called GridSearchCV. It takes an estimator (an instance with a classifier-like interface), which would be the Pipeline instance in our case, and a dictionary of parameters with their potential values. GridSearchCV expects the dictionary's keys to obey a certain format so that it is able to set the parameters of the correct estimator. The format is as follows:

    <estimator>__<subestimator>__...__<param_name>

Now, if we want to specify the desired values to explore for the ngram_range parameter of TfidfVectorizer (named vect in the Pipeline description), we would have to say:

param_grid = {"vect__ngram_range": [(1, 1), (1, 2), (1, 3)]}

This would tell GridSearchCV to try out unigrams, bigrams, and trigrams as parameter values for the ngram_range parameter of TfidfVectorizer. Then it trains the estimator with all possible parameter/value combinations. Finally, it provides the best estimator in the form of the member variable best_estimator_. As we want to compare the returned best classifier with our current best one, we need to evaluate it the same way. Therefore, we can pass the ShuffleSplit instance using the cv parameter (this is the reason CV is present in GridSearchCV).
17,695 | The only missing thing is to define how GridSearchCV should determine the best estimator. This can be done by providing the desired score function to (surprise!) the score_func parameter. We could either write one ourselves or pick one from the sklearn.metrics package. We should certainly not use accuracy as the metric because of our class imbalance (we have a lot fewer tweets containing sentiment than neutral ones). Instead, we want to have good precision and recall on both classes: the tweets with sentiment and the tweets without positive or negative opinions. One metric that combines both precision and recall is the F-measure, which is implemented as metrics.f1_score. Putting everything together, we get the following code:

from sklearn.grid_search import GridSearchCV
from sklearn.metrics import f1_score

def grid_search_model(clf_factory, X, Y):
    cv = ShuffleSplit(n=len(X), n_iter=10, test_size=0.3,
                      indices=True, random_state=0)

    param_grid = dict(vect__ngram_range=[(1, 1), (1, 2), (1, 3)],
                      vect__min_df=[1, 2],
                      vect__stop_words=[None, "english"],
                      vect__smooth_idf=[False, True],
                      vect__use_idf=[False, True],
                      vect__sublinear_tf=[False, True],
                      vect__binary=[False, True],
                      clf__alpha=[0, 0.01, 0.05, 0.1, 0.5, 1],
                      )

    grid_search = GridSearchCV(clf_factory(),
                               param_grid=param_grid,
                               cv=cv,
                               score_func=f1_score,
                               verbose=10)
    grid_search.fit(X, Y)

    return grid_search.best_estimator_
17,696 | We have to be patient when executing the following code:

clf = grid_search_model(create_ngram_model, X, Y)
print(clf)

This is because we have just requested a parameter sweep over all the parameter combinations, each being trained on 10 folds:

... waiting some hours ...

Pipeline(clf=MultinomialNB(alpha=0.01, class_weight=None, fit_prior=True),
         clf__alpha=0.01, clf__class_weight=None, clf__fit_prior=True,
         vect=TfidfVectorizer(analyzer=word, binary=False, charset=utf-8,
             charset_error=strict, dtype=<type 'long'>, input=content,
             lowercase=True, max_df=1.0, max_features=None, max_n=None,
             min_df=1, min_n=None, ngram_range=(1, 2), norm=l2,
             preprocessor=None, smooth_idf=False, stop_words=None,
             strip_accents=None, sublinear_tf=True,
             token_pattern=(?u)\b\w\w+\b, token_processor=None,
             tokenizer=None, use_idf=False, vocabulary=None),
         vect__analyzer=word, vect__binary=False, vect__charset=utf-8,
         vect__charset_error=strict, vect__dtype=<type 'long'>,
         vect__input=content, vect__lowercase=True, vect__max_df=1.0,
         vect__max_features=None, vect__max_n=None, vect__min_df=1,
         vect__min_n=None, vect__ngram_range=(1, 2), vect__norm=l2,
         vect__preprocessor=None, vect__smooth_idf=False,
         vect__stop_words=None, vect__strip_accents=None,
         vect__sublinear_tf=True, vect__token_pattern=(?u)\b\w\w+\b,
         vect__token_processor=None, vect__tokenizer=None,
         vect__use_idf=False, vect__vocabulary=None)

The best estimator indeed improves the P/R AUC noticeably with the settings that were printed above.
17,697 | The devastating results for positive tweets against the rest and negative tweets against the rest will improve if we configure the vectorizer and the classifier with those parameters that we have just found out:

== Pos vs. rest ==
== Neg vs. rest ==

Indeed, the P/R curves look much better (note that the graphs are from the median fold of the cross-validation classifiers, and thus have slightly diverging AUC values). Nevertheless, we probably still wouldn't use those classifiers. Time for something completely different! Cleaning tweets. New constraints lead to new forms, and Twitter is no exception in this regard. Because the text has to fit into 140 characters, people naturally develop new language shortcuts to say the same in fewer characters. So far, we have ignored all the diverse emoticons and abbreviations. Let's see how much we can improve by taking them into account. For this endeavor, we will have to provide our own preprocessor() to TfidfVectorizer.
17,698 | First, we define a range of frequent emoticons and their replacements in a dictionary. Although we could find more distinct replacements, we go with obvious positive or negative words to help the classifier:

emo_repl = {
    # positive emoticons
    "<3": " good ",
    ":d": " good ",    # :D in lower case
    ":dd": " good ",   # :DD in lower case
    "8)": " good ",
    ":-)": " good ",
    ":)": " good ",
    ";)": " good ",
    "(-:": " good ",
    "(:": " good ",

    # negative emoticons
    ":/": " bad ",
    ":>": " sad ",
    ":')": " sad ",
    ":-(": " bad ",
    ":(": " bad ",
    ":s": " bad ",
    ":-s": " bad ",
}

# make sure that, for example, :dd is replaced before :d
emo_repl_order = [k for (k_len, k) in reversed(
    sorted([(len(k), k) for k in emo_repl.keys()]))]

Then, we define abbreviations as regular expressions together with their expansions (\b marks the word boundary):

re_repl = {
    r"\br\b": "are",
    r"\bu\b": "you",
    r"\bhaha\b": "ha",
    r"\bhahaha\b": "ha",
    r"\bdon't\b": "do not",
    r"\bdoesn't\b": "does not",
    r"\bdidn't\b": "did not",
    r"\bhasn't\b": "has not",
    r"\bhaven't\b": "have not",
    r"\bhadn't\b": "had not",
    r"\bwon't\b": "will not",
17,699 | "\bwouldn' \ ""would not" "\bcan' \ ""can not" "\bcannot\ ""can not"def create_ngram_model(params=none)def preprocessor(tweet)global emoticons_replaced tweet tweet lower(#return tweet lower(for in emo_repl_ordertweet tweet replace(kemo_repl[ ]for rrepl in re_repl iteritems()tweet re sub(rrepltweetreturn tweet tfidf_ngrams tfidfvectorizer(preprocessor=preprocessoranalyzer="word"certainlythere are many more abbreviations that could be used here but already with this limited setwe get an improvement for sentiment versus not sentiment of half pointwhich comes to percent=pos vs neg = =pos/neg vs irrelevant/neutral = =pos vs rest = =neg vs rest = taking the word types into account so far our hope was to simply use the words independent of each other with the hope that bag-of-words approach would suffice just from our intuitionhoweverneutral tweets probably contain higher fraction of nounswhile positive or negative tweets are more colorfulrequiring more adjectives and verbs what if we could use this linguistic information of the tweets as wellif we could find out how many words in tweet were nounsverbsadjectivesand so onthe classifier could maybe take that into account as well |