[Figures: the Python prompt; the Python interpreter; Anaconda; Jupyter Notebook; the Spyder software; the PyCharm software]
Reserved Words

These words have a pre-defined significance in the Python programming language. It should be noted that reserved words may not be used as constants or as any other identifier names. Examples of reserved words include if, else, for, and break.

Whitespace

A line containing only whitespace, possibly with a comment, is known as a blank line, and the Python interpreter totally ignores it. Whitespace is the term used in Python to describe blanks, tabs, newline characters, and comments. Example: net_profit = gross_profit - expenses. A whitespace character is not required between net_profit and =, or between = and gross_profit, although we are free to include some to increase readability.

Input and Output in Python

The most fundamental things in a programming language are input to the program and output from the program. Input in a Python program is taken using the input() function, and output is displayed using the print() function. The string which needs to be printed is written in double quotes in the print() function; the command print("Hello, World") hence prints Hello, World on the screen, as shown in the above figures for the different software. An identifier inside double quotes will be printed by Python as the name of the identifier rather than the value of the identifier. Example: print("The total amount is amount") will print "The total amount is amount" on the screen. But if we need to print the value of the identifier, we need to write the identifier name without double quotes. Example: if we write print("The total amount is", amount), then the screen will print "The total amount is" followed by the value of amount. It should be noted that the string and the identifier are concatenated with a comma (,).

It should be noted that the input() function accepts only string data.

#Program to show use of only the input() function
x = input("Enter the number: ")
print("The number is", x, "and class is", type(x))

In [ ]: runfile(...)
Enter the number: ...
The number is ... and class is <class 'str'>

Explanation: In the above program, the user is asked to enter a number using only the input() function. Since the input() function only considers strings as input, it converts all input into strings. Here the user enters a number, but the input() function converts it into a string; hence, when we print the data type of x using the type() function, <class 'str'> is printed. The effective use of the input() function alone is only when the user wants to enter string data.
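For illustration, assuming the user types 25, a minimal sketch of such a run:

#Illustrative run with an assumed input of 25: input() always returns a string
x = input("Enter the number: ")                     #user types 25; x becomes the string "25"
print("The number is", x, "and class is", type(x))
#Prints: The number is 25 and class is <class 'str'>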
This is demonstrated in the following program.

#Program to enter a string using the input() function
username = input("Enter your name: ")
print("Hello", username)

In [ ]: runfile(...)
Enter your name: India
Hello India

Explanation: In the above program, the user is prompted to enter a name, which is stored in the variable username; hence, when username is printed, the name is printed. The above program illustrates the use of the input() function to input a string. However, if the user wants to enter numeric data, there is a need to use other functions along with the input() function. Python accepts numeric input from the user in the form of an integer or a float number using the int() and float() functions respectively. This is demonstrated in the following example.

#Program to show use of int() and float() functions
amount = int(input("Enter the amount: "))
print("The amount is:", amount)
profit = float(input("Enter the profit: "))
print("The profit is:", profit)

Explanation: After displaying the string "Enter the amount", the program's execution stops and waits for the user to type some text and then press the Enter key. The string produced by the input() function is passed to the int() function, which produces a value to assign to the variable amount. The command amount = int(input("Enter the amount: ")) hence prompts the user to enter an integer value and stores it in the identifier amount; the print command then prints the value of amount. The command profit = float(input("Enter the profit: ")) prompts the user to enter a decimal value for the profit and stores it in the identifier profit; the print command then prints the value of profit. The program was executed twice, and the two runs are discussed below.
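For illustration, assuming inputs of 500 and 24.5 in the first run and 500 and 24 in the second, the two runs would look like this:

Enter the amount: 500
The amount is: 500
Enter the profit: 24.5
The profit is: 24.5

Enter the amount: 500
The amount is: 500
Enter the profit: 24
The profit is: 24.0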
It can be observed that the first time the program was executed, the integer value corresponding to amount and the decimal value corresponding to profit were printed in their respective forms, since the type of the input data was the same as the functions used. However, when the program was executed the second time, the profit was given as an integer input; since the function used was float(), the integer value was converted to a float value, thereby printing the profit as a float. However, an error would have been generated if the amount had been entered in decimal form, because a float value is not automatically converted to an integer in Python.

One of the most important operators is the assignment operator (=), which assigns the value of the right-side operand to the left-side operand; c = a + b will assign the value of a + b to c. Like any other programming language, Python also allows us to compute the result inside the print statement, as shown in the following program.

#Program to do the calculation inside the print statement
r = float(input("Enter radius of circle: "))
area = 3.14 * r * r
print("The area of circle using variable is", area)
print("The area of circle computed directly is", 3.14 * r * r)

Explanation: In the first case the user is prompted to enter the radius, which is stored in the variable r. The area is computed and stored in a variable named area. The first print statement prints the answer from the variable area on the screen. The second print statement does the calculation inside the print statement itself and prints the result. Both forms of the print statement hence print the same result. The user can decide between the two ways depending on the requirement.
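The listing above uses the literal 3.14 for pi; a minimal alternative sketch uses the math module's pi so the value need not be typed by hand:

import math

#Compute the area of a circle using math.pi rather than a hand-typed constant
r = float(input("Enter radius of circle: "))
print("The area of circle is", math.pi * r * r)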
Python provides a distinct feature of taking input from the user using the eval() function, which assigns the data type according to the nature of the input provided. The eval() function can also be used to convert a string representing a numeric expression into its evaluated numeric value. This is demonstrated in the following program.

#Program to take input from the user using the eval() function
var1 = eval(input('Enter the first value: '))
print('var1 =', var1, 'type:', type(var1))
var2 = eval(input('Enter the second value: '))
print('var2 =', var2, 'type:', type(var2))
var3 = eval(input('Enter the third value: '))
print('var3 =', var3, 'type:', type(var3))

In [ ]: runfile(...)
Enter the first value: ...
var1 = ... type: <class 'int'>
Enter the second value: ...
var2 = ... type: <class 'float'>
Enter the third value: "hello dear"
var3 = hello dear type: <class 'str'>

Explanation: The first statement asks the user to enter the first value. The value entered by the user is stored in var1. The statement print('var1 =', var1, 'type:', type(var1)) displays the output on the screen. It can be observed that var1 = is written as it is, since it is contained inside single quotes; after the comma, var1 is written without quotes, hence the value of the identifier var1 is displayed; after the next comma, type: is written in single quotes, hence it is printed as it is, and then type(var1) is printed. Since the user entered an integer value, the class is printed as int. Similarly, in the next example the user enters a float value, hence the class is printed as float. In the last example, the user enters a string in double quotes, hence the class is printed as str, which represents a string.

Unlike other programming languages like C and C++, in Python it is possible to give multiple inputs using one single input statement, and hence multiple variables can be assigned in one statement. Example: num1, num2 = eval(input('Please enter number 1 and number 2: ')) will prompt the user to enter two numbers, num1 and num2.

#Program to show use of accepting multiple inputs
num1, num2 = eval(input('Please enter two numbers: '))
print(num1, '+', num2, '=', num1 + num2)

Explanation: The above example shows that the user is able to enter two numbers at the same time using one eval() function, and the sum of the two numbers is printed. It should be noted that the two numbers entered are separated by a comma.
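Because eval() evaluates whatever expression the user types, many programs instead read several values with split(); a minimal alternative sketch (not from the book):

#Alternative: read two comma-separated numbers without eval()
num1, num2 = (int(s) for s in input('Please enter two numbers: ').split(','))
print(num1, '+', num2, '=', num1 + num2)
#Typing 4,5 prints: 4 + 5 = 9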
It should be noted that each print statement produces its output on a new line.

#Program to print multiple statements in a single program
print('hello')
print('and')
print('welcome')

In [ ]: runfile(...)
hello
and
welcome

Explanation: We can observe that each print statement displays its result on a new line; hence 'hello', 'and', and 'welcome' are printed on three different lines.

Sometimes it is convenient to view the output of a single line of printed text over several Python statements. As an example, we may compute part of a complicated calculation, print an intermediate result, finish the calculation, and print the final answer, with the output all appearing on one line of text. The end argument in the print statement allows us to do so: it is used to keep the control from shifting to the next line. This is illustrated in the following example.

#Program to display output of different print statements in a single line
print('a', end='')
print('b', end='')
print('c', end='')

In [ ]: runfile(...)
abc

Explanation: In the above program, we have used the end argument in the print function. This causes the cursor to remain on the same line as the printed text; hence all the letters are printed on the same line. Without this end argument, the cursor moves down to the next line after printing the text.

#Program to show use of the end argument
print('Please enter an integer value')
print('Please enter an integer value', end='')
print('hello')
print(end='Please enter an integer value')

In [ ]: runfile(...)
Please enter an integer value
Please enter an integer valuehello
Please enter an integer value

Explanation: The statement print('Please enter an integer value') is an abbreviated form of the statement print('Please enter an integer value', end='\n'); that is, the default ending for a line of printed text is the string '\n', the newline control code, and the behavior of these two forms is indistinguishable. The next statement, print('Please enter an integer value', end=''), terminates the line with the empty string rather than the normal \n newline code, which is why 'hello' is printed on the same line. Similarly, the statement print(end='Please enter an integer value') prints only the end string, without moving the cursor down to the next line afterwards.
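A minimal sketch (not from the book) of the intermediate-result pattern described above, printing pieces of a single output line across several statements:

#Build one line of output across several print statements using end=''
total = 0
for n in (10, 20, 30):
    total += n
    print(n, end=' ')          #stay on the same line after each item
print('-> total =', total)     #the default end='\n' finishes the line
#Prints: 10 20 30 -> total = 60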
Another keyword argument is sep, which allows us to control how the print function visually separates the arguments it displays; the name sep stands for separator. By default, the print function places a single space between the items it prints; that is, the default value of sep is ' ', a string containing a single space. The print() function uses the keyword argument named sep to specify the string to insert between items.

#Program to show the use of the 'sep' argument in the print statement
w, x, y, z = ...
#Without separator
print(w, x, y, z)
#Using no space as separator
print(w, x, y, z, sep='')
#Using comma as separator
print(w, x, y, z, sep=',')
#Using space as separator
print(w, x, y, z, sep=' ')
#Using colon as separator
print(w, x, y, z, sep=':')
#Using a multi-character string as separator
print(w, x, y, z, sep=...)

Explanation: The first output line shows print's default method of using a single space between printed items. The second output line uses no space as a separator, since sep=''. The third output line uses commas as separators. The fourth line uses a single space as the separator, matching the default. The fifth line uses a colon as the separator. The sixth line shows that the separating string may consist of multiple characters.
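For illustration, a minimal sketch with assumed values (not the book's) shows the same separators at work:

#Illustrative sep example with assumed values
w, x, y, z = 1, 2, 3, 4
print(w, x, y, z)               #1 2 3 4   (default separator: a single space)
print(w, x, y, z, sep='')       #1234
print(w, x, y, z, sep=',')      #1,2,3,4
print(w, x, y, z, sep=':')      #1:2:3:4
print(w, x, y, z, sep=' -> ')   #1 -> 2 -> 3 -> 4   (multi-character separator)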
Operators

An operator is a symbol that tells the interpreter to perform specific mathematical or logical functions. The Python language is rich in built-in operators and provides the following types of operators: arithmetic operators, assignment operators, relational operators, logical operators, and Boolean operators. This section discusses these operators in detail.

Arithmetic Operators

The different arithmetic operators are: addition (+), for adding two operands; subtraction (-), for subtracting the second operand from the first; multiplication (*), for multiplying both operands; division (/), for dividing the numerator by the denominator; the modulus operator (%), for determining the remainder after a division; the exponential operator (**), for calculating an exponential power value; and integer division (//), for performing a division that keeps only the integer quotient. It should be noted that in an expression where multiple operations are taking place, the brackets (parentheses) are given priority first, followed by the exponential operator, then division, multiplication, modulus, and floor division, followed by addition and then subtraction. The assignment operator is given the last priority: finally, the value of the right-hand side of the expression is stored in the left-hand side of the expression.

#Program to show the usage of arithmetic operators
value1 = int(input('Please enter number 1: '))
value2 = int(input('Please enter another number: '))
#Using (+) to add two integers provided by the user
print(value1, '+', value2, '=', value1 + value2)
#Using (-) to subtract the second integer from the first
print(value1, '-', value2, '=', value1 - value2)
#Using (*) to multiply two integers provided by the user
print(value1, '*', value2, '=', value1 * value2)
#Using (/) to divide two integers provided by the user
print(value1, '/', value2, '=', value1 / value2)
#Using (%) to determine the remainder from two integers
print(value1, '%', value2, '=', value1 % value2)
#Using (**) to determine the exponential power from two integers
print(value1, '**', value2, '=', value1 ** value2)
#Using (//) to perform integer division on two integers
print(value1, '//', value2, '=', value1 // value2)
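For illustration, assuming inputs of 7 and 2, a run of this program would produce:

Please enter number 1: 7
Please enter another number: 2
7 + 2 = 9
7 - 2 = 5
7 * 2 = 14
7 / 2 = 3.5
7 % 2 = 1
7 ** 2 = 49
7 // 2 = 3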
Explanation: The first two statements prompt the user to enter two numbers. When the user types the first number and presses the Enter key, value1 is assigned that integer; similarly, the second number entered is stored in value2. The different arithmetic operators then produce the desired results. It can be observed that / gives a float value, while // gives only the integer quotient and ignores the decimal part. The remainder is displayed using the modulus operator (%).

Assignment Operators

The basic assignment operator (=) assigns the value of the right-side operand to the left-side operand; c = a + b will assign the value of a + b to c. Unlike C and Java, Python does not have increment and decrement operators.

#Program to use assignment operators
value1 = int(input('Please enter number 1: '))
value2 = int(input('Please enter another number: '))
#Add AND (+=) assignment operator: adds the right operand to the left operand
#and assigns the result to the left operand; a += b is equivalent to a = a + b
ans = value1
ans += value2
print("The answer using addition assignment is:", ans)
#Subtract AND (-=) assignment operator: subtracts the right operand from the left operand
#and assigns the result to the left operand; a -= b is equivalent to a = a - b
ans = value1
ans -= value2
print("The answer using subtraction assignment is:", ans)
#Multiply AND (*=) assignment operator: multiplies the right operand with the left operand
#and assigns the result to the left operand; a *= b is equivalent to a = a * b
ans = value1
ans *= value2
print("The answer using multiplication assignment is:", ans)
#Divide AND (/=) assignment operator: divides the left operand by the right operand
#and assigns the result to the left operand; a /= b is equivalent to a = a / b
ans = value1
ans /= value2
print("The answer using division assignment is:", ans)
#Integer division AND (//=) assignment operator: divides the left operand by the right operand
#and assigns the integer quotient to the left operand; a //= b is equivalent to a = a // b
ans = value1
ans //= value2
print("The answer using integer division assignment is:", ans)
#Modulus AND (%=) assignment operator: divides the left operand by the right operand
#and assigns the remainder to the left operand; a %= b is equivalent to a = a % b
ans = value1
ans %= value2
print("The answer using modulus assignment is:", ans)
#Exponential AND (**=) assignment operator: raises the left operand to the power of the
#right operand and assigns the result to the left operand; a **= b is equivalent to a = a ** b
ans = value1
ans **= value2
print("The answer using exponential assignment is:", ans)

Explanation: The above example demonstrates the use of the different assignment operators in Python.
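Continuing the assumed inputs of 7 and 2, this program would print:

The answer using addition assignment is: 9
The answer using subtraction assignment is: 5
The answer using multiplication assignment is: 14
The answer using division assignment is: 3.5
The answer using integer division assignment is: 3
The answer using modulus assignment is: 1
The answer using exponential assignment is: 49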
Relational Operators

The different relational operators supported in Python are: equals (==), not equals (!=), greater than (>), greater than or equal to (>=), less than (<), and less than or equal to (<=). The use of these operators is explained in the following program.

#Program to use relational operators in a program
a = ...
b = ...
#The equals (==) operator checks if the values of two operands are equal or not;
#if the values are equal, then the condition becomes true
print("The result of equals to operator is", a == b)
#The not equals (!=) operator checks if the values of two operands are equal or not;
#if the values are not equal, then the condition becomes true
print("The result of not equals to operator is", a != b)
#The greater than (>) operator checks if the value of the left operand is greater than
#the value of the right operand; if yes, then the condition becomes true
print("The result of greater than operator is", a > b)
#The greater than or equal to (>=) operator checks if the value of the left operand is
#greater than or equal to the value of the right operand; if yes, the condition becomes true
print("The result of greater than or equals to operator is", a >= b)
#The less than (<) operator checks if the value of the left operand is less than
#the value of the right operand; if yes, then the condition becomes true
print("The result of less than operator is", a < b)
#The less than or equal to (<=) operator checks if the value of the left operand is
#less than or equal to the value of the right operand; if yes, the condition becomes true
print("The result of less than or equals to operator is", a <= b)

In [ ]: runfile(...)
The result of equals to operator is False
The result of not equals to operator is True
The result of greater than operator is True
The result of greater than or equals to operator is True
The result of less than operator is False
The result of less than or equals to operator is False

Explanation: The above program shows the utility of the different relational operators in Python.
Logical Operators

These operators are used when we need to check multiple conditions. Each of the conditions is evaluated to True or False, and then a combined decision over all the conditions is taken. There are three logical operators in Python: and, or, and not. The and operator returns True if all the conditions are true. The or operator returns True if at least one of the conditions is true. The not operator returns the opposite of the value: if the value is True, it returns False, and vice versa. These operators are explained in the following program.

#Program to show use of logical operators
x = ...
y = ...
z = ...
#Using the "and" logical operator
print("The use of and operator returns:", x > y and y > z)
#Using the "or" logical operator
print("The use of or operator returns:", x > y or y > z)
#Using the "not" logical operator
print("The use of not operator returns:", not x > z)

In [ ]: runfile(...)
The use of and operator returns: False
The use of or operator returns: True
The use of not operator returns: True

Explanation: In the first example, the first condition does not hold while the second condition holds; since not all of the conditions hold, the and operator gives the result False. In the second example the result is True, since at least one of the conditions holds. In the last example the comparison is False, but when the not operator is applied, we get the result True.
Boolean Operators

Boolean operators are applicable only to Boolean values, and they produce Boolean output. There are three Boolean operators: and, or, and not. A Boolean operator always returns a Boolean value, as shown in the following program.

#Program to show the use of Boolean operators
x = True
y = False
#Use of the "and" Boolean operator
print("The use of and operator returns:", x and y)
print("The use of and operator on x only returns:", x and x)
print("The use of and operator on y only returns:", y and y)
#Use of the "or" Boolean operator
print("The use of or operator returns:", x or y)
print("The use of or operator returns:", y or x)
#Use of the "not" Boolean operator
print("The use of not operator on x returns:", not x)
print("The use of not operator on y returns:", not y)

In [ ]: runfile(...)
The use of and operator returns: False
The use of and operator on x only returns: True
The use of and operator on y only returns: False
The use of or operator returns: True
The use of or operator returns: True
The use of not operator on x returns: False
The use of not operator on y returns: True

Explanation: The use of the and operation on x and y returns False, since both are not True. However, x and x returns True, since both conditions are True; similarly, y and y returns False, since both conditions are False. The use of the or operator returns True, since at least one of the conditions is True. not x returns False (the opposite of True), and not y returns True (the opposite of False).

Operator Precedence

Operator precedence determines the grouping of terms in an expression and decides how an expression is evaluated; certain operators have higher precedence than others. For example, in x = 7 + 3 * 2, x is assigned 13, not 20, since the multiplication operator has higher precedence than the addition operator: 3 is first multiplied by 2, and the result is then added to 7. The following table shows operator precedence in descending order.

Operator                        Details
()                              Parentheses
**                              Exponential
/  *  //  %                     Division, multiplication, integer division, modulus
+  -                            Addition and subtraction
>  >=  <  <=  ==  !=            Relational operators
=  +=  -=  *=  /=  //=  %=  **= Assignment operators
not  or  and                    Logical and Boolean operators

The operators with higher precedence appear at the top of the table and those with the lowest appear at the bottom. This means that within an expression, higher-precedence operators will be evaluated first. The following program shows the impact of operator precedence in Python.

#Program to show utility of operator precedence
x = ...
y = ...
z = ...
print("Parenthesis has highest precedence:", (x + z) * (x + y))
print("Multiplication has higher precedence than addition:", x + y * z)
print("Relational operators have higher precedence than logical operators:", x > z or x < z)

In [ ]: runfile(...)
Parenthesis has highest precedence: ...
Multiplication has higher precedence than addition: ...
Relational operators have higher precedence than logical operators: True
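For illustration, a minimal sketch with assumed values (x = 4, y = 2, z = 3, not the book's originals) reproduces the same precedence effects:

x, y, z = 4, 2, 3
print((x + z) * (x + y))   #parentheses first: 7 * 6 = 42
print(x + y * z)           #multiplication first: 4 + 6 = 10
print(x > z or x < z)      #relational before logical: True or False -> True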
Explanation: In the first example, x + z and x + y are evaluated first, since they are inside parentheses, and the results are then multiplied with each other. In the second example, since multiplication has higher precedence than addition, the multiplication of y and z is done first and the result is then added to x. Similarly, in the last case, since relational operators have higher precedence than logical operators, x > z and x < z are evaluated first and then the or operator is applied; since x > z is True and x < z is False, the or operator applied to True and False returns True.

Libraries in Python

Python has a rich collection of functions that come along with the Python software. Besides these, Python has an extensive list of functions which are accessible only when the user imports the required library. A Python library is a collection of functions and sub-packages. Functions related to one domain are grouped inside a library, and a particular library can be accessed using the import statement. There are thousands of libraries for Python, written by many authors: some of these libraries implement specialized statistical methods, others give access to arrays, and others are designed to create visualization effects. The user can import the respective libraries in the program, depending on which functions are required. Some of the common libraries include:

NumPy: The most fundamental library, around which the scientific computation stack is built, is NumPy (Numerical Python). It provides an abundance of useful features for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these arrays.

Pandas: A Python library designed to work with labeled and relational data, pandas is designed for quick and easy data manipulation, aggregation, and visualization. Pandas is like a spreadsheet for Python, and it is able to describe data efficiently; it can do grouping and pivot tables on larger data than most spreadsheet programs.

Matplotlib: A Python library tailored for creating simple and powerful visualizations with ease: line plots, scatter plots, bar charts and histograms, pie charts, stem plots, contour plots, quiver plots, spectrograms, etc. Different formatting options such as titles, labels, grids, and legends are also available in this library.

Seaborn: The Seaborn library is mostly focused on the visualization of statistical models; such visualizations include heat maps and plots that summarize data while still depicting overall distributions.

SciPy: SciPy contains modules for linear algebra, optimization, integration, and statistics. It provides efficient numerical routines, such as numerical integration and optimization, via its specific submodules. It adds significant power to the interactive Python session by providing the user with high-level commands and classes for manipulating data.

NLTK: The name of this suite of libraries stands for Natural Language Toolkit and, as the name implies, it is used for common tasks associated with symbolic and statistical natural language processing. The functionality of NLTK allows a lot of operations such as text tagging, classification, tokenizing, and named-entity identification.
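A minimal sketch (not from the book) importing these libraries under their conventional aliases:

import numpy as np                #arrays and numerical routines
import pandas as pd               #labeled, relational data
import matplotlib.pyplot as plt   #plotting
import seaborn as sns             #statistical visualization
import scipy                      #linear algebra, optimization, integration, statistics
import nltk                       #natural language processing

arr = np.array([1, 2, 3])
df = pd.DataFrame({"values": arr})
print(df.describe())              #quick statistical summary of the data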
Hands-On Unsupervised Learning Using Python
by Ankur Patel

Copyright © Human AI Collaboration, Inc. All rights reserved.

Originally printed in the United States of America. Published by O'Reilly Media, Inc., Gravenstein Highway North, Sebastopol, CA. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles. Institutional sales department: corporate@oreilly.com.

Developmental editor: Michele Cronin
Acquisitions editor: Jonathan Hassell
Production editor: Katherine Tozer
Copyeditor: Jasmine Kwityn
Proofreader: Christina Edwards
Indexer: Judith McConville
Interior designer: David Futato
Cover designer: Karen Montgomery
Illustrator: Rebecca Demarest

Printing history: February, first edition. Revision history for the first edition: first release. First Indian reprint: April.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Hands-On Unsupervised Learning Using Python, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc. The views expressed in this work are those of the authors and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes are subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

For sale in the Indian subcontinent (India, Pakistan, Bangladesh, Sri Lanka, Nepal, Bhutan, Maldives) and the African continent (excluding Morocco, Algeria, Tunisia, Libya, Egypt, and the Republic of South Africa) only. Illegal for sale outside of these countries. Authorized reprint of the original work published by O'Reilly Media, Inc. All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, nor exported to any countries other than the ones mentioned above, without the written permission of the copyright owner.

Published by Shroff Publishers & Distributors Pvt. Ltd., Railway Commercial Complex, Sector, Sanpada, Navi Mumbai. Email: spdorders@shroffpublishers.com. Web: www.shroffpublishers.com. Printed at Jasmine Art Printers Pvt. Ltd., Mumbai.
Table of Contents

Preface

Part I. Fundamentals of Unsupervised Learning

Unsupervised Learning in the Machine Learning Ecosystem
    Basic Machine Learning Terminology
    Rules-Based vs. Machine Learning
    Supervised vs. Unsupervised
    The Strengths and Weaknesses of Supervised Learning
    The Strengths and Weaknesses of Unsupervised Learning
    Using Unsupervised Learning to Improve Machine Learning Solutions
    A Closer Look at Supervised Algorithms
        Linear Methods
        Neighborhood-Based Methods
        Tree-Based Methods
        Support Vector Machines
        Neural Networks
    A Closer Look at Unsupervised Algorithms
        Dimensionality Reduction
        Clustering
        Feature Extraction
        Unsupervised Deep Learning
        Sequential Data Problems Using Unsupervised Learning
    Reinforcement Learning Using Unsupervised Learning
    Semisupervised Learning
    Successful Applications of Unsupervised Learning
        Anomaly Detection
End-to-End Machine Learning Project
    Environment Setup
        Version Control: Git
        Clone the Hands-On Unsupervised Learning Git Repository
        Scientific Libraries: Anaconda Distribution of Python
        Neural Networks: TensorFlow and Keras
        Gradient Boosting, Version One: XGBoost
        Gradient Boosting, Version Two: LightGBM
        Clustering Algorithms
        Interactive Computing Environment: Jupyter Notebook
    Overview of the Data
    Data Preparation
        Data Acquisition
        Data Exploration
        Generate Feature Matrix and Labels Array
        Feature Engineering and Feature Selection
        Data Visualization
    Model Preparation
        Split into Training and Test Sets
        Select Cost Function
        Create k-Fold Cross-Validation Sets
    Machine Learning Models (Part I)
        Model #1: Logistic Regression
    Evaluation Metrics
        Confusion Matrix
        Precision-Recall Curve
        Receiver Operating Characteristic
    Machine Learning Models (Part II)
        Model #2: Random Forests
        Model #3: Gradient Boosting Machine (XGBoost)
        Model #4: Gradient Boosting Machine (LightGBM)
    Evaluation of the Four Models Using the Test Set
    Ensembles
        Stacking
    Final Model Selection
    Production Pipeline
    Conclusion
Part II. Unsupervised Learning Using Scikit-Learn

Dimensionality Reduction
    The Motivation for Dimensionality Reduction
        The MNIST Digits Database
    Dimensionality Reduction Algorithms
        Linear Projection vs. Manifold Learning
    Principal Component Analysis
        PCA, the Concept
        PCA in Practice
        Incremental PCA
        Sparse PCA
        Kernel PCA
    Singular Value Decomposition
    Random Projection
        Gaussian Random Projection
        Sparse Random Projection
    Isomap
    Multidimensional Scaling
    Locally Linear Embedding
    t-Distributed Stochastic Neighbor Embedding
    Other Dimensionality Reduction Methods
        Dictionary Learning
        Independent Component Analysis
    Conclusion

Anomaly Detection
    Credit Card Fraud Detection
        Prepare the Data
        Define Anomaly Score Function
        Define Evaluation Metrics
        Define Plotting Function
    Normal PCA Anomaly Detection
        PCA Components Equal Number of Original Dimensions
        Search for the Optimal Number of Principal Components
    Sparse PCA Anomaly Detection
    Kernel PCA Anomaly Detection
    Gaussian Random Projection Anomaly Detection
    Sparse Random Projection Anomaly Detection
    Nonlinear Anomaly Detection
    Dictionary Learning Anomaly Detection
    ICA Anomaly Detection
    Normal PCA Anomaly Detection on the Test Set
    ICA Anomaly Detection on the Test Set
    Dictionary Learning Anomaly Detection on the Test Set
    Conclusion

Clustering
    MNIST Digits Dataset
        Data Preparation
    Clustering Algorithms
    k-Means
        k-Means Inertia
        Evaluating the Clustering Results
        k-Means Accuracy
        k-Means and the Number of Principal Components
        k-Means on the Original Dataset
    Hierarchical Clustering
        Agglomerative Hierarchical Clustering
        The Dendrogram
        Evaluating the Clustering Results
    DBSCAN
        DBSCAN Algorithm
        Applying DBSCAN to Our Dataset
        HDBSCAN
    Conclusion

Group Segmentation
    Lending Club Data
        Data Preparation
        Transform String Format to Numerical Format
        Impute Missing Values
        Engineer Features
        Select Final Set of Features and Perform Scaling
        Designate Labels for Evaluation
    Goodness of the Clusters
    k-Means Application
    Hierarchical Clustering Application
    HDBSCAN Application
    Conclusion
Part III. Unsupervised Learning Using TensorFlow and Keras

Autoencoders
    Neural Networks
        TensorFlow
        Keras
    Autoencoder: The Encoder and the Decoder
    Undercomplete Autoencoders
    Overcomplete Autoencoders
    Dense vs. Sparse Autoencoders
    Denoising Autoencoder
    Variational Autoencoder
    Conclusion

Hands-On Autoencoder
    Data Preparation
    The Components of an Autoencoder
    Activation Functions
    Our First Autoencoder
        Loss Function
        Optimizer
        Training the Model
        Evaluating on the Test Set
    Two-Layer Undercomplete Autoencoder with Linear Activation Function
        Increasing the Number of Nodes
        Adding More Hidden Layers
    Nonlinear Autoencoder
    Overcomplete Autoencoder with Linear Activation
    Overcomplete Autoencoder with Linear Activation and Dropout
    Sparse Overcomplete Autoencoder with Linear Activation
    Sparse Overcomplete Autoencoder with Linear Activation and Dropout
    Working with Noisy Datasets
    Denoising Autoencoder
        Two-Layer Denoising Undercomplete Autoencoder with Linear Activation
        Two-Layer Denoising Overcomplete Autoencoder with Linear Activation
        Two-Layer Denoising Overcomplete Autoencoder with ReLU Activation
    Conclusion

Semisupervised Learning
    Data Preparation
    Supervised Model
    Unsupervised Model
    The Power of Supervised and Unsupervised
    Conclusion

Part IV. Deep Unsupervised Learning Using TensorFlow and Keras

Recommender Systems Using Restricted Boltzmann Machines
    Boltzmann Machines
        Restricted Boltzmann Machines
    Recommender Systems
        Collaborative Filtering
        The Netflix Prize
    MovieLens Dataset
        Data Preparation
        Define the Cost Function: Mean Squared Error
        Perform Baseline Experiments
    Matrix Factorization
        One Latent Factor
        Three Latent Factors
        Five Latent Factors
    Collaborative Filtering Using RBMs
        RBM Neural Network Architecture
        Build the Components of the RBM Class
        Train RBM Recommender System
    Conclusion

Feature Detection Using Deep Belief Networks
    Deep Belief Networks in Detail
    MNIST Image Classification
    Restricted Boltzmann Machines
        Build the Components of the RBM Class
        Generate Images Using the RBM Model
        View the Intermediate Feature Detectors
    Train the Three RBMs for the DBN
        Examine Feature Detectors
        View Generated Images
    The Full DBN
        How Training of a DBN Works
        Train the DBN
    How Unsupervised Learning Helps Supervised Learning
        Generate Images to Build a Better Image Classifier
        Supervised Only
        Unsupervised and Supervised Solution
    Conclusion

Generative Adversarial Networks
    GANs, the Concept
        The Power of GANs
    Deep Convolutional GANs
        Convolutional Neural Networks
        DCGANs Revisited
        Generator of the DCGAN
        Discriminator of the DCGAN
        Discriminator and Adversarial Models
    DCGAN for the MNIST Dataset
        MNIST DCGAN in Action
        Synthetic Image Generation
    Conclusion

Time Series Clustering
    ECG Data
    Approach to Time Series Clustering
        k-Shape
    Time Series Clustering Using k-Shape on ECGFiveDays
        Data Preparation
        Training and Evaluation
    Time Series Clustering Using k-Shape on ECG
        Data Preparation
        Training and Evaluation
    Time Series Clustering Using k-Means on ECG
    Time Series Clustering Using Hierarchical DBSCAN on ECG
    Comparing the Time Series Clustering Algorithms
        Full Run with k-Shape
        Full Run with k-Means
        Full Run with HDBSCAN
        Comparing All Three Time Series Clustering Approaches
    Conclusion

Conclusion
    Supervised Learning
    Unsupervised Learning
    Scikit-Learn
    Reinforcement Learning
    Most Promising Areas of Unsupervised Learning Today
    The Future of Unsupervised Learning
    Final Words

Index
A Brief History of Machine Learning

Machine learning is a subfield of artificial intelligence (AI) in which computers learn from data--usually to improve their performance on some narrowly defined task--without being explicitly programmed. The term machine learning was coined as early as 1959 (by Arthur Samuel, a legend in the field of AI), but there were few major commercial successes in machine learning until the twenty-first century. Instead, the field remained a niche research area for academics at universities.

Early on, many in the AI community were too optimistic about the field's future. Researchers at the time, such as Herbert Simon and Marvin Minsky, claimed that AI would reach human-level intelligence within a matter of decades:

"Machines will be capable, within twenty years, of doing any work a man can do." --Herbert Simon

"From three to eight years, we will have a machine with the general intelligence of an average human being." --Marvin Minsky

(Footnote: Such views inspired Stanley Kubrick in 1968 to create the AI agent HAL 9000 in 2001: A Space Odyssey.)

Blinded by their optimism, researchers focused on so-called strong AI or artificial general intelligence (AGI) projects, attempting to build AI agents capable of problem solving, knowledge representation, learning and planning, natural language processing, perception, and motor control. This optimism helped attract significant funding into the nascent field from major players such as the Department of Defense, but the problems these researchers tackled were too ambitious and ultimately doomed to fail.

AI research rarely made the leap from academia to industry, and a series of so-called AI winters followed. During these AI winters (an analogy based on nuclear winter), occasional hype
cycles around AI occurred, but they had very little staying power. By the early 1990s, interest in and funding for AI had hit a trough.

AI Is Back, but Why Now?

AI has re-emerged with a vengeance over the past two decades--first as a purely academic area of interest and now as a full-blown field attracting the brightest minds at both universities and corporations. Three critical developments are behind this resurgence: breakthroughs in machine learning algorithms, the availability of lots of data, and superfast computers.

First, instead of focusing on overly ambitious strong AI projects, researchers turned their attention to narrowly defined subproblems of strong AI, also known as weak AI or narrow AI. This focus on improving solutions for narrowly defined tasks led to algorithmic breakthroughs, which paved the way for successful commercial applications. Many of these algorithms--often developed initially at universities or private research labs--were quickly open-sourced, speeding up the adoption of these technologies by industry.

Second, data capture became a focus for most organizations, and the cost of storing data fell dramatically, driven by advances in digital data storage. Thanks to the internet, lots of data also became widely and publicly available at a scale never before seen.

Third, computers became increasingly powerful and available over the cloud, allowing AI researchers to easily and cheaply scale their IT infrastructure as required, without making huge upfront investments in hardware.

The Emergence of Applied AI

These three forces have pushed AI from academia to industry, helping attract increasingly higher levels of interest and funding every year. AI is no longer just a theoretical area of interest but rather a full-blown applied field. A chart from Google Trends shows the growth in interest in machine learning over the past five years.
AI is now viewed as a breakthrough horizontal technology, akin to the advent of computers and smartphones, that will have a significant impact on every single industry over the next decade.

Successful commercial applications involving machine learning include--but are certainly not limited to--optical character recognition, email spam filtering, image classification, computer vision, speech recognition, machine translation, group segmentation and clustering, generation of synthetic data, anomaly detection, cybercrime prevention, credit card fraud detection, internet fraud detection, time series prediction, natural language processing, board game and video game playing, document classification, recommender systems, search, robotics, online advertising, sentiment analysis, DNA sequencing, financial market analysis, information retrieval, question answering, and healthcare decision making.

Major Milestones in Applied AI over the Past Two Decades

The milestones presented here helped bring AI from a mostly academic topic of conversation to a mainstream staple in technology today.

1997: Deep Blue, an AI bot that had been in development since the mid-1980s, beats world chess champion Garry Kasparov in a highly publicized chess event.

2004: DARPA introduces the DARPA Grand Challenge, an annually held autonomous driving challenge held in the desert. In 2005, Stanford takes the top prize. In 2007, Carnegie Mellon University performs this feat in an urban setting, and in 2009 Google builds a self-driving car. Today, many major technology giants, including Tesla, Alphabet's Waymo, and Uber, have launched well-funded programs to build mainstream self-driving technology. According to the McKinsey Global Institute, over half of all the professional activities people are paid to do could be automated by 2055.
2006: Geoffrey Hinton introduces a fast learning algorithm to train neural networks with many layers, kicking off the deep learning revolution.

2006: Netflix launches the Netflix Prize competition, with a one-million-dollar purse, challenging teams to use machine learning to improve its recommendation system's accuracy by at least 10%. A team won the prize in 2009.

2007: AI achieves superhuman performance at checkers, solved by a team from the University of Alberta.

2010: ImageNet launches an annual contest--the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)--in which teams use machine learning algorithms to correctly detect and classify objects in a large, well-curated image dataset. This draws significant attention from both academia and technology giants. The classification error rate falls from 25% in 2011 to just a few percent by 2015, backed by advances in deep convolutional neural networks. This leads to commercial applications of computer vision and object recognition.

2010: Microsoft launches Kinect for Xbox 360. Developed by the computer vision team at Microsoft Research, Kinect is capable of tracking human body movement and translating this into gameplay.

2010: Siri, one of the first mainstream digital voice assistants, is acquired by Apple and released as part of the iPhone 4S in October 2011. Eventually, Siri is rolled out across all of Apple's products. Powered by convolutional neural networks and long short-term memory recurrent neural networks, Siri performs both speech recognition and natural language processing. Eventually, Amazon, Microsoft, and Google enter the race, releasing Alexa (2014), Cortana (2014), and Google Assistant (2016), respectively.

2011: IBM Watson, a question-answering AI agent developed by a team led by David Ferrucci, beats former Jeopardy! winners Brad Rutter and Ken Jennings. IBM Watson is now used across several industries, including healthcare and retail.

2012: The Google Brain team, led by Andrew Ng and Jeff Dean, trains a neural network to recognize cats by watching unlabeled images taken from YouTube videos.

2013: Google wins DARPA's Robotics Challenge, involving trials in which semi-autonomous bots perform complex tasks in treacherous environments, such as driving a vehicle, walking across rubble, removing debris from a blocked entryway, opening a door, and climbing a ladder.

2014: Facebook publishes work on DeepFace, a neural network-based system that can identify faces with 97% accuracy. This is near human-level performance and a marked improvement over previous systems.
2015: Google DeepMind's AlphaGo beats world-class professional Fan Hui at the game of Go. In 2016, AlphaGo defeats Lee Sedol, and in 2017 it defeats Ke Jie. Also in 2017, a new version called AlphaGo Zero defeats the previous AlphaGo version 100 games to zero. AlphaGo Zero incorporates unsupervised learning techniques and masters Go just by playing against itself.

2016: Google launches a major revamp of its language translation product, Google Translate, replacing its existing phrase-based translation system with a deep learning-based neural machine translation system, substantially reducing translation errors and approaching near human-level accuracy.

2017: Libratus, developed by Carnegie Mellon, wins at head-to-head no-limit Texas hold'em. An OpenAI-trained bot beats a professional gamer at a Dota 2 tournament.

From Narrow AI to AGI

Of course, these successes in applying AI to narrowly defined problems are just a starting point. There is a growing belief in the AI community that, by combining several weak AI systems, we can develop strong AI. This strong AI or AGI agent will be capable of human-level performance at many broadly defined tasks.

Soon after AI achieves human-level performance, some researchers predict, this strong AI will surpass human intelligence and reach so-called superintelligence. Estimates for attaining such superintelligence range widely, but most researchers believe AI will advance enough to achieve it within a few generations. Is this inflated hype once again (like what we saw in previous AI cycles), or is it different this time around? Only time will tell.

Objective and Approach

Most of the successful commercial applications to date--in areas such as computer vision, speech recognition, machine translation, and natural language processing--have involved supervised learning, taking advantage of labeled datasets. However, most of the world's data is unlabeled.

In this book, we will cover the field of unsupervised learning, a branch of machine learning used to find hidden patterns and learn the underlying structure in unlabeled data. According to many industry experts, such as Yann LeCun, the director of AI research at Facebook and a professor at NYU, unsupervised learning is the next frontier in AI.
Unsupervised learning is one of the trendiest topics in AI today. The book's goal is to outline the concepts and tools required for you to develop the intuition necessary for applying this technology to everyday problems that you work on. In other words, this is an applied book, one that will allow you to build real-world systems. We will also explore how to efficiently label unlabeled datasets to turn unsupervised learning problems into semisupervised ones.

The book will use a hands-on approach, introducing some theory but focusing mostly on applying unsupervised learning techniques to solve real-world problems. The datasets and code are available online as Jupyter notebooks on GitHub.

Armed with the conceptual understanding and hands-on experience you'll gain from this book, you will be able to apply unsupervised learning to large, unlabeled datasets to uncover hidden patterns, obtain deeper business insight, detect anomalies, cluster groups based on similarity, perform automatic feature engineering and selection, generate synthetic datasets, and more.

Prerequisites

This book assumes that you have some Python programming experience, including familiarity with NumPy and pandas. For more on Python, visit the official Python website (www.python.org). For more on Jupyter Notebook, visit the official Jupyter website (jupyter.org). For a refresher on college-level calculus, linear algebra, probability, and statistics, read Part I of the Deep Learning textbook (www.deeplearningbook.org) by Ian Goodfellow and Yoshua Bengio. For a refresher on machine learning, read The Elements of Statistical Learning.

Roadmap

The book is organized into four parts, covering the following topics.

Part I, Fundamentals of Unsupervised Learning: differences between supervised and unsupervised learning, an overview of popular supervised and unsupervised algorithms, and an end-to-end machine learning project.

Part II, Unsupervised Learning Using Scikit-Learn: dimensionality reduction, anomaly detection, and clustering and group segmentation.
For more detail on the topics covered in this part, refer to the scikit-learn documentation (scikit-learn.org/stable/modules/classes.html).

Part III, Unsupervised Learning Using TensorFlow and Keras: representation learning and automatic feature extraction, autoencoders, and semisupervised learning.

Part IV, Deep Unsupervised Learning Using TensorFlow and Keras: restricted Boltzmann machines, deep belief networks, and generative adversarial networks.

Conventions Used in This Book

The following typographical conventions are used in this book:

Italic: indicates new terms, URLs, email addresses, filenames, and file extensions.

Constant width: used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.

Constant width bold: shows commands or other text that should be typed literally by the user.

Constant width italic: shows text that should be replaced with user-supplied values or by values determined by context.

This element signifies a tip or suggestion. This element signifies a general note.
Using Code Examples

Supplemental material (code examples, etc.) is available for download on GitHub. This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing a CD-ROM of examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.

We appreciate, but do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN; for example: "Hands-On Unsupervised Learning Using Python by Ankur Patel (O'Reilly). Copyright Ankur Patel."

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions@oreilly.com.

O'Reilly Online Learning

For almost 40 years, O'Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise through books, articles, conferences, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and other publishers. For more information, please visit www.oreilly.com.
How to Contact Us

Please address comments and questions concerning this book to the publisher:

O'Reilly Media, Inc.
Gravenstein Highway North
Sebastopol, CA

We have a web page for this book, where we list errata, examples, and any additional information. To comment or ask technical questions about this book, send email to bookquestions@oreilly.com. For more information about our books, courses, conferences, and news, see our website at www.oreilly.com.

Find us on Facebook. Follow us on Twitter. Watch us on YouTube.
Part I. Fundamentals of Unsupervised Learning

To start, let's explore the current machine learning ecosystem and where unsupervised learning fits in. We will also build a machine learning project from scratch to cover basics such as setting up the programming environment, acquiring and preparing data, exploring data, selecting machine learning algorithms and cost functions, and evaluating the results.
Unsupervised Learning in the Machine Learning Ecosystem

"Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don't know how to make the cake. We need to solve the unsupervised learning problem before we can even think of getting to true AI." --Yann LeCun

In this chapter, we will explore the difference between a rules-based system and machine learning, the difference between supervised learning and unsupervised learning, and the relative strengths and weaknesses of each. We will also cover many popular supervised learning algorithms and unsupervised learning algorithms and briefly examine how semisupervised learning and reinforcement learning fit into the mix.

Basic Machine Learning Terminology

Before we delve into the different types of machine learning, let's take a look at a simple and commonly used machine learning example to help make the concepts we introduce tangible: the email spam filter. We need to build a simple program that takes in emails and correctly classifies them as either "spam" or "not spam." This is a straightforward classification problem.

Here's a bit of machine learning terminology as a refresher: the input variables in this problem are the text of the emails. These input variables are also known as features, predictors, or independent variables.
The output variable--what we are trying to predict--is also known as the dependent variable or response variable (or the class, since this is a classification problem).

The set of examples the AI trains on is known as the training set, and each individual example is called a training instance or sample. During the training, the AI is attempting to minimize its cost function or error rate, or, framed more positively, to maximize its value function--in this case, the ratio of correctly classified emails. The AI actively optimizes for a minimal error rate during training. Its error rate is calculated by comparing the AI's predicted label with the true label.

However, what we care about most is how well the AI generalizes its training to never-before-seen emails. This will be the true test for the AI: can it correctly classify emails that it has never seen before, using what it has learned by training on the examples in the training set? This generalization error or out-of-sample error is the main thing we use to evaluate machine learning solutions.

This set of never-before-seen examples is known as the test set or holdout set (because the data is held out from the training). If we choose to have multiple holdout sets (perhaps to gauge our generalization error as we train, which is advisable), we may have intermediate holdout sets that we use to evaluate our progress before the final test set; these intermediate holdout sets are called validation sets.

To put all of this together, the AI trains on the training data (experience) to improve its error rate (performance) in flagging spam (task), and the ultimate success criterion is how well its experience generalizes to new, never-before-seen data (generalization error).

Rules-Based vs. Machine Learning

Using a rules-based approach, we can design a spam filter with explicit rules to catch spam, such as flagging emails with "u" instead of "you," "4" instead of "for," "buy now," etc. But this system would be difficult to maintain over time as bad guys change their spam behavior to evade the rules. If we used a rules-based system, we would have to frequently adjust the rules manually just to stay up-to-date. Also, it would be very expensive to set up; think of all the rules we would need to create to make this a well-functioning system.

Instead of a rules-based approach, we can use machine learning to train on the email data and automatically engineer rules to correctly flag malicious email as spam. This machine learning-based system could be automatically adjusted over time as well, and it would be much cheaper to train and maintain. The sketch below shows how quickly a hand-maintained rule list becomes brittle.
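A minimal sketch (not from the book) of such a rules-based filter; the patterns are assumed illustrative rules, and because the list is hardcoded, every new spam trick requires another manual rule:

#A brittle rules-based spam filter: the rules must be updated by hand forever
SPAM_PATTERNS = ["buy now", " u ", " 4 "]   #assumed illustrative rules

def is_spam(email_text: str) -> bool:
    text = " " + email_text.lower() + " "
    return any(pattern in text for pattern in SPAM_PATTERNS)

print(is_spam("Buy now and save big"))   #True
print(is_spam("See you at lunch"))       #False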
In this simple email problem, it may be possible for us to handcraft rules, but, for many problems, handcrafting rules is not feasible at all. For example, consider designing a self-driving car: imagine drafting rules for how the car should behave in each and every situation it encounters. This is an intractable problem unless the car can learn and adapt on its own based on its experience.

We could also use machine learning systems as an exploration or data discovery tool to gain deeper insight into the problem we are trying to solve. For example, in the email spam filter example, we can learn which words or phrases are most predictive of spam and recognize newly emerging malicious spam patterns.

Supervised vs. Unsupervised

The field of machine learning has two major branches--supervised learning and unsupervised learning--and plenty of sub-branches that bridge the two.

In supervised learning, the AI agent has access to labels, which it can use to improve its performance on some task. In the email spam filter problem, we have a dataset of emails with all the text within each and every email. We also know which of these emails are spam or not (the so-called labels). These labels are very valuable in helping the supervised learning AI separate the spam emails from the rest.

In unsupervised learning, labels are not available. Therefore, the task of the AI agent is not well-defined, and performance cannot be so clearly measured. Consider the email spam filter problem--this time without labels. Now, the AI agent will attempt to understand the underlying structure of the emails, separating the database of emails into different groups such that emails within a group are similar to each other but different from emails in other groups.

This unsupervised learning problem is less clearly defined than the supervised learning problem and harder for the AI agent to solve. But, if handled well, the solution is more powerful. Here's why: the unsupervised learning AI may find several groups that it later tags as being "spam"--but the AI may also find groups that it later tags as being "important" or categorizes as "family," "professional," "news," "shopping," etc. In other words, because the problem does not have a strictly defined task, the AI agent may find interesting patterns above and beyond what we initially were looking for.

Moreover, this unsupervised system is better than the supervised system at finding new patterns in future data, making the unsupervised solution more nimble on a go-forward basis. This is the power of unsupervised learning.

The Strengths and Weaknesses of Supervised Learning

Supervised learning excels at optimizing performance on well-defined tasks with plenty of labels. For example, consider a very large dataset of images of objects, where each image is labeled. If the dataset is sufficiently large and we train using the right machine learning algorithms (i.e., convolutional neural networks) and with sufficiently powerful computers, we can build a very precise
If the dataset is sufficiently large, and we train it using the right machine learning algorithms (i.e., convolutional neural networks) with enough compute, we can build a very accurate image classification system. As the supervised learning AI trains on the data, it will be able to measure its performance (via a cost function) by comparing its predicted image label with the true image label that we have on file. The AI explicitly tries to minimize this cost function such that its error on never-before-seen images (from a holdout set) is as low as possible.

This is why labels are so powerful: they help guide the AI agent by providing it with an error measure. The AI uses the error measure to improve its performance over time. Without such labels, the AI does not know how successful it is (or isn't) in correctly classifying images.

However, the costs of manually labeling an image dataset are high, and even the best curated image datasets have only thousands of labels. This is a problem because supervised learning systems will be very good at classifying images of objects for which they have labels but poor at classifying images of objects for which they have no labels.

As powerful as supervised learning systems are, they are also limited at generalizing knowledge beyond the labeled items they have trained on. Since the majority of the world's data is unlabeled, with supervised learning, the ability of AI to expand its performance to never-before-seen instances is quite limited. In other words, supervised learning is great at solving narrow AI problems but not so good at solving more ambitious, less clearly defined problems of the strong AI type.

The strengths and weaknesses of unsupervised learning
Supervised learning will trounce unsupervised learning at narrowly defined tasks for which we have well-defined patterns that do not change much over time and sufficiently large, readily available labeled datasets. However, for problems where patterns are unknown or constantly changing, or for which we do not have sufficiently large labeled datasets, unsupervised learning truly shines.

Instead of being guided by labels, unsupervised learning works by learning the underlying structure of the data it has trained on. It does this by trying to represent the data it trains on with a set of parameters that is significantly smaller than the number of examples available in the dataset. By performing this representation learning, unsupervised learning is able to identify distinct patterns in the dataset.

In the image dataset example (this time without labels), the unsupervised learning AI may be able to identify and group images based on how similar they are to each other and how different they are from the rest. For example, all the images that look like chairs will be grouped
togetheretc of coursethe unsupervised learning ai itself cannot label these groups as "chairsor "dogsbut now that similar images are grouped togetherhumans have much simpler labeling task instead of labeling millions of images by handhumans can manually label all the distinct groupsand the labels will apply to all the members within each group after the initial trainingif the unsupervised learning ai finds images that do not belong to any of the labeled groupsthe ai will create separate groups for the unclassified imagestriggering human to label the newyet-to-be-labeled groups of images unsupervised learning makes previously intractable problems more solvable and is much more nimble at finding hidden patterns both in the historical data that is available for training and in future data moreoverwe now have an ai approach for the huge troves of unlabeled data that exist in the world even though unsupervised learning is less adept than supervised learning at solving specificnarrowly defined problemsit is better at tackling more open-ended problems of the strong ai type and at generalizing this knowledge just as importantlyunsupervised learning can address many of the common problems data scientists encounter when building machine learning solutions using unsupervised learning to improve machine learning solutions recent successes in machine learning have been driven by the availability of lots of dataadvances in computer hardware and cloud-based resourcesand breakthroughs in machine learning algorithms but these successes have been in mostly narrow ai problems such as image classificationcomputer visionspeech recognitionnatural language processingand machine translation to solve more ambitious ai problemswe need to unlock the value of unsupervised learning let' explore the most common challenges data scientists face when building solutions and how unsupervised learning can help insufficient labeled data think ai is akin to building rocket ship you need huge engine and lot of fuel if you have large engine and tiny amount of fuelyou won' make it to orbit if you have tiny engine and ton of fuelyou can' even lift off to build rocket you need huge engine and lot of fuel --andrew ng using unsupervised learning to improve machine learning solutions
of datathe rocket ship cannot fly but not all data is created equal to use supervised algorithmswe need lots of labeled datawhich is hard and costly to generate with unsupervised learningwe can automatically label unlabeled examples here is how it would workwe would cluster all the examples and then apply the labels from labeled examples to the unlabeled ones within the same cluster unlabeled examples would receive the label of the labeled ones they are most similar to we will explore clustering in overfitting if the machine learning algorithm learns an overly complex function based on the training datait may perform very poorly on never-before-seen instances from holdout sets such as the validation set or test set in this casethe algorithm has overfit the training data--by extracting too much from the noise in the data--and has very poor generalization error in other wordsthe algorithm is memorizing the training data rather than learning how to generalize knowledge based off of it to address thiswe can introduce unsupervised learning as regularizer regularization is process used to reduce the complexity of machine learning algorithmhelping it capture the signal in the data without adjusting too much to the noise unsupervised pretraining is one such form of regularization instead of feeding the original input data directly into supervised learning algorithmwe can feed new representation of the original input data that we generate this new representation captures the essence of the original data--the true underlying structure--while losing some of the less representative noise along the way when we feed this new representation into the supervised learning algorithmit has less noise to wade through and captures more of the signalimproving its generalization error we will explore feature extraction in curse of dimensionality even with the advances in computational powerbig data is hard for machine learning algorithms to manage in generaladding more instances is not too problematic because we can parallelize operations using modern map-reduce solutions such as spark howeverthe more features we havethe more difficult training becomes there are startups such as figure eight that explicitly provide this human in the loop service underfitting is another problem that may occur in building machine learning applicationsbut this is easier to solve underfitting occurs because the model is too simple--the algorithm cannot build complex enough function approximation to make good decisions for the task at hand to solve thiswe can allow the algorithm to grow in size (have more parametersperform more training iterationsetc or apply more complicated machine learning algorithm unsupervised learning in the machine learning ecosystem
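To make the cluster-and-label idea from the "insufficient labeled data" discussion above concrete, here is a minimal sketch. The choice of k-means as the clustering algorithm and of majority vote within each cluster are illustrative assumptions, as is the synthetic data:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = rng.rand(500, 10)              # hypothetical feature vectors
y = np.full(500, -1)               # -1 marks an unlabeled example
y[:50] = rng.randint(0, 2, 50)     # only the first 50 examples carry labels

# Cluster all examples, labeled and unlabeled alike.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Propagate the majority label of each cluster to its unlabeled members.
y_propagated = y.copy()
for c in np.unique(clusters):
    members = clusters == c
    known = y[members][y[members] != -1]
    if len(known) > 0:
        y_propagated[members & (y == -1)] = np.bincount(known).argmax()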
In a very high-dimensional feature space, supervised algorithms need to learn how to separate points and build a function approximation to make good decisions. When the features are very numerous, this search becomes very expensive, from both a time and a compute perspective. In some cases, it may be impossible to find a good solution fast enough. This problem is known as the curse of dimensionality, and unsupervised learning is well suited to help manage it. With dimensionality reduction, we can find the most salient features in the original feature set, reduce the number of dimensions to a more manageable number while losing very little important information in the process, and then apply supervised algorithms to more efficiently perform the search for a good function approximation. We will cover dimensionality reduction in a later chapter.

Feature engineering
Feature engineering is one of the most vital tasks data scientists perform. Without the right features, the machine learning algorithm will not be able to separate points in space well enough to make good decisions on never-before-seen examples. However, feature engineering is typically very labor-intensive; it requires humans to creatively hand-engineer the right types of features. Instead, we can use representation learning from unsupervised learning algorithms to automatically learn the right types of feature representations to help solve the task at hand. We will explore automatic feature extraction in a later chapter.

Outliers
The quality of the data is also very important. If machine learning algorithms train on rare, distortive outliers, their generalization error will be worse than if the outliers were ignored or addressed separately. With unsupervised learning, we can perform outlier detection using dimensionality reduction and create a solution specifically for the outliers and, separately, a solution for the normal data. We will build an anomaly detection system in a later chapter.

Data drift
Machine learning models also need to be aware of drift in the data. If the data the model is making predictions on differs statistically from the data the model trained on, the model may need to retrain on data that is more representative of the current data. If the model does not retrain or does not recognize the drift, its prediction quality on current data will suffer. By building probability distributions using unsupervised learning, we can assess how different the current data is from the training set data; if the two are different enough, we can automatically trigger retraining. We will explore how to build these types of data discriminators in a later chapter.
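As a rough sketch of such a data discriminator, one could compare the distribution of each feature in the training data against incoming data with a two-sample Kolmogorov-Smirnov test. The use of scipy, the synthetic data, and the 1% significance threshold are illustrative assumptions:

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.RandomState(1)
X_train = rng.normal(0.0, 1.0, size=(5000, 8))   # data the model trained on
X_live = rng.normal(0.3, 1.0, size=(1000, 8))    # current data, slightly shifted

# Flag drift if any feature's distribution differs significantly.
drifted = [
    j for j in range(X_train.shape[1])
    if ks_2samp(X_train[:, j], X_live[:, j]).pvalue < 0.01
]
if drifted:
    print(f"Drift detected in features {drifted}; trigger retraining.")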
Before we delve into unsupervised learning systems, let's take a look at supervised learning algorithms and how they work. This will help frame where unsupervised learning fits within the machine learning ecosystem.

In supervised learning, there are two major types of problems: classification and regression. In classification, the AI must correctly classify items into one of two or more classes. If there are just two classes, the problem is called binary classification; if there are three or more classes, the problem is called multiclass classification. Classification problems are also known as discrete prediction problems because each class is a discrete group; they may also be referred to as qualitative or categorical problems. In regression, the AI must predict a continuous variable rather than a discrete one; regression problems may also be referred to as quantitative problems.

Supervised machine learning algorithms span the gamut, from very simple to very complex, but they are all aimed at minimizing some cost function or error rate (or maximizing a value function) that is associated with the labels we have for the dataset. As mentioned before, what we care about most is how well the machine learning solution generalizes to never-before-seen cases.

The choice of supervised learning algorithm is very important for minimizing this generalization error. To achieve the lowest possible generalization error, the complexity of the algorithmic model should match the complexity of the true function underlying the data. We do not know what this true function really is; if we did, we would not need machine learning to create a model, we would just solve the function to find the right answer. But since we do not know what this true function is, we choose a machine learning algorithm to test hypotheses and find the model that best approximates it (i.e., has the lowest possible generalization error).

If what the algorithm models is less complex than the true function, we have underfit the data. In this case, we could improve the generalization error by choosing an algorithm that can model a more complex function. However, if the algorithm designs an overly complex model, we have overfit the training data and will have poor performance on never-before-seen cases, increasing our generalization error. In other words, choosing more complex algorithms over simpler ones is not always the right choice; sometimes simpler is better. Each algorithm comes with its set of strengths, weaknesses, and assumptions, and knowing what to use when, given the data you have and the problem you are trying to solve, is very important to mastering machine learning.
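The point about matching model complexity to the true function can be seen in a small numpy experiment, fitting polynomials of increasing degree to noisy samples of a known cubic. All data and degrees here are synthetic and illustrative; degree 1 underfits, while degree 15 shows low training error but higher validation error, the signature of overfitting:

import numpy as np

rng = np.random.RandomState(2)

# Hypothetical ground truth: a cubic function observed with noise.
def true_fn(x):
    return x ** 3 - 2 * x

x_train = rng.uniform(-2, 2, 40)
y_train = true_fn(x_train) + rng.normal(0, 1, 40)
x_val = rng.uniform(-2, 2, 200)
y_val = true_fn(x_val) + rng.normal(0, 1, 200)

for degree in (1, 3, 15):  # too simple, about right, too complex
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.2f}, validation MSE {val_mse:.2f}")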
Let's review the most common supervised algorithms (including some real-world applications) before doing the same for unsupervised algorithms.

Linear methods
The most basic supervised learning algorithms model a simple linear relationship between the input features and the output variable that we wish to predict.

Linear regression
The simplest of all the algorithms is linear regression, which uses a model that assumes a linear relationship between the input variables (x) and the single output variable (y). If the true relationship between the inputs and the output is linear and the input variables are not highly correlated (a situation known as collinearity), linear regression may be an appropriate choice. If the true relationship is more complex or nonlinear, linear regression will underfit the data.

Because it is so simple, interpreting the relationship modeled by the algorithm is also very straightforward. Interpretability is a very important consideration for applied machine learning because solutions need to be understood and enacted by both technical and nontechnical people in industry. Without interpretability, solutions become inscrutable black boxes.

Strengths: Linear regression is simple, interpretable, and hard to overfit because it cannot model overly complex relationships. It is an excellent choice when the underlying relationship between the input and output variables is linear.

Weaknesses: Linear regression will underfit the data when the relationship between the input and output variables is nonlinear.

Applications: Since the true underlying relationship between human weight and human height is linear, linear regression is great for predicting weight using height as the input variable or, vice versa, for predicting height using weight as the input variable.

Note that this list is by no means exhaustive but does include the most commonly used machine learning algorithms. Note also that there may be other potential issues that make linear regression a poor choice, including outliers, correlation of error terms, and nonconstant variance of error terms.
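A minimal sketch of the height-to-weight application using scikit-learn; since the text names no dataset, the data below is synthetic, generated under the assumed linear relationship:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(3)

# Synthetic heights (cm) and weights (kg) with a roughly linear relationship.
heights = rng.uniform(150, 200, size=(200, 1))
weights = 0.9 * heights[:, 0] - 90 + rng.normal(0, 5, size=200)

model = LinearRegression().fit(heights, weights)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}")
print("predicted weight at 180 cm:", model.predict([[180.0]])[0])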
the simplest classification algorithm is logistic regressionwhich is also linear method but the predictions are transformed using the logistic function the outputs of this transformation are class probabilities--in other wordsthe probabilities that the instance belongs to the various classeswhere the sum of the probabilities for each instance adds up to one each instance is then assigned to the class for which it has the highest probability of belonging in strengths like linear regressionlogistic regression is simple and interpretable when the classes we are trying to predict are nonoverlapping and linearly separablelogistic regression is an excellent choice weaknesses when classes are not linearly separablelogistic regression will fail applications when classes are mostly nonoverlapping--for examplethe heights of young children versus the heights of adults--logistic regression will work well neighborhood-based methods another group of very simple algorithms are neighborhood-based methods neighborhood-based methods are lazy learners since they learn how to label new points based on the proximity of the new points to existing labeled points unlike linear regression or logistic regressionneighborhood-based models do not learn set model to predict labels for new pointsratherthese models predict labels for new points based purely on distance of new points to preexisting labeled points lazy learning is also referred to as instance-based learning or nonparametric methods -nearest neighbors the most common neighborhood-based method is -nearest neighbors (knnto label each new pointknn looks at number (where is an integer valueof nearest labeled points and has these already labeled neighbors vote on how to label the new point by defaultknn uses euclidean distance to measure what is closest the choice of is very important if is set to very low valueknn becomes very flexibledrawing highly nuanced boundaries and potentially overfitting the data if is set to very high valueknn becomes inflexibledrawing too rigid boundary and potentially underfitting the data strengths unlike linear methodsknn is highly flexible and adept at learning more complexnonlinear relationships yetknn remains simple and interpretable unsupervised learning in the machine learning ecosystem
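To see the effect of k described above, here is a small sketch with scikit-learn on synthetic data; the dataset and the particular values of k are illustrative assumptions. With k=1 the training accuracy is perfect but the test accuracy drops (overfitting), while a very large k flattens the boundary (underfitting):

import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Two interleaved half-moons: a simple nonlinear classification problem.
X, y = make_moons(n_samples=500, noise=0.3, random_state=4)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)

for k in (1, 15, 200):  # very flexible, moderate, very rigid
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k:3d}: train acc {knn.score(X_train, y_train):.2f}, "
          f"test acc {knn.score(X_test, y_test):.2f}")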
knn does poorly when the number of observations and features grow knn becomes computationally inefficient in this highly populatedhigh-dimensional space since it needs to calculate distances from the new point to many nearby labeled points in order to predict labels it cannot rely on an efficient model with reduced number of parameters to make the necessary prediction alsoknn is very sensitive to the choice of when is set too lowknn can overfitand when is set too highknn can underfit applications knn is regularly used in recommender systemssuch as those used to predict taste in movies (netflix)music (spotify)friends (facebook)photos (instagram)search (google)and shopping (amazonfor exampleknn can help predict what user will like given what similar users like (known as collaborative filteringor what the user has liked in the past (known as content-based filteringtree-based methods instead of using linear methodwe can have the ai build decision tree where all the instances are segmented or stratified into many regionsguided by the labels we have once this segmentation is completeeach region corresponds to particular class of label (for classification problemsor range of predicted values (for regression problemsthis process is similar to having the ai build rules automatically with the explicit goal of making better decisions or predictions single decision tree the simplest tree-based method is single decision treein which the ai goes once through the training datacreates rules for segmenting the data guided by the labelsand uses this tree to make predictions on the never-before-seen validation or test set howevera single decision tree is usually poor at generalizing what it has learned during training to never-before-seen cases because it usually overfits the training data during its one and only training iteration bagging to improve the single decision treewe can introduce bootstrap aggregation (more commonly known as bagging)in which we take multiple random samples of instances from the training datacreate decision tree for each sampleand then predict the output for each instance by averaging the predictions of each of these trees by using randomization of samples and averaging results from multiple trees--an approach that is also known as the ensemble method--bagging will address some of the overfitting that results from single decision tree closer look at supervised algorithms
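A sketch contrasting a single decision tree with a bagged ensemble, using scikit-learn's BaggingClassifier (whose default base estimator is a decision tree). The dataset and ensemble size are illustrative assumptions; typically the single tree overfits while the averaged ensemble generalizes better:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=5)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)

tree = DecisionTreeClassifier(random_state=5).fit(X_train, y_train)
bag = BaggingClassifier(n_estimators=100, random_state=5).fit(X_train, y_train)

# Averaging many trees trained on bootstrap samples reduces overfitting.
print("single tree test accuracy:", tree.score(X_test, y_test))
print("bagged trees test accuracy:", bag.score(X_test, y_test))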
we can improve overfitting further by sampling not only the instances but also the predictors with random forestswe take multiple random samples of instances from the training data like we do in baggingbutfor each split in each decision treewe make the split based not on all the predictors but rather random sample of the predictors the number of predictors we consider for each split is usually the square root of the total number of predictors by sampling the predictors in this waythe random forests algorithm creates trees that are even less correlated with each other (compared to the trees in bagging)reducing overfitting and improving the generalization error boosting another approachknown as boostingis used to create multiple trees like in bagging but to build the trees sequentiallyusing what the ai learned from the previous tree to improve results on the subsequent tree each tree is kept pretty shallowwith only few decision splitsand the learning occurs slowlytree by tree of all the tree-based methodsgradient boosting machines are among the best-performing and are commonly used to win machine learning competitions strengths tree-based methods are among the best-performing supervised-learning algorithms for prediction problems these methods are able to capture complex relationships in the data by learning many simple rulesone rule at time they are also capable of handling missing data and categorical features weaknesses tree-based methods are difficult to interpretespecially if many rules are needed to make good prediction performance also becomes an issue as the number of features increase applications gradient boosting and random forests are excellent for prediction problems support vector machines instead of building trees to separate datawe can use algorithms to create hyperplanes in space that separate the dataguided by the labels that we have the approach is known as support vector machines (svmssvms allow some violations to this separation--not all the points within an area in hyperspace need to have the same label- for more on gradient boosting in machine learning competitionsconsult ben gorman' blog post (bit ly/ qy unsupervised learning in the machine learning ecosystem
boundary-defining points of another label should be maximized as much as possible alsothe boundaries do not have to be linear--we can use nonlinear kernels to more flexibly separate the data neural networks we can learn representations of the data using neural networkswhich are composed of an input layerseveral hidden layersand an output layer the input layer uses the featuresand the output layer tries to match the response variable the hidden layers are nested hierarchy of concepts--each layer (or conceptis trying to understand how the previous layer relates to the output layer using this hierarchy of conceptsthe neural network is able to learn complicated concepts by building them out of simpler ones neural networks are one of the most powerful approaches to function approximation but are prone to overfitting and are hard to interpretshortcomings that we will explore in greater detail later in the book closer look at unsupervised algorithms we will now turn our attention to problems where we do not have labels instead of trying to make predictionsunsupervised learning algorithms will try to learn the underlying structure of the data dimensionality reduction one family of algorithms--known as dimensionality reduction algorithms--projects the original high-dimensional input data to low-dimensional spacefiltering out the not-so-relevant features and keeping as much of the interesting ones as possible dimensionality reduction allows unsupervised learning ai to more effectively identify patterns and more efficiently solve large-scalecomputationally expensive problems (often involving imagesvideospeechand textlinear projection there are two major branches of dimensionality--linear projection and nonlinear dimensionality reduction we will start with linear projection first principal component analysis (pcaone approach to learning the underlying structure of data is to identify which features out of the full set of features are most important in explaining the variability among the instances in the data not all features are equal for more on neutral networkscheck out deep learning (lowyoshua bengioand aaron courville (mit pressa closer look at unsupervised algorithms
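Anticipating the fuller PCA discussion below, here is a minimal scikit-learn sketch of projecting data to a lower-dimensional space while keeping track of the variance retained. The digits dataset and the two-component choice are illustrative assumptions:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 1797 images, 64 pixel features each

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

print("original shape:", X.shape, "-> reduced shape:", X_2d.shape)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())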
For some features, the values barely vary across instances, and these features are less useful in explaining the dataset. For other features, the values might vary considerably; these features are worth exploring in greater detail since they will be better at helping the model we design separate the data.

In PCA, the algorithm finds a low-dimensional representation of the data while retaining as much of the variation as possible. The number of dimensions we are left with is considerably smaller than the number of dimensions of the full dataset (i.e., the number of total features). We lose some of the variance by moving to this low-dimensional space, but the underlying structure of the data becomes easier to identify, allowing us to perform tasks like clustering more efficiently. There are several variants of PCA, which we will explore later in the book. These include mini-batch variants such as incremental PCA, nonlinear variants such as kernel PCA, and sparse variants such as sparse PCA.

Singular value decomposition (SVD)
Another approach to learning the underlying structure of the data is to reduce the rank of the original matrix of features to a smaller rank such that the original matrix can be recreated using a linear combination of some of the vectors in the smaller-rank matrix. This is known as SVD. To generate the smaller-rank matrix, SVD keeps the vectors of the original matrix that have the most information (i.e., the highest singular values). The smaller-rank matrix captures the most important elements of the original feature space.

Random projection
A similar dimensionality reduction algorithm involves projecting points from a high-dimensional space to a space of much lower dimension in such a way that the scale of distances between the points is preserved. We can use either a random Gaussian matrix or a random sparse matrix to accomplish this.

Manifold learning
Both PCA and random projection rely on projecting the data linearly from a high-dimensional space to a low-dimensional space. Instead of a linear projection, it may be better to perform a nonlinear transformation of the data; this is known as manifold learning or nonlinear dimensionality reduction.

Isomap
Isomap is one type of manifold learning approach. This algorithm learns the intrinsic geometry of the data manifold by estimating the geodesic or curved distance between each point and its neighbors, rather than the Euclidean distance. Isomap uses this to embed the original high-dimensional space into a low-dimensional one.

t-distributed stochastic neighbor embedding (t-SNE)
Another nonlinear dimensionality reduction technique, known as t-SNE, embeds high-dimensional data into a space of just two or three dimensions, allowing the transformed data to be visualized. In this two- or three-dimensional space, similar instances are modeled closer together, and dissimilar
instances are modeled further away dictionary learning an approach known as dictionary learning involves learning the sparse representation of the underlying data these representative elements are simplebinary vectors (zeros and ones)and each instance in the dataset can be reconstructed as weighted sum of the representative elements the matrix (known as the dictionarythat this unsupervised learning generates is mostly populated by zeros with only few nonzero weights by creating such dictionarythis algorithm is able to efficiently identify the most salient representative elements of the original feature space--these are the ones that have the most nonzero weights the representative elements that are less important will have few nonzero weights as with pcadictionary learning is excellent for learning the underlying structure of the datawhich will be helpful in separating the data and in identifying interesting patterns independent component analysis one common problem with unlabeled data is that there are many independent signals embedded together into the features we are given using independent component analysis (ica)we can separate these blended signals into their individual components after the separation is completewe can reconstruct any of the original features by adding together some combination of the individual components we generate ica is commonly used in signal processing tasks (for exampleto identify the individual voices in an audio clip of busy coffeehouselatent dirichlet allocation unsupervised learning can also explain dataset by learning why some parts of the dataset are similar to each other this requires learning unobserved elements within the dataset--an approach known as latent dirichlet allocation (ldafor exampleconsider document of text with manymany words these words within document are not purely randomratherthey exhibit some structure this structure can be modeled as unobserved elements known as topics after traininglda is able to explain given document with small set of topicswhere for each topic there is small set of frequently used words this is the hidden structure the lda is able to capturehelping us better explain previously unstructured corpus of text closer look at unsupervised algorithms
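A small sketch of LDA topic modeling with scikit-learn; the toy corpus, the vectorizer settings, and the two-topic choice are illustrative assumptions:

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the team won the game with a late goal",
    "the striker scored and the crowd cheered",
    "the market fell as investors sold shares",
    "stocks rallied after the earnings report",
]

counts = CountVectorizer(stop_words="english").fit(docs)
X = counts.transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=6).fit(X)

# Print the most heavily weighted words for each discovered topic.
words = counts.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:]]
    print(f"topic {i}: {top}")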
In summary, dimensionality reduction distills the original set of features into a smaller set of just the most important features. From here, we can run other unsupervised learning algorithms on this smaller set of features to find interesting patterns in the data (see the next section on clustering) or, if we have labels, we can speed up the training cycle of supervised learning algorithms by feeding in this smaller matrix of features instead of the original feature matrix.

Clustering
Once we have reduced the set of original features to a smaller, more manageable set, we can find interesting patterns by grouping similar instances of data together. This is known as clustering; it can be accomplished with a variety of unsupervised learning algorithms and used for real-world applications such as market segmentation.

k-means
To cluster well, we need to identify distinct groups such that the instances within a group are similar to each other but different from instances in other groups. One such algorithm is k-means clustering. With this algorithm, we specify the number of desired clusters, k, and the algorithm will assign each instance to exactly one of these k clusters. It optimizes the grouping by minimizing the within-cluster variation (also known as inertia) such that the sum of the within-cluster variations across all k clusters is as small as possible.

To speed up this clustering process, k-means randomly assigns each observation to one of the k clusters and then begins to reassign these observations to minimize the Euclidean distance between each observation and its cluster's center point, or centroid. As a result, different runs of k-means, each with a randomized start, will result in slightly different clustering assignments of the observations. From these different runs, we can choose the one that has the best separation, defined as the lowest total sum of within-cluster variations across all k clusters. (There are faster variants of k-means, such as mini-batch k-means, which we cover later in the book.)

Hierarchical clustering
An alternative clustering approach, one that does not require us to precommit to a particular number of clusters, is known as hierarchical clustering. One version of hierarchical clustering, called agglomerative clustering, uses a tree-based clustering method and builds what is called a dendrogram. A dendrogram can be depicted graphically as an upside-down tree, where the leaves are at the bottom and the tree trunk is at the top.
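A minimal k-means sketch with scikit-learn; the synthetic blob data and the choice of k=3 are illustrative assumptions. The n_init parameter runs the algorithm from several random starts and keeps the run with the lowest inertia, matching the multiple-runs idea above:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with three natural groups.
X, _ = make_blobs(n_samples=600, centers=3, cluster_std=1.0, random_state=7)

km = KMeans(n_clusters=3, n_init=10, random_state=7).fit(X)

print("cluster sizes:", np.bincount(km.labels_))
print("total within-cluster variation (inertia):", km.inertia_)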
Agglomerative clustering joins the leaves together, as we move vertically up the upside-down tree, based on how similar they are to each other. The instances (or groups of instances) that are most similar to each other are joined sooner, while the instances that are not as similar are joined later. With this iterative process, all the instances are eventually linked together, forming the single trunk of the tree.

This vertical depiction is very helpful. Once the hierarchical clustering algorithm has finished running, we can view the dendrogram and determine where we want to cut the tree: the lower we cut, the more individual branches (i.e., clusters) we are left with. If we want fewer clusters, we can cut higher on the dendrogram, closer to the single trunk at the very top of this upside-down tree. The placement of this vertical cut is similar to choosing the number of clusters k in the k-means clustering algorithm. (Hierarchical clustering uses Euclidean distance by default, but it can also use other similarity metrics such as correlation-based distance, which we will explore in greater detail later in the book.)

DBSCAN
An even more powerful clustering algorithm, based on the density of points, is known as DBSCAN (density-based spatial clustering of applications with noise). Given all the instances we have in space, DBSCAN groups together those that are packed closely together, where "close together" is defined as a minimum number of instances that must exist within a certain distance; we specify both the minimum number of instances required and the distance. If an instance is within this specified distance of multiple clusters, it will be grouped with the cluster to which it is most densely located. Any instance that is not within this specified distance of another cluster is labeled an outlier.

Unlike with k-means, we do not need to prespecify the number of clusters, and we can have arbitrarily shaped clusters. DBSCAN is much less prone to the distortion typically caused by outliers in the data.

Feature extraction
With unsupervised learning, we can learn new representations of the original features of the data, a field known as feature extraction. Feature extraction can be used to reduce the number of original features to a smaller subset, effectively performing dimensionality reduction. But feature extraction can also generate new feature representations to help improve performance on supervised learning problems.
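A DBSCAN sketch with scikit-learn; the half-moon data and the eps/min_samples settings are illustrative assumptions. Note that DBSCAN reports outliers with the label -1 and discovers the number of clusters on its own:

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Nonconvex clusters, which k-means handles poorly but DBSCAN handles well.
X, _ = make_moons(n_samples=400, noise=0.05, random_state=8)

# eps is the neighborhood distance; min_samples is the density threshold.
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

print("clusters found:", len(set(labels) - {-1}))
print("points labeled as outliers (-1):", np.sum(labels == -1))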
to generate new feature representationswe can use feedforwardnonrecurrent neural network to perform representation learningwhere the number of nodes in the output layer matches the number of nodes in the input layer this neural network is known as an autoencoder and effectively reconstructs the original featureslearning new representation using the hidden layers in between each hidden layer of the autoencoder learns representation of the original featuresand subsequent layers build on the representation learned by the preceding layers layer by layerthe autoencoder learns increasingly complicated representations from simpler ones the output layer is the final newly learned representation of the original features this learned representation can then be used as an input into supervised learning model with the objective of improving the generalization error feature extraction using supervised training of feedforward networks if we have labelsan alternate feature extraction approach is to use feedforwardnonrecurrent neural network where the output layer attempts to predict the correct label just like with autoencoderseach hidden layer learns representation of the original features howeverwhen generating the new representationsthis network is explicitly guided by the labels to extract the final newly learned representation of the original features in this networkwe extract the penultimate layer--the hidden layer just before the output layer this penultimate layer can then be used as an input into any supervised learning model unsupervised deep learning unsupervised learning performs many important functions in the field of deep learningsome of which we will explore in this book this field is known as unsupervised deep learning until very recentlythe training of deep neural networks was computationally intractable in these neural networksthe hidden layers learn internal representations to help solve the problem at hand the representations improve over time based on how the neural network uses the gradient of the error function in each training iteration to update the weights of the various nodes there are several types of autoencodersand each learns different set of representations these include denoising autoencoderssparse autoencodersand variational autoencodersall of which we will explore later in the book unsupervised learning in the machine learning ecosystem
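A minimal autoencoder sketch using the Keras API; the framework choice, the layer sizes, and the synthetic data are assumptions, since the text prescribes neither a framework nor an architecture. The output layer matches the input layer, and the hidden layer holds the new, compressed representation:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.RandomState(9)
X = rng.rand(1000, 64).astype("float32")   # hypothetical 64-feature inputs

# Encoder compresses 64 features to 8; decoder reconstructs the original 64.
inputs = keras.Input(shape=(64,))
encoded = layers.Dense(8, activation="relu")(inputs)
decoded = layers.Dense(64, activation="sigmoid")(encoded)
autoencoder = keras.Model(inputs, decoded)

autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)  # target equals input

# The trained encoder yields the new 8-dimensional representation.
encoder = keras.Model(inputs, encoded)
X_new = encoder.predict(X, verbose=0)
print(X_new.shape)  # (1000, 8)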
As these networks grow deeper, two major issues can occur in the process. First, the gradient of the error function may become very small, and, since backpropagation relies on multiplying these small weights together, the weights of the network may update very slowly or not at all, preventing proper training of the network. This is known as the vanishing gradient problem. Conversely, the gradient of the error function might become very large; with backprop, the weights throughout the network may then update in huge increments, making the training of the network very unstable. This is known as the exploding gradient problem. (Backpropagation, also known as backward propagation of errors, is a gradient descent-based algorithm used by neural networks to update weights. In backprop, the weights of the final layer are calculated first and then used to update the weights of the preceding layers. This process continues until the weights of the very first layer are updated.)

Unsupervised pretraining
To address these difficulties in training very deep, multilayered neural networks, machine learning researchers train neural networks in multiple, successive stages, where each stage involves a shallow neural network. The output of one shallow network is then used as the input of the next neural network. Typically, the first shallow neural network in this pipeline involves an unsupervised neural network, but the later networks are supervised. This unsupervised portion is known as greedy layer-wise unsupervised pretraining. In 2006, Geoffrey Hinton demonstrated the successful application of unsupervised pretraining to initialize the training of deeper neural network pipelines, kicking off the current deep learning revolution. Unsupervised pretraining allows the AI to capture an improved representation of the original input data, which the supervised portion then takes advantage of to solve the specific task at hand. (The approach is called "greedy" because each portion of the neural network is trained independently, not jointly; "layer-wise" refers to the layers of the network. In most modern neural networks, pretraining is usually not necessary; instead, all the layers are trained jointly using backpropagation, and major computational advances have made the vanishing gradient problem and the exploding gradient problem much more manageable.)

Unsupervised pretraining not only makes supervised problems easier to solve but also facilitates transfer learning. Transfer learning involves using machine learning algorithms to store knowledge gained from solving one task to solve another related task much more quickly and with considerably less data.
one applied example of unsupervised pretraining is the restricted boltzmann machine (rbm) shallowtwo-layer neural network the first layer is the input layerand the second layer is the hidden layer each node is connected to every node in the other layerbut nodes are not connected to nodes of the same layer--this is where the restriction occurs rbms can perform unsupervised tasks such as dimensionality reduction and feature extraction and provide helpful unsupervised pretraining as part of supervised learning solutions rbms are similar to autoencoders but differ in some important ways for exampleautoencoders have an output layerwhile rbms do not we will explore these and other differences in detail later in the book deep belief networks rbms can be linked together to form multistage neural network pipeline known as deep belief network (dbnthe hidden layer of each rbm is used as the input for the next rbm in other wordseach rbm generates representation of the data that the next rbm then builds upon by successively linking this type of representation learningthe deep belief network is able to learn more complicated representations that are often used as feature detectors generative adversarial networks one major advance in unsupervised deep learning has been the advent of generative adversarial networks (gans)introduced by ian goodfellow and his fellow researchers at the university of montreal in gans have many applicationsfor examplewe can use gans to create near-realistic synthetic datasuch as images and speechor perform anomaly detection in ganswe have two neural networks one network--known as the generator-generates data based on model data distribution it has created using samples of real data it has received the other network--known as the discriminator--discriminates between the data created by the generator and data from the true data distribution as simple analogythe generator is the counterfeiterand the discriminator is the police trying to identify the forgery the two networks are locked in zero-sum game the generator is trying to fool the discriminator into thinking the synthetic data comes from the true data distributionand the discriminator is trying to call out the synthetic data as fake feature detectors learn good representations of the original datahelping separate distinct elements for examplein imagesfeature detectors help separate elements such as noseseyesmouthsetc unsupervised learning in the machine learning ecosystem
underlying structure of the true data distribution even when there are no labels gans learn the underlying structure in the data through the training process and efficiently capture the structure using smallmanageable number of parameters this process is similar to the representation learning that occurs in deep learning each hidden layer in the neutral network of generator captures representation of the underlying data--starting very simply--and subsequent layers pick up more complicated representations by building on the simpler preceding layers using all these layers togetherthe generator learns the underlying structure of the data andusing what it has learnedthe generator attempts to create synthetic data that is nearly identical to the true data distribution if the generator has captured the essence of the true data distributionthe synthetic data will appear real sequential data problems using unsupervised learning unsupervised learning can also handle sequential data such as time series data one such approach involves learning the hidden states of markov model in the simple markov modelstates are fully observed and change stochastically (in other wordsrandomlyfuture states depend only on the current state and are not dependent on previous states in hidden markov modelthe states are only partially observablebutlike with simple markov modelsthe outputs of these partially observable states are fully observable since the observations that we have are insufficient to determine the state completelywe need unsupervised learning to help discover these hidden states more fully hidden markov model algorithms involve learning the probable next state given what we know about the sequence of previously occurringpartially observable states and fully observable outputs these algorithms have had major commercial applications in sequential data problems involving speechtextand time series reinforcement learning using unsupervised learning reinforcement learning is the third major branch of machine learningin which an agent determines its optimal behavior (actionsin an environment based on feedback (rewardthat it receives this feedback is known as the reinforcement signal the agent' goal is to maximize its cumulative reward over time while reinforcement learning has been around since the sit has made mainstream headline news only in recent years in deepmind--now owned by google--applied reinforcement learning to achieve superhuman-level performance at reinforcement learning using unsupervised learning
sensory data as input and no prior knowledge of the rules of the games in deepmind again captured the imagination of the machine learning community--this time the deepmind reinforcement learning-based ai agent alphago beat lee sedolone of the world' best go players these successes have cemented reinforcement learning as mainstream ai topic todaymachine learning researchers are applying reinforcement learning to solve many different types of problems includingstock market tradingin which the agent buys and sells (actionsand receives profits or losses (rewardsin return video games and board gamesin which the agent makes game decisions (actionsand wins or loses (rewardsself-driving carsin which the agent directs the vehicle (actionsand either stays on course or crashes (rewardsmachine controlin which the agent moves about its environment (actionsand either completes the course or fails (rewardsin the simplest reinforcement learning problemswe have finite problem--with finite number of states of the environmenta finite number of actions that are possible at any given state of the environmentand finite number of rewards the action taken by the agent given the current state of the environment determines the next stateand the agent' goal is to maximize its long-term reward this family of problems is known as finite markov decision processes howeverin the real worldthings are not so simple--the reward is unknown and dynamic rather than known and static to help discover this unknown reward function and approximate it as best as possiblewe can apply unsupervised learning using this approximated reward functionwe can apply reinforcement learning solutions to increase the cumulative reward over time semisupervised learning even though supervised learning and unsupervised learning are two distinct major branches of machine learningthe algorithms from each branch can be mixed together as part of machine learning pipeline typicallythis mix of supervised and unsupervised is used when we want to take full advantage of the few labels that we have or when we want to find newyet unknown patterns from unlabeled data in pipeline refers to system of machine learning solutions that are applied in succession to achieve larger objective unsupervised learning in the machine learning ecosystem
solved using hybrid of supervised and unsupervised learning known as semisupervised learning we will explore this area in greater detail later in the book successful applications of unsupervised learning in the last ten yearsmost successful commercial applications of machine learning have come from the supervised learning spacebut this is changing unsupervised learning applications have become more commonplace sometimesunsupervised learning is just means to make supervised applications better other timesunsupervised learning achieves the commercial application itself here is closer look at two of the biggest applications of unsupervised learning to dateanomaly detection and group segmentation anomaly detection performing dimensionality reduction can reduce the original high-dimensional feature space into transformed lower-dimensional space in this lower-dimensional spacewe find where the majority of points densely lie this portion is the normal space points that lie much farther away are called outliers--or anomalies--and are worth investigating in greater detail anomaly detection systems are commonly used for fraud detection such as credit card fraudwire fraudcyber fraudand insurance fraud anomaly detection is also used to identify raremalicious events such as hacking of internet-connected devicesmaintenance failures in mission-critical equipment such as airplanes and trainsand cybersecurity breaches due to malware and other pernicious agents we can use these systems for spam detectionsuch as the email spam filter example we used earlier in the other applications include finding bad actors to stop activity such as terrorist financingmoney launderinghuman and narcotics traffickingand arms dealingidentifying high risk events in financial tradingand discovering diseases such as cancer to make the analysis of anomalies more manageablewe can use clustering algorithm to group similar anomalies together and then hand-label these clusters based on the types of behavior they represent with such systemwe can have an unsupervised learning ai that is able to identify anomaliescluster them into appropriate groupsandusing the cluster labels provided by humansrecommend to business analysts the appropriate course of action with anomaly detection systemswe can take an unsupervised problem and eventually create semisupervised one with this cluster-and-label approach over timewe can run supervised algorithms on the labeled data alongside the unsupervised successful applications of unsupervised learning
as a tool to understand biological systems. Within this large field, Luis works in bioimage informatics, which is the application of machine learning techniques to the analysis of images of biological specimens. His main focus is on the processing of large-scale image data. With robotic microscopes, it is possible to acquire hundreds of thousands of images in a day, and visual inspection of all the images becomes impossible.

Luis has a PhD from Carnegie Mellon University, which is one of the leading universities in the world in the area of machine learning. He is also the author of several scientific publications.

Luis started developing open source software as a way to apply to real code what he was learning in his computer science courses at the Technical University of Lisbon. He later started developing in Python and has contributed to several open source libraries in this language. He is the lead developer on mahotas, the popular computer vision package for Python, and is a contributor of several machine learning codes.

I thank my wife, Rita, for all her love and support, and I thank my daughter, Anna, for being the best thing ever.
Matthieu Brucher holds an engineering degree from the Ecole Superieure d'Electricite (Information, Signals, Measures), France, and has a PhD in unsupervised manifold learning from the Universite de Strasbourg, France. He currently holds an HPC software developer position in an oil company and works on next-generation reservoir simulation.

Mike Driscoll has been programming in Python for many years. He enjoys writing about Python on his blog and occasionally writes for the Python Software Foundation, i-programmer, and Developer Zone. He enjoys photography and reading a good book. Mike has also been a technical reviewer for the following Packt Publishing books: Python Object Oriented Programming, Python Graphics Cookbook, and Python Web Development Beginner's Guide.

I would like to thank my wife, Evangeline, for always supporting me. I would also like to thank my friends and family for all that they do to help me. And I would like to thank Jesus Christ for saving me.
molecular and cell biology at the University of Melbourne. He is currently a research fellow at Nanyang Technological University, Singapore, and an honorary fellow at the University of Melbourne, Australia. He co-edits The Python Papers and has co-founded the Python User Group (Singapore), where he has served as vice president. His research interests lie in life, both biological life and artificial life, and in artificial intelligence, using computer science and statistics as tools to understand life and its numerous aspects. You can find his website at
preface getting started with python machine learning machine learning and python the dream team what the book will teach you (and what it will notwhat to do when you are stuck getting started introduction to numpyscipyand matplotlib installing python chewing data efficiently with numpy and intelligently with scipy learning numpy learning scipy our first (tinymachine learning application reading in the data preprocessing and cleaning the data choosing the right model and learning algorithm summary indexing handling non-existing values comparing runtime behaviors before building our first model starting with simple straight line towards some advanced stuff stepping back to go forward another look at our data training and testing answering our initial question learning how to classify with real-world examples the iris dataset the first step is visualization building our first classification model evaluation holding out data and cross-validation
building more complex classifiers more complex dataset and more complex classifier learning about the seeds dataset features and feature engineering nearest neighbor classification binary and multiclass classification summary clustering finding related posts measuring the relatedness of posts how not to do it how to do it preprocessing similarity measured as similar number of common words converting raw text into bag-of-words counting words normalizing the word count vectors removing less important words stemming installing and using nltk extending the vectorizer with nltk' stemmer stop words on steroids our achievements and goals clustering kmeans getting test data to evaluate our ideas on clustering posts solving our initial challenge another look at noise tweaking the parameters summary topic modeling classification detecting poor answers latent dirichlet allocation (ldabuilding topic model comparing similarity in topic space modeling the whole of wikipedia choosing the number of topics summary sketching our roadmap learning to classify classy answers ii
tuning the instance tuning the classifier fetching the data slimming the data down to chewable chunks preselection and processing of attributes defining what is good answer creating our first classifier starting with the -nearest neighbor (knnalgorithm engineering the features training the classifier measuring the classifier' performance designing more features deciding how to improve bias-variance and its trade-off fixing high bias fixing high variance high bias or low bias using logistic regression bit of math with small example applying logistic regression to our postclassification problem looking behind accuracy precision and recall slimming the classifier ship itsummary classification ii sentiment analysis sketching our roadmap fetching the twitter data introducing the naive bayes classifier getting to know the bayes theorem being naive using naive bayes to classify accounting for unseen words and other oddities accounting for arithmetic underflows creating our first classifier and tuning it solving an easy problem first using all the classes tuning the classifier' parameters cleaning tweets taking the word types into account determining the word types iii
successfully cheating using sentiwordnet our first estimator putting everything together summary regression recommendations regression recommendations improved classification iii music genre classification predicting house prices with regression multidimensional regression cross-validation for regression penalized regression and penalties using lasso or elastic nets in scikit-learn greater than scenarios an example based on text setting hyperparameters in smart way rating prediction and recommendations summary improved recommendations using the binary matrix of recommendations looking at the movie neighbors combining multiple methods basket analysis obtaining useful predictions analyzing supermarket shopping baskets association rule mining more advanced basket analysis summary sketching our roadmap fetching the music data converting into wave format looking at music decomposing music into sine wave components using fft to build our first classifier increasing experimentation agility training the classifier using the confusion matrix to measure accuracy in multiclass problems an alternate way to measure classifier performance using receiver operator characteristic (rociv
17,564
improving classification performance with mel frequency cepstral coefficients summary computer vision pattern recognition introducing image processing loading and displaying images basic image processing thresholding gaussian blurring filtering for different effects adding salt and pepper noise pattern recognition computing features from images writing your own features classifying harder dataset local feature representations summary putting the center in focus dimensionality reduction sketching our roadmap selecting features detecting redundant features using filters correlation mutual information asking the model about the features using wrappers other feature selection methods feature extraction about principal component analysis (pca limitations of pca and how lda can help multidimensional scaling (mdssummary sketching pca applying pca big(gerdata learning about big data using jug to break up your pipeline into tasks about tasks reusing partial results looking under the hood using jug for data analysis [
17,565
using amazon web services (awscreating your first machines automating the generation of clusters with starcluster summary installing python packages on amazon linux running jug on our cloud machine appendixwhere to learn more about machine learning index online courses books & sites blogs data sources getting competitive what was left out summary vi
17,566
you could argue that it is a fortunate coincidence that you are holding this book in your hands (or on your e-book reader). after all, there are millions of books printed every year, which are read by millions of readers. and then there is this book read by you. you could also argue that a couple of machine learning algorithms played their role in leading you to this book (or this book to you). and we, the authors, are happy that you want to understand more about the how and why.
most of this book will cover the how. how should the data be processed so that machine learning algorithms can make the most out of it? how should you choose the right algorithm for a problem at hand?
occasionally, we will also cover the why. why is it important to measure correctly? why does one algorithm outperform another one in a given scenario?
we know that there is much more to learn to be an expert in the field. after all, we only covered some of the "hows" and just a tiny fraction of the "whys". but at the end, we hope that this mixture will help you to get up and running as quickly as possible.

what this book covers
chapter 1, getting started with python machine learning, introduces the basic idea of machine learning with a very simple example. despite its simplicity, it will challenge us with the risk of overfitting.
chapter 2, learning how to classify with real-world examples, explains the use of real data to learn about classification, whereby we train a computer to be able to distinguish between different classes of flowers.
chapter 3, clustering -- finding related posts, explains how powerful the bag-of-words approach is when we apply it to finding similar posts without really understanding them.
chapter 4, topic modeling, takes us beyond assigning each post to a single cluster and shows us how to assign them to several topics, as real text can deal with multiple topics.
chapter 5, classification -- detecting poor answers, explains how to use logistic regression to find whether a user's answer to a question is good or bad. behind the scenes, we will learn how to use the bias-variance trade-off to debug machine learning models.
chapter 6, classification ii -- sentiment analysis, introduces how naive bayes works, and how to use it to classify tweets in order to see whether they are positive or negative.
chapter 7, regression -- recommendations, discusses a classical topic in handling data, but it is still relevant today. we will use it to build recommendation systems: a system that can take user input about likes and dislikes to recommend new products.
chapter 8, regression -- recommendations improved, improves our recommendations by using multiple methods at once. we will also see how to build recommendations just from shopping data, without the need of rating data (which users do not always provide).
chapter 9, classification iii -- music genre classification, illustrates how, if someone has scrambled our huge music collection, our only hope to create order is to let a machine learner classify our songs. it will turn out that it is sometimes better to trust someone else's expertise than creating features ourselves.
chapter 10, computer vision -- pattern recognition, explains how to apply classification in the specific context of handling images, a field known as pattern recognition.
chapter 11, dimensionality reduction, teaches us what other methods exist that can help us in downsizing data so that it is chewable by our machine learning algorithms.
chapter 12, big(ger) data, explains how data sizes keep getting bigger, and how this often becomes a problem for the analysis. in this chapter, we explore some approaches to deal with larger data by taking advantage of multiple cores or computing clusters. we also have an introduction to using cloud computing (using amazon web services as our cloud provider).
appendix, where to learn more about machine learning, covers a list of wonderful resources available for machine learning.
what you need for this book
this book assumes you know python and how to install a library using easy_install or pip. we do not rely on any advanced mathematics such as calculus or matrix algebra.
to summarize it, we are using the following versions throughout this book, but you should be fine with any more recent one:
- python
- numpy and scipy
- scikit-learn

who this book is for
this book is for python programmers who want to learn how to perform machine learning using open source libraries. we will walk through the basic modes of machine learning based on realistic examples.
this book is also for machine learners who want to start using python to build their systems. python is a flexible language for rapid prototyping, while the underlying algorithms are all written in optimized c or c++. therefore, the resulting code is fast and robust enough to be usable in production as well.

conventions
in this book, you will find a number of styles of text that distinguish between different kinds of information. here are some examples of these styles, and an explanation of their meaning.
code words in text are shown as follows: "we can include other contexts through the use of the include directive."
a block of code is set as follows:

def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that the most alike are at the beginning
    likes = likes[::-1]
    # returns the rating for the most similar movie available
    for ell in likes:
        if reviews[uid, ell] > 0:
            return reviews[uid, ell]
when we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

def nn_movie(movie_likeness, reviews, uid, mid):
    likes = movie_likeness[mid].argsort()
    # reverse the sorting so that the most alike are at the beginning
    likes = likes[::-1]
    # returns the rating for the most similar movie available
    for ell in likes:
        if reviews[uid, ell] > 0:
            return reviews[uid, ell]

new terms and important words are shown in bold. words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "clicking on the next button moves you to the next screen".
warnings or important notes appear in a box like this.
tips and tricks appear like this.

reader feedback
feedback from our readers is always welcome. let us know what you think about this book--what you liked or may have disliked. reader feedback is important for us to develop titles that you really get the most out of.
to send us general feedback, simply send an e-mail to feedback@packtpub.com, and mention the book title via the subject of your message.
if there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

customer support
now that you are the proud owner of a packt book, we have a number of things to help you to get the most from your purchase.
downloading the example code
you can download the example code files for all packt books you have purchased from your account at www.packtpub.com. if you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files e-mailed directly to you.

errata
although we have taken every care to ensure the accuracy of our content, mistakes do happen. if you find a mistake in one of our books--maybe a mistake in the text or the code--we would be grateful if you would report this to us. by doing so, you can save other readers from frustration and help us improve subsequent versions of this book. if you find any errata, please report them by visiting www.packtpub.com/submit-errata, selecting your book, clicking on the errata submission form link, and entering the details of your errata. once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the errata section of that title. any existing errata can be viewed by selecting your title from www.packtpub.com/support.

piracy
piracy of copyright material on the internet is an ongoing problem across all media. at packt, we take the protection of our copyright and licenses very seriously. if you come across any illegal copies of our works, in any form, on the internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
please contact us at copyright@packtpub.com with a link to the suspected pirated material.
we appreciate your help in protecting our authors, and our ability to bring you valuable content.

questions
you can contact us at questions@packtpub.com if you are having a problem with any aspect of the book, and we will do our best to address it.
getting started with python machine learning
machine learning (ml) teaches machines how to carry out tasks by themselves. it is that simple. the complexity comes with the details, and that is most likely the reason you are reading this book.
maybe you have too much data and too little insight, and you hoped that using machine learning algorithms will help you solve this challenge. so you started to dig into random algorithms. but after some time you were puzzled: which of the myriad of algorithms should you actually choose?
or maybe you are broadly interested in machine learning and have been reading a few blogs and articles about it for some time. everything seemed to be magic and cool, so you started your exploration and fed some toy data into a decision tree or a support vector machine. but after you successfully applied it to some other data, you wondered: was the whole setting right? did you get the optimal results? and how do you know there are no better algorithms? or whether your data was "the right one"?
welcome to the club! we, the authors, were at those stages once upon a time, looking for information that tells the real story behind the theoretical textbooks on machine learning. it turned out that much of that information was "black art", not usually taught in standard textbooks. so, in a sense, we wrote this book to our younger selves: a book that not only gives a quick introduction to machine learning, but also teaches you lessons that we have learned along the way. we hope that it will also give you, the reader, a smoother entry into one of the most exciting fields in computer science.
machine learning and python -- the dream team
the goal of machine learning is to teach machines (software) to carry out tasks by providing them with a couple of examples (how to do or not do a task). let us assume that each morning when you turn on your computer, you perform the same task of moving e-mails around so that only those e-mails belonging to a particular topic end up in the same folder. after some time, you feel bored and think of automating this chore. one way would be to start analyzing your brain and writing down all the rules your brain processes while you are shuffling your e-mails. however, this will be quite cumbersome and always imperfect: while you will miss some rules, you will over-specify others. a better and more future-proof way would be to automate this process by choosing a set of e-mail meta information and body/folder name pairs and let an algorithm come up with the best rule set. the pairs would be your training data, and the resulting rule set (also called a model) could then be applied to future e-mails that we have not yet seen. this is machine learning in its simplest form (a minimal code sketch of this idea appears at the end of this section).
of course, machine learning (often also referred to as data mining or predictive analysis) is not a brand-new field in itself. quite the contrary, its success over recent years can be attributed to the pragmatic way of using rock-solid techniques and insights from other successful fields, for example, statistics. there, the purpose is for us humans to get insights into the data by learning more about the underlying patterns and relationships. as you read more and more about successful applications of machine learning (you have checked out kaggle.com already, haven't you?), you will see that applied statistics is a common field among machine learning experts.
as you will see later, the process of coming up with a decent ml approach is never a waterfall-like process. instead, you will see yourself going back and forth in your analysis, trying out different versions of your input data on diverse sets of ml algorithms. it is this explorative nature that lends itself perfectly to python. being an interpreted high-level programming language, it may seem that python was designed specifically for the process of trying out different things. what is more, it does this very fast. sure enough, it is slower than c or similar statically typed programming languages; nevertheless, with a myriad of easy-to-use libraries that are often written in c, you don't have to sacrifice speed for agility.
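coming back to the e-mail example above, here is a minimal sketch of the idea: hypothetical (subject, folder) training pairs, a vectorizer that turns the e-mail meta information into features, and an off-the-shelf learner that comes up with the rule set for us. the subjects and folder names are made up for illustration; this is not code from the book:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# hypothetical training pairs: e-mail subjects and the folders they belong to
subjects = ["meeting on monday", "cheap pills", "project meeting notes", "buy cheap watches"]
folders = ["work", "spam", "work", "spam"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(subjects)       # turn text into word-count features
model = MultinomialNB().fit(X, folders)      # the learned rule set (the model)

# the model can now be applied to future e-mails that we have not yet seen
print(model.predict(vectorizer.transform(["monday meeting agenda"])))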
what the book will teach you (and what it will not)
this book will give you a broad overview of the types of learning algorithms that are currently used in the diverse fields of machine learning, and what to watch out for when applying them. from our own experience, however, we know that doing the "cool" stuff--using and tweaking machine learning algorithms such as support vector machines (svm), nearest neighbor search (nns), or ensembles thereof--will only consume a tiny fraction of the overall time of a good machine learning expert. looking at the following typical workflow, we see that most of our time will be spent in rather mundane tasks:
1. reading the data and cleaning it
2. exploring and understanding the input data
3. analyzing how best to present the data to the learning algorithm
4. choosing the right model and learning algorithm
5. measuring the performance correctly
when talking about exploring and understanding the input data, we will need a bit of statistics and basic math. but while doing this, you will see that those topics, which seemed so dry in your math class, can actually be really exciting when you use them to look at interesting data.
the journey begins when you read in the data. when you have to face issues such as invalid or missing values, you will see that this is more an art than a precise science--and a very rewarding one, as doing this part right will open your data to more machine learning algorithms, and thus increase the likelihood of success.
with the data being ready in your program's data structures, you will want to get a real feeling of what kind of animal you are working with. do you have enough data to answer your questions? if not, you might want to think about additional ways to get more of it. do you maybe even have too much data? then you probably want to think about how best to extract a sample of it.
often you will not feed the data directly into your machine learning algorithm. instead, you will find that you can refine parts of the data before training. many times, the machine learning algorithm will reward you with increased performance. you will even find that a simple algorithm with refined data generally outperforms a very sophisticated algorithm with raw data. this part of the machine learning workflow is called feature engineering, and it is generally a very exciting and rewarding challenge. creative and intelligent as you are, you will immediately see the results.
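as a toy illustration of that claim (the numbers and feature names here are made up, not from the book), consider raw width and height measurements on which no single threshold separates two classes, while a derived aspect-ratio feature makes a one-line model possible:

import numpy as np

# hypothetical raw measurements for six objects: three "tall" and three "squat"
width = np.array([1.0, 2.0, 3.0, 1.1, 2.2, 3.3])
height = np.array([2.1, 4.2, 6.1, 1.0, 2.1, 3.2])

# no threshold on width or height alone separates the classes,
# but the engineered feature height/width does
ratio = height / width
print(ratio > 1.5)  # [ True  True  True False False False]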
choosing the right learning algorithm is not simply a shootout of the three or four that are in your toolbox (there will be more algorithms in your toolbox, as you will see). it is more of a thoughtful process of weighing different performance and functional requirements. do you need fast results and are willing to sacrifice quality? or would you rather spend more time to get the best possible result? do you have a clear idea of the future data, or should you be a bit more conservative on that side?
finally, measuring the performance is the part where most mistakes are waiting for the aspiring ml learner. there are easy ones, such as testing your approach with the same data on which you have trained. but there are more difficult ones, for example, when you have imbalanced training data (a tiny illustration of this trap appears at the end of this section). again, data is the part that determines whether your undertaking will fail or succeed.
we see that only the fourth point is dealing with the fancy algorithms. nevertheless, we hope that this book will convince you that the other four tasks are not simply chores, but can be equally important, if not more exciting. our hope is that by the end of the book you will have truly fallen in love with data instead of learned algorithms.
to that end, we will not overwhelm you with the theoretical aspects of the diverse ml algorithms, as there are already excellent books in that area (you will find pointers in appendix, where to learn more about machine learning). instead, we will try to provide an intuition of the underlying approaches in the individual chapters--just enough for you to get the idea and be able to undertake your first steps. hence, this book is by no means "the definitive guide" to machine learning. it is more of a starter kit. we hope that it ignites your curiosity enough to keep you eager in trying to learn more and more about this interesting field.
in the rest of this chapter, we will set up and get to know the basic python libraries, numpy and scipy, and then train our first machine learning model using scikit-learn. during this endeavor, we will introduce basic ml concepts that will later be used throughout the book. the rest of the chapters will then go into more detail through the five steps described earlier, highlighting different aspects of machine learning in python using diverse application scenarios.
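as a tiny numpy illustration of the imbalanced-data trap mentioned above (a made-up example, not from the book): with 99 negative examples and a single positive one, a useless model that always predicts "negative" still reaches 99 percent accuracy:

import numpy as np

y_true = np.array([0] * 99 + [1])    # 99 negatives, 1 positive
y_pred = np.zeros(100, dtype=int)    # a "model" that always says negative
print((y_true == y_pred).mean())     # 0.99 -- yet it never finds the positive case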
what to do when you are stuck
we try to convey every idea necessary to reproduce the steps throughout this book. nevertheless, there will be situations when you might get stuck. the reasons might range from simple typos over odd combinations of package versions to problems in understanding.
in such a situation, there are many different ways to get help. most likely, your problem will already have been raised and solved in the following excellent q&a sites:
- metaoptimize: laser-focused on machine learning topics. for almost every question, it contains above-average answers from machine learning experts. even if you don't have any questions, it is a good habit to check it out every now and then and read through some of the questions and answers.
- #machinelearning on freenode: this irc channel is focused on machine learning topics. it is a small but very active and helpful community of machine learning experts.
- cross validated: similar to metaoptimize, but focuses more on statistics problems.
- stack overflow: similar, but with a broader focus on general programming topics. it contains, for example, more questions on some of the packages that we will use in this book (scipy and matplotlib).
- twotoreal: run by the authors, to support you in topics that don't fit in any of the above buckets. if you post your question, we will get an instant message; if any of us are online, we will be drawn into a chat with you.
as stated at the beginning, this book tries to help you get started quickly on your machine learning journey. we therefore highly encourage you to build up your own list of machine learning-related blogs and check them out regularly. this is the best way to get to know what works and what does not.
the only blog we want to highlight right here is the blog of the kaggle company, which is carrying out machine learning competitions (more links are provided in appendix, where to learn more about machine learning). typically, they encourage the winners of the competitions to write down how they approached the competition, what strategies did not work, and how they arrived at the winning strategy. if you don't read anything else, fine; but this is a must.

getting started
assuming that you have already installed python (any reasonably recent version should be fine), we need to install numpy and scipy for numerical operations, as well as matplotlib for visualization.
introduction to numpy, scipy, and matplotlib
before we can talk about concrete machine learning algorithms, we have to talk about how best to store the data we will chew through. this is important, as the most advanced learning algorithm will not be of any help to us if it will never finish. this may be simply because accessing the data is too slow, or maybe its representation forces the operating system to swap all day. add to this that python is an interpreted language (a highly optimized one, though) that is slow for many numerically heavy algorithms compared to c or fortran. so we might ask why on earth so many scientists and companies are betting their fortune on python, even in highly computation-intensive areas.
the answer is that in python, it is very easy to offload number-crunching tasks to the lower layer in the form of a c or fortran extension, and that is exactly what numpy and scipy do. numpy provides the support of highly optimized multidimensional arrays, which are the basic data structure of most state-of-the-art algorithms. scipy uses those arrays to provide a set of fast numerical recipes. finally, matplotlib is probably the most convenient and feature-rich library to plot high-quality graphs using python.

installing python
luckily, for all the major operating systems, namely windows, mac, and linux, there are targeted installers for numpy, scipy, and matplotlib. if you are unsure about the installation process, you might want to install the enthought python distribution or python(x,y), which come with all the earlier mentioned packages included.

chewing data efficiently with numpy and intelligently with scipy
let us quickly walk through some basic numpy examples and then take a look at what scipy provides on top of it. on the way, we will get our feet wet with plotting using the marvelous matplotlib package.
you will find more interesting examples of what numpy can offer at scipy.org/tentative_numpy_tutorial.
you will also find the book numpy beginner's guide - second edition, ivan idris, packt publishing, very valuable. additional tutorial-style guides are available at scipy-lectures.github.com; you may also visit the official scipy tutorial in the scipy documentation.
in this book, we will use the numpy and scipy versions listed in the preface.

learning numpy
so let us import numpy and play a bit with it. for that, we need to start the python interactive shell:

>>> import numpy
>>> numpy.version.full_version

as we do not want to pollute our namespace, we certainly should not do the following:

>>> from numpy import *

because numpy.array will potentially shadow the array package that is included in standard python. instead, we will use the following convenient shortcut:

>>> import numpy as np
>>> a = np.array([0, 1, 2, 3, 4, 5])
>>> a
array([0, 1, 2, 3, 4, 5])
>>> a.ndim
1
>>> a.shape
(6,)

we just created an array in a similar way to how we would create a list in python. however, numpy arrays have additional information about the shape. in this case, it is a one-dimensional array of six elements. no surprises so far.
we can now transform this array into a 2d matrix:

>>> b = a.reshape((3, 2))
>>> b
array([[0, 1],
       [2, 3],
       [4, 5]])
>>> b.ndim
2
>>> b.shape
(3, 2)
the funny thing starts when we realize just how much the numpy package is optimized. for example, it avoids copies wherever possible:

>>> b[1][0] = 77
>>> b
array([[ 0,  1],
       [77,  3],
       [ 4,  5]])
>>> a
array([ 0,  1, 77,  3,  4,  5])

in this case, we have modified the value 2 to 77 in b, and we can immediately see the same change reflected in a as well. keep that in mind whenever you need a true copy:

>>> c = a.reshape((3, 2)).copy()
>>> c
array([[ 0,  1],
       [77,  3],
       [ 4,  5]])
>>> c[0][0] = -99
>>> a
array([ 0,  1, 77,  3,  4,  5])
>>> c
array([[-99,   1],
       [ 77,   3],
       [  4,   5]])

here, c and a are totally independent copies.
another big advantage of numpy arrays is that the operations are propagated to the individual elements:

>>> a * 2
array([  0,   2, 154,   6,   8,  10])
>>> a ** 2
array([   0,    1, 5929,    9,   16,   25])

contrast that to ordinary python lists:

>>> [1, 2, 3, 4, 5] * 2
[1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
>>> [1, 2, 3, 4, 5] ** 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for ** or pow(): 'list' and 'int'
of course, by using numpy arrays we sacrifice the agility python lists offer. simple operations like adding or removing elements are a bit complex for numpy arrays. luckily, we have both at our disposal, and we will use the right one for the task at hand.

indexing
part of the power of numpy comes from the versatile ways in which its arrays can be accessed. in addition to normal list indexing, it allows us to use arrays themselves as indices:

>>> a[np.array([2, 3, 4])]
array([77,  3,  4])

in addition to the fact that conditions are now propagated to the individual elements, we gain a very convenient way to access our data:

>>> a > 4
array([False, False,  True, False, False,  True], dtype=bool)
>>> a[a > 4]
array([77,  5])

this can also be used to trim outliers:

>>> a[a > 4] = 4
>>> a
array([0, 1, 4, 3, 4, 4])

as this is a frequent use case, there is a special clip function for it, clipping the values at both ends of an interval with one function call as follows:

>>> a.clip(0, 4)
array([0, 1, 4, 3, 4, 4])

handling non-existing values
the power of numpy's indexing capabilities comes in handy when preprocessing data that we have just read in from a text file. it will most likely contain invalid values, which we will mark as not being a real number using numpy.nan as follows:

>>> c = np.array([1, 2, np.nan, 3, 4])  # let's pretend we have read this from a text file
>>> c
array([  1.,   2.,  nan,   3.,   4.])
>>> np.isnan(c)
array([False, False,  True, False, False], dtype=bool)
[~np isnan( )array( ]np mean( [~np isnan( )] comparing runtime behaviors let us compare the runtime behavior of numpy with normal python lists in the following codewe will calculate the sum of all squared numbers of to and see how much time the calculation will take we do it times and report the total time so that our measurement is accurate enough import timeit normal_py_sec timeit timeit('sum( * for in xrange( ))'number= naive_np_sec timeit timeit('sum(na*na)'setup="import numpy as npna=np arange( )"number= good_np_sec timeit timeit('na dot(na)'setup="import numpy as npna=np arange( )"number= print("normal python% sec"%normal_py_secprint("naive numpy% sec"%naive_np_secprint("good numpy% sec"%good_np_secnormal python sec naive numpy sec good numpy sec we make two interesting observations firstjust using numpy as data storage (naive numpytakes times longerwhich is surprising since we believe it must be much faster as it is written as extension one reason for this is that the access of individual elements from python itself is rather costly only when we are able to apply algorithms inside the optimized extension code do we get speed improvementsand tremendous ones at thatusing the dot(function of numpywe are more than times faster in summaryin every algorithm we are about to implementwe should always look at how we can move loops over individual elements from python to some of the highly optimized numpy or scipy extension functions
17,581
however, the speed comes at a price. using numpy arrays, we no longer have the incredible flexibility of python lists, which can hold basically anything. numpy arrays always have only one datatype:

>>> a = np.array([1, 2, 3])
>>> a.dtype
dtype('int64')

if we try to use elements of different types, numpy will do its best to coerce them to the most reasonable common datatype:

>>> np.array([1, "stringy"])
array(['1', 'stringy'], dtype='|S8')
>>> np.array([1, "stringy", set([1, 2, 3])])
array([1, stringy, set([1, 2, 3])], dtype=object)

learning scipy
on top of the efficient data structures of numpy, scipy offers a multitude of algorithms working on those arrays. whatever numerical-heavy algorithm you take from current books on numerical recipes, you will most likely find support for it in scipy in one way or another. whether it is matrix manipulation, linear algebra, optimization, clustering, spatial operations, or even fast fourier transformation, the toolbox is readily filled. therefore, it is a good habit to always inspect the scipy module before you start implementing a numerical algorithm.
for convenience, the complete namespace of numpy is also accessible via scipy. so, from now on, we will use numpy's machinery via the scipy namespace. you can check this easily by comparing the function references of any base function, for example:

>>> import scipy, numpy
>>> scipy.version.full_version
>>> scipy.dot is numpy.dot
True

the diverse algorithms are grouped into the following toolboxes:
- cluster: hierarchical clustering (cluster.hierarchy); vector quantization / k-means (cluster.vq)
- constants: physical and mathematical constants; conversion methods
- fftpack: discrete fourier transform algorithms
- integrate: integration routines
- interpolate: interpolation (linear, cubic, and so on)
- io: data input and output
- linalg: linear algebra routines using the optimized blas and lapack libraries
- maxentropy: functions for fitting maximum entropy models
- ndimage: n-dimensional image package
- odr: orthogonal distance regression
- optimize: optimization (finding minima and roots)
- signal: signal processing
- sparse: sparse matrices
- spatial: spatial data structures and algorithms
- special: special mathematical functions such as bessel or jacobian
- stats: statistics toolkit
the toolboxes most interesting to our endeavor are scipy.stats, scipy.interpolate, scipy.cluster, and scipy.signal. for the sake of brevity, we will briefly explore some features of the stats package here and leave the others to be explained when they show up in later chapters.
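as a small taste of scipy.stats (a sketch of our own, not an example from the book): fit a normal distribution to some noisy data and query the fitted distribution:

import numpy as np
import scipy.stats as stats

# generate some data that is roughly normal
data = np.random.normal(loc=5.0, scale=2.0, size=1000)

# estimate the distribution parameters from the data
loc, scale = stats.norm.fit(data)
print("estimated mean: %f, estimated std: %f" % (loc, scale))

# query the fitted distribution, for example its 95th percentile
print("95th percentile: %f" % stats.norm(loc, scale).ppf(0.95))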
our first (tiny) machine learning application
let us get our hands dirty and take a look at our hypothetical web start-up, mlaas, which sells the service of providing machine learning algorithms via http. with the increasing success of our company, the demand for better infrastructure also increases, to serve all incoming web requests successfully. we don't want to allocate too many resources, as that would be too costly. on the other hand, we will lose money if we have not reserved enough resources for serving all incoming requests. the question now is, when will we hit the limit of our current infrastructure, which we estimated to be 100,000 requests per hour? we would like to know in advance when we have to request additional servers in the cloud to serve all the incoming requests successfully without paying for unused ones.

reading in the data
we have collected the web stats for the last month and aggregated them in ch01/data/web_traffic.tsv (tsv because it contains tab-separated values). they are stored as the number of hits per hour. each line contains the hour and the number of web hits in that hour.
using scipy's genfromtxt(), we can easily read in the data:

import scipy as sp
data = sp.genfromtxt("web_traffic.tsv", delimiter="\t")

we have to specify tab as the delimiter so that the columns are correctly determined.
a quick check shows that we have correctly read in the data:

>>> print(data[:10])
[[  1.00000000e+00   2.27200000e+03]
 [  2.00000000e+00              nan]
 [  3.00000000e+00   1.38600000e+03]
 ...
>>> print(data.shape)
(743, 2)

we have 743 data points with two dimensions.

preprocessing and cleaning the data
it is more convenient for scipy to separate the dimensions into two vectors, each of size 743. the first vector, x, will contain the hours, and the other, y, will contain the web hits in that particular hour. this splitting is done using the special index notation of scipy, using which we can choose the columns individually:

x = data[:, 0]
y = data[:, 1]

there is much more to the way data can be selected from a scipy array. check out the scipy documentation for more details on indexing, slicing, and iterating.
one caveat is that we still have some values in y that contain invalid values, nan. the question is, what can we do with them? let us check how many hours contain invalid data:

>>> sp.sum(sp.isnan(y))
8
we are missing only 8 out of 743 entries, so we can afford to remove them. remember that we can index a scipy array with another array. sp.isnan(y) returns an array of booleans indicating whether an entry is not a number. using ~, we logically negate that array so that we choose only those elements from x and y where y does contain valid numbers:

x = x[~sp.isnan(y)]
y = y[~sp.isnan(y)]

to get a first impression of our data, let us plot the data in a scatter plot using matplotlib. matplotlib contains the pyplot package, which tries to mimic matlab's interface--a very convenient and easy-to-use one (you will find more tutorials on plotting on the matplotlib website):

import matplotlib.pyplot as plt
plt.scatter(x, y)
plt.title("web traffic over the last month")
plt.xlabel("time")
plt.ylabel("hits/hour")
plt.xticks([w * 7 * 24 for w in range(10)],
           ['week %i' % w for w in range(10)])
plt.autoscale(tight=True)
plt.grid()
plt.show()

in the resulting chart, we can see that while in the first weeks the traffic stayed more or less the same, the last week shows a steep increase.
choosing the right model and learning algorithm
now that we have a first impression of the data, we return to the initial question: how long will our server handle the incoming web traffic? to answer this, we have to:
1. find the real model behind the noisy data points.
2. use the model to extrapolate into the future to find the point in time where our infrastructure has to be extended.

before building our first model
when we talk about models, you can think of them as simplified theoretical approximations of the complex reality. as such, there is always some inferiority involved, also called the approximation error. this error will guide us in choosing the right model among the myriad of choices we have. this error will be calculated as the squared distance of the model's prediction to the real data; that is, for a learned model function f, the error is calculated as follows:

def error(f, x, y):
    return sp.sum((f(x) - y) ** 2)

the vectors x and y contain the web stats data that we have extracted before. it is the beauty of scipy's vectorized functions that we exploit here with f(x): the trained model is assumed to take a vector and return the results again as a vector of the same size, so that we can use it to calculate the difference to y.

starting with a simple straight line
let us assume for a second that the underlying model is a straight line. the challenge then is how to best put that line into the chart so that it results in the smallest approximation error. scipy's polyfit() function does exactly that. given data x and y and the desired order of the polynomial (a straight line has order 1), it finds the model function that minimizes the error function defined earlier:

fp1, residuals, rank, sv, rcond = sp.polyfit(x, y, 1, full=True)

the polyfit() function returns the parameters of the fitted model function, fp1; and by setting full to True, we also get additional background information on the fitting process. of it, only residuals are of interest, which is exactly the error of the approximation:

>>> print("model parameters: %s" % fp1)
model parameters: [   2.59619213  989.02487106]
>>> print(residuals)
[  3.17389767e+08]

this means that the best straight line fit is the function f(x) = fp1[0] * x + fp1[1]. we then use poly1d() to create a model function from the model parameters:

>>> f1 = sp.poly1d(fp1)
>>> print(error(f1, x, y))
317389767.34

we have used full=True to retrieve more details on the fitting process. normally, we would not need it, in which case only the model parameters would be returned.
in fact, what we do here is simple curve fitting. you can find out more about it on wikipedia by going to en.wikipedia.org/wiki/curve_fitting.
we can now use f1() to plot our first trained model. in addition to the earlier plotting instructions, we simply add the following:

fx = sp.linspace(0, x[-1], 1000)  # generate x-values for plotting
plt.plot(fx, f1(fx), linewidth=4)
plt.legend(["d=%i" % f1.order], loc="upper left")

the following graph shows our first trained model:
it seems like the first four weeks are not that far off, although we clearly see that there is something wrong with our initial assumption that the underlying model is a straight line. plus, how good or bad actually is the error of 317,389,767.34?
the absolute value of the error is seldom of use in isolation. however, when comparing two competing models, we can use their errors to judge which one of them is better. although our first model clearly is not the one we would use, it serves a very important purpose in the workflow: we will use it as our baseline until we find a better one. whatever model we come up with in the future, we will compare it against the current baseline.

towards some advanced stuff
let us now fit a more complex model, a polynomial of degree 2, to see whether it better "understands" our data:

>>> f2p = sp.polyfit(x, y, 2)
>>> print(f2p)
array([  1.05322215e-02,  -5.26545650e+00,   1.97476082e+03])
>>> f2 = sp.poly1d(f2p)
>>> print(error(f2, x, y))
179983507.878

the following chart shows the model we trained before (straight line of one degree) with our newly trained, more complex model with two degrees (dashed):
the error is 179,983,507.878, which is almost half the error of the straight-line model. this is good; however, it comes with a price: we now have a more complex function, meaning that we have one more parameter to tune inside polyfit(). the fitted polynomial is as follows:

f(x) = f2p[0] * x**2 + f2p[1] * x + f2p[2]

so, if more complexity gives better results, why not increase the complexity even more? let's try it for degrees 3, 10, and 100.
the more complex the data gets, the curves capture it and make it fit better. the errors seem to tell the same story:

error d=1:   317,389,767.34
error d=2:   179,983,507.88
error d=3:   139,350,144.03
error d=10:  121,942,326.36
error d=100: 109,318,004.48

however, taking a closer look at the fitted curves, we start to wonder whether they also capture the true process that generated this data. framed differently, do our models correctly represent the underlying mass behavior of customers visiting our website? looking at the polynomials of degree 10 and 100, we see wildly oscillating behavior. it seems that the models are fitted too much to the data--so much that they are now capturing not only the underlying process but also the noise. this is called overfitting.
at this point, we have the following choices:
- selecting one of the fitted polynomial models
- switching to another, more complex model class (splines?)
- thinking differently about the data, and starting again
of the five fitted models, the first-order model clearly is too simple, and the models of order 10 and 100 are clearly overfitting. only the second- and third-order models seem to somehow match the data. however, if we extrapolate them at both borders, we see them going berserk.
switching to a more complex class also seems to be the wrong way to go about it. what arguments would back which class? at this point, we realize that we probably have not completely understood our data.

stepping back to go forward -- another look at our data
so, we step back and take another look at the data. it seems that there is an inflection point between weeks 3 and 4. so let us separate the data and train two lines using week 3.5 as a separation point. we train the first line with the data up to week 3.5, and the second line with the remaining data:

inflection = int(3.5 * 7 * 24)  # calculate the inflection point in hours
xa = x[:inflection]  # data before the inflection point
ya = y[:inflection]
xb = x[inflection:]  # data after
yb = y[inflection:]

fa = sp.poly1d(sp.polyfit(xa, ya, 1))
fb = sp.poly1d(sp.polyfit(xb, yb, 1))

fa_error = error(fa, xa, ya)
fb_error = error(fb, xb, yb)
print("error inflection=%f" % (fa_error + fb_error))

plotting the two models for the two data ranges gives the following chart:
clearly, the combination of these two lines seems to be a much better fit to the data than anything we have modeled before. but still, the combined error is higher than that of the higher-order polynomials. can we trust the error in the end?
asked differently, why do we trust the straight line fitted only at the last week of our data more than any of the more complex models? it is because we assume that it will capture future data better. if we plot the models into the future, we see how right we are (d=1 is again our initial straight line).
the models of degree 10 and 100 don't seem to expect a bright future for our start-up. they tried so hard to model the given data correctly that they are clearly useless for extrapolating further. this is called overfitting. on the other hand, the lower-degree models do not seem to be capable of capturing the data properly. this is called underfitting.
so let us play fair to the models of degree 2 and above and try out how they behave if we fit them only to the data of the last week. after all, we believe that the last week says more about the future than the data before. the result can be seen in the following psychedelic chart, which shows even more clearly how bad the problem of overfitting is.
still, judging from the errors of the models when trained only on the data from week 3.5 and after, we should still choose the most complex one:

error d=1:   22,143,941.11
error d=2:   19,768,846.99
error d=3:   19,766,452.36
error d=10:  18,949,339.35
error d=100: 16,915,159.60

training and testing
if only we had some data from the future that we could use to measure our models against, we would be able to judge our model choice on the resulting approximation error alone.
although we cannot look into the future, we can and should simulate a similar effect by holding out a part of our data. let us remove, for instance, a certain percentage of the data and train on the remaining one. then we use the held-out data to calculate the error. as the model has been trained not knowing the hold-out data, we should get a more realistic picture of how the model will behave in the future (a code sketch of this split appears at the end of this section).
the test errors for the models trained only on the time after the inflection point now show a completely different picture:

error d=1:   7,917,335.83
error d=2:   6,993,880.35
error d=3:   7,137,471.18
error d=10:  8,805,551.19
error d=100: 10,877,646.62

the result can be seen in the following chart.
it seems we finally have a clear winner: the model with degree 2 has the lowest test error, which is the error when measured using data that the model did not see during training. and this is what lets us trust that we won't get bad surprises when future data arrives.
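the book's code for this hold-out split is not shown in this excerpt; a minimal sketch consistent with the variables defined earlier (xb and yb from the inflection split, and the error() helper) might look like the following. the 30 percent test fraction is our own assumption:

frac = 0.3  # hold out 30 percent of the data for testing
split_idx = int(frac * len(xb))
shuffled = sp.random.permutation(list(range(len(xb))))
test = sorted(shuffled[:split_idx])    # indices of the held-out data
train = sorted(shuffled[split_idx:])   # indices of the training data

# train candidate models only on the training part
fbt1 = sp.poly1d(sp.polyfit(xb[train], yb[train], 1))
fbt2 = sp.poly1d(sp.polyfit(xb[train], yb[train], 2))

# measure them on data they have never seen
for f in [fbt1, fbt2]:
    print("test error d=%i: %f" % (f.order, error(f, xb[test], yb[test])))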
answering our initial question
finally, we have arrived at a model that we think represents the underlying process best. it is now a simple task of finding out when our infrastructure will reach 100,000 requests per hour. we have to calculate when our model function reaches the value 100,000.
having a polynomial of degree 2, we could simply compute the inverse of the function and calculate its value at 100,000. of course, we would like to have an approach that is applicable to any model function easily.
this can be done by subtracting 100,000 from the polynomial, which results in another polynomial, and finding the root of it. scipy's optimize module has the fsolve function to achieve this when provided with an initial starting position. let fbt2 be the winning polynomial of degree 2:

>>> print(fbt2)
>>> print(fbt2 - 100000)
>>> from scipy.optimize import fsolve
>>> reached_max = fsolve(fbt2 - 100000, 800) / (7 * 24)
>>> print("100,000 hits/hour expected at week %f" % reached_max[0])
100,000 hits/hour expected at week 9.827613

our model tells us that, given the current user behavior and traction of our start-up, it will take another month until we have reached our threshold capacity.
of course, there is a certain uncertainty involved with our prediction. to get the real picture, you can draw in more sophisticated statistics to find out about the variance that we have to expect when looking farther and farther into the future.
and then there are the user and underlying user behavior dynamics that we cannot model accurately. however, at this point, we are fine with the current predictions. after all, we can prepare all the time-consuming actions now. if we then monitor our web traffic closely, we will see in time when we have to allocate new resources.
summary
congratulations! you just learned two important things, of which the most important one is that, as a typical machine learning operator, you will spend most of your time understanding and refining the data--exactly what we just did in our first tiny machine learning example. and we hope that the example helped you to start switching your mental focus from algorithms to data. later, you learned how important it is to have the correct experiment setup, and that it is vital to not mix up training and testing.
admittedly, the use of polynomial fitting is not the coolest thing in the machine learning world. we have chosen it so as not to distract you with the coolness of some shiny algorithm, which encompasses the two most important points we just summarized above.
so, let's move on to the next chapter, in which we will dive deep into scikit-learn, the marvelous machine learning toolkit, give an overview of different types of learning, and show you the beauty of feature engineering.
learning how to classify with real-world examples
can a machine distinguish between flower species based on images? from a machine learning perspective, we approach this problem by having the machine learn how to perform this task based on examples of each species, so that it can classify images where the species are not marked. this process is called classification (or supervised learning), and it is a classic problem that goes back a few decades.
we will explore small datasets using a few simple algorithms that we can implement manually. the goal is to be able to understand the basic principles of classification. this will be a solid foundation for understanding later chapters as we introduce more complex methods that will, by necessity, rely on code written by others.

the iris dataset
the iris dataset is a classic dataset from the 1930s; it is one of the first modern examples of statistical classification.
the setting is that of iris flowers, of which there are multiple species that can be identified by their morphology. today, the species would be defined by their genomic signatures, but in the 1930s, dna had not even been identified as the carrier of genetic information.
the following four attributes of each plant were measured:
- sepal length
- sepal width
- petal length
- petal width
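as a quick check of the dataset described above, we can peek at the copy bundled with scikit-learn (a small sketch of our own, not from the book):

from sklearn.datasets import load_iris

data = load_iris()
print(data['feature_names'])  # the four measurements listed above
print(data['data'].shape)     # (150, 4): 150 plants, 4 features each
print(data['target_names'])   # the three species: setosa, versicolor, virginica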
in general, we will call any measurement from our data a feature.
additionally, for each plant, the species was recorded. the question now is: if we saw a new flower out in the field, could we make a good prediction about its species from its measurements?
this is the supervised learning or classification problem: given labeled examples, we can design a rule that will eventually be applied to other examples. this is the same setting that is used for spam classification: given the examples of spam and ham (non-spam e-mails) that the user gave the system, can we determine whether a new, incoming message is spam or not?
for the moment, the iris dataset serves our purposes well. it is small (150 examples, 4 features each) and can easily be visualized and manipulated.

the first step is visualization
because this dataset is so small, we can easily plot all of the points and all two-dimensional projections on a page. we will thus build intuitions that can then be extended to datasets with many more dimensions and datapoints. each subplot in the following screenshot shows all the points projected into two of the dimensions. the outlying group (triangles) are the iris setosa plants, while iris versicolor plants are in the center (circles) and iris virginica are indicated with "x" marks.
we can see that there are two large groups: one is of iris setosa and another is a mixture of iris versicolor and iris virginica.
we are using matplotlib; it is the most well-known plotting package for python. we present the code to generate the top-left plot. the code for the other plots is similar to the following code:

from matplotlib import pyplot as plt
from sklearn.datasets import load_iris
import numpy as np

# we load the data with load_iris from sklearn
data = load_iris()
features = data['data']
feature_names = data['feature_names']
target = data['target']
labels = data['target_names'][target]  # the species name for each example

for t, marker, c in zip(xrange(3), ">ox", "rgb"):
    # we plot each class on its own to get different colored markers
    plt.scatter(features[target == t, 0],
                features[target == t, 1],
                marker=marker,
                c=c)

building our first classification model
if the goal is to separate the three types of flowers, we can immediately make a few suggestions. for example, the petal length seems to be able to separate iris setosa from the other two flower species on its own. we can write a little bit of code to discover where the cutoff is, as follows:

plength = features[:, 2]
# use numpy operations to get setosa features
is_setosa = (labels == 'setosa')
# this is the important step:
max_setosa = plength[is_setosa].max()
min_non_setosa = plength[~is_setosa].min()
print('maximum of setosa: {0}.'.format(max_setosa))
print('minimum of others: {0}.'.format(min_non_setosa))

this prints 1.9 and 3.0. therefore, we can build a simple model: if the petal length is smaller than two, this is an iris setosa flower; otherwise, it is either iris virginica or iris versicolor:

if features[:, 2] < 2:
    print 'iris setosa'
else:
    print 'iris virginica or iris versicolour'
this is our first model, and it works very well in that it separates the iris setosa flowers from the other two species without making any mistakes.
what we had here was a simple structure: a simple threshold on one of the dimensions. then we searched for the best dimension threshold. we performed this visually and with some calculation; machine learning happens when we write code to perform this for us.
the example where we distinguished iris setosa from the other two species was very easy. however, we cannot immediately see what the best threshold is for distinguishing iris virginica from iris versicolor. we can even see that we will never achieve perfect separation. we can, however, try to do it the best possible way. for this, we will perform a little computation.
we first select only the non-setosa features and labels:

features = features[~is_setosa]
labels = labels[~is_setosa]
virginica = (labels == 'virginica')

here we are heavily using numpy operations on the arrays. is_setosa is a boolean array, and we use it to select a subset of the other two arrays, features and labels. finally, we build a new boolean array, virginica, using an equality comparison on labels.
now, we run a loop over all possible features and thresholds to see which one results in a better accuracy. accuracy is simply the fraction of examples that the model classifies correctly:

best_acc = -1.0
for fi in xrange(features.shape[1]):
    # we are going to generate all possible thresholds for this feature
    thresh = features[:, fi].copy()
    thresh.sort()
    # now test all thresholds:
    for t in thresh:
        pred = (features[:, fi] > t)
        acc = (pred == virginica).mean()
        if acc > best_acc:
            best_acc = acc
            best_fi = fi
            best_t = t
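once the loop above has found best_fi and best_t, applying the learned model to a new example is a one-liner. the helper below is our own sketch, not code from this excerpt:

def is_virginica(example):
    # example is a vector with the four measurements of a single flower
    return example[best_fi] > best_t

# for instance, classify the first non-setosa flower we kept above
print(is_virginica(features[0]))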