From the above output, we can see that the first data instance is a malignant tumor, the mean radius of which is 1.799e+01.

Step 4: Organizing data into sets

In this step, we will divide our data into two parts, namely a training set and a test set. Splitting the data into these sets is very important because we have to test our model on unseen data. To split the data into sets, sklearn has a function called train_test_split(). With the help of the following commands, we can split the data into these sets:

from sklearn.model_selection import train_test_split

The above command will import the train_test_split function from sklearn, and the command below will split the data into training and test data. In the example given below, we are using 40% of the data for testing and the remaining data would be used for training the model.

train, test, train_labels, test_labels = train_test_split(features, labels, test_size=0.40, random_state=42)

Step 5: Building the model

In this step, we will be building our model. We are going to use the Naive Bayes algorithm for building the model. The following commands can be used to build the model:

from sklearn.naive_bayes import GaussianNB

The above command will import the GaussianNB module. Now, the following command will help you initialize the model:

gnb = GaussianNB()

We will train the model by fitting it to the data by using gnb.fit():

model = gnb.fit(train, train_labels)

Step 6: Evaluating the model and its accuracy

In this step, we are going to evaluate the model by making predictions on our test data. Then we will find out its accuracy also. For making predictions, we will use the predict() function. The following commands will help you do this:

preds = gnb.predict(test)
print(preds)
The above series of 0s and 1s are the predicted values for the tumor classes, malignant and benign.

Now, by comparing the two arrays, namely test_labels and preds, we can find out the accuracy of our model. We are going to use the accuracy_score() function to determine the accuracy. Consider the following command for this:

from sklearn.metrics import accuracy_score
print(accuracy_score(test_labels, preds))

The result shows that the Naive Bayes classifier is about 95% accurate.

In this way, with the help of the above steps, we can build our classifier in Python.

Building a Classifier in Python

In this section, we will learn how to build a classifier in Python.

Naive Bayes Classifier

Naive Bayes is a classification technique used to build a classifier using the Bayes theorem. The assumption is that the predictors are independent. In simple words, it assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For building a Naive Bayes classifier we need to use the Python library called scikit-learn. There are three types of Naive Bayes models, named Gaussian, Multinomial and Bernoulli, under the scikit-learn package.

To build a Naive Bayes machine learning classifier model, we need the following:

Dataset

We are going to use the dataset named Breast Cancer Wisconsin Diagnostic Database. The dataset includes various information about breast cancer tumors, as well as classification labels of malignant or benign. The dataset has 569 instances, or data, on 569 tumors and includes information on 30 attributes, or features, such as the radius of the tumor, texture, smoothness and area. We can import this dataset from the sklearn package.

Naive Bayes Model

For building a Naive Bayes classifier, we need a Naive Bayes model. As told earlier, there are three types of Naive Bayes models named Gaussian, Multinomial and Bernoulli under the scikit-learn package. Here, in the following example, we are going to use the Gaussian Naive Bayes model.
By using the above, we are going to build a Naive Bayes machine learning model to use the tumor information to predict whether or not a tumor is malignant or benign.

To begin with, we need to install the sklearn module. It can be done with the help of the following command:

import sklearn

Now, we need to import the dataset named Breast Cancer Wisconsin Diagnostic Database:

from sklearn.datasets import load_breast_cancer

Now, the following command will load the dataset:

data = load_breast_cancer()

The data can be organized as follows:

label_names = data['target_names']
labels = data['target']
feature_names = data['feature_names']
features = data['data']

Now, to make it clearer, we can print the class labels, the first data instance's label, our feature names and the feature's values with the help of the following commands:

print(label_names)

The above command will print the class names, which are malignant and benign respectively. It is shown as the output below:

['malignant' 'benign']

Now, the command given below will show that they are mapped to the binary values 0 and 1. Here 0 represents malignant cancer and 1 represents benign cancer. It is shown as the output below:

print(labels[0])
0

The following two commands will produce the feature names and feature values.

print(feature_names[0])
mean radius

print(features[0])

(The output is the array of 30 feature values for the first data instance, printed in scientific notation.)
From the above output, we can see that the first data instance is a malignant tumor, the mean radius of which is 1.799e+01.

For testing our model on unseen data, we need to split our data into training and testing data. It can be done with the help of the following code:

from sklearn.model_selection import train_test_split

The above command will import the train_test_split function from sklearn, and the command below will split the data into training and test data. In the below example, we are using 40% of the data for testing and the remaining data would be used for training the model.

train, test, train_labels, test_labels = train_test_split(features, labels, test_size=0.40, random_state=42)

Now, we are building the model with the following commands:

from sklearn.naive_bayes import GaussianNB

The above command will import the GaussianNB module. Now, with the command given below, we need to initialize the model.

gnb = GaussianNB()

We will train the model by fitting it to the data by using gnb.fit():

model = gnb.fit(train, train_labels)

Now, evaluate the model by making predictions on the test data, which can be done as follows:

preds = gnb.predict(test)
print(preds)
The above series of 0s and 1s are the predicted values for the tumor classes, malignant and benign.

Now, by comparing the two arrays, namely test_labels and preds, we can find out the accuracy of our model. We are going to use the accuracy_score() function to determine the accuracy. Consider the following command:

from sklearn.metrics import accuracy_score
print(accuracy_score(test_labels, preds))

The result shows that the Naive Bayes classifier is about 95% accurate. That was a machine learning classifier based on the Naive Bayes Gaussian model.

Support Vector Machines (SVM)

Basically, a Support Vector Machine (SVM) is a supervised machine learning algorithm that can be used for both regression and classification. The main concept of SVM is to plot each data item as a point in n-dimensional space, with the value of each feature being the value of a particular coordinate. Here, n would be the number of features we have. Following is a simple graphical representation to understand the concept of SVM.

In the above diagram, we have two features. Hence, we first need to plot these two variables in a two-dimensional space where each point has two coordinates, called support vectors. The line splits the data into two different classified groups. This line would be the classifier.

Here, we are going to build an SVM classifier by using scikit-learn and the iris dataset. The scikit-learn library has the sklearn.svm module and provides sklearn.svm.SVC for classification. The SVM classifier to predict the class of the iris plant based on its features is shown below.
Dataset

We will use the iris dataset, which contains 3 classes of 50 instances each, where each class refers to a type of iris plant. Each instance has four features, namely sepal length, sepal width, petal length and petal width. The SVM classifier to predict the class of the iris plant based on these features is shown below.

Kernel

It is a technique used by SVM. Basically, these are the functions which take a low-dimensional input space and transform it to a higher-dimensional space. A kernel converts a non-separable problem to a separable problem. The kernel function can be any one among linear, polynomial, rbf and sigmoid. In this example, we will use the linear kernel.

Let us now import the following packages:

import pandas as pd
import numpy as np
from sklearn import svm, datasets
import matplotlib.pyplot as plt

Now, load the input data:

iris = datasets.load_iris()

We are taking the first two features:

X = iris.data[:, :2]
y = iris.target

We will plot the support vector machine boundaries with the original data. We are creating a mesh to plot.

x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
h = (x_max - x_min) / 100
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
X_plot = np.c_[xx.ravel(), yy.ravel()]

We need to give the value of the regularization parameter.
We need to create the SVM classifier object.

C = 1.0
svc_classifier = svm.SVC(kernel='linear', C=C, decision_function_shape='ovr').fit(X, y)
Z = svc_classifier.predict(X_plot)
Z = Z.reshape(xx.shape)
plt.figure(figsize=(15, 5))
plt.subplot(121)
plt.contourf(xx, yy, Z, cmap=plt.cm.tab10, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Set1)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with linear kernel')

Logistic Regression

Basically, the logistic regression model is one of the members of the supervised classification algorithm family. Logistic regression measures the relationship between dependent variables and independent variables by estimating the probabilities using a logistic function.

Here, if we talk about dependent and independent variables, then the dependent variable is the target class variable we are going to predict, and on the other side the independent variables are the features we are going to use to predict the target class.

In logistic regression, estimating the probabilities means predicting the likelihood of occurrence of the event. For example, a shop owner would like to predict whether a customer who entered the shop will buy the play station (for example) or not. There would be many features of the customer, such as gender, age, etc., which would be observed by the shop keeper to predict the likelihood of occurrence, i.e., buying the play station or not.
The logistic function is the sigmoid curve that is used to build the function with various parameters.

Prerequisites

Before building the classifier using logistic regression, we need to install the Tkinter package on our system. It comes bundled with most Python installations, or it can be installed through your system's package manager.

Now, with the help of the code given below, we can create a classifier using logistic regression.

First, we will import some packages:

import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt

Now, we need to define the sample data, which can be done as follows:

X = np.array([[...], [...], ...])   # two-dimensional sample points
y = np.array([...])                 # the class label of each sample point

Next, we need to create the logistic regression classifier, which can be done as follows:

classifier_lr = linear_model.LogisticRegression(solver='liblinear', C=75)

Last but not the least, we need to train this classifier:

classifier_lr.fit(X, y)

Now, how can we visualize the output? It can be done by creating a function named logistic_visualize():

def logistic_visualize(classifier_lr, X, y):
    min_x, max_x = X[:, 0].min() - 1.0, X[:, 0].max() + 1.0
    min_y, max_y = X[:, 1].min() - 1.0, X[:, 1].max() + 1.0

In the above lines, we defined the minimum and maximum values of X and Y to be used in the mesh grid. In addition, we will define the step size for plotting the mesh grid.

    mesh_step_size = 0.02

Let us define the mesh grid of X and Y values as follows:

    x_vals, y_vals = np.meshgrid(np.arange(min_x, max_x, mesh_step_size), np.arange(min_y, max_y, mesh_step_size))
With the help of the following code, we can run the classifier on the mesh grid:

    output = classifier_lr.predict(np.c_[x_vals.ravel(), y_vals.ravel()])
    output = output.reshape(x_vals.shape)
    plt.figure()
    plt.pcolormesh(x_vals, y_vals, output, cmap=plt.cm.gray)
    plt.scatter(X[:, 0], X[:, 1], c=y, s=75, edgecolors='black', linewidth=1, cmap=plt.cm.Paired)

The following lines of code will specify the boundaries of the plot:

    plt.xlim(x_vals.min(), x_vals.max())
    plt.ylim(y_vals.min(), y_vals.max())
    plt.xticks((np.arange(int(X[:, 0].min() - 1), int(X[:, 0].max() + 1), 1.0)))
    plt.yticks((np.arange(int(X[:, 1].min() - 1), int(X[:, 1].max() + 1), 1.0)))
    plt.show()

Now, after running the code, we will get the following output: a plot of the logistic regression classifier and its decision regions.
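Since the numeric sample points could not be recovered above, the following is a minimal, self-contained sketch of the same logistic regression workflow; the data points, labels and the C value are made-up illustrative values, not the original tutorial's.

import numpy as np
from sklearn import linear_model

# made-up 2-D sample points and their class labels
X = np.array([[2.0, 4.8], [2.9, 4.7], [2.5, 5.0], [3.2, 5.5],
              [6.0, 5.0], [7.6, 4.0], [3.2, 0.9], [2.9, 1.9]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# create and train the logistic regression classifier
classifier_lr = linear_model.LogisticRegression(solver='liblinear', C=1.0)
classifier_lr.fit(X, y)

# predict the class and the class probabilities of a new point
print(classifier_lr.predict([[3.0, 5.0]]))
print(classifier_lr.predict_proba([[3.0, 5.0]]))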
Decision Tree Classifier

A decision tree is basically a binary tree flowchart where each node splits a group of observations according to some feature variable. Here, we are building a decision tree classifier for predicting male or female. We will take a very small data set having 19 samples. These samples would consist of two features, 'height' and 'length of hair'.

Prerequisite

For building the following classifier, we need to install pydotplus and graphviz. Basically, graphviz is a tool for drawing graphics using dot files and pydotplus is a module to Graphviz's Dot language. It can be installed with the package manager or pip.

Now, we can build the decision tree classifier with the help of the following Python code.

To begin with, let us import some important libraries as follows (in recent versions of scikit-learn, train_test_split lives in sklearn.model_selection rather than in the removed cross_validation module):

import pydotplus
from sklearn import tree
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import collections

Now, we need to provide the dataset as follows:

X = [[...], [...], ...]   # 19 samples, each of the form [height, length of hair]
Y = ['man', 'woman', 'woman', 'man', 'woman', 'man', 'woman', 'man', 'woman', 'man',
     'woman', 'man', 'woman', 'woman', 'woman', 'man', 'woman', 'woman', 'man']
data_feature_names = ['height', 'length of hair']
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.40, random_state=5)

After providing the dataset, we need to fit the model, which can be done as follows:

clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)

A prediction can be made with the help of the following Python code:

prediction = clf.predict([[133, 37]])
print(prediction)
We can visualize the decision tree with the help of the following Python code:

dot_data = tree.export_graphviz(clf, feature_names=data_feature_names, out_file=None, filled=True, rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data)
colors = ('orange', 'yellow')
edges = collections.defaultdict(list)

for edge in graph.get_edge_list():
    edges[edge.get_source()].append(int(edge.get_destination()))

for edge in edges:
    edges[edge].sort()
    for i in range(2):
        dest = graph.get_node(str(edges[edge][i]))[0]
        dest.set_fillcolor(colors[i])

graph.write_png('Decisiontree.png')

It will give the prediction for the above code as ['woman'] and create a decision tree, which is rendered in the figure that follows.
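Because the height and hair-length values of the dataset above were lost, the following is a minimal, self-contained sketch of the same kind of decision tree classifier; the sample values and the split parameters are made up for illustration.

from sklearn import tree
from sklearn.model_selection import train_test_split

# made-up samples: [height in cm, hair length in cm] and the corresponding labels
X = [[165, 19], [175, 32], [136, 35], [174, 65], [141, 28],
     [176, 15], [131, 32], [166, 6], [128, 32], [179, 10]]
Y = ['man', 'woman', 'woman', 'woman', 'woman',
     'man', 'woman', 'man', 'woman', 'man']

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.4, random_state=5)

clf = tree.DecisionTreeClassifier()
clf.fit(X_train, Y_train)

print(clf.predict([[133, 37]]))     # predicted label for a new sample
print(clf.score(X_test, Y_test))    # accuracy on the held-out samples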
We can change the values of the features in the prediction to test it.

Random Forest Classifier

As we know, ensemble methods are methods which combine machine learning models into a more powerful machine learning model. Random forest, a collection of decision trees, is one of them. It is better than a single decision tree because, while retaining the predictive powers, it can reduce over-fitting by averaging the results. Here, we are going to implement the random forest model on the scikit-learn cancer dataset.

Import the necessary packages:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer
import matplotlib.pyplot as plt
import numpy as np

Now, we need to provide the dataset, which can be done as follows:

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)

After providing the dataset, we need to fit the model, which can be done as follows:

forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_train, y_train)

Now, get the accuracy on the training as well as the testing subset; if we increase the number of estimators, the accuracy of the testing subset would also increase.

print('Accuracy on the training subset: {:.3f}'.format(forest.score(X_train, y_train)))
print('Accuracy on the test subset: {:.3f}'.format(forest.score(X_test, y_test)))

Output

The output reports the accuracy on the training subset (close to 1.0) followed by the somewhat lower accuracy on the test subset.

Now, like the decision tree, the random forest has the feature_importances_ attribute, which will give a better view of feature weights than the decision tree. It can be plotted and visualized as follows:
n_features = cancer.data.shape[1]
plt.barh(range(n_features), forest.feature_importances_, align='center')
plt.yticks(np.arange(n_features), cancer.feature_names)
plt.xlabel('Feature Importance')
plt.ylabel('Feature')
plt.show()

Performance of a Classifier

After implementing a machine learning algorithm, we need to find out how effective the model is. The criteria for measuring the effectiveness may be based upon datasets and metrics. For evaluating different machine learning algorithms, we can use different performance metrics. For example, suppose a classifier is used to distinguish between images of different objects; we can use classification performance metrics such as average accuracy, AUC, etc. In one or other sense, the metric we choose to evaluate our machine learning model is very important, because the choice of metrics influences how the performance of a machine learning algorithm is measured and compared. Following are some of the metrics.

Confusion Matrix

Basically, it is used for classification problems where the output can be of two or more types of classes. It is the easiest way to measure the performance of a classifier. A confusion matrix is basically a table with two dimensions, namely "Actual" and "Predicted". Both dimensions have "True Positives (TP)", "True Negatives (TN)", "False Positives (FP)" and "False Negatives (FN)".
                      Actual: 1               Actual: 0
Predicted: 1      True Positives (TP)     False Positives (FP)
Predicted: 0      False Negatives (FN)    True Negatives (TN)

In the confusion matrix above, 1 is for the positive class and 0 is for the negative class. Following are the terms associated with the confusion matrix:

True Positives: TPs are the cases when the actual class of the data point was 1 and the predicted is also 1.

True Negatives: TNs are the cases when the actual class of the data point was 0 and the predicted is also 0.

False Positives: FPs are the cases when the actual class of the data point was 0 and the predicted is 1.

False Negatives: FNs are the cases when the actual class of the data point was 1 and the predicted is 0.

Accuracy

The confusion matrix itself is not a performance measure as such, but almost all the performance metrics are based on the confusion matrix. One of them is accuracy. In classification problems, it may be defined as the number of correct predictions made by the model over all kinds of predictions made. The formula for calculating the accuracy is as follows:

Accuracy = (TP + TN) / (TP + FP + FN + TN)

Precision

It is mostly used in document retrieval. It may be defined as how many of the returned documents are correct. Following is the formula for calculating the precision:

Precision = TP / (TP + FP)

Recall or Sensitivity

It may be defined as how many of the positives the model returns. Following is the formula for calculating the recall/sensitivity of the model:

Recall = TP / (TP + FN)
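As a minimal sketch of how these quantities can be computed in practice with scikit-learn (the label arrays below are made-up illustrative values):

from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

# hypothetical actual and predicted labels (1 = positive class, 0 = negative class)
actual = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# confusion_matrix returns [[TN, FP], [FN, TP]] for the labels 0 and 1
tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print("TP =", tp, " TN =", tn, " FP =", fp, " FN =", fn)

print("Accuracy =", accuracy_score(actual, predicted))    # (TP + TN) / (TP + FP + FN + TN)
print("Precision =", precision_score(actual, predicted))  # TP / (TP + FP)
print("Recall =", recall_score(actual, predicted))        # TP / (TP + FN)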
Specificity

It may be defined as how many of the negatives the model returns. It is exactly opposite to recall. Following is the formula for calculating the specificity of the model:

Specificity = TN / (TN + FP)

Class Imbalance Problem

Class imbalance is the scenario where the number of observations belonging to one class is significantly lower than those belonging to the other classes. For example, this problem is prominent in scenarios where we need to identify rare diseases, fraudulent transactions in a bank, etc.

Example of imbalanced classes

Let us consider a fraud detection data set to understand the concept of an imbalanced class. In such a data set, the fraudulent observations are only a small fraction of the total observations, and the rest are non-fraudulent; the event rate (the share of fraudulent observations) is therefore very low.

Solution

Balancing the classes acts as a solution to imbalanced classes. The main objective of balancing the classes is to either increase the frequency of the minority class or decrease the frequency of the majority class. Following are the approaches to solve the issue of imbalanced classes:

Re-sampling

Re-sampling is a series of methods used to reconstruct the sample data sets, both training sets and testing sets. Re-sampling is done to improve the accuracy of the model. Following are some re-sampling techniques:

Random Under-Sampling: This technique aims to balance the class distribution by randomly eliminating majority class examples. This is done until the majority and minority class instances are balanced out.
In this case, we take a sample without replacement from the non-fraud instances and then combine it with the fraud instances. After combining them, the new data set is much smaller and its event rate is correspondingly much higher than in the original data.

The main advantage of this technique is that it can reduce run time and improve storage. But on the other side, it can discard useful information while reducing the number of training data samples.

Random Over-Sampling: This technique aims to balance the class distribution by increasing the number of instances in the minority class by replicating them. If we replicate the fraudulent observations several times, the number of fraudulent observations in the new data set grows accordingly, and so does the event rate of the new data set.

The main advantage of this method is that there would be no loss of useful information. But on the other hand, it has increased chances of over-fitting because it replicates the minority class events. (A short code sketch of both re-sampling approaches is given at the end of this section.)

Ensemble Techniques

This methodology is basically used to modify existing classification algorithms to make them appropriate for imbalanced data sets. In this approach, we construct several two-stage classifiers from the original data and then aggregate their predictions. A random forest classifier is an example of an ensemble-based classifier.

(The accompanying figure in the original shows the ensemble-based methodology: fresh samples of the data are drawn, classifiers C1 ... Cn are trained on them, and their votes are totalled.)
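Returning to the re-sampling techniques described above, here is a minimal sketch of random under-sampling and over-sampling, assuming scikit-learn's resample utility; the class sizes below are made-up illustrative values:

import numpy as np
from sklearn.utils import resample

# hypothetical imbalanced data: many non-fraud (0) rows and only a few fraud (1) rows
X = np.arange(1000).reshape(-1, 1)
y = np.array([1] * 20 + [0] * 980)
fraud, non_fraud = X[y == 1], X[y == 0]

# random under-sampling: shrink the majority class down to the size of the minority class
non_fraud_down = resample(non_fraud, replace=False, n_samples=len(fraud), random_state=0)

# random over-sampling: replicate the minority class up to the size of the majority class
fraud_up = resample(fraud, replace=True, n_samples=len(non_fraud), random_state=0)

print(len(non_fraud_down), len(fraud))   # balanced after under-sampling
print(len(fraud_up), len(non_fraud))     # balanced after over-sampling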
AI with Python – Supervised Learning: Regression

Regression is one of the most important statistical and machine learning tools. We would not be wrong to say that the journey of machine learning starts from regression. It may be defined as the parametric technique that allows us to make decisions based upon data, or in other words, allows us to make predictions based upon data by learning the relationship between input and output variables. Here, the output variables, dependent on the input variables, are continuous-valued real numbers. In regression, the relationship between input and output variables matters, and it helps us in understanding how the value of the output variable changes with a change of the input variable. Regression is frequently used for prediction of prices, economics, variations, and so on.

Building Regressors in Python

In this section, we will learn how to build single as well as multivariable regressors.

Linear Regressor / Single Variable Regressor

Let us import a few required packages:

import numpy as np
from sklearn import linear_model
import sklearn.metrics as sm
import matplotlib.pyplot as plt

Now, we need to provide the input data, and we have saved our data in the file named linear.txt.

input = 'D:/ProgramData/linear.txt'

We need to load this data by using the np.loadtxt function.

input_data = np.loadtxt(input, delimiter=',')
X, y = input_data[:, :-1], input_data[:, -1]

The next step would be to train the model. Let us give training and testing samples.

training_samples = int(0.6 * len(X))
testing_samples = len(X) - training_samples

X_train, y_train = X[:training_samples], y[:training_samples]
X_test, y_test = X[training_samples:], y[training_samples:]
Now, we need to create a linear regressor object.

reg_linear = linear_model.LinearRegression()

Train the object with the training samples.

reg_linear.fit(X_train, y_train)

We need to do the prediction with the testing data.

y_test_pred = reg_linear.predict(X_test)

Now plot and visualize the data.

plt.scatter(X_test, y_test, color='red')
plt.plot(X_test, y_test_pred, color='black', linewidth=2)
plt.xticks(())
plt.yticks(())
plt.show()

Output

The output is a scatter plot of the test data with the fitted regression line drawn through it.

Now, we can compute the performance of our linear regression as follows:

print("Performance of Linear regressor:")
print("Mean absolute error =", round(sm.mean_absolute_error(y_test, y_test_pred), 2))
print("Mean squared error =", round(sm.mean_squared_error(y_test, y_test_pred), 2))
print("Median absolute error =", round(sm.median_absolute_error(y_test, y_test_pred), 2))
18,618 | print("explain variance score ="round(sm explained_variance_score(y_testy_test_pred) )print(" score ="round(sm _score(y_testy_test_pred) )output performance of linear regressormean absolute error mean squared error median absolute error explain variance score - score - in the above codewe have used this small data if you want some big dataset then you can use sklearn dataset to import bigger dataset ,,, , , , ,,,, , ,, ,,,,, multivariable regressor firstlet us import few required packagesimport numpy as np from sklearn import linear_model import sklearn metrics as sm import matplotlib pyplot as plt from sklearn preprocessing import polynomialfeatures nowwe need to provide the input data and we have saved our data in the file named linear txt input ' :/programdata/mul_linear txtwe will load this data by using the np loadtxt function input_data np loadtxt(inputdelimiter=','xy input_data[::- ]input_data[:- the next step would be to train the modelwe will give training and testing samples training_samples int( len( )testing_samples len(xnum_training |
X_train, y_train = X[:training_samples], y[:training_samples]
X_test, y_test = X[training_samples:], y[training_samples:]

Now, we need to create a linear regressor object.

reg_linear_mul = linear_model.LinearRegression()

Train the object with the training samples.

reg_linear_mul.fit(X_train, y_train)

Now, at last we need to do the prediction with the testing data.

y_test_pred = reg_linear_mul.predict(X_test)
print("Performance of Linear regressor:")
print("Mean absolute error =", round(sm.mean_absolute_error(y_test, y_test_pred), 2))
print("Mean squared error =", round(sm.mean_squared_error(y_test, y_test_pred), 2))
print("Median absolute error =", round(sm.median_absolute_error(y_test, y_test_pred), 2))
print("Explain variance score =", round(sm.explained_variance_score(y_test, y_test_pred), 2))
print("R2 score =", round(sm.r2_score(y_test, y_test_pred), 2))

Output

Performance of Linear regressor:
The same set of error metrics is printed for the multivariable test data (the exact values depend on the contents of Mul_linear.txt).

Now, we will create a polynomial of degree 10 and train the regressor. We will also provide a sample data point.

polynomial = PolynomialFeatures(degree=10)
X_train_transformed = polynomial.fit_transform(X_train)
datapoint = [[...]]   # a sample data point with the same number of features as X
poly_datapoint = polynomial.fit_transform(datapoint)

poly_linear_model = linear_model.LinearRegression()
poly_linear_model.fit(X_train_transformed, y_train)
print("\nLinear regression:\n", reg_linear_mul.predict(datapoint))
print("\nPolynomial regression:\n", poly_linear_model.predict(poly_datapoint))

Output

The output shows the prediction of the linear regression followed by the prediction of the polynomial regression for the sample data point.

In the above code, we have used a small data file. If you want a big dataset, then you can use sklearn.datasets to import a bigger dataset.
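As a minimal sketch of the suggestion above, a built-in regression dataset from sklearn.datasets (here the diabetes dataset) can be used in place of the small text files; the split ratio and random_state are arbitrary illustrative choices:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# load a built-in regression dataset instead of a small text file
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

reg = LinearRegression().fit(X_train, y_train)
y_test_pred = reg.predict(X_test)
print("R2 score =", round(r2_score(y_test, y_test_pred), 2))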
AI with Python – Logic Programming

In this chapter, we will focus on logic programming and how it helps in artificial intelligence.

We already know that logic is the study of the principles of correct reasoning, or in simple words, it is the study of what comes after what. For example, if two statements are true, then we can infer a third statement from them.

Concept

Logic programming is the combination of two words, logic and programming. Logic programming is a programming paradigm in which problems are expressed as facts and rules by program statements, but within a system of formal logic. Just like other programming paradigms, such as object-oriented, functional, declarative and procedural, it is also a particular way to approach programming.

How to Solve Problems with Logic Programming

Logic programming uses facts and rules for solving a problem. That is why they are called the building blocks of logic programming. A goal needs to be specified for every program in logic programming. To understand how a problem can be solved in logic programming, we need to know about the building blocks, facts and rules:

Facts

Actually, every logic program needs facts to work with so that it can achieve the given goal. Facts basically are true statements about the program and data. For example, Delhi is the capital of India.

Rules

Actually, rules are the constraints which allow us to make conclusions about the problem domain. Rules are basically written as logical clauses to express various facts. For example, if we are building any game, then all the rules must be defined.

Rules are very important to solve any problem in logic programming. Rules are basically logical conclusions which can express the facts. Following is the syntax of a rule:

A :- B1, B2, ..., Bn.

Here, A is the head and B1, B2, ..., Bn is the body. For example:

ancestor(X, Y) :- father(X, Y).
ancestor(X, Z) :- father(X, Y), ancestor(Y, Z).

This can be read as: for every X and Y, if X is the father of Y, then X is an ancestor of Y; and for every X, Y and Z, X is an ancestor of Z if X is the father of Y and Y is an ancestor of Z.
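As a minimal sketch of how facts and a rule look in Python, the following uses the kanren library that is installed in the next section; the family facts are made-up illustrative data, and father/grandfather are hypothetical relation names chosen for the example:

from kanren import Relation, facts, run, var, conde

# facts: father(X, Y) means X is the father of Y (made-up family data)
father = Relation()
facts(father, ('Abe', 'Homer'), ('Homer', 'Bart'), ('Homer', 'Lisa'))

# rule: X is a grandfather of Z if X is the father of some Y and Y is the father of Z
def grandfather(x, z):
    y = var()
    return conde((father(x, y), father(y, z)))

x = var()
print(run(0, x, father(x, 'Bart')))       # -> ('Homer',)
print(run(0, x, grandfather(x, 'Bart')))  # -> ('Abe',)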
Installing Useful Packages

For starting logic programming in Python, we need to install the following two packages:

Kanren

It provides us a way to simplify the way we write code for business logic. It lets us express the logic in terms of rules and facts. The following command will help you install kanren:

pip install kanren

SymPy

SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. The following command will help you install SymPy:

pip install sympy

Examples of Logic Programming

Following are some examples which can be solved by logic programming.

Matching Mathematical Expressions

Actually, we can find unknown values by using logic programming in a very effective way. The following Python code will help you match a mathematical expression.

Consider importing the following packages first:

from kanren import run, var, fact
from kanren.assoccomm import eq_assoccomm as eq
from kanren.assoccomm import commutative, associative

We need to define the mathematical operations which we are going to use:

add = 'add'
mul = 'mul'

Both addition and multiplication are commutative operations. Hence, we need to specify it, and this can be done as follows:

fact(commutative, mul)
fact(commutative, add)
fact(associative, mul)
fact(associative, add)
It is compulsory to define variables. This can be done as follows:

a, b = var('a'), var('b')

We need to match the expression with the original pattern. We have the following original pattern, which is basically of the form (constant + a) * b:

original_pattern = (mul, (add, ..., a), b)   # ... stands for the constant term

We have the following two expressions to match with the original pattern:

exp1 = (mul, ..., (add, ..., ...))
exp2 = (add, ..., (mul, ..., ...))

The output can be printed with the following commands:

print(run(0, (a, b), eq(original_pattern, exp1)))
print(run(0, (a, b), eq(original_pattern, exp2)))

After running this code, we will get output of the following form:

((..., ...),)
()

The first output represents the values for a and b. The first expression matched the original pattern and returned the values for a and b, but the second expression did not match the original pattern, hence nothing has been returned.

Checking for Prime Numbers

With the help of logic programming, we can find the prime numbers from a list of numbers and can also generate prime numbers. The Python code given below will find the prime numbers from a list of numbers and will also generate the first few prime numbers.

Let us first consider importing the following packages:

from kanren import isvar, run, membero
from kanren.core import success, fail, goaleval, condeseq, eq, var
from sympy.ntheory.generate import prime, isprime
import itertools as it

Now, we will define a function called prime_check, which will check for prime numbers based on the given numbers as data.

def prime_check(x):
    if isvar(x):
        return condeseq([(eq, x, p)] for p in map(prime, it.count(1)))
    else:
        return success if isprime(x) else fail

Now, we need to declare a variable which will be used:

x = var()
print((set(run(0, x, (membero, x, (12, 14, 15, 19, 20, 21, 22, 23, 29, 30, 41, 44, 52, 62, 65, 85)), (prime_check, x)))))
print((run(10, x, prime_check(x))))

The output of the above code will be the set of prime numbers found in the list, followed by a tuple of the first 10 generated prime numbers:

{19, 23, 29, 41}
(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)

Solving Puzzles

Logic programming can be used to solve many problems like 8-puzzles, zebra puzzles, Sudoku, N-queens, etc. Here we are taking an example of a variant of the zebra puzzle, which is as follows:

There are five houses.
The English man lives in the red house.
The Swede has a dog.
The Dane drinks tea.
The green house is immediately to the left of the white house.
They drink coffee in the green house.
The man who smokes Pall Mall has birds.
In the yellow house they smoke Dunhill.
In the middle house they drink milk.
The Norwegian lives in the first house.
The man who smokes Blend lives in the house next to the house with cats.
In a house next to the house where they have a horse, they smoke Dunhill.
The man who smokes Blue Master drinks beer.
The German smokes Prince.
The Norwegian lives next to the blue house.
They drink water in a house next to the house where they smoke Blend.
We are solving it for the question "who owns the zebra?" with the help of Python.

Let us import the necessary packages:

from kanren import *
from kanren.core import lall
import time

Now, we need to define two functions, left() and next(), to check whose house is left of or next to whose house:

def left(q, p, list):
    return membero((q, p), zip(list, list[1:]))

def next(q, p, list):
    return conde([left(q, p, list)], [left(p, q, list)])

Now, we will declare a variable houses as follows:

houses = var()

We need to define the rules with the help of the lall package as follows. There are 5 houses:

rules_zebraproblem = lall(
    (eq, (var(), var(), var(), var(), var()), houses),
    (membero, ('Englishman', var(), var(), var(), 'red'), houses),
    (membero, ('Swede', var(), var(), 'dog', var()), houses),
    (membero, ('Dane', var(), 'tea', var(), var()), houses),
    (left, (var(), var(), var(), var(), 'green'),
           (var(), var(), var(), var(), 'white'), houses),
    (membero, (var(), var(), 'coffee', var(), 'green'), houses),
    (membero, (var(), 'Pall Mall', var(), 'birds', var()), houses),
    (membero, (var(), 'Dunhill', var(), var(), 'yellow'), houses),
    (eq, (var(), var(), (var(), var(), 'milk', var(), var()), var(), var()), houses),
    (eq, (('Norwegian', var(), var(), var(), var()), var(), var(), var(), var()), houses),
    (next, (var(), 'Blend', var(), var(), var()),
           (var(), var(), var(), 'cats', var()), houses),
    (next, (var(), 'Dunhill', var(), var(), var()),
           (var(), var(), var(), 'horse', var()), houses),
    (membero, (var(), 'Blue Master', 'beer', var(), var()), houses),
    (membero, ('German', 'Prince', var(), var(), var()), houses),
    (next, ('Norwegian', var(), var(), var(), var()),
           (var(), var(), var(), var(), 'blue'), houses),
    (next, (var(), 'Blend', var(), var(), var()),
           (var(), var(), 'water', var(), var()), houses),
    (membero, (var(), var(), var(), 'zebra', var()), houses)
)

Now, run the solver with the preceding constraints:

solutions = run(0, houses, rules_zebraproblem)

With the help of the following code, we can extract the output from the solver:

output_zebra = [house for house in solutions[0] if 'zebra' in house][0][0]

The following code will help print the solution:

print('\n' + output_zebra + ' owns zebra.')

The output of the above code would be as follows:

German owns zebra.
AI with Python – Unsupervised Learning: Clustering

Unsupervised machine learning algorithms do not have any supervisor to provide any sort of guidance. That is why they are closely aligned with what some call true artificial intelligence.

In unsupervised learning, there would be no correct answer and no teacher for guidance. Algorithms need to discover the interesting patterns in data for learning.

What is Clustering?

Basically, it is a type of unsupervised learning method and a common technique for statistical data analysis used in many fields. Clustering mainly is the task of dividing a set of observations into subsets, called clusters, in such a way that observations in the same cluster are similar in one sense and dissimilar to the observations in other clusters. In simple words, we can say that the main goal of clustering is to group the data on the basis of similarity and dissimilarity.

For example, the following diagram shows similar kinds of data grouped into different clusters.

Algorithms for Clustering the Data

Following are a few common algorithms for clustering the data.

K-Means Algorithm

The K-means clustering algorithm is one of the well-known algorithms for clustering data. We need to assume that the number of clusters is already known. This is also called flat clustering. It is an iterative clustering algorithm. The steps given below need to be followed for this algorithm.
Step 1: We need to specify the desired number of K subgroups.

Step 2: Fix the number of clusters and randomly assign each data point to a cluster, or in other words, classify the data based on the number of clusters.

Step 3: In this step, the cluster centroids should be computed.

Step 4: As this is an iterative algorithm, we need to update the locations of the centroids with every iteration until we find the global optima, or in other words, until the centroids reach their optimal locations.

The following code will help in implementing the K-means clustering algorithm in Python. We are going to use the scikit-learn module.

Let us import the necessary packages:

import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
from sklearn.cluster import KMeans

The following lines of code will help in generating the two-dimensional dataset, containing four blobs, by using make_blobs from the sklearn.datasets package.

from sklearn.datasets import make_blobs
X, y_true = make_blobs(n_samples=500, centers=4, cluster_std=0.40, random_state=0)

We can visualize the dataset by using the following code:

plt.scatter(X[:, 0], X[:, 1], s=50)
plt.show()
Here, we are initializing kmeans to be the KMeans algorithm, with the required parameter of how many clusters (n_clusters).

kmeans = KMeans(n_clusters=4)

We need to train the K-means model with the input data.

kmeans.fit(X)
y_kmeans = kmeans.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='viridis')
centers = kmeans.cluster_centers_

The code given below will help us plot and visualize the machine's findings based on our data, and the fitment according to the number of clusters that are to be found.

plt.scatter(centers[:, 0], centers[:, 1], c='black', s=200, alpha=0.5)
plt.show()
Mean Shift Algorithm

It is another popular and powerful clustering algorithm used in unsupervised learning. It does not make any assumptions, hence it is a non-parametric algorithm. It is also called hierarchical clustering or mean shift cluster analysis. The following would be the basic steps of this algorithm:

First of all, we need to start with the data points assigned to a cluster of their own.

Now, it computes the centroids and updates the location of new centroids.

By repeating this process, we move closer to the peak of the cluster, i.e., towards the region of higher density.

This algorithm stops at the stage where the centroids do not move anymore.

With the help of the following code, we are implementing the mean shift clustering algorithm in Python. We are going to use the scikit-learn module.

Let us import the necessary packages:

import numpy as np
from sklearn.cluster import MeanShift
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")

The following code will help in generating a two-dimensional dataset, containing three blobs, by using make_blobs from the sklearn.datasets package.
from sklearn.datasets import make_blobs

We can visualize the dataset with the following code:

centers = [[2, 2], [4, 5], [3, 10]]
X, _ = make_blobs(n_samples=500, centers=centers, cluster_std=1)
plt.scatter(X[:, 0], X[:, 1])
plt.show()

Now, we need to train the mean shift cluster model with the input data.

ms = MeanShift()
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_

The following code will print the cluster centers and the expected number of clusters as per the input data:

print(cluster_centers)
n_clusters_ = len(np.unique(labels))
print("Estimated clusters:", n_clusters_)
The output lists the coordinates of the estimated cluster centers, followed by the estimated number of clusters.

The code given below will help plot and visualize the machine's findings based on our data, and the fitment according to the number of clusters that are to be found.

colors = 10 * ['r.', 'g.', 'b.', 'c.', 'k.', 'y.', 'm.']
for i in range(len(X)):
    plt.plot(X[i][0], X[i][1], colors[labels[i]], markersize=10)
plt.scatter(cluster_centers[:, 0], cluster_centers[:, 1],
            marker="x", color='k', s=150, linewidths=5, zorder=10)
plt.show()
Measuring the Clustering Performance

Real world data is not naturally organized into a number of distinctive clusters. Due to this reason, it is not easy to visualize and draw inferences. That is why we need to measure the clustering performance as well as its quality. It can be done with the help of silhouette analysis.

Silhouette Analysis

This method can be used to check the quality of clustering by measuring the distance between the clusters. Basically, it provides a way to assess parameters like the number of clusters by giving a silhouette score. This score is a metric that measures how close each point in one cluster is to the points in the neighboring clusters.

Analysis of the silhouette score

The score has a range of [-1, 1]. Following is the analysis of this score:

Score near +1: A score near +1 indicates that the sample is far away from the neighboring cluster.

Score of 0: A score of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters.

Negative score: A negative score indicates that the samples have been assigned to the wrong clusters.

Calculating the Silhouette Score

In this section, we will learn how to calculate the silhouette score. The silhouette score can be calculated by using the following formula:

silhouette score = (p - q) / max(p, q)

Here, p is the mean distance to the points in the nearest cluster that the data point is not a part of, and q is the mean intra-cluster distance to all the points in its own cluster.

For finding the optimal number of clusters, we need to run the clustering algorithm again, importing the metrics module from the sklearn package. In the following example, we will run the K-means clustering algorithm to find the optimal number of clusters.

Import the necessary packages as shown:

import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
from sklearn.cluster import KMeans
from sklearn import metrics

With the help of the following code, we will generate the two-dimensional dataset, containing four blobs, by using make_blobs from the sklearn.datasets package.
from sklearn.datasets import make_blobs
X, y_true = make_blobs(n_samples=500, centers=4, cluster_std=0.40, random_state=0)

Initialize the variables as shown:

scores = []
values = np.arange(2, 10)

We need to iterate the K-means model through all the values and also need to train it with the input data.

for num_clusters in values:
    kmeans = KMeans(init='k-means++', n_clusters=num_clusters, n_init=10)
    kmeans.fit(X)

Now, estimate the silhouette score for the current clustering model using the Euclidean distance metric:

    score = metrics.silhouette_score(X, kmeans.labels_, metric='euclidean', sample_size=len(X))

The following lines of code will help in displaying the number of clusters as well as the silhouette score.

    print("\nNumber of clusters =", num_clusters)
    print("Silhouette score =", score)
    scores.append(score)

You will receive output listing the number of clusters and the corresponding silhouette score for each value tried.

num_clusters = np.argmax(scores) + values[0]
print('\nOptimal number of clusters =', num_clusters)

Now, the output reports the optimal number of clusters found.
Finding Nearest Neighbors

If we want to build recommender systems, such as a movie recommender system, then we need to understand the concept of finding the nearest neighbors. It is because the recommender system utilizes the concept of nearest neighbors.

The concept of finding nearest neighbors may be defined as the process of finding the closest point to the input point from the given dataset. The main use of this KNN (K-nearest neighbors) algorithm is to build classification systems that classify a data point based on the proximity of the input data point to various classes.

The Python code given below helps in finding the K-nearest neighbors of a given data set.

Import the necessary packages as shown below. Here, we are using the NearestNeighbors module from the sklearn package.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

Let us now define the input data:

A = np.array([[...], [...], ...])   # two-dimensional input points

Now, we need to define the number of nearest neighbors:

k = 3

We also need to give the test data from which the nearest neighbors are to be found:

test_data = [..., ...]

The following code can visualize and plot the input data defined by us:

plt.figure()
plt.title('Input data')
plt.scatter(A[:, 0], A[:, 1], marker='o', s=100, color='black')
Now, we need to build the nearest neighbors model. The object also needs to be trained:

knn_model = NearestNeighbors(n_neighbors=k, algorithm='auto').fit(A)
distances, indices = knn_model.kneighbors([test_data])

Now, we can print the K nearest neighbors as follows:

print("\nK Nearest Neighbors:")
for rank, index in enumerate(indices[0][:k], start=1):
    print(str(rank) + " is", A[index])

We can visualize the nearest neighbors along with the test data point:

plt.figure()
plt.title('Nearest neighbors')
plt.scatter(A[:, 0], A[:, 1], marker='o', s=100, color='k')
plt.scatter(A[indices][0][:][:, 0], A[indices][0][:][:, 1], marker='o', s=250, color='k', facecolors='none')
plt.scatter(test_data[0], test_data[1], marker='x', s=100, color='k')
plt.show()
Output

K Nearest Neighbors:
The three nearest neighbors of the test data point are listed with their ranks (1, 2 and 3) and their coordinates.

K-Nearest Neighbors Classifier

A K-nearest neighbors (KNN) classifier is a classification model that uses the nearest neighbors algorithm to classify a given data point. We have implemented the KNN algorithm in the last section; now we are going to build a KNN classifier using that algorithm.

Concept of the KNN Classifier

The basic concept of K-nearest neighbor classification is to find a predefined number, i.e., the 'k', of training samples closest in distance to a new sample, which has to be classified. New samples get their label from the neighbors themselves. The KNN classifiers have a fixed, user-defined constant for the number of neighbors which have to be determined. For the distance, the standard Euclidean distance is the most common choice. The KNN classifier works directly on the learned samples, rather than creating rules for learning. The KNN algorithm is among the simplest of all machine learning algorithms. It has been quite successful in a large number of classification and regression problems, for example, character recognition or image analysis.
Example

We are building a KNN classifier to recognize digits. For this, we will use the MNIST-style handwritten digits dataset that ships with scikit-learn (load_digits). We will write this code in the Jupyter Notebook.

Import the necessary packages as shown below. Here we are using the KNeighborsClassifier module from the sklearn.neighbors package:

from sklearn.datasets import *
import pandas as pd
%matplotlib inline
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import numpy as np

The following code will display the image of a digit to verify what image we have to test:

def image_display(i):
    plt.imshow(digit['images'][i], cmap='Greys_r')
    plt.show()

Now, we need to load the digits dataset. Actually, there are 1,797 images in total, but we are using the first 1,600 images as training samples and the remaining would be kept for testing purposes.

digit = load_digits()
digit_d = pd.DataFrame(digit['data'][0:1600])

Now, on displaying the images, we can see the output as follows:

image_display(0)

The image of 0 is displayed as follows:
image_display(9)

The image of 9 is displayed as follows:

digit.keys()

Now, we need to create the training and testing data sets and supply the testing data set to the KNN classifier.

train_x = digit['data'][:1600]
train_y = digit['target'][:1600]
knn = KNeighborsClassifier(20)
knn.fit(train_x, train_y)

The following output will show the K nearest neighbor classifier constructor:

KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
    metric_params=None, n_jobs=1, n_neighbors=20, p=2, weights='uniform')

We need to create the testing sample by providing any arbitrary index greater than 1600, which were the training samples.

test = np.array(digit['data'][1725])
test1 = test.reshape(1, -1)
image_display(1725)

The image of the test digit is displayed as follows:

Now we will predict the test data as follows:

knn.predict(test1)

The above code will generate the predicted label of the test image as an array output.

Now, consider the following:

digit['target_names']

The above code will generate the following output:

array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
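The walkthrough above stops at a single prediction. As a complementary, minimal sketch (assuming scikit-learn's load_digits and train_test_split), the classifier's accuracy can also be measured on a held-out test split; the test size, random_state and n_neighbors below are illustrative values:

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("Test accuracy =", knn.score(X_test, y_test))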
AI with Python – Natural Language Processing

Natural Language Processing (NLP) refers to an AI method of communicating with intelligent systems using a natural language such as English.

Processing of natural language is required when you want an intelligent system like a robot to perform as per your instructions, when you want to hear decisions from a dialogue-based clinical expert system, etc.

The field of NLP involves making computers perform useful tasks with the natural languages humans use. The input and output of an NLP system can be speech or written text.

Components of NLP

In this section, we will learn about the different components of NLP. There are two components of NLP. The components are described below.

Natural Language Understanding (NLU)

It involves the following tasks:

Mapping the given input in natural language into useful representations.

Analyzing different aspects of the language.

Natural Language Generation (NLG)

It is the process of producing meaningful phrases and sentences in the form of natural language from some internal representation. It involves:

Text planning: This includes retrieving the relevant content from the knowledge base.

Sentence planning: This includes choosing the required words, forming meaningful phrases and setting the tone of the sentence.

Text realization: This is mapping the sentence plan into sentence structure.

Difficulties in NLU

The NLU is very rich in form and structure; however, it is ambiguous. There can be different levels of ambiguity:

Lexical ambiguity: It is at a very primitive level, such as the word level. For example, treating the word "board" as a noun or a verb.
Syntax-level ambiguity: A sentence can be parsed in different ways. For example, "He lifted the beetle with red cap." Did he use a cap to lift the beetle, or did he lift a beetle that had a red cap?

Referential ambiguity: Referring to something using pronouns. For example, Rima went to Gauri. She said, "I am tired." Exactly who is tired?

NLP Terminology

Let us now see a few important terms in the NLP terminology.

Phonology: It is the study of organizing sound systematically.

Morphology: It is the study of the construction of words from primitive meaningful units.

Morpheme: It is a primitive unit of meaning in a language.

Syntax: It refers to arranging words to make a sentence. It also involves determining the structural role of words in the sentence and in phrases.

Semantics: It is concerned with the meaning of words and how to combine words into meaningful phrases and sentences.

Pragmatics: It deals with using and understanding sentences in different situations and how the interpretation of the sentence is affected.

Discourse: It deals with how the immediately preceding sentence can affect the interpretation of the next sentence.

World knowledge: It includes general knowledge about the world.

Steps in NLP

This section shows the different steps in NLP.

Lexical Analysis

It involves identifying and analyzing the structure of words. The lexicon of a language means the collection of words and phrases in a language. Lexical analysis is dividing the whole chunk of text into paragraphs, sentences and words.

Syntactic Analysis (Parsing)

It involves the analysis of the words in a sentence for grammar and arranging the words in a manner that shows the relationship among them. A sentence such as "The school goes to boy" is rejected by an English syntactic analyzer.

Semantic Analysis

It draws the exact meaning or the dictionary meaning from the text. The text is checked for meaningfulness. It is done by mapping syntactic structures and objects in the task domain. The semantic analyzer disregards a sentence such as "hot ice-cream".
Discourse Integration

The meaning of any sentence depends upon the meaning of the sentence just before it. In addition, it also brings about the meaning of the immediately succeeding sentence.

Pragmatic Analysis

During this, what was said is re-interpreted on what it actually meant. It involves deriving those aspects of language which require real-world knowledge.
In this chapter, we will learn how to get started with the Natural Language Toolkit package.

Prerequisite

If we want to build applications with natural language processing, then the change in context makes it most difficult. The context factor influences how the machine understands a particular sentence. Hence, we need to develop natural language applications by using machine learning approaches so that a machine can also understand the way a human understands the context.

To build such applications, we will use the Python package called NLTK (Natural Language Toolkit package).

Importing NLTK

We need to install NLTK before using it. It can be installed with the help of the following command:

pip install nltk

To build a conda package for NLTK, use the following command:

conda install -c anaconda nltk

Now, after installing the NLTK package, we need to import it through the Python command prompt. We can import it by writing the following command on the Python command prompt:

import nltk

Downloading NLTK's Data

Now, after importing NLTK, we need to download the required data. It can be done with the help of the following command on the Python command prompt:

nltk.download()
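nltk.download() with no arguments opens an interactive downloader. As a minimal alternative sketch, the individual resources used by the examples later in this chapter can be fetched by name:

import nltk

# fetch only the resources needed for the examples in this chapter
nltk.download('punkt')                        # tokenizer models used by sent_tokenize/word_tokenize
nltk.download('averaged_perceptron_tagger')   # part-of-speech tagger
nltk.download('wordnet')                      # lexical database used by WordNetLemmatizer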
Installing Other Necessary Packages

For building natural language processing applications by using NLTK, we need to install the necessary packages. The packages are as follows:

gensim

It is a robust semantic modeling library that is useful for many applications. We can install it by executing the following command:

pip install gensim

pattern

It is used to make the gensim package work properly. We can install it by executing the following command:

pip install pattern

Concept of Tokenization, Stemming and Lemmatization

In this section, we will understand what tokenization, stemming and lemmatization are.

Tokenization

It may be defined as the process of breaking the given text, i.e., the character sequence, into smaller units called tokens. The tokens may be words, numbers or punctuation marks. It is also called word segmentation. Following is a simple example of tokenization:

Input: Mango, banana, pineapple and apple all are fruits.

Output: Mango banana pineapple and apple all are fruits

The process of breaking the given text can be done by locating the word boundaries. The ending of a word and the beginning of a new word are called word boundaries. The writing system and the typographical structure of the words influence the boundaries.

In the Python NLTK module, we have different packages related to tokenization which we can use to divide the text into tokens as per our requirements. Some of the packages are as follows:

sent_tokenize package

As the name suggests, this package will divide the input text into sentences. We can import this package with the help of the following Python code:

from nltk.tokenize import sent_tokenize
word_tokenize package

This package divides the input text into words. We can import this package with the help of the following Python code:

from nltk.tokenize import word_tokenize

WordPunctTokenizer package

This package divides the input text into words as well as punctuation marks. We can import this package with the help of the following Python code:

from nltk.tokenize import WordPunctTokenizer

Stemming

While working with words, we come across a lot of variations due to grammatical reasons. The concept of variations here means that we have to deal with different forms of the same word, like democracy, democratic and democratization. It is very necessary for machines to understand that these different words have the same base form. In this way, it would be useful to extract the base forms of the words while we are analyzing the text.

We can achieve this by stemming. In this way, we can say that stemming is the heuristic process of extracting the base forms of words by chopping off the ends of words.

In the Python NLTK module, we have different packages related to stemming. These packages can be used to get the base form of a word. These packages use different algorithms. Some of the packages are as follows:

PorterStemmer package

This Python package uses Porter's algorithm to extract the base form. We can import this package with the help of the following Python code:

from nltk.stem.porter import PorterStemmer

For example, if we give the word 'writing' as the input to this stemmer, then we will get the word 'write' after stemming.

LancasterStemmer package

This Python package uses Lancaster's algorithm to extract the base form. We can import this package with the help of the following Python code:

from nltk.stem.lancaster import LancasterStemmer

For example, if we give the word 'writing' as the input to this stemmer, then we will get the word 'writ' after stemming.
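As a minimal sketch pulling the tokenizers and the two stemmers described above together (it assumes the 'punkt' tokenizer data has been downloaded; the input sentence is made up for illustration):

from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.stem.porter import PorterStemmer
from nltk.stem.lancaster import LancasterStemmer

text = "Mango, banana, pineapple and apple all are fruits. I was writing about them."

print(sent_tokenize(text))   # split into sentences
print(word_tokenize(text))   # split into words and punctuation marks

porter = PorterStemmer()
lancaster = LancasterStemmer()
print(porter.stem('writing'))      # -> 'write'
print(lancaster.stem('writing'))   # -> 'writ'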
SnowballStemmer package

This Python package uses the Snowball algorithm to extract the base form. We can import this package with the help of the following Python code:

from nltk.stem.snowball import SnowballStemmer

For example, if we give the word 'writing' as the input to this stemmer, then we will get the word 'write' after stemming.

All of these algorithms have different levels of strictness. If we compare these three stemmers, then the Porter stemmer is the least strict and Lancaster is the strictest. The Snowball stemmer is good to use in terms of speed as well as strictness.

Lemmatization

We can also extract the base form of words by lemmatization. It basically does this task with the use of a vocabulary and morphological analysis of words, normally aiming to remove inflectional endings only. This kind of base form of any word is called a lemma.

The main difference between stemming and lemmatization is the use of vocabulary and morphological analysis of the words. Another difference is that stemming most commonly collapses derivationally related words, whereas lemmatization commonly only collapses the different inflectional forms of a lemma. For example, if we provide the word saw as the input word, then stemming might return the word 's', but lemmatization would attempt to return either see or saw, depending on whether the use of the token was a verb or a noun.

In the Python NLTK module, we have the following package related to the lemmatization process, which we can use to get the base form of a word:

WordNetLemmatizer package

This Python package will extract the base form of the word depending upon whether it is used as a noun or as a verb. We can import this package with the help of the following Python code:

from nltk.stem import WordNetLemmatizer

Chunking: Dividing Data into Chunks

It is one of the important processes in natural language processing. The main job of chunking is to identify the parts of speech and short phrases like noun phrases. We have already studied the process of tokenization, the creation of tokens. Chunking basically is the labeling of those tokens. In other words, chunking will show us the structure of the sentence.

In the following section, we will learn about the different types of chunking.
18,648 | types of chunking there are two types of chunking the types are as followschunking up in this process of chunkingthe objectthingsetc move towards being more general and the language gets more abstract there are more chances of agreement in this processwe zoom out for exampleif we will chunk up the question that "for what purpose cars are"we may get the answer "transportchunking down in this process of chunkingthe objectthingsetc move towards being more specific and the language gets more penetrated the deeper structure would be examined in chunking down in this processwe zoom in for exampleif we chunk down the question "tell specifically about car"we will get smaller pieces of information about the car example in this examplewe will do noun-phrase chunkinga category of chunking which will find the noun phrases chunks in the sentenceby using the nltk module in pythonfollow these steps in python for implementing noun phrase chunkingstep in this stepwe need to define the grammar for chunking it would consist of the rules which we need to follow step in this stepwe need to create chunk parser it would parse the grammar and give the output step in this last stepthe output is produced in tree format let us import the necessary nltk package as followsimport nltk nowwe need to define the sentence heredt means the determinantvbp means the verbjj means the adjectivein means the preposition and nn means the noun sentence [(" ""dt"),("clever""jj"),("fox","nn"),("was","vbp"),("jumping","vbp"),("over","in"),("the","dt"),"wall","nn")nowwe need to give the grammar herewe will give the grammar in the form of regular expression grammar "np:{?*}we need to define parser which will parse the grammar parser_chunking=nltk regexpparser(grammar |
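The remaining steps (parsing the sentence and drawing the tree) continue below; for convenience, here is the whole noun-phrase chunking example gathered into one runnable sketch. Note that the angle-bracket tags of the grammar were lost in the extracted text above ("np:{?*}"), so the pattern used here, NP: {<DT>?<JJ>*<NN>} (an optional determiner, any number of adjectives, then a noun), is an assumption consistent with the tagged sentence.

import nltk

sentence = [("a", "DT"), ("clever", "JJ"), ("fox", "NN"), ("was", "VBP"),
            ("jumping", "VBP"), ("over", "IN"), ("the", "DT"), ("wall", "NN")]

grammar = "NP: {<DT>?<JJ>*<NN>}"               # assumed pattern; see note above
parser_chunking = nltk.RegexpParser(grammar)   # build the chunk parser from the grammar
output_chunk = parser_chunking.parse(sentence) # returns an nltk Tree
output_chunk.draw()                            # opens a window with the chunked tree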
18,649 | the parser parses the sentence as followsparser_chunking parse(sentencenextwe need to get the output the output is generated in the simple variable called output_chunk output_chunk=parser_chunking parse(sentenceupon execution of the following codewe can draw our output in the form of tree output draw(bag of word (bowmodel bag of word (bow) model in natural language processingis basically used to extract the features from text so that the text can be used in modeling such that in machine learning algorithms now the question arises that why we need to extract the features from text it is because the machine learning algorithms cannot work with raw data and they need numeric data so that they can extract meaningful information out of it the conversion of text data into numeric data is called feature extraction or feature encoding how it works this is very simple approach for extracting the features from text suppose we have text document and we want to convert it into numeric data or say want to extract the features out of it then first of all this model extracts vocabulary from all the words in the document then by using document term matrixit will build model in this waybow represents the document as bag of words only any information about the order or structure of words in the document is discarded concept of document term matrix the bow algorithm builds model by using the document term matrix as the name suggeststhe document term matrix is the matrix of various word counts that occur in the document with the help of this matrixthe text document can be represented as |
18,650 | weighted combination of various words by setting the threshold and choosing the words that are more meaningfulwe can build histogram of all the words in the documents that can be used as feature vector following is an example to understand the concept of document term matrixexample suppose we have the following two sentencessentence we are using the bag of words model sentence bag of words model is used for extracting the features nowby considering these two sentenceswe have the following distinct wordswe are using the bag of words model is used for extracting features nowwe need to build histogram for each sentence by using the word count in each sentencesentence [ , , , , , , , , , , , , sentence [ , , , , , , , , , , , , in this waywe have the feature vectors that have been extracted each feature vector is -dimensional because we have distinct words concept of the statistics the concept of the statistics is called termfrequency-inverse document frequency (tfidfevery word is important in the document the statistics help us nderstand the importance of every word term frequency(tfit is the measure of how frequently each word appears in document it can be obtained by dividing the count of each word by the total number of words in given document |
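As a minimal illustration of the count histograms and the term-frequency definition just given, the two sentences can be turned into vectors with plain Python (the vocabulary below is in alphabetical order, so the ordering differs from the listing above, but the idea is the same):

from collections import Counter

sentences = ["We are using the Bag of Words model",
             "Bag of Words model is used for extracting the features"]

# Vocabulary of distinct (lower-cased) words across both sentences: 13 words
vocabulary = sorted({word.lower() for s in sentences for word in s.split()})

for s in sentences:
    words = [word.lower() for word in s.split()]
    counts = Counter(words)
    print([counts[w] for w in vocabulary])                # raw count histogram
    print([counts[w] / len(words) for w in vocabulary])   # term frequency: count / total words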
18,651 | inverse document frequency(idfit is the measure of how unique word is to this document in the given set of documents for calculating idf and formulating distinctive feature vectorwe need to reduce the weights of commonly occurring words like the and weigh up the rare words building bag of words model in nltk in this sectionwe will define collection of strings by using countvectorizer to create vectors from these sentences let us import the necessary packagefrom sklearn feature_extraction text import countvectorizer now define the set of sentences sentences=['we are using the bag of word model''bag of word used for extracting the features 'model is vectorizer_count countvectorizer(features_text vectorizer fit_transform(sentencestodense(print(vectorizer vocabulary_the above program generates the output as shown below it shows that we have distinct words in the above two sentences{'we' 'are' 'using' 'the' 'bag' 'of' 'word' 'model' 'is' 'used' 'for' 'extracting' 'features' these are the feature vectors (text to numeric formwhich can be used for machine learning solving problems in this sectionwe will solve few related problems category prediction in set of documentsnot only the words but the category of the words is also importantin which category of text particular word falls for examplewe want to predict whether given sentence belongs to the category emailnewssportscomputeretc in the following examplewe are going to use tf-idf to formulate feature vector to find the category of documents we will use the data from newsgroup dataset of sklearn we need to import the necessary packages |
18,652 | from sklearn datasets import fetch_ newsgroups from sklearn naive_bayes import multinomialnb from sklearn feature_extraction text import tfidftransformer from sklearn feature_extraction text import countvectorizer define the category map we are using five different categories named religionautossportselectronics and space category_map {'talk religion misc''religion''rec autos''autos','rec sport hockey':'hockey','sci electronics':'electronics''sci space''space'create the training settraining_data fetch_ newsgroups(subset='train'categories=category_map keys()shuffle=truerandom_state= build count vectorizer and extract the term countsvectorizer_count countvectorizer(train_tc vectorizer_count fit_transform(training_data dataprint("\ndimensions of training data:"train_tc shapethe tf-idf transformer is created as followstfidf tfidftransformer(train_tfidf tfidf fit_transform(train_tcnowdefine the test datainput_data 'discovery was space shuttle''hinduchristiansikh all are religions''we must have to drive safely''puck is disk made of rubber''televisionmicrowaverefrigrated all uses electricitythe above data will help us train multinomial naive bayes classifierclassifier multinomialnb(fit(train_tfidftraining_data target |
18,653 | transform the input data using the count vectorizerinput_tc vectorizer_count transform(input_datanowwe will transform the vectorized data using the tfidf transformerinput_tfidf tfidf transform(input_tcwe will predict the output categoriespredictions classifier predict(input_tfidfthe output is generated as followsfor sentcategory in zip(input_datapredictions)print('\ninput data:'sent'\ category:'category_map[training_data target_names[category]]the category predictor generates the following outputdimensions of training data( input datadiscovery was space shuttle categoryspace input datahinduchristiansikh all are religions categoryreligion input datawe must have to drive safely categoryautos input datapuck is disk made of rubber categoryhockey input datatelevisionmicrowaverefrigrated all uses electricity categoryelectronics gender finder this problem statementa classifier would be trained to find the gender (male or femaleby providing the names we need to use heuristic to construct feature vector and train |
18,654 | the classifier we will be using the labeled data from the scikit-learn package following is the python code to build gender finderlet us import the necessary packagesimport random from nltk import naivebayesclassifier from nltk classify import accuracy as nltk_accuracy from nltk corpus import names now we need to extract the last letters from the input word these letters will act as featuresdef extract_features(wordn= )last_n_letters word[- :return {'feature'last_n_letters lower()if __name__=='__main__'create the training data using labeled names (male as well as femaleavailable in nltkmale_list [(name'male'for name in names words('male txt')female_list [(name'female'for name in names words('female txt')data (male_list female_listrandom seed( random shuffle(datanowtest data will be created as followsnamesinput ['rajesh''gaurav''swati''shubha'define the number of samples used for train and test with the following code train_sample int( len(data)nowwe need to iterate through different lengths so that the accuracy can be comparedfor in range( )print('\nnumber of end letters:'ifeatures [(extract_features(ni)genderfor (ngenderin data |
18,655 | train_datatest_data features[:train_sample]features[train_sample:classifier naivebayesclassifier train(train_datathe accuracy of the classifier can be computed as followsaccuracy_classifier round( nltk_accuracy(classifiertest_data) print('accuracy str(accuracy_classifier'%'nowwe can predict the outputfor name in namesinputprint(name'==>'classifier classify(extract_features(namei))the above program will generate the following outputnumber of end letters accuracy rajesh -female gaurav -male swati -female shubha -female number of end letters accuracy rajesh -male gaurav -male swati -female shubha -female number of end letters accuracy rajesh -male gaurav -female swati -female shubha -female number of end letters accuracy rajesh -female |
18,656 | gaurav -female swati -female shubha -female number of end letters accuracy rajesh -female gaurav -female swati -female shubha -female in the above outputwe can see that accuracy in maximum number of end letters are two and it is decreasing as the number of end letters are increasing topic modelingidentifying patterns in text data we know that generally documents are grouped into topics sometimes we need to identify the patterns in text that correspond to particular topic the technique of doing this is called topic modeling in other wordswe can say that topic modeling is technique to uncover abstract themes or hidden structure in the given set of documents we can use the topic modeling technique in the following scenariostext classification with the help of topic modelingclassification can be improved because it groups similar words together rather than using each word separately as feature recommender systems with the help of topic modelingwe can build the recommender systems by using similarity measures algorithms for topic modeling topic modeling can be implemented by using algorithms the algorithms are as followslatent dirichlet allocation(ldathis algorithm is the most popular for topic modeling it uses the probabilistic graphical models for implementing topic modeling we need to import gensim package in python for using lda slgorithm latent semantic analysis(ldaor latent semantic indexing(lsithis algorithm is based upon linear algebra basically it uses the concept of svd (singular value decompositionon the document term matrix |
18,657 | non-negative matrix factorization (nmfit is also based upon linear algebra all of the above mentioned algorithms for topic modeling would have the number of topics as parameterdocument-word matrix as an input and wtm (word topic matrixtdm (topic document matrixas output |
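A minimal, hedged sketch of topic modeling with LDA using the gensim package mentioned above; the toy corpus, the number of topics and the number of training passes are all illustrative choices, not values from the tutorial.

from gensim import corpora, models

documents = [["space", "shuttle", "orbit", "satellite", "launch"],
             ["cricket", "football", "score", "team", "match"],
             ["rocket", "orbit", "space", "launch", "astronaut"]]

dictionary = corpora.Dictionary(documents)                      # word <-> id mapping
doc_term_matrix = [dictionary.doc2bow(doc) for doc in documents]

lda_model = models.LdaModel(doc_term_matrix, num_topics=2,
                            id2word=dictionary, passes=20)
for topic in lda_model.print_topics(num_words=4):
    print(topic)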
18,658 | ai with python analyzing time series data predicting the next in given input sequence is another important concept in machine learning this gives you detailed explanation about analyzing time series data introduction time series data means the data that is in series of particular time intervals if we want to build sequence prediction in machine learningthen we have to deal with sequential data and time series data is an abstract of sequential data ordering of data is an important feature of sequential data basic concept of sequence analysis or time series analysis sequence analysis or time series analysis is to predict the next in given input sequence based on the previously observed the prediction can be of anything that may come nexta symbola numbernext day weathernext term in speech etc sequence analysis can be very handy in applications such as stock market analysisweather forecastingand product recommendations example consider the following example to understand sequence prediction here , , , are the given values and you have to predict the value using sequence prediction model ( , , ,dsequence prediction model (einstalling useful packages for time series data analysis using pythonwe need to install the following packagespandas pandas is an open source bsd-licensed library which provides high-performanceease of data structure usage and data analysis tools for python you can install pandas with the help of the following commandpip install pandas if you are using anaconda and want to install by using the conda package managerthen you can use the following commandconda install - anaconda pandas |
18,659 | hmmlearn it is an open source bsd-licensed library which consists of simple algorithms and models to learn hidden markov models(hmmin python you can install it with the help of the following commandpip install hmmlearn if you are using anaconda and want to install by using the conda package managerthen you can use the following commandconda install - omnia hmmlearn pystruct it is structured learning and prediction library learning algorithms implemented in pystruct have names such as conditional random fields(crf)maximum-margin markov random networks ( nor structural support vector machines you can install it with the help of the following commandpip install pystruct cvxopt it is used for convex optimization based on python programming language it is also free software package you can install it with the help of following commandpip install cvxopt if you are using anaconda and want to install by using the conda package managerthen you can use the following commandconda install - anaconda cvdoxt pandashandlingslicing and extracting statistic from time series data pandas is very useful tool if you have to work with time series data with the help of pandasyou can perform the followingcreate range of dates by using the pd date_range package index pandas with dates by using the pd series package perform re-sampling by using the ts resample package change the frequency |
18,660 | example the following example shows you handling and slicing the time series data by using pandas note that here we are using the monthly arctic oscillation datawhich can be downloaded from current ascii and can be converted to text format for our use handling time series data for handling time series datayou will have to perform the following stepsthe first step involves importing the following packagesimport numpy as np import matplotlib pyplot as plt import pandas as pd nextdefine function which will read the data from the input fileas shown in the code given belowdef read_data(input_file)input_data np loadtxt(input_filedelimiter=nonenowconvert this data to time series for thiscreate the range of dates of our time series in this examplewe keep one month as frequency of data our file is having the data which starts from january dates pd date_range(' - 'periods=input_data shape[ ]freq=' 'in this stepwe create the time series data with the help of pandas seriesas shown belowoutput pd series(input_data[:index]index=datesreturn output if __name__=='__main__'enter the path of the input file as shown hereinput_file "/users/admin/ao txtnowconvert the column to timeseries formatas shown here |
18,661 | timeseries read_data(input_filefinallyplot and visualize the datausing the commands shownplt figure(timeseries plot(plt show(you will observe the plots as shown in the following images |
18,662 | slicing time series data slicing involves retrieving only some part of the time series data as part of the examplewe are slicing the data only from to observe the following code that performs this tasktimeseries[' ':' 'plot(plt show( |
18,663 | when you run the code for slicing the time series datayou can observe the following graph as shown in the image hereextracting statistic from time series data you will have to extract some statistics from given datain cases where you need to draw some important conclusion meanvariancecorrelationmaximum valueand minimum value are some of such statistics you can use the following code if you want to extract such statistics from given time series datamean you can use the mean(functionfor finding the meanas shown heretimeseries mean(then the output that you will observe for the example discussed is- maximum you can use the max(functionfor finding maximumas shown heretimeseries max(then the output that you will observe for the example discussed is |
18,664 | minimum you can use the min(functionfor finding minimumas shown heretimeseries min(then the output that you will observe for the example discussed is- getting everything at once if you want to calculate all statistics at timeyou can use the describe(function as shown heretimeseries describe(then the output that you will observe for the example discussed iscount mean - std min - - - max dtypefloat re-sampling you can resample the data to different time frequency the two parameters for performing re-sampling aretime period method re-sampling with mean(you can use the following code to resample the data with the mean()methodwhich is the default methodtimeseries_mm timeseries resample(" "mean(timeseries_mm plot(style=' --'plt show( |
18,665 | thenyou can observe the following graph as the output of resampling using mean()re-sampling with median(you can use the following code to resample the data using the median()methodtimeseries_mm timeseries resample(" "median(timeseries_mm plot(plt show( |
18,666 | thenyou can observe the following graph as the output of re-sampling with median()rolling mean you can use the following code to calculate the rolling (movingmeantimeseries rolling(window= center=falsemean(plot(style='- 'plt show(thenyou can observe the following graph as the output of the rolling (movingmean |
18,667 | analyzing sequential data by hidden markov model (hmmhmm is statistic model which is widely used for data having continuation and extensibility such as time series stock market analysishealth checkupand speech recognition this section deals in detail with analyzing sequential data using hidden markov model (hmmhidden markov model (hmmhmm is stochastic model which is built upon the concept of markov chain based on the assumption that probability of future stats depends only on the current process state rather any state that preceded it for examplewhen tossing coinwe cannot say that the result of the fifth toss will be head this is because coin does not have any memory and the next result does not depend on the previous result mathematicallyhmm consists of the following variablesstates (sit is set of hidden or latent states present in hmm it is denoted by output symbols (oit is set of possible output symbols present in hmm it is denoted by state transition probability matrix (ait is the probability of making transition from one state to each of the other states it is denoted by observation emission probability matrix (bit is the probability of emitting/observing symbol at particular state it is denoted by |
18,668 | prior probability matrix (pit is the probability of starting at particular state from various states of the system it is denoted by hencea hmm may be defined as (soab)wheres { snis set of possible stateso { omis set of possible observation symbolsa is an nxn state transition probability matrix (tpm) is an nxm observation or emission probability matrix (epm) is an dimensional initial state probability distribution vector exampleanalysis of stock market data in this examplewe are going to analyze the data of stock marketstep by stepto get an idea about how the hmm works with sequential or time series data please note that we are implementing this example in python import the necessary packages as shown belowimport datetime import warnings nowuse the stock market data from the matpotlib finance packageas shown hereimport numpy as np from matplotlib import cmpyplot as plt from matplotlib dates import yearlocatormonthlocator tryfrom matplotlib finance import quotes_historical_yahoo_och except importerrorfrom matplotlib finance import quotes_historical_yahoo as quotes_historical_yahoo_och from hmmlearn hmm import gaussianhmm load the data from start date and end datei between two specific dates as shown herestart_date datetime date( end_date datetime date( quotes quotes_historical_yahoo_och ('intc'start_dateend_date |
18,669 | in this stepwe will extract the closing quotes every day for thisuse the following commandclosing_quotes np array([quote[ for quote in quotes]nowwe will extract the volume of shares traded every day for thisuse the following commandvolumes np array([quote[ for quote in quotes])[ :heretake the percentage difference of closing stock pricesusing the code shown belowdiff_percentages np diff(closing_quotesclosing_quotes[:-dates np array([quote[ for quote in quotes]dtype=np int)[ :training_data np column_stack([diff_percentagesvolumes]in this stepcreate and train the gaussian hmm for thisuse the following codehmm gaussianhmm(n_components= covariance_type='diag'n_iter= with warnings catch_warnings()warnings simplefilter('ignore'hmm fit(training_datanowgenerate data using the hmm modelusing the commands shownnum_samples samples_ hmm sample(num_samplesfinallyin this stepwe plot and visualize the difference percentage and volume of shares traded as output in the form of graph use the following code to plot and visualize the difference percentagesplt figure(plt title('difference percentages'plt plot(np arange(num_samples)samples[: ] ='black'use the following code to plot and visualize the volume of shares tradedplt figure(plt title('volume of shares'plt plot(np arange(num_samples)samples[: ] ='black'plt ylim(ymin= |
18,670 | plt.show()
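Note that the matplotlib.finance module used above for downloading the quotes has been removed from recent matplotlib releases, so the data-loading part may fail on a modern installation. The HMM part itself can still be exercised on synthetic data; a hedged sketch follows, where the made-up features stand in for the (percentage difference, volume) pairs.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic two-feature sequence standing in for (percentage difference, volume) pairs
rng = np.random.RandomState(7)
training_data = np.column_stack([rng.normal(0, 1, 500),
                                 rng.normal(100, 10, 500)])

hmm = GaussianHMM(n_components=5, covariance_type='diag', n_iter=1000)
hmm.fit(training_data)

num_samples = 300
samples, states = hmm.sample(num_samples)   # generate new observations from the fitted model
print(samples.shape, states.shape)          # (300, 2) and (300,)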
18,671 | ai with python speech recognition in this we will learn about speech recognition using ai with python speech is the most basic means of adult human communication the basic goal of speech processing is to provide an interaction between human and machine speech processing system has mainly three tasksfirstspeech recognition that allows the machine to catch the wordsphrases and sentences we speak secondnatural language processing to allow the machine to understand what we speakand thirdspeech synthesis to allow the machine to speak this focuses on speech recognitionthe process of understanding the words that are spoken by human beings remember that the speech signals are captured with the help of microphone and then it has to be understood by the system building speech recognizer speech recognition or automatic speech recognition (asris the center of attention for ai projects like robotics without asrit is not possible to imagine cognitive robot interacting with human howeverit is not quite easy to build speech recognizer difficulties in developing speech recognition system developing high quality speech recognition system is really difficult problem the difficulty of speech recognition technology can be broadly characterized along number of dimensions as discussed belowsize of the vocabularysize of the vocabulary impacts the ease of developing an asr consider the following sizes of vocabulary for better understanding small size vocabulary consists of - wordsfor exampleas in voicemenu system medium size vocabulary consists of several to , of wordsfor exampleas in database-retrieval task large size vocabulary consists of several , of wordsas in general dictation task note thatthe larger the size of vocabularythe harder it is to perform recognition channel characteristicschannel quality is also an important dimension for examplehuman speech contains high bandwidth with full frequency rangewhile telephone speech consists of low bandwidth with limited frequency range note that it is harder in the latter speaking modeease of developing an asr also depends on the speaking modethat is whether the speech is in isolated word modeor connected word modeor in continuous speech mode note that continuous speech is harder to recognize |
18,672 | speaking stylea read speech may be in formal styleor spontaneous and conversational with casual style the latter is harder to recognize speaker dependencyspeech can be speaker dependentspeaker adaptiveor speaker independent speaker independent is the hardest to build type of noisenoise is another factor to consider while developing an asr signal to noise ratio may be in various rangesdepending on the acoustic environment that observes less versus more background noiseo if the signal to noise ratio is greater than dbit is considered as high range if the signal to noise ratio lies between db to dbit is considered as medium snr if the signal to noise ratio is lesser than dbit is considered as low range for examplethe type of background noise such as stationarynon-human noisebackground speech and crosstalk by other speakers also contributes to the difficulty of the problem microphone characteristicsthe quality of microphone may be goodaverageor below average alsothe distance between mouth and micro-phone can vary these factors also should be considered for recognition systems despite these difficultiesresearchers worked lot on various aspects of speech such as understanding the speech signalthe speakerand identifying the accents you will have to follow the steps given below to build speech recognizervisualizing audio signals reading from file and working on it this is the first step in building speech recognition system as it gives an understanding of how an audio signal is structured some common steps that can be followed to work with audio signals are as followsrecording when you have to read the audio signal from filethen record it using microphoneat first sampling when recording with microphonethe signals are stored in digitized form but to work upon itthe machine needs them in the discrete numeric form hencewe should perform sampling at certain frequency and convert the signal into the discrete numerical form choosing the high frequency for sampling implies that when humans listen to the signalthey feel it as continuous audio signal example the following example shows stepwise approach to analyze an audio signalusing pythonwhich is stored in file the frequency of this audio signal is , hz |
18,673 | import the necessary packages as shown hereimport numpy as np import matplotlib pyplot as plt from scipy io import wavfile nowread the stored audio file it will return two valuesthe sampling frequency and the audio signal provide the path of the audio file where it is storedas shown herefrequency_samplingaudio_signal wavfile read("/users/admin/audio_file wav"display the parameters like sampling frequency of the audio signaldata type of signal and its durationusing the commands shownprint('\nsignal shape:'audio_signal shapeprint('signal datatype:'audio_signal dtypeprint('signal duration:'round(audio_signal shape[ float(frequency_sampling) )'seconds'this step involves normalizing the signal as shown belowaudio_signal audio_signal np power( in this stepwe are extracting the first values from this signal to visualize use the following commands for this purposeaudio_signal audio_signal [: time_axis np arange( len(signal) float(frequency_samplingnowvisualize the signal using the commands given belowplt plot(time_axissignalcolor='blue'plt xlabel('time (milliseconds)'plt ylabel('amplitude'plt title('input audio signal'plt show(you would be able to see an output graph and data extracted for the above audio signal as shown in the image here |
18,674 | signal shape( ,signal datatypeint signal duration seconds characterizing the audio signaltransforming to frequency domain characterizing an audio signal involves converting the time domain signal into frequency domainand understanding its frequency componentsby this is an important step because it gives lot of information about the signal you can use mathematical tool like fourier transform to perform this transformation example the following example showsstep-by-stephow to characterize the signalusing pythonwhich is stored in file note that here we are using fourier transform mathematical tool to convert it into frequency domain import the necessary packagesas shown hereimport numpy as np import matplotlib pyplot as plt from scipy io import wavfile nowread the stored audio file it will return two valuesthe sampling frequency and the the audio signal provide the path of the audio file where it is stored as shown in the command herefrequency_samplingaudio_signal wavfile read("/users/admin/sample wav" |
18,675 | in this stepwe will display the parameters like sampling frequency of the audio signaldata type of signal and its durationusing the commands given belowprint('\nsignal shape:'audio_signal shapeprint('signal datatype:'audio_signal dtypeprint('signal duration:'round(audio_signal shape[ float(frequency_sampling) )'seconds'in this stepwe need to normalize the signalas shown in the following commandaudio_signal audio_signal np power( this step involves extracting the length and half length of the signal use the following commands for this purposelength_signal len(audio_signalhalf_length np ceil((length_signal astype(np intnowwe need to apply mathematics tools for transforming into frequency domain here we are using the fourier transform signal_frequency np fft fft(audio_signalnowdo the normalization of frequency domain signal and square itsignal_frequency abs(signal_frequency[ :half_length]length_signal signal_frequency ** nextextract the length and half length of the frequency transformed signallen_fts len(signal_frequencynote that the fourier transformed signal must be adjusted for even as well as odd case if length_signal signal_frequency[ :len_fts* elsesignal_frequency[ :len_fts- * nowextract the power in decibal(db)signal_power np log (signal_frequencyadjust the frequency in khz for -axis |
18,676 | x_axis np arange( len_half (frequency_sampling length_signal nowvisualize the characterization of signal as followsplt figure(plt plot(x_axissignal_powercolor='black'plt xlabel('frequency (khz)'plt ylabel('signal power (db)'plt show(you can observe the output graph of the above code as shown in the image belowgenerating monotone audio signal the two steps that you have seen till now are important to learn about signals nowthis step will be useful if you want to generate the audio signal with some predefined parameters note that this step will save the audio signal in an output file example in the following examplewe are going to generate monotone signalusing pythonwhich will be stored in file for thisyou will have to take the following stepsimport the necessary packages as shownimport numpy as np |
18,677 | import matplotlib pyplot as plt from scipy io wavfile import write provide the file where the output file should be savedoutput_file 'audio_signal_generated wavnowspecify the parameters of your choiceas shownduration in seconds frequency_sampling in hz frequency_tone min_val - np pi max_val np pi in this stepwe can generate the audio signalas shownt np linspace(min_valmax_valduration frequency_samplingaudio_signal np sin( np pi tone_freq tnowsave the audio file in the output filewrite(output_filefrequency_samplingsignal_scaledextract the first values for our graphas shownaudio_signal audio_signal[: time_axis np arange( len(signal) float(sampling_freqnowvisualize the generated audio signal as followsplt plot(time_axissignalcolor='blue'plt xlabel('time in milliseconds'plt ylabel('amplitude'plt title('generated audio signal'plt show( |
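The listing above mixes several variable names (frequency_tone vs. tone_freq, frequency_sampling vs. sampling_freq, and signal_scaled is used without being defined), so it will not run as printed. A hedged, consistent version of the same tone generator is given here; the chosen duration, sampling rate and pitch are illustrative.

import numpy as np
from scipy.io.wavfile import write

output_file = 'audio_signal_generated.wav'
duration = 4                 # seconds
frequency_sampling = 44100   # samples per second
frequency_tone = 784         # Hz, an illustrative pitch

t = np.linspace(0, duration, duration * frequency_sampling)
audio_signal = np.sin(2 * np.pi * frequency_tone * t)

# Scale to 16-bit integers before writing the WAV file
signal_scaled = np.int16(audio_signal / np.max(np.abs(audio_signal)) * 32767)
write(output_file, frequency_sampling, signal_scaled)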
18,678 | you can observe the plot as shown in the figure given herefeature extraction from speech this is the most important step in building speech recognizer because after converting the speech signal into the frequency domainwe must convert it into the usable form of feature vector we can use different feature extraction techniques like mfccplpplprasta etc for this purpose example in the following examplewe are going to extract the features from signalstep-by-stepusing pythonby using mfcc technique import the necessary packagesas shown hereimport numpy as np import matplotlib pyplot as plt from scipy io import wavfile from python_speech_features import mfcclogfbank |
18,679 | nowread the stored audio file it will return two valuesthe sampling frequency and the audio signal provide the path of the audio file where it is stored frequency_samplingaudio_signal wavfile read("/users/admin/audio_file wav"note that here we are taking first samples for analysis audio_signal audio_signal[: use the mfcc techniques and execute the following command to extract the mfcc featuresfeatures_mfcc mfcc(audio_signalfrequency_samplingnowprint the mfcc parametersas shownprint('\nmfcc:\nnumber of windows ='features_mfcc shape[ ]print('length of each feature ='features_mfcc shape[ ]nowplot and visualize the mfcc features using the commands given belowfeatures_mfcc features_mfcc plt matshow(features_mfccplt title('mfcc'in this stepwe work with the filter bank features as shownextract the filter bank featuresfilterbank_features logfbank(audio_signalfrequency_samplingnowprint the filterbank parameters print('\nfilter bank:\nnumber of windows ='filterbank_features shape[ ]print('length of each feature ='filterbank_features shape[ ]nowplot and visualize the filterbank features filterbank_features filterbank_features plt matshow(filterbank_featuresplt title('filter bank'plt show( |
18,680 | as result of the steps aboveyou can observe the following outputsfigure for mfcc and figure for filter bank recognition of spoken words speech recognition means that when humans are speakinga machine understands it here we are using google speech api in python to make it happen we need to install the following packages for thispyaudioit can be installed by using pip install pyaudio command |
18,681 | speechrecognitionthis package can be installed by using pip install speechrecognition google-speech-apiit can be installed by using the command pip install google-api-python-client example observe the following example to understand about recognition of spoken wordsimport the necessary packages as shownimport speech_recognition as sr create an object as shown belowrecording sr recognizer(nowthe microphone(module will take the voice as inputwith sr microphone(as sourcerecording adjust_for_ambient_noise(sourceprint("please say something:"audio recording listen(sourcenow google api would recognize the voice and gives the output tryprint("you said\nrecording recognize_google(audio)except exception as eprint( |
18,682 | You can see the following output. First the prompt appears: "Please say something:" and then "You said:" followed by the recognized text. For example, if you said tutorialspoint.com, then the system recognizes it correctly and prints: tutorialspoint.com
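If a microphone is not available, the same SpeechRecognition package can transcribe a stored WAV file instead; a hedged sketch follows (the file path is illustrative, and an internet connection is needed for the Google API call).

import speech_recognition as sr

recording = sr.Recognizer()
with sr.AudioFile('audio_file.wav') as source:   # illustrative path to a WAV file
    audio = recording.record(source)             # read the entire file into an AudioData object

try:
    print("You said:\n", recording.recognize_google(audio))
except sr.UnknownValueError:
    print("Google Speech Recognition could not understand the audio")
except sr.RequestError as e:
    print("Could not request results; {0}".format(e))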
18,683 | heuristic search plays key role in artificial intelligence in this you will learn in detail about it concept of heuristic search in ai heuristic is rule of thumb which leads us to the probable solution most problems in artificial intelligence are of exponential nature and have many possible solutions you do not know exactly which solutions are correct and checking all the solutions would be very expensive thusthe use of heuristic narrows down the search for solution and eliminates the wrong options the method of using heuristic to lead the search in search space is called heuristic search heuristic techniques are very useful because the search can be boosted when you use them difference between uninformed and informed search there are two types of control strategies or search techniquesuninformed and informed they are explained in detail as given hereuninformed search it is also called blind search or blind control strategy it is named so because there is information only about the problem definitionand no other extra information is available about the states this kind of search techniques would search the whole state space for getting the solution breadth first search (bfsand depth first search (dfsare the examples of uninformed search informed search it is also called heuristic search or heuristic control strategy it is named so because there is some extra information about the states this extra information is useful to compute the preference among the child nodes to explore and expand there would be heuristic function associated with each node best first search (bfs) *mean and analysis are the examples of informed search constraint satisfaction problems (cspsconstraint means restriction or limitation in aiconstraint satisfaction problems are the problems which must be solved under some constraints the focus must be on not to violate the constraint while solving such problems finallywhen we reach the final solutioncsp must obey the restriction |
18,684 | real world problem solved by constraint satisfaction the previous sections dealt with creating constraint satisfaction problems nowlet us apply this to real world problems too some examples of real world problems solved by constraint satisfaction are as followssolving algebraic relation with the help of constraint satisfaction problemwe can solve algebraic relations in this examplewe will try to solve simple algebraic relation it will return the value of and within the range that we would define after completing this python programyou would be able to understand the basics of solving problems with constraint satisfaction note that before writing the programwe need to install python package called pythonconstraint you can install it with the help of the following commandpip install python-constraint the following steps show you python program for solving algebraic relation using constraint satisfactionimport the constraint package using the following commandfrom constraint import nowcreate an object of module named problem(as shown belowproblem problem(nowdefine variables note that here we have two variables and band we are defining as their rangewhich means we got the solution within first numbers problem addvariable(' 'range( )problem addvariable(' 'range( )nextdefine the particular constraint that we want to apply on this problem observe that here we are using the constraint * problem addconstraint(lambda aba = |
18,685 | nowcreate the object of getsolution(module using the following commandsolutions problem getsolutions(lastlyprint the output using the following commandprint (solutionsyou can observe the output of the above program as follows[{' ' ' ' }{' ' ' ' }{' ' ' ' }{' ' ' ' }{' ' ' ' }magic square magic square is an arrangement of distinct numbersgenerally integersin square gridwhere the numbers in each row and in each column and the numbers in the diagonalall add up to the same number called the "magic constantthe following is stepwise execution of simple python code for generating magic squaresdefine function named magic_squareas shown belowdef magic_square(matrix_ms)isize len(matrix_ms[ ]sum_list [the following code shows the code for vertical of squaresfor col in range(isize)sum_list append(sum(row[colfor row in matrix_ms)the following code shows the code for horizantal of squaressum_list extend([sum (linesfor lines in matrix_ms]the following code shows the code for horizontal of squaresdlresult for in range( ,isize)dlresult +=matrix_ms[ ][isum_list append(dlresultdrresult for in range(isize- ,- ,- )drresult +=matrix_ms[ ][isum_list append(drresult |
18,686 | if len(set(sum_list))> return false return true nowgive the value of the matrix and check the outputprint(magic_square([[ , , ][ , , ][ , , ]])you can observe that the output would be false as the sum is not up to the same number print(magic_square([[ , , ][ , , ][ , , ]])you can observe that the output would be true as the sum is the same numberthat is here |
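Because the indices and the final comparison in the listing above are partly lost in extraction, here is a consolidated, hedged version of the same magic-square check. The anti-diagonal is indexed as matrix_ms[i][iSize - 1 - i], which is what the check requires, and the Lo Shu square used in the second test is my own illustrative example.

def magic_square(matrix_ms):
    iSize = len(matrix_ms[0])
    sum_list = []

    # Vertical (column) sums
    for col in range(iSize):
        sum_list.append(sum(row[col] for row in matrix_ms))

    # Horizontal (row) sums
    sum_list.extend([sum(lines) for lines in matrix_ms])

    # Main diagonal and anti-diagonal sums
    sum_list.append(sum(matrix_ms[i][i] for i in range(iSize)))
    sum_list.append(sum(matrix_ms[i][iSize - 1 - i] for i in range(iSize)))

    # A magic square has every row, column and diagonal adding up to the same constant
    return len(set(sum_list)) == 1

print(magic_square([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # False
print(magic_square([[4, 9, 2], [3, 5, 7], [8, 1, 6]]))   # True (the Lo Shu magic square)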
18,687 | ai with python games are played with strategy every player or team would make strategy before starting the game and they have to change or build new strategy according to the current situation(sin the game search algorithms you will have to consider computer games also with the same strategy as above note that search algorithms are the ones that figure out the strategy in computer games how it works the goal of search algorithms is to find the optimal set of moves so that they can reach at the final destination and win these algorithms use the winning set of conditionsdifferent for every gameto find the best moves visualize computer game as the tree we know that tree has nodes starting from the rootwe can come to the final winning nodebut with optimal moves that is the work of search algorithms every node in such tree represents future state the search algorithms search through this tree to make decisions at each step or node of the game combinational search the major disadvantage of using search algorithms is that they are exhaustive in naturewhich is why they explore the entire search space to find the solution that leads to wastage of resources it would be more cumbersome if these algorithms need to search the whole search space for finding the final solution to eliminate such kind of problemwe can use combinational search which uses the heuristic to explore the search space and reduces its size by eliminating the possible wrong moves hencesuch algorithms can save the resources some of the algorithms that use heuristic to search the space and save the resources are discussed hereminimax algorithm it is the strategy used by combinational search that uses heuristic to speed up the search strategy the concept of minimax strategy can be understood with the example of two player gamesin which each player tries to predict the next move of the opponent and tries to minimize that function alsoin order to winthe player always try to maximize its own function based on the current situation heuristic plays an important role in such kind of strategies like minimax every node of the tree would have heuristic function associated with it based on that heuristicit will take the decision to make move towards the node that would benefit them the most |
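As a compact illustration of the minimax idea described above, here is a hedged sketch on a tiny hand-built game tree; the leaves are scores for the maximizing player, and the tree and its values are illustrative only.

def minimax(node, maximizing):
    # Leaves are numeric scores; internal nodes are lists of child nodes.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two moves for the maximizer, each answered by two replies from the minimizer
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))   # 3: the minimizer would answer with 3 or 2, so the maximizer picks the branch worth 3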
18,688 | alpha-beta pruning major issue with minimax algorithm is that it can explore those parts of the tree that are irrelevantleads to the wastage of resources hence there must be strategy to decide which part of the tree is relevant and which is irrelevant and leave the irrelevant part unexplored alpha-beta pruning is one such kind of strategy the main goal of alpha-beta pruning algorithm is to avoid the searching those parts of the tree that do not have any solution the main concept of alpha-beta pruning is to use two bounds named alphathe maximum lower boundand betathe minimum upper bound these two parameters are the values that restrict the set of possible solutions it compares the value of the current node with the value of alpha and beta parametersso that it can move to the part of the tree that has the solution and discard the rest negamax algorithm this algorithm is not different from minimax algorithmbut it has more elegant implementation the main disadvantage of using minimax algorithm is that we need to define two different heuristic functions the connection between these heuristic is thatthe better state of game is for one playerthe worse it is for the other player in negamax algorithmthe same work of two heuristic functions is done with the help of single heuristic function building bots to play games for building bots to play two player games in aiwe need to install the easyai library it is an artificial intelligence framework that provides all the functionality to build two-player games you can download it with the help of the following commandpip install easyai bot to play last coin standing in this gamethere would be pile of coins each player has to take number of coins from that pile the goal of the game is to avoid taking the last coin in the pile we will be using the class lastcoinstanding inherited from the twoplayersgame class of the easyai library the following code shows the python code for this gameimport the required packages as shownfrom easyai import twoplayersgameid_solvehuman_playerai_player from easyai ai import tt nowinherit the class from the twoplayergame class to handle all operations of the gameclass lastcoin_game(twoplayersgame)def __init__(selfplayers)nowdefine the players and the player who is going to start the game |
18,689 | self players players self nplayer nowdefine the number of coins in the gamehere we are using coins for the game self num_coins define the maximum number of coins player can take in move self max_coins now there are some certain things to define as shown in the following code define possible moves def possible_moves(self)return [str(afor in range( self max_coins )define the removal of the coins def make_move(selfmove)self num_coins -int(movedefine who took the last coin def win_game(self)return self num_coins < define when to stop the gamethat is when somebody wins def is_over(self)return self win(define how to compute the score def score(self)return if self win_game(else define number of coins remaining in the pile def show(self)print(self num_coins'coins left in the pile'if __name__ ="__main__"tt tt( |
18,690 | lastcoin_game ttentry lambda selfself num_coins solving the game with the following code blockrdm id_solve(lastcoin_gamerange( )win_score= tt=ttprint(rdmdeciding who will start the game game lastcoin_game([ai_player(tt)human_player()]game play(you can find the following output and simple play of this gamed: : : : : : : : : : : : : : : coins left in the pile move # player plays coins left in the pile player what do you play move # player plays coins left in the pile move # player plays coins left in the pile player what do you play move # player plays coins left in the pile move # player plays coins left in the pile player what do you play move # player plays coins left in the pile |
18,691 | bot to play tic tac toe tic-tac-toe is very familiar and one of the most popular games let us create this game by using the easyai library in python the following code is the python code of this gameimport the packages as shownfrom easyai import twoplayersgameai_playernegamax from easyai player import human_player inherit the class from the twoplayergame class to handle all operations of the gameclass tictactoe_game(twoplayersgame)def __init__(selfplayers)nowdefine the players and the player who is going to start the gameself players players self nplayer define the type of boardself board [ now there are some certain things to define as followsdefine possible movesdef possible_moves(self)return [ for xy in enumerate(self boardif = define the move of playerdef make_move(selfmove)self board[int(move self nplayer to boost aidefine when player makes movedef umake_move(selfmove)self board[int(move define the lose condition that an opponent have three in line def condition_for_lose(self)possible_combinations [[ , , ][ , , ][ , , ][ , , ][ , , ][ , , ][ , , ][ , , ] |
18,692 | return any([all([(self board[ - =self nopponentfor in combination]for combination in possible_combinations]define check for the finish of game def is_over(self)return (self possible_moves(=[]or self condition_for_lose(show the current position of the players in the game def show(self)print('\ '+'\njoin([join([['' '' '][self board[ * ]for in range( )]for in range( )])compute the scores def scoring(self)return - if self condition_for_lose(else define the main method to define the algorithm and start the gameif __name__ ="__main__"algo negamax( tictactoe_game([human_player()ai_player(algo)]play(you can see the following output and simple play of this gameplayer what do you play move # player plays move # player plays |
18,693 | player what do you play move # player plays move # player plays player what do you play move # player plays move # player plays |
18,694 | ai with python neural networks neural networks are parallel computing devices that are an attempt to make computer model of brain the main objective behind is to develop system to perform various computational task faster than the traditional systems these tasks include pattern recognition and classificationapproximationoptimization and data clustering what is artificial neural networks (annartificial neural network (annis an efficient computing system whose central theme is borrowed from the analogy of biological neural networks anns are also named as artificial neural systemsparallel distributed processing systemsand connectionist systems ann acquires large collection of units that are interconnected in some pattern to allow communications between them these unitsalso referred to as nodes or neuronsare simple processors which operate in parallel every neuron is connected with other neuron through connection link each connection link is associated with weight having the information about the input signal this is the most useful information for neurons to solve particular problem because the weight usually excites or inhibits the signal that is being communicated each neuron is having its internal state which is called activation signal output signalswhich are produced after combining input signals and activation rulemay be sent to other units if you want to study neural networks in detail then you can follow the linkinstalling useful packages for creating neural networks in pythonwe can use powerful package for neural networks called neurolab it is library of basic neural networks algorithms with flexible network configurations and learning algorithms for python you can install this package with the help of the following command on command promptpip install neurolab if you are using the anaconda environmentthen use the following command to install neurolabconda install - labfabulous neurolab building neural networks in this sectionlet us build some neural networks in python by using the neurolab package |
18,695 | perceptron based classifier perceptrons are the building blocks of ann if you want to know more about perceptronyou can follow the linkrvised_learning htm following is stepwise execution of the python code for building simple neural network perceptron based classifierimport the necessary packages as shownimport matplotlib pyplot as plt import neurolab as nl enter the input values note that it is an example of supervised learninghence you will have to provide target values too input [[ ][ ][ ][ ]target [[ ][ ][ ][ ]create the network with inputs and neuronnet nl net newp([[ ],[ ]] nowtrain the network herewe are using delta rule for training error_progress net train(inputtargetepochs= show= lr= nowvisualize the output and plot the graphplt figure(plt plot(error_progressplt xlabel('number of epochs'plt ylabel('training error'plt grid(plt show( |
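After training, the perceptron built above can be queried on new points with neurolab's sim() method; a small hedged usage example continuing the code above (the query points are illustrative and this assumes the net and input ranges defined earlier).

# Simulate the trained perceptron on the training points plus one new point
test_points = [[0, 0], [0, 1], [1, 0], [1, 1], [0.9, 0.8]]
print(net.sim(test_points))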
18,696 | you can see the following graph showing the training progress using the error metricsingle layer neural networks in this examplewe are creating single layer neural network that consists of independent neurons acting on input data to produce the output note that we are using the text file named neural_simple txt as our input import the useful packages as shownimport numpy as np import matplotlib pyplot as plt import neurolab as nl load the dataset as followsinput_data np loadtxt("/users/admin/neural_simple txt'the following is the data we are going to use note that in this datafirst two columns are the features and last two columns are the labels array([[ ][ ][ ][ ][ ][ ][ ][ ][ ][ ][ ] |
18,697 | [ ][ ][ ][ ][ ]]nowseparate these four columns into data columns and labelsdata input_data[: : labels input_data[: :plot the input data using the following commandsplt figure(plt scatter(data[:, ]data[:, ]plt xlabel('dimension 'plt ylabel('dimension 'plt title('input data'nowdefine the minimum and maximum values for each dimension as shown heredim _mindim _max data[:, min()data[:, max(dim _mindim _max data[:, min()data[:, max(nextdefine the number of neurons in the output layer as followsnn_output_layer labels shape[ nowdefine single-layer neural networkdim [dim _mindim _maxdim [dim _mindim _maxneural_net nl net newp([dim dim ]nn_output_layertrain the neural network with number of epochs and learning rate as shownerror neural_net train(datalabelsepochs= show= lr= nowvisualize and plot the training progress using the following commandsplt figure(plt plot(errorplt xlabel('number of epochs' |
18,698 | plt ylabel('training error'plt title('training error progress'plt grid(plt show(nowuse the test data-points in above classifierprint('\ntest results:'data_test [[ ][ ][ ],[ ]for item in data_testprint(item'-->'neural_net sim([item])[ ]you can find the test results as shown here[ --[ [ --[ [ --[ [ --[ |
18,699 | you can see the following graphs as the output of the code discussed till nowmulti-layer neural networks in this examplewe are creating multi-layer neural network that consists of more than one layer to extract the underlying patterns in the training data this multilayer neural network will work like regressor we are going to generate some data points based on the equationy import the necessary packages as shownimport numpy as np import matplotlib pyplot as plt import neurolab as nl generate some data point based on the above mentioned equationmin_val - max_val num_points np linspace(min_valmax_valnum_pointsy np square( /np linalg norm(ynowreshape this data set as follows |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.