Reasoning Under Uncertainty

Figure: the sprinkler belief network (nodes: Season, Sprinkler, Rained, Grass wet, Grass shiny, Shoes wet).

f_ta = Prob(Tamper, [], [ , ])
f_fi = Prob(Fire, [], [ , ])
f_sm = Prob(Smoke, [Fire], [[ , ], [ , ]])
f_al = Prob(Alarm, [Fire, Tamper], [[[ , ], [ , ]], [[ , ], [ , ]]])
f_lv = Prob(Leaving, [Alarm], [[ , ], [ , ]])
f_re = Prob(Report, [Leaving], [[ , ], [ , ]])

bn_report = BeliefNetwork("Report-of-leaving",
                          {Tamper, Fire, Smoke, Alarm, Leaving, Report},
                          {f_ta, f_fi, f_sm, f_al, f_lv, f_re})

Sprinkler Example

The third belief network is the sprinkler example from Pearl. The output of bn_sprinkler.show() is shown in the figure above.

probGraphicalModels.py - (continued)

Season = Variable("Season", ["summer", "winter"], position=( , ))
Sprinkler = Variable("Sprinkler", ["on", "off"], position=( , ))
Rained = Variable("Rained", boolean, position=( , ))
Grass_wet = Variable("Grass wet", boolean, position=( , ))
Grass_shiny = Variable("Grass shiny", boolean, position=( , ))
Shoes_wet = Variable("Shoes wet", boolean, position=( , ))
18,901 | f_season prob(season,[],{'summer': 'winter': }f_sprinkler prob(sprinkler,[season],{'summer':{'on': ,'off': }'winter':{'on': ,'off': }}f_rained prob(rained,[season],{'summer':[ , ]'winter'[ , ]}f_wet prob(grass_wet,[sprinkler,rained]{'on'[[ , ],[ , ]]'off':[[ , ],[ , ]]}f_shiny prob(grass_shiny[grass_wet][[ , ][ , ]]f_shoes prob(shoes_wet[grass_wet][[ , ][ , ]] bn_sprinkler beliefnetwork("pearl' sprinkler example"{seasonsprinklerrainedgrass_wetgrass_shinyshoes_wet}{f_seasonf_sprinklerf_rainedf_wetf_shinyf_shoes} bn_sprinkler_soff beliefnetwork("pearl' sprinkler example (do(sprinkler=off))"{seasonsprinklerrainedgrass_wetgrass_shinyshoes_wet}{f_seasonf_rainedf_wetf_shinyf_shoesprob(sprinkler,[],{'on': ,'off': })}bipartite diagnostic model with noisy-or the belief network bn_no is bipartite diagnostic modelwith independent diseasesand the symtoms depend on the diseaseswhere the cpds are defined using noisy-or bipartite means it is in two partsthe diseases are only connected to the symptoms and the symptoms are only connected to the diseases the output of bn_no show(is shown in figure of this document probgraphicalmodels py -(continued cough variable("cough"boolean( , )fever variable("fever"boolean( , )sneeze variable("sneeze"boolean( , )cold variable("cold",boolean( , )flu variable("flu",boolean( , )covid variable("covid",boolean( , ) p_cold_no prob(cold,[],[ , ]p_flu_no prob(flu,[],[ , ]p_covid_no prob(covid,[],[ , ] p_cough_no noisyor(cough[cold,flu,covid][ ]p_fever_no noisyor(feverflu,covid][ ]p_sneeze_no noisyor(sneeze[cold,flu ][ ] bn_no beliefnetwork("bipartite diagnostic network (noisy-or)"version june |
          {Cough, Fever, Sneeze, Cold, Flu, Covid},
          {p_cold_no, p_flu_no, p_covid_no, p_cough_no, p_fever_no, p_sneeze_no})

Figure: the bipartite diagnostic network (noisy-or), with diseases Cold, Flu and Covid connected to symptoms Cough, Fever and Sneeze.

To see the conditional probability of a noisy-or, do

print(p_cough_no.to_table())

# Example from the box "Noisy-or compared to logistic regression":
# x = Variable(" ", boolean)
# w =
# print(NoisyOR(x, [ , , , ], [ , 1-( )/( - ), 1-( )/( - ), 1-( )/( - ), 1-( )/( - )]).to_table(given={ : True}))

Bipartite Diagnostic Model with Logistic Regression

The belief network bn_lr is a bipartite diagnostic model with independent diseases, where the symptoms depend on the diseases and the CPDs are defined using logistic regression. It has the same graphical structure as the previous example (see the figure above), and it has (approximately) the same conditional probabilities as the previous example when zero or one diseases are present. Note the value of sigmoid(- ) used in the weights below.

probGraphicalModels.py - (continued)
18,903 | p_cold_lr prob(cold,[],[ , ]p_flu_lr prob(flu,[],[ , ]p_covid_lr prob(covid,[],[ , ] p_cough_lr logisticregression(cough[cold,flu,covid][- ]p_fever_lr logisticregression(feverflu,covid][- ]p_sneeze_lr logisticregression(sneeze[cold,flu ][- ] bn_lr beliefnetwork("bipartite diagnostic network logistic regression"{coughfeversneezecoldflucovid}{p_cold_lrp_flu_lrp_covid_lrp_cough_lrp_fever_lrp_sneeze_lr} to see the conditional probability of noisy-or do#print(p_cough_lr to_table() example from box "noisy-or compared to logistic regressionfrom learnlinear import sigmoidlogit =logit( variable(" ",booleanprint(logisticregression( ,[ , , , ],[ logit( )- logit( )- logit( )- logit( )- ]to_table(given={ :true})try to predict what would happen (and then testif we had =logit( inference methods each of the inference methods implements the query method that computes the posterior probability of variable given dictionary of {variable valueobservations the methods are displayable because they implement the display method which is currently text-based probgraphicalmodels py -(continued from display import displayable class inferencemethod(displayable)"""the abstract class of graphical model inference methods""method_name "unnamedeach method should have method name def __init__(self,gm=none)self gm gm def query(selfqvarobs={})"""returns {value:probdictionary for the query variable""version june |
        raise NotImplementedError("InferenceMethod.query")   # abstract method

We use bn_4ch as the test case, in particular the query P( | =True). This needs an error threshold, particularly for the approximate methods, where the default threshold is much too accurate.

probGraphicalModels.py - (continued)

    def testIM(self, threshold= ):
        solver = self(bn_4ch)
        res = solver.query( , { : True})
        correct_answer =
        assert correct_answer-threshold < res[True] < correct_answer+threshold, \
            f"value {res[True]} not in desired range for {self.method_name}"
        print(f"Unit test passed for {self.method_name}")

Recursive Conditioning

An instance of an RC object takes in a graphical model. The query method uses recursive conditioning to compute the probability of a query variable given observations on other variables.

probRC.py - recursive conditioning for graphical models

import math
from probGraphicalModels import GraphicalModel, InferenceMethod
from probFactors import Factor
from utilities import dict_union

class ProbSearch(InferenceMethod):
    """The class that queries graphical models using recursive conditioning.
    gm is a graphical model to query
    """
    method_name = "recursive conditioning"

    def __init__(self, gm=None):
        InferenceMethod.__init__(self, gm)
        # self.max_display_level =

    def query(self, qvar, obs={}, split_order=None):
        """computes P(qvar | obs) where
        qvar is the query variable
        obs is a variable:value dictionary
        split_order is a list of the non-observed non-query variables in gm
        """
        if qvar in obs:
            return {val: (1 if val == obs[qvar] else 0)
                    for val in qvar.domain}
        else:
            if split_order == None:
                split_order = [v for v in self.gm.variables
                               if (v not in obs) and v != qvar]
            unnorm = [self.prob_search(dict_union({qvar: val}, obs),
                                       self.gm.factors, split_order)
                      for val in qvar.domain]
            p_obs = sum(unnorm)
            return {val: pr/p_obs for val, pr in zip(qvar.domain, unnorm)}

The following is the naive search-based algorithm. It is exponential in the number of variables, so it is not very useful. However, it is simple, and useful to understand before looking at the more complicated algorithm used in the subclass.

probRC.py - (continued)

    def prob_search(self, context, factors, split_order):
        """simple search algorithm
        context is a variable:value dictionary
        factors is a set of factors
        split_order is a list of variables in factors not assigned in context
        returns the sum, over variable assignments to the variables in split_order,
        of the product of factors
        """
        self.display( , "calling prob_search,", (context, factors))
        if not factors:
            return 1
        elif to_eval := {fac for fac in factors if fac.can_evaluate(context)}:
            # evaluate factors when all variables are assigned
            self.display( , "prob_search evaluating factors", to_eval)
            val = math.prod(fac.get_value(context) for fac in to_eval)
            return val * self.prob_search(context, factors-to_eval, split_order)
        else:
            total = 0
            var = split_order[0]
            self.display( , "prob_search branching on", var)
            for val in var.domain:
                total += self.prob_search(dict_union({var: val}, context),
                                          factors, split_order[1:])
            self.display( , "prob_search branching on", var, "returning", total)
            return total

The recursive conditioning algorithm adds forgetting and caching and recognizing disconnected components. We do this by adding a cache and redefining the recursive search algorithm. It inherits the query method.

probRC.py - (continued)

class ProbRC(ProbSearch):
18,906 | reasoning under uncertainty def __init__(self,gm=none)self cache {(frozenset()frozenset()): probsearch __init__(self,gm def prob_search(selfcontextfactorssplit_order)""returns the number \sum_{split_order\prod_{factorsgiven assignments in context context is variable:value dictionary factors is set of factors split_order is list of variables in factors that are not assigned in context returns sum over variable assignments to variables in split_order of the product of factors ""self display( ,"calling rc,",(context,factors)ce (frozenset(context items())frozenset(factors)key for the cache entry if ce in self cacheself display( ,"rc cache lookup",(context,factors)return self cache[ceif not factorsno factorsneeded if you don' have forgetting and caching return elif vars_not_in_factors :{var for var in context if not any(var in fac variables for fac in factors)}forget variables not in any factor self display( ,"rc forgetting variables"vars_not_in_factorsreturn self prob_search({key:val for (key,valin context items(if key not in vars_not_in_factors}factorssplit_orderelif to_eval :{fac for fac in factors if fac can_evaluate(context)}evaluate factors when all variables are assigned self display( ,"rc evaluating factors",to_evalval math prod(fac get_value(contextfor fac in to_evalif val = return elsereturn val self prob_search(context{fac for fac in factors if fac not in to_eval}split_orderelif len(comp :connected_components(contextfactorssplit_order) there are disconnected components self display( ,"splitting into connected components",comp,"in context",contextreturn(math prod(self prob_search(context, ,eofor ( ,eoin comp)version june |
        else:
            assert split_order, "split_order should not be empty to get here"
            total = 0
            var = split_order[0]
            self.display( , "rc branching on", var)
            for val in var.domain:
                total += self.prob_search(dict_union({var: val}, context),
                                          factors, split_order[1:])
            self.cache[ce] = total
            self.display( , "rc branching on", var, "returning", total)
            return total

connected_components returns a list of connected components, where a connected component is a set of factors and a set of variables, such that the graph that connects variables and the factors that involve them is connected. The connected components are built one at a time, with a current connected component at all times. factors is partitioned into disjoint sets:

• component_factors contains factors in the current connected component where all factors that share a variable are already in the component.

• factors_to_check contains factors in the current connected component where potentially some factors that share a variable are not in the component; these need to be checked.

• other_factors contains the other factors that are not (yet) in the connected component.

probRC.py - (continued)

def connected_components(context, factors, split_order):
    """returns a list of (f, e) where f is a subset of factors and e is a subset of
    split_order such that each element shares the same variables that are
    disjoint from the other elements.
    """
    other_factors = set(factors)   # copies factors
    factors_to_check = {other_factors.pop()}   # factors in connected component still to be checked
    component_factors = set()      # factors in first connected component already checked
    component_variables = set()    # variables in first connected component

    while factors_to_check:
        next_fac = factors_to_check.pop()
        component_factors.add(next_fac)
        new_vars = set(next_fac.variables) - component_variables - context.keys()
        component_variables |= new_vars
        for var in new_vars:
18,908 | reasoning under uncertainty factors_to_check |{ for in other_factors if var in variablesother_factors -factors_to_check set difference if other_factorsreturn [(component_factors,[ for in split_order if in component_variables])connected_components(contextother_factors[ for in split_order if not in component_variables]elsereturn [(component_factorssplit_order)testingprobrc py -(continued from probgraphicalmodels import bn_ cha, , , ,f_a,f_b,f_c,f_d bn_ chv probrc(bn_ ch#bn_ chv query( ,{}#bn_ chv query( ,{}#inferencemethod max_display_level show more detail in displaying #inferencemethod max_display_level show less detail in displaying #bn_ chv query( ,{ :true},[ , ]#bn_ chv query( ,{ :true, :false} from probgraphicalmodels import bn_report,alarm,fire,leaving,report,smoke,tamper bn_reportrc probrc(bn_reportanswers queries using recursive conditioning #bn_reportrc query(tamper,{}#inferencemethod max_display_level show no detail in displaying #bn_reportrc query(leaving,{}#bn_reportrc query(tamper,{}split_order=[smoke,fire,alarm,leaving,report]#bn_reportrc query(tamper,{report:true}#bn_reportrc query(tamper,{report:true,smoke:false}#note what happens to the cache when these are called in turn#bn_reportrc query(tamper,{report:true}split_order=[smoke,fire,alarm,leaving]#bn_reportrc query(smoke,{report:true}split_order=[tamper,fire,alarm,leaving] from probgraphicalmodels import bn_sprinklerseasonsprinklerrainedgrass_wetgrass_shinyshoes_wet bn_sprinklerv probrc(bn_sprinkler#bn_sprinklerv query(shoes_wet,{}#bn_sprinklerv query(shoes_wet,{rained:true}#bn_sprinklerv query(shoes_wet,{grass_shiny:true}#bn_sprinklerv query(shoes_wet,{grass_shiny:false,rained:true} version june |
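To make the recursion concrete, here is a minimal self-contained sketch of the sum-over-products computation that prob_search performs. It is not part of the AIPython code: the factors, variable names, and numbers are made up for illustration, and factors are represented simply as functions from a context dictionary to a number.

import math

def naive_prob(context, factors, split_order):
    """sum, over assignments to the variables in split_order,
    of the product of the factors, with context fixed"""
    if not split_order:
        return math.prod(f(context) for f in factors)
    var, rest = split_order[0], split_order[1:]
    return sum(naive_prob({**context, var: val}, factors, rest)
               for val in [False, True])

# illustrative factors for P(A) and P(B | A)
f_a = lambda ctx: 0.3 if ctx["A"] else 0.7
f_b = lambda ctx: (0.9 if ctx["B"] else 0.1) if ctx["A"] else (0.2 if ctx["B"] else 0.8)

# P(B): sum out A for each value of B, then normalize
unnorm = [naive_prob({"B": val}, [f_a, f_b], ["A"]) for val in [True, False]]
print(unnorm[0] / sum(unnorm))    # P(B=True), approximately 0.41

ProbSearch.query does essentially this; ProbRC adds caching, forgetting of irrelevant variables, and recognition of disconnected components so that the same recursion scales to larger networks.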
18,909 | from probgraphicalmodels import bn_no bn_lr coughfeversneezecoldflucovid bn_no probrc(bn_no bn_lr probrc(bn_lr #bn_no query(flu{fever: sneeze: }#bn_lr query(flu{fever: sneeze: }#bn_lr query(cough,{}#bn_lr query(cold,{cough: ,sneeze: ,fever: }#bn_lr query(flu,{cough: ,sneeze: ,fever: }#bn_lr query(covid,{cough: ,sneeze: ,fever: }#bn_lr query(covid,{cough: ,sneeze: ,fever: ,flu: }#bn_lr query(covid,{cough: ,sneeze: ,fever: ,flu: } if __name__ ="__main__"inferencemethod testim(probrc variable elimination an instance of ve object takes in graphical model the query method uses variable elimination to compute the probability of variable given observations on some other variables probve py -variable elimination for graphical models from probfactors import factorfactorobservedfactorsumfactor_times from probgraphicalmodels import graphicalmodelinferencemethod class ve(inferencemethod)"""the class that queries graphical models using variable elimination gm is graphical model to query ""method_name "variable elimination def __init__(self,gm=none)inferencemethod __init__(selfgm def query(self,var,obs={},elim_order=none)"""computes (var|obswhere var is variable obs is {variable:valuedictionary""if var in obsreturn {var: if val =obs[varelse for val in var domainelseif elim_order =noneelim_order self gm variables projfactors [self project_observations(fact,obsfor fact in self gm factorsfor in elim_orderif !var and not in obsversion june |
18,910 | reasoning under uncertainty projfactors self eliminate_var(projfactors,vunnorm factor_times(var,projfactorsp_obs=sum(unnormself display( ,"unnormalized probs:",unnorm,"prob obs:",p_obsreturn {val:pr/p_obs for val,pr in zip(var domainunnorm) factorobserved is factor that is the result of some observations on another factor we don' store the values in listwe just look them up as needed the observations can include variables that are not in the listbut should have some intersection with the variables in the factor probfactors py -(continued class factorobserved(factor)def __init__(self,factor,obs)factor __init__(self[ for in factor variables if not in obs]self observed obs self orig_factor factor def get_value(self,assignment)ass assignment copy(for ob in self observedass[ob]=self observed[obreturn self orig_factor get_value(assa factorsum is factor that is the result of summing out variable from the product of other factors it constructs representation off var factors we store the values in list in lazy mannerif they are already computedwe used the stored values if they are not already computed we can compute and store them probfactors py -(continued class factorsum(factor)def __init__(self,var,factors)self var_summed_out var self factors factors vars [for fac in factorsfor in fac variablesif is not var and not in varsvars append(vfactor __init__(self,varsself values { def get_value(self,assignment)"""lazy implementationif not savedcompute it return saved value""asst frozenset(assignment items()version june |
18,911 | if asst in self valuesreturn self values[asstelsetotal new_asst assignment copy(for val in self var_summed_out domainnew_asst[self var_summed_outval total +math prod(fac get_value(new_asstfor fac in self factorsself values[assttotal return total the method factor times multiples set of factors that are all factors on the same variable (or on no variablesthis is the last step in variable elimination before normalizing it returns an array giving the product for each value of variable probfactors py -(continued def factor_times(variablefactors)"""when factors are factors just on variable (or on no variables)""prods [facs [ for in factors if variable in variablesfor val in variable domainast {variable:valprods append(math prod( get_value(astfor in facs)return prods to project observations onto factorfor each variable that is observed in the factorwe construct new factor that is the factor projected onto that variable factor observed creates new factor that is the result is assigning value to single variable probve py -(continued def project_observations(self,factor,obs)"""returns the resulting factor after observing obs obs is dictionary of {variable:valuepairs ""if any((var in obsfor var in factor variables) variable in factor is observed return factorobserved(factor,obselsereturn factor def eliminate_var(self,factors,var)"""eliminate variable var from list of factors returns new set of factors that has var summed out ""self display( ,"eliminating ",str(var)contains_var [not_contains_var [for fac in factorsversion june |
18,912 | reasoning under uncertainty if var in fac variablescontains_var append(facelsenot_contains_var append(facif contains_var =[]return factors elsenewfactor factorsum(var,contains_varself display( ,"multiplying:",[str(ffor in contains_var]self display( ,"creating factor:"newfactorself display( newfactor to_table()factor in detail not_contains_var append(newfactorreturn not_contains_var from probgraphicalmodels import bn_ cha, , , bn_ chv ve(bn_ ch#bn_ chv query( ,{}#bn_ chv query( ,{}#inferencemethod max_display_level show more detail in displaying #inferencemethod max_display_level show less detail in displaying #bn_ chv query( ,{ :true}#bn_ chv query( ,{ :true, :false} from probgraphicalmodels import bn_report,alarm,fire,leaving,report,smoke,tamper bn_reportv ve(bn_reportanswers queries using variable elimination #bn_reportv query(tamper,{}#inferencemethod max_display_level show no detail in displaying #bn_reportv query(leaving,{}#bn_reportv query(tamper,{},elim_order=[smoke,report,leaving,alarm,fire]#bn_reportv query(tamper,{report:true}#bn_reportv query(tamper,{report:true,smoke:false} from probgraphicalmodels import bn_sprinklerseasonsprinklerrainedgrass_wetgrass_shinyshoes_wet bn_sprinklerv ve(bn_sprinkler#bn_sprinklerv query(shoes_wet,{}#bn_sprinklerv query(shoes_wet,{rained:true}#bn_sprinklerv query(shoes_wet,{grass_shiny:true}#bn_sprinklerv query(shoes_wet,{grass_shiny:false,rained:true} from probgraphicalmodels import bn_lr coughfeversneezecoldflucovid vediag ve(bn_lr #vediag query(cough,{}#vediag query(cold,{cough: ,sneeze: ,fever: }#vediag query(flu,{cough: ,sneeze: ,fever: }#vediag query(covid,{cough: ,sneeze: ,fever: }#vediag query(covid,{cough: ,sneeze: ,fever: ,flu: }#vediag query(covid,{cough: ,sneeze: ,fever: ,flu: }version june |
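Before moving on, the two factor operations that variable elimination relies on can also be illustrated with a small self-contained sketch. This is not the FactorSum or factor_times implementation above: the factors and numbers are made up, and a factor is represented eagerly as a dictionary from assignment tuples to values.

from itertools import product

def multiply(f1, vars1, f2, vars2):
    """pointwise product of two factors given as {assignment-tuple: value} dicts"""
    allvars = list(dict.fromkeys(vars1 + vars2))
    result = {}
    for asst in product([False, True], repeat=len(allvars)):
        ctx = dict(zip(allvars, asst))
        result[asst] = (f1[tuple(ctx[v] for v in vars1)]
                        * f2[tuple(ctx[v] for v in vars2)])
    return result, allvars

def sum_out(f, vars, var):
    """sum the variable var out of factor f"""
    keep = [v for v in vars if v != var]
    result = {}
    for asst, val in f.items():
        ctx = dict(zip(vars, asst))
        key = tuple(ctx[v] for v in keep)
        result[key] = result.get(key, 0) + val
    return result, keep

# P(A) and P(B | A) with made-up numbers; eliminating A leaves a factor on B
f_a = {(True,): 0.3, (False,): 0.7}
f_ba = {(True, True): 0.9, (True, False): 0.1,
        (False, True): 0.2, (False, False): 0.8}    # keyed by (A, B)
prod_f, prod_vars = multiply(f_a, ["A"], f_ba, ["A", "B"])
p_b, _ = sum_out(prod_f, prod_vars, "A")
print(p_b)    # approximately {(True,): 0.41, (False,): 0.59}

This is, in effect, what eliminate_var does via FactorSum, except that FactorSum computes and stores its values lazily rather than materializing the whole product.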
if __name__ == "__main__":
    InferenceMethod.testIM(VE)

Stochastic Simulation

Sampling from a Discrete Distribution

The method sample_one generates a single sample from a (possibly unnormalized) distribution. dist is a {value: weight} dictionary, where weight >= 0. This returns a value with probability in proportion to its weight.

probStochSim.py - probabilistic inference using stochastic simulation

import random
from probGraphicalModels import InferenceMethod

def sample_one(dist):
    """returns the index of a single sample from normalized distribution dist."""
    rand = random.random()*sum(dist.values())
    cum = 0     # cumulative weights
    for v in dist:
        cum += dist[v]
        if cum > rand:
            return v

If we want to generate multiple samples, repeatedly calling sample_one may not be efficient. If we want to generate n samples, and the distribution is over m values, sample_one takes time O(mn). If n and m are of the same order of magnitude, we can do better.

The method sample_multiple generates multiple samples from a distribution defined by dist, where dist is a {value: weight} dictionary in which weight >= 0 and the weights cannot all be zero. This returns a list of values, of length num_samples, where each sample is selected with probability proportional to its weight. The method generates all of the random numbers, sorts them, and then goes through the distribution once, saving the selected samples.

probStochSim.py - (continued)

def sample_multiple(dist, num_samples):
    """returns a list of num_samples values selected using distribution dist.
    dist is a {value: weight} dictionary that does not need to be normalized
    """
    total = sum(dist.values())
    rands = sorted(random.random()*total for i in range(num_samples))
    result = []
    dist_items = list(dist.items())
    cum = dist_items[0][1]    # cumulative sum
    index = 0
    for r in rands:
        while r > cum:
            index += 1
            cum += dist_items[index][1]
        result.append(dist_items[index][0])
    return result

Exercise: What is the time and space complexity of the following methods to generate n samples, where m is the length of dist:
(a) n calls to sample_one
(b) sample_multiple
(c) Create the cumulative distribution (choose how this is represented) and, for each random number, do a binary search to determine the sample associated with the random number.
(d) Choose a random number in the range [i/n, (i+1)/n) for each i in range(n), where n is the number of samples. Use these as the random numbers to select the particles. (Does this give random samples?)
For each method, suggest when it might be the best method.

The test_sampling method can be used to generate the statistics from a number of samples. It is useful to see the variability as a function of the number of samples. Try it for a few samples and also for many samples.

probStochSim.py - (continued)

def test_sampling(dist, num_samples):
    """Given a distribution, dist, draw num_samples samples
    and return the resulting counts
    """
    result = {v: 0 for v in dist}
    for v in sample_multiple(dist, num_samples):
        result[v] += 1
    return result

# Try the following queries a number of times each:
# test_sampling({ : , : , : , : },  )
# test_sampling({ : , : , : , : },  )

Sampling Methods for Belief Network Inference

A SamplingInferenceMethod is an InferenceMethod, but the query method also takes arguments for the number of samples and the sample-order (which is an ordering of factors). The first methods assume a belief network (and not an undirected graphical model).
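Before the class-based implementations, here is a minimal self-contained sketch of forward (ancestral) sampling from a two-variable belief network. It is not part of the AIPython code: the network and probabilities are made up for illustration, and sample_once mirrors the selection rule of sample_one above.

import random

def sample_once(dist):
    """return a value sampled in proportion to its weight in dist"""
    rand = random.random() * sum(dist.values())
    cum = 0
    for val, weight in dist.items():
        cum += weight
        if cum > rand:
            return val

def forward_sample():
    """sample (A, B) by sampling each variable given its already-sampled parents"""
    a = sample_once({True: 0.3, False: 0.7})             # P(A)
    b = sample_once({True: 0.9, False: 0.1} if a         # P(B | A=True)
                    else {True: 0.2, False: 0.8})        # P(B | A=False)
    return a, b

samples = [forward_sample() for _ in range(10000)]
print(sum(b for (_, b) in samples) / len(samples))       # estimates P(B=True), about 0.41

Rejection sampling, likelihood weighting, and particle filtering below all generate samples this way, in an order where parents come before their children; they differ in how the observations are taken into account.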
18,915 | probstochsim py -(continued class samplinginferencemethod(inferencemethod)"""the abstract class of sampling-based belief network inference methods"" def __init__(self,gm=none)inferencemethod __init__(selfgm def query(self,qvar,obs={},number_samples= ,sample_order=none)raise notimplementederror("samplinginferencemethod query"abstract rejection sampling probstochsim py -(continued class rejectionsampling(samplinginferencemethod)"""the class that queries graphical models using rejection sampling gm is belief network to query ""method_name "rejection sampling def __init__(selfgm=none)samplinginferencemethod __init__(selfgm def query(selfqvarobs={}number_samples= sample_order=none)"""computes (qvar obswhere qvar is variable obs is {variable:valuedictionary sample_order is list of variables where the parents come before the variable ""if sample_order is nonesample_order self gm topological_sort(self display( ,*sample_order,sep="\ "counts {val: for val in qvar domainfor in range(number_samples)rejected false sample {for nvar in sample_orderfac self gm var cpt[nvar#factor with nvar as child val sample_one({ :fac get_value({**samplenvar: }for in nvar domain}self display( ,val,end="\ "if nvar in obs and obs[nvar!valrejected true self display( ,"rejected"break sample[nvarval version june |
18,916 | reasoning under uncertainty if not rejectedcounts[sample[qvar]+ self display( ,"accepted"tot sum(counts values()as well as the distribution we also include raw counts dist { : /tot if tot> else /len(qvar domainfor ( ,vin counts items()dist["raw_counts"counts return dist likelihood weighting likelihood weighting includes weight for each sample instead of rejecting samples based on observationslikelihood weighting changes the weights of the sample in proportion with the probability of the observation the weight then becomes the probability that the variable would have been rejected probstochsim py -(continued class likelihoodweighting(samplinginferencemethod)"""the class that queries graphical models using likelihood weighting gm is belief network to query ""method_name "likelihood weighting def __init__(selfgm=none)samplinginferencemethod __init__(selfgm def query(self,qvar,obs={},number_samples= ,sample_order=none)"""computes (qvar obswhere qvar is variable obs is {variable:valuedictionary sample_order is list of factors where factors defining the parents come before the factors for the child ""if sample_order is nonesample_order self gm topological_sort(self display( ,*[ for in sample_order if not in obs],sep="\ "counts {val: for val in qvar domainfor in range(number_samples)sample {weight for nvar in sample_orderfac self gm var cpt[nvarif nvar in obssample[nvarobs[nvarweight *fac get_value(sampleelseval sample_one({ :fac get_value({**sample,nvar: }for in nvar domain}version june |
18,917 | self display( ,val,end="\ "sample[nvarval counts[sample[qvar]+weight self display( ,weighttot sum(counts values()as well as the distribution we also include the raw counts dist { : /tot for ( ,vin counts items()dist["raw_counts"counts return dist exercise change this algorithm so that it does importance sampling using proposal distribution it needs sample one using different distribution and then update the weight of the current sample for testinguse proposal distribution that only specifies probabilities for some of the variables (and the algorithm uses the probabilities for the network in other casesparticle filtering in this implementationa particle is {variable valuedictionary because adding new value to dictionary involves side effectthe dictionaries need to be copied during resampling probstochsim py -(continued class particlefiltering(samplinginferencemethod)"""the class that queries graphical models using particle filtering gm is belief network to query ""method_name "particle filtering def __init__(selfgm=none)samplinginferencemethod __init__(selfgm def query(selfqvarobs={}number_samples= sample_order=none)"""computes (qvar obswhere qvar is variable obs is {variable:valuedictionary sample_order is list of factors where factors defining the parents come before the factors for the child ""if sample_order is nonesample_order self gm topological_sort(self display( ,*[ for in sample_order if not in obs],sep="\ "particles [{for in range(number_samples)for nvar in sample_orderfac self gm var cpt[nvarif nvar in obsweights [fac get_value({**partnvar:obs[nvar]}for part in particlesversion june |
18,918 | reasoning under uncertainty particles [{**pnvar:obs[nvar]for in resample(particlesweightsnumber_samples)elsefor part in particlespart[nvarsample_one({ :fac get_value({**partnvar: }for in nvar domain}self display( ,part[nvar],end="\ "counts {val: for val in qvar domainfor part in particlescounts[part[qvar]+ tot sum(counts values()as well as the distribution we also include the raw counts dist { : /tot for ( ,vin counts items()dist["raw_counts"counts return dist resampling resample is based on sample multiple but works with an array of particles (asidepython doesn' let us use sample multiple directly as it uses dictionaryand particlesrepresented as dictionaries can' be the key of dictionariesprobstochsim py -(continued def resample(particlesweightsnum_samples)"""returns num_samples copies of particles resampled according to weights particles is list of particles weights is list of positive numbersof same length as particles num_samples is integer ""total sum(weightsrands sorted(random random()*total for in range(num_samples)result [cum weights[ cumulative sum index for in randswhile >cumindex + cum +weights[indexresult append(particles[index]return result examples probstochsim py -(continued from probgraphicalmodels import bn_ cha, , , bn_ chr rejectionsampling(bn_ chbn_ chl likelihoodweighting(bn_ chversion june |
18,919 | #inferencemethod max_display_level detailed tracing for all inference methods #bn_ chr query( ,{}#bn_ chr query( ,{}#bn_ chr query( ,{ :true}#bn_ chr query( ,{ :true, :false} from probgraphicalmodels import bn_report,alarm,fire,leaving,report,smoke,tamper bn_reportr rejectionsampling(bn_reportanswers queries using rejection sampling bn_reportl likelihoodweighting(bn_reportanswers queries using likelihood weighting bn_reportp particlefiltering(bn_reportanswers queries using particle filtering #bn_reportr query(tamper,{}#bn_reportr query(tamper,{}#bn_reportr query(tamper,{report:true}#inferencemethod max_display_level no detailed tracing for all inference methods #bn_reportr query(tamper,{report:true},number_samples= #bn_reportr query(tamper,{report:true,smoke:false}#bn_reportr query(tamper,{report:true,smoke:false},number_samples= #bn_reportl query(tamper,{report:true,smoke:false},number_samples= #bn_reportl query(tamper,{report:true,smoke:false},number_samples= from probgraphicalmodels import bn_sprinkler,seasonsprinkler from probgraphicalmodels import rainedgrass_wetgrass_shinyshoes_wet bn_sprinklerr rejectionsampling(bn_sprinkleranswers queries using rejection sampling bn_sprinklerl likelihoodweighting(bn_sprinkleranswers queries using rejection sampling bn_sprinklerp particlefiltering(bn_sprinkleranswers queries using particle filtering #bn_sprinklerr query(shoes_wet,{grass_shiny:true,rained:true}#bn_sprinklerl query(shoes_wet,{grass_shiny:true,rained:true}#bn_sprinklerp query(shoes_wet,{grass_shiny:true,rained:true} if __name__ ="__main__"inferencemethod testim(rejectionsamplingthreshold= inferencemethod testim(likelihoodweightingthreshold= inferencemethod testim(particlefilteringthreshold= exercise this code keeps regenerating the distribution of variable given its parents implement one or both of the followingand compare them to the original make cond dist return slice that corresponds to the distributionand then use the slice instead of the dictionary ( list slice does not generate new data structuresmake cond dist remember values it has already computedand only return these version june |
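The resample function above (like sample_multiple) walks a cumulative sum of the weights. One possible variant, shown here only as a sketch with made-up particles and weights rather than as part of the AIPython code, uses the standard library's bisect to do the same selection with a binary search per sample, in the spirit of the earlier exercise on sampling methods.

import random
from bisect import bisect_left
from itertools import accumulate

def resample_bisect(particles, weights, num_samples):
    """return num_samples particles, each chosen with probability
    proportional to its weight (weights need not be normalized)"""
    cum = list(accumulate(weights))      # cumulative weights
    total = cum[-1]
    return [particles[bisect_left(cum, random.random() * total)]
            for _ in range(num_samples)]

print(resample_bisect(["p0", "p1", "p2"], [1, 6, 3], 10))

Unlike resample, this does not sort the random numbers, so the returned copies are not grouped by particle; for particle filtering either behaviour is fine, since the particles are exchangeable.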
18,920 | reasoning under uncertainty gibbs sampling the following implements gibbs samplinga form of markov chain monte carlo mcmc probstochsim py -(continued #import random #from probgraphicalmodels import inferencemethod #from probstochsim import sample_onesamplinginferencemethod class gibbssampling(samplinginferencemethod)"""the class that queries graphical models using gibbs sampling bn is graphical model ( belief networkto query ""method_name "gibbs sampling def __init__(selfgm=none)samplinginferencemethod __init__(selfgmself gm gm def query(selfqvarobs={}number_samples= burn_in= sample_order=none)"""computes (qvar obswhere qvar is variable obs is {variable:valuedictionary sample_order is list of non-observed variables in orderor if sample_order nonethe variables are shuffled at each iteration ""counts {val: for val in qvar domainif sample_order is not nonevariables sample_order elsevariables [ for in self gm variables if not in obsvar_to_factors { :set(for in self gm variablesfor fac in self gm factorsfor var in fac variablesvar_to_factors[varadd(facsample {var:random choice(var domainfor var in variablesself display( ,"sample:",samplesample update(obsfor in range(burn_in number_samples)if sample_order =nonerandom shuffle(variablesfor var in variablesget unnormalized probability distribution of var given its neighbours vardist {val: for val in var domainfor val in var domainsample[varval version june |
                    for fac in var_to_factors[var]:   # Markov blanket
                        vardist[val] *= fac.get_value(sample)
                sample[var] = sample_one(vardist)
            if i >= burn_in:
                counts[sample[qvar]] += 1
        tot = sum(counts.values())
        # as well as the computed distribution, we also include raw counts
        dist = {c: v/tot for (c, v) in counts.items()}
        dist["raw_counts"] = counts
        return dist

# from probGraphicalModels import bn_4ch, A, B, C, D
bn_4chg = GibbsSampling(bn_4ch)
# InferenceMethod.max_display_level =    # detailed tracing for all inference methods
bn_4chg.query( , {})
# bn_4chg.query( , {})
# bn_4chg.query( , { : True})
# bn_4chg.query( , { : True,  : False})

from probGraphicalModels import bn_report, Alarm, Fire, Leaving, Report, Smoke, Tamper
bn_reportg = GibbsSampling(bn_report)
# bn_reportg.query(Tamper, {Report: True}, number_samples= )

if __name__ == "__main__":
    InferenceMethod.testIM(GibbsSampling, threshold= )

Exercise: Change the code so that it can have multiple query variables. Make the list of query variables be an input to the algorithm, so that the default value is the list of all non-observed variables.

Exercise: In this algorithm, explain where it computes the probability of a variable given its Markov blanket. Instead of returning the average of the samples for the query variable, it is possible to return the average estimate of the probability of the query variable given its Markov blanket. Does this converge to the same answer as the given code? Does it converge faster, slower, or the same?

Plotting Behaviour of Stochastic Simulators

The stochastic simulation runs can give different answers each time they are run. For the algorithms that give the same answer in the limit as the number of samples approaches infinity (as do all of these algorithms), the algorithms can be compared by comparing the accuracy over multiple runs. Summary statistics like the variance may provide some information, but the assumptions behind the variance being appropriate (namely that the distribution is approximately Gaussian) may not hold for cases where the predictions are bounded and often skewed.
18,922 | reasoning under uncertainty it is more appropriate to plot the distribution of predictions over multiple runs the plot stats method plots the prediction of particular variable (or for the partition functionfor number of runs of the same algorithm on the xaxisis the prediction of the algorithm on the -axis is the number of runs with prediction less than or equal to the value thus this is like cumulative distribution over the predictionsbut with counts on the -axis note that for runs where there are no samples that are consistent with the observations (as can happen with rejection sampling)the prediction of probability is (as convention for / that variable what contains the query variableor what is "prob ev"the probability of evidence probstochsim py -(continued import matplotlib pyplot as plt def plot_stats(methodqvarqvalobsnumber_runs= **queryargs)"""plots cumulative distribution of the prediction of the model method is inferencemethod (that implements appropriate query)plots (qvar=qval obsqvar is the query variableqval is corresponding value obs is the {variable:valuedictionary representing the observations number_iterations is the number of runs that are plotted **queryargs is the arguments to query (often number_samples for sampling methods""plt ion(plt xlabel("value"plt ylabel("cumulative number"method max_display_levelprev_mdl method max_display_level #no display answers [method query(qvar,obs,**queryargsfor in range(number_runs)values [ans[qvalfor ans in answerslabel "{method method_namep({qvar}={qval}|{',join( '{var}={val}for (var,valin obs items())})values sort(plt plot(values,range(number_runs),label=labelplt legend(#loc="upper left"plt draw(method max_display_level prev_mdl restore display level tryplot_stats(bn_reportr,tamper,true,{report:true,smoke:true},number_samples= number_runs= plot_stats(bn_reportl,tamper,true,{report:true,smoke:true},number_samples= number_runs= version june |
18,923 | plot_stats(bn_reportp,tamper,true,{report:true,smoke:true},number_samples= number_runs= plot_stats(bn_reportr,tamper,true,{report:true,smoke:true},number_samples= number_runs= plot_stats(bn_reportl,tamper,true,{report:true,smoke:true},number_samples= number_runs= plot_stats(bn_reportg,tamper,true,{report:true,smoke:true},number_samples= number_runs= def plot_mult(methodsexampleqvarqvalobsnumber_samples= number_runs= )for method in methodssolver method(exampleif isinstance(method,samplinginferencemethod)plot_stats(solverqvarqvalobsnumber_samplesnumber_runselseplot_stats(solverqvarqvalobsnumber_runs from probrc import probrc try following (but it takes while methods [probrc,rejectionsampling,likelihoodweighting,particlefiltering,gibbssampling#plot_mult(methods,bn_report,tamper,true,{report:true,smoke:false},number_samples= number_runs= plot_mult(methods,bn_report,tamper,true,{report:false,smoke:true},number_samples= number_runs= sprinkler exampleplot_stats(bn_sprinklerr,shoes_wet,true,{grass_shiny:true,rained:true},number_samples= plot_stats(bn_sprinklerl,shoes_wet,true,{grass_shiny:true,rained:true},number_samples= hidden markov models this code for hidden markov models is independent of the graphical models codeto keep it simple section gives code that models hidden markov modelsand more generallydynamic belief networksusing the graphical models code this hmm code assumes there are multiple boolean observation variables that depend on the current state and are independent of each other given the state probhmm py -hidden markov model version june |
import random
from probStochSim import sample_one, sample_multiple

class HMM(object):
    def __init__(self, states, obsvars, pobs, trans, indist):
        """A hidden Markov model.
        states - set of states
        obsvars - set of observation variables
        pobs - probability of observations; pobs[i][s] is P(Obs_i=True | State=s)
        trans - transition probability; trans[i][j] gives P(State=j | State=i)
        indist - initial distribution; indist[s] is P(State_0=s)
        """
        self.states = states
        self.obsvars = obsvars
        self.pobs = pobs
        self.trans = trans
        self.indist = indist

Consider the following example. Suppose you want to unobtrusively keep track of an animal in a triangular enclosure using sound. Suppose you have 3 microphones that provide unreliable (noisy) binary information at each time step. The animal is either close to one of the points of the triangle or in the middle of the triangle.

probHMM.py - (continued)

# state:  = middle;  ,  ,   are the corners
states = {'middle', ' ', ' ', ' '}   # states
obs = {' ', ' ', ' '}                # microphones

The observation model is as follows. If the animal is in a corner, it will be detected by the microphone at that corner with probability closeMic, and will be independently detected by each of the other microphones with probability farMic. If the animal is in the middle, it will be detected by each microphone with probability midMic.

probHMM.py - (continued)

# pobs gives the observation model
# pobs[mi][state] is P(mi=on | state)
closeMic =
farMic =
midMic =
pobs = {' ': {'middle': midMic, ' ': closeMic, ' ': farMic, ' ': farMic},   # mic
        ' ': {'middle': midMic, ' ': farMic, ' ': closeMic, ' ': farMic},   # mic
        ' ': {'middle': midMic, ' ': farMic, ' ': farMic, ' ': closeMic}}   # mic

The transition model is as follows. If the animal is in a corner, it stays in the same corner with probability sc, goes to the middle with probability mcm,
18,925 | or goes to one of the other corners with probability each if it is in the middleit stays in the middle with probability otherwise it moves to one the cornerseach with probability probhmm py -(continued trans specifies the dynamics trans[iis the distribution over states resulting from state trans[ ][jgives ( = =ism= mmc= transition probabilities when in middle sc= mcm= mcc= transition probabilities when in corner trans {'middle':{'middle':sm' ':mmc' ':mmc' ':mmc}was in middle ' ':{'middle':mcm' ':sc' ':mcc' ':mcc}was in corner ' ':{'middle':mcm' ':mcc' ':sc' ':mcc}was in corner ' ':{'middle':mcm' ':mcc' ':mcc' ':sc}was in corner initially the animal is in one of the four stateswith equal probability probhmm py -(continued initially we have uniform distribution over the animal' state indist {st: /len(states for st in states hmm hmm(states obs pobs trans indist exact filtering for hmms hmmvefilter has current state distribution which can be updated by observing or by advancing to the next time probhmm py -(continued from display import displayable class hmmvefilter(displayable)def __init__(self,hmm)self hmm hmm self state_dist hmm indist def filter(selfobsseq)"""updates and returns the state distribution following the sequence of observations in obsseq using variable elimination note that it first advances time this is what is required if it is called sequentially if that is not what is wanted initiallydo an observe first ""for obs in obsseqversion june |
18,926 | reasoning under uncertainty self advance(advance time self observe(obsobserve return self state_dist def observe(selfobs)"""updates state conditioned on observations obs is list of values for each observation variable""for in self hmm obsvarsself state_dist {st:self state_dist[st]*(self hmm pobs[ ][stif obs[ielse ( -self hmm pobs[ ][st])for st in self hmm statesnorm sum(self state_dist values()normalizing constant self state_dist {st:self state_dist[st]/norm for st in self hmm statesself display( ,"after observing",obs,"state distribution:",self state_dist def advance(self)"""advance to the next time""nextstate {st: for st in self hmm statesdistribution over next states for in self hmm statesj ranges over next states for in self hmm statesi ranges over previous states nextstate[ +self hmm trans[ ][ ]*self state_dist[iself state_dist nextstate self display( ,"after advancing state distribution:",self state_distthe following are some queries for hmm probhmm py -(continued hmm hmmvefilter(hmm hmm filter([{' ': ' ': ' ': }{' ': ' ': ' ': }]#hmmvefilter max_display_level show more detail in displaying hmm hmmvefilter(hmm hmm filter([{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }]hmm hmmvefilter(hmm hmm filter([{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }] how do the following differ in the resulting state distributionnote they start the samebut have different initial observations #hmmvefilter max_display_level show less detail in displaying for in range( )hmm advance(version june |
18,927 | hmm state_dist for in range( )hmm advance(hmm state_dist exercise the representation assumes that there are list of boolean observations extend the representation so that the each observation variable can have multiple discrete values you need to choose representation for the modeland change the algorithm localization the localization example in the book is controlled hmmwhere there is given action at each time and the transition depends on the action in this classthe transition is set to none initiallyand needs to be provided with an action to determine the transition probability problocalization py -controlled hmm and localization example from probhmm import hmmvefilterhmm from display import displayable import matplotlib pyplot as plt from matplotlib widgets import buttoncheckbuttons class hmm_controlled(hmm)""" controlled hmmwhere the transition probability depends on the action instead of the transition probabilityit has function act trans from action to transition probability any algorithms need to select the transition probability according to the action ""def __init__(selfstatesobsvarspobsact transindist)self act trans act trans hmm __init__(selfstatesobsvarspobsnoneindist local_states list(range( )door_positions { , , , def prob_door(loc)return if loc in door_positions else local_obs {'door':[prob_door(ifor in range( )]act trans {'right'[[ if next =current else if next =(current+ )% else if next =(current+ )% else for next in range( )for current in range( )]'left'[[ if next =current else if next =(current- )% else if next =(current- )% else for next in range( )for current in range( )]version june |
18,928 | reasoning under uncertainty hmm_ pos hmm_controlled(local_states{'door'}local_obsact trans[ / for in range( )]to change the ve localization code to allow for controlled hmmsnotice that the action selects which transition probability to us problocalization py -(continued class hmm_local(hmmvefilter)"""ve filter for controlled hmms ""def __init__(selfhmm)hmmvefilter __init__(selfhmm def go(selfaction)self hmm trans self hmm act trans[actionself advance( loc_filt hmm_local(hmm_ posloc_filt observe({'door':true})loc_filt go("right")loc_filt observe({'door':false})loc_filt go("right")loc_filt observe({'door':true}loc_filt state_dist the following lets us interactively move the agent and provide observations it shows the distribution over locations problocalization py -(continued class show_localization(displayable)def __init__(self,hmm)self hmm hmm self loc_filt hmm_local(hmmfig,(self axplt subplots(plt subplots_adjust(bottom= left_butt button(plt axes([ , , , ])"left"left_butt on_clicked(self leftright_butt button(plt axes([ , , , ])"right"right_butt on_clicked(self rightdoor_butt button(plt axes([ , , , ])"door"door_butt on_clicked(self doornodoor_butt button(plt axes([ , , , ])"no door"nodoor_butt on_clicked(self nodoorreset_butt button(plt axes([ , , , ])"reset"reset_butt on_clicked(self reset#this makes sure -axis goes to graph overwritten in draw_dist self draw_dist(plt show( def draw_dist(self)self ax clear(plt ylim( , version june |
18,929 | self ax set_ylabel("probability"self ax set_xlabel("location"self ax set_title("location probability distribution"self ax set_xticks(self hmm statesvals [self loc_filt state_dist[ifor in self hmm statesself bars self ax bar(self hmm statesvalscolor='black'self ax bar_label(self bars,["{ }format( =vfor in vals]padding plt draw( def left(self,event)self loc_filt go("left"self draw_dist(def right(self,event)self loc_filt go("right"self draw_dist(def door(self,event)self loc_filt observe({'door':true}self draw_dist(def nodoor(self,event)self loc_filt observe({'door':false}self draw_dist(def reset(self,event)self loc_filt state_dist { : / for in range( )self draw_dist( sl show_localization(hmm_ posparticle filtering for hmms in this implementation particle is just state if you want to do some form of smoothinga particle should probably be history of states this maintainsparticlesan array of statesweights an array of (non-negativereal numberssuch that weights[iis the weight of particles[iprobhmm py -(continued from display import displayable from probstochsim import resample class hmmparticlefilter(displayable)def __init__(self,hmm,number_particles= )self hmm hmm self particles [sample_one(hmm indistfor in range(number_particles)self weights [ for in range(number_particles) def filter(selfobsseq)"""returns the state distribution following the sequence of observations in obsseq using particle filtering version june |
18,930 | reasoning under uncertainty note that it first advances time this is what is required if it is called after previous filtering if that is not what is wanted initiallydo an observe first ""for obs in obsseqself advance(advance time self observe(obsobserve self resample_particles(self display( ,"after observing"str(obs)"state distribution:"self histogram(self particles)self display( ,"final state distribution:"self histogram(self particles)return self histogram(self particles def advance(self)"""advance to the next time this assumes that all of the weights are ""self particles [sample_one(self hmm trans[st]for st in self particles def observe(selfobs)"""reweighs the particles to incorporate observations obs""for in range(len(self particles))for obv in obsif obs[obv]self weights[ *self hmm pobs[obv][self particles[ ]elseself weights[ * -self hmm pobs[obv][self particles[ ] def histogram(selfparticles)"""returns list of the probability of each state as represented by the particles""tot= hist {st for st in self hmm statesfor (st,wtin zip(self particles,self weights)hist[st]+=wt tot +wt return {st:hist[st]/tot for st in hist def resample_particles(self)"""resamples to give new set of particles ""self particles resample(self particlesself weightslen(self particles)self weights [ len(self particlesthe following are some queries for hmm probhmm py -(continuedversion june |
18,931 | hmm pf hmmparticlefilter(hmm hmmparticlefilter max_display_level show each step hmm pf filter([{' ': ' ': ' ': }{' ': ' ': ' ': }]hmm pf hmmparticlefilter(hmm hmm pf filter([{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }]hmm pf hmmparticlefilter(hmm hmm pf filter([{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }{' ': ' ': ' ': }]exercise form of importance sampling can be obtained by not resampling is it better or worse than particle filteringhintyou need to think about how they can be compared is the comparison different if there are more states than particlesexercise extend the particle filtering code to continuous variables and observations in particularsuppose the state transition is linear function with gaussian noise of the previous stateand the observations are linear functions with gaussian noise of the state you may need to research how to sample from gaussian distribution generating examples the following code is useful for generating examples probhmm py -(continued def simulate(hmm,horizon)"""returns pair of (state sequenceobservation sequenceof length horizon for each time tthe agent is in state_sequence[tand observes observation_sequence[ ""state sample_one(hmm indistobsseq=[stateseq=[for time in range(horizon)stateseq append(statenewobs {obs:sample_one({ : -hmm pobs[obs][state], :hmm pobs[obs][state]}for obs in hmm obsvarsobsseq append(newobsstate sample_one(hmm trans[state]return stateseq,obsseq def simobs(hmm,stateseq)version june |
    """returns an observation sequence for the state sequence"""
    obsseq = []
    for state in stateseq:
        newobs = {obs: sample_one({ : 1-hmm.pobs[obs][state],  : hmm.pobs[obs][state]})
                  for obs in hmm.obsvars}
        obsseq.append(newobs)
    return obsseq

def create_eg(hmm, n):
    """Create an annotated example for horizon n"""
    seq, obs = simulate(hmm, n)
    print("True state sequence:", seq)
    print("Sequence of observations:\n", obs)
    hmmfilter = HMMVEfilter(hmm)
    dist = hmmfilter.filter(obs)
    print("Resulting distribution over states:\n", dist)

Dynamic Belief Networks

A dynamic belief network (DBN) is a belief network that extends in time. There are a number of ways that reasoning can be carried out in a DBN, including:

• Rolling out the DBN for some time period, and using standard belief network inference. The latest time that needs to be in the rolled-out network is the time of the latest observation or the time of a query (whichever is later). This allows us to observe any variables at any time and query any variables at any time. This is covered below in the section on unrolling DBNs.

• An unrolled belief network may be very large, and we might only be interested in asking about "now". In this case we can just represent the variables "now". In this approach we can observe and query the current variables. We can then move on to the next time. This does not allow for arbitrary historical queries (about the past or the future), but can be much simpler. This is covered below in the section on DBN filtering.

Representing Dynamic Belief Networks

To specify a DBN, think about the distribution now. "Now" will be represented as time 1. Each variable will have a corresponding previous variable; these will be created together.

A dynamic belief network consists of:

• A set of features. A variable is a feature-time pair.

• An initial distribution over the features "now" (time 1). This is a belief network with all variables being time 1 variables.

• A specification of the dynamics. We define how the variables now (time 1) depend on variables now and at the previous time (time 0), in such a way that the graph is acyclic.

probDBN.py - dynamic belief networks

from probVariables import Variable
from probGraphicalModels import GraphicalModel, BeliefNetwork
from probFactors import Prob, Factor, CPD
from probVE import VE
from display import Displayable
from utilities import dict_union

class DBNvariable(Variable):
    """A random variable that incorporates the stage (time).
    A variable can have both a name and an index. The index defaults to  .
    """
    def __init__(self, name, domain=[False, True], index= ):
        Variable.__init__(self, f"{name} {index}", domain)
        self.basename = name
        self.domain = domain
        self.index = index
        self.previous = None

    def __lt__(self, other):
        if self.name != other.name:
            return self.name < other.name
        else:
            return self.index < other.index

    def __gt__(self, other):
        return other < self

def variable_pair(name, domain=[False, True]):
    """returns a variable and its predecessor. This is used to define 2-stage DBNs.

    If the name is X, it returns the pair of variables X_prev, X_now.
    """
    var_now = DBNvariable(name, domain, index='now')
    var_prev = DBNvariable(name, domain, index='prev')
    var_now.previous = var_prev
    return var_prev, var_now

FactorRename is a factor that is the result of renaming the variables in a factor. It takes a factor, fac, and a {new: old} dictionary, where new is the name of a variable in the resulting factor and old is the corresponding name in fac. This assumes that all variables are renamed.
18,934 | reasoning under uncertainty probdbn py -(continued class factorrename(factor)def __init__(self,fac,renaming)""" renamed factor fac is factor renaming is dictionary of the form {new:oldwhere old and new var variableswhere the variables in fac appear exactly once in the renaming ""factor __init__(self,[ for ( ,oin renaming items(if in fac variables]self orig_fac fac self renaming renaming def get_value(self,assignment)return self orig_fac get_value({self renaming[var]:val for (var,valin assignment items(if var in self variables}the following class renames the variables of conditional probability distribution it is used for template models ( dynamic decision networks or relational modelsprobdbn py -(continued class cpdrename(factorrenamecpd)def __init__(selfcpdrenaming)renaming_inverse {old:new for (new,oldin renaming items()cpd __init__(self,renaming_inverse[cpd child],[renaming_inverse[pfor in cpd parents]self orig_fac cpd self renaming renaming probdbn py -(continued class dbn(displayable)"""the class of stationary dynamic belief networks name is the dbn name vars_now is list of current variables (each must have previous variabletransition_factors is list of factors for ( |parentswhere is current variable and parents is list of current or previous variables init_factors is list of factors for ( |parentswhere is current variable and parents can only include current variables the graph of transition factors init factors must be acyclic ""def __init__(selftitlevars_nowtransition_factors=noneinit_factors=none)self title title self vars_now vars_now version june |
18,935 | self vars_prev [ previous for in vars_nowself transition_factors transition_factors self init_factors init_factors self var_index {var_index[vis the index of variable for , in enumerate(vars_now)self var_index[ ]= here is variable dbnprobdbn py -(continued , variable_pair(" "domain=[false,true] , variable_pair(" "domain=[false,true] , variable_pair(" "domain=[false,true] dynamics pc prob( ,[ , ],[[[ , ],[ , ]],[[ , ],[ , ]]]pb prob( ,[ , ],[[[ , ],[ , ]],[[ , ],[ , ]]]pa prob( ,[ , ],[[[ , ],[ , ]],[[ , ],[ , ]]] initial distribution pa prob( ,[],[ , ]pb prob( ,[ ],[[ , ],[ , ]]pc prob( ,[],[ , ] dbn dbn("simple dbn",[ , , ],[pa,pb,pc],[pa ,pb ,pc ]here is the animal example probdbn py -(continued from probhmm import closemicfarmicmidmicsmmmcscmcmmcc pos_ ,pos_ variable_pair("position",domain=[ , , , ]mic ,mic variable_pair("mic "mic ,mic variable_pair("mic "mic ,mic variable_pair("mic " conditional probabilities see hmm for the values of sm,mmcetc ppos prob(pos_ [pos_ ][[smmmcmmcmmc]#was in middle [mcmscmccmcc]#was in corner [mcmmccscmcc]#was in corner [mcmmccmccsc]]#was in corner pm prob(mic [pos_ ][[ -midmicmidmic][ -closemicclosemic][ -farmicfarmic][ -farmicfarmic]]pm prob(mic [pos_ ][[ -midmicmidmic][ -farmicfarmic][ -closemicclosemic][ -farmicfarmic]]pm prob(mic [pos_ ][[ -midmicmidmic][ -farmicfarmic][ -farmicfarmic][ -closemicclosemic]]ipos prob(pos_ ,[][ ]dbn_an =dbn("animal dbn",[pos_ ,mic ,mic ,mic ][ppospm pm pm ][ipospm pm pm ]version june |
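The point of the two-stage (prev/now) representation is that the same transition factors are reused at every time step. Here is a minimal self-contained sketch of that reuse for a single two-valued feature; it is not the DBN classes above, and the state names and numbers are made up for illustration.

# the same stationary transition model is applied at every step
trans = {"wet": {"wet": 0.7, "dry": 0.3},        # P(state_now | state_prev = "wet")
         "dry": {"wet": 0.2, "dry": 0.8}}        # P(state_now | state_prev = "dry")
init = {"wet": 0.1, "dry": 0.9}                  # initial distribution

def advance(dist):
    """push the distribution one step forward through the transition model"""
    return {now: sum(dist[prev] * trans[prev][now] for prev in dist)
            for now in trans}

dist = init
for _ in range(3):            # three steps of the dynamics
    dist = advance(dist)
print(dist)                   # distribution over the state three steps later

The unrolled network in the next section makes each of these time steps an explicit copy of the variables; the filter after that keeps only the current distribution, as this loop does.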
18,936 | reasoning under uncertainty unrolling dbns probdbn py -(continued class bnfromdbn(beliefnetwork)"""belief network unrolled from dynamic belief network "" def __init__(self,dbn,horizon)"""dbn is the dynamic belief network being unrolled horizon> is the number of steps (so there will be horizon+ variables for each dbn variable ""self name var {var basename[dbnvariable(var basename,var domain,indexfor index in range(horizon+ )for var in dbn vars_nowself display( , "name var={self name var}"variables { for vs in self name var values(for in vsself display( , "variables={variables}"bnfactors {cpdrename(fac,{self name var[var basename][ ]:var for var in fac variables}for fac in dbn init_factorsbnfactors |{cpdrename(fac,dict_union({self name var[var basename][ ]:var for var in fac variables if var index=='prev'{self name var[var basename][ + ]:var for var in fac variables if var index=='now'})for fac in dbn transition_factors for in range(horizon)self display( , "bnfactors={bnfactors}"beliefnetwork __init__(selfdbn titlevariablesbnfactorshere are two examples note that we need to use bn name var[' '][ to get the variable ( at time probdbn py -(continued try #from probrc import probrc #bn bnfromdbn(dbn , construct belief network #drc probrc(bninitialize recursive conditioning # bn name var[' '][ #drc query( # ( #drc query(bn name var[' '][ ],{bn name var[' '][ ]:true,bn name var[' '][ ]:false}# ( | , dbn filtering if we only wanted to ask questions about the current statewe can save space by forgetting the history variables version june |
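Continuing the sketch above (still with made-up numbers, and not the DBNVEfilter code that follows), filtering alternates advancing the current distribution with conditioning on the current observation, and then forgets everything except that distribution.

trans = {"wet": {"wet": 0.7, "dry": 0.3},        # P(state_now | state_prev)
         "dry": {"wet": 0.2, "dry": 0.8}}
p_sensor = {"wet": 0.9, "dry": 0.2}              # P(sensor=on | state_now)

def advance(dist):
    return {s1: sum(dist[s0] * trans[s0][s1] for s0 in dist) for s1 in trans}

def observe(dist, sensor_on):
    """condition the current distribution on one observation and renormalize"""
    unnorm = {s: dist[s] * (p_sensor[s] if sensor_on else 1 - p_sensor[s])
              for s in dist}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

belief = {"wet": 0.5, "dry": 0.5}
for sensor_on in [True, True, False]:            # a short observation sequence
    belief = observe(advance(belief), sensor_on)
print(belief)    # P(current state | all observations so far); the history is not kept

DBNVEfilter below does the same rollup with general factors: make_previous renames the current factors to previous ones, the transition factors are added, and the previous variables are eliminated.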
18,937 | probdbn py -(continued class dbnvefilter(ve)def __init__(self,dbn)self dbn dbn self current_factors dbn init_factors self current_obs { def observe(selfobs)"""updates the current observations with obs obs is variable:value dictionary where variable is current variable ""assert all(self current_obs[var]==obs[varfor var in obs if var in self current_obs),"inconsistent current observationsself current_obs update(obsnote 'updateis dict method def query(self,var)"""returns the posterior probability of current variable var""return ve(graphicalmodel(self dbn title,self dbn vars_now,self current_factors)query(var,sel def advance(self)"""advance to the next time""prev_factors [self make_previous(facfor fac in self current_factorsprev_obs {var previous:val for var,val in self current_obs items()two_stage_factors prev_factors self dbn transition_factors self current_factors self elim_vars(two_stage_factors,self dbn vars_prev,prev_obsself current_obs { def make_previous(self,fac)"""creates new factor from fac where the current variables in fac are renamed to previous variables ""return factorrename(fac{var previous:var for var in fac variables} def elim_vars(self,factorsvarsobs)for var in varsif var in obsfactors [self project_observations(fac,obsfor fac in factorselsefactors self eliminate_var(factorsvarreturn factors example queriesversion june |
18,938 | reasoning under uncertainty probdbn py -(continued #df dbnvefilter(dbn #df observe({ :true})df advance()df observe({ :false}#df query( # ( | , #df advance()df query( #dfa dbnvefilter(dbn_andfa observe({mic : mic : mic : }dfa advance(dfa observe({mic : mic : mic : }dfa query(pos_ causal models causal model can answer "doquestions the following adds the querydo method to the inferencemethod classso it can be used with any inference method probdo py -probabilistic inference with the do operator from probgraphicalmodels import inferencemethodbeliefnetwork from probfactors import cpdconstantcpd def querydo(selfqvarobs={}do={})assert isinstance(self gmbeliefnetwork)"do only applies to belief networksif do=={}return self query(qvarobselsenewfacs ({ for (ch,fin self gm var cpt items(if ch not in do{constantcpd( ,cfor ( ,cin do items()}self modbn beliefnetwork(self gm title+"(mod)"self gm variablesnewfacsoldbnself gm self gmself modbn result self query(qvarobsself gm oldbn restore original return result inferencemethod querydo querydo from probrc import probrc probdo py -(continued from probgraphicalmodels import bn_sprinklerseasonsprinklerrainedgrass_wetgrass_shinyshoes_wetbn_sprinkler_soff bn_sprinklerv probrc(bn_sprinkler#bn_sprinklerv querydo(shoes_wet#bn_sprinklerv querydo(shoes_wet,obs={sprinkler:"off"}#bn_sprinklerv querydo(shoes_wet,do={sprinkler:"off"}version june |
18,939 | #probrc(bn_sprinkler_soffquery(shoes_wetshould be same as previous case #bn_sprinklerv querydo(seasonobs={sprinkler:"off"}#bn_sprinklerv querydo(seasondo={sprinkler:"off"}probdo py -(continued from probvariables import variable from probfactors import prob from probgraphicalmodels import boolean drug_prone variable("drug_prone"booleanposition=( , )takes_marijuana variable("takes_marijuana"booleanposition=( , )side_effects variable("side_effects"booleanposition=( , )takes_hard_drugs variable("takes_hard_drugs"booleanposition=( , ) p_dp prob(drug_prone[][ ]p_tm prob(takes_marijuana[drug_prone][[ ][ ]]p_be prob(side_effects[takes_marijuana][[ ][ ]]p_thd prob(takes_hard_drugs[side_effectsdrug_prone]drug_prone=false drug_prone=true [[[ ][ ]]side_effects=false [[ ][ ]]]side_effects=true drugs beliefnetwork("gateway drugs"[drug_prone,takes_marijuana,side_effects,takes_hard_drugs][p_dpp_tmp_bep_thd]drugsq probrc(drugsdrugsq querydo(takes_hard_drugsdrugsq querydo(takes_hard_drugsobs {takes_marijuanatrue}drugsq querydo(takes_hard_drugsobs {takes_marijuanafalse}drugsq querydo(takes_hard_drugsdo {takes_marijuanatrue}drugsq querydo(takes_hard_drugsdo {takes_marijuanafalse}version june |
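The following is a small usage sketch (not in the original text) that prints the observational and interventional answers side by side, using the drugsq inference object constructed above. Because querydo is attached to inferencemethod, the same calls are expected to work with other inference methods as well.

# contrast conditioning on takes_marijuana with intervening on it
for val in [True, False]:
    print("P(takes_hard_drugs | takes_marijuana =", val, "):",
          drugsq.querydo(takes_hard_drugs, obs={takes_marijuana: val}))
    print("P(takes_hard_drugs | do(takes_marijuana =", val, ")):",
          drugsq.querydo(takes_hard_drugs, do={takes_marijuana: val}))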
18,940 | Planning with Uncertainty

Decision Networks

The decision network code builds on the representation for belief networks in probgraphicalmodels. We first allow for factors that define the utility. Here the utility is a function of the variables in vars. In a utility table, the utility is defined in terms of a list that enumerates the values, in the same way as for a table factor.

decnnetworks py - representations for decision networks

from probgraphicalmodels import graphicalmodel, beliefnetwork
from probfactors import factor, cpd, tabfactor, factor_times, prob
from probvariables import variable
import matplotlib.pyplot as plt

class utility(factor):
    """A factor defining a utility"""
    pass

class utilitytable(tabfactor, utility):
    """A factor defining a utility using a table"""
    def __init__(self, vars, table, position=None):
        """creates a factor on vars from the table.
        The table is ordered according to vars.
        """
        tabfactor.__init__(self, vars, table)
        self.position = position

A decision variable is like a random variable, with a string name and a domain, which is a list of possible values. A decision variable also includes parents, a list of the variables whose values will be known when the decision is made. It also includes a position, which is only used for plotting.
18,941 | planning with uncertainty decnnetworks py -(continued class decisionvariable(variable)def __init__(selfnamedomainparentsposition=none)variable __init__(selfnamedomainpositionself parents parents self all_vars set(parents{selfa decision network is graphical model where the variables can be random variables or decision variables among the factors we assume there is one utility factor decnnetworks py -(continued class decisionnetwork(beliefnetwork)def __init__(selftitlevarsfactors)"""vars is list of variables factors is list of factors (instances of cpd and utility""graphicalmodel __init__(selftitlevarsfactorsdon' call init for beliefnetwork self var parents ({ parents for in vars if isinstance( ,decisionvariable){ child: parents for in factors if isinstance( ,cpd)}self children { :[for in self variablesfor in self var parentsfor par in self var parents[ ]self children[parappend(vself utility_factor [ for in factors if isinstance( ,utility)][ self topological_sort_saved none the split order ensures that the parents of decision node are split before the decision nodeand no other variables (if that is possibledecnnetworks py -(continued def split_order(self)so [tops self topological_sort(for in topsif isinstance( ,decisionvariable)so +[ for in parents if not in soso append(vso +[ for in tops if not in soreturn so decnnetworks py -(continued def show(self)plt ion(interactive ax plt figure(gca(ax set_axis_off(plt title(self titleversion june |
18,942 | umbrella decision network weather forecast utility umbrella figure the umbrella decision network for par in self utility_factor variablesax annotate("utility"par positionxytext=self utility_factor positionarrowprops={'arrowstyle':'<-'},bbox=dict(boxstyle="sawtooth,pad= ha='center'for var in reversed(self topological_sort())if isinstance(var,decisionvariable)bbox dict(boxstyle="square,pad= ",color="green"elsebbox dict(boxstyle="round ,pad= ,rounding_size= "if self var parents[var]for par in self var parents[var]ax annotate(var namepar positionxytext=var positionarrowprops={'arrowstyle':'<-'},bbox=bboxha='center'elsex, var position plt text( , ,var name,bbox=bbox,ha='center'example decision networks umbrella decision network here is simple "umbrelladecision network the output of umbrella_dn show(is shown in figure decnnetworks py -(continued weather variable("weather"["norain""rain"]position=( , )forecast variable("forecast"["sunny""cloudy""rainy"]position=( , )each variant uses one of the followingversion june |
18,943 | planning with uncertainty umbrella decisionvariable("umbrella"["take""leave"]{forecast}position=( , ) p_weather prob(weather[][ ]p_forecast prob(forecast[weather][[ ][ ]]umb_utility utilitytable([weatherumbrella][[ ][ ]]position=( , ) umbrella_dn decisionnetwork("umbrella decision network"{weatherforecastumbrella}{p_weatherp_forecastumb_utility}the following is variant with the umbrella decision having parentsnothing else has changed this is interesting because one of the parents is not neededif the agent knows the weatherit can ignore the forecast decnnetworks py -(continued umbrella decisionvariable("umbrella"["take""leave"]{forecastweather}position=( , )umb_utility utilitytable([weatherumbrella ][[ ][ ]]position=( , )umbrella_dn decisionnetwork("umbrella decision network (extra arc)"{weatherforecastumbrella }{p_weatherp_forecastumb_utility }fire decision network the fire decision network of figure (showing the result of fire_dn show()is represented asdecnnetworks py -(continued boolean [falsetruealarm variable("alarm"booleanposition=( , )fire variable("fire"booleanposition=( , )leaving variable("leaving"booleanposition=( , )report variable("report"booleanposition=( , )smoke variable("smoke"booleanposition=( , )tamper variable("tamper"booleanposition=( , ) see_sm variable("see_sm"booleanposition=( , chk_sm decisionvariable("chk_sm"boolean{report}position=( )call decisionvariable("call"boolean,{see_sm,chk_sm,report}position=( , ) f_ta prob(tamper,[],[ , ]f_fi prob(fire,[],[ , ]f_sm prob(smoke,[fire],[[ , ],[ , ]]f_al prob(alarm,[fire,tamper],[[[ ][ ]][[ ][ ]]]version june |
18,944 | fire decision network tamper fire alarm smoke leaving chk_sm report utility see_sm call figure fire decision network f_lv prob(leaving,[alarm],[[ ][ ]]f_re prob(report,[leaving],[[ ][ ]]f_ss prob(see_sm,[chk_sm,smoke],[[[ , ],[ , ]],[[ , ],[ , ]]] ut utilitytable([chk_sm,fire,call],[[[ ,- ],[- ,- ]],[[- ,- ],[- ,- ]]]position=( , ) fire_dn decisionnetwork("fire decision network"{tamper,fire,alarm,leaving,smoke,call,see_sm,chk_sm,report}{f_ta,f_fi,f_sm,f_al,f_lv,f_re,f_ss,ut}cheating decision network the following is the representation of the cheating decision of figure note that we keep the names of the variables short (less than charactersso that the tables look good when printed decnnetworks py -(continued grades [' ',' ',' ',' 'watched variable("watched"booleanposition=( , )caught variable("caught "booleanposition=( , )caught variable("caught "booleanposition=( , )version june |
18,945 | planning with uncertainty cheat decision watched punish caught cheat_ caught cheat_ grade_ utility grade_ fin_grd figure cheating decision network punish variable("punish"["none","suspension","recorded"]position=( , )grade_ variable("grade_ "gradesposition=( , )grade_ variable("grade_ "gradesposition=( , )fin_grd variable("fin_grd"gradesposition=( , )cheat_ decisionvariable("cheat_ "booleanset()position=( , )#no parents cheat_ decisionvariable("cheat_ "boolean{cheat_ ,caught }position=( , ) p_wa prob(watched,[],[ ]p_cc prob(caught ,[watched,cheat_ ],[[[ ][ ]][[ ][ ]]]p_cc prob(caught ,[watched,cheat_ ],[[[ ][ ]][[ ][ ]]]p_pun prob(punish,[caught ,caught ],[[[ ][ ]][[ ][ ]]]p_gr prob(grade_ ,[cheat_ ][{' ': ' ': ' ': ' ' }{' ': ' ': ' ': ' ': }]p_gr prob(grade_ ,[cheat_ ][{' ': ' ': ' ': ' ' }{' ': ' ': ' ': ' ': }]p_fg prob(fin_grd,[grade_ ,grade_ ]{' ':{' ':{' ': ' ': ' ' ' ': }' '{' ': ' ': ' ' ' ': }version june |
18,946 | ' ':{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }}' ':{' ':{' ': ' ': ' ' ' ': }' '{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }}' ':{' ':{' ': ' ': ' ' ' ': }' '{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }}' ':{' ':{' ': ' ': ' ' ' ': }' '{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }' ':{' ': ' ': ' ' ' ': }}} utc utilitytable([punish,fin_grd],{'none':{' ': ' ': ' ' ' ': }'suspension':{' ': ' ': ' ' ' ': }'recorded':{' ': ' ': ' ' ' ': }}position=( , ) cheating_dn decisionnetwork("cheating decision network"{punish,caught ,watched,fin_grd,grade_ ,grade_ ,cheat_ ,caught ,cheat_ }{p_wap_cc p_cc p_punp_gr p_gr ,p_fg,utc}chain of decisions the following example is finite-stage fully-observable markov decision process with single reward (utilityat the end it is interesting because the parents do not include all predecessors the methods we use will work without change on thiseven though the agent does not condition on all of its previous observations and actions the output of ch show(is shown in figure decnnetworks py -(continued variable(' 'booleanposition=( , ) decisionvariable(' 'boolean{ }position=( / , ) variable(' 'booleanposition=( / , ) decisionvariable(' 'boolean{ }position=( / , ) variable(' 'booleanposition=( / , ) decisionvariable(' 'boolean{ }position=( / , ) variable(' 'booleanposition=( / , ) p_s prob( [][ , ]tr [[[ ][ ]][[ ][ ]] is flip is keep value p_s prob( [ , ]trp_s prob( [ , ]trp_s prob( [ , ]trversion june |
18,947 | planning with uncertainty -chain utility figure decision network that is chain of decisions ch utilitytable([ ],[ , ]position=( / , ) ch decisionnetwork(" -chain"{ , , , , , , },{p_s ,p_s ,p_s ,p_s ,ch }#rc rc_dn(ch #rc optimize(#rc opt_policy recursive conditioning for decision networks an instance of rc_dn object takes in decision network the query method uses recursive conditioning to compute the expected utility of the optimal policy self opt_policy becomes the optimal policy decnnetworks py -(continued import math from probgraphicalmodels import graphicalmodelinferencemethod from probfactors import factor from utilities import dict_union from probrc import connected_components class rc_dn(inferencemethod)"""the class that queries graphical models using recursive conditioning version june |
18,948 | gm is graphical model to query "" def __init__(self,gm=none)self gm gm self cache {(frozenset()frozenset()): #self max_display_level def optimize(selfsplit_order=none)"""computes expected utilityand creates optimal decision functionswhere elim_order is list of the non-observed non-query variables in gm ""if split_order =nonesplit_order self gm split_order(self opt_policy {return self rc({}self gm factorssplit_orderthe following us the simplest search-based algorithm it is exponential in the number of variablesso is not very useful howeverit is simpleand useful to understand before looking at the more complicated algorithm note that the above code does not call rc you will need to change the self rc to self rc in above code to use it decnnetworks py -(continued def rc (selfcontextfactorssplit_order)"""simplest search algorithm""self display( ,"calling rc ,",(context,factors),"with so",split_orderif not factorsreturn elif to_eval :{fac for fac in factors if fac can_evaluate(context)}self display( ,"rc evaluating factors",to_evalval math prod(fac get_value(contextfor fac in to_evalreturn val self rc (contextfactors-to_evalsplit_orderelsevar split_order[ self display( "rc branching on"varif isinstance(var,decisionvariable)assert set(context<set(var parents) "cannot optimize {varin context {context}maxres -math inf for val in var domainself display( ,"in rc branching on",var,"=",valnewres self rc (dict_union({var:val},context)factorssplit_order[ :]if newres maxresmaxres newres theval val version june |
18,949 | planning with uncertainty self opt_policy[frozenset(context items())(var,thevalreturn maxres elsetotal for val in var domaintotal +self rc (dict_union({var:val},context)factorssplit_order[ :]self display( "rc branching on"var,"returning"totalreturn total we can combine the optimization for decision networks abovewith the improvements of recursive conditioning used for graphical models (section page decnnetworks py -(continued def rc(selfcontextfactorssplit_order)""returns the number \sum_{split_order\prod_{factorsgiven assignments in context context is variable:value dictionary factors is set of factors split_order is list of variables in factors that are not in context ""self display( ,"calling rc,",(context,factors)ce (frozenset(context items())frozenset(factors)key for the cache entry if ce in self cacheself display( ,"rc cache lookup",(context,factors)return self cache[ceif not factorsno factorsneeded if you don' have forgetting and caching return elif vars_not_in_factors :{var for var in context if not any(var in fac variables for fac in factors)}forget variables not in any factor self display( ,"rc forgetting variables"vars_not_in_factorsreturn self rc({key:val for (key,valin context items(if key not in vars_not_in_factors}factorssplit_orderelif to_eval :{fac for fac in factors if fac can_evaluate(context)}evaluate factors when all variables are assigned self display( ,"rc evaluating factors",to_evalval math prod(fac get_value(contextfor fac in to_evalif val = return elsereturn val self rc(context{fac for fac in factors if fac not in to_eval}split_orderversion june |
18,950 | elif len(comp :connected_components(contextfactorssplit_order) there are disconnected components self display( ,"splitting into connected components",compreturn(math prod(self rc(context, ,eofor ( ,eoin comp)elseassert split_orderf"split_order empty rc({context},{factors})var split_order[ self display( "rc branching on"varif isinstance(var,decisionvariable)assert set(context<set(var parents) "cannot optimize {varin context {context}maxres -math inf for val in var domainself display( ,"in rcbranching on",var,"=",valnewres self rc(dict_union({var:val},context)factorssplit_order[ :]if newres maxresmaxres newres theval val self opt_policy[frozenset(context items())(var,thevalself cache[cemaxres return maxres elsetotal for val in var domaintotal +self rc(dict_union({var:val},context)factorssplit_order[ :]self display( "rc branching on"var,"returning"totalself cache[cetotal return total here is how to run the optimize the example decision networksdecnnetworks py -(continued umbrella decision network #urc rc_dn(umberella_dn#urc optimize(#urc opt_policy #rc_fire rc_dn(fire_dn#rc_fire optimize(#rc_fire opt_policy #rc_cheat rc_dn(cheating_dn#rc_cheat optimize(#rc_cheat opt_policy #rc_ch rc_dn(ch #rc_ch optimize(#rc_ch opt_policy version june |
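The following sketch is an addition (not from the text) showing one way to inspect the result of optimize; the name rc_umb is introduced here. After optimize(), opt_policy maps a frozenset of (variable, value) context items to the (decision variable, best value) pair chosen in that context.

# optimize the umbrella network and print the decision function found
rc_umb = rc_dn(umbrella_dn)
print("expected utility:", rc_umb.optimize())
for context, (dvar, val) in rc_umb.opt_policy.items():
    print(dict(context), "->", dvar, "=", val)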
18,951 | planning with uncertainty variable elimination for decision networks ve dn is variable elimination for decision networks the method optimize is used to optimize all the decisions note that optimize requires legal elimination ordering of the random and decision variablesotherwise it will give an exception ( decision node can only be maximized if the variables that are not its parents have already been eliminated decnnetworks py -(continued from probve import ve class ve_dn(ve)"""variable elimination for decision networks""def __init__(self,dn=none)"""dn is decision network""ve __init__(self,dnself dn dn def optimize(self,elim_order=none,obs={})if elim_order =noneelim_order reversed(self gm split_order()policy [proj_factors [self project_observations(fact,obsfor fact in self dn factorsfor in elim_orderif isinstance( ,decisionvariable)to_max [fac for fac in proj_factors if in fac variables and set(fac variables< all_varsassert len(to_max)== "illegal variable order "+str(elim_order)+at "+str(vnewfac factormax(vto_max[ ]policy append(newfac decision_funproj_factors [fac for fac in proj_factors if fac is not to_max[ ]]+[newfacself display( ,"maximizing", ,"resulting factor",newfac brief(self display( ,newfacelseproj_factors self eliminate_var(proj_factorsvassert len(proj_factors)== ,"should there be only one element of proj_factors?value proj_factors[ get_value({}return value,policy decnnetworks py -(continued class factormax(factor)""" factor obtained by maximizing variable in factor also builds decision_function this is based on factorsum ""version june |
18,952 | def __init__(selfdvarfactor)"""dvar is decision variable factor is factor that contains dvar and only parents of dvar ""self dvar dvar self factor factor vars [ for in factor variables if is not dvarfactor __init__(self,varsself values [none]*self size self decision_fun factordf(dvar,vars,[none]*self size def get_value(self,assignment)"""lazy implementationif savedreturn saved valueelse compute it""index self assignment_to_index(assignmentif self values[index]return self values[indexelsemax_val float("-inf"-infinity new_asst assignment copy(for elt in self dvar domainnew_asst[self dvarelt fac_val self factor get_value(new_asstif fac_val>max_valmax_val fac_val best_elt elt self values[indexmax_val self decision_fun values[indexbest_elt return max_val decision function is stored factor decnnetworks py -(continued class factordf(tabfactor)""" decision function""def __init__(self,dvarvarsvalues)tabstored __init__(self,vars,valuesself dvar dvar self name str(dvarused in printing here are some example queriesdecnnetworks py -(continued example queriesv, ve_dn(fire_dnoptimize()print(vfor df in pprint(df,"\ " ve_dn max_display_level if you want to show lots of detail , ve_dn(cheating_dnoptimize()print(vfor df in pprint(df,"\ "print decision functions version june |
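As an added sanity-check sketch (not in the original), the expected utility computed by variable elimination should agree with the one computed by recursive conditioning on the same network; the names v_ve, policy_ve and rc_fire are introduced here.

# compare VE_DN and RC_DN on the fire decision network
v_ve, policy_ve = ve_dn(fire_dn).optimize()
rc_fire = rc_dn(fire_dn)
v_rc = rc_fire.optimize()
print("expected utility (variable elimination):", v_ve)
print("expected utility (recursive conditioning):", v_rc)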
18,953 | planning with uncertainty markov decision processes we will represent markov decision process (mdpdirectlyrather than using the recursive conditioning or variable elimination codeas we did for decision networks mdpproblem py -representations for markov decision processes from utilities import argmaxd import random import matplotlib pyplot as plt from matplotlib widgets import buttoncheckbuttons class mdp(object)""" markov decision process must defineself states the set (or listof states self actions the set (or listof actions self discount real-valued discount "" def __init__(selfstatesactionsdiscountinit= )self states states self actions actions self discount discount self initv self { :init for in self statesself initq self { {ainit for in self actionsfor in self states def (self, , )"""transition probability function returns dictionary of { : such that ( , )= other probabilities are zero ""raise notimplementederror(" "abstract method def (self, , )"""reward function ( ,areturns the expected reward for doing in state ""raise notimplementederror(" "abstract method two state partying example (example in poole and mackworth [ ])mdpexamples py -mdp examples from mdpproblem import mdpgridmdp class party(mdp)"""simple -state -action partying mdp example""def __init__(selfdiscount= )states {'healthy','sick'actions {'relax''party'mdp __init__(selfstatesactionsdiscountversion june |
18,954 | def (self, , )" ( , )return 'healthy'{'relax' 'party' }'sick'{'relax' 'party' }}[ ][ def (self, , )"returns dictionary of { : such that ( , )= other probabilities are zero phealthy ('healthysa'healthy'{'relax' 'party' }'sick'{'relax' 'party' }}[ ][areturn {'healthy':phealthy'sick': -phealthythe next example is the tiny game from example and figure of poole and mackworth [ the state is represented as (xywhere counts from zero from the leftand counts from zero upwardsso the state ( is on the bottom-left state the actions are upc for up-carefuland upr for up-risky (note that gridmdp is just type of mdp for which we have methods to showyou can assume it is just mdp heremdpexamples py -(continued class mdptiny(gridmdp)def __init__(selfdiscount= )actions ['right''upc''left''upr'self x_dim -dimension self y_dim states [( ,yfor in range(self x_dimfor in range(self y_dim)for gridmdp self xoff {'right': 'upc': 'left':- 'upr': self yoff {'right': 'upc':- 'left': 'upr': gridmdp __init__(selfstatesactionsdiscount def (self, , )"""return dictionary of { : if ( , )= other probabilities are zero ""( ,ys if ='right'return {( , ): elif ='upc'return {( ,min( + , )): elif ='left'if ( , =( , )return {( , ): elsereturn {( , ) elif ='upr'if == if < return {( , ): ( + , ): ( , + ): elseat ( , return {( , ): ( , ) ( , ) version june |
18,955 | planning with uncertainty elif == return {( , ): ( , ): ( , + ): elseat ( , return {( , ): ( , ) def (self, , )( ,ys if ='right'return [ ,- ][xelif ='upc'return [- ,- ,- ][yelif ='left'if == return [- - ][yelsereturn elif ='upr'return [[- - ],[- - - ]][ ][yat ( , reward is * + *- = here is the domain of example of poole and mackworth [ here the state is represented as (xywhere counts from zero from the leftand counts from zero upwardsso the state ( is on the bottom-left state mdpexamples py -(continued class grid(gridmdp)""x_dim y_dim grid with rewarding states""def __init__(selfdiscount x_dim= y_dim= )self x_dim x_dim size in -direction self y_dim y_dim size in -direction actions ['up''down''right''left'states [( ,yfor in range(y_dimfor in range(y_dim)self rewarding_states {( , ):- ( , ):- ( , ): ( , ): self fling_states {( , )( , )self xoff {'right': 'up': 'left':- 'down': self yoff {'right': 'up': 'left': 'down':- gridmdp __init__(selfstatesactionsdiscount def intended_next(self, , )"""returns the next state in the direction this is where the agent will end up if to goes in its intended_direction (which it does with probability ""( ,ys if =='up'return (xy+ if + self y_dim else yif =='down'return (xy- if else yif =='right'return ( + if + self x_dim else ,yif =='left'version june |
18,956 | return ( - if else , def (self, , )"""return dictionary of { : if ( , )= other probabilities are zero corners are tricky because different actions result in same state ""if in self fling_statesreturn {( , ) (self x_dim- , ): ( ,self y_dim- ): (self x_dim- ,self y_dim- ): res dict(for ai in self actionss self intended_next( ,aips if ai== else if in resoccurs in corners res[ +ps elseres[ ps return res def (self, , )if in self rewarding_statesreturn self rewarding_states[selse( ,ys rew rewards from crashingif == #on bottom rew +- if ='downelse - if ==self y_dim- #on top rew +- if ='upelse - if == #on left rew +- if ='leftelse - if ==self x_dim- #on right rew +- if ='rightelse - return rew value iteration this implements value iteration this uses indexes of the states and actions (not the namesthe value function is represented so [sis the value of state with index function is represented so [ ][ais the value for doing action with index state with index similarly policy is represented as list where pi[ ]where is the index of statereturns the index of the action mdpproblem py -(continued def vi(selfn)version june |
18,957 | planning with uncertainty """carries out iterations of value iterationupdating value function self returns -functionvalue functionpolicy ""print("calling vi"assert > ,"you must carry out at least one iteration of vi ="+str( # if is not none else { : for in self statesfor in range( )self { {aself ( , )+self discount*sum( *self [ for ( , in self ( ,aitems()for in self actionsfor in self statesself {smax(self [ ][afor in self actionsfor in self statesself pi {sargmaxd(self [ ]for in self statesreturn self qself vself pi the following shows how this can be used mdpexamples py -(continued #testing value iteration try the followingpt party(discount= pt vi( pt vi( party(discount= vi( party(discount= vi( gr grid(gr show( , ,pi gr vi( [( , )showing grid mdps gridmdp is type of mdp where we the states are ( ,ypositions it is special sort of mdp only because we have methods to show it mdpproblem py -(continued class gridmdp(mdp)def __init__(selfstatesactionsdiscount)mdp __init__(selfstatesactionsdiscount def show(self)#plt ion(interactive fig,(self axplt subplots(plt subplots_adjust(bottom= version june |
18,958 | stepb button(plt axes([ , , , ])"step"stepb on_clicked(self on_stepresetb button(plt axes([ , , , ])"reset"resetb on_clicked(self on_resetself qcheck checkbuttons(plt axes([ , , , ])["show -values","show policy"]self qcheck on_clicked(self show_valsself show_vals(noneplt show( def show_vals(self,event)self ax cla(array [[self [( , )for in range(self x_dim)for in range(self y_dim)self ax pcolormesh([ - for in range(self x_dim+ )][ - for in range(self y_dim+ )]arrayedgecolors='black',cmap='summer'for cmap see if self qcheck get_status()[ ]"show policyfor ( ,yin self qmaxv max(self [( , )][afor in self actionsfor in self actionsif self [( , )][ =maxvdraw arrow in appropriate direction self ax arrow( , ,self xoff[ ]* ,self yoff[ ]* color='red',width= head_width= length_includes_head=trueif self qcheck get_status()[ ]"show -valuesself show_q(eventelseself show_v(eventself ax set_xticks(range(self x_dim)self ax set_xticklabels(range(self x_dim)self ax set_yticks(range(self y_dim)self ax set_yticklabels(range(self y_dim)plt draw( def on_step(self,event)self vi( self show_vals(event def show_v(self,event)"""show values""for ( ,yin self vself ax text( , ,"{val }format(val=self [( , )]),ha='center' def show_q(self,event)"""show -values""for ( ,yin self qversion june |
18,959 | planning with uncertainty - show -values show policy reset step figure interface for tiny exampleafter number of steps each rectangle represents state in each rectangle are the -values for the state the leftmost number is the for the left actionthe rightmost number is for the right actionthe upper most is for the upr (up-riskyaction and the lowest number is for the upc action the arrow points to the action(swith the maximum -value for in self actionsself ax text( +self xoff[ ], +self yoff[ ]"{val }format(val=self [( , )][ ]),ha='center' def on_reset(self,event)self self initv self self initq self show_vals(eventfigure shows the user interfacewhich can be obtained using tiny(show()resizing itchecking "show -valuesand "show policy"and clicking "stepa few times figure shows the user interfacewhich can be obtained using grid(show()version june |
18,960 | resizing itchecking "show -valuesand "show policy"and clicking "stepa few times exercise computing before may seem like waste of space because we don' need to store in order to compute value function or the policy change the algorithm so that it loops through the states and actions once per iterationand only stores the value function and the policy note that to get the same results as beforeyou would need to make sure that you use the previous value of in the computation not the current value of does using the current value of hurt the algorithm or make it better (in approaching the actual value function)asynchronous value iteration this implements asynchronous value iterationstoring function is represented so [ ][ais the value for doing action with index state with index mdpproblem py -(continued def avi(self, )states list(self statesactions list(self actionsfor in range( ) random choice(statesa random choice(actionsself [ ][ (self ( ,aself discount sum( max(self [ ][ for in self actionsfor ( , in self ( ,aitems())return the following shows how avi can be used mdpexamples py -(continued #testing asynchronous value iteration try the followingpt party(discount= pt avi( pt vi( gr grid( gr avi( [( , )exercise implement value iteration that stores the -values rather than the -values does it work better than storing (what might better mean?exercise in asynchronous value iterationtry number of different ways to choose the states and actions to update ( sweeping through the state-action pairschoosing them at randomnote that the best way may be to determine version june |
18,960 | [Figure: the interface for the grid example after a number of steps, with "show Q-values" and "show policy" selected. Each rectangle represents a state; in each rectangle are the Q-values for that state. The leftmost number is the Q-value for the left action, the rightmost number is for the right action, the topmost is for the up action, and the lowest number is for the down action. The arrow points to the action(s) with the maximum Q-value.]
18,962 | which states have had their -values change the mostand then update the previous onesbut that is not so straightforward to implementbecause you need to find those previous states version june |
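The following is a hedged comparison sketch, not from the text: it checks that asynchronous value iteration approaches the Q-values computed by synchronous value iteration on the party MDP. The names pt_sync and pt_async, and the discount and iteration counts, are illustrative choices introduced here.

pt_sync = party(discount=0.9)    # 0.9 is an illustrative discount
pt_sync.vi(100)
pt_async = party(discount=0.9)
pt_async.avi(10000)
for s in pt_sync.states:
    for a in pt_sync.actions:
        print(s, a, round(pt_sync.q[s][a], 2), round(pt_async.q[s][a], 2))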
18,963 | learning with uncertainty -means the -means learner maintains two lists that suffice as sufficient statistics to classify examplesand to learn the classificationclass counts is list such that class counts[cis the number of examples in the training set with class feature sum is list such that feature sum[ ][cis sum of the values for the 'th feature for members of class the average value of the ith feature in class is feature sum[ ][cclass counts[cthe class is initialized by randomly assigning examples to classesand updating the statistics for class counts and feature sum learnkmeans py - -means learning from learnproblem import data_setlearnerdata_from_file import random import matplotlib pyplot as plt class k_means_learner(learner)def __init__(self,datasetnum_classes)self dataset dataset self num_classes num_classes self random_initialize( def random_initialize(self) |
18,964 | learning with uncertainty class_counts[cis the number of examples with class= self class_counts [ ]*self num_classes feature_sum[ ][cis the sum of the values of feature for class self feature_sum [[ ]*self num_classes for feat in self dataset input_featuresfor eg in self dataset traincl random randrange(self num_classesassign eg to random class self class_counts[cl+ for (ind,featin enumerate(self dataset input_features)self feature_sum[ind][cl+feat(egself num_iterations self display( ,"initial class counts",self class_countsthe distance from (the mean ofa class to an example is the sumover all fraturesof the sum-of-squares differences of the class mean and the example value learnkmeans py -(continued def distance(self,cl,eg)"""distance of the eg from the mean of the class""return sum(self class_prediction(ind,cl)-feat(eg))** for (ind,featin enumerate(self dataset input_features) def class_prediction(self,feat_ind,cl)"""prediction of the class cl on the feature with index feat_ind""if self class_counts[cl= return there are no examples so we can choose any value elsereturn self feature_sum[feat_ind][cl]/self class_counts[cl def class_of_eg(self,eg)"""class to which eg is assigned""return (min((self distance(cl,eg),clfor cl in range(self num_classes)))[ second element of tuplewhich is class with minimum distance one step of -means updates the class counts and feature sum it uses the old values to determine the classesand so the new values for class counts and feature sum at the end it determines whether the values of these have changesand then replaces the old ones with the new ones it returns an indicator of whether the values are stable (have not changedlearnkmeans py -(continued def k_means_step(self)"""updates the model with one step of -means returns whether the assignment is stable ""version june |
18,965 | new_class_counts [ ]*self num_classes feature_sum[ ][cis the sum of the values of feature for class new_feature_sum [[ ]*self num_classes for feat in self dataset input_featuresfor eg in self dataset traincl self class_of_eg(egnew_class_counts[cl+ for (ind,featin enumerate(self dataset input_features)new_feature_sum[ind][cl+feat(egstable (new_class_counts =self class_countsand (self feature_sum =new_feature_sumself class_counts new_class_counts self feature_sum new_feature_sum self num_iterations + return stable def learn(self, = )"""do steps of -meansor until convergence"" = stable false while < and not stablestable self k_means_step( + self display( ,"iteration",self num_iterations"class counts",self class_counts,stable=",stablereturn stable def show_classes(self)"""sorts the data by the class and prints in order for visualizing small data sets ""class_examples [[for in range(self num_classes)for eg in self dataset trainclass_examples[self class_of_eg(eg)append(egprint("class","example",sep='\ 'for cl in range(self num_classes)for eg in class_examples[cl]print(cl,*eg,sep='\ ' def plot_error(selfmaxstep= )"""plots the sum-of-suares error as function of the number of steps""plt ion(plt xlabel("step"plt ylabel("ave sum-of-squares error"train_errors [if self dataset testversion june |
18,966 | learning with uncertainty test_errors [for in range(maxstep)self learn( train_errors appendsum(self distance(self class_of_eg(eg),egfor eg in self dataset train/len(self dataset train)if self dataset testtest_errors appendsum(self distance(self class_of_eg(eg),egfor eg in self dataset test/len(self dataset test)plt plot(range( ,maxstep+ ),train_errorslabel=str(self num_classes)+classes training set"if self dataset testplt plot(range( ,maxstep+ ),test_errorslabel=str(self num_classes)+classes test set"plt legend(plt draw( %data data_from_file('data/emdata csv'num_train= target_index= trivial example data data_from_file('data/emdata csv'num_train= target_index= %data data_from_file('data/emdata csv'num_train= target_index= example from textbook kml k_means_learner(data, num_iter= print("class assignment after",num_iter,"iterations:"kml learn(num_iter)kml show_classes( plot the error km =k_means_learner(data, )km plot_error( classes km =k_means_learner(data, )km plot_error( classes km =k_means_learner(data, )km plot_error( classes data data_from_file('data/carbool csv'target_index= ,boolean_features=truekml k_means_learner(data, kml learn( )kml show_classes(km =k_means_learner(data, )km plot_error( classes km =k_means_learner(data, )km plot_error( classes exercise change boolean features true flag to allow for numerical features -means assumes the features are numericalso we want to make non-numerical features into numerical features (using characteristic functionsbut we probably don' want to change numerical features into boolean exercise if there are many classessome of the classes can become empty ( try classes with carbool csvimplement way to put some examples into classif possible two ideas areversion june |
18,967 | (a) Initialize the classes with actual examples, so that the classes will not start empty. (Do the classes become empty?)
(b) In class_prediction, we test whether the class is empty, and make a default prediction for an empty class. It is possible to make a different prediction to "steal" an example (but you should make sure that a class has a consistent value for each feature in a loop).
Make your own suggestions, and compare them with the original and with whichever of these you think may work better.

EM

In the following definition, a class, c, is an integer in range [0, num_classes). i is the index of a feature, so feat[i] is the ith feature, and a feature is a function from tuples to values. val is a value of a feature.

A model consists of two lists, which form the sufficient statistics:

class_counts is a list such that class_counts[c] is the number of tuples with class = c, where each tuple is weighted by its probability, i.e.,
class_counts[c] = ∑_{t : class(t)=c} P(t)

feature_counts is a list such that feature_counts[i][val][c] is the weighted count of the number of tuples t with feat[i](t) = val and class(t) = c, where each tuple is weighted by its probability, i.e.,
feature_counts[i][val][c] = ∑_{t : feat[i](t)=val and class(t)=c} P(t)

learnem py - EM learning

from learnproblem import data_set, learner, data_from_file
import random
import math
import matplotlib.pyplot as plt

class em_learner(learner):
    def __init__(self, dataset, num_classes):
        self.dataset = dataset
        self.num_classes = num_classes
        self.class_counts = None
        self.feature_counts = None

The function em_step goes through the training examples and updates these counts. The first time it is run, when there is no model, it uses random distributions.
18,968 | learning with uncertainty learnem py -(continued def em_step(selforig_class_countsorig_feature_counts)"""updates the model ""class_counts [ ]*self num_classes feature_counts [{val:[ ]*self num_classes for val in feat frangefor feat in self dataset input_featuresfor tple in self dataset trainif orig_class_countsa model exists tpl_class_dist self prob(tpleorig_class_countsorig_feature_countselseinitiallywith no modelreturn random distribution tpl_class_dist random_dist(self num_classesfor cl in range(self num_classes)class_counts[cl+tpl_class_dist[clfor (ind,featin enumerate(self dataset input_features)feature_counts[ind][feat(tple)][cl+tpl_class_dist[clreturn class_countsfeature_counts prob computes the probability of class for tuple tplgiven the current statistics ( tplep(cp(xi =tple(ici class counts[cfeature counts[ ][feati (tple)][clen(self datasetclass counts[ci feature counts[ ][feati (tple)][cclass counts[ ]|feats|- the last step is because len(self datasetis constant (independent of cclass counts[ccan be taken out of the productbut needs to be raised to the power of the number of featuresand one of them cancels learnem py -(continued def prob(selftpleclass_countsfeature_counts)"""returns distribution over the classes for tuple tple in the model defined by the counts ""feats self dataset input_features unnorm [prod(feature_counts[ ][feat(tple)][cfor ( ,featin enumerate(feats)/(class_counts[ ]**(len(feats)- )for in range(self num_classes)thesum sum(unnormreturn [un/thesum for un in unnormlearn does steps of emlearnem py -(continuedversion june |
18,969 | def learn(self, )"""do steps of em""for in range( )self class_counts,self feature_counts self em_step(self class_counts self feature_countsthe following is for visualizing the classes it prints the dataset ordered by the probability of class learnem py -(continued def show_class(self, )"""sorts the data by the class and prints in order for visualizing small data sets ""sorted_data sorted((self prob(tpl,self class_counts,self feature_counts)[ ]indpreserve ordering for equal probabilities tplfor (ind,tplin enumerate(self dataset train)for cc, ,tpl in sorted_dataprint(cc,*tpl,sep='\ 'the following are for evaluating the classes the probability of tuple can be evaluated by marginalizing over the classesp(tplep(cp(xi =tple(icc fc[ ][feati (tple)][ccc[ccc[cc len(self dataseti where cc is the class count and fc is feature count len(self datasetcan be distributed out of the sumand cc[ccan be taken out of the product fc[ ][feati (tple)][ #feats - len(self datasetc cc[ci given the probability of each tuplewe can evaluate the loglossas the negative of the log probabilitylearnem py -(continued def logloss(self,tple)"""returns the logloss of the prediction on tplewhich is -log( (tple)based on the current class counts and feature counts ""feats self dataset input_features res cc self class_counts fc self feature_counts version june |
18,970 | learning with uncertainty for in range(self num_classes)res +prod(fc[ ][feat(tple)][cfor ( ,featin enumerate(feats))/(cc[ ]**(len(feats)- )if res> return -math log (res/len(self dataset train)elsereturn float("inf"#infinity def plot_error(selfmaxstep= )"""plots the logloss error as function of the number of steps""plt ion(plt xlabel("step"plt ylabel("ave logloss (bits)"train_errors [if self dataset testtest_errors [for in range(maxstep)self learn( train_errors appendsum(self logloss(tplefor tple in self dataset train/len(self dataset train)if self dataset testtest_errors appendsum(self logloss(tplefor tple in self dataset test/len(self dataset test)plt plot(range( ,maxstep+ ),train_errorslabel=str(self num_classes)+classes training set"if self dataset testplt plot(range( ,maxstep+ ),test_errorslabel=str(self num_classes)+classes test set"plt legend(plt draw( def prod( )"""returns the product of the elements of ""res for in lres * return res def random_dist( )"""generate random numbers that sum to ""res [random random(for in range( ) sum(resreturn [ / for in res data data_from_file('data/emdata csv'num_train= target_index= eml em_learner(data, num_iter= version june |
18,971 | print("class assignment after",num_iter,"iterations:"eml learn(num_iter)eml show_class( plot the error em =em_learner(data, )em plot_error( classes em =em_learner(data, )em plot_error( classes em =em_learner(data, )em plot_error( classes data data_from_file('data/carbool csv'target_index= ,boolean_features=false[ frange for in data input_featureseml em_learner(data, eml learn( )eml show_class( em =em_learner(data, )em plot_error( classes em =em_learner(data, )em plot_error( classes exercise for the em datawhere there are naturally classes classes does better on the training set after while than classesbut worse on the test set explain why hintlook what the classes are use "em show class( )for each of the classes [ exercise write code to plot the logloss as function of the number of classes (from to say for fixed number of iterations (from the experience with the existing codethink about how many iterations is appropriate version june |
18,972 | Reinforcement Learning

Representing Agents and Environments

When the learning agent does an action in the environment, it observes a (state, reward) pair from the environment. The state is the world state; this is the fully observable assumption. An RL environment implements a do(action) method that returns a (state, reward) pair.

rlproblem py - representations for reinforcement learning

import random
from display import displayable
from utilities import flip

class rl_env(displayable):
    def __init__(self, actions, state):
        self.actions = actions   # set of actions
        self.state = state       # initial state

    def do(self, action):
        """do action
        returns state, reward
        """
        raise NotImplementedError("rl_env do")   # abstract method

Here is the definition of the simple two-state, two-action party/relax decision environment.

rlproblem py -(continued)
class healthy_env(rl_env):
    def __init__(self):
        rl_env.__init__(self, ["party", "relax"], "healthy")
18,973 | reinforcement learning def do(selfaction)"""updates the state based on the agent doing action returns state,reward ""if self state=="healthy"if action=="party"self state "healthyif flip( else "sickreward elseaction=="relaxself state "healthyif flip( else "sickreward elseself state=="sickif action=="party"self state "healthyif flip( else "sickreward elseself state "healthyif flip( else "sickreward return self state,reward simulating an environment from an mdp given the definition for an mdp (page )env from mdp takes in an mdp and simulates the environment with those dynamics note that the mdp does not contain enough information to simulate systembecause it loses any dependency between the rewards and the resulting statehere we assume the agent always received the average reward for the state and action rlproblem py -(continued class env_from_mdp(rl_env)def __init__(selfmdp)initial_state mdp states[ rl_env __init__(self,mdp actionsinitial_stateself mdp mdp self action_index {action:index for (index,actionin enumerate(mdp actions)self state_index {state:index for (index,statein enumerate(mdp states) def do(selfaction)"""updates the state based on the agent doing action returns state,reward ""action_ind self action_index[actionstate_ind self state_index[self stateself state pick_from_dist(self mdp trans[state_ind][action_ind]self mdp statesreward self mdp reward[state_ind][action_indversion june |
18,974 | figure monster game return self statereward def pick_from_dist(dist,values)"" pick_from_dist([ , , ],[' ',' ',' ']should pick 'awith probability etc ""ran random random( = while ran>dist[ ]ran -dist[ii + return values[isimple game this is for the game depicted in figure rlsimpleenv py -simple game import random from utilities import flip from rlproblem import rl_env class simple_game_env(rl_env)xdim ydim vwalls [( , )( , )( , )vertical walls right of these locations hwalls [not implemented crashed_reward - version june |
18,975 | reinforcement learning prize_locs [( , )( , )( , )( , )prize_apears_prob prize_reward monster_locs [( , )( , )( , )( , )( , )monster_appears_prob monster_reward_when_damaged - repair_stations [( , ) actions ["up","down","left","right" def __init__(self)stateself self self damaged false self prize none statistics self number_steps self total_reward self min_reward self min_step self zero_crossing rl_env __init__(selfsimple_game_env actions(self xself yself damagedself prize)self display( ,"","step","tot rew","ave rew",sep="\ " def do(self,action)"""updates the state based on the agent doing action returns state,reward ""reward prize can appearif self prize is none and flip(self prize_apears_prob)self prize random choice(self prize_locsactions can be noisy if flip( )actual_direction random choice(self actionselseactual_direction action modeling the actions given the actual direction if actual_direction ="right"if self ==self xdim- or (self ,self yin self vwallsreward +self crashed_reward elseself + elif actual_direction ="left"if self == or (self - ,self yin self vwallsreward +self crashed_reward version june |
18,976 | elseself +- elif actual_direction ="up"if self ==self ydim- reward +self crashed_reward elseself + elif actual_direction ="down"if self == reward +self crashed_reward elseself +- elseraise runtimeerror("unknown_direction "+str(direction) monsters if (self ,self yin self monster_locs and flip(self monster_appears_prob)if self damagedreward +self monster_reward_when_damaged elseself damaged true if (self ,self yin self repair_stationsself damaged false prizes if (self ,self =self prizereward +self prize_reward self prize none statistics self number_steps + self total_reward +reward if self total_reward self min_rewardself min_reward self total_reward self min_step self number_steps if self total_reward> and reward>self total_rewardself zero_crossing self number_steps self display( ,"",self number_steps,self total_rewardself total_reward/self number_steps,sep="\ " return (self xself yself damagedself prize)reward evaluation and plotting rlplot py -rl plotter import matplotlib pyplot as plt def plot_rl(aglabel=noneyplot='total'step_size=noneversion june |
18,977 | reinforcement learning steps_explore= steps_exploit= xscale='linear')""plots the agent ag label is the label for the plot yplot is 'averageor 'totalstep_size is the number of steps between each point plotted steps_explore is the number of steps the agent spends exploring steps_exploit is the number of steps the agent spends exploiting xscale is 'logor 'linear returns total reward when exploringtotal reward when exploiting ""assert yplot in ['average','total'if step_size is nonestep_size max( ,(steps_explore+steps_exploit)// if label is nonelabel ag label ag max_display_level,old_mdl ,ag max_display_level plt ion(plt xscale(xscaleplt xlabel("step"plt ylabel(yplot+reward"steps [steps rewards [return ag restart(step while step steps_exploreag do(step_sizestep +step_size steps append(stepif yplot ="average"rewards append(ag acc_rewards/stepelserewards append(ag acc_rewardsacc_rewards_exploring ag acc_rewards ag explore,explore_save ,ag explore while step steps_explore+steps_exploitag do(step_sizestep +step_size steps append(stepif yplot ="average"rewards append(ag acc_rewards/stepelserewards append(ag acc_rewardsplt plot(steps,rewards,label=labelplt legend(loc="upper left"plt draw(ag max_display_level old_mdl ag explore=explore_save return acc_rewards_exploringag acc_rewards-acc_rewards_exploring version june |
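The plotter only relies on a small, duck-typed agent interface. The following summary is inferred from the code above rather than stated in the text:

# any agent passed to plot_rl is assumed to provide at least:
#   ag.label               a string used for the plot legend
#   ag.max_display_level   used to silence per-step tracing while plotting
#   ag.restart()           resets the agent and its accumulated rewards
#   ag.do(n)               interacts with the environment for n steps
#   ag.acc_rewards         the total reward accumulated so far
#   ag.explore             exploration rate; temporarily overridden during the
#                          exploit phase and restored afterwards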
18,978 | learning to run the -learning demoin folder "aipython"load "rlqtest py"and copy and paste the example queries at the bottom of that file this assumes python rlqlearner py - learning import random from display import displayable from utilities import argmaxeflip class rl_agent(displayable)"""an rl_agent has percepts (srfor some state and real reward ""rlqlearner py -(continued class q_learner(rl_agent)""" -learning agent has belief-state consisting of state is the previous state is {(state,action):valuedict visits is {(state,action):ndict is how many times action was done in state acc_rewards is the accumulated reward it observes (srfor some world-state and real reward ""rlqlearner py -(continued def __init__(selfenvdiscountexplore= fixed_alpha=truealpha= alpha_fun=lambda : /kqinit= label="q_learner")"""env is the environment to interact with discount is the discount factor explore is the proportion of time the agent will explore fixed_alpha specifies whether alpha is fixed or varies with the number of visits alpha is the weight of new experiences compared to old experiences alpha_fun is function that computes alpha from the number of visits qinit is the initial value of the ' label is the label for plotting ""rl_agent __init__(selfself env env self actions env actions version june |
18,979 | reinforcement learning self discount discount self explore explore self fixed_alpha fixed_alpha self alpha alpha self alpha_fun alpha_fun self qinit qinit self label label self restart(restart is used to make the learner relearn everything this is used by the plotter to create new plots rlqlearner py -(continued def restart(self)"""make the agent relearnand reset the accumulated rewards ""self acc_rewards self state self env state self {self visits {do takes in the number of steps rlqlearner py -(continued def do(self,num_steps= )"""do num_steps of interaction with the environment""self display( ," \ta\tr\ts'\tq"alpha self alpha for in range(num_steps)action self select_action(self statenext_state,reward self env do(actionif not self fixed_alphak self visits[(self stateaction)self visits get((self stateaction), )+ alpha self alpha_fun(kself [(self stateaction)( -alphaself get((self stateaction),self qinitalpha (reward self discount max(self get((next_statenext_act),self qinitfor next_act in self actions))self display( ,self stateactionrewardnext_stateself [(self stateaction)]sep='\ 'self state next_state self acc_rewards +reward select action us used to select the next action to perform this can be reimplemented to give different exploration strategy rlqlearner py -(continued def select_action(selfstate)"""returns an action to carry out for the current agent version june |
18,980 | given the stateand the -function ""if flip(self explore)return random choice(self actionselsereturn argmaxe((next_actself get((statenext_act),self qinit)for next_act in self actionsexercise implement soft-max action selection choose temperature that works well for the domain explain how you picked this temperature compare the epsilon-greedysoft-max and optimism in the face of uncertainty exercise implement sarsa hintit does not do max in do instead it needs to choose next act before it does the update testing -learning the first tests are for the -action -state rlqtest py -rl tester from rlproblem import healthy_env from rlqlearner import q_learner from rlplot import plot_rl env healthy_env(ag q_learner(env ag_opt q_learner(env qinit= label="optimisticoptimistic agent ag_exp_l q_learner(env explore= label="less explore"ag_exp_m q_learner(env explore= label="more explore"ag_disc q_learner(env qinit= label="disc "ag_va q_learner(env qinit= ,fixed_alpha=false,alpha_fun=lambda : /( + ),label="alpha= /( + )" ag max_display_level ag do( ag get the learned -values ag max_display_level ag do( ag get the learned -values plot_rl(ag,yplot="average"plot_rl(ag_opt,yplot="average"plot_rl(ag_exp_l,yplot="average"plot_rl(ag_exp_m,yplot="average"plot_rl(ag_disc,yplot="average"plot_rl(ag_va,yplot="average" from mdpexamples import mdptiny from rlproblem import env_from_mdp envt env_from_mdp(mdptiny()version june |
18,981 | reinforcement learning agt q_learner(envt agt do( from rlsimpleenv import simple_game_env senv simple_game_env(sag q_learner(senv, ,explore= ,fixed_alpha=true,alpha= plot_rl(sag ,steps_explore= ,steps_exploit= ,label="alpha="+str(sag alpha)sag q_learner(senv, ,explore= ,fixed_alpha=falseplot_rl(sag ,steps_explore= ,steps_exploit= ,label="alpha= / "sag q_learner(senv, ,explore= ,fixed_alpha=false,alpha_fun=lambda : /( + )plot_rl(sag ,steps_explore= ,steps_exploit= ,label="alpha= /( + )" -leaning with experience replay warningnot properly dubugged rlqexperiencereplay py -linear reinforcement learner with experience replay from rlqlearner import q_learner from utilities import flip import random class boundedbuffer(object)def __init__(selfbuffer_size= )self buffer_size buffer_size self buffer [ ]*buffer_size self number_added def add(self,experience)if self number_added self buffer_sizeself buffer[self number_addedexperience elseif flip(self buffer_size/self number_added)position random randrange(self buffer_sizeself buffer[positionexperience self number_added + def get(self)return self buffer[random randrange(min(self number_addedself buffer_size)) class q_ar_learner(q_learner)def __init__(selfenvdiscountexplore= fixed_alpha=truealpha= alpha_fun=lambda : /kqinit= label="q_ar_learner"max_buffer_size= num_updates_per_action= burn_in= )version june |
18,982 | q_learner __init__(selfenvdiscountexplorefixed_alphaalphaalpha_funqinitlabelself experience_buffer boundedbuffer(max_buffer_sizeself num_updates_per_action num_updates_per_action self burn_in burn_in def do(self,num_steps= )"""do num_steps of interaction with the environment""self display( ," \ta\tr\ts'\tq"alpha self alpha for in range(num_steps)action self select_action(self statenext_state,reward self env do(actionself experience_buffer add((self state,action,reward,next_state)#remember experience if not self fixed_alphak self visits[(self stateaction)self visits get((self stateaction), )+ alpha self alpha_fun(kself [(self stateaction)( -alphaself get((self stateaction),self qinitalpha (reward self discount max(self get((next_statenext_act),self qinitfor next_act in self actions))self display( ,self stateactionrewardnext_stateself [(self stateaction)]sep='\ 'self state next_state self acc_rewards +reward do some updates from experince buffer if self experience_buffer number_added self burn_infor in range(self num_updates_per_action)( , , ,nsself experience_buffer get(if not self fixed_alphak self visits[( , )alpha self alpha_fun(kself [( , )( -alphaself [( , )alpha (reward self discount max(self get((ns,na),self qinitfor na in self actions))rlqexperiencereplay py -(continued from rlsimpleenv import simple_game_env from rlqtest import sag sag sag from rlplot import plot_rl senv simple_game_env(sag ar q_ar_learner(senv, ,explore= ,fixed_alpha=true,alpha= version june |
plot_rl(sag ar,steps_explore= ,steps_exploit= ,label="ar alpha="+str(sag ar alpha)
sag ar q_ar_learner(senv, ,explore= ,fixed_alpha=false
plot_rl(sag ar,steps_explore= ,steps_exploit= ,label="ar alpha= / "
sag ar q_ar_learner(senv, ,explore= ,fixed_alpha=false,alpha_fun=lambda : /( + )
plot_rl(sag ar,steps_explore= ,steps_exploit= ,label="ar alpha= /( + )"

Model-based Reinforcement Learner

To run the demo, in folder "aipython", load "rlModelLearner.py", and copy and paste the example queries at the bottom of that file. This assumes Python 3.

A model-based reinforcement learner builds a Markov decision process model of the domain; it simultaneously learns the model and plans with that model. The model-based reinforcement learner uses the following data structures:

• Q[s,a] is a dictionary that, given an (s,a) pair, returns the Q-value, the estimate of the future (discounted) value of being in state s and doing action a.

• R[s,a] is a dictionary that, given an (s,a) pair, returns the average reward from doing a in state s.

• T[s,a,s'] is a dictionary that, given an (s,a,s') tuple, returns the number of times a was done in state s with the resulting state being s'.

• visits[s,a] is a dictionary that, given an (s,a) pair, returns the number of times action a was carried out in state s.

• res_states[s,a] is a dictionary that, given an (s,a) pair, returns the list of resulting states that have occurred when action a was carried out in state s. This is used in the asynchronous value iteration to determine the s' states to sum over.

• visits_list is a list of the (s,a) pairs that have been carried out. This is used to ensure there is no divide-by-zero in the asynchronous value iteration. Note that this could be constructed from visits or res_states by enumerating the keys, but it needs to be a list for random.choice, and we don't want to keep recreating it.
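Putting these together, each asynchronous value iteration step re-estimates the Q-value of a single (s,a) pair from the learned model. The following is a minimal sketch of that update (it is not part of rlModelLearner.py); it assumes the dictionaries Q, R, T, visits and res_states described above, a discount factor discount, the set of actions, and a default Q-value qinit, and it mirrors the update performed inside the do method below.

def avi_update(Q, R, T, visits, res_states, discount, actions, st, act, qinit=0):
    """one asynchronous value iteration update of Q[(st,act)] from the learned model"""
    Q[(st, act)] = R[(st, act)] + discount * sum(
        T[(st, act, rst)] / visits[(st, act)]
        * max(Q.get((rst, nact), qinit) for nact in actions)
        for rst in res_states[(st, act)])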
18,984 | rlmodellearner py -model-based reinforcement learner import random from rlqlearner import rl_agent from display import displayable from utilities import argmaxeflip class model_based_reinforcement_learner(rl_agent)""" model-based reinforcement learner "" def __init__(selfenvdiscountexplore= qinit= updates_per_step= label="mbr_learner")"""env is the environment to interact with discount is the discount factor explore is the proportion of time the agent will explore qinit is the initial value of the ' updates_per_step is the number of avi updates per action label is the label for plotting ""rl_agent __init__(selfself env env self actions env actions self discount discount self explore explore self qinit qinit self updates_per_step updates_per_step self label label self restart(rlmodellearner py -(continued def restart(self)"""make the agent relearnand reset the accumulated rewards ""self acc_rewards self state self env state self {{(st,action):q_valuemap self {{(st,action):rewardmap self {{(st,action,st_next):countmap self visits {{(st,action):countmap self res_states {{(st,action):set_of_statesmap self visits_list [list of (st,actionself previous_action none rlmodellearner py -(continued def do(self,num_steps= )"""do num_steps of interaction with the environment for each actiondo updates_per_step iterations of asynchronous value iteration ""for step in range(num_steps)version june |
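            # Each pass through this loop (the body continues on the next page):
            # (1) selects an action and observes (state, action, reward, next state);
            # (2) updates the model for that (state, action) pair: the transition
            #     counts T, the visit counts, the running-average reward R, and
            #     res_states, and records the pair in visits_list;
            # (3) performs updates_per_step asynchronous value iteration updates,
            #     starting with the pair just carried out and then with random
            #     pairs drawn from visits_list.
            # (These comments are added for readability; they are not in the
            # original rlModelLearner.py.)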
18,985 | reinforcement learning pst self state previous state action self select_action(pstself state,reward self env do(actionself acc_rewards +reward self [(pst,action,self state)self get((pstaction,self state), )+ if (pst,actionin self visitsself visits[(pst,action)+ self [(pst,action)+(reward-self [(pst,action)])/self visits[(pst,action)self res_states[(pst,action)add(self stateelseself visits[(pst,action) self [(pst,action)reward self res_states[(pst,action){self stateself visits_list append((pst,action)st,act pst,action #initial state-action pair for avi for update in range(self updates_per_step)self [(st,act)self [(st,act)]+self discount*sum(self [st,act,rst]/self visits[st,act]max(self get((rst,nact),self qinitfor nact in self actionsfor rst in self res_states[(st,act)])st,act random choice(self visits_listrlmodellearner py -(continued def select_action(selfstate)"""returns an action to carry out for the current agent given the stateand the -function ""if flip(self explore)return random choice(self actionselsereturn argmaxe((next_actself get((statenext_act),self qinit)for next_act in self actionsrlmodellearner py -(continued from rlqtest import senv simple game environment mbl model_based_reinforcement_learner(senv, ,updates_per_step= plot_rl(mbl ,steps_explore= ,steps_exploit= ,label="model-based( )"mbl model_based_reinforcement_learner(senv, ,updates_per_step= plot_rl(mbl ,steps_explore= ,steps_exploit= ,label="model-based( )"exercise if there was only one update per stepthe algorithm can be made simpler and use less space explain how does it make it more efficientis it worthwhile having more than one update per step for the games implemented hereversion june |
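One detail of the do method above that is easy to miss: R[s,a] is maintained as a running average, updated incrementally as R[s,a] += (reward - R[s,a]) / visits[s,a] rather than by storing all past rewards. A tiny self-contained check of that identity (the rewards below are invented for illustration):

R, visits = {}, {}
for reward in [4, 6, 2]:
    visits[('s','a')] = visits.get(('s','a'), 0) + 1
    R[('s','a')] = R.get(('s','a'), 0) + (reward - R.get(('s','a'), 0)) / visits[('s','a')]
print(R[('s','a')])   # 4.0, the mean of 4, 6 and 2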
18,986 | exercise it is possible to implement the model-based reinforcement learner by replacing qrvisitsres states with single dictionary that returns tuple (qrvtmwhere qr and are numbersand tm is map from resulting states into counts does this make the algorithm easier to understanddoes this make the algorithm more efficientexercise if the states and the actions were mapped into integersthe dictionaries could be implemented more efficiently as arrays this entails an extra step in specifying problems implement this for the simple game is it more efficient reinforcement learning with features to run the demoin folder "aipython"load "rlfeatures py"and copy and paste the example queries at the bottom of that file this assumes python representing features feature is function from state and action to construct the features for domainwe construct function that takes state and an action and returns the list of all feature values for that state and action this feature set is redesigned for each problem get features(stateactionreturns the feature values appropriate for the simple game rlsimplegamefeatures py -feature-based reinforcement learner from rlsimpleenv import simple_game_env from rlproblem import rl_env def get_features(state,action)"""returns the list of feature values for the state-action pair ""assert action in simple_game_env actions ( , , ,pstate would go to monster monster_ahead( , ,actionf would crash into wall wall_ahead( , ,actionf action is towards prize towards_prize( , ,action,pf damaged and action is toward repair station towards_repair( , ,actionif else damaged and towards monster if and else damaged if else not damaged version june |
18,987 | reinforcement learning - damaged and prize ahead if and else not damaged and prize ahead if not and else features [ , , , , , , , , , the next features are for prize locations and distances from outside in all directions for pr in simple_game_env prize_locs+[none]if ==prfeatures +[ -xy -yelsefeatures +[ fp feature for when prize is at , this knows about the wall to the right of the prize if ==( , )if == fp elif < fp elsefp - elsefp features append(fp return features def monster_ahead( , ,action)"""returns if the location expected to get to by doing action from ( ,ycan contain monster ""if action ="rightand ( + ,yin simple_game_env monster_locsreturn elif action ="leftand ( - ,yin simple_game_env monster_locsreturn elif action ="upand ( , + in simple_game_env monster_locsreturn elif action ="downand ( , - in simple_game_env monster_locsreturn elsereturn def wall_ahead( , ,action)"""returns if there is wall in the direction of action from ( ,ythis is complicated by the internal walls ""if action ="rightand ( ==simple_game_env xdim- or ( ,yin simple_game_env vwalls)return elif action ="leftand ( == or ( - ,yin simple_game_env vwalls)version june |
18,988 | return elif action ="upand ==simple_game_env ydim- return elif action ="downand == return elsereturn def towards_prize( , ,action, )"""action goes in the direction of the prize from ( , )""if is nonereturn elif ==( , )take into account the wall near the top-left prize if action ="leftand ( > or == and < )return elif action ="downand ( > and > )return elif action ="upand ( == or < )return elsereturn elsepx,py if ==( , and == if (action=="rightand or (action=="upand < )return elsereturn if (action ="upand <pyor (action ="downand py< )return elif (action ="leftand px<xor (action ="rightand <px)return elsereturn def towards_repair( , ,action)"""returns if action is towards the repair station ""if action ="upand ( > and < or == and < )return elif action ="leftand > return elif action ="rightand == and < return elif action ="downand == and > return elsereturn version june |
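# The following check is illustrative only and not part of the original file.
# It probes the feature vector for one (state, action) pair; the state format
# (x, y, damaged, prize) follows the unpacking at the top of get_features, and
# the particular values are made up for illustration.
# state = (2, 2, False, (4, 4))          # at (2,2), not damaged, prize at (4,4)
# print(get_features(state, "up"))       # list of feature values for this pair
# print(len(get_features(state, "up")))  # number of features = number of weights to learn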
18,989 | reinforcement learning def simp_features(state,action)"""returns list of feature values for the state-action pair ""assert action in simple_game_env actions ( , , ,pstate would go to monster monster_ahead( , ,actionf would crash into wall wall_ahead( , ,actionf action is towards prize towards_prize( , ,action,preturn [ , , , feature-based rl learner this learns linear function approximation of the -values it requires the function get features that given state and an action returns list of values for all of the features each environment requires this function to be provided rlfeatures py -feature-based reinforcement learner import random from rlqlearner import rl_agent from display import displayable from utilities import argmaxeflip class sarsa_lfa_learner(rl_agent)""" sarsa_lfa learning agent has belief-state consisting of state is the previous state is {(state,action):valuedict visits is {(state,action):ndict is how many times action was done in state acc_rewards is the accumulated reward it observes (srfor some world-state and real reward ""def __init__(selfenvget_featuresdiscountexplore= step_size= winit= label="sarsa_lfa")"""env is the feature environment to interact with get_features is function get_features(state,actionthat returns the list of feature values discount is the discount factor explore is the proportion of time the agent will explore step_size is gradient descent step size winit is the initial value of the weights label is the label for plotting ""rl_agent __init__(selfself env env version june |
18,990 | self get_features get_features self actions env actions self discount discount self explore explore self step_size step_size self winit winit self label label self restart(restart(is used to make the learner relearn everything this is used by the plotter to create new plots rlfeatures py -(continued def restart(self)"""make the agent relearnand reset the accumulated rewards ""self acc_rewards self state self env state self features self get_features(self statelist(self env actions)[ ]self weights [self winit for in self featuresself action self select_action(self statedo takes in the number of steps rlfeatures py -(continued def do(self,num_steps= )"""do num_steps of interaction with the environment""self display( ," \ta\tr\ts'\tq\tdelta"for in range(num_steps)next_state,reward self env do(self actionself acc_rewards +reward next_action self select_action(next_statefeature_values self get_features(self state,self actionoldq dot_product(self weightsfeature_valuesnextq dot_product(self weightsself get_features(next_state,next_action)delta reward self discount nextq oldq for in range(len(self weights))self weights[ +self step_size delta feature_values[iself display( ,self stateself actionrewardnext_statedot_product(self weightsfeature_values)deltasep='\ 'self state next_state self action next_action def select_action(selfstate)"""returns an action to carry out for the current agent given the stateand the -function this implements an epsilon-greedy approach where self explore is the probability of exploring ""version june |
18,991 | reinforcement learning if flip(self explore)return random choice(self actionselsereturn argmaxe((next_actdot_product(self weightsself get_features(state,next_act))for next_act in self actions def show_actions(self,state=none)"""prints the value for each action in state this may be useful for debugging ""if state is nonestate self state for next_act in self actionsprint(next_act,dot_product(self weightsself get_features(state,next_act)) def dot_product( , )return sum( * for ( , in zip( , )test coderlfeatures py -(continued from rlqtest import senv simple game environment from rlsimplegamefeatures import get_featuressimp_features from rlplot import plot_rl fa sarsa_lfa_learner(senvget_features step_size= #fa max_display_level #fa do( #plot_rl(fa ,steps_explore= ,steps_exploit= ,label="sarsa_lfa( )"fas sarsa_lfa_learner(senvsimp_features step_size= #plot_rl(fas ,steps_explore= ,steps_exploit= ,label="sarsa_lfa(simp)"exercise how does the step-size affect performancetry different step sizes ( other sizes in betweenexplain the behaviour you observe which step size works best for this example explain what evidence you are basing your prediction on exercise does having extra features always helpdoes it sometime helpdoes whether it helps depend on the step sizegive evidence for your claims exercise for each of the following first predictthen plotthen explain the behavour you observed(asarsa lfamodel-based learning (with update per stepand -learning for , steps exploring followed by , steps exploiting (bsarsa lfamodel-based learning and -learning for , steps exploring followed by , steps exploit ii , steps exploring followed by , steps exploit version june |
18,992 | (csuppose your goal was to have the best accumulated reward after , steps you are allowed to change the exploration rate at fixed number of steps for each of the methodswhich is the best position to start exploiting morewhich method is betterwhat if you wanted to have the best reward after , or , stepsbased on this evidenceexplain when it is preferable to use sarsa lfamodelbased learneror -learning importantyou need to run each algorithm more than once your explanation should include the variability as well as the typical behavior experience replay here we consider experience replay with bounded replay buffer for sarsa lfa warningdoes not work properly yet should self env return (reward,stateto be consistent with ( , , , )rllinexperiencereplay py -linear reinforcement learner with experience replay from rlfeatures import sarsa_lfa_learnerdot_product from utilities import flip import random class sarsa_lfa_ar_learner(sarsa_lfa_learner) def __init__(selfenvget_featuresdiscountexplore= step_size= winit= label="sarsa_lfa-ar"max_buffer_size= num_updates_per_action= burn_in= )sarsa_lfa_learner __init__(selfenvget_featuresdiscountexplorestep_sizewinitlabelself max_buffer_size max_buffer_size self action_buffer [ ]*max_buffer_size self number_added self num_updates_per_action num_updates_per_action self burn_in burn_in def add_to_buffer(self,experience)if self number_added self max_buffer_sizeself action_buffer[self number_addedexperience elseif flip(self max_buffer_size/self number_added)position random randrange(self max_buffer_sizeself action_buffer[positionexperience self number_added + def do(self,num_steps= )"""do num_steps of interaction with the environment""self display( ," \ta\tr\ts'\tq\tdelta"for in range(num_steps)version june |
18,993 | reinforcement learning next_state,reward self env do(self actionself add_to_buffer((self state,self action,reward,next_state)#remember experience self acc_rewards +reward next_action self select_action(next_statefeature_values self get_features(self state,self actionoldq dot_product(self weightsfeature_valuesnextq dot_product(self weightsself get_features(next_state,next_action)delta reward self discount nextq oldq for in range(len(self weights))self weights[ +self step_size delta feature_values[iself display( ,self stateself actionrewardnext_statedot_product(self weightsfeature_values)deltasep='\ 'self state next_state self action next_action if self number_added self burn_infor in range(self num_updates_per_action)( , , ,nsself action_buffer[random randrange(min(self number_addedself max_buffer_size))na self select_action(nsfeature_values self get_features( ,aoldq dot_product(self weightsfeature_valuesnextq dot_product(self weightsself get_features(ns,na)delta reward self discount nextq oldq for in range(len(self weights))self weights[ +self step_size delta feature_values[ test coderllinexperiencereplay py -(continued from rlqtest import senv simple game environment from rlsimplegamefeatures import get_featuressimp_features from rlplot import plot_rl fa sarsa_lfa_ar_learner(senvget_features step_size= #fa max_display_level #fa do( #plot_rl(fa ,steps_explore= ,steps_exploit= ,label="sarsa_lfa_ar( )"fas sarsa_lfa_ar_learner(senvsimp_features step_size= #plot_rl(fas ,steps_explore= ,steps_exploit= ,label="sarsa_lfa_ar(simp)"version june |
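Both replay learners above (Q_AR_learner and SARSA_LFA_AR_learner) manage their bounded buffer the same way: once the buffer is full, a new experience overwrites a random slot with probability buffer_size/number_added, so each experience seen so far is retained with roughly equal probability — a form of reservoir sampling. The following is a small self-contained check of that behaviour (not part of the aipython code); the buffer size, stream length and number of trials are arbitrary.

import random
from collections import Counter

def reservoir_add(buffer, number_added, experience, buffer_size):
    """add experience, keeping an approximately uniform sample of all items seen"""
    if number_added < buffer_size:
        buffer.append(experience)
    elif random.random() < buffer_size / number_added:
        buffer[random.randrange(buffer_size)] = experience
    return number_added + 1

counts = Counter()
for trial in range(1000):
    buffer, n = [], 0
    for item in range(100):      # a stream of 100 items into a buffer of size 10
        n = reservoir_add(buffer, n, item, 10)
    counts.update(buffer)
print(min(counts.values()), max(counts.values()))   # every item kept roughly 10% of the time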
Multiagent Systems

Minimax

Here we consider two-player zero-sum games: a player only wins when the other player loses. This can be modeled as a game in which there is a single utility which one agent (the maximizing agent) is trying to maximize and the other agent (the minimizing agent) is trying to minimize.

Creating a two-player game

masProblem.py — A Multiagent Problem

from display import Displayable

class Node(Displayable):
    """A node in a search tree. It has a
    name: a string
    isMax: True if it is a maximizing node, otherwise it is a minimizing node
    children: the list of children
    value: what it evaluates to if it is a leaf
    """
    def __init__(self, name, isMax, value, children):
        self.name = name
        self.isMax = isMax
        self.value = value
        self.allchildren = children

    def isLeaf(self):
        """returns True if this is a leaf node"""
        return self.allchildren is None
18,995 | multiagent systems def children(self)"""returns the list of all children ""return self allchildren def evaluate(self)"""returns the evaluation for this node if it is leaf""return self value the following gives the tree from figure of the book note how is used as value herebut never appears in the trace masproblem py -(continued fig node(" ",true,nonenode(" ",false,nonenode(" ",true,nonenode(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])node(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])])node(" ",true,nonenode(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])node(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])])])node(" ",false,nonenode(" ",true,nonenode(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])node(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])])node(" ",true,nonenode(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])node(" ",false,nonenode(" ",true, ,none)node(" ",true, ,none)])])])]the following is representation of magic-sum gamewhere players take turns picking number in the range [ ]and the first player to have numbers that sum to wins note that this is syntactic variant of tic-tac-toe or naughts and crosses to see thisconsider the numbers on magic square (figure ) numbers that add to correspond exactly to the winning positions version june |
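To see why, note that there are exactly eight 3-element subsets of {1,...,9} that sum to 15 — the same number as the winning lines (rows, columns and diagonals) of tic-tac-toe — and they are exactly the lines of the magic square. A quick check (illustrative only, not part of masProblem.py):

from itertools import combinations
wins = [c for c in combinations(range(1, 10), 3) if sum(c) == 15]
print(len(wins))   # 8
print(wins)        # (1,5,9), (1,6,8), (2,4,9), (2,5,8), (2,6,7), (3,4,8), (3,5,7), (4,5,6)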
18,996 | figure magic square of tic-tac-toe played on the magic square note that we do not remove symmetries (what are the symmetrieshow do the symmetries of tic-tac-toe translate here?masproblem py -(continued class magic_sum(node)def __init__(selfxmove=truelast_move=noneavailable=[ , , , , , , , , ] =[] =[])"""this is node in the search for the magic-sum game xmove is true if the next move belongs to last_move is the number selected in the last move available is the list of numbers that are available to be chosen is the list of numbers already chosen by is the list of numbers already chosen by ""self ismax self xmove xmove self last_move last_move self available available self self self allchildren none #computed on demand lm str(last_moveself name "startif not last_move else " ="+lm if xmove else " ="+lm def children(self)if self allchildren is noneif self xmoveself allchildren magic_sum(xmove not self xmovelast_move selavailable [ for in self available if is not sel] self +[sel] self ofor sel in self availableelseself allchildren magic_sum(xmove not self xmovelast_move selavailable [ for in self available if is not sel]version june |
18,997 | multiagent systems self xo self +[sel]for sel in self availablereturn self allchildren def isleaf(self)""" leaf has no numbers available or is win for one of the players we only need to check for win for if it is currently ' turnand only check for win for if it is ' turn (otherwise it would have been win earlier""return (self available =[or (sum_to_ (self last_move,self oif self xmove else sum_to_ (self last_move,self )) def evaluate(self)if self xmove and sum_to_ (self last_move,self )return - elif not self xmove and sum_to_ (self last_move,self )return elsereturn def sum_to_ (last,selected)"""is true if lasttoegether with two other elements of selected sum to ""return any(last+ + = for in selected if !last for in selected if !last and !aminimax and - pruning this is naive depth-first minimax algorithmmasminimax py -minimax search with alpha-beta pruning def minimax(node,depth)"""returns the value of nodeand best path for the agents ""if node isleaf()return node evaluate(),none elif node ismaxmax_score float("-inf"max_path none for in node children()score,path minimax( ,depth+ if score max_scoremax_score score max_path name,path version june |
18,998 | return max_score,max_path elsemin_score float("inf"min_path none for in node children()score,path minimax( ,depth+ if score min_scoremin_score score min_path name,path return min_score,min_path the following is depth-first minimax with - pruning it returns the value for node as well as best path for the agents masminimax py -(continued def minimax_alpha_beta(node,alpha,beta,depth= )"""node is nodealpha and beta are cutoffsdepth is the depth returns valuepath where path is sequence of nodes that results in the value ""node display( ,"*depth,"minimax_alpha_beta(",node name,"",alpha""beta,")"best=none only used if it will be pruned if node isleaf()node display( ,"*depth,"returning leaf value",node evaluate()return node evaluate(),none elif node ismaxfor in node children()score,path minimax_alpha_beta( ,alpha,beta,depth+ if score >betabeta pruning node display( ,"*depth,"pruned due to beta=",beta," =", namereturn scorenone if score alphaalpha score best namepath node display( ,"*depth,"returning max alpha",alpha,"best",bestreturn alpha,best elsefor in node children()score,path minimax_alpha_beta( ,alpha,beta,depth+ if score <alphaalpha pruning node display( ,"*depth,"pruned due to alpha=",alpha," =", namereturn scorenone if score betabeta=score best name,path node display( ,"*depth,"returning min beta",beta,"best=",bestreturn beta,best testingversion june |
masminimax py -(continued from masproblem import fig magic_sumnode node max_display_level= print detailed trace minimax_alpha_beta(fig - , minimax_alpha_beta(magic_sum()- , #to see how much time alpha-beta pruning can save over minimaxuncomment the following#import timeit #timeit timer("minimax(magic_sum(), )",setup="from __main__ import minimaxmagic_sum#timeit(number= #trace=false #timeit timer("minimax_alpha_beta(magic_sum()- , )"#setup="from __main__ import minimax_alpha_betamagic_sum#timeit(number=

Multiagent Learning

The next code is for multiple agents that learn when interacting with other agents. This code is designed to be extended, and as such is restricted to two agents, a single state, and the only observation being the reward. Coordinating agents can't easily implement the agent architecture used earlier: in that architecture, an agent calls the environment. That architecture was chosen because it was simple; however, it does not really work when there are multiple agents. Instead, we have a controller that tells the agents the percepts (here the percepts are just the reward).

maslearn py -simulations of agents learning from display import displayable import utilities argmaxall for (element,valuepairs import matplotlib pyplot as plt import random class gameagent(displayable)next_id= def __init__(selfactions)""actions is the set of actions the agent can do it needs to be told that""self actions actions self id gameagent next_id gameagent next_id + self display( , "agent {self idhas actions {actions}"self total_score self dist {act: for act in actionsunnormalized distribution