cspSLS.py -- (continued)

from cspExamples import test_csp

def sls_solver(csp, prob_best=0.7):
    """stochastic local searcher (prob_best=0.7)"""
    se0 = SLSearcher(csp)
    se0.search(1000, prob_best)
    return se0.current_assignment

def any_conflict_solver(csp):
    """stochastic local searcher (any-conflict)"""
    return sls_solver(csp, 0)

if __name__ == "__main__":
    test_csp(sls_solver)
    test_csp(any_conflict_solver)

from cspExamples import csp1, csp2, crossword1, crossword1d

## Test solving CSPs with search:
# se1 = SLSearcher(csp1); print(se1.search(100))
# se2 = SLSearcher(csp2); print(se2.search(1000, 1.0))    # greedy
# se2 = SLSearcher(csp2); print(se2.search(1000, 0))      # any_conflict
# se2 = SLSearcher(csp2); print(se2.search(1000, 0.7))    # 70% greedy; 30% any_conflict
# SLSearcher.max_display_level = 2    # more detailed display
# se3 = SLSearcher(crossword1); print(se3.search(100), 0.7)
## Runtime distribution:
# p = Runtime_distribution(csp2)
# p.plot_runs(1000, 1000, 0)      # any_conflict
# p.plot_runs(1000, 1000, 1.0)    # greedy
# p.plot_runs(1000, 1000, 0.7)    # 70% greedy; 30% any_conflict

Exercise. Modify this to plot the runtime instead of the number of steps. To measure runtime, use timeit (https://docs.python.org/3/library/timeit.html). Small runtimes are inaccurate, so timeit can run the same code multiple times. Stochastic local search algorithms give different runtimes each time they are called. To make the timing meaningful, you need to make sure the random seed is the same for each repeated call (see random.getstate and random.setstate in https://docs.python.org/3/library/random.html). Because the runtime for different seeds can vary a great deal, for each seed you should start with one iteration and keep multiplying the number of iterations by a constant factor until the time exceeds a fixed threshold (a fraction of a second). Make sure you plot the average time for each run. Before you start, try to estimate the total runtime, so you will be able to tell if there is a problem with the algorithm stopping.

Discrete Optimization

A soft constraint is like a constraint, but the condition is a real-valued function. Because the condition was not forced to be Boolean, we can make a soft constraint just by reusing the Constraint class.

cspSoft.py -- representations of soft constraints

from cspProblem import Variable, Constraint, CSP
class SoftConstraint(Constraint):
    """A soft constraint consists of:
    * scope: a tuple of variables
    * function: a real-valued function that can be applied to a tuple of values for the variables
    * string: a string for printing the constraint. All of the strings must be unique.
    """
    def __init__(self, scope, function, string=None, position=None):
        Constraint.__init__(self, scope, function, string, position)

    def value(self, assignment):
        return self.holds(assignment)

cspSoft.py -- (continued)

A = Variable('A', {1, 2})
B = Variable('B', {1, 2, 3})
C = Variable('C', {1, 2})
D = Variable('D', {1, 2})

def c1fun(a, b):
    if a == 1:
        return (... if b == 1 else ...)
    else:
        return (... if b == 1 else ... if b == 2 else ...)
c1 = SoftConstraint([A, B], c1fun, "c1")

def c2fun(b, c):
    if b == 1:
        return (... if c == 1 else ...)
    elif b == 2:
        return (... if c == 1 else ...)
    else:
        return (... if c == 1 else ...)
c2 = SoftConstraint([B, C], c2fun, "c2")

def c3fun(b, d):
    if b == 1:
        return (... if d == 1 else ...)
    elif b == 2:
        return ...
    else:
        return (... if d == 1 else ...)
c3 = SoftConstraint([B, D], c3fun, "c3")

def penalty_if_same(pen):
    "returns a function that gives a penalty of pen if the arguments are the same"
    return lambda x, y: (pen if (x == y) else 0)

c4 = SoftConstraint([C, D], penalty_if_same(...), "c4")

scsp1 = CSP("scsp1", {A, B, C, D}, [c1, c2, c3, c4])

## The second soft CSP has an extra variable, and extra constraints
E = Variable('E', {1, 2})
c5 = SoftConstraint([C, E], penalty_if_same(...), "c5")
c6 = SoftConstraint([D, E], penalty_if_same(...), "c6")
scsp2 = CSP("scsp2", {A, B, C, D, E}, [c1, c2, c3, c4, c5, c6])

Branch-and-Bound Search

Here we specialize the branch-and-bound algorithm (presented earlier for searching) to find an optimal assignment for a soft CSP.

cspSoft.py -- (continued)

from display import Displayable, visualize
import math

class DF_branch_and_bound_opt(Displayable):
    """returns a branch and bound searcher for a problem.
    An optimal assignment with cost less than bound can be found by calling optimize().
    """
    def __init__(self, csp, bound=math.inf):
        """creates a searcher that can be used with optimize() to find an optimal assignment.
        bound gives the initial bound. By default this is infinite,
        meaning there is no initial pruning due to the bound.
        """
        super().__init__()
        self.csp = csp
        self.best_asst = None
        self.bound = bound

    def optimize(self):
        """returns an optimal solution to a problem with cost less than bound.
        returns None if there is no solution with cost less than bound."""
        self.num_expanded = 0
        self.cbsearch({}, 0, self.csp.constraints)
        self.display(1, "Number of paths expanded:", self.num_expanded)
        return self.best_asst, self.bound

    def cbsearch(self, asst, cost, constraints):
        """finds the optimal solution that extends asst and is less than bound"""
        self.display(2, "cbsearch:", asst, cost, constraints)
        can_eval = [c for c in constraints if c.can_evaluate(asst)]
        rem_cons = [c for c in constraints if c not in can_eval]
        newcost = cost + sum(c.value(asst) for c in can_eval)
        self.display(2, "evaluating:", can_eval, "cost:", newcost)
        if newcost < self.bound:
            self.num_expanded += 1
            if rem_cons == []:
                self.best_asst = asst
                self.bound = newcost
                self.display(1, "New best assignment:", asst, " cost:", newcost)
            else:
                var = next(var for var in self.csp.variables if var not in asst)
                for val in var.domain:
                    self.cbsearch({var: val} | asst, newcost, rem_cons)

# bnb = DF_branch_and_bound_opt(scsp1)
# bnb.max_display_level = 3    # show more detail
# bnb.optimize()
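To see the optimizer on something smaller than scsp1, here is a minimal sketch. The variables X and Y, the constraint cxy, and tiny_scsp are made up for illustration; it assumes the Variable and CSP classes from cspProblem and the SoftConstraint, penalty_if_same, and DF_branch_and_bound_opt definitions above (including the Constraint.can_evaluate method that cbsearch relies on).

from cspProblem import Variable, CSP

X = Variable('X', {1, 2})
Y = Variable('Y', {1, 2})
cxy = SoftConstraint([X, Y], penalty_if_same(3), "cxy")   # cost 3 when X == Y, otherwise 0
tiny_scsp = CSP("tiny_scsp", {X, Y}, [cxy])

bnb_tiny = DF_branch_and_bound_opt(tiny_scsp)
print(bnb_tiny.optimize())   # an optimal assignment gives X and Y different values, with cost 0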
Propositions and Inference

Representing Knowledge Bases

A clause consists of a head (an atom) and a body. A body is represented as a list of atoms. Atoms are represented as strings.

logicProblem.py -- representations of logics

class Clause(object):
    """A definite clause"""

    def __init__(self, head, body=[]):
        """clause with atom head and list of atoms body"""
        self.head = head
        self.body = body

    def __str__(self):
        """returns the string representation of a clause"""
        if self.body:
            return self.head + " <- " + " & ".join(self.body) + "."
        else:
            return self.head + "."

An askable atom can be asked of the user. The user can respond in English or French or just with "y".

logicProblem.py -- (continued)

class Askable(object):
    """An askable atom"""

    def __init__(self, atom):
        """an atom that can be asked of the user"""
        self.atom = atom

    def __str__(self):
        """returns the string representation of a clause"""
        return "askable " + self.atom + "."

def yes(ans):
    """returns true if the answer is yes in some form"""
    return ans.lower() in ['yes', 'yes.', 'oui', 'oui.', 'y', 'y.']   # bilingual

A knowledge base is a list of clauses and askables. In order to make top-down inference faster, this creates a dictionary that maps each atom into the set of clauses with that atom in the head.

logicProblem.py -- (continued)

from display import Displayable

class KB(Displayable):
    """A knowledge base consists of a set of clauses.
    This also creates a dictionary to give fast access to the clauses with an atom in the head.
    """
    def __init__(self, statements=[]):
        self.statements = statements
        self.clauses = [c for c in statements if isinstance(c, Clause)]
        self.askables = [c.atom for c in statements if isinstance(c, Askable)]
        self.atom_to_clauses = {}   # dictionary giving clauses with atom as head
        for c in self.clauses:
            if c.head in self.atom_to_clauses:
                self.atom_to_clauses[c.head].add(c)
            else:
                self.atom_to_clauses[c.head] = {c}

    def clauses_for_atom(self, a):
        """returns the set of clauses with atom a as the head"""
        if a in self.atom_to_clauses:
            return self.atom_to_clauses[a]
        else:
            return set()

    def __str__(self):
        """returns a string representation of this knowledge base"""
        return '\n'.join([str(c) for c in self.statements])

Here is a trivial example ("I think therefore I am") used in the unit tests:

logicProblem.py -- (continued)

triv_KB = KB([
    Clause('i_am', ['i_think']),
    Clause('i_think'),
    Clause('i_smell', ['i_exist'])
    ])

Here is a representation of the electrical domain of the textbook:

logicProblem.py -- (continued)

elect = KB([
    Clause('light_l1'),
    Clause('light_l2'),
    Clause('ok_l1'),
    Clause('ok_l2'),
    Clause('ok_cb1'),
    Clause('ok_cb2'),
    Clause('live_outside'),
    Clause('live_l1', ['live_w0']),
    Clause('live_w0', ['up_s2', 'live_w1']),
    Clause('live_w0', ['down_s2', 'live_w2']),
    Clause('live_w1', ['up_s1', 'live_w3']),
    Clause('live_w2', ['down_s1', 'live_w3']),
    Clause('live_l2', ['live_w4']),
    Clause('live_w4', ['up_s3', 'live_w3']),
    Clause('live_p_1', ['live_w3']),
    Clause('live_w3', ['live_w5', 'ok_cb1']),
    Clause('live_p_2', ['live_w6']),
    Clause('live_w6', ['live_w5', 'ok_cb2']),
    Clause('live_w5', ['live_outside']),
    Clause('lit_l1', ['light_l1', 'live_l1', 'ok_l1']),
    Clause('lit_l2', ['light_l2', 'live_l2', 'ok_l2']),
    Askable('up_s1'),
    Askable('down_s1'),
    Askable('up_s2'),
    Askable('down_s2'),
    Askable('up_s3'),
    Askable('down_s3')
    ])

# print(kb)

The following knowledge base is false of the intended interpretation. One of the clauses is wrong; can you see which one? We will show how to debug it.

logicProblem.py -- (continued)

elect_bug = KB([
    Clause('light_l'), Clause('ok_l'), Clause('ok_l'),
    Clause('ok_cb'), Clause('ok_cb'), Clause('live_outside'),
    Clause('live_p_', ['live_w']),
    Clause('live_w', ['live_w', 'ok_cb']),
    Clause('light_l'),
    Clause('live_w', ['live_outside']),
    Clause('lit_l', ['light_l', 'live_l', 'ok_l']),
    Clause('lit_l', ['light_l', 'live_l', 'ok_l']),
    Clause('live_l', ['live_w']),
    Clause('live_w', ['up_s', 'live_w']),
    Clause('live_w', ['down_s', 'live_w']),
    Clause('live_w', ['up_s', 'live_w']),
    Clause('live_w', ['down_s', 'live_w']),
    Clause('live_l', ['live_w']),
    Clause('live_w', ['up_s', 'live_w']),
    Clause('live_p_', ['live_w']),
    Clause('live_w', ['live_w', 'ok_cb']),
    Askable('up_s'), Askable('down_s'), Askable('up_s'),
    Clause('light_l'), Clause('ok_l'), Clause('light_l'),
    Clause('ok_l'), Clause('ok_l'), Clause('ok_cb'),
    Clause('ok_cb'), Clause('live_outside'),
    Clause('live_p_', ['live_w']),
    Clause('live_w', ['live_w', 'ok_cb']),
    Clause('ok_l'), Clause('ok_cb'), Clause('ok_cb'),
    Clause('live_outside'),
    Clause('live_p_', ['live_w']),
    Clause('live_w', ['live_w', 'ok_cb']),
    Askable('down_s'), Askable('up_s'), Askable('down_s')
    ])

# print(kb)

Bottom-Up Proofs (with askables)

fixed_point computes the fixed point of the knowledge base kb.

logicBottomUp.py -- bottom-up proof procedure for definite clauses

from logicProblem import yes

def fixed_point(kb):
    """returns the fixed point of knowledge base kb
18,808 | ""fp ask_askables(kbadded true while addedadded false added is true when an atom was added to fp this iteration for in kb clausesif head not in fp and all( in fp for in body)fp add( headadded true kb display( , head,"added to fp due to clause",creturn fp def ask_askables(kb)return {at for at in kb askables if yes(input("is "+at+true"))the following provides trivial unit testby default using the knowledge base triv_kblogicbottomup py -(continued from logicproblem import triv_kb def test(kb=triv_kbfixedpt {'i_am','i_think'})fp fixed_point(kbassert fp =fixedpt"kb gave result "+str(fpprint("passed unit test"if __name__ ="__main__"test( from logicproblem import elect elect max_display_level= give detailed trace fixed_point(electexercise it is not very user-friendly to ask all of the askables up-front implement ask-the-user so that questions are only asked if usefuland are not re-asked for exampleif there is clause ewhere and are askablec and only need to be asked if abd are all in fp and they have not been asked before askable only needs to be asked if the user says "yesto askable doesn' need to be asked if the user previously replied "noto this form of ask-the-user can ask different set of questions than the topdown interpreter that asks questions when encountered give an example where they ask different questions (neither set of questions asked is subset of the otherexercise this algorithm runs in time ( )where is the number of clausesfor bounded number of elements in the bodyeach iteration goes through each of the clausesand in the worst caseit will do an iteration for each clause it is possible to implement this in time (ntime by creating an index that maps an atom to the set of clauses with that atom in the body implement this what is its complexity as function of and bthe maximum number of atoms in the body of clauseversion june |
18,809 | propositions and inference exercise it is possible to be asymptotically more efficient (in terms of the number of elements in bodythan the method in the previous question by noticing that each element of the body of clause only needs to be checked once for examplethe clause dneeds only be considered when is added to fp once is added to fpif is already in pf we know that can be added as soon as is added implement this what is its complexity as function of and bthe maximum number of atoms in the body of clause top-down proofs (with askablesprove(kbgoalis used to prove goal from knowledge basekbwhere goal is list of atoms it returns true if kb goal the indent is used when displaying the code (and doesn' need to have non-default valuelogictopdown py -top-down proof procedure for definite clauses from logicproblem import yes def prove(kbans_bodyindent="")"""returns true if kb |ans_body ans_body is list of atoms to be proved ""kb display( ,indent,'yes <-',join(ans_body)if ans_bodyselected ans_body[ select first atom from ans_body if selected in kb askablesreturn (yes(input("is "+selected+true")and prove(kb,ans_body[ :],indent+")elsereturn any(prove(kb,cl body+ans_body[ :],indent+"for cl in kb clauses_for_atom(selected)elsereturn true empty body is true the following provides simple unit test that is hard wired for triv_kblogictopdown py -(continued from logicproblem import triv_kb def test() prove(triv_kb,['i_am']assert "triv_kb proving i_am gave "+str( prove(triv_kb,['i_smell']assert not "triv_kb proving i_smell gave "+str( print("passed unit tests"if __name__ ="__main__"test(try from logicproblem import elect elect max_display_level= give detailed trace prove(elect,['live_w ']prove(elect,['lit_l ']version june |
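When experimenting with prove on elect, it can be tedious to type answers at the prompt. The following is a small helper, not part of logicTopDown.py (the name prove_with_answers is made up), that temporarily replaces the built-in input so that askables are answered from a given set. It assumes, as in the code above, that each prompt contains the name of the askable being asked.

import builtins

def prove_with_answers(kb, ans_body, yes_atoms):
    """prove ans_body from kb, answering askables in yes_atoms with 'yes' and all others with 'no'"""
    saved_input = builtins.input
    builtins.input = lambda prompt: ("yes" if any(a in prompt for a in yes_atoms) else "no")
    try:
        return prove(kb, ans_body)
    finally:
        builtins.input = saved_input

# e.g. prove_with_answers(elect, ['lit_l1'], {'down_s1', 'down_s2'})   # should return True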
Exercise. This code can re-ask a question multiple times. Implement this code so that it only asks a question once and remembers the answer. Also implement a function to forget the answers.

Exercise. What search method is this using? Implement the search interface so that it can use A* or other searching methods. Define an admissible heuristic that is not always zero.

Debugging and Explanation

Here we modify the top-down procedure to build a proof tree that can be traversed for explanation and debugging.

prove_atom(kb, atom) returns a proof for atom from the knowledge base kb, where a proof is a pair of the atom and the proofs for the elements of the body of the clause used to prove the atom. prove_body(kb, body) returns a list of proofs for the list body from the knowledge base kb. The indent is used when displaying the code (and doesn't need to have a non-default value).

logicExplain.py -- explaining proof procedure for definite clauses

from logicProblem import yes   # for asking the user

def prove_atom(kb, atom, indent=""):
    """returns a pair (atom, proofs) where proofs is the list of proofs
    of the elements of the body of the clause used to prove atom.
    """
    kb.display(2, indent, 'proving', atom)
    if atom in kb.askables:
        if yes(input("Is " + atom + " true? ")):
            return (atom, "answered")
        else:
            return "fail"
    else:
        for cl in kb.clauses_for_atom(atom):
            kb.display(2, indent, "trying", atom, '<-', ' & '.join(cl.body))
            pr_body = prove_body(kb, cl.body, indent)
            if pr_body != "fail":
                return (atom, pr_body)
        return "fail"

def prove_body(kb, ans_body, indent=""):
    """returns a proof tree if kb |- ans_body, or "fail" if there is no proof
    ans_body is a list of atoms in a body to be proved
    """
    proofs = []
    for atom in ans_body:
        proof_at = prove_atom(kb, atom, indent + "  ")
        if proof_at == "fail":
            return "fail"    # fail if any proof fails
        else:
            proofs.append(proof_at)
    return proofs

The following provides a simple unit test that is hard wired for triv_KB:

logicExplain.py -- (continued)

from logicProblem import triv_KB

def test():
    a1 = prove_atom(triv_KB, 'i_am')
    assert a1, "triv_KB proving i_am gave " + str(a1)
    a2 = prove_atom(triv_KB, 'i_smell')
    assert a2 == "fail", "triv_KB proving i_smell gave " + str(a2)
    print("Passed unit tests")

if __name__ == "__main__":
    test()

# try
# from logicProblem import elect, elect_bug
# elect.max_display_level = 3    # give detailed trace
# prove_atom(elect, 'live_w6')
# prove_atom(elect, 'lit_l1')

interact(kb) provides an interactive interface to explore proofs for knowledge base kb. The user can ask to prove atoms and can ask how an atom was proved. To ask how, there must be a current atom for which there is a proof. This starts as the atom asked about. When the user asks "how n", the current atom becomes the n-th element of the body of the clause used to prove the (previous) current atom. The command "up" makes the current atom the atom in the head of the rule containing the (previous) current atom. Thus "how n" moves down the proof tree and "up" moves up the proof tree, allowing the user to explore the full proof.

logicExplain.py -- (continued)

helptext = """Commands are:
ask atom     ask if there is a proof for atom (atom should not be in quotes)
how          show the clause that was used to prove atom
how n        show the clause used to prove the nth element of the body
up           go back up the proof tree to explore other parts of the proof tree
kb           print the knowledge base
quit         quit this interaction (and go back to Python)
help         print this text"""

def interact(kb):
    going = True
    ups = []          # stack for going up the proof tree
    proof = "fail"    # there is no proof to start
    while going:
        inp = input("logicExplain: ")
        inps = inp.split(" ")
        try:
            command = inps[0]
            if command == "quit":
                going = False
            elif command == "ask":
                proof = prove_atom(kb, inps[1])
                if proof == "fail":
                    print("fail")
                else:
                    print("yes")
            elif command == "how":
                if proof == "fail":
                    print("there is no proof")
                elif len(inps) == 1:
                    print_rule(proof)
                else:
                    try:
                        ups.append(proof)
                        proof = proof[1][int(inps[1])]   # nth argument of rule
                        print_rule(proof)
                    except:
                        print('In "how n", n must be a number between 0 and',
                              len(proof[1]) - 1, "inclusive.")
            elif command == "up":
                if ups:
                    proof = ups.pop()
                else:
                    print("No rule to go up to.")
                print_rule(proof)
            elif command == "kb":
                print(kb)
            elif command == "help":
                print(helptext)
            else:
                print("Unknown command:", inp)
                print("Use help for help")
        except:
            print("Unknown command:", inp)
            print("Use help for help")

def print_rule(proof):
    (head, body) = proof
    if body == "answered":
        print(head, "was answered yes")
    elif body == []:
        print(head, "is a fact")
    else:
        print(head, "<-")
        for i, a in enumerate(body):
            print(i, ":", a[0])

# try
# interact(elect)
# Which clause is wrong in elect_bug? Try:
# interact(elect_bug)
# logicExplain: ask lit_l1

The following shows an interaction for the knowledge base elect:

>>> interact(elect)
logicExplain: ask lit_l1
Is up_s2 true? no
Is down_s2 true? yes
Is down_s1 true? yes
yes
logicExplain: how
lit_l1 <-
0 : light_l1
1 : live_l1
2 : ok_l1
logicExplain: how 1
live_l1 <-
0 : live_w0
logicExplain: how 0
live_w0 <-
0 : down_s2
1 : live_w2
logicExplain: how 0
down_s2 was answered yes
logicExplain: up
live_w0 <-
0 : down_s2
1 : live_w2
logicExplain: how 1
live_w2 <-
0 : down_s1
1 : live_w3
logicExplain: quit
>>>

Exercise. The above code only ever explores one proof, the first proof found. Change the code to enumerate the proof trees (by returning a list of all proof trees, or preferably using yield). Add the command "retry" to the user interface to try another proof.
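Before moving on to assumables, here is a small helper, not part of logicExplain.py (the name print_proof is made up), for printing a whole proof tree returned by prove_atom. It uses the (atom, proofs) / "answered" / "fail" representation described above.

def print_proof(proof, indent=""):
    """prints the proof tree returned by prove_atom"""
    if proof == "fail":
        print(indent + "fail")
    else:
        atom, body = proof
        if body == "answered":
            print(indent + atom, "(answered yes)")
        elif body == []:
            print(indent + atom, "(fact)")
        else:
            print(indent + atom, "<-")
            for subproof in body:
                print_proof(subproof, indent + "  ")

# e.g. print_proof(prove_atom(elect, 'live_w5'))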
Assumables

An atom can be made assumable by including Assumable(atom) in the knowledge base. A knowledge base that can include assumables is declared with KBA.

logicAssumables.py -- definite clauses with assumables

from logicProblem import Clause, Askable, KB, yes

class Assumable(object):
    """an assumable atom"""
    def __init__(self, atom):
        """an atom that can be assumed"""
        self.atom = atom

    def __str__(self):
        """returns the string representation of a clause"""
        return "assumable " + self.atom + "."

class KBA(KB):
    """A knowledge base that can include assumables"""
    def __init__(self, statements):
        self.assumables = [c.atom for c in statements if isinstance(c, Assumable)]
        KB.__init__(self, statements)

The top-down Horn clause interpreter, prove_all_ass, returns a list of the sets of assumables that imply ans_body. This list will contain all of the minimal sets of assumables, but can also find non-minimal sets, and repeated sets, if they can be generated with separate proofs. The set assumed is the set of assumables already assumed.

logicAssumables.py -- (continued)

    def prove_all_ass(self, ans_body, assumed=set()):
        """returns a list of sets of assumables that extend assumed
        to imply ans_body from self.
        ans_body is a list of atoms (it is the body of the answer clause).
        assumed is a set of assumables already assumed
        """
        if ans_body:
            selected = ans_body[0]   # select first atom from ans_body
            if selected in self.askables:
                if yes(input("Is " + selected + " true? ")):
                    return self.prove_all_ass(ans_body[1:], assumed)
                else:
                    return []   # no answers
            elif selected in self.assumables:
                return self.prove_all_ass(ans_body[1:], assumed | {selected})
            else:
                return [ass
                        for cl in self.clauses_for_atom(selected)
                        for ass in self.prove_all_ass(cl.body + ans_body[1:], assumed)
                        ]   # union of answers for each clause with head=selected
        else:                  # empty body
            return [assumed]   # one answer

    def conflicts(self):
        """returns a list of minimal conflicts"""
        return minsets(self.prove_all_ass(['false']))

Given a list of sets, minsets returns a list of the minimal sets in the list. For example, minsets([{1,2}, {1}, {1,2,3}, {2,3}, {3}]) returns [{1}, {3}].

logicAssumables.py -- (continued)

def minsets(ls):
    """ls is a list of sets
    returns a list of minimal sets in ls
    """
    ans = []   # elements known to be minimal
    for c in ls:
        if not any(c1 < c for c1 in ls) and not any(c1 <= c for c1 in ans):
            ans.append(c)
    return ans

# minsets([{1,2}, {1}, {1,2,3}, {2,3}, {3}])

Warning: minsets works for a list of sets or for a set of (frozen) sets, but it does not work for a generator of sets. For example, try to predict and then test:

minsets(e for e in [{1,2}, {1}, {1,2,3}, {2,3}, {3}])

The diagnoses can be constructed from the (minimal) conflicts as follows. This also works if there are non-minimal conflicts, but is not as efficient.

logicAssumables.py -- (continued)

def diagnoses(cons):
    """cons is a list of (minimal) conflicts.
    returns a list of diagnoses."""
    if cons == []:
        return [set()]
    else:
        return minsets([({e} | d)            # set union
                        for e in cons[0]
                        for d in diagnoses(cons[1:])])

Test cases:

logicAssumables.py -- (continued)

electa = KBA([
    Clause('light_l1'),
    Clause('light_l2'),
    Assumable('ok_l1'),
    Assumable('ok_l2'),
    Assumable('ok_s1'),
    Assumable('ok_s2'),
    Assumable('ok_s3'),
    Assumable('ok_cb1'),
    Assumable('ok_cb2'),
    Assumable('live_outside'),
    Clause('live_l1', ['live_w0']),
    Clause('live_w0', ['up_s2', 'ok_s2', 'live_w1']),
    Clause('live_w0', ['down_s2', 'ok_s2', 'live_w2']),
    Clause('live_w1', ['up_s1', 'ok_s1', 'live_w3']),
    Clause('live_w2', ['down_s1', 'ok_s1', 'live_w3']),
    Clause('live_l2', ['live_w4']),
    Clause('live_w4', ['up_s3', 'ok_s3', 'live_w3']),
    Clause('live_p_1', ['live_w3']),
    Clause('live_w3', ['live_w5', 'ok_cb1']),
    Clause('live_p_2', ['live_w6']),
    Clause('live_w6', ['live_w5', 'ok_cb2']),
    Clause('live_w5', ['live_outside']),
    Clause('lit_l1', ['light_l1', 'live_l1', 'ok_l1']),
    Clause('lit_l2', ['light_l2', 'live_l2', 'ok_l2']),
    Askable('up_s1'),
    Askable('down_s1'),
    Askable('up_s2'),
    Askable('down_s2'),
    Askable('up_s3'),
    Askable('down_s3'),
    Askable('dark_l1'),
    Askable('dark_l2'),
    Clause('false', ['dark_l1', 'lit_l1']),
    Clause('false', ['dark_l2', 'lit_l2'])
    ])

# electa.prove_all_ass(['false'])
# cs = electa.conflicts()
# print(cs)
# diagnoses(cs)   # diagnoses from conflicts

Exercise. To implement a version of conflicts that never generates non-minimal conflicts, modify prove_all_ass to implement iterative deepening on the number of assumables used in a proof, and prune any set of assumables that is a superset of a conflict.

Exercise. Implement explanations(self, body), where body is a list of atoms, that returns a list of the minimal explanations of the body. This does not require modification of prove_all_ass.

Exercise. Implement explanations, as in the previous question, so that it never generates non-minimal explanations. Hint: modify prove_all_ass to implement iterative deepening on the number of assumptions, generating conflicts and explanations together, and pruning as early as possible.
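As a quick check of minsets and diagnoses, independent of the electrical domain, here is a made-up example. The diagnoses of a list of conflicts are the minimal hitting sets of those conflicts.

cons = [{'a', 'b'}, {'b', 'c'}]
print(minsets(cons))     # [{'a','b'}, {'b','c'}] - both conflicts are already minimal
print(diagnoses(cons))   # [{'a','c'}, {'b'}] (in some order): assuming b is faulty explains
                         # both conflicts; otherwise both a and c must be faulty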
Planning with Certainty

Representing Actions and Planning Problems

The STRIPS representation of an action consists of:
- the name of the action
- preconditions: a dictionary of feature:value pairs that specifies that the feature must have this value for the action to be possible
- effects: a dictionary of feature:value pairs that are made true by this action. In particular, a feature in the dictionary has the corresponding value (and not its previous value) after the action, and a feature not in the dictionary keeps its old value.

stripsProblem.py -- STRIPS representations of actions

class Strips(object):
    def __init__(self, name, preconds, effects, cost=1):
        """
        defines the STRIPS representation for an action:
        * name is the name of the action
        * preconds, the preconditions, is a feature:value dictionary that must hold
          for the action to be carried out
        * effects is a feature:value map that this action makes true. The action changes
          the value of any feature specified here, and leaves other features unchanged.
        * cost is the cost of the action
        """
        self.name = name
        self.preconds = preconds
        self.effects = effects
        self.cost = cost

    def __repr__(self):
        return self.name

A STRIPS domain consists of:
- a dictionary that maps each feature into a set of possible values for the feature
- a list of the actions

stripsProblem.py -- (continued)

class STRIPS_domain(object):
    def __init__(self, feature_domain_dict, actions):
        """Problem domain
        feature_domain_dict is a feature:domain dictionary,
            mapping each feature to its domain
        actions
        """
        self.feature_domain_dict = feature_domain_dict
        self.actions = actions

A planning problem consists of a planning domain, an initial state, and a goal. The goal does not need to fully specify the final state.

stripsProblem.py -- (continued)

class Planning_problem(object):
    def __init__(self, prob_domain, initial_state, goal):
        """
        a planning problem consists of
        * a planning domain
        * the initial state
        * a goal
        """
        self.prob_domain = prob_domain
        self.initial_state = initial_state
        self.goal = goal

Robot Delivery Domain

The following specifies the robot delivery domain of the textbook, shown in the figure below.

Figure: Robot delivery domain. Locations: coffee shop (cs), Sam's office (off), lab (lab), mail room (mr). Features used to describe states: RLoc (Rob's location), RHC (Rob has coffee), SWC (Sam wants coffee), MW (mail is waiting), RHM (Rob has mail). Actions: mc (move clockwise), mcc (move counterclockwise), puc (pick up coffee), dc (deliver coffee), pum (pick up mail), dm (deliver mail).

stripsProblem.py -- (continued)

boolean = {True, False}

delivery_domain = STRIPS_domain(
    {'RLoc': {'cs', 'off', 'lab', 'mr'}, 'RHC': boolean, 'SWC': boolean,
     'MW': boolean, 'RHM': boolean},                  # feature:values dictionary
    {Strips('mc_cs', {'RLoc': 'cs'}, {'RLoc': 'off'}),
     Strips('mc_off', {'RLoc': 'off'}, {'RLoc': 'lab'}),
     Strips('mc_lab', {'RLoc': 'lab'}, {'RLoc': 'mr'}),
     Strips('mc_mr', {'RLoc': 'mr'}, {'RLoc': 'cs'}),
     Strips('mcc_cs', {'RLoc': 'cs'}, {'RLoc': 'mr'}),
     Strips('mcc_off', {'RLoc': 'off'}, {'RLoc': 'cs'}),
     Strips('mcc_lab', {'RLoc': 'lab'}, {'RLoc': 'off'}),
     Strips('mcc_mr', {'RLoc': 'mr'}, {'RLoc': 'lab'}),
     Strips('puc', {'RLoc': 'cs', 'RHC': False}, {'RHC': True}),
     Strips('dc', {'RLoc': 'off', 'RHC': True}, {'RHC': False, 'SWC': False}),
     Strips('pum', {'RLoc': 'mr', 'MW': True}, {'RHM': True, 'MW': False}),
     Strips('dm', {'RLoc': 'off', 'RHM': True}, {'RHM': False})
     })

Figure: Blocks world, showing states connected by the two parameterized actions move(x, y, z) (move block x from y onto block z) and move(x, y, table) (move block x from y onto the table).

stripsProblem.py -- (continued)

problem0 = Planning_problem(delivery_domain,
                            {'RLoc': 'lab', 'MW': True, 'SWC': True,
                             'RHC': False, 'RHM': False},
                            {'RLoc': 'off'})
problem1 = Planning_problem(delivery_domain,
                            {'RLoc': 'lab', 'MW': True, 'SWC': True,
                             'RHC': False, 'RHM': False},
                            {'SWC': False})
problem2 = Planning_problem(delivery_domain,
                            {'RLoc': 'lab', 'MW': True, 'SWC': True,
                             'RHC': False, 'RHM': False},
                            {'SWC': False, 'MW': False, 'RHM': False})

Blocks World

The blocks world consists of blocks and a table. Each block can be on the table or on another block. A block can only have one other block on top of it. The figure above shows a few states with some of the actions between them. A state is defined by two features:
- on, where on(x) = y when block x is on block or table y
- clear, where clear(x) = True when block x has nothing on it.
There is one parameterized action: move(x, y, z), which moves block x from y to z, where y and z could be a block or the table.
To handle parameterized actions (which depend on the blocks involved), the actions and the features are all strings, created for all combinations of the blocks. Note that we treat moving to a block separately from moving to the table, because the destination block needs to be clear, but the table always has room for another block.

stripsProblem.py -- (continued)

### blocks world
def move(x, y, z):
    """string for the 'move' action"""
    return 'move_' + x + '_from_' + y + '_to_' + z

def on(x):
    """string for the 'on' feature"""
    return x + '_is_on'

def clear(x):
    """string for the 'clear' feature"""
    return 'clear_' + x

def create_blocks_world(blocks={'a', 'b', 'c', 'd'}):
    blocks_and_table = blocks | {'table'}
    stmap = {Strips(move(x, y, z),
                    {on(x): y, clear(x): True, clear(z): True},
                    {on(x): z, clear(y): True, clear(z): False})
             for x in blocks
             for y in blocks_and_table
             for z in blocks
             if x != y and y != z and z != x}
    stmap.update({Strips(move(x, y, 'table'),
                         {on(x): y, clear(x): True},
                         {on(x): 'table', clear(y): True})
                  for x in blocks
                  for y in blocks
                  if x != y})
    feature_domain_dict = {on(x): blocks_and_table - {x} for x in blocks}
    feature_domain_dict.update({clear(x): boolean for x in blocks_and_table})
    return STRIPS_domain(feature_domain_dict, stmap)

The problem blocks1 is a classic example, with three blocks, and the goal consists of two conditions; see the figure below. Note that this example is challenging because we can't achieve one of the goals and then the other; whichever one we achieve first has to be undone to achieve the second.

Figure: The blocks1 problem.

stripsProblem.py -- (continued)

blocks1dom = create_blocks_world({'a', 'b', 'c'})
blocks1 = Planning_problem(blocks1dom,
                           {on('a'): 'table', clear('a'): True,
                            on('b'): 'c', clear('b'): True,
                            on('c'): 'table', clear('c'): False},      # initial state
                           {on('a'): 'b', on('c'): 'a'})               # goal

The problem blocks2 is to invert a tower of size 4.

stripsProblem.py -- (continued)

blocks2dom = create_blocks_world({'a', 'b', 'c', 'd'})
tower4 = {clear('a'): True, on('a'): 'b',
          clear('b'): False, on('b'): 'c',
          clear('c'): False, on('c'): 'd',
          clear('d'): False, on('d'): 'table'}
blocks2 = Planning_problem(blocks2dom,
                           tower4,                                       # initial state
                           {on('b'): 'a', on('c'): 'b', on('d'): 'c'})   # goal

The problem blocks3 is to move the bottom block to the top of a tower of size 4.

stripsProblem.py -- (continued)

blocks3 = Planning_problem(blocks2dom,
                           tower4,                                       # initial state
                           {on('a'): 'b', on('b'): 'c', on('d'): 'a'})   # goal

Exercise. Represent the following problem: given a tower of four blocks (a on b, b on c, c on d, d on the table), the goal is to have a tower with the previous top block on the bottom (b on c, c on d, d on a). Do not include the table in your goal (the goal does not care whether a is on the table). Before you run the program, estimate how many steps it will take to solve this. How many steps does an optimal planner take?

Exercise. Represent the domain so that on(x, y) is a Boolean feature that is True when x is on y. Does the representation of a state need to include negative on facts? Why or why not? (Note that this may depend on the planner; write your answer with respect to particular planners.)

Exercise. It is possible to write the representation of the problem without using clear, where clear(x) means nothing is on x. Change the definition of the blocks world so that it does not use clear but uses on being false instead. Does this work better for any of the planners?

Forward Planning

To run the demo, in folder "aipython", load "stripsForwardPlanner.py", and copy and paste the commented-out example queries at the bottom of that file.
In a forward planner, a node is a state. A state consists of an assignment, which is a variable:value dictionary. In order to be able to do multiple-path pruning, we need to define a hash function and equality between states.

stripsForwardPlanner.py -- forward planner with STRIPS actions

from searchProblem import Arc, Search_problem
from stripsProblem import Strips, STRIPS_domain

class State(object):
    def __init__(self, assignment):
        self.assignment = assignment
        self.hash_value = None

    def __hash__(self):
        if self.hash_value is None:
            self.hash_value = hash(frozenset(self.assignment.items()))
        return self.hash_value

    def __eq__(self, st):
        return self.assignment == st.assignment

    def __str__(self):
        return str(self.assignment)

In order to define a search problem, we need to define the goal condition, the start nodes, the neighbours, and (optionally) a heuristic function. Here zero is the default heuristic function.

stripsForwardPlanner.py -- (continued)

def zero(*args, **nargs):
    """always returns 0"""
    return 0

class Forward_STRIPS(Search_problem):
    """A search problem from a planning problem where:
    * a node is a state object.
    * the dynamics are specified by the STRIPS representation of actions
    """
    def __init__(self, planning_problem, heur=zero):
        """creates a forward search space from a planning problem.
        heur(state, goal) is a heuristic function,
            an underestimate of the cost from state to goal,
            where both state and goals are feature:value dictionaries.
        """
        self.prob_domain = planning_problem.prob_domain
        self.initial_state = State(planning_problem.initial_state)
        self.goal = planning_problem.goal
        self.heur = heur

    def is_goal(self, state):
        """is True if node is a goal.
        Every goal feature has the same value in the state and the goal."""
        return all(state.assignment[prop] == self.goal[prop]
                   for prop in self.goal)

    def start_node(self):
        """returns start node"""
        return self.initial_state

    def neighbors(self, state):
        """returns neighbors of state in this problem"""
        return [Arc(state, self.effect(act, state.assignment), act.cost, act)
                for act in self.prob_domain.actions
                if self.possible(act, state.assignment)]

    def possible(self, act, state_asst):
        """True if act is possible in state.
        act is possible if all of its preconditions have the same value in the state"""
        return all(state_asst[pre] == act.preconds[pre]
                   for pre in act.preconds)

    def effect(self, act, state_asst):
        """returns the state that is the effect of doing act given state_asst
        Python 3.9: return state_asst | act.effects"""
        new_state_asst = state_asst.copy()
        new_state_asst.update(act.effects)
        return State(new_state_asst)

    def heuristic(self, state):
        """in the forward planner, a node is a state.
        the heuristic is an (under)estimate of the cost
        of going from the state to the top-level goal.
        """
        return self.heur(state.assignment, self.goal)

Here are some test cases to try.

stripsForwardPlanner.py -- (continued)

from searchBranchAndBound import DF_branch_and_bound
from searchMPP import SearcherMPP
from stripsProblem import problem0, problem1, problem2, blocks1, blocks2, blocks3

# SearcherMPP(Forward_STRIPS(problem1)).search()              # A* with MPP
# DF_branch_and_bound(Forward_STRIPS(problem1), 10).search()  # B&B
# To find more than one plan:
# s1 = SearcherMPP(Forward_STRIPS(problem1))                  # A*
# s1.search()                                                 # find another plan
Defining Heuristics for a Planner

Each planning domain requires its own heuristics. If you change the actions, you will need to reconsider the heuristic function, as there might then be a lower-cost path, which might make the heuristic non-admissible.

Here is an example of defining a (not very good) heuristic for the coffee delivery planning domain.

First we define the distance between two locations, which is used for the heuristics.

stripsHeuristic.py -- planner with heuristic function

def dist(loc1, loc2):
    """returns the distance from location loc1 to loc2"""
    if loc1 == loc2:
        return 0
    if {loc1, loc2} in [{'cs', 'lab'}, {'mr', 'off'}]:
        return 2
    else:
        return 1

Note that the current state is a complete description; there is a value for every feature. However, the goal need not be complete; it does not need to define a value for every feature. Before checking the value for a feature in the goal, a heuristic needs to determine whether the feature is defined in the goal.

stripsHeuristic.py -- (continued)

def h1(state, goal):
    """the distance to the goal location, if there is one"""
    if 'RLoc' in goal:
        return dist(state['RLoc'], goal['RLoc'])
    else:
        return 0

def h2(state, goal):
    """the distance to the coffee shop plus getting coffee and delivering it,
    if the robot needs to get coffee
    """
    if ('SWC' in goal and goal['SWC'] == False
            and state['SWC'] == True
            and state['RHC'] == False):
        return dist(state['RLoc'], 'cs') + 3
    else:
        return 0

The maximum of the values of a set of admissible heuristics is also an admissible heuristic. The function maxh takes a number of heuristic functions as arguments, and returns a new heuristic function that takes the maximum of the values of the heuristics. For example, h1 and h2 are heuristic functions, so maxh(h1, h2) is also a heuristic function. maxh can take an arbitrary number of arguments.

stripsHeuristic.py -- (continued)

def maxh(*heuristics):
    """Returns a new heuristic function that is the maximum of the functions in heuristics.
    heuristics is the list of arguments, which must be heuristic functions.
    """
    # return lambda state, goal: max(h(state, goal) for h in heuristics)
    def newh(state, goal):
        return max(h(state, goal) for h in heuristics)
    return newh

The following runs the example with and without the heuristic.

stripsHeuristic.py -- (continued)

##### Forward Planner #####
from searchMPP import SearcherMPP
from stripsForwardPlanner import Forward_STRIPS
from stripsProblem import problem0, problem1, problem2, blocks1, blocks2, blocks3

def test_forward_heuristic(thisproblem=problem1):
    print("\n***** FORWARD NO HEURISTIC")
    print(SearcherMPP(Forward_STRIPS(thisproblem)).search())

    print("\n***** FORWARD WITH HEURISTIC h1")
    print(SearcherMPP(Forward_STRIPS(thisproblem, h1)).search())

    print("\n***** FORWARD WITH HEURISTIC h2")
    print(SearcherMPP(Forward_STRIPS(thisproblem, h2)).search())

    print("\n***** FORWARD WITH HEURISTICS h1 and h2")
    print(SearcherMPP(Forward_STRIPS(thisproblem, maxh(h1, h2))).search())

if __name__ == "__main__":
    test_forward_heuristic()

Exercise. Try the forward planner with a heuristic function of just h1, with just h2, and with both. Explain how each one prunes or doesn't prune the search space.

Exercise. Create a better heuristic than maxh(h1, h2). Try it for a number of different problems. In particular, try to include the following costs: (i) a heuristic that is like h2 but also takes into account the case when RLoc is in the goal; (ii) a heuristic that uses the distance to the mail room plus getting mail and delivering it, if the robot needs to get and deliver mail; (iii) a heuristic for getting mail when the goal is for the robot to have mail, and then getting to the goal destination (if there is one).

Exercise. Create an admissible heuristic for the blocks world.
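As a starting point for the blocks-world exercise above, here is one simple admissible heuristic, a sketch that assumes the on()/create_blocks_world encoding from stripsProblem.py (the function name blocks_misplaced is made up). Each move action changes the 'on' value of exactly one block, and every action has cost 1, so the number of blocks whose 'on' value differs from the goal is a lower bound on the plan length.

def blocks_misplaced(state, goal):
    """number of on(x) goal conditions not satisfied in state"""
    return sum(1 for feat, val in goal.items()
               if feat.endswith('_is_on') and state.get(feat) != val)

# e.g. SearcherMPP(Forward_STRIPS(blocks1, blocks_misplaced)).search()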
Regression Planning

To run the demo, in folder "aipython", load "stripsRegressionPlanner.py", and copy and paste the commented-out example queries at the bottom of that file.

In a regression planner, a node is a subgoal that needs to be achieved.

A Subgoal object consists of an assignment, which is a variable:value dictionary. We make it hashable so that multiple-path pruning can work. The hash is only computed when necessary (and only once).

stripsRegressionPlanner.py -- regression planner with STRIPS actions

from searchProblem import Arc, Search_problem

class Subgoal(object):
    def __init__(self, assignment):
        self.assignment = assignment
        self.hash_value = None

    def __hash__(self):
        if self.hash_value is None:
            self.hash_value = hash(frozenset(self.assignment.items()))
        return self.hash_value

    def __eq__(self, st):
        return self.assignment == st.assignment

    def __str__(self):
        return str(self.assignment)

A regression search has subgoals as nodes. The initial node is the top-level goal of the planner. The goal for the search (when the search can stop) is a subgoal that holds in the initial state.

stripsRegressionPlanner.py -- (continued)

from stripsForwardPlanner import zero

class Regression_STRIPS(Search_problem):
    """A search problem where:
    * a node is a goal to be achieved, represented by a set of propositions.
    * the dynamics are specified by the STRIPS representation of actions
    """

    def __init__(self, planning_problem, heur=zero):
        """creates a regression search space from a planning problem.
        heur(state, goal) is a heuristic function;
            an underestimate of the cost from state to goal, where
            both state and goals are feature:value dictionaries
        """
        self.prob_domain = planning_problem.prob_domain
        self.top_goal = Subgoal(planning_problem.goal)
        self.initial_state = planning_problem.initial_state
        self.heur = heur

    def is_goal(self, subgoal):
        """if subgoal is true in the initial state, a path has been found"""
        goal_asst = subgoal.assignment
        return all(self.initial_state[g] == goal_asst[g]
                   for g in goal_asst)

    def start_node(self):
        """the start node is the top-level goal"""
        return self.top_goal

    def neighbors(self, subgoal):
        """returns a list of the arcs for the neighbors of subgoal in this problem"""
        goal_asst = subgoal.assignment
        return [Arc(subgoal, self.weakest_precond(act, goal_asst), act.cost, act)
                for act in self.prob_domain.actions
                if self.possible(act, goal_asst)]

    def possible(self, act, goal_asst):
        """True if act is possible to achieve goal_asst.

        the action achieves an element of the effects and
        the action doesn't delete something that needs to be achieved and
        the preconditions are consistent with other subgoals that need to be achieved
        """
        return (any(goal_asst[prop] == act.effects[prop]
                    for prop in act.effects if prop in goal_asst)
                and all(goal_asst[prop] == act.effects[prop]
                        for prop in act.effects if prop in goal_asst)
                and all(goal_asst[prop] == act.preconds[prop]
                        for prop in act.preconds
                        if prop not in act.effects and prop in goal_asst)
                )

    def weakest_precond(self, act, goal_asst):
        """returns the subgoal that must be true so goal_asst holds after act
        should be: act.preconds | (goal_asst - act.effects)
        """
        new_asst = act.preconds.copy()
        for g in goal_asst:
            if g not in act.effects:
                new_asst[g] = goal_asst[g]
        return Subgoal(new_asst)

    def heuristic(self, subgoal):
        """in the regression planner, a node is a subgoal.
        the heuristic is an (under)estimate of the cost of going
        from the initial state to subgoal.
        """
        return self.heur(self.initial_state, subgoal.assignment)

stripsRegressionPlanner.py -- (continued)

from searchBranchAndBound import DF_branch_and_bound
from searchMPP import SearcherMPP
from stripsProblem import problem0, problem1, problem2, blocks1, blocks2, blocks3

# SearcherMPP(Regression_STRIPS(problem1)).search()              # A* with MPP
# DF_branch_and_bound(Regression_STRIPS(problem1), 10).search()  # B&B

Exercise. Multiple-path pruning could be used to prune more than the current code does. In particular, if the current node contains more conditions than a previously visited node, it can be pruned. For example, if {a: True, b: False} has been visited, then any node that is a superset, e.g., {a: True, b: False, d: True}, need not be expanded. If the simpler subgoal does not lead to a solution, the more complicated one won't either. Implement this more severe pruning. (Hint: this may require modifications to the searcher.)

Exercise. It is possible that, given knowledge of the domain, some assignment of values to variables can never be achieved. For example, the robot cannot be holding mail when there is mail waiting (assuming it isn't holding mail initially). An assignment of values to (some of the) variables is incompatible if no possible (reachable) state can include that assignment. For example, {'MW': True, 'RHM': True} is an incompatible assignment. This may be useful information for a planner; there is no point in trying to achieve these together. Define a subclass of STRIPS_domain that can accept a list of incompatible assignments. Modify the regression planner code to use such a list of incompatible assignments. Give an example where the search space is smaller.

Exercise. After completing the previous exercise, design incompatible assignments for the blocks world. (This should result in dramatic search improvements.)

Defining Heuristics for a Regression Planner

The regression planner can use the same heuristic function as the forward planner. However, just because a heuristic is useful for a forward planner does not mean it is useful for a regression planner, and vice versa. You should experiment with whether the same heuristic works well for both a regression planner and a forward planner.

The following runs the same example as the forward planner, with and without the heuristic defined for the forward planner:

stripsHeuristic.py -- (continued)

##### Regression Planner
from stripsRegressionPlanner import Regression_STRIPS
def test_regression_heuristic(thisproblem=problem1):
    print("\n***** REGRESSION NO HEURISTIC")
    print(SearcherMPP(Regression_STRIPS(thisproblem)).search())

    print("\n***** REGRESSION WITH HEURISTICS h1 and h2")
    print(SearcherMPP(Regression_STRIPS(thisproblem, maxh(h1, h2))).search())

if __name__ == "__main__":
    test_regression_heuristic()

Exercise. Try the regression planner with a heuristic function of just h1 and with just h2 (defined in the previous section). Explain how each one prunes or doesn't prune the search space.

Exercise. Create a better heuristic than the heuristic functions defined in the previous section.

Planning as a CSP

To run the demo, in folder "aipython", load "stripsCSPPlanner.py", and copy and paste the commented-out example queries at the bottom of that file. This assumes Python 3.

Here we implement the CSP planner assuming there is a single action at each step. This creates a CSP that can use any of the CSP algorithms to solve (e.g., stochastic local search or arc consistency with domain splitting).

This assumes the same action representation as before; we do not consider factored actions (action features), nor do we implement state constraints.

stripsCSPPlanner.py -- CSP planner where actions are represented using STRIPS

from cspProblem import Variable, CSP, Constraint

class CSP_from_STRIPS(CSP):
    """A CSP where:
    * CSP variables are constructed for each feature and time, and each action and time
    * the dynamics are specified by the STRIPS representation of actions
    """

    def __init__(self, planning_problem, number_stages=2):
        prob_domain = planning_problem.prob_domain
        initial_state = planning_problem.initial_state
        goal = planning_problem.goal
        # self.action_vars[t] is the action variable for time t
        self.action_vars = [Variable(f"Action{t}", prob_domain.actions)
                            for t in range(number_stages)]
        # feat_time_var[f][t] is the variable for feature f at time t
        feat_time_var = {feat: [Variable(f"{feat}_{t}", dom)
                                for t in range(number_stages + 1)]
                         for (feat, dom) in prob_domain.feature_domain_dict.items()}

        # initial state constraints:
        constraints = [Constraint((feat_time_var[feat][0],), is_(val))
                       for (feat, val) in initial_state.items()]
        # goal constraints on the final state:
        constraints += [Constraint((feat_time_var[feat][number_stages],), is_(val))
                        for (feat, val) in goal.items()]
        # precondition constraints:
        constraints += [Constraint((feat_time_var[feat][t], self.action_vars[t]),
                                   if_(val, act))   # feat@t==val if action@t==act
                        for act in prob_domain.actions
                        for (feat, val) in act.preconds.items()
                        for t in range(number_stages)]
        # effect constraints:
        constraints += [Constraint((feat_time_var[feat][t + 1], self.action_vars[t]),
                                   if_(val, act))   # feat@t+1==val if action@t==act
                        for act in prob_domain.actions
                        for feat, val in act.effects.items()
                        for t in range(number_stages)]
        # frame constraints:
        constraints += [Constraint((feat_time_var[feat][t], self.action_vars[t],
                                    feat_time_var[feat][t + 1]),
                                   eq_if_not_in_({act for act in prob_domain.actions
                                                  if feat in act.effects}))
                        for feat in prob_domain.feature_domain_dict
                        for t in range(number_stages)]
        variables = set(self.action_vars) | {feat_time_var[feat][t]
                                             for feat in prob_domain.feature_domain_dict
                                             for t in range(number_stages + 1)}
        CSP.__init__(self, variables, constraints)

    def extract_plan(self, soln):
        return [soln[a] for a in self.action_vars]

The following functions return functions which can be applied to the particular environment. For example, is_(3) returns a function that when applied to 3 returns True and when applied to any other value returns False. So is_(3)(3) returns True and is_(3)(7) returns False.

Note that the underscore ('_') is part of the name; here we use it as the convention that it is a function that returns a function. This uses two different styles to define is_ and if_; returning a function defined by lambda is equivalent to returning the embedded function, except that the embedded function has a name. The embedded function can also be given a docstring.

stripsCSPPlanner.py -- (continued)

def is_(val):
    """returns a function that is True when it is applied to val.
    """
    # return lambda x: x == val
    def is_fun(x):
        return x == val
    is_fun.__name__ = "value_is_" + str(val)
    return is_fun

def if_(v1, v2):
    """if the second argument is v2, the first argument must be v1"""
    # return lambda x1, x2: x1 == v1 if x2 == v2 else True
    def if_fun(x1, x2):
        return x1 == v1 if x2 == v2 else True
    if_fun.__name__ = "if x2 is " + str(v2) + " then x1 is " + str(v1)
    return if_fun

def eq_if_not_in_(actset):
    """first and third arguments are equal if action is not in actset"""
    # return lambda x1, a, x2: x1 == x2 if a not in actset else True
    def eq_if_not_fun(x1, a, x2):
        return x1 == x2 if a not in actset else True
    eq_if_not_fun.__name__ = "first and third arguments are equal if action is not in " + str(actset)
    return eq_if_not_fun

Putting it together, this returns a list of actions that solves the problem prob for a given horizon. If you want to do more than just return the list of actions, you might want to get it to return the solution, or even enumerate the solutions (by using Search_with_AC_from_CSP).

stripsCSPPlanner.py -- (continued)

def con_plan(prob, horizon):
    """finds a plan for problem prob given horizon."""
    csp = CSP_from_STRIPS(prob, horizon)
    sol = Con_solver(csp).solve_one()
    return csp.extract_plan(sol) if sol else sol

The following are some example queries.

stripsCSPPlanner.py -- (continued)

from searchGeneric import Searcher
from stripsProblem import delivery_domain
from cspConsistency import Search_with_AC_from_CSP, Con_solver
from stripsProblem import Planning_problem, problem0, problem1, problem2, blocks1, blocks2, blocks3

## Problem 0
# con_plan(problem0, 1)   # should it succeed?
# con_plan(problem0, 2)   # should it succeed?
# con_plan(problem0, 3)   # should it succeed?
# To use search to enumerate solutions
# searcher0a = Searcher(Search_with_AC_from_CSP(CSP_from_STRIPS(problem0)))
# print(searcher0a.search())   # returns path to solution

## Problem 1
# con_plan(problem1, 5)   # should it succeed?
# con_plan(problem1, 4)   # should it succeed?
# To use search to enumerate solutions:
# searcher1a = Searcher(Search_with_AC_from_CSP(CSP_from_STRIPS(problem1, 5)))
# print(searcher1a.search())   # returns path to solution

## Problem 2
# con_plan(problem2, 6)   # should fail?
# con_plan(problem2, 7)   # should succeed??

## Example
problem3 = Planning_problem(delivery_domain,
                            {'SWC': True, 'RHC': False},
                            {'SWC': False})
# con_plan(problem3, 2)   # horizon of 2
# con_plan(problem3, 3)   # horizon of 3

problem4 = Planning_problem(delivery_domain, {'SWC': True},
                            {'SWC': False, 'MW': False, 'RHM': False})

# For the stochastic local search:
# from cspSLS import SLSearcher, Runtime_distribution
# cspplanning = CSP_from_STRIPS(problem1, 5)   # should succeed
# se0 = SLSearcher(cspplanning); print(se0.search(100000, 0.5))
# p = Runtime_distribution(cspplanning)
# p.plot_runs(1000, 1000, 0.7)   # warning: will take a few minutes

Partial-Order Planning

To run the demo, in folder "aipython", load "stripsPOP.py", and copy and paste the commented-out example queries at the bottom of that file.
18,835 | planning with certainty partial order planner maintains partial order of action instances an action instance consists of name and an index we need action instances because the same action could be carried out at different times stripspop py -partial-order planner using strips representation from searchproblem import arcsearch_problem import random class action_instance(object)next_index def __init__(self,action,index=none)if index is noneindex action_instance next_index action_instance next_index + self action action self index index def __str__(self)return str(self action)+"#"+str(self index __repr__ __str__ __repr__ function is the same as the __str__ function node (as in the abstraction of search spacein partial-order planner consists ofactionsa set of action instances constraintsa set of ( pairswhere and are action instanceswhich represents that must come before in the partial order there are number of ways that this could be represented here we represent the set of pairs that are in transitive closure of the before relation this lets us quickly determine whether some before relation is consistent with the current constraints agendaa list of (sapairswhere is (varvalpair and is an action instance this means that variable var must have value val before can occur causal linksa set of ( ga tripleswhere and are action instances and is (varvalpair this holds when action makes true for action stripspop py -(continued class pop_node(object)""" (partialpartial-order plan this is node in the search space ""def __init__(selfactionsconstraintsagendacausal_links)""version june |
18,836 | actions is set of action instances constraints set of ( , pairsrepresenting < closed under transitivity agenda list of (subgoal,actionpairs to be achievedwhere subgoal is (variable,valuepair causal_links is set of ( , , tripleswhere ai are action instancesand is (variable,valuepair ""self actions actions set of action instances self constraints constraints set of ( , pairs self agenda agenda list of (subgoal,actionpairs to be achieved self causal_links causal_links set of ( , , triples def __str__(self)return ("actions"+str({str(afor in self actions})"\nconstraints"str({(str( ),str( )for ( , in self constraints})"\nagenda"str([(str( ),str( )for ( ,ain self agenda])"\ncausal_links:"str({(str( ),str( ),str( )for ( , , in self causal_links}extract plan constructs total order of action instances that is consistent with the partial order stripspop py -(continued def extract_plan(self)"""returns total ordering of the action instances consistent with the constraints raises indexerror if there is no choice ""sorted_acts [other_acts set(self actionswhile other_actsa random choice([ for in other_acts if all((( ,anot in self constraintsfor in other_acts)]sorted_acts append(aother_acts remove(areturn sorted_acts pop search from strips is an instance of search problem as suchwe need to define the start nodesthe goaland the neighbors of node stripspop py -(continued from display import displayable class pop_search_from_strips(search_problemdisplayable)def __init__(self,planning_problem)version june |
18,837 | planning with certainty search_problem __init__(selfself planning_problem planning_problem self start action_instance("start"self finish action_instance("finish" def is_goal(selfnode)return node agenda =[ def start_node(self)constraints {(self startself finish)agenda [(gself finishfor in self planning_problem goal items()return pop_node([self start,self finish]constraintsagenda[the neighbors method is coroutine that enumerates the neighbors of given node stripspop py -(continued def neighbors(selfnode)"""enumerates the neighbors of node""self display( ,"finding neighbors of\ ",nodeif node agendasubgoal,act node agenda[ self display( ,"selecting",subgoal,"for",act new_agenda node agenda[ :for act in node actionsif (self achieves(act subgoaland self possible((act ,act ),node constraints))self display( ,reusing",act consts self add_constraint((act ,act ),node constraintsnew_clink (act ,subgoal,act new_cls node causal_links [new_clinkfor consts in self protect_cl_for_actions(node actions,consts ,new_clink)yield arc(nodepop_node(node actions,consts ,new_agenda,new_cls)cost= for in self planning_problem prob_domain actions# is an action if self achieves( subgoal)# acheieves subgoal new_a action_instance( self display( ,using new action",new_anew_actions node actions [new_aconsts self add_constraint((self start,new_a),node constraintsconsts self add_constraint((new_a,act ),consts new_agenda new_agenda [(pre,new_afor pre in preconds items()new_clink (new_a,subgoal,act version june |
18,838 | new_cls node causal_links [new_clinkfor consts in self protect_all_cls(node causal_links,new_a,consts )for consts in self protect_cl_for_actions(node actions,consts ,new_clink)yield arc(nodepop_node(new_actions,consts ,new_agenda ,new_cls)cost= given casual link ( subgoala )the following method protects the causal link from each action in actions whenever an action deletes subgoalthe action needs to be before or after this method enumerates all constraints that result from protecting the causal link from all actions stripspop py -(continued def protect_cl_for_actions(selfactionsconstrsclink)"""yields constraints that extend constrs and protect causal link ( subgoala for each action in actions ""if actionsa actions[ rem_actions actions[ : subgoala clink if ! and ! and self deletes( ,subgoal)if self possible(( , ),constrs)new_const self add_constraint(( , ),constrsfor in self protect_cl_for_actions(rem_actions,new_const,clink)yield could be "yield fromif self possible(( , ),constrs)new_const self add_constraint(( , ),constrsfor in self protect_cl_for_actions(rem_actions,new_const,clink)yield elsefor in self protect_cl_for_actions(rem_actions,constrs,clink)yield elseyield constrs given an action actthe following method protects all the causal links in clinks from act whenever act deletes subgoal from some causal link ( subgoala )the action act needs to be before or after this method enumerates all constraints that result from protecting the causal links from act stripspop py -(continued def protect_all_cls(selfclinksactconstrs)"""yields constraints that protect all causal links from act""if clinksversion june |
18,839 | planning with certainty ( ,cond, clinks[ select causal link rem_clinks clinks[ :remaining causal links if act ! and act ! and self deletes(act,cond)if self possible((act, ),constrs)new_const self add_constraint((act, ),constrsfor in self protect_all_cls(rem_clinks,act,new_const)yield if self possible(( ,act),constrs)new_const self add_constraint(( ,act),constrsfor in self protect_all_cls(rem_clinks,act,new_const)yield elsefor in self protect_all_cls(rem_clinks,act,constrs)yield elseyield constrs the following methods check whether an action (or action instanceachieves or deletes some subgoal stripspop py -(continued def achieves(self,action,subgoal)var,val subgoal return var in self effects(actionand self effects(action)[var=val def deletes(self,action,subgoal)var,val subgoal return var in self effects(actionand self effects(action)[var!val def effects(self,action)"""returns the variable:value dictionary of the effects of action works for both actions and action instances""if isinstance(actionaction_instance)action action action if action ="start"return self planning_problem initial_state elif action ="finish"return {elsereturn action effects the constraints are represented as set of pairs closed under transitivity thus if (aband (bcare the listthen (acmust also be in the list this means that adding new constraint means adding the implied pairsbut querying whether some order is consistent is quick stripspop py -(continued def add_constraint(selfpairconst)if pair in constversion june |
18,840 | return const todo [pairnewconst const copy(while todox , todo pop(newconst add(( , )for , in newconstif == and ( ,ynot in newconsttodo append(( , )if == and ( , not in newconsttodo append(( , )return newconst def possible(self,pair,constraint)( ,ypair return ( ,xnot in constraint some code for testingstripspop py -(continued from searchbranchandbound import df_branch_and_bound from searchmpp import searchermpp from stripsproblem import problem problem problem blocks blocks blocks rplanning pop_search_from_strips(problem rplanning pop_search_from_strips(problem rplanning pop_search_from_strips(problem searcher df_branch_and_bound(rplanning , searcher searchermpp(rplanning searcher df_branch_and_bound(rplanning , searcher searchermpp(rplanning searcher df_branch_and_bound(rplanning , searcher searchermpp(rplanning try one of the following searchers searcher search( searcher search( end(extract_plan(print plan found end(constraints print the constraints searchermpp max_display_level less detailed display df_branch_and_bound max_display_level less detailed display searcher search( searcher search( searcher search( searcher search(version june |
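The add_constraint/possible pair above keeps the ordering constraints as a set of pairs closed under transitivity, so a consistency check reduces to a single set lookup. The following stand-alone sketch (the action names 'start', 'move_a', 'finish' are invented for illustration) shows the same closure computation outside the planner classes:

# Stand-alone sketch of the transitive-closure representation used by
# add_constraint/possible; the action names here are invented labels.
def add_constraint(pair, const):
    """returns const plus pair, closed under transitivity"""
    if pair in const:
        return const
    todo = [pair]
    newconst = set(const)
    while todo:
        x0, x1 = todo.pop()
        newconst.add((x0, x1))
        for x2, x3 in list(newconst):
            if x1 == x2 and (x0, x3) not in newconst:
                todo.append((x0, x3))
            if x3 == x0 and (x2, x1) not in newconst:
                todo.append((x2, x1))
    return newconst

def possible(pair, const):
    """(a,b) is possible as long as b<a is not already entailed"""
    a, b = pair
    return (b, a) not in const

const = add_constraint(('start', 'move_a'), set())
const = add_constraint(('move_a', 'finish'), const)
print(('start', 'finish') in const)           # True: implied by transitivity
print(possible(('finish', 'move_a'), const))  # False: would create a cycle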
18,841 | Supervised Machine Learning

This chapter is the first on machine learning. It covers the following topics:

Data: how to load it; training and test sets.

Features: many of the features come directly from the data. Sometimes it is useful to construct features; for example, a Boolean feature that tests whether height exceeds some threshold can be constructed from the real-valued feature height. The next chapter is about neural networks and how to learn features; in this chapter we construct features explicitly, in what is often known as feature engineering.

Learning with no input features: this is the base case of many methods. What should we predict if we have no input features? This provides the base case for many algorithms (e.g., the decision tree algorithm) and baselines that more sophisticated algorithms need to beat. It also provides ways to test various predictors.

Decision tree learning: one of the classic and simplest learning algorithms, which is the basis of many other algorithms.

Cross validation and parameter tuning: methods to prevent overfitting.

Linear regression and classification: other classic and simple techniques that often work well (particularly when combined with feature learning or engineering).

Boosting: combining simpler learning methods to make even better learners.

A good source of classic datasets is the UCI Machine Learning Repository [Lichman, 2013; Dua and Graff, 2017]. The SPECT and car datasets are from this repository.
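As a concrete (and invented) illustration of the feature construction mentioned above, a Boolean feature can be defined as a thresholded function of a numeric column; the dataset class described in the next section constructs such conditions automatically:

# Illustration only: a hand-built Boolean feature derived from a numeric column.
# The threshold (1.9) and the example tuples below are invented for this sketch.
def tall(example, height_index=0, threshold=1.9):
    """height > 1.9"""
    return example[height_index] > threshold

tall.frange = [False, True]   # attributes used by the learning code (described below)
tall.ftype = "boolean"

print([tall(e) for e in [(1.7,), (2.0,), (1.95,)]])   # [False, True, True]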
18,842 |
    Dataset        Input types         Target type
    SPECT          Boolean             Boolean
    iris           real                categorical
    carbool        categorical/real    real
    holiday        Boolean             Boolean
    mail_reading   Boolean             Boolean
    simp_regr      numerical           numerical

Figure: some of the datasets used here. MLR is the UCI Machine Learning Repository.

Representations of Data and Predictions

The code uses the following definitions and conventions:

A data set is an enumeration of examples. An example is a list (or tuple) of values. The values can be numbers or strings.

A feature is a function from examples into the range of the feature. Each feature f also has the following attributes:

f.ftype, the type of f, one of: "boolean", "categorical", "numeric"
f.frange, the range of f, represented as a list
f.__doc__, the docstring, a string description of f (for printing)

Thus, for example, a Boolean feature is a function from the examples into {False, True}. So, if f is a Boolean feature, f.frange == [False, True], and if e is an example, f(e) is either True or False.

learnProblem.py — A Learning Problem

import math, random, statistics
import csv
from display import Displayable
from utilities import argmax

boolean = [False, True]

When creating a data set, we partition the data into a training set (train) and a test set (test). The target feature is the feature that we are making a prediction of. A dataset ds has the following attributes:

ds.train    a list of the training examples
ds.test     a list of the test examples
ds.target_index    the index of the target
18,843 | ds target the feature corresponding to the target ( function as described aboveds input_features list of the input features learnproblem py -(continued class data_set(displayable)"" data set consists of list of training data and list of test data "" def __init__(selftraintest=noneprob_test= target_index= header=nonetarget_typenoneseed=none)# )""" dataset for learning train is list of tuples representing the training examples test is the list of tuples representing the test examples if test is nonea test set is created by selecting each example with probability prob_test target_index is the index of the target if negativeit counts from right if target_index is larger than the number of propertiesthere is no target (for unsupervised learningheader is list of names for the features target_type is either none for automatic detection of target type or one of "numerical""boolean""cartegoricalseed is for random numbernone gives different test set each time ""if seedgiven seed makes partition consistent from run-to-run random seed(seedif test is nonetrain,test partition_data(trainprob_testself train train self test test self display( ,"training set has",len(train),"examples number of columns",{len(efor in train}self display( ,"test set has",len(test),"examples number of columns",{len(efor in test}self prob_test prob_test self num_properties len(self train[ ]if target_index #allows for - - etc self target_index self num_properties target_index elseself target_index target_index self header header self domains [set(for in range(self num_properties)for example in self trainfor ind,val in enumerate(example)self domains[indadd(valself conditions_cache {cache for computed conditions version june |
18,844 | supervised machine learning self create_features(if target_typeself target ftype target_type self display( ,"there are",len(self input_features),"input features" def __str__(self)if self train and len(self train)> return ("data"+str(len(self train))+training examples+str(len(self test))+test examples+str(len(self train[ ]))+features "elsereturn ("data"+str(len(self train))+training examples+str(len(self test))+test examples " feature is function that takes an example and returns value in the range of the feature each feature has frangewhich gives the range of the featureand an ftype that gives the typeone of "boolean""numericor "categoricallearnproblem py -(continued def create_features(self)"""create the set of features ""self target none self input_features [for in range(self num_properties)def feat( ,index= )return [indexif self headerfeat __doc__ self header[ielsefeat __doc__ " ["+str( )+"]feat frange list(self domains[ ]feat ftype self infer_type(feat frangeif =self target_indexself target feat elseself input_features append(featwe try to infer the type of each feature sometimes this can be wrong( when the numbers are really categoricaland so needs to be set explicitly learnproblem py -(continued def infer_type(self,domain)"""infers the type of feature with domain ""if all( in {true,falsefor in domain)return "booleanif all(isinstance( ,(float,int)for in domain)return "numericversion june |
18,845 | elsereturn "categoricalcreating boolean conditions from features some of the algorithms require boolean input features or features with range { in order to be able to use these algorithms on datasets that allow for arbitrary domains of input variableswe construct boolean conditions from the attributes there are caseswhen the range only has two valueswe designate one to be the "truevalue when the values are all numericwe assume they are ordered (as opposed to just being some classes that happen to be labelled with numbersand construct boolean features for splits of the data that isthe feature is [indcut for some value cut we choose number of cut valuesup to maximum number of cutsgiven by max num cuts when the values are not all numericwe create an indicator function for each value an indicator function for value returns true when that value is given and false otherwise note that we can' create an indicator function for values that appear in the test set but not in the training set because we haven' seen the test set for the examples in the test set with value that doesn' appear in the training set for that featurethe indicator functions all return false there is also an option to only create boolean features from categorical input features learnproblem py -(continued def conditions(selfmax_num_cuts= categorical_only false)"""returns set of boolean conditions from the input features max_num_cuts is the maximum number of cute for numerical features categorical_only is true if only categorical features are made binary ""if (max_num_cutscategorical_onlyin self conditions_cachereturn self conditions_cache[(max_num_cutscategorical_only)conds [for ind,frange in enumerate(self domains)if ind !self target_index and len(frange)> if len(frange= two valuesthe feature is equality to one of them true_val list(frange)[ choose one as true def feat(ei=indtv=true_val)return [ ]==tv version june |
18,846 | supervised machine learning if self headerfeat __doc__ "{self header[ind]}=={true_val}elsefeat __doc__ " [{ind}]=={true_val}feat frange boolean feat ftype "booleanconds append(featelif all(isinstance(val,(int,float)for val in frange)if categorical_onlynumericaldon' make cuts def feat(ei=ind)return [ifeat __doc__ " [{ind}]conds append(featelseall numericcreate cuts of the data sorted_frange sorted(frangenum_cuts min(max_num_cuts,len(frange)cut_positions [len(frange)* //num_cuts for in range( ,num_cuts)for cut in cut_positionscutat sorted_frange[cutdef feat(eind_=indcutat=cutat)return [ind_cutat if self headerfeat __doc__ self header[ind]+"<"+str(cutatelsefeat __doc__ " ["+str(ind)+"]<"+str(cutatfeat frange boolean feat ftype "booleanconds append(feat elsecreate an indicator function for every value for val in frangedef feat(eind_=indval_=val)return [ind_=val_ if self headerfeat __doc__ self header[ind]+"=="+str(valelsefeat __doc__" ["+str(ind)+"]=="+str(valfeat frange boolean feat ftype "booleanconds append(featself conditions_cache[(max_num_cutscategorical_only)conds return conds exercise change the code so that it splits using [ind<cut instead of [indcut check boundary casessuch as elements with cuts as test casemake sure that when the range is the integers from to and you want cutsthe resulting boolean features should be [ind< and [ind< to make version june |
18,847 | sure that each of the resulting domains is of equal size exercise this splits on whether the feature is less than one of the values in the training set sam suggested it might be better to split between the values in the training setand suggested using cutat (sorted frange[cutsorted frange[cut ])/ why might sam have suggested thisdoes this work better(try it on few data setsevaluating predictions predictor is function that takes an example and makes prediction on the values of the target features loss takes prediction and the actual value and returns non-negative real numberlower is better the error for dataset is either the mean lossor sometimes the sum of the losses when reporting results the mean is usually used when it is the sumthis will be made explicit the function evaluate dataset returns the average error for each examplewhere the error for each example depends on the evaluation criteria here we consider three evaluation criteriathe squared error (average of the square of the difference between the actual and predicted values)absolute errors(average of the absolute difference between the actual and predicted valuesand the log loss (the average negative log-likelihoodwhich can be interpreted as the number of bits to describe an example using code based on the prediction treated as probabilitylearnproblem py -(continued def evaluate_dataset(selfdatapredictorerror_measure)"""evaluates predictor on data according to the error_measure predictor is function that takes an example and returns prediction for the target features error_measure(prediction,actual-non-negative real ""if datatryvalue statistics mean(error_measure(predictor( )self target( )for in dataexcept valueerrorif error_measure gives an error return float("inf"infinity return value elsereturn math nan not number the following evaluation criteria are defined this is defined using classevaluate but no instances will be created just use evaluate squared_loss etc (please keep the __doc__ strings consistent length as they are used in tables version june |
18,848 | supervised machine learning the prediction is either real value or {value probabilitydictionary or list the actual is either real number or key of the prediction learnproblem py -(continued class evaluate(object)""" container for the evaluation measures"" def squared_loss(predictionactual)"squared loss if isinstance(prediction(list,dict))return ( -prediction[actual])** the correct value is elsereturn (prediction-actual)** def absolute_loss(predictionactual)"absolute loss if isinstance(prediction(list,dict))return abs( -prediction[actual]the correct value is elsereturn abs(prediction-actual def log_loss(predictionactual)"log loss (bits)tryif isinstance(prediction(list,dict))return -math log (prediction[actual]elsereturn -math log (predictionif actual== else -math log ( -predictionexcept valueerrorreturn float("inf"infinity def accuracy(predictionactual)"accuracy if isinstance(predictiondict)prev_val prediction[actualreturn if all(prev_val > for in prediction values()else if isinstance(predictionlist)prev_val prediction[actualreturn if all(prev_val > for in predictionelse elsereturn if abs(actual-prediction< else all_criteria [accuracyabsolute_losssquared_losslog_losscreating test and training sets the following method partitions the data into training set and test set note that this does not guarantee that the test set will contain exactly proportion version june |
18,849 | of the data equal to prob test [an alternative is to use random sample(which can guarantee that the test set will contain exactly particular proportion of the data however this would require knowing how many elements are in the data setwhich we may not knowas data may just be generator of the data ( when reading the data from filelearnproblem py -(continued def partition_data(dataprob_test= )"""partitions the data into training set and test setwhere prob_test is the probability of each example being in the test set ""train [test [for example in dataif random random(prob_testtest append(exampleelsetrain append(examplereturn traintest importing data from file data set is typically loaded from file the default here is that it loaded from csv (comma separated valuesfilealthough the separator can be changed this assumes that all lines that contain the separator are valid data (so we only include those data items that contain more than one elementthis allows for blank lines and comment lines that do not contain the separator howeverit means that this method is not suitable for cases where there is only one feature note that data all and data tuples are generators data all is generator of list of list of strings this version assumes that csv files are simple the standard csv packagethat allows quoted argumentscan be used by uncommenting the line for data all and commenting out the following line data tuples contains only those lines that contain the delimiter (others lines are assumed to be empty or comments)and tries to convert the elements to numbers whenever possible this allows for some of the columns to be includedspecified by include only note that if include only is specifiedthe target index is the index for the included columnsnot the original columns learnproblem py -(continued class data_from_file(data_set)def __init__(selffile_nameseparator=','num_train=noneprob_test= has_header=falsetarget_index= boolean_features=truecategorical=[]target_typenoneinclude_only=noneseed=none)#seed= )version june |
18,850 | supervised machine learning """create dataset from file separator is the character that separates the attributes num_train is number specifying the first num_train tuples are trainingor none prob_test is the probability an example should in the test set (if num_train is nonehas_header is true if the first line of file is header target_index specifies which feature is the target boolean_features specifies whether we want to create boolean features (if falseit uses the original featurescategorical is set (or listof features that should be treated as categorical target_type is either none for automatic detection of target type or one of "numerical""boolean""cartegoricalinclude_only is list or set of indexes of columns to include ""self boolean_features boolean_features with open(file_name,' ',newline=''as csvfileself display( ,"loading",file_namedata_all csv reader(csvfile,delimiter=separatorfor more complicated csv files data_all (line strip(split(separatorfor line in csvfileif include_only is not nonedata_all ([ for ( ,vin enumerate(lineif in include_onlyfor line in data_allif has_headerheader next(data_allelseheader none data_tuples (interpret_elements(dfor in data_all if len( )> if num_train is not nonetraining set is divided into training then text examples the file is only read onceand the data is placed in appropriate list train [for in range(num_train)will give an error if insufficient examples train append(next(data_tuples)test list(data_tuplesdata_set __init__(self,traintest=testtarget_index=target_index,header=headerelserandomly assign training and test examples data_set __init__(self,data_tuplestest=noneprob_test=prob_testtarget_index=target_indexheader=headerseed=seedtarget_type=target_typethe following class is used for datasets where the training and test are in difversion june |
18,851 | ferent files learnproblem py -(continued class data_from_files(data_set)def __init__(selftrain_file_nametest_file_nameseparator=','has_header=falsetarget_index= boolean_features=truecategorical=[]target_typenoneinclude_only=none)"""create dataset from separate training and file separator is the character that separates the attributes num_train is number specifying the first num_train tuples are trainingor none prob_test is the probability an example should in the test set (if num_train is nonehas_header is true if the first line of file is header target_index specifies which feature is the target boolean_features specifies whether we want to create boolean features (if falseit uses the original featurescategorical is set (or listof features that should be treated as categorical target_type is either none for automatic detection of target type or one of "numerical""boolean""cartegoricalinclude_only is list or set of indexes of columns to include ""self boolean_features boolean_features with open(train_file_name,' ',newline=''as train_filewith open(test_file_name,' ',newline=''as test_filedata_all csv reader(csvfile,delimiter=separatorfor more complicated csv files train_data (line strip(split(separatorfor line in train_filetest_data (line strip(split(separatorfor line in test_fileif include_only is not nonetrain_data ([ for ( ,vin enumerate(lineif in include_onlyfor line in train_datatest_data ([ for ( ,vin enumerate(lineif in include_onlyfor line in test_dataif has_headerthis assumes the training file has header and the test file doesn' header next(train_dataelseheader none train_tuples [interpret_elements(dfor in train_data if len( )> test_tuples [interpret_elements(dfor in test_data if len( )> data_set __init__(self,train_tuplestest_tuplestarget_index=target_indexheader=headerversion june |
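A minimal usage sketch for the loader above (the file, split probability, and seed are arbitrary choices; data/holiday.csv is one of the datasets used in the test code later in this chapter):

# Usage sketch for Data_from_file; parameter values here are arbitrary.
from learnProblem import Data_from_file

data = Data_from_file('data/holiday.csv', prob_test=0.3, target_index=-1, seed=123)
print(data)                        # summary of training/test sizes
print(len(data.input_features))    # features constructed from the columns
print(data.target.__doc__)         # description of the target feature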
18,852 | supervised machine learning when reading from file all of the values are strings this next method tries to convert each values into number (an int or floator booleanif it is possible learnproblem py -(continued def interpret_elements(str_list)"""make the elements of string list str_list numerical if possible otherwise remove initial and trailing spaces ""res [for in str_listtryres append(int( )except valueerrortryres append(float( )except valueerrorse strip(if se in ["true","true","true"]res append[trueif se in ["false","false","false"]res append[falseelseres append( strip()return res augmented features sometimes we want to augment the features with new features computed from the old features (eg the product of featureshere we allow the creation of new dataset from an old dataset but with new features note that special cases of these are kernelsmapping the original feature space into new spacewhich allow neat way to do learning in the augmented space for many mappings (the "kernel trick"this is beyond the scope of aipythonthose interested should read about support vector machines feature is function of examples unary feature constructor takes feature and returns new feature binary feature combiner takes two features and returns new feature learnproblem py -(continued class data_set_augmented(data_set)def __init__(selfdatasetunary_functions=[]binary_functions=[]include_orig=true)"""creates dataset like dataset but with new features unary_function is list of unary feature constructors binary_functions is list of binary feature combiners include_orig specifies whether the original features should be included version june |
18,853 | ""self orig_dataset dataset self unary_functions unary_functions self binary_functions binary_functions self include_orig include_orig self target dataset target data_set __init__(self,dataset traintest=dataset testtarget_index dataset target_index def create_features(self)if self include_origself input_features self orig_dataset input_features copy(elseself input_features [for in self unary_functionsfor in self orig_dataset input_featuresself input_features append( ( )for in self binary_functionsfor in self orig_dataset input_featuresfor in self orig_dataset input_featuresif ! self input_features append( ( , )the following are useful unary feature constructors and binary feature combiner learnproblem py -(continued def square( )""" unary feature constructor to construct the square of feature ""def sq( )return ( )** sq __doc__ __doc__+"** return sq def power_feat( )"""given returns unary feature constructor to construct the nth power of feature power_feat( is the same as squaredefined above ""def fn( , = )def pow( , = )return ( )** pow __doc__ __doc__+"**"+str(nreturn pow return fn def prod_feat( , )""" new feature that is the product of features and ""def feat( )version june |
18,854 | supervised machine learning return ( )* (efeat __doc__ __doc__+"*"+ __doc__ return feat def eq_feat( , )""" new feature that is if and give same value ""def feat( )return if ( )== (eelse feat __doc__ __doc__+"=="+ __doc__ return feat def neq_feat( , )""" new feature that is if and give different values ""def feat( )return if ( )!= (eelse feat __doc__ __doc__+"!="+ __doc__ return feat examplelearnproblem py -(continued from learnproblem import data_set_augmented,prod_feat data data_from_file('data/holiday csv'num_train= target_index=- data data_from_file('data/iris data'prob_test= / target_index=- #data data_from_file('data/spect csv'prob_test= target_index= dataplus data_set_augmented(data,[],[prod_feat]dataplus data_set_augmented(data,[],[prod_feat,neq_feat]exercise for symmetric propertiessuch as productwe don' need both as well as as extra properties allow the user to be able to declare feature constructors as symmetric (by associating boolean feature with themchange construct features so that it does not create both versions for symmetric combiners generic learner interface learner takes dataset (and possibly other arguments specific to the methodto get it to learnwe call the learn(method this implements displayable so that we can display traces at multiple levels of detail (and perhaps with guilearnproblem py -(continued from display import displayable class learner(displayable)def __init__(selfdataset)raise notimplementederror("learner __init__"abstract method version june |
18,855 | def learn(self)"""returns predictora function from tuple to value for the target feature ""raise notimplementederror("learn"abstract method learning with no input features if we make the same prediction for each examplewhat prediction should we makethis can be used as naive baselineif more sophisticated method does not do better than thisit is not useful this also provides the base case for some methodssuch as decision-tree learning to run demo to compare different prediction methods on various evaluation criteriain folder "aipython"load "learnnoinputs py"using ipython - learnnoinputs pyand it prints some test results there are few alternatives as to what could be allowed in predictiona point predictionwhere we are only allowed to predict one of the values of the feature for exampleif the values of the feature are { we are only allowed to predict or or of the values are ratings in { }we can only predict one of these integers point predictionwhere we are allowed to predict any value for exampleif the values of the feature are { we may be allowed to predict or even for all of the criteria we can imaginethere is no point in predicting value greater than or less that zero (but that doesn' mean we can' )but it is often useful to predict value between and if the values are ratings in { }we may want to predict probability distribution over the values of the feature for each value vwe predict non-negative number pv such that the sum over all predictions is for regressionwe do the first of these for classificationwe do the second the third can be implemented by having multiple indicator functions for the target here are some prediction functions that take in an enumeration of valuesa domainand returns value or dictionary of {value predictionnote that cmedian returns one of middle values when there are an even number of exampleswhereas median gives the average of them (and so cmedian is applicable for ordinals that cannot be considered cardinal valuessimilarlycmode picks one of the values when more than one value has the maximum number of elements version june |
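As a small worked example of a distribution prediction and how it is scored, the empirical and Laplace estimates can be computed directly for a tiny training list (the numbers are invented; the Predict class that follows packages these and other estimators):

import math

train = [1, 1, 1, 0, 1]        # invented training values for a {0,1} target
n1, n = sum(train), len(train)

empirical = {1: n1/n, 0: (n - n1)/n}                  # {1: 0.8, 0: 0.2}
laplace   = {1: (n1+1)/(n+2), 0: (n - n1 + 1)/(n+2)}  # {1: 6/7, 0: 1/7}

# log loss (base 2) of each prediction on one test example whose actual value is 0
for name, pred in [("empirical", empirical), ("laplace", laplace)]:
    print(name, -math.log2(pred[0]))
# The empirical estimate does better on this example, but it would pay an
# infinite log loss if a value it assigned probability 0 ever occurred;
# the Laplace estimate never assigns probability 0.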
18,856 | supervised machine learning learnnoinputs py -learning ignoring all input features from learnproblem import evaluate import mathrandomcollectionsstatistics import utilities argmax for (element,valuepairs class predict(object)"""the class of prediction methods for list of values please make the doc strings the same lengthbecause they are used in tables note that we don' need self argumentas we are creating predict objectsto use call predict laplace(dataetc "" ##the following return distribution over values (for classificationdef empirical(datadomain=[ , ]icount= )"empirical dist returns distribution over values counts { :icount for in domainfor in datacounts[ + sum(counts values()return { : / for ( ,vin counts items() def bounded_empirical(datadomain=[ , ]bound= )"bounded empiricalreturn { :min(max( ,bound), -boundfor ( ,vin predict empirical(datadomainitems() def laplace(datadomain=[ , ])"laplace for categorical data return predict empirical(datadomainicount= def cmode(datadomain=[ , ])"mode for categorical data md statistics mode(datareturn { if ==md else for in domain def cmedian(datadomain=[ , ])"median for categorical data md statistics median_low(dataalways return one of the values return { if ==md else for in domain ##the following return single prediction (for regressiondomain is ignored def mean(datadomain=[ , ])"mean returns real number return statistics mean(data version june |
18,857 | def rmean(datadomain=[ , ]mean = pseudo_count= )"regularized meanreturns real number mean is the mean to be used for data points with mean = pseudo_count= same as laplace for [ , data this works for enumerations as well as lists sum mean pseudo_count count pseudo_count for in datasum + count + return sum/count def mode(datadomain=[ , ])"mode return statistics mode(data def median(datadomain=[ , ])"median return statistics median(data all [empiricalmeanrmeanbounded_empiricallaplacecmodemodemedian,cmedian the following suggests appropriate predictions as function of the target type select {"boolean"[empiricalbounded_empiricallaplacecmodecmedian]"categorical"[empiricalbounded_empiricallaplacecmodecmedian]"numeric"[meanrmeanmodemedian]evaluation to evaluate point predictionwe first generate some data from simple (bernoullidistributionwhere there are two possible values and for the target feature given proba number in the range [ ]this generate some training and test data where prob is the probability of each example being to generate with probability probwe generate random number in range [ , and return if that number is less than prob prediction is computed by applying the predictor to the training datawhich is evaluated on the test set this is repeated num_samples times let' evaluate the predictions of the possible selections according to the different evaluation criteriafor various training sizes learnnoinputs py -(continued def test_no_inputs(error_measures evaluate all_criterianum_samples= test_size= )for train_size in [ , , , , , , , , ]version june |
18,858 | supervised machine learning results {predictor{error_measure for error_measure in error_measuresfor predictor in predict allfor sample in range(num_samples)prob random random(training [ if random random()<prob else for in range(train_size)test [ if random random()<prob else for in range(test_size)for predictor in predict allprediction predictor(trainingfor error_measure in error_measuresresults[predictor][error_measure+sumerror_measure(prediction,actualfor actual in test)/test_size print( "for training size {train_size}:"print(predictor\ ","\tjoin(error_measure __doc__ for error_measure in error_measures),sep="\ "for predictor in predict allprint( {predictor __doc__}""\tjoin("{ }format(results[predictor][error_measure]/num_samplesfor error_measure in error_measures),sep="\ " if __name__ ="__main__"test_no_inputs(exercise which predictor works best for low counts when the error is (asquared error (babsolute error (clog loss you may need to try this few times to make sure your answer is supported by the evidence does the difference from the other methods get more or less as the number of examples growexercise suggest some other predictions that only take the training data does your method do better than the given methodsa simple way to get other predictors is to vary the threshold of bounded averageor to change the pseodocounts of the laplace method (use other numbers instead of and version june |
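As a starting point for the last exercise above, any function with the same (data, domain) signature as the methods of Predict can be appended to Predict.all and compared by test_no_inputs. A sketch of one such alternative, a Laplace-style estimator with a larger (arbitrary) pseudo-count:

# Sketch only; the pseudo-count of 2 is an arbitrary choice.
from learnNoInputs import Predict, test_no_inputs

def heavy_laplace(data, domain=[0, 1], pseudo_count=2):
    "laplace(+2)  "   # padded like the other doc strings (used as a column label)
    counts = {v: pseudo_count for v in domain}
    for e in data:
        counts[e] += 1
    total = sum(counts.values())
    return {v: c/total for (v, c) in counts.items()}

Predict.all.append(heavy_laplace)
if __name__ == "__main__":
    test_no_inputs()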
18,859 | decision tree learning to run the decision tree learning demoin folder "aipython"load "learndt py"using ipython - learndt pyand it prints some test results to try more examplescopy and paste the commentedout commands at the bottom of that file this requires python with matplotlib the decision tree algorithm does binary splitsand assumes that all input features are binary functions of the examples it stops splitting if there are no input featuresthe number of examples is less than specified number of examples or all of the examples agree on the target feature learndt py -learning binary decision tree from learnproblem import learnerevaluate from learnnoinputs import predict import math class dt_learner(learner)def __init__(selfdatasetsplit_to_optimize=evaluate log_lossto minimize for at each split leaf_prediction=predict empiricalwhat to use for value at leaves train=noneused for cross validation max_num_cuts= maximum number of conditions to split numerical feature into gamma= - minimum improvement needed to expand node min_child_weight= )self dataset dataset self target dataset target self split_to_optimize split_to_optimize self leaf_prediction leaf_prediction self max_num_cuts max_num_cuts self gamma gamma self min_child_weight min_child_weight if train is noneself train self dataset train elseself train train def learn(selfmax_num_cuts= )"""learn decision tree""return self learn_tree(self dataset conditions(self max_num_cuts)self trainthe main recursive algorithmtakes in set of input features and set of training data it first decides whether to split if it doesn' splitit makes point predictionignoring the input features version june |
18,860 | supervised machine learning it only splits if the best split increases the error by at least gamma this implies it does not split whenthere are no more input features there are fewer examples than min number examplesall the examples agree on the value of the targetor the best split makes all examples in the same partition if it splitsit selects the best split according to the evaluation criterion (assuming that is the only split it gets to do)and returns the condition to split on (in the variable splitand the corresponding partition of the examples learndt py -(continued def learn_tree(selfconditionsdata_subset)"""returns decision tree conditions is set of possible conditions data_subset is subset of the data used to build this (sub)tree where decision tree is function that takes an example and makes prediction on the target feature ""self display( , "learn_tree with {len(conditions)features and {len(data_subset)examples"splitpartn self select_split(conditionsdata_subsetif split is noneno splitreturn point prediction prediction self leaf_value(data_subsetself target frangeself display( , "leaf prediction for {len(data_subset)examples is {prediction}"def leaf_fun( )return prediction leaf_fun __doc__ str(predictionleaf_fun num_leaves return leaf_fun elsea split succeeded false_examplestrue_examples partn rem_features [fe for fe in conditions if fe !splitself display( ,"splitting on",split __doc__,"with examples split"len(true_examples),":",len(false_examples)true_tree self learn_tree(rem_features,true_examplesfalse_tree self learn_tree(rem_features,false_examplesdef fun( )if split( )return true_tree(eelsereturn false_tree( #fun lambda etrue_tree(eif split(eelse false_tree(efun __doc__ ( "(if {split __doc__then {true_tree __doc__}version june |
18,861 | felse {false_tree __doc__})"fun num_leaves true_tree num_leaves false_tree num_leaves return fun learndt py -(continued def leaf_value(selfegsdomain)return self leaf_prediction((self target(efor in egs)domain def select_split(selfconditionsdata_subset)"""finds best feature to split on conditions is non-empty list of features returns featurepartition where feature is an input feature with the smallest error as judged by split_to_optimize or feature==none if there are no splits that improve the error partition is pair (false_examplestrue_examplesif feature is not none ""best_feat none best feature best_error float("inf"infinity more than any error best_error self sum_losses(data_subsetself gamma self display( ,no split has error=",best_error,"with",len(conditions),"conditions"best_partition none for feat in conditionsfalse_examplestrue_examples partition(data_subset,featif min(len(false_examples),len(true_examples))>=self min_child_weighterr (self sum_losses(false_examplesself sum_losses(true_examples)self display( ,split on",feat __doc__,"has error=",err"splits into",len(true_examples),":",len(false_examples),"gamma=",self gammaif err best_errorbest_feat feat best_error=err best_partition false_examplestrue_examples self display( ,"best split is on",best_feat __doc__"with err=",best_errorreturn best_featbest_partition def sum_losses(selfdata_subset)"""returns sum of losses for dataset (with no more splitsthere single prediction for all leaves using leaf_prediction it is evaluated using split_to_optimize ""prediction self leaf_value(data_subsetself target frangeerror sum(self split_to_optimize(predictionself target( )for in data_subsetversion june |
18,862 | supervised machine learning return error def partition(data_subset,feature)"""partitions the data_subset by the feature""true_examples [false_examples [for example in data_subsetif feature(example)true_examples append(exampleelsefalse_examples append(examplereturn false_examplestrue_examples test caseslearndt py -(continued from learnproblem import data_setdata_from_file def testdt(dataprint_tree=trueselections none**tree_args)"""prints errors and the trees for various evaluation criteria and ways to select leaves ""if selections =noneuse selections suitable for target type selections predict select[data target ftypeevaluation_criteria evaluate all_criteria print("split choice","leaf choice\ ","#leaves",'\tjoin(ecrit __doc__ for ecrit in evaluation_criteria),sep="\ "for crit in evaluation_criteriafor leaf in selectionstree dt_learner(datasplit_to_optimize=critleaf_prediction=leaf**tree_argslearn(print(crit __doc__leaf __doc__tree num_leaves"\tjoin("{ }format(data evaluate_dataset(data testtreeecrit)for ecrit in evaluation_criteria),sep="\ "if print_treeprint(tree __doc__ #dt_learner max_display_level if __name__ ="__main__"choose one of the data files #data=data_from_file('data/spect csv'target_index= )print("spect csv"#data=data_from_file('data/iris data'target_index=- )print("iris data"data data_from_file('data/carbool csv'target_index=- seed= #data data_from_file('data/mail_reading csv'target_index=- )print("mail_reading csv"version june |
18,863 | #data data_from_file('data/holiday csv'num_train= target_index=- )print("holiday csv"testdt(dataprint_tree=falsenote that different runs may provide different values as they split the training and test sets differently so if you have hypothesis about what works bettermake sure it is true for different runs exercise the current algorithm does not have very sophisticated stopping criterion what is the current stopping criterion(hintyou need to look at both learn tree and select split exercise extend the current algorithm to include in the stopping criterion (aa minimum child sizedon' use split if one of the children has fewer elements that this (ba depth-bound on the depth of the tree (can improvement bound such that split is only carried out if error with the split is better than the error without the split by at least the improvement bound which values for these parameters make the prediction errors on the test set the smallesttry it on more than one dataset exercise without any input featuresit is often better to include pseudocount that is added to the counts from the training data modify the code so that it includes pseudo-count for the predictions when evaluating splitincluding pseudo counts can make the split worse than no split does pruning with an improvement bound and pseudo-counts make the algorithm work better than with an improvement bound by itselfexercise some people have suggested using information gain (which is equivalent to greedy optimization of log lossas the measure of improvement when building the treeeven in they want to have non-probabilistic predictions in the final tree does this work better than myopically choosing the split that is best for the evaluation criteria we will use to judge the final prediction cross validation and parameter tuning to run the cross validation demoin folder "aipython"load "learncrossvalidation py"using ipython - learncrossvalidation py run the examples at the end to produce graph like figure note that different runs will produce different graphsso your graph will not look like the one in the textbook to try more examplescopy and paste the commented-out commands at the bottom of that file this requires python with matplotlib version june |
18,864 | supervised machine learning the above decision tree overfits the data one way to determine whether the prediction is overfitting is by cross validation the code below implements -fold cross validationwhich can be used to choose the value of parameters to best fit the training data if we want to use parameter tuning to improve predictions on particular data setwe can only use the training data (and not the test datato tune the parameter in -fold cross validationwe partition the training set into approximately equal-sized folds (each fold is an enumeration of examplesfor each foldwe train on the other examplesand determine the error of the prediction on that fold for exampleif there are foldswe train on of the dataand then test on remaining of the data we do this timesso that each example gets used as test set onceand in the training set times the code below creates one copy of the dataand multiple views of the data for each foldfold enumerates the examples in the foldand fold complement enumerates the examples not in the fold learncrossvalidation py -cross validation for parameter tuning from learnproblem import data_setdata_from_fileevaluate from learnnoinputs import predict from learndt import dt_learner import matplotlib pyplot as plt import random class k_fold_dataset(object)def __init__(selftraining_setnum_folds)self data training_set train copy(self target training_set target self input_features training_set input_features self num_folds num_folds self conditions training_set conditions random shuffle(self dataself fold_boundaries [(len(self data)* )//num_folds for in range( ,num_folds+ ) def fold(selffold_num)for in range(self fold_boundaries[fold_num]self fold_boundaries[fold_num+ ])yield self data[ def fold_complement(selffold_num)for in range( ,self fold_boundaries[fold_num])yield self data[ifor in range(self fold_boundaries[fold_num+ ],len(self data))yield self data[ithe validation error is the average error for each examplewhere we test on each foldand learn on the other folds learncrossvalidation py -(continuedversion june |
18,865 | def validation_error(selflearnererror_measure**other_params)error tryfor in range(self num_folds)predictor learner(selftrain=list(self fold_complement( ))**other_paramslearn(error +sumerror_measure(predictor( )self target( )for in self fold( )except valueerrorreturn float("inf"#infinity return error/len(self datathe plot error method plots the average error as function of the minimum number of examples in decision-tree searchboth for the validation set and for the test set the error on the validation set can be used to tune the parameter -choose the value of the parameter that minimizes the error the error on the test set cannot be used to tune the parametersif is were to be used this way then it cannot be used to test learncrossvalidation py -(continued def plot_error(datacriterion=evaluate squared_lossleaf_prediction=predict empiricalnum_folds= maxx=nonexscale='linear')"""plots the error on the validation set and the test set with respect to settings of the minimum number of examples xscale should be 'logor 'linear""plt ion(plt xscale(xscalechange between log and linear scale plt xlabel("min_child_weight"plt ylabel("average "+criterion __doc__folded_data k_fold_dataset(datanum_foldsif maxx =nonemaxx len(data train)// + verrors [validation errors terrors [test set errors for mcw in range( ,maxx)verrors append(folded_data validation_error(dt_learner,criterion,leaf_prediction=leaf_predi min_child_weight=mcw)tree dt_learner(datacriterionleaf_prediction=leaf_predictionmin_child_weight=mcwlearn(terrors append(data evaluate_dataset(data test,tree,criterion)plt plot(range( ,maxx)verrorsls='-',color=' 'label="validation for "+criterion __doc__plt plot(range( ,maxx)terrorsls='--',color=' 'label="test set for "+criterion __doc__plt legend(plt draw( the following produces figure of poole and mackworth [ version june |
18,866 | supervised machine learning data data_from_file('data/spect csv',target_index= seed= plot_error(datawarningmay take long time depending on the dataset #also trydata data_from_file('data/mail_reading csv'target_index=- data data_from_file('data/carbool csv'target_index=- seed= plot_error(datacriterion=evaluate log_lossleaf_prediction=predict laplacewarningmay take long time depending on the dataset note that different runs for the same data will have the same test errorbut different validation error if you rerun the data_from_fileyou will get the new test and training setsand so the graph will change exercise change the error plot so that it can evaluate the stopping criteria of the exercise of section which criteria makes the most difference linear regression and classification here we give gradient descent searcher for linear regression and classification learnlinear py -linear regression and classification from learnproblem import learner import randommath class linear_learner(learner)def __init__(selfdatasettrain=nonelearning_rate= max_init squashed=true)"""creates gradient descent searcher for linear classifier the main learning is carried out by learn( dataset provides the target and the input features train provides subset of the training data to use number_iterations is the default number of steps of gradient descent learning_rate is the gradient descent step size max_init is the maximum absolute value of the initial weights squashed specifies whether the output is squashed linear function ""self dataset dataset self target dataset target if train==noneself train self dataset train elseself train train self learning_rate learning_rate self squashed squashed version june |
18,867 | self input_features [one]+dataset input_features one is defined below self weights {feat:random uniform(-max_init,max_initfor feat in self input_featurespredictor predicts the value of an example from the current parameter settings predictor string gives string representation of the predictor learnlinear py -(continued def predictor(self, )"""returns the prediction of the learner on example ""linpred sum( * (efor , in self weights items()if self squashedreturn sigmoid(linpredelsereturn linpred def predictor_string(selfsig_dig= )"""returns the doc string for the current prediction function sig_dig is the number of significant digits in the numbers""doc "+join(str(round(val,sig_dig))+"*"+feat __doc__ for feat,val in self weights items()if self squashedreturn "sigmoid("doc+")elsereturn doc learn is the main algorithm of the learner it does num iter steps of stochastic gradient descent with batch size the other parameters it gets from the class learnlinear py -(continued def learn(self,num_iter= )for it in range(num_iter)self display( ,"prediction=",self predictor_string()for in self trainpredicted self predictor(eerror predicted self target(eupdate self learning_rate*error for feat in self weightsself weights[feat-update*feat(ereturn self predictor one is function that always returns this is used for one of the input properties learnlinear py -(continued def one( )" return version june |
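The update inside learn is plain stochastic gradient descent: each weight moves by -learning_rate * (predicted - actual) * f(e). A hand-computed illustration of a single squashed update on one invented example:

import math

def sigmoid(x):                  # the standard logistic function, as used below
    return 1/(1 + math.exp(-x))

# one example with two input features plus the constant-1 feature; target y = 1
feats   = {"one": 1.0, "f1": 2.0, "f2": 0.5}    # feature values (invented)
weights = {"one": 0.0, "f1": 0.1, "f2": -0.3}   # current weights (invented)
eta, y  = 0.05, 1

linpred   = sum(weights[f]*feats[f] for f in weights)   # 0.05
predicted = sigmoid(linpred)                            # about 0.512
error     = predicted - y                               # about -0.488
for f in weights:
    weights[f] -= eta * error * feats[f]   # same update as Linear_learner.learn
print(weights)   # every weight increases, in proportion to its feature value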
18,868 | supervised machine learning sigmoid(xis the function - the inverse of sigmoid is the logit function learnlinear py -(continued def sigmoid( )return /( +math exp(- ) def logit( )return -math log( / - sigmoid([ ]returns [ where vi exp(xi exp(xj the inverse of sigmoid is the logit function learnlinear py -(continued def softmax(xs,domain=none)"""xs is list of valuesand domain is the domain ( listor none if the list should be returned returns distribution over the domain ( dict"" max(xsuse of prevents overflow (and all values underflowingexps [math exp( -mfor in xss sum(expsif domainreturn { : / for ( ,vin zip(domain,exps)elsereturn [ / for in exps def indicator(vdomain)return [ if ==dv else for dv in domainthe following tests the learner on data sets uncomment the other data sets for different examples learnlinear py -(continued from learnproblem import data_setdata_from_fileevaluate from learnproblem import evaluate import matplotlib pyplot as plt def test(**args)data data_from_file('data/spect csv'target_index= data data_from_file('data/mail_reading csv'target_index=- data data_from_file('data/carbool csv'target_index=- learner linear_learner(data,**argslearner learn(version june |
18,869 | print("function learned is"learner predictor_string()for ecrit in evaluate all_criteriatest_error data evaluate_dataset(data testlearner predictorecritprint(average"ecrit __doc__"is"test_errorthe following plots the errors on the training and test sets as function of the number of steps of gradient descent learnlinear py -(continued def plot_steps(learner=nonedata nonecriterion=evaluate squared_lossstep= num_steps= log_scale=truelegend_label="")""plots the training and test error for learner data is the learner_class is the class of the learning algorithm criterion gives the evaluation criterion plotted on the -axis step specifies how many steps are run for each point on the plot num_steps is the number of points to plot ""if legend_label !""legend_label+=plt ion(plt xlabel("step"plt ylabel("average "+criterion __doc__if log_scaleplt xscale('log'#plt semilogx(#makes log scale elseplt xscale('linear'if data is nonedata data_from_file('data/holiday csv'num_train= target_index=- #data data_from_file('data/spect csv'target_index= data data_from_file('data/mail_reading csv'target_index=- data data_from_file('data/carbool csv'target_index=- #random seed(nonereset seed if learner is nonelearner linear_learner(datatrain_errors [test_errors [for in range( ,num_steps+ ,step)test_errors append(data evaluate_dataset(data testlearner predictorcriterion)train_errors append(data evaluate_dataset(data trainlearner predictorcriterion)learner display( "train error:",train_errors[- ]version june |
18,870 | supervised machine learning "test error:",test_errors[- ]learner learn(num_iter=stepplt plot(range( ,num_steps+ ,step),train_errors,ls='-',label=legend_label+"training"plt plot(range( ,num_steps+ ,step),test_errors,ls='--',label=legend_label+"test"plt legend(plt draw(learner display( "train error:",train_errors[- ]"test error:",test_errors[- ] if __name__ ="__main__"test( this generates the figure from learnproblem import data_set_augmented,prod_feat data data_from_file('data/spect csv'prob_test= target_index= dataplus data_set_augmented(data,[],[prod_feat]plot_steps(data=data,num_steps= plot_steps(data=dataplus,num_steps= warning very slow exercise the squashed learner only makes predictions in the range ( if the output values are { there is no use prediction less than or greater than change the squashed learner so that it can learn values in the range ( test it on the file 'data/car csvthe following plots the prediction as function of the function of the number of steps of gradient descent we first define version of range that allows for real numbers (integers and floatslearnlinear py -(continued def arange(start,stop,step)"""returns enumeration of values in the range [start,stopseparated by step like the built-in range(start,stop,stepbut allows for integers and floats note that rounding errors are expected with real numbers (or use numpy arange""while start<stopyield start start +step def plot_prediction(datalearner noneminx maxx step_size for plotting label "function")plt ion(plt xlabel(" "plt ylabel(" "if learner is noneversion june |
18,871 | learner linear_learner(datasquashed=falselearner learning_rate= learner learn( learner learning_rate= learner learn( learner learning_rate= learner learn( learner display( ,"function learned is"learner predictor_string()"error=",data evaluate_dataset(data trainlearner predictorevaluate squared_loss)plt plot([ [ for in data train],[ [- for in data train],"bo",label="data"plt plot(list(arange(minx,maxx,step_size)),[learner predictor([ ]for in arange(minx,maxx,step_size)]label=labelplt legend(plt draw(learnlinear py -(continued from learnproblem import data_set_augmentedpower_feat def plot_polynomials(datalearner_class linear_learnermax_degree minx maxx num_iter learning_rate step_size for plotting )plt ion(plt xlabel(" "plt ylabel(" "plt plot([ [ for in data train],[ [- for in data train],"ko",label="data"x_values list(arange(minx,maxx,step_size)line_styles ['-','--','',':'colors [' ',' ',' ',' ',' 'for degree in range(max_degree)data_aug data_set_augmented(data,[power_feat(nfor in range( ,degree+ )]include_orig=falselearner learner_class(data_aug,squashed=falselearner learning_rate learning_rate learner learn(num_iterlearner display( ,"for degree",degree"function learned is"learner predictor_string()"error=",data evaluate_dataset(data trainlearner predictorevaluate squared_loss)ls line_styles[degree len(line_styles)version june |
18,872 | supervised machine learning col colors[degree len(colors)plt plot(x_values,[learner predictor([ ]for in x_values]linestyle=lscolor=collabel="degree="+str(degree)plt legend(loc='upper left'plt draw( trydata data_from_file('data/simp_regr csv'prob_test= boolean_features=falsetarget_index=- plot_prediction(data plot_polynomials(data #datam data_from_file('data/mail_reading csv'target_index=- #plot_prediction(datambatched stochastic gradient descent this implements batched stochastic gradient descent if the batch size is it can be simplified by not storing the differences in dbut applying them directlythis would the be equivalent to the original codethis overrides the learner linear learner note that the comparison with regular gradient descent is unfair as the number of updates per step is not the same (how could it me made more fair?learnlinearbsgd py -linear learner with batched stochastic gradient descent from learnlinear import linear_learner import randommath class linear_learner_bsgd(linear_learner)def __init__(self*argsbatch_size= **kargs)linear_learner __init__(self*args**kargsself batch_size batch_size def learn(self,num_iter=none)if num_iter is nonenum_iter self number_iterations batch_size min(self batch_sizelen(self train) {feat: for feat in self weightsfor it in range(num_iter)self display( ,"prediction=",self predictor_string()for in random sample(self trainbatch_size)error self predictor(eself target(eupdate self learning_rate*error for feat in self weightsd[feat+update*feat(efor feat in self weightsself weights[feat- [featd[feat]= return self predictor version june |
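The essential difference from the plain learner is that the per-example contributions are accumulated in d and applied as a single update per step. A stripped-down sketch of one batched step for an unsquashed linear predictor (numbers invented):

import random

# one batched gradient step, illustration only
train = [([1.0, 2.0], 5.0), ([1.0, 0.0], 1.0),
         ([1.0, 3.0], 7.0), ([1.0, 1.0], 3.0)]   # (feature values, target) pairs
weights = [0.0, 0.0]
eta, batch_size = 0.1, 2

batch = random.sample(train, batch_size)
d = [0.0, 0.0]                           # accumulated contributions, as in the class
for x, y in batch:
    error = sum(w*xi for w, xi in zip(weights, x)) - y
    for i, xi in enumerate(x):
        d[i] += eta * error * xi         # same per-example term as Linear_learner
for i in range(len(weights)):
    weights[i] -= d[i]                   # one summed update per step
                                         # (dividing by batch_size would average)
print(weights)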
18,873 | from learnlinear import plot_steps from learnproblem import data_from_file data data_from_file('data/holiday csv'target_index=- learner linear_learner_bsgd(dataplot_steps(learner learnerdata=data to plot polynomials with batching (compare to sgdfrom learnlinear import plot_polynomials plot_polynomials(datalearner_class linear_learner_bsgd boosting the following code implements functional gradient boosting for regression boosted dataset is created from base dataset by subtracting the prediction of the offset function from each example this does not save the new datasetbut generates it as needed the amount of space used is constantindependent on the size of the data set learnboosting py -functional gradient boosting from learnproblem import data_setlearnerevaluate from learnnoinputs import predict from learnlinear import sigmoid import statistics import random class boosted_dataset(data_set)def __init__(selfbase_datasetoffset_funsubsample= )"""new dataset which is like base_datasetbut offset_fun(eis subtracted from the target of each example ""self base_dataset base_dataset self offset_fun offset_fun self train random sample(base_dataset train,int(subsample*len(base_dataset train))self test base_dataset test #data_set __init__(selfbase_dataset trainbase_dataset testbase_dataset prob_testbase_dataset target_index #def create_features(self)"""creates new features called at end of data_set init(defines new target ""self input_features self base_dataset input_features def newout( )return self base_dataset target(eself offset_fun(enewout frange self base_dataset target frange newout ftype self infer_type(newout frangeself target newout version june |
18,874 | supervised machine learning def conditions(self*argscolsample_bytree= **nargs)conds self base_dataset conditions(*args**nargsreturn random sample(condsint(colsample_bytree*len(conds)) boosting learner takes in dataset and base learnerand returns new predictor the base learnertakes datasetand returns learner object learnboosting py -(continued class boosting_learner(learner)def __init__(selfdatasetbase_learner_classsubsample= )self dataset dataset self base_learner_class base_learner_class self subsample subsample mean sum(self dataset target(efor in self dataset train)/len(self dataset trainself predictor lambda :mean function that returns mean for each example self predictor __doc__ "lambda :"+str(meanself offsets [self predictorlist of base learners self predictors [self predictorlist of predictors self errors [data evaluate_dataset(data testself predictorevaluate squared_loss)self display( ,"predict mean test set mean squared loss="self errors[ def learn(selfnum_ensembles= )"""adds num_ensemble learners to the ensemble returns new predictor ""for in range(num_ensembles)train_subset boosted_dataset(self datasetself predictorsubsample=self subsamplelearner self base_learner_class(train_subsetnew_offset learner learn(self offsets append(new_offsetdef new_pred(eold_pred=self predictoroff=new_offset)return old_pred( )+off(eself predictor new_pred self predictors append(new_predself errors append(data evaluate_dataset(data testself predictorevaluate squared_loss)self display( , "iteration {len(self offsets)- },treesize {new_offset num_leavesmean squared loss={self errors[- ]}"return self predictor for testingsp dt learner returns learner that predicts the mean at the leaves and is evaluated using squared loss it can also take arguments to change the default arguments for the trees version june |
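Spelled out, a run looks like the following (the dataset, seed, and min_child_weight values are arbitrary choices). Because the learner's error tracking refers to the module-level variable data, this is simplest to run inside learnBoosting.py, editing or uncommenting the test code that follows:

# Usage sketch, intended to be run inside learnBoosting.py; parameter values
# are arbitrary choices.
data = Data_from_file('data/carbool.csv', target_index=-1, seed=123)
learner = Boosting_learner(data, sp_DT_learner(min_child_weight=20))
predictor = learner.learn(10)        # add 10 trees to the ensemble
print(learner.errors)                # test-set mean squared loss after each round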
18,875 | learnboosting py -(continued testing from learndt import dt_learner from learnproblem import data_setdata_from_file def sp_dt_learner(split_to_optimize=evaluate squared_lossleaf_prediction=predict mean,**nargs)"""creates learner with different default arguments replaced by **nargs ""def new_learner(dataset)return dt_learner(dataset,split_to_optimize=split_to_optimizeleaf_prediction=leaf_prediction**nargsreturn new_learner #data data_from_file('data/car csv'target_index=- regression data data_from_file('data/student/student-mat-nq csv'separator=';',has_header=true,target_index=- ,seed= ,include_only=list(range( ))+[ ]# #data data_from_file('data/spect csv'target_index= seed= # #data data_from_file('data/mail_reading csv'target_index=- #data data_from_file('data/holiday csv'num_train= target_index=- #learner boosting_learner(datasp_dt_learner(split_to_optimize=evaluate squared_lossleaf_prediction=predict meanmin_child_weight= )#learner boosting_learner(datasp_dt_learner( )#learner boosting_learner(datasp_dt_learner( )#predictor =learner learn( #for in learner offsetsprint( __doc__import matplotlib pyplot as plt def plot_boosting_trees(datasteps= mcws=[ , , , ]gammas[ , , , ])to reduce clutter uncomment one of following two lines #mcws=[ #gammas=[ learners [(mcwgammaboosting_learner(datasp_dt_learner(min_child_weight=mcwgamma=gamma))for gamma in gammas for mcw in mcws plt ion(plt xscale('linear'change between log and linear scale plt xlabel("number of trees"plt ylabel("mean squared loss"markers ( + for in [' ',' ',' ',' ',' ',' ',' 'for in ['-','--','',':']for (mcw,gamma,learnerin learnersdata display( , "min_child_weight={mcw}gamma={gamma}"learner learn(stepsversion june |
18,876 | supervised machine learning plt plot(range(steps+ )learner errorsnext(markers)label= "min_child_weight={mcw}gamma={gamma}"plt legend(plt draw( plot_boosting_trees(datagradient tree boosting the following implements gradient boosted trees for classification if you want to use this gradient tree boosting for real problemwe recommend using xgboost [chen and guestrin gtb_learner subclasses dt-learner the method learn_tree is used unchanged dt-learner assumes that the value at the leaf is the prediction of the leafthus leaf_value needs to be overridden it also assumes that all nodes at leaf have the same predictionbut in gbt the elements of leaf can have different valuesdepending on the previous trees thus sum_losses also needs to be overridden learnboosting py -(continued class gtb_learner(dt_learner)def __init__(selfdatasetnumber_treeslambda_reg= gamma= **dtargs)dt_learner __init__(selfdatasetsplit_to_optimize=evaluate log_loss**dtargsself number_trees number_trees self lambda_reg lambda_reg self gamma gamma self trees [ def learn(self)for in range(self number_trees)tree self learn_tree(self dataset conditions(self max_num_cuts)self trainself trees append(treeself display( , """iteration {itreesize {tree num_leavestrain logloss=self dataset evaluate_dataset(self dataset trainself gtb_predictorevaluate log_losstest logloss=self dataset evaluate_dataset(self dataset testself gtb_predictorevaluate log_loss)}"""return self gtb_predictor def gtb_predictor(selfexampleextra= )"""prediction for exampleextras is an extra contribution for this example being considered ""version june |
18,877 | return sigmoid(sum( (examplefor in self trees)+extra def leaf_value(selfegsdomain=[ , ])"""value at the leaves for examples egs domain argument is ignored""pred_acts [(self gtb_predictor( ),self target( )for in egsreturn sum( - for ( ,ain pred_acts/(sum( *( -pfor ( ,ain pred_acts)+self lambda_reg def sum_losses(selfdata_subset)"""returns sum of losses for dataset (assuming leaf is formed with no more splits""leaf_val self leaf_value(data_subseterror sum(evaluate log_loss(self gtb_predictor( ,leaf_val)self target( )for in data_subsetself gamma return error testing learnboosting py -(continued data data_from_file('data/carbool csv'target_index=- seed= gtb_learner gtb_learner(data gtb_learner learn(version june |
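The leaf_value method above is a Newton step for log loss: the additive value at a leaf is the sum of residuals (a - p) divided by the sum of p*(1-p) plus lambda_reg. The following standalone sketch computes the same quantity for a toy leaf; the names are illustrative and it does not depend on the aipython classes.

def leaf_value_sketch(pred_acts, lambda_reg=1.0):
    """pred_acts is a list of (current_prediction, actual) pairs for the
    examples reaching a leaf; returns the additive value for that leaf."""
    numerator = sum(a - p for (p, a) in pred_acts)                        # sum of residuals
    denominator = sum(p * (1 - p) for (p, a) in pred_acts) + lambda_reg   # curvature + regularizer
    return numerator / denominator

# three examples reach the leaf; the current model predicts 0.5 for each of them
print(leaf_value_sketch([(0.5, 1), (0.5, 1), (0.5, 0)]))   # 0.5 / 1.75 = 0.2857...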
18,878 | Neural Networks and Deep Learning

Warning: this is not meant to be an efficient implementation of deep learning. If you want to do serious machine learning on medium-sized or large data, we would recommend Keras or PyTorch. These are, however, black boxes. The AIPython neural network code should be seen like a car engine made of glass: you can see exactly how it works, even if it is not fast. Parameters that are the same as in Keras have the same names.

Layers

A neural network is built from layers. This provides a modular implementation of layers. Layers can easily be stacked in many configurations. A layer needs to implement a function to compute the output values from the inputs, a way to back-propagate the error, and perhaps a way to update its parameters.

learnNN.py — Neural Network Learning

from learnProblem import Learner, Data_set, Data_from_file, Data_from_files, Evaluate
from learnLinear import sigmoid, one, softmax, indicator
import random, math, time

class Layer(object):
    def __init__(self, nn, num_outputs=None):
18,879 | neural networks and deep learning """given list of inputsoutputs will produce list of length num_outputs nn is the neural network this layer is part of num outputs is the number of outputs for this layer ""self nn nn self num_inputs nn num_outputs output of nn is the input to this layer if num_outputsself num_outputs num_outputs elseself num_outputs nn num_outputs same as the inputs def output_values(self,input_valuestraining=false)"""return the outputs for this layer for the given input values input_values is list of the inputs to this layer (of length num_inputsreturns list of length self num_outputs it can act differently when training and when predicting ""raise notimplementederror("output_values"abstract method def backprop(self,errors)"""backpropagate the errors on the outputs errors is list of errors for the outputs (of length self num_outputsreturns the errors for the inputs to this layer (of length self num_inputs you can assume that this is only called after corresponding output_valueswhich can remember information information required for the back-propagation ""raise notimplementederror("backprop"abstract method def update(self)"""updates parameters after batch overridden by layers that have parameters ""pass linear layer maintains an array of weights self weights[ ][iis the weight between input and output is added to the end of the inputs the default initialization is the glorot uniform initializer [glorot and bengio ]which is the default in keras an alternative is to provide limitin which case the values are selected uniformly in the range [-limitlimitkeras treats the bias separatelyand defaults to zero learnnn py -(continued class linear_complete_layer(layer)version june |
18,880 | """ completely connected layer""def __init__(selfnnnum_outputslimit=none)""" completely connected linear layer nn is neural network that the inputs come from num_outputs is the number of outputs the random initialization of parameters is in range [-limit,limit""layer __init__(selfnnnum_outputsif limit is nonelimit =math sqrt( /(self num_inputs+self num_outputs)self weights[ ][iis the weight between input and output self weights [[random uniform(-limitlimitif inf self num_inputs else for inf in range(self num_inputs+ )for outf in range(self num_outputs)self delta [[ for inf in range(self num_inputs+ )for outf in range(self num_outputs) def output_values(self,input_valuestraining=false)"""returns the outputs for the input values it remembers the values for the backprop note in self weights there is weight list for every outputso wts in self weights loops over the outputs the bias is the *lastvalue of each list in self weights ""self inputs input_values [ return [sum( *val for ( ,valin zip(wts,self inputs)for wts in self weights def backprop(self,errors)"""backpropagate the errorsupdating the weights and returning the error in its inputs ""input_errors [ ]*(self num_inputs+ for out in range(self num_outputs)for inp in range(self num_inputs+ )input_errors[inp+self weights[out][inperrors[outself delta[out][inp+self inputs[inperrors[outreturn input_errors[:- remove the error for the " def update(self)"""updates parameters after batch""batch_step_size self nn learning_rate self nn batch_size for out in range(self num_outputs)for inp in range(self num_inputs+ )self weights[out][inp-batch_step_size self delta[out][inpself delta[out][inp the standard activation function for hidden nodes is the relu version june |
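For example, applying relu elementwise to [-2, 0, 3] gives [0, 0, 3]; its derivative is 0 for negative inputs and 1 for positive inputs, which is what the backprop method below uses.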
18,881 | neural networks and deep learning learnnn py -(continued class relu_layer(layer)"""rectified linear unit (reluf(zmax( zthe number of outputs is equal to the number of inputs ""def __init__(selfnn)layer __init__(selfnn def output_values(selfinput_valuestraining=false)"""returns the outputs for the input values it remembers the input values for the backprop ""self input_values input_values self outputs[max( ,inpfor inp in input_valuesreturn self outputs def backprop(self,errors)"""returns the derivative of the errors""return [ if inp> else for ,inp in zip(errorsself input_values)one of the old standards for the activation function for hidden layers is the sigmoid it is included here to experiment with learnnn py -(continued class sigmoid_layer(layer)"""sigmoids of the inputs the number of outputs is equal to the number of inputs each output is the sigmoid of its corresponding input ""def __init__(selfnn)layer __init__(selfnn def output_values(selfinput_valuestraining=false)"""returns the outputs for the input values it remembers the output values for the backprop ""self outputs[sigmoid(inpfor inp in input_valuesreturn self outputs def backprop(self,errors)"""returns the derivative of the errors""return [ *out*( -outfor ,out in zip(errorsself outputs) feedforward networks learnnn py -(continued class nn(learner)version june |
18,882 | def __init__(selfdatasetvalidation_proportion learning_rate= )"""creates neural network for datasetlayers is the list of layers ""self dataset dataset self output_type dataset target ftype self learning_rate learning_rate self input_features dataset input_features self num_outputs len(self input_featuresvalidation_num int(len(self dataset train)*validation_proportionif validation_num random shuffle(self dataset trainself validation_set self dataset train[-validation_num:self training_set self dataset train[:-validation_numelseself validation_set [self training_set self dataset train self layers [self bn number of batches run def add_layer(self,layer)"""add layer to the network each layer gets number of inputs from the previous layers outputs ""self layers append(layerself num_outputs layer num_outputs def predictor(self,ex)"""predicts the value of the first output for example ex ""values [ (exfor in self input_featuresfor layer in self layersvalues layer output_values(valuesreturn sigmoid(values[ ]if self output_type =="booleanelse softmax(valuesself dataset target frangeif self output_type ="categoricalelse values[ def predictor_string(self)return "not implementedthe learn method learns network learnnn py -(continued def learn(selfepochs= batch_size= num_iter nonereport_each= )"""learns parameters for neural network using stochastic gradient decent epochs is the number of times through the data (on averagebatch_size is the maximum size of each batch version june |
18,883 | neural networks and deep learning num_iter is the number of iterations over the batches overrides epochs if provided (allows for fractions of epochsreport_each means give the errors after each multiple of that iterations ""self batch_size min(batch_sizelen(self training_set)don' have batches bigger than training size if num_iter is nonenum_iter (epochs len(self training_set)/self batch_size #self display( ,"batch\ ","\tjoin(criterion __doc__ for criterion in evaluate all_criteria)for in range(num_iter)batch random sample(self training_setself batch_sizefor in batchcompute all outputs values [ (efor in self input_featuresfor layer in self layersvalues layer output_values(valuestraining=truebackpropagate predicted [sigmoid(vfor in valuesif self output_type ="boolean"else softmax(valuesif self output_type ="categorical"else values actuals indicator(self dataset target( )self dataset target frangeif self output_type ="categorical"else [self dataset target( )errors [pred-obsd for (obsd,predin zip(actuals,predicted)for layer in reversed(self layers)errors layer backprop(errorsupdate all parameters in batch for layer in self layerslayer update(self bn+= if ( + )%report_each== self display( ,self bn,"\ ""\ \tjoin("{ }formatself dataset evaluate_dataset(self validation_setself predictorcriterion)for criterion in evaluate all_criteria)sep="" improved optimization momentum learnnn py -(continued class linear_complete_layer_momentum(linear_complete_layer)version june |
18,884 | """ completely connected layer""def __init__(selfnnnum_outputslimit=nonealpha= epsilon - vel = )""" completely connected linear layer nn is neural network that the inputs come from num_outputs is the number of outputs max_init is the maximum value for random initialization of parameters vel is the initial velocity for each parameter ""linear_complete_layer __init__(selfnnnum_outputslimit=limitself weights[ ][iis the weight between input and output self velocity [[vel for inf in range(self num_inputs+ )for outf in range(self num_outputs)self alpha alpha self epsilon epsilon def update(self)"""updates parameters after batch""batch_step_size self nn learning_rate self nn batch_size for out in range(self num_outputs)for inp in range(self num_inputs+ )self velocity[out][inpself alpha*self velocity[out][inpbatch_step_size self delta[out][inpself weights[out][inp+self velocity[out][inpself delta[out][inp rms-prop learnnn py -(continued class linear_complete_layer_rms_prop(linear_complete_layer)""" completely connected layer""def __init__(selfnnnum_outputslimit=nonerho= epsilon - )""" completely connected linear layer nn is neural network that the inputs come from num_outputs is the number of outputs max_init is the maximum value for random initialization of parameters ""linear_complete_layer __init__(selfnnnum_outputslimit=limitself weights[ ][iis the weight between input and output self ms [[ for inf in range(self num_inputs+ )for outf in range(self num_outputs)self rho rho self epsilon epsilon def update(self)"""updates parameters after batch""for out in range(self num_outputs)version june |
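            # rms-prop keeps a running mean of the squared gradient for each parameter,
            #     ms <- rho*ms + (1-rho)*gradient**2,
            # and divides the learning step by sqrt(ms + epsilon), so parameters with
            # consistently large gradients take smaller steps.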
18,885 | neural networks and deep learning for inp in range(self num_inputs+ )gradient self delta[out][inpself nn batch_size self ms[out][inpself rho*self ms[out][inp]( -self rhogradient** self weights[out][inp-self nn learning_rate/(self ms[out][inp]+self epsilon)** gradient self delta[out][inp dropout dropout is implemented as layer learnnn py -(continued from utilities import flip class dropout_layer(layer)"""dropout layer "" def __init__(selfnnrate= )""rate is fraction of the input units to drop =rate ""self rate rate layer __init__(selfnn def output_values(selfinput_valuestraining=false)"""returns the outputs for the input values it remembers the input values for the backprop ""if trainingscaling /( -self rateself mask [ if flip(self rateelse for in input_valuesreturn [ * *scaling for ( ,yin zip(input_valuesself mask)elsereturn input_values def backprop(self,errors)"""returns the derivative of the errors""return [ * for ( ,yin zip(errorsself mask) class dropout_layer_ (layer)"""dropout layer "" def __init__(selfnnrate= )""rate is fraction of the input units to drop =rate ""version june |
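        # inverted dropout: during training each kept unit is scaled by 1/(1 - rate),
        # so at prediction time (training=False) the inputs pass through unscaled.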
18,886 | self rate rate layer __init__(selfnn def output_values(selfinput_valuestraining=false)"""returns the outputs for the input values it remembers the input values for the backprop ""if trainingscaling /( -self rateself outputs[ if flip(self rateelse inp*scaling make with probability rate for inp in input_valuesreturn self outputs elsereturn input_values def backprop(self,errors)"""returns the derivative of the errors""return errors examples the following constructs neural network with one hidden layer the hidden layer has width with relu activation function the output layer used sigmoid learnnn py -(continued #data data_from_file('data/mail_reading csv'target_index=- #data data_from_file('data/mail_reading_consis csv'target_index=- data data_from_file('data/spect csv'prob_test= target_index= #data data_from_file('data/iris data'prob_test= target_index=- examples approx test test #data data_from_file('data/if_x_then_y_else_z csv'num_train= target_index=- not linearly sep #data data_from_file('data/holiday csv'target_index=- #num_train= #data data_from_file('data/processed cleveland data'target_index=- random seed(none nn nn(datavalidation_proportion nn add_layer(linear_complete_layer(nn , )#nn add_layer(sigmoid_layer(nn )comment this or the next nn add_layer(relu_layer(nn )nn add_layer(linear_complete_layer(nn , )when using output_type="boolean#nn add_layer(linear_complete_layer(nn , )when using output_type="categorical#nn learn(epochs nn do nn(dataversion june |
18,887 | neural networks and deep learning nn do add_layer(linear_complete_layer(nn do, )#nn add_layer(sigmoid_layer(nn )comment this or the next nn do add_layer(relu_layer(nn do)nn do add_layer(dropout_layer(nn dorate= )#nn add_layer(linear_complete_layer(nn do, )when using output_type="booleannn do add_layer(linear_complete_layer(nn do, )when using output_type="categorical#nn do learn(epochs nn_r nn(datann_r add_layer(linear_complete_layer_rms_prop(nn_r , )#nn_r add_layer(sigmoid_layer(nn_r )comment this or the next nn_r add_layer(relu_layer(nn_r )#nn_r add_layer(linear_complete_layer(nn_r , )when using output_type="booleannn_r add_layer(linear_complete_layer_rms_prop(nn_r , )when using output_type="categorical#nn_r learn(epochs nnm nn(datannm add_layer(linear_complete_layer_momentum(nnm , )#nnm add_layer(sigmoid_layer(nnm )comment this or the next nnm add_layer(relu_layer(nnm )#nnm add_layer(linear_complete_layer(nnm , )when using output_type="booleannnm add_layer(linear_complete_layer_momentum(nnm , )when using output_type="categorical#nnm learn(epochs nn nn(data#"boolean"nn add_layer(linear_complete_layer_rms_prop(nn , )nn add_layer(relu_layer(nn )nn add_layer(linear_complete_layer_rms_prop(nn , )when using output_type="categorical nn nn(data#"boolean"nn add_layer(linear_complete_layer_rms_prop(nn , )nn add_layer(relu_layer(nn )nn add_layer(linear_complete_layer_rms_prop(nn , )when using output_type="categorical nn nn(data,learning_rate= nn add_layer(linear_complete_layer(nn , )categorical linear regression #nn add_layer(linear_complete_layer_rms_prop(nn , )categorical linear regression plotting version june |
18,888 | learnnn py -(continued from learnlinear import plot_steps from learnproblem import evaluate to show plotsplot_steps(learner nn data datacriterion=evaluate log_lossnum_steps= log_scale=falselegend_label="nn "plot_steps(learner nn data datacriterion=evaluate log_lossnum_steps= log_scale=falselegend_label="nn "plot_steps(learner nn data datacriterion=evaluate log_lossnum_steps= log_scale=falselegend_label="nn "plot_steps(learner nn data datacriterion=evaluate log_lossnum_steps= log_scale=falselegend_label="nn " plot_steps(learner nn data datacriterion=evaluate accuracynum_steps= log_scale=falselegend_label="nn "plot_steps(learner nn data datacriterion=evaluate accuracynum_steps= log_scale=falselegend_label="nn "plot_steps(learner nn data datacriterion=evaluate accuracynum_steps= log_scale=falselegend_label="nn "plot_steps(learner nn data datacriterion=evaluate accuracynum_steps= log_scale=falselegend_label="nn " print some training examples #for eg in random sample(data train, )print(eg,nn predictor(eg) print some test examples #for eg in random sample(data test, )print(eg,nn predictor(eg) to see the weights learned in linear layers nn layers[ weights nn layers[ weights print testfor in data trainprint( ,nn predictor( ) def test(datahidden_widths [ ]epochs= optimizers [linear_complete_layerlinear_complete_layer_momentumlinear_complete_layer_rms_prop])data display( ,"batch\ ","\tjoin(criterion __doc__ for criterion in evaluate all_criteria)for optimizer in optimizersnn nn(datafor width in hidden_widthsnn add_layer(optimizer(nn,width)nn add_layer(relu_layer(nn)if data target ftype ="boolean"nn add_layer(optimizer(nn, )version june |
18,889 | neural networks and deep learning elseerror( "not implemented{data output_type}"nn learn(epochsthe following tests on mnist the original files are from com/exdb/mnistthis code assumes you use the csv files from com/projects/mnist-in-csv/and put them in the directory /mnistnote that this is very inefficientyou would be better to use keras or pytorch there are input units and hidden unitswhich makes , parameters for the lowest linear layer so don' be surprised when it takes many hours in aipython (even if it only takes few seconds in keraslearnnn py -(continued simplified version( training instancesdata_mnist data_from_file(/mnist/mnist_train csv'prob_test= target_index= boolean_features=falsetarget_type="categorical" full versiondata_mnist data_from_files(/mnist/mnist_train csv'/mnist/mnist_test csv'target_index= boolean_features=falsetarget_type="categorical" nn_mnist nn(data_mnistvalidation_proportion learning_rate= #validation set nn_mnist add_layer(linear_complete_layer_rms_prop(nn_mnist, ))nn_mnist add_layer(relu_layer(nn_mnist))nn_mnist add_layer(linear_complete_layer_rms_prop(nn_mnist, )start_time time perf_counter();nn_mnist learn(epochs= batch_size= );end_time time perf_counter();print("time:"end_time start_time,"seconds"# epoch determine test errordata_mnist evaluate_dataset(data_mnist testnn_mnist predictorevaluate accuracyprint some random predictionsfor eg in random sample(data_mnist test, )print(data_mnist target(eg),nn_mnist predictor(eg),nn_mnist predictor(eg)[data_mnist target(eg)]exercise in the definition of nn abovefor each of the followingfirst hypothesize what will happenthen test your hypothesisthen explain whether you testing confirms your hypothesis or not test it for more than one data setand use more than one run for each data set (awhich fits the data betterhaving sigmoid layer or relu layer after the first linear layer(bwhich is fasterhaving sigmoid layer or relu layer after the first linear layer(cwhat happens if you have both the sigmoid layer and then relu layer after the first linear layer and before the second linear layer(dwhat happens if you have relu layer then sigmoid layer after the first linear layer and before the second linear layerversion june |
18,890 | (e) What happens if you have neither the sigmoid layer nor the ReLU layer after the first linear layer? (A sketch of one possible comparison harness for these questions is given below.)

Exercise Do some
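For the exercise above comparing sigmoid and ReLU hidden layers, one way to start is a harness that trains two otherwise-identical networks and records test accuracy and wall-clock training time. This is only a sketch: it assumes the NN, Linear_complete_layer, Sigmoid_layer, ReLU_layer and Evaluate definitions from learnNN.py above and a loaded data object with a Boolean target; the name compare_activations is not part of the aipython code.

import time

def compare_activations(data, width=3, epochs=100):
    results = {}
    for (label, hidden_layer_class) in [("sigmoid", Sigmoid_layer), ("relu", ReLU_layer)]:
        nn = NN(data)                                   # same architecture apart from the activation
        nn.add_layer(Linear_complete_layer(nn, width))
        nn.add_layer(hidden_layer_class(nn))
        nn.add_layer(Linear_complete_layer(nn, 1))      # Boolean target
        start = time.perf_counter()
        nn.learn(epochs=epochs)
        elapsed = time.perf_counter() - start
        accuracy = data.evaluate_dataset(data.test, nn.predictor, Evaluate.accuracy)
        results[label] = (accuracy, elapsed)
    return results

# run it several times, and on more than one dataset, before drawing conclusions:
# print(compare_activations(data))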
18,891 | Reasoning Under Uncertainty

Representing Probabilistic Models

A variable consists of a name, a domain and an optional (x,y) position (for displaying). The domain of a variable is a list or a tuple, as the ordering will matter in the representation of factors.

probVariables.py — Probabilistic Variables

import random

class Variable(object):
    """A random variable.
    name (string) - name of the variable
    domain (list) - a list of the values for the variable.
    Variables are ordered according to their name.
    """

    def __init__(self, name, domain, position=None):
        """Variable
        name - a string
        domain - a list of printable values
        position - of form (x,y)
        """
        self.name = name            # string
        self.domain = domain        # list of values
        self.position = position if position else (random.random(), random.random())
        self.size = len(domain)

    def __str__(self):
        return self.name
18,892 | reasoning under uncertainty def __repr__(self)return self name "variable({self name}) representing factors factor ismathematicallya function from variables into numberthat is given value for each of its variableit gives number factors are used for conditional probabilitiesutilities in the next and are explicitly constructed by some algorithms (in particular variable eliminationa variable assignmentor just assignmentis represented as {variable valuedictionary factor can be evaluated when all of its variables are assigned the method get_value evaluates the factor for an assignment the assignment can include extra variables not in the factor this method needs to be defined for every subclass probfactors py -factors for graphical models from display import displayable import math class factor(displayable)nextid= each factor has unique identifierfor printing def __init__(self,variables)self variables variables ordered list of variables self id factor nextid self name " {self id}factor nextid + def can_evaluate(self,assignment)"""true when the factor can be evaluated in the assignment assignment is {variable:valuedict ""return all( in assignment for in self variables def get_value(self,assignment)"""returns the value of the factor given the assignment of values to variables needs to be defined for each subclass ""assert self can_evaluate(assignmentraise notimplementederror("get_value"abstract method the method __str__ returns brief definition (like " ( , , )"the method to_table returns string representations of table showing all of the assignments of values to variablesand the corresponding value probfactors py -(continued def __str__(self)version june |
18,893 | """returns string representing summary of the factor""return "{self name}({',join(str(varfor var in self variables)}) def to_table(selfvariables=nonegiven={})"""returns string representation of the factor allows for an arbitrary variable ordering variables is list of the variables in the factor (can contain other variables)""if variables==nonevariables [ for in self variables if not in givenelse#enforce ordering and allow for extra variables in ordering variables [ for in variables if in self variables and not in givenhead "\tjoin(str(vfor in variablesreturn head+"\ "+self ass_to_str(variablesgivenvariables def ass_to_str(selfvarsasstallvars)#print( "ass_to_str({vars}{asst}{allvars})"if varsreturn "\njoin(self ass_to_str(vars[ :]{**asstvars[ ]:val}allvarsfor val in vars[ domainelsereturn ("\tjoin(str(asst[var]for var in allvars"\ "+"{ }format(self get_value(asst) __repr__ __str__ conditional probability distributions conditional probability distribution (cpdis type of factor that represents conditional probability cpd representing ( yk is type of factorwhere given values for and each yi returns number probfactors py -(continued class cpd(factor)def __init__(selfchildparents)"""represents (variable parents""self parents parents self child child factor __init__(selfparents+[child] def __str__(self)""" brief description of factor using in tracing""if self parentsreturn " ({self child}|{',join(str(pfor in self parents)})version june |
18,894 |         else:
            return f"P({self.child})"
    __repr__ = __str__

The simplest CPD is a constant, which has probability 1 when the child has the value specified.

probFactors.py — (continued)

class ConstantCPD(CPD):
    def __init__(self, variable, value):
        CPD.__init__(self, variable, [])
        self.value = value
    def get_value(self, assignment):
        return 1 if self.value == assignment[self.child] else 0

Logistic Regression

A logistic regression CPD, for Boolean variable X, represents P(X=True | Y1 ... Yk) using k+1 real-valued weights, so

    P(X=True | Y1 ... Yk) = sigmoid(w0 + sum_i wi * Yi)

where, for Boolean Yi, True is represented as 1 and False as 0.

probFactors.py — (continued)

from learnLinear import sigmoid, logit

class LogisticRegression(CPD):
    def __init__(self, child, parents, weights):
        """A logistic regression representation of a conditional probability.
        child is the Boolean (or 0/1) variable whose CPD is being defined
        parents is the list of parents
        weights is list of parameters, such that weights[i+1] is the weight for parents[i]
        """
        assert len(weights) == 1 + len(parents)
        CPD.__init__(self, child, parents)
        self.weights = weights

    def get_value(self, assignment):
        assert self.can_evaluate(assignment)
        prob = sigmoid(self.weights[0]
                       + sum(self.weights[i+1] * assignment[self.parents[i]]
                             for i in range(len(self.parents))))
        if assignment[self.child]:   # child is true
            return prob
        else:
            return (1 - prob)
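As a small check of the formula above, the following self-contained computation (with its own sigmoid and made-up weights, so it does not depend on the aipython classes) shows what get_value would return for a logistic regression CPD with weights [-1.0, 2.0, 1.0] and two parents:

import math

def sigmoid_check(z):
    return 1 / (1 + math.exp(-z))

weights = [-1.0, 2.0, 1.0]           # w0 (bias), w1 for Y1, w2 for Y2
y1, y2 = 1, 0                        # Y1 is true, Y2 is false
p_true = sigmoid_check(weights[0] + weights[1]*y1 + weights[2]*y2)
print(p_true)                        # sigmoid(1.0) = 0.731..., i.e. P(X=True | Y1=1, Y2=0)
print(1 - p_true)                    # 0.268..., returned when the assignment has X false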
18,895 | Noisy-or

A noisy-or, for Boolean variable X with Boolean parents Y1 ... Yk, is parametrized by k+1 parameters p0, p1, ..., pk, where each 0 <= pi <= 1. The semantics is defined as though there are hidden variables Z0, Z1, ..., Zk, where P(Z0) = p0 and P(Zi | Yi) = pi for i >= 1, and where X is true if and only if Z0 ∨ Z1 ∨ ... ∨ Zk (where ∨ is "or"). Thus X is false if all of the Zi are false. Intuitively, Z0 accounts for the probability of X when all Yi are false, and each Zi is a noisy (probabilistic) measure that Yi makes X true; X only needs one Zi to make it true.

probFactors.py — (continued)

class NoisyOR(CPD):
    def __init__(self, child, parents, weights):
        """A noisy-or representation of a conditional probability.
        child is the Boolean (or 0/1) child variable whose CPD is being defined
        parents is the list of Boolean (or 0/1) parents
        weights is list of parameters, such that weights[i+1] is the weight for parents[i]
        """
        assert len(weights) == 1 + len(parents)
        CPD.__init__(self, child, parents)
        self.weights = weights

    def get_value(self, assignment):
        assert self.can_evaluate(assignment)
        probfalse = (1 - self.weights[0]) * math.prod(1 - self.weights[i+1]
                                                      for i in range(len(self.parents))
                                                      if assignment[self.parents[i]])
        if assignment[self.child]:
            return 1 - probfalse
        else:
            return probfalse

Tabular Factors

A tabular factor is a factor that represents each assignment of values to variables separately. It is represented by a Python array (or a Python dict). If the variables are V1, V2, ..., Vk, the value of f(V1=v1, V2=v2, ..., Vk=vk) is stored in f[v1][v2]...[vk]. If the domain of Vi is [0, ..., ni-1] this can be represented as an array; otherwise we can use a dictionary. Python is nice in that it doesn't care whether an array or dict is used, except when enumerating the values: enumerating a dict gives the keys (the variables) but enumerating an array gives the values. So we have to be careful not to do this.

probFactors.py — (continued)
18,896 | reasoning under uncertainty from functools import reduce class tabfactor(factor) def __init__(selfvariablesvalues)factor __init__(selfvariablesself values values def get_value(selfassignment)return self get_val_rec(self valuesself variablesassignment def get_val_rec(selfvaluevariablesassignment)if variables =[]return value elsereturn self get_val_rec(value[assignment[variables[ ]]]variables[ :],assignmentprob is factor that represents conditional probability by enumerating all of the values probfactors py -(continued class prob(cpd,tabfactor)""" factor defined by conditional probability table""def __init__(self,var,pars,cpt)"""creates factor from conditional probability tablecpt the cpt values are assumed to be for the ordering par+[var""tabfactor __init__(self,pars+[var],cptself child var self parents pars graphical models graphical model consists of set of variables and set of factors belief network is graphical model where all of the factors represent conditional probabilities there are some operations (such as pruning variableswhich are applicable to belief networksbut are not applicable to more general models at the momentwe will treat them as the same probgraphicalmodels py -graphical models and belief networks from display import displayable from probfactors import cpd import matplotlib pyplot as plt class graphicalmodel(displayable)"""the class of graphical models graphical model consists of titlea set of variables and set of factors version june |
18,897 | vars is set of variables factors is set of factors ""def __init__(selftitlevariables=nonefactors=none)self title title self variables variables self factors factors belief network (also known as bayesian networkis graphical model where all of the factors are conditional probabilitiesand every variable has conditional probability of it given its parents this only checks the first conditionand builds some useful data structures probgraphicalmodels py -(continued class beliefnetwork(graphicalmodel)"""the class of belief networks "" def __init__(selftitlevariablesfactors)"""vars is set of variables factors is set of factors all of the factors are instances of cpd ( prob""graphicalmodel __init__(selftitlevariablesfactorsassert all(isinstance( ,cpdfor in factorsself var cpt { child: for in factorsself var parents { child: parents for in factorsself children { :[for in self variablesfor in self var parentsfor par in self var parents[ ]self children[parappend(vself topological_sort_saved none the following creates topological sort of the nodeswhere the parents of node come before the node in the resulting order this is based on kahn' algorithm from probgraphicalmodels py -(continued def topological_sort(self)"""creates topological ordering of variables such that the parents of node are before the node ""if self topological_sort_savedreturn self topological_sort_saved next_vars { for in self var parents if not self var parents[nself display( ,'topological_sortnext_vars',next_varstop_order=[while next_varsvar next_vars pop(version june |
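            # Kahn's algorithm: next_vars holds the variables all of whose parents
            # are already in top_order; pop one, append it to the ordering, and then
            # add any children whose parents are now all ordered.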
18,898 | reasoning under uncertainty self display( ,'select variable',vartop_order append(varnext_vars |{ch for ch in self children[varif all( in top_order for in self var parents[ch])self display( ,'var_with_no_parents_left',next_varsself display( ,"top_order",top_orderassert set(top_order)==set(self var parents),(top_order,self var parentsself topologicalsort_saved=top_order return top_order the show method uses matplotlib to show the graphical structure of belief network probgraphicalmodels py -(continued def show(self)plt ion(interactive ax plt figure(gca(ax set_axis_off(plt title(self titlebbox dict(boxstyle="round ,pad= ,rounding_size= "for var in reversed(self topological_sort())if self var parents[var]for par in self var parents[var]ax annotate(var namepar positionxytext=var positionarrowprops={'arrowstyle':'<-'},bbox=bboxha='center'elsex, var position plt text( , ,var name,bbox=bbox,ha='center'example belief networks chain of variables the first example belief network is simple chain - - - please do not change thisas it is the example used for testing probgraphicalmodels py -(continued from probvariables import variable from probfactors import problogisticregressionnoisyor boolean [falsetruea variable(" "booleanposition=( , ) variable(" "booleanposition=( , ) variable(" "booleanposition=( , ) variable(" "booleanposition=( , ) f_a prob( ,[],[ , ]version june |
18,899 | report-of-leaving tamper fire alarm smoke leaving report figure the report-of-leaving belief network f_b prob( ,[ ],[[ , ],[ , ]]f_c prob( ,[ ],[[ , ],[ , ]]f_d prob( ,[ ],[[ , ],[ , ]] bn_ ch beliefnetwork(" -chain"{ , , , }{f_a,f_b,f_c,f_d}report-of-leaving example the second belief networkbn_reportis example of poole and mackworth [ (in figure of this document probgraphicalmodels py -(continued belief network report-of-leaving example (example shown in figure of poole and mackworthartificial intelligence alarm variable("alarm"booleanposition=( , )fire variable("fire"booleanposition=( , )leaving variable("leaving"booleanposition=( , )report variable("report"booleanposition=( , )smoke variable("smoke"booleanposition=( , )tamper variable("tamper"booleanposition=( , ) version june |