Graph Algorithms

A Simple Maximum-Flow Algorithm

A first attempt to solve the problem proceeds in stages. We start with our graph, G, and construct a flow graph Gf. Gf tells the flow that has been attained at any stage in the algorithm. Initially all edges in Gf have no flow, and we hope that when the algorithm terminates, Gf contains a maximum flow. We also construct a graph, Gr, called the residual graph. Gr tells, for each edge, how much more flow can be added. We can calculate this by subtracting the current flow from the capacity for each edge. An edge in Gr is known as a residual edge. At each stage, we find a path in Gr from s to t. This path is known as an augmenting path. The minimum edge on this path is the amount of flow that can be added to every edge on the path. We do this by adjusting Gf and recomputing Gr. When we find no path from s to t in Gr, we terminate. This algorithm is nondeterministic, in that we are free to choose any path from s to t; obviously some choices are better than others, and we will address this issue later. We will run this algorithm on our example. The graphs below are G, Gf, Gr, respectively. Keep in mind that there is a slight flaw in this algorithm. The initial configuration is shown in the figure. There are many paths from s to t in the residual graph. Suppose we select s, b, d, t. Then we can send two units of flow through every edge on this path. We will adopt the convention that once we have filled (saturated) an edge, it is removed from the residual graph. We then obtain the next figure. Next, we might select the path s, a, c, t, which also allows two units of flow. Making the required adjustments gives the graphs in the following figure. The only path left to select is s, a, d, t, which allows one unit of flow. The resulting graphs are shown in the next figure. The algorithm terminates at this point, because t is unreachable from s. The resulting flow of 5 happens to be the maximum. To see what the problem is, suppose that with our initial graph, we chose the path s, a, d, t. This path allows three units of flow and thus seems to be a good choice. The result of this choice, however, leaves only one path from s to t in the residual graph; it allows one more unit of flow, and thus, our algorithm has

Figure: Initial stages of the graph, flow graph, and residual graph.
Figure: G, Gf, Gr after two units of flow added along s, b, d, t.
Figure: G, Gf, Gr after two units of flow added along s, a, c, t.
Figure: G, Gf, Gr after one unit of flow added along s, a, d, t (algorithm terminates).
Figure: G, Gf, Gr if the initial action is to add three units of flow along s, a, d, t; the algorithm terminates after one more step with a suboptimal solution.

failed to find an optimal solution. This is an example of a greedy algorithm that does not work. The figure shows why the algorithm fails. In order to make this algorithm work, we need to allow the algorithm to change its mind. To do this, for every edge (v, w) with flow f(v, w) in the flow graph, we will add an edge in the residual graph (w, v) of capacity f(v, w). In effect, we are allowing the algorithm to undo its decisions by sending flow back in the opposite direction. This is best seen by example. Starting from our original graph and selecting the augmenting path s, a, d, t, we obtain the graphs in the next figure. Notice that in the residual graph, there are edges in both directions between a and d. Either one more unit of flow can be pushed from a to d, or up to three units can be pushed back; that is, we can undo flow. Now the algorithm finds the augmenting path s, b, d, a, c, t, of flow 2. By pushing two units of flow from d to a, the algorithm takes two units of flow away from the edge (a, d) and is essentially changing its mind. The figure shows the new graphs.

Figure: Graphs after three units of flow added along s, a, d, t using the correct algorithm.
Figure: Graphs after two units of flow added along s, b, d, a, c, t using the correct algorithm.

There is no augmenting path in this graph, so the algorithm terminates. Note that the same result would occur if, at the earlier stage, the augmenting path s, a, c, t was chosen (which allows one unit of flow), because then a subsequent augmenting path could be found. It is easy to see that if the algorithm terminates, then it must terminate with a maximum flow. Termination implies that there is no path from s to t in the residual graph. So cut the residual graph, putting the vertices reachable from s on one side and the unreachables (which include t) on the other side. The figure shows the cut. Clearly any edges in the original graph G that cross the cut must be saturated; otherwise, there would be residual flow remaining on one of the edges, which would then imply an edge that crosses the cut (in the wrong, disallowed direction) in Gr. But that means that the flow in G is exactly equal to the capacity of a cut in G; hence, we have a maximum flow. If the edge costs in the graph are integers, then the algorithm must terminate; each augmentation adds a unit of flow, so we eventually reach the maximum flow, though there

Figure: The vertices reachable from s in the residual graph form one side of the cut; the unreachables form the other side of the cut.
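The corrected algorithm, with residual back edges, can be sketched as follows. This is a minimal sketch, not the book's code: it uses an adjacency-matrix capacity representation, finds augmenting paths by breadth-first search, and the example capacities are hypothetical values chosen to reproduce the flows quoted above (the book's figure is not reproduced here).

```cpp
#include <cassert>
#include <climits>
#include <queue>
#include <vector>
using namespace std;

// Augmenting-path maximum flow as described above. cap[u][v] is the
// residual capacity: pushing f units along (u, v) subtracts f from
// cap[u][v] and adds f to cap[v][u], which is exactly the residual
// back edge that lets the algorithm "change its mind". Paths are
// found by BFS, i.e., the fewest-edges rule discussed later.
int maxFlow(vector<vector<int>> cap, int s, int t) {
    int n = cap.size(), total = 0;
    for (;;) {
        vector<int> parent(n, -1);      // BFS tree for the augmenting path
        parent[s] = s;
        queue<int> q;
        q.push(s);
        while (!q.empty() && parent[t] == -1) {
            int u = q.front(); q.pop();
            for (int v = 0; v < n; ++v)
                if (parent[v] == -1 && cap[u][v] > 0) {
                    parent[v] = u;
                    q.push(v);
                }
        }
        if (parent[t] == -1) return total;   // t unreachable: maximum flow
        int f = INT_MAX;                     // minimum residual edge on the path
        for (int v = t; v != s; v = parent[v])
            f = min(f, cap[parent[v]][v]);
        for (int v = t; v != s; v = parent[v]) {
            cap[parent[v]][v] -= f;
            cap[v][parent[v]] += f;          // add the "undo" edge
        }
        total += f;
    }
}

// Hypothetical capacities (s,a,b,c,d,t = 0..5) chosen so that
// s,b,d,t and s,a,c,t each carry 2 units and s,a,d,t carries 1 more,
// for a maximum flow of 5, matching the narrative.
vector<vector<int>> exampleCapacities() {
    vector<vector<int>> cap(6, vector<int>(6, 0));
    cap[0][1] = 4; cap[0][2] = 2;       // s->a, s->b
    cap[1][3] = 2; cap[1][4] = 3;       // a->c, a->d
    cap[2][4] = 2;                      // b->d
    cap[3][5] = 2; cap[4][5] = 3;       // c->t, d->t
    return cap;
}
```

Note that `cap` is taken by value so the caller's capacities are not destroyed by the residual updates.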
Figure: The classic bad case for augmenting.

is no guarantee that this will be efficient. In particular, if the capacities are all integers and the maximum flow is f, then, since each augmenting path increases the flow value by at least 1, f stages suffice, and the total running time is O(f · |E|), since an augmenting path can be found in O(|E|) time by an unweighted shortest-path algorithm. The classic example of why this is a bad running time is shown by the graph in the figure. The maximum flow is seen by inspection to be 2,000,000 by sending 1,000,000 down each side. Random augmentations could continually augment along a path that includes the edge connecting a and b. If this were to occur repeatedly, 2,000,000 augmentations would be required, when we could get by with only 2. A simple method to get around this problem is always to choose the augmenting path that allows the largest increase in flow. Finding such a path is similar to solving a weighted shortest-path problem, and a single-line modification to Dijkstra's algorithm will do the trick. If capmax is the maximum edge capacity, then one can show that O(|E| log capmax) augmentations will suffice to find the maximum flow. In this case, since O(|E| log |V|) time is used for each calculation of an augmenting path, a total bound of O(|E|^2 log |V| log capmax) is obtained. If the capacities are all small integers, this reduces to O(|E|^2 log |V|). Another way to choose augmenting paths is always to take the path with the least number of edges, with the plausible expectation that by choosing a path in this manner, it is less likely that a small flow-restricting edge will turn up on the path. With this rule, each augmenting step computes the shortest unweighted path from s to t in the residual graph, so assume that each vertex in the graph maintains dv, representing the shortest-path distance from s to v in the residual graph. Each augmenting step can add new edges into the residual graph, but it is clear that no dv can decrease, because an edge is added in the opposite direction of an existing shortest path. Each augmenting step saturates at least one edge. Suppose edge (u, v) is saturated; at that point, u had distance du and v had distance dv = du + 1. Then (u, v) was removed from
the residual graph, and edge (v, u) was added. (u, v) cannot reappear in the residual graph again, unless and until (v, u) appears in a future augmenting path. But if it does, then the distance to u at that point must be dv + 1, which would be 2 higher than at the time (u, v) was previously removed. This means that each time (u, v) reappears, u's distance goes up by 2, so any edge can reappear at most |V|/2 times. Each augmentation causes some edge to reappear, so the number of augmentations is O(|E||V|). Each step takes O(|E|), due to the unweighted shortest-path calculation, yielding an O(|E|^2 |V|) bound on the running time. Further data structure improvements are possible to this algorithm, and there are several, more complicated, algorithms. A long history of improved bounds has lowered the current best-known bound for this problem to O(|E||V|). There are also a host of very good bounds for special cases. For instance, O(|E||V|^(1/2)) time finds a maximum flow in a graph having the property that all vertices except the source and sink have either a single incoming edge of capacity 1 or a single outgoing edge of capacity 1. These graphs occur in many applications. The analyses required to produce these bounds are rather intricate, and it is not clear how the worst-case results relate to the running times encountered in practice. A related, even more difficult problem is the min-cost flow problem. Each edge has not only a capacity but also a cost per unit of flow. The problem is to find, among all maximum flows, the one flow of minimum cost. Both of these problems are being actively researched.

Minimum Spanning Tree

The next problem we will consider is that of finding a minimum spanning tree in an undirected graph. The problem makes sense for directed graphs but appears to be more difficult. Informally, a minimum spanning tree of an undirected graph G is a tree formed from graph edges that connects all the vertices of G at lowest total cost. A minimum spanning tree exists if and only if G is connected. Although a robust algorithm should report the case that G is unconnected, we will assume that G is connected and leave the issue of robustness as an exercise to the reader. In the figure, the second graph is a minimum spanning tree of the first (it happens to be unique, but this is unusual). Notice that the number of edges in the minimum spanning tree is |V| - 1. The minimum spanning tree is a tree because it is acyclic, it is spanning because it covers every vertex, and it is minimum for the obvious reason. If we need to wire a house with a minimum of cable (assuming no other electrical constraints), then a minimum spanning tree problem needs to be solved. For any spanning tree, T, if an edge, e, that is not in T is added, a cycle is created. The removal of any edge on the cycle reinstates the spanning tree property. The cost of the spanning tree is lowered if e has lower cost than the edge that was removed. If, as a spanning tree is created, the edge that is added is the one of minimum cost that avoids creation of a cycle, then the cost of the resulting spanning tree cannot be improved, because any replacement edge would have cost at least as much as an edge already in the spanning tree. This shows that greed works for the minimum spanning tree problem. The two algorithms we present differ in how a minimum edge is selected.
Figure: A graph G and its minimum spanning tree.

Prim's Algorithm

One way to compute a minimum spanning tree is to grow the tree in successive stages. In each stage, one node is picked as the root, and we add an edge, and thus an associated vertex, to the tree. At any point in the algorithm, we can see that we have a set of vertices that have already been included in the tree; the rest of the vertices have not. The algorithm then finds, at each stage, a new vertex to add to the tree by choosing the edge (u, v) such that the cost of (u, v) is the smallest among all edges where u is in the tree and v is not. The figure shows how this algorithm would build the minimum spanning tree, starting from v1. Initially, v1 is in the tree as a root with no edges. Each step adds one edge and one vertex to the tree. We can see that Prim's algorithm is essentially identical to Dijkstra's algorithm for shortest paths. As before, for each vertex we keep values dv and pv and an indication of whether it is known or unknown. dv is the weight of the shortest edge connecting v to a known vertex, and pv, as before, is the last vertex to cause a change in dv. The rest of the algorithm is exactly the same, with the exception that since the definition of dv is different, so is the update rule. For this problem, the update rule is even simpler than before: After a vertex, v, is selected, for each unknown w adjacent to v, dw = min(dw, c(w, v)). The initial configuration of the table is shown in the figure. v1 is selected, and v2, v3, and v4 are updated. The table resulting from this is shown in the next figure. The next vertex
Figure: Prim's algorithm after each stage.
Figure: Initial configuration of the table used in Prim's algorithm (columns: known, dv, pv).
Figure: The table after v1 is declared known.
Figure: The table after v4 is declared known.

selected is v4. Every vertex is adjacent to v4. v1 is not examined, because it is known. v2 is unchanged, because it has dv = 2 and the edge cost from v4 to v2 is 3; all the rest are updated. The figure shows the resulting table. The next vertex chosen is v2 (arbitrarily breaking a tie). This does not affect any distances. Then v3 is chosen, which affects the distance in v6, producing the next table. The following figure results from the selection of v7, which forces v5 and v6 to be adjusted. v6 and then v5 are selected, completing the algorithm.

Figure: The table after v2 and then v3 are declared known.
Figure: The table after v7 is declared known.
Figure: The table after v6 and v5 are selected (Prim's algorithm terminates).

The final table is shown in the figure. The edges in the spanning tree can be read from the table: (v2, v1), (v3, v4), (v4, v1), (v5, v7), (v6, v7), (v7, v4). The total cost is 16. The entire implementation of this algorithm is virtually identical to that of Dijkstra's algorithm, and everything that was said about the analysis of Dijkstra's algorithm applies here. Be aware that Prim's algorithm runs on undirected graphs, so when coding it, remember to put every edge in two adjacency lists. The running time is O(|V|^2) without heaps, which is optimal for dense graphs, and O(|E| log |V|) using binary heaps, which is good for sparse graphs.

Kruskal's Algorithm

A second greedy strategy is to continually select the edges in order of smallest weight and accept an edge if it does not cause a cycle. The action of the algorithm on the graph in the preceding example is shown in the figure.

    Edge        Weight    Action
    (v1, v4)    1         Accepted
    (v6, v7)    1         Accepted
    (v1, v2)    2         Accepted
    (v3, v4)    2         Accepted
    (v2, v4)    3         Rejected
    (v1, v3)    4         Rejected
    (v4, v7)    4         Accepted
    (v3, v6)    5         Rejected
    (v5, v7)    6         Accepted

Figure: Action of Kruskal's algorithm on G.
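Prim's algorithm as just described can be sketched with a binary heap. This is a minimal, lazy-deletion version (stale heap entries are simply skipped when popped) rather than a decrease-key formulation, and the test graph is a hypothetical four-vertex example, not the figure's graph:

```cpp
#include <cassert>
#include <queue>
#include <vector>
using namespace std;

// Prim's algorithm with a binary heap, O(|E| log |V|).
// adj[v] holds (neighbor, cost) pairs; because the graph is
// undirected, every edge appears in two adjacency lists, as the
// text warns. Returns the total cost of the minimum spanning tree.
int primTotalCost(const vector<vector<pair<int,int>>>& adj) {
    int n = adj.size(), cost = 0, added = 0;
    vector<bool> known(n, false);
    // Heap entries are (dv, v); stale entries for already-known
    // vertices are discarded when they reach the top.
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
    pq.push({0, 0});                        // start the tree at vertex 0
    while (!pq.empty() && added < n) {
        auto [d, v] = pq.top(); pq.pop();
        if (known[v]) continue;             // stale entry
        known[v] = true;
        cost += d;                          // d is the edge that attached v
        ++added;
        for (auto [w, c] : adj[v])
            if (!known[w]) pq.push({c, w}); // update rule: dw = min(dw, c(w,v))
    }
    return cost;
}

// Hypothetical 4-vertex example: edges 0-1 (1), 1-2 (2), 2-3 (1),
// 3-0 (4), 0-2 (3); the minimum spanning tree costs 1 + 1 + 2 = 4.
vector<vector<pair<int,int>>> exampleGraph() {
    vector<vector<pair<int,int>>> adj(4);
    auto addEdge = [&](int u, int v, int c) {
        adj[u].push_back({v, c});           // each undirected edge goes in
        adj[v].push_back({u, c});           // two adjacency lists
    };
    addEdge(0, 1, 1); addEdge(1, 2, 2); addEdge(2, 3, 1);
    addEdge(3, 0, 4); addEdge(0, 2, 3);
    return adj;
}
```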
Figure: Kruskal's algorithm after each stage.

Formally, Kruskal's algorithm maintains a forest, a collection of trees. Initially, there are |V| single-node trees. Adding an edge merges two trees into one. When the algorithm terminates, there is only one tree, and this is the minimum spanning tree. The figure shows the order in which edges are added to the forest. The algorithm terminates when enough edges are accepted. It turns out to be simple to decide whether edge (u, v) should be accepted or rejected. The appropriate data structure is the union/find algorithm from Chapter 8. The invariant we will use is that at any point in the process, two vertices belong to the same set if and only if they are connected in the current spanning forest. Thus, each vertex is initially in its own set. If u and v are in the same set, the edge is rejected, because since they are already connected, adding (u, v) would form a cycle. Otherwise, the edge is accepted, and a union is performed on the two sets containing u and v. It is easy to see that this maintains the set invariant, because once the edge (u, v) is added to the spanning forest, if x was connected to u and y was connected to v, then x and y must now be connected, and thus belong in the same set. The edges could be sorted to facilitate the selection, but building a heap in linear time is a much better idea. Then deleteMins give the edges to be tested in order. Typically, only a small fraction of the edges need to be tested before the algorithm can terminate, although it is always possible that all the edges must be tried. For instance, if there was an extra vertex v8 and edge (v5, v8) of cost 100, all the edges would have to be examined. Function kruskal in the figure finds a minimum spanning tree. The worst-case running time of this algorithm is O(|E| log |E|), which is dominated by the heap operations. Notice that since |E| = O(|V|^2), this running time is
    vector<Edge> kruskal( vector<Edge> edges, int numVertices )
    {
        DisjSets ds{ numVertices };
        priority_queue pq{ edges };
        vector<Edge> mst;

        while( mst.size( ) != numVertices - 1 )
        {
            Edge e = pq.pop( );     // Edge e = (u, v)
            SetType uset = ds.find( e.getu( ) );
            SetType vset = ds.find( e.getv( ) );

            if( uset != vset )
            {
                // Accept the edge
                mst.push_back( e );
                ds.unionSets( uset, vset );
            }
        }

        return mst;
    }

Figure: Pseudocode for Kruskal's algorithm.

actually O(|E| log |V|). In practice, the algorithm is much faster than this time bound would indicate.

Applications of Depth-First Search

Depth-first search is a generalization of preorder traversal. Starting at some vertex, v, we process v and then recursively traverse all vertices adjacent to v. If this process is performed on a tree, then all tree vertices are systematically visited in a total of O(|E|) time, since |E| = Θ(|V|). If we perform this process on an arbitrary graph, we need to be careful to avoid cycles. To do this, when we visit a vertex, v, we mark it visited, since now we have been there, and recursively call depth-first search on all adjacent vertices that are not already marked. We implicitly assume that for undirected graphs every edge (v, w) appears twice in the adjacency lists: once as (v, w) and once as (w, v). The procedure in the figure performs a depth-first search (and does absolutely nothing else) and is a template for the general style. For each vertex, the data member visited is initialized to false. By recursively calling the procedures only on nodes that have not been visited, we guarantee that we do not loop indefinitely. If the graph is undirected and not connected, or directed and not strongly connected, this strategy might fail to visit some nodes. We then search for an unmarked node,
    void Graph::dfs( Vertex v )
    {
        v.visited = true;
        for each Vertex w adjacent to v
            if( !w.visited )
                dfs( w );
    }

Figure: Template for depth-first search (pseudocode).

apply a depth-first traversal there, and continue this process until there are no unmarked nodes. Because this strategy guarantees that each edge is encountered only once, the total time to perform the traversal is O(|E| + |V|), as long as adjacency lists are used.

Undirected Graphs

An undirected graph is connected if and only if a depth-first search starting from any node visits every node. Because this test is so easy to apply, we will assume that the graphs we deal with are connected. If they are not, then we can find all the connected components and apply our algorithm on each of these in turn. As an example of depth-first search, suppose in the graph of the figure we start at vertex A. Then we mark A as visited and call dfs(B) recursively. dfs(B) marks B as visited and calls dfs(C) recursively. dfs(C) marks C as visited and calls dfs(D) recursively. dfs(D) sees both A and B, but both of these are marked, so no recursive calls are made. dfs(D) also sees that C is adjacent but marked, so no recursive call is made there, and dfs(D) returns back to dfs(C). dfs(C) sees B adjacent, ignores it, finds a previously unseen vertex E adjacent, and thus calls dfs(E). dfs(E) marks E, ignores A and C, and returns to dfs(C). dfs(C) returns to dfs(B). dfs(B) ignores both A and E and returns. dfs(A) ignores both D and E and returns. (We have actually touched every edge twice, once as (v, w) and again as (w, v), but this is really once per adjacency list entry.) We graphically illustrate these steps with a depth-first spanning tree. The root of the tree is A, the first vertex visited. Each edge (v, w) in the graph is present in the tree. If, when we process (v, w), we find that w is unmarked, or if, when we process (w, v), we find that v is unmarked, we indicate this with a tree edge. If, when we process (v, w), we find that w is already marked, and when processing (w, v), we find that v is already marked, we draw a dashed line, which we will call a back edge, to indicate that this "edge" is not really part of the tree. The depth-first search of the graph in the figure is shown in the next figure. The tree will simulate the traversal we performed. A preorder numbering of the tree, using only tree edges, tells us the order in which the vertices were marked. If the graph is not connected, then processing all nodes (and edges) requires several calls to dfs, and each generates a tree. This entire collection is a depth-first spanning forest. An efficient way of implementing this is to begin the depth-first search at v1. If we need to restart the depth-first search, we examine the sequence vk, vk+1, ... for an unmarked vertex, where vk-1 is the vertex where the last depth-first search was started. This guarantees that throughout the algorithm, only O(|V|) is spent looking for vertices where new depth-first search trees can be started.
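The template above, extended with the restart loop just described, can be used to count connected components. This is a minimal sketch with an adjacency-list graph; the component-counting wrapper is our addition, not the book's:

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Depth-first search template from the text, applied to counting
// connected components: each time the scan must restart at an
// unmarked vertex, one new component (one new tree in the
// depth-first spanning forest) has been discovered.
void dfs(int v, const vector<vector<int>>& adj, vector<bool>& visited) {
    visited[v] = true;
    for (int w : adj[v])            // every undirected edge appears twice
        if (!visited[w])
            dfs(w, adj, visited);
}

int numComponents(const vector<vector<int>>& adj) {
    int n = adj.size(), components = 0;
    vector<bool> visited(n, false);
    for (int v = 0; v < n; ++v)     // scan v1, v2, ... for unmarked vertices
        if (!visited[v]) {
            ++components;
            dfs(v, adj, visited);
        }
    return components;
}
```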
Figure: An undirected graph.
Figure: Depth-first search of the previous graph.

Biconnectivity

A connected undirected graph is biconnected if there are no vertices whose removal disconnects the rest of the graph. The graph above is biconnected. If the nodes are computers and the edges are links, then if any computer goes down, network mail is
Figure: A graph with articulation points C and D.

unaffected, except, of course, at the down computer. Similarly, if a mass transit system is biconnected, users always have an alternate route should some terminal be disrupted. If a graph is not biconnected, the vertices whose removal would disconnect the graph are known as articulation points. These nodes are critical in many applications. The graph in the figure is not biconnected: C and D are articulation points. The removal of C would disconnect G, and the removal of D would disconnect E and F, from the rest of the graph. Depth-first search provides a linear-time algorithm to find all articulation points in a connected graph. First, starting at any vertex, we perform a depth-first search and number the nodes as they are visited. For each vertex, v, we call this preorder number num(v). Then, for every vertex, v, in the depth-first search spanning tree, we compute the lowest-numbered vertex, which we call low(v), that is reachable from v by taking zero or more tree edges and then possibly one back edge (in that order). The depth-first search tree in the figure shows the preorder number first, and then the lowest-numbered vertex reachable under the rule described above. The lowest-numbered vertex reachable by A, B, and C is vertex 1 (A), because they can all take tree edges to D and then one back edge back to A. We can efficiently compute low by performing a postorder traversal of the depth-first spanning tree. By the definition of low, low(v) is the minimum of

1. num(v)
2. the lowest num(w) among all back edges (v, w)
3. the lowest low(w) among all tree edges (v, w)

The first condition is the option of taking no edges, the second way is to choose no tree edges and a back edge, and the third way is to choose some tree edges and possibly
Figure: Depth-first tree for the previous graph, with num and low.

a back edge. This third method is succinctly described with a recursive call. Since we need to evaluate low for all the children of v before we can evaluate low(v), this is a postorder traversal. For any edge (v, w), we can tell whether it is a tree edge or a back edge merely by checking num(v) and num(w). Thus, it is easy to compute low(v): We merely scan down v's adjacency list, apply the proper rule, and keep track of the minimum. Doing all the computation takes O(|E| + |V|) time. All that is left to do is to use this information to find articulation points. The root is an articulation point if and only if it has more than one child, because if it has two children, removing the root disconnects nodes in different subtrees, and if it has only one child, removing the root merely disconnects the root. Any other vertex v is an articulation point if and only if v has some child w such that low(w) >= num(v). Notice that this condition is always satisfied at the root; hence the need for a special test. The if part of the proof is clear when we examine the articulation points that the algorithm determines, namely, C and D. D has a child E, and low(E) >= num(D), since both are 4. Thus, there is only one way for E to get to any node above D, and that is by going through D. Similarly, C is an articulation point, because low(G) >= num(C). To prove that this algorithm is correct, one must show that the only if part of the assertion is true (that is, this finds all articulation points). We leave this as an exercise. As a second example, we show (in the figure) the result of applying this algorithm on the same graph, starting the depth-first search at C.
Figure: Depth-first tree that results if the depth-first search starts at C.

We close by giving pseudocode to implement this algorithm. We will assume that Vertex contains the data members visited (initialized to false), num, low, and parent. We will also keep a (Graph) class variable called counter, which is initialized to 1, to assign the preorder traversal numbers, num. We also leave out the easily implemented test for the root. As we have already stated, this algorithm can be implemented by performing a preorder traversal to compute num and then a postorder traversal to compute low. A third traversal can be used to check which vertices satisfy the articulation point criteria. Performing three traversals, however, would be a waste. The first pass is shown in the figure. The second and third passes, which are postorder traversals, can be implemented by the code in the next figure. The last if statement handles a special case. If w is adjacent to

    /**
     * Assign num and compute parents.
     */
    void Graph::assignNum( Vertex v )
    {
        v.num = counter++;
        v.visited = true;
        for each Vertex w adjacent to v
            if( !w.visited )
            {
                w.parent = v;
                assignNum( w );
            }
    }

Figure: Routine to assign num to vertices (pseudocode).
    /**
     * Assign low; also check for articulation points.
     */
    void Graph::assignLow( Vertex v )
    {
        v.low = v.num;                          // Rule 1
        for each Vertex w adjacent to v
        {
            if( w.num > v.num )                 // Forward edge
            {
                assignLow( w );
                if( w.low >= v.num )
                    cout << v << " is an articulation point" << endl;
                v.low = min( v.low, w.low );    // Rule 3
            }
            else if( v.parent != w )            // Back edge
                v.low = min( v.low, w.num );    // Rule 2
        }
    }

Figure: Pseudocode to compute low and to test for articulation points (test for the root is omitted).

v, then the recursive call to w will find v adjacent to w. This is not a back edge, only an edge that has already been considered and needs to be ignored. Otherwise, the procedure computes the minimum of the various low and num entries, as specified by the algorithm. There is no rule that a traversal must be either preorder or postorder. It is possible to do processing both before and after the recursive calls. The procedure in the figure combines the two routines assignNum and assignLow in a straightforward manner to produce the procedure findArt.

Euler Circuits

Consider the three drawings in the figure. A popular puzzle is to reconstruct these figures using a pen, drawing each line exactly once. The pen may not be lifted from the paper while the drawing is being performed. As an extra challenge, make the pen finish at the same point at which it started. This puzzle has a surprisingly simple solution. Stop reading if you would like to try to solve it. The first figure can be drawn only if the starting point is the lower left- or right-hand corner, and it is not possible to finish at the starting point. The second figure is easily drawn with the finishing point the same as the starting point, but the third figure cannot be drawn at all within the parameters of the puzzle. We can convert this problem to a graph theory problem by assigning a vertex to each intersection. Then the edges can be assigned in the natural manner, as in the figure.
    void Graph::findArt( Vertex v )
    {
        v.visited = true;
        v.low = v.num = counter++;              // Rule 1
        for each Vertex w adjacent to v
        {
            if( !w.visited )                    // Forward edge
            {
                w.parent = v;
                findArt( w );
                if( w.low >= v.num )
                    cout << v << " is an articulation point" << endl;
                v.low = min( v.low, w.low );    // Rule 3
            }
            else if( v.parent != w )            // Back edge
                v.low = min( v.low, w.num );    // Rule 2
        }
    }

Figure: Testing for articulation points in one depth-first search (test for the root is omitted) (pseudocode).

Figure: Three drawings.
Figure: Conversion of puzzle to graph.
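A concrete, runnable version of findArt might look like the following sketch. Vertex objects are replaced by indices, articulation points are collected in a set instead of printed, the root test that the book omits is included (assuming a parent value of -1 marks the root), and the test graph is a hypothetical reconstruction with the same articulation structure as the example (C and D are the articulation points):

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>
using namespace std;

// One-pass articulation-point search, following findArt in the text:
// num is assigned in preorder, low is computed on the way back up,
// and a non-root vertex v is an articulation point when some child w
// has low(w) >= num(v). The root is an articulation point only if it
// has more than one child.
struct ArtFinder {
    vector<vector<int>> adj;
    vector<int> num, low, parent;
    set<int> artPoints;
    int counter = 1;

    explicit ArtFinder(vector<vector<int>> g)
        : adj(move(g)), num(adj.size(), 0), low(adj.size(), 0),
          parent(adj.size(), -1) {}

    void findArt(int v) {
        low[v] = num[v] = counter++;            // Rule 1
        int children = 0;
        for (int w : adj[v]) {
            if (num[w] == 0) {                  // forward (tree) edge
                ++children;
                parent[w] = v;
                findArt(w);
                if (parent[v] != -1 && low[w] >= num[v])
                    artPoints.insert(v);        // non-root test from the text
                low[v] = min(low[v], low[w]);   // Rule 3
            } else if (parent[v] != w) {        // back edge
                low[v] = min(low[v], num[w]);   // Rule 2
            }
        }
        if (parent[v] == -1 && children > 1)    // special root test
            artPoints.insert(v);
    }
};

set<int> articulationPoints(vector<vector<int>> g) {
    ArtFinder f(move(g));
    f.findArt(0);       // assumes the graph is connected; root is vertex 0
    return f.artPoints;
}

// Hypothetical reconstruction: A..G = 0..6; cycle A-B-C-D-A, G hangs
// off C, and E, F form a triangle with D, so removing C disconnects
// G and removing D disconnects E and F.
vector<vector<int>> exampleArtGraph() {
    vector<vector<int>> adj(7);
    auto addEdge = [&](int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u);
    };
    addEdge(0, 1); addEdge(1, 2); addEdge(2, 3); addEdge(3, 0);
    addEdge(2, 6);
    addEdge(3, 4); addEdge(4, 5); addEdge(5, 3);
    return adj;
}
```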
After this conversion is performed, we must find a path in the graph that visits every edge exactly once. If we are to solve the "extra challenge," then we must find a cycle that visits every edge exactly once. This graph problem was solved in 1736 by Euler and marked the beginning of graph theory. The problem is thus commonly referred to as an Euler path (sometimes Euler tour) or Euler circuit problem, depending on the specific problem statement. The Euler tour and Euler circuit problems, though slightly different, have the same basic solution. Thus, we will consider the Euler circuit problem in this section. The first observation that can be made is that an Euler circuit, which must end on its starting vertex, is possible only if the graph is connected and each vertex has an even degree (number of edges). This is because, on the Euler circuit, a vertex is entered and then left. If any vertex v has odd degree, then eventually we will reach the point where only one edge into v is unvisited, and taking it will strand us at v. If exactly two vertices have odd degree, an Euler tour, which must visit every edge but need not return to its starting vertex, is still possible if we start at one of the odd-degree vertices and finish at the other. If more than two vertices have odd degree, then an Euler tour is not possible. The observations of the preceding paragraph provide us with a necessary condition for the existence of an Euler circuit. It does not, however, tell us that all connected graphs that satisfy this property must have an Euler circuit, nor does it give us guidance on how to find one. It turns out that the necessary condition is also sufficient. That is, any connected graph, all of whose vertices have even degree, must have an Euler circuit. Furthermore, a circuit can be found in linear time. We can assume that we know that an Euler circuit exists, since we can test the necessary and sufficient condition in linear time. Then the basic algorithm is to perform a depth-first search. There are a surprisingly large number of "obvious" solutions that do not work. Some of these are presented in the exercises. The main problem is that we might visit a portion of the graph and return to the starting point prematurely. If all the edges coming out of the start vertex have been used up, then part of the graph is untraversed. The easiest way to fix this is to find the first vertex on this path that has an untraversed edge and perform another depth-first search. This will give another circuit, which can be spliced into the original. This is continued until all edges have been traversed. As an example, consider the graph in the figure. It is easily seen that this graph has an Euler circuit. Suppose we start at vertex 5 and traverse the circuit 5, 4, 10, 5. Then we are stuck, and most of the graph is still untraversed. The situation is shown in the figure. We then continue from vertex 4, which still has unexplored edges. A depth-first search might come up with the path 4, 1, 3, 7, 4, 11, 10, 7, 9, 3, 4. If we splice this path into the previous path of 5, 4, 10, 5, then we get a new path of 5, 4, 1, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5. The graph that remains after this is shown in the figure. Notice that in this graph, all the vertices must have even degree, so we are guaranteed to find a cycle to add. The remaining graph might not be connected, but this is not important. The next vertex on the path that has untraversed edges is vertex 3. A possible circuit would then be 3, 2, 8, 9, 6, 3. When spliced in, this gives the path 5, 4, 1, 3, 2, 8, 9, 6, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5. The graph that remains is in the figure. On this path, the next vertex with an untraversed edge is 9, and the algorithm finds the circuit 9, 12, 10, 9. When this is added to the
Figure: Graph for Euler circuit problem.
Figure: Graph remaining after 5, 4, 10, 5.
Figure: Graph after the path 5, 4, 1, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5.

current path, a circuit of 5, 4, 1, 3, 2, 8, 9, 12, 10, 9, 6, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5 is obtained. As all the edges are traversed, the algorithm terminates with an Euler circuit. To make this algorithm efficient, we must use appropriate data structures. We will sketch some of the ideas, leaving the implementation as an exercise. To make splicing simple, the path should be maintained as a linked list. To avoid repetitious scanning of adjacency lists, we must maintain, for each adjacency list, a pointer to the last edge scanned. When a path is spliced in, the search for a new vertex from which to perform the next depth-first search must begin at the start of the splice point. This guarantees that
23,721 | figure graph remaining after the path the total work performed on the vertex search phase is (| |during the entire life of the algorithm with the appropriate data structuresthe running time of the algorithm is (| | | very similar problem is to find simple cyclein an undirected graphthat visits every vertex this is known as the hamiltonian cycle problem although it seems almost identical to the euler circuit problemno efficient algorithm for it is known we shall see this problem again in section directed graphs using the same strategy as with undirected graphsdirected graphs can be traversed in linear timeusing depth-first search if the graph is not strongly connecteda depth-first search starting at some node might not visit all nodes in this casewe repeatedly perform depth-first searchesstarting at some unmarked nodeuntil all vertices have been visited as an exampleconsider the directed graph in figure we arbitrarily start the depth-first search at vertex this visits vertices bcadeand we then restart at some unvisited vertex arbitrarilywe start at hwhich visits and finallywe start at gwhich is the last vertex that needs to be visited the corresponding depth-first search tree is shown in figure the dashed arrows in the depth-first spanning forest are edges (vwfor which was already marked at the time of consideration in undirected graphsthese are always back edgesbutas we can seethere are three types of edges that do not lead to new vertices firstthere are back edgessuch as (aband (ihthere are also forward edgessuch as (cdand (ce)that lead from tree node to descendant finallythere are cross edgessuch as (fcand (gf)which connect two tree nodes that are not directly related depthfirst search forests are generally drawn with children and new trees added to the forest from left to right in depth-first search of directed graph drawn in this mannercross edges always go from right to left some algorithms that use depth-first search need to distinguish between the 
three types of nontree edges this is easy to check as the depth-first search is being performedand it is left as an exercise |
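The check alluded to above can be sketched with discovery and finishing times. The following is an illustrative fragment, not the book's code; the vertex naming 0..n-1 and the class interface are assumptions made for this sketch. An edge (v, w) is a tree edge if w is unvisited when the edge is explored, a back edge if w is still active (an ancestor on the recursion stack), a forward edge if w has finished and was discovered after v, and a cross edge otherwise.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Sketch: classify each edge of a directed graph as a tree, back, forward,
// or cross edge during depth-first search, using discovery/finish times.
class DfsClassifier {
  public:
    explicit DfsClassifier(int n) : adj(n), disc(n, -1), fin(n, -1) {}

    void addEdge(int v, int w) { adj[v].push_back(w); }

    void run() {
        for (int v = 0; v < (int)adj.size(); ++v)
            if (disc[v] == -1) dfs(v);          // restart for each new tree
    }

    std::string kind(int v, int w) const { return kinds.at({v, w}); }

  private:
    void dfs(int v) {
        disc[v] = timer++;
        for (int w : adj[v]) {
            if (disc[w] == -1) {                 // w unvisited: tree edge
                kinds[{v, w}] = "tree";
                dfs(w);
            } else if (fin[w] == -1) {           // w is an active ancestor
                kinds[{v, w}] = "back";
            } else if (disc[v] < disc[w]) {      // w is a finished descendant
                kinds[{v, w}] = "forward";
            } else {                             // w finished in another subtree
                kinds[{v, w}] = "cross";
            }
        }
        fin[v] = timer++;
    }

    std::vector<std::vector<int>> adj;
    std::vector<int> disc, fin;
    int timer = 0;
    std::map<std::pair<int, int>, std::string> kinds;
};
```

Note that the classification of forward versus cross edges depends only on the discovery times, since both kinds of edges point at vertices that have already finished.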
[Figure: A directed graph.]
[Figure: Depth-first search of the previous graph.]

One use of depth-first search is to test whether or not a directed graph is acyclic. The rule is that a directed graph is acyclic if and only if it has no back edges. (The graph above has back edges, and thus is not acyclic.) The reader may remember that a topological sort can also be used to determine whether a graph is acyclic. Another way to perform topological sorting is to assign the vertices topological numbers N, N-1, ..., 1 by a postorder traversal of the depth-first spanning forest. As long as the graph is acyclic, this ordering will be consistent.
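Both observations above can be combined in one pass. The following sketch (vertex names 0..n-1 are an assumption of this illustration, and this is not the book's code) detects back edges with a three-state marking and hands out the numbers N, N-1, ..., 1 at finish time, so that every edge (v, w) ends up with num[v] < num[w]:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch: a digraph is acyclic iff DFS finds no back edges; if acyclic,
// postorder numbering N, N-1, ..., 1 is a consistent topological numbering.
// Returns an empty vector if a back edge (hence a cycle) is detected.
std::vector<int> topologicalNumbers(const std::vector<std::vector<int>>& adj) {
    int n = adj.size();
    std::vector<int> state(n, 0);   // 0 = unvisited, 1 = active, 2 = finished
    std::vector<int> num(n, 0);
    int next = n;                   // the first vertex to finish gets N
    bool cyclic = false;

    std::function<void(int)> dfs = [&](int v) {
        state[v] = 1;
        for (int w : adj[v]) {
            if (state[w] == 0) dfs(w);
            else if (state[w] == 1) cyclic = true;  // back edge: w is active
        }
        state[v] = 2;
        num[v] = next--;            // assigned at finish time (postorder)
    };
    for (int v = 0; v < n; ++v)
        if (state[v] == 0) dfs(v);

    if (cyclic) return {};
    return num;
}
```

Listing the vertices in increasing order of their numbers then gives a topological order, since a vertex finishes only after everything reachable from it has finished.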
Finding Strong Components

By performing two depth-first searches, we can test whether a directed graph is strongly connected, and if it is not, we can actually produce the subsets of vertices that are strongly connected to themselves. This can also be done in only one depth-first search, but the method used here is much simpler to understand.

First, a depth-first search is performed on the input graph G. The vertices of G are numbered by a postorder traversal of the depth-first spanning forest, and then all edges in G are reversed, forming Gr. The graph in the figure below represents Gr for the graph G shown earlier; the vertices are shown with their numbers.

The algorithm is completed by performing a depth-first search on Gr, always starting a new depth-first search at the highest-numbered unvisited vertex. Thus, we begin the depth-first search of Gr at vertex G, which has the highest number. This leads nowhere, so the next search is started at H. This call visits I and J. The next call starts at B and visits A, C, and F. The next calls after this are dfs(D) and finally dfs(E). The resulting depth-first spanning forest is shown in the next figure.

Each of the trees (this is easier to see if you completely ignore all nontree edges) in this depth-first spanning forest forms a strongly connected component. Thus, for our example, the strongly connected components are {G}, {H, I, J}, {B, A, C, F}, {D}, and {E}.

To see why this algorithm works, first note that if two vertices v and w are in the same strongly connected component, then there are paths from v to w and from w to v in the original graph G, and hence also in Gr. Now, if two vertices v and w are not in the same depth-first spanning tree of Gr, clearly they cannot be in the same strongly connected component.

[Figure: Gr numbered by postorder traversal of G.]
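The two-pass procedure just described can be sketched as follows. This is an illustrative implementation under the assumption that vertices are named 0..n-1; instead of explicitly storing a number per vertex, it records the postorder of the first search in a vector and starts the second search from the highest-numbered (last-finished) unvisited vertex:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the two-pass strong-components algorithm described above:
// DFS G recording postorder, reverse all edges to form Gr, then DFS Gr
// from the highest-numbered unvisited vertex; each tree of the second
// search is one strongly connected component.
std::vector<std::vector<int>> strongComponents(const std::vector<std::vector<int>>& adj) {
    int n = adj.size();
    std::vector<std::vector<int>> radj(n);          // Gr: all edges reversed
    for (int v = 0; v < n; ++v)
        for (int w : adj[v]) radj[w].push_back(v);

    std::vector<int> order;                         // postorder of the first DFS
    std::vector<char> seen(n, 0);
    std::function<void(int)> dfs1 = [&](int v) {
        seen[v] = 1;
        for (int w : adj[v]) if (!seen[w]) dfs1(w);
        order.push_back(v);                         // "numbered" at finish time
    };
    for (int v = 0; v < n; ++v) if (!seen[v]) dfs1(v);

    std::vector<std::vector<int>> comps;
    std::fill(seen.begin(), seen.end(), 0);
    std::function<void(int)> dfs2 = [&](int v) {
        seen[v] = 1;
        comps.back().push_back(v);
        for (int w : radj[v]) if (!seen[w]) dfs2(w);
    };
    for (int i = n - 1; i >= 0; --i)                // highest number first
        if (!seen[order[i]]) { comps.push_back({}); dfs2(order[i]); }
    return comps;
}
```

Both passes are plain depth-first searches, so the whole algorithm runs in O(|E| + |V|) time.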
[Figure: Depth-first search of Gr; the strong components are {G}, {H, I, J}, {B, A, C, F}, {D}, and {E}.]

To prove that this algorithm works, we must show that if two vertices v and w are in the same depth-first spanning tree of Gr, there must be paths from v to w and from w to v. Equivalently, we can show that if x is the root of the depth-first spanning tree of Gr containing v, then there is a path from x to v and from v to x. Applying the same logic to w would then give a path from x to w and from w to x. These paths would imply paths from v to w and from w to v (going through x).

Since v is a descendant of x in Gr's depth-first spanning tree, there is a path from x to v in Gr, and thus a path from v to x in G. Furthermore, since x is the root, x has the higher postorder number from the first depth-first search. Therefore, during the first depth-first search, all the work processing v was completed before the work at x was completed. Since there is a path from v to x, it follows that v must be a descendant of x in the spanning tree for G; otherwise v would finish after x. This implies a path from x to v in G and completes the proof.

Introduction to NP-Completeness

In this chapter we have seen solutions to a wide variety of graph theory problems. All these problems have polynomial running times, and with the exception of the network flow problem, the running time is either linear or only slightly more than linear (O(|E| log |V|)). We have also mentioned, in passing, that for some problems certain variations seem harder than the original.

Recall that the Euler circuit problem, which finds a path that touches every edge exactly once, is solvable in linear time. The Hamiltonian cycle problem asks for a simple cycle that contains every vertex. No linear algorithm is known for this problem.

The single-source unweighted shortest-path problem for directed graphs is also solvable in linear time. No linear-time algorithm is known for the corresponding longest-simple-path problem.

The situation for these problem variations is actually much worse than we have described. Not only are no linear algorithms known for these variations, but there are no known algorithms that are guaranteed to run in polynomial time. The best known algorithms for these problems could take exponential time on some inputs.
In this section we will take a brief look at this problem. This topic is rather complex, so we will only take a quick and informal look at it. Because of this, the discussion may be (necessarily) somewhat imprecise in places.

We will see that there are a host of important problems that are roughly equivalent in complexity. These problems form a class called the NP-complete problems. The exact complexity of these NP-complete problems has yet to be determined and remains the foremost open problem in theoretical computer science. Either all these problems have polynomial-time solutions or none of them do.

Easy vs. Hard

When classifying problems, the first step is to examine the boundaries. We have already seen that many problems can be solved in linear time. We have also seen some O(log N) running times, but these either assume some preprocessing (such as input already being read or a data structure already being built) or occur on arithmetic examples. For instance, the gcd algorithm, when applied to two numbers M and N, takes O(log N) time. Since the numbers consist of log M and log N bits, respectively, the gcd algorithm is really taking time that is linear in the amount, or size, of input. Thus, when we measure running time, we will be concerned with the running time as a function of the amount of input. Generally, we cannot expect better than linear running time.

At the other end of the spectrum lie some truly hard problems. These problems are so hard that they are impossible. This does not mean the typical exasperated moan, which means that it would take a genius to solve the problem. Just as real numbers are not sufficient to express a solution to x^2 < 0, one can prove that computers cannot solve every problem that happens to come along. These "impossible" problems are called undecidable problems.

One particular undecidable problem is the halting problem. Is it possible to have your C++ compiler have an extra feature that not only detects syntax errors but also all infinite loops? This seems like a hard problem, but one might expect that if some very clever programmers spent enough time on it, they could produce this enhancement.

The intuitive reason that this problem is undecidable is that such a program might have a hard time checking itself. For this reason, these problems are sometimes called recursively undecidable.

If an infinite loop-checking program could be written, surely it could be used to check itself. We could then produce a program called LOOP. LOOP takes as input a program, P, and runs P on itself. It prints out the phrase YES if P loops when run on itself. If P terminates when run on itself, a natural thing to do would be to print out NO. Instead of doing that, we will have LOOP go into an infinite loop.

What happens when LOOP is given itself as input? Either LOOP halts, or it does not halt. The problem is that both these possibilities lead to contradictions, in much the same way as does the phrase "This sentence is a lie."

By our definition, LOOP(P) goes into an infinite loop if P(P) terminates. Suppose that when P = LOOP, P(P) terminates. Then, according to the LOOP program, LOOP(P) is obligated to go into an infinite loop. Thus, we must have LOOP(LOOP) terminating and entering an infinite loop, which is clearly not possible. On the other hand, suppose that when P = LOOP, P(P) enters an infinite loop. Then LOOP(P) must terminate, and we arrive at the same set of contradictions. Thus, we see that the program LOOP cannot possibly exist.

The Class NP

A few steps down from the horrors of undecidable problems lies the class NP. NP stands for nondeterministic polynomial-time. A deterministic machine, at each point in time, is executing an instruction. Depending on the instruction, it then goes to some next instruction, which is unique. A nondeterministic machine has a choice of next steps. It is free to choose any that it wishes, and if one of these steps leads to a solution, it will always choose the correct one. A nondeterministic machine thus has the power of extremely good (optimal) guessing. This probably seems like a ridiculous model, since nobody could possibly build a nondeterministic computer, and because it would seem to be an incredible upgrade to your standard computer (every problem might now seem trivial). We will see that nondeterminism is a very useful theoretical construct. Furthermore, nondeterminism is not as powerful as one might think. For instance, undecidable problems are still undecidable, even if nondeterminism is allowed.

A simple way to check if a problem is in NP is to phrase the problem as a yes/no question. The problem is in NP if, in polynomial time, we can prove that any "yes" instance is correct. We do not have to worry about "no" instances, since the program always makes the right choice. Thus, for the Hamiltonian cycle problem, a "yes" instance would be any simple circuit in the graph that includes all the vertices. This is in NP, since, given the path, it is a simple matter to check that it is really a Hamiltonian cycle. Appropriately phrased questions, such as "Is there a simple path of length > K?" can also easily be checked and are in NP. Any path that satisfies this property can be checked trivially.

The class NP includes all problems that have polynomial-time solutions, since obviously the solution provides a check. One would expect that since it is so much easier to check an answer than to come up with one from scratch, there would be problems in NP that do not have polynomial-time solutions. To date no such problem has been found, so it is entirely possible, though not considered likely by experts, that nondeterminism is not such an important improvement. The problem is that proving exponential lower bounds is an extremely difficult task. The information theory bound technique, which we used to show that sorting requires Omega(N log N) comparisons, does not seem to be adequate for the task, because the decision trees are not nearly large enough.

Notice also that not all decidable problems are in NP. Consider the problem of determining whether a graph does not have a Hamiltonian cycle. To prove that a graph has a Hamiltonian cycle is a relatively simple matter; we just need to exhibit one. Nobody knows how to show, in polynomial time, that a graph does not have a Hamiltonian cycle. It seems that one must enumerate all the cycles and check them one by one. Thus the non-Hamiltonian cycle problem is not known to be in NP.

NP-Complete Problems

Among all the problems known to be in NP, there is a subset, known as the NP-complete problems, which contains the hardest. An NP-complete problem has the property that any problem in NP can be polynomially reduced to it.
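To make the membership-in-NP discussion concrete, here is a sketch (not the book's code; vertex naming 0..n-1 and adjacency lists for an undirected graph are assumptions of this illustration) of the polynomial-time check for a "yes" certificate of the Hamiltonian cycle problem. It is this easy verification of a proposed cycle, rather than finding one, that places the problem in NP:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch: verify a claimed Hamiltonian cycle certificate in polynomial time.
// The certificate lists the vertices in the order they are visited; the cycle
// closes from the last vertex back to the first.
bool isHamiltonianCycle(const std::vector<std::vector<int>>& adj,
                        const std::vector<int>& cycle) {
    int n = adj.size();
    if (n == 0 || (int)cycle.size() != n) return false;
    std::vector<char> used(n, 0);
    for (int v : cycle) {
        if (v < 0 || v >= n || used[v]) return false;  // each vertex exactly once
        used[v] = 1;
    }
    for (int i = 0; i < n; ++i) {                      // consecutive pairs adjacent?
        int v = cycle[i], w = cycle[(i + 1) % n];
        if (std::find(adj[v].begin(), adj[v].end(), w) == adj[v].end())
            return false;
    }
    return true;
}
```

The check runs in time polynomial in the size of the graph, even though no polynomial-time algorithm is known for producing the certificate in the first place.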
A problem P1 can be reduced to P2 as follows: Provide a mapping so that any instance of P1 can be transformed to an instance of P2. Solve P2, and then map the answer back to the original. As an example, numbers are entered into a pocket calculator in decimal. The decimal numbers are converted to binary, and all calculations are performed in binary. Then the final answer is converted back to decimal for display. For P1 to be polynomially reducible to P2, all the work associated with the transformations must be performed in polynomial time.

The reason that NP-complete problems are the hardest NP problems is that a problem that is NP-complete can essentially be used as a subroutine for any problem in NP, with only a polynomial amount of overhead. Thus, if any NP-complete problem has a polynomial-time solution, then every problem in NP must have a polynomial-time solution. This makes the NP-complete problems the hardest of all NP problems.

Suppose we have an NP-complete problem, P1. Suppose P2 is known to be in NP. Suppose further that P1 polynomially reduces to P2, so that we can solve P1 by using P2 with only a polynomial time penalty. Since P1 is NP-complete, every problem in NP polynomially reduces to P1. By applying the closure property of polynomials, we see that every problem in NP is polynomially reducible to P2: we reduce the problem to P1 and then reduce P1 to P2. Thus, P2 is NP-complete.

As an example, suppose that we already know that the Hamiltonian cycle problem is NP-complete. The traveling salesman problem is as follows.

Traveling Salesman Problem
Given a complete graph, G = (V, E), with edge costs, and an integer K, is there a simple cycle that visits all vertices and has total cost <= K?

The problem is different from the Hamiltonian cycle problem, because all |V|(|V| - 1)/2 edges are present and the graph is weighted. This problem has many important applications. For instance, printed circuit boards need to have holes punched so that chips, resistors, and other electronic components can be placed. This is done mechanically. Punching the hole is a quick operation; the time-consuming step is positioning the hole puncher. The time required for positioning depends on the distance traveled from hole to hole. Since we would like to punch every hole (and then return to the start for the next board), and minimize the total amount of time spent traveling, what we have is a traveling salesman problem.

The traveling salesman problem is NP-complete. It is easy to see that a solution can be checked in polynomial time, so it is certainly in NP. To show that it is NP-complete, we polynomially reduce the Hamiltonian cycle problem to it. To do this we construct a new graph, G'. G' has the same vertices as G. For G', each edge (v, w) has a weight of 1 if (v, w) is in G, and 2 otherwise. We choose K = |V|. See the figure below. It is easy to verify that G has a Hamiltonian cycle if and only if G' has a traveling salesman tour of total weight |V|.

[Figure: The Hamiltonian cycle problem transformed to the traveling salesman problem.]

There is now a long list of problems known to be NP-complete. To prove that some new problem is NP-complete, it must be shown to be in NP, and then an appropriate NP-complete problem must be transformed into it. Although the transformation to the traveling salesman problem was rather straightforward, most transformations are actually quite involved and require some tricky constructions. Generally, several different NP-complete problems are considered before the problem that actually provides the reduction. As we are only interested in the general ideas, we will not show any more transformations; the interested reader can consult the references.

The alert reader may be wondering how the first NP-complete problem was actually proven to be NP-complete. Since proving that a problem is NP-complete requires transforming it from another NP-complete problem, there must be some NP-complete problem for which this strategy will not work. The first problem that was proven to be NP-complete was the satisfiability problem. The satisfiability problem takes as input a boolean expression and asks whether the expression has an assignment to the variables that gives a value of true.

Satisfiability is certainly in NP, since it is easy to evaluate a boolean expression and check whether the result is true. In 1971, Cook showed that satisfiability was NP-complete by directly proving that all problems that are in NP could be transformed to satisfiability. To do this, he used the one known fact about every problem in NP: Every problem in NP can be solved in polynomial time by a nondeterministic computer. The formal model for a computer is known as a Turing machine. Cook showed how the actions of this machine could be simulated by an extremely complicated and long, but still polynomial, boolean formula. This boolean formula would be true if and only if the program which was being run by the Turing machine produced a "yes" answer for its input.

Once satisfiability was shown to be NP-complete, a host of new NP-complete problems, including some of the most classic problems, were also shown to be NP-complete.

In addition to the satisfiability, Hamiltonian circuit, traveling salesman, and longest-path problems, which we have already examined, some of the more well-known NP-complete problems which we have not discussed are bin packing, knapsack, graph coloring, and clique. The list is quite extensive and includes problems from operating systems (scheduling and security), database systems, operations research, logic, and especially graph theory.
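The Hamiltonian-cycle-to-traveling-salesman transformation described above is mechanical enough to write down directly. The following is an illustrative sketch (the matrix representation, the struct name, and 0-based vertex names are assumptions of this sketch, not anything from the book): given the edges of G, it builds the complete weighted graph G' with costs 1 and 2 and sets the bound K = |V|.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// The instance of the traveling salesman problem produced by the reduction:
// a complete weighted graph plus a bound K on the tour cost.
struct TspInstance {
    std::vector<std::vector<int>> cost;  // cost[v][w] for the complete graph G'
    int k;                               // tour-cost bound K
};

// Sketch of the reduction in the text: edge (v,w) costs 1 if it is in G,
// 2 otherwise; G has a Hamiltonian cycle iff G' has a tour of cost K = |V|.
TspInstance hamiltonianToTsp(int n, const std::vector<std::pair<int, int>>& edges) {
    TspInstance t;
    t.cost.assign(n, std::vector<int>(n, 2));      // edges absent from G cost 2
    for (auto [v, w] : edges) {                    // edges present in G cost 1
        t.cost[v][w] = 1;
        t.cost[w][v] = 1;
    }
    for (int v = 0; v < n; ++v) t.cost[v][v] = 0;  // self-loops unused; placeholder
    t.k = n;                                       // K = |V|
    return t;
}
```

The transformation clearly runs in polynomial time (it touches each entry of an n-by-n matrix once), which is exactly what the definition of a polynomial reduction requires.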
Summary

In this chapter we have seen how graphs can be used to model many real-life problems. Many of the graphs that occur are typically very sparse, so it is important to pay attention to the data structures that are used to implement them. We have also seen a class of problems that do not seem to have efficient solutions. Some techniques for dealing with these problems will be discussed in a later chapter.

Exercises

- Find a topological ordering for the graph in the figure below.
- If a stack is used instead of a queue for the topological sort algorithm, does a different ordering result? Why might one data structure give a "better" answer?
- Write a program to perform a topological sort on a graph.
- An adjacency matrix requires O(|V|^2) merely to initialize using a standard double loop. Propose a method that stores a graph in an adjacency matrix (so that testing for the existence of an edge is O(1)) but avoids the quadratic running time.
- (a) Find the shortest path from A to all other vertices for the graph in the figure below. (b) Find the shortest unweighted path from B to all other vertices for the same graph.
- What is the worst-case running time of Dijkstra's algorithm when implemented with d-heaps?
- (a) Give an example where Dijkstra's algorithm gives the wrong answer in the presence of a negative edge but no negative-cost cycle. (b) Show that the weighted shortest-path algorithm suggested in the text works if there are negative-weight edges but no negative-cost cycles, and that the running time of this algorithm is O(|E| * |V|).

[Figure: Graph used in the preceding exercises.]
[Figure: Graph used in a later exercise.]

- Suppose all the edge weights in a graph are integers between 1 and |E|. How fast can Dijkstra's algorithm be implemented?
- Write a program to solve the single-source shortest-path problem.
- (a) Explain how to modify Dijkstra's algorithm to produce a count of the number of different minimum paths from v to w. (b) Explain how to modify Dijkstra's algorithm so that if there is more than one minimum path from v to w, a path with the fewest number of edges is chosen.
- Find the maximum flow in the network of the figure above.
- Suppose that G = (V, E) is a tree, s is the root, and we add a vertex t and edges of infinite capacity from all leaves in G to t. Give a linear-time algorithm to find a maximum flow from s to t.
- A bipartite graph, G = (V, E), is a graph such that V can be partitioned into two subsets, V1 and V2, and no edge has both its vertices in the same subset.
  (a) Give a linear algorithm to determine whether a graph is bipartite.
  (b) The bipartite matching problem is to find the largest subset E' of E such that no vertex is included in more than one edge. A matching of four edges (indicated by dashed edges) is shown in the figure below. There is a matching of five edges, which is maximum. Show how the bipartite matching problem can be used to solve the following problem: We have a set of instructors, a set of courses, and a list of courses that each instructor is qualified to teach. If no instructor is required to teach more than one course, and only one instructor may teach a given course, what is the maximum number of courses that can be offered?
  (c) Show that the network flow problem can be used to solve the bipartite matching problem.
  (d) What is the time complexity of your solution to part (c)?
[Figure: A bipartite graph.]

- (a) Give an algorithm to find an augmenting path that permits the maximum flow. (b) Let f be the amount of flow remaining in the residual graph. Show that the augmenting path produced by the algorithm in part (a) admits a path of capacity f/|E|. (c) Show that after |E| consecutive iterations, the total flow remaining in the residual graph is reduced from f to at most f/e, where e is the base of the natural logarithm. (d) Show that |E| ln f iterations suffice to produce the maximum flow.
- Find a minimum spanning tree for the graph in the figure below using both Prim's and Kruskal's algorithms. Is this minimum spanning tree unique? Why?
- Does either Prim's or Kruskal's algorithm work if there are negative edge weights?
- Show that a graph of V vertices can have V^(V-2) minimum spanning trees.
- Write a program to implement Kruskal's algorithm.
- If all of the edges in a graph have weights between 1 and |V|, how fast can the minimum spanning tree be computed?

[Figure: Graph used in one of the preceding exercises.]
[Figure: Graph used in one of the following exercises.]

- Give an algorithm to find a maximum spanning tree. Is this harder than finding a minimum spanning tree?
- Find all the articulation points in the graph in the figure below. Show the depth-first spanning tree and the values of Num and Low for each vertex.
- Prove that the algorithm to find articulation points works.
- (a) Give an algorithm to find the minimum number of edges that need to be removed from an undirected graph so that the resulting graph is acyclic. (b) Show that this problem is NP-complete for directed graphs.
- Prove that in a depth-first spanning forest of a directed graph, all cross edges go from right to left.
- Give an algorithm to decide whether an edge (v, w) in a depth-first spanning forest of a directed graph is a tree, back, cross, or forward edge.
- Find the strongly connected components in the graph of the figure below.

[Figure: Graph used in the preceding exercises.]

- Write a program to find the strongly connected components in a digraph.
- Give an algorithm that finds the strongly connected components in only one depth-first search. Use an algorithm similar to the biconnectivity algorithm.
- The biconnected components of a graph, G, are a partition of the edges into sets such that the graph formed by each set of edges is biconnected. Modify the articulation-point algorithm to find the biconnected components instead of the articulation points.
- Suppose we perform a breadth-first search of an undirected graph and build a breadth-first spanning tree. Show that all edges in the tree are either tree edges or cross edges.
- Give an algorithm to find, in an undirected (connected) graph, a path that goes through every edge exactly once in each direction.
- (a) Write a program to find an Euler circuit in a graph if one exists. (b) Write a program to find an Euler tour in a graph if one exists.
- An Euler circuit in a directed graph is a cycle in which every edge is visited exactly once. (a) Prove that a directed graph has an Euler circuit if and only if it is strongly connected and every vertex has equal indegree and outdegree. (b) Give a linear-time algorithm to find an Euler circuit in a directed graph where one exists.
- (a) Consider the following solution to the Euler circuit problem: Assume that the graph is biconnected. Perform a depth-first search, taking back edges only as a last resort. If the graph is not biconnected, apply the algorithm recursively on the biconnected components. Does this algorithm work? (b) Suppose that when taking back edges, we take the back edge to the nearest ancestor. Does the algorithm work?
- A planar graph is a graph that can be drawn in a plane without any two edges intersecting. (a) Show that neither of the graphs in the figure below is planar. (b) Show that in a planar graph, there must exist some vertex which is connected to no more than five nodes. (c) Show that in a planar graph, |E| <= 3|V| - 6.

[Figure: Graphs used in the preceding exercise.]
- A multigraph is a graph in which multiple edges are allowed between pairs of vertices. Which of the algorithms in this chapter work without modification for multigraphs? What modifications need to be done for the others?
- Let G = (V, E) be an undirected graph. Use depth-first search to design a linear algorithm to convert each edge in G to a directed edge such that the resulting graph is strongly connected, or determine that this is not possible.
- You are given a set of sticks, which are lying on top of each other in some configuration. Each stick is specified by its two endpoints; each endpoint is an ordered triple giving its x, y, and z coordinates; no stick is vertical. A stick may be picked up only if there is no stick on top of it. (a) Explain how to write a routine that takes two sticks a and b and reports whether a is above, below, or unrelated to b. (This has nothing to do with graph theory.) (b) Give an algorithm that determines whether it is possible to pick up all the sticks, and if so, provides a sequence of stick pickups that accomplishes this.
- A graph is k-colorable if each vertex can be given one of k colors, and no edge connects identically colored vertices. Give a linear-time algorithm to test a graph for two-colorability. Assume graphs are stored in adjacency-list format; you must specify any additional data structures that are needed.
- Give a polynomial-time algorithm that finds V/2 vertices that collectively cover at least three-fourths (3/4) of the edges in an arbitrary undirected graph.
- Show how to modify the topological sort algorithm so that if the graph is not acyclic, the algorithm will print out some cycle. You may not use depth-first search.
- Let G be a directed graph with N vertices. A vertex s is called a sink if, for every v in V such that s is not v, there is an edge (v, s), and there are no edges of the form (s, v). Give an O(N) algorithm to determine whether or not G has a sink, assuming that G is given by its adjacency matrix.
- When a vertex and its incident edges are removed from a tree, a collection of subtrees remains. Give a linear-time algorithm that finds a vertex whose removal from an N-vertex tree leaves no subtree with more than N/2 vertices.
- Give a linear-time algorithm to determine the longest unweighted path in an acyclic undirected graph (that is, a tree).
- Consider an N-by-N grid in which some squares are occupied by black circles. Two squares belong to the same group if they share a common edge. In the figure below, there is one group of four occupied squares, three groups of two occupied squares, and two individual occupied squares. Assume that the grid is represented by a two-dimensional array. Write a program that does the following: (a) computes the size of a group when a square in the group is given; (b) computes the number of different groups; (c) lists all groups.
- An earlier section described the generation of mazes. Suppose we want to output the path in the maze. Assume that the maze is represented as a matrix; each cell in the matrix stores information about what walls are present (or absent).

[Figure: Grid for the preceding exercise.]

  (a) Write a program that computes enough information to output a path in the maze. Give output in the form SEN... (representing go south, then east, then north, etc.).
  (b) If you are using a C++ compiler with a windowing package, write a program that draws the maze and, at the press of a button, draws the path.
- Suppose that walls in the maze can be knocked down, with a penalty of P squares. P is specified as a parameter to the algorithm. (If the penalty is 0, then the problem is trivial.) Describe an algorithm to solve this version of the problem. What is the running time for your algorithm?
- Suppose that the maze may or may not have a solution. (a) Describe a linear-time algorithm that determines the minimum number of walls that need to be knocked down to create a solution. (Hint: Use a double-ended queue.) (b) Describe an algorithm (not necessarily linear-time) that finds a shortest path after knocking down the minimum number of walls. Note that the solution to part (a) would give no information about which walls would be the best to knock down. (Hint: Use the previous exercise.)
- Write a program to compute word ladders where single-character substitutions have a cost of 1, and single-character additions or deletions have a cost of P, specified by the user. As mentioned at the end of an earlier section, this is essentially a weighted shortest-path problem.

Explain how each of the following problems can be solved by applying a shortest-path algorithm. Then design a mechanism for representing an input, and write a program that solves the problem.

- The input is a list of league game scores (and there are no ties). If all teams have at least one win and one loss, we can generally "prove," by a silly transitivity argument, that any team is better than any other. For instance, in a six-team league where everyone plays three games, suppose we have the following results: A beat B and C; B beat C and F; C beat D; D beat E; E beat A; F beat D and E. Then we can prove that A is better than F, because A beat B, who in turn beat F. Similarly, we can prove that F is better than A, because F beat E, and E beat A. Given a list of game scores and two teams X and Y, either find a proof (if one exists) that X is better than Y, or indicate that no proof of this form can be found.
- The input is a collection of currencies and their exchange rates. Is there a sequence of exchanges that makes money instantly? For instance, if the currencies are X, Y, and Z, and the exchange rates allow a round trip of exchanges from Zs to Xs to Ys and back to Zs that ends with more Zs than we started with, we have made an instant profit.
- A student needs to take a certain number of courses to graduate, and these courses have prerequisites that must be followed. Assume that all courses are offered every semester and that the student can take an unlimited number of courses. Given a list of courses and their prerequisites, compute a schedule that requires the minimum number of semesters.
- The object of the Kevin Bacon game is to link a movie actor to Kevin Bacon via shared movie roles. The minimum number of links is an actor's Bacon number. For instance, Tom Hanks has a Bacon number of 1; he was in Apollo 13 with Kevin Bacon. Sally Field has a Bacon number of 2, because she was in Forrest Gump with Tom Hanks, who was in Apollo 13 with Kevin Bacon. Almost all well-known actors have a Bacon number of 1 or 2. Assume that you have a comprehensive list of actors, with roles,* and do the following: (a) Explain how to find an actor's Bacon number. (b) Explain how to find the actor with the highest Bacon number. (c) Explain how to find the minimum number of links between two arbitrary actors.

- The clique problem can be stated as follows: Given an undirected graph, G = (V, E), and an integer K, does G contain a complete subgraph of at least K vertices? The vertex cover problem can be stated as follows: Given an undirected graph, G = (V, E), and an integer K, does G contain a subset V' of V such that |V'| <= K and every edge in G has a vertex in V'? Show that the clique problem is polynomially reducible to vertex cover.
- Assume that the Hamiltonian cycle problem is NP-complete for undirected graphs. (a) Prove that the Hamiltonian cycle problem is NP-complete for directed graphs. (b) Prove that the unweighted simple longest-path problem is NP-complete for directed graphs.
- The baseball card collector problem is as follows: Given packets P1, P2, ..., PM, each of which contains a subset of the year's baseball cards, and an integer K, is it possible to collect all the baseball cards by choosing <= K packets? Show that the baseball card collector problem is NP-complete.

*For instance, see the Internet Movie Database files actors.list.gz and actresses.list.gz at ftp://ftp.fu-berlin.de/pub/misc/movies/database.
References

Good graph theory textbooks include [ ], [ ], [ ], and [ ]. More advanced topics, including the more careful attention to running times, are covered in [ ], [ ], and [ ]. Use of adjacency lists was advocated in [ ].

The topological sort algorithm is from [ ], as described in [ ]. Dijkstra's algorithm appeared in [ ]. The improvements using d-heaps and Fibonacci heaps are described in [ ] and [ ], respectively. The shortest-path algorithm with negative edge weights is due to Bellman [ ]; Tarjan [ ] describes a more efficient way to guarantee termination.

Ford and Fulkerson's seminal work on network flow is [ ]. The idea of augmenting along shortest paths, or on paths admitting the largest flow increase, is from [ ]. Other approaches to the problem can be found in [ ], [ ], [ ], [ ], [ ], [ ], and [ ]. An algorithm for the min-cost flow problem can be found in [ ].

An early minimum spanning tree algorithm can be found in [ ]. Prim's algorithm is from [ ]; Kruskal's algorithm appears in [ ]. Two O(|E| log log |V|) algorithms are [ ] and [ ]. The theoretically best-known algorithms appear in [ ], [ ], [ ], and [ ]. An empirical study of these algorithms suggests that Prim's algorithm, implemented with decreaseKey, is best in practice on most graphs [ ].

The algorithm for biconnectivity is from [ ]. The first linear-time strong components algorithm (see the exercises) appears in the same paper. The algorithm presented in the text is due to Kosaraju (unpublished) and Sharir [ ]. Other applications of depth-first search appear in [ ], [ ], [ ], and [ ]. (As mentioned earlier, the results in [ ] and [ ] have been improved, but the basic algorithm is unchanged.)

The classic reference work for the theory of NP-complete problems is [ ]. Additional material can be found in [ ]. The NP-completeness of satisfiability is shown in [ ] and, independently, by Levin. The other seminal paper is [ ], which showed the NP-completeness of 21 problems. An excellent survey of complexity theory is [ ]. An approximation algorithm for the traveling salesman problem, which generally gives nearly optimal results, can be found in [ ].

A solution to one of the exercises can be found in [ ]. Solutions to the bipartite matching problem can be found in [ ] and [ ]. The problem can be generalized by adding weights to the edges and removing the restriction that the graph is bipartite. Efficient solutions for the unweighted matching problem for general graphs are quite complex. Details can be found in [ ], [ ], and [ ].

One of the exercises deals with planar graphs, which commonly arise in practice. Planar graphs are very sparse, and many difficult problems are easier on planar graphs. An example is the graph isomorphism problem, which is solvable in linear time for planar graphs [ ]. No polynomial-time algorithm is known for general graphs.

[ ] A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Mass.
[ ] R. K. Ahuja, K. Mehlhorn, J. B. Orlin, and R. E. Tarjan, "Faster Algorithms for the Shortest Path Problem," Journal of the ACM.
[ ] R. E. Bellman, "On a Routing Problem," Quarterly of Applied Mathematics.
[ ] O. Boruvka, "O jistem problemu minimalnim (On a Minimal Problem)," Praca Moravske Prirodovedecke Spolecnosti.
[ ] B. Chazelle, "A Minimum Spanning Tree Algorithm with Inverse-Ackermann Type Complexity," Journal of the ACM.
[ ] D. Cheriton and R. E. Tarjan, "Finding Minimum Spanning Trees," SIAM Journal on Computing.
[ ] J. Cheriyan and T. Hagerup, "A Randomized Maximum-Flow Algorithm," SIAM Journal on Computing.
[ ] S. Cook, "The Complexity of Theorem Proving Procedures," Proceedings of the Third Annual ACM Symposium on Theory of Computing.
[ ] N. Deo, Graph Theory with Applications to Engineering and Computer Science, Prentice Hall, Englewood Cliffs, N.J.
[ ] E. W. Dijkstra, "A Note on Two Problems in Connexion with Graphs," Numerische Mathematik.
[ ] E. A. Dinic, "Algorithm for Solution of a Problem of Maximum Flow in Networks with Power Estimation," Soviet Mathematics Doklady.
[ ] J. Edmonds, "Paths, Trees, and Flowers," Canadian Journal of Mathematics.
[ ] J. Edmonds and R. M. Karp, "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems," Journal of the ACM.
[ ] S. Even, Graph Algorithms, Computer Science Press, Potomac, Md.
[ ] L. R. Ford, Jr., and D. R. Fulkerson, Flows in Networks, Princeton University Press, Princeton, N.J.
[ ] M. L. Fredman and R. E. Tarjan, "Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms," Journal of the ACM.
[ ] H. N. Gabow, "Data Structures for Weighted Matching and Nearest Common Ancestors with Linking," Proceedings of the First Annual ACM-SIAM Symposium on Discrete Algorithms.
[ ] H. N. Gabow, Z. Galil, T. Spencer, and R. E. Tarjan, "Efficient Algorithms for Finding Minimum Spanning Trees in Undirected and Directed Graphs," Combinatorica.
[ ] Z. Galil, "Efficient Algorithms for Finding Maximum Matchings in Graphs," ACM Computing Surveys.
[ ] Z. Galil and E. Tardos, "An O(n^2 (m + n log n) log n) Min-Cost Flow Algorithm," Journal of the ACM.
[ ] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco.
[ ] A. V. Goldberg and S. Rao, "Beyond the Flow Decomposition Barrier," Journal of the ACM.
[ ] A. V. Goldberg and R. E. Tarjan, "A New Approach to the Maximum-Flow Problem," Journal of the ACM.
[ ] F. Harary, Graph Theory, Addison-Wesley, Reading, Mass.
[ ] J. E. Hopcroft and R. M. Karp, "An n^(5/2) Algorithm
for maximum matchings in bipartite graphs,siam journal on computing ( ) - hopcroft and tarjan"algorithm efficient algorithms for graph manipulation,communications of the acm ( ) - |
23,739 | hopcroft and tarjan"dividing graph into triconnected components,siam journal on computing ( ) - hopcroft and tarjan"efficient planarity testing,journal of the acm ( ) - hopcroft and wong"linear-time algorithm for isomorphism of planar graphs,proceedings of the sixth annual acm symposium on theory of computing ( ) - johnson"efficient algorithms for shortest paths in sparse networks,journal of the acm ( ) - kahn"topological sorting of large networks,communications of the acm ( ) - kargerp kleinand tarjan" randomized linear-time algorithm to find minimum spanning trees,journal of the acm ( ) - karp"reducibility among combinatorial problems,complexity of computer computations (eds miller and thatcher)plenum pressnew york - karzanov"determining the maximal flow in network by the method of preflows,soviet mathematics doklady ( ) - kings raoand tarjan" faster deterministic maximum flow algorithm,journal of algorithms ( ) - knuththe art of computer programmingvol fundamental algorithms ed addison-wesleyreadingmass kruskaljr "on the shortest spanning subtree of graph and the traveling salesman problem,proceedings of the american mathematical society ( ) - kuhn"the hungarian method for the assignment problem,naval research logistics quarterly ( ) - lawlercombinatorial optimizationnetworks and matroidsholtreinhart and winstonnew york lin and kernighan"an effective heuristic algorithm for the traveling salesman problem,operations research ( ) - melhorndata structures and algorithms graph algorithms and np-completenessspringer-verlagberlin moret and shapiro"an empirical analysis of algorithms for constructing minimum spanning tree,proceedings of the second workshop on algorithms and data structures ( ) - orlin"max flows in (nmtimeor better,proceedings of the forty-fifth annual acm symposium on theory of computing ( papadimitriou and steiglitzcombinatorial optimizationalgorithms and complexityprentice hallenglewood cliffsn prim"shortest connection networks and some 
generalizations,bell system technical journal ( ) - sharir" strong-connectivity algorithm and its application in data flow analysis,computers and mathematics with applications ( ) - tarjan"depth first search and linear graph algorithms,siam journal on computing ( ) - |
23,740 | graph algorithms tarjan"testing flow graph reducibility,journal of computer and system sciences ( ) - tarjan"finding dominators in directed graphs,siam journal on computing ( ) - tarjan"complexity of combinatorial algorithms,siam review ( ) - tarjandata structures and network algorithmssociety for industrial and applied mathematicsphiladelphia yao"an (|elog log | |algorithm for finding minimum spanning trees,information processing letters ( ) - |
Algorithm Design Techniques

So far, we have been concerned with the efficient implementation of algorithms. We have seen that when an algorithm is given, the actual data structures need not be specified. It is up to the programmer to choose the appropriate data structure in order to make the running time as small as possible.

In this chapter, we switch our attention from the implementation of algorithms to the design of algorithms. Most of the algorithms that we have seen so far are straightforward and simple. This chapter contains some algorithms that are much more subtle, and some require an argument (in some cases lengthy) to show that they are indeed correct. In this chapter, we will focus on five of the common types of algorithms used to solve problems. For many problems, it is quite likely that at least one of these methods will work. Specifically, for each type of algorithm we will:

- See the general approach.
- Look at several examples (the exercises at the end of the chapter provide many more examples).
- Discuss, in general terms, the time and space complexity, where appropriate.

Greedy Algorithms

The first type of algorithm we will examine is the greedy algorithm. We have already seen three greedy algorithms: Dijkstra's, Prim's, and Kruskal's algorithms. Greedy algorithms work in phases. In each phase, a decision is made that appears to be good, without regard for future consequences. Generally, this means that some local optimum is chosen. This "take what you can get now" strategy is the source of the name for this class of algorithms. When the algorithm terminates, we hope that the local optimum is equal to the global optimum. If this is the case, then the algorithm is correct; otherwise, the algorithm has produced a suboptimal solution. If the absolute best answer is not required, then simple greedy algorithms are sometimes used to generate approximate answers, rather than using the more complicated algorithms generally required to generate an exact answer.

There are several real-life examples of greedy algorithms. The most obvious is the coin-changing problem. To make change in U.S. currency, we repeatedly dispense the largest
denomination. Thus, to give out seventeen dollars and sixty-one cents in change, we give out a ten-dollar bill, a five-dollar bill, two one-dollar bills, two quarters, one dime, and one penny. By doing this, we are guaranteed to minimize the number of bills and coins. This algorithm does not work in all monetary systems, but fortunately, we can prove that it does work in the American monetary system. Indeed, it works even if two-dollar bills and fifty-cent pieces are allowed.

Traffic problems provide an example where making locally optimal choices does not always work. For example, during certain rush hour times in Miami, it is best to stay off the prime streets even if they look empty, because traffic will come to a standstill a mile down the road, and you will be stuck. Even more shocking, it is better in some cases to make a temporary detour in the direction opposite your destination in order to avoid all traffic bottlenecks.

In the remainder of this section, we will look at several applications that use greedy algorithms. The first application is a simple scheduling problem. Virtually all scheduling problems are either NP-complete (or of similar difficult complexity) or are solvable by a greedy algorithm. The second application deals with file compression and is one of the earliest results in computer science. Finally, we will look at an example of a greedy approximation algorithm.

A Simple Scheduling Problem

We are given jobs j1, j2, ..., jN, all with known running times t1, t2, ..., tN, respectively. We have a single processor. What is the best way to schedule these jobs in order to minimize the average completion time? In this entire section, we will assume nonpreemptive scheduling: once a job is started, it must run to completion.

As an example, suppose we have the four jobs and associated running times shown in the figure below. One possible schedule is shown in the next figure. Because j1 finishes in 15

Job   Time
j1    15
j2     8
j3     3
j4    10

Figure: Jobs and times
Figure: Schedule #1
Figure: Schedule #2 (optimal)

(time units), j2 in 23, j3 in 26, and j4 in 36, the average completion time is 25. A better schedule, which yields a mean completion time of 17.75, is shown in the figure above.

The schedule given in the figure above is arranged by shortest job first. We can show that this will always yield an optimal schedule. Let the jobs in the schedule be j_{i_1}, j_{i_2}, ..., j_{i_N}. The first job finishes in time t_{i_1}. The second job finishes after t_{i_1} + t_{i_2}, and the third job finishes after t_{i_1} + t_{i_2} + t_{i_3}. From this, we see that the total cost, C, of the schedule is

C = \sum_{k=1}^{N} (N - k + 1) t_{i_k}
  = (N + 1) \sum_{k=1}^{N} t_{i_k} - \sum_{k=1}^{N} k \cdot t_{i_k}

Notice that in the second form of this equation, the first sum is independent of the job ordering, so only the second sum affects the total cost. Suppose that in an ordering there exists some x > y such that t_{i_x} < t_{i_y}. Then a calculation shows that by swapping j_{i_x} and j_{i_y}, the second sum increases, decreasing the total cost. Thus, any schedule of jobs in which the times are not monotonically nondecreasing must be suboptimal. The only schedules left are those in which the jobs are arranged by smallest running time first, breaking ties arbitrarily.

This result indicates the reason the operating system scheduler generally gives precedence to shorter jobs.

The Multiprocessor Case

We can extend this problem to the case of several processors. Again we have jobs j1, j2, ..., jN, with associated running times t1, t2, ..., tN, and a number P of processors. We will assume, without loss of generality, that the jobs are ordered, shortest running time first. As an example, suppose P = 3, and the jobs are as shown in the figure below. The next figure shows an optimal arrangement to minimize mean completion time. Jobs j1, j4, and j7 are run on processor 1; processor 2 handles j2, j5, and j8; and processor 3 runs the remaining jobs. The total time to completion is 165, for an average of 165/9 = 18.33.

The algorithm to solve the multiprocessor case is to start jobs in order, cycling through processors. It is not hard to show that no other ordering can do better, although if the number of processors P evenly divides the number of jobs N, there are many optimal orderings. This is obtained by, for each 0 ≤ i < N/P, placing each of the jobs j_{iP+1} through j_{(i+1)P}
on a different processor. In our case, the figure below shows a second optimal solution. Even if P does not divide N exactly, there can still be many optimal solutions, even if all the job times are distinct. We leave further investigation of this as an exercise.
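The shortest-job-first rule and its cycling multiprocessor extension can be sketched in a few lines. This is a minimal sketch (the function name is ours); it assumes the job times are plain numbers and returns the mean completion time of the resulting schedule:

```python
def average_completion_time(times, processors=1):
    """Schedule jobs shortest-first, cycling through the processors in order,
    and return the mean completion time of the schedule."""
    finish = [0.0] * processors          # current finish time of each processor
    total = 0.0
    for i, t in enumerate(sorted(times)):
        p = i % processors               # cycle through processors
        finish[p] += t                   # job completes when its processor frees up
        total += finish[p]
    return total / len(times)
```

On the four-job example, `average_completion_time([15, 8, 3, 10])` returns 17.75, matching the optimal single-processor schedule; with the nine job times from the multiprocessor example and `processors=3`, it reproduces the average of 165/9.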
Job   Time
j1     3
j2     5
j3     6
j4    10
j5    11
j6    14
j7    15
j8    18
j9    20

Figure: Jobs and times
Figure: An optimal solution for the multiprocessor case
Figure: A second optimal solution for the multiprocessor case
Minimizing the Final Completion Time

We close this section by considering a very similar problem. Suppose we are only concerned with when the last job finishes. In our two examples above, these completion times are 40 and 38, respectively. The figure below shows that the minimum final completion time is 34, and this clearly cannot be improved, because every processor is always busy. Although this schedule does not have minimum mean completion time, it has merit in that the completion time of the entire sequence is earlier. If the same user owns all these jobs, then this is the preferable method of scheduling. Although these problems are very similar, this new problem turns out to be NP-complete; it is just another way of phrasing the knapsack or bin packing problems, which we will encounter later in this section. Thus, minimizing the final completion time is apparently much harder than minimizing the mean completion time.

Huffman Codes

In this section, we consider a second application of greedy algorithms, known as file compression. The normal ASCII character set consists of roughly 100 "printable" characters. In order to distinguish these characters, ⌈log 100⌉ = 7 bits are required. Seven bits allow the representation of 128 characters, so the ASCII character set adds some other "nonprintable" characters. An eighth bit is added as a parity check. The important point, however, is that if the size of the character set is C, then ⌈log C⌉ bits are needed in a standard encoding.

Suppose we have a file that contains only the characters a, e, i, s, t, plus blank spaces and newlines. Suppose further that the file has ten a's, fifteen e's, twelve i's, three s's, four t's, thirteen blanks, and one newline. As the table in the figure below shows, this file requires 174 bits to represent, since there are 58 characters and each character requires three bits.

In real life, files can be quite large. Many of the very large files are output of some program, and there is usually a big disparity between the most frequent and least frequent characters. For instance, many large data files have an inordinately large amount of
digitsblanksand newlinesbut few ' and ' we might be interested in reducing the file size in figure minimizing the final completion time |
Character   Code   Frequency   Total Bits
a           000        10          30
e           001        15          45
i           010        12          36
s           011         3           9
t           100         4          12
space       101        13          39
newline     110         1           3
Total                             174

Figure: Using a standard coding scheme

the case where we are transmitting it over a slow phone line. Also, since on virtually every machine disk space is precious, one might wonder if it would be possible to provide a better code and reduce the total number of bits required.

The answer is that this is possible, and a simple strategy achieves 25 percent savings on typical large files and as much as 50 to 60 percent savings on many large data files. The general strategy is to allow the code length to vary from character to character and to ensure that the frequently occurring characters have short codes. Notice that if all the characters occur with the same frequency, then there are not likely to be any savings.

The binary code that represents the alphabet can be represented by the binary tree shown in the figure below. The tree in the figure has data only at the leaves. The representation of each character can be found by starting at the root and recording the path, using a 0 to indicate the left branch and a 1 to indicate the right branch. For instance, s is reached by going left, then right, and finally right. This is encoded as 011. This data structure is sometimes referred to as a trie. If character c_i is at depth d_i and occurs f_i times, then the cost of the code is equal to \sum d_i f_i.

A better code than the one given in the table above can be obtained by noticing that the newline is an only child. By placing the newline symbol one level higher at its parent, we obtain the new tree shown in the next figure. This new tree has a cost of 173, but is still far from optimal.

Figure: Representation of the original code in a tree
Figure: A slightly better tree

Notice that the tree in the figure above is a full tree: all nodes either are leaves or have two children. An optimal code will always have this property, since otherwise, as we have already seen, nodes with only one child could move up a level.

If the characters are placed only at the leaves, any sequence of bits can always be decoded unambiguously. For instance, suppose 0100111100010110001000111 is the encoded string. 0 is not a character code, 01 is not a character code, but 010 represents i, so the first character is i. Then 011 follows, giving an s. Then 11 follows, which is a newline. The remainder of the code is a, space, t, i, e, and newline. Thus, it does not matter if the character codes are of different lengths, as long as no character code is a prefix of another character code. Such an encoding is known as a prefix code. Conversely, if a character is contained in a nonleaf node, it is no longer possible to guarantee that the decoding will be unambiguous.

Putting these facts together, we see that our basic problem is to find the full binary tree of minimum total cost (as defined above), where all characters are contained in the leaves. The tree in the figure below shows the optimal tree for our sample alphabet. As can be seen in the table that follows, this code uses only 146 bits.

Notice that there are many optimal codes. These can be obtained by swapping children in the encoding tree. The main unresolved question, then, is how the coding tree is constructed. The algorithm to do this was given by Huffman in 1952. Thus, this coding system is commonly referred to as a Huffman code.

Figure: Optimal prefix code
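The unambiguous decoding just walked through can be sketched directly: scan bits, accumulating them until they match a code word, which by the prefix property can only be one character. A minimal sketch (function name is ours), using the codes of the "slightly better" tree above:

```python
def decode(bits, codes):
    """Decode a bit string under a prefix code: accumulate bits until they
    match a code word, emit that character, and start over."""
    inverse = {code: ch for ch, code in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:        # prefix property: the first match is the character
            out.append(inverse[cur])
            cur = ""
    if cur:
        raise ValueError("dangling bits: not a code word")
    return "".join(out)
```

With the codes a=000, e=001, i=010, s=011, t=100, space=101, newline=11, decoding 0100111100010110001000111 yields i, s, newline, a, space, t, i, e, newline, as traced in the text.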
Character   Code    Frequency   Total Bits
a           001         10          30
e           01          15          30
i           10          12          24
s           00000        3          15
t           0001         4          16
space       11          13          26
newline     00001        1           5
Total                              146

Figure: Optimal prefix code

Huffman's Algorithm

Throughout this section we will assume that the number of characters is C. Huffman's algorithm can be described as follows: We maintain a forest of trees. The weight of a tree is equal to the sum of the frequencies of its leaves. C − 1 times, select the two trees, T1 and T2, of smallest weight, breaking ties arbitrarily, and form a new tree with subtrees T1 and T2. At the beginning of the algorithm, there are C single-node trees, one for each character. At the end of the algorithm there is one tree, and this is the optimal Huffman coding tree.

A worked example will make the operation of the algorithm clear. The first figure below shows the initial forest; the weight of each tree is shown in small type at the root. The two trees of lowest weight are merged together, creating the forest shown in the next figure. We will name the new root T1, so that future merges can be stated unambiguously. We have made s the left child arbitrarily; any tiebreaking procedure can be used. The total weight of the new tree is just the sum of the weights of the old trees, and can thus be easily computed. It is also a simple matter to create the new tree, since we merely need to get a new node, set the left and right pointers, and record the weight.

Now there are six trees, and we again select the two trees of smallest weight. These happen to be t and T1, which are then merged into a new tree with root T2 and weight 8.

Figure: Initial stage of Huffman's algorithm
Figure: Huffman's algorithm after the first merge
Figure: Huffman's algorithm after the second merge
Figure: Huffman's algorithm after the third merge

This is shown in the figure above. The third step merges T2 and a, creating T3, with weight 10 + 8 = 18. The figure shows the result of this operation.

After the third merge is completed, the two trees of lowest weight are the single-node trees representing i and the blank space. The figure shows how these trees are merged into the new tree with root T4. The fifth step is to merge the trees with roots e and T3, since these trees have the two smallest weights. The result of this step is shown in the next figure. Finally, the optimal tree, which was shown earlier, is obtained by merging the two remaining trees. The final figure shows this optimal tree, with root T6.

We will sketch the ideas involved in proving that Huffman's algorithm yields an optimal code; we will leave the details as an exercise. First, it is not hard to show by contradiction that the tree must be full, since we have already seen how a tree that is not full is improved.

Figure: Huffman's algorithm after the fourth merge
Figure: Huffman's algorithm after the fifth merge

Next, we must show that the two least frequent characters, α and β, must be the two deepest nodes (although other nodes may be as deep). Again, this is easy to show by contradiction, since if either α or β is not a deepest node, then there must be some γ that is (recall that the tree is full). If α is less frequent than γ, then we can improve the cost by swapping them in the tree. We can then argue that the characters in any two nodes at the same depth can be swapped without affecting optimality. This shows that an optimal tree can always be found that contains the two least frequent symbols as siblings; thus, the first step is not a mistake.

The proof can be completed by using an induction argument. As trees are merged, we consider the new character set to be the characters in the roots. Thus, in our example, after four merges, we can view the character set as consisting of e and the metacharacters T3 and T4. This is probably the trickiest part of the proof; you are urged to fill in all of the details.

The reason that this is a greedy algorithm is that at each stage we perform a merge without regard to global considerations. We merely select the two smallest trees. If we maintain the trees in a priority queue, ordered by weight, then the running time is O(C log C), since there will be one buildHeap, 2C − 2 deleteMins, and C − 2 inserts on a priority queue that never has more than C elements. A simple implementation of the priority queue, using a list, would give an O(C^2) algorithm. The choice of priority queue implementation depends on how large C is. In the typical case of an ASCII character set, C is small enough that the quadratic running time is acceptable. In such an application, virtually all the running time will be spent on the disk I/O required to read the input file and write out the compressed version.

Figure: Huffman's algorithm after the final merge

There are two details that must be considered. First, the encoding information must be transmitted at the start of the compressed file, since otherwise it will be impossible to decode. There are several ways of doing this; see the exercises. For small files, the cost of transmitting this table will override any possible savings in compression, and the result will probably be file expansion. Of course, this can be detected and the original left intact. For large files, the size of the table is not significant.

The second problem is that, as described, this is a two-pass algorithm. The first pass collects the frequency data, and the second pass does the encoding. This is obviously not a desirable property for a program dealing with large files. Some alternatives are described in the references.

Approximate Bin Packing

In this section, we will consider some algorithms to solve the bin packing problem. These algorithms will run quickly but will not necessarily produce optimal solutions. We will prove, however, that the solutions that are produced are not too far from optimal.

We are given N items of sizes s1, s2, ..., sN. All sizes satisfy 0 < si ≤ 1. The problem is to pack these items in the fewest number of bins, given that each bin has unit capacity. As an example, the figure below shows an optimal packing for an item list with sizes 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8.

There are two versions of the bin packing problem. The first version is online bin packing. In this version, each item must be placed in a bin before the next item can be processed. The second version is the offline bin packing problem. In an offline algorithm, we do not
need to do anything until all the input has been read. The distinction between online and offline algorithms was discussed in an earlier chapter.

Figure: Optimal packing for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8
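Returning to Huffman's algorithm for a moment: its C − 1 greedy merges can be sketched with a binary heap. This is a minimal sketch (the function name and the flat dictionary bookkeeping are ours, not the pointer-based tree the text describes); each heap entry carries a weight, a tiebreaking counter, and the list of symbols in that tree:

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix code by repeatedly merging the two lightest trees.
    freqs maps symbol -> frequency; returns a dict symbol -> bit string."""
    heap = [(w, i, [sym]) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)                    # the one buildHeap
    codes = {sym: "" for sym in freqs}
    counter = len(heap)                    # unique tiebreaker for merged trees
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)    # two smallest-weight trees
        w2, _, t2 = heapq.heappop(heap)
        for sym in t1:                     # t1 becomes the left subtree: prepend 0
            codes[sym] = "0" + codes[sym]
        for sym in t2:                     # t2 becomes the right subtree: prepend 1
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (w1 + w2, counter, t1 + t2))
        counter += 1
    return codes
```

Ties are broken by insertion order rather than arbitrarily, so the particular tree may differ from the worked example, but the total cost is the optimal 146 bits for the sample frequencies, and the result is always prefix-free.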
Online Algorithms

The first issue to consider is whether or not an online algorithm can actually always give an optimal answer, even if it is allowed unlimited computation. Remember that even though unlimited computation is allowed, an online algorithm must place an item before processing the next item and cannot change its decision.

To show that an online algorithm cannot always give an optimal solution, we will give it particularly difficult data to work on. Consider an input sequence I1 of M small items of weight 1/2 − ε followed by M large items of weight 1/2 + ε, 0 < ε < 0.01. It is clear that these items can be packed in M bins if we place one small item and one large item in each bin. Suppose there were an optimal online algorithm, A, that could perform this packing. Consider the operation of algorithm A on the sequence I2, consisting of only the M small items of weight 1/2 − ε. I2 can be packed in ⌈M/2⌉ bins. However, A will place each item in a separate bin, since A must yield the same results on I2 as it does for the first half of I1, and the first half of I1 is exactly the same input as I2. This means that A will use twice as many bins as is optimal for I2. What we have proved is that there is no optimal algorithm for online bin packing.

What the argument above shows is that an online algorithm never knows when the input might end, so any performance guarantees it provides must hold at every instant throughout the algorithm. If we follow the foregoing strategy, we can prove the following theorem.

Theorem. There are inputs that force any online bin packing algorithm to use at least 4/3 the optimal number of bins.

Proof. Suppose otherwise, and suppose for simplicity that M is even. Consider any online algorithm A running on the input sequence I1, above. Recall that this sequence consists of M small items followed by M large items. Let us consider what the algorithm A has done after processing the Mth item. Suppose A has already used b bins. At this point in the algorithm, the optimal number of bins is M/2, because we can place two elements in each bin. Thus we know that 2b/M < 4/3, by our assumption of a better-than-4/3 performance guarantee.

Now consider the performance of algorithm A after all items have been packed. All bins created after the bth bin must contain exactly one item, since all small items are placed in the first b bins, and two large items will not fit in a bin. Since the first b bins can have at most two items each, and the remaining bins have one item each, we see that packing all 2M items will require at least 2M − b bins. Since the 2M items can be optimally packed using M bins, our performance guarantee assures us that (2M − b)/M < 4/3.

The first inequality implies that b < 2M/3, and the second inequality implies that b > 2M/3, which is a contradiction. Thus, no online algorithm can guarantee that it will produce a packing with less than 4/3 the optimal number of bins.

There are three simple algorithms that guarantee that the number of bins used is no more than twice optimal. There are also quite a few more complicated algorithms with better guarantees.
Next Fit

Probably the simplest algorithm is next fit. When processing any item, we check to see whether it fits in the same bin as the last item. If it does, it is placed there; otherwise, a new bin is created. This algorithm is incredibly simple to implement and runs in linear time. The figure below shows the packing produced for the same input as the earlier optimal-packing figure.

Not only is next fit simple to program, its worst-case behavior is also easy to analyze.

Theorem. Let M be the optimal number of bins required to pack a list I of items. Then next fit never uses more than 2M bins. There exist sequences such that next fit uses 2M − 2 bins.

Proof. Consider any adjacent bins Bj and Bj+1. The sum of the sizes of all items in Bj and Bj+1 must be larger than 1, since otherwise all of these items would have been placed in Bj. If we apply this result to all pairs of adjacent bins, we see that at most half of the space is wasted. Thus next fit uses at most twice the optimal number of bins.

To see that this ratio is tight, suppose that the N items have size si = 0.5 if i is odd and si = 2/N if i is even. Assume N is divisible by 4. The optimal packing, shown in the next figure, consists of N/4 bins, each containing two elements of size 0.5, and one bin containing the N/2 elements of size 2/N, for a total of (N/4) + 1 bins. The figure after that shows that next fit uses N/2 bins. Thus, next fit can be forced to use almost twice as many bins as optimal.

First Fit

Although next fit has a reasonable performance guarantee, it performs poorly in practice, because it creates new bins when it does not need to. In the sample run, it could have placed the item of size 0.3 in either B1 or B2, rather than create a new bin. The first fit strategy is to scan the bins in order and place the new item in the first bin that is large enough to hold it. Thus, a new bin is created only when the results of previous placements have left no other alternative. The figure below shows the packing that results from first fit on our standard input.

Figure: Next fit for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8
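The next fit rule can be sketched in a few lines. A minimal sketch (the function name is ours), assuming unit-capacity bins and a small tolerance for floating-point sums:

```python
def next_fit(items, capacity=1.0):
    """Next fit: try only the current (last) bin; open a new one when the
    item doesn't fit. Returns the packing as a list of bins."""
    bins = []
    room = 0.0                          # remaining room in the current bin
    for s in items:
        if not bins or s > room + 1e-9: # doesn't fit: open a new bin
            bins.append([])
            room = capacity
        bins[-1].append(s)
        room -= s
    return bins
```

On the sample list 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8, this produces the five bins [0.2, 0.5], [0.4], [0.7, 0.1], [0.3], [0.8] shown in the figure.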
Figure: Optimal packing for 0.5, 2/N, 0.5, 2/N, 0.5, 2/N, ...
Figure: Next fit packing for 0.5, 2/N, 0.5, 2/N, 0.5, 2/N, ...
Figure: First fit for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8
A simple method of implementing first fit would process each item by scanning down the list of bins sequentially. This would take O(N^2). It is possible to implement first fit to run in O(N log N); we leave this as an exercise.

A moment's thought will convince you that at any point, at most one bin can be more than half empty, since if a second bin were also half empty, its contents would fit into the first bin. Thus, we can immediately conclude that first fit guarantees a solution with at most twice the optimal number of bins.

On the other hand, the bad case that we used in the proof of next fit's performance bound does not apply for first fit. Thus, one might wonder if a better bound can be proven. The answer is yes, but the proof is complicated.

Theorem. Let M be the optimal number of bins required to pack a list I of items. Then first fit never uses more than ⌈(17/10)M⌉ bins. There exist sequences such that first fit uses (17/10)(M − 1) bins.

Proof. See the references at the end of the chapter.

An example where first fit does almost as poorly as the previous theorem would indicate is shown in the figure below. The input consists of 6M items of size 1/7 + ε, followed by 6M items of size 1/3 + ε, followed by 6M items of size 1/2 + ε. One simple packing places one item of each size in a bin and requires 6M bins; first fit requires 10M bins.

When first fit is run on a large number of items with sizes uniformly distributed between 0 and 1, empirical results show that first fit uses roughly 2 percent more bins than optimal. In many cases, this is quite acceptable.

Best Fit

The third online strategy we will examine is best fit. Instead of placing a new item in the first spot that is found, it is placed in the tightest spot among all bins. A typical packing is shown in the figure below.

Figure: A case where first fit uses 10M bins instead of 6M
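First fit and best fit differ only in which candidate bin receives the item. Minimal sketches of both (function names are ours), using the simple O(N^2) scan and assuming unit-capacity bins with a small float tolerance:

```python
def first_fit(items):
    """First fit: place each item in the lowest-numbered bin with room."""
    bins = []
    for s in items:
        for b in bins:
            if sum(b) + s <= 1.0 + 1e-9:   # first bin large enough
                b.append(s)
                break
        else:
            bins.append([s])               # no bin fits: open a new one
    return bins

def best_fit(items):
    """Best fit: place each item in the bin that leaves the least room."""
    bins = []
    for s in items:
        candidates = [b for b in bins if sum(b) + s <= 1.0 + 1e-9]
        if candidates:
            # tightest spot: smallest remaining room after placing s
            min(candidates, key=lambda b: 1.0 - sum(b) - s).append(s)
        else:
            bins.append([s])
    return bins
```

On the sample list both use four bins, one fewer than next fit. Sorting first gives the offline decreasing variants: `first_fit(sorted(items, reverse=True))` is first fit decreasing, which packs the sample list into the optimal three bins.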
Figure: Best fit for 0.2, 0.5, 0.4, 0.7, 0.1, 0.3, 0.8

Notice that the item of size 0.3 is placed in B3, where it fits perfectly, instead of B2. One might expect that, since we are now making a more educated choice of bins, the performance guarantee would improve. This is not the case, because the generic bad cases are the same. Best fit is never more than roughly 1.7 times as bad as optimal, and there are inputs for which it (nearly) achieves this bound. Nevertheless, best fit is also simple to code, especially if an O(N log N) algorithm is required, and it does perform better for random inputs.

Offline Algorithms

If we are allowed to view the entire item list before producing an answer, then we should expect to do better. Indeed, since we can eventually find the optimal packing by exhaustive search, we already have a theoretical improvement over the online case. The major problem with all the online algorithms is that it is hard to pack the large items, especially when they occur late in the input. The natural way around this is to sort the items, placing the largest items first. We can then apply first fit or best fit, yielding the algorithms first fit decreasing and best fit decreasing, respectively. The figure below

Figure: First fit for 0.8, 0.7, 0.5, 0.4, 0.3, 0.2, 0.1
shows that, in our case, this yields an optimal solution (although, of course, this is not true in general).

In this section, we will deal with first fit decreasing. The results for best fit decreasing are almost identical. Since it is possible that the item sizes are not distinct, some authors prefer to call the algorithm first fit nonincreasing. We will stay with the original name. We will also assume, without loss of generality, that input sizes are already sorted.

The first remark we can make is that the bad case, which showed first fit using 10M bins instead of 6M bins, does not apply when the items are sorted. We will show that if an optimal packing uses M bins, then first fit decreasing never uses more than (4M + 1)/3 bins.

The result depends on two observations. First, all the items with weight larger than 1/3 will be placed in the first M bins. This implies that all the items in the extra bins have weight at most 1/3. The second observation is that the number of items in the extra bins can be at most M − 1. Combining these two results, we find that at most ⌈(M − 1)/3⌉ extra bins can be required. We now prove these two observations.

Lemma. Let the N items have (sorted in decreasing order) input sizes s1, s2, ..., sN, respectively, and suppose that the optimal packing is M bins. Then all items that first fit decreasing places in extra bins have size at most 1/3.

Proof. Suppose the ith item is the first placed in bin M + 1. We need to show that si ≤ 1/3. We will prove this by contradiction. Assume si > 1/3. It follows that s1, s2, ..., s_{i−1} > 1/3, since the sizes are arranged in sorted order. From this it follows that all bins B1, B2, ..., BM have at most two items each.

Consider the state of the system after the (i − 1)st item is placed in a bin, but before the ith item is placed. We now want to show that (under the assumption that si > 1/3) the first M bins are arranged as follows: first, there are some bins with exactly one element, and then the remaining bins have two elements.

Suppose there were two bins, Bx and By, such that 1 ≤ x < y ≤ M, Bx has two items, and By has one item. Let x1 and x2 be the two items in Bx, and let y1 be the item in By. x1 ≥ y1, since x1 was placed in the earlier bin. x2 ≥ si, by similar reasoning. Thus, x1 + x2 ≥ y1 + si. This implies that si could be placed in By. By our assumption this is not possible. Thus, if si > 1/3, then, at the time that we try to process si, the first M bins are arranged such that the first j have one element and the next M − j have two elements.

To prove the lemma we will show that there is no way to place all the items in M bins, which contradicts the premise of the lemma. Clearly, no two items s1, s2, ..., sj can be placed in one bin, by any algorithm, since if they could, first fit would have done so too. We also know that first fit has not placed any of the items of size s_{j+1}, s_{j+2}, ..., si into the first j bins, so none of them fit. Thus, in any packing, specifically the optimal packing, there must be j bins that do not contain these items. It follows that the items of size s_{j+1}, s_{j+2}, ..., s_{i−1} must be contained in some set of M − j bins, and from previous considerations, the total number of such items is 2(M − j).

The proof is completed by noting that if si > 1/3, there is no way for si to be placed in one of these M bins. Clearly, it cannot go in one of the first j bins, since if it could, then first fit would have done so too. To place it in one of the remaining M − j bins requires distributing 2(M − j) + 1 items into the M − j bins. Thus, some bin would have to have three items, each of which is larger than 1/3, a clear impossibility. This contradicts the fact that all the sizes can be placed in M bins, so the original assumption must be incorrect. Thus, si ≤ 1/3.

Lemma. The number of objects placed in extra bins is at most M − 1.

Proof. Assume that there are at least M objects placed in extra bins. We know that \sum_{i=1}^{N} s_i ≤ M, since all the objects fit in M bins. Suppose that Bj is filled with Wj total weight, for 1 ≤ j ≤ M. Suppose the first M extra objects have sizes x1, x2, ..., xM. Then, since the items in the first M bins plus the first M extra items are a subset of all the items, it follows that

\sum_{i=1}^{N} s_i \ge \sum_{j=1}^{M} W_j + \sum_{j=1}^{M} x_j \ge \sum_{j=1}^{M} (W_j + x_j)

Now Wj + xj > 1, since otherwise the item corresponding to xj would have been placed in Bj. Thus \sum_{i=1}^{N} s_i > M. But this is impossible if all the items can be packed in M bins. Thus, there can be at most M − 1 extra items.

Theorem. Let M be the optimal number of bins required to pack a list I of items. Then first fit decreasing never uses more than (4M + 1)/3 bins.

Proof. There are at most M − 1 extra items, of size at most 1/3. Thus, there can be at most ⌈(M − 1)/3⌉ extra bins. The total number of bins used by first fit decreasing is thus at most M + ⌈(M − 1)/3⌉ ≤ (4M + 1)/3.

It is possible to prove a much tighter bound for both first fit decreasing and next fit decreasing. Recall that first fit packed these elements into bins and placed two items in each bin; thus, there are ( j items
23,759 | optimal first fit decreasing / / empty empty / / / / / / / / / / / / / / / / / / - empty / figure example where first fit decreasing uses binsbut only bins are required theorem let be the optimal number of bins required to pack list of items then first fit decreasing never uses more than bins there exist sequences such that first fit decreasing uses bins proof the upper bound requires very complicated analysis the lower bound is exhibited by sequence consisting of elements of size followed by elements of size followed by elements of size followed by elements of size figure shows that the optimal packing requires binsbut first fit decreasing uses bins set and the result follows in practicefirst fit decreasing performs extremely well if sizes are chosen uniformly over the unit intervalthen the expected number of extra bins is mbin packing is fine example of how simple greedy heuristics can give good results divide and conquer another common technique used to design algorithms is divide and conquer divide-andconquer algorithms consist of two partsdividesmaller problems are solved recursively (exceptof coursebase casesconquerthe solution to the original problem is then formed from the solutions to the subproblems traditionallyroutines in which the text contains at least two recursive calls are called divide-and-conquer algorithmswhile routines whose text contains only one recursive call |
23,760 | algorithm design techniques are not we generally insist that the subproblems be disjoint (that isessentially nonoverlappinglet us review some of the recursive algorithms that have been covered in this text we have already seen several divide-and-conquer algorithms in section we saw an ( log nsolution to the maximum subsequence sum problem in we saw linear-time tree traversal strategies in we saw the classic examples of divide and conquernamely mergesort and quicksortwhich have ( log nworst-case and averagecase boundsrespectively we have also seen several examples of recursive algorithms that probably do not classify as divide-and-conquerbut merely reduce to single simpler case in section we saw simple routine to print number in we used recursion to perform efficient exponentiation in we examined simple search routines for binary search trees in section we saw simple recursion used to merge leftist heaps in section an algorithm was given for selection that takes linear average time the disjoint set find operation was written recursively in showed routines to recover the shortest path in dijkstra' algorithm and other procedures to perform depth-first search in graphs none of these algorithms are really divide-and-conquer algorithmsbecause only one recursive call is performed we have also seenin section very bad recursive routine to compute the fibonacci numbers this could be called divide-and-conquer algorithmbut it is terribly inefficientbecause the problem really is not divided at all in this sectionwe will see more examples of the divide-and-conquer paradigm our first application is problem in computational geometry given points in planewe will show that the closest pair of points can be found in ( log ntime the exercises describe some other problems in computational geometry which can be solved by divide and conquer the remainder of the section shows some extremely interestingbut mostly theoreticalresults we provide an algorithm that solves the selection 
problem in (nworst-case time we also show that -bit numbers can be multiplied in ( operations and that two matrices can be multiplied in ( operations unfortunatelyeven though these algorithms have better worst-case bounds than the conventional algorithmsnone are practical except for very large inputs running time of divide-and-conquer algorithms all the efficient divide-and-conquer algorithms we will see divide the problems into subproblemseach of which is some fraction of the original problemand then perform some additional work to compute the final answer as an examplewe have seen that mergesort operates on two problemseach of which is half the size of the originaland then uses (nadditional work this yields the running-time equation (with appropriate initial conditionst( ( / (nwe saw in that the solution to this equation is ( log nthe following theorem can be used to determine the running time of most divide-and-conquer algorithms |
theorem. The solution to the equation T(N) = aT(N/b) + Θ(N^k), where a ≥ 1 and b > 1, is

    T(N) = O(N^(log_b a))   if a > b^k
    T(N) = O(N^k log N)     if a = b^k
    T(N) = O(N^k)           if a < b^k

proof. Following the analysis of mergesort, we will assume that N is a power of b; thus, let N = b^m. Then N/b = b^(m-1) and N^k = (b^m)^k = b^(mk) = b^(km) = (b^k)^m. Let us assume T(1) = 1, and ignore the constant factor in Θ(N^k). Then we have

    T(b^m) = aT(b^(m-1)) + (b^k)^m

If we divide through by a^m, we obtain the equation

    T(b^m)/a^m = T(b^(m-1))/a^(m-1) + (b^k/a)^m

We can apply this equation for other values of m, obtaining

    T(b^(m-1))/a^(m-1) = T(b^(m-2))/a^(m-2) + (b^k/a)^(m-1)
    T(b^(m-2))/a^(m-2) = T(b^(m-3))/a^(m-3) + (b^k/a)^(m-2)
    ...
    T(b^1)/a^1 = T(b^0)/a^0 + (b^k/a)^1

We use our standard trick of adding up the telescoping equations. Virtually all the terms on the left cancel the leading terms on the right, yielding

    T(b^m)/a^m = 1 + Σ_{i=1}^{m} (b^k/a)^i = Σ_{i=0}^{m} (b^k/a)^i

Thus

    T(N) = T(b^m) = a^m Σ_{i=0}^{m} (b^k/a)^i
If a > b^k, then the sum is a geometric series with ratio smaller than 1. Since the sum of an infinite series would converge to a constant, this finite sum is also bounded by a constant, and thus

    T(N) = O(a^m) = O(a^(log_b N)) = O(N^(log_b a))

If a = b^k, then each term in the sum is 1. Since the sum contains 1 + log_b N terms, and a = b^k implies log_b a = k,

    T(N) = O(a^m log_b N) = O(N^(log_b a) log_b N) = O(N^k log_b N) = O(N^k log N)

Finally, if a < b^k, then the terms in the geometric series are larger than 1, and the formula for a geometric series with ratio larger than 1 applies. We obtain

    T(N) = a^m ((b^k/a)^(m+1) - 1)/((b^k/a) - 1) = O(a^m (b^k/a)^m) = O((b^k)^m) = O(N^k)

proving the last case of the theorem. As an example, mergesort has a = b = 2 and k = 1; the second case applies, giving the answer O(N log N). If we solve three problems, each of which is half the original size, and combine the solutions with O(N) additional work, then a = 3, b = 2, and k = 1; case 1 applies here, giving a bound of O(N^(log_2 3)) = O(N^1.59). An algorithm that solved three half-sized problems, but required O(N^2) work to merge the solution, would have an O(N^2) running time, since the third case would apply.

There are two important cases that are not covered by the theorem above. We state two more theorems, leaving the proofs as exercises. The first generalizes the previous theorem.

theorem. The solution to the equation T(N) = aT(N/b) + Θ(N^k log^p N), where a ≥ 1, b > 1, and p ≥ 0, is

    T(N) = O(N^(log_b a))       if a > b^k
    T(N) = O(N^k log^(p+1) N)   if a = b^k
    T(N) = O(N^k log^p N)       if a < b^k

theorem. If Σ_{i=1}^{k} α_i < 1, then the solution to the equation T(N) = Σ_{i=1}^{k} T(α_i N) + O(N) is T(N) = O(N).

closest-points problem

The input to our first problem is a list P of N points in a plane. If p1 = (x1, y1) and p2 = (x2, y2), then the euclidean distance between p1 and p2 is [(x1 - x2)^2 + (y1 - y2)^2]^(1/2).
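Before moving on, the three cases of the divide-and-conquer running-time theorem above can be sanity-checked numerically with a tiny evaluator (a sketch; the name solveRecurrence is mine, not the text's):

```cpp
#include <cassert>

// Evaluate T(N) = a*T(N/b) + N^k exactly, for N a power of b, with T(1) = 1.
// (Hypothetical helper for experimenting with the theorem's three cases.)
long long solveRecurrence( long long a, long long b, long long k, long long n )
{
    if( n == 1 )
        return 1;

    long long nk = 1;                  // nk = n^k
    for( long long i = 0; i < k; ++i )
        nk *= n;

    return a * solveRecurrence( a, b, k, n / b ) + nk;
}
```

For a = b = 2, k = 1 (the mergesort recurrence), the exact solution with T(1) = 1 is N log_2 N + N, which the evaluator reproduces; for a = 4, b = 2, k = 1 (case a > b^k), successive doublings of N approach a ratio of 4, the Θ(N^2) growth the theorem predicts.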
23,763 | we are required to find the closest pair of points it is possible that two points have the same positionin that casethat pair is the closestwith distance zero if there are pointsthen there are ( )/ pairs of distances we can check all of theseobtaining very short programbut at the expense of an ( algorithm since this approach is just an exhaustive searchwe should expect to do better let us assume that the points have been sorted by coordinate at worstthis adds ( log nto the final time bound since we will show an ( log nbound for the entire algorithmthis sort is essentially freefrom complexity standpoint figure shows small sample point setp since the points are sorted by coordinatewe can draw an imaginary vertical line that partitions the point set into two halvespl and pr this is certainly simple to do now we have almost exactly the same situation as we saw in the maximum subsequence sum problem in section either the closest points are both in pl or they are both in pr or one is in pl and the other is in pr let us call these distances dl dr and dc figure shows the partition of the point set and these three distances we can compute dl and dr recursively the problemthenis to compute dc since we would like an ( log nsolutionwe must be able to compute dc with only (nadditional work we have already seen that if procedure consists of two half-sized recursive calls and (nadditional workthen the total time will be ( log nlet min(dl dr the first observation is that we only need to compute dc if dc improves on if dc is such distancethen the two points that define dc must be within of the dividing linewe will refer to this area as strip as shown in figure this observation limits the number of points that need to be considered (in our cased dr there are two strategies that can be tried to compute dc for large point sets that are uniformly distributedthe number of pointsthat are expected to be in the strip is very small indeedit is easy to argue that only onpoints are 
in the strip on average thuswe could perform brute-force calculation on these points in (ntime the pseudocode figure small point set |
23,764 | algorithm design techniques dc dl dr figure partitioned into pl and pr shortest distances are shown in figure implements this strategyassuming the +convention that the points are indexed starting at in the worst caseall the points could be in the stripso this strategy does not always work in linear time we can improve this algorithm with the following observationthe coordinates of the two points that define dc can differ by at most otherwisedc suppose that the points in the strip are sorted by their coordinates thereforeif pi and pj ' dl dr figure two-lane stripcontaining all points considered for dc strip |
    // Points are all in the strip
    for( i = 0; i < numPointsInStrip; i++ )
        for( j = i + 1; j < numPointsInStrip; j++ )
            if( dist( p[ i ], p[ j ] ) < delta )
                delta = dist( p[ i ], p[ j ] );

figure: brute-force calculation of min(delta, dC)

    // Points are all in the strip and sorted by y-coordinate
    for( i = 0; i < numPointsInStrip; i++ )
        for( j = i + 1; j < numPointsInStrip; j++ )
            if( p[ i ] and p[ j ]'s y-coordinates differ by more than delta )
                break;              // Go to next p[ i ]
            else
                if( dist( p[ i ], p[ j ] ) < delta )
                    delta = dist( p[ i ], p[ j ] );

figure: refined calculation of min(delta, dC)

y-coordinates differ by more than delta, then we can proceed to p(i+1). This simple modification is implemented in the second figure above. This extra test has a significant effect on the running time, because for each p(i) only a few points p(j) are examined before p(i)'s and p(j)'s y-coordinates differ by more than delta and force an exit from the inner for loop. The figure below shows, for instance, a point for which only two other points lie in the strip within vertical distance delta.

[figure: the strip of width 2*delta around the dividing line, showing the only points considered in the second for loop]
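Combining the recursive partition with the strip scan shown above, here is a self-contained sketch of the whole routine (names such as closestPair are mine; this version re-sorts the strip at every level, giving O(N log^2 N) — the text goes on to show how keeping a second list sorted by y-coordinate removes the extra log factor):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static double ptDist( const Pt & a, const Pt & b )
{
    return std::hypot( a.x - b.x, a.y - b.y );
}

// Recursive step; p[lo..hi) is sorted by x-coordinate.
static double closestRec( const std::vector<Pt> & p, int lo, int hi )
{
    if( hi - lo <= 3 )                       // Base case: brute force
    {
        double d = 1e300;
        for( int i = lo; i < hi; ++i )
            for( int j = i + 1; j < hi; ++j )
                d = std::min( d, ptDist( p[ i ], p[ j ] ) );
        return d;
    }

    int mid = ( lo + hi ) / 2;
    double midX = p[ mid ].x;
    double d = std::min( closestRec( p, lo, mid ), closestRec( p, mid, hi ) );

    // Collect the strip of points within d of the dividing line...
    std::vector<Pt> strip;
    for( int i = lo; i < hi; ++i )
        if( std::fabs( p[ i ].x - midX ) < d )
            strip.push_back( p[ i ] );

    // ...and sort it by y (re-sorting here costs the extra log factor).
    std::sort( strip.begin( ), strip.end( ),
               []( const Pt & a, const Pt & b ){ return a.y < b.y; } );

    for( std::size_t i = 0; i < strip.size( ); ++i )
        for( std::size_t j = i + 1; j < strip.size( ); ++j )
        {
            if( strip[ j ].y - strip[ i ].y >= d )
                break;                       // Go to next strip[ i ]
            d = std::min( d, ptDist( strip[ i ], strip[ j ] ) );
        }
    return d;
}

double closestPair( std::vector<Pt> pts )
{
    std::sort( pts.begin( ), pts.end( ),
               []( const Pt & a, const Pt & b ){ return a.x < b.x; } );
    return closestRec( pts, 0, static_cast<int>( pts.size( ) ) );
}
```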
23,766 | algorithm design techniques in the worst casefor any point pi at most points pj are considered this is because these points must lie either in the -by- square in the left half of the strip or in the -by- square in the right half of the strip on the other handall the points in each -by- square are separated by at least in the worst caseeach square contains four pointsone at each corner one of these points is pi leaving at most seven points to be considered this worstcase situation is shown in figure notice that even though pl and pr have the same coordinatesthey could be different points for the actual analysisit is only important that the number of points in the -by- rectangle be ( )and this much is certainly clear because at most seven points are considered for each pi the time to compute dc that is better than is (nthuswe appear to have an ( log nsolution to the closestpoints problembased on the two half-sized recursive calls plus the linear extra work to combine the two results howeverwe do not quite have an ( log nsolution yet the problem is that we have assumed that list of points sorted by coordinate is available if we perform this sort for each recursive callthen we have ( log nextra workthis gives an ( log nalgorithm this is not all that badespecially when compared to the brute-force ( howeverit is not hard to reduce the work for each recursive call to ( )thus ensuring an ( log nalgorithm we will maintain two lists one is the point list sorted by coordinateand the other is the point list sorted by coordinate we will call these lists and qrespectively these can be obtained by preprocessing sorting step at cost ( log nand thus does not affect the time bound pl and ql are the lists passed to the left-half recursive calland pr and qr are the lists passed to the right-half recursive call we have already seen that is easily split in the middle once the dividing line is knownwe step through sequentiallyplacing each element in ql or qr as appropriate it is 
easy to see that ql and qr will be automatically sorted by coordinate when the recursive calls returnwe scan through the list and discard all the points whose coordinates are not within the strip then pl pl left half pr right half pl figure at most eight points fit in the rectanglethere are two coordinates shared by two points each |
23,767 | contains only points in the stripand these points are guaranteed to be sorted by their coordinates this strategy ensures that the entire algorithm is ( log )because only (nextra work is performed the selection problem the selection problem requires us to find the kth smallest element in collection of elements of particular interest is the special case of finding the median this occurs when / in and we have seen several solutions to the selection problem the solution in uses variation of quicksort and runs in (naverage time indeedit is described in hoare' original paper on quicksort although this algorithm runs in linear average timeit has worst case of ( selection can easily be solved in ( log nworst-case time by sorting the elementsbut for long time it was unknown whether or not selection could be accomplished in (nworst-case time the quickselect algorithm outlined in section is quite efficient in practiceso this was mostly question of theoretical interest recall that the basic algorithm is simple recursive strategy assuming that is larger than the cutoff point where elements are simply sortedan element vknown as the pivotis chosen the remaining elements are placed into two setss and contains elements that are guaranteed to be no larger than vand contains elements that are no smaller than finallyif <| |then the kth smallest element in can be found by recursively computing the kth smallest element in if | then the pivot is the kth smallest element otherwisethe kth smallest element in is the ( | )st smallest element in the main difference between this algorithm and quicksort is that there is only one subproblem to solve instead of two in order to obtain linear algorithmwe must ensure that the subproblem is only fraction of the original and not merely only few elements smaller than the original of coursewe can always find such an element if we are willing to spend some time to do so the difficult problem is that we cannot spend too much time finding the 
pivot for quicksortwe saw that good choice for pivot was to pick three elements and use their median this gives some expectation that the pivot is not too bad but does not provide guarantee we could choose elements at randomsort them in constant timeuse the th largest as pivotand get pivot that is even more likely to be good howeverif these elements were the largestthen the pivot would still be poor extending thiswe could use up to ( /log nelementssort them using heapsort in (ntotal timeand be almost certainfrom statistical point of viewof obtaining good pivot in the worst casehoweverthis does not work because we might select the ( /log nlargest elementsand then the pivot would be the [ ( /log )]th largest elementwhich is not constant fraction of the basic idea is still useful indeedwe will see that we can use it to improve the expected number of comparisons that quickselect makes to get good worst casehoweverthe key idea is to use one more level of indirection instead of finding the median from sample of random elementswe will find the median from sample of medians |
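A compact runnable sketch of the strategy just previewed — quickselect with a median-of-medians pivot (the details are developed in the text that follows; selectKth, the cutoff of 10, and ignoring leftover elements when forming groups are my own choices, and this version copies subarrays, so it is a teaching sketch rather than a tuned implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Return the kth smallest element (k is 1-based) using a
// median-of-median-of-five pivot.
int selectKth( std::vector<int> v, std::size_t k )
{
    if( v.size( ) <= 10 )                    // Cutoff: just sort
    {
        std::sort( v.begin( ), v.end( ) );
        return v[ k - 1 ];
    }

    // Median of each full group of five (leftovers ignored for the pivot).
    std::vector<int> medians;
    for( std::size_t i = 0; i + 5 <= v.size( ); i += 5 )
    {
        std::vector<int> g( v.begin( ) + i, v.begin( ) + i + 5 );
        std::sort( g.begin( ), g.end( ) );
        medians.push_back( g[ 2 ] );
    }

    // Pivot = median of the medians, found recursively.
    int pivot = selectKth( medians, ( medians.size( ) + 1 ) / 2 );

    std::vector<int> smaller, equal, larger;
    for( int x : v )
        if( x < pivot )      smaller.push_back( x );
        else if( x > pivot ) larger.push_back( x );
        else                 equal.push_back( x );

    if( k <= smaller.size( ) )
        return selectKth( smaller, k );
    if( k <= smaller.size( ) + equal.size( ) )
        return pivot;
    return selectKth( larger, k - smaller.size( ) - equal.size( ) );
}
```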
The basic pivot selection algorithm is as follows:

1. Arrange the N elements into N/5 groups of five elements, ignoring the (at most four) extra elements.
2. Find the median of each group. This gives a list M of N/5 medians.
3. Find the median of M. Return this as the pivot, v.

We will use the term median-of-median-of-five partitioning to describe the quickselect algorithm that uses the pivot selection rule given above. We will now show that median-of-median-of-five partitioning guarantees that each recursive subproblem is at most roughly 70 percent as large as the original. We will also show that the pivot can be computed quickly enough to guarantee an O(N) running time for the entire selection algorithm.

Let us assume for the moment that N is divisible by 5, so there are no extra elements. Suppose also that N/5 is odd, so that the set M contains an odd number of elements. This provides some symmetry, as we shall see. We are thus assuming, for convenience, that N is of the form 10k + 5. We will also assume that all the elements are distinct. The actual algorithm must make sure to handle the case where this is not true. The figure below shows how the pivot might be chosen when N = 45.

[figure: sorted groups of five elements, with the medians marked and v, the median of the medians, highlighted — how the pivot is chosen]

In the figure, v represents the element which is selected by the algorithm as pivot. Since v is the median of nine elements, and we are assuming that all elements are distinct, there must be four medians that are larger than v and four that are smaller. We denote these by L and S, respectively. Consider a group of five elements with a large median (type L). The median of the group is smaller than two elements in the group and larger than two
23,769 | elements in the group we will let represent the huge elements these are elements that are known to be larger than large median similarlyt represents the tiny elementswhich are smaller than small median there are elements of type htwo are in each of the groups with an type medianand two elements are in the same group as similarlythere are elements of type elements of type or are guaranteed to be larger than vand elements of type or are guaranteed to be smaller than there are thus guaranteed to be large and small elements in our problem thereforea recursive call could be on at most elements let us extend this analysis to general of the form in this casethere are elements of type and elements of type there are elements of type hand also elements of type thusthere are elements that are guaranteed to be larger than and elements that are guaranteed to be smaller thusin this casethe recursive call can contain at most elements if is not of the form similar arguments can be made without affecting the basic result it remains to bound the running time to obtain the pivot element there are two basic steps we can find the median of five elements in constant time for instanceit is not hard to sort five elements in eight comparisons we must do this / timesso this step takes (ntime we must then compute the median of group of / elements the obvious way to do this is to sort the group and return the element in the middle but this takes on/ log / ( log ntimeso this does not work the solution is to call the selection algorithm recursively on the / elements this completes the description of the basic algorithm there are still some details that need to be filled in if an actual implementation is desired for instanceduplicates must be handled correctlyand the algorithm needs cutoff large enough to ensure that the recursive calls make progress there is quite large amount of overhead involvedand this algorithm is not practical at allso we will not describe any more of the details 
that need to be considered even sofrom theoretical standpointthe algorithm is major breakthroughbecauseas the following theorem showsthe running time is linear in the worst case theorem the running time of quickselect using median-of-median-of-five partitioning is (nproof the algorithm consists of two recursive calls of size and nplus linear extra work by theorem the running time is linear reducing the average number of comparisons divide and conquer can also be used to reduce the expected number of comparisons required by the selection algorithm let us look at concrete example suppose we have groupsof , numbers and are looking for the th smallest numberwhich we will call we choose subsetsof consisting of numbers we would expect that the value of is similar in size to the th smallest number in smore specificallythe fifth smallest number in sis almost certainly less than xand the th smallest number in sis almost certainly greater than |
23,770 | algorithm design techniques more generallya samplesof elements is chosen from the elements let be some numberwhich we will choose later so as to minimize the average number of comparisons used by the procedure we find the ( ks/ )th and ( ks/ )th smallest elements in salmost certainlythe kth smallest element in will fall between and so we are left with selection problem on elements with low probabilitythe kth smallest element does not fall in this rangeand we have considerable work to do howeverwith good choice of and dwe can ensureby the laws of probabilitythat the second case does not adversely affect the total work if an analysis is performedwe find that if / log / and / log / nthen the expected number of comparisons is ( / log / )which is optimal except for the low-order term (if / we can consider the symmetric problem of finding the ( )th largest element most of the analysis is easy to do the last term represents the cost of performing the two selections to determine and the average cost of the partitioningassuming reasonably clever strategyis equal to plus the expected rank of in swhich is (nd/sif the kth element winds up in sthe cost of finishing the algorithm is equal to the cost of selection on snamelyo(sif the kth smallest element doesn' wind up in sthe cost is (nhowevers and have been chosen to guarantee that this happens with very low probability ( / )so the expected cost of this possibility is ( )which is term that goes to zero as gets large an exact calculation is left as exercise this analysis shows that finding the median requires about comparisons on average of coursethis algorithm requires some floating-point arithmetic to compute swhich can slow down the algorithm on some machines even soexperiments have shown that if correctly implementedthis algorithm compares favorably with the quickselect implementation in theoretical improvements for arithmetic problems in this section we describe divide-and-conquer algorithm that multiplies two 
N-digit numbers, X and Y. Our previous model of computation assumed that multiplication was done in constant time, because the numbers were small. For large numbers, this assumption is no longer valid. If we measure multiplication in terms of the size of the numbers being multiplied, then the natural multiplication algorithm takes quadratic time. The divide-and-conquer algorithm runs in subquadratic time. We also present the classic divide-and-conquer algorithm that multiplies two N-by-N matrices in subcubic time.

multiplying integers

Suppose we want to multiply two N-digit numbers, X and Y. If exactly one of X and Y is negative, then the answer is negative; otherwise it is positive. Thus, we can perform this check and then assume that X, Y ≥ 0. The algorithm that almost everyone uses when multiplying by hand requires Θ(N^2) operations, because each digit in X is multiplied by each digit in Y.

If X = 61,438,521 and Y = 94,736,407, then XY = 5,820,464,730,934,047. Let us break X and Y into two halves, consisting of the most significant and least significant digits,
respectively. Then XL = 6,143, XR = 8,521, YL = 9,473, and YR = 6,407. We also have X = XL 10^4 + XR and Y = YL 10^4 + YR. It follows that

    XY = XL YL 10^8 + (XL YR + XR YL) 10^4 + XR YR

Notice that this equation consists of four multiplications, XL YL, XL YR, XR YL, and XR YR, which are each half the size of the original problem (N/2 digits). The multiplications by 10^8 and 10^4 amount to the placing of zeros. This and the subsequent additions add only O(N) additional work. If we perform these four multiplications recursively using this algorithm, stopping at an appropriate base case, then we obtain the recurrence

    T(N) = 4T(N/2) + O(N)

From the theorem above, we see that T(N) = O(N^2), so, unfortunately, we have not improved the algorithm. To achieve a subquadratic algorithm, we must use less than four recursive calls. The key observation is that

    XL YR + XR YL = (XL - XR)(YR - YL) + XL YL + XR YR

Thus, instead of using two multiplications to compute the coefficient of 10^4, we can use one multiplication, plus the result of two multiplications that have already been performed. The table below shows how only three recursive subproblems need to be solved.

    function                                   value                    computational complexity
    XL, XR, YL, YR                             6,143; 8,521; 9,473; 6,407   given
    D1 = XL - XR                               -2,378                   O(N)
    D2 = YR - YL                               -3,066                   O(N)
    XL YL                                      58,192,639               T(N/2)
    XR YR                                      54,594,047               T(N/2)
    D1 D2                                      7,290,948                T(N/2)
    D3 = D1 D2 + XL YL + XR YR = XL YR + XR YL 120,077,634              O(N)
    XY = XL YL 10^8 + D3 10^4 + XR YR          5,820,464,730,934,047    O(N)

figure: the divide-and-conquer algorithm in action
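The three-multiplication identity can be checked directly in code, assuming the eight-digit example splits XL = 6143, XR = 8521, YL = 9473, YR = 6407 (a quick numeric sketch; karatsubaOnce is my name for one level of the recursion):

```cpp
#include <cassert>

// Compute X*Y from four-digit halves using only three products:
// xl*yl, xr*yr, and (xl - xr)*(yr - yl).
long long karatsubaOnce( long long xl, long long xr, long long yl, long long yr )
{
    long long xlyl = xl * yl;
    long long xryr = xr * yr;
    long long middle = ( xl - xr ) * ( yr - yl ) + xlyl + xryr;  // = xl*yr + xr*yl

    return xlyl * 100000000LL + middle * 10000LL + xryr;
}
```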
It is easy to see that now the recurrence equation satisfies

    T(N) = 3T(N/2) + O(N)

and so we obtain T(N) = O(N^(log_2 3)) = O(N^1.59). To complete the algorithm, we must have a base case, which can be solved without recursion. When both numbers are one-digit, we can do the multiplication by table lookup. If one number has zero digits, then we return zero. In practice, if we were to use this algorithm, we would choose the base case to be that which is most convenient for the machine. Although this algorithm has better asymptotic performance than the standard quadratic algorithm, it is rarely used, because for small N the overhead is significant, and for larger N there are even better algorithms. These algorithms also make extensive use of divide and conquer.

matrix multiplication

A fundamental numerical problem is the multiplication of two matrices. The figure below gives a simple O(N^3) algorithm to compute C = AB, where A, B, and C are N-by-N matrices. The algorithm follows directly from the definition of matrix multiplication. To compute c[i][j], we compute the dot product of the ith row in A with the jth column in B. As usual, arrays begin at index 0.

    /**
     * Standard matrix multiplication.
     * Arrays start at 0.
     * Assumes a and b are square.
     */
    matrix<int> operator*( const matrix<int> & a, const matrix<int> & b )
    {
        int n = a.numrows( );
        matrix<int> c( n, n );

        for( int i = 0; i < n; ++i )       // Initialization
            for( int j = 0; j < n; ++j )
                c[ i ][ j ] = 0;

        for( int i = 0; i < n; ++i )
            for( int j = 0; j < n; ++j )
                for( int k = 0; k < n; ++k )
                    c[ i ][ j ] += a[ i ][ k ] * b[ k ][ j ];

        return c;
    }

figure: simple O(N^3) matrix multiplication
[figure: decomposing AB = C into four quadrants]

For a long time it was assumed that Ω(N^3) was required for matrix multiplication. However, in the late sixties, Strassen showed how to break the Ω(N^3) barrier. The basic idea of Strassen's algorithm is to divide each matrix into four quadrants, as shown in the figure above. Then it is easy to show that

    C1,1 = A1,1 B1,1 + A1,2 B2,1
    C1,2 = A1,1 B1,2 + A1,2 B2,2
    C2,1 = A2,1 B1,1 + A2,2 B2,1
    C2,2 = A2,1 B1,2 + A2,2 B2,2

As an example, to perform a multiplication C = AB of two 4-by-4 matrices, we could define the eight corresponding 2-by-2 quadrant matrices A1,1, A1,2, A2,1, A2,2, B1,1, B1,2, B2,1, B2,2. We could then perform eight N/2-by-N/2 matrix multiplications and four N/2-by-N/2 matrix additions. The matrix additions take O(N^2) time. If the matrix multiplications are done recursively, then the running time satisfies

    T(N) = 8T(N/2) + O(N^2)

From the theorem above, we see that T(N) = O(N^3), so we do not have an improvement. As we saw with integer multiplication, we must reduce the number of subproblems below 8. Strassen used a strategy similar to the integer multiplication divide-and-conquer algorithm and showed how to use only seven recursive calls by carefully arranging the computations. The seven multiplications are
    M1 = (A1,2 - A2,2)(B2,1 + B2,2)
    M2 = (A1,1 + A2,2)(B1,1 + B2,2)
    M3 = (A1,1 - A2,1)(B1,1 + B1,2)
    M4 = (A1,1 + A1,2)B2,2
    M5 = A1,1(B1,2 - B2,2)
    M6 = A2,2(B2,1 - B1,1)
    M7 = (A2,1 + A2,2)B1,1

Once the multiplications are performed, the final answer can be obtained with eight more additions:

    C1,1 = M1 + M2 - M4 + M6
    C1,2 = M4 + M5
    C2,1 = M6 + M7
    C2,2 = M2 - M3 + M5 - M7

It is straightforward to verify that this tricky ordering produces the desired values. The running time now satisfies the recurrence

    T(N) = 7T(N/2) + O(N^2)

The solution of this recurrence is T(N) = O(N^(log_2 7)) = O(N^2.81). As usual, there are details to consider, such as the case when N is not a power of 2, but these are basically minor nuisances. Strassen's algorithm is worse than the straightforward algorithm until N is fairly large. It does not generalize for the case where the matrices are sparse (contain many zero entries), and it does not easily parallelize. When run with floating-point entries, it is less stable numerically than the classic algorithm. Thus, until recently, it had only limited applicability. Nevertheless, it has always represented an important theoretical milestone and certainly shows that in computer science, as in many other fields, even though a problem seems to have an intrinsic complexity, nothing is certain until proven.

dynamic programming

In the previous section, we saw that a problem that can be mathematically expressed recursively can also be expressed as a recursive algorithm, in many cases yielding a significant performance improvement over a more naive exhaustive search. Any recursive mathematical formula could be directly translated to a recursive algorithm, but the underlying reality is that often the compiler will not do justice to the recursive algorithm, and an inefficient program results. When we suspect that this is likely to be the case, we must provide a little more help to the compiler, by rewriting the recursive algorithm as a nonrecursive algorithm that systematically records the answers to the subproblems in a table. One technique that makes use of this approach is known as dynamic programming.
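Before leaving divide and conquer: the seven Strassen products from the matrix-multiplication subsection above can be verified mechanically on 2-by-2 matrices. A sketch (the struct and the labels M1 through M7 follow one common convention, which may differ from the text's presentation):

```cpp
#include <cassert>

struct Mat2 { long long a11, a12, a21, a22; };

// 2x2 product via Strassen's seven multiplications.
Mat2 strassen2x2( const Mat2 & A, const Mat2 & B )
{
    long long m1 = ( A.a12 - A.a22 ) * ( B.a21 + B.a22 );
    long long m2 = ( A.a11 + A.a22 ) * ( B.a11 + B.a22 );
    long long m3 = ( A.a11 - A.a21 ) * ( B.a11 + B.a12 );
    long long m4 = ( A.a11 + A.a12 ) * B.a22;
    long long m5 = A.a11 * ( B.a12 - B.a22 );
    long long m6 = A.a22 * ( B.a21 - B.a11 );
    long long m7 = ( A.a21 + A.a22 ) * B.a11;

    return { m1 + m2 - m4 + m6,     // c11
             m4 + m5,               // c12
             m6 + m7,               // c21
             m2 - m3 + m5 - m7 };   // c22
}

// 2x2 product straight from the definition, for checking.
Mat2 direct2x2( const Mat2 & A, const Mat2 & B )
{
    return { A.a11 * B.a11 + A.a12 * B.a21,
             A.a11 * B.a12 + A.a12 * B.a22,
             A.a21 * B.a11 + A.a22 * B.a21,
             A.a21 * B.a12 + A.a22 * B.a22 };
}
```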
using a table instead of recursion

Earlier in the text, we saw that the natural recursive program to compute the Fibonacci numbers is very inefficient. Recall that the program shown in the first figure below has a running time, T(N), that satisfies T(N) ≥ T(N - 1) + T(N - 2). Since T(N) satisfies the same recurrence relation as the Fibonacci numbers and has the same initial conditions, T(N) in fact grows at the same rate as the Fibonacci numbers and is thus exponential. On the other hand, since to compute FN, all that is needed is FN-1 and FN-2, we only need to record the two most recently computed Fibonacci numbers. This yields the O(N) algorithm in the second figure below.

    /* Compute Fibonacci numbers */
    long long fib( int n )
    {
        if( n <= 1 )
            return 1;
        else
            return fib( n - 1 ) + fib( n - 2 );
    }

figure: inefficient algorithm to compute Fibonacci numbers

    /* Compute Fibonacci numbers */
    long long fibonacci( int n )
    {
        if( n <= 1 )
            return 1;

        long long last = 1;
        long long nextToLast = 1;
        long long answer = 1;

        for( int i = 2; i <= n; ++i )
        {
            answer = last + nextToLast;
            nextToLast = last;
            last = answer;
        }
        return answer;
    }

figure: linear algorithm to compute Fibonacci numbers
23,776 | algorithm design techniques the reason that the recursive algorithm is so slow is because of the algorithm used to simulate recursion to compute fn there is one call to fn- and fn- howeversince fn- recursively makes call to fn- and fn- there are actually two separate calls to compute fn- if one traces out the entire algorithmthen we can see that fn- is computed three timesfn- is computed five timesfn- is computed eight timesand so on as figure showsthe growth of redundant calculations is explosive if the compiler' recursion simulation algorithm were able to keep list of all precomputed values and not make recursive call for an already solved subproblemthen this exponential explosion would be avoided this is why the program in figure is so much more efficient asa second examplewe saw in how to solve the recurrence ( ( /nn- = (inwith ( suppose that we want to checknumericallywhether the solution we obtained is correct we could then write the simple program in figure to evaluate the recursion once againthe recursive calls duplicate work in this casethe running time (nsatisfies (nn- = (inbecauseas shown in figure there is one (directrecursive call of each size from to plus (nadditional work (where else have we seen the tree shown in figure ?solving for ( )we find that it grows exponentially figure trace of the recursive calculation of fibonacci numbers double evalint ifn = return else double sum forint ++ sum +evali )return sum nfigure recursive function to evaluate ( / - = (in |
23,777 | figure trace of the recursive calculation in eval double evalint vector cn ) forint < ++ double sum forint ++ sum +cj ]ci sum ireturn cn ]figure evaluating ( / - = (in with table by using tablewe obtain the program in figure this program avoids the redundant recursive calls and runs in ( it is not perfect programas an exerciseyou should make the simple change that reduces its running time to (nordering matrix multiplications suppose we are given four matricesabcand dof dimensions and although matrix multiplication is not commutativeit is associativewhich means that the matrix product abcd can be parenthesizedand thus evaluatedin any order the obvious way to multiply two matrices of dimensions and rrespectivelyuses pqr scalar multiplications (using theoretically superior |
23,778 | algorithm design techniques algorithm such as strassen' algorithm does not significantly alter the problem we will considerso we will assume this performance bound what is the best way to perform the three matrix multiplications required to compute abcdin the case of four matricesit is simple to solve the problem by exhaustive searchsince there are only five ways to order the multiplications we evaluate each case belowr ( ((bc) ))evaluating bc requires , multiplications evaluating (bc) requires the , multiplications to compute bcplus an additional , multiplicationsfor total of , evaluating ( ((bc) )requires , multiplications for (bc)dplus an additional , multiplicationsfor grand total of , multiplications ( ( (cd)))evaluating cd requires , multiplications evaluating (cdrequires the , multiplications to compute cdplus an additional , multiplicationsfor total of , evaluating ( ( (cd))requires , multiplications for (cd)plus an additional , multiplicationsfor grand total of , multiplications ((ab)(cd))evaluating cd requires , multiplications evaluating ab requires , multiplications evaluating ((ab)(cd)requires , multiplications for cd , multiplications for abplus an additional , multiplications for grand total of , multiplications (((ab) ) )evaluating ab requires , multiplications evaluating (ab) requires the , multiplications to compute abplus an additional , multiplicationsfor total of , evaluating (((ab) )drequires , multiplications for (ab)cplus an additional , multiplicationsfor grand total of , multiplications (( (bc)) )evaluating bc requires , multiplications evaluating (bcrequires the , multiplications to compute bcplus an additional , multiplicationsfor total of , evaluating (( (bc))drequires , multiplications for (bc)plus an additional , multiplicationsfor grand total of , multiplications the calculations show that the best ordering uses roughly one-ninth the number of multiplications as the worst ordering thusit might be worthwhile to perform few 
calculations to determine the optimal ordering. Unfortunately, none of the obvious greedy strategies seems to work. Moreover, the number of possible orderings grows quickly. Suppose we define T(n) to be this number. Then T(1) = T(2) = 1, T(3) = 2, and T(4) = 5, as we have seen. In general,

T(n) = sum from i = 1 to n - 1 of T(i)T(n - i)

To see this, suppose that the matrices are A1, A2, ..., An, and the last multiplication performed is (A1 A2 ... Ai)(Ai+1 Ai+2 ... An). Then there are T(i) ways to compute (A1 A2 ... Ai) and T(n - i) ways to compute (Ai+1 Ai+2 ... An). Thus, there are T(i)T(n - i) ways to compute (A1 A2 ... Ai)(Ai+1 Ai+2 ... An) for each possible
23,779 | the solution of this recurrence is the well-known catalan numberswhich grow exponentially thusfor large nan exhaustive search through all possible orderings is useless neverthelessthis counting argument provides basis for solution that is substantially better than exponential let ci be the number of columns in matrix ai for < < then ai has ci- rowssince otherwise the multiplications are not valid we will define to be the number of rows in the first matrixa suppose mleftright is the number of multiplications required to multiply aleft aleft+ aright- aright for consistencymleftleft suppose the last multiplication is (aleft ai )(ai+ aright )where left < right then the number of multiplications used is mlefti mi+ right cleft- ci cright these three terms represent the multiplications required to compute (aleft ai )(ai+ aright )and their productrespectively if we define mleftright to be the number of multiplications required in an optimal orderingthenif left rightmleftright min {mlefti mi+ right cleft- ci cright left<= <right this equation implies that if we have an optimal multiplication arrangement of aleft aright the subproblems aleft ai and ai+ aright cannot be performed suboptimally this should be clearsince otherwise we could improve the entire result by replacing the suboptimal computation by an optimal computation the formula translates directly to recursive programbutas we have seen in the last sectionsuch program would be blatantly inefficient howeversince there are only approximately / values of mleftright that ever need to be computedit is clear that table can be used to store these values further examination shows that if right left kthen the only values mxy that are needed in the computation of mleftright satisfy - this tells us the order in which we need to compute the table if we want to print out the actual ordering of the multiplications in addition to the final answer , then we can use the ideas from the shortest-path algorithms in whenever we 
make change to mleftright we record the value of that is responsible this gives the simple program shown in figure although the emphasis of this is not codingit is worth noting that many programmers tend to shorten variable names to single letter ciand are used as single-letter variables because this agrees with the names we have used in the description of the algorithmwhich is very mathematical howeverit is generally best to avoid as variable namebecause looks too much like and can make for very difficult debugging if you make transcription error returning to the algorithmic issuesthis program contains triply nested loop and is easily seen to run in ( time the references describe faster algorithmbut since the time to perform the actual matrix multiplication is still likely to be much larger than the time to compute the optimal orderingthis algorithm is still quite practical optimal binary search tree our second dynamic programming example considers the following inputwe are given list of wordsw wn and fixed probabilitiesp pn of their occurrence |
23,780 | algorithm design techniques /*compute optimal ordering of matrix multiplication contains the number of columns for each of the matrices is the number of rows in matrix the minimum number of multiplications is left in ] actual ordering is computed via another procedure using lastchange and lastchange are indexed starting at instead of noteentries below main diagonals of and lastchange are meaningless and uninitialized *void optmatrixconst vector cmatrix mmatrix lastchange int size forint left left < ++left mleft ]left forint ++ / is right left forint left left < ++left /for each position int right left kmleft ]right infinityforint lefti right++ int thiscost mleft ] mi ]right cleft ci cright ]ifthiscost mleft ]right /update min mleft ]right thiscostlastchangeleft ]right ifigure program to find optimal ordering of matrix multiplications the problem is to arrange these words in binary search tree in way that minimizes the expected total access time in binary search treethe number of comparisons needed to access an element at depth is so if wi is placed at depth di then we want to minimize = pi ( di as an examplefigure shows seven words along with their probability of occurrence in some context figure shows three possible binary search trees their searching costs are shown in figure |
23,781 | word probability am and egg if the two figure sample input for optimal binary search tree problem if egg two and am and am the the and if two if am egg two egg the figure three possible binary search trees for data in previous table the first tree was formed using greedy strategy the word with the highest probability of being accessed was placed at the root the left and right subtrees were then formed recursively the second tree is the perfectly balanced search tree neither of these trees input word wi probability pi am and egg if the two totals tree # tree # tree # access cost once sequence access cost once sequence access cost once sequence figure comparison of the three binary search trees |
23,782 | algorithm design techniques is optimalas demonstrated by the existence of the third tree from this we can see that neither of the obvious solutions works this is initially surprisingsince the problem appears to be very similar to the construction of huffman encoding treewhichas we have already seencan be solved by greedy algorithm construction of an optimal binary search tree is harderbecause the data are not constrained to appear only at the leavesand also because the tree must satisfy the binary search tree property dynamic programming solution follows from two observations once againsuppose we are trying to place the (sortedwords wleft wleft+ wright- wright into binary search tree suppose the optimal binary search tree has wi as the rootwhere left < <right then the left subtree must contain wleft wi- and the right subtree must contain wi+ wright (by the binary search tree propertyfurtherboth of these subtrees must also be optimalsince otherwise they could be replaced by optimal subtreeswhich would give better solution for wleft wright thuswe can write formula for the cost cleft,right of an optimal binary search tree figure may be helpful if left rightthen the cost of the tree is this is the nullptr casewhich we always have for binary search trees otherwisethe root costs pi the left subtree has cost of clefti- relative to its rootand the right subtree has cost of ci+ right relative to its root as figure showseach node in these subtrees is one level deeper from wi than from right their respective rootsso we must add - =left pj and = + pj this gives the formula right - cleftright min pj pj pi clefti- ci+ right left<= <=right = + =left right pj min clefti- ci+ right left<= <=right =left from this equationit is straightforward to write program to compute the cost of the optimal binary search tree as usualthe actual search tree can be maintained by saving the value of that minimizes cleftright the standard recursive routine can be used to print the actual 
tree.

[Figure: structure of an optimal binary search tree -- wi at the root, with left subtree wleft ... wi-1 and right subtree wi+1 ... wright]
23,783 | left= left= left= am am and and iteration= am and am and and egg am iteration= and and and if am egg and iteration= am and if am if and the egg iteration= am and if am the and two if iteration= and and if am two the iteration= and and two iteration= and left= left= if if egg egg egg if if the egg if if if egg the if two if if egg two if left= left= the the two two the two the two two figure computation of the optimal binary search tree for sample input figure shows the table that will be produced by the algorithm for each subrange of wordsthe cost and root of the optimal binary search tree are maintained the bottommost entry computes the optimal binary search tree for the entire set of words in the input the optimal tree is the third tree shown in figure the precise computation for the optimal binary search tree for particular subrangenamelyam ifis shown in figure it is obtained by computing the minimum-cost tree obtained by placing amandeggand if at the root for instancewhen and is placed at the rootthe left subtree contains am am (of cost via previous calculation)the right subtree contains egg if (of cost )and pam pand pegg pif for total cost of the running time of this algorithm is ( )because when it is implementedwe obtain triple loop an ( algorithm for the problem is sketched in the exercises all-pairs shortest path our third and final dynamic programming application is an algorithm to compute shortest weighted paths between every pair of points in directed graphg (vein we saw an algorithm for the single-source shortest-path problemwhich finds the shortest path from some arbitrary vertexsto all others that algorithm (dijkstra'sruns in (| | time on dense graphsbut substantially faster on sparse graphs we will give short algorithm to solve the all-pairs problem for dense graphs the running time of the algorithm is (| | )which is not an asymptotic improvement over |viterations of dijkstra' algorithm but could be faster on very dense graphbecause its 
loops are tighter the algorithm also |
performs correctly if there are negative edge costs but no negative-cost cycles; Dijkstra's algorithm fails in this case.

[Figure: computation of the table entry (whose root is and) for the subrange am..if]

Let us recall the important details of Dijkstra's algorithm (the reader may wish to review the earlier section). Dijkstra's algorithm starts at a vertex, s, and works in stages. Each vertex in the graph is eventually selected as an intermediate vertex. If the current selected vertex is v, then for each w we set dw = min(dw, dv + cv,w). This formula says that the best distance to w (from s) is either the previously known distance to w from s, or the result of going from s to v (optimally) and then directly from v to w.

Dijkstra's algorithm provides the idea for the dynamic programming algorithm: We select the vertices in sequential order. We will define Dk,i,j to be the weight of the shortest path from vi to vj that uses only v1, v2, ..., vk as intermediates. By this definition, D0,i,j = ci,j, where ci,j is infinity if (vi, vj) is not an edge in the graph. Also, by definition, D|V|,i,j is the shortest path from vi to vj in the graph. As the figure shows, when k > 0 we can write a simple formula for Dk,i,j: The shortest path from vi to vj that uses only v1, v2, ..., vk as intermediates is the shortest path that either does not use vk as an intermediate at all, or consists of the merging of the two paths vi to vk and vk to vj, each of which uses only the first k - 1 vertices as intermediates. This leads to the formula

Dk,i,j = min{ Dk-1,i,j , Dk-1,i,k + Dk-1,k,j }

The time requirement is once again O(|V|^3). Unlike the two previous dynamic programming examples, this time bound has not been substantially lowered by another approach.
23,785 | /*compute all-shortest paths contains the adjacency matrix with ai ] presumed to be zero contains the values of the shortest path vertices are numbered starting at all arrays have equal dimension negative cycle exists if di ] is set to negative value actual path can be computed using path]not_a_vertex is - *void allpairsconst matrix amatrix dmatrix path int numrows)/initialize and path forint ++ forint ++ di ] ai ] ]pathi ] not_a_vertexforint ++ /consider each vertex as an intermediate forint ++ forint ++ ifdi ] dk ] di ] /update shortest path di ] di ] dk ] ]pathi ] kfigure all-pairs shortest path because the kth stage depends only on the ( )th stageit appears that only two |vx |vmatrices need to be maintained howeverusing as an intermediate vertex on path that starts or finishes with does not improve the result unless there is negative cycle thusonly one matrix is necessarybecause dk- , , dk, , and dk- , , dk, , which implies that none of the terms on the right change values and need to be saved this observation leads to the simple program in figure which numbers vertices starting at zero to conform with ++' conventions |
23,786 | algorithm design techniques on complete graphwhere every pair of vertices is connected (in both directions)this algorithm is almost certain to be faster than |viterations of dijkstra' algorithmbecause the loops are so tight lines to can be executed in parallelas can lines to thusthis algorithm seems to be well suited for parallel computation dynamic programming is powerful algorithm design techniquewhich provides starting point for solution it is essentially the divide-and-conquer paradigm of solving simpler problems firstwith the important difference being that the simpler problems are not clear division of the original because subproblems are repeatedly solvedit is important to record their solutions in table rather than recompute them in some casesthe solution can be improved (although it is certainly not always obvious and frequently difficult)and in other casesthe dynamic programming technique is the best approach known in some senseif you have seen one dynamic programming problemyou have seen them all more examples of dynamic programming can be found in the exercises and references randomized algorithms suppose you are professor who is giving weekly programming assignments you want to make sure that the students are doing their own programs orat the very leastunderstand the code they are submitting one solution is to give quiz on the day that each program is due on the other handthese quizzes take time out of classso it might only be practical to do this for roughly half of the programs your problem is to decide when to give the quizzes of courseif the quizzes are announced in advancethat could be interpreted as an implicit license to cheat for the percent of the programs that will not get quiz one could adopt the unannounced strategy of giving quizzes on alternate programsbut students would figure out the strategy before too long another possibility is to give quizzes on what seem like the important programsbut this would likely lead to similar quiz 
patterns from semester to semester student grapevines being what they arethis strategy would probably be worthless after semester one method that seems to eliminate these problems is to use coin quiz is made for every program (making quizzes is not nearly as time-consuming as grading them)and at the start of classthe professor will flip coin to decide whether the quiz is to be given this wayit is impossible to know before class whether or not the quiz will occurand these patterns do not repeat from semester to semester thusthe students will have to expect that quiz will occur with percent probabilityregardless of previous quiz patterns the disadvantage is that it is possible that there is no quiz for an entire semester this is not likely occurrenceunless the coin is suspect each semesterthe expected number of quizzes is half the number of programsand with high probabilitythe number of quizzes will not deviate much from this this example illustrates what we call randomized algorithms at least once during the algorithma random number is used to make decision the running time of the algorithm depends not only on the particular inputbut also on the random numbers that occur |
23,787 | the worst-case running time of randomized algorithm is often the same as the worstcase running time of the nonrandomized algorithm the important difference is that good randomized algorithm has no bad inputs but only bad random numbers (relative to the particular inputthis may seem like only philosophical differencebut actually it is quite importantas the following example shows consider two variants of quicksort variant uses the first element as pivotwhile variant uses randomly chosen element as pivot in both casesthe worst-case running time is ( )because it is possible at each step that the largest element is chosen as pivot the difference between these worst cases is that there is particular input that can always be presented to variant to cause the bad running time variant will run in ( time every single time it is given an already-sorted list if variant is presented with the same input twiceit will have two different running timesdepending on what random numbers occur throughout the textin our calculations of running timeswe have assumed that all inputs are equally likely this is not truebecause nearly sorted inputfor instanceoccurs much more often than is statistically expectedand this causes problemsparticularly for quicksort and binary search trees by using randomized algorithmthe particular input is no longer important the random numbers are importantand we can get an expected running timewhere we now average over all possible random numbers instead of over all possible inputs using quicksort with random pivot gives an ( log )-expectedtime algorithm this means that for any inputincluding already-sorted inputthe running time is expected to be ( log )based on the statistics of random numbers an expected running-time bound is somewhat stronger than an average-case bound butof courseis weaker than the corresponding worst-case bound on the other handas we saw in the selection problemsolutions that obtain the worst-case bound are frequently not as 
practical as their average-case counterparts. Randomized algorithms usually are.

Randomized algorithms were used implicitly in perfect and universal hashing. In this section, we will examine two uses of randomization. First, we will see a novel scheme for supporting the binary search tree operations in O(log n) expected time. Once again, this means that there are no bad inputs, just bad random numbers. From a theoretical point of view, this is not terribly exciting, since balanced search trees achieve this bound in the worst case. Nevertheless, the use of randomization leads to relatively simple algorithms for searching, inserting, and especially deleting. Our second application is a randomized algorithm to test the primality of large numbers. The algorithm we present runs quickly but occasionally makes an error. The probability of error can, however, be made negligibly small.

random-number generators

Since our algorithms require random numbers, we must have a method to generate them. Actually, true randomness is virtually impossible to do on a computer, since these numbers will depend on the algorithm and thus cannot possibly be random. Generally, it suffices to produce pseudorandom numbers, which are numbers that appear to be random. Random numbers have many known statistical properties; pseudorandom numbers satisfy most of these properties. Surprisingly, this too is much easier said than done.
23,788 | algorithm design techniques suppose we only need to flip cointhuswe must generate (for headsor (for tailsrandomly one way to do this is to examine the system clock the clock might record time as an integer that counts the number of seconds since some starting time we could then use the lowest bit the problem is that this does not work well if sequence of random numbers is needed one second is long timeand the clock might not change at all while the program is running even if the time was recorded in units of microsecondsif the program was running by itselfthe sequence of numbers that would be generated would be far from randomsince the time between calls to the generator would be essentially identical on every program invocation we seethenthat what is really needed is sequence of random numbers these numbers should appear independent if coin is flipped and heads appearsthe next coin flip should still be equally likely to come up heads or tails the simplest method to generate random numbers is the linear congruential generatorwhich was first described by lehmer in numbers are generated satisfying xi+ axi mod to start the sequencesome value of must be given this value is known as the seed if then the sequence is far from randombut if and are correctly chosenthen any other < is equally valid if is primethen xi is never as an exampleif and then the numbers generated are notice that after numbersthe sequence repeats thusthis sequence has period of which is as large as possible (by the pigeonhole principleif is primethere are always choices of that give full period of some choices of do notif and the sequence has short period of if is chosen to be large -bit primethe period should be significantly large for most applications lehmer suggested the use of the -bit prime - , , , for this primea , is one of the many values that gives full-period generator its use has been well studied and is recommended by experts in the field we will see later that with 
random-number generators, tinkering usually means breaking, so one is well advised to stick with this formula until told otherwise.

This seems like a simple routine to implement. Generally, a class variable is used to hold the current value in the sequence of x's. When debugging a program that uses random

[Footnotes: We will use random in place of pseudorandom in the rest of this section. For instance, it seems that xi+1 = (48,271 xi + 1) mod (2^31 - 1) would somehow be even more random. This illustrates how fragile these generators are: there is a seed x for which (48,271 x + 1) mod (2^31 - 1) = x, so with that seed the generator gets stuck in a cycle of period 1.]
23,789 | numbersit is probably best to set so that the same random sequence occurs all the time when the program seems to workeither the system clock can be used or the user can be asked to input value for the seed it is also common to return random real number in the open interval ( ( and are not possible values)this can be done by dividing by from thisa random number in any closed interval [abcan be computed by normalizing this yields the "obviousclass in figure whichunfortunatelyis erroneous the problem with this class is that the multiplication could overflowalthough this is not an errorit affects the result and thus the pseudorandomness even though we could use -bit long longsthis could slow down the computation schrage gave procedure in which all the calculations can be done on -bit machine without overflow we compute the quotient and remainder of / and define these as and rrespectively in our caseq , , and we have axi xi+ axi mod axi xi xi axi axi + - xi axi xi + axi since xi xqi xi mod qwe can replace the leading axi and obtain xi axi xi xi xi mod + xi+ xi xi axi (aq ma(xi mod qm since aq rit follows that aq - thuswe obtain xi xi axi + xi+ (xi mod qr the term (xi xqi ax is either or because both terms are integers and their difference lies between and thuswe have xi xi+ (xi mod qr md(xi quick check shows that because qall the remaining terms can be calculated without overflow (this is one of the reasons for choosing , furthermored(xi only if the remaining terms evaluate to less than zero thus (xi does not need to be explicitly computed but can be determined by simple test this leads to the revision in figure one might be tempted to assume that all machines have random-number generator at least as good as the one in figure in their standard library sadlythis is not true many libraries have generators based on the function |
23,790 | static const int static const int class random publicexplicit randomint initialvalue )int randomint)double random )int randomintint lowint high )privateint state}/*construct with initialvalue for the state *random::randomint initialvalue ifinitialvalue initialvalue +mstate initialvalueifstate = state /*return pseudorandom intand change the internal state does not work correctly correct implementation is in figure *int random::randomintreturn state state /*return pseudorandom double in the open range and change the internal state *double random::random return static_castrandomintmfigure random-number generator that does not work |
23,791 | static const int static const int static const int astatic const int /*return pseudorandom intand change the internal state *int random::randomintint tmpstate state state )iftmpstate > state tmpstateelse state tmpstate mreturn statefigure random-number modification that does not overflow on -bit machines xi+ (axi cmod where is chosen to match the number of bits in the machine' integerand and are odd unfortunatelythese generators always produce values of xi that alternate between even and odd--hardly desirable property indeedthe lower bits cycle with period (at bestmany other random-number generators have much smaller cycles than the one provided in figure these are not suitable for the case where long sequences of random numbers are needed the unix drand function uses generator of this form howeverit uses -bit linear congruential generator and returns only the high bitsthus avoiding the cycling problem in the low-order bits the constants are , , , and ++ provides very general framework for the generation of random numbers in this frameworkthe generation of random numbers ( the type of random-number generator usedis separated from the actual distribution used ( whether the numbers are uniformly distributed over integersa range of realsor follow other distributionssuch as the normal distribution or poisson distributionthe generators that are provided include linear-congruential generatorswith the class template linear_congruential_enginewhich allows the specification of the parameters acand template class linear congruential engine |
23,792 | algorithm design techniques along with this typedef that yields the random-number generator (the "minimal standard"described earliertypedef linear congruential engine minstd rand the library also provides generator based on newer algorithmknown as the mersenne twisterwhose description is beyond the scope of the bookalong with typedef mt that uses its most common parametersand third type of random-number generatorknown as "subtract-with-carrygenerator figure illustrates how random-number generator engine can be combined with distribution (which is function objectto provide an easy-to-use class for the generation of random numbers skip lists our first use of randomization is data structure that supports both searching and insertion in (log nexpected time as mentioned in the introduction to this sectionthis means that the running time for each operation on any input sequence has expected value (log )where the expectation is based on the random-number generator it is possible to add deletion and all the operations that involve ordering and obtain expected time bounds that match the average time bounds of binary search trees the simplest possible data structure to support searching is the linked list figure shows simple linked list the time to perform search is proportional to the number of nodes that have to be examinedwhich is at most figure shows linked list in which every other node has an additional link to the node two ahead of it in the list because of thisat most / nodes are examined in the worst case we can extend this idea and obtain figure hereevery fourth node has link to the node four ahead only / nodes are examined the limiting case of this argument is shown in figure every th node has link to the node ahead of it the total number of links has only doubledbut now at most log nnodes are examined during search it is not hard to see that the total time spent for search is (log )because the search consists of either advancing to new node or dropping 
to a lower link in the same node. Each of these steps consumes at most O(log N) total time during a search. Notice that the search in this data structure is essentially a binary search.

The problem with this data structure is that it is much too rigid to allow efficient insertion. The key to making this data structure usable is to relax the structure conditions slightly. We define a level k node to be a node that has k links. As the figure shows, the ith link in any level-k node (k >= i) links to the next node with at least i levels. This is an easy property to maintain; however, the earlier figure shows a more restrictive property than this. We thus drop the restriction that the ith link links to the node 2^i ahead, and we replace it with the less restrictive condition above.

When it comes time to insert a new element, we allocate a new node for it. We must at this point decide what level the node should be. Examining the earlier figure, we find that roughly half the nodes are level 1 nodes, roughly a quarter are level 2, and, in general, approximately 1/2^i nodes are level i. We choose the level of the node randomly, in
#include <chrono>
#include <random>
#include <functional>
using namespace std;

int currentTimeSeconds( );

class UniformRandom
{
  public:
    UniformRandom( int seed = currentTimeSeconds( ) ) : generator{ seed }
    {
    }

    // Return a pseudorandom int
    int nextInt( )
    {
        static uniform_int_distribution<unsigned int> distribution;
        return distribution( generator );
    }

    // Return a pseudorandom int in range [0..high)
    int nextInt( int high )
    {
        return nextInt( 0, high - 1 );
    }

    // Return a pseudorandom int in range [low..high]
    int nextInt( int low, int high )
    {
        uniform_int_distribution<int> distribution( low, high );
        return distribution( generator );
    }

    // Return a pseudorandom double in the range [0..1)
    double nextDouble( )
    {
        static uniform_real_distribution<double> distribution( 0, 1 );
        return distribution( generator );
    }

  private:
    mt19937 generator;
};

int currentTimeSeconds( )
{
    auto now = chrono::high_resolution_clock::now( ).time_since_epoch( );
    return chrono::duration_cast<chrono::seconds>( now ).count( );
}

figure: a class that uses C++11 random-number facilities
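As a sanity check on the overflow-avoiding state update shown a few pages back, the following sketch compares Schrage's decomposition against a direct 64-bit computation of (A * state) mod M. The function names schrageNext and directNext are inventions of this sketch, which assumes the "minimal standard" constants A = 48271 and M = 2^31 - 1 quoted in the text.

```cpp
#include <cassert>
#include <cstdint>

// Constants for the "minimal standard" generator discussed in the text.
static const int A = 48271;
static const int M = 2147483647;   // 2^31 - 1
static const int Q = M / A;        // 44488
static const int R = M % A;        // 3399

// One step of the generator using Schrage's decomposition; every
// intermediate value stays within the range of a 32-bit int, since
// A * (state % Q) <= A * (Q - 1) < M and R < Q.
int schrageNext( int state )
{
    int tmp = A * ( state % Q ) - R * ( state / Q );
    return tmp >= 0 ? tmp : tmp + M;
}

// Reference computation of (A * state) mod M using 64-bit arithmetic.
int directNext( int state )
{
    return static_cast<int>( ( static_cast<int64_t>( A ) * state ) % M );
}
```

Starting from state 1, ten thousand applications of either function should land on 399268537, the check value the C++11 standard specifies for minstd_rand.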
figure: a simple linked list

figure: a linked list with links to two cells ahead

figure: a linked list with links to four cells ahead

figure: a linked list with links to 2^i cells ahead

accordance with this probability distribution. The easiest way to do this is to flip a coin until a head occurs and use the total number of flips as the node level. The next figure shows a typical skip list.

Given this, the skip list algorithms are simple to describe. To perform a search, we start at the highest link at the header. We traverse along this level until we find that the next node is larger than the one we are looking for (or nullptr). When this occurs, we go to the next lower level and continue the strategy. When progress is stopped at level 1, either we are in front of the node we are looking for, or it is not in the list. To perform an insert, we

figure: a skip list
figure: before and after an insertion

proceed as in a search, and keep track of each point where we switch to a lower level. The new node, whose level is determined randomly, is then spliced into the list. This operation is shown in the figure above.

A cursory analysis shows that since the expected number of nodes at each level is unchanged from the original (nonrandomized) algorithm, the total amount of work that is expected to be performed traversing to nodes on the same level is unchanged. This tells us that these operations have O(log N) expected costs. Of course, a more formal proof is required, but it is not much different from this.

Skip lists are similar to hash tables, in that they require an estimate of the number of elements that will be in the list (so that the number of levels can be determined). If an estimate is not available, we can assume a large number or use a technique similar to rehashing. Experiments have shown that skip lists are as efficient as many balanced search tree implementations and are certainly much simpler to implement in many languages. Skip lists also have efficient concurrent implementations, unlike balanced binary search trees. Hence they are provided in the Java library, though not yet in C++.

primality testing

In this section, we examine the problem of determining whether or not a large number is prime. As was mentioned at the end of an earlier chapter, some cryptography schemes depend on the difficulty of factoring a large 200-digit number into two 100-digit primes. In order to implement this scheme, we need a method of generating these two primes. If d is the number of digits in N, the obvious method of testing for the divisibility by odd numbers from 3 to √N requires roughly √N/2 divisions, which is about 10^(d/2) and is completely impractical for 100-digit numbers.

In this section, we will give a polynomial-time algorithm that can test for primality. If the algorithm declares that the number is not prime, we can be certain that the number is not prime. If the algorithm declares that the number is prime, then, with high probability but not 100 percent
certainty, the number is prime. The error probability does not depend on the particular number that is being tested but instead depends on random choices made
by the algorithm. Thus, this algorithm occasionally makes a mistake, but we will see that the error ratio can be made arbitrarily negligible.

The key to the algorithm is a well-known theorem due to Fermat.

Theorem (Fermat's Lesser Theorem): If P is prime, and 0 < A < P, then A^(P-1) ≡ 1 (mod P).

Proof: A proof of this theorem can be found in any textbook on number theory.

For instance, since 67 is prime, 2^66 ≡ 1 (mod 67). This suggests an algorithm to test whether a number N is prime: merely check whether 2^(N-1) ≡ 1 (mod N). If 2^(N-1) ≢ 1 (mod N), then we can be certain that N is not prime. On the other hand, if the equality holds, then N is probably prime. For instance, the smallest N that satisfies 2^(N-1) ≡ 1 (mod N) but is not prime is N = 341.

This algorithm will occasionally make errors, but the problem is that it will always make the same errors. Put another way, there is a fixed set of N for which it does not work. We can attempt to randomize the algorithm as follows: Pick 1 < A < N - 1 at random. If A^(N-1) ≡ 1 (mod N), declare that N is probably prime; otherwise declare that N is definitely not prime. If N = 341, and A = 3, we find that 3^340 ≡ 56 (mod 341). Thus, if the algorithm happens to choose A = 3, it will get the correct answer for N = 341.

Although this seems to work, there are numbers that fool even this algorithm for most choices of A. One such set of numbers is known as the Carmichael numbers. These are not prime but satisfy A^(N-1) ≡ 1 (mod N) for all 0 < A < N that are relatively prime to N. The smallest such number is 561. Thus, we need an additional test to improve the chances of not making an error.

Earlier, we proved a theorem related to quadratic probing. A special case of this theorem is the following:

Theorem: If P is prime and 0 < X < P, the only solutions to X^2 ≡ 1 (mod P) are X = 1, P - 1.

Proof: X^2 ≡ 1 (mod P) implies that X^2 - 1 ≡ 0 (mod P). This implies (X - 1)(X + 1) ≡ 0 (mod P). Since P is prime, 0 < X < P, and P must divide either (X - 1) or (X + 1), the theorem follows.

Therefore, if at any point in the computation of A^(N-1) (mod N) we discover a violation of this theorem, we can conclude that N is definitely not prime. If we use pow, from an earlier section, we see that there will be several opportunities to apply this test. We modify this routine to perform operations mod N, and apply
the test of the theorem above. This strategy is implemented in the pseudocode shown in the figure. Recall that if witness returns anything but 1, it has proven that N cannot be prime. The proof is nonconstructive, because it gives no method of actually finding the factors.

It has been shown that for any (sufficiently large) N, at most (N - 9)/4 values of A fool this algorithm. Thus, if A is chosen at random, and the algorithm answers that N is (probably) prime, then the algorithm is correct at least 75 percent of the time. Suppose witness is run 50 times. The probability that the algorithm is fooled once is at most 1/4. Thus, the probability that 50 independent random trials fool the algorithm is never more than
/**
 * Function that implements the basic primality test.
 * If witness does not return 1, n is definitely composite.
 * Do this by computing a^i (mod n) and looking for
 * nontrivial square roots of 1 along the way.
 */
HugeInt witness( const HugeInt & a, const HugeInt & i, const HugeInt & n )
{
    if( i == 0 )
        return 1;

    HugeInt x = witness( a, i / 2, n );
    if( x == 0 )    // If n is recursively composite, stop
        return 0;

    // n is not prime if we find a nontrivial square root of 1
    HugeInt y = ( x * x ) % n;
    if( y == 1 && x != 1 && x != n - 1 )
        return 0;

    if( i % 2 != 0 )
        y = ( a * y ) % n;

    return y;
}

/**
 * The number of witnesses queried in randomized primality test.
 */
const int TRIALS = 5;

/**
 * Randomized primality test.
 * Adjust TRIALS to increase confidence level.
 * n is the number to test.
 * If return value is false, n is definitely not prime.
 * If return value is true, n is probably prime.
 */
bool isPrime( const HugeInt & n )
{
    Random r;

    for( int counter = 0; counter < TRIALS; ++counter )
        if( witness( r.randomHugeInt( 2, n - 2 ), n - 1, n ) != 1 )
            return false;

    return true;
}

figure: a probabilistic primality testing algorithm
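The pseudocode in the figure above can be made concrete. The sketch below is an assumption-laden translation: it substitutes unsigned 64-bit integers for the book's HugeInt class (so it is only valid for n below 2^32, where x * x cannot overflow) and a standard mt19937_64 engine for the book's Random class; the alias UInt and the two-argument isPrime signature are inventions of this sketch.

```cpp
#include <cassert>
#include <cstdint>
#include <random>

typedef uint64_t UInt;   // stand-in for the book's HugeInt; valid for n < 2^32

// Compute a^i (mod n) by recursive squaring, returning 0 as soon as a
// nontrivial square root of 1 (mod n) is discovered along the way.
UInt witness( UInt a, UInt i, UInt n )
{
    if( i == 0 )
        return 1;

    UInt x = witness( a, i / 2, n );
    if( x == 0 )                       // n is recursively composite; stop
        return 0;

    // n is not prime if we find a nontrivial square root of 1
    UInt y = ( x * x ) % n;
    if( y == 1 && x != 1 && x != n - 1 )
        return 0;

    if( i % 2 != 0 )
        y = ( a * y ) % n;

    return y;
}

const int TRIALS = 5;

// If false, n is definitely composite; if true, n is probably prime.
bool isPrime( UInt n, std::mt19937_64 & r )
{
    std::uniform_int_distribution<UInt> dist( 2, n - 2 );

    for( int counter = 0; counter < TRIALS; ++counter )
        if( witness( dist( r ), n - 1, n ) != 1 )
            return false;

    return true;
}
```

Even the single witness a = 2 already separates the prime 67 from the pseudoprime 341 and the Carmichael number 561, both of which slip past the plain Fermat test, because the square-root check fires partway through the squaring chain.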
1/4^50 = 2^-100. This is actually a very conservative estimate, which holds for only a few choices of N. Even so, one is more likely to see a hardware error than an incorrect claim of primality.

Randomized algorithms for primality testing are important because they have long been significantly faster than the best nonrandomized algorithms, and although the randomized algorithm can occasionally produce a false positive, the chances of this happening can be made small enough to be negligible. For many years, it was suspected that it was possible to test definitively the primality of a d-digit number in time polynomial in d, but no such algorithm was known. Recently, however, deterministic polynomial time algorithms for primality testing have been discovered. While these algorithms are tremendously exciting theoretical results, they are not yet competitive with the randomized algorithms. The references at the end of the chapter provide more information.

backtracking algorithms

The last algorithm design technique we will examine is backtracking. In many cases, a backtracking algorithm amounts to a clever implementation of exhaustive search, with generally unfavorable performance. This is not always the case, however, and even so, in some cases, the savings over a brute-force exhaustive search can be significant. Performance is, of course, relative: an O(N^2) algorithm for sorting is pretty bad, but an O(N^5) algorithm for the traveling salesman (or any NP-complete) problem would be a landmark result.

A practical example of a backtracking algorithm is the problem of arranging furniture in a new house. There are many possibilities to try, but typically only a few are actually considered. Starting with no arrangement, each piece of furniture is placed in some part of the room. If all the furniture is placed and the owner is happy, then the algorithm terminates. If we reach a point where all subsequent placement of furniture is undesirable, we have to undo the last step and try an alternative. Of course, this might force another undo, and so forth. If we find that we undo
all possible first steps, then there is no placement of furniture that is satisfactory. Otherwise, we eventually terminate with a satisfactory arrangement. Notice that although this algorithm is essentially brute force, it does not try all possibilities directly. For instance, arrangements that consider placing the sofa in the kitchen are never tried. Many other bad arrangements are discarded early, because an undesirable subset of the arrangement is detected. The elimination of a large group of possibilities in one step is known as pruning.

We will see two examples of backtracking algorithms. The first is a problem in computational geometry. Our second example shows how computers select moves in games, such as chess and checkers.

the turnpike reconstruction problem

Suppose we are given N points, p1, p2, ..., pN, located on the x-axis. xi is the x coordinate of pi. Let us further assume that x1 = 0 and the points are given from left to right. These N points determine N(N - 1)/2 (not necessarily unique) distances d1, d2, ..., dm between
every pair of points of the form |xi - xj| (i ≠ j). It is clear that if we are given the set of points, it is easy to construct the set of distances in O(N^2) time. This set will not be sorted, but if we are willing to settle for an O(N^2 log N) time bound, the distances can be sorted, too. The turnpike reconstruction problem is to reconstruct a point set from the distances. This finds applications in physics and molecular biology (see the references for pointers to more specific information). The name derives from the analogy of points to turnpike exits on East Coast highways. Just as factoring seems harder than multiplication, the reconstruction problem seems harder than the construction problem. Nobody has been able to give an algorithm that is guaranteed to work in polynomial time. The algorithm that we will present generally runs in O(N^2 log N) but can take exponential time in the worst case.

Of course, given one solution to the problem, an infinite number of others can be constructed by adding an offset to all the points. This is why we insist that the first point is anchored at 0 and that the point set that constitutes a solution is output in nondecreasing order.

Let D be the set of distances, and assume that |D| = m = N(N - 1)/2. As an example, suppose that

D = {1, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 6, 7, 8, 10}

Since |D| = 15, we know that N = 6. We start the algorithm by setting x1 = 0. Clearly, x6 = 10, since 10 is the largest element in D. We remove 10 from D. The points that we have placed and the remaining distances are as shown in the following figure:

x1 = 0, x6 = 10
D = {1, 2, 2, 2, 3, 3, 3, 4, 5, 5, 5, 6, 7, 8}

The largest remaining distance is 8, which means that either x2 = 2 or x5 = 8. By symmetry, we can conclude that the choice is unimportant, since either both choices lead to solutions (which are mirror images of each other), or neither does, so we can set x5 = 8 without affecting the solution. We then remove the distances x6 - x5 = 2 and x5 - x1 = 8 from D, obtaining

D = {1, 2, 2, 3, 3, 3, 4, 5, 5, 5, 6, 7}

The next step is not obvious. Since 7 is the largest value in D, either x4 = 7 or x2 = 3. If x4 = 7, then the distances x6 - 7 = 3 and x5 - 7 = 1 must also be present in D. A quick check shows that indeed they are. On the other hand, if we set x2 = 3, then 3 - x1 = 3 and x5 - 3 = 5 must be present in D. These distances are also in D, so we have no
guidance on which choice to make. Thus, we try one and see if it leads to a solution. If it turns out that it does not, we can come back and try the other. Trying the first choice, we set x4 = 7, which leaves

D = {2, 2, 3, 3, 4, 5, 5, 5, 6}