At this point, we have x1 = 0, x4 = 7, x5 = 8, and x6 = 10, and the largest remaining distance is 6, so either x3 = 6 or x2 = 4. But if x3 = 6, then x4 - x3 = 1, which is impossible, since 1 is no longer in D. On the other hand, if x2 = 4, then x2 - x1 = 4 and x5 - x2 = 4, and this is also impossible, since 4 only appears once in D. Thus, this line of reasoning leaves no solution, so we backtrack.

Since x4 = 7 failed to produce a solution, we try x2 = 3. If this also fails, we give up and report no solution. We now have {x1 = 0, x2 = 3, x5 = 8, x6 = 10}. Once again, we have to choose between x4 = 6 and x3 = 4. x3 = 4 is impossible, because D only has one occurrence of 4, and two would be implied by this choice. x4 = 6 is possible, so we obtain {x1 = 0, x2 = 3, x4 = 6, x5 = 8, x6 = 10}. The only remaining choice is to assign x3 = 5; this works because it leaves D empty, and so we have the solution {0, 3, 5, 6, 8, 10}.

The next figure shows a decision tree representing the actions taken to arrive at the solution. Instead of labeling the branches, we have placed the labels in the branches' destination nodes. A node with an asterisk indicates that the points chosen are inconsistent with the given distances; nodes with two asterisks have only impossible nodes as children, and thus represent an incorrect path.

The pseudocode to implement this algorithm is mostly straightforward. The driving routine, turnpike, is shown in the figure that follows. It receives the point array x (which need not be initialized), the distance set D, and n. If a solution is discovered, then true will be returned, the answer will be placed in x, and D will be empty. Otherwise, false will be returned, x will be undefined, and the distance set D will be untouched. The routine sets x1, xn-1, and xn, as described above, alters D, and calls the backtracking algorithm place to place the other points. We presume that a check has already been made to ensure that |D| = n(n - 1)/2.

The more difficult part is the backtracking algorithm, which is shown in a subsequent figure. Like most backtracking algorithms, the most convenient implementation is recursive. We have used one-letter variable names, which is generally poor style, for consistency with the worked example. We also, for simplicity, do not give the types of variables. Finally, we index arrays starting at 1, instead of 0.
Figure: Decision tree for the worked turnpike reconstruction example.

    bool turnpike( vector & x, DistSet d, int n )
    {
        x[ 1 ] = 0;
        x[ n ] = d.deleteMax( );
        x[ n - 1 ] = d.deleteMax( );

        if( x[ n ] - x[ n - 1 ] is in d )
        {
            d.remove( x[ n ] - x[ n - 1 ] );
            return place( x, d, n, 2, n - 2 );
        }
        else
            return false;
    }

Figure: Turnpike reconstruction algorithm: driver routine (pseudocode).

We pass the same arguments plus the boundaries, left and right; x[left], ..., x[right] are the coordinates of the points that we are trying to place. If D is empty (or left > right), then a solution has been found, and we can return. Otherwise, we first try to place x[right] = dmax. If all the appropriate distances are present (in the correct quantity), then we tentatively place this point, remove these distances, and try to fill from left to right - 1. If the distances are not present, or the attempt to fill from left to right - 1 fails, then we try setting x[left] = x[n] - dmax, using a similar strategy. If this does not work, then there is no solution; otherwise a solution has been found, and this information is eventually passed back to turnpike by the return statement and the x array.
    /**
     * Backtracking algorithm to place the points
     * x[left] ... x[right].
     * x[1] ... x[left-1] and x[right+1] ... x[n] are
     * already tentatively placed.
     * If place returns true, then x[left] ... x[right]
     * will have values.
     */
    bool place( vector & x, DistSet d, int n, int left, int right )
    {
        int dmax;
        bool found = false;

        if( d.isEmpty( ) )
            return true;
        dmax = d.findMax( );

        // Check if setting x[right] = dmax is feasible.
        if( |x[j] - dmax| is in d for all 1 <= j < left and right < j <= n )
        {
            x[ right ] = dmax;                   // Try x[right] = dmax
            for( 1 <= j < left, right < j <= n )
                d.remove( |x[j] - dmax| );
            found = place( x, d, n, left, right - 1 );

            if( !found )                         // Backtrack
                for( 1 <= j < left, right < j <= n )   // Undo the deletion
                    d.insert( |x[j] - dmax| );
        }

        // If the first attempt failed, try to see if setting
        // x[left] = x[n] - dmax is feasible.
        if( !found && |x[n] - dmax - x[j]| is in d
                          for all 1 <= j < left and right < j <= n )
        {
            x[ left ] = x[ n ] - dmax;           // Same logic as before
            for( 1 <= j < left, right < j <= n )
                d.remove( |x[n] - dmax - x[j]| );
            found = place( x, d, n, left + 1, right );

            if( !found )                         // Backtrack
                for( 1 <= j < left, right < j <= n )   // Undo the deletion
                    d.insert( |x[n] - dmax - x[j]| );
        }
        return found;
    }

Figure: Turnpike reconstruction algorithm: backtracking steps (pseudocode).
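Since the figures above are pseudocode, here is a minimal runnable C++ sketch of the same backtracking strategy, assuming std::multiset<int> as the distance set. The names (placePoints, turnpike) and the 0-based indexing are ours, not the book's.

    #include <cstdlib>
    #include <iostream>
    #include <iterator>
    #include <set>
    #include <vector>
    using namespace std;

    // Place points x[left..right]; x[0..left-1] and x[right+1..n-1]
    // are already tentatively placed. d holds unexplained distances.
    static bool placePoints( vector<int> & x, multiset<int> & d,
                             int left, int right )
    {
        if( d.empty( ) )
            return true;
        if( left > right )
            return false;          // Points exhausted, distances remain

        int n = x.size( );
        int dmax = *d.rbegin( );

        // Two candidates: x[right] = dmax, or x[left] = x[n-1] - dmax.
        for( int candidate : { dmax, x[ n - 1 ] - dmax } )
        {
            vector<int> removed;   // Deleted distances, for backtracking
            bool feasible = true;

            for( int j = 0; j < n && feasible; ++j )
            {
                if( j >= left && j <= right )
                    continue;      // x[j] is not placed yet
                auto itr = d.find( abs( candidate - x[ j ] ) );
                if( itr == d.end( ) )
                    feasible = false;   // A required distance is missing
                else
                {
                    removed.push_back( *itr );
                    d.erase( itr );     // Remove one occurrence
                }
            }

            if( feasible )
            {
                if( candidate == dmax )
                {
                    x[ right ] = candidate;
                    if( placePoints( x, d, left, right - 1 ) )
                        return true;
                }
                else
                {
                    x[ left ] = candidate;
                    if( placePoints( x, d, left + 1, right ) )
                        return true;
                }
            }
            for( int dist : removed )   // Backtrack: undo the deletions
                d.insert( dist );
        }
        return false;
    }

    bool turnpike( vector<int> & x, multiset<int> d, int n )
    {
        x.assign( n, 0 );
        x[ n - 1 ] = *d.rbegin( );      // Largest distance fixes x[n-1]
        d.erase( prev( d.end( ) ) );
        x[ n - 2 ] = *d.rbegin( );      // Next largest fixes x[n-2]
        d.erase( prev( d.end( ) ) );

        auto itr = d.find( x[ n - 1 ] - x[ n - 2 ] );
        if( itr == d.end( ) )
            return false;
        d.erase( itr );
        return placePoints( x, d, 1, n - 3 );
    }

On the worked example, a call such as turnpike( x, {1,2,2,2,3,3,3,4,5,5,5,6,7,8,10}, 6 ) should fill x with the solution 0, 3, 5, 6, 8, 10 found above.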
The analysis of the algorithm involves two factors. Suppose the backtracking steps are never executed. We can maintain D as a balanced binary search (or splay) tree (this would require a code modification, of course). If we never backtrack, there are at most O(n^2) operations involving D, such as deletions and the contains tests implied by the feasibility checks. This claim is obvious for deletions, since D has O(n^2) elements and no element is ever reinserted. Each call to place uses at most 2n contains tests, and since place never backtracks in this analysis, there can be at most 2n^2 contains tests. Thus, if there is no backtracking, the running time is O(n^2 log n).

Of course, backtracking happens, and if it happens repeatedly, then the performance of the algorithm is affected. This can be forced to happen by the construction of a pathological case. Experiments have shown that if the points have integer coordinates distributed uniformly and randomly from [0, Dmax], where Dmax = Theta(n^2), then, almost certainly, at most one backtrack is performed during the entire algorithm.

Games

As our last application, we will consider the strategy that a computer might use to play a strategic game, such as checkers or chess. We will use, as an example, the much simpler game of tic-tac-toe, because it makes the points easier to illustrate.

Tic-tac-toe is a draw if both sides play optimally. By performing a careful case-by-case analysis, it is not a difficult matter to construct an algorithm that never loses and always wins when presented with the opportunity. This can be done, because certain positions are known traps and can be handled by a lookup table. Other strategies, such as taking the center square when it is available, make the analysis simpler. If this is done, then by using a table we can always choose a move based only on the current position. Of course, this strategy requires the programmer, and not the computer, to do most of the thinking.

Minimax Strategy

The more general strategy is to use an evaluation function to quantify the "goodness" of a position. A position that is a win for the computer might get the value of +1; a draw could get 0; and a position that the computer has lost would get -1. A position for which this assignment can be determined by examining the board is known as a terminal position.

If a position is not terminal, the value of the position is determined by recursively assuming optimal play by both sides. This is known as a minimax strategy, because one player (the human) is trying to minimize the value of the position, while the other player (the computer) is trying to maximize it. A successor position of P is any position Ps that is reachable from P by playing one move. If the computer is to move when in some position P, it recursively evaluates the value of all the successor positions. The computer chooses the move with the largest value; this is the value of P. To evaluate any successor position Ps, all of Ps's successors are recursively evaluated, and the smallest value is chosen. This smallest value represents the most favorable reply for the human player. The code in the next figure makes the computer's strategy more clear. The first tests evaluate immediate wins or draws; if neither of these cases applies, then the position is nonterminal.
    /**
     * Recursive function to find best move for computer.
     * Returns the evaluation and sets bestMove, which
     * ranges from 1 to 9 and indicates the best square
     * to occupy. Possible evaluations satisfy
     * COMP_LOSS < DRAW < COMP_WIN.
     * Complementary function findHumanMove is shown in
     * the next figure.
     */
    int TicTacToe::findCompMove( int & bestMove )
    {
        int i, responseValue;
        int dc;   // dc means don't care; its value is unused
        int value;

        if( fullBoard( ) )
            value = DRAW;
        else if( immediateCompWin( bestMove ) )
            return COMP_WIN;   // bestMove will be set by immediateCompWin
        else
        {
            value = COMP_LOSS; bestMove = 1;

            for( i = 1; i <= 9; i++ )   // Try each square
            {
                if( isEmpty( i ) )
                {
                    place( i, COMP );
                    responseValue = findHumanMove( dc );
                    unplace( i );   // Restore board

                    if( responseValue > value )
                    {
                        // Update best move
                        value = responseValue;
                        bestMove = i;
                    }
                }
            }
        }
        return value;
    }

Figure: Minimax tic-tac-toe algorithm: computer selection.
Recalling that value should contain the maximum of all possible successor positions, it is initialized to the smallest possible value, and the main loop searches for improvements. Each successor position is recursively evaluated in turn. This is recursive, because, as we will see, findHumanMove calls findCompMove. If the human's response to a move leaves the computer with a more favorable position than that obtained with the previously best computer move, then the value and bestMove are updated.

The next figure shows the function for the human's move selection. The logic is virtually identical, except that the human player chooses the move that leads to the lowest-valued position.

    int TicTacToe::findHumanMove( int & bestMove )
    {
        int i, responseValue;
        int dc;   // dc means don't care; its value is unused
        int value;

        if( fullBoard( ) )
            value = DRAW;
        else if( immediateHumanWin( bestMove ) )
            return COMP_LOSS;
        else
        {
            value = COMP_WIN; bestMove = 1;

            for( i = 1; i <= 9; i++ )   // Try each square
            {
                if( isEmpty( i ) )
                {
                    place( i, HUMAN );
                    responseValue = findCompMove( dc );
                    unplace( i );   // Restore board

                    if( responseValue < value )
                    {
                        // Update best move
                        value = responseValue;
                        bestMove = i;
                    }
                }
            }
        }
        return value;
    }

Figure: Minimax tic-tac-toe algorithm: human selection.

Indeed, it is not difficult to combine these two procedures into one by passing an extra variable, which indicates whose turn it is to move; a sketch of such a combination follows.
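As a concrete illustration of that combination, here is a self-contained C++ sketch, ours rather than the book's (it uses 0-based squares and folds the immediate win/draw tests into plain terminal checks): a single routine, findMove, serves both players via a side parameter.

    #include <array>
    #include <iostream>
    using namespace std;

    enum { COMP_LOSS = -1, DRAW = 0, COMP_WIN = 1 };
    enum Player { EMPTY = 0, COMP = 1, HUMAN = 2 };

    static array<int,9> board{ };   // 3x3 board, squares 0..8

    static bool wins( int s )       // Does side s have three in a row?
    {
        static const int L[8][3] = { {0,1,2},{3,4,5},{6,7,8},{0,3,6},
                                     {1,4,7},{2,5,8},{0,4,8},{2,4,6} };
        for( auto & l : L )
            if( board[l[0]] == s && board[l[1]] == s && board[l[2]] == s )
                return true;
        return false;
    }

    static bool fullBoard( )
    {
        for( int sq : board )
            if( sq == EMPTY )
                return false;
        return true;
    }

    // Combined findCompMove/findHumanMove: COMP maximizes, HUMAN minimizes.
    static int findMove( int side, int & bestMove )
    {
        if( wins( COMP ) )  return COMP_WIN;    // Terminal positions
        if( wins( HUMAN ) ) return COMP_LOSS;
        if( fullBoard( ) )  return DRAW;

        int opp   = ( side == COMP ) ? HUMAN : COMP;
        int value = ( side == COMP ) ? COMP_LOSS - 1 : COMP_WIN + 1;

        for( int i = 0; i < 9; ++i )
            if( board[ i ] == EMPTY )
            {
                board[ i ] = side;               // Place
                int dc;                          // Don't care
                int response = findMove( opp, dc );
                board[ i ] = EMPTY;              // Restore board

                if( side == COMP ? response > value : response < value )
                {
                    value = response;
                    bestMove = i;
                }
            }
        return value;
    }

    int main( )
    {
        int bestMove = -1;
        cout << "value " << findMove( COMP, bestMove )
             << ", move " << bestMove << endl;  // 0 (draw) from empty board
        return 0;
    }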
This does make the code somewhat less readable, so we have stayed with separate routines. We leave the supporting routines as an exercise. The most costly computation is the case where the computer is asked to pick the opening move. Since at this stage the game is a forced draw, the computer selects square 1. (We numbered the squares starting from the top left and moving right; however, this is only important for the supporting routines.) A total of 97,162 positions were examined, and the calculation took a few seconds. No attempt was made to optimize the code. When the computer moves second, the number of positions examined is 5,185 if the human selects the center square, 9,761 when a corner square is selected, and 13,233 when a noncorner edge square is selected.

For more complex games, such as checkers and chess, it is obviously infeasible to search all the way to the terminal nodes. (It is estimated that if this search were conducted for chess, at least 10^100 positions would be examined for the first move. Even if the improvements described later in this section were incorporated, this number could not be reduced to a practical level.) In this case, we have to stop the search after a certain depth of recursion is reached. The nodes where the recursion is stopped become terminal nodes. These terminal nodes are evaluated with a function that estimates the value of the position. For instance, in a chess program, the evaluation function measures such variables as the relative amount and strength of pieces and positional factors. The evaluation function is crucial for success, because the computer's move selection is based on maximizing this function. The best computer chess programs have surprisingly sophisticated evaluation functions.

Nevertheless, for computer chess, the single most important factor seems to be the number of moves of look-ahead the program is capable of. This is sometimes known as ply; it is equal to the depth of the recursion. To implement this, an extra parameter is given to the search routines.

The basic method to increase the look-ahead factor in game programs is to come up with methods that evaluate fewer nodes without losing any information. One method, which we have already seen, is to use a table to keep track of all positions that have been evaluated. For instance, in the course of searching for the first move, the program will examine the positions in the figure below. If the values of the positions are saved, the second occurrence of a position need not be recomputed; it essentially becomes a terminal position. The data structure that records this is known as a transposition table; it is almost always implemented by hashing. In many cases, this can save considerable computation. For instance, in a chess endgame, where there are relatively few pieces, the time savings can allow a search to go several levels deeper.

Alpha-Beta Pruning

Probably the most significant improvement one can obtain in general is known as alpha-beta pruning. The figures that follow show the trace of the recursive calls used to evaluate a hypothetical position in a hypothetical game. This is commonly referred to as a game tree. (We have avoided the use of this term until now, because it is somewhat misleading: no
Figure: Two searches that arrive at an identical position.

tree is actually constructed by the algorithm. The game tree is just an abstract concept.) The value of the game tree is the value computed at the root.

The second of the two game-tree figures shows the evaluation of the same game tree with several (but not all possible) unevaluated nodes. Almost half of the terminal nodes have not been checked. We show that evaluating them would not change the value at the root.

First, consider node D. The relevant figure shows the information that has been gathered when it is time to evaluate D. At this point, we are still in findHumanMove and are contemplating a call to findCompMove on D. However, we already know that findHumanMove, a min node, will return at most the tentative minimum it has already established. On the other hand, its max node parent has already found a sequence that guarantees at least this value, so nothing D does can possibly increase the value at the parent. Therefore, D does not need to be evaluated. This pruning of the tree is known as alpha pruning. An identical situation occurs at node B. To implement alpha pruning, findCompMove passes its tentative maximum (alpha) to findHumanMove. If the tentative minimum of findHumanMove falls below this value, then findHumanMove returns immediately.

A similar thing happens at nodes A and C. This time, we are in the middle of a findCompMove and are about to make a call to findHumanMove to evaluate C. The corresponding figure shows the situation that is encountered at node C.

Figure: A hypothetical game tree.
Figure: A pruned game tree.
Figure: The node marked D is unimportant.
Figure: The node marked C is unimportant.

However, the findHumanMove at the min level, which has called findCompMove, has already determined that it can force a value no larger than findCompMove's tentative maximum (recall that low values are good for the human side). Since findCompMove's tentative maximum cannot decrease, nothing C does will affect the result at the min level. Therefore, C should not be evaluated. This type of pruning is known as beta pruning; it is the symmetric version of alpha pruning. When both techniques are combined, we have alpha-beta pruning.
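To see the cutoff mechanics in isolation before the book's tic-tac-toe version (next figure), here is a small self-contained C++ sketch, ours rather than the book's, that runs alpha-beta pruning over an explicit game tree with toy leaf values and counts how many nodes it visits.

    #include <iostream>
    #include <limits>
    #include <vector>
    using namespace std;

    struct Node
    {
        int value;              // Used only at leaves
        vector<Node> children;  // Empty for leaves
    };

    static int nodesEvaluated = 0;

    // Plain alpha-beta: the max player moves when maximizing == true.
    int alphaBeta( const Node & n, int alpha, int beta, bool maximizing )
    {
        ++nodesEvaluated;
        if( n.children.empty( ) )
            return n.value;

        int value = maximizing ? numeric_limits<int>::min( )
                               : numeric_limits<int>::max( );
        for( const Node & child : n.children )
        {
            int v = alphaBeta( child, alpha, beta, !maximizing );
            if( maximizing )
                { value = max( value, v ); alpha = max( alpha, v ); }
            else
                { value = min( value, v ); beta = min( beta, v ); }
            if( alpha >= beta )   // Cutoff: the other player avoids this line
                break;
        }
        return value;
    }

    int main( )
    {
        // A max root with two min children; toy leaf values.
        // The second min node is cut off after its first leaf.
        Node tree{ 0, { { 0, { { 44, {} }, { 68, {} } } },
                        { 0, { { 40, {} }, { 99, {} } } } } };

        cout << alphaBeta( tree, numeric_limits<int>::min( ),
                           numeric_limits<int>::max( ), true )
             << " after " << nodesEvaluated << " nodes\n";
        // Prints 44 after 6 nodes; the leaf 99 is never examined,
        // because the min node holding it can already force <= 40
        // while the root is guaranteed 44.
        return 0;
    }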
    /**
     * Same as before, but perform alpha-beta pruning.
     * The main routine should make the call with
     * alpha = COMP_LOSS and beta = COMP_WIN.
     */
    int TicTacToe::findCompMove( int & bestMove, int alpha, int beta )
    {
        int i, responseValue;
        int dc;   // dc means don't care; its value is unused
        int value;

        if( fullBoard( ) )
            value = DRAW;
        else if( immediateCompWin( bestMove ) )
            return COMP_WIN;   // bestMove will be set by immediateCompWin
        else
        {
            value = alpha; bestMove = 1;

            for( i = 1; i <= 9 && value < beta; i++ )   // Try each square
            {
                if( isEmpty( i ) )
                {
                    place( i, COMP );
                    responseValue = findHumanMove( dc, value, beta );
                    unplace( i );   // Restore board

                    if( responseValue > value )
                    {
                        // Update best move
                        value = responseValue;
                        bestMove = i;
                    }
                }
            }
        }
        return value;
    }

Figure: Minimax tic-tac-toe algorithm with alpha-beta pruning: computer selection.

Implementing alpha-beta pruning requires surprisingly little code. The figure above shows half of the alpha-beta pruning scheme (minus type declarations); you should have no trouble coding the other half. To take full advantage of alpha-beta pruning, game programs usually try to apply the evaluation function to nonterminal nodes, in an attempt to place the best moves early in the
search. The result is even more pruning than one would expect from a random ordering of the nodes. Other techniques, such as searching deeper in more active lines of play, are also employed.

In practice, alpha-beta pruning limits the searching to only O(sqrt(N)) nodes, where N is the size of the full game tree. This is a huge savings and means that searches using alpha-beta pruning can go twice as deep as compared to an unpruned tree. Our tic-tac-toe example is not ideal, because there are so many identical values, but even so, the initial search of 97,162 nodes is reduced to 4,493 nodes. (These counts include nonterminal nodes.)

In many games, computers are among the best players in the world. The techniques used are very interesting, and can be applied to more serious problems. More details can be found in the references.

Summary

This chapter illustrates five of the most common techniques found in algorithm design. When confronted with a problem, it is worthwhile to see if any of these methods apply. A proper choice of algorithm, combined with judicious use of data structures, can often lead quickly to efficient solutions.

Exercises

- Show that the greedy algorithm to minimize the mean completion time for multiprocessor job scheduling works.
- The input is a set of jobs j1, j2, ..., jn, each of which takes one time unit to complete. Each job ji earns di dollars if it is completed by the time limit ti, but no money if completed after the time limit.
  (a) Give an O(n^2) greedy algorithm to solve the problem.
  (b) Modify your algorithm to obtain an O(n log n) time bound. (Hint: The time bound is due entirely to sorting the jobs by money. The rest of the algorithm can be implemented, using the disjoint set data structure, in o(n log n).)
- A file contains only colons, spaces, newlines, commas, and the digits 0 through 9, each with a given frequency. Construct the Huffman code.
- Part of the encoded file must be a header indicating the Huffman code. Give a method for constructing a header of size at most O(N) (in addition to the symbols), where N is the number of symbols.
- Complete the proof that Huffman's algorithm generates an optimal prefix code.
- Show that if the symbols are sorted by frequency, Huffman's algorithm can be implemented in linear time.
- Write a program to implement file compression (and uncompression) using Huffman's algorithm.
- Show that any online bin packing algorithm can be forced to use at least 5/3 the optimal number of bins, by considering the following sequence of items: n items of size 1/7 + eps, followed by n items of size 1/3 + eps, followed by n items of size 1/2 + eps.
- Give a simple analysis to show the performance bound for first fit decreasing bin packing when
  (a) the smallest item size is larger than 1/3;
  (b) the smallest item size is larger than 1/4;
  (c) the smallest item size is smaller than a given threshold.
- Explain how to implement first fit and best fit in O(n log n) time.
- Show the operation of all the bin packing strategies discussed in this chapter on the given sample input.
- Write a program that compares the performance (both in time and number of bins used) of the various bin packing heuristics.
- Prove the two bin packing theorems stated earlier in the chapter.
- n points are placed in a unit square. Show that the distance between the closest pair is O(n^(-1/2)).
- Argue that, for the closest-points algorithm, the average number of points in the strip is O(sqrt(n)). (Hint: Use the result of the previous exercise.)
- Write a program to implement the closest-pair algorithm.
- What is the asymptotic running time of quickselect, using a median-of-median-of-three partitioning strategy?
- Show that quickselect with median-of-median-of-seven partitioning is linear. Why is median-of-median-of-seven partitioning not used in the proof?
- Implement the quickselect algorithm from the earlier chapter, quickselect using median-of-median-of-five partitioning, and the sampling algorithm described at the end of this chapter. Compare the running times.
- Much of the information used to compute the median-of-median-of-five is thrown away. Show how the number of comparisons can be reduced by more careful use of the information.
- Complete the analysis of the sampling algorithm described at the end of the chapter, and explain how its parameter values are chosen.
- Show how the recursive multiplication algorithm computes xy for a given pair of numbers x and y. Include all recursive computations.
- Show how to multiply two complex numbers a + bi and c + di using only three multiplications.
- (a) Show that

      xL yR + xR yL = (xL + xR)(yL + yR) - xL yL - xR yR
  (b) This gives an O(n^1.59) algorithm to multiply n-bit numbers. Compare this method to the solution in the text.
- (a) Show how to multiply two numbers by solving five problems that are roughly one-third of the original size.
  (b) Generalize this problem to obtain an O(n^(1+eps)) algorithm for any constant eps > 0.
  (c) Is the algorithm in part (b) better than O(n log n)?
- Why is it important that Strassen's algorithm does not use commutativity in the multiplication of 2 x 2 matrices?
- Two 70 x 70 matrices can be multiplied using 143,640 multiplications. Show how this can be used to improve the bound given by Strassen's algorithm.
- What is the optimal way to compute A1 A2 A3 A4 A5 A6, where the dimensions of the matrices are as given?
- Show that none of the following greedy algorithms for chained matrix multiplication work. At each step
  (a) compute the cheapest multiplication;
  (b) compute the most expensive multiplication;
  (c) compute the multiplication between the two matrices Mi and Mi+1, such that the number of columns in Mi is minimized (breaking ties by one of the rules above).
- Write a program to compute the best ordering of matrix multiplication. Include the routine to print out the actual ordering.
- Show the optimal binary search tree for a given small set of words (a, and, it, or, ...), where the frequency of occurrence of each word is given in parentheses.
- Extend the optimal binary search tree algorithm to allow for unsuccessful searches. In this case, qj, for 0 <= j <= n, is the probability that a search is performed for any word W satisfying wj < W < wj+1; q0 is the probability of performing a search for W < w1, and qn is the probability of performing a search for W > wn. Notice that

      sum_{i=1..n} pi + sum_{j=0..n} qj = 1

- Suppose Ci,i = 0 and that otherwise

      Ci,j = Wi,j + min over i < k <= j of (Ci,k-1 + Ck,j)

  Suppose that W satisfies the quadrangle inequality, namely, for all i <= i' <= j <= j',

      Wi,j + Wi',j' <= Wi',j + Wi,j'

  Suppose further that W is monotone: If i <= i' and j' <= j, then Wi',j' <= Wi,j.
  (a) Prove that C satisfies the quadrangle inequality.
  (b) Let Ri,j be the largest k that achieves the minimum Ci,k-1 + Ck,j. (That is, in case of ties, choose the largest k.) Prove that

      Ri,j <= Ri,j+1 <= Ri+1,j+1
  (c) Show that R is nondecreasing along each row and column.
  (d) Use this to show that all entries in C can be computed in O(n^2) time.
  (e) Which of the dynamic programming algorithms can be solved in O(n^2) using these techniques?
- Write a routine to reconstruct the shortest paths from the all-pairs shortest-path algorithm presented earlier.
- The binomial coefficients C(N, k) can be defined recursively as follows: C(N, 0) = 1, C(N, N) = 1, and, for 0 < k < N, C(N, k) = C(N - 1, k) + C(N - 1, k - 1). Write a function and give an analysis of the running time to compute the binomial coefficients as follows:
  (a) recursively;
  (b) using dynamic programming.
- Write the routines to perform insertion, deletion, and searching in skip lists.
- Give a formal proof that the expected time for the skip list operations is O(log n).
- (a) Examine the random-number generator on your system. How random is it?
  (b) The figure below shows a routine to flip a coin, assuming that random returns an integer (which is prevalent in many systems). What is the expected performance of the skip list algorithms if the random-number generator uses a modulus of the form M = 2^B (which is unfortunately prevalent on many systems)?
- (a) Use the exponentiation algorithm to prove that 2^340 = 1 (mod 341).
  (b) Show how the randomized primality test works for N = 561 with several choices of A.
- Implement the turnpike reconstruction algorithm.
- Two point sets are homometric if they yield the same distance set and are not rotations of each other. The distance set given in the original exercise yields two distinct point sets; find the two point sets. Then extend the reconstruction algorithm to find all homometric point sets given a distance set.
- Show the result of alpha-beta pruning on the tree in the figure below.
- (a) Does the code in the alpha-beta pruning figure implement alpha pruning or beta pruning?
  (b) Implement the complementary routine.

    CoinSide flip( )
    {
        if( random( ) % 2 == 0 )
            return HEADS;
        else
            return TAILS;
    }

Figure: Questionable coin flipper.
23,814 | algorithm design techniques max min max min max figure game treewhich can be pruned write the remaining procedures for tic-tac-toe the one-dimensional circle packing problem is as followsyou have circles of radii rn these circles are packed in box such that each circle is tangent to the bottom of the box and are arranged in the original order the problem is to find the width of the minimum-sized box figure shows an example with circles of radii respectively the minimum-sized box has width suppose that the edges in an undirected graph satisfy the triangle inequalitycu, cv, >cu, show how to compute traveling salesman tour of cost at most twice optimal (hintconstruct minimum spanning tree you are tournament director and need to arrange round robin tournament among players in this tournamenteveryone plays exactly one game each dayafter daysa match has occurred between every pair of players give recursive algorithm to do this prove that in round robin tournament it is always possible to arrange the players in an order pi pi pin such that for all < npij has won the match against pij+ figure sample for circle packing problem |
Figure: A Voronoi diagram.

  (b) Give an O(n log n) algorithm to find one such arrangement. Your algorithm may serve as a proof for part (a).
- We are given a set P = p1, p2, ..., pn of n points in a plane. A Voronoi diagram is a partition of the plane into n regions Ri, such that all points in Ri are closer to pi than to any other point in P. The figure above shows a sample Voronoi diagram for seven (nicely arranged) points. Give an O(n log n) algorithm to construct the Voronoi diagram.
- A convex polygon is a polygon with the property that any line segment whose endpoints are on the polygon lies entirely within the polygon. The convex hull problem consists of finding the smallest (area) convex polygon that encloses a set of points in the plane. The figure below shows the convex hull for a set of points. Give an O(n log n) algorithm to find the convex hull.

Figure: Example of a convex hull.
- Consider the problem of right-justifying a paragraph. The paragraph contains a sequence of words w1, w2, ..., wn of lengths a1, a2, ..., an, which we wish to break into lines of length L. Words are separated by blanks whose ideal length is b (millimeters), but blanks can stretch or shrink as necessary (but must be > 0), so that a line wi wi+1 ... wj has length exactly L. However, for each blank b' we charge |b' - b| ugliness points. The exception to this is the last line, for which we charge only if b' < b (in other words, we charge only for shrinking), since the last line does not need to be justified. Thus, if bi is the length of the blank between ai and ai+1, then the ugliness of setting any line (but the last) wi wi+1 ... wj for j > i is

      sum_{k=i..j-1} |bk - b| = (j - i)|b' - b|

  where b' is the average size of a blank on this line. This is true of the last line only if b' < b; otherwise the last line is not ugly at all.
  (a) Give a dynamic programming algorithm to find the least ugly setting of w1, w2, ..., wn into lines of length L. (Hint: For i = n, n - 1, ..., 1, compute the best way to set wi, wi+1, ..., wn.)
  (b) Give the time and space complexities for your algorithm (as a function of the number of words, n).
  (c) Consider the special case where we are using a fixed-width font, and assume the optimal value of b is 1 (space). In this case, no shrinking of blanks is allowed, since the next smallest blank space would be 0. Give a linear-time algorithm to generate the least ugly setting for this case.
- The longest increasing subsequence problem is as follows: Given numbers a1, a2, ..., an, find the maximum value of k such that a_i1 < a_i2 < ... < a_ik and i1 < i2 < ... < ik. As an example, if the input is 3, 1, 4, 1, 5, 9, 2, 6, 5, the maximum increasing subsequence has length four (1, 4, 5, 9, among others). Give an O(n^2) algorithm to solve the longest increasing subsequence problem.
- The longest common subsequence problem is as follows: Given two sequences A = a1, a2, ..., am and B = b1, b2, ..., bn, find the length, k, of the longest sequence C = c1, c2, ..., ck such that C is a subsequence (not necessarily contiguous) of both A and B. As an example, if A = dynamic and B = programming, then the longest common subsequence is a, m, i and has length 3. Give an algorithm to solve the longest common subsequence problem. Your algorithm should run in O(mn) time.
- The pattern-matching problem is as follows: Given a string, S, of text, and a pattern, P, find the first occurrence of P in S. Approximate pattern matching allows k mismatches of three types:
  (1) A character can be in S that is not in P.
  (2) A character can be in P that is not in S.
  (3) P and S can differ in a position.
As an example, if we are searching for the pattern "textbook" with at most three mismatches in the string "data structures txtborpk", we find a match (insert an e, change an r to an o, delete a p). Give an O(mn) algorithm to solve the approximate string matching problem, where m = |P| and n = |S|.
- One form of the knapsack problem is as follows: We are given a set of integers, A = a1, a2, ..., an, and an integer, K. Is there a subset of A whose sum is exactly K?
  (a) Give an algorithm that solves the knapsack problem in O(nK) time.
  (b) Why does this not show that P = NP?
- You are given a currency system with coins of (decreasing) value c1, c2, ..., cn cents.
  (a) Give an algorithm that computes the minimum number of coins required to give K cents in change.
  (b) Give an algorithm that computes the number of different ways to give K cents in change.
- Consider the problem of placing eight queens on an (eight-by-eight) chess board. Two queens are said to attack each other if they are on the same row, column, or (not necessarily main) diagonal.
  (a) Give a randomized algorithm to place eight nonattacking queens on the board.
  (b) Give a backtracking algorithm to solve the same problem.
  (c) Implement both algorithms and compare the running time.
- In the game of chess, a knight in row R and column C may move to row R', 1 <= R' <= B, and column C', 1 <= C' <= B (where B is the size of the board), provided that either |R - R'| = 2 and |C - C'| = 1, or |R - R'| = 1 and |C - C'| = 2. A knight's tour is a sequence of moves that visits all squares exactly once before returning to the starting point.
  (a) If B is odd, show that a knight's tour cannot exist.
  (b) Give a backtracking algorithm to find a knight's tour.
- Consider the recursive algorithm in the figure below for finding the shortest weighted path in an acyclic graph, from s to t.
  (a) Why does this algorithm not work for general graphs?
  (b) Prove that this algorithm terminates for acyclic graphs.
  (c) What is the worst-case running time of the algorithm?
- Let A be an N-by-N matrix of zeroes and ones. A submatrix S of A is any group of contiguous entries that forms a square.
  (a) Design an O(N^2) algorithm that determines the size of the largest submatrix of ones in A. For instance, in the matrix given with the exercise, the largest submatrix is a four-by-four square.
    Distance Graph::shortest( s, t )
    {
        Distance dt, tmp;

        if( s == t )
            return 0;

        dt = INFINITY;
        for each Vertex v adjacent to s
        {
            tmp = shortest( v, t );
            if( c(s, v) + tmp < dt )
                dt = c(s, v) + tmp;
        }
        return dt;
    }

Figure: Recursive shortest-path algorithm (pseudocode).

  (b) Repeat part (a) if S is allowed to be a rectangle, instead of a square. Largest is measured by area.
- Even if the computer has a move that gives an immediate win, it may not make it if it detects another move that is also guaranteed to win. Some early chess programs had the problem that they would get into a repetition of position when a forced win was detected, thereby allowing the opponent to claim a draw. In tic-tac-toe, this is not a problem, because the program eventually will win. Modify the tic-tac-toe algorithm so that when a winning position is found, the move that leads to the shortest win is always taken. You can do this by adding a bonus that decreases with depth to COMP_WIN, so that the quickest win gives the highest value.
- Write a program to play five-by-five tic-tac-toe, where four in a row wins. Can you search to terminal nodes?
- The game of Boggle consists of a grid of letters and a word list. The object is to find words in the grid, subject to the constraint that two adjacent letters must be adjacent in the grid, and each item in the grid can be used, at most, once per word. Write a program to play Boggle.
- Write a program to play MAXIT. The board is represented as an N-by-N grid of numbers randomly placed at the start of the game. One position is designated
as the initial current position. Two players alternate turns. At each turn, a player must select a grid element in the current row or column. The value of the selected position is added to the player's score, and that position becomes the current position and cannot be selected again. Players alternate until all grid elements in the current row and column are already selected, at which point the game ends and the player with the higher score wins.
- Othello played on a six-by-six board is a forced win for black. Prove this by writing a program. What is the final score if play on both sides is optimal?

References

The original paper on Huffman codes is [ ]. Variations on the algorithm are discussed in [ ], [ ], and [ ]. Another popular compression scheme is Ziv-Lempel encoding [ ], [ ]; here the codes have a fixed length but represent strings instead of characters. [ ] and [ ] are good surveys of the common compression schemes.

The analysis of bin packing heuristics first appeared in Johnson's Ph.D. thesis and was published in [ ]. Improvements in the additive constants of the bounds for first fit and first fit decreasing were given in [ ] and [ ], respectively. The improved lower bound for online bin packing given in the exercises is from [ ]; this result has been improved further in [ ], [ ], and [ ]. [ ] describes another approach to online bin packing.

The bin packing theorem in the text is from [ ]. The closest-points algorithm appeared in [ ]. [ ] and [ ] describe the turnpike reconstruction problem and its applications. The exponential worst-case input was given by [ ]. Books on computational geometry include [ ], [ ], [ ], and [ ]. [ ] contains the lecture notes for a computational geometry course taught at MIT; it includes an extensive bibliography.

The linear-time selection algorithm appeared in [ ]. The best bound for selecting the median is currently roughly 2.95n comparisons [ ]. [ ] discusses the sampling approach that finds the median in roughly 1.5n expected comparisons. The O(n^1.59) multiplication is from [ ]. Generalizations are discussed in [ ] and [ ]. Strassen's algorithm appears in the short paper [ ]; the paper states the results and not much else. Pan [ ] gives several divide-and-conquer algorithms, including the one in the exercises. Coppersmith and Winograd [ ] gave an O(n^2.376) algorithm that was the best known for over two decades. This bound was silently lowered to O(n^2.3737) in 2010 by Stothers, and then to O(n^2.3727) by Vassilevska-Williams in 2011 [ ].

The classic references on dynamic programming are the books [ ] and [ ]. The matrix ordering problem was first studied in [ ]. It was shown in [ ] that the problem can be solved in O(n log n) time. An O(n^2) algorithm was provided for the construction of optimal binary search trees by Knuth [ ]. The all-pairs shortest-path algorithm is from Floyd [ ]. A theoretically better O(n^3 (log log n / log n)^(1/3)) algorithm is given by Fredman [ ], but not surprisingly, it is not practical. A slightly improved bound (with 1/2 instead of 1/3 in the exponent) is given in [ ], lowered further in [ ], and most recently in [ ]; see also [ ] for related results. For undirected graphs, the all-pairs problem can be solved in O(|E||V| log alpha(|E|, |V|)), where alpha was previously seen in the union/find analysis [ ]. Under certain conditions, the running time of dynamic programs can
automatically be improved by a factor of two or more. This is discussed in the exercises and in [ ] and [ ].

The discussion of random-number generators is based on [ ]. Park and Miller attribute the portable implementation to Schrage [ ]. The Mersenne Twister generator was proposed in [ ]; the subtract-with-carry generator was described in [ ]. Skip lists are discussed by Pugh in [ ]. An alternative, namely the treap, is discussed elsewhere in this book. The randomized primality-testing algorithm is due to Miller [ ] and Rabin [ ]. The theorem that at most (N - 9)/4 values of A fool the algorithm is from Monier [ ]. In 2002, a deterministic polynomial-time primality-testing algorithm was discovered [ ], and subsequently an improved algorithm with a faster running time was found [ ]. However, these algorithms are slower than the randomized algorithm. Other randomized algorithms are discussed in [ ]. More examples of randomization techniques can be found in [ ], [ ], and [ ].

More information on alpha-beta pruning can be found in [ ], [ ], and [ ]. The top programs that play chess, checkers, Othello, and backgammon all achieved world-class status in the 1990s. The world's leading checkers program, Chinook, has improved to the point that in 2007 it provably could not lose a game [ ]. [ ] describes an Othello program; the paper appears in a special issue on computer games (mostly chess), and this issue is a gold mine of ideas. One of the papers describes the use of dynamic programming to solve chess endgames completely when only a few pieces are left on the board. Related research resulted in a change (later revoked) of the 50-move rule in certain cases.

The homometric point sets exercise is solved in [ ]. Determining whether a homometric point set with no duplicate distances exists for n > 6 is open. Christofides [ ] gives a solution to the traveling salesman exercise, and also an algorithm that generates a tour at most 3/2 optimal. The round robin exercise is discussed in [ ]. The approximate pattern matching exercise is solved in [ ]; an O(kn) algorithm is given in [ ]. A further exercise is discussed in [ ], but do not be misled by the title of the paper.

Abramson, "Control Strategies for Two-Player Games," ACM Computing Surveys.
Aggarwal and Wein, Computational Geometry: Lecture Notes, MIT Laboratory for Computer Science.
Agrawal, Kayal, and Saxena, "PRIMES Is in P," Annals of Mathematics.
Alon, Galil, and Margalit, "On the Exponent of the All-Pairs Shortest Path Problem," Proceedings of the Thirty-second Annual Symposium on the Foundations of Computer Science.
Balogh, Bekesi, and Galambos, "New Lower Bounds for Certain Classes of Bin-Packing Algorithms," Theoretical Computer Science.
Bell, Witten, and Cleary, "Modeling for Text Compression," ACM Computing Surveys.
Bellman, Dynamic Programming, Princeton University Press, Princeton, N.J.
Bellman and Dreyfus, Applied Dynamic Programming, Princeton University Press, Princeton, N.J.
Bentley, Haken, and Saxe, "A General Method for Solving Divide-and-Conquer Recurrences," SIGACT News.
Bloom, "A Counterexample to the Theorem of Piccard," Journal of Combinatorial Theory.
Blum, Floyd, Pratt, Rivest, and Tarjan, "Time Bounds for Selection," Journal of Computer and System Sciences.
Borodin and Munro, The Computational Complexity of Algebraic and Numerical Problems, American Elsevier, New York.
Chang and Korsh, "Canonical Coin Changing and Greedy Solutions," Journal of the ACM.
Christofides, "Worst-Case Analysis of a New Heuristic for the Traveling Salesman Problem," Management Science Research Report, Carnegie-Mellon University, Pittsburgh, Pa.
Coppersmith and Winograd, "Matrix Multiplication via Arithmetic Progressions," Proceedings of the Nineteenth Annual ACM Symposium on the Theory of Computing.
Dor and Zwick, "Selecting the Median," SIAM Journal on Computing.
Dosa, "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) <= (11/9)OPT(I) + 6/9," Combinatorics, Algorithms, Probabilistic and Experimental Methodologies (ESCAPE).
Edelsbrunner, Algorithms in Combinatorial Geometry, Springer-Verlag, Berlin.
Eppstein, Galil, and Giancarlo, "Speeding Up Dynamic Programming," Proceedings of the Twenty-ninth Annual IEEE Symposium on the Foundations of Computer Science.
Floyd, "Algorithm 97: Shortest Path," Communications of the ACM.
Floyd and Rivest, "Expected Time Bounds for Selection," Communications of the ACM.
Fredman, "New Bounds on the Complexity of the Shortest Path Problem," SIAM Journal on Computing.
Godbole, "On Efficient Computation of Matrix Chain Products," IEEE Transactions on Computers.
Gupta, Smolka, and Bhaskar, "On Randomization in Sequential and Distributed Algorithms," ACM Computing Surveys.
Han and Takaoka, "An O(n^3 log log n / log^2 n) Time Algorithm for All Pairs Shortest Paths," Proceedings of the Thirteenth Scandinavian Symposium and Workshops on Algorithm Theory.
Hu and Shing, "Computation of Matrix Chain Products, Part II," SIAM Journal on Computing.
Huffman, "A Method for the Construction of Minimum Redundancy Codes," Proceedings of the IRE.
Johnson, Demers, Ullman, Garey, and Graham, "Worst-Case Performance Bounds for Simple One-Dimensional Packing Algorithms," SIAM Journal on Computing.
Karatsuba and Ofman, "Multiplication of Multi-digit Numbers on Automata," Doklady Akademii Nauk SSSR.
Karger, "Random Sampling in Graph Optimization Problems," Ph.D. thesis, Stanford University.
Knuth, The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, Addison-Wesley, Reading, Mass.
Knuth, "Optimum Binary Search Trees," Acta Informatica.
Knuth, "An Analysis of Alpha-Beta Cutoffs," Artificial Intelligence.
Knuth, TeX and Metafont: New Directions in Typesetting, Digital Press, Bedford, Mass.
Knuth, "Dynamic Huffman Coding," Journal of Algorithms.
Knuth and Moore, "Estimating the Efficiency of Backtrack Programs," Mathematics of Computation.
Landau and Vishkin, "Introducing Efficient Parallelism into Approximate String Matching and a New Serial Algorithm," Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing.
Larmore, "Height-Restricted Optimal Binary Trees," SIAM Journal on Computing.
Larmore and Hirschberg, "A Fast Algorithm for Optimal Length-Limited Huffman Codes," Journal of the ACM.
Lee and Mahajan, "The Development of a World Class Othello Program," Artificial Intelligence.
Lelewer and Hirschberg, "Data Compression," ACM Computing Surveys.
Lenstra, Jr., and Pomerance, "Primality Testing with Gaussian Periods," manuscript.
Liang, "A Lower Bound for On-line Bin Packing," Information Processing Letters.
Marsaglia and Zaman, "A New Class of Random-Number Generators," The Annals of Applied Probability.
Matsumoto and Nishimura, "Mersenne Twister: A 623-Dimensionally Equidistributed Uniform Pseudo-Random Number Generator," ACM Transactions on Modeling and Computer Simulation (TOMACS).
Miller, "Riemann's Hypothesis and Tests for Primality," Journal of Computer and System Sciences.
Monier, "Evaluation and Comparison of Two Efficient Probabilistic Primality Testing Algorithms," Theoretical Computer Science.
Motwani and Raghavan, Randomized Algorithms, Cambridge University Press, New York.
Mulmuley, Computational Geometry: An Introduction through Randomized Algorithms, Prentice Hall, Englewood Cliffs, N.J.
O'Rourke, Computational Geometry in C, Cambridge University Press, New York.
Pan, "Strassen's Algorithm Is Not Optimal," Proceedings of the Nineteenth Annual IEEE Symposium on the Foundations of Computer Science.
Park and Miller, "Random-Number Generators: Good Ones Are Hard to Find," Communications of the ACM. (See also the Technical Correspondence that followed.)
Pettie and Ramachandran, "A Shortest Path Algorithm for Undirected Graphs," SIAM Journal on Computing.
Preparata and Shamos, Computational Geometry: An Introduction, Springer-Verlag, New York.
Pugh, "Skip Lists: A Probabilistic Alternative to Balanced Trees," Communications of the ACM.
Rabin, "Probabilistic Algorithms," in Algorithms and Complexity: Recent Results and New Directions (J. Traub, ed.), Academic Press, New York.
Rabin, "Probabilistic Algorithms for Testing Primality," Journal of Number Theory.
Ramanan, Brown, Lee, and Lee, "On-line Bin Packing in Linear Time," Journal of Algorithms.
Schaeffer, Burch, Bjornsson, Kishimoto, Muller, Lake, Lu, and Sutphen, "Checkers Is Solved," Science.
Shamos and Hoey, "Closest-Point Problems," Proceedings of the Sixteenth Annual IEEE Symposium on the Foundations of Computer Science.
Schrage, "A More Portable Fortran Random-Number Generator," ACM Transactions on Mathematical Software.
Skiena, Smith, and Lemke, "Reconstructing Sets from Interpoint Distances," Proceedings of the Sixth Annual ACM Symposium on Computational Geometry.
Strassen, "Gaussian Elimination Is Not Optimal," Numerische Mathematik.
Takaoka, "A New Upper Bound on the Complexity of the All-Pairs Shortest Path Problem," Information Processing Letters.
van Vliet, "An Improved Lower Bound for On-line Bin-Packing Algorithms," Information Processing Letters.
Vassilevska-Williams, "Multiplying Matrices Faster than Coppersmith-Winograd," Proceedings of the Forty-fourth Symposium on Theory of Computing.
Wagner and Fischer, "The String-to-String Correction Problem," Journal of the ACM.
Xia and Tan, "Tighter Bounds of the First Fit Algorithm for the Bin-Packing Problem," Discrete Applied Mathematics.
Yao, "New Algorithms for Bin Packing," Journal of the ACM.
Yao, "Efficient Dynamic Programming Using Quadrangle Inequalities," Proceedings of the Twelfth Annual ACM Symposium on the Theory of Computing.
Zhang, "An Exponential Example for a Partial Digest Mapping Algorithm," Journal of Computational Molecular Biology.
Ziv and Lempel, "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory.
Ziv and Lempel, "Compression of Individual Sequences via Variable-Rate Coding," IEEE Transactions on Information Theory.
Zwick, "A Slightly Improved Sub-cubic Algorithm for the All-Pairs Shortest Paths Problem with Real Edge Lengths," Proceedings of the Fifteenth International Symposium on Algorithms and Computation.
Amortized Analysis

In this chapter, we will analyze the running times for several of the advanced data structures that have been presented in earlier chapters. In particular, we will consider the worst-case running time for any sequence of M operations. This contrasts with the more typical analysis, in which a worst-case bound is given for any single operation.

As an example, we have seen that AVL trees support the standard tree operations in O(log n) worst-case time per operation. AVL trees are somewhat complicated to implement, not only because there are a host of cases, but also because height balance information must be maintained and updated correctly. The reason that AVL trees are used is that a sequence of Theta(n) operations on an unbalanced search tree could require Theta(n^2) time, which would be expensive. For search trees, the O(n) worst-case running time of an operation is not the real problem. The major problem is that this could happen repeatedly. Splay trees offer a pleasant alternative. Although any operation can still require Theta(n) time, this degenerate behavior cannot occur repeatedly, and we can prove that any sequence of M operations takes O(M log n) worst-case time (total). Thus, in the long run this data structure behaves as though each operation takes O(log n). We call this an amortized time bound.

Amortized bounds are weaker than the corresponding worst-case bounds, because there is no guarantee for any single operation. Since this is generally not important, we are willing to sacrifice the bound on a single operation, if we can retain the same bound for the sequence of operations and at the same time simplify the data structure. Amortized bounds are stronger than the equivalent average-case bound. For instance, binary search trees have O(log n) average time per operation, but it is still possible for a sequence of M operations to take O(Mn) time.

Because deriving an amortized bound requires us to look at an entire sequence of operations instead of just one, we expect that the analysis will be more tricky. We will see that this expectation is generally realized.

In this chapter, we will:
- Analyze the binomial queue operations.
- Analyze skew heaps.
- Introduce and analyze the Fibonacci heap.
- Analyze splay trees.
An Unrelated Puzzle

Consider the following puzzle: Two kittens are placed on opposite ends of a football field, 100 yards apart. They walk toward each other at the speed of 10 yards per minute. At the same time, their mother is at one end of the field. She can run at 100 yards per minute. The mother runs from one kitten to the other, making turns with no loss of speed, until the kittens (and thus the mother) meet at midfield. How far does the mother run?

It is not hard to solve this puzzle with a brute-force calculation. We leave the details to you, but one expects that this calculation will involve computing the sum of an infinite geometric series. Although this straightforward calculation will lead to an answer, it turns out that a much simpler solution can be arrived at by introducing an extra variable, namely, time. Because the kittens are 100 yards apart and approach each other at a combined velocity of 20 yards per minute, it takes them five minutes to get to midfield. Since the mother runs 100 yards per minute, her total is 500 yards.

This puzzle illustrates the point that sometimes it is easier to solve a problem indirectly than directly. The amortized analyses that we will perform will use this idea. We will introduce an extra variable, known as the potential, to allow us to prove results that seem very difficult to establish otherwise.

Binomial Queues

The first data structure we will look at is the binomial queue of an earlier chapter, which we now review briefly. Recall that a binomial tree B0 is a one-node tree, and for k > 0, the binomial tree Bk is built by melding two binomial trees Bk-1 together. Binomial trees B0 through B4 are shown in the figure below.

The rank of a node in a binomial tree is equal to the number of children; in particular, the rank of the root of Bk is k. A binomial queue is a collection of heap-ordered binomial trees in which there can be at most one binomial tree Bk for any k. Two binomial queues, H1 and H2, are shown in the next figure.

The most important operation is merge. To merge two binomial queues, an operation similar to addition of binary integers is performed: At any stage, we may have zero, one, two, or possibly three Bk trees, depending on whether or not the two priority queues contain a Bk tree and whether or not a Bk tree is carried over from the previous step. If there is zero or one Bk tree, it is placed as a tree in the resultant binomial queue. If there are two Bk trees, they are melded into a Bk+1 tree and carried over; if there are three Bk trees, one is placed as a tree in the binomial queue and the other two are melded and carried over. The result of merging H1 and H2 is shown in a figure below.

Insertion is performed by creating a one-node binomial queue and performing a merge. The time to do this is m + 1, where m represents the smallest type of binomial tree Bm not present in the binomial queue. Thus, insertion into a binomial queue that has a B0 tree but no B1 tree requires two steps. Deletion of the minimum is accomplished by removing the
Figure: Binomial trees B0, B1, B2, B3, and B4.
Figure: Two binomial queues H1 and H2.
Figure: Binomial queue H3: the result of merging H1 and H2.
minimum and splitting the original binomial queue into two binomial queues, which are then merged. A less terse explanation of these operations is given in an earlier chapter.

We consider a very simple problem first. Suppose we want to build a binomial queue of n elements. We know that building a binary heap of n elements can be done in O(n), so we expect a similar bound for binomial queues.

Claim: A binomial queue of n elements can be built by n successive insertions in O(n) time.

The claim, if true, would give an extremely simple algorithm. Since the worst-case time for each insertion is O(log n), it is not obvious that the claim is true. Recall that if this algorithm were applied to binary heaps, the running time would be O(n log n).

To prove the claim, we could do a direct calculation. To measure the running time, we define the cost of each insertion to be one time unit plus an extra unit for each linking step. Summing this cost over all insertions gives the total running time. This total is n units plus the total number of linking steps. The 1st, 3rd, 5th, and all odd-numbered steps require no linking steps, since there is no B0 present at the time of insertion. Thus, half the insertions require no linking steps. A quarter of the insertions require only one linking step (the 2nd, 6th, 10th, and so on); an eighth require two; and so on. We could add this all up and bound the number of linking steps by n, proving the claim. This brute-force calculation will not help when we try to analyze a sequence of operations that include more than just insertions, so we will use another approach to prove this result.

Consider the result of an insertion. If there is no B0 tree present at the time of the insertion, then the insertion costs a total of one unit, using the same accounting as above. The result of the insertion is that there is now a B0 tree, and thus we have added one tree to the forest of binomial trees. If there is a B0 tree but no B1 tree, then the insertion costs two units. The new forest will have a B1 tree but will no longer have a B0 tree, so the number of trees in the forest is unchanged. An insertion that costs three units will create a B2 tree but destroy a B0 and a B1 tree, yielding a net loss of one tree in the forest. In fact, it is easy to see that, in general, an insertion that costs c units results in a net increase of 2 - c trees in the forest, because a Bc-1 tree is created but all Bi trees, 0 <= i < c - 1, are removed. Thus, expensive insertions remove trees, while cheap insertions create trees.

Let c_i be the cost of the ith insertion. Let t_i be the number of trees after the ith insertion; t_0 = 0 is the number of trees initially. Then we have the invariant

    c_i + (t_i - t_{i-1}) = 2

We then have

    c_1 + (t_1 - t_0) = 2
    c_2 + (t_2 - t_1) = 2
    ...
    c_{n-1} + (t_{n-1} - t_{n-2}) = 2
    c_n + (t_n - t_{n-1}) = 2
If we add all these equations, most of the t_i terms cancel, leaving

    sum_{i=1..n} c_i + t_n - t_0 = 2n

or, equivalently,

    sum_{i=1..n} c_i = 2n - (t_n - t_0)

Recall that t_0 = 0 and that t_n, the number of trees after the n insertions, is certainly not negative, so (t_n - t_0) is not negative. Thus

    sum_{i=1..n} c_i <= 2n

which proves the claim. During the buildBinomialQueue routine, each insertion had a worst-case time of O(log n), but since the entire routine used at most 2n units of time, the insertions behaved as though each used no more than two units each.

This example illustrates the general technique we will use. The state of the data structure at any time is given by a function known as the potential. The potential function is not maintained by the program, but rather is an accounting device that will help with the analysis. When operations take less time than we have allocated for them, the unused time is "saved" in the form of a higher potential. In our example, the potential of the data structure is simply the number of trees. In the analysis above, when we have insertions that use only one unit instead of the two units that are allocated, the extra unit is saved for later by an increase in potential. When operations occur that exceed the allotted time, then the excess time is accounted for by a decrease in potential. One may view the potential as representing a savings account: If an operation uses less than its allotted time, the difference is saved for use later on by more expensive operations. The figure below shows the cumulative running time used by buildBinomialQueue over a sequence of insertions. Observe that the running time never exceeds 2n, and that the potential in the binomial queue after any insertion measures the amount of savings.

Once a potential function is chosen, we write the main equation:

    T_actual + Delta(Potential) = T_amortized

T_actual, the actual time of an operation, represents the exact (observed) amount of time required to execute a particular operation. In a binary search tree, for example, the actual time to perform a find(x) is 1 plus the depth of the node containing x. If we sum the basic equation over the entire sequence, and if the final potential is at least as large as the initial potential, then the amortized time is an upper bound on the actual time used during the execution of the sequence. Notice that while T_actual varies from operation to operation, T_amortized is stable.

Picking a potential function that proves a meaningful bound is a very tricky task; there is no one method that is used. Generally, many potential functions are tried before the one
Figure: A sequence of n inserts: total time and total potential.

that works is found. Nevertheless, the discussion above suggests a few rules, which tell us the properties that good potential functions have. The potential function should always assume its minimum at the start of the sequence. A popular method of choosing potential functions is to ensure that the potential function is initially 0 and always nonnegative; all the examples that we will encounter use this strategy. Another rule of thumb is to cancel a term in the actual time: In our case, if the actual cost was c, then the potential change was 2 - c. When these are added, an amortized cost of 2 is obtained. This is shown in the next figure.

We can now perform a complete analysis of binomial queue operations.

Theorem: The amortized running times of insert, deleteMin, and merge are O(1), O(log n), and O(log n), respectively, for binomial queues.

Proof: The potential function is the number of trees. The initial potential is 0, and the potential is always nonnegative, so the amortized time is an upper bound on the actual time. The analysis for insert follows from the argument above. For merge, assume the two queues have n1 and n2 nodes with t1 and t2 trees, respectively. Let n = n1 + n2. The actual time to perform the merge is O(log n1 + log n2) = O(log n). After the merge, there can be at most log n trees, so the potential can increase by at most O(log n). This gives an amortized bound of O(log n). The deleteMin bound follows in a similar manner.
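The invariant c_i + (t_i - t_{i-1}) = 2 is easy to check empirically. Below is a small C++ sketch, ours rather than the book's, that simulates only the tree ranks of a binomial queue as a bit set (bit k set means a Bk tree is present) and verifies that every insertion's cost plus its change in tree count is exactly 2, so n insertions cost at most 2n.

    #include <bitset>
    #include <cassert>
    #include <iostream>
    using namespace std;

    int main( )
    {
        unsigned long long forest = 0;   // Bit k set <=> a Bk tree exists
        long long totalCost = 0;
        const int n = 1000;

        for( int i = 1; i <= n; ++i )
        {
            int before = bitset<64>( forest ).count( );

            // Insert a B0 tree: like binary addition, every carry
            // is one linking step.
            int cost = 1;                   // One unit for the insertion
            int k = 0;
            while( forest & ( 1ULL << k ) )
            {
                forest &= ~( 1ULL << k );   // Meld two Bk's into a Bk+1
                ++k;
                ++cost;                     // One unit per linking step
            }
            forest |= ( 1ULL << k );

            int after = bitset<64>( forest ).count( );
            assert( cost + ( after - before ) == 2 );   // The invariant
            totalCost += cost;
        }
        cout << "total cost " << totalCost << " <= " << 2 * n << endl;
        return 0;
    }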
Figure: The insertion cost and potential change for each operation in a sequence of inserts.

Skew Heaps

The analysis of binomial queues is a fairly easy example of an amortized analysis. We now look at skew heaps. As is common with many of our examples, once the right potential function is found, the analysis is easy. The difficult part is choosing a meaningful potential function.

Recall that for skew heaps, the key operation is merging. To merge two skew heaps, we merge their right paths and make this the new left path. For each node on the new path, except the last, the old left subtree is attached as the right subtree. The last node on the new left path is known to not have a right subtree, so it is silly to give it one. The bound does not depend on this exception, and if the routine is coded recursively, this is what will happen naturally. The figure below shows the result of merging two skew heaps.

Suppose we have two heaps, H1 and H2, and there are r1 and r2 nodes on their respective right paths. Then the actual time to perform the merge is proportional to r1 + r2, so we

Figure: Merging of two skew heaps.
will drop the Big-Oh notation and charge one unit of time for each node on the paths. Since the heaps have no structure, it is possible that all the nodes in both heaps lie on the right path, and this would give a Theta(n) worst-case bound to merge the heaps (an exercise asks you to construct such an example). We will show that the amortized time to merge two skew heaps is O(log n).

What is needed is some sort of potential function that captures the effect of skew heap operations. Recall that the effect of a merge is that every node on the right path is moved to the left path, and its old left child becomes the new right child. One idea might be to classify each node as a right node or a left node, depending on whether or not it is a right child, and use the number of right nodes as a potential function. Although the potential is initially 0 and always nonnegative, the problem is that the potential does not decrease after a merge, and thus does not adequately reflect the savings in the data structure. The result is that this potential function cannot be used to prove the desired bound. A similar idea is to classify nodes as either heavy or light, depending on whether or not the right subtree of a node has more nodes than the left subtree.

Definition: A node, p, is heavy if the number of descendants of p's right subtree is at least half of the number of descendants of p, and light otherwise. Note that the number of descendants of a node includes the node itself.

As an example, the figure below shows a skew heap; its heavy nodes are marked, and all other nodes are light.

The potential function we will use is the number of heavy nodes in the (collection of) heaps. This seems like a good choice, because a long right path will contain an inordinate number of heavy nodes. Because nodes on this path have their children swapped, these nodes will be converted to light nodes as a result of the merge.

Theorem: The amortized time to merge two skew heaps is O(log n).

Figure: A skew heap; the heavy nodes are marked.
[Figure: Change in heavy/light status after a merge.]

Proof.
Let H1 and H2 be the two heaps, with N1 and N2 nodes, respectively. Suppose the right path of H1 has l1 light nodes and h1 heavy nodes, for a total of l1 + h1. Likewise, H2 has l2 light and h2 heavy nodes on its right path, for a total of l2 + h2 nodes.

If we adopt the convention that the cost of merging two skew heaps is the total number of nodes on their right paths, then the actual time to perform the merge is l1 + h1 + l2 + h2. Now the only nodes whose heavy/light status can change are nodes that are initially on the right path (and wind up on the left path), since no other nodes have their subtrees altered. This is shown by the example in the figure above.

If a heavy node is initially on the right path, then after the merge it must become a light node. The other nodes that were on the right path were light and may or may not become heavy, but since we are proving an upper bound, we will have to assume the worst, which is that they become heavy and increase the potential. Then the net change in the number of heavy nodes is at most l1 + l2 - h1 - h2. Adding the actual time and the potential change gives an amortized bound of 2(l1 + l2).

Now we must show that l1 + l2 = O(log N). Since l1 and l2 are the number of light nodes on the original right paths, and the right subtree of a light node is less than half the size of the tree rooted at the light node, it follows directly that the number of light nodes on the right path is at most log N1 + log N2, which is O(log N).

The proof is completed by noting that the initial potential is 0 and that the potential is always nonnegative. It is important to verify this, since otherwise the amortized time does not bound the actual time and is meaningless.

Since the insert and deleteMin operations are basically just merges, they also have O(log N) amortized bounds.

Fibonacci Heaps

Earlier in the book we showed how to use priority queues to improve on the naive O(|V|^2) running time of Dijkstra's shortest-path algorithm. The important observation was that the running time was dominated by |E| decreaseKey operations and |V| insert and deleteMin operations. These operations take place on a set of size at most |V|. By using a binary heap, all these operations take O(log |V|) time, so the resulting bound for Dijkstra's algorithm can be reduced to O(|E| log |V|).
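Before turning to the d-heap and Fibonacci-heap improvements, here is a minimal recursive sketch of the skew-heap merge whose amortized cost was just analyzed (our code; the node type and names are ours, not the book's):

    #include <utility>   // std::swap

    struct SkewNode
    {
        int       element;
        SkewNode *left  = nullptr;
        SkewNode *right = nullptr;
    };

    // Merge the right paths, and swap children at every node along the
    // merged path. As the text notes, a recursive implementation swaps
    // at the last node too, which is harmless.
    SkewNode * merge( SkewNode *h1, SkewNode *h2 )
    {
        if( h1 == nullptr ) return h2;
        if( h2 == nullptr ) return h1;
        if( h2->element < h1->element )
            std::swap( h1, h2 );            // h1 has the smaller root

        SkewNode *oldLeft = h1->left;
        h1->left  = merge( h1->right, h2 ); // merged right path becomes left path
        h1->right = oldLeft;                // old left subtree becomes right
        return h1;
    }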
In order to lower this time bound, the time required to perform the decreaseKey operation must be improved. d-heaps, which were described earlier in the book, give an O(log_d |V|) time bound for the decreaseKey operation as well as for insert, but an O(d log_d |V|) bound for deleteMin. By choosing d to balance the costs of |E| decreaseKey operations with |V| deleteMin operations, and remembering that d must always be at least 2, we see that a good choice for d is

    d = max(2, |E| / |V|).

This improves the time bound for Dijkstra's algorithm to O(|E| log_(2 + |E|/|V|) |V|).

The Fibonacci heap is a data structure that supports all the basic heap operations in O(1) amortized time, with the exception of deleteMin and remove, which take O(log N) amortized time. (The name comes from a property of this data structure, which we will prove later in the section.) It immediately follows that the heap operations in Dijkstra's algorithm will require a total of O(|E| + |V| log |V|) time.

Fibonacci heaps generalize binomial queues by adding two new concepts:

A different implementation of decreaseKey: The method we have seen before is to percolate the element up toward the root. It does not seem reasonable to expect an O(1) amortized bound for this strategy, so a new method is needed.

Lazy merging: Two heaps are merged only when it is required to do so. This is similar to lazy deletion. For lazy merging, merges are cheap, but because lazy merging does not actually combine trees, the deleteMin operation could encounter lots of trees, making that operation expensive. Any one deleteMin could take linear time, but it is always possible to charge the time to previous merge operations. In particular, an expensive deleteMin must have been preceded by a large number of unduly cheap merges, which were able to store up extra potential.

Cutting Nodes in Leftist Heaps

In binary heaps, the decreaseKey operation is implemented by lowering the value at a node and then percolating it up toward the root until heap order is established. In the worst case, this can take O(log N) time, which is the length of the longest path toward the root in a balanced tree. This strategy does not work if the tree that represents the priority queue does not have O(log N) depth. As an example, if this strategy is applied to leftist heaps, then the decreaseKey operation could take Θ(N) time, as the example in the next figure shows.

We see that for leftist heaps, another strategy is needed for the decreaseKey operation. Our example will be the leftist heap shown below. Suppose we want to decrease a key deep in the heap to a much smaller value. If we make the change, we find that we have created a violation of heap order, which is indicated by a dashed line in the figure.
[Figure: Decreasing a value deep in a long path via percolate up would take Θ(N) time.]
[Figure: A sample leftist heap H.]
[Figure: Decreasing a key creates a heap-order violation (dashed line).]
[Figure: The two trees, H1 and H2, after the cut.]

We do not want to percolate the new value to the root, because, as we have seen, there are cases where this could be expensive. The solution is to cut the heap along the dashed line, thus creating two trees, and then merge the two trees back into one. Let x be the node to which the decreaseKey operation is being applied, and let p be its parent. After the cut, we have two trees, namely, H1 with root x, and H2, which is the original tree with H1 removed. The situation is shown in the figure above. If these two trees were both leftist heaps, then they could be merged in O(log N) time, and we would be done. It is easy to see that H1 is a leftist heap, since none of its nodes have had any changes in their descendants. Thus, since all of its nodes originally satisfied the leftist property, they still must.

Nevertheless, it seems that this scheme will not work, because H2 is not necessarily leftist. However, it is easy to reinstate the leftist heap property by using two observations:

- Only nodes on the path from p to the root of H2 can be in violation of the leftist heap property; these can be fixed by swapping children.
- Since the maximum right path length has at most log(N + 1) nodes, we only need to check the first log(N + 1) nodes on the path from p to the root of H2.

The next figure shows H1 and H2 after H2 is converted to a leftist heap. Because we can convert H2 to a leftist heap in O(log N) steps, and then merge H1 and H2, we have an O(log N) algorithm for performing the decreaseKey operation in leftist heaps. The heap that results in our example is shown in the figure after that.

Lazy Merging for Binomial Queues

The second idea that is used by Fibonacci heaps is lazy merging. We will apply this idea to binomial queues and show that the amortized time to perform a merge operation (as well as insertion, which is a special case) is O(1). The amortized time for deleteMin will still be O(log N). The idea is as follows: To merge two binomial queues, merely concatenate the two lists of binomial trees, creating a new binomial queue. This new queue may have several trees of
[Figure: H2 converted to a leftist heap.]
[Figure: decreaseKey completed by merging H1 and the converted H2.]

the same size, so it violates the binomial queue property. We will call this a lazy binomial queue in order to maintain consistency. This is a fast operation that always takes constant (worst-case) time. As before, an insertion is done by creating a one-node binomial queue and merging. The difference is that the merge is lazy.

The deleteMin operation is much more painful, because it is where we finally convert the lazy binomial queue back into a standard binomial queue, but, as we will show, it is still O(log N) amortized time, though not O(log N) worst-case time, as before. To perform a deleteMin, we find (and eventually return) the minimum element. As before, we delete it from the queue, making each of its children new trees. We then merge all the trees into a binomial queue by merging two equal-sized trees until it is no longer possible.

As an example, the next figure shows a lazy binomial queue. In a lazy binomial queue, there can be more than one tree of the same size. To perform the deleteMin, we remove the smallest element, as before, and obtain a collection of trees. We now have to merge all the trees and obtain a standard binomial queue. A standard binomial queue has at most one tree of each rank. In order to do this efficiently, we must
[Figure: A lazy binomial queue.]
[Figure: The lazy binomial queue after removing the smallest element.]

    for( r = 0; r <= floor( log N ); ++r )
        while( |L_r| >= 2 )
        {
            remove two trees from L_r;
            merge the two trees into a new tree;
            add the new tree to L_{r+1};
        }

[Figure: Procedure to reinstate a binomial queue.]

be able to perform the merge in time proportional to the number of trees present, T (or log N, whichever is larger). To do this, we form an array of lists, L_0, L_1, ..., L_{Rmax+1}, where Rmax is the rank of the largest tree. Each list L_r contains all of the trees of rank r. The procedure in the figure above is then applied. Each time through the body of the while loop, the total number of trees is reduced by 1. This means that this part of the code, which takes constant time per execution, can only be performed T - 1 times, where T is the number of trees. The for loop counters, and the tests at the end of the while loop, take O(log N) time, so the running time is O(T + log N), as required. The next figure shows the execution of this algorithm on the previous collection of binomial trees.

Amortized Analysis of Lazy Binomial Queues

To carry out the amortized analysis of lazy binomial queues, we will use the same potential function that was used for standard binomial queues. Thus, the potential of a lazy binomial queue is the number of trees.
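A concrete rendering of the reinstatement loop above might look like the following sketch (our code, not the book's; it tracks only ranks, and link stands in for the usual binomial-tree linking of two equal-rank trees):

    #include <list>
    #include <vector>

    struct BinTree
    {
        int rank = 0;    // element and children omitted in this sketch
    };

    // Stand-in for binomial-tree linking: the tree with the larger root
    // would become a child of the other; here we only track ranks.
    BinTree * link( BinTree *a, BinTree *b )
    {
        ++a->rank;       // b logically becomes a child of a
        return a;
    }

    // Bucket the trees by rank, then repeatedly link equal-rank pairs,
    // carrying results upward until at most one tree of each rank remains.
    std::list<BinTree *> reinstate( std::list<BinTree *> trees, int maxRank )
    {
        std::vector<std::list<BinTree *>> L( maxRank + 2 );

        for( BinTree *t : trees )
            L[ t->rank ].push_back( t );

        for( std::size_t r = 0; r < L.size( ); ++r )
            while( L[ r ].size( ) >= 2 )
            {
                if( r + 1 == L.size( ) )
                    L.emplace_back( );            // room for the carry
                BinTree *a = L[ r ].front( ); L[ r ].pop_front( );
                BinTree *b = L[ r ].front( ); L[ r ].pop_front( );
                L[ r + 1 ].push_back( link( a, b ) );
            }

        std::list<BinTree *> result;              // the standard queue
        for( auto & bucket : L )
            result.splice( result.end( ), bucket );
        return result;
    }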
[Figure: Combining the binomial trees into a binomial queue.]

Theorem.
The amortized running times of merge and insert are both O(1) for lazy binomial queues. The amortized running time of deleteMin is O(log N).

Proof.
The potential function is the number of trees in the collection of binomial queues. The initial potential is 0, and the potential is always nonnegative. Thus, over a sequence of operations, the total amortized time is an upper bound on the total actual time.

For the merge operation, the actual time is constant, and the number of trees in the collection of binomial queues is unchanged, so the amortized time is O(1).

For the insert operation, the actual time is constant, and the number of trees can increase by at most 1, so the amortized time is O(1).

The deleteMin operation is more complicated. Let r be the rank of the tree that contains the minimum element, and let T be the number of trees. Thus, the potential at the start of the deleteMin operation is T. To perform a deleteMin, the children of the smallest node are split off into separate trees. This creates T + r trees, which must be merged into a standard binomial queue. The actual time to perform this is T + r + log N, if we ignore the constant in the Big-Oh notation, by the argument above. (We can do this because we can place the constant implied by the Big-Oh notation in the potential function and still get the cancellation of terms, which is needed in the proof.) Once this is done, there can be at most T + log N trees remaining, so the potential function can increase by at most (T + log N) - T = log N. Adding the actual time and the change in potential gives an amortized bound of 2 log N + r. Since all the trees are binomial trees, we know that r <= log N. Thus we arrive at an O(log N) amortized time bound for the deleteMin operation.

The Fibonacci Heap Operations

As we mentioned before, the Fibonacci heap combines the leftist heap decreaseKey operation with the lazy binomial queue merge operation. Unfortunately, we cannot use both operations without a slight modification. The problem is that if arbitrary cuts are made in the binomial trees, the resulting forest will no longer be a collection of binomial trees. Because of this, it will no longer be true that the rank of every tree is at most log N. Since the amortized bound for deleteMin in lazy binomial queues was shown to be 2 log N + r, we need r = O(log N) for the deleteMin bound to hold.

In order to ensure that r = O(log N), we apply the following rules to all non-root nodes:

- Mark a (non-root) node the first time that it loses a child (because of a cut).
- If a marked node loses another child, then cut it from its parent. This node now becomes the root of a separate tree and is no longer marked. This is called a cascading cut, because several of these could occur in one decreaseKey operation.

The figure below shows one tree in a Fibonacci heap prior to a decreaseKey operation. When a node's key is decreased below its parent's key, the heap order is violated. Therefore, the node is cut from its parent, becoming the root of a new tree. In the figure, the node's parent is marked, so this is the parent's second lost child, and thus the parent is cut from its own parent. That node, in turn, has now

[Figure: A tree in the Fibonacci heap prior to the decreaseKey operation.]

lost its second child, so it is cut as well. The process stops there, since the next node up the tree was unmarked; that node now becomes marked. The result is shown in the next figure. Notice that the two nodes that used to be marked are no longer marked, because they are now root nodes. This will be a crucial observation in our proof of the time bound.

[Figure: The resulting segment of the Fibonacci heap after the decreaseKey operation.]

Proof of the Time Bound

Recall that the reason for marking nodes is that we needed to bound the rank (number of children), r, of any node. We will now show that any node with n descendants has rank O(log n).

Lemma.
Let x be any node in a Fibonacci heap, and let c_i be the ith oldest child of x. Then the rank of c_i is at least i - 2.

Proof.
At the time when c_i was linked to x, x already had (older) children c_1, c_2, ..., c_{i-1}. Thus, x had at least i - 1 children when it linked to c_i. Since nodes are linked only if they have the same rank, it follows that at the time that c_i was linked to x, c_i had at least i - 1 children. Since that time, it could have lost at most one child, or else it would have been cut from x. Thus, c_i has at least i - 2 children.

From this lemma, it is easy to show that any node of rank r must have a lot of descendants.

Lemma.
Let F_k be the Fibonacci numbers defined by F_0 = 1, F_1 = 1, and F_k = F_{k-1} + F_{k-2}. Any node of rank r >= 1 has at least F_{r+1} descendants (including itself).

Proof.
Let S_r be the size of the smallest tree of rank r. Clearly, S_0 = 1 and S_1 = 2. By the preceding lemma, a tree of rank r must have subtrees of rank at least r - 2, r - 3, ..., 1, and 0, plus another subtree, which has at least one node. Along with the root of S_r itself, this gives a minimum value, for r >= 2, of

    S_r = 2 + sum_{i=0}^{r-2} S_i.

It is easy to show that S_r = F_{r+1} (see the exercises).

Because it is well known that the Fibonacci numbers grow exponentially, it immediately follows that any node with s descendants has rank at most O(log s). Thus, we have:

Lemma.
The rank of any node in a Fibonacci heap is O(log N).

Proof.
Immediate from the discussion above.

If all we were concerned about were the time bounds for the merge, insert, and deleteMin operations, then we could stop here and prove the desired amortized time bounds. Of course, the whole point of Fibonacci heaps is to obtain an O(1) time bound for decreaseKey as well.

The actual time required for a decreaseKey operation is 1 plus the number of cascading cuts that are performed during the operation. Since the number of cascading cuts could be much more than O(1), we will need to pay for this with a loss in potential. If we look at the figures above, we see that the number of trees actually increases with each cascading cut, so we will have to enhance the potential function to include something that decreases during cascading cuts. Notice that we cannot just throw out the number of trees from the potential function, since then we will not be able to prove the time bound for the merge operation. Looking at the figures again, we see that a cascading cut causes a decrease in the number of marked nodes, because each node that is the victim of a cascading cut becomes an unmarked root. Since each cascading cut costs 1 unit of actual time and increases the tree potential by 1, we will count each marked node as two units of potential. This way, we have a chance of canceling out the number of cascading cuts.

Theorem.
The amortized time bounds for Fibonacci heaps are O(1) for insert, merge, and decreaseKey and O(log N) for deleteMin.

Proof.
The potential is the number of trees in the collection of Fibonacci heaps plus twice the number of marked nodes. As usual, the initial potential is 0 and is always nonnegative. Thus, over a sequence of operations, the total amortized time is an upper bound on the total actual time.

For the merge operation, the actual time is constant, and the number of trees and marked nodes is unchanged, so the amortized time is O(1).

For the insert operation, the actual time is constant, the number of trees increases by 1, and the number of marked nodes is unchanged. Thus, the potential increases by at most 1, so the amortized time is O(1).

For the deleteMin operation, let r be the rank of the tree that contains the minimum element, and let T be the number of trees before the operation. To perform a deleteMin, we once again split off the children of a tree, creating an additional r new trees. Notice that, although this can remove marked nodes (by making them unmarked roots), this cannot create any additional marked nodes. These r new trees, along with the other T trees, must now be merged, at a cost of T + r + log N = T + O(log N), by the lemma above. Since there can be at most O(log N) trees after the merge, and the number of marked nodes cannot increase, the potential change is at most O(log N) - T. Adding the actual time and potential change gives the O(log N) amortized bound for deleteMin.

Finally, for the decreaseKey operation, let C be the number of cascading cuts. The actual cost of a decreaseKey is C + 1, which is the total number of cuts performed. The first (noncascading) cut creates a new tree and thus increases the potential by 1. Each cascading cut creates a new tree but converts a marked node to an unmarked (root) node, for a net loss of one unit of potential per cascading cut. The last cut also can convert an unmarked node into a marked node, thus increasing the potential by 2. The total change in potential is thus at most 3 - C. Adding the actual time and the potential change gives a total of (C + 1) + (3 - C) = 4, which is O(1).

Splay Trees

As a final example, we analyze the running time of splay trees. Recall that after an access of some item x is performed, a splaying step moves x to the root by a series of three operations: zig, zig-zag, and zig-zig. These tree rotations are shown in the figure below. We adopt the convention that if a tree rotation is being performed at node x, then prior to the rotation, p is its parent and (if x is not a child of the root) g is its grandparent.

[Figure: Zig, zig-zag, and zig-zig operations; each has a symmetric case (not shown).]

Recall that the time required for any tree operation on node x is proportional to the number of nodes on the path from the root to x. If we count each zig operation as one
rotation and each zig-zig or zig-zag as two rotations, then the cost of any access is equal to 1 plus the number of rotations.

In order to show an O(log N) amortized bound for the splaying step, we need a potential function that can increase by at most O(log N) over the entire splaying step but that will also cancel out the number of rotations performed during the step. It is not at all easy to find a potential function that satisfies these criteria. A simple first guess at a potential function might be the sum of the depths of all the nodes in the tree. This does not work, because the potential can increase by Θ(N) during an access. A canonical example of this occurs when elements are inserted in sequential order.

A potential function Φ that does work is defined as

    Φ(T) = sum over nodes i in T of log S(i),

where S(i) represents the number of descendants of i (including i itself). The potential function is the sum, over all nodes i in the tree T, of the logarithm of S(i).

To simplify the notation, we will define R(i) = log S(i). This makes

    Φ(T) = sum over nodes i in T of R(i).

R(i) represents the rank of node i. The terminology is similar to what we used in the analysis of the disjoint set algorithm, binomial queues, and Fibonacci heaps. In all these data structures, the meaning of rank is somewhat different, but the rank is generally meant to be on the order (magnitude) of the logarithm of the size of the tree. For a tree T with N nodes, the rank of the root is simply R(T) = log N. Using the sum of ranks as a potential function is similar to using the sum of heights as a potential function. The important difference is that while a rotation can change the heights of many nodes in the tree, only x, p, and g can have their ranks changed.

Before proving the main theorem, we need the following lemma.

Lemma.
If a + b <= c, and a and b are both positive integers, then

    log a + log b <= 2 log c - 2.

Proof.
By the arithmetic-geometric mean inequality, sqrt(ab) <= (a + b)/2. Thus, sqrt(ab) <= c/2. Squaring both sides gives ab <= c^2/4. Taking logarithms of both sides proves the lemma.

With the preliminaries taken care of, we are ready to prove the main theorem.

Theorem.
The amortized time to splay a tree with root T at node x is at most 3(R(T) - R(x)) + 1 = O(log N).

Proof.
The potential function is the sum of the ranks of the nodes in T.

If x is the root of T, then there are no rotations, so there is no potential change. The actual time is 1 to access the node; thus, the amortized time is 1 and the theorem is true. Thus, we may assume that there is at least one rotation.

For any splaying step, let R_i(x) and S_i(x) be the rank and size of x before the step, and let R_f(x) and S_f(x) be the rank and size of x immediately after the splaying step. We will show that the amortized time required for a zig is at most 3(R_f(x) - R_i(x)) + 1 and that the amortized time for either a zig-zag or a zig-zig is at most 3(R_f(x) - R_i(x)); we will show that when we add over all steps, the sum telescopes to the desired time bound.

Zig step: For the zig step, the actual time is 1 (for the single rotation), and the potential change is R_f(x) + R_f(p) - R_i(x) - R_i(p). Notice that the potential change is easy to compute, because only x's and p's trees change size. Thus, using AT to represent amortized time,

    AT_zig = 1 + R_f(x) + R_f(p) - R_i(x) - R_i(p).

From the figure of the rotations we see that S_i(p) >= S_f(p); thus, it follows that R_i(p) >= R_f(p). Thus,

    AT_zig <= 1 + R_f(x) - R_i(x).

Since S_f(x) >= S_i(x), it follows that R_f(x) - R_i(x) >= 0, so we may increase the right side, obtaining

    AT_zig <= 1 + 3(R_f(x) - R_i(x)).

Zig-zag step: For the zig-zag case, the actual cost is 2, and the potential change is R_f(x) + R_f(p) + R_f(g) - R_i(x) - R_i(p) - R_i(g). This gives an amortized time bound of

    AT_zig-zag = 2 + R_f(x) + R_f(p) + R_f(g) - R_i(x) - R_i(p) - R_i(g).

From the figure we see that S_f(x) = S_i(g), so their ranks must be equal. Thus, we obtain

    AT_zig-zag = 2 + R_f(p) + R_f(g) - R_i(x) - R_i(p).

We also see that S_i(p) >= S_i(x). Consequently, R_i(x) <= R_i(p). Making this substitution gives

    AT_zig-zag <= 2 + R_f(p) + R_f(g) - 2 R_i(x).

From the figure we see that S_f(p) + S_f(g) <= S_f(x). If we apply the lemma, we obtain

    log S_f(p) + log S_f(g) <= 2 log S_f(x) - 2.

By the definition of rank, this becomes

    R_f(p) + R_f(g) <= 2 R_f(x) - 2.

Substituting this, we obtain

    AT_zig-zag <= 2 R_f(x) - 2 R_i(x) <= 2(R_f(x) - R_i(x)).

Since R_f(x) >= R_i(x), we obtain

    AT_zig-zag <= 3(R_f(x) - R_i(x)).

Zig-zig step: The third case is the zig-zig. The proof of this case is very similar to the zig-zag case. The important inequalities are R_f(x) = R_i(g), R_f(x) >= R_f(p), R_i(x) <= R_i(p), and S_i(x) + S_f(g) <= S_f(x). We leave the details as an exercise.

The amortized cost of an entire splay is the sum of the amortized costs of each splay step. The figure below shows the steps that are performed in a splay at a node x: a zig-zag, then a zig-zig, then a zig. Let R_1(x), R_2(x), R_3(x), and R_4(x) be the rank of x in each of the four trees. The cost of the first step, which is a zig-zag, is at most 3(R_2(x) - R_1(x)). The cost of the second step, which is a zig-zig, is 3(R_3(x) - R_2(x)). The last step is a zig and has a cost no larger than 3(R_4(x) - R_3(x)) + 1. The total cost thus telescopes to 3(R_4(x) - R_1(x)) + 1.

[Figure: The splaying steps involved in splaying at the accessed node.]
In general, by adding up the amortized costs of all the rotations, of which at most one can be a zig, we see that the total amortized cost to splay at a node x is at most 3(R_f(x) - R_i(x)) + 1, where R_i(x) is the rank of x before the first splaying step and R_f(x) is the rank of x after the last splaying step. Since the last splaying step leaves x at the root, we obtain an amortized bound of 3(R(T) - R_i(x)) + 1, which is O(log N).

Because every operation on a splay tree requires a splay, the amortized cost of any operation is within a constant factor of the amortized cost of a splay. Thus, all splay tree access operations take O(log N) amortized time. To show that insertions and deletions take O(log N) amortized time, potential changes that occur either prior to or after the splaying step should be accounted for.

In the case of insertion, assume we are inserting into an (N - 1)-node tree. Thus, after the insertion, we have an N-node tree, and the splaying bound applies. However, the insertion at the leaf node adds potential prior to the splay to each node on the path from the leaf node to the root. Let n_1, n_2, ..., n_k be the nodes on the path prior to the insertion of the leaf (n_k is the root), and assume they have sizes s_1, s_2, ..., s_k. After the insertion, the sizes are s_1 + 1, s_2 + 1, ..., s_k + 1. (The new leaf will contribute 0 to the potential, so we can ignore it.) Note that (excluding the root node) s_j + 1 <= s_{j+1}, so the new rank of n_j is no more than the old rank of n_{j+1}. Thus, the increase of ranks, which is the maximum increase in potential that results from adding a new leaf, is limited by the new rank of the root, which is O(log N).

A deletion consists of a nonsplaying step that attaches one tree to another. This does increase the rank of one node, but that is limited by log N (and is compensated by the removal of a node, which at the time was a root). Thus the splaying costs accurately bound the cost of a deletion.

By using a more general potential function, it is possible to show that splay trees have several remarkable properties. This is discussed in more detail in the exercises.

Summary

In this chapter, we have seen how an amortized analysis can be used to apportion charges among operations. To perform the analysis, we invent a fictitious potential function. The potential function measures the state of the system. A high-potential data structure is volatile, having been built on relatively cheap operations. When the expensive bill comes for an operation, it is paid for by the savings of previous operations. One can view potential as standing for potential for disaster, in that very expensive operations can occur only when the data structure has a high potential and has used considerably less time than has been allocated.

Low potential in a data structure means that the cost of each operation has been roughly equal to the amount allocated for it. Negative potential means debt; more time has been spent than has been allocated, so the allocated (or amortized) time is not a meaningful bound.

As expressed by the amortized-cost equation, the amortized time for an operation is equal to the sum of the actual time and the potential change. Taken over an entire sequence of operations, the amortized time for the sequence is equal to the total sequence time plus the net change in potential. As long as this net change is positive, the amortized bound provides an upper bound for the actual time spent and is meaningful.
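In symbols, the accounting just summarized is the chapter's amortized-cost identity (our rendering in LaTeX):

\[
T_{\text{amortized},\,i} \;=\; T_{\text{actual},\,i} \;+\; \Phi_i - \Phi_{i-1},
\qquad
\sum_{i=1}^{M} T_{\text{amortized},\,i} \;=\; \sum_{i=1}^{M} T_{\text{actual},\,i} \;+\; \Phi_M - \Phi_0,
\]

so with \(\Phi_0 = 0\) and \(\Phi_M \ge 0\), the total amortized time is an upper bound on the total actual time.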
23,848 | amortized analysis the keys to choosing potential function are to guarantee that the minimum potential occurs at the beginning of the algorithmand to have the potential increase for cheap operations and decrease for expensive operations it is important that the excess or saved time be measured by an opposite change in potential unfortunatelythis is sometimes easier said than done exercises when do consecutive insertions into binomial queue take less than time unitssuppose binomial queue of elements is built alternately perform insert and deletemin pairs clearlyeach operation takes (log ntime why does this not contradict the amortized bound of ( for insertionshow that the amortized bound of (log nfor the skew heap operations described in the text cannot be converted to worst-case bound by giving sequence of operations that lead to merge requiring (ntime show how to merge two skew heaps with one top-down pass and reduce the merge cost to ( amortized time extend skew heaps to support the decreasekey operation in (log namortized time implement fibonacci heaps and compare their performance with that of binary heaps when used in dijkstra' algorithm standard implementation of fibonacci heaps requires four links per node (parentchildand two siblingsshow how to reduce the number of linksat the cost of at most constant factor in the running time show that the amortized time of zig-zig splay is at most (rf (xri ( )by changing the potential functionit is possible to prove different bounds for splaying let the weight function (ibe some function assigned to each node in the treeand let (ibe the sum of the weights of all the nodes in the subtree rooted at iincluding itself the special case ( for all nodes corresponds to the function used in the proof of the splaying bound let be the number of nodes in the treeand let be the number of accesses prove the following two theoremsa the total access time is ( ( nlog nb if is the number of times that item is accessedand for all ithen the total access time is mqi log( /qi = show how to implement the merge operation on splay trees so that any sequence of - merges starting from single-element trees takes ( log ntime improve the bound to ( log |
23,849 | in we described rehashingwhen table becomes more than half fulla new table twice as large is constructedand the entire old table is rehashed give formal amortized analysiswith potential functionto show that the amortized cost of an insertion is still ( what is the maximum depth of fibonacci heapa deque with heap order is data structure consisting of list of items on which the following operations are possiblepush( )insert item on the front end of the deque pop()remove the front item from the deque and return it inject( )insert item on the rear end of the deque eject()remove the rear item from the deque and return it findmin()return the smallest item from the deque (breaking ties arbitrarilya describe how to support these operations in constant amortized time per operation describe how to support these operations in constant worst-case time per operation show that the binomial queues actually support merging in ( amortized time define the potential of binomial queue to be the number of trees plus the rank of the largest tree suppose that in an attempt to save timewe splay on every second tree operation does the amortized cost remain logarithmicusing the potential function in the proof of the splay tree boundwhat is the maximum and minimum potential of splay treeby how much can the potential function decrease in one splayby how much can the potential function increase in one splayyou may give big-oh answers as result of splaymost of the nodes on the access path are moved halfway towards the rootwhile couple of nodes on the path move down one level this suggests using the sum over all nodes of the logarithm of each node' depth as potential function what is the maximum value of the potential functionb what is the minimum value of the potential functionc the difference in the answers to parts (aand (bgives some indication that this potential function isn' too good show that splaying operation could increase the potential by (nlog references an excellent survey of amortized analysis is provided in [ most of the references below duplicate citations in earlier we cite them again for convenience and completeness binomial queues were first described in [ and analyzed in [ solutions to exercises and appear in [ fibonacci heaps are described in [ exercise (ashows that splay trees are optimalto within constant |
factor of the best static search trees. Part (b) of the same exercise shows that splay trees are optimal, to within a constant factor, of the best optimal search trees. These, as well as two other strong results, are proved in the original splay tree paper [7].

Amortization is used in [2] to merge a balanced search tree efficiently. The merge operation for splay trees is described in [6]. A solution to the heap-ordered deque exercise can be found in [4]; the double-ended binomial queue result is from [5]. Amortized analysis is used in [8] to design an online algorithm that processes a series of queries in time only a constant factor larger than any offline algorithm in its class.

1. M. R. Brown, "Implementation and Analysis of Binomial Queue Algorithms," SIAM Journal on Computing, 7 (1978).
2. M. R. Brown and R. E. Tarjan, "Design and Analysis of a Data Structure for Representing Sorted Lists," SIAM Journal on Computing, 9 (1980).
3. M. L. Fredman and R. E. Tarjan, "Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms," Journal of the ACM, 34 (1987).
4. H. Gajewska and R. E. Tarjan, "Deques with Heap Order," Information Processing Letters, 22 (1986).
5. C. M. Khoong and H. W. Leong, "Double-Ended Binomial Queues," Proceedings of the Fourth Annual International Symposium on Algorithms and Computation (1993).
6. G. Port and A. Moffat, "A Fast Algorithm for Melding Splay Trees," Proceedings of the First Workshop on Algorithms and Data Structures (1989).
7. D. D. Sleator and R. E. Tarjan, "Self-adjusting Binary Search Trees," Journal of the ACM, 32 (1985).
8. D. D. Sleator and R. E. Tarjan, "Amortized Efficiency of List Update and Paging Rules," Communications of the ACM, 28 (1985).
9. D. D. Sleator and R. E. Tarjan, "Self-adjusting Heaps," SIAM Journal on Computing, 15 (1986).
10. R. E. Tarjan, "Amortized Computational Complexity," SIAM Journal on Algebraic and Discrete Methods, 6 (1985).
11. J. Vuillemin, "A Data Structure for Manipulating Priority Queues," Communications of the ACM, 21 (1978).
23,851 | advanced data structures and implementation in this we discuss six data structures with an emphasis on practicality we begin by examining alternatives to the avl tree discussed in these include an optimized version of the splay treethe red-black treeand the treap we also examine the suffix treewhich allows searching for pattern in large text we then examine data structure that can be used for multidimensional data in this caseeach item may have several keys the - tree allows searching relative to any key finallywe examine the pairing heapwhich seems to be the most practical alternative to the fibonacci heap recurring themes include nonrecursivetop-down (instead of bottom-upsearch tree implementations when appropriate implementations that make use ofamong other thingssentinel nodes top-down splay trees in we discussed the basic splay tree operation when an itemxis inserted as leafa series of tree rotationsknown as splaymakes the new root of the tree splay is also performed during searchesand if an item is not founda splay is performed on the last node on the access path in we showed that the amortized cost of splay tree operation is (log na direct implementation of this strategy requires traversal from the root down the treeand then bottom-up traversal to implement the splaying step this can be done either by maintaining parent linksor by storing the access path on stack unfortunatelyboth methods require substantial amount of overheadand both must handle many special cases in this sectionwe show how to perform rotations on the initial access path the result is procedure that is faster in practiceuses only ( extra spacebut retains the (log namortized time bound figure shows the rotations for the zigzig-zigand zig-zag cases (as is customarythree symmetric rotations are omitted at any point in the accesswe have current node |
23,852 | advanced data structures and implementation figure top-down splay rotationszigzig-zigand zig-zag xthat is the root of its subtreethis is represented in our diagrams as the "middletree tree stores nodes in the tree that are less than xbut not in ' subtreesimilarly tree stores nodes in the tree that are larger than xbut not in ' subtree initiallyx is the root of tand and are empty if the rotation should be zigthen the tree rooted at becomes the new root of the middle tree and subtree are attached as left child of the smallest item in rx' left child is logically made nullptr as resultx is the new smallest item in note carefully that does not have to be leaf for the zig case to apply if we are searching for an item that is smaller than yand has no left child (but does have right child)then the zig case will apply for the zig-zig casewe have similar dissection the crucial point is that rotation between and is performed the zig-zag case brings the bottom node to the top in the middle treeand attaches subtrees and to and lrespectively note that is attached toand then becomesthe largest item in the zig-zag step can be simplified somewhat because no rotations are performed instead of making the root of the middle treewe make the root this is shown in figure this simplifies the coding because the action for the zig-zag case becomes for simplicity we don' distinguish between "nodeand the item in the node in the codethe smallest node in does not have nullptr left link because there is no need for it this means that printtree(rwill include some items that logically are not in |
23,853 | figure simplified top-down zig-zag identical to the zig case this would seem advantageous because testing for host of cases is time-consuming the disadvantage is that by descending only one levelwe have more iterations in the splaying procedure once we have performed the final splaying stepfigure shows how lrand the middle tree are arranged to form single tree note carefully that the result is different from bottom-up splaying the crucial fact is that the (log namortized bound is preserved (exercise an example of the top-down splaying algorithm is shown in figure we attempt to access in the tree the first step is zig-zag in accordance with ( symmetric version offigure we bring the subtree rooted at to the root of the middle tree and attach and its left subtree to next we have zig-zig is elevated to the root of the middle treeand rotation between and is performedwith the resulting subtree being attached to the search for then results in terminal zig the middle tree' new root is and and its left subtree are attached as right child of ' largest node the reassemblyin accordance with figure terminates the splay step we will use header with left and right links to eventually contain the roots of the left and right trees since these trees are initially emptya header is used to correspond to the min or max node of the right or left treerespectivelyin this initial state this way the code can avoid checking for empty trees the first time the left tree becomes nonemptythe right pointer will get initialized and will not change in the futurethus it will contain the root of the left tree at the end of the top-down search similarlythe left pointer will eventually contain the root of the right tree the splaytree class interfacealong with its constructor and destructorare shown in figure the constructor allocates the nullnode sentinel we use the sentinel nullnode to represent logically nullptr pointerthe destructor deletes it after calling makeempty figure final arrangement for top-down splaying |
[Figure: Steps in top-down splay (access in top tree); the panels show the left and right trees, initially empty, through the simplified zig-zag, the zig-zig, the final zig, and the reassembly.]
23,855 | template class splaytree publicsplaytreenullnode new binarynodenullnode->left nullnode->right nullnoderoot nullnode~splaytreemakeempty)delete nullnode/same methods as for binarysearchtree (omittedsplaytreeconst splaytree rhs )splaytreesplaytree &rhs )splaytree operator=const splaytree rhs )splaytree operator=splaytree &rhs privatestruct binarynode /usual code for binary search tree nodes *}binarynode *rootbinarynode *nullnode/same methods as for binarysearchtree (omitted/tree manipulations void rotatewithleftchildbinarynode )void rotatewithrightchildbinarynode )void splayconst comparable xbinarynode )}figure splay treesclass interfaceconstructorand destructor we will repeatedly use this technique to simplify the code (and consequently make the code somewhat fasterfigure gives the code for the splaying procedure the header node allows us to be certain that we can attach to the largest node in without having to worry that might be empty (and similarly for the symmetric case dealing with |
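As a quick illustration, a hypothetical driver for this interface might read as follows (contains and insert are among the BinarySearchTree methods that the figure says are carried over):

    int main( )
    {
        SplayTree<int> t;

        for( int i = 0; i < 64; ++i )
            t.insert( i );      // ascending inserts leave a long left path

        t.contains( 0 );        // expensive once: splays 0 to the root
        for( int k = 0; k < 10; ++k )
            t.contains( 0 );    // subsequent accesses of 0 are cheap
        return 0;
    }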
23,856 | /*internal method to perform top-down splay the last accessed node becomes the new root this method may be overridden to use different splaying algorithmhoweverthe splay tree code depends on the accessed item going to the root is the target item to splay around is the root of the subtree to splay *void splayconst comparable xbinarynode binarynode *lefttreemax*righttreeminstatic binarynode headerheader left header right nullnodelefttreemax righttreemin &headernullnode->element /guarantee match forifx element ifx left->element rotatewithleftchildt )ift->left =nullnode break/link right righttreemin->left trighttreemin tt ->leftelse ift->element ift->right->element rotatewithrightchildt )ift->right =nullnode break/link left lefttreemax->right tlefttreemax tt ->rightelse breaklefttreemax->right ->leftrighttreemin->left ->rightt->left header rightt->right header leftfigure top-down splaying method |
R).

    /**
     * Internal method to perform top-down splay.
     * The last accessed node becomes the new root.
     * This method may be overridden to use a different splaying
     * algorithm; however, the splay tree code depends on the
     * accessed item going to the root.
     * x is the target item to splay around.
     * t is the root of the subtree to splay.
     */
    void splay( const Comparable & x, BinaryNode * & t )
    {
        BinaryNode *leftTreeMax, *rightTreeMin;
        static BinaryNode header;

        header.left = header.right = nullNode;
        leftTreeMax = rightTreeMin = &header;

        nullNode->element = x;   // Guarantee a match

        for( ; ; )
            if( x < t->element )
            {
                if( x < t->left->element )
                    rotateWithLeftChild( t );
                if( t->left == nullNode )
                    break;
                // Link Right
                rightTreeMin->left = t;
                rightTreeMin = t;
                t = t->left;
            }
            else if( t->element < x )
            {
                if( t->right->element < x )
                    rotateWithRightChild( t );
                if( t->right == nullNode )
                    break;
                // Link Left
                leftTreeMax->right = t;
                leftTreeMax = t;
                t = t->right;
            }
            else
                break;

        leftTreeMax->right = t->left;
        rightTreeMin->left = t->right;
        t->left = header.right;
        t->right = header.left;
    }

[Figure: Top-down splaying method.]
As we mentioned above, before the reassembly at the end of the splay, header.left and header.right point to the roots of R and L, respectively (this is not a typo; follow the links). Except for this detail, the code is relatively straightforward.

The next figure shows the method to insert an item x into a tree. A new node is allocated (if necessary), and if the tree is empty, a one-node tree is created. Otherwise, we splay root around the inserted value x. If the data in the new root is equal to x, we have

    void insert( const Comparable & x )
    {
        static BinaryNode *newNode = nullptr;

        if( newNode == nullptr )
            newNode = new BinaryNode;
        newNode->element = x;

        if( root == nullNode )
        {
            newNode->left = newNode->right = nullNode;
            root = newNode;
        }
        else
        {
            splay( x, root );
            if( x < root->element )
            {
                newNode->left = root->left;
                newNode->right = root;
                root->left = nullNode;
                root = newNode;
            }
            else if( root->element < x )
            {
                newNode->right = root->right;
                newNode->left = root;
                root->right = nullNode;
                root = newNode;
            }
            else
                return;
        }
        newNode = nullptr;   // So next insert will call new
    }

[Figure: Top-down splay tree insert.]
23,859 | red-black tree is binary search tree with the following coloring properties every node is colored either red or black the root is black if node is redits children must be black every path from node to null pointer must contain the same number of black nodes consequence of the coloring rules is that the height of red-black tree is at most log( consequentlysearching is guaranteed to be logarithmic operation figure shows red-black tree red nodes are shown with double circles the difficultyas usualis inserting new item into the tree the new itemas usualis placed as leaf in the tree if we color this item blackthen we are certain to violate condition because we will create longer path of black nodes thusthe item must be colored red if the parent is blackwe are done if the parent is already redthen we will violate condition by having consecutive red nodes in this casewe have to adjust the tree to ensure that condition is enforced (without introducing violation of condition the basic operations that are used to do this are color changes and tree rotations bottom-up insertion as we have already mentionedif the parent of the newly inserted item is blackwe are done thus insertion of into the tree in figure is trivial there are several cases (each with mirror image symmetryto consider if the parent is red firstsuppose that the sibling of the parent is black (we adopt the convention that null nodes are blackthis would apply for an insertion of or but not for the insertion of let be the newly added leafp be its parents be the sibling of the parent (if it exists)and be the grandparent only and are red in this caseg is blackbecause otherwise there would be two consecutive red nodes prior to the insertionin violation of red-black rules adopting the splay tree terminologyxpand can form either zig-zig figure example of red-black tree (insertion sequence is |
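Returning for a moment to the splay-tree makeEmpty just mentioned: one simple non-recursive alternative consistent with that description is sketched below (it assumes, as in this interface, that findMax splays the maximum item to the root, so the root has no right child and each removal is cheap; the O(N) total bound is, as the text says, far from obvious):

    void makeEmpty( )
    {
        while( !isEmpty( ) )
        {
            findMax( );                // splay the maximum to the root
            remove( root->element );
        }
    }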
23,860 | advanced data structures and implementation figure zig rotation and zig-zag rotation work if is black chain or zig-zag chain (in either of two directionsfigure shows how we can rotate the tree for the case where is left child (note there is symmetric caseeven though is leafwe have drawn more general case that allows to be in the middle of the tree we will use this more general rotation later the first case corresponds to single rotation between and gand the second case corresponds to double rotationfirst between and and then between and when we write the codewe have to keep track of the parentthe grandparentandfor reattachment purposesthe great-grandparent in both casesthe subtree' new root is colored blackand so even if the original great-grandparent was redwe removed the possibility of two consecutive red nodes equally importantthe number of black nodes on the paths into aband has remained unchanged as result of the rotations so far so good but what happens if is redas is the case when we attempt to insert in the tree in figure in that caseinitially there is one black node on the path from the subtree' root to after the rotationthere must still be only one black node but in both casesthere are three nodes (the new rootgand son the path to since only one may be blackand since we cannot have consecutive red nodesit follows that we' have to color both and the subtree' new root redand (and our fourth nodeblack that' greatbut what happens if the great-grandparent is also redin that casewe could percolate this procedure up toward the root as is done for -trees and binary heapsuntil we no longer have two consecutive red nodesor we reach the root (which will be recolored blacktop-down red-black trees implementing the percolation would require maintaining the path using stack or parent links we saw that splay trees are more efficient if we use top-down procedureand it |
23,861 | figure color fliponly if ' parent is red do we continue with rotation turns out that we can apply top-down procedure to red-black trees that guarantees that won' be red the procedure is conceptually easy on the way downwhen we see node that has two red childrenwe make red and the two children black (if is the rootafter the color flip it will be red but can be made black immediately to restore property figure shows this color flip this will induce red-black violation only if ' parent is also red but in that casewe can apply the appropriate rotations in figure what if ' parent' sibling is redthis possibility has been removed by our actions on the way downand so ' parent' sibling can' be redspecificallyif on the way down the tree we see node that has two red childrenwe know that ' grandchildren must be blackand that since ' children are made black tooeven after the rotation that may occurwe won' see another red node for two levels thus when we see xif ' parent is redit is not possible for ' parent' sibling to be red also as an examplesuppose we want to insert into the tree in figure on the way down the treewe see node which has two red children thuswe perform color flipmaking redand and black now and are both red we perform the single rotation between and making the black root of ' right subtreeand and both red we then continueperforming an identical action if we see other nodes on the path that contain two red children when we get to the leafwe insert as red nodeand since the parent is blackwe are done the resulting tree is shown in figure as figure showsthe red-black tree that results is frequently very well balanced experiments suggest that the average red-black tree is about as deep as an average avl tree and thatconsequentlythe searching times are typically near optimal the advantage figure insertion of into figure |
23,862 | advanced data structures and implementation of red-black trees is the relatively low overhead required to perform insertionand the fact thatin practicerotations occur relatively infrequently an actual implementation is complicated not only by the host of possible rotations but also by the possibility that some subtrees (such as ' right subtreemight be emptyand the special case of dealing with the root (which among other thingshas no parentthuswe use two sentinel nodesone for the rootand nullnodewhich indicates nullptr pointer as it did for splay trees the root sentinel will store the key and right link to the real root because of thisthe searching and printing procedures need to be adjusted the recursive routines are trickiest figure shows how the inorder traversal is rewritten the printtree routines are straightforward the test != ->left could be written as !=nullnode howeverthere is trap in similar routine that performs the deep copy this is also shown in figure the copy constructor calls clone after other initialization is complete but in clonethe test ==nullnode does not workbecause nullnode is the target' nullnodenot the source' (that isnot rhs'sthus we use trickier test figure shows the redblacktree skeletonalong with the constructor nextfigure (page shows the routine to perform single rotation because the resultant tree must be attached to parentrotate takes the parent node as parameter rather than keeping track of the type of rotation as we descend the treewe pass item as parameter since we expect very few rotations during the insertion procedureit turns out that it is not only simplerbut actually fasterto do it this way rotate simply returns the result of performing an appropriate single rotation finallywe provide the insertion procedure in figure (on page the routine handlereorient is called when we encounter node with two red childrenand also when we insert leaf the trickiest part is the observation that double rotation is really two single rotationsand is done only when branching to (represented in the insert method by currenttakes opposite directions as we mentioned in the earlier discussioninsert must keep track of the parentgrandparentand great-grandparent as the tree is descended since these are shared with handlereorientwe make these class members note that after rotationthe values stored in the grandparent and great-grandparent are no longer correct howeverwe are assured that they will be restored by the time they are next needed top-down deletion deletion in red-black trees can also be performed top-down everything boils down to being able to delete leaf this is because to delete node that has two childrenwe replace it with the smallest node in the right subtreethat nodewhich must have at most one childis then deleted nodes with only right child can be deleted in the same mannerwhile nodes with only left child can be deleted by replacement with the largest node in the left subtreeand subsequent deletion of that node note that for red-black treeswe don' want to use the strategy of bypassing for the case of node with one child because that may connect two red nodes in the middle of the treemaking enforcement of the red-black condition difficult |
23,863 | void printtreeconst ifheader->right =nullnode cout <"empty tree<endlelse printtreeheader->right )void printtreeredblacknode * const ift ! ->left printtreet->left )cout element <endlprinttreet->right )redblacktreeconst redblacktree rhs nullnode new redblacknodenullnode->left nullnode->right nullnodeheader header->left header->right new redblacknoderhs header->element }nullnodeclonerhs header->right )redblacknode cloneredblacknode const ift = ->left /cannot test against nullnode!!return nullnodeelse return new redblacknodet->elementclonet->left )clonet->right ) ->color }figure tree traversals with two sentinelsprinttree and copy constructor deletion of red leaf isof coursetrivial if leaf is blackhoweverthe deletion is more complicated because removal of black node will violate condition the solution is to ensure during the top-down pass that the leaf is red throughout this discussionlet be the current nodet be its siblingand be their parent we begin by coloring the root sentinel red as we traverse down the treewe attempt to ensure that is red when we arrive at new nodewe are certain |
23,864 | template class redblacktree publicexplicit redblacktreeconst comparable neginf )redblacktreeconst redblacktree rhs )redblacktreeredblacktree &rhs )~redblacktree)const comparable findminconstconst comparable findmaxconstbool containsconst comparable constbool isemptyconstvoid printtreeconstvoid makeempty)void insertconst comparable )void removeconst comparable )enum redblack }redblacktree operator=const redblacktree rhs )redblacktree operator=redblacktree &rhs )privatestruct redblacknode comparable elementredblacknode *leftredblacknode *rightint colorredblacknodeconst comparable theelement comparable}redblacknode *lt nullptrredblacknode *rt nullptrint black elementtheelement }leftlt }rightrt }colorc redblacknodecomparable &theelementredblacknode *lt nullptrredblacknode *rt nullptrint black elementstd::movetheelement }leftlt }rightrt }colorc }redblacknode *header/the tree header (contains neginfredblacknode *nullnode/used in insert routine and its helpers (logically staticredblacknode *currentfigure class interface and constructor |
23,865 | redblacknode *parentredblacknode *grandredblacknode *great/usual recursive stuff void reclaimmemoryredblacknode * )void printtreeredblacknode * constredblacknode cloneredblacknode const/red-black tree manipulations void handlereorientconst comparable item )redblacknode rotateconst comparable itemredblacknode *theparent )void rotatewithleftchildredblacknode )void rotatewithrightchildredblacknode )}/*construct the tree neginf is value less than or equal to all others *explicit redblacktreeconst comparable neginf nullnode new redblacknodenullnode->left nullnode->right nullnodeheader new redblacknodeneginf }header->left header->right nullnodefigure (continuedthat is red (inductivelyby the invariant we are trying to maintain)and that and are black (because we can' have two consecutive red nodesthere are two main cases firstsuppose has two black children then there are three subcaseswhich are shown in figure if also has two black childrenwe can flip the colors of xtand to maintain the invariant otherwiseone of ' children is red depending on which one it is, we can apply the rotation shown in the second and third cases of figure note carefully that this case will apply for the leafbecause nullnode is considered to be black if both children are redwe can apply either rotation as usualthere are symmetric rotations for the case when is right child that are not shown |
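As a quick sanity check of the interface above, a hypothetical driver (ours, not the book's) might look like this; the constructor argument must be a value no larger than any key ever inserted, because it is stored in the header sentinel that sits above the real root:

    #include <climits>

    int main( )
    {
        RedBlackTree<int> t{ INT_MIN };

        for( int i = 1; i <= 10; ++i )
            t.insert( i );
        t.printTree( );          // keys appear in sorted order
        return 0;
    }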
23,866 | /*internal routine that performs single or double rotation because the result is attached to the parentthere are four cases called by handlereorient item is the item in handlereorient theparent is the parent of the root of the rotated subtree return the root of the rotated subtree *redblacknode rotateconst comparable itemredblacknode *theparent ifitem element item left->element rotatewithleftchildtheparent->left /ll rotatewithrightchildtheparent->left /lr return theparent->leftelse item right->element rotatewithleftchildtheparent->right /rl rotatewithrightchildtheparent->right )/rr return theparent->rightfigure rotate method /*internal routine that is called during an insertion if node has two red children performs flip and rotations item is the item being inserted *void handlereorientconst comparable item /do the color flip current->color redcurrent->left->color blackcurrent->right->color blackifparent->color =red figure insertion procedure /have to rotate |
23,867 | grand->color redifitem element !item element parent rotateitemgrand )/start dbl rotate current rotateitemgreat )current->color blackheader->right->color black/make root black void insertconst comparable current parent grand headernullnode->element xwhilecurrent->element ! great grandgrand parentparent currentcurrent element current->left current->right/check if two red childrenfix if so ifcurrent->left->color =red ¤t->right->color =red handlereorientx )/insertion fails if already present ifcurrent !nullnode returncurrent new redblacknodexnullnodenullnode }/attach to parent ifx element parent->left currentelse parent->right currenthandlereorientx )figure (continuedotherwise one of ' children is red in this casewe fall through to the next levelobtaining new xtand if we're luckyx will land on the red childand we can continue onward if notwe know that will be redand and will be black we can rotate and pmaking ' new parent redx and its grandparent willof coursebe black at this pointwe can go back to the first main case |
Figure: Three cases when X is a left child and has two black children

Treaps

Our last type of binary search tree, known as the treap, is probably the simplest of all. Like the skip list, it uses random numbers and gives O(log N) expected-time behavior for any input. Searching time is identical to an unbalanced binary search tree (and thus slower than balanced search trees), while insertion time is only slightly slower than a recursive unbalanced binary search tree implementation. Although deletion is much slower, it is still O(log N) expected time.

The treap is so simple that we can describe it without a picture. Each node in the tree stores an item, a left and right pointer, and a priority that is randomly assigned when the node is created. A treap is a binary search tree with the property that the node priorities satisfy heap order: Any node's priority must be at least as large as its parent's.

A collection of distinct items, each of which has a distinct priority, can only be represented by one treap. This is easily deduced by induction, since the node with the lowest priority must be the root. Consequently, the tree is formed on the basis of the N! possible arrangements of priority instead of the N! item orderings. The node declarations are straightforward, requiring only the addition of the priority data member. The sentinel nullNode will have a priority of infinity, as shown in the next figure.
template <typename Comparable>
class Treap
{
  public:
    Treap( )
    {
        nullNode = new TreapNode;
        nullNode->left = nullNode->right = nullNode;
        nullNode->priority = INT_MAX;
        root = nullNode;
    }

    Treap( const Treap & rhs );
    Treap( Treap && rhs );
    ~Treap( );
    Treap & operator=( const Treap & rhs );
    Treap & operator=( Treap && rhs );

    // Additional public member functions (not shown)

  private:
    struct TreapNode
    {
        Comparable element;
        TreapNode *left;
        TreapNode *right;
        int        priority;

        TreapNode( ) : left{ nullptr }, right{ nullptr }, priority{ INT_MAX } { }

        TreapNode( const Comparable & e, TreapNode *lt, TreapNode *rt, int pr )
          : element{ e }, left{ lt }, right{ rt }, priority{ pr } { }

        TreapNode( Comparable && e, TreapNode *lt, TreapNode *rt, int pr )
          : element{ std::move( e ) }, left{ lt }, right{ rt }, priority{ pr } { }
    };

    TreapNode *root;
    TreapNode *nullNode;
    UniformRandom randomNums;

    // Additional private member functions (not shown)
};

Figure: Treap class interface and constructor
Insertion into the treap is simple: After an item is added as a leaf, we rotate it up the treap until its priority satisfies heap order. It can be shown that the expected number of rotations is less than 2. After the item to be deleted has been found, it can be deleted by increasing its priority to infinity and rotating it down through the path of low-priority children. Once it is a leaf, it can be removed. The routines in the next two figures implement these strategies using recursion; a nonrecursive implementation is left for the reader (see the exercises). For deletion, note that when the node is logically a leaf, it still has nullNode as both its left and right children. Consequently, it is rotated with the right child. After the rotation, t is nullNode, and the left child, which now stores the item to be deleted, can be freed. Note also that our implementation assumes that there are no duplicates; if this is not true, then the remove could fail (why?).

The treap implementation never has to worry about adjusting the priority data member. One of the difficulties of the balanced tree approaches is that it is difficult to track down errors that result from failing to update balance information in the course of an operation. In terms of total lines for a reasonable insertion and deletion package, the treap, especially a nonrecursive implementation, seems like the hands-down winner.

/**
 * Internal method to insert into a subtree.
 * x is the item to insert.
 * t is the node that roots the tree.
 * Set the new root of the subtree.
 * (randomNums is a UniformRandom object that is a data member of Treap.)
 */
void insert( const Comparable & x, TreapNode * & t )
{
    if( t == nullNode )
        t = new TreapNode{ x, nullNode, nullNode, randomNums.nextInt( ) };
    else if( x < t->element )
    {
        insert( x, t->left );
        if( t->left->priority < t->priority )
            rotateWithLeftChild( t );
    }
    else if( t->element < x )
    {
        insert( x, t->right );
        if( t->right->priority < t->priority )
            rotateWithRightChild( t );
    }
    // else duplicate; do nothing
}

Figure: Treaps: insertion routine
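The insertion routine draws priorities from randomNums, an instance of the text's UniformRandom class. If that class is not at hand, a minimal stand-in based on the standard library can be sketched as follows (this is an assumption for illustration, not the book's implementation). Keeping priorities strictly below INT_MAX means a real node can never tie with the nullNode sentinel:

    #include <climits>
    #include <random>

    // Minimal stand-in for the text's UniformRandom class (a sketch).
    class UniformRandom
    {
      public:
        UniformRandom( ) : gen{ std::random_device{ }( ) },
                           dist{ 0, INT_MAX - 1 } { }

        // Return a uniformly distributed nonnegative int below INT_MAX.
        int nextInt( ) { return dist( gen ); }

      private:
        std::mt19937 gen;
        std::uniform_int_distribution<int> dist;
    };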
/**
 * Internal method to remove from a subtree.
 * x is the item to remove.
 * t is the node that roots the tree.
 * Set the new root of the subtree.
 */
void remove( const Comparable & x, TreapNode * & t )
{
    if( t != nullNode )
    {
        if( x < t->element )
            remove( x, t->left );
        else if( t->element < x )
            remove( x, t->right );
        else
        {
                // Match found
            if( t->left->priority < t->right->priority )
                rotateWithLeftChild( t );
            else
                rotateWithRightChild( t );

            if( t != nullNode )      // Continue on down
                remove( x, t );
            else
            {
                delete t->left;
                t->left = nullNode;  // At a leaf
            }
        }
    }
}

Figure: Treaps: deletion procedure

Suffix Arrays and Suffix Trees

One of the most fundamental problems in data processing is to find the location of a pattern, P, in a text, T. For instance, we may be interested in answering questions such as:

- Is there a substring of T matching P?
- How many times does P appear in T?
- Where are all occurrences of P in T?
Assuming that the size of P is less than T (and usually it is significantly less), we would reasonably expect that the time to solve this problem for a given P and T would be at least linear in the length of T, and in fact there are several O(T) algorithms.

However, we are interested in a more common problem, in which T is fixed and queries with different P occur frequently. For instance, T could be a huge archive of email messages, and we are interested in repeatedly searching the messages for different patterns. In this case, we are willing to preprocess T into a nice form that would make each individual search much more efficient, taking time significantly less than linear in the size of T: either logarithmic in the size of T or, even better, independent of T and dependent only on the length of P. One such data structure is the suffix array and the suffix tree. (That sounds like two data structures, but as we will see, they are basically equivalent and trade time for space.)

Suffix Arrays

A suffix array for a text, T, is simply an array of all suffixes of T, arranged in sorted order. For instance, suppose our text string is banana. Then the suffix array for banana is shown in the figure below.

A suffix array that stores the suffixes explicitly would seem to require quadratic space, since it stores one string of each length 1 to N (where N is the length of T). In C++, this is not exactly true, since we can use the primitive null-terminated array-of-characters representation of strings; in that case, a suffix is specified by a char * that points at the first character of the substring. Thus the same array of characters is shared, and the additional memory requirement is only the char * pointer for each new substring. Nonetheless, using char * is highly C- or C++-dependent; thus it is common for a practical implementation to store only the starting indices of the suffixes in the suffix array, which is much more language independent. The figure after the next one shows the indices that would be stored.

The suffix array by itself is extremely powerful. For instance, if a pattern, P, occurs in the text, then it must be a prefix of some suffix. A binary search of the suffix array would be enough to determine if the pattern is in the text: The binary search either lands on P, or P would be between two values, one smaller than P and one larger than P. If P is a prefix of some substring, it is a prefix of the larger value found at the end of the binary search. Immediately, this reduces the query time to O(P log N), where the log N is the binary search and the O(P) is the cost of the comparison at each step.

a
ana
anana
banana
na
nana

Figure: Suffixes for "banana"
Index   Substring being represented
  5     a
  3     ana
  1     anana
  0     banana
  4     na
  2     nana

Figure: Suffix array that stores only indices (full substrings shown for reference)

We can also use the suffix array to find the number of occurrences of P: They will be stored sequentially in the suffix array; thus two binary searches suffice to find the range of suffixes that will be guaranteed to begin with P. One way to speed this search is to compute the longest common prefix (LCP) for each consecutive pair of substrings; if this computation is done as the suffix array is built, then each query to find the number of occurrences of P can be sped up to O(P + log N), although this is not obvious. The figure below shows the LCP computed for each substring, relative to the preceding substring.

The longest common prefix also provides information about the longest pattern that occurs twice in the text: Look for the largest LCP value, and take that many characters of the corresponding substring. In the figure, this is 3, and the longest repeated pattern is ana.

The figure that follows shows simple code to compute the suffix array and longest common prefix information for any string. The code first obtains a primitive (char *) string from str and then obtains the suffixes by computing and storing these pointers using pointer arithmetic. The suffixes are then sorted; the comparison uses the C++11 lambda feature, in which the "less than" function needed for two char * types is provided as the third parameter to sort, without the need to write a named function. Finally, the suffixes' starting indices are computed using pointer arithmetic, and the longest common prefixes for adjacent entries are computed by calling the computeLCP routine.

Index   LCP   Substring being represented
  5      0    a
  3      1    ana
  1      3    anana
  0      0    banana
  4      0    na
  2      2    nana

Figure: Suffix array for "banana"; includes longest common prefix (LCP)
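To make the two binary searches concrete, here is a sketch of the query side using the arrays produced by the routine in the next figure. The helper name countOccurrences and the lambda comparators are ours (hypothetical), not the text's; the idea is exactly the one described above:

    // Count occurrences of pattern p in str, given str's suffix array sa.
    // Two binary searches bracket the run of suffixes that begin with p.
    int countOccurrences( const string & str, const vector<int> & sa,
                          const string & p )
    {
            // Compare the suffix starting at index i against p, prefix-wise:
            // str.compare( i, len, p ) compares str[ i..i+len ) with p.
        auto lo = lower_bound( begin( sa ), end( sa ), p,
            [ & ]( int i, const string & pat )
            { return str.compare( i, pat.length( ), pat ) < 0; } );

        auto hi = upper_bound( lo, end( sa ), p,
            [ & ]( const string & pat, int i )
            { return str.compare( i, pat.length( ), pat ) > 0; } );

        return hi - lo;   // number of suffixes having p as a prefix
    }

For "banana", countOccurrences( "banana", sa, "ana" ) returns 2, matching suffixes 1 and 3.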
/*
 * Returns the LCP for any two strings.
 */
int computeLCP( const string & s1, const string & s2 )
{
    int i = 0;

    while( i < s1.length( ) && i < s2.length( ) && s1[ i ] == s2[ i ] )
        ++i;

    return i;
}

/*
 * Fill in the suffix array and LCP information for string str.
 * str is the input string.
 * sa is an existing array to place the suffix array.
 * lcp is an existing array to place the LCP information.
 */
void createSuffixArraySlow( const string & str, vector<int> & sa, vector<int> & lcp )
{
    if( sa.size( ) != str.length( ) || lcp.size( ) != str.length( ) )
        throw invalid_argument{ "Mismatched vector sizes" };

    size_t N = str.length( );
    const char *cstr = str.c_str( );

    vector<const char *> suffixes( N );
    for( int i = 0; i < N; ++i )
        suffixes[ i ] = cstr + i;

    std::sort( begin( suffixes ), end( suffixes ),
               [ ]( const char *s1, const char *s2 )
               { return strcmp( s1, s2 ) < 0; } );

    for( int i = 0; i < N; ++i )
        sa[ i ] = suffixes[ i ] - cstr;

    lcp[ 0 ] = 0;
    for( int i = 1; i < N; ++i )
        lcp[ i ] = computeLCP( suffixes[ i - 1 ], suffixes[ i ] );
}

Figure: Simple algorithm to create the suffix array and LCP array
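The longest repeated substring discussed earlier falls directly out of these two arrays; a short sketch (the helper name is ours, not the text's):

    // Longest repeated substring: the suffix at the maximum LCP entry,
    // truncated to that LCP length ("ana" for "banana").
    string longestRepeatedSubstring( const string & str,
                                     const vector<int> & sa,
                                     const vector<int> & lcp )
    {
        int best = 0;
        for( int i = 1; i < (int) lcp.size( ); ++i )
            if( lcp[ i ] > lcp[ best ] )
                best = i;
        return str.substr( sa[ best ], lcp[ best ] );  // empty if no repeats
    }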
The running time of the suffix array computation is dominated by the sorting step, which uses O(N log N) comparisons. In many circumstances this can be reasonably acceptable performance; for instance, a suffix array for a multimillion-character English-language novel can be built in just a few seconds. However, the O(N log N) cost, based on the number of comparisons, hides the fact that a string comparison between s1 and s2 takes time that depends on LCP(s1, s2). So while it is true that almost all these comparisons end quickly when run on the suffixes found in natural language processing, the comparisons will be expensive in applications where there are many long common substrings.

One such example occurs in pattern searching of DNA, whose alphabet consists of four characters (A, C, G, T) and whose strings can be huge. A DNA string for a single human chromosome runs to tens of millions of characters, with maximum and average LCP values that are orders of magnitude larger than those seen in natural-language text; even the HTML/Java distribution of an early JDK (much smaller than the current distribution) runs to millions of characters with very large maximum and average LCPs. In the degenerate case of a string that contains only one character, repeated N times, it is easy to see that each comparison takes O(N) time, and the total cost is O(N^2 log N). Later in this section we will show a linear-time algorithm to construct the suffix array.

Suffix Trees

Suffix arrays are easily searchable by binary search, but the binary search itself automatically implies a log N cost. What we would like to do is find a matching suffix even more efficiently. One idea is to store the suffixes in a trie. (A binary trie was seen in our discussion of Huffman codes in an earlier chapter.) The basic idea of the trie is to store the suffixes in a tree. At the root, instead of having two branches, we would have one branch for each possible first character. Then at the next level, we would have one branch for the next character, and so on. At each level we are doing multiway branching, much like radix sort, and thus we can find a match in time that would depend only on the length of the match.

In the figure on the next page we see on the left a basic trie to store the suffixes of the string deed. These suffixes are d, deed, ed, and eed. In this trie, internal branching nodes are drawn in circles, and the suffixes that are reached are drawn in rectangles. Each branch is labeled with the character that is chosen, but the branch prior to a completed suffix has no label.

This representation could waste significant space if there are many nodes that have only one child. Thus in the same figure we see an equivalent representation on the right, known as a compressed trie. Here, single-branch nodes are collapsed into a single node. Notice that although the branches now have multicharacter labels, all the labels for the branches of any given node must have unique first characters. Thus, it is still just as easy as before to choose which branch to take. Thus we can see that a search for a pattern, P, depends only on the length of the pattern P, as desired. (We assume that the letters of the alphabet are represented by numbers. Then each node stores an array representing each possible branch, and we can locate the appropriate branch in constant time. The empty edge label can be represented by 0.)

If the original string has length N, the total number of branches is less than 2N. However, this by itself does not mean that the compressed trie uses linear space: The labels on the edges take up space.
Figure: Left, a trie representing the suffixes for deed: {d, deed, ed, eed}; right, a compressed trie that collapses single-node branches

The total length of all the labels on the compressed trie in the figure is exactly one less than the number of internal branching nodes in the original trie. And of course, writing all the suffixes in the leaves could take quadratic space. So if the original used quadratic space, so does the compressed trie. Fortunately, we can get by with linear space as follows. In the leaves, we use the index where the suffix begins (as in the suffix array). In the internal nodes, we store the number of common characters matched from the root until the internal node; this number represents the letter depth.

The figure below shows how the compressed trie is stored for the suffixes of banana. The leaves are simply the indices of the starting points for each suffix. The internal node with letter depth 1 represents the common string "a" in all nodes that are below it.

Figure: Compressed trie representing the suffixes for banana: {a, ana, anana, banana, na, nana}. Left: the explicit representation; right: the implicit representation that stores only one integer (plus branches) per node
The internal node with letter depth 3 represents the common string "ana" in all nodes that are below it, and the internal node with letter depth 2 represents the common string "na" in all nodes that are below it. In fact, this analysis makes clear that a suffix tree is equivalent to a suffix array plus an LCP array.

If we have a suffix tree, we can compute the suffix array and the LCP array by performing an inorder traversal of the tree (compare the earlier suffix array figure for banana with the suffix tree above). At that time we can compute the LCP as follows: If the suffix node value plus the letter depth of the parent is equal to N, then use the letter depth of the grandparent as the LCP; otherwise use the parent's letter depth as the LCP. In the banana tree, if we proceed inorder, we obtain, for our suffixes and LCP values:

Suffix 5 with LCP 0 (the grandparent), because 5 + 1 equals 6
Suffix 3 with LCP 1 (the grandparent), because 3 + 3 equals 6
Suffix 1 with LCP 3 (the parent), because 1 + 3 does not equal 6
Suffix 0 with LCP 0 (the parent), because 0 + 0 does not equal 6
Suffix 4 with LCP 0 (the grandparent), because 4 + 2 equals 6
Suffix 2 with LCP 2 (the parent), because 2 + 2 does not equal 6

This transformation can clearly be done in linear time.

The suffix array and LCP array also uniquely define the suffix tree. First, create a root with letter depth 0. Then search the LCP array (ignoring position 0, for which LCP is not really defined) for all occurrences of the minimum (which at this phase will be the zeros). Once these minimums are found, they will partition the array (view the LCP as residing between adjacent elements). For instance, in our example, there are two zeros in the LCP array, which partitions the suffix array into three portions: one portion containing the suffixes {5, 3, 1}, another portion containing the suffix {0}, and the third portion containing the suffixes {4, 2}. The internal nodes for these portions can be built recursively, and then the suffix leaves can be attached with an inorder traversal. Although it is not obvious, with care the suffix tree can be generated in linear time from the suffix array and LCP array.

The suffix tree solves many problems efficiently, especially if we augment each internal node to also maintain the number of suffixes stored below it. A small sampling of suffix tree applications includes the following:

- Find the longest repeated substring in T: Traverse the tree, finding the internal node with the largest letter depth; this represents the maximum LCP. The running time is O(|T|). This generalizes to the longest substring repeated at least k times.
- Find the longest common substring in two strings T1 and T2: Form the string T1#T2, where # is a character that is in neither string. Then build a suffix tree for the resulting string and find the deepest internal node that has at least one suffix that starts prior to the # and one that starts after the #. This can be done in time proportional to the total size of the strings and generalizes to an O(kL) algorithm for k strings of total length L.
- Find the number of occurrences of the pattern P: Assuming that the suffix tree is augmented so that each node keeps track of the number of suffixes below it, simply follow the path down the tree; the first internal node that P is a prefix of provides the answer.
If there is no such node, the answer is either zero or one and is found by checking the suffix at which the search terminates. This takes time proportional to the length of the pattern, P, and is independent of the size of |T|.

- Find the most common substring of a specified length L > 1: Return the internal node with the largest size amongst those with letter depth at least L. This takes time O(|T|).

Linear-Time Construction of Suffix Arrays and Suffix Trees

Earlier we showed the simplest algorithm to construct a suffix array and an LCP array, but this algorithm has O(N^2 log N) worst-case running time for an N-character string, which can occur if the string has suffixes with long common prefixes. In this section we describe an O(N) worst-case time algorithm to compute the suffix array. This algorithm can also be enhanced to compute the LCP array in linear time, but there is also a very simple linear-time algorithm to compute the LCP array from the suffix array (see the exercises, and a sketch appears after the driver code below). Either way, we can thus also build a suffix tree in linear time.

This algorithm makes use of divide and conquer. The basic idea is as follows:

1. Choose a sample, A, of suffixes.
2. Sort the sample A by recursion.
3. Sort the remaining suffixes, B, by using the now-sorted sample of suffixes A.
4. Merge A and B.

To get an intuition of how step 3 might work, suppose the sample A of suffixes consists of all suffixes that start at an odd index. Then the remaining suffixes, B, are those that start at an even index. So suppose we have computed the sorted set of suffixes A. To compute the sorted set of suffixes B, we would in effect need to sort all the suffixes that start at even indices. But these suffixes each consist of a single first character in an even position, followed by a string that starts with the second character, which must be in an odd position. Thus the string that starts at the second character is exactly a string that is in A. So to sort all the suffixes B, we can do something similar to a radix sort: First sort the strings in B starting from the second character. This should take linear time, since the sorted order of A is already known. Then stably sort on the first character of the strings in B. Thus B could be sorted in linear time, after A is sorted recursively. If A and B could then be merged in linear time, we would have a linear-time algorithm.

The algorithm we present uses a different sampling step, which admits a simple linear-time merging step. As we describe the algorithm, we will also show how it computes the suffix array for the string abracadabra. We adopt the following conventions:

S[i]    represents the ith character of string S
S(i)    represents the suffix of S starting at index i
SA      represents the suffix array
Step 1: Sort the characters in the string, assigning them numbers sequentially starting at 1. Then use those numbers for the remainder of the algorithm. Note that the numbers that are assigned depend on the text. So, if the text contains DNA characters A, C, G, and T only, then there will be only four numbers. Then pad the array with three 0s to avoid boundary cases. If we assume that the alphabet is a fixed size, then the sort takes some constant amount of time.

Example: In our example, the mapping is a = 1, b = 2, c = 3, d = 4, and r = 5, and the transformation can be visualized in the figure:

Input string S:  a  b  r  a  c  a  d  a  b  r  a
New problem:     1  2  5  1  3  1  4  1  2  5  1  0  0  0
Index:           0  1  2  3  4  5  6  7  8  9 10 11 12 13

Figure: Mapping of characters in string to an array of integers

Step 2: Divide the text into three groups:

S0 = < [ S[3i]   S[3i+1] S[3i+2] ]  for i = 0, 1, 2, ... >
S1 = < [ S[3i+1] S[3i+2] S[3i+3] ]  for i = 0, 1, 2, ... >
S2 = < [ S[3i+2] S[3i+3] S[3i+4] ]  for i = 0, 1, 2, ... >

The idea is that each of S0, S1, and S2 consists of roughly N/3 symbols, but the symbols are no longer the original alphabet; instead, each new symbol is some group of three symbols from the original alphabet. We will call these tri-characters. Most importantly, the suffixes of S0, S1, and S2 combine to form the suffixes of S. Thus one idea would be to recursively compute the suffixes of S0, S1, and S2 (which by definition implicitly represent sorted strings) and then merge the results in linear time. However, since this would be three recursive calls on problems 1/3 the original size, that would result in an O(N log N) algorithm. So the idea is going to be to avoid one of the three recursive calls, by computing two of the suffix groups recursively and using that information to compute the third suffix group.

Example: In our example, if we look at the original character set and use $ to represent the padded character, we get

S0 = [abr][aca][dab][ra$]
S1 = [bra][cad][abr][a$$]
S2 = [rac][ada][bra]

We can see that in S0, S1, and S2, each tri-character is now a trio of characters from the original alphabet. Using that alphabet, S0 and S1 are arrays of length four, and S2 is an
array of length three; thus they have four, four, and three suffixes, respectively. S0's suffixes are [abr][aca][dab][ra$], [aca][dab][ra$], [dab][ra$], and [ra$], which clearly correspond to the suffixes abracadabra, acadabra, dabra, and ra in the original string. In the original string, S, these suffixes are located at indices 0, 3, 6, and 9, respectively; so looking at all three of S0, S1, and S2, we can see that each Si represents the suffixes that are located at indices i mod 3 in S.

Step 3: Concatenate S1 and S2 and recursively compute the suffix array. In order to compute this suffix array, we will need to sort the new alphabet of tri-characters. This can be done in linear time by three passes of radix sort, since the old characters were already sorted in step 1. If in fact all the tri-characters in the new alphabet are unique, then we do not even need to bother with a recursive call. Making three passes of radix sort takes linear time. If T(N) is the running time of the suffix array construction algorithm, then the recursive call takes T(2N/3) time.

Example: In our example,

S1S2 = [bra][cad][abr][a$$][rac][ada][bra]

The sorted suffixes that will be computed recursively will represent tri-character strings as shown in the figure below. Notice that these are not exactly the same as the corresponding suffixes in S; however, if we strip out characters starting at the first $, we do have a match of suffixes. Also note that the indices returned by the recursive call do not correspond directly to the indices in S, though it is a simple matter to map them back. So to see how the algorithm actually forms the recursive call, observe that three passes of radix sort will assign the following alphabet:

[a$$] = 1, [abr] = 2, [ada] = 3, [bra] = 4, [cad] = 5, [rac] = 6

The figure after the next one shows the mapping of tri-characters, the resulting array that is formed for S1S2, and the resulting suffix array that is computed recursively.

Index   Substring being represented
  3     [a$$][rac][ada][bra]
  2     [abr][a$$][rac][ada][bra]
  5     [ada][bra]
  6     [bra]
  0     [bra][cad][abr][a$$][rac][ada][bra]
  1     [cad][abr][a$$][rac][ada][bra]
  4     [rac][ada][bra]

Figure: Suffix array for S1S2 in the tri-character set
S1S2:      [bra] [cad] [abr] [a$$] [rac] [ada] [bra]
Integers:    4     5     2     1     6     3     4    0 0 0
SA12:        3     2     5     6     0     1     4
Index:       0     1     2     3     4     5     6

Figure: Mapping of tri-characters, the resulting array that is formed for S1S2, and the resulting suffix array that is computed recursively

Step 4: Compute the suffix array for S0. This is easy to do because an S0 suffix starting at index 3i is exactly the single character S[3i] followed by the suffix starting at index 3i + 1, and that suffix is an S1 suffix. Since our recursive call has already sorted all the S1 suffixes, we can do step 4 with a simple two-pass radix sort: The first pass is on the ranks of the S1 suffixes, and the second pass is on the single character S[3i].

Example: In our example,

S0 = [abr][aca][dab][ra$]

From the recursive call in step 3, we can rank the suffixes in S1 and S2. The figure below shows how the indices in the original string can be referenced from the recursively computed suffix array, and it shows how that suffix array leads to a ranking of suffixes among S1 and S2. Entries in the next-to-last row are easily obtained from the prior two rows. In the last row, the ith entry is given by the location of i in the row labelled SA12.

S1S2:                     [bra] [cad] [abr] [a$$] [rac] [ada] [bra]
Index in S1S2:              0     1     2     3     4     5     6
SA12:                       3     2     5     6     0     1     4
SA12 using S's indices:    10     7     5     8     1     4     2
Rank in group:              5     6     2     1     7     3     4

Figure: Ranking of suffixes based on the suffix array computed in step 3

The ranking established in S1 and S2 can be used directly for the first radix sort pass on S0. Then we do a second pass on the single characters from S, using the prior radix sort to break ties. Notice that it is convenient if S0 has exactly as many elements as S1. The next figure shows how we can compute the suffix array for S0. At this point, we now have the suffix array for S0 and for the combined group S1 and S2. Since this is a two-pass radix sort, this step takes O(N).
S0:                        [abr] [aca] [dab] [ra$]
Index:                       0     3     6     9
Index of second element:     1     4     7    10    (add one to the row above)
Radix pass 1 ordering:       5     6     2     1    (last row of the previous figure)
Radix pass 2 ordering:       1     2     3     4    (stably radix sort by first char)
SA0, using S's indices:      0     3     6     9    (using results of the previous row)

Figure: Computing the suffix array SA0 for S0

Step 5: Merge the two suffix arrays using the standard algorithm to merge two sorted lists. The only issue is that we must be able to compare each suffix pair in constant time. There are two cases.

Case 1: Comparing an S0 element with an S1 element. Compare the first letter; if they do not match, we are done. Otherwise, compare the remainder of the S0 suffix (which is an S1 suffix) with the remainder of the S1 suffix (which is an S2 suffix); those are already ordered, so we are done.

Case 2: Comparing an S0 element with an S2 element. Compare at most the first two letters. If we still have a match, then at that point compare the remainder of the S0 suffix (which after skipping the two letters becomes an S2 suffix) with the remainder of the S2 suffix (which after skipping two letters becomes an S1 suffix); as in case 1, those suffixes are already ordered by SA12, so we are done.

Example: In our example, we have to merge

SA0 (for S0):            0   3   6   9
SA12 (for S1 and S2):   10   7   5   8   1   4   2

(For reference, the input is a b r a c a d a b r a at indices 0 through 10.) The first comparison is between index 0 (an a), which is an S0 element, and index 10 (also an a), which is an S1 element. Since that is a tie, we now have to compare index 1 with index 11. Normally this would have already been computed, since index 1 is in S1 while index 11 is in S2. However, this is special because index 11 is past the end of the string; consequently, it always represents the earlier suffix lexicographically, and the first element in the final suffix array is 10. We advance in the second group and now we have
SA0 (for S0):            0   3   6   9
SA12 (for S1 and S2):        7   5   8   1   4   2
Final SA:               10

Again the first characters match (both a), so we compare indices 1 and 8. And this is already computed, with index 8 having the smaller string. So that means that 7 now goes into the final suffix array, and we advance the second group, obtaining

SA0 (for S0):            0   3   6   9
SA12 (for S1 and S2):            5   8   1   4   2
Final SA:               10   7

Once again, the first characters match, so now we have to compare indices 1 and 6. Since this is a comparison between an S1 element and an S0 element, we cannot look up the result. Thus we have to compare characters directly. Index 1 contains a b and index 6 contains a d, so index 1 wins. Thus 0 goes into the final suffix array, and we advance the first group:

SA0 (for S0):                3   6   9
SA12 (for S1 and S2):            5   8   1   4   2
Final SA:               10   7   0
The same situation occurs on the next comparison between a pair of a's; the second comparison is between index 4 (a c) and index 6 (a d), so the element from the first group advances:

SA0 (for S0):                    6   9
SA12 (for S1 and S2):            5   8   1   4   2
Final SA:               10   7   0   3

At this point, there are no ties for a while, so we quickly advance to the last characters of each group:

SA0 (for S0):                        9
SA12 (for S1 and S2):                            2
Final SA:               10   7   0   3   5   8   1   4   6

Finally, we get to the end. The comparison between two r's (indices 9 and 2) requires that we compare the next characters, which are at indices 10 and 3. Since this comparison is between an S1 element and an S0 element, as we saw before, we cannot look up the result and must compare directly. But those are also the same (both a), so now we have to compare indices 11 and 4, which is an automatic winner for index 11 (since it is past the end of the string). Thus the suffix starting at index 9 advances, and then we can finish the merge. Notice that had we not been at the end of the string, we could have used the fact that the comparison is between an S1 element and an S2 element, which means the ordering would have been obtainable from the suffix array for S1S2.
SA0 (for S0):
SA12 (for S1 and S2):
Final SA:               10   7   0   3   5   8   1   4   6   9   2

/*
 * Fill in the suffix array information for string str.
 * str is the input string.
 * sa is an existing array to place the suffix array.
 * lcp is an existing array to place the LCP information.
 */
void createSuffixArray( const string & str, vector<int> & sa, vector<int> & lcp )
{
    if( sa.size( ) != str.length( ) || lcp.size( ) != str.length( ) )
        throw invalid_argument{ "Mismatched vector sizes" };

    int N = str.length( );

    vector<int> s( N + 3 );
    vector<int> SA( N + 3 );

    for( int i = 0; i < N; ++i )
        s[ i ] = str[ i ];

    makeSuffixArray( s, SA, N, 256 );

    for( int i = 0; i < N; ++i )
        sa[ i ] = SA[ i ];

    makeLCPArray( s, sa, lcp );
}

Figure: Code to set up the first call to makeSuffixArray; create appropriate size arrays, and, to keep things simple, just use the ASCII character codes
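The driver above finishes by calling makeLCPArray, whose implementation the text leaves to the exercises. One well-known way to realize it is Kasai's linear-time algorithm; the body below is our sketch under that assumption, not necessarily the book's solution. It processes suffixes in text order, reusing the previous match length, which can drop by at most one per step, so the total work is O(N):

    // Compute lcp[ i ] = LCP of the suffixes at sa[ i - 1 ] and sa[ i ].
    void makeLCPArray( const vector<int> & s, const vector<int> & sa,
                       vector<int> & lcp )
    {
        int N = sa.size( );
        vector<int> invSA( N );

        for( int i = 0; i < N; ++i )
            invSA[ sa[ i ] ] = i;          // inverse of the suffix array

        int h = 0;
        lcp[ 0 ] = 0;
        for( int i = 0; i < N; ++i )
            if( invSA[ i ] > 0 )
            {
                int j = sa[ invSA[ i ] - 1 ];   // preceding suffix in sorted order
                while( s[ i + h ] == s[ j + h ] )
                    ++h;                        // extend the match (padding 0s stop it)
                lcp[ invSA[ i ] ] = h;
                if( h > 0 )
                    --h;                        // match can shrink by at most one
            }
            else
                h = 0;
    }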
Since this is a standard merge, with at most two comparisons per suffix pair, this step takes linear time. The entire algorithm thus satisfies T(N) = T(2N/3) + O(N) and takes linear time. (Expanding the recurrence gives the geometric series N + 2N/3 + 4N/9 + ..., which sums to 3N.) Although we have only computed the suffix array, the LCP information can also be computed as the algorithm runs, but there are some tricky details involved, and often the LCP information is computed by a separate linear-time algorithm.

We close by providing a working implementation to compute suffix arrays. Rather than fully implementing step 1 to sort the original characters, we'll assume that only a small set of ASCII characters is present in the string. In the earlier figure we allocated the arrays, which have three extra slots for padding, and called makeSuffixArray, the basic linear-time algorithm. The figure below shows makeSuffixArray: It allocates all the needed arrays, ensures that S0 and S1 have the same number of elements (adding a dummy S1 suffix when needed), and then delegates the work to assignNames, computeS12, computeS0, and merge.

// Find the suffix array SA of s[ 0..n-1 ] in { 1..K }^n.
// Requires s[ n ] = s[ n + 1 ] = s[ n + 2 ] = 0, n >= 2.
void makeSuffixArray( const vector<int> & s, vector<int> & SA, int n, int K )
{
    int n0 = ( n + 2 ) / 3;
    int n1 = ( n + 1 ) / 3;
    int n2 = n / 3;
    int t = n0 - n1;     // 1 iff n % 3 == 1
    int n12 = n1 + n2 + t;

    vector<int> s12( n12 + 3 );
    vector<int> SA12( n12 + 3 );
    vector<int> s0( n0 );
    vector<int> SA0( n0 );

        // Generate positions in s for items in s12.
        // The "+t" adds a dummy mod 1 suffix if n % 3 == 1.
        // At that point, the size of s12 is n12.
    for( int i = 0, j = 0; i < n + t; ++i )
        if( i % 3 != 0 )
            s12[ j++ ] = i;

    int K12 = assignNames( s, s12, SA12, n0, n12, K );

    computeS12( s12, SA12, n12, K12 );
    computeS0( s, s0, SA0, SA12, n0, n12, K );
    merge( s, s12, SA, SA0, SA12, n, n0, n12, t );
}

Figure: The main routine for linear-time suffix array construction
/*
 * Assigns the new supercharacter names.
 * At end of routine, SA12 will have indices into s, in sorted order,
 * and s12 will have new character names.
 * Returns the number of names assigned; note that if
 * this value is the same as n12, then SA12 is a suffix array for s12.
 */
int assignNames( const vector<int> & s, vector<int> & s12, vector<int> & SA12,
                 int n0, int n12, int K )
{
        // Radix sort the new character trios
    radixPass( s12, SA12, s, 2, n12, K );
    radixPass( SA12, s12, s, 1, n12, K );
    radixPass( s12, SA12, s, 0, n12, K );

        // Find lexicographic names of triples
    int name = 0;
    int c0 = -1, c1 = -1, c2 = -1;

    for( int i = 0; i < n12; ++i )
    {
        if( s[ SA12[ i ] ] != c0 || s[ SA12[ i ] + 1 ] != c1
                                 || s[ SA12[ i ] + 2 ] != c2 )
        {
            ++name;
            c0 = s[ SA12[ i ] ];
            c1 = s[ SA12[ i ] + 1 ];
            c2 = s[ SA12[ i ] + 2 ];
        }

        if( SA12[ i ] % 3 == 1 )
            s12[ SA12[ i ] / 3 ] = name;        // S1
        else
            s12[ SA12[ i ] / 3 + n0 ] = name;   // S2
    }

    return name;
}

Figure: Routine to compute and assign the tri-character names

assignNames, shown above, begins by performing three passes of radix sort. Then it assigns names (i.e., numbers) sequentially, using the next available number if the current item has a different trio of characters than the prior item (recall that the tri-characters have already been sorted by the three passes of radix sort, and also recall that S0 and S1 have the same size, so adding n0 skips past the S1 names). We can use the basic counting radix sort from the sorting chapter to obtain a linear-time sort; this code is shown in the next figure. The array in represents the indexes into s; the result of the
radix sort is that the indices are sorted so that the characters in s are sorted at those indices (where the indices are offset as specified).

/*
 * Stably sort in[ 0..n-1 ] with indices into s that has keys in 0..K
 * into out[ 0..n-1 ]; sort is relative to offset into s.
 * Uses counting radix sort.
 */
void radixPass( const vector<int> & in, vector<int> & out,
                const vector<int> & s, int offset, int n, int K )
{
    vector<int> count( K + 2 );                     // counter array

    for( int i = 0; i < n; ++i )
        ++count[ s[ in[ i ] + offset ] + 1 ];       // count occurrences

    for( int i = 1; i <= K + 1; ++i )
        count[ i ] += count[ i - 1 ];               // compute exclusive sums

    for( int i = 0; i < n; ++i )
        out[ count[ s[ in[ i ] + offset ] ]++ ] = in[ i ];   // sort
}

/*
 * Stably sort in[ 0..n-1 ] with indices into s that has keys in 0..K
 * into out[ 0..n-1 ].
 * Uses counting radix sort.
 */
void radixPass( const vector<int> & in, vector<int> & out,
                const vector<int> & s, int n, int K )
{
    radixPass( in, out, s, 0, n, K );
}

Figure: Counting radix sort for the suffix array

The next figure contains the routines to compute the suffix arrays for S12 and S0, and then, finally, the merge routine is shown, with some supporting routines following it. The merge routine has the same basic look and feel as the standard merging algorithm seen in the mergesort discussion.

k-d Trees

Suppose that an advertising company maintains a database and needs to generate mailing labels for certain constituencies. A typical request might require sending out a mailing to people whose age falls in one range and whose annual income falls in another. This problem is known as a two-dimensional range query. In one dimension, the problem can be solved by a simple recursive algorithm in O(M + log N) average time, by traversing a preconstructed binary search tree. Here M is the number
/*
 * Compute the suffix array for s12, placing the result into SA12.
 */
void computeS12( vector<int> & s12, vector<int> & SA12,
                 int n12, int K12 )
{
    if( K12 == n12 )   // if unique names, don't need recursion
        for( int i = 0; i < n12; ++i )
            SA12[ s12[ i ] - 1 ] = i;
    else
    {
        makeSuffixArray( s12, SA12, n12, K12 );
            // store unique names in s12 using the suffix array
        for( int i = 0; i < n12; ++i )
            s12[ SA12[ i ] ] = i + 1;
    }
}

void computeS0( const vector<int> & s, vector<int> & s0, vector<int> & SA0,
                const vector<int> & SA12, int n0, int n12, int K )
{
    for( int i = 0, j = 0; i < n12; ++i )
        if( SA12[ i ] < n0 )
            s0[ j++ ] = 3 * SA12[ i ];

    radixPass( s0, SA0, s, n0, K );
}

Figure: Compute the suffix array for s12 (possibly recursively) and the suffix array for s0

of matches reported by the query. We would like to obtain a similar bound for two or more dimensions.

The two-dimensional search tree has the simple property that branching on odd levels is done with respect to the first key, and branching on even levels is done with respect to the second key. The root is arbitrarily chosen to be an odd level. A sample 2-d tree appears in a figure below.

Insertion into a 2-d tree is a trivial extension of insertion into a binary search tree: As we go down the tree, we need to maintain the current level. To keep our code simple, we assume that a basic item is an array of two elements. We then need to toggle the level between 0 and 1. A later figure shows the code to perform an insertion. We use recursion in this section; a nonrecursive implementation that would be used in practice is straightforward and left as an exercise (a sketch follows below). One difficulty is duplicates, particularly since several items can agree in one key. Our code allows duplicates and always places them in right branches; clearly this can be a problem if there are too many duplicates. A moment's thought will convince you that a randomly constructed 2-d tree has the same structural properties as a random binary search tree: The height is O(log N) on average, but O(N) in the worst case.
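One possible nonrecursive insertion, sketched under the assumption that KdNode stores its item in a data member named data (as in the recursive routines shown later), walks down with a pointer-to-pointer and attaches the new node at the first nullptr:

    // Nonrecursive 2-d tree insertion (a sketch, not the text's listing).
    void insert( const vector<Comparable> & x )
    {
        KdNode **p = &root;
        int level = 0;

        while( *p != nullptr )
        {
            p = x[ level ] < (*p)->data[ level ] ? &(*p)->left : &(*p)->right;
            level = 1 - level;    // toggle the discriminating key
        }
        *p = new KdNode{ x };     // duplicates fall to the right, as in the text
    }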
/*
 * Merge the sorted SA0 suffixes and the sorted SA12 suffixes.
 */
void merge( const vector<int> & s, const vector<int> & s12,
            vector<int> & SA, const vector<int> & SA0, const vector<int> & SA12,
            int n, int n0, int n12, int t )
{
    int p = 0, k = 0;

    while( t != n12 && p != n0 )
    {
        int i = getIndexIntoS( SA12, t, n0 );   // S12
        int j = SA0[ p ];                       // S0

        if( suffix12IsSmaller( s, s12, SA12, n0, i, j, t ) )
        {
            SA[ k++ ] = i;
            ++t;
        }
        else
        {
            SA[ k++ ] = j;
            ++p;
        }
    }

    while( p < n0 )
        SA[ k++ ] = SA0[ p++ ];
    while( t < n12 )
        SA[ k++ ] = getIndexIntoS( SA12, t++, n0 );
}

Figure: Merge the suffix arrays SA0 and SA12

Unlike binary search trees, for which clever O(log N) worst-case variants exist, there are no schemes that are known to guarantee a balanced 2-d tree. The problem is that such a scheme would likely be based on tree rotations, and tree rotations don't work in 2-d trees. The best one can do is to periodically rebalance the tree by reconstructing a subtree, as described in the exercises. Similarly, there are no deletion algorithms beyond the obvious lazy deletion strategy. If all the items arrive before we need to process queries, then we can construct a perfectly balanced 2-d tree in O(N log N) time; we leave this as an exercise.

Several kinds of queries are possible on a 2-d tree. We can ask for an exact match or a match based on one of the two keys; the latter type of request is a partial match query. Both of these are special cases of an (orthogonal) range query. An orthogonal range query gives all items whose first key is between a specified set of values and whose second key is between another specified set of values. This is exactly the problem that was described in the introduction to this section. A range query is easily
int getIndexIntoS( const vector<int> & SA12, int t, int n0 )
{
    if( SA12[ t ] < n0 )
        return SA12[ t ] * 3 + 1;
    else
        return ( SA12[ t ] - n0 ) * 3 + 2;
}

// True if [ a1, a2 ] <= [ b1, b2 ]
bool leq( int a1, int a2, int b1, int b2 )
{ return a1 < b1 || a1 == b1 && a2 <= b2; }

// True if [ a1, a2, a3 ] <= [ b1, b2, b3 ]
bool leq( int a1, int a2, int a3, int b1, int b2, int b3 )
{ return a1 < b1 || a1 == b1 && leq( a2, a3, b2, b3 ); }

bool suffix12IsSmaller( const vector<int> & s, const vector<int> & s12,
                        const vector<int> & SA12, int n0, int i, int j, int t )
{
    if( SA12[ t ] < n0 )    // s1 vs s0; can break tie after 1 character
        return leq( s[ i ], s12[ SA12[ t ] + n0 ],
                    s[ j ], s12[ j / 3 ] );
    else                    // s2 vs s0; can break tie after 2 characters
        return leq( s[ i ], s[ i + 1 ], s12[ SA12[ t ] - n0 + 1 ],
                    s[ j ], s[ j + 1 ], s12[ j / 3 + n0 ] );
}

Figure: Supporting routines for merging the suffix arrays SA0 and SA12

Figure: A sample 2-d tree
  public:
    void insert( const vector<Comparable> & x )
    {
        insert( x, root, 0 );
    }

  private:
    void insert( const vector<Comparable> & x, KdNode * & t, int level )
    {
        if( t == nullptr )
            t = new KdNode{ x };
        else if( x[ level ] < t->data[ level ] )
            insert( x, t->left, 1 - level );
        else
            insert( x, t->right, 1 - level );
    }

Figure: Insertion into 2-d trees

solved by a recursive tree traversal, as shown in the printRange figure on the next page. By testing before making a recursive call, we can avoid unnecessarily visiting all nodes.

To find a specific item, we can set low equal to high equal to the item we are searching for. To perform a partial match query, we set the range for the key not involved in the match to run from negative infinity to infinity; the other range is set with the low and high points equal to the value of the key involved in the match.

An insertion or exact match search in a 2-d tree takes time that is proportional to the depth of the tree, namely, O(log N) on average and O(N) in the worst case. The running time of a range search depends on how balanced the tree is, whether or not a partial match is requested, and how many items are actually found. We mention three results that have been shown.

For a perfectly balanced tree, a range query could take O(M + sqrt(N)) time in the worst case to report M matches. At any node, we may have to visit two of the four grandchildren, leading to the equation T(N) = 2T(N/4) + O(1). In practice, however, these searches tend to be very efficient, and even the worst case is not poor because, for typical N, the difference between sqrt(N) and log N is compensated by the smaller constant that is hidden in the Big-Oh notation.

For a randomly constructed tree, the average running time of a partial match query is O(M + N^a), where a = (sqrt(17) - 3)/2 (see below). A recent, and somewhat surprising, result is that this essentially describes the average running time of a range search of a random 2-d tree.

For k dimensions, the same algorithm works; we just cycle through the keys at each level. However, in practice, the balance starts getting worse because typically the effect of duplicates and nonrandom inputs becomes more pronounced. We leave the coding details as an exercise for the reader and mention the analytical results: For a perfectly balanced
  public:
    /*
     * Print items satisfying
     * low[ 0 ] <= x[ 0 ] <= high[ 0 ] and
     * low[ 1 ] <= x[ 1 ] <= high[ 1 ].
     */
    void printRange( const vector<Comparable> & low,
                     const vector<Comparable> & high ) const
    {
        printRange( low, high, root, 0 );
    }

  private:
    void printRange( const vector<Comparable> & low,
                     const vector<Comparable> & high,
                     KdNode *t, int level ) const
    {
        if( t != nullptr )
        {
            if( low[ 0 ] <= t->data[ 0 ] && high[ 0 ] >= t->data[ 0 ] &&
                low[ 1 ] <= t->data[ 1 ] && high[ 1 ] >= t->data[ 1 ] )
                cout << "(" << t->data[ 0 ] << ","
                            << t->data[ 1 ] << ")" << endl;

            if( low[ level ] <= t->data[ level ] )
                printRange( low, high, t->left, 1 - level );
            if( high[ level ] >= t->data[ level ] )
                printRange( low, high, t->right, 1 - level );
        }
    }

Figure: 2-d trees: range search

tree, the worst-case running time of a range query is O(k * N^(1 - 1/k) + M). In a randomly constructed k-d tree, a partial match query that involves p of the k keys takes O(M + N^a), where a is the (only) positive root of

(2 + a)^p (1 + a)^(k - p) = 2^k

Computation of a for various k and p is left as an exercise; the value of a for k = 2, p = 1 is reflected in the result stated above for partial matching in random 2-d trees. Although there are several exotic structures that support range searching, the k-d tree is probably the simplest such structure that achieves respectable running times.
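As a quick arithmetic check of the root equation above, for k = 2 and p = 1 it reduces to a quadratic whose positive root matches the constant quoted earlier:

\[
(2+\alpha)(1+\alpha) = 2^2
\;\Longrightarrow\; \alpha^2 + 3\alpha - 2 = 0
\;\Longrightarrow\; \alpha = \frac{\sqrt{17}-3}{2} \approx 0.56 .
\]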
Pairing Heaps

The last data structure we examine is the pairing heap. The analysis of the pairing heap is still open, but when decreaseKey operations are needed, it seems to outperform other heap structures. The most likely reason for its efficiency is its simplicity. The pairing heap is represented as a heap-ordered tree; a sample pairing heap appears in the first figure below.

The actual pairing heap implementation uses a left child/right sibling representation, as discussed earlier in the book. The decreaseKey operation, as we will see, requires that each node contain an additional link. A node that is a leftmost child contains a link to its parent; otherwise the node is a right sibling and contains a link to its left sibling. We'll refer to this data member as prev. The class skeleton and pairing heap node declaration are omitted for brevity; they are completely straightforward (a sketch appears below). The second figure shows the actual representation of the sample pairing heap.

We begin by sketching the basic operations. To merge two pairing heaps, we make the heap with the larger root a left child of the heap with the smaller root. Insertion is, of course, a special case of merging. To perform a decreaseKey, we lower the value in the requested node. Because we are not maintaining parent pointers for all nodes, we

Figure: A sample pairing heap: abstract representation

Figure: Actual representation of the previous pairing heap
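The omitted node declaration might look like the following; this is our sketch, inferred from the member names used by compareAndLink, insert, and decreaseKey below, not the book's listing:

    struct PairNode
    {
        Comparable element;
        PairNode  *leftChild;     // leftmost child
        PairNode  *nextSibling;   // right sibling
        PairNode  *prev;          // parent if leftmost child, else left sibling

        PairNode( const Comparable & theElement )
          : element{ theElement }, leftChild{ nullptr },
            nextSibling{ nullptr }, prev{ nullptr } { }
    };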
Figure: compareAndLink merges two subheaps

don't know if this violates the heap order. Thus we cut the adjusted node from its parent and complete the decreaseKey by merging the two heaps that result. To perform a deleteMin, we remove the root, creating a collection of heaps; if there are c children of the root, then c - 1 calls to the merge procedure will reassemble the heap. The most important detail is the method used to perform the merge and how the merges are applied.

The figure above shows how two subheaps are combined. The procedure is generalized to allow the second subheap to have siblings. As we mentioned earlier, the subheap with the larger root is made a leftmost child of the other subheap. The code is straightforward and shown in the compareAndLink figure. Notice that we have several instances in which a pointer is tested against nullptr before assigning its prev data member; this suggests that perhaps it would be useful to have a nullNode sentinel, which was customary in this chapter's search tree implementations.

The insert and decreaseKey operations are, then, simple implementations of the abstract description. decreaseKey requires a position object, which is just a PairNode *. Since this is determined (irrevocably) when an item is first inserted, insert returns the pointer to the PairNode it allocates back to the caller. The code is shown in the insert/decreaseKey figure. Our routine for decreaseKey throws an exception if the new value is not smaller than the old; otherwise, the resulting structure might not obey heap order.

The basic deleteMin procedure follows directly from the abstract description and is shown in a later figure. The devil, of course, is in the details: How is combineSiblings implemented? Several variants have been proposed, but none has been shown to provide the same amortized bounds as the Fibonacci heap. It has recently been shown that almost all of the proposed methods are in fact theoretically less efficient than the Fibonacci heap. Even so, the method coded at the end of this section always seems to perform as well as or better than other heap structures, including the binary heap, for the typical graph theory uses that involve a host of decreaseKey operations. This method, known as two-pass merging, is the simplest and most practical of the many variants that have been suggested. We first scan left to right, merging pairs of
/**
 * Internal method that is the basic operation to maintain order.
 * Links first and second together to satisfy heap order.
 * first is the root of tree 1, which may not be nullptr.
 *    first->nextSibling MUST be nullptr on entry.
 * second is the root of tree 2, which may be nullptr.
 * first becomes the result of the tree merge.
 */
void compareAndLink( PairNode * & first, PairNode *second )
{
    if( second == nullptr )
        return;

    if( second->element < first->element )
    {
            // Attach first as leftmost child of second
        second->prev = first->prev;
        first->prev = second;
        first->nextSibling = second->leftChild;
        if( first->nextSibling != nullptr )
            first->nextSibling->prev = first;
        second->leftChild = first;
        first = second;
    }
    else
    {
            // Attach second as leftmost child of first
        second->prev = first;
        first->nextSibling = second->nextSibling;
        if( first->nextSibling != nullptr )
            first->nextSibling->prev = first;
        second->nextSibling = first->leftChild;
        if( second->nextSibling != nullptr )
            second->nextSibling->prev = second;
        first->leftChild = second;
    }
}

Figure: Pairing heaps: routine to merge two subheaps

children. After the first scan, we have half as many trees to merge. A second scan is then performed, right to left. At each step we merge the rightmost tree remaining from the first scan with the current merged result. We must be careful if there is an odd number of children; when that happens, we merge the last child with the result of the rightmost merge to complete the first scan.
struct PairNode;
typedef PairNode * Position;

/**
 * Insert item x into the priority queue, maintaining heap order.
 * Return the Position (a pointer to the node) containing the new item.
 */
Position insert( const Comparable & x )
{
    PairNode *newNode = new PairNode{ x };

    if( root == nullptr )
        root = newNode;
    else
        compareAndLink( root, newNode );
    return newNode;
}

/**
 * Change the value of the item stored in the pairing heap.
 * Throw invalid_argument if newVal is larger than the currently stored value.
 * p is a Position returned by insert.
 * newVal is the new value, which must be smaller than the currently stored value.
 */
void decreaseKey( Position p, const Comparable & newVal )
{
    if( p->element < newVal )
        throw invalid_argument{ "newVal too large" };
    p->element = newVal;
    if( p != root )
    {
        if( p->nextSibling != nullptr )
            p->nextSibling->prev = p->prev;
        if( p->prev->leftChild == p )
            p->prev->leftChild = p->nextSibling;
        else
            p->prev->nextSibling = p->nextSibling;

        p->nextSibling = nullptr;
        compareAndLink( root, p );
    }
}

Figure: Pairing heaps: insert and decreaseKey
void deleteMin( )
{
    if( isEmpty( ) )
        throw UnderflowException{ };

    PairNode *oldRoot = root;

    if( root->leftChild == nullptr )
        root = nullptr;
    else
        root = combineSiblings( root->leftChild );

    delete oldRoot;
}

Figure: Pairing heap deleteMin

As an example, if we have eight children, c1 through c8, the first scan performs the merges c1 and c2, c3 and c4, c5 and c6, and c7 and c8. As a result, we obtain d1, d2, d3, and d4. We perform the second pass by merging d3 and d4; d2 is then merged with that result, and then d1 is merged with the result of the previous merge.

Our implementation requires an array to store the subtrees. In the worst case, N - 1 items could be children of the root, but declaring a (non-static) array of size N inside of combineSiblings would give an O(N) algorithm. So we use a single expanding array instead; because it is static, it is reused in each call, without the overhead of reinitialization. Other merging strategies are discussed in the exercises; the only simple merging strategy that is easily seen to be poor is a left-to-right single-pass merge (see the exercises). The pairing heap is a good example of "simple is better" and seems to be the method of choice for serious applications requiring the decreaseKey or merge operation.

Summary

In this chapter, we've seen several efficient variations of the binary search tree. The top-down splay tree provides O(log N) amortized performance, the treap gives O(log N) randomized performance, and the red-black tree gives O(log N) worst-case performance for the basic operations. The trade-offs between the various structures involve code complexity, ease of deletion, and differing searching and insertion costs. It is difficult to say that any one structure is a clear winner. Recurring themes include tree rotations and the use of sentinel nodes to eliminate many of the annoying tests for nullptr that would otherwise be necessary.

The suffix tree and suffix array are powerful data structures that allow quick repeated searching of a fixed text. The k-d tree provides a practical method for performing range searches, even though the theoretical bounds are not optimal.

Finally, we described and coded the pairing heap, which seems to be the most practical mergeable priority queue, especially when decreaseKey operations are required, even though it is theoretically less efficient than the Fibonacci heap.
/**
 * Internal method that implements two-pass merging.
 * firstSibling the root of the conglomerate and is assumed not nullptr.
 */
PairNode * combineSiblings( PairNode *firstSibling )
{
    if( firstSibling->nextSibling == nullptr )
        return firstSibling;

        // Allocate the array
    static vector<PairNode *> treeArray( 5 );

        // Store the subtrees in an array
    int numSiblings = 0;
    for( ; firstSibling != nullptr; ++numSiblings )
    {
        if( numSiblings == treeArray.size( ) )
            treeArray.resize( numSiblings * 2 );
        treeArray[ numSiblings ] = firstSibling;
        firstSibling->prev->nextSibling = nullptr;   // break links
        firstSibling = firstSibling->nextSibling;
    }
    if( numSiblings == treeArray.size( ) )
        treeArray.resize( numSiblings + 1 );
    treeArray[ numSiblings ] = nullptr;

        // Combine subtrees two at a time, going left to right
    int i = 0;
    for( ; i + 1 < numSiblings; i += 2 )
        compareAndLink( treeArray[ i ], treeArray[ i + 1 ] );

    int j = i - 2;

        // j has the result of last compareAndLink.
        // If an odd number of trees, get the last one.
    if( j == numSiblings - 3 )
        compareAndLink( treeArray[ j ], treeArray[ j + 2 ] );

        // Now go right to left, merging last tree with
        // next to last. The result becomes the new last.
    for( ; j >= 2; j -= 2 )
        compareAndLink( treeArray[ j - 2 ], treeArray[ j ] );

    return treeArray[ 0 ];
}

Figure: Pairing heaps: two-pass merging
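A brief usage sketch tying the pieces together, assuming the omitted class skeleton exposes the members shown in this section (the class name PairingHeap is our assumption):

    PairingHeap<int> h;
    auto pos = h.insert( 42 );    // keep the Position for a later decreaseKey
    h.insert( 17 );
    h.insert( 99 );

    h.decreaseKey( pos, 5 );      // 42 becomes 5, now the minimum
    h.deleteMin( );               // removes 5 via two-pass merging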