Figure: Visualization of an execution of merge-sort (continued from the previous figure); note the conquer step performed between steps (m) and (n).
Proposition: The merge-sort tree associated with an execution of merge-sort on a sequence of size n has height ⌈log n⌉.

We leave the justification of this proposition as a simple exercise. We will use this proposition to analyze the running time of the merge-sort algorithm.

Having given an overview of merge-sort and an illustration of how it works, let us consider each of the steps of this divide-and-conquer algorithm in more detail. The divide and recur steps of the merge-sort algorithm are simple: dividing a sequence of size n involves separating it at the element with index ⌈n/2⌉, and the recursive calls simply involve passing these smaller sequences as parameters. The difficult step is the conquer step, which merges two sorted sequences into a single sorted sequence. Thus, before we present our analysis of merge-sort, we need to say more about how this is done.

Merging Arrays and Lists

To merge two sorted sequences, it is helpful to know if they are implemented as arrays or lists. Thus, we give detailed pseudo-code describing how to merge two sorted sequences represented as arrays and as linked lists in this section.

Merging Two Sorted Arrays
We illustrate a step in the merge of two sorted arrays in an accompanying figure.

Code Fragment: Algorithm for merging two sorted array-based sequences.

Figure: A step in the merge of two sorted arrays; we show the arrays before the copy step in (a) and after it in (b).

Merging Two Sorted Lists

In another code fragment, we give a list-based version of algorithm merge, for merging two sorted sequences, S1 and S2, implemented as linked lists. The main idea is to iteratively remove the smallest element from the front of one of the two lists and add it to the end of the output sequence, S, until one of the two input lists is empty, at which point we copy the remainder of the other list to S. We show an example execution of this version of algorithm merge in an accompanying figure.

Code Fragment: Algorithm merge for merging two sorted sequences implemented as linked lists.

The Running Time of Merging

We analyze the running time of the merge algorithm by making some simple observations. Let n1 and n2 be the number of elements of S1 and S2, respectively. Algorithm merge has three while loops. Independent of whether we are analyzing the array-based version or the list-based version, the operations performed inside each loop take O(1) time each. The key observation is that during each iteration of one of the loops, one element is copied or moved from either S1 or S2 into S (and that element is considered no further). Since no insertions are performed into S1 or S2, this observation implies that the overall number of iterations of the three loops is n1 + n2. Thus, the running time of algorithm merge is O(n1 + n2).

Figure: Example of an execution of the algorithm merge shown in the code fragment above.
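To make the merging step concrete, here is a minimal Java sketch of the array-based merge; the method shape and the use of a Comparator are our own illustrative choices rather than the book's exact code fragment.

```java
import java.util.Comparator;

public class MergeExample {
    // Merge sorted arrays a and b into the pre-allocated array out,
    // where out.length == a.length + b.length.
    // Runs in O(n1 + n2) time, with n1 = a.length and n2 = b.length.
    public static <K> void merge(K[] a, K[] b, K[] out, Comparator<K> comp) {
        int i = 0, j = 0;
        while (i + j < out.length) {
            if (j == b.length || (i < a.length && comp.compare(a[i], b[j]) <= 0))
                out[i + j] = a[i++];   // copy the next element of a
            else
                out[i + j] = b[j++];   // copy the next element of b
        }
    }
}
```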
The Running Time of Merge-Sort

Now that we have given the details of the merge-sort algorithm, in both its array-based and list-based versions, and we have analyzed the running time of the crucial merge algorithm used in the conquer step, let us analyze the running time of the entire merge-sort algorithm, assuming it is given an input sequence of n elements. For simplicity, we restrict our attention to the case where n is a power of 2. We leave it to an exercise to show that the result of our analysis also holds when n is not a power of 2.

As we did in the analysis of the merge algorithm, we assume that the input sequence S and the auxiliary sequences S1 and S2, created by each recursive call of merge-sort, are implemented in such a way that
merging two sorted sequences can be done in linear time.

As we mentioned earlier, we analyze the merge-sort algorithm by referring to the merge-sort tree T. We call the time spent at a node v of T the running time of the recursive call associated with v, excluding the time taken waiting for the recursive calls associated with the children of v to terminate. In other words, the time spent at node v includes the running times of the divide and conquer steps, but excludes the running time of the recur step. We have already observed that the details of the divide step are straightforward; this step runs in time proportional to the size of the sequence for v. In addition, as discussed above, the conquer step, which consists of merging two sorted subsequences, also takes linear time, independent of whether we are dealing with arrays or linked lists. That is, letting i denote the depth of node v, the time spent at node v is O(n/2^i), since the size of the sequence handled by the recursive call associated with v is equal to n/2^i.

Looking at the tree T more globally, as shown in the accompanying figure, we see that, given our definition of "time spent at a node," the running time of merge-sort is equal to the sum of the times spent at the nodes of T. Observe that T has exactly 2^i nodes at depth i. This simple observation has an important consequence, for it implies that the overall time spent at all the nodes of T at depth i is O(2^i · n/2^i), which is O(n). By the proposition above, the height of T is ⌈log n⌉. Thus, since the time spent at each of the O(log n) levels of T is O(n), we have the following result:

Proposition: Algorithm merge-sort sorts a sequence S of size n in O(n log n) time, assuming two elements of S can be compared in O(1) time.

In other words, the merge-sort algorithm asymptotically matches the fast running time of the heap-sort algorithm.

Figure: Visual time analysis of the merge-sort tree T. Each node is shown labeled with the size of its subproblem.
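The level-by-level accounting behind this proposition can be written as a short sum (a restatement of the argument above):

```latex
% Level i of the merge-sort tree has 2^i nodes, each handling n/2^i elements.
\sum_{i=0}^{\lceil \log n \rceil} 2^i \cdot O\!\left(\frac{n}{2^i}\right)
  \;=\; \sum_{i=0}^{\lceil \log n \rceil} O(n)
  \;=\; O(n \log n).
```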
Java Implementations of Merge-Sort

In this section, we present two Java implementations of the merge-sort algorithm, one for lists and the other for arrays.

A Recursive List-Based Implementation of Merge-Sort

In a code fragment below, we show a complete Java implementation of the list-based merge-sort algorithm as a static recursive method, mergeSort. A Comparator is used to decide the relative order of two elements. In this implementation, the input is a list, L, and auxiliary lists, L1 and L2, are processed by the recursive calls. Each list is modified by insertions and deletions only at the head and tail; hence, each list update takes O(1) time, assuming the lists are implemented with doubly linked lists. In our code, we use class NodeList for the auxiliary lists. Thus, for a list L of size n, method mergeSort(L, c) runs in time O(n log n) provided the list is implemented with a doubly linked list and the comparator c can compare two elements of L in O(1) time.
Code Fragment: Method merge and the recursive method mergeSort, implementing the recursive merge-sort algorithm.
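As a stand-in for the book's NodeList-based code fragment, here is a minimal sketch of the recursive list-based merge-sort using java.util.LinkedList; all names here are our own.

```java
import java.util.Comparator;
import java.util.LinkedList;

public class ListMergeSort {
    // Sort the list s in place using merge-sort; O(n log n) comparisons.
    public static <K> void mergeSort(LinkedList<K> s, Comparator<K> comp) {
        int n = s.size();
        if (n < 2) return;                      // base case: already sorted
        LinkedList<K> s1 = new LinkedList<>();  // first half
        LinkedList<K> s2 = new LinkedList<>();  // second half
        while (s1.size() < n / 2) s1.addLast(s.removeFirst());  // divide
        while (!s.isEmpty()) s2.addLast(s.removeFirst());
        mergeSort(s1, comp);                    // recur on each half
        mergeSort(s2, comp);
        merge(s1, s2, s, comp);                 // conquer: merge back into s
    }

    private static <K> void merge(LinkedList<K> s1, LinkedList<K> s2,
                                  LinkedList<K> s, Comparator<K> comp) {
        while (!s1.isEmpty() && !s2.isEmpty()) {
            if (comp.compare(s1.getFirst(), s2.getFirst()) <= 0)
                s.addLast(s1.removeFirst());
            else
                s.addLast(s2.removeFirst());
        }
        while (!s1.isEmpty()) s.addLast(s1.removeFirst()); // copy remainder
        while (!s2.isEmpty()) s.addLast(s2.removeFirst());
    }
}
```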
A Nonrecursive Array-Based Implementation of Merge-Sort

There is a nonrecursive version of array-based merge-sort, which runs in O(n log n) time. It is a bit faster than recursive list-based merge-sort in practice, as it avoids the extra overheads of recursive calls and node creation. The main idea is to perform merge-sort bottom-up, performing the merges level by level going up the merge-sort tree. Given an input array of elements, we begin by merging every odd-even pair of elements into sorted runs of length two. We merge these runs into runs of length four, merge these new runs into runs of length eight, and so on, until the array is sorted. To keep the space usage reasonable, we deploy an output array that stores the merged runs (swapping input and output arrays after each iteration). We give a Java implementation in a code fragment below, where we use the built-in method System.arraycopy to copy a range of cells between two arrays.

Code Fragment: An implementation of the nonrecursive merge-sort algorithm.
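The bottom-up scheme just described can be sketched as follows (our own simplified int-array variant; the book's version is generic and takes a Comparator):

```java
public class BottomUpMergeSort {
    // Sort an int array bottom-up: merge runs of length 1, 2, 4, ...
    public static void mergeSort(int[] a) {
        int n = a.length;
        int[] in = a, out = new int[n];
        for (int run = 1; run < n; run *= 2) {       // current run length
            for (int lo = 0; lo < n; lo += 2 * run)  // merge each pair of runs
                merge(in, out, lo, Math.min(lo + run, n), Math.min(lo + 2 * run, n));
            int[] tmp = in; in = out; out = tmp;     // swap input and output arrays
        }
        if (in != a) System.arraycopy(in, 0, a, 0, n); // result must end up in a
    }

    // Merge in[lo..mid) and in[mid..hi) into out[lo..hi).
    private static void merge(int[] in, int[] out, int lo, int mid, int hi) {
        int i = lo, j = mid;
        for (int k = lo; k < hi; k++)
            out[k] = (j == hi || (i < mid && in[i] <= in[j])) ? in[i++] : in[j++];
    }
}
```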
Merge-Sort and Recurrence Equations

There is another way to justify that the running time of the merge-sort algorithm is O(n log n). Namely, we can deal more directly with the recursive nature of the merge-sort algorithm. In this section, we present such an analysis of the running time of merge-sort, and in so doing introduce the mathematical concept of a recurrence equation (also known as a recurrence relation).

Let the function t(n) denote the worst-case running time of merge-sort on an input sequence of size n. Since merge-sort is recursive, we can characterize function t(n) by means of an equation where the function t(n) is recursively expressed in terms of itself. In order to simplify our characterization of t(n), let us restrict our attention to the case when n is a power of 2. (We leave the problem of showing that our
asymptotic characterization still holds in the general case as an exercise.) In this case, we can specify the definition of t(n) as

    t(n) = b                  if n < 2
    t(n) = 2t(n/2) + cn       otherwise,

for constants b > 0 and c > 0. An expression such as the one above is called a recurrence equation, since the function appears on both the left- and right-hand sides of the equal sign. Although such a characterization is correct and accurate, what we really desire is a big-Oh type of characterization of t(n) that does not involve the function t(n) itself. That is, we want a closed-form characterization of t(n).

We can obtain a closed-form solution by applying the definition of a recurrence equation, assuming n is relatively large. For example, after one more application of the equation above, we can write a new recurrence for t(n) as

    t(n) = 2(2t(n/2²) + (cn/2)) + cn
         = 2²t(n/2²) + 2(cn/2) + cn
         = 2²t(n/2²) + 2cn.

If we apply the equation again, we get t(n) = 2³t(n/2³) + 3cn. At this point, we should see a pattern emerging, so that after applying this equation i times we get

    t(n) = 2^i t(n/2^i) + icn.

The issue that remains, then, is to determine when to stop this process. To see when to stop, recall that we switch to the closed form t(n) = b when n < 2, which will occur when 2^i = n. In other words, this will occur when i = log n. Making this substitution, then, yields

    t(n) = 2^(log n) t(n/2^(log n)) + (log n)cn
         = n t(1) + cn log n
         = nb + cn log n.

That is, we get an alternative justification of the fact that t(n) is O(n log n).

Quick-Sort

The next sorting algorithm we discuss is called quick-sort. Like merge-sort, this algorithm is also based on the divide-and-conquer paradigm, but it uses this technique in a somewhat opposite manner, as all the hard work is done before the recursive calls.

High-Level Description of Quick-Sort
The quick-sort algorithm sorts a sequence S using a simple recursive approach. The main idea is to apply the divide-and-conquer technique, whereby we divide S into subsequences, recur to sort each subsequence, and then combine the sorted subsequences by a simple concatenation. In particular, the quick-sort algorithm consists of the following three steps (sketched in code after the figure):

Divide: If S has at least two elements (nothing needs to be done if S has zero or one element), select a specific element x from S, which is called the pivot. As is common practice, choose the pivot x to be the last element in S. Remove all the elements from S and put them into three sequences: L, storing the elements in S less than x; E, storing the elements in S equal to x; and G, storing the elements in S greater than x. Of course, if the elements of S are all distinct, then E holds just one element, the pivot itself.

Recur: Recursively sort sequences L and G.

Conquer: Put back the elements into S in order by first inserting the elements of L, then those of E, and finally those of G.

Figure: A visual schematic of the quick-sort algorithm.
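Here is a minimal Java sketch of the three steps above, pivoting on the last element; the deque-based representation and names are our own illustrative choices, not the book's code fragment.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class QuickSortExample {
    // Sort a deque of integers with quick-sort, pivoting on the last element.
    public static void quickSort(Deque<Integer> s) {
        if (s.size() < 2) return;               // zero or one element: done
        int pivot = s.peekLast();               // divide: choose the pivot
        Deque<Integer> l = new ArrayDeque<>();  // elements less than the pivot
        Deque<Integer> e = new ArrayDeque<>();  // elements equal to the pivot
        Deque<Integer> g = new ArrayDeque<>();  // elements greater than the pivot
        while (!s.isEmpty()) {                  // scan backwards, emptying s
            int x = s.removeLast();
            if (x < pivot) l.addFirst(x);
            else if (x == pivot) e.addFirst(x);
            else g.addFirst(x);
        }
        quickSort(l);                           // recur on L and G (not E)
        quickSort(g);
        s.addAll(l);                            // conquer: concatenate L, E, G
        s.addAll(e);
        s.addAll(g);
    }
}
```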
Like merge-sort, the execution of quick-sort can be visualized by means of a binary recursion tree, called the quick-sort tree. An accompanying figure summarizes an execution of the quick-sort algorithm by showing the input and output sequences processed at each node of the quick-sort tree; the step-by-step evolution is shown in subsequent figures. Unlike merge-sort, however, the height of the quick-sort tree associated with an execution of quick-sort is linear in the worst case. This happens, for example, if the sequence consists of n distinct elements and is already sorted. Indeed, in this case, the standard choice of the pivot as the largest element yields a subsequence L of size n − 1, while subsequence E has size 1 and subsequence G has size 0. At each invocation of quick-sort on subsequence L, the size decreases by 1. Hence, the height of the quick-sort tree is n − 1.

Figure: Quick-sort tree T for an execution of the quick-sort algorithm on a sequence: (a) input sequences processed at each node of T; (b) output sequences generated at each node of T. The pivot used at each level of the recursion is shown in bold.
Each node of the tree represents a recursive call. The nodes drawn with dashed lines represent calls that have not been made yet. The node drawn with thick lines represents the running invocation. The empty nodes drawn with thin lines represent terminated calls. The remaining nodes represent suspended calls (that is, active invocations that are waiting for a child invocation to return).
Figure: Visualization of an execution of quick-sort; note the conquer step performed in one of the panels. (Continues in the next figure.)
Figure: Visualization of an execution of quick-sort (continued from the previous figure); several calls are omitted. Note the conquer steps performed in panel (o) and the final panel.
In a code fragment below, we give a pseudo-code description of the quick-sort algorithm that is efficient for sequences implemented as arrays or linked lists. The algorithm follows the template for quick-sort given above, adding the detail of scanning the input sequence S backwards to divide it into the lists L, E, and G of elements that are, respectively, less than, equal to, and greater than the pivot. We perform this scan backwards, since removing the last element in a sequence is a constant-time operation independent of whether the sequence is implemented as an array or a linked list. We then recur on the L and G lists, and copy the sorted lists L, E, and G back to S. We perform this latter set of copies in the forward direction, since inserting elements at the end of a sequence is a constant-time operation
independent of whether the sequence is implemented as an array or a linked list.

Code Fragment: Quick-sort for an input sequence S implemented with a linked list or an array.

Running Time of Quick-Sort

We can analyze the running time of quick-sort with the same technique used for merge-sort. Namely, we can identify the time spent at each node of the quick-sort tree T and sum up the running times for all the nodes. Examining the code fragment, we see that the divide step and the conquer step of quick-sort can be implemented in linear time. Thus, the time spent at a node v of T is proportional to the input size s(v) of v, defined as the size of the sequence handled by the invocation of quick-sort associated with node v. Since subsequence E has at least one element (the pivot), the sum of the input sizes of the children of v
is at most s(v) − 1.

Given a quick-sort tree T, let s_i denote the sum of the input sizes of the nodes at depth i in T. Clearly, s_0 = n, since the root r of T is associated with the entire sequence. Also, s_1 ≤ n − 1, since the pivot is not propagated to the children of r. Consider next s_2. If both children of r have nonzero input size, then s_2 = n − 3. Otherwise (one child of the root has zero size, the other has size n − 1), s_2 = n − 2. Thus, s_2 ≤ n − 2. Continuing this line of reasoning, we obtain that s_i ≤ n − i.

As observed above, the height of T is n − 1 in the worst case. Thus, the worst-case running time of quick-sort is O(Σ s_i over all depths i), which is O(Σ (n − i) for i from 0 to n − 1). Thus, quick-sort runs in O(n²) worst-case time.

Given its name, we would expect quick-sort to run quickly. However, the quadratic bound above indicates that quick-sort is slow in the worst case. Paradoxically, this worst-case behavior occurs for problem instances when sorting should be easy, namely, if the sequence is already sorted.
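The arithmetic behind the worst-case bound, written out:

```latex
% Worst case: s_i <= n - i and the tree has n - 1 levels, so
\sum_{i=0}^{n-1} s_i \;\le\; \sum_{i=0}^{n-1} (n - i)
  \;=\; \sum_{j=1}^{n} j \;=\; \frac{n(n+1)}{2} \;=\; O(n^2).
```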
Going back to our analysis, note that the best case for quick-sort on a sequence of distinct elements occurs when subsequences L and G happen to have roughly the same size. That is, in the best case, we have s_0 = n, s_1 = n − 1, s_2 = n − 3, and, in general, s_i = n − (2^i − 1). Thus, in the best case, T has height O(log n) and quick-sort runs in O(n log n) time; we leave the justification of this fact as an exercise.

The informal intuition behind the expected behavior of quick-sort is that at each invocation the pivot will probably divide the input sequence about equally. Thus, we expect the average running time of quick-sort to be similar to the best-case running time, that is, O(n log n). We will see in the next section that introducing randomization makes quick-sort behave exactly in this way.

Randomized Quick-Sort

One common method for analyzing quick-sort is to assume that the pivot will always divide the sequence almost equally. We feel such an assumption would presuppose knowledge about the input distribution that is typically not available, however. For example, we would have to assume that we will rarely be given "almost" sorted sequences to sort, which are actually common in many applications. Fortunately, this assumption is not needed in order for us to match our intuition to quick-sort's behavior.

In general, we desire some way of getting close to the best-case running time for quick-sort. The way to get close to the best-case running time, of course, is for the pivot to divide the input sequence S almost equally. If this outcome were to occur, then it would result in a running time that is asymptotically the same as the best-case running time. That is, having pivots close to the "middle" of the set of elements leads to an O(n log n) running time for quick-sort.

Picking Pivots at Random

Since the goal of the partition step of the quick-sort method is to divide the sequence S almost equally, let us introduce randomization into the algorithm and pick as the pivot a random element of the input sequence. That is, instead of picking the pivot as the last element of S, we pick an element of S at random as the pivot, keeping the rest of the algorithm unchanged. This variation of quick-sort is called randomized quick-sort. The following proposition shows that the expected running time of randomized quick-sort on a sequence with n elements is O(n log n). This expectation is taken over all the possible random choices the algorithm makes, and is independent of any assumptions about the distribution of the possible input sequences the algorithm is likely to be given.

Proposition: The expected running time of randomized quick-sort on a sequence S of size n is O(n log n).

Justification: We assume two elements of S can be compared in O(1) time. Consider a single recursive call of randomized quick-sort, and let s denote the size of the input for this call. Say that this call is "good" if it partitions the input
such that subsequences L and G each have size at least s/4 and at most 3s/4; otherwise, a call is "bad."

Now, consider the implications of our choosing a pivot uniformly at random. Note that there are s/2 possible good choices for the pivot for any given call of size s of the randomized quick-sort algorithm. Thus, the probability that any call is good is 1/2. Note further that a good call will at least partition a list of size s into two lists of size s/4 and 3s/4, and a bad call could be as bad as producing a single call of size s − 1.

Now consider the recursion trace for randomized quick-sort. This trace defines a binary tree, T, such that each node in T corresponds to a different recursive call on a subproblem of sorting a portion of the original list. Say that a node v in T is in size group i if the size of v's subproblem is greater than (3/4)^(i+1) n and at most (3/4)^i n. Let us analyze the expected time spent working on all the subproblems for nodes in size group i. By the linearity of expectation, the expected time for working on all these subproblems is the sum of the expected times for each one. Some of these nodes correspond to good calls and some correspond to bad calls. But note that, since a good call occurs with probability 1/2, the expected number of consecutive calls we have to make before getting a good call is 2. Moreover, notice that as soon as we have a good call for a node in size group i, its children will be in size groups higher than i. Thus, for any element x from the input list, the expected number of nodes in size group i containing x in their subproblems is 2. In other words, the expected total size of all the subproblems in size group i is 2n. Since the nonrecursive work we perform for any subproblem is proportional to its size, this implies that the total expected time spent processing subproblems for nodes in size group i is O(n).

The number of size groups is log_{4/3} n, since repeatedly multiplying by 3/4 is the same as repeatedly dividing by 4/3. That is, the number of size groups is O(log n). Therefore, the total expected running time of randomized quick-sort is O(n log n). (See the accompanying figure.)

Figure: A visual time analysis of the quick-sort tree T. Each node is shown labeled with the size of its subproblem.
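In summary form, the accounting in the justification is:

```latex
% Each of the O(log n) size groups contributes O(n) expected work.
E\big[t(n)\big] \;\le\; \sum_{i=0}^{\lceil \log_{4/3} n \rceil} O(n)
  \;=\; O(n) \cdot O(\log n) \;=\; O(n \log n).
```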
In fact, the running time of randomized quick-sort is O(n log n) with high probability (see the exercises).

In-Place Quick-Sort

Recall that a sorting algorithm is in-place if it uses only a small amount of memory in addition to that needed for the objects being sorted themselves. The merge-sort algorithm, as we have described it above, is not in-place, and making it be in-place requires a more complicated merging method than the one we discussed earlier. In-place sorting is not inherently difficult, however. For, as with heap-sort, quick-sort can be adapted to be in-place.

Performing the quick-sort algorithm in-place requires a bit of ingenuity, however, for we must use the input sequence itself to store the subsequences for all the recursive calls. We show algorithm inPlaceQuickSort, which performs in-place quick-sort, in a code fragment below. Algorithm inPlaceQuickSort assumes that the input sequence, S, is given as an array of distinct elements. The reason for this restriction is explored in an exercise; the extension to the general case is discussed in another exercise.

Code Fragment: In-place quick-sort for an input array S.
In-place quick-sort modifies the input sequence using element swapping and does not explicitly create subsequences. Indeed, a subsequence of the input sequence is implicitly represented by a range of positions specified by a left-most index l and a right-most index r. The divide step is performed by scanning the array simultaneously from l forward and from r backward, swapping pairs of elements that are in reverse order, as shown in the accompanying figure. When these two indices "meet," subarrays L and G are on opposite sides of the meeting point. The algorithm completes by recurring on these two subarrays. In-place quick-sort reduces the running time caused by the creation of new sequences and the movement of elements between them by a constant factor. We show a Java version of in-place quick-sort in a code fragment below.

Figure: Divide step of in-place quick-sort. Index l scans the sequence from left to right, and index r scans the sequence from right to left. A swap is performed when l is at an element larger than the pivot and r is at an element smaller than the pivot. A final
swap with the pivot completes the divide step.

Code Fragment: A coding of in-place quick-sort, assuming distinct elements.
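A minimal sketch of the in-place scheme just described, assuming distinct elements as the text does (our own simplified variant of such a coding):

```java
public class InPlaceQuickSort {
    // Sort a[lo..hi] in place, pivoting on the last element; assumes distinct elements.
    public static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];
        int l = lo, r = hi - 1;
        while (l <= r) {
            while (l <= r && a[l] < pivot) l++;     // scan right past small elements
            while (r >= l && a[r] > pivot) r--;     // scan left past large elements
            if (l < r) { swap(a, l, r); l++; r--; } // swap a reversed pair
        }
        swap(a, l, hi);                             // final swap places the pivot
        quickSort(a, lo, l - 1);                    // recur on L
        quickSort(a, l + 1, hi);                    // recur on G
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}
```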
Dealing with the Recursion Depth

The implementation above is not truly in-place in one respect: we need space for a stack proportional to the depth of the recursion tree, which in this case can be as large as n − 1. Admittedly, the expected stack depth is O(log n), which is small compared to n. Nevertheless, a simple trick lets us guarantee the stack size is O(log n). The main idea is to design a nonrecursive version of in-place quick-sort using an explicit stack to iteratively process subproblems (each of which can be represented with a pair of indices marking subarray boundaries). Each iteration involves popping the top subproblem, splitting it in two (if it is big enough), and pushing the two new subproblems. The trick is that when pushing the new subproblems, we should first push the larger subproblem and then the smaller one. In this way, the sizes of the subproblems will at least double as we go down the stack; hence, the stack can have depth at most O(log n). We leave the details of this implementation to an exercise; a sketch appears below.
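Here is a minimal sketch of that trick (our own illustrative code), reusing an in-place partition step like the one above:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackQuickSort {
    // Iterative quick-sort whose explicit stack stays O(log n) deep.
    public static void quickSort(int[] a) {
        Deque<int[]> stack = new ArrayDeque<>();  // each entry is {lo, hi}
        stack.push(new int[]{0, a.length - 1});
        while (!stack.isEmpty()) {
            int[] range = stack.pop();
            int lo = range[0], hi = range[1];
            if (lo >= hi) continue;               // trivial subproblem
            int p = partition(a, lo, hi);         // index where the pivot lands
            // Push the larger subproblem first, so the smaller is processed next;
            // per the text, sizes at least double going down the stack.
            if (p - lo > hi - p) {
                stack.push(new int[]{lo, p - 1});
                stack.push(new int[]{p + 1, hi});
            } else {
                stack.push(new int[]{p + 1, hi});
                stack.push(new int[]{lo, p - 1});
            }
        }
    }

    // Partition a[lo..hi] around the last element; assumes distinct elements.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], l = lo, r = hi - 1;
        while (l <= r) {
            while (l <= r && a[l] < pivot) l++;
            while (r >= l && a[r] > pivot) r--;
            if (l < r) { int t = a[l]; a[l] = a[r]; a[r] = t; l++; r--; }
        }
        int t = a[l]; a[l] = a[hi]; a[hi] = t;    // place the pivot
        return l;
    }
}
```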
A Lower Bound on Sorting

Recapping our discussions on sorting to this point, we have described several methods with either a worst-case or expected running time of O(n log n) on an input sequence of size n. These methods include merge-sort and quick-sort, described in this chapter, as well as heap-sort. A natural question to ask, then, is whether it is possible to sort any faster than in O(n log n) time.

In this section, we show that if the computational primitive used by a sorting algorithm is the comparison of two elements, then this is the best we can do: comparison-based sorting has an Ω(n log n) worst-case lower bound on its running time. (Recall the Ω(·) notation.) To focus on the main cost of comparison-based sorting, let us only count the comparisons that a sorting algorithm performs. Since we want to derive a lower bound, this will be sufficient.

Suppose we are given a sequence S = (x0, x1, ..., x_(n−1)) that we wish to sort, and assume that all the elements of S are distinct (this is not really a restriction since we are deriving a lower bound). We do not care if S is implemented as an array or a linked list, for the sake of our lower bound, since we are only counting comparisons. Each time a sorting algorithm compares two elements xi and xj (that is, it asks, "is xi < xj?"), there are two outcomes: "yes" or "no." Based on the result of this comparison, the sorting algorithm may perform some internal calculations (which we are not counting here) and will eventually perform another comparison between two other elements of S, which again will have two outcomes. Therefore, we can represent a comparison-based sorting algorithm with a decision tree T. That is, each internal node v in T corresponds to a comparison and the edges from node v to its children correspond to the computations resulting from either a "yes" or "no" answer (see the accompanying figure).

It is important to note that the hypothetical sorting algorithm in question probably has no explicit knowledge of the tree T. We simply use T to represent all the possible sequences of comparisons that a sorting algorithm might make, starting from the first comparison (associated with the root) and ending with the last comparison (associated with the parent of an external node) just before the algorithm terminates its execution.

Each possible initial ordering, or permutation, of the elements in S will cause our hypothetical sorting algorithm to execute a series of comparisons, traversing a path in T from the root to some external node. Let us associate with each external node v in T, then, the set of permutations of S that cause our sorting algorithm to end up in v. The most important observation in our lower-bound argument is that each external node v in T can represent the sequence of comparisons for at most one permutation of S. The justification for this claim is simple: if two different permutations P1 and P2 of S are associated with the same external node, then there are at least two objects xi and xj such that xi is before xj in P1 but xi is after xj in P2. At the same time, the output associated with v must be a specific reordering of S, with either xi or xj appearing before the other. But if P1 and P2 both cause the sorting algorithm to produce this same output, then one of them would
trick the algorithm into outputting xi and xj in the wrong order. Since this cannot be allowed by a correct sorting algorithm, each external node of T must be associated with exactly one permutation of S. We use this property of the decision tree associated with a sorting algorithm to prove the following result:

Proposition: The running time of any comparison-based algorithm for sorting an n-element sequence is Ω(n log n) in the worst case.

Justification: The running time of a comparison-based sorting algorithm must be greater than or equal to the height of the decision tree T associated with this algorithm, as described above (see the accompanying figure). By the argument above, each external node in T must be associated with one permutation of S. Moreover, each permutation of S must result in a different external node of T. The number of permutations of n objects is n! = n(n−1)(n−2) ··· 2 · 1. Thus, T must have at least n! external nodes. By a proposition on binary trees, the height of T is at least log(n!). This immediately justifies the proposition, because there are at least n/2 terms that are greater than or equal to n/2 in the product n!; hence

    log(n!) ≥ log((n/2)^(n/2)) = (n/2) log(n/2),

which is Ω(n log n).

Figure: Visualizing the lower bound for comparison-based sorting.
Bucket-Sort and Radix-Sort

In the previous section, we showed that Ω(n log n) time is necessary, in the worst case, to sort an n-element sequence with a comparison-based sorting algorithm. A natural question to ask, then, is whether there are other kinds of sorting algorithms that can be designed to run asymptotically faster than O(n log n) time. Interestingly, such algorithms exist, but they require special assumptions about the input sequence to be sorted. Even so, such scenarios often arise in practice, so discussing them is worthwhile. In this section, we consider the problem of sorting a sequence of entries, each a key-value pair.

Bucket-Sort

Consider a sequence S of n entries whose keys are integers in the range [0, N−1], for some integer N ≥ 2, and suppose that S should be sorted according to the keys of the entries. In this case, it is possible to sort S in O(n + N) time. It might seem surprising, but this implies, for example, that if N is O(n), then we can sort S in O(n) time. Of course, the crucial point is that, because of the restrictive assumption about the format of the elements, we can avoid using comparisons.

The main idea is to use an algorithm called bucket-sort, which is not based on comparisons, but on using keys as indices into a bucket array B that has cells indexed from 0 to N−1. An entry with key k is placed in the "bucket" B[k], which itself is a sequence of entries with key k. After inserting each entry of the input
sequence into its bucket, we can put the entries back into S in sorted order by enumerating the contents of the buckets B[0], B[1], ..., B[N−1] in order. We describe the bucket-sort algorithm in a code fragment below.

Code Fragment: Bucket-sort.

It is easy to see that bucket-sort runs in O(n + N) time and uses O(n + N) space. Hence, bucket-sort is efficient when the range N of values for the keys is small compared to the sequence size n, say N = O(n) or N = O(n log n). Still, its performance deteriorates as N grows compared to n.

An important property of the bucket-sort algorithm is that it works correctly even if there are many different elements with the same key. Indeed, we described it in a way that anticipates such occurrences.

Stable Sorting

When sorting key-value pairs, an important issue is how equal keys are handled. Let S = ((k0, v0), ..., (k_(n−1), v_(n−1))) be a sequence of such entries. We say that a sorting algorithm is stable if, for any two entries (ki, vi) and (kj, vj) of S such that ki = kj and (ki, vi) precedes (kj, vj) in S before sorting (that is, i < j), entry (ki, vi) also precedes entry (kj, vj) after sorting. Stability is important for a sorting algorithm because applications may want to preserve the initial ordering of elements with the same key.

Our informal description of bucket-sort in the code fragment does not guarantee stability. This is not inherent in the bucket-sort method itself, however, for we can easily modify our description to make bucket-sort stable, while still preserving its O(n + N) running time. Indeed, we can obtain a stable bucket-sort algorithm by always removing the first element from sequence S and from the sequences B[i] during the execution of the algorithm; a sketch appears below.
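A minimal Java sketch of stable bucket-sort under the stated assumptions (integer keys in [0, N−1]); the Entry type is our own illustrative pairing:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BucketSortExample {
    public record Entry(int key, String value) {}  // illustrative key-value pair

    // Stably sort entries whose keys lie in [0, bigN - 1]; O(n + N) time and space.
    public static void bucketSort(Deque<Entry> s, int bigN) {
        @SuppressWarnings("unchecked")
        Deque<Entry>[] buckets = new Deque[bigN];
        for (int i = 0; i < bigN; i++) buckets[i] = new ArrayDeque<>();
        while (!s.isEmpty()) {                 // remove from the front: stability
            Entry e = s.removeFirst();
            buckets[e.key()].addLast(e);
        }
        for (int i = 0; i < bigN; i++)         // enumerate buckets in key order
            while (!buckets[i].isEmpty())
                s.addLast(buckets[i].removeFirst());
    }
}
```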
Radix-Sort

One of the reasons that stable sorting is so important is that it allows the bucket-sort approach to be applied to more general contexts than sorting integers. Suppose, for example, that we want to sort entries with keys that are pairs (k, l), where k and l are integers in the range [0, N−1], for some integer N ≥ 2. In a context such as this, it is natural to define an ordering on these keys using the lexicographical (dictionary) convention, where (k1, l1) < (k2, l2) if k1 < k2, or if k1 = k2 and l1 < l2. This is a pair-wise version of the lexicographic comparison function, usually applied to equal-length character strings (and it easily generalizes to d-tuples of numbers for d > 2).

The radix-sort algorithm sorts a sequence S of entries with keys that are pairs, by applying a stable bucket-sort on the sequence twice: first using one component of the pair as the ordering key and then using the second component. But which order is correct? Should we first sort on the k's (the first component) and then on the l's (the second component), or should it be the other way around? Before we answer this question, we consider the following example.

Example: Consider a sequence S of key pairs. If we sort S stably on the first component, and then stably sort the result using the second component, the final sequence is ordered by the second component only and is not, in general, a lexicographically sorted sequence. On the other hand, if we first stably sort S using the second component, and then stably sort the result using the first component, we obtain a sequence that is indeed lexicographically ordered.

So, from this example, we are led to believe that we should first sort using the second component and then again using the first component. This intuition is exactly right.
By first stably sorting by the second component and then again by the first component, we guarantee that if two entries are equal in the second sort (by the first component), then their relative order in the starting sequence (which is sorted by the second component) is preserved. Thus, the resulting sequence is guaranteed to be sorted lexicographically every time. We leave to a simple exercise the determination of how this approach can be extended to triples and other d-tuples of numbers. We can summarize this section as follows:

Proposition: Let S be a sequence of n key-value pairs, each of which has a key (k1, k2, ..., kd), where ki is an integer in the range [0, N−1], for some integer N ≥ 2. We can sort S lexicographically in time O(d(n + N)) using radix-sort.
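A minimal sketch of radix-sort for pairs, built from two stable bucket-sort passes, second component first; the Pair type is our own illustrative choice:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.ToIntFunction;

public class RadixSortExample {
    public record Pair(int k, int l) {}  // illustrative two-component key

    // Lexicographically sort pairs with components in [0, bigN - 1]: O(2(n + N)).
    public static void radixSort(Deque<Pair> s, int bigN) {
        stableBucketSort(s, bigN, Pair::l);  // first pass: second component
        stableBucketSort(s, bigN, Pair::k);  // second pass: first component
    }

    private static void stableBucketSort(Deque<Pair> s, int bigN,
                                         ToIntFunction<Pair> key) {
        @SuppressWarnings("unchecked")
        Deque<Pair>[] buckets = new Deque[bigN];
        for (int i = 0; i < bigN; i++) buckets[i] = new ArrayDeque<>();
        while (!s.isEmpty()) {                      // front-to-back: stability
            Pair p = s.removeFirst();
            buckets[key.applyAsInt(p)].addLast(p);
        }
        for (int i = 0; i < bigN; i++)
            while (!buckets[i].isEmpty()) s.addLast(buckets[i].removeFirst());
    }
}
```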
As important as it is, sorting is not the only interesting problem dealing with a total order relation on a set of elements. There are some applications, for example, that do not require an ordered listing of an entire set, but nevertheless call for some amount of ordering information about the set. Before we study such a problem (called "selection"), let us step back and briefly compare all of the sorting algorithms we have studied so far.

Comparison of Sorting Algorithms

At this point, it might be useful for us to take a breath and consider all the algorithms we have studied in this book to sort an n-element array list, node list, or general sequence. Considering running time and other factors, we have studied several methods, such as insertion-sort and selection-sort, that have O(n²)-time behavior in the average and worst case. We have also studied several methods with O(n log n)-time behavior, including heap-sort, merge-sort, and quick-sort. Finally, we have studied a special class of sorting algorithms, namely, the bucket-sort and radix-sort methods, that run in linear time for certain types of keys. Certainly, the selection-sort algorithm is a poor choice in any application, since it runs in O(n²) time even in the best case. But, of the remaining sorting algorithms, which is the best?

As with many things in life, there is no clear "best" sorting algorithm from the remaining candidates. The sorting algorithm best suited for a particular application depends on several properties of that application. We can offer some guidance and observations, therefore, based on the known properties of the "good" sorting algorithms.

Insertion-Sort

If implemented well, the running time of insertion-sort is O(n + m), where m is the number of inversions (that is, the number of pairs of elements out of order). Thus, insertion-sort is an excellent algorithm for sorting small sequences, because insertion-sort is simple to program, and small sequences necessarily have few inversions. Also, insertion-sort is quite effective for sorting sequences that are already "almost" sorted. By "almost," we mean that the number of inversions is small. But the O(n²)-time performance of insertion-sort makes it a poor choice outside of these special contexts.

Merge-Sort

Merge-sort, on the other hand, runs in O(n log n) time in the worst case, which is optimal for comparison-based sorting methods. Still, experimental studies have shown that, since it is difficult to make merge-sort run in-place, the overheads needed to implement merge-sort make it less attractive than the in-place implementations of heap-sort and quick-sort for sequences that can fit entirely in a computer's main memory. Even so, merge-sort is an excellent algorithm for situations where the input cannot all fit into main memory, but must be stored in blocks on an external memory device, such as a disk. In these contexts, the way that merge-sort processes runs of data in long merge streams makes the best use of all the data brought into main memory in a block from disk. Thus, for external memory sorting, the merge-sort algorithm tends to minimize the total number of disk reads and writes needed, which makes it superior in such contexts.

Quick-Sort

Experimental studies have shown that if an input sequence can fit entirely in main memory, then the in-place versions of quick-sort and heap-sort run faster than merge-sort. The extra overhead needed for copying nodes or entries puts merge-sort at a disadvantage to quick-sort and heap-sort in these applications. In fact, quick-sort tends, on average, to beat heap-sort in these tests. So, quick-sort is an excellent choice as a general-purpose, in-memory sorting algorithm. Indeed, it is included in the qsort sorting utility provided in C language libraries. Still, its O(n²)-time worst-case performance makes quick-sort a poor choice in real-time applications where we must make guarantees on the time needed to complete a sorting operation.

Heap-Sort

In real-time scenarios where we have a fixed amount of time to perform a sorting operation and the input data can fit into main memory, the heap-sort algorithm is probably the best choice. It runs in O(n log n) worst-case time and can easily be made to execute in-place.
Bucket-Sort and Radix-Sort

Finally, if our application involves sorting entries with small integer keys or d-tuples of small integer keys, then bucket-sort or radix-sort is an excellent choice, for it runs in O(d(n + N)) time, where [0, N−1] is the range of the integer keys (and d = 1 for bucket-sort). Thus, if d(n + N) is significantly "below" the n log n function, then this sorting method should run faster than even quick-sort or heap-sort.

Thus, our study of all these different sorting algorithms provides us with a versatile collection of sorting methods in our algorithm engineering "toolbox."

The Set ADT and Union/Find Structures

In this section, we introduce the set ADT. A set is a collection of distinct objects. That is, there are no duplicate elements in a set, and there is no explicit notion of keys or even an order. Even so, we include our discussion of sets here in a chapter on sorting, because sorting can play an important role in efficient implementations of the operations of the set ADT.

Sets and Some of Their Uses

First, we recall the mathematical definitions of the union, intersection, and subtraction of two sets A and B:

    A ∪ B = {x : x is in A or x is in B},
    A ∩ B = {x : x is in A and x is in B},
    A − B = {x : x is in A and x is not in B}.

Example: Most Internet search engines store, for each word x in their dictionary database, a set, W(x), of Web pages that contain x, where each Web page is identified by a unique Internet address. When presented with a query for a word x, such a search engine need only return the Web pages in the set W(x), sorted according to some proprietary priority ranking of page "importance." But when presented with a two-word query for words x and y, such a search engine must first compute the intersection W(x) ∩ W(y), and then return the Web pages in the resulting set sorted by priority. Several search engines use the set intersection algorithm described in this section for this computation.

Fundamental Methods of the Set ADT

The fundamental methods of the set ADT, acting on a set A, are as follows:

union(B): Replace A with the union of A and B; that is, execute A ← A ∪ B.

intersect(B): Replace A with the intersection of A and B; that is, execute A ← A ∩ B.
subtract(B): Replace A with the difference of A and B; that is, execute A ← A − B.

A Simple Set Implementation

One of the simplest ways of implementing a set is to store its elements in an ordered sequence. This implementation is included in several software libraries for generic data structures, for example. Therefore, let us consider implementing the set ADT with an ordered sequence (we consider other implementations in several exercises). Any consistent total order relation among the elements of the set can be used, provided the same order is used for all the sets.

We implement each of the three fundamental set operations using a generic version of the merge algorithm that takes, as input, two sorted sequences representing the input sets, and constructs a sequence representing the output set, be it the union, intersection, or subtraction of the input sets. Incidentally, we have defined these operations so that they modify the contents of the set A involved. Alternatively, we could have defined these methods so that they do not modify A but return a new set instead.

The generic merge algorithm iteratively examines and compares the current elements a and b of the input sequences A and B, respectively, and finds out whether a < b, a = b, or a > b. Then, based on the outcome of this comparison, it determines whether it should copy one of the elements a and b to the end of the output sequence C. This determination is made based on the particular operation we are performing, be it a union, intersection, or subtraction. For example, in a union operation, we proceed as follows:

- If a < b, we copy a to the end of C and advance to the next element of A.
- If a = b, we copy a to the end of C and advance to the next elements of A and B.
- If a > b, we copy b to the end of C and advance to the next element of B.

Performance of Generic Merging

Let us analyze the running time of generic merging. At each iteration, we compare two elements of the input sequences A and B, possibly copy one element to the output sequence, and advance the current element of A, B, or both. Assuming that comparing and copying elements takes O(1) time, the total running time is O(nA + nB), where nA is the size of A and nB is the size of B; that is, letting n denote the sum nA + nB, generic merging takes O(n) time. Thus, we
have the following:

Proposition: The set ADT can be implemented with an ordered sequence and a generic merge scheme that supports operations union, intersect, and subtract in O(n) time, where n denotes the sum of the sizes of the sets involved.

Generic Merging as a Template Method Pattern

The generic merge algorithm is based on the template method pattern. The template method pattern is a software engineering design pattern describing a generic computation mechanism that can be specialized by redefining certain steps. In this case, we describe a method that merges two sequences into one and can be specialized by the behavior of three abstract methods. A code fragment below shows the class Merge providing a Java implementation of the generic merge algorithm.

Code Fragment: Class Merge for generic merging.
To convert the generic Merge class into useful classes, we must extend it with classes that redefine the three auxiliary methods, aIsLess, bothAreEqual, and bIsLess. We show how union, intersection, and subtraction can be easily described in terms of these methods in a code fragment below. The auxiliary methods are redefined so that the template method merge performs as follows:
- In class UnionMerge, merge copies every element from A and B into C, but does not duplicate any element.
- In class IntersectMerge, merge copies every element that is in both A and B into C, but "throws away" elements in one set but not in the other.
- In class SubtractMerge, merge copies every element that is in A and not in B into C.

Code Fragment: Classes extending the Merge class by specializing the auxiliary methods to perform set union, intersection, and subtraction, respectively.
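A minimal reconstruction of the template-method hierarchy described above (the book's Merge class operates on its own sequence types; the List-based shape here is ours):

```java
import java.util.List;

public abstract class Merge<E extends Comparable<E>> {
    // Template method: merge sorted lists a and b into c, delegating the
    // copy-or-skip decision to the three abstract hooks below.
    public void merge(List<E> a, List<E> b, List<E> c) {
        int i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            int cmp = a.get(i).compareTo(b.get(j));
            if (cmp < 0) aIsLess(a.get(i++), c);
            else if (cmp == 0) bothAreEqual(a.get(i++), b.get(j++), c);
            else bIsLess(b.get(j++), c);
        }
        while (i < a.size()) aIsLess(a.get(i++), c);   // remainder of a
        while (j < b.size()) bIsLess(b.get(j++), c);   // remainder of b
    }
    protected abstract void aIsLess(E a, List<E> c);
    protected abstract void bothAreEqual(E a, E b, List<E> c);
    protected abstract void bIsLess(E b, List<E> c);
}

class UnionMerge<E extends Comparable<E>> extends Merge<E> {
    protected void aIsLess(E a, List<E> c) { c.add(a); }
    protected void bothAreEqual(E a, E b, List<E> c) { c.add(a); } // no duplicate
    protected void bIsLess(E b, List<E> c) { c.add(b); }
}

class IntersectMerge<E extends Comparable<E>> extends Merge<E> {
    protected void aIsLess(E a, List<E> c) { }            // throw away
    protected void bothAreEqual(E a, E b, List<E> c) { c.add(a); }
    protected void bIsLess(E b, List<E> c) { }            // throw away
}

class SubtractMerge<E extends Comparable<E>> extends Merge<E> {
    protected void aIsLess(E a, List<E> c) { c.add(a); }  // in A only: keep
    protected void bothAreEqual(E a, E b, List<E> c) { }  // in both: drop
    protected void bIsLess(E b, List<E> c) { }            // in B only: drop
}
```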
Partitions with Union-Find Operations

A partition is a collection of disjoint sets. We define the methods of the partition ADT using position objects, each of which stores an element x. The partition ADT supports the following methods:

makeSet(x): Create a singleton set containing the element x and return the position storing x in this set.

union(A, B): Return the set A ∪ B, destroying the old A and B.

find(p): Return the set containing the element in position p.

A simple implementation of a partition with a total of n elements is with a collection of sequences, one for each set, where the sequence for a set A stores set positions as its elements. Each position object stores a variable, element, which references its associated element x and allows the execution of the element() method in O(1) time. In addition, we also store a variable, set, in each position, which references the sequence storing p, since this sequence is representing the set containing p's element (see the accompanying figure).

Thus, we can perform operation find(p) in O(1) time, by following the set reference for p. Likewise, makeSet also takes O(1) time. Operation union(A, B) requires that we join two sequences into one and update the set references of the positions in one of the two. We choose to implement this operation by removing all the positions from the sequence with smaller size, and inserting them in the sequence with larger size. Each time we take a position p from the smaller set s and insert it into the larger set t, we update the set reference for p to now point to t. Hence, the operation union(A, B) takes time O(min(|A|, |B|)), which is O(n), because, in the worst case, |A| = |B| = n/2. Nevertheless, as shown below, an amortized analysis shows this implementation to be much better than appears from this worst-case analysis.

Figure: Sequence-based implementation of a partition consisting of three sets.
The sequence implementation above is simple, but it is also efficient, as the following theorem shows.

Proposition: Performing a series of n makeSet, union, and find operations, using the sequence-based implementation above, starting from an initially empty partition, takes O(n log n) time.

Justification: We use the accounting method and assume that one cyber-dollar can pay for the time to perform a find operation, a makeSet operation, or the movement of a position object from one sequence to another in a union operation. In the case of a find or makeSet operation, we charge the operation itself 1 cyber-dollar. In the case of a union operation, however, we charge 1 cyber-dollar to each position that we move from one set to another. Note that we charge nothing to the union operations themselves. Clearly, the total charges to find and makeSet operations sum to O(n).

Consider, then, the number of charges made to positions on behalf of union operations. The important observation is that each time we move a position from one set to another, the size of the new set at least doubles. Thus, each position is moved from one set to another at most log n times; hence, each position can be charged at most O(log n) times. Since we assume that the partition is initially empty, there are O(n) different elements referenced in the given series of operations, which implies that the total time for all the union operations is O(n log n).

The amortized running time of an operation in a series of makeSet, union, and find operations is the total time taken for the series divided by the number of operations. We conclude from the proposition above that, for a partition implemented using sequences, the amortized running time of each operation is O(log n). Thus, we can summarize the performance of our simple sequence-based
partition implementation as follows:

Proposition: Using a sequence-based implementation of a partition, in a series of n makeSet, union, and find operations starting from an initially empty partition, the amortized running time of each operation is O(log n).

Note that in this sequence-based implementation of a partition, each find operation takes worst-case O(1) time. It is the running time of the union operations that is the computational bottleneck. In the next section, we describe a tree-based implementation of a partition that does not guarantee constant-time find operations, but has amortized time much better than O(log n) per union operation.

A Tree-Based Partition Implementation

An alternative data structure for a partition uses a collection of trees to store the n elements in sets, where each tree is associated with a different set (see the accompanying figure). In particular, we implement each tree with a linked data structure whose nodes are themselves the set position objects. We still view each position p as being a node having a variable, element, referring to its element x, and a variable, set, referring to a set containing x, as before. But now we also view each position p as being of the "set" data type. Thus, the set reference of each position can point to a position, which could even be p itself. Moreover, we implement this approach so that all the positions and their respective set references together define a collection of trees.

We associate each tree with a set. For any position p, if p's set reference points back to p, then p is the root of its tree, and the name of the set containing p is "p" (that is, we will be using position names as set names in this case). Otherwise, the set reference for p points to p's parent in its tree. In either case, the set containing p is the one associated with the root of the tree containing p.

Figure: Tree-based implementation of a partition consisting of three disjoint sets.
With this partition data structure, operation union(A, B) is called with position arguments p and q that respectively represent the sets A and B (that is, A = "p" and B = "q"). We perform this operation by making one of the trees a subtree of the other (see the accompanying figure), which can be done in O(1) time by setting the set reference of the root of one tree to point to the root of the other tree. Operation find for a position p is performed by walking up to the root of the tree containing the position p (see the accompanying figure), which takes O(n) time in the worst case.

Note that this representation of a tree is a specialized data structure used to implement a partition, and is not meant to be a realization of the tree abstract data type. Indeed, the representation has only "upward" links, and does not provide a way to access the children of a given node.

Figure: Tree-based implementation of a partition: (a) operation union(p, q); (b) operation find(p), where p denotes a position object.
At first, this implementation may seem to be no better than the sequence-based data structure, but we add the following simple heuristics to make it run faster:

Union-by-Size: Store, with each position node p, the size of the subtree rooted at p. In a union operation, make the tree of the smaller set become a subtree of the other tree, and update the size field of the root of the resulting tree.

Path Compression: In a find operation, for each node v that the find visits, reset the parent pointer from v to point to the root (see the accompanying figure).

These heuristics increase the running time of an operation by a constant factor, but as we discuss below, they significantly improve the amortized running time. A sketch of the resulting structure appears after the figure.

Figure: Path-compression heuristic: (a) path traversed by operation find on an element; (b) restructured tree.
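A minimal array-based sketch of a partition with both heuristics; the book's version stores position objects, while the integer-indexed form here is our own simplification:

```java
public class UnionFind {
    private final int[] parent;  // parent[i] == i means i is a root
    private final int[] size;    // size[i]: subtree size, valid for roots

    public UnionFind(int n) {            // makeSet for elements 0..n-1
        parent = new int[n];
        size = new int[n];
        for (int i = 0; i < n; i++) { parent[i] = i; size[i] = 1; }
    }

    public int find(int p) {             // with path compression
        int root = p;
        while (parent[root] != root) root = parent[root];
        while (parent[p] != root) {      // second pass: repoint to the root
            int next = parent[p];
            parent[p] = root;
            p = next;
        }
        return root;
    }

    public void union(int p, int q) {    // union-by-size
        int a = find(p), b = find(q);
        if (a == b) return;
        if (size[a] < size[b]) { int t = a; a = b; b = t; }
        parent[b] = a;                   // smaller tree becomes a subtree
        size[a] += size[b];
    }
}
```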
A surprising property of the tree-based partition data structure, when implemented using the union-by-size and path-compression heuristics, is that performing a series of n union and find operations takes O(n log* n) time, where log* n is the log-star function, which is the inverse of the tower-of-twos function. Intuitively, log* n is the number of times that one can iteratively take the logarithm (base 2) of a number before getting a number smaller than 2. The table below shows a few sample values.

Table: Some values of log* n and critical values for its inverse:

    log* n       1    2    3      4          5
    minimum n    2    4    16    65536    2^65536

As is demonstrated in the table, for all practical purposes, log* n ≤ 5. It is an amazingly slow-growing function (but one that is growing nonetheless).

In fact, the running time of a series of n partition operations implemented as above can actually be shown to be O(n α(n)), where α(n) is the inverse of the Ackermann function, A, which grows asymptotically even slower than log* n. Although we will not prove this fact, let us define the Ackermann function here, so as to appreciate just how quickly it grows; hence, how slowly its inverse grows. We first define an indexed Ackermann function, Ai, as follows:

    A1(n) = 2n,              for n ≥ 1
    Ai(1) = Ai−1(2),         for i ≥ 2
    Ai(n) = Ai−1(Ai(n−1)),   for i ≥ 2 and n ≥ 2.

In other words, the Ackermann functions define a progression of functions: A1 is the multiply-by-two function, A2 is the power-of-two function, A3 (= 2^2^···^2, with n 2's) is the tower-of-twos function, and so on. We then define the Ackermann function as A(n) = An(2), which is an incredibly fast-growing function. Likewise, the inverse Ackermann function,

    α(n) = min{m : A(m) ≥ n},

is an incredibly slow-growing function. It grows much slower than the log* n function (which is the inverse of A3), for example, and we have already noted that log* n is a very slow-growing function.

Selection

There are a number of applications in which we are interested in identifying a single element in terms of its rank relative to an ordering of the entire set. Examples include identifying the minimum and maximum elements, but we may also be interested in, say, identifying the median element, that is, the element such that half of the other elements are smaller and the remaining half are larger. In general, queries that ask for an element with a given rank are called order statistics.

Defining the Selection Problem

In this section, we discuss the general order-statistic problem of selecting the kth smallest element from an unsorted collection of n comparable elements. This is known as the selection problem. Of course, we can solve this problem by sorting the collection and then indexing into the sorted sequence at index k − 1. Using the best comparison-based sorting algorithms, this approach would take O(n log n) time, which is obviously an overkill for the cases where k = 1 or k = n, because we can easily solve the selection problem for these values of k in O(n) time. Thus, a natural question to ask is whether we can achieve an O(n) running time for all values of k (including the interesting case of finding the median, where k = ⌊n/2⌋).

Prune-and-Search

Indeed, we can solve the selection problem
in O(n) time for any value of k. Moreover, the technique we use to achieve this result involves an interesting algorithmic design pattern. This design pattern is known as prune-and-search or decrease-and-conquer. In applying this design pattern, we solve a given problem that is defined on a collection of n objects by pruning away a fraction of the n objects and recursively solving the smaller problem. When we have finally reduced the problem to one defined on a constant-sized collection of objects, then we solve the problem using some brute-force method. Returning back from all the recursive calls completes the construction. In some cases, we can avoid using recursion, in which case we simply iterate the prune-and-search reduction step until we can apply a brute-force method and stop. Incidentally, the binary search method described earlier is an example of the prune-and-search design pattern.

Randomized Quick-Select

In applying the prune-and-search pattern to the selection problem, we can design a simple and practical method, called randomized quick-select, for finding the kth smallest element in an unordered sequence of n elements on which a total order relation is defined. Randomized quick-select runs in O(n) expected time, taken over all possible random choices made by the algorithm, and this expectation does not depend whatsoever on any randomness assumptions about the input distribution. We note though that randomized quick-select runs in O(n²) time in the worst case, the justification of which is left as an exercise. We also provide an exercise for modifying randomized quick-select to get a deterministic
selection algorithm that runs in O(n) worst-case time. The existence of this deterministic algorithm is mostly of theoretical interest, however, since the constant factor hidden by the big-Oh notation is relatively large in this case.

Suppose we are given an unsorted sequence S of n comparable elements together with an integer k in [1, n]. At a high level, the quick-select algorithm for finding the kth smallest element in S is similar in structure to the randomized quick-sort algorithm described earlier. We pick an element x from S at random and use this as a "pivot" to subdivide S into three subsequences L, E, and G, storing the elements of S less than x, equal to x, and greater than x, respectively. This is the prune step. Then, based on the value of k, we determine which of these sets to recur on. Randomized quick-select is described in the code fragment below.

Code Fragment: Randomized quick-select algorithm.

    Algorithm quickSelect(S, k):
        Input: Sequence S of n comparable elements, and an integer k in [1, n]
        Output: The kth smallest element of S
        if n = 1 then
            return the (first) element of S
        pick a random (pivot) element x of S and divide S into three sequences:
            - L, storing the elements in S less than x
            - E, storing the elements in S equal to x
            - G, storing the elements in S greater than x
        if k ≤ |L| then
            return quickSelect(L, k)
        else if k ≤ |L| + |E| then
            return x                           {each element in E is equal to x}
        else
            return quickSelect(G, k − |L| − |E|)   {note the new selection parameter}
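A runnable Java rendering of this pseudo-code (a sketch; an in-place, array-based variant would avoid building the three lists):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class QuickSelect {
    private static final Random RAND = new Random();

    // Return the kth smallest element of s (k is 1-based); expected O(n) time.
    public static int quickSelect(List<Integer> s, int k) {
        if (s.size() == 1) return s.get(0);
        int pivot = s.get(RAND.nextInt(s.size()));      // random pivot
        List<Integer> l = new ArrayList<>();            // less than pivot
        List<Integer> e = new ArrayList<>();            // equal to pivot
        List<Integer> g = new ArrayList<>();            // greater than pivot
        for (int x : s) {
            if (x < pivot) l.add(x);
            else if (x == pivot) e.add(x);
            else g.add(x);
        }
        if (k <= l.size()) return quickSelect(l, k);           // prune E and G
        else if (k <= l.size() + e.size()) return pivot;       // found it
        else return quickSelect(g, k - l.size() - e.size());   // prune L and E
    }
}
```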
Analyzing Randomized Quick-Select

Showing that randomized quick-select runs in O(n) expected time requires a simple probabilistic argument. The argument is based on the linearity of expectation, which states that if X and Y are random variables and c is a number, then E(X + Y) = E(X) + E(Y) and E(cX) = cE(X), where we use E(Z) to denote the expected value of the expression Z.

Let t(n) be the running time of randomized quick-select on a sequence of size n. Since this algorithm depends on random events, its running time, t(n), is a random variable. We want to bound E(t(n)), the expected value of t(n). Say that a recursive invocation of our algorithm is "good" if it partitions S so that the size of L and G is each at most 3n/4. Clearly, a recursive call is good with probability 1/2. Let g(n) denote the number of consecutive recursive calls we make, including the present one, before we get a good one. Then we can characterize t(n) using the following recurrence equation:

    t(n) ≤ bn · g(n) + t(3n/4),

where b > 0 is a constant. Applying the linearity of expectation for n > 1, we get

    E(t(n)) ≤ E(bn · g(n) + t(3n/4)) = bn · E(g(n)) + E(t(3n/4)).

Since a recursive call is good with probability 1/2, and whether a recursive call is good or not is independent of its parent call being good, the expected value of g(n) is the same as the expected number of times we must flip a fair coin before it comes up "heads." That is, E(g(n)) = 2. Thus, if we let T(n) be shorthand for E(t(n)), then we can write the case for n > 1 as

    T(n) ≤ T(3n/4) + 2bn.

To convert this relation into a closed form, let us iteratively apply this inequality assuming n is large. So, for example, after two applications,

    T(n) ≤ T((3/4)²n) + 2b(3/4)n + 2bn.

At this point, we should see that the general case is

    T(n) ≤ 2bn · Σ over i of (3/4)^i.

In other words, the expected running time is at most 2bn times a geometric sum whose base is a positive number less than 1. Thus, by the formula for geometric sums, T(n) is O(n).

Proposition: The expected running time of randomized quick-select on a sequence S of size n is O(n), assuming two elements of S can be compared in O(1) time.

Exercises

For source code and help with exercises, please visit java.datastructures.net.

Reinforcement

R-: Suppose S is a list of n bits, that is, n 0's and 1's. How long will it take to sort S with the merge-sort algorithm? What about quick-sort?

R-: Suppose S is a list of n bits, that is, n 0's and 1's. How long will it take to sort S stably with the bucket-sort algorithm?

R-: Give a complete justification of the proposition on the height of the merge-sort tree.
drawn as arrows what is the meaning of downward arrowhow about an upward arrowr- give complete pseudo-code description of the recursive merge-sort algorithm that takes an array as its input and output - show that the running time of the merge-sort algorithm on an -element sequence is (nlogn)even when is not power of - suppose we are given two -element sorted sequences and that should not be viewed as sets (that isa and may contain duplicate entriesdescribe an ( )-time method for computing sequence representing the set (with no duplicatesr- show that ( ( bx ( )for any three sets xaand - suppose we modify the deterministic version of the quick-sort algorithm so thatinstead of selecting the last element in an -element sequence as the pivotwe choose the element at index ln/ what is the running time of this version of quick-sort on sequence that is already sortedr- consider again the modification of the deterministic version of the quicksort algorithm so thatinstead of selecting the last element in an -element sequence as the pivotwe choose the element at index ln/ describe the kind of sequence that would cause this version of quick-sort to run in ( time - show that the best-case running time of quick-sort on sequence of size with distinct elements is (nlognr- describe randomized version of in-place quick-sort in pseudo-code -
R-: Show that the probability that a given input element x belongs to more than 2 log n subproblems in size group i, for randomized quick-sort, is at most 1/n².

R-: Suppose algorithm inPlaceQuickSort (presented earlier in this chapter) is executed on a sequence with duplicate elements. Show that the algorithm still correctly sorts the input sequence, but the result of the divide step may differ from the high-level description given earlier and may result in inefficiencies. In particular, what happens in the partition step when there are elements equal to the pivot? Is the sequence E (storing the elements equal to the pivot) actually computed? Does the algorithm recur on the subsequences L and G, or on some other subsequences? What is the running time of the algorithm if all the input elements are equal?

R-: Of the n! possible inputs to a given comparison-based sorting algorithm, what is the absolute maximum number of inputs that could be sorted with just n comparisons?

R-: Jonathan has a comparison-based sorting algorithm that sorts the first k elements in a sequence of size n in O(n) time. Give a big-Oh characterization of the biggest that k can be.

R-: Is the merge-sort algorithm described earlier in this chapter stable? Why or why not?

R-: An algorithm that sorts key-value entries by key is said to be straggling if, any time two entries e_i and e_j have equal keys, but e_i appears before e_j in the input, then the algorithm places e_i after e_j in the output. Describe a change to the merge-sort algorithm to make it straggling.

R-: Describe a radix-sort method for lexicographically sorting a sequence S of triplets (k, l, m), where k, l, and m are integers in the range [0, N − 1], for some N ≥ 2. How could this scheme be extended to sequences of d-tuples (k1, k2, ..., kd), where each ki is an integer in the range [0, N − 1]?

R-: Is the bucket-sort algorithm in-place? Why or why not?
R-: Give an example input list that requires merge-sort and heap-sort to take O(n log n) time to sort, but insertion-sort runs in O(n) time. What if you reverse this list?

R-: Describe, in pseudo-code, how to perform path compression on a path of length h in O(h) time in a tree-based partition union/find structure.

R-: George claims he has a fast way to do path compression in a partition structure, starting at a node v. He puts v into a list L, and starts following parent pointers. Each time he encounters a new node, u, he adds u to L and updates the parent pointer of each node in L to point to u's parent. Show that George's algorithm runs in Ω(h²) time on a path of length h.

R-: Describe an in-place version of the quick-select algorithm in pseudo-code.

R-: Show that the worst-case running time of quick-select on an n-element sequence is Ω(n²).

Creativity

C-: Linda claims to have an algorithm that takes an input sequence S and produces an output sequence T that is a sorting of the n elements in S.
a. Give an algorithm, isSorted, for testing in O(n) time if T is sorted.
b. Explain why the algorithm isSorted is not sufficient to prove a particular output T to Linda's algorithm is a sorting of S.
c. Describe what additional information Linda's algorithm could output so that her algorithm's correctness could be established on any given S and T in O(n) time.

C-: Given two sets A and B represented as sorted sequences, describe an efficient algorithm for computing A ⊕ B, which is the set of elements that are in A or B, but not in both.

C-: Suppose that we represent sets with balanced search trees. Describe and analyze algorithms for each of the methods in the set ADT, assuming that one of the two sets is much smaller than the other.

C-: Describe and analyze an efficient method for removing all duplicates from a collection A of n elements.

C-: Consider sets whose elements are integers in the range [0, N − 1]. A popular scheme for representing a set A of this type is by means of a Boolean array, B, where we say that x is in A if and only if B[x] = true. Since each cell of B can be represented with a single bit, B is sometimes referred to as a bit vector. Describe and analyze efficient algorithms for performing the methods of the set ADT assuming this representation.

C-: Consider a version of deterministic quick-sort where we pick as our pivot the median of the d last elements in the input sequence of n elements, for a fixed, constant odd number d ≥ 3. Argue informally why this should be a good choice for the pivot. What is the asymptotic worst-case running time of quick-sort in this case, in terms of n and d?

C-: Another way to analyze randomized quick-sort is to use a recurrence equation. In this case, we let T(n) denote the expected running time of randomized quick-sort, and we observe that, because of the worst-case partitions for good and bad splits, we can write

   T(n) ≤ (1/2)(T(3n/4) + T(n/4)) + (1/2)(T(n)) + bn,
where bn is the time needed to partition the input and combine the result sublists after the recursive calls return. Show, by induction, that T(n) is O(n log n).

C-: Modify algorithm inPlaceQuickSort (presented earlier in this chapter) to handle the general case efficiently when the input sequence, S, may have duplicate keys.

C-: Describe a nonrecursive, in-place version of the quick-sort algorithm. The algorithm should still be based on the same divide-and-conquer approach, but use an explicit stack to process subproblems. Your algorithm should also guarantee the stack depth is at most O(log n).

C-: Show that randomized quick-sort runs in O(n log n) time with probability at least 1 − 1/n, that is, with high probability, by answering the following:
a. For each input element x, define C_{i,j}(x) to be a 0/1 random variable that is 1 if and only if element x is in j + 1 subproblems that belong to size group i. Argue why we need not define C_{i,j} for j ≥ n.
b. Let X_{i,j} be a 0/1 random variable that is 1 with probability 1/2^j, independent of any other events, and let L = ⌈log_{4/3} n⌉. Argue why the sum of the C_{i,j}(x) values, over all size groups i and all j, is at most the corresponding sum of the X_{i,j} values.
c. Show that the expected value of the sum of the X_{i,j} values is at most 2L.
d. Show that the probability that this sum exceeds 4L is at most 1/n², using the Chernoff bound that states that if X is the sum of a finite number of independent 0/1 random variables with expected value μ > 0, then Pr(X > 2μ) < (4/e)^{−μ}, where e = 2.71828...
e. Argue why the above parts prove that randomized quick-sort runs in O(n log n) time with probability at least 1 − 1/n.

C-: Given an array A of n entries with keys equal to 0 or 1, describe an in-place method for ordering A so that all the 0's are before every 1.

C-: Suppose we are given an n-element sequence S such that each element in S represents a different vote for president, where each vote is given as an integer representing a particular candidate. Design an O(n log n)-time algorithm to see who wins the election S represents, assuming the candidate with the most votes wins (even if there are O(n) candidates).

C-: Consider the voting problem from the previous exercise, but now suppose that we know the number k < n of candidates running. Describe an O(n log k)-time algorithm for determining who wins the election.

C-: Consider the voting problem again, but now suppose a candidate wins only if he or she gets a majority of the votes cast. Design and analyze a fast algorithm for determining the winner if there is one.

C-: Show that any comparison-based sorting algorithm can be made to be stable without affecting its asymptotic running time.

C-: Suppose we are given two sequences A and B of n elements, possibly containing duplicates, on which a total order relation is defined. Describe an efficient algorithm for determining if A and B contain the same set of elements. What is the running time of this method?

C-: Given an array A of n integers in the range [0, n² − 1], describe a simple method for sorting A in O(n) time.

C-: Let S1, S2, ..., Sk be k different sequences whose elements have integer keys in the range [0, N − 1], for some parameter N ≥ 2. Describe an algorithm running in O(n + N) time for sorting all the sequences, where n denotes the
total size of all the sequences.

C-: Given a sequence S of n elements, on which a total order relation is defined, describe an efficient method for determining whether there are two equal elements in S. What is the running time of your method?

C-: Let S be a sequence of n elements on which a total order relation is defined. Recall that an inversion in S is a pair of elements x and y such that x appears before y in S but x > y. Describe an algorithm running in O(n log n) time for determining the number of inversions in S.

C-: Let S be a sequence of n integers. Describe a method for printing out all the pairs of inversions in S in O(n + k) time, where k is the number of such inversions.

C-: Let S be a random permutation of n distinct integers. Argue that the expected running time of insertion-sort on S is Ω(n²). (Hint: note that half of the elements ranked in the top half of a sorted version of S are expected to be in the first half of S.)

C-: Let A and B be two sequences of n integers each. Given an integer m, describe an O(n log n)-time algorithm for determining if there is an integer a in A and an integer b in B such that m = a + b.

C-: Given a set of n integers, describe and analyze a fast method for finding the ⌈log n⌉ integers closest to the median.

C-: Bob has a set A of n nuts and a set B of n bolts, such that each nut in A has a unique matching bolt in B. Unfortunately, the nuts in A all look the same, and the bolts in B all look the same as well. The only kind of comparison that Bob can make is to take a nut-bolt pair (a, b), such that a is in A and b is in B, and test it to see if the threads of a are larger, smaller, or a perfect match with the
threads of b. Describe and analyze an efficient algorithm for Bob to match up all of his nuts and bolts.

C-: Show how to use a deterministic O(n)-time selection algorithm to sort a sequence of n elements in O(n log n) worst-case time.

C-: Given an unsorted sequence S of n comparable elements, and an integer k, give an O(n log k) expected-time algorithm for finding the O(k) elements that have rank ⌈n/k⌉, 2⌈n/k⌉, 3⌈n/k⌉, and so on.

C-: Let S be a sequence of n insert and removeMin operations, where all the keys involved are integers in the range [0, n − 1]. Describe an algorithm running in O(n log n) time for determining the answer to each removeMin.

C-: Space aliens have given us a program, alienSplit, that can take a sequence S of n integers and partition S in O(n) time into sequences S1, S2, ..., Sk of size at most ⌈n/k⌉ each, such that the elements in Si are less than or equal to every element in Si+1, for i = 1, 2, ..., k − 1, for a fixed number, k < n. Show how to use alienSplit to sort S in O(n log n / log k) time.

C-: Karen has a new way to do path compression in a tree-based union/find partition data structure, starting at a node v. She puts all the nodes that are on the path from v to the root in a set S. Then she scans through S and sets the parent pointer of each node in S to its parent's parent pointer (recall that the parent pointer of the root points to itself). If this pass changed the value of any node's parent pointer, then she repeats this process, and goes on repeating this process until she makes a scan through S that does not change any node's parent value. Show that Karen's algorithm is correct and analyze its running time for a path of length h.

C-: This problem deals with a modification of the quick-select algorithm to make it deterministic, yet still run in O(n) time on an n-element sequence. The idea is to modify the way we choose the pivot so that it is chosen deterministically, not randomly, as follows:
Partition the set S into ⌈n/5⌉ groups of size 5 each (except possibly for one group). Sort each little set and identify the median element in this set. From this set of ⌈n/5⌉ "baby" medians, apply the selection algorithm recursively to find the median of the baby medians. Use this element as the pivot and proceed as in the quick-select algorithm.

Show that this deterministic method runs in O(n) time by answering the following questions (please ignore floor and ceiling functions if that simplifies the mathematics, for the asymptotics are the same either way):
a. How many baby medians are less than or equal to the chosen pivot? How many are greater than or equal to the pivot?
b. For each baby median less than or equal to the pivot, how many other elements are less than or equal to the pivot? Is the same true for those greater than or equal to the pivot?
c. Argue why the method for finding the deterministic pivot and using it to partition S takes O(n) time.
d. Based on these estimates, write a recurrence equation to bound the worst-case running time t(n) for this selection algorithm. (Note that in the worst case there are two recursive calls: one to find the median of the baby medians and one to recur on the larger of L and G.)
e. Using this recurrence equation, show by induction that t(n) is O(n).

Projects

P-: Experimentally compare the performance of in-place quick-sort and a version of quick-sort that is not in-place.
P-: Design and implement a stable version of the bucket-sort algorithm for sorting a sequence of n elements with integer keys taken from the range [0, N − 1], for N ≥ 2. The algorithm should run in O(n + N) time.

P-: Implement merge-sort and deterministic quick-sort and perform a series of benchmarking tests to see which one is faster. Your tests should include sequences that are "random" as well as "almost" sorted.

P-: Implement deterministic and randomized versions of the quick-sort algorithm and perform a series of benchmarking tests to see which one is faster. Your tests should include sequences that are very "random" looking as well as ones that are "almost" sorted.

P-: Implement an in-place version of insertion-sort and an in-place version of quick-sort. Perform benchmarking tests to determine the range of values of n where quick-sort is on average better than insertion-sort.

P-: Design and implement an animation for one of the sorting algorithms described in this chapter. Your animation should illustrate the key properties of this algorithm in an intuitive manner.

P-: Implement the randomized quick-sort and quick-select algorithms, and design a series of experiments to test their relative speeds.

P-: Implement an extended set ADT that includes the methods union(B), intersect(B), subtract(B), size(), isEmpty(), plus the methods equals(B), contains(e), insert(e), and remove(e), with obvious meaning.

P-: Implement the tree-based union/find partition data structure with both the union-by-size and path-compression heuristics.

Chapter Notes
Knuth's classic text on sorting and searching [ ] contains an encyclopedic treatment of the sorting problem and algorithms for solving it. Huang and Langston [ ] describe how to merge two sorted lists in-place in linear time. Our set ADT is derived from the set ADT of Aho, Hopcroft, and Ullman [ ]. The standard quick-sort algorithm is due to Hoare [ ]. More information about randomization, including Chernoff bounds, can be found in the appendix and the book by Motwani and Raghavan [ ]. The quick-sort analysis given in this chapter is a combination of an analysis given in a previous edition of this book and the analysis of Kleinberg and Tardos [ ]. The quick-sort analysis in the exercise on high-probability bounds is due to Littman. Gonnet and Baeza-Yates [ ] provide experimental comparisons and theoretical analyses of a number of different sorting algorithms. The term "prune-and-search" comes originally from the computational geometry literature (such as in the work of Clarkson [ ] and Megiddo [ ]). The term "decrease-and-conquer" is from Levitin [ ].

Text Processing
Contents

String Operations
   The Java String Class
   The Java StringBuffer Class
Pattern Matching Algorithms
   Brute Force
   The Boyer-Moore Algorithm
   The Knuth-Morris-Pratt Algorithm
Tries
   Standard Tries
   Compressed Tries
   Suffix Tries
   Search Engines
Text Compression
   The Huffman Coding Algorithm
   The Greedy Method
Text Similarity Testing
   The Longest Common Subsequence Problem
   Dynamic Programming
   Applying Dynamic Programming to the LCS Problem
Exercises
java.datastructures.net

String Operations

Document processing is rapidly becoming one of the dominant functions of computers. Computers are used to edit documents, to search documents, to transport documents over the Internet, and to display documents on printers and computer screens. For example, the Internet document formats HTML and XML are primarily text formats, with added tags for multimedia content. Making sense of the many terabytes of information on the Internet requires a considerable amount of text processing.

In addition to having interesting applications, text processing algorithms also highlight some important algorithmic design patterns. In particular, the pattern matching problem gives rise to the brute-force method, which is often inefficient but has wide applicability. For text compression, we can apply the greedy method, which often allows us to approximate solutions to hard problems, and for some problems
(such as in text compression) actually gives rise to optimal algorithms. Finally, in discussing text similarity, we introduce the dynamic programming design pattern, which can be applied in some special instances to solve a problem in polynomial time that appears at first to require exponential time to solve.

At the heart of algorithms for processing text are methods for dealing with character strings. Character strings can come from a wide variety of sources, including scientific, linguistic, and Internet applications. Indeed, the following are examples of such strings:

   P = "CGTAAACTGCTTTAATCAAACG"
   S = the Internet address (URL) for the web site that accompanies this book.

The first string, P, comes from DNA applications, and the second string, S, is an Internet address (URL).

Several of the typical string processing operations involve breaking large strings into smaller strings. In order to be able to speak about the pieces that result from such operations, we use the term substring of an n-character string P to refer to a string of the form P[i]P[i+1]P[i+2]···P[j], for some 0 ≤ i ≤ j ≤ n − 1, that is, the string formed by the characters in P from index i to index j, inclusive. Technically, this means that a string is actually a substring of itself (taking i = 0 and j = n − 1), so if we want to rule this out as a possibility, we must restrict the definition to proper substrings, which require that either i > 0 or j < n − 1.

To simplify the notation for referring to substrings, let us use P[i..j] to denote the substring of P from index i to index j, inclusive. That is, P[i..j] = P[i]P[i+1]···P[j]. We use the convention that if i > j, then P[i..j] is equal to the null string, which has length 0. In addition, in order to distinguish some special kinds of substrings, let us refer to any substring of the form P[0..i], for 0 ≤ i ≤ n − 1, as a prefix of P, and any substring of the form P[i..n−1], for 0 ≤ i ≤ n − 1, as a suffix of P. For example, if we again take P to be the string of DNA given above, then "CGTAA" is a prefix of P, "CG" is a suffix of P, and "TTAATC" is a (proper) substring of P. Note that the null string is a prefix and a suffix of any other string.

To allow for fairly general notions of a character string, we typically do not restrict the characters in T and P to explicitly come from a well-known character set, like the Unicode character set. Instead, we typically use the symbol Σ to denote the character set, or alphabet, from which characters can come. Since most document processing algorithms are used in applications where the underlying character set is
constant.

String operations come in two flavors: those that modify the string they act on and those that simply return information about the string without actually modifying it. Java makes this distinction precise by defining the String class to represent immutable strings, which cannot be modified, and the StringBuffer class to represent mutable strings, which can be modified.

The Java String Class

The main operations of the Java String class are listed below:

   length(): Return the length, n, of S.
   charAt(i): Return the character at index i in S.
   startsWith(Q): Determine if Q is a prefix of S.
   endsWith(Q): Determine if Q is a suffix of S.
   substring(i, j): Return the substring S[i..j].
   concat(Q): Return the concatenation of S and Q, that is, S + Q.
   equals(Q): Determine if Q is equal to S.
   indexOf(Q): If Q is a substring of S, return the index of the beginning of the first occurrence of Q in S; else return −1.

This collection forms the typical operations for immutable strings.
Example: Consider the following operations, performed on the string S = "abcdefghijklmnop":

   Operation              Output
   length()               16
   charAt(5)              'f'
   concat("qrs")          "abcdefghijklmnopqrs"
   endsWith("javapop")    false
   indexOf("ghi")         6
   startsWith("abcd")     true
   substring(4, 9)        "efghij"

With the exception of the indexOf(Q) method, which we discuss later in this chapter, all the methods above are easily implemented simply by representing the string as an array of characters, which is the standard String implementation in Java.

The Java StringBuffer Class

The main methods of the Java StringBuffer class are listed below:

   append(Q): Return S + Q, replacing S with S + Q.
   insert(i, Q): Return and update S to be the string obtained by inserting Q inside S
starting at index i.
   reverse(): Reverse and return the string S.
   setCharAt(i, ch): Set the character at index i in S to be ch.
   charAt(i): Return the character at index i in S.

Error conditions occur when the index i is out of the bounds of the indices of the string. With the exception of the charAt method, most of the methods of the String class are not immediately available to a StringBuffer object S in Java. Fortunately, the Java StringBuffer class provides a toString() method that returns a String version of S, which can be used to access String methods.

Example: Consider the following sequence of operations, which are performed on the mutable string S that is initially "abcdefghijklmnop":

   Operation            S
   append("qrs")        "abcdefghijklmnopqrs"
   insert(3, "xyz")     "abcxyzdefghijklmnopqrs"
   reverse()            "srqponmlkjihgfedzyxcba"
   setCharAt(7, 'W')    "srqponmWkjihgfedzyxcba"

Pattern Matching Algorithms
In the classic pattern matching problem on strings, we are given a text string T of length n and a pattern string P of length m, and want to find whether P is a substring of T. The notion of a "match" is that there is a substring of T starting at some index i that matches P, character by character, so that T[i] = P[0], T[i+1] = P[1], ..., T[i+m−1] = P[m−1]. That is, P = T[i..i+m−1]. Thus, the output from a pattern matching algorithm could either be some indication that the pattern P does not exist in T or an integer indicating the starting index in T of a substring matching P. This is exactly the computation performed by the indexOf method of the Java String interface. Alternatively, one may want to find all the indices where a substring of T matching P begins.

In this section, we present three pattern matching algorithms (with increasing levels of difficulty).

Brute Force

The brute force algorithmic design pattern is a powerful technique for algorithm design when we have something we wish to search for or when we wish to optimize some function. In applying this technique in a general situation, we typically enumerate all possible configurations of the inputs involved and pick the best of all these enumerated configurations.

In applying this technique to design the brute-force pattern matching algorithm, we derive what is probably the first algorithm that we might think of for solving the pattern matching problem: we simply test all the possible placements of P relative to T. This algorithm, shown below, is quite simple.

Algorithm BruteForceMatch(T, P):
   Input: Strings T (text) with n characters and P (pattern) with m characters
   Output: Starting index of the first substring of T matching P, or an indication
           that P is not a substring of T
   for i ← 0 to n − m {for each candidate index in T} do
      j ← 0
      while (j < m and T[i + j] = P[j]) do
         j ← j + 1
      if j = m then
         return i
   return "There is no substring of T matching P."

Code Fragment: Brute-force pattern matching.
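The pseudocode translates directly into Java. The following is a minimal sketch of our own (the class and method names are ours), returning −1 when there is no match, in the style of String's indexOf:

public class BruteForceMatch {
    // Returns the starting index of the first occurrence of pattern in text, or -1.
    public static int indexOf(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        for (int i = 0; i <= n - m; i++) {       // each candidate placement
            int j = 0;
            while (j < m && text.charAt(i + j) == pattern.charAt(j))
                j++;
            if (j == m)
                return i;                        // match found at index i
        }
        return -1;                               // no substring of text matches pattern
    }

    public static void main(String[] args) {
        System.out.println(indexOf("abacaabaccabacabaabb", "abacab")); // prints 10
    }
}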
The brute-force pattern matching algorithm could not be simpler. It consists of two nested loops, with the outer loop indexing through all possible starting indices of the pattern in the text, and the inner loop indexing through each character of the pattern, comparing it to its potentially corresponding character in the text. Thus, the correctness of the brute-force pattern matching algorithm follows immediately from this exhaustive search approach.

The running time of brute-force pattern matching in the worst case is not good, however, because, for each candidate index in T, we can perform up to m character comparisons to discover that P does not match T at the current index. Referring to the pseudocode above, we see that the outer for loop is executed at most n − m + 1 times, and the inner loop is executed at most m times. Thus, the running time of the brute-force method is O((n − m + 1)m), which is simplified as O(nm). Note that, when m = n/2, this algorithm has quadratic running time O(n²).

Example: Suppose we are given the text string T = "abacaabaccabacabaabb" and the pattern string P = "abacab". The figure below illustrates the execution of the brute-force pattern matching algorithm on T and P.

Figure: An example run of the brute-force pattern matching algorithm. The algorithm performs character comparisons, indicated with numerical labels.
The Boyer-Moore Algorithm

At first, we might feel that it is always necessary to examine every character in T in order to locate a pattern P as a substring. But this is not always the case, for the Boyer-Moore (BM) pattern matching algorithm, which we study in this section, can sometimes avoid comparisons between P and a sizable fraction of the characters in T. The only caveat is that, whereas the brute-force algorithm can work even with a potentially unbounded alphabet, the BM algorithm assumes the alphabet is of fixed, finite size. It works the fastest when the alphabet is moderately sized and the pattern is relatively long. Thus, the BM algorithm is ideal for searching words in documents. In this section, we describe a simplified version of the original algorithm by Boyer and Moore.

The main idea of the BM algorithm is to improve the running time of the brute-force algorithm by adding two potentially time-saving heuristics. Roughly stated, these heuristics are as follows:

Looking-Glass Heuristic: When testing a possible placement of P against T, begin the comparisons from the end of P and move backward to the front of P.

Character-Jump Heuristic: During the testing of a possible placement of P against T, a mismatch of text character T[i] = c with the corresponding pattern character P[j] is handled as follows. If c is not contained anywhere in P, then shift P completely past T[i] (for it cannot match any character in P). Otherwise, shift P until an occurrence of character c in P gets aligned with T[i].
These two heuristics work as an integrated team. The looking-glass heuristic sets up the other heuristic to allow us to avoid comparisons between P and whole groups of characters in T. In this case at least, we can get to the destination faster by going backwards, for if we encounter a mismatch during the consideration of P at a certain location in T, then we are likely to avoid lots of needless comparisons by significantly shifting P relative to T using the character-jump heuristic. The character-jump heuristic pays off big if it can be applied early in the testing of a potential placement of P against T.

Let us therefore get down to the business of defining how the character-jump heuristic can be integrated into a string pattern matching algorithm. To implement this heuristic, we define a function last(c) that takes a character c from the alphabet and characterizes how far we may shift the pattern P if a character equal to c is found in the text that does not match the pattern. In particular, we define last(c) as follows:

• If c is in P, last(c) is the index of the last (right-most) occurrence of c in P.
• Otherwise, we conventionally define last(c) = −1.

If characters can be used as indices in arrays, then the last function can be easily implemented as a look-up table. We leave the method for computing this table in O(m + |Σ|) time, given P, as a simple exercise. This last function will give us all the information we need to perform the character-jump heuristic.
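For concreteness, here is one way the look-up table could be built in O(m + |Σ|) time. This is our own sketch of the exercise just mentioned, not the book's code, and it assumes characters fit in the 7-bit ASCII range:

public class LastFunction {
    private static final int ALPHABET_SIZE = 128; // assumes 7-bit ASCII characters

    // Builds the table last[c] = index of the right-most occurrence of c in
    // pattern, or -1 if c does not occur in pattern. Runs in O(m + |Sigma|) time.
    public static int[] buildLastFunction(String pattern) {
        int[] last = new int[ALPHABET_SIZE];
        java.util.Arrays.fill(last, -1);          // default: character not in pattern
        for (int j = 0; j < pattern.length(); j++)
            last[pattern.charAt(j)] = j;          // later occurrences overwrite earlier ones
        return last;
    }

    public static void main(String[] args) {
        int[] last = buildLastFunction("abacab");
        System.out.println(last['a']); // 4 (right-most 'a' is at index 4)
        System.out.println(last['b']); // 5
        System.out.println(last['c']); // 3
        System.out.println(last['d']); // -1
    }
}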
The BM pattern matching algorithm is shown in the code fragment below.

Code Fragment: The Boyer-Moore pattern matching algorithm.

Figure: Illustration of the jump step in the algorithm, where we let l = last(T[i]). We distinguish two cases: (a) 1 + l ≤ j, where we shift the pattern by j − l units; (b) j < 1 + l, where we shift the pattern by one unit.
The figure below illustrates the execution of the BM pattern matching algorithm on an input string similar to the previous example.

Figure: An illustration of the BM pattern matching algorithm. The algorithm performs character comparisons, which are indicated with numerical labels.
The correctness of the BM pattern matching algorithm follows from the fact that each time the method makes a shift, it is guaranteed not to "skip" over any possible matches, since last(c) is the location of the last occurrence of c in P.

The worst-case running time of the BM algorithm is O(nm + |Σ|). Namely, the computation of the last function takes O(m + |Σ|) time, and the actual search for the pattern takes O(nm) time in the worst case, the same as the brute-force algorithm. An example of a text-pattern pair that achieves the worst case is a text of all a's together with a pattern consisting of a b followed by m − 1 a's. The worst-case performance, however, is unlikely to be achieved for English text, for, in this case, the BM algorithm is often able to skip large portions of text (see the figure below). Experimental evidence on English text shows that the average number of comparisons done per text character is well below one for a five-character pattern string.

Figure: An example of a Boyer-Moore execution on English text.
A Java implementation of the BM pattern matching algorithm is given in the code fragment below.

Code Fragment: Java implementation of the BM pattern matching algorithm. The algorithm is expressed by two static methods: method BMmatch performs the matching and calls the auxiliary method buildLastFunction to compute the last function, expressed by an array indexed by the ASCII code of the character. Method BMmatch indicates the absence of a match by returning the conventional value −1.
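As a sketch consistent with that caption (our own code, not the book's listing), a simplified Boyer-Moore using the two heuristics and an ASCII-indexed last table might look like the following:

public class BMMatch {
    private static final int ALPHABET_SIZE = 128; // assumes 7-bit ASCII text

    // Returns the starting index of the first occurrence of pattern in text, or -1.
    public static int bmMatch(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0)
            return 0;                 // an empty pattern trivially matches at index 0
        if (m > n)
            return -1;
        int[] last = buildLastFunction(pattern);
        int i = m - 1;                // index into the text
        int j = m - 1;                // index into the pattern (compared right to left)
        do {
            if (text.charAt(i) == pattern.charAt(j)) {
                if (j == 0)
                    return i;         // a match!
                i--;                  // looking-glass heuristic: move backward
                j--;
            } else {
                // character-jump heuristic
                i += m - Math.min(j, 1 + last[text.charAt(i)]);
                j = m - 1;
            }
        } while (i <= n - 1);
        return -1;                    // no match
    }

    private static int[] buildLastFunction(String pattern) {
        int[] last = new int[ALPHABET_SIZE];
        java.util.Arrays.fill(last, -1);
        for (int j = 0; j < pattern.length(); j++)
            last[pattern.charAt(j)] = j;
        return last;
    }

    public static void main(String[] args) {
        System.out.println(bmMatch("abacaabaccabacabaabb", "abacab")); // prints 10
    }
}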
The original BM algorithm achieves running time O(n + m + |Σ|) by using an alternative shift heuristic to the partially matched text string, whenever it
shifts the pattern more than the character-jump heuristic. This alternative shift heuristic is based on applying the main idea from the Knuth-Morris-Pratt pattern matching algorithm, which we discuss next.

The Knuth-Morris-Pratt Algorithm

In studying the worst-case performance of the brute-force and BM pattern matching algorithms on specific instances of the problem, such as that given in the earlier example, we should notice a major inefficiency. Specifically, we may perform many comparisons while testing a potential placement of the pattern against the text, yet if we discover a pattern character that does not match in the text, then we throw away all the information gained by these comparisons and start over again from scratch with the next incremental placement of the pattern. The Knuth-Morris-Pratt (or "KMP") algorithm, discussed in this section, avoids this waste of information and, in so doing, it achieves a running time of O(n + m), which is optimal in the worst case. That is, in the worst case any pattern matching algorithm will have to examine all the characters of the text and all the characters of the pattern at least once.

The Failure Function

The main idea of the KMP algorithm is to preprocess the pattern string P so as to compute a failure function f that indicates the proper shift of P so that, to the largest extent possible, we can reuse previously performed comparisons. Specifically, the failure function f(j) is defined as the length of the longest prefix of P that is a suffix of P[1..j] (note that we did not use P[0..j] here). We also use the convention that f(0) = 0. Later, we will discuss how to compute the failure function efficiently. The importance of this failure function is that it "encodes" repeated substrings inside the pattern itself.

Example: Consider the pattern string P = "abacab" from the earlier examples. The Knuth-Morris-Pratt (KMP) failure function f(j) for the string P is as shown in the following table:

   j      0   1   2   3   4   5
   P[j]   a   b   a   c   a   b
   f(j)   0   0   1   0   1   2

The KMP pattern matching algorithm, shown in the code fragment below, incrementally processes the text string T comparing it to the pattern string P. Each time there is a match, we increment the current indices. On the other hand, if there is a mismatch and we have previously made progress in P, then we consult the failure function to determine the new index in P where we need to continue checking P against T.
Otherwise (there was a mismatch and we are at the beginning of P), we simply increment the index for T and keep the index for P at its beginning. We repeat this process until we find a match of P in T or the index for T reaches n, the length of T (indicating that we did not find the pattern P in T).

Code Fragment: The KMP pattern matching algorithm.

The main part of the KMP algorithm is the while loop, which performs a comparison between a character in T and a character in P each iteration. Depending upon the outcome of this comparison, the algorithm either moves on to the next characters in T and P, consults the failure function for a new candidate character in P, or starts over with the next index in T. The correctness of this algorithm follows from the definition of the failure function. Any comparisons that are skipped are actually unnecessary, for the failure function guarantees that all the ignored comparisons are redundant; they would involve comparing the same matching characters over again.

Figure: An illustration of the KMP pattern matching algorithm. The failure function f for this pattern is given in the example above. The algorithm performs character comparisons, which are indicated with numerical labels.
The figure above illustrates the KMP pattern matching algorithm on the same input strings as in the earlier example. Note the use of the failure function to avoid redoing one of the comparisons between a character of the pattern and a character of the text. Also note that the algorithm performs fewer overall comparisons than the brute-force algorithm run on the same strings.

Performance

Excluding the computation of the failure function, the running time of the KMP algorithm is clearly proportional to the number of iterations of the while loop. For the sake of the analysis, let us define k = i − j. Intuitively, k is the total amount by which the pattern P has been shifted with respect to the text T. Note that throughout the execution of the algorithm, we have k ≤ n. One of the following three cases occurs at each iteration of the loop:

• If T[i] = P[j], then i increases by 1, and k does not change, since j also increases by 1.
• If T[i] ≠ P[j] and j > 0, then i does not change and k increases by at least 1, since in this case k changes from i − j to i − f(j − 1), which is an addition of j − f(j − 1), which is positive because f(j − 1) < j.
• If T[i] ≠ P[j] and j = 0, then i increases by 1 and k increases by 1, since j does not change.

Thus, at each iteration of the loop, either i or k increases by at least 1 (possibly both); hence, the total number of iterations of the while loop in the KMP pattern matching algorithm is at most 2n. Achieving this bound, of course, assumes that we have already computed the failure function for P.
Constructing the KMP Failure Function

To construct the failure function, we use the method shown in the code fragment below, which is a "bootstrapping" process quite similar to the KMPMatch algorithm. We compare the pattern to itself as in the KMP algorithm. Each time we have two characters that match, we set f(i) = j + 1. Note that since we have i > j throughout the execution of the algorithm, f(j − 1) is always defined when we need to use it.

Code Fragment: Computation of the failure function used in the KMP pattern matching algorithm. Note how the algorithm uses the previous values of the failure function to efficiently compute new values.

Algorithm KMPFailureFunction runs in O(m) time. Its analysis is analogous to that of algorithm KMPMatch. Thus, we have:

Proposition: The Knuth-Morris-Pratt algorithm performs pattern matching on a text string of length n and a pattern string of length m in O(n + m) time.
A Java implementation of the KMP pattern matching algorithm is shown in the code fragment below.

Code Fragment: Java implementation of the KMP pattern matching algorithm. The algorithm is expressed by two static methods: method KMPmatch performs the matching and calls the auxiliary method computeFailFunction to compute the failure function, expressed by an array. Method KMPmatch indicates the absence of a match by returning the conventional value −1.
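Here is a minimal sketch of our own consistent with the caption above (the method names mirror the caption, but the body is our reconstruction, not the book's listing):

public class KMPMatch {
    // Returns the starting index of the first occurrence of pattern in text, or -1.
    public static int kmpMatch(String text, String pattern) {
        int n = text.length(), m = pattern.length();
        if (m == 0)
            return 0;                      // an empty pattern matches at index 0
        int[] fail = computeFailFunction(pattern);
        int i = 0, j = 0;
        while (i < n) {
            if (pattern.charAt(j) == text.charAt(i)) {
                if (j == m - 1)
                    return i - m + 1;      // a match!
                i++;
                j++;
            } else if (j > 0) {
                j = fail[j - 1];           // reuse the previously matched prefix
            } else {
                i++;
            }
        }
        return -1;                         // no match
    }

    // fail[j] = length of the longest prefix of pattern that is a suffix of pattern[1..j].
    private static int[] computeFailFunction(String pattern) {
        int m = pattern.length();
        int[] fail = new int[m];           // fail[0] = 0 by default
        int i = 1, j = 0;
        while (i < m) {
            if (pattern.charAt(j) == pattern.charAt(i)) {
                fail[i] = j + 1;           // we have matched j + 1 characters
                i++;
                j++;
            } else if (j > 0) {
                j = fail[j - 1];           // j indexes just after a matching prefix
            } else {
                fail[i] = 0;               // no match here
                i++;
            }
        }
        return fail;
    }

    public static void main(String[] args) {
        System.out.println(kmpMatch("abacaabaccabacabaabb", "abacab")); // prints 10
    }
}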
Tries

The pattern matching algorithms presented in the previous section speed up the search in a text by preprocessing the pattern (to compute the failure function in the KMP algorithm or the last function in the BM algorithm). In this section, we take a complementary approach. Namely, we present string searching algorithms that preprocess the text. This approach is suitable for applications where a series of queries is performed on a fixed text, so that the initial cost of preprocessing the text is compensated by a speedup in each subsequent query (for example, a web site that offers pattern matching in Shakespeare's Hamlet or a search engine that offers web pages on the Hamlet topic).

A trie (pronounced "try") is a tree-based data structure for storing strings in order to support fast pattern matching. The main application for tries is in information retrieval. Indeed, the name "trie" comes from the word "retrieval." In an information retrieval application, such as a search for a certain DNA sequence in a genomic database, we are given a collection S of strings, all defined using the same alphabet. The primary query operations that tries support are pattern matching and prefix matching. The latter operation involves being given a string X, and looking for all the strings in S that contain X as a prefix.

Standard Tries

Let S be a set of s strings from alphabet Σ such that no string in S is a prefix of another string. A standard trie for S is an ordered tree T with the following properties (see the figure below):

• Each node of T, except the root, is labeled with a character of Σ.
• The ordering of the children of an internal node of T is determined by a canonical ordering of the alphabet Σ.
• T has s external nodes, each associated with a string of S, such that the concatenation of the labels of the nodes on the path from the root to an external node v of T yields the string of S associated with v.

Figure: Standard trie for the strings {bear, bell, bid, bull, buy, sell, stock, stop}.

Thus, a trie T represents the strings of S with paths from the root to the external nodes of T. Note the importance of assuming that no string in S is a prefix of another string. This ensures that each string of S is uniquely associated with an external node of T. We can always satisfy this assumption by adding a special character that is not in the original alphabet Σ at the end of each string.

An internal node in a standard trie T can have anywhere between 1 and d children, where d is the size of the alphabet. There is an edge going from the root r to one of its children for each character that is first in some string in the collection S. In addition, a path from the root of T to an internal node v at depth i corresponds to
an i-character prefix X[0..i−1] of a string X of S. In fact, for each character c that can follow the prefix X[0..i−1] in a string of the set S, there is a child of v labeled with character c. In this way, a trie concisely stores the common prefixes that exist among a set of strings.

If there are only two characters in the alphabet, then the trie is essentially a binary tree, with some internal nodes possibly having only one child (that is, it may be an improper binary tree). In general, if there are d characters in the alphabet, then the trie will be a multi-way tree where each internal node has between 1 and d children. In addition, there are likely to be several internal nodes in a standard trie that have fewer than d children. For example, the trie shown in the figure above has several internal nodes with only one child. We can implement a trie with a tree storing characters at its nodes.

The following proposition provides some important structural properties of a standard trie:

Proposition: A standard trie storing a collection S of s strings of total length n from an alphabet of size d has the following properties:
• Every internal node of T has at most d children.
• T has s external nodes.
• The height of T is equal to the length of the longest string in S.
• The number of nodes of T is O(n).
The worst case for the number of nodes of a trie occurs when no two strings share a common nonempty prefix; that is, except for the root, all internal nodes have one child.

A trie T for a set S of strings can be used to implement a dictionary whose keys are the strings of S. Namely, we perform a search in T for a string X by tracing down from the root the path indicated by the characters in X. If this path can be traced and terminates at an external node, then we know X is in the dictionary. For example, in the trie of the figure above, tracing the path for "bull" ends up at an external node. If the path cannot be traced or the path can be traced but terminates at an internal node, then X is not in the dictionary. In the same example, the path for "bet" cannot be traced and the path for "be" ends at an internal node. Neither such word is in the dictionary. Note that in this implementation of a dictionary, single characters are compared instead of the entire string (key).

It is easy to see that the running time of the search for a string of size m is O(dm), where d is the size of the alphabet. Indeed, we visit at most m + 1 nodes of T and we spend O(d) time at each node. For some alphabets, we may be able to improve the time spent at a node to be O(1) or O(log d) by using a dictionary of characters implemented in a hash table or search table. However, since d is a constant in most applications, we can stick with the simple approach that takes O(d) time per node visited.

From the discussion above, it follows that we can use a trie to perform a special type of pattern matching, called word matching, where we want to determine whether a given pattern matches one of the words of the text exactly (see the figure below). Word matching differs from standard pattern matching since the pattern cannot match an arbitrary substring of the text, but only one of its words. Using a trie, word matching for a pattern of length m takes O(dm) time, where d is the size of the alphabet, independent of the size of the text. If the alphabet has constant size (as is the case for text in natural languages and DNA strings), a query takes O(m) time, proportional to the size of the pattern. A simple extension of this scheme supports prefix matching queries. However, arbitrary occurrences of the pattern in the text (for example, the pattern is a proper suffix of a word or spans two words) cannot be efficiently performed.

Figure: Word matching and prefix matching with a standard trie: (a) text to be searched; (b) standard trie for the words in the text (with articles and prepositions, which are also known as stop words, excluded), with external nodes augmented with indications of the word positions.

To construct a standard trie for a set S of strings, we can use an incremental algorithm that inserts the strings one at a time. Recall the assumption that no string of S is a prefix of another string. To insert a string X into the current trie T, we first try to trace the path associated with X in T. Since X is not already in T and no string in S is a prefix of another string, we will stop tracing the path at an internal node v of T before reaching the end of X. We then create a new chain of node descendents of v to store the remaining characters of X. The time to insert X is O(dm), where m is the length of X and d is the size of the alphabet. Thus, constructing the entire trie for set S takes O(dn) time, where n is the total length of the strings of S. A sketch of this construction is shown below.
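To make the incremental construction concrete, here is a small standard-trie sketch of our own. It uses a map-based node rather than an array of d children, and a '$' terminator to enforce the no-prefix assumption:

import java.util.Map;
import java.util.TreeMap;

public class StandardTrie {
    // Children are kept in a TreeMap so they follow the canonical alphabet order.
    private final Map<Character, StandardTrie> children = new TreeMap<>();
    private static final char END = '$'; // terminator: no string is a prefix of another

    // Inserts word by tracing the existing path, then adding a chain of new nodes.
    public void insert(String word) {
        StandardTrie node = this;
        for (char c : (word + END).toCharArray())
            node = node.children.computeIfAbsent(c, k -> new StandardTrie());
    }

    // Word matching: does the trie store exactly this word?
    public boolean contains(String word) {
        return trace(word + END) != null;
    }

    // Prefix matching succeeds if the prefix path can be traced.
    public boolean containsPrefix(String prefix) {
        return trace(prefix) != null;
    }

    private StandardTrie trace(String s) {
        StandardTrie node = this;
        for (char c : s.toCharArray()) {
            node = node.children.get(c);
            if (node == null)
                return null;
        }
        return node;
    }

    public static void main(String[] args) {
        StandardTrie trie = new StandardTrie();
        for (String w : new String[] {"bear", "bell", "bid", "bull", "buy", "sell", "stock", "stop"})
            trie.insert(w);
        System.out.println(trie.contains("bull"));     // true
        System.out.println(trie.contains("bet"));      // false
        System.out.println(trie.contains("be"));       // false (ends at an internal node)
        System.out.println(trie.containsPrefix("be")); // true
    }
}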
There is a potential space inefficiency in the standard trie that has prompted the development of the compressed trie, which is also known (for historical reasons) as the Patricia trie. Namely, there are potentially a lot of nodes in the standard trie that have only one child, and the existence of such nodes is a waste. We discuss the compressed trie next.

Compressed Tries
A compressed trie is similar to a standard trie but it ensures that each internal node in the trie has at least two children. It enforces this rule by compressing chains of single-child nodes into individual edges (see the figure below). Let T be a standard trie. We say that an internal node v of T is redundant if v has one child and is not the root. For example, the trie of the earlier figure has eight redundant nodes. Let us also say that a chain of k ≥ 2 edges,

   (v0, v1)(v1, v2)···(vk−1, vk),

is redundant if vi is redundant for i = 1, ..., k − 1, and v0 and vk are not redundant. We can transform T into a compressed trie by replacing each redundant chain (v0, v1)···(vk−1, vk) of k ≥ 2 edges into a single edge (v0, vk), relabeling vk with the concatenation of the labels of nodes v1, ..., vk.

Figure: Compressed trie for the strings {bear, bell, bid, bull, buy, sell, stock, stop}. Compare this with the standard trie shown in the earlier figure.

Thus, nodes in a compressed trie are labeled with strings, which are substrings of strings in the collection, rather than with individual characters. The advantage of a compressed trie over a standard trie is that the number of nodes of the compressed trie is proportional to the number of strings and not to their total length, as shown in the following proposition (compare with the proposition for standard tries):

Proposition: A compressed trie storing a collection S of s strings from an alphabet of size d has the following properties:
• Every internal node of T has at least two children and at most d children.
• T has s external nodes.
• The number of nodes of T is O(s).

The attentive reader may wonder whether the compression of paths provides any significant advantage, since it is offset by a corresponding expansion of the node labels. Indeed, a compressed trie is truly advantageous only when it is used as an auxiliary index structure over a collection of strings already stored in a primary structure, and is not required to actually store all the characters of the strings in the collection.

Suppose, for example, that the collection S of strings is an array of strings S[0], S[1], ..., S[s−1]. Instead of storing the label X of a node explicitly, we represent it implicitly by a triplet of integers (i, j, k), such that X = S[i][j..k]; that is, X is the substring of S[i] consisting of the characters from the jth to the kth, included. (See the example in the figure below. Also compare with the standard trie of the earlier figure.)

Figure: (a) Collection S of strings stored in an array. (b) Compact representation of the compressed trie for S.

This additional compression scheme allows us to reduce the total space for the trie itself from O(n) for the standard trie to O(s) for the compressed trie, where n is the total length of the strings in S and s is the number of strings in S. We must still store the different strings in S, of course, but we nevertheless reduce the space for the trie.
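As a small illustration of this implicit labeling, the following sketch (names are our own) stores a node label as a triplet rather than a string:

public class CompactLabel {
    private final int i, j, k;   // label denotes S[i][j..k], inclusive

    public CompactLabel(int i, int j, int k) {
        this.i = i;
        this.j = j;
        this.k = k;
    }

    // Materialize the label only when needed; each node uses O(1) space otherwise.
    public String materialize(String[] s) {
        return s[i].substring(j, k + 1);  // Java's substring excludes the end index
    }

    public static void main(String[] args) {
        String[] s = {"see", "bear", "sell", "stock"};
        // (1, 2, 3) denotes the substring "ar" of "bear".
        System.out.println(new CompactLabel(1, 2, 3).materialize(s)); // prints "ar"
    }
}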
In the next section, we present an application where the collection of strings can also be stored compactly.

Suffix Tries

One of the primary applications for tries is for the case when the strings in the collection S are all the suffixes of a string X. Such a trie is called the suffix trie (also known as a suffix tree or position tree) of string X. For example, the figure below shows the suffix trie for the eight suffixes of the string "minimize." For a suffix trie, the compact representation presented in the previous section can be further simplified. Namely, the label of each vertex is a pair (i, j) indicating the string X[i..j]. To satisfy the rule that no suffix of X is a prefix of another suffix, we can add a special character, denoted with $, that is not in the original alphabet Σ at the end of X (and thus to every suffix). That is, if string X has length n, we build a trie for the set of strings X[i..n−1]$, for i = 0, ..., n − 1.

Saving Space

Using a suffix trie allows us to save space over a standard trie by using several space compression techniques, including those used for the compressed trie.

The advantage of the compact representation of tries now becomes apparent for suffix tries. Since the total length of the suffixes of a string X of length n is

   1 + 2 + ··· + n = n(n + 1)/2,

storing all the suffixes of X explicitly would take O(n²) space. Even so, the suffix trie represents these strings implicitly in O(n) space, as formally stated in the following proposition.

Proposition: The compact representation of a suffix trie T for a string X of length n uses O(n) space.

Construction

We can construct the suffix trie for a string of length n with an incremental algorithm like the one given in the previous section. This construction takes O(dn²) time because the total length of the suffixes is quadratic in n. However, the (compact) suffix trie for a string of length n can be constructed in O(n) time with a specialized algorithm, different from the one for general tries. This linear-time construction algorithm is fairly complex, however, and is not reported here. Still, we can take advantage of the existence of this fast construction algorithm when we want to use a suffix trie to solve other problems.
Figure: (a) Suffix trie T for the string X = "minimize". (b) Compact representation of T, where the pair (i, j) denotes X[i..j].

Using a Suffix Trie

The suffix trie T for a string X can be used to efficiently perform pattern matching queries on text X. Namely, we can determine whether a pattern P is a substring of X by trying to trace a path associated with P in T. P is a substring of X if and only if such a path can be traced.
The details of this pattern matching procedure are given in the code fragment below, which assumes the following additional property on the labels of the nodes in the compact representation of the suffix trie: if a node v has label (i, j) and Y is the string of length y associated with the path from the root to v (included), then X[j − y + 1..j] = Y. This property ensures that we can easily compute the start index of the pattern in the text when a match occurs.

Code Fragment: Pattern matching with a suffix trie. We denote the label of a node v with (start(v), end(v)), that is, the pair of indices specifying the substring of the text associated with v.
The correctness of this algorithm follows from the fact that we search down the trie T, matching characters of the pattern P one at a time, until one of the following events occurs:

• We completely match the pattern P.
• We get a mismatch (caught by the termination of the for loop without a break out).
• We are left with characters of P still to be matched after processing an external node.

Let m be the size of pattern P and d be the size of the alphabet.
In order to determine the running time of algorithm suffixTrieMatch, we make the following observations:

• We process at most m + 1 nodes of the trie.
• Each node processed has at most d children.
• At each node v processed, we perform at most one character comparison for each child w of v to determine which child of v needs to be processed next (which may possibly be improved by using a fast dictionary to index the children of v).
• We perform at most m character comparisons overall in the processed nodes.
• We spend O(1) time for each character comparison.

Performance

We conclude that algorithm suffixTrieMatch performs pattern matching queries in O(dm) time (and would possibly run even faster if we used a dictionary to index children of nodes in the suffix trie). Note that the running time does not depend on the size of the text X. Also, the running time is linear in the size of the pattern; that is, it is O(m), for a constant-size alphabet. Hence, suffix tries are suited for repetitive pattern matching applications, where a series of pattern matching queries is performed on a fixed text.

We summarize the results of this section in the following proposition.

Proposition: Let X be a text string with n characters from an alphabet of size d. We can perform pattern matching queries on X in O(dm) time, where m is the length of the pattern, with the suffix trie of X, which uses O(n) space and can be constructed in O(dn) time.

We explore another application of tries in the next subsection.

Search Engines

The World Wide Web contains a huge collection of text documents (web pages). Information about these pages is gathered by a program called a web crawler, which then stores this information in a special dictionary database. A web search engine allows users to retrieve relevant information from this database, thereby identifying relevant pages on the web containing given keywords. In this section, we present a simplified model of a search engine.
The core information stored by a search engine is a dictionary, called an inverted index or inverted file, storing key-value pairs (w, L), where w is a word and L is a collection of pages containing word w. The keys (words) in this dictionary are called index terms and should be a set of vocabulary entries and proper nouns as large as possible. The elements in this dictionary are called occurrence lists and should cover as many web pages as possible.

We can efficiently implement an inverted index with a data structure consisting of:

• An array storing the occurrence lists of the terms (in no particular order).
• A compressed trie for the set of index terms, where each external node stores the index of the occurrence list of the associated term.

The reason for storing the occurrence lists outside the trie is to keep the size of the trie data structure sufficiently small to fit in internal memory. Instead, because of their large total size, the occurrence lists have to be stored on disk.

With our data structure, a query for a single keyword is similar to a word matching query (see the previous subsection). Namely, we find the keyword in the trie and we return the associated occurrence list.

When multiple keywords are given and the desired output are the pages containing all the given keywords, we retrieve the occurrence list of each keyword using the trie and return their intersection. To facilitate the intersection computation, each occurrence list should be implemented with a sequence sorted by address or with a dictionary (see, for example, the generic merge computation discussed earlier in the book).

In addition to the basic task of returning a list of pages containing given keywords, search engines provide an important additional service by ranking the pages returned by relevance. Devising fast and accurate ranking algorithms for search engines is a major challenge for computer researchers and electronic commerce companies.
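A toy version of this design, our own, with an in-memory HashMap standing in for the trie-plus-disk layout described above, might look like the following; it uses sorted occurrence lists so that multi-keyword queries can be answered by intersection:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeSet;

public class InvertedIndexDemo {
    // Maps each index term to its occurrence list (sorted page identifiers).
    private final Map<String, TreeSet<Integer>> index = new HashMap<>();

    public void addPage(int pageId, String text) {
        for (String word : text.toLowerCase().split("\\W+"))
            index.computeIfAbsent(word, k -> new TreeSet<>()).add(pageId);
    }

    // Returns the pages containing all the given keywords (intersection of lists).
    public List<Integer> query(String... keywords) {
        List<Integer> result = null;
        for (String w : keywords) {
            TreeSet<Integer> occ = index.getOrDefault(w.toLowerCase(), new TreeSet<>());
            if (result == null)
                result = new ArrayList<>(occ);
            else
                result.retainAll(occ);     // intersect with the next occurrence list
        }
        return result == null ? new ArrayList<>() : result;
    }

    public static void main(String[] args) {
        InvertedIndexDemo engine = new InvertedIndexDemo();
        engine.addPage(1, "to be or not to be");
        engine.addPage(2, "to sleep perchance to dream");
        engine.addPage(3, "not a dream");
        System.out.println(engine.query("to", "dream"));  // [2]
        System.out.println(engine.query("not"));          // [1, 3]
    }
}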
Text Compression

In this section, we consider an important text processing task, text compression. In this problem, we are given a string X defined over some alphabet, such as the ASCII or Unicode character sets, and we want to efficiently encode X into a small binary string Y (using only the characters 0 and 1). Text compression is useful in any situation where we are communicating over a low-bandwidth channel, such as a modem line or infrared connection, and we wish to minimize the time needed to transmit our text. Likewise, text compression is also useful for storing collections of large documents more efficiently, so as to allow for a fixed-capacity storage device to contain as many documents as possible.

The method for text compression explored in this section is the Huffman code. Standard encoding schemes, such as the ASCII and Unicode systems, use fixed-length binary strings to encode characters (with 7 bits in the ASCII system and 16 in the Unicode system). A Huffman code, on the other hand, uses a variable-length encoding optimized for the string X. The optimization is based on the use of character frequencies, where we have, for each character c, a count f(c) of the number of times c appears in the string X. The Huffman code saves space over a fixed-length encoding by using short code-word strings to encode high-frequency characters and long code-word strings to encode low-frequency characters.

To encode the string X, we convert each character in X from its fixed-length code word to its variable-length code word, and we concatenate all these code words in order to produce the encoding Y for X. In order to avoid ambiguities, we insist that no code word in our encoding is a prefix of another code word in our encoding. Such a code is called a prefix code, and it simplifies the decoding of Y in order to get back X (see the figure below). Even with this restriction, the savings produced by a variable-length prefix code can be significant, particularly if there is a wide variance in character frequencies (as is the case for natural language text in almost every spoken language).

Huffman's algorithm for producing an optimal variable-length prefix code for X is based on the construction of a binary tree T that represents the code. Each node in T, except the root, represents a bit in a code word, with each left child representing a "0" and each right child representing a "1." Each external node v is associated with a specific character, and the code word for that character is defined by the sequence of bits associated with the nodes in the path from the root of T to v (see the figure below). Each external node v has a frequency f(v), which is simply the frequency in X of the character associated with v. In addition, we give each internal node v in T a frequency, f(v), that is the sum of the frequencies of all the external nodes in the subtree rooted at v.

Figure: An illustration of an example Huffman code for the input string X = "a fast runner need never be afraid of the dark": (a) frequency of each character of X; (b) Huffman tree T for string X. The code for a character c is obtained by tracing the path from the root of T to the external node where c is stored, and associating a left child with 0 and a right child with 1.
The Huffman Coding Algorithm

The Huffman coding algorithm begins with each of the d distinct characters of the string X to encode being the root node of a single-node binary tree. The algorithm proceeds in a series of rounds. In each round, the algorithm takes the two binary trees with the smallest frequencies and merges them into a single binary tree. It repeats this process until only one tree is left (see the code fragment below).

Each iteration of the while loop in Huffman's algorithm can be implemented in O(log d) time using a priority queue represented with a heap. In addition, each iteration takes two nodes out of Q and adds one in, a process that will be repeated d − 1 times before exactly one node is left in Q. Thus, this algorithm runs in O(n + d log d) time. Although a full justification of this algorithm's correctness is beyond our scope here, we note that its intuition comes from a simple idea: any optimal code can be converted into an optimal code in which the code words for the two lowest-frequency characters, a and b, differ only in their last bit. Repeating the argument for a string with a and b replaced by a character c gives the following:

Proposition: Huffman's algorithm constructs an optimal prefix code for a string of length n with d distinct characters in O(n + d log d) time.

Code Fragment: The Huffman coding algorithm.
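A compact Java sketch of this procedure, our own rather than the book's listing, returns the code-word map instead of the tree object; it merges the two smallest-frequency trees per round via a priority queue, exactly as described above:

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class HuffmanDemo {
    // A coding-tree node: external nodes carry a character, internal nodes do not.
    private record Node(int freq, Character ch, Node left, Node right) {}

    public static Map<Character, String> buildCode(String x) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : x.toCharArray())
            freq.merge(c, 1, Integer::sum);         // count character frequencies

        PriorityQueue<Node> q = new PriorityQueue<>((a, b) -> a.freq - b.freq);
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            q.add(new Node(e.getValue(), e.getKey(), null, null));

        while (q.size() > 1) {                      // merge the two smallest trees
            Node t1 = q.poll(), t2 = q.poll();
            q.add(new Node(t1.freq + t2.freq, null, t1, t2));
        }

        Map<Character, String> code = new HashMap<>();
        assign(q.poll(), "", code);
        return code;
    }

    private static void assign(Node v, String prefix, Map<Character, String> code) {
        if (v == null) return;
        if (v.ch != null)                           // external node: record its code word
            code.put(v.ch, prefix.isEmpty() ? "0" : prefix);
        assign(v.left, prefix + "0", code);         // left child represents a 0
        assign(v.right, prefix + "1", code);        // right child represents a 1
    }

    public static void main(String[] args) {
        Map<Character, String> code =
            buildCode("a fast runner need never be afraid of the dark");
        System.out.println(code.get('e'));  // a shorter code word: 'e' is frequent
        System.out.println(code.get('k'));  // a longer code word: 'k' is rare
    }
}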
The Greedy Method

Huffman's algorithm for building an optimal encoding is an example application of an algorithmic design pattern called the greedy method. This design pattern is applied to optimization problems, where we are trying to construct some structure while minimizing or maximizing some property of that structure.

The general formula for the greedy method pattern is almost as simple as that for the brute-force method. In order to solve a given optimization problem using the greedy method, we proceed by a sequence of choices. The sequence starts from some well-understood starting condition, and computes the cost for that initial condition. The pattern then asks that we iteratively make additional choices by identifying the decision that achieves the best cost improvement from all of the choices that are currently possible.

This approach does not always lead to an optimal solution. But there are several problems that it does work for, and such problems are said to possess the greedy-choice property. This is the property that a global optimal condition can be reached by a series of locally optimal choices (that is, choices that are each the current best from among the possibilities available at the time), starting from a well-defined starting condition. The problem of computing an optimal variable-length prefix code is just one example of a problem that possesses the greedy-choice property.
Text Similarity Testing

A common text processing problem, which arises in genetics and software engineering, is to test the similarity between two text strings. In a genetics application, the two strings could correspond to two strands of DNA, which could, for example, come from two individuals, who we will consider genetically related if they have a long subsequence common to their respective DNA sequences. Likewise, in a software engineering application, the two strings could come from two versions of source code for the same program, and we may wish to determine which changes were made from one version to the next. Indeed, determining the similarity between two strings is considered such a common operation that the Unix and Linux operating systems come with a program, called diff, for comparing text files.

The Longest Common Subsequence Problem

There are several different ways we can define the similarity between two strings. Even so, we can abstract a simple, yet common, version of this problem using character strings and their subsequences. Given a string X = x0x1x2···xn−1, a subsequence of X is any string that is of the form

   xi1 xi2 ··· xik,  where ij < ij+1;

that is, it is a sequence of characters that are not necessarily contiguous but are nevertheless taken in order from X. For example, the string AAAG is a subsequence of the string CGATAATTGAGA. Note that the concept of a subsequence of a string is different from the one of a substring of a string, defined earlier in this chapter.

Problem Definition

The specific text similarity problem we address here is the longest common subsequence (LCS) problem. In this problem, we are given two character strings, X = x0x1x2···xn−1 and Y = y0y1y2···ym−1, over some alphabet (such as the alphabet {A, C, G, T} common in computational genetics) and are asked to find a longest string S that is a subsequence of both X and Y.

One way to solve the longest common subsequence problem is to enumerate all subsequences of X and take the largest one that is also a subsequence of Y. Since each character of X is either in or not in a subsequence, there are potentially 2^n different subsequences of X, each of which requires O(m) time to determine whether it is a subsequence of Y. Thus, this brute-force approach yields an exponential-time algorithm that runs in O(2^n · m) time, which is very inefficient. In this section, we discuss how to use an algorithmic design pattern called dynamic programming to solve the longest common subsequence problem much faster than this.

Dynamic Programming
Dynamic Programming

There are few algorithmic techniques that can take problems that seem to require exponential time and produce polynomial-time algorithms to solve them. Dynamic programming is one such technique. In addition, the algorithms that result from applications of the dynamic programming technique are usually quite simple, often needing little more than a few lines of code to describe some nested loops for filling in a table.

The Components of a Dynamic Programming Solution

The dynamic programming technique is used primarily for optimization problems, where we wish to find the "best" way of doing something. Often the number of different ways of doing that "something" is exponential, so a brute-force search for the best is computationally infeasible for all but the smallest problem sizes. We can apply the dynamic programming technique in such situations, however, if the problem has a certain amount of structure that we can exploit. This structure involves the following three components:

Simple subproblems: There has to be some way of repeatedly breaking the global optimization problem into subproblems. Moreover, there should be a simple way of defining subproblems with just a few indices, like i, j, k, and so on.

Subproblem optimization: An optimal solution to the global problem must be a composition of optimal subproblem solutions. We should not be able to find a globally optimal solution that contains suboptimal subproblems.

Subproblem overlap: Optimal solutions to unrelated subproblems can contain subproblems in common.

Having given the general components of a dynamic programming algorithm, we next show how to apply it to the longest common subsequence problem.

Applying Dynamic Programming to the LCS Problem

We can solve the longest common subsequence problem much faster than exponential time using the dynamic programming technique. As mentioned above, one of the key components of the dynamic programming technique is the definition of simple subproblems that satisfy the subproblem optimization and subproblem overlap properties.

Recall that in the LCS problem, we are given two character strings, X and Y, of length n and m, respectively, and are asked to find a longest string S that is a subsequence of both X and Y. Since X and Y are character strings, we have a natural set of indices with which to define subproblems: indices into the strings X and Y. Let us define a subproblem, therefore, as that of computing the value L[i, j], which we will use to denote the length of a longest string that is a subsequence of
both X[0..i] and Y[0..j]. This definition allows us to rewrite L[i, j] in terms of optimal subproblem solutions. This definition depends on which of two cases we are in (see the figure caption below).

Case 1: x_i = y_j. In this case, we have a match between the last character of X[0..i] and the last character of Y[0..j]. We claim that this character belongs to a longest common subsequence of X[0..i] and Y[0..j]. To justify this claim, let us suppose it is not true. There has to be some longest common subsequence x_{i_1} x_{i_2} ... x_{i_k} = y_{j_1} y_{j_2} ... y_{j_k}. If x_{i_k} = x_i or y_{j_k} = y_j, then we get the same sequence by setting i_k = i and j_k = j. Alternately, if x_{i_k} ≠ x_i, then we can get an even longer common subsequence by adding x_i = y_j to the end. Thus, a longest common subsequence of X[0..i] and Y[0..j] ends with x_i. Therefore, we set

    L[i, j] = L[i-1, j-1] + 1    if x_i = y_j.

Case 2: x_i ≠ y_j. In this case, we cannot have a common subsequence that includes both x_i and y_j. That is, we can have a common subsequence end with x_i or one that ends with y_j (or possibly neither), but certainly not both. Therefore, we set

    L[i, j] = max{L[i-1, j], L[i, j-1]}    if x_i ≠ y_j.

In order to make both of these equations make sense in the boundary cases when i = 0 or j = 0, we assign L[i, -1] = 0 for i = -1, 0, 1, ..., n-1 and L[-1, j] = 0 for j = -1, 0, 1, ..., m-1.

The definition of L[i, j] above satisfies subproblem optimization, for we cannot have a longest common subsequence without also having longest common subsequences for the subproblems. Also, it uses subproblem overlap, because a subproblem solution L[i, j] can be used in several other problems (namely, the problems L[i+1, j], L[i, j+1], and L[i+1, j+1]).

Figure: The two cases in the longest common subsequence algorithm: (a) x_i = y_j; (b) x_i ≠ y_j. Note that the algorithm stores only the L[i, j] values, not the matches.

The LCS Algorithm
Turning this definition of L[i, j] into an algorithm is actually quite straightforward. We initialize an (n+1) × (m+1) array, L, for the boundary cases when i = 0 or j = 0. Namely, we initialize L[i, -1] = 0 for i = -1, 0, 1, ..., n-1 and L[-1, j] = 0 for j = -1, 0, 1, ..., m-1. (This is a slight abuse of notation, since in reality, we would have to index the rows and columns of L starting with 0.) Then, we iteratively build up values in L until we have L[n-1, m-1], the length of a longest common subsequence of X and Y. We give a pseudo-code description of how this approach results in a dynamic programming solution to the longest common subsequence (LCS) problem in the code fragment referenced below.

Code Fragment: A dynamic programming algorithm for the LCS problem.

Performance

The running time of the LCS algorithm is easy to analyze, for it is dominated by two nested for loops, with the outer one iterating n times and the inner one iterating m times. Since the if-statement and assignment inside the loop each requires O(1) primitive operations, this algorithm runs in O(nm) time. Thus, the dynamic programming technique can be applied to the longest common subsequence problem to improve significantly over the exponential-time brute-force solution to the LCS problem.

Algorithm LCS computes the length of a longest common subsequence (stored in L[n-1, m-1]), but not the subsequence itself. As shown in the following proposition, a simple postprocessing step can extract the longest common subsequence from the array L returned by the algorithm.
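Since the book's code fragment is not reproduced in this extract, the following minimal Java sketch shows one way the same dynamic programming algorithm can be written. It shifts all indices up by one, so that row 0 and column 0 of the array hold the boundary zeros that the text assigns to L[i, -1] and L[-1, j]; the class and method names are our own, not the book's.

    public class LCS {
        // Build the dynamic programming table: L[i][j] holds the length of a
        // longest common subsequence of the first i characters of X and the
        // first j characters of Y. Row 0 and column 0 stay 0 (boundary cases).
        public static int[][] lcsTable(String X, String Y) {
            int n = X.length(), m = Y.length();
            int[][] L = new int[n + 1][m + 1];
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= m; j++)
                    if (X.charAt(i - 1) == Y.charAt(j - 1))
                        L[i][j] = L[i - 1][j - 1] + 1;  // match: extend the diagonal
                    else                                // no match: drop a character
                        L[i][j] = Math.max(L[i - 1][j], L[i][j - 1]);
            return L;
        }

        public static void main(String[] args) {
            int[][] L = lcsTable("CGATAATTGAGA", "GTTCCTAATA");
            System.out.println(L[12][10]); // length of a longest common subsequence
        }
    }

The two nested loops mirror the analysis above: n iterations of the outer loop, m of the inner, and O(1) work per cell, for O(nm) time in total.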
Proposition: Given a string X of n characters and a string Y of m characters, we can find the longest common subsequence of X and Y in O(nm) time.

Justification: Algorithm LCS computes L[n-1, m-1], the length of a longest common subsequence, in O(nm) time. Given the table of L[i, j] values, constructing a longest common subsequence is straightforward. One method is to start from L[n-1, m-1] and work back through the table, reconstructing a longest common subsequence from back to front. At any position L[i, j], we can determine whether x_i = y_j. If this is true, then we can take x_i as the next character of the subsequence (noting that x_i is before the previous character we found, if any), moving next to L[i-1, j-1]. If x_i ≠ y_j, then we can move to the larger of L[i, j-1] and L[i-1, j] (see the figure below). We stop when we reach a boundary cell (with i = -1 or j = -1). This method constructs a longest common subsequence in O(n + m) additional time.

Figure: Illustration of the algorithm for constructing a longest common subsequence from the array L.
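Continuing the hypothetical LCS class sketched earlier, the backtracking step described in the justification might look as follows in Java (again with indices shifted by one, so the boundary cells are row 0 and column 0):

    // Walk back from L[n][m] to the boundary, recovering one longest common
    // subsequence from back to front in O(n + m) additional time.
    public static String reconstructLCS(String X, String Y, int[][] L) {
        StringBuilder sb = new StringBuilder();
        int i = X.length(), j = Y.length();
        while (i > 0 && j > 0) {
            if (X.charAt(i - 1) == Y.charAt(j - 1)) {
                sb.append(X.charAt(i - 1)); // this character is in the LCS
                i--;
                j--;
            } else if (L[i - 1][j] >= L[i][j - 1]) {
                i--; // move toward the larger (or tied) subproblem value
            } else {
                j--;
            }
        }
        return sb.reverse().toString(); // characters were collected back to front
    }

When L[i-1][j] and L[i][j-1] are tied, either move is valid; different tie-breaking rules may recover different (equally long) common subsequences.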
Exercises

For source code and help with exercises, please visit java.datastructures.net.

R- List the prefixes of the string P = "aaabbaaa" that are also suffixes of P.

R- Draw a figure illustrating the comparisons done by brute-force pattern matching for the text "aaabaadaabaaa" and pattern "aabaaa".

R- Repeat the previous problem for the BM pattern matching algorithm, not counting the comparisons made to compute the last(c) function.

R- Repeat the previous problem for the KMP pattern matching algorithm, not counting the comparisons made to compute the failure function.

R- Compute a table representing the last function used in the BM pattern matching algorithm for the pattern string "the quick brown fox jumped over a lazy cat", assuming the following alphabet (which starts with the space character): {' ', a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z}.

R- Assuming that the characters in alphabet Σ can be enumerated and can be used to index arrays, give an O(m + |Σ|)-time method for constructing the last function from an m-length pattern string.

R- Compute a table representing the KMP failure function for the pattern string "cgtacgttcgtac".

R- Draw a standard trie for the following set of strings: {abab, babaccccc, bbaaaacaa, bbaacc, cbcc, cbca}.

R- Draw a compressed trie for the set of strings given in the previous exercise.