Sorting and Selection

Merge-Sort and Recurrence Equations

There is another way to justify that the running time of the merge-sort algorithm is O(n log n); namely, we can deal more directly with the recursive nature of the merge-sort algorithm. In this section, we present such an analysis of the running time of merge-sort, and in so doing, introduce the mathematical concept of a recurrence equation (also known as a recurrence relation).

Let the function t(n) denote the worst-case running time of merge-sort on an input sequence of size n. Since merge-sort is recursive, we can characterize the function t(n) by means of an equation where the function t(n) is recursively expressed in terms of itself. In order to simplify our characterization of t(n), let us restrict our attention to the case when n is a power of 2. (We leave the problem of showing that our asymptotic characterization still holds in the general case as an exercise.) In this case, we can specify the definition of t(n) as

    t(n) = b                  if n <= 1
    t(n) = 2t(n/2) + cn       otherwise.

An expression such as the one above is called a recurrence equation, since the function appears on both the left- and right-hand sides of the equal sign. Although such a characterization is correct and accurate, what we really desire is a big-Oh type of characterization of t(n) that does not involve the function t(n) itself. That is, we want a closed-form characterization of t(n).

We can obtain a closed-form solution by applying the definition of a recurrence equation, assuming n is relatively large. For example, after one more application of the equation above, we can write a new recurrence for t(n) as

    t(n) = 2(2t(n/2^2) + c(n/2)) + cn
         = 2^2 t(n/2^2) + 2(cn/2) + cn
         = 2^2 t(n/2^2) + 2cn.

If we apply the equation again, we get t(n) = 2^3 t(n/2^3) + 3cn. At this point, we should see a pattern emerging, so that after applying this equation i times, we get

    t(n) = 2^i t(n/2^i) + icn.

The issue that remains, then, is to determine when to stop this process. To see when to stop, recall that we switch to the closed form t(n) = b when n <= 1, which will occur when 2^i = n. In other words, this will occur when i = log n. Making this substitution, then, yields

    t(n) = 2^(log n) t(n/2^(log n)) + (log n)cn
         = n t(1) + cn log n
         = nb + cn log n.

That is, we get an alternative justification of the fact that t(n) is O(n log n).
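Since the closed form follows from unrolling the recurrence, a quick numeric check can build confidence in the algebra. The following sketch (our own illustration, assuming b = c = 1 for simplicity) evaluates the recurrence directly for powers of two and compares it with nb + cn log n:

  import math

  def T(n, b=1, c=1):
    """Evaluate the merge-sort recurrence directly (n must be a power of 2)."""
    if n <= 1:
      return b
    return 2 * T(n // 2, b, c) + c * n

  for n in [2, 8, 64, 1024]:
    closed_form = n * 1 + 1 * n * math.log2(n)   # nb + cn log n, with b = c = 1
    print(n, T(n), closed_form)                  # the two values agree exactly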
Alternative Implementations of Merge-Sort

Sorting Linked Lists

The merge-sort algorithm can easily be adapted to use any form of a basic queue as its container type. Below, we provide such an implementation, based on use of the LinkedQueue class from an earlier chapter. The O(n log n) bound for merge-sort applies to this implementation as well, since each basic operation runs in O(1) time when implemented with a linked list. We show an example execution of this version of the merge algorithm in the accompanying figure.

  def merge(S1, S2, S):
    """Merge two sorted queue instances S1 and S2 into empty queue S."""
    while not S1.is_empty() and not S2.is_empty():
      if S1.first() < S2.first():
        S.enqueue(S1.dequeue())
      else:
        S.enqueue(S2.dequeue())
    while not S1.is_empty():             # move remaining elements of S1 to S
      S.enqueue(S1.dequeue())
    while not S2.is_empty():             # move remaining elements of S2 to S
      S.enqueue(S2.dequeue())

  def merge_sort(S):
    """Sort the elements of queue S using the merge-sort algorithm."""
    n = len(S)
    if n < 2:
      return                             # list is already sorted
    # divide
    S1 = LinkedQueue()                   # or any other queue implementation
    S2 = LinkedQueue()
    while len(S1) < n // 2:              # move the first n//2 elements to S1
      S1.enqueue(S.dequeue())
    while not S.is_empty():              # move the rest to S2
      S2.enqueue(S.dequeue())
    # conquer (with recursion)
    merge_sort(S1)                       # sort first half
    merge_sort(S2)                       # sort second half
    # merge results
    merge(S1, S2, S)                     # merge sorted halves back into S

Code Fragment: An implementation of merge-sort using a basic queue.
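As a brief usage sketch (our own example, assuming the book's LinkedQueue class, with enqueue, dequeue, first, is_empty, and __len__ as used above, is available):

  Q = LinkedQueue()
  for x in (85, 24, 63, 45, 17, 31, 96, 50):
    Q.enqueue(x)
  merge_sort(Q)
  while not Q.is_empty():
    print(Q.dequeue(), end=' ')          # prints 17 24 31 45 50 63 85 96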
Figure: Example of an execution of the merge algorithm, as implemented in the code fragment above using queues; panels (a) through (i) show successive steps of the merge.
Bottom-Up (Nonrecursive) Merge-Sort

There is a nonrecursive version of array-based merge-sort, which runs in O(n log n) time. It is a bit faster than recursive merge-sort in practice, as it avoids the extra overheads of recursive calls and temporary memory at each level. The main idea is to perform merge-sort bottom-up, performing the merges level by level going up the merge-sort tree. Given an input array of elements, we begin by merging every successive pair of elements into sorted runs of length two. We merge these runs into runs of length four, merge these new runs into runs of length eight, and so on, until the array is sorted. To keep the space usage reasonable, we deploy a second array that stores the merged runs (swapping input and output arrays after each iteration). We give a Python implementation below. A similar bottom-up approach can be used for sorting linked lists (see the exercises).

  import math

  def merge(src, result, start, inc):
    """Merge src[start:start+inc] and src[start+inc:start+2*inc] into result."""
    end1 = start + inc                       # boundary for run 1
    end2 = min(start + 2 * inc, len(src))    # boundary for run 2
    x, y, z = start, start + inc, start      # index into run 1, run 2, result
    while x < end1 and y < end2:
      if src[x] < src[y]:
        result[z] = src[x]; x += 1           # copy from run 1 and increment x
      else:
        result[z] = src[y]; y += 1           # copy from run 2 and increment y
      z += 1                                 # increment z to reflect new result
    if x < end1:
      result[z:end2] = src[x:end1]           # copy remainder of run 1 to output
    elif y < end2:
      result[z:end2] = src[y:end2]           # copy remainder of run 2 to output

  def merge_sort(S):
    """Sort the elements of Python list S using the merge-sort algorithm."""
    n = len(S)
    logn = math.ceil(math.log(n, 2))
    src, dest = S, [None] * n                # make temporary storage for dest
    for i in (2**k for k in range(logn)):    # pass i creates all runs of length 2i
      for j in range(0, n, 2 * i):           # each pass merges two length-i runs
        merge(src, dest, j, i)
      src, dest = dest, src                  # reverse roles of lists
    if S is not src:
      S[0:n] = src[0:n]                      # additional copy to get results to S

Code Fragment: An implementation of the nonrecursive merge-sort algorithm.
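As a quick usage check (with example values of our own choosing), the function sorts a Python list in place:

  data = [85, 24, 63, 45, 17, 31, 96, 50]
  merge_sort(data)                 # bottom-up passes create runs of 2, 4, 8
  print(data)                      # [17, 24, 31, 45, 50, 63, 85, 96]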
Quick-Sort

The next sorting algorithm we discuss is called quick-sort. Like merge-sort, this algorithm is also based on the divide-and-conquer paradigm, but it uses this technique in a somewhat opposite manner, as all the hard work is done before the recursive calls.

High-Level Description of Quick-Sort

The quick-sort algorithm sorts a sequence S using a simple recursive approach. The main idea is to apply the divide-and-conquer technique, whereby we divide S into subsequences, recur to sort each subsequence, and then combine the sorted subsequences by a simple concatenation. In particular, the quick-sort algorithm consists of the following three steps (see the figure below):

1. Divide: If S has at least two elements (nothing needs to be done if S has zero or one element), select a specific element x from S, which is called the pivot. As is common practice, choose the pivot x to be the last element in S. Remove all the elements from S and put them into three sequences:
   - L, storing the elements in S less than x,
   - E, storing the elements in S equal to x,
   - G, storing the elements in S greater than x.
   Of course, if the elements of S are distinct, then E holds just one element (the pivot itself).
2. Conquer: Recursively sort sequences L and G.
3. Combine: Put back the elements into S in order by first inserting the elements of L, then those of E, and finally those of G.

Figure: Visual schematic of the quick-sort algorithm: split S using pivot x, recur on L and G, then concatenate L, E, and G.
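The three-step description translates almost literally into Python for a list-based sequence. The following sketch (a hypothetical helper of our own, not the book's queue-based implementation shown later) returns a new sorted list:

  def quick_sort_basic(S):
    """Return a new list with the elements of S in sorted order (sketch)."""
    if len(S) < 2:
      return list(S)                   # nothing to do for 0 or 1 element
    x = S[-1]                          # divide: last element is the pivot
    L = [e for e in S if e < x]        # elements less than the pivot
    E = [e for e in S if e == x]       # elements equal to the pivot
    G = [e for e in S if e > x]        # elements greater than the pivot
    # conquer on L and G, then combine by concatenation
    return quick_sort_basic(L) + E + quick_sort_basic(G)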
Like merge-sort, the execution of quick-sort can be visualized by means of a binary recursion tree, called the quick-sort tree. The figure below summarizes an execution of the quick-sort algorithm by showing the input and output sequences processed at each node of the quick-sort tree. The step-by-step evolution of the quick-sort tree is shown in the figures that follow.

Unlike merge-sort, however, the height of the quick-sort tree associated with an execution of quick-sort is linear in the worst case. This happens, for example, if the sequence consists of n distinct elements and is already sorted. Indeed, in this case, the standard choice of the last element as pivot yields a subsequence L of size n - 1, while subsequence E has size 1 and subsequence G has size 0. At each invocation of quick-sort on subsequence L, the size decreases by 1. Hence, the height of the quick-sort tree is n - 1.

Figure: Quick-sort tree T for an execution of the quick-sort algorithm on a sequence with eight elements: (a) input sequences processed at each node of T; (b) output sequences generated at each node of T. The pivot used at each level of the recursion is shown in bold.
Figure: Visualization of quick-sort. Each node of the tree represents a recursive call. The nodes drawn with dashed lines represent calls that have not been made yet. The node drawn with thick lines represents the running invocation. The empty nodes drawn with thin lines represent terminated calls. The remaining nodes represent suspended calls (that is, active invocations that are waiting for a child invocation to return). Note the divide steps performed in several of the panels. (Continues in the next figure.)
Figure: Visualization of an execution of quick-sort. Note the concatenation step performed in one of the panels. (Continues in the next figure.)
Figure: Visualization of an execution of quick-sort. Several invocations between the panels of this figure and the previous one have been omitted. Note the concatenation steps performed in two of the panels. (Continued from the previous figure.)
Performing Quick-Sort on General Sequences

In the code fragment below, we give an implementation of the quick-sort algorithm that works on any sequence type that operates as a queue. This particular version relies on the LinkedQueue class from an earlier chapter; we provide a more streamlined implementation of quick-sort using an array-based sequence later in this section. Our implementation chooses the first item of the queue as the pivot (since it is easily accessible), and then it divides sequence S into queues L, E, and G of elements that are respectively less than, equal to, and greater than the pivot. We then recur on the L and G lists, and transfer elements from the sorted lists L, E, and G back to S. All of the queue operations run in O(1) worst-case time when implemented with a linked list.

  def quick_sort(S):
    """Sort the elements of queue S using the quick-sort algorithm."""
    n = len(S)
    if n < 2:
      return                            # list is already sorted
    # divide
    p = S.first()                       # using first as arbitrary pivot
    L = LinkedQueue()
    E = LinkedQueue()
    G = LinkedQueue()
    while not S.is_empty():             # divide S into L, E, and G
      if S.first() < p:
        L.enqueue(S.dequeue())
      elif p < S.first():
        G.enqueue(S.dequeue())
      else:                             # S.first() must equal pivot
        E.enqueue(S.dequeue())
    # conquer (with recursion)
    quick_sort(L)                       # sort elements less than p
    quick_sort(G)                       # sort elements greater than p
    # concatenate results
    while not L.is_empty():
      S.enqueue(L.dequeue())
    while not E.is_empty():
      S.enqueue(E.dequeue())
    while not G.is_empty():
      S.enqueue(G.dequeue())

Code Fragment: Quick-sort for a sequence S implemented as a queue.
Running Time of Quick-Sort

We can analyze the running time of quick-sort with the same technique used for merge-sort earlier. Namely, we can identify the time spent at each node of the quick-sort tree T and sum up the running times for all the nodes.

Examining the code, we see that the divide step and the final concatenation of quick-sort can be implemented in linear time. Thus, the time spent at a node v of T is proportional to the input size s(v) of v, defined as the size of the sequence handled by the invocation of quick-sort associated with node v. Since subsequence E has at least one element (the pivot), the sum of the input sizes of the children of v is at most s(v) - 1.

Let s_i denote the sum of the input sizes of the nodes at depth i for a particular quick-sort tree T. Clearly, s_0 = n, since the root r of T is associated with the entire sequence. Also, s_1 <= n - 1, since the pivot is not propagated to the children of r. More generally, it must be that s_i < s_(i-1), since the elements of the subsequences at depth i all come from distinct subsequences at depth i - 1, and at least one element from depth i - 1 does not propagate to depth i because it is in a set E. (In fact, one element from each node at depth i - 1 does not propagate to depth i.)

We can therefore bound the overall running time of an execution of quick-sort as O(n * h), where h is the overall height of the quick-sort tree T for that execution. Unfortunately, in the worst case, the height of a quick-sort tree is Θ(n), as observed earlier. Thus, quick-sort runs in O(n^2) worst-case time. Paradoxically, if we choose the pivot as the last element of the sequence, this worst-case behavior occurs for problem instances when sorting should be easy: when the sequence is already sorted.

Given its name, we would expect quick-sort to run quickly, and it often does in practice. The best case for quick-sort on a sequence of distinct elements occurs when subsequences L and G have roughly the same size. In that case, as we saw with merge-sort, the tree has height O(log n) and therefore quick-sort runs in O(n log n) time; we leave the justification of this fact as an exercise. More so, we can observe an O(n log n) running time even if the split between L and G is not as perfect. For example, if every divide step caused one subsequence to have one-fourth of the elements and the other to have three-fourths of the elements, the height of the tree would remain O(log n), and thus the overall performance would be O(n log n).

We will see in the next section that introducing randomization in the choice of a pivot makes quick-sort essentially behave in this way on average, with an expected running time that is O(n log n).
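To see the contrast concretely, a small instrumented experiment (a hypothetical harness of our own, not from the book) can count one pass over the elements per divide step, as a proxy for comparisons, for already-sorted versus shuffled input; the sorted input exhibits the quadratic pattern:

  import random

  def count_quick_sort(S):
    """Return (sorted_list, divide_cost) for basic last-element-pivot quick-sort."""
    if len(S) < 2:
      return list(S), 0
    x = S[-1]                                   # last element as pivot
    L = [e for e in S if e < x]
    E = [e for e in S if e == x]
    G = [e for e in S if e > x]
    sl, cl = count_quick_sort(L)
    sg, cg = count_quick_sort(G)
    return sl + E + sg, cl + cg + len(S)        # charge n per divide step

  n = 500
  print(count_quick_sort(list(range(n)))[1])    # about n*n/2: the worst case
  data = list(range(n)); random.shuffle(data)
  print(count_quick_sort(data)[1])              # grows like n log n on average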
Randomized Quick-Sort

One common method for analyzing quick-sort is to assume that the pivot will always divide the sequence in a reasonably balanced manner. We feel such an assumption would presuppose knowledge about the input distribution that is typically not available, however. For example, we would have to assume that we will rarely be given "almost" sorted sequences to sort, which are actually common in many applications. Fortunately, this assumption is not needed in order for us to match our intuition to quick-sort's behavior.

In general, we desire some way of getting close to the best-case running time for quick-sort. The way to get close to the best-case running time, of course, is for the pivot to divide the input sequence S almost equally. If this outcome were to occur, then it would result in a running time that is asymptotically the same as the best-case running time. That is, having pivots close to the "middle" of the set of elements leads to an O(n log n) running time for quick-sort.

Picking Pivots at Random

Since the goal of the partition step of the quick-sort method is to divide the sequence S with sufficient balance, let us introduce randomization into the algorithm and pick as the pivot a random element of the input sequence. That is, instead of picking the pivot as the first or last element of S, we pick an element of S at random as the pivot, keeping the rest of the algorithm unchanged. This variation of quick-sort is called randomized quick-sort. The following proposition shows that the expected running time of randomized quick-sort on a sequence with n elements is O(n log n). This expectation is taken over all the possible random choices the algorithm makes, and is independent of any assumptions about the distribution of the possible input sequences the algorithm is likely to be given.

Proposition: The expected running time of randomized quick-sort on a sequence S of size n is O(n log n).

Justification: We assume two elements of S can be compared in O(1) time. Consider a single recursive call of randomized quick-sort, and let n denote the size of the input for this call. Say that this call is "good" if the pivot chosen is such that subsequences L and G have size at least n/4 and at most 3n/4 each; otherwise, a call is "bad."

Now, consider the implications of our choosing a pivot uniformly at random. Note that there are n/2 possible good choices for the pivot for any given call of size n of the randomized quick-sort algorithm. Thus, the probability that any call is good is 1/2. Note further that a good call will at least partition a list of size n into two lists of size n/4 and 3n/4, and a bad call could be as bad as producing a single call of size n - 1.
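In code, the change from deterministic to randomized quick-sort can be a single line. The sketch below (our own adaptation of the basic list-based version from earlier) differs only in pivot selection:

  import random

  def randomized_quick_sort(S):
    """Return a new sorted list, choosing the pivot uniformly at random."""
    if len(S) < 2:
      return list(S)
    x = random.choice(S)               # the only change: a random pivot
    L = [e for e in S if e < x]
    E = [e for e in S if e == x]
    G = [e for e in S if e > x]
    return randomized_quick_sort(L) + E + randomized_quick_sort(G)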
Now consider a recursion trace for randomized quick-sort. This trace defines a binary tree T such that each node in T corresponds to a different recursive call on a subproblem of sorting a portion of the original list. Say that a node v in T is in size group i if the size of v's subproblem is greater than n(3/4)^(i+1) and at most n(3/4)^i. Let us analyze the expected time spent working on all the subproblems for nodes in size group i. By the linearity of expectation, the expected time for working on all these subproblems is the sum of the expected times for each one. Some of these nodes correspond to good calls and some correspond to bad calls. But note that, since a good call occurs with probability 1/2, the expected number of consecutive calls we have to make before getting a good call is 2. Moreover, notice that as soon as we have a good call for a node in size group i, its children will be in size groups higher than i. Thus, for any element x from the input list, the expected number of nodes in size group i containing x in their subproblems is 2. In other words, the expected total size of all the subproblems in size group i is 2n. Since the nonrecursive work we perform for any subproblem is proportional to its size, this implies that the total expected time spent processing subproblems for nodes in size group i is O(n).

The number of size groups is log_(4/3) n, since repeatedly multiplying by 3/4 is the same as repeatedly dividing by 4/3. That is, the number of size groups is O(log n). Therefore, the total expected running time of randomized quick-sort is O(n log n) (see the figure below). In fact, we can show that the running time of randomized quick-sort is O(n log n) with high probability (see the exercises).

Figure: Visual time analysis of the quick-sort tree T. Each node is shown labeled with the size of its subproblem; there are O(log n) size groups, with O(n) expected time per size group, for a total expected time of O(n log n).
Additional Optimizations for Quick-Sort

An algorithm is in-place if it uses only a small amount of memory in addition to that needed for the original input. Our implementation of heap-sort, from an earlier section, is an example of such an in-place sorting algorithm. Our implementation of quick-sort given earlier does not qualify as in-place because we use additional containers L, E, and G when dividing a sequence S within each recursive call. Quick-sort of an array-based sequence can be adapted to be in-place, and such an optimization is used in most deployed implementations.

Performing the quick-sort algorithm in-place requires a bit of ingenuity, however, for we must use the input sequence itself to store the subsequences for all the recursive calls. We show the algorithm inplace_quick_sort, which performs in-place quick-sort, below. Our implementation assumes that the input sequence, S, is given as a Python list of elements. In-place quick-sort modifies the input sequence using element swapping and does not explicitly create subsequences. Instead, a subsequence of the input sequence is implicitly represented by a range of positions specified by a leftmost index a and a rightmost index b.

  def inplace_quick_sort(S, a, b):
    """Sort the list from S[a] to S[b] inclusive using the quick-sort algorithm."""
    if a >= b:
      return                                      # range is trivially sorted
    pivot = S[b]                                  # last element of range is pivot
    left = a                                      # will scan rightward
    right = b - 1                                 # will scan leftward
    while left <= right:
      # scan until reaching value equal or larger than pivot (or right marker)
      while left <= right and S[left] < pivot:
        left += 1
      # scan until reaching value equal or smaller than pivot (or left marker)
      while left <= right and pivot < S[right]:
        right -= 1
      if left <= right:                           # scans did not strictly cross
        S[left], S[right] = S[right], S[left]     # swap values
        left, right = left + 1, right - 1         # shrink range
    # put pivot into its final place (currently marked by left index)
    S[left], S[b] = S[b], S[left]
    # make recursive calls
    inplace_quick_sort(S, a, left - 1)
    inplace_quick_sort(S, left + 1, b)

Code Fragment: In-place quick-sort for a Python list S.
The divide step is performed by scanning the array simultaneously using local variables left, which advances forward, and right, which advances backward, swapping pairs of elements that are in reverse order, as shown in the figure below. When these two indices pass each other, the division step is complete and the algorithm completes by recurring on these two sublists. There is no explicit "combine" step, because the concatenation of the two sublists is implicit to the in-place use of the original list.

It is worth noting that if a sequence has duplicate values, we are not explicitly creating three sublists L, E, and G, as in our original quick-sort description. We instead allow elements equal to the pivot (other than the pivot itself) to be dispersed across the two sublists. An exercise explores the subtlety of our implementation in the presence of duplicate keys, and another exercise describes an in-place algorithm that strictly partitions into three sublists L, E, and G.

Figure: Divide step of in-place quick-sort, using index l as shorthand for identifier left, and index r as shorthand for identifier right. Index l scans the sequence from left to right, and index r scans the sequence from right to left. A swap is performed when l is at an element as large as the pivot and r is at an element as small as the pivot. A final swap with the pivot, in the last panel, completes the divide step.
Although the implementation we describe in this section for dividing the sequence into two pieces is in-place, we note that the complete quick-sort algorithm needs space for a stack proportional to the depth of the recursion tree, which in this case can be as large as n - 1. Admittedly, the expected stack depth is O(log n), which is small compared to n. Nevertheless, a simple trick lets us guarantee the stack size is O(log n). The main idea is to design a nonrecursive version of in-place quick-sort using an explicit stack to iteratively process subproblems (each of which can be represented with a pair of indices marking subarray boundaries). Each iteration involves popping the top subproblem, splitting it in two (if it is big enough), and pushing the two new subproblems. The trick is that when pushing the new subproblems, we should first push the larger subproblem and then the smaller one. In this way, the sizes of the subproblems will at least double as we go down the stack; hence, the stack can have depth at most O(log n). We leave the details of this implementation as an exercise.

Pivot Selection

Our implementation in this section blindly picks the last element as the pivot at each level of the quick-sort recursion. This leaves it susceptible to the Θ(n^2)-time worst case, most notably when the original sequence is already sorted, reverse sorted, or nearly sorted. As described earlier, this can be improved upon by using a randomly chosen pivot for each partition step. In practice, another common technique for choosing a pivot is to use the median of three values, taken respectively from the front, middle, and tail of the array. This median-of-three heuristic will more often choose a good pivot, and computing a median of three may require lower overhead than selecting a pivot with a random number generator. For larger data sets, the median of more than three potential pivots might be computed.

Hybrid Approaches

Although quick-sort has very good performance on large data sets, it has rather high overhead on relatively small data sets. For example, the process of quick-sorting a sequence of eight elements, as illustrated in the earlier figures, involves considerable bookkeeping. In practice, a simple algorithm like insertion-sort will execute faster when sorting such a short sequence. It is therefore common, in optimized sorting implementations, to use a hybrid approach, with a divide-and-conquer algorithm used until the size of a subsequence falls below some threshold (perhaps around 50 elements); insertion-sort can be directly invoked upon portions with length below the threshold, as in the sketch that follows. We will further discuss such practical considerations later, when comparing the performance of various sorting algorithms.
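As an illustration of the hybrid idea, here is a minimal sketch of our own (the threshold value and function name are assumptions, not the book's): it applies the same partition scheme as inplace_quick_sort for large ranges and falls back to insertion-sort for short ones.

  def hybrid_quick_sort(S, a=0, b=None, threshold=50):
    """Sort S[a:b+1] in place; use insertion-sort below the threshold (sketch)."""
    if b is None:
      b = len(S) - 1
    if b - a + 1 <= threshold:
      for j in range(a + 1, b + 1):            # insertion-sort of S[a:b+1]
        cur = S[j]
        k = j
        while k > a and S[k - 1] > cur:
          S[k] = S[k - 1]
          k -= 1
        S[k] = cur
    else:
      pivot = S[b]                             # same partition as inplace_quick_sort
      left, right = a, b - 1
      while left <= right:
        while left <= right and S[left] < pivot:
          left += 1
        while left <= right and pivot < S[right]:
          right -= 1
        if left <= right:
          S[left], S[right] = S[right], S[left]
          left, right = left + 1, right - 1
      S[left], S[b] = S[b], S[left]
      hybrid_quick_sort(S, a, left - 1, threshold)
      hybrid_quick_sort(S, left + 1, b, threshold)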
Studying Sorting through an Algorithmic Lens

Recapping our discussions on sorting to this point, we have described several methods with either a worst-case or expected running time of O(n log n) on an input sequence of size n. These methods include merge-sort and quick-sort, described in this chapter, as well as heap-sort, described earlier. In this section, we study sorting as an algorithmic problem, addressing general issues about sorting algorithms.

A Lower Bound for Sorting

A natural first question to ask is whether we can sort any faster than O(n log n) time. Interestingly, if the computational primitive used by a sorting algorithm is the comparison of two elements, this is in fact the best we can do: comparison-based sorting has an Ω(n log n) worst-case lower bound on its running time. (Recall the Ω notation from an earlier section.) To focus on the main cost of comparison-based sorting, let us only count comparisons, for the sake of a lower bound.

Suppose we are given a sequence S = (x_0, x_1, ..., x_(n-1)) that we wish to sort, and assume that all the elements of S are distinct (this is not really a restriction since we are deriving a lower bound). We do not care if S is implemented as an array or a linked list, for the sake of our lower bound, since we are only counting comparisons. Each time a sorting algorithm compares two elements x_i and x_j (that is, it asks, "is x_i < x_j?"), there are two outcomes: "yes" or "no." Based on the result of this comparison, the sorting algorithm may perform some internal calculations (which we are not counting here) and will eventually perform another comparison between two other elements of S, which again will have two outcomes. Therefore, we can represent a comparison-based sorting algorithm with a decision tree T. That is, each internal node v in T corresponds to a comparison, and the edges from node v to its children correspond to the computations resulting from either a "yes" or "no" answer.

It is important to note that the hypothetical sorting algorithm in question probably has no explicit knowledge of the tree T. The tree T simply represents all the possible sequences of comparisons that a sorting algorithm might make, starting from the first comparison (associated with the root) and ending with the last comparison (associated with the parent of an external node).

Each possible initial order, or permutation, of the elements in S will cause our hypothetical sorting algorithm to execute a series of comparisons, traversing a path in T from the root to some external node. Let us associate with each external node v in T, then, the set of permutations of S that cause our sorting algorithm to end up in v. The most important observation in our lower-bound argument is that each external node v in T can represent the sequence of comparisons for at most one permutation of S. The justification for this claim is simple: if two different
permutations P_1 and P_2 of S are associated with the same external node v, then there are at least two objects x_i and x_j such that x_i is before x_j in P_1 but x_i is after x_j in P_2. At the same time, the output associated with v must be a specific reordering of S, with either x_i or x_j appearing before the other. But if P_1 and P_2 both cause the sorting algorithm to output the elements of S in this order, then that implies there is a way to trick the algorithm into outputting x_i and x_j in the wrong order. Since this cannot be allowed by a correct sorting algorithm, each external node of T must be associated with exactly one permutation of S. We use this property of the decision tree associated with a sorting algorithm to prove the following result:

Proposition: The running time of any comparison-based algorithm for sorting an n-element sequence is Ω(n log n) in the worst case.

Justification: The running time of a comparison-based sorting algorithm must be greater than or equal to the height of the decision tree T associated with this algorithm, as described above (see the figure below). By the argument above, each external node in T must be associated with one permutation of S. Moreover, each permutation of S must result in a different external node of T. The number of permutations of n objects is n! = n(n-1)(n-2) ... 2 * 1. Thus, T must have at least n! external nodes. By an earlier proposition, the height of T is at least log(n!). This immediately justifies the proposition, because there are at least n/2 terms that are greater than or equal to n/2 in the product n!; hence,

    log(n!) >= log((n/2)^(n/2)) = (n/2) log(n/2),

which is Ω(n log n).

Figure: Visualizing the lower bound for comparison-based sorting: a decision tree of minimum height at least log(n!) implies an Ω(n log n) worst-case running time.
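The inequality at the heart of the proof can be sanity-checked numerically. The snippet below (a quick illustration of our own, not part of the book) compares log(n!), the true height lower bound, with the (n/2) log(n/2) estimate used above:

  import math

  for n in [8, 64, 1024]:
    true_bound = math.log2(math.factorial(n))    # log(n!)
    estimate = (n / 2) * math.log2(n / 2)        # (n/2) log(n/2)
    print(n, round(true_bound, 1), round(estimate, 1))
    assert true_bound >= estimate                # the proof's inequality holds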
Linear-Time Sorting: Bucket-Sort and Radix-Sort

In the previous section, we showed that Ω(n log n) time is necessary, in the worst case, to sort an n-element sequence with a comparison-based sorting algorithm. A natural question to ask, then, is whether there are other kinds of sorting algorithms that can be designed to run asymptotically faster than O(n log n) time. Interestingly, such algorithms exist, but they require special assumptions about the input sequence to be sorted. Even so, such scenarios often arise in practice, such as when sorting integers from a known range or sorting character strings, so discussing them is worthwhile. In this section, we consider the problem of sorting a sequence of entries, each a key-value pair, where the keys have a restricted type.

Bucket-Sort

Consider a sequence S of n entries whose keys are integers in the range [0, N-1], for some integer N >= 2, and suppose that S should be sorted according to the keys of the entries. In this case, it is possible to sort S in O(n + N) time. It might seem surprising, but this implies, for example, that if N is O(n), then we can sort S in O(n) time. Of course, the crucial point is that, because of the restrictive assumption about the format of the elements, we can avoid using comparisons.

The main idea is to use an algorithm called bucket-sort, which is not based on comparisons, but on using keys as indices into a bucket array B that has cells indexed from 0 to N-1. An entry with key k is placed in the "bucket" B[k], which itself is a sequence (of entries with key k). After inserting each entry of the input sequence S into its bucket, we can put the entries back into S in sorted order by enumerating the contents of the buckets B[0], B[1], ..., B[N-1] in order. We describe the bucket-sort algorithm in the following pseudo-code.

  Algorithm bucketSort(S):
    Input: Sequence S of entries with integer keys in the range [0, N-1]
    Output: Sequence S sorted in nondecreasing order of the keys
    let B be an array of N sequences, each of which is initially empty
    for each entry e in S do
      k = the key of e
      remove e from S and insert it at the end of bucket (sequence) B[k]
    for i = 0 to N-1 do
      for each entry e in sequence B[i] do
        remove e from B[i] and insert it at the end of S

Code Fragment: Bucket-sort.
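For concreteness, here is one way the pseudo-code might look in Python (a sketch of our own: we represent entries as (key, value) tuples and buckets as lists used in FIFO order, which also preserves stability):

  def bucket_sort(S, N):
    """Sort list S of (key, value) pairs, with integer keys in [0, N-1]."""
    B = [[] for _ in range(N)]         # one initially empty bucket per key
    for entry in S:                    # place each entry at the end of its bucket
      k = entry[0]
      B[k].append(entry)
    S.clear()                          # transfer buckets back, in key order
    for bucket in B:
      S.extend(bucket)

  data = [(3, 'c'), (0, 'a'), (3, 'b'), (1, 'd')]
  bucket_sort(data, 4)
  print(data)                          # [(0, 'a'), (1, 'd'), (3, 'c'), (3, 'b')]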
It is easy to see that bucket-sort runs in O(n + N) time and uses O(n + N) space. Hence, bucket-sort is efficient when the range N of values for the keys is small compared to the sequence size n, say N = O(n) or N = O(n log n). Still, its performance deteriorates as N grows compared to n.

An important property of the bucket-sort algorithm is that it works correctly even if there are many different elements with the same key. Indeed, we described it in a way that anticipates such occurrences.

Stable Sorting

When sorting key-value pairs, an important issue is how equal keys are handled. Let S = ((k_0, v_0), ..., (k_(n-1), v_(n-1))) be a sequence of such entries. We say that a sorting algorithm is stable if, for any two entries (k_i, v_i) and (k_j, v_j) of S such that k_i = k_j and (k_i, v_i) precedes (k_j, v_j) in S before sorting (that is, i < j), entry (k_i, v_i) also precedes entry (k_j, v_j) after sorting. Stability is important for a sorting algorithm because applications may want to preserve the initial order of elements with the same key.

Our informal description of bucket-sort guarantees stability as long as we ensure that all sequences act as queues, with elements processed and removed from the front of a sequence and inserted at the back. That is, when initially placing elements of S into buckets, we should process S from front to back, and add each element to the end of its bucket. Subsequently, when transferring elements from the buckets back to S, we should process each B[i] from front to back, with those elements added to the end of S.

Radix-Sort

One of the reasons that stable sorting is so important is that it allows the bucket-sort approach to be applied to more general contexts than to sort integers. Suppose, for example, that we want to sort entries with keys that are pairs (k, l), where k and l are integers in the range [0, N-1], for some integer N >= 2. In a context such as this, it is common to define an order on these keys using the lexicographic (dictionary) convention, where (k_1, l_1) < (k_2, l_2) if k_1 < k_2, or if k_1 = k_2 and l_1 < l_2. This is a pairwise version of the lexicographic comparison function, which can be applied to equal-length character strings, or to tuples of length d.

The radix-sort algorithm sorts a sequence S of entries with keys that are pairs, by applying a stable bucket-sort on the sequence twice: first using one component of the pair as the key when ordering, and then using the second component. But which order is correct? Should we first sort on the k's (the first component) and then on the l's (the second component), or should it be the other way around?
To gain intuition before answering this question, we consider the following example.

Example: Consider the following sequence S (we show only the keys):

    S = ((3,3), (1,5), (2,5), (1,2), (2,3), (1,7), (3,2), (2,2)).

If we sort S stably on the first component, then we get the sequence

    S_1 = ((1,5), (1,2), (1,7), (2,5), (2,3), (2,2), (3,3), (3,2)).

If we then stably sort this sequence S_1 using the second component, we get the sequence

    S_1,2 = ((1,2), (2,2), (3,2), (2,3), (3,3), (1,5), (2,5), (1,7)),

which is unfortunately not a sorted sequence. On the other hand, if we first stably sort S using the second component, then we get the sequence

    S_2 = ((1,2), (3,2), (2,2), (3,3), (2,3), (1,5), (2,5), (1,7)).

If we then stably sort sequence S_2 using the first component, we get the sequence

    S_2,1 = ((1,2), (1,5), (1,7), (2,2), (2,3), (2,5), (3,2), (3,3)),

which is indeed sequence S lexicographically ordered.

So, from this example, we are led to believe that we should first sort using the second component and then again using the first component. This intuition is exactly right. By first stably sorting by the second component and then again by the first component, we guarantee that if two entries are equal in the second sort (by the first component), then their relative order in the starting sequence (which is sorted by the second component) is preserved. Thus, the resulting sequence is guaranteed to be sorted lexicographically every time. We leave to a simple exercise the determination of how this approach can be extended to triples and other d-tuples of numbers. We can summarize this section as follows:

Proposition: Let S be a sequence of n key-value pairs, each of which has a key (k_1, k_2, ..., k_d), where each k_i is an integer in the range [0, N-1] for some integer N >= 2. We can sort S lexicographically in time O(d(n + N)) using radix-sort.

Radix-sort can be applied to any key that can be viewed as a composite of smaller pieces that are to be sorted lexicographically. For example, we can apply it to sort character strings of moderate length, as each individual character can be represented as an integer value. (Some care is needed to properly handle strings with varying lengths.)
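One way to realize the proposition in Python (a sketch of our own, building on the stable bucket-pass idea above rather than on code given in the book) is to run one stable bucket pass per component, from the last component to the first:

  def radix_sort(S, d, N):
    """Sort list S of d-tuples of integers in [0, N-1] lexicographically."""
    for i in reversed(range(d)):       # last component first, as argued above
      B = [[] for _ in range(N)]       # stable bucket pass on component i
      for t in S:
        B[t[i]].append(t)
      S[:] = [t for bucket in B for t in bucket]

  data = [(3,3), (1,5), (2,5), (1,2), (2,3), (1,7), (3,2), (2,2)]
  radix_sort(data, 2, 8)
  print(data)   # [(1,2), (1,5), (1,7), (2,2), (2,3), (2,5), (3,2), (3,3)]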
Comparing Sorting Algorithms

At this point, it might be useful for us to take a moment and consider all the algorithms we have studied in this book to sort an n-element sequence.

Considering Running Time and Other Factors

We have studied several methods, such as insertion-sort and selection-sort, that have O(n^2)-time behavior in the average and worst case. We have also studied several methods with O(n log n)-time behavior, including heap-sort, merge-sort, and quick-sort. Finally, the bucket-sort and radix-sort methods run in linear time for certain types of keys. Certainly, the selection-sort algorithm is a poor choice in any application, since it runs in O(n^2) time even in the best case. But, of the remaining sorting algorithms, which is the best?

As with many things in life, there is no clear "best" sorting algorithm from the remaining candidates. There are trade-offs involving efficiency, memory usage, and stability. The sorting algorithm best suited for a particular application depends on the properties of that application. In fact, the default sorting algorithm used by computing languages and systems has evolved greatly over time. We can offer some guidance and observations, therefore, based on the known properties of the "good" sorting algorithms.

Insertion-Sort

If implemented well, the running time of insertion-sort is O(n + m), where m is the number of inversions (that is, the number of pairs of elements out of order). Thus, insertion-sort is an excellent algorithm for sorting small sequences (say, less than 50 elements), because insertion-sort is simple to program, and small sequences necessarily have few inversions. Also, insertion-sort is quite effective for sorting sequences that are already "almost" sorted. By "almost," we mean that the number of inversions is small. But the O(n^2)-time performance of insertion-sort makes it a poor choice outside of these special contexts.

Heap-Sort

Heap-sort, on the other hand, runs in O(n log n) time in the worst case, which is optimal for comparison-based sorting methods. Heap-sort can easily be made to execute in-place, and is a natural choice on small- and medium-sized sequences, when input data can fit into main memory. However, heap-sort tends to be outperformed by both quick-sort and merge-sort on larger sequences. A standard heap-sort does not provide a stable sort, because of the swapping of elements.
Quick-Sort

Although its O(n^2)-time worst-case performance makes quick-sort susceptible in real-time applications where we must make guarantees on the time needed to complete a sorting operation, we expect its performance to be O(n log n)-time, and experimental studies have shown that it outperforms both heap-sort and merge-sort on many tests. Quick-sort does not naturally provide a stable sort, due to the swapping of elements during the partitioning step.

For decades, quick-sort was the default choice for a general-purpose, in-memory sorting algorithm. Quick-sort was included as the qsort sorting utility provided in C language libraries, and was the basis for sorting utilities on Unix operating systems for many years. It was also the standard algorithm for sorting arrays in Java through version 6 of that language. (We discuss Java 7 below.)

Merge-Sort

Merge-sort runs in O(n log n) time in the worst case. It is quite difficult to make merge-sort run in-place for arrays, and without that optimization, the extra overhead of allocating a temporary array, and copying between the arrays, is less attractive than in-place implementations of heap-sort and quick-sort for sequences that can fit entirely in a computer's main memory. Even so, merge-sort is an excellent algorithm for situations where the input is stratified across various levels of the computer's memory hierarchy (e.g., cache, main memory, external memory). In these contexts, the way that merge-sort processes runs of data in long merge streams makes the best use of all the data brought as a block into a level of memory, thereby reducing the total number of memory transfers.

The GNU sorting utility (and most current versions of the Linux operating system) relies on a multiway merge-sort variant. Since 2003, the standard sort method of Python's list class has been a hybrid approach named Tim-sort (designed by Tim Peters), which is essentially a bottom-up merge-sort that takes advantage of some initial runs in the data while using insertion-sort to build additional runs. Tim-sort has also become the default algorithm for sorting arrays in Java 7.

Bucket-Sort and Radix-Sort

Finally, if an application involves sorting entries with small integer keys, character strings, or d-tuples of keys from a discrete range, then bucket-sort or radix-sort is an excellent choice, for it runs in O(d(n + N)) time, where [0, N-1] is the range of integer keys (and d = 1 for bucket-sort). Thus, if d(n + N) is significantly "below" the n log n function, then this sorting method should run faster than even quick-sort, heap-sort, or merge-sort.
Python's Built-In Sorting Functions

Python provides two built-in ways to sort data. The first is the sort method of the list class. As an example, suppose that we define the following list:

    colors = ['red', 'green', 'blue', 'cyan', 'magenta', 'yellow']

That method has the effect of reordering the elements of the list into order, as defined by the natural meaning of the < operator for those elements. In the above example, with elements that are strings, the natural order is defined alphabetically. Therefore, after a call to colors.sort(), the order of the list would become:

    ['blue', 'cyan', 'green', 'magenta', 'red', 'yellow']

Python also supports a built-in function, named sorted, that can be used to produce a new ordered list containing the elements of any existing iterable container. Going back to our original example, the syntax sorted(colors) would return a new list of those colors, in alphabetical order, while leaving the contents of the original list unchanged. This second form is more general because it can be applied to any iterable object as a parameter; for example, sorted('green') returns ['e', 'e', 'g', 'n', 'r'].

Sorting According to a Key Function

There are many situations in which we wish to sort a list of elements, but according to some order other than the natural order defined by the < operator. For example, we might wish to sort a list of strings from shortest to longest (rather than alphabetically). Both of Python's built-in sort functions allow a caller to control the notion of order that is used when sorting. This is accomplished by providing, as an optional keyword parameter, a reference to a secondary function that computes a key for each element of the primary sequence; then the primary elements are sorted based on the natural order of their keys. (See the earlier discussion of this technique in the context of the built-in min and max functions.)

A key function must be a one-parameter function that accepts an element as a parameter and returns a key. For example, we could use the built-in len function when sorting strings by length, as a call len(s) for string s returns its length. To sort our colors list based on length, we use the syntax colors.sort(key=len) to mutate the list, or sorted(colors, key=len) to generate a new ordered list, while leaving the original alone. When sorted with the length function as a key, the contents are:

    ['red', 'blue', 'cyan', 'green', 'yellow', 'magenta']

These built-in functions also support a keyword parameter, reverse, that can be set to True to cause the sort order to be from largest to smallest.
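The following short session (an illustration of our own, using the colors list defined above) consolidates these behaviors:

  colors = ['red', 'green', 'blue', 'cyan', 'magenta', 'yellow']
  print(sorted(colors))                         # new list, alphabetical order
  print(sorted(colors, key=len))                # new list, shortest to longest
  print(sorted(colors, key=len, reverse=True))  # new list, longest to shortest
  colors.sort(key=len)                          # mutates the list in place
  print(colors)                                 # ['red', 'blue', 'cyan', ...]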
Decorate-Sort-Undecorate Design Pattern

Python's support for a key function when sorting is implemented using what is known as the decorate-sort-undecorate design pattern. It proceeds in three steps:

1. Each element of the list is temporarily replaced with a "decorated" version that includes the result of the key function applied to the element.
2. The list is sorted based upon the natural order of the keys (see the figure below).
3. The decorated elements are replaced by the original elements.

Figure: A list of "decorated" strings, using their lengths as the decoration; this list has been sorted by those keys, yielding the order red, blue, cyan, green, yellow, magenta.

Although there is already built-in support for this in Python, if we were to implement such a strategy ourselves, a natural way to represent a "decorated" element is using the same composition strategy that we used for representing key-value pairs within a priority queue. The earlier priority queue code includes just such an _Item class, defined so that the < operator for items relies upon the given keys. With such composition, we could trivially adapt any sorting algorithm to use the decorate-sort-undecorate pattern, as demonstrated below with merge-sort.

  def decorated_merge_sort(data, key=None):
    """Demonstration of the decorate-sort-undecorate pattern."""
    if key is not None:
      for j in range(len(data)):                 # decorate each element
        data[j] = _Item(key(data[j]), data[j])
    merge_sort(data)                             # sort with existing algorithm
    if key is not None:
      for j in range(len(data)):                 # undecorate each element
        data[j] = data[j]._value

Code Fragment: An approach for implementing the decorate-sort-undecorate pattern based upon the array-based merge-sort given earlier. The _Item class is identical to that which was used in the PriorityQueueBase class.
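For self-containedness, a minimal version of that composition class might look as follows (our sketch of the book's _Item, which orders items solely by their keys):

  class _Item:
    """Lightweight composite to store a key and its associated value."""
    __slots__ = '_key', '_value'

    def __init__(self, k, v):
      self._key = k
      self._value = v

    def __lt__(self, other):
      return self._key < other._key    # compare items based on their keys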
Selection

As important as it is, sorting is not the only interesting problem dealing with a total order relation on a set of elements. There are a number of applications in which we are interested in identifying a single element in terms of its rank relative to the sorted order of the entire set. Examples include identifying the minimum and maximum elements, but we may also be interested in, say, identifying the median element, that is, the element such that half of the other elements are smaller and the remaining half are larger. In general, queries that ask for an element with a given rank are called order statistics.

Defining the Selection Problem

In this section, we discuss the general order-statistic problem of selecting the kth smallest element from an unsorted collection of n comparable elements. This is known as the selection problem. Of course, we can solve this problem by sorting the collection and then indexing into the sorted sequence at index k - 1. Using the best comparison-based sorting algorithms, this approach would take O(n log n) time, which is obviously an overkill for the cases where k = 1 or k = n (or even k = 2 or k = n - 1), because we can easily solve the selection problem for these values of k in O(n) time. Thus, a natural question to ask is whether we can achieve an O(n) running time for all values of k (including the interesting case of finding the median, where k = ceil(n/2)).

Prune-and-Search

We can indeed solve the selection problem in O(n) time for any value of k. Moreover, the technique we use to achieve this result involves an interesting algorithmic design pattern. This design pattern is known as prune-and-search or decrease-and-conquer. In applying this design pattern, we solve a given problem that is defined on a collection of n objects by pruning away a fraction of the n objects and recursively solving the smaller problem. When we have finally reduced the problem to one defined on a constant-sized collection of objects, we then solve the problem using some brute-force method. Returning back from all the recursive calls completes the construction. In some cases, we can avoid using recursion, in which case we simply iterate the prune-and-search reduction step until we can apply a brute-force method and stop. Incidentally, the binary search method described earlier is an example of the prune-and-search design pattern.
Randomized Quick-Select

In applying the prune-and-search pattern to finding the kth smallest element in an unordered sequence of n elements, we describe a simple and practical algorithm, known as randomized quick-select. This algorithm runs in O(n) expected time, taken over all possible random choices made by the algorithm; this expectation does not depend whatsoever on any randomness assumptions about the input distribution. We note though that randomized quick-select runs in O(n^2) time in the worst case, the justification of which is left as an exercise. We also provide an exercise for modifying randomized quick-select to define a deterministic selection algorithm that runs in O(n) worst-case time. The existence of this deterministic algorithm is mostly of theoretical interest, however, since the constant factor hidden by the big-Oh notation is relatively large in that case.

Suppose we are given an unsorted sequence S of n comparable elements together with an integer k in [1, n]. At a high level, the quick-select algorithm for finding the kth smallest element in S is similar to the randomized quick-sort algorithm described earlier. We pick a "pivot" element from S at random and use this to subdivide S into three subsequences L, E, and G, storing the elements of S less than, equal to, and greater than the pivot, respectively. In the prune step, we determine which of these subsets contains the desired element, based on the value of k and the sizes of those subsets. We then recur on the appropriate subset, noting that the desired element's rank in the subset may differ from its rank in the full set. An implementation of randomized quick-select is shown below.

  import random

  def quick_select(S, k):
    """Return the kth smallest element of list S, for k from 1 to len(S)."""
    if len(S) == 1:
      return S[0]
    pivot = random.choice(S)             # pick random pivot element from S
    L = [x for x in S if x < pivot]      # elements less than pivot
    E = [x for x in S if x == pivot]     # elements equal to pivot
    G = [x for x in S if pivot < x]      # elements greater than pivot
    if k <= len(L):
      return quick_select(L, k)          # kth smallest lies in L
    elif k <= len(L) + len(E):
      return pivot                       # kth smallest equal to pivot
    else:
      j = k - len(L) - len(E)            # new selection parameter
      return quick_select(G, j)          # kth smallest is jth in G

Code Fragment: Randomized quick-select algorithm.
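As a usage illustration (our own example, not from the book), the median of a collection can be obtained by asking for rank ceil(n/2):

  data = [85, 24, 63, 45, 17, 31, 96, 50]
  n = len(data)
  k = (n + 1) // 2                     # rank of the median, ceil(n/2)
  print(quick_select(data, k))         # prints 45, the 4th smallest of 8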
Analyzing Randomized Quick-Select

Showing that randomized quick-select runs in O(n) time requires a simple probabilistic argument. The argument is based on the linearity of expectation, which states that if X and Y are random variables and c is a number, then

    E(X + Y) = E(X) + E(Y)   and   E(cX) = cE(X),

where we use E(Z) to denote the expected value of the expression Z.

Let t(n) be the running time of randomized quick-select on a sequence of size n. Since this algorithm depends on random events, its running time, t(n), is a random variable. We want to bound E(t(n)), the expected value of t(n). Say that a recursive invocation of our algorithm is "good" if it partitions S so that the size of each of L and G is at most 3n/4. Clearly, a recursive call is good with probability at least 1/2. Let g(n) denote the number of consecutive recursive calls we make, including the present one, before we get a good one. Then we can characterize t(n) using the following recurrence equation:

    t(n) <= bn * g(n) + t(3n/4),

where b >= 1 is a constant. Applying the linearity of expectation for n > 1, we get

    E(t(n)) <= E(bn * g(n) + t(3n/4)) = bn * E(g(n)) + E(t(3n/4)).

Since a recursive call is good with probability at least 1/2, and whether a recursive call is good or not is independent of its parent call being good, the expected value of g(n) is at most the expected number of times we must flip a fair coin before it comes up "heads." That is, E(g(n)) <= 2. Thus, if we let T(n) be shorthand for E(t(n)), then we can write the case for n > 1 as

    T(n) <= T(3n/4) + 2bn.

To convert this relation into a closed form, let us iteratively apply this inequality, assuming n is large. So, for example, after two applications,

    T(n) <= T((3/4)^2 n) + 2b(3/4)n + 2bn.

At this point, we should see that the general case is

    T(n) <= 2bn * Σ_(i=0)^(ceil(log_(4/3) n)) (3/4)^i.

In other words, the expected running time is at most 2bn times a geometric sum whose base is a positive number less than 1. Thus, by the earlier proposition on geometric summations, T(n) is O(n).

Proposition: The expected running time of randomized quick-select on a sequence S of size n is O(n), assuming two elements of S can be compared in O(1) time.
Exercises

For help with exercises, please visit the site www.wiley.com/college/goodrich.

Reinforcement

R- Give a complete justification of the proposition bounding merge-sort's running time.
R- In the merge-sort tree shown in the earlier figures, some edges are drawn as arrows. What is the meaning of a downward arrow? How about an upward arrow?
R- Show that the running time of the merge-sort algorithm on an n-element sequence is O(n log n), even when n is not a power of 2.
R- Is our array-based implementation of merge-sort stable? Explain why or why not.
R- Is our linked-list-based implementation of merge-sort stable? Explain why or why not.
R- An algorithm that sorts key-value entries by key is said to be straggling if, any time two entries e_i and e_j have equal keys, but e_i appears before e_j in the input, then the algorithm places e_i after e_j in the output. Describe a change to the merge-sort algorithm to make it straggling.
R- Suppose we are given two n-element sorted sequences A and B, each with distinct elements, but potentially some elements that are in both sequences. Describe an O(n)-time method for computing a sequence representing the union (with no duplicates) as a sorted sequence.
R- Suppose we modify the deterministic version of the quick-sort algorithm so that, instead of selecting the last element in an n-element sequence as the pivot, we choose the element at index floor(n/2). What is the running time of this version of quick-sort on a sequence that is already sorted?
R- Consider a modification of the deterministic version of the quick-sort algorithm where we choose the element at index floor(n/2) as our pivot. Describe the kind of sequence that would cause this version of quick-sort to run in Θ(n^2) time.
R- Show that the best-case running time of quick-sort on a sequence of size n with distinct elements is O(n log n).
R- Suppose the function inplace_quick_sort is executed on a sequence with duplicate elements. Prove that the algorithm still correctly sorts the input sequence. What happens in the partition step when there are elements equal to the pivot? What is the running time of the algorithm if all the input elements are equal?
R- If the outermost while loop of our inplace_quick_sort implementation were changed to use the condition left < right (rather than left <= right), there would be a flaw. Explain the flaw and give a specific input sequence on which such an implementation fails.
R- If the conditional guarding the swap in our inplace_quick_sort implementation were changed to use the condition left < right (rather than left <= right), there would be a flaw. Explain the flaw and give a specific input sequence on which such an implementation fails.
R- Following our analysis of randomized quick-sort, show that the probability that a given input element x belongs to more than 2 log n subproblems in size group i is at most 1/n^2.
R- Of the n! possible inputs to a given comparison-based sorting algorithm, what is the absolute maximum number of inputs that could be correctly sorted with just n comparisons?
R- Jonathan has a comparison-based sorting algorithm that sorts the first k elements of a sequence of size n in O(n) time. Give a big-Oh characterization of the biggest that k can be.
R- Is the bucket-sort algorithm in-place? Why or why not?
R- Describe a radix-sort method for lexicographically sorting a sequence S of triplets (k, l, m), where k, l, and m are integers in the range [0, N-1], for some N >= 2. How could this scheme be extended to sequences of d-tuples (k_1, k_2, ..., k_d), where each k_i is an integer in the range [0, N-1]?
R- Suppose S is a sequence of n values, each equal to 0 or 1. How long will it take to sort S with the merge-sort algorithm? What about quick-sort?
R- Suppose S is a sequence of n values, each equal to 0 or 1. How long will it take to sort S stably with the bucket-sort algorithm?
R- Given a sequence S of n values, each equal to 0 or 1, describe an in-place method for sorting S.
R- Give an example input list that requires merge-sort and heap-sort to take O(n log n) time to sort, but insertion-sort runs in O(n) time. What if you reverse this list?
R- What is the best algorithm for sorting each of the following: general comparable objects, long character strings, 32-bit integers, double-precision floating-point numbers, and bytes? Justify your answer.
R- Show that the worst-case running time of quick-select on an n-element sequence is Ω(n^2).
Creativity

C- Linda claims to have an algorithm that takes an input sequence S and produces an output sequence T that is a sorting of the n elements in S.
  a. Give an algorithm, is_sorted, that tests in O(n) time if T is sorted.
  b. Explain why the algorithm is_sorted is not sufficient to prove a particular output T to Linda's algorithm is a sorting of S.
  c. Describe what additional information Linda's algorithm could output so that her algorithm's correctness could be established on any given S and T in O(n) time.
C- Describe and analyze an efficient method for removing all duplicates from a collection A of n elements.
C- Augment the PositionalList class to support a method named merge with the following behavior: if A and B are PositionalList instances whose elements are sorted, the syntax A.merge(B) should merge all elements of B into A so that A remains sorted and B becomes empty. Your implementation must accomplish the merge by relinking existing nodes; you are not to create any new nodes.
C- Augment the PositionalList class to support a method named sort that sorts the elements of a list by relinking existing nodes; you are not to create any new nodes. You may use your choice of sorting algorithm.
C- Implement a bottom-up merge-sort for a collection of items by placing each item in its own queue, and then repeatedly merging pairs of queues until all items are sorted within a single queue.
C- Modify our in-place quick-sort implementation to be a randomized version of the algorithm, as discussed in this chapter.
C- Consider a version of deterministic quick-sort where we pick as our pivot the median of the d last elements in the input sequence of n elements, for a fixed, constant odd number d >= 3. What is the asymptotic worst-case running time of quick-sort in this case?
C- Another way to analyze randomized quick-sort is to use a recurrence equation. In this case, we let T(n) denote the expected running time of randomized quick-sort, and we observe that, because of the worst-case partitions for good and bad splits, we can write

    T(n) <= (1/2)(T(3n/4) + T(n/4)) + (1/2)T(n - 1) + bn,

where bn is the time needed to partition a list for a given pivot and concatenate the result sublists after the recursive calls return. Show, by induction, that T(n) is O(n log n).
- our high-level description of quick-sort describes partitioning the elements into three sets leand ghaving keys less thanequal toor greater than the pivotrespectively howeverour in-place quick-sort implementation of code fragment does not gather all elements equal to the pivot into set an alternative strategy for an in-placethreeway partition is as follows loop through the elements from left to right maintaining indices ijand and the invariant that all elements of slice [ :iare strictly less than the pivotall elements of slice [ :jare equal to the pivotand all elements of slice [ :kare strictly greater than the pivotelements of [ :nare yet unclassified in each pass of the loopclassify one additional elementperforming constant number of swaps as needed implement an in-place quick-sort using this strategy - suppose we are given an -element sequence such that each element in represents different vote for presidentwhere each vote is given as an integer representing particular candidateyet the integers may be arbitrarily large (even if the number of candidates is notdesign an ( log )time algorithm to see who wins the election representsassuming the candidate with the most votes wins - consider the voting problem from exercise - but now suppose that we know the number of candidates runningeven though the integer ids for those candidates can be arbitrarily large describe an ( log )time algorithm for determining who wins the election - consider the voting problem from exercise - but now suppose the integers to are used to identify candidates design an ( )-time algorithm to determine who wins the election - show that any comparison-based sorting algorithm can be made to be stable without affecting its asymptotic running time - suppose we are given two sequences and of elementspossibly containing duplicateson which total order relation is defined describe an efficient algorithm for determining if and contain the same set of elements what is the running time of this methodc- given an array of integers in the range [ ]describe simple method for sorting in (ntime - let sk be different sequences whose elements have integer keys in the range [ ]for some parameter > describe an algorithm that produces respective sorted sequences in ( ntimewere denotes the sum of the sizes of those sequences - given sequence of elementson which total order relation is defineddescribe an efficient method for determining whether there are two equal elements in what is the running time of your method
- Let S be a sequence of n elements on which a total order relation is defined. Recall that an inversion in S is a pair of elements x and y such that x appears before y in S but x > y. Describe an algorithm running in O(n log n) time for determining the number of inversions in S.
- Let S be a sequence of n integers. Describe a method for printing out all the pairs of inversions in S in O(n + k) time, where k is the number of such inversions.
- Let S be a random permutation of n distinct integers. Argue that the expected running time of insertion-sort on S is Omega(n*n). (Hint: Note that half of the elements ranked in the top half of a sorted version of S are expected to be in the first half of S.)
- Let A and B be two sequences of n integers each. Given an integer m, describe an O(n log n)-time algorithm for determining if there is an integer a in A and an integer b in B such that m = a + b.
- Given a set of n integers, describe and analyze a fast method for finding the ceil(log n) integers closest to the median.
- Bob has a set A of n nuts and a set B of n bolts, such that each nut in A has a unique matching bolt in B. Unfortunately, the nuts in A all look the same, and the bolts in B all look the same as well. The only kind of comparison that Bob can make is to take a nut-bolt pair (a, b), such that a is in A and b is in B, and test it to see if the threads of a are larger, smaller, or a perfect match with the threads of b. Describe and analyze an efficient algorithm for Bob to match up all of his nuts and bolts.
- Our quick-select implementation can be made more space-efficient by initially computing only the counts for sets L, E, and G, creating only the new subset that will be needed for recursion. Implement such a version.
- Describe an in-place version of the quick-select algorithm in pseudo-code, assuming that you are allowed to modify the order of elements.
- Show how to use a deterministic O(n)-time selection algorithm to sort a sequence of n elements in O(n log n) worst-case time.
- Given an unsorted sequence S of n comparable elements, and an integer k, give an O(n log k) expected-time algorithm for finding the O(k) elements that have rank ceil(n/k), 2*ceil(n/k), 3*ceil(n/k), and so on.
- Space aliens have given us a function, alien_split, that can take a sequence S of n integers and partition S in O(n) time into sequences S1, S2, ..., Sk of size at most ceil(n/k) each, such that the elements in Si are less than or equal to every element in Si+1, for i = 1, 2, ..., k-1, for a fixed number k < n. Show how to use alien_split to sort S in O(n log n / log k) time.
- Read the documentation of the reverse keyword parameter of Python's sorting functions, and describe how the decorate-sort-undecorate paradigm could be used to implement it, without assuming anything about the key type.
- Show that randomized quick-sort runs in O(n log n) time with probability at least 1 - 1/n, that is, with high probability, by answering the following:
  a. For each input element x, define C_{i,j}(x) to be a 0/1 random variable that is 1 if and only if element x is in j subproblems that belong to size group i. Argue why we need not define C_{i,j} for j > n.
  b. Let X_{i,j} be a 0/1 random variable that is 1 with probability 1/2^j, independent of any other events, and let L = log_{4/3} n. Argue why the sum of C_{i,j}(x) over i = 0, ..., L-1 and j = 0, ..., n-1 is at most the corresponding sum of the X_{i,j}.
  c. Show that the expected value of the sum of the X_{i,j} over i = 0, ..., L-1 and j = 0, ..., n-1 is (2 - (1/2)^{n-1})L <= 2L.
  d. Show that the probability that this sum exceeds 4L is at most 1/n^2, using the Chernoff bound that states that if X is the sum of a finite number of independent 0/1 random variables with expected value mu > 0, then Pr(X > 2 mu) < (4/e)^{-mu}, where e = 2.71828128...
  e. Argue why the previous claim proves randomized quick-sort runs in O(n log n) time with probability at least 1 - 1/n.
- We can make the quick-select algorithm deterministic, by choosing the pivot of an n-element sequence as follows: Partition the set S into ceil(n/5) groups of size 5 each (except possibly for one group). Sort each little set and identify the median element in this set. From this set of ceil(n/5) "baby" medians, apply the selection algorithm recursively to find the median of the baby medians. Use this element as the pivot and proceed as in the quick-select algorithm. Show that this deterministic quick-select algorithm runs in O(n) time by answering the following questions (please ignore floor and ceiling functions if that simplifies the mathematics, for the asymptotics are the same either way):
  a. How many baby medians are less than or equal to the chosen pivot? How many are greater than or equal to the pivot?
  b. For each baby median less than or equal to the pivot, how many other elements are less than or equal to the pivot? Is the same true for those greater than or equal to the pivot?
  c. Argue why the method for finding the deterministic pivot and using it to partition S takes O(n) time.
  d. Based on these estimates, write a recurrence equation to bound the worst-case running time t(n) for this selection algorithm. (Note that in the worst case there are two recursive calls: one to find the median of the baby medians, and one to recur on the larger of L and G.)
  e. Using this recurrence equation, show by induction that t(n) is O(n).
Projects

- Implement a nonrecursive, in-place version of the quick-sort algorithm, as described at the end of the section on quick-sort.
- Experimentally compare the performance of in-place quick-sort and a version of quick-sort that is not in-place.
- Perform a series of benchmarking tests on a version of merge-sort and quick-sort to determine which one is faster. Your tests should include sequences that are "random" as well as "almost" sorted.
- Implement deterministic and randomized versions of the quick-sort algorithm and perform a series of benchmarking tests to see which one is faster. Your tests should include sequences that are very "random" looking as well as ones that are "almost" sorted.
- Implement an in-place version of insertion-sort and an in-place version of quick-sort. Perform benchmarking tests to determine the range of values of n where quick-sort is on average better than insertion-sort.
- Design and implement a version of the bucket-sort algorithm for sorting a list of n entries with integer keys taken from the range [0, N-1], for N >= 2. The algorithm should run in O(n + N) time.
- Design and implement an animation for one of the sorting algorithms described in this chapter. Your animation should illustrate the key properties of this algorithm in an intuitive manner.

Chapter Notes

Knuth's classic text on sorting and searching contains an extensive history of the sorting problem and algorithms for solving it. Huang and Langston show how to merge two sorted lists in-place in linear time. The standard quick-sort algorithm is due to Hoare. Several optimizations for quick-sort are described by Bentley and McIlroy. More information about randomization, including Chernoff bounds, can be found in the appendix and the book by Motwani and Raghavan. The quick-sort analysis given in this chapter is a combination of the analysis given in an earlier Java edition of this book and the analysis of Kleinberg and Tardos. One of the exercises above is due to Littman. Gonnet and Baeza-Yates analyze and compare experimentally several sorting algorithms. The term "prune-and-search" comes originally from the computational geometry literature (such as in the work of Clarkson and of Megiddo). The term "decrease-and-conquer" is from Levitin.
Text Processing

Contents

  Abundance of Digitized Text
    Notations for Strings and the Python str Class
  Pattern-Matching Algorithms
    Brute Force
    The Boyer-Moore Algorithm
    The Knuth-Morris-Pratt Algorithm
  Dynamic Programming
    Matrix Chain-Product
    DNA and Text Sequence Alignment
  Text Compression and the Greedy Method
    The Huffman Coding Algorithm
    The Greedy Method
  Tries
    Standard Tries
    Compressed Tries
    Suffix Tries
    Search Engine Indexing
  Exercises
Abundance of Digitized Text

Despite the wealth of multimedia information, text processing remains one of the dominant functions of computers. Computers are used to edit, store, and display documents, and to transport documents over the Internet. Furthermore, digital systems are used to archive a wide range of textual information, and new data is being generated at a rapidly increasing pace. A large corpus can readily surpass a petabyte of data (which is equivalent to a thousand terabytes, or a million gigabytes). Common examples of digital collections that include textual information are:

- Snapshots of the World Wide Web, as Internet document formats HTML and XML are primarily text formats, with added tags for multimedia content.
- All documents stored locally on a user's computer.
- Email archives.
- Customer reviews.
- Compilations of status updates on social networking sites such as Facebook.
- Feeds from microblogging sites such as Twitter and Tumblr.

These collections include written text from hundreds of international languages. Furthermore, there are large data sets (such as DNA) that can be viewed computationally as "strings" even though they are not language. In this chapter we explore some of the fundamental algorithms that can be used to efficiently analyze and process large textual data sets. In addition to having interesting applications, text-processing algorithms also highlight some important algorithmic design patterns.

We begin by examining the problem of searching for a pattern as a substring of a larger piece of text, for example, when searching for a word in a document. The pattern-matching problem gives rise to the brute-force method, which is often inefficient but has wide applicability. Next, we introduce an algorithmic technique known as dynamic programming, which can be applied in certain settings to solve a problem in polynomial time that appears at first to require exponential time to solve. We demonstrate the application of this technique to the problem of finding partial matches between strings that may be similar but not perfectly aligned. This problem arises when making suggestions for a misspelled word, or when trying to match related genetic samples.

Because of the massive size of textual data sets, the issue of compression is important, both in minimizing the number of bits that need to be communicated through a network and to reduce the long-term storage requirements for archives. For text compression, we can apply the greedy method, which often allows us to approximate solutions to hard problems, and for some problems (such as in text compression) actually gives rise to optimal algorithms. Finally, we examine several special-purpose data structures that can be used to better organize textual data in order to support more efficient run-time queries.
Notations for Strings and the Python str Class

We use character strings as a model for text when discussing algorithms for text processing. Character strings can come from a wide variety of sources, including scientific, linguistic, and Internet applications. Indeed, the following is an example of such a string:

  S = "CGTAAACTGCTTTAATCAAACGC"

The string S comes from a DNA application; another natural example of a character string is the Internet address (URL) for the publisher of this book. We refer to the appendix for an overview of the operations supported by Python's str class.

To allow fairly general notions of a string in our algorithm descriptions, we only assume that the characters of a string come from a known alphabet, which we denote as Sigma. For example, in the context of DNA, there are four symbols in the standard alphabet, Sigma = {A, C, G, T}. This alphabet can, of course, be a subset of the ASCII or Unicode character sets, but it could also be something more general. Although we assume that an alphabet has a fixed finite size, denoted as |Sigma|, that size can be nontrivial, as with Python's treatment of the Unicode alphabet, which allows for more than a million distinct characters. We therefore consider the impact of |Sigma| in our asymptotic analysis of text-processing algorithms.

Several string-processing operations involve breaking large strings into smaller strings. In order to be able to speak about the pieces that result from such operations, we rely on Python's indexing and slicing notations. For the sake of notation, we let S denote a string of length n. In that case, we let S[j] refer to the character at index j for 0 <= j <= n - 1. We let the notation S[j:k], for 0 <= j <= k <= n, denote the slice (or substring) of S consisting of characters S[j] up to and including S[k-1], but not S[k]. By this definition, note that substring S[j:j+m] has length m, and that substring S[j:j] is trivially the null string, having length 0. In accordance with Python conventions, the substring S[j:k] is also the null string when j > k.

In order to distinguish some special kinds of substrings, let us refer to any substring of the form S[0:k], for 0 <= k <= n, as a prefix of S; such a prefix results in Python when the first index is omitted from slice notation, as in S[:k]. Similarly, any substring of the form S[j:n], for 0 <= j <= n, is a suffix of S; such a suffix results in Python when the second index is omitted from slice notation, as in S[j:]. For example, if we again take S to be the string of DNA given above, then "CGTAA" is a prefix of S, "CGC" is a suffix of S, and "C" is both a prefix and a suffix of S. Note that the null string is a prefix and a suffix of any string.
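As a small illustration of these conventions, the following snippet (our own, not part of the original text) exercises Python slices on the DNA string above:

  S = "CGTAAACTGCTTTAATCAAACGC"
  n = len(S)
  print(S[:5])      # 'CGTAA', the prefix S[0:5]
  print(S[n-3:])    # 'CGC', the suffix S[n-3:n]
  print(S[4:2])     # '', the null string, since j > k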
Pattern-Matching Algorithms

In the classic pattern-matching problem, we are given a text string T of length n and a pattern string P of length m, and want to find whether P is a substring of T. If so, we may want to find the lowest index j within T at which P begins, such that T[j:j+m] equals P, or perhaps to find all indices of T at which pattern P begins.

The pattern-matching problem is inherent to many behaviors of Python's str class, such as T.find(P), T.index(P), and T.count(P), and is a subtask of more complex behaviors such as T.partition(P), T.split(P), and T.replace(P, Q).

In this section, we present three pattern-matching algorithms (with increasing levels of difficulty). For simplicity, we model the outward semantics of our functions upon the find method of the string class, returning the lowest index at which the pattern begins, or -1 if the pattern is not found.

Brute Force

The brute-force algorithmic design pattern is a powerful technique for algorithm design when we have something we wish to search for or when we wish to optimize some function. When applying this technique in a general situation, we typically enumerate all possible configurations of the inputs involved and pick the best of all these enumerated configurations.

In applying this technique to design a brute-force pattern-matching algorithm, we derive what is probably the first algorithm that we might think of for solving the problem: we simply test all the possible placements of P relative to T. An implementation of this algorithm is shown below.

  def find_brute(T, P):
    """Return the lowest index of T at which substring P begins (or else -1)."""
    n, m = len(T), len(P)                  # introduce convenient notations
    for i in range(n - m + 1):             # try every potential starting index within T
      k = 0                                # an index into pattern P
      while k < m and T[i + k] == P[k]:    # kth character of P matches
        k += 1
      if k == m:                           # if we reached the end of pattern,
        return i                           # substring T[i:i+m] matches P
    return -1                              # failed to find a match starting with any i

  Code Fragment: An implementation of the brute-force pattern-matching algorithm.
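As a quick sanity check (a hypothetical driver, not part of the code fragment), we can run find_brute on the strings of the example that follows:

  T = "abacaabaccabacabaabb"
  P = "abacab"
  print(find_brute(T, P))   # 10, the lowest index at which P begins in T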
Performance

The analysis of the brute-force pattern-matching algorithm could not be simpler. It consists of two nested loops, with the outer loop indexing through all possible starting indices of the pattern in the text, and the inner loop indexing through each character of the pattern, comparing it to its potentially corresponding character in the text. Thus, the correctness of the brute-force pattern-matching algorithm follows immediately from this exhaustive search approach.

The running time of brute-force pattern matching in the worst case is not good, however, because, for each candidate index in T, we can perform up to m character comparisons to discover that P does not match T at the current index. Referring to the code fragment above, we see that the outer for loop is executed at most n - m + 1 times, and the inner while loop is executed at most m times. Thus, the worst-case running time of the brute-force method is O(nm).

Example: Suppose we are given the text string T = "abacaabaccabacabaabb" and the pattern string P = "abacab". The figure in the original illustrates the execution of the brute-force pattern-matching algorithm on T and P.

  Figure: Example run of the brute-force pattern-matching algorithm, with each character comparison indicated by a numerical label; some comparisons are omitted from the drawing. (The figure itself is not reproduced here.)
The Boyer-Moore Algorithm

At first, it might seem that it is always necessary to examine every character in T in order to locate a pattern P as a substring or to rule out its existence. But this is not always the case. The Boyer-Moore pattern-matching algorithm, which we study in this section, can sometimes avoid comparisons between P and a sizable fraction of the characters in T. In this section, we describe a simplified version of the original algorithm by Boyer and Moore.

The main idea of the Boyer-Moore algorithm is to improve the running time of the brute-force algorithm by adding two potentially time-saving heuristics. Roughly stated, these heuristics are as follows:

Looking-Glass Heuristic: When testing a possible placement of P against T, begin the comparisons from the end of P and move backward to the front of P.

Character-Jump Heuristic: During the testing of a possible placement of P within T, a mismatch of text character T[i] = c with the corresponding pattern character P[k] is handled as follows. If c is not contained anywhere in P, then shift P completely past T[i] (for it cannot match any character in P). Otherwise, shift P until an occurrence of character c in P gets aligned with T[i].

We will formalize these heuristics shortly, but at an intuitive level, they work as an integrated team. The looking-glass heuristic sets up the other heuristic to allow us to avoid comparisons between P and whole groups of characters in T. In this case at least, we can get to the destination faster by going backwards, for if we encounter a mismatch during the consideration of P at a certain location in T, then we are likely to avoid lots of needless comparisons by significantly shifting P relative to T using the character-jump heuristic. The character-jump heuristic pays off big if it can be applied early in the testing of a potential placement of P against T.

  Figure: A simple example demonstrating the intuition of the Boyer-Moore pattern-matching algorithm. The original comparison results in a mismatch with a character of the text. Because that character is nowhere in the pattern, the entire pattern is shifted beyond its location. The second comparison is also a mismatch, but the mismatched character occurs elsewhere in the pattern; the pattern is next shifted so that its last occurrence of that character is aligned with the corresponding character in the text. The remainder of the process is not illustrated in this figure.
The example of the figure above is rather basic, because it only involves mismatches with the last character of the pattern. More generally, when a match is found for that last character, the algorithm continues by trying to extend the match with the second-to-last character of the pattern in its current alignment. That process continues until either matching the entire pattern, or finding a mismatch at some interior position of the pattern.

If a mismatch is found, and the mismatched character of the text does not occur in the pattern, we shift the entire pattern beyond that location, as originally illustrated above. If the mismatched character occurs elsewhere in the pattern, we must consider two possible subcases depending on whether its last occurrence is before or after the character of the pattern that was aligned with the mismatched one. Those two cases are illustrated in the following figure.

  Figure: Additional rules for the character-jump heuristic of the Boyer-Moore algorithm. We let i represent the index of the mismatched character in the text, k represent the corresponding index in the pattern, and j represent the index of the last occurrence of T[i] within the pattern. We distinguish two cases: (a) j < k, in which case we shift the pattern by k - j units, and thus, index i advances by m - (j + 1) units; (b) j > k, in which case we shift the pattern by one unit, and index i advances by m - k units.

In case (b) of the figure, we slide the pattern only one unit. It would be more productive to slide it rightward until finding another occurrence of the mismatched character T[i] in the pattern, but we do not wish to take time to search for another occurrence.
The efficiency of the Boyer-Moore algorithm relies on creating a lookup table that quickly determines where a mismatched character occurs elsewhere in the pattern. In particular, we define a function last(c) as follows: if c is in P, last(c) is the index of the last (rightmost) occurrence of c in P; otherwise, we conventionally define last(c) = -1.

If we assume that the alphabet is of fixed, finite size, and that characters can be converted to indices of an array (for example, by using their character code), the last function can be easily implemented as a lookup table with worst-case O(1)-time access to the value last(c). However, the table would have length equal to the size of the alphabet (rather than the size of the pattern), and time would be required to initialize the entire table.

We prefer to use a hash table to represent the last function, with only those characters from the pattern occurring in the structure. The space usage for this approach is proportional to the number of distinct alphabet symbols that occur in the pattern, and thus O(m). The expected lookup time remains independent of the problem (although the worst-case bound is O(m)). Our complete implementation of the Boyer-Moore pattern-matching algorithm is given below.

  def find_boyer_moore(T, P):
    """Return the lowest index of T at which substring P begins (or else -1)."""
    n, m = len(T), len(P)             # introduce convenient notations
    if m == 0: return 0               # trivial search for empty string
    last = {}                         # build 'last' dictionary
    for k in range(m):
      last[P[k]] = k                  # later occurrence overwrites
    i = m - 1                         # align end of pattern at index m-1 of text;
    k = m - 1                         # i is an index into T, k an index into P
    while i < n:
      if T[i] == P[k]:                # a matching character
        if k == 0:
          return i                    # pattern begins at index i of text
        else:
          i -= 1                      # examine previous character
          k -= 1                      # of both T and P
      else:
        j = last.get(T[i], -1)        # last(T[i]) is -1 if not found
        i += m - min(k, j + 1)        # case analysis for jump step
        k = m - 1                     # restart at end of pattern
    return -1

  Code Fragment: An implementation of the Boyer-Moore algorithm.
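To make the jump step concrete, here is a small check (our own driver, not from the original text) of the last dictionary that find_boyer_moore builds for the pattern of the earlier example, followed by a search:

  P = "abacab"
  last = {}
  for k in range(len(P)):
      last[P[k]] = k                 # later occurrences overwrite earlier ones
  print(last)                        # {'a': 4, 'b': 5, 'c': 3}
  print(find_boyer_moore("abacaabaccabacabaabb", P))   # 10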
The correctness of the Boyer-Moore pattern-matching algorithm follows from the fact that each time the method makes a shift, it is guaranteed not to "skip" over any possible matches, for last(c) is the location of the last occurrence of c in P. The figure in the original shows an example execution of the Boyer-Moore pattern-matching algorithm on an input string similar to the earlier example.

  Figure: An illustration of the Boyer-Moore pattern-matching algorithm, including a summary of the last(c) function; character comparisons are indicated with numerical labels. (The drawing is not reproduced here.)

Performance

If using a traditional lookup table, the worst-case running time of the Boyer-Moore algorithm is O(nm + |Sigma|). Namely, the computation of the last function takes O(m + |Sigma|) time, and the actual search for the pattern takes O(nm) time in the worst case, the same as the brute-force algorithm. (With a hash table, the dependence on |Sigma| is removed.) An example of a text-pattern pair that achieves the worst case is

  T = "aaaaaa...a"
  P = "baa...a"

The worst-case performance, however, is unlikely to be achieved for English text, for, in that case, the Boyer-Moore algorithm is often able to skip large portions of text. Experimental evidence on English text shows that the average number of comparisons done per character is 0.24 for a five-character pattern string.

We have actually presented a simplified version of the Boyer-Moore algorithm. The original algorithm achieves running time O(n + m + |Sigma|) by using an alternative shift heuristic based on the partially matched text string, whenever it shifts the pattern more than the character-jump heuristic. This alternative shift heuristic is based on applying the main idea from the Knuth-Morris-Pratt pattern-matching algorithm, which we discuss next.
The Knuth-Morris-Pratt Algorithm

In examining the worst-case performances of the brute-force and Boyer-Moore pattern-matching algorithms on specific instances of the problem, such as that given in the earlier example, we should notice a major inefficiency. For a certain alignment of the pattern, if we find several matching characters but then detect a mismatch, we ignore all the information gained by the successful comparisons after restarting with the next incremental placement of the pattern. The Knuth-Morris-Pratt (or "KMP") algorithm, discussed in this section, avoids this waste of information and, in so doing, it achieves a running time of O(n + m), which is asymptotically optimal. That is, in the worst case any pattern-matching algorithm will have to examine all the characters of the text and all the characters of the pattern at least once.

The main idea of the KMP algorithm is to precompute self-overlaps between portions of the pattern so that when a mismatch occurs at one location, we immediately know the maximum amount to shift the pattern before continuing the search.

  Figure: A motivating example for the Knuth-Morris-Pratt algorithm. If a mismatch occurs at the indicated location, the pattern could be shifted to the second alignment, without explicit need to recheck the partial match with the prefix "ama". If the mismatched character is not an 'l', then the next potential alignment of the pattern can take advantage of the common 'a'.

The Failure Function

To implement the KMP algorithm, we will precompute a failure function, f, that indicates the proper shift of P upon a failed comparison. Specifically, the failure function f(k) is defined as the length of the longest prefix of P that is a suffix of P[1:k+1] (note that we did not include P[0] here, since we will shift at least one unit). Intuitively, if we find a mismatch upon character P[k+1], the function f(k) tells us how many of the immediately preceding characters can be reused to restart the pattern. The following example describes the value of the failure function for the example pattern from the figure above.
Example: Consider the pattern P = "amalgamation" from the figure above. The Knuth-Morris-Pratt (KMP) failure function, f(k), for the string P is as shown in the following table:

  k:    0  1  2  3  4  5  6  7  8  9  10 11
  P[k]: a  m  a  l  g  a  m  a  t  i  o  n
  f(k): 0  0  1  0  0  1  2  3  0  0  0  0

Implementation

Our implementation of the KMP pattern-matching algorithm is shown below. It relies on a utility function, compute_kmp_fail, discussed afterward, to compute the failure function efficiently.

The main part of the KMP algorithm is its while loop, each iteration of which performs a comparison between the character at index j in T and the character at index k in P. If the outcome of this comparison is a match, the algorithm moves on to the next characters in both T and P (or reports a match if reaching the end of the pattern). If the comparison failed, the algorithm consults the failure function for a new candidate character in P, or starts over with the next index in T if failing on the first character of the pattern (since nothing can be reused).

  def find_kmp(T, P):
    """Return the lowest index of T at which substring P begins (or else -1)."""
    n, m = len(T), len(P)           # introduce convenient notations
    if m == 0: return 0             # trivial search for empty string
    fail = compute_kmp_fail(P)      # rely on utility to precompute
    j = 0                           # index into text
    k = 0                           # index into pattern
    while j < n:
      if T[j] == P[k]:              # P[0:1+k] matched thus far
        if k == m - 1:              # match is complete
          return j - m + 1
        j += 1                      # try to extend match
        k += 1
      elif k > 0:
        k = fail[k-1]               # reuse suffix of P[0:k]
      else:
        j += 1
    return -1                       # reached end without match

  Code Fragment: An implementation of the KMP pattern-matching algorithm. The compute_kmp_fail utility function is given below.
Constructing the KMP Failure Function

To construct the failure function, we use the method shown below, which is a "bootstrapping" process that compares the pattern to itself as in the KMP algorithm. Each time we have two characters that match, we set f(j) = k + 1. Note that since we have j > k throughout the execution of the algorithm, f(k - 1) is always well defined when we need to use it.

  def compute_kmp_fail(P):
    """Utility that computes and returns KMP 'fail' list."""
    m = len(P)
    fail = [0] * m                  # by default, presume overlap of 0 everywhere
    j = 1
    k = 0
    while j < m:                    # compute f(j) during this pass, if nonzero
      if P[j] == P[k]:              # k + 1 characters match thus far
        fail[j] = k + 1
        j += 1
        k += 1
      elif k > 0:                   # k follows a matching prefix
        k = fail[k-1]
      else:                         # no match found starting at j
        j += 1
    return fail

  Code Fragment: An implementation of the compute_kmp_fail utility in support of the KMP pattern-matching algorithm. Note how the algorithm uses the previous values of the failure function to efficiently compute new values.

Performance

Excluding the computation of the failure function, the running time of the KMP algorithm is clearly proportional to the number of iterations of the while loop. For the sake of the analysis, let us define s = j - k. Intuitively, s is the total amount by which the pattern P has been shifted with respect to the text T. Note that throughout the execution of the algorithm, we have s <= n. One of the following three cases occurs at each iteration of the loop:

- If T[j] = P[k], then j and k each increase by 1, and thus, s does not change.
- If T[j] != P[k] and k > 0, then j does not change and s increases by at least 1, since in this case s changes from j - k to j - f(k-1), which is an addition of k - f(k-1), which is positive because f(k-1) < k.
- If T[j] != P[k] and k = 0, then j increases by 1 and s increases by 1, since k does not change.
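As a quick check (our own, not part of the original code fragment), running the utility on the pattern of the earlier example reproduces the failure-function table:

  print(compute_kmp_fail("amalgamation"))
  # [0, 0, 1, 0, 0, 1, 2, 3, 0, 0, 0, 0]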
Thus, at each iteration of the loop, either j or s increases by at least 1 (possibly both); hence, the total number of iterations of the while loop in the KMP pattern-matching algorithm is at most 2n. Achieving this bound, of course, assumes that we have already computed the failure function for P.

The algorithm for computing the failure function runs in O(m) time. Its analysis is analogous to that of the main KMP algorithm, yet with a pattern of length m compared to itself. Thus, we have:

Proposition: The Knuth-Morris-Pratt algorithm performs pattern matching on a text string of length n and a pattern string of length m in O(n + m) time.

The correctness of this algorithm follows from the definition of the failure function. Any comparisons that are skipped are actually unnecessary, for the failure function guarantees that all the ignored comparisons are redundant; they would involve comparing the same matching characters over again.

The figure in the original illustrates the execution of the KMP pattern-matching algorithm on the same input strings as in the earlier example. Note the use of the failure function to avoid redoing one of the comparisons between a character of the pattern and a character of the text. Also note that the algorithm performs fewer overall comparisons than the brute-force algorithm run on the same strings.

  Figure: An illustration of the KMP pattern-matching algorithm, showing the failure function f(k) for the pattern alongside the character comparisons (labeled numerically), with some positions marked "no comparison performed". (Additional comparisons would be performed during the computation of the failure function. The drawing is not reproduced here.)
Dynamic Programming

In this section, we discuss the dynamic programming algorithm-design technique. This technique is similar to the divide-and-conquer technique, in that it can be applied to a wide variety of different problems. Dynamic programming can often be used to take problems that seem to require exponential time and produce polynomial-time algorithms to solve them. In addition, the algorithms that result from applications of the dynamic programming technique are usually quite simple, often needing little more than a few lines of code to describe some nested loops for filling in a table.

Matrix Chain-Product

Rather than starting out with an explanation of the general components of the dynamic programming technique, we begin by giving a classic, concrete example. Suppose we are given a collection of n two-dimensional matrices for which we wish to compute the mathematical product

  A = A0 * A1 * A2 *** An-1,

where Ai is a d_i x d_{i+1} matrix, for i = 0, 1, 2, ..., n-1. In the standard matrix multiplication algorithm (which is the one we will use), to multiply a d x e matrix B times an e x f matrix C, we compute the product, A, as

  A[i][j] = sum over k = 0, ..., e-1 of B[i][k] * C[k][j].

This definition implies that matrix multiplication is associative, that is, it implies that B * (C * D) = (B * C) * D. Thus, we can parenthesize the expression for A any way we wish and we will end up with the same answer. However, we will not necessarily perform the same number of primitive (that is, scalar) multiplications in each parenthesization, as is illustrated in the following example.

Example: Let B be a 2 x 10 matrix, let C be a 10 x 50 matrix, and let D be a 50 x 20 matrix. Computing B * (C * D) requires 10*50*20 + 2*10*20 = 10400 multiplications, whereas computing (B * C) * D requires 2*10*50 + 2*50*20 = 3000 multiplications.

The matrix chain-product problem is to determine the parenthesization of the expression defining the product A that minimizes the total number of scalar multiplications performed. As the example above illustrates, the differences between parenthesizations can be dramatic, so finding a good solution can result in significant speedups.
Defining Subproblems

One way to solve the matrix chain-product problem is to simply enumerate all the possible ways of parenthesizing the expression for A and determine the number of multiplications performed by each one. Unfortunately, the set of all different parenthesizations of the expression for A is equal in number to the set of all different binary trees that have n leaves. This number is exponential in n. Thus, this straightforward ("brute-force") algorithm runs in exponential time, for there are an exponential number of ways to parenthesize an associative arithmetic expression.

We can significantly improve the performance achieved by the brute-force algorithm, however, by making a few observations about the nature of the matrix chain-product problem. The first is that the problem can be split into subproblems. In this case, we can define a number of different subproblems, each of which is to compute the best parenthesization for some subexpression Ai * Ai+1 *** Aj. As a concise notation, we use N_{i,j} to denote the minimum number of multiplications needed to compute this subexpression. Thus, the original matrix chain-product problem can be characterized as that of computing the value of N_{0,n-1}. This observation is important, but we need one more in order to apply the dynamic programming technique.

Characterizing Optimal Solutions

The other important observation we can make about the matrix chain-product problem is that it is possible to characterize an optimal solution to a particular subproblem in terms of optimal solutions to its subproblems. We call this property the subproblem optimality condition.

In the case of the matrix chain-product problem, we observe that, no matter how we parenthesize a subexpression, there has to be some final matrix multiplication that we perform. That is, a full parenthesization of a subexpression Ai * Ai+1 *** Aj has to be of the form (Ai *** Ak) * (Ak+1 *** Aj), for some k in {i, i+1, ..., j-1}. Moreover, for whichever k is the correct one, the products (Ai *** Ak) and (Ak+1 *** Aj) must also be solved optimally. If this were not so, then there would be a global optimal that had one of these subproblems solved suboptimally. But this is impossible, since we could then reduce the total number of multiplications by replacing the current subproblem solution by an optimal solution for the subproblem. This observation implies a way of explicitly defining the optimization problem for N_{i,j} in terms of other optimal subproblem solutions. Namely, we can compute N_{i,j} by considering each place k where we could put the final multiplication and taking the minimum over all such choices.
Designing a Dynamic Programming Algorithm

We can therefore characterize the optimal subproblem solution, N_{i,j}, as

  N_{i,j} = min over i <= k < j of { N_{i,k} + N_{k+1,j} + d_i * d_{k+1} * d_{j+1} },

where N_{i,i} = 0, since no work is needed for a single matrix. That is, N_{i,j} is the minimum, taken over all possible places to perform the final multiplication, of the number of multiplications needed to compute each subexpression plus the number of multiplications needed to perform the final matrix multiplication.

Notice that there is a sharing of subproblems going on that prevents us from dividing the problem into completely independent subproblems (as we would need to do to apply the divide-and-conquer technique). We can, nevertheless, use the equation for N_{i,j} to derive an efficient algorithm by computing N_{i,j} values in a bottom-up fashion, and storing intermediate solutions in a table of N_{i,j} values. We can begin simply enough by assigning N_{i,i} = 0 for i = 0, 1, ..., n-1. We can then apply the general equation for N_{i,j} to compute N_{i,i+1} values, since they depend only on N_{i,i} and N_{i+1,i+1} values that are available. Given the N_{i,i+1} values, we can then compute the N_{i,i+2} values, and so on. Therefore, we can build N_{i,j} values up from previously computed values until we can finally compute the value of N_{0,n-1}, which is the number that we are searching for. A Python implementation of this dynamic programming solution is given below.

  def matrix_chain(d):
    """d is a list of n+1 numbers such that the size of the kth matrix is d[k]-by-d[k+1].

    Return an n-by-n table such that N[i][j] represents the minimum number of
    multiplications needed to compute the product of Ai through Aj inclusive.
    """
    n = len(d) - 1                             # number of matrices
    N = [[0] * n for i in range(n)]            # initialize n-by-n result to zero
    for b in range(1, n):                      # number of products in subchain
      for i in range(n - b):                   # start of subchain
        j = i + b                              # end of subchain
        N[i][j] = min(N[i][k] + N[k+1][j] + d[i]*d[k+1]*d[j+1] for k in range(i, j))
    return N

  Code Fragment: A dynamic programming algorithm for the matrix chain-product problem.

Thus, we can compute N_{0,n-1} with an algorithm that consists primarily of three nested loops (the third of which computes the min term). Each of these loops iterates at most n times per execution, with a constant amount of additional work within. Therefore, the total running time of this algorithm is O(n^3).
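For instance (a hypothetical check against the earlier example with B, C, and D of sizes 2 x 10, 10 x 50, and 50 x 20):

  N = matrix_chain([2, 10, 50, 20])   # d = [2, 10, 50, 20] describes the three matrices
  print(N[0][2])                      # 3000, achieved by the parenthesization (B*C)*D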
DNA and Text Sequence Alignment

A common text-processing problem, which arises in genetics and software engineering, is to test the similarity between two text strings. In a genetics application, the two strings could correspond to two strands of DNA, for which we want to compute similarities. Likewise, in a software engineering application, the two strings could come from two versions of source code for the same program, for which we want to determine changes made from one version to the next. Indeed, determining the similarity between two strings is so common that the Unix and Linux operating systems have a built-in program, named diff, for comparing text files.

Given a string X = x0 x1 x2 ... xn-1, a subsequence of X is any string that is of the form x_{i1} x_{i2} ... x_{ik}, where i_j < i_{j+1}; that is, it is a sequence of characters that are not necessarily contiguous but are nevertheless taken in order from X. For example, the string AAAG is a subsequence of the string CGATAATTGAGA.

The DNA and text similarity problem we address here is the longest common subsequence (LCS) problem. In this problem, we are given two character strings, X = x0 x1 x2 ... xn-1 and Y = y0 y1 y2 ... ym-1, over some alphabet (such as the alphabet {A, C, G, T} common in computational genetics) and are asked to find a longest string S that is a subsequence of both X and Y.

One way to solve the longest common subsequence problem is to enumerate all subsequences of X and take the largest one that is also a subsequence of Y. Since each character of X is either in or not in a subsequence, there are potentially 2^n different subsequences of X, each of which requires O(m) time to determine whether it is a subsequence of Y. Thus, this brute-force approach yields an exponential-time algorithm that runs in O(2^n * m) time, which is very inefficient. (The subsequence test itself is easy; see the sketch at the end of this subsection.) Fortunately, the LCS problem is efficiently solvable using dynamic programming.

The Components of a Dynamic Programming Solution

As mentioned above, the dynamic programming technique is used primarily for optimization problems, where we wish to find the "best" way of doing something. We can apply the dynamic programming technique in such situations if the problem has certain properties:

Simple Subproblems: There has to be some way of repeatedly breaking the global optimization problem into subproblems. Moreover, there should be a way to parameterize subproblems with just a few indices, like i, j, k, and so on.

Subproblem Optimization: An optimal solution to the global problem must be a composition of optimal subproblem solutions.

Subproblem Overlap: Optimal solutions to unrelated subproblems can contain subproblems in common.
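Here is the promised subsequence test, a linear-time helper of our own (not from the original text); it shows that the expense of the brute-force approach comes purely from the 2^n candidate subsequences, not from each individual check:

  def is_subsequence(A, B):
      """Return True if string A is a subsequence of string B."""
      it = iter(B)                      # each character of B is consumed at most once
      return all(ch in it for ch in A)  # 'in' advances the iterator past a match

  print(is_subsequence("AAAG", "CGATAATTGAGA"))   # True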
Applying Dynamic Programming to the LCS Problem

Recall that in the LCS problem, we are given two character strings, X and Y, of length n and m, respectively, and are asked to find a longest string S that is a subsequence of both X and Y. Since X and Y are character strings, we have a natural set of indices with which to define subproblems: indices into the strings X and Y. Let us define a subproblem, therefore, as that of computing the value L_{j,k}, which we will use to denote the length of a longest string that is a subsequence of both prefixes X[0:j] and Y[0:k]. This definition allows us to rewrite L_{j,k} in terms of optimal subproblem solutions. The definition depends on which of two cases we are in.

  Figure: The two cases in the longest common subsequence algorithm for computing L_{j,k}: (a) x_{j-1} = y_{k-1}, so L_{j,k} = L_{j-1,k-1} + 1; (b) x_{j-1} != y_{k-1}, so L_{j,k} = max(L_{j-1,k}, L_{j,k-1}). (The example strings of the drawing are not reproduced here.)

- x_{j-1} = y_{k-1}: In this case, we have a match between the last character of X[0:j] and the last character of Y[0:k]. We claim that this character belongs to a longest common subsequence of X[0:j] and Y[0:k]. To justify this claim, let us suppose it is not true. There has to be some longest common subsequence x_{a1} x_{a2} ... x_{ac} = y_{b1} y_{b2} ... y_{bc}. If x_{ac} = x_{j-1} or y_{bc} = y_{k-1}, then we get the same sequence by setting a_c = j - 1 and b_c = k - 1. Alternately, if x_{ac} != x_{j-1} and y_{bc} != y_{k-1}, then we can get an even longer common subsequence by adding x_{j-1} = y_{k-1} to the end. Thus, a longest common subsequence of X[0:j] and Y[0:k] ends with x_{j-1}. Therefore, we set

    L_{j,k} = L_{j-1,k-1} + 1    if x_{j-1} = y_{k-1}.

- x_{j-1} != y_{k-1}: In this case, we cannot have a common subsequence that includes both x_{j-1} and y_{k-1}. That is, we can have a common subsequence end with x_{j-1} or one that ends with y_{k-1} (or possibly neither), but certainly not both. Therefore, we set

    L_{j,k} = max{ L_{j-1,k}, L_{j,k-1} }    if x_{j-1} != y_{k-1}.

We note that because the slice X[0:0] is the empty string, L_{0,k} = 0 for k = 0, 1, ..., m; similarly, because the slice Y[0:0] is the empty string, L_{j,0} = 0 for j = 0, 1, ..., n.
The LCS Algorithm

The definition of L_{j,k} satisfies subproblem optimization, for we cannot have a longest common subsequence without also having longest common subsequences for the subproblems. Also, it uses subproblem overlap, because a subproblem solution L_{j,k} can be used in several other problems (namely, the problems L_{j+1,k}, L_{j,k+1}, and L_{j+1,k+1}).

Turning this definition of L_{j,k} into an algorithm is actually quite straightforward. We create an (n+1) x (m+1) array, L, defined for 0 <= j <= n and 0 <= k <= m. We initialize all entries to 0, in particular so that all entries of the form L_{0,k} and L_{j,0} are zero. Then, we iteratively build up values in L until we have L_{n,m}, the length of a longest common subsequence of X and Y. We give a Python implementation of this algorithm below.

  def lcs(X, Y):
    """Return table such that L[j][k] is length of LCS for X[0:j] and Y[0:k]."""
    n, m = len(X), len(Y)                    # introduce convenient notations
    L = [[0] * (m+1) for k in range(n+1)]    # (n+1) x (m+1) table
    for j in range(n):
      for k in range(m):
        if X[j] == Y[k]:                     # align this match
          L[j+1][k+1] = L[j][k] + 1
        else:                                # choose to ignore one character
          L[j+1][k+1] = max(L[j][k+1], L[j+1][k])
    return L

  Code Fragment: A dynamic programming algorithm for the LCS problem.

The running time of the LCS algorithm is easy to analyze, for it is dominated by two nested for loops, with the outer one iterating n times and the inner one iterating m times. Since the if-statement and assignment inside the loop each requires O(1) primitive operations, this algorithm runs in O(nm) time. Thus, the dynamic programming technique can be applied to the longest common subsequence problem to improve significantly over the exponential-time brute-force solution to the LCS problem.
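As a quick illustration (our own check, using the strings that appear in the figure below), lcs reports a longest common subsequence of length 6 for these two DNA strings:

  X = "GTTCCTAATA"
  Y = "CGATAATTGAGA"
  L = lcs(X, Y)
  print(L[len(X)][len(Y)])    # 6, the length of a longest common subsequence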
The lcs function above computes the length of the longest common subsequence (stored as L_{n,m}), but not the subsequence itself. Fortunately, it is easy to extract the actual longest common subsequence if given the complete table of L_{j,k} values computed by the lcs function. The solution can be reconstructed back to front by reverse engineering the calculation of length L_{n,m}. At any position L_{j,k}, if x_{j-1} = y_{k-1}, then the length is based on the common subsequence associated with length L_{j-1,k-1}, followed by common character x_{j-1}. We can record x_{j-1} as part of the sequence, and then continue the analysis from L_{j-1,k-1}. If x_{j-1} != y_{k-1}, then we can move to the larger of L_{j,k-1} and L_{j-1,k}. We continue this process until reaching some L_{j,k} = 0 (for example, if j or k is 0 as a boundary case). A Python implementation of this strategy is given below. This function constructs a longest common subsequence in O(n + m) additional time, since each pass of the while loop decrements either j or k (or both).

  def lcs_solution(X, Y, L):
    """Return the longest common subsequence of X and Y, given LCS table L."""
    solution = []
    j, k = len(X), len(Y)
    while L[j][k] > 0:                   # common characters remain
      if X[j-1] == Y[k-1]:
        solution.append(X[j-1])
        j -= 1
        k -= 1
      elif L[j-1][k] >= L[j][k-1]:
        j -= 1
      else:
        k -= 1
    return ''.join(reversed(solution))   # return left-to-right version

  Code Fragment: Reconstructing the longest common subsequence.

  Figure: Illustration of the algorithm for constructing a longest common subsequence from the table L, for X = "GTTCCTAATA" and Y = "CGATAATTGAGA". A diagonal step on the highlighted path represents the use of a common character (with that character's respective indices in the sequences highlighted in the margins). (The table drawing is not reproduced here.)
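Continuing the hypothetical example above, the reconstruction yields one (of possibly several) longest common subsequences:

  print(lcs_solution(X, Y, L))   # 'GTTTAA', a longest common subsequence of length 6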
Text Compression and the Greedy Method

In this section, we consider an important text-processing task, text compression. In this problem, we are given a string X defined over some alphabet, such as the ASCII or Unicode character sets, and we want to efficiently encode X into a small binary string Y (using only the characters 0 and 1). Text compression is useful in any situation where we wish to reduce bandwidth for digital communications, so as to minimize the time needed to transmit our text. Likewise, text compression is useful for storing large documents more efficiently, so as to allow a fixed-capacity storage device to contain as many documents as possible.

The method for text compression explored in this section is the Huffman code. Standard encoding schemes, such as ASCII, use fixed-length binary strings to encode characters (with 7 or 8 bits in the traditional or extended ASCII systems, respectively). The Unicode system was originally proposed as a 16-bit fixed-length representation, although common encodings reduce the space usage by allowing common groups of characters, such as those from the ASCII system, to be represented with fewer bits. The Huffman code saves space over a fixed-length encoding by using short code-word strings to encode high-frequency characters and long code-word strings to encode low-frequency characters. Furthermore, the Huffman code uses a variable-length encoding specifically optimized for a given string X over any alphabet. The optimization is based on the use of character frequencies, where we have, for each character c, a count f(c) of the number of times c appears in the string X.

To encode the string X, we convert each character in X to a variable-length code-word, and we concatenate all these code-words in order to produce the encoding Y for X. In order to avoid ambiguities, we insist that no code-word in our encoding be a prefix of another code-word in our encoding. Such a code is called a prefix code, and it simplifies the decoding of Y to retrieve X. Even with this restriction, the savings produced by a variable-length prefix code can be significant, particularly if there is a wide variance in character frequencies (as is the case for natural language text in almost every written language).

Huffman's algorithm for producing an optimal variable-length prefix code for X is based on the construction of a binary tree T that represents the code. Each edge in T represents a bit in a code-word, with an edge to a left child representing a "0" and an edge to a right child representing a "1". Each leaf v is associated with a specific character, and the code-word for that character is defined by the sequence of bits associated with the edges in the path from the root of T to v. Each leaf v has a frequency, f(v), which is simply the frequency in X of the character associated with v. In addition, we give each internal node v in T a frequency, f(v), that is the sum of the frequencies of all the leaves in the subtree rooted at v.
  Figure: An illustration of an example Huffman code for the input string X = "a fast runner need never be afraid of the dark": (a) frequency of each character of X; (b) Huffman tree T for string X. The code for a character c is obtained by tracing the path from the root of T to the leaf where c is stored, and associating a left child with 0 and a right child with 1. (The frequency table and tree drawing are not reproduced here.)

The Huffman Coding Algorithm

The Huffman coding algorithm begins with each of the d distinct characters of the string X to encode being the root node of a single-node binary tree. The algorithm proceeds in a series of rounds. In each round, the algorithm takes the two binary trees with the smallest frequencies and merges them into a single binary tree. It repeats this process until only one tree is left. (See the pseudo-code below.)

Each iteration of the while loop in Huffman's algorithm can be implemented in O(log d) time using a priority queue represented with a heap. In addition, each iteration takes two nodes out of Q and adds one in, a process that will be repeated d - 1 times before exactly one node is left in Q. Thus, this algorithm runs in O(n + d log d) time. Although a full justification of this algorithm's correctness is beyond our scope here, we note that its intuition comes from a simple idea: any optimal code can be converted into an optimal code in which the code-words for the two lowest-frequency characters, a and b, differ only in their last bit. Repeating the argument for a string with a and b replaced by a character c gives the following:

Proposition: Huffman's algorithm constructs an optimal prefix code for a string of length n with d distinct characters in O(n + d log d) time.
  Algorithm Huffman(X):
    Input: String X of length n with d distinct characters
    Output: Coding tree for X

    Compute the frequency f(c) of each character c of X.
    Initialize a priority queue Q.
    for each character c in X do
      Create a single-node binary tree T storing c.
      Insert T into Q with key f(c).
    while len(Q) > 1 do
      (f1, T1) = Q.remove_min()
      (f2, T2) = Q.remove_min()
      Create a new binary tree T with left subtree T1 and right subtree T2.
      Insert T into Q with key f1 + f2.
    (f, T) = Q.remove_min()
    return tree T

  Code Fragment: The Huffman coding algorithm. (A concrete Python rendering is sketched at the end of this section.)

The Greedy Method

Huffman's algorithm for building an optimal encoding is an example application of an algorithmic design pattern called the greedy method. This design pattern is applied to optimization problems, where we are trying to construct some structure while minimizing or maximizing some property of that structure.

The general formula for the greedy method pattern is almost as simple as that for the brute-force method. In order to solve a given optimization problem using the greedy method, we proceed by a sequence of choices. The sequence starts from some well-understood starting condition, and computes the cost for that initial condition. The pattern then asks that we iteratively make additional choices by identifying the decision that achieves the best cost improvement from all of the choices that are currently possible. This approach does not always lead to an optimal solution.

But there are several problems that it does work for, and such problems are said to possess the greedy-choice property. This is the property that a global optimal condition can be reached by a series of locally optimal choices (that is, choices that are each the current best from among the possibilities available at the time), starting from a well-defined starting condition. The problem of computing an optimal variable-length prefix code is just one example of a problem that possesses the greedy-choice property.
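Here is the promised Python rendering of the pseudo-code: a minimal sketch of our own (not from the original text), using the standard-library heapq module as the priority queue. The tuple-based tree representation and the helper codewords are our own conventions.

  import heapq
  from collections import Counter

  def huffman(X):
      """Return the root of a Huffman coding tree for a nonempty string X.

      A tree is either a leaf (a one-character string) or a pair (left, right);
      the counter i breaks frequency ties so heapq never compares trees directly.
      """
      freq = Counter(X)                              # frequency f(c) of each character
      Q = [(f, i, c) for i, (c, f) in enumerate(freq.items())]
      heapq.heapify(Q)                               # one single-node tree per character
      i = len(Q)
      while len(Q) > 1:
          f1, _, T1 = heapq.heappop(Q)               # two trees of smallest frequency
          f2, _, T2 = heapq.heappop(Q)
          heapq.heappush(Q, (f1 + f2, i, (T1, T2)))  # merge them under a new root
          i += 1
      return Q[0][2]

  def codewords(T, prefix=""):
      """Yield (character, code-word) pairs by tracing root-to-leaf paths in T."""
      if isinstance(T, str):                         # a leaf stores a character
          yield T, prefix or "0"                     # degenerate single-character case
      else:
          left, right = T
          yield from codewords(left, prefix + "0")   # left edge encodes a 0
          yield from codewords(right, prefix + "1")  # right edge encodes a 1

  tree = huffman("a fast runner need never be afraid of the dark")
  for c, w in sorted(codewords(tree)):
      print(repr(c), w)                              # high-frequency characters get short codes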
Tries

The pattern-matching algorithms presented earlier in this chapter speed up the search in a text by preprocessing the pattern (to compute the failure function in the Knuth-Morris-Pratt algorithm or the last function in the Boyer-Moore algorithm). In this section, we take a complementary approach, namely, we present string searching algorithms that preprocess the text. This approach is suitable for applications where a series of queries is performed on a fixed text, so that the initial cost of preprocessing the text is compensated by a speedup in each subsequent query (for example, a Web site that offers pattern matching in Shakespeare's Hamlet or a search engine that offers Web pages on the Hamlet topic).

A trie (pronounced "try") is a tree-based data structure for storing strings in order to support fast pattern matching. The main application for tries is in information retrieval. Indeed, the name "trie" comes from the word "retrieval." In an information retrieval application, such as a search for a certain DNA sequence in a genomic database, we are given a collection S of strings, all defined using the same alphabet. The primary query operations that tries support are pattern matching and prefix matching. The latter operation involves being given a string X, and looking for all the strings in S that contain X as a prefix.

Standard Tries

Let S be a set of s strings from alphabet Sigma such that no string in S is a prefix of another string. A standard trie for S is an ordered tree T with the following properties (see the figure below):

- Each node of T, except the root, is labeled with a character of Sigma.
- The children of an internal node of T have distinct labels.
- T has s leaves, each associated with a string of S, such that the concatenation of the labels of the nodes on the path from the root to a leaf v of T yields the string of S associated with v.

Thus, a trie T represents the strings of S with paths from the root to the leaves of T. Note the importance of assuming that no string in S is a prefix of another string. This ensures that each string of S is uniquely associated with a leaf of T. (This is similar to the restriction for prefix codes with Huffman coding, as described in the previous section.) We can always satisfy this assumption by adding a special character that is not in the original alphabet at the end of each string.

An internal node in a standard trie T can have anywhere between 1 and |Sigma| children. There is an edge going from the root r to one of its children for each character that is first in some string in the collection S. In addition, a path from the root of T to an internal node v at depth k corresponds to a k-character prefix X[0:k] of a string X of S.
In fact, for each character c that can follow the prefix X[0:k] in a string of the set S, there is a child of v labeled with character c. In this way, a trie concisely stores the common prefixes that exist among a set of strings.

  Figure: Standard trie for the strings {bear, bell, bid, bull, buy, sell, stock, stop}. (The tree drawing is not reproduced here.)

As a special case, if there are only two characters in the alphabet, then the trie is essentially a binary tree, with some internal nodes possibly having only one child (that is, it may be an improper binary tree). In general, although it is possible that an internal node has up to |Sigma| children, in practice the average degree of such nodes is likely to be much smaller. For example, the trie shown in the figure has several internal nodes with only one child. On larger data sets, the average degree of nodes is likely to get smaller at greater depths of the tree, because there may be fewer strings sharing the common prefix, and thus fewer continuations of that pattern. Furthermore, in many languages, there will be character combinations that are unlikely to naturally occur.

The following proposition provides some important structural properties of a standard trie:

Proposition: A standard trie storing a collection S of s strings of total length n from an alphabet Sigma has the following properties:

- The height of T is equal to the length of the longest string in S.
- Every internal node of T has at most |Sigma| children.
- T has s leaves.
- The number of nodes of T is at most n + 1.

The worst case for the number of nodes of a trie occurs when no two strings share a common nonempty prefix; that is, except for the root, all internal nodes have one child.
A trie T for a set S of strings can be used to implement a set or map whose keys are the strings of S. Namely, we perform a search in T for a string X by tracing down from the root the path indicated by the characters in X. If this path can be traced and terminates at a leaf node, then we know X is a key in the map. For example, in the trie of the figure above, tracing the path for "bull" ends up at a leaf. If the path cannot be traced or the path can be traced but terminates at an internal node, then X is not a key in the map. In the same example, the path for "bet" cannot be traced and the path for "be" ends at an internal node; neither such word is in the map.

It is easy to see that the running time of the search for a string of length m is O(m * |Sigma|), because we visit at most m + 1 nodes of T and we spend O(|Sigma|) time at each node determining the child having the subsequent character as a label. The O(|Sigma|) upper bound on the time to locate a child with a given label is achievable, even if the children of a node are unordered, since there are at most |Sigma| children. We can improve the time spent at a node to be O(log |Sigma|) or expected O(1), by mapping characters to children using a secondary search table or hash table at each node, or by using a direct lookup table of size |Sigma| at each node, if |Sigma| is sufficiently small (as is the case for DNA strings). For these reasons, we typically expect a search for a string of length m to run in O(m) time.

From the discussion above, it follows that we can use a trie to perform a special type of pattern matching, called word matching, where we want to determine whether a given pattern matches one of the words of the text exactly. Word matching differs from standard pattern matching because the pattern cannot match an arbitrary substring of the text, only one of its words. To accomplish this, each word of the original document must be added to the trie. (See the figure below.) A simple extension of this scheme supports prefix-matching queries. However, arbitrary occurrences of the pattern in the text (for example, the pattern is a proper suffix of a word or spans two words) cannot be efficiently performed.

To construct a standard trie for a set S of strings, we can use an incremental algorithm that inserts the strings one at a time. Recall the assumption that no string of S is a prefix of another string. To insert a string X into the current trie T, we trace the path associated with X in T, creating a new chain of nodes to store the remaining characters of X when we get stuck. The running time to insert X with length m is similar to a search, with worst-case O(m * |Sigma|) performance, or expected O(m) if using secondary hash tables at each node. Thus, constructing the entire trie for set S takes expected O(n) time, where n is the total length of the strings of S. (A small dictionary-based sketch of a trie follows the figure below.)

There is a potential space inefficiency in the standard trie that has prompted the development of the compressed trie, which is also known (for historical reasons) as the Patricia trie. Namely, there are potentially a lot of nodes in the standard trie that have only one child, and the existence of such nodes is a waste. We discuss the compressed trie next.
Figure: word matching with a standard trie. (a) A text to be searched (articles and prepositions, which are also known as stop words, are excluded). (b) A standard trie for the words in the text, with each leaf augmented with the indices at which the given word begins in the text. For example, the leaf for the word stock records each index at which that word begins.
Compressed Tries

A compressed trie is similar to a standard trie, but it ensures that each internal node in the trie has at least two children. It enforces this rule by compressing chains of single-child nodes into individual edges. (See the figure below.) Let T be a standard trie. We say that an internal node v of T is redundant if v has one child and is not the root. For example, the standard trie shown earlier has eight redundant nodes. Let us also say that a chain of k ≥ 2 edges,

    (v0, v1)(v1, v2) ... (vk−1, vk),

is redundant if:
- vi is redundant for i = 1, ..., k − 1.
- v0 and vk are not redundant.

We can transform T into a compressed trie by replacing each redundant chain (v0, v1) ... (vk−1, vk) of k ≥ 2 edges with a single edge (v0, vk), relabeling vk with the concatenation of the labels of nodes v1, ..., vk.

Figure: compressed trie for the strings {bear, bell, bid, bull, buy, sell, stock, stop}. (Compare this with the standard trie shown earlier.) In addition to compression at the leaves, notice the internal node with label "to" shared by the words stock and stop.

Thus, nodes in a compressed trie are labeled with strings, which are substrings of strings in the collection, rather than with individual characters. The advantage of a compressed trie over a standard trie is that the number of nodes of the compressed trie is proportional to the number of strings and not to their total length, as shown in the following proposition (compare with the proposition for standard tries):

Proposition: A compressed trie storing a collection S of s strings from an alphabet of size d has the following properties:
- Every internal node of T has at least two children and at most d children.
- T has s leaves.
- The number of nodes of T is O(s).
The attentive reader may wonder whether the compression of paths provides any significant advantage, since it is offset by a corresponding expansion of the node labels. Indeed, a compressed trie is truly advantageous only when it is used as an auxiliary index structure over a collection of strings already stored in a primary structure, and is not required to actually store all the characters of the strings in the collection.

Suppose, for example, that the collection S of strings is an array of strings S[0], S[1], ..., S[s−1]. Instead of storing the label X of a node explicitly, we represent it implicitly by a combination of three integers (i, j, k), such that X = S[i][j:k]; that is, X is the slice of S[i] consisting of the characters from the jth up to but not including the kth. (See the example in the figure below. Also compare with the standard trie shown earlier.)

Figure: (a) a collection S of strings stored in an array; (b) a compact representation of the compressed trie for S.

This additional compression scheme allows us to reduce the total space for the trie itself from O(n) for the standard trie to O(s) for the compressed trie, where n is the total length of the strings in S and s is the number of strings in S. We must still store the different strings in S, of course, but we nevertheless reduce the space for the trie. Searching in a compressed trie is not necessarily faster than in a standard trie, since there is still a need to compare every character of the desired pattern with the potentially multi-character labels while traversing paths in the trie.
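As a small illustration of this implicit labeling, decoding a node label is just a Python slice. The array S below is a hypothetical stand-in for the figure's data, chosen only so the example is runnable:

# A hypothetical collection of strings, standing in for the figure's array.
S = ['see', 'bear', 'sell', 'stock', 'bull', 'buy', 'bid', 'hear', 'bell', 'stop']

def node_label(i, j, k):
  """Return the substring represented by the compact label (i, j, k)."""
  return S[i][j:k]          # characters j up to (but not including) k

print(node_label(3, 1, 3))  # 'to', the label shared by stock and stop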
Suffix Tries

One of the primary applications for tries is for the case when the strings in the collection S are all the suffixes of a string X. Such a trie is called the suffix trie (also known as a suffix tree or position tree) of string X. For example, the figure below shows the suffix trie for the eight suffixes of the string "minimize". For a suffix trie, the compact representation presented in the previous section can be further simplified. Namely, the label of each vertex is a pair (j, k) indicating the string X[j:k]. (See the figure.) To satisfy the rule that no suffix of X is a prefix of another suffix, we can add a special character, denoted with $, that is not in the original alphabet at the end of X (and thus to every suffix). That is, if string X has length n, we build a trie for the set of n strings X[j:n] + $, for j = 0, ..., n − 1.

Saving Space

Using a suffix trie allows us to save space over a standard trie by using several space compression techniques, including those used for the compressed trie. The advantage of the compact representation of tries now becomes apparent for suffix tries. Since the total length of the suffixes of a string X of length n is

    1 + 2 + ... + n = n(n + 1)/2,

storing all the suffixes of X explicitly would take O(n²) space. Even so, the suffix trie represents these strings implicitly in O(n) space, as formally stated in the following proposition.

Proposition: The compact representation of a suffix trie T for a string X of length n uses O(n) space.

Construction

We can construct the suffix trie for a string of length n with an incremental algorithm like the one given in the previous section. This construction takes O(|Σ|·n²) time, because the total length of the suffixes is quadratic in n. However, the (compact) suffix trie for a string of length n can be constructed in O(n) time with a specialized algorithm, different from the one for general tries. This linear-time construction algorithm is fairly complex, however, and is not reported here. Still, we can take advantage of the existence of this fast construction algorithm when we want to use a suffix trie to solve other problems.
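The quadratic-time incremental construction is easy to sketch by reusing the StandardTrie class from the earlier example; the linear-time specialized algorithm is well beyond this sketch. The terminal '$' is appended so that no suffix is a prefix of another:

def build_suffix_trie(X):
  """Naive O(|Sigma| * n^2) construction: insert every suffix of X + '$'."""
  T = StandardTrie()           # sketch class from the earlier example
  X = X + '$'                  # terminal symbol not in the alphabet
  for j in range(len(X)):
    T.insert(X[j:])            # insert suffix X[j:]
  return T

def is_substring(T, P):
  """Return True if pattern P occurs in the text the trie was built from."""
  node = T._root
  for c in P:
    node = node._children.get(c)
    if node is None:
      return False
  return True                  # the path for P could be traced

trie = build_suffix_trie('minimize')
print(is_substring(trie, 'mize'))   # True
print(is_substring(trie, 'mizi'))   # False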
Figure: (a) suffix trie T for the string X = "minimize"; (b) compact representation of T, where a pair (j, k) denotes the slice X[j:k] of the reference string.

Using a Suffix Trie

The suffix trie T for a string X can be used to efficiently perform pattern-matching queries on text X. Namely, we can determine whether a pattern P is a substring of X by trying to trace a path associated with P in T. P is a substring of X if and only if such a path can be traced. The search down the trie T assumes that nodes in T store some additional information, with respect to the compact representation of the suffix trie: if node v has label (j, k) and Y is the string of length y associated with the path from the root to v (included), then X[k−y:k] = Y. This property ensures that we can easily compute the start index of the pattern in the text when a match occurs.
search engine indexing the world wide web contains huge collection of text documents (web pagesinformation about these pages are gathered by program called web crawlerwhich then stores this information in special dictionary database web search engine allows users to retrieve relevant information from this databasethereby identifying relevant pages on the web containing given keywords in this sectionwe present simplified model of search engine inverted files the core information stored by search engine is dictionarycalled an inverted index or inverted filestoring key-value pairs (wl)where is word and is collection of pages containing word the keys (wordsin this dictionary are called index terms and should be set of vocabulary entries and proper nouns as large as possible the elements in this dictionary are called occurrence lists and should cover as many web pages as possible we can efficiently implement an inverted index with data structure consisting of the following an array storing the occurrence lists of the terms (in no particular order compressed trie for the set of index termswhere each leaf stores the index of the occurrence list of the associated term the reason for storing the occurrence lists outside the trie is to keep the size of the trie data structure sufficiently small to fit in internal memory insteadbecause of their large total sizethe occurrence lists have to be stored on disk with our data structurea query for single keyword is similar to wordmatching query (section namelywe find the keyword in the trie and we return the associated occurrence list when multiple keywords are given and the desired output are the pages containing all the given keywordswe retrieve the occurrence list of each keyword using the trie and return their intersection to facilitate the intersection computationeach occurrence list should be implemented with sequence sorted by address or with mapto allow efficient set operations in addition to the basic task of returning list of pages containing given keywordssearch engines provide an important additional service by ranking the pages returned by relevance devising fast and accurate ranking algorithms for search engines is major challenge for computer researchers and electronic commerce companies
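A minimal sketch of the inverted-index scheme described above is shown below. For simplicity it uses a plain dictionary in place of the compressed trie of index terms, and sorted lists of page identifiers as occurrence lists, so that a multi-keyword query is a set intersection; the names and the toy data are our own:

# Occurrence lists: index term -> sorted list of page identifiers.
inverted_index = {
  'graph':  [2, 5, 9, 14],
  'trie':   [5, 9, 21],
  'search': [1, 5, 14, 21],
}

def query(*keywords):
  """Return the pages containing all keywords (intersection of lists)."""
  try:
    lists = [inverted_index[w] for w in keywords]
  except KeyError:
    return [ ]                    # some keyword is not an index term
  result = set(lists[0])
  for occ in lists[1:]:
    result &= set(occ)            # set intersection of occurrence lists
  return sorted(result)

print(query('graph', 'search'))   # [5, 14]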
exercises for help with exercisesplease visit the sitewww wiley com/college/goodrich reinforcement - list the prefixes of the string ="aaabbaaathat are also suffixes of - what is the longest (properprefix of the string "cgtacgttcgtacgthat is also suffix of this stringr- draw figure illustrating the comparisons done by brute-force pattern matching for the text "aaabaadaabaaaand pattern "aabaaar- repeat the previous problem for the boyer-moore algorithmnot counting the comparisons made to compute the last(cfunction - repeat exercise - for the knuth-morris-pratt algorithmnot counting the comparisons made to compute the failure function - compute map representing the last function used in the boyer-moore pattern-matching algorithm for characters in the pattern string"the quick brown fox jumped over lazy catr- compute table representing the knuth-morris-pratt failure function for the pattern string "cgtacgttcgtacr- what is the best way to multiply chain of matrices with dimensions that are and show your work - in figure we illustrate that gtttaa is longest common subsequence for the given strings and howeverthat answer is not unique give another common subsequence of and having length six - show the longest common subsequence array for the two stringsx "skullandbonesy "lullabybabieswhat is longest common subsequence between these stringsr- draw the frequency array and huffman tree for the following string"dogs do not spot hot pots or catsr- draw standard trie for the following set of stringsababbabacccccbbaaaacaabbaacccbcccbca - draw compressed trie for the strings given in the previous problem - draw the compact representation of the suffix trie for the string"minimize minime
creativity - describe an example of text of length and pattern of length such that force the brute-force pattern-matching algorithm achieves running time that is (nmc- adapt the brute-force pattern-matching algorithm in order to implement functionrfind brute( , )that returns the index at which the rightmost occurrence of pattern within text if any - redo the previous problemadapting the boyer-moore pattern-matching algorithm appropriately to implement function rfind boyer moore( ,pc- redo exercise - adapting the knuth-morris-pratt pattern-matching algorithm appropriately to implement function rfind kmp( ,pc- the count method of python' str class reports the maximum number of nonoverlapping occurrences of pattern within string for examplethe call abababa countaba returns (not adapt the brute-force pattern-matching algorithm to implement functioncount brute( , )with similar outcome - redo the previous problemadapting the boyer-moore pattern-matching algorithm in order to implement function count boyer moore( ,pc- redo exercise - adapting the knuth-morris-pratt pattern-matching algorithm appropriately to implement function count kmp( ,pc- give justification of why the compute kmp fail function (code fragment runs in (mtime on pattern of length - let be text of length nand let be pattern of length describe an ( + )-time method for finding the longest prefix of that is substring of - say that pattern of length is circular substring of text of length if is (normalsubstring of or if is equal to the concatenation of suffix of and prefix of that isif there is an index < msuch that [ nt [ kgive an ( )-time algorithm for determining whether is circular substring of - the knuth-morris-pratt pattern-matching algorithm can be modified to run faster on binary strings by redefining the failure function asf (kthe largest such that [ jp is suffix of [ ]where  denotes the complement of the jth bit of describe how to modify the kmp algorithm to be able to take advantage of this new failure function and also give method for computing this failure function show that this method makes at most comparisons between the text and the pattern (as opposed to the comparisons needed by the standard kmp algorithm given in section
- modify the simplified boyer-moore algorithm presented in this using ideas from the kmp algorithm so that it runs in ( mtime - design an efficient algorithm for the matrix chain multiplication problem that outputs fully parenthesized expression for how to multiply the matrices in the chain using the minimum number of operations - native australian named anatjari wishes to cross desert carrying only single water bottle he has map that marks all the watering holes along the way assuming he can walk miles on one bottle of waterdesign an efficient algorithm for determining where anatjari should refill his bottle in order to make as few stops as possible argue why your algorithm is correct - describe an efficient greedy algorithm for making change for specified value using minimum number of coinsassuming there are four denominations of coins (called quartersdimesnickelsand pennies)with values and respectively argue why your algorithm is correct - give an example set of denominations of coins so that greedy changemaking algorithm will not use the minimum number of coins - in the art gallery guarding problem we are given line that represents long hallway in an art gallery we are also given set { xn- of real numbers that specify the positions of paintings in this hallway suppose that single guard can protect all the paintings within distance at most of his or her position (on both sidesdesign an algorithm for finding placement of guards that uses the minimum number of guards to guard all the paintings with positions in - let be convex polygona triangulation of is an addition of diagonals connecting the vertices of so that each interior face is triangle the weight of triangulation is the sum of the lengths of the diagonals assuming that we can compute lengths and add and compare them in constant timegive an efficient algorithm for computing minimum-weight triangulation of - let be text string of length describe an ( )-time method for finding the longest prefix of that is substring of the reversal of - describe an efficient algorithm to find the longest palindrome that is suffix of string of length recall that palindrome is string that is equal to its reversal what is the running time of your methodc- given sequence ( xn- of numbersdescribe an ( )time algorithm for finding longest subsequence (xi xi xik- of numberssuch that xi + that ist is longest decreasing subsequence of - give an efficient algorithm for determining if pattern is subsequence (not substringof text what is the running time of your algorithm
- define the edit distance between two strings and of length and mrespectivelyto be the number of edits that it takes to change into an edit consists of character insertiona character deletionor character replacement for examplethe strings "algorithmand "rhythmhave edit distance design an (nm)-time algorithm for computing the edit distance between and - let and be strings of length and mrespectively define bjkto be the length of the longest common substring of the suffix [ :nand the suffix [ :mdesign an (nm)-time algorithm for computing all the values of bjkfor and - anna has just won contest that allows her to take pieces of candy out of candy store for free anna is old enough to realize that some candy is expensivewhile other candy is relatively cheapcosting much less the jars of candy are numbered so that jar has pieces in itwith price of per piece design an ( )-time algorithm that allows anna to maximize the value of the pieces of candy she takes for her winnings show that your algorithm produces the maximum value for anna - let three integer arraysaband cbe giveneach of size given an arbitrary integer kdesign an ( log )-time algorithm to determine if there exist numbersa in ab in band in csuch that - give an ( )-time algorithm for the previous problem - given string of length and string of length mdescribe an ( )-time algorithm for finding the longest prefix of that is suffix of - give an efficient algorithm for deleting string from standard trie and analyze its running time - give an efficient algorithm for deleting string from compressed trie and analyze its running time - describe an algorithm for constructing the compact representation of suffix triegiven its noncompact representationand analyze its running time projects - use the lcs algorithm to compute the best sequence alignment between some dna stringswhich you can get online from genbank - write program that takes two character strings (which could befor examplerepresentations of dna strandsand computes their edit distanceshowing the corresponding pieces (see exercise -
- perform an experimental analysis of the efficiency (number of character comparisons performedof the brute-force and kmp pattern-matching algorithms for varying-length patterns - perform an experimental analysis of the efficiency (number of character comparisons performedof the brute-force and boyer-moore patternmatching algorithms for varying-length patterns - perform an experimental comparison of the relative speeds of the bruteforcekmpand boyer-moore pattern-matching algorithms document the relative running times on large text documents that are then searched using varying-length patterns - experiment with the efficiency of the find method of python' str class and develop hypothesis about which pattern-matching algorithm it uses try using inputs that are likely to cause both best-case and worst-case running times for various algorithms describe your experiments and your conclusions - implement compression and decompression scheme that is based on huffman coding - create class that implements standard trie for set of ascii strings the class should have constructor that takes list of strings as an argumentand the class should have method that tests whether given string is stored in the trie - create class that implements compressed trie for set of ascii strings the class should have constructor that takes list of strings as an argumentand the class should have method that tests whether given string is stored in the trie - create class that implements prefix trie for an ascii string the class should have constructor that takes string as an argumentand method for pattern matching on the string - implement the simplified search engine described in section for the pages of small web site use all the words in the pages of the site as index termsexcluding stop words such as articlesprepositionsand pronouns - implement search engine for the pages of small web site by adding page-ranking feature to the simplified search engine described in section your page-ranking feature should return the most relevant pages first use all the words in the pages of the site as index termsexcluding stop wordssuch as articlesprepositionsand pronouns
notes the kmp algorithm is described by knuthmorrisand pratt in their journal article [ ]and boyer and moore describe their algorithm in journal article published the same year [ in their articlehoweverknuth et al [ also prove that the boyer-moore algorithm runs in linear time more recentlycole [ shows that the boyer-moore algorithm makes at most character comparisons in the worst caseand this bound is tight all of the algorithms discussed above are also discussed in the book by aho [ ]albeit in more theoretical frameworkincluding the methods for regular-expression pattern matching the reader interested in further study of string pattern-matching algorithms is referred to the book by stephen [ and the book by aho [ ]and crochemore and lecroq [ dynamic programming was developed in the operations research community and formalized by bellman [ the trie was invented by morrison [ and is discussed extensively in the classic sorting and searching book by knuth [ the name "patriciais short for "practical algorithm to retrieve information coded in alphanumeric[ mccreight [ shows how to construct suffix tries in linear time an introduction to the field of information retrievalwhich includes discussion of search engines for the webis provided in the book by baeza-yates and ribeiro-neto [
Graph Algorithms

Contents
- Graphs
  - The Graph ADT
- Data Structures for Graphs
  - Edge List Structure
  - Adjacency List Structure
  - Adjacency Map Structure
  - Adjacency Matrix Structure
  - Python Implementation
- Graph Traversals
  - Depth-First Search
  - DFS Implementation and Extensions
  - Breadth-First Search
- Transitive Closure
- Directed Acyclic Graphs
  - Topological Ordering
- Shortest Paths
  - Weighted Graphs
  - Dijkstra's Algorithm
- Minimum Spanning Trees
  - Prim-Jarnik Algorithm
  - Kruskal's Algorithm
  - Disjoint Partitions and Union-Find Structures
- Exercises
graphs graph is way of representing relationships that exist between pairs of objects that isa graph is set of objectscalled verticestogether with collection of pairwise connections between themcalled edges graphs have applications in modeling many domainsincluding mappingtransportationcomputer networksand electrical engineering by the waythis notion of "graphshould not be confused with bar charts and function plotsas these kinds of "graphsare unrelated to the topic of this viewed abstractlya graph is simply set of vertices and collection of pairs of vertices from called edges thusa graph is way of representing connections or relationships between pairs of objects from some set incidentallysome books use different terminology for graphs and refer to what we call vertices as nodes and what we call edges as arcs we use the terms "verticesand "edges edges in graph are either directed or undirected an edge (uvis said to be directed from to if the pair (uvis orderedwith preceding an edge (uvis said to be undirected if the pair (uvis not ordered undirected edges are sometimes denoted with set notationas {uv}but for simplicity we use the pair notation (uv)noting that in the undirected case (uvis the same as (vugraphs are typically visualized by drawing the vertices as ovals or rectangles and the edges as segments or curves connecting pairs of ovals and rectangles the following are some examples of directed and undirected graphs example we can visualize collaborations among the researchers of certain discipline by constructing graph whose vertices are associated with the researchers themselvesand whose edges connect pairs of vertices associated with researchers who have coauthored paper or book (see figure such edges are undirected because coauthorship is symmetric relationthat isif has coauthored something with bthen necessarily has coauthored something with snoeyink garg goldwasser goodrich tamassia tollis vitter preparata chiang figure graph of coauthorship among some authors
example we can associate with an object-oriented program graph whose vertices represent the classes defined in the programand whose edges indicate inheritance between classes there is an edge from vertex to vertex if the class for inherits from the class for such edges are directed because the inheritance relation only goes in one direction (that isit is asymmetricif all the edges in graph are undirectedthen we say the graph is an undirected graph likewisea directed graphalso called digraphis graph whose edges are all directed graph that has both directed and undirected edges is often called mixed graph note that an undirected or mixed graph can be converted into directed graph by replacing every undirected edge (uvby the pair of directed edges (uvand (vuit is often usefulhoweverto keep undirected and mixed graphs represented as they arefor such graphs have several applicationsas in the following example example city map can be modeled as graph whose vertices are intersections or dead endsand whose edges are stretches of streets without intersections this graph has both undirected edgeswhich correspond to stretches of two-way streetsand directed edgeswhich correspond to stretches of one-way streets thusin this waya graph modeling city map is mixed graph example physical examples of graphs are present in the electrical wiring and plumbing networks of building such networks can be modeled as graphswhere each connectorfixtureor outlet is viewed as vertexand each uninterrupted stretch of wire or pipe is viewed as an edge such graphs are actually components of much larger graphsnamely the local power and water distribution networks depending on the specific aspects of these graphs that we are interested inwe may consider their edges as undirected or directedforin principlewater can flow in pipe and current can flow in wire in either direction the two vertices joined by an edge are called the end vertices (or endpointsof the edge if an edge is directedits first endpoint is its origin and the other is the destination of the edge two vertices and are said to be adjacent if there is an edge whose end vertices are and an edge is said to be incident to vertex if the vertex is one of the edge' endpoints the outgoing edges of vertex are the directed edges whose origin is that vertex the incoming edges of vertex are the directed edges whose destination is that vertex the degree of vertex vdenoted deg( )is the number of incident edges of the in-degree and out-degree of vertex are the number of the incoming and outgoing edges of vand are denoted indeg(vand outdeg( )respectively
Example: We can study air transportation by constructing a graph G, called a flight network, whose vertices are associated with airports, and whose edges are associated with flights. (See the figure below.) In graph G, the edges are directed because a given flight has a specific travel direction. The endpoints of an edge e in G correspond respectively to the origin and destination of the flight corresponding to e. Two airports are adjacent in G if there is a flight that flies between them, and an edge e is incident to a vertex v in G if the flight for e flies to or from the airport for v. The outgoing edges of a vertex v correspond to the outbound flights from v's airport, and the incoming edges correspond to the inbound flights to v's airport. Finally, the in-degree of a vertex v of G corresponds to the number of inbound flights to v's airport, and the out-degree of a vertex v in G corresponds to the number of outbound flights.

Figure: an example of a directed graph representing a flight network. The endpoints of the edge labeled UA are LAX and ORD; hence, LAX and ORD are adjacent. The in-degree and out-degree of DFW are the numbers of its incoming and outgoing edges in the drawing.

The definition of a graph refers to the group of edges as a collection, not a set, thus allowing two undirected edges to have the same end vertices, and for two directed edges to have the same origin and the same destination. Such edges are called parallel edges or multiple edges. A flight network can contain parallel edges, such that multiple edges between the same pair of vertices could indicate different flights operating on the same route at different times of the day. Another special type of edge is one that connects a vertex to itself. Namely, we say that an edge (undirected or directed) is a self-loop if its two endpoints coincide. A self-loop may occur in a graph associated with a city map, where it would correspond to a "circle" (a curving street that returns to its starting point).

With few exceptions, graphs do not have parallel edges or self-loops. Such graphs are said to be simple. Thus, we can usually say that the edges of a simple graph are a set of vertex pairs (and not just a collection). Throughout this chapter, we assume that a graph is simple unless otherwise specified.
A path is a sequence of alternating vertices and edges that starts at a vertex and ends at a vertex, such that each edge is incident to its predecessor and successor vertex. A cycle is a path that starts and ends at the same vertex, and that includes at least one edge. We say that a path is simple if each vertex in the path is distinct, and we say that a cycle is simple if each vertex in the cycle is distinct, except for the first and last one. A directed path is a path such that all edges are directed and are traversed along their direction. A directed cycle is similarly defined. For example, in the flight network figure, (BOS, JFK, DFW) describes a directed simple path, and (LAX, ORD, DFW, LAX) describes a directed simple cycle. Note that a directed graph may have a cycle consisting of two edges with opposite direction between the same pair of vertices, such as the pair of opposite edges between ORD and DFW in that figure. A directed graph is acyclic if it has no directed cycles. For example, if we were to remove the edge labeled UA from the graph in the figure, the remaining graph is acyclic. If a graph is simple, we may omit the edges when describing a path P or cycle C, as these are well defined, in which case P is a list of adjacent vertices and C is a cycle of adjacent vertices.

Example: Given a graph G representing a city map, we can model a couple driving to dinner at a recommended restaurant as traversing a path through G. If they know the way, and do not accidentally go through the same intersection twice, then they traverse a simple path in G. Likewise, we can model the entire trip the couple takes, from their home to the restaurant and back, as a cycle. If they go home from the restaurant in a completely different way than how they went, not even going through the same intersection twice, then their entire round trip is a simple cycle. Finally, if they travel along one-way streets for their entire trip, we can model their night out as a directed cycle.

Given vertices u and v of a (directed) graph G, we say that u reaches v, and that v is reachable from u, if G has a (directed) path from u to v. In an undirected graph, the notion of reachability is symmetric; that is to say, u reaches v if and only if v reaches u. However, in a directed graph, it is possible that u reaches v but v does not reach u, because a directed path must be traversed according to the respective directions of the edges.

A graph is connected if, for any two vertices, there is a path between them. A directed graph G is strongly connected if for any two vertices u and v of G, u reaches v and v reaches u. (See the figure below for some examples.)

A subgraph of a graph G is a graph H whose vertices and edges are subsets of the vertices and edges of G, respectively. A spanning subgraph of G is a subgraph of G that contains all the vertices of the graph G. If a graph G is not connected, its maximal connected subgraphs are called the connected components of G. A forest is a graph without cycles. A tree is a connected forest, that is, a connected graph without cycles. A spanning tree of a graph is a spanning subgraph that is a tree. (Note that this definition of a tree is somewhat different from the one given in an earlier chapter, as there is not necessarily a designated root.)
Figure: examples of reachability in a directed graph. (a) A directed path from BOS to LAX is highlighted. (b) A directed cycle (ORD, MIA, DFW, LAX, ORD) is highlighted; its vertices induce a strongly connected subgraph. (c) The subgraph of the vertices and edges reachable from ORD is highlighted. (d) The removal of the dashed edges results in an acyclic directed graph.

Example: Perhaps the most talked about graph today is the Internet, which can be viewed as a graph whose vertices are computers and whose (undirected) edges are communication connections between pairs of computers on the Internet. The computers and the connections between them in a single domain, like wiley.com, form a subgraph of the Internet. If this subgraph is connected, then two users on computers in this domain can send email to one another without having their information packets ever leave their domain. Suppose the edges of this subgraph form a spanning tree. This implies that, if even a single connection goes down (for example, because someone pulls a communication cable out of the back of a computer in this domain), then this subgraph will no longer be connected.
In the propositions that follow, we explore a few important properties of graphs.

Proposition: If G is a graph with m edges and vertex set V, then

    Σ_{v in V} deg(v) = 2m.

Justification: An edge (u,v) is counted twice in the summation above: once by its endpoint u and once by its endpoint v. Thus, the total contribution of the edges to the degrees of the vertices is twice the number of edges.

Proposition: If G is a directed graph with m edges and vertex set V, then

    Σ_{v in V} indeg(v) = Σ_{v in V} outdeg(v) = m.

Justification: In a directed graph, an edge (u,v) contributes one unit to the out-degree of its origin u and one unit to the in-degree of its destination v. Thus, the total contribution of the edges to the out-degrees of the vertices is equal to the number of edges, and similarly for the in-degrees.

We next show that a simple graph with n vertices has O(n²) edges.

Proposition: Let G be a simple graph with n vertices and m edges. If G is undirected, then m ≤ n(n−1)/2, and if G is directed, then m ≤ n(n−1).

Justification: Suppose that G is undirected. Since no two edges can have the same endpoints and there are no self-loops, the maximum degree of a vertex in G is n−1 in this case. Thus, by the first proposition above, 2m ≤ n(n−1). Now suppose that G is directed. Since no two edges can have the same origin and destination, and there are no self-loops, the maximum in-degree of a vertex in G is n−1 in this case. Thus, by the second proposition above, m ≤ n(n−1).

There are a number of simple properties of trees, forests, and connected graphs.

Proposition: Let G be an undirected graph with n vertices and m edges.
- If G is connected, then m ≥ n − 1.
- If G is a tree, then m = n − 1.
- If G is a forest, then m ≤ n − 1.
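These degree-sum identities are easy to sanity-check computationally. A tiny, self-contained example follows; the edge list here is arbitrary, chosen only for illustration:

from collections import Counter

# An arbitrary undirected simple graph, given as an edge list.
edges = [('a', 'b'), ('a', 'c'), ('b', 'c'), ('c', 'd')]

deg = Counter()
for u, v in edges:
  deg[u] += 1          # each edge contributes one unit to each endpoint
  deg[v] += 1

assert sum(deg.values()) == 2 * len(edges)   # sum of degrees = 2m
print(dict(deg))       # {'a': 2, 'b': 2, 'c': 3, 'd': 1}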
The Graph ADT

A graph is a collection of vertices and edges. We model the abstraction as a combination of three data types: Vertex, Edge, and Graph. A Vertex is a lightweight object that stores an arbitrary element provided by the user (e.g., an airport code); we assume it supports a method, element(), to retrieve the stored element. An Edge also stores an associated object (e.g., a flight number, travel distance, cost), retrieved with the element() method. In addition, we assume that an Edge supports the following methods:

endpoints(): Return a tuple (u,v) such that vertex u is the origin of the edge and vertex v is the destination; for an undirected graph, the orientation is arbitrary.
opposite(v): Assuming vertex v is one endpoint of the edge (either origin or destination), return the other endpoint.

The primary abstraction for a graph is the Graph ADT. We presume that a graph can be either undirected or directed, with the designation declared upon construction; recall that a mixed graph can be represented as a directed graph, modeling edge {u,v} as a pair of directed edges (u,v) and (v,u). The Graph ADT includes the following methods:

vertex_count(): Return the number of vertices of the graph.
vertices(): Return an iteration of all the vertices of the graph.
edge_count(): Return the number of edges of the graph.
edges(): Return an iteration of all the edges of the graph.
get_edge(u,v): Return the edge from vertex u to vertex v, if one exists; otherwise return None. For an undirected graph, there is no difference between get_edge(u,v) and get_edge(v,u).
degree(v, out=True): For an undirected graph, return the number of edges incident to vertex v. For a directed graph, return the number of outgoing (resp. incoming) edges incident to vertex v, as designated by the optional parameter.
incident_edges(v, out=True): Return an iteration of all edges incident to vertex v. In the case of a directed graph, report outgoing edges by default; report incoming edges if the optional parameter is set to False.
insert_vertex(x=None): Create and return a new Vertex storing element x.
insert_edge(u, v, x=None): Create and return a new Edge from vertex u to vertex v, storing element x (None by default).
remove_vertex(v): Remove vertex v and all its incident edges from the graph.
remove_edge(e): Remove edge e from the graph.
Data Structures for Graphs

In this section, we introduce four data structures for representing a graph. In each representation, we maintain a collection to store the vertices of a graph. However, the four representations differ greatly in the way they organize the edges.

- In an edge list, we maintain an unordered list of all edges. This minimally suffices, but there is no efficient way to locate a particular edge (u,v), or the set of all edges incident to a vertex v.
- In an adjacency list, we maintain, for each vertex, a separate list containing those edges that are incident to the vertex. The complete set of edges can be determined by taking the union of the smaller sets, while the organization allows us to more efficiently find all edges incident to a given vertex.
- An adjacency map is very similar to an adjacency list, but the secondary container of all edges incident to a vertex is organized as a map, rather than as a list, with the adjacent vertex serving as a key. This allows for access to a specific edge (u,v) in O(1) expected time.
- An adjacency matrix provides worst-case O(1) access to a specific edge (u,v) by maintaining an n x n matrix, for a graph with n vertices. Each entry is dedicated to storing a reference to the edge (u,v) for a particular pair of vertices u and v; if no such edge exists, the entry will be None.

A summary of the performance of these structures is given in the table below. We give further explanation of the structures in the remainder of this section.

operation           | edge list | adj. list          | adj. map  | adj. matrix
vertex_count()      | O(1)      | O(1)               | O(1)      | O(1)
edge_count()        | O(1)      | O(1)               | O(1)      | O(1)
vertices()          | O(n)      | O(n)               | O(n)      | O(n)
edges()             | O(m)      | O(m)               | O(m)      | O(m)
get_edge(u,v)       | O(m)      | O(min(d_u, d_v))   | O(1) exp. | O(1)
degree(v)           | O(m)      | O(1)               | O(1)      | O(n)
incident_edges(v)   | O(m)      | O(d_v)             | O(d_v)    | O(n)
insert_vertex(x)    | O(1)      | O(1)               | O(1)      | O(n^2)
remove_vertex(v)    | O(m)      | O(d_v)             | O(d_v)    | O(n^2)
insert_edge(u,v,x)  | O(1)      | O(1)               | O(1) exp. | O(1)
remove_edge(e)      | O(1)      | O(1)               | O(1) exp. | O(1)

Table: summary of the running times for the methods of the graph ADT, using the graph representations discussed in this section. We let n denote the number of vertices, m the number of edges, and d_v the degree of vertex v. Note that the adjacency matrix uses O(n²) space, while all other structures use O(n+m) space.
Edge List Structure

The edge list structure is possibly the simplest, though not the most efficient, representation of a graph. All vertex objects are stored in an unordered list V, and all edge objects are stored in an unordered list E. We illustrate an example of the edge list structure for a graph in the figure below.

Figure: (a) a graph G; (b) schematic representation of the edge list structure for G. Notice that an edge object refers to the two vertex objects that correspond to its endpoints, but that vertices do not refer to incident edges.

To support the many methods of the graph ADT, we assume the following additional features of an edge list representation. Collections V and E are represented with doubly linked lists, using our PositionalList class from an earlier chapter.

Vertex objects. The vertex object for a vertex v storing element x has instance variables for:
- A reference to element x, to support the element() method.
- A reference to the position of the vertex instance in the list V, thereby allowing v to be efficiently removed from V if it were removed from the graph.

Edge objects. The edge object for an edge e storing element x has instance variables for:
- A reference to element x, to support the element() method.
- References to the vertex objects associated with the endpoint vertices of e. These allow the edge instance to provide constant-time support for methods endpoints() and opposite(v).
- A reference to the position of the edge instance in list E, thereby allowing e to be efficiently removed from E if it were removed from the graph.
Performance of the Edge List Structure

The performance of an edge list structure in fulfilling the graph ADT is summarized in the table below. We begin by discussing the space usage, which is O(n+m) for representing a graph with n vertices and m edges. Each individual vertex or edge instance uses O(1) space, and the additional lists V and E use space proportional to their number of entries.

In terms of running time, the edge list structure does as well as one could hope in terms of reporting the number of vertices or edges, or in producing an iteration of those vertices or edges. By querying the respective list V or E, the vertex_count and edge_count methods run in O(1) time, and by iterating through the appropriate list, the methods vertices and edges run respectively in O(n) and O(m) time.

The most significant limitations of an edge list structure, especially when compared to the other graph representations, are the O(m) running times of methods get_edge(u,v), degree(v), and incident_edges(v). The problem is that with all edges of the graph in an unordered list E, the only way to answer those queries is through an exhaustive inspection of all edges. The other data structures introduced in this section will implement these methods more efficiently.

Finally, we consider the methods that update the graph. It is easy to add a new vertex or a new edge to the graph in O(1) time. For example, a new edge can be added to the graph by creating an Edge instance storing the given element as data, adding that instance to the positional list E, and recording its resulting position within E as an attribute of the edge. That stored position can later be used to locate and remove this edge from E in O(1) time, and thus implement the method remove_edge(e).

It is worth discussing why the remove_vertex(v) method has a running time of O(m). As stated in the graph ADT, when a vertex v is removed from the graph, all edges incident to v must also be removed (otherwise, we would have a contradiction of edges that refer to vertices that are not part of the graph). To locate the incident edges to the vertex, we must examine all edges of E.

operation                                            | running time
vertex_count(), edge_count()                         | O(1)
vertices()                                           | O(n)
edges()                                              | O(m)
get_edge(u,v), degree(v), incident_edges(v)          | O(m)
insert_vertex(x), insert_edge(u,v,x), remove_edge(e) | O(1)
remove_vertex(v)                                     | O(m)

Table: running times of the methods of a graph implemented with the edge list structure. The space used is O(n+m), where n is the number of vertices and m is the number of edges.
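A bare-bones sketch of the edge list idea is shown below; the names are illustrative, and the book's positional lists are replaced by plain Python lists, which preserves the O(m) character of the query methods:

class EdgeListGraph:
  """Minimal edge list: one list of vertices, one unordered list of edges."""
  def __init__(self):
    self._vertices = [ ]
    self._edges = [ ]               # each edge stored once, as a tuple

  def insert_vertex(self, v):
    self._vertices.append(v)        # O(1)

  def insert_edge(self, u, v):
    self._edges.append((u, v))      # O(1)

  def get_edge(self, u, v):
    for e in self._edges:           # O(m): must scan all edges
      if e == (u, v) or e == (v, u):
        return e
    return None

  def degree(self, v):
    return sum(1 for e in self._edges if v in e)   # O(m) as well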
adjacency list structure in contrast to the edge list representation of graphthe adjacency list structure groups the edges of graph by storing them in smallersecondary containers that are associated with each individual vertex specificallyfor each vertex vwe maintain collection ( )called the incidence collection of vwhose entries are edges incident to (in the case of directed graphoutgoing and incoming edges can be respectively stored in two separate collectionsiout (vand iin (vtraditionallythe incidence collection (vfor vertex is listwhich is why we call this way of representing graph the adjacency list structure we require that the primary structure for an adjacency list maintain the collection of vertices in way so that we can locate the secondary structure (vfor given vertex in ( time this could be done by using positional list to represent with each vertex instance maintaining direct reference to its (vincidence collectionwe illustrate such an adjacency list structure of graph in figure if vertices can be uniquely numbered from to we could instead use primary array-based structure to access the appropriate secondary lists the primary benefit of an adjacency list is that the collection (vcontains exactly those edges that should be reported by the method incident edges(vthereforewe can implement this method by iterating the edges of (vin (deg( )timewhere deg(vis the degree of vertex this is the best possible outcome for any graph representationbecause there are deg(vedges to be reported (ah (bfigure (aan undirected graph (ba schematic representation of the adjacency list structure for collection is the primary list of verticesand each vertex has an associated list of incident edges although not diagrammed as suchwe presume that each edge of the graph is represented with unique edge instance that maintains references to its endpoint vertices
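To make the idea concrete apart from the book's full implementation later in this section, here is a minimal sketch of an adjacency list using plain Python dictionaries and lists; the structure and names are illustrative only:

class AdjacencyListGraph:
  """Minimal undirected adjacency list: vertex -> list of incident edges."""
  def __init__(self):
    self._incidence = { }            # I(v): maps vertex to list of edges

  def insert_vertex(self, v):
    self._incidence[v] = [ ]

  def insert_edge(self, u, v):
    e = (u, v)                       # edge represented as a simple tuple
    self._incidence[u].append(e)     # the edge appears in I(u) and I(v),
    self._incidence[v].append(e)     # but only one tuple is created
    return e

  def degree(self, v):
    return len(self._incidence[v])   # O(1)

  def incident_edges(self, v):
    return iter(self._incidence[v])  # O(deg(v)) iteration

G = AdjacencyListGraph()
for v in 'uvwz':
  G.insert_vertex(v)
G.insert_edge('u', 'v')
G.insert_edge('u', 'w')
print(G.degree('u'), list(G.incident_edges('u')))   # 2 [('u', 'v'), ('u', 'w')]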
Performance of the Adjacency List Structure

The table below summarizes the performance of the adjacency list structure implementation of a graph, assuming that the primary collection V and all secondary collections I(v) are implemented with doubly linked lists.

Asymptotically, the space requirements for an adjacency list are the same as an edge list structure, using O(n+m) space for a graph with n vertices and m edges. The primary list of vertices uses O(n) space. The sum of the lengths of all secondary lists is O(m), for reasons that were formalized in the degree-sum propositions given earlier in this chapter. In short, an undirected edge (u,v) is referenced in both I(u) and I(v), but its presence in the graph results in only a constant amount of additional space.

We have already noted that the incident_edges(v) method can be achieved in O(deg(v)) time based on use of I(v). We can achieve the degree(v) method of the graph ADT to use O(1) time, assuming collection I(v) can report its size in similar time. To locate a specific edge for implementing get_edge(u,v), we can search through either I(u) or I(v). By choosing the smaller of the two, we get O(min(deg(u), deg(v))) running time.

The rest of the bounds in the table can be achieved with additional care. To efficiently support deletions of edges, an edge (u,v) would need to maintain a reference to its positions within both I(u) and I(v), so that it could be deleted from those collections in O(1) time. To remove a vertex v, we must also remove any incident edges, but at least we can locate those edges in O(deg(v)) time. The easiest way to support edges() in O(m) and edge_count() in O(1) is to maintain an auxiliary list E of edges, as in the edge list representation. Otherwise, we can implement the edges method in O(n+m) time by accessing each secondary list and reporting its edges, taking care not to report an undirected edge (u,v) twice.

operation                    | running time
vertex_count(), edge_count() | O(1)
vertices()                   | O(n)
edges()                      | O(m)
get_edge(u,v)                | O(min(deg(u), deg(v)))
degree(v)                    | O(1)
incident_edges(v)            | O(deg(v))
insert_vertex(x)             | O(1)
insert_edge(u,v,x)           | O(1)
remove_edge(e)               | O(1)
remove_vertex(v)             | O(deg(v))

Table: running times of the methods of a graph implemented with the adjacency list structure. The space used is O(n+m), where n is the number of vertices and m is the number of edges.
adjacency map structure in the adjacency list structurewe assume that the secondary incidence collections are implemented as unordered linked lists such collection (vuses space proportional to (deg( ))allows an edge to be added or removed in ( timeand allows an iteration of all edges incident to vertex in (deg( )time howeverthe best implementation of get edge( ,vrequires (min(deg( )deg( ))timebecause we must search through either (uor (vwe can improve the performance by using hash-based map to implement (vfor each vertex specificallywe let the opposite endpoint of each incident edge serve as key in the mapwith the edge structure serving as the value we call such graph representation an adjacency map (see figure the space usage for an adjacency map remains ( )because (vuses (deg( )space for each vertex vas with the adjacency list the advantage of the adjacency maprelative to an adjacency listis that the get edge( ,vmethod can be implemented in expected ( time by searching for vertex as key in ( )or vice versa this provides likely improvement over the adjacency listwhile retaining the worst-case bound of (min(deg( )deg( ))in comparing the performance of adjacency map to other representations (see table )we find that it essentially achieves optimal running times for all methodsmaking it an excellent all-purpose choice as graph representation ( (bfigure (aan undirected graph (ba schematic representation of the adjacency map structure for each vertex maintains secondary map in which neighboring vertices serve as keyswith the connecting edges as associated values although not diagrammed as suchwe presume that there is unique edge instance for each edge of the graphand that it maintains references to its endpoint vertices
adjacency matrix structure the adjacency matrix structure for graph augments the edge list structure with matrix (that isa two-dimensional arrayas in section )which allows us to locate an edge between given pair of vertices in worst-case constant time in the adjacency matrix representationwe think of the vertices as being the integers in the set { and the edges as being pairs of such integers this allows us to store references to edges in the cells of two-dimensional array specificallythe cell [ijholds reference to the edge (uv)if it existswhere is the vertex with index and is the vertex with index if there is no such edgethen [ijnone we note that array is symmetric if graph is undirectedas [ijajifor all pairs and (see figure the most significant advantage of an adjacency matrix is that any edge (uvcan be accessed in worst-case ( timerecall that the adjacency map supports that operation in ( expected time howeverseveral operation are less efficient with an adjacency matrix for exampleto find the edges incident to vertex vwe must presumably examine all entries in the row associated with vrecall that an adjacency list or map can locate those edges in optimal (deg( )time adding or removing vertices from graph is problematicas the matrix must be resized furthermorethe ( space usage of an adjacency matrix is typically far worse than the ( mspace required of the other representations althoughin the worst casethe number of edges in dense graph will be proportional to most real-world graphs are sparse in such casesuse of an adjacency matrix is inefficient howeverif graph is densethe constants of proportionality of an adjacency matrix can be smaller than that of an adjacency list or map in factif edges do not have auxiliary dataa boolean adjacency matrix can use one bit per edge slotsuch that [ijtrue if and only if associated (uvis an edge (ah (bfigure (aan undirected graph (ba schematic representation of the auxiliary adjacency matrix structure for gin which vertices are mapped to indices to although not diagrammed as suchwe presume that there is unique edge instance for each edgeand that it maintains references to its endpoint vertices we also assume that there is secondary edge list (not pictured)to allow the edgesmethod to run in (mtimefor graph with edges
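As an illustration of the boolean variant described above, the sketch below pre-numbers vertices 0 to n−1 and stores no auxiliary edge data; the function name is our own:

def build_adjacency_matrix(n, edges, directed=False):
  """Return an n x n boolean matrix A with A[i][j] True iff (i,j) is an edge."""
  A = [[False] * n for _ in range(n)]
  for i, j in edges:
    A[i][j] = True
    if not directed:
      A[j][i] = True                 # symmetric for undirected graphs
  return A

A = build_adjacency_matrix(4, [(0, 1), (0, 2), (2, 3)])
print(A[0][1], A[1][3])              # True False: worst-case O(1) edge lookup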
python implementation in this sectionwe provide an implementation of the graph adt our implementation will support directed or undirected graphsbut for ease of explanationwe first describe it in the context of an undirected graph we use variant of the adjacency map representation for each vertex vwe use python dictionary to represent the secondary incidence map (vhoweverwe do not explicitly maintain lists and eas originally described in the edge list representation the list is replaced by top-level dictionary that maps each vertex to its incidence map ( )note that we can iterate through all vertices by generating the set of keys for dictionary by using such dictionary to map vertices to the secondary incidence mapswe need not maintain references to those incidence maps as part of the vertex structures alsoa vertex does not need to explicitly maintain reference to its position in dbecause it can be determined in ( expected time this greatly simplifies our implementation howevera consequence of our design is that some of the worst-case running time bounds for the graph adt operationsgiven in table become expected bounds rather than maintain list ewe are content with taking the union of the edges found in the various incidence mapstechnicallythis runs in ( mtime rather than strictly (mtimeas the dictionary has keyseven if some incidence maps are empty our implementation of the graph adt is given in code fragments through classes vertex and edgegiven in code fragment are rather simpleand can be nested within the more complex graph class note that we define the hash method for both vertex and edge so that those instances can be used as keys in python' hash-based sets and dictionaries the rest of the graph class is given in code fragments and graphs are undirected by defaultbut can be declared as directed with an optional parameter to the constructor internallywe manage the directed case by having two different top-level dictionary instancesoutgoing and incomingsuch that outgoing[vmaps to another dictionary representing iout ( )and incoming[vmaps to representation of iin (vin order to unify our treatment of directed and undirected graphswe continue to use the outgoing and incoming identifiers in the undirected caseyet as aliases to the same dictionary for conveniencewe define utility named is directed to allow us to distinguish between the two cases for methods degree and incident edgeswhich each accept an optional parameter to differentiate between the outgoing and incoming orientationswe choose the appropriate map before proceeding for method insert vertexwe always initialize outgoing[vto an empty dictionary for new vertex in the directed casewe independently initialize incoming[vas well for the undirected casethat step is unnecessary as outgoing and incoming are aliases we leave the implementations of methods remove vertex and remove edge as exercises ( - and -
  #------------------------- nested Vertex class -------------------------
  class Vertex:
    """Lightweight vertex structure for a graph."""
    __slots__ = '_element'

    def __init__(self, x):
      """Do not call constructor directly. Use Graph's insert_vertex(x)."""
      self._element = x

    def element(self):
      """Return element associated with this vertex."""
      return self._element

    def __hash__(self):          # will allow vertex to be a map/set key
      return hash(id(self))

  #------------------------- nested Edge class -------------------------
  class Edge:
    """Lightweight edge structure for a graph."""
    __slots__ = '_origin', '_destination', '_element'

    def __init__(self, u, v, x):
      """Do not call constructor directly. Use Graph's insert_edge(u,v,x)."""
      self._origin = u
      self._destination = v
      self._element = x

    def endpoints(self):
      """Return (u,v) tuple for vertices u and v."""
      return (self._origin, self._destination)

    def opposite(self, v):
      """Return the vertex that is opposite v on this edge."""
      return self._destination if v is self._origin else self._origin

    def element(self):
      """Return element associated with this edge."""
      return self._element

    def __hash__(self):          # will allow edge to be a map/set key
      return hash((self._origin, self._destination))

Code Fragment: Vertex and Edge classes (to be nested within the Graph class).
class Graph:
  """Representation of a simple graph using an adjacency map."""

  def __init__(self, directed=False):
    """Create an empty graph (undirected, by default).

    Graph is directed if optional parameter is set to True.
    """
    self._outgoing = { }
    # only create second map for directed graph; use alias for undirected
    self._incoming = { } if directed else self._outgoing

  def is_directed(self):
    """Return True if this is a directed graph; False if undirected.

    Property is based on the original declaration of the graph, not its contents.
    """
    return self._incoming is not self._outgoing   # directed if maps are distinct

  def vertex_count(self):
    """Return the number of vertices in the graph."""
    return len(self._outgoing)

  def vertices(self):
    """Return an iteration of all vertices of the graph."""
    return self._outgoing.keys()

  def edge_count(self):
    """Return the number of edges in the graph."""
    total = sum(len(self._outgoing[v]) for v in self._outgoing)
    # for undirected graphs, make sure not to double-count edges
    return total if self.is_directed() else total // 2

  def edges(self):
    """Return a set of all edges of the graph."""
    result = set()        # avoid double-reporting edges of undirected graph
    for secondary_map in self._outgoing.values():
      result.update(secondary_map.values())       # add edges to resulting set
    return result

Code Fragment: Graph class definition (continued in the next code fragment).
  def get_edge(self, u, v):
    """Return the edge from u to v, or None if not adjacent."""
    return self._outgoing[u].get(v)      # returns None if v not adjacent

  def degree(self, v, outgoing=True):
    """Return number of (outgoing) edges incident to vertex v in the graph.

    If graph is directed, optional parameter used to count incoming edges.
    """
    adj = self._outgoing if outgoing else self._incoming
    return len(adj[v])

  def incident_edges(self, v, outgoing=True):
    """Return all (outgoing) edges incident to vertex v in the graph.

    If graph is directed, optional parameter used to request incoming edges.
    """
    adj = self._outgoing if outgoing else self._incoming
    for edge in adj[v].values():
      yield edge

  def insert_vertex(self, x=None):
    """Insert and return a new Vertex with element x."""
    v = self.Vertex(x)
    self._outgoing[v] = { }
    if self.is_directed():
      self._incoming[v] = { }            # need distinct map for incoming edges
    return v

  def insert_edge(self, u, v, x=None):
    """Insert and return a new Edge from u to v with auxiliary element x."""
    e = self.Edge(u, v, x)
    self._outgoing[u][v] = e
    self._incoming[v][u] = e
    return e                             # return added so docstring holds

Code Fragment: Graph class definition (continued from the previous code fragment). We omit error-checking of parameters for brevity.
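A brief usage example of this Graph class follows; the airport elements echo the flight network figure, and the specific flight labels are illustrative:

G = Graph(directed=True)
bos = G.insert_vertex('BOS')
jfk = G.insert_vertex('JFK')
dfw = G.insert_vertex('DFW')
G.insert_edge(bos, jfk, 'NW 35')
G.insert_edge(jfk, dfw, 'AA 1387')

print(G.vertex_count(), G.edge_count())      # 3 2
print(G.degree(jfk), G.degree(jfk, False))   # 1 outgoing, 1 incoming
e = G.get_edge(bos, jfk)
print(e.opposite(bos).element())             # 'JFK'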
Graph Traversals

Greek mythology tells of an elaborate labyrinth that was built to house the monstrous Minotaur, which was part bull and part man. This labyrinth was so complex that neither beast nor human could escape it. No human, that is, until the Greek hero, Theseus, with the help of the king's daughter, Ariadne, decided to implement a graph traversal algorithm. Theseus fastened a ball of thread to the door of the labyrinth and unwound it as he traversed the twisting passages in search of the monster. Theseus obviously knew about good algorithm design, for, after finding and defeating the beast, Theseus easily followed the string back out of the labyrinth to the loving arms of Ariadne.

Formally, a traversal is a systematic procedure for exploring a graph by examining all of its vertices and edges. A traversal is efficient if it visits all the vertices and edges in time proportional to their number, that is, in linear time.

Graph traversal algorithms are key to answering many fundamental questions about graphs involving the notion of reachability, that is, in determining how to travel from one vertex to another while following paths of a graph. Interesting problems that deal with reachability in an undirected graph G include the following:

- Computing a path from vertex u to vertex v, or reporting that no such path exists.
- Given a start vertex s of G, computing, for every vertex v of G, a path with the minimum number of edges between s and v, or reporting that no such path exists.
- Testing whether G is connected.
- Computing a spanning tree of G, if G is connected.
- Computing the connected components of G.
- Computing a cycle in G, or reporting that G has no cycles.

Interesting problems that deal with reachability in a directed graph G include the following:

- Computing a directed path from vertex u to vertex v, or reporting that no such path exists.
- Finding all the vertices of G that are reachable from a given vertex s.
- Determining whether G is acyclic.
- Determining whether G is strongly connected.

In the remainder of this section, we present two efficient graph traversal algorithms, called depth-first search and breadth-first search, respectively.
Depth-First Search

The first traversal algorithm we consider in this section is depth-first search (DFS). Depth-first search is useful for testing a number of properties of graphs, including whether there is a path from one vertex to another and whether or not a graph is connected.

Depth-first search in a graph G is analogous to wandering in a labyrinth with a string and a can of paint without getting lost. We begin at a specific starting vertex s in G, which we initialize by fixing one end of our string to s and painting s as "visited." The vertex s is now our "current" vertex; call our current vertex u. We then traverse G by considering an (arbitrary) edge (u,v) incident to the current vertex u. If the edge (u,v) leads us to a vertex v that is already visited (that is, painted), we ignore that edge. If, on the other hand, (u,v) leads to an unvisited vertex v, then we unroll our string, and go to v. We then paint v as "visited," and make it the current vertex, repeating the computation above. Eventually, we will get to a "dead end," that is, a current vertex v such that all the edges incident to v lead to vertices already visited. To get out of this impasse, we roll our string back up, backtracking along the edge that brought us to v, going back to a previously visited vertex u. We then make u our current vertex and repeat the computation above for any edges incident to u that we have not yet considered. If all of u's incident edges lead to visited vertices, then we again roll up our string and backtrack to the vertex we came from to get to u, and repeat the procedure at that vertex. Thus, we continue to backtrack along the path that we have traced so far until we find a vertex that has yet unexplored edges, take one such edge, and continue the traversal. The process terminates when our backtracking leads us back to the start vertex s, and there are no more unexplored edges incident to s.

The pseudo-code for a depth-first search traversal starting at a vertex u (see the code fragment below) follows our analogy with string and paint. We use recursion to implement the string analogy, and we assume that we have a mechanism (the paint analogy) to determine whether a vertex or edge has been previously explored.

    Algorithm DFS(G, u):       {We assume u has already been marked as visited}
        Input: A graph G and a vertex u of G
        Output: A collection of vertices reachable from u, with their discovery edges
        for each outgoing edge e = (u, v) of u do
            if vertex v has not been visited then
                Mark vertex v as visited (via edge e).
                Recursively call DFS(G, v).

Code Fragment: The DFS algorithm.
Classifying Graph Edges with DFS

An execution of depth-first search can be used to analyze the structure of a graph, based upon the way in which edges are explored during the traversal. The DFS process naturally identifies what is known as the depth-first search tree rooted at a starting vertex s. Whenever an edge e = (u,v) is used to discover a new vertex v during the DFS algorithm, that edge is known as a discovery edge or tree edge, as oriented from u to v. All other edges that are considered during the execution of DFS are known as nontree edges, which take us to a previously visited vertex.

In the case of an undirected graph, we will find that all nontree edges that are explored connect the current vertex to one that is an ancestor of it in the DFS tree. We will call such an edge a back edge. When performing a DFS on a directed graph, there are three possible kinds of nontree edges:

- back edges, which connect a vertex to an ancestor in the DFS tree
- forward edges, which connect a vertex to a descendant in the DFS tree
- cross edges, which connect a vertex to a vertex that is neither its ancestor nor its descendant

An example application of the DFS algorithm on a directed graph is shown in the figure below, demonstrating each type of nontree edge; an example on an undirected graph is shown in the figure that follows it.

[Figure: An example of DFS in a directed graph with vertices BOS, ORD, JFK, SFO, DFW, LAX, and MIA, starting at vertex BOS: (a) intermediate step, where, for the first time, a considered edge leads to an already visited vertex (DFW); (b) the completed DFS. The tree edges are shown with thick lines, the back edges are shown with dashed lines, and the forward and cross edges are shown with dotted lines. The order in which the vertices are visited is indicated by a label next to each vertex. The edge (ORD,DFW) is a back edge, but (DFW,ORD) is a forward edge. Edge (BOS,SFO) is a forward edge, and (SFO,LAX) is a cross edge.]
[Figure: Example of a depth-first search traversal on an undirected graph, shown in six panels: (a) input graph; (b) path of tree edges, traced from the start vertex until a back edge into C is examined; (c) reaching F, which is a dead end; (d) after backtracking to I, resuming with another edge, and hitting another dead end; (e) after backtracking to G, continuing with another edge, and hitting another dead end; (f) final result. We assume that a vertex's adjacencies are considered in alphabetical order. Visited vertices and explored edges are highlighted, with discovery edges drawn as solid lines and nontree (back) edges as dashed lines.]
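The classification above can be made concrete by recording, for each vertex, one timestamp when its DFS call begins and another when it finishes. The following sketch is our own illustration rather than the book's code; it assumes a directed graph built with the adjacency-map Graph class from earlier in this chapter, and uses only its incident_edges and opposite methods.

    def classify_edges(g, s):
        """Label each edge explored by a DFS from s as tree/back/forward/cross.

        Sketch for a *directed* graph g; returns a dictionary mapping each
        explored edge to its classification.
        """
        time = [0]     # mutable counter shared by the nested recursive calls
        start = {}     # vertex -> timestamp when its DFS call began
        finish = {}    # vertex -> timestamp when its DFS call finished
        label = {}     # edge -> 'tree', 'back', 'forward', or 'cross'

        def dfs(u):
            start[u] = time[0]
            time[0] += 1
            for e in g.incident_edges(u):
                v = e.opposite(u)
                if v not in start:
                    label[e] = 'tree'        # (u,v) discovers v
                    dfs(v)
                elif v not in finish:
                    label[e] = 'back'        # v is an ancestor, still active
                elif start[u] < start[v]:
                    label[e] = 'forward'     # v is a descendant, already finished
                else:
                    label[e] = 'cross'       # v finished before u was discovered
            finish[u] = time[0]
            time[0] += 1

        dfs(s)
        return label

For an undirected graph, the same bookkeeping collapses to just two labels, since every explored nontree edge is a back edge.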
Properties of Depth-First Search

There are a number of observations that we can make about the depth-first search algorithm, many of which derive from the way the DFS algorithm partitions the edges of a graph G into groups. We begin with the most significant property.

Proposition: Let G be an undirected graph on which a DFS traversal starting at a vertex s has been performed. Then the traversal visits all vertices in the connected component of s, and the discovery edges form a spanning tree of the connected component of s.

Justification: Suppose there is at least one vertex w in s's connected component not visited, and let v be the first unvisited vertex on some path from s to w (we may have v = w). Since v is the first unvisited vertex on this path, it has a neighbor u that was visited. But when we visited u, we must have considered the edge (u,v); hence, it cannot be correct that v is unvisited. Therefore, there are no unvisited vertices in s's connected component. Since we only follow a discovery edge when we go to an unvisited vertex, we will never form a cycle with such edges. Therefore, the discovery edges form a connected subgraph without cycles, hence a tree. Moreover, this is a spanning tree because, as we have just seen, the depth-first search visits each vertex in the connected component of s.

Proposition: Let G be a directed graph. Depth-first search on G starting at a vertex s visits all the vertices of G that are reachable from s. Also, the DFS tree contains directed paths from s to every vertex reachable from s.

Justification: Let Vs be the subset of vertices of G visited by DFS starting at vertex s. We want to show that Vs contains s and every vertex reachable from s belongs to Vs. Suppose now, for the sake of contradiction, that there is a vertex w reachable from s that is not in Vs. Consider a directed path from s to w, and let (u,v) be the first edge on such a path taking us out of Vs; that is, u is in Vs but v is not in Vs. When DFS reaches u, it explores all the outgoing edges of u, and thus must also reach vertex v via edge (u,v). Hence, v should be in Vs, and we have obtained a contradiction. Therefore, Vs must contain every vertex reachable from s.

We prove the second fact by induction on the steps of the algorithm. We claim that each time a discovery edge (u,v) is identified, there exists a directed path from s to v in the DFS tree. Since u must have previously been discovered, there exists a path from s to u, so by appending the edge (u,v) to that path, we have a directed path from s to v.

Note that since back edges always connect a vertex v to a previously visited vertex u, each back edge implies a cycle in G, consisting of the discovery edges from u to v plus the back edge (u,v).
Running Time of Depth-First Search

In terms of its running time, depth-first search is an efficient method for traversing a graph. Note that DFS is called at most once on each vertex (since it gets marked as visited), and therefore every edge is examined at most twice for an undirected graph, once from each of its end vertices, and at most once in a directed graph, from its origin vertex. If we let n_s ≤ n be the number of vertices reachable from a vertex s, and m_s ≤ m be the number of incident edges to those vertices, a DFS starting at s runs in O(n_s + m_s) time, provided the following conditions are satisfied:

- The graph is represented by a data structure such that creating and iterating through incident_edges(v) takes O(deg(v)) time, and the e.opposite(v) method takes O(1) time. The adjacency list structure is one such structure, but the adjacency matrix structure is not.
- We have a way to "mark" a vertex or edge as explored, and to test if a vertex or edge has been explored in O(1) time. We discuss ways of implementing DFS to achieve this goal in the next section.

Given the assumptions above, we can solve a number of interesting problems.

Proposition: Let G be an undirected graph with n vertices and m edges. A DFS traversal of G can be performed in O(n + m) time, and can be used to solve the following problems in O(n + m) time:

- Computing a path between two given vertices of G, if one exists.
- Testing whether G is connected.
- Computing a spanning tree of G, if G is connected.
- Computing the connected components of G.
- Computing a cycle in G, or reporting that G has no cycles.

Proposition: Let G be a directed graph with n vertices and m edges. A DFS traversal of G can be performed in O(n + m) time, and can be used to solve the following problems in O(n + m) time:

- Computing a directed path between two given vertices of G, if one exists.
- Computing the set of vertices of G that are reachable from a given vertex s.
- Testing whether G is strongly connected.
- Computing a directed cycle in G, or reporting that G is acyclic.
- Computing the transitive closure of G (discussed later in this chapter).

The justification of these two propositions is based on algorithms that use slightly modified versions of the DFS algorithm as subroutines. We will explore some of those extensions in the remainder of this section.
DFS Implementation and Extensions

We begin by providing a Python implementation of the basic depth-first search algorithm, originally described with pseudo-code earlier in this section. Our DFS function is presented in the code fragment below.

    def DFS(g, u, discovered):
        """Perform DFS of the undiscovered portion of Graph g starting at Vertex u.

        discovered is a dictionary mapping each vertex to the edge that was
        used to discover it during the DFS. (u should be "discovered" prior
        to the call.) Newly discovered vertices will be added to the
        dictionary as a result.
        """
        for e in g.incident_edges(u):    # for every outgoing edge from u
            v = e.opposite(u)
            if v not in discovered:      # v is an unvisited vertex
                discovered[v] = e        # e is the tree edge that discovered v
                DFS(g, v, discovered)    # recursively explore from v

Code Fragment: Recursive implementation of depth-first search on a graph, starting at a designated vertex u.

In order to track which vertices have been visited, and to build a representation of the resulting DFS tree, our implementation introduces a third parameter, named discovered. This parameter should be a Python dictionary that maps a vertex of the graph to the tree edge that was used to discover that vertex. As a technicality, we assume that the source vertex u occurs as a key of the dictionary, with None as its value. Thus, a caller might start the traversal as follows:

    result = {u: None}     # a new dictionary, with u trivially discovered
    DFS(g, u, result)

The dictionary serves two purposes. Internally, the dictionary provides a mechanism for recognizing visited vertices, as they will appear as keys in the dictionary. Externally, the DFS function augments this dictionary as it proceeds, and thus the values within the dictionary are the DFS tree edges at the conclusion of the process.

Because the dictionary is hash-based, the test, "if v not in discovered," and the record-keeping step, "discovered[v] = e," run in O(1) expected time, rather than worst-case time. In practice, this is a compromise we are willing to accept, but it does violate the formal analysis of the algorithm given earlier. If we could assume that vertices could be numbered from 0 to n−1, then those numbers could be used as indices into an array-based lookup table rather than a hash-based map. Alternatively, we could store each vertex's discovery status and associated tree edge directly as part of the vertex instance.
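As a quick illustration (our own example, not the book's), the following builds a small undirected graph and prints the DFS tree edges discovered from one vertex; it relies on the element and opposite accessors of the nested Vertex and Edge classes sketched earlier.

    g = Graph()
    a, b, c, d = (g.insert_vertex(x) for x in 'ABCD')
    g.insert_edge(a, b)
    g.insert_edge(b, c)
    g.insert_edge(a, c)
    g.insert_edge(c, d)

    result = {a: None}     # a new dictionary, with a trivially discovered
    DFS(g, a, result)
    for v, e in result.items():
        if e is not None:                # skip the start vertex itself
            parent = e.opposite(v)
            print(parent.element(), '->', v.element())

With adjacencies considered in insertion order, this prints the tree edges A -> B, B -> C, and C -> D; the edge (A,C) is explored but ends up as a nontree (back) edge.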
Reconstructing a Path from u to v

We can use the basic DFS function as a tool to identify the (directed) path leading from vertex u to v, if v is reachable from u. This path can easily be reconstructed from the information that was recorded in the discovery dictionary during the traversal. The code fragment below provides an implementation of a secondary function that produces an ordered list of vertices on the path from u to v.

To reconstruct the path, we begin at the end of the path, examining the discovery dictionary to determine what edge was used to reach vertex v, and then what the other endpoint of that edge is. We add that vertex to a list, and then repeat the process to determine what edge was used to discover it. Once we have traced the path all the way back to the starting vertex u, we can reverse the list so that it is properly oriented from u to v, and return it to the caller. This process takes time proportional to the length of the path, and therefore it runs in O(n) time (in addition to the time originally spent calling DFS).

    def construct_path(u, v, discovered):
        path = []                       # empty path by default
        if v in discovered:
            # we build list from v to u and then reverse it at the end
            path.append(v)
            walk = v
            while walk is not u:
                e = discovered[walk]    # find edge leading to walk
                parent = e.opposite(walk)
                path.append(parent)
                walk = parent
            path.reverse()              # reorient path from u to v
        return path

Code Fragment: Function to reconstruct a directed path from u to v, given the trace of discovery from a DFS started at u. The function returns an ordered list of vertices on the path.

Testing for Connectivity

We can use the basic DFS function to determine whether a graph is connected. In the case of an undirected graph, we simply start a depth-first search at an arbitrary vertex and then test whether len(discovered) equals n at the conclusion. If the graph is connected, then by the first proposition of this section, all vertices will have been discovered; conversely, if the graph is not connected, there must be at least one vertex v that is not reachable from u, and that will not be discovered.
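Put concretely, the connectivity test is only a few lines on top of the DFS function. The following is a minimal sketch (the function name is_connected is our own choice, not the book's), assuming a nonempty, undirected graph:

    def is_connected(g):
        """Return True if undirected graph g is connected (g must be nonempty)."""
        start = next(iter(g.vertices()))    # an arbitrary start vertex
        discovered = {start: None}          # start trivially discovered
        DFS(g, start, discovered)
        return len(discovered) == g.vertex_count()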