we can still use Treesort. However, to output the sorted items into the original array, we will need another procedure fillArray(tree t, array a, int j) to traverse the tree and fill the array. That is easiest done by passing and returning an index j that keeps track of the next array position to be filled. This results in the complete Treesort algorithm:

   treesort(array a) {
      t = EmptyTree
      for ( i = 0 ; i < size(a) ; i++ )
         insert(a[i],t)
      fillArray(t,a,0)
   }

   fillArray(tree t, array a, int j) {
      if ( not isEmpty(t) ) {
         j = fillArray(left(t),a,j)
         a[j++] = root(t)
         j = fillArray(right(t),a,j)
      }
      return j
   }

which assumes that a is a pointer to the array location and that its elements can be accessed and updated given that pointer and the relevant array index. Since there are n items to insert into the tree, and each insertion has time complexity O(log n), Treesort has an overall average time complexity of O(n log n). So we already have one algorithm that achieves the theoretical best average-case time complexity of O(n log n). Note, however, that if the tree is not kept balanced while the items are being inserted, and the items are already sorted, the height of the tree and the number of comparisons per insertion will be O(n), leading to a worst-case time complexity of O(n^2), which is no better than that of the simpler array-based algorithms we have already considered.

Exercise: We have assumed so far that the items stored in a Binary Search Tree must not contain any duplicates. Find the simplest ways to relax that restriction, and determine how the choice of approach affects the stability of the associated Treesort algorithm.

Heapsort

We now consider another way of implementing a selection sorting algorithm using a more efficient data structure we have already studied. The underlying idea here is that it would help if we could pre-arrange the data so that selecting the smallest/biggest entry becomes easier. For that, remember the idea of a priority queue discussed earlier. We can take the value of each item to be its priority, and then queue the items accordingly. Then, if we remove the item with the highest priority at each step, we can fill an array in order 'from the rear', starting with the biggest item.

Priority queues can be implemented in a number of different ways, and we have already studied a straightforward implementation using binary heap trees. However, there may be better ways, so it is worth considering the other possibilities.
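As a quick aside, the underlying 'sort by removing the highest priority item' idea is easy to state in code. The following minimal Java sketch is our own illustration, using Java's built-in PriorityQueue class (itself based on a binary heap) rather than any particular implementation from these notes:

   import java.util.Arrays;
   import java.util.Collections;
   import java.util.PriorityQueue;

   public class PQSort {
      // Sort by taking each item's value as its priority, then repeatedly
      // removing the highest-priority item and filling the array 'from the rear'.
      static void pqSort(int[] a) {
         PriorityQueue<Integer> pq = new PriorityQueue<>(Collections.reverseOrder());
         for (int x : a) pq.add(x);               // queue the items by value
         for (int i = a.length - 1; i >= 0; i--)
            a[i] = pq.poll();                     // biggest remaining item goes to the rear
      }
      public static void main(String[] args) {
         int[] a = {5, 1, 4, 2, 3};
         pqSort(a);
         System.out.println(Arrays.toString(a));  // prints [1, 2, 3, 4, 5]
      }
   }

Which priority queue representation is chosen determines how efficient this scheme can be, which is exactly the question considered next.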
One possibility would be to use a sorted array, so that the entry with the highest priority appears in a[n]. Removing this item would be very simple, but inserting a new item would always involve finding the right position and shifting a number of items to the right to make room for it. For example, inserting a new item near the front of a sorted queue requires shifting everything bigger than it one place to the right before it can be put into place. That kind of item insertion is effectively insertion sort, and clearly inefficient in general: of O(n) complexity rather than O(log n) with a binary heap tree.

Another approach would be to use an unsorted array. In this case, a new item would be inserted by just putting it into a[n+1], but to delete the entry with the highest priority would involve having to find it first. Then, after that, the last item would have to be swapped into the gap, or all items with a higher index 'shifted down'. Again, that kind of item deletion is clearly inefficient in general: of O(n) complexity rather than O(log n) with a heap tree.

Thus, of those three representations, only one is of use in carrying out the above idea efficiently: an unsorted array is what we started from, so that is not any help, and ordering the array is what we are trying to achieve, so heaps are the way forward.

To make use of binary heap trees, we first have to take the unsorted array and re-arrange it so that it satisfies the heap tree priority ordering. We have already studied the heapify algorithm which can do that with O(n) time complexity. Then we need to extract the sorted array from it. In the heap tree, the item with the highest priority, that is the largest item, is always in a[1]. In the sorted array, it should be in the last position a[n]. If we simply swap the two, we will have that item at the right position of the array, and will also have begun the standard procedure of removing the root of the heap tree, since a[n] is precisely the item that would be moved into the root position at the next step. Since a[n] now contains the correct item, we will never have to look at it again. Instead, we just take the items a[1],...,a[n-1] and bring them back into heap tree form using the bubble down procedure on the new root, which we know to have complexity O(log n).

Now the second largest item is in position a[1], and its final position should be a[n-1], so we now swap these two items. Then we rearrange a[1],...,a[n-2] back into a heap tree using the bubble down procedure on the new root. And so on. When the i-th step has been completed, the items a[n-i+1],...,a[n] will have the correct entries, and there will be a heap tree for the items a[1],...,a[n-i]. Note that the size, and therefore the height, of the heap tree decreases at each step. As part of the i-th step, we have to bubble down the new root. This will take at most twice as many comparisons as the height of the original heap tree, which is log n. So overall there are n-1 steps, with at most 2 log n comparisons each, totalling 2(n-1) log n. The number of comparisons will actually be less than that, because the number of bubble down steps will usually be less than the full height of the tree, but usually not much less, so the time complexity is still O(n log n).

The full Heapsort algorithm can thus be written in a very simple form, using the bubble down and heapify procedures we already have. First, heapify converts the array into a binary heap tree, and then the for-loop moves each successive root one at a time into the correct position in the sorted array:
   heapsort(array a, int n) {
      heapify(a,n)
      for ( j = n ; j > 1 ; j-- ) {
         swap a[1] and a[j]
         bubbleDown(1,a,j-1)
      }
   }

It is clear from the swap step that the order of identical items can easily be reversed, so there is no way to render the Heapsort algorithm stable.

The average and worst-case time complexities of the entire Heapsort algorithm are given by the sum of two complexity functions: first, that of heapify rearranging the original unsorted array into a heap tree, which is O(n), and then that of making the sorted array out of the heap tree, which is O(n log n), coming from the O(n) bubble-downs, each of which has O(log n) complexity. Thus the overall average and worst-case complexities are both O(n log n), and we now have a sorting algorithm that achieves the theoretical best worst-case time complexity. Using more sophisticated priority queues, such as Binomial or Fibonacci heaps, cannot improve on this, because they have the same delete time complexity.

A useful feature of Heapsort is that if only the m largest items need to be found and sorted, rather than all n, the complexity of the second stage is only O(m log n), which can easily be less than O(n), and thus render the whole algorithm only O(n). A runnable sketch of Heapsort is given at the end of this section.

Divide and Conquer Algorithms

All the sorting algorithms considered so far work on the whole set of items together. Instead, divide and conquer algorithms recursively split the sorting problem into more manageable sub-problems. The idea is that it will usually be easier to sort many smaller collections of items than one big one, and sorting single items is trivial. So we repeatedly split the given collection into two smaller parts until we reach the 'base case' of one-item collections, which require no effort to sort, and then merge them back together again.

There are two main approaches for doing this. Assuming we are working on an array a of size n, with entries a[0],...,a[n-1], the obvious approach is to simply split the set of indices. That is, we split the array at item n/2 and consider the two sub-arrays a[0],...,a[(n-1)/2] and a[(n+1)/2],...,a[n-1]. This method has the advantage that the splitting of the collection into two collections of equal (or nearly equal) size at each stage is easy. However, the two sorted arrays that result from each split have to be merged together carefully to maintain the ordering. This is the underlying idea for the sorting algorithm called mergesort.

Another approach would be to split the array in such a way that, at each stage, all the items in the first collection are no bigger than all the items in the second collection. The splitting here is obviously more complex, but all we have to do to put the pieces back together again at each stage is to take the first sorted array followed by the second sorted array. This is the underlying idea for the sorting algorithm called Quicksort.

We shall now look in detail at how these two approaches work in practice.
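Before doing so, here is the promised runnable version of Heapsort: a minimal, self-contained Java sketch of the pseudocode above. Note that it uses 0-based array indices (the pseudocode treats the heap as rooted at index 1), and the helper names are ours:

   public class HeapSort {
      // Restore the heap property for the subtree rooted at i, within a[0..size-1].
      static void bubbleDown(int[] a, int i, int size) {
         while (2 * i + 1 < size) {                 // while i has at least one child
            int child = 2 * i + 1;                  // left child
            if (child + 1 < size && a[child + 1] > a[child])
               child++;                             // pick the larger child
            if (a[i] >= a[child]) break;            // heap property already holds
            int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
            i = child;
         }
      }
      static void heapify(int[] a) {                // bottom-up heap construction, O(n)
         for (int i = a.length / 2 - 1; i >= 0; i--)
            bubbleDown(a, i, a.length);
      }
      static void heapSort(int[] a) {
         heapify(a);
         for (int j = a.length - 1; j > 0; j--) {
            int tmp = a[0]; a[0] = a[j]; a[j] = tmp; // move current root to its final place
            bubbleDown(a, 0, j);                     // re-heap the remaining items
         }
      }
      public static void main(String[] args) {
         int[] a = {9, 4, 7, 1, 3, 8};
         heapSort(a);
         System.out.println(java.util.Arrays.toString(a)); // [1, 3, 4, 7, 8, 9]
      }
   }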
Quicksort

The general idea here is to repeatedly split (or partition) the given array in such a way that all the items in the first sub-array are smaller than all the items in the second sub-array, and then concatenate all the sub-arrays to give the sorted full array.

How to partition

The important question is how to perform this kind of splitting most efficiently. If the array is very simple, for example [4, 3, 7, 8, 1, 6], then a good split would be to put all the items smaller than 6 into one part, giving [4, 3, 1], and all the items bigger than or equal to 6 into the other, that is [7, 8, 6]. Indeed, moving all items with a smaller key than some given value into one sub-array, and all entries with a bigger or equal key into the other sub-array, is the standard Quicksort strategy. The value that defines the split is called the pivot. However, it is not obvious what is the best way to choose the pivot value.

One situation that we absolutely have to avoid is splitting the array into an empty sub-array and the whole array again. If we do this, the algorithm will not just perform badly, it will not even terminate. However, if the pivot is chosen to be an item in the array, and the pivot is kept in between and separate from both sub-arrays, then the sub-arrays being sorted at each recursion will always be at least one item shorter than the previous array, and the algorithm is guaranteed to terminate.

Thus, it proves convenient to split the array at each stage into the sub-array of values smaller than or equal to some chosen pivot item, followed by that chosen pivot item, followed by the sub-array of values greater than or equal to the chosen pivot item. Moreover, to save space, we do not actually split the array into smaller arrays. Instead, we simply rearrange the whole array to reflect the splitting. We say that we partition the array, and the Quicksort algorithm is then applied to the sub-arrays of this partitioned array.

In order for the algorithm to be called recursively, to sort ever smaller parts of the original array, we need to tell it which part of the array is currently under consideration. Therefore, Quicksort is called giving the lowest index (left) and highest index (right) of the sub-array it must work on. Thus the algorithm takes the form:

   quicksort(array a, int left, int right) {
      if ( left < right ) {
         pivotindex = partition(a,left,right)
         quicksort(a,left,pivotindex-1)
         quicksort(a,pivotindex+1,right)
      }
   }

for which the initial call would be quicksort(a,0,n-1), and the array a at the end is sorted. The crucial part of this is clearly the partition(a,left,right) procedure that rearranges the array so that it can be split around an appropriate pivot a[pivotindex].

If we were to split off only one item at a time, Quicksort would have n recursive calls, where n is the number of items in the array. If, on the other hand, we halve the array at each stage, it would only need log n recursive calls. This can be made clear by drawing a binary tree whose nodes are labelled by the sub-arrays that have been split off at each stage, and measuring its height. Ideally then, we would like to get two sub-arrays of roughly equal size (namely half of the given array), since that is the most efficient way of doing this. Of course, that depends on choosing a good pivot.
Choosing the pivot

If the chosen pivot is close to the median of the items, then the split will be as even as possible. Unfortunately, there is no quick guaranteed way of finding the optimal pivot. If the keys are integers, one could take the average value of all the keys, but that requires visiting all the entries to sample their keys, adding considerable overhead to the algorithm, and if the keys are more complicated, such as strings, you cannot do this at all. More importantly, it would not necessarily give a pivot that is a value in the array. Some sensible heuristic pivot choice strategies are:

- use a random number generator to produce an index k and then use a[k];
- take a key from 'the middle' of the array, that is a[(n-1)/2];
- take a small sample (e.g. 3 or 5 items) and take the 'middle' key of those.

Note that one should never simply choose the first or last key in the array as the pivot, because if the array is almost sorted already, that will lead to the particularly bad choice mentioned above, and this situation is actually quite common in practice.

Since there are so many reasonable possibilities, and they are all fairly straightforward, we will not give a specific implementation for any of these pivot choosing strategies, but just assume that we have a choosePivot(a,left,right) procedure that returns the index of the pivot for a particular sub-array (rather than the pivot value itself).

The partitioning

In order to carry out the partitioning within the given array, some thought is required as to how this may best be achieved. This is more easily demonstrated by an example than put into words. For a change, we will consider an array of strings, namely the programming languages [c, fortran, java, ada, pascal, basic, haskell, ocaml]. The ordering we choose is the standard lexicographic one, and let the chosen pivot be "fortran".

We will use markers | to denote a partition of the array. To the left of the left marker, there will be items we know to have a key smaller than or equal to the pivot. To the right of the right marker, there will be items we know to have a key bigger than or equal to the pivot. In the middle, there will be the items we have not yet considered. Note that this algorithm proceeds to investigate the items in the array from two sides.

We begin by swapping the pivot value to the end of the array, where it can easily be kept separate from the sub-array creation process, so we have the array [|c, ocaml, java, ada, pascal, basic, haskell|, fortran]. Starting from the left, we find "c" is less than "fortran", so we move the left marker one step to the right to give [c, |ocaml, java, ada, pascal, basic, haskell|, fortran]. Now "ocaml" is greater than "fortran", so we stop on the left and proceed from the right instead, without moving the left marker. We then find "haskell" is bigger than "fortran", so we move the right marker to the left by one, giving [c, |ocaml, java, ada, pascal, basic|, haskell, fortran]. Now "basic" is smaller than "fortran", so we have two keys, "ocaml" and "basic", which are 'on the wrong side'. We therefore swap them, which allows us to move both the left and the right marker one step further towards the middle. This brings us to [c, basic, |java, ada, pascal|, ocaml, haskell, fortran].

Now we proceed from the left once again, but "java" is bigger than "fortran", so we stop there and switch to the right. Then "pascal" is bigger than "fortran", so we move the right marker again. We then find "ada", which is smaller than the pivot, so we stop. We have now got [c, basic, |java, ada|, pascal, ocaml, haskell, fortran]. As before, we want to swap "java" and "ada", which leaves the left and the right markers in the same place: [c, basic, ada, |java|, pascal, ocaml, haskell, fortran], so we stop. Finally, the pivot is swapped back from the end of the array with the first item
after the markers, namely "java", to give [c, basic, ada, fortran, pascal, ocaml, haskell, java], with the pivot sitting between the two sub-arrays. Since we obviously cannot have the marker indices 'between' array entries, we will assume the left marker is on the left of a[leftmark], and the right marker is to the right of a[rightmark]. The markers are therefore 'in the same place' once rightmark becomes smaller than leftmark, which is when we stop. If we assume that the keys are integers, we can write the partitioning procedure, that needs to return the final pivot position, as:

   partition(array a, int left, int right) {
      pivotindex = choosePivot(a,left,right)
      pivot = a[pivotindex]
      swap a[pivotindex] and a[right]
      leftmark = left
      rightmark = right - 1
      while ( leftmark <= rightmark ) {
         while ( leftmark <= rightmark and a[leftmark] <= pivot )
            leftmark++
         while ( leftmark <= rightmark and a[rightmark] >= pivot )
            rightmark--
         if ( leftmark < rightmark )
            swap a[leftmark++] and a[rightmark--]
      }
      swap a[leftmark] and a[right]
      return leftmark
   }

This achieves a partitioning that ends with the same items in the array, but in a different order, with all items to the left of the returned pivot position smaller or equal to the pivot value, and all items to the right greater or equal to the pivot value.

Note that this algorithm doesn't require any extra memory: it just swaps the items in the original array. However, the swapping of items means the algorithm is not stable. To render Quicksort stable, the partitioning must be done in such a way that the order of identical items can never be reversed. A conceptually simple approach that does this, but requires more memory and copying, is to go systematically through the whole array, re-filling the array a with items less than or equal to the pivot, filling a second array b with items greater or equal to the pivot, and finally copying the array b into the end of a:

   partition2(array a, int left, int right) {
      create new array b of size right-left+1
      pivotindex = choosePivot(a,left,right)
      pivot = a[pivotindex]
      acount = left
      bcount = 1
      for ( i = left ; i <= right ; i++ ) {
         if ( i == pivotindex )
            b[0] = a[i]
         else if ( a[i] < pivot || (a[i] == pivot && i < pivotindex) )
            a[acount++] = a[i]
         else
            b[bcount++] = a[i]
      }
      for ( i = 0 ; i < bcount ; i++ )
         a[acount++] = b[i]
      return right-bcount+1
   }
Like the first partition procedure, this also achieves a partitioning with the same items in the array, but in a different order, with all items to the left of the returned pivot position smaller or equal to the pivot value, and all items to the right greater or equal to the pivot value.

Complexity of Quicksort

Once again we shall determine complexity based on the number of comparisons performed. The partitioning step compares each of n items against the pivot, and therefore has complexity O(n). Clearly, some partition and pivot choice algorithms are less efficient than others, like partition2 involving more copying of items than partition, but that does not generally affect the overall complexity class.

In the worst case, when an array is partitioned, we have one empty sub-array. If this happens at each step, we apply the partitioning method to arrays of size n, then n-1, then n-2, until we reach 1. Those complexity functions then add up to

   n + (n-1) + (n-2) + ... + 1 = n(n+1)/2

Ignoring the constant factor 1/2 and the non-dominant term n/2, this shows that, in the worst case, the number of comparisons performed by Quicksort is O(n^2).

In the best case, whenever we partition the array, the resulting sub-arrays will differ in size by at most one. Then we have n comparisons in the first case, two lots of floor(n/2) comparisons for the two sub-arrays, four times floor(n/4), eight times floor(n/8), and so on, down to 2^(log n - 1) times floor(n/2^(log n - 1)). That gives the total number of comparisons as

   n + 2 floor(n/2) + 4 floor(n/4) + 8 floor(n/8) + ... + 2^(log n - 1) floor(n/2^(log n - 1))  ~  n log n

which matches the theoretical best possible time complexity of O(n log n).

More interesting and important is how well Quicksort does in the average case. However, that is much harder to analyze exactly. The strategy for choosing a pivot at each stage affects that, though as long as it avoids the problems outlined above, that does not change the complexity class. It also makes a difference whether there can be duplicate values, but again that doesn't change the complexity class. In the end, all reasonable variations involve comparing O(n) items against a pivot, for each of O(log n) recursions, so the total number of comparisons, and hence the overall time complexity, in the average case is O(n log n).

Like Heapsort, when only the m largest items need to be found and sorted, rather than all n, Quicksort can be modified to result in reduced time complexity. In this case, only the first sub-array needs to be processed at each stage, until the sub-array sizes exceed m. In that situation, for the best case, the total number of comparisons is reduced to

   n + floor(n/2) + floor(n/4) + ... + m log m  ~  2n + m log m

rendering the time complexity of the whole modified algorithm only O(n). For the average case, the computation is again more difficult, but as long as the key problems outlined above are avoided, the average-case complexity of this special case is also O(n). A runnable sketch combining the pieces of Quicksort is given below.
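To tie the pieces together, here is a minimal runnable Java sketch of Quicksort for integer keys, using the middle-element pivot strategy and the in-place partition procedure described above. The names follow the pseudocode; this is one illustration under those choices, not the only way to implement it:

   public class QuickSort {
      static int choosePivot(int[] a, int left, int right) {
         return (left + right) / 2;                // 'middle of the array' strategy
      }
      static int partition(int[] a, int left, int right) {
         int pivotindex = choosePivot(a, left, right);
         int pivot = a[pivotindex];
         swap(a, pivotindex, right);               // keep the pivot out of the way
         int leftmark = left, rightmark = right - 1;
         while (leftmark <= rightmark) {
            while (leftmark <= rightmark && a[leftmark] <= pivot) leftmark++;
            while (leftmark <= rightmark && a[rightmark] >= pivot) rightmark--;
            if (leftmark < rightmark) swap(a, leftmark++, rightmark--);
         }
         swap(a, leftmark, right);                 // move the pivot to its final place
         return leftmark;
      }
      static void quicksort(int[] a, int left, int right) {
         if (left < right) {
            int pivotindex = partition(a, left, right);
            quicksort(a, left, pivotindex - 1);
            quicksort(a, pivotindex + 1, right);
         }
      }
      static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
      public static void main(String[] args) {
         int[] a = {5, 3, 7, 8, 1, 6};
         quicksort(a, 0, a.length - 1);
         System.out.println(java.util.Arrays.toString(a)); // [1, 3, 5, 6, 7, 8]
      }
   }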
In practice, it is always worthwhile spending some time optimizing the strategy for defining the pivot, since the particular problem in question might well allow for a more refined approach. Generally, the pivot will be better if more items are sampled before it is chosen. For example, one could check several randomly chosen items and take the 'middle' one of those, the so-called median. Note that in order to find the median of all the items, without sorting them first, we would end up having to make n^2 comparisons, so we cannot do that without making Quicksort unattractively slow.

Quicksort is rarely the most suitable algorithm if the problem size is small. The reason for this is all the overheads from the recursion (e.g. storing all the return addresses and formal parameters). Hence, once the sub-problems become 'small' (a size of 16 is often suggested in the literature), Quicksort should stop calling itself and instead sort the remaining sub-arrays using a simpler algorithm such as Selection Sort.

Mergesort

The other divide and conquer sorting strategy, based on repeatedly splitting the array of items into two sub-arrays, mentioned earlier, is called mergesort. This simply splits the array at each stage into its first and last half, without any reordering of the items in it. However, that will obviously not result in a set of sorted sub-arrays that we can just append to each other at the end. So mergesort needs another procedure merge that merges two sorted sub-arrays into another sorted array. As with binary search, integer variables left and right can be used to refer to the lower and upper index of the relevant array, and mid refers to the end of its left sub-array. Thus a suitable mergesort algorithm is:

   mergesort(array a, int left, int right) {
      if ( left < right ) {
         mid = (left + right) / 2
         mergesort(a, left, mid)
         mergesort(a, mid+1, right)
         merge(a, left, mid, right)
      }
   }

Note that it would be relatively simple to modify this mergesort algorithm to operate on linked lists (of known length) rather than arrays. To 'split' such a list into two, all one has to do is set the pointer of the floor(n/2)-th list entry to null, and use the previously-pointed-to next entry as the head of the new second list. Of course, care needs to be taken to keep the list size information intact, and effort is required to find the crucial pointer for each split.

The merge algorithm

The principle of merging two sorted collections (whether they be lists, arrays, or something else) is quite simple. Since they are sorted, it is clear that the smallest item overall must be either the smallest item in the first collection or the smallest item in the second collection. Let us assume it is the smallest key in the first collection. Now the second smallest item overall must be either the second-smallest item in the first collection, or the smallest item in the second collection, and so on. In other words, we just work through both collections, and at each stage, the 'next' item is the current item in either the first or the second collection.
The implementation details depend on which data structure we are using. When arrays are used, it is actually necessary for the merge algorithm to create a new array to hold the result of the operation, at least temporarily. In contrast, when using linked lists, it would be possible for merge to work by just changing the references to the next node. This does make for somewhat more confusing code, however.

For arrays, a suitable merge algorithm would start by creating a new array b to store the results, then repeatedly add the next smallest item into it until one sub-array is finished, then copy the remainder of the unfinished sub-array, and finally copy b back into a:

   merge(array a, int left, int mid, int right) {
      create new array b of size right-left+1
      bcount = 0
      lcount = left
      rcount = mid+1
      while ( (lcount <= mid) and (rcount <= right) ) {
         if ( a[lcount] <= a[rcount] )
            b[bcount++] = a[lcount++]
         else
            b[bcount++] = a[rcount++]
      }
      if ( lcount > mid )
         while ( rcount <= right )
            b[bcount++] = a[rcount++]
      else
         while ( lcount <= mid )
            b[bcount++] = a[lcount++]
      for ( bcount = 0 ; bcount < right-left+1 ; bcount++ )
         a[left+bcount] = b[bcount]
   }

It is instructive to compare this with the partition2 algorithm for Quicksort, to see exactly where the two sort algorithms differ. As with partition2, the merge algorithm never swaps identical items past each other, and the splitting does not change the ordering at all, so the whole mergesort algorithm is stable.

Complexity of Mergesort

The total number of comparisons needed at each recursion level of mergesort is the number of items needing merging, which is O(n), and the number of recursions needed to get to the single item level is O(log n), so the total number of comparisons and its time complexity are O(n log n). This holds for the worst case as well as the average case. Like Quicksort, it is possible to speed up mergesort by abandoning the recursive algorithm when the sizes of the sub-collections become small. For arrays, a similarly small size would once again be suitable for switching to an algorithm like Selection Sort.

Note that, with mergesort, for the special case when only the m largest/smallest items need to be found and sorted, rather than all n, there is no way to reduce the time complexity in the way it was possible with Heapsort and Quicksort. This is because the ordering of the required items only emerges at the very last stage, after the large majority of the comparisons have already been carried out. A runnable sketch of mergesort is given below.
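As with the earlier algorithms, here is a minimal runnable Java sketch of mergesort, corresponding to the pseudocode above. A temporary array is created at each merge, which is simple but not the most memory-efficient choice:

   public class MergeSort {
      static void merge(int[] a, int left, int mid, int right) {
         int[] b = new int[right - left + 1];
         int bcount = 0, lcount = left, rcount = mid + 1;
         while (lcount <= mid && rcount <= right) {
            if (a[lcount] <= a[rcount]) b[bcount++] = a[lcount++]; // <= keeps it stable
            else b[bcount++] = a[rcount++];
         }
         while (rcount <= right) b[bcount++] = a[rcount++];  // copy any remainder of
         while (lcount <= mid) b[bcount++] = a[lcount++];    // the unfinished sub-array
         for (bcount = 0; bcount < b.length; bcount++)
            a[left + bcount] = b[bcount];                    // copy b back into a
      }
      static void mergesort(int[] a, int left, int right) {
         if (left < right) {
            int mid = (left + right) / 2;
            mergesort(a, left, mid);
            mergesort(a, mid + 1, right);
            merge(a, left, mid, right);
         }
      }
      public static void main(String[] args) {
         int[] a = {5, 3, 7, 8, 1, 6};
         mergesort(a, 0, a.length - 1);
         System.out.println(java.util.Arrays.toString(a)); // [1, 3, 5, 6, 7, 8]
      }
   }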
Summary of Comparison-Based Sorting Algorithms

The following table summarizes the key properties of all the comparison-based sorting algorithms we have considered:

   Sorting algorithm   Strategy    Objects         Worst case    Average case   Stable
                       employed    manipulated     complexity    complexity
   Bubble Sort         Exchange    arrays          O(n^2)        O(n^2)         Yes
   Selection Sort      Selection   arrays          O(n^2)        O(n^2)         No
   Insertion Sort      Insertion   arrays/lists    O(n^2)        O(n^2)         Yes
   Treesort            Insertion   trees/lists     O(n^2)        O(n log n)     Yes
   Heapsort            Selection   arrays          O(n log n)    O(n log n)     No
   Quicksort           D & C       arrays          O(n^2)        O(n log n)     Maybe
   Mergesort           D & C       arrays/lists    O(n log n)    O(n log n)     Yes

To see what the time complexities mean in practice, the following table compares the typical run times of those of the above algorithms that operate directly on arrays:

   [Table of typical run times for Bubble Sort, Selection Sort, Insertion Sort, Heapsort, Quicksort, Quicksort2, Mergesort and Mergesort2 on arrays of various sizes omitted.]

As before, arrays of the stated sizes are filled randomly, except for the columns marked as containing entries that are already sorted, or sorted in the reverse order. Quicksort2 and Mergesort2 are the variants in which the recursive procedure is abandoned in favour of Selection Sort once the size of the sub-array falls below a fixed small size. It should be emphasized again that these numbers are of limited accuracy, since they vary somewhat depending on the machine and language implementation.

What has to be stressed here is that there is no 'best sorting algorithm' in general, but that there are usually good and bad choices of sorting algorithm for particular circumstances. It is up to the program designer to make sure that an appropriate one is picked, depending on the properties of the data to be sorted, how it is best stored, whether all the sorted items are required rather than some sub-set, and so on.

Non-Comparison-Based Sorts

All the above sorting algorithms have been based on comparisons of the items to be sorted, and we have seen that we can't get a time complexity better than O(n log n) with comparison-based algorithms. However, in some circumstances it is possible to do better than that with sorting algorithms that are not based on comparisons.
Suppose, for example, that you know the items to be sorted are precisely the numbers from 1 to n. How would you sort those? The answer is surprisingly simple. We know that we have n entries in the array, and we know exactly which items should go there, and in which order. This is a very unusual situation as far as general sorting is concerned, yet this kind of thing often comes up in every-day life. For example, a hotel might need to sort the room keys for all of its rooms. Rather than employing one of the comparison-based sorting algorithms, in this situation we can do something much simpler. We can simply put the items directly in the appropriate places, using an algorithm such as the following:

   create array b of size n
   for ( i = 0 ; i < n ; i++ )
      b[a[i]-1] = a[i]
   copy array b into array a

(Simply put the items in the right order using their values.)

This algorithm uses a second array b to hold the results, which is clearly not very memory efficient, but it is possible to do without that: one can use a series of swaps within array a to get the items into the right positions, as follows:

   for ( i = 0 ; i < n ; i++ )
      while ( a[i] != i+1 )
         swap a[a[i]-1] and a[i]

(Swapping the items into the right order without using a new array.)

As far as time complexity is concerned, it is obviously not appropriate here to count the number of comparisons. Instead, it is the number of swaps or copies that is important. The first algorithm performs n copies to fill array b, and then another n to return the result to array a, so the overall time complexity is O(n). The time complexity of the second algorithm looks worse than it really is: it performs at most n-1 swaps, since each swap moves one item, namely the value a[i], into its final position a[a[i]-1]. So, at worst, this has time complexity O(n) too.

This example should make it clear that, in particular situations, sorting might be performed by much simpler (and quicker) means than the standard comparison sorts, though most realistic situations will not be quite as simple as the case here. Once again, it is the responsibility of the program designer to take this possibility into account.
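For concreteness, here is the second, in-place variant as a runnable Java sketch, assuming the array holds exactly the numbers 1 to n in some order:

   public class PermutationSort {
      // Each swap moves the value a[i] into its final position a[i]-1,
      // so at most n-1 swaps are performed in total.
      static void permutationSort(int[] a) {
         for (int i = 0; i < a.length; i++)
            while (a[i] != i + 1) {
               int v = a[i];
               a[i] = a[v - 1];
               a[v - 1] = v;
            }
      }
      public static void main(String[] args) {
         int[] a = {3, 1, 5, 2, 4};
         permutationSort(a);
         System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
      }
   }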
Bin, Bucket, and Radix Sorts

Bin, Bucket, and Radix Sorts are all names for essentially the same non-comparison-based sorting algorithm, which works well when the items are labelled by small sets of values. For example, suppose you are given a number of dates, by day and month, and need to sort them into order. One way of doing this would be to create a queue for each day, and place the items (dates) one at a time into the right queue according to their day (without sorting them further). Then form one big queue out of these, by concatenating all the day queues, starting with day 1 and continuing up to day 31. Then, for the second phase, create a queue for each month, and place the dates into the right queues in the order that they appear in the queue created by the first phase. Again form a big queue by concatenating these month queues in order. This final queue is sorted in the intended order.

This may seem surprising at first sight, so let us consider a simple example with twelve day/month dates:

   [23/1, 15/3, 9/1, 1/6, 23/3, 12/1, 31/12, 15/1, 9/5, 1/2, 12/3, 23/5]

We first create and fill queues for the days as follows (the empty queues are not shown; there is no need to create a queue before we hit an item that belongs to it):

   day 1:  [1/6, 1/2]
   day 9:  [9/1, 9/5]
   day 12: [12/1, 12/3]
   day 15: [15/3, 15/1]
   day 23: [23/1, 23/3, 23/5]
   day 31: [31/12]

Then concatenation of the queues gives:

   [1/6, 1/2, 9/1, 9/5, 12/1, 12/3, 15/3, 15/1, 23/1, 23/3, 23/5, 31/12]

Next we create and fill queues for the months that are present, giving:

   month 1:  [9/1, 12/1, 15/1, 23/1]
   month 2:  [1/2]
   month 3:  [12/3, 15/3, 23/3]
   month 5:  [9/5, 23/5]
   month 6:  [1/6]
   month 12: [31/12]

Finally, concatenating all these queues gives the items in the required order:

   [9/1, 12/1, 15/1, 23/1, 1/2, 12/3, 15/3, 23/3, 9/5, 23/5, 1/6, 31/12]

This is called two-phase radix sorting, since there are clearly two phases to it.
It should now be clear how this generalizes: for each phase, create an ordered set of queues corresponding to the possible values, then add each item in the order they appear to the end of the relevant queue, and finally concatenate the queues in order. Repeat this process for each sorting criterion. The crucial additional detail is that the queuing phases must be performed in the order of the significance of each criterion, with the least significant criterion first.

For example, if you know that your items to be sorted are all (at most) two-digit integers, you can use a radix sort to sort them: first create and fill queues for the last digit, concatenate, then create and fill queues for the first digit, and concatenate to leave the items in sorted order (a runnable sketch of exactly this case is given at the end of this section). Similarly, if you know that your keys are all strings consisting of three characters, you can again apply a radix sort: you would first queue according to the third character, then the second, and finally the first, giving a three-phase radix sort.

Note that at no point does the algorithm actually compare any items at all. This kind of algorithm makes use of the fact that, for each phase, the items are from a strictly restricted set, or, in other words, the items are of a particular form which is known a priori. The complexity class of this algorithm is O(n), since at every phase each item is dealt with precisely once, and the number of phases is assumed to be small and constant. If the restricted sets are small, the number of operations involved in finding the right queue for each item, and placing it at the end of it, will be small, but this could become significant if the sets are large. The concatenation of the queues will involve some overheads, of course, but these will be small if the sets are small and linked lists, rather than arrays, are used.

One has to be careful, however, because if the total number of operations for each item exceeds log n, then the overall complexity is likely to be greater than the O(n log n) complexity of the more efficient comparison-based algorithms. Also, if the restricted sets are not known in advance, and are potentially large, the overheads of finding and sorting them could render radix sort worse than using a comparison-based approach. Once again, it is the responsibility of the program designer to decide whether a given problem can be solved more efficiently with a radix sort rather than a comparison-based sort.
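As promised, here is a minimal runnable Java sketch of the two-phase radix sort for (at most) two-digit non-negative integers, using one queue per digit value. The class and method names are our own:

   import java.util.ArrayDeque;
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;
   import java.util.Queue;

   public class RadixSort {
      // One phase: distribute the items into ten queues according to one
      // digit (divisor 1 = last digit, 10 = first digit), then concatenate.
      static void phase(int[] a, int divisor) {
         List<Queue<Integer>> queues = new ArrayList<>();
         for (int d = 0; d < 10; d++) queues.add(new ArrayDeque<>());
         for (int x : a) queues.get((x / divisor) % 10).add(x);
         int i = 0;
         for (Queue<Integer> q : queues)
            while (!q.isEmpty()) a[i++] = q.poll();
      }
      // Two phases, least significant digit first.
      static void radixSort(int[] a) {
         phase(a, 1);
         phase(a, 10);
      }
      public static void main(String[] args) {
         int[] a = {53, 7, 91, 27, 42, 7, 60};
         radixSort(a);
         System.out.println(Arrays.toString(a)); // [7, 7, 27, 42, 53, 60, 91]
      }
   }

Note that the queues preserve the order in which equal digits arrive, which is exactly why the least-significant-first ordering of the phases works.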
Hash Tables

Storing Data

We have already seen a number of different ways of storing items in a computer: arrays and variants thereof (e.g. sorted and unsorted arrays, heap trees), linked lists (e.g. queues, stacks), and trees (e.g. binary search trees, heap trees). We have also seen that these approaches can perform quite differently when it comes to the particular tasks we expect to carry out on the items, such as insertion, deletion and searching, and that the best way of storing data does not exist in general, but depends on the particular application.

This chapter looks at another way of storing data that is quite different from the ones we have seen so far. The idea is to simply put each item in an easily determined location, so we never need to search for it, and have no ordering to maintain when inserting or deleting items. This has impressive performance as far as time is concerned, but that advantage is paid for by needing more space (i.e. memory), as well as by being more complicated, and therefore harder to describe and implement.

We first need to specify what we expect to be able to do with this way of storing data, without considering how it is actually implemented. In other words, we need to outline an abstract data type. This is similar to what you will generally do when first trying to implement a class in Java: you should think about the operations you wish to perform on the objects of that class. You may also want to specify a few variables that you know will definitely be needed for that class, but this does not usually come into defining an abstract data type. The approach we have been following for defining abstract data types in these notes is by describing the crucial operations in plain English, trusting that they are simple enough to not need further explanations. In general, what is needed is a specification for the abstract data type in question. An important aspect of studying software engineering is to learn about and use more formal approaches to this way of operating.

After we have decided what our specification is, we then need to choose a data structure in order to implement the abstract data type. The data structure to be considered in this chapter is a particular type of table known as a hash table.

The Table Abstract Data Type

The specification of the table abstract data type is as follows. A table can be used to store objects, for example:
   Johnny     English     Spy
   James      Bond        Spy
   Alex       Rider       Spy
   Sherlock   Holmes      Detective
   James      Moriarty    Villain

The objects can be arbitrarily complicated. However, for our purposes, the only relevant detail is that each object has a unique key, and that their keys can be compared for equality. The keys are used in order to identify objects, in much the way we have done for searching and sorting.

We assume that there are methods or procedures for:

   (a) determining whether the table is empty or full;
   (b) inserting a new object into the table, provided the table is not already full;
   (c) given a key, retrieving the object with that key;
   (d) given a key, updating the item with that key (usually by replacing the item with a new one with the same key, which is what we will assume here, or by overwriting some of the item's variables);
   (e) given a key, deleting the object with that key, provided that object is already stored in the table;
   (f) listing or traversing all the items in the table (if there is an order on the keys, then we would expect this to occur in increasing order).

Notice that we are assuming that each object is uniquely identified by its key.

In a programming language such as Java, we could write an interface for this abstract data type as follows, where we assume here that keys are objects of a class we call Key, and we have records of a class called Record:

   interface Table {
      boolean isEmpty();
      boolean isFull();
      void insert(Record r);
      Record retrieve(Key k);
      void update(Record r);
      void delete(Key k);
      void traverse();
   }

Note that we have not fixed how exactly the storage of records should work; that is something that comes with the implementation. Also note that you could give this interface to somebody else, who could then write a program which performs operations on tables without ever knowing how they are implemented. You could certainly carry out all those operations with binary search trees and sorted or unsorted arrays if you wished. The former even has the advantage that a binary search tree never becomes full as such, because it is only limited by the size of the memory.

This general approach follows the sensible and commonly used way to go about defining a Java class: first think about what it is you want to do with the class, and only then wonder about its implementation. Indeed, interfaces are one of Java's core
mechanisms for defining abstract data types. But notice that, as opposed to a specification in plain English, such as the above, a definition of an interface is only a partial specification of an abstract data type, because it does not explain what the methods are supposed to do; it only explains how they are called.

Implementations of the Table Data Structure

There are three key approaches for implementing the table data structure. The first two we have studied already, and the third is the topic of this chapter.

Implementation via sorted arrays. Let us assume that we want to implement the table data structure using a sorted array. Whether it is full or empty can easily be determined in constant time if we have a variable for the size. Then, to insert an element, we first have to find its proper position, which will take on average the same time as finding an element. To find an element (which is necessary for all other operations apart from traversal), we can use binary search, so this takes O(log n). This is also the complexity for retrieval and update. However, if we wish to delete or insert an item, we will have to shift everything 'to the right' of the location in question by one, either to the left (for deletion) or to the right (for insertion). This will take on average n/2 steps, so these operations have O(n) complexity. Traversal in order is simple, and is of O(n) complexity as well.

Implementation via binary search trees. A possible alternative implementation would involve using binary search trees. However, we know already that, in the worst case, the tree can be very deep and narrow, and that these trees will have linear complexity when it comes to looking up an entry. We have seen that there is a variant of binary search trees which keeps the worst case the same as the average case, the so-called self-balancing binary search tree, but that is more complicated to both understand and program. For those trees, insertion, deletion, search, retrieval and update can all be done with time complexity O(log n), and traversal has O(n) complexity.

Implementation via hash tables. The idea here is that, at the expense of using more space than strictly needed, we can speed up the table operations. The remainder of this chapter will describe how this is done, and what the various computational costs are.

Hash Tables

The underlying idea of a hash table is very simple, and quite appealing: assume that, given a key, there was a way of jumping straight to the entry for that key. Then we would never have to search at all; we could just go there! Of course, we still have to work out a way for that to be achieved. Assume that we have an array data to hold our entries. Now, if we had a function h(k) that maps each key k to the index (an integer) where the associated entry will be stored, then we could just look up data[h(k)] to find the entry with the key k.

It would be easiest if we could just make the data array big enough to hold all the keys that might appear. For example, if we knew that the keys were the numbers from 0 to m-1 for some manageable m, then we could just create an array of size m and store the entry with key k in data[k].
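In code, this direct addressing idea is trivial. The following minimal Java sketch (with our own simplified Record class, and assuming integer keys in a known range) shows why it is so appealing, before we see next why it is usually impractical:

   public class DirectAddressTable {
      static class Record {
         int key;                  // assumed to lie in the range 0..m-1
         String value;
         Record(int key, String value) { this.key = key; this.value = value; }
      }
      private final Record[] data;
      DirectAddressTable(int m) { data = new Record[m]; }
      void insert(Record r)    { data[r.key] = r; }   // jump straight to the slot
      Record retrieve(int key) { return data[key]; }  // no searching needed
      void delete(int key)     { data[key] = null; }
   }

Insertion, retrieval and deletion are each a single array access, i.e. O(1), but an array as big as the whole key space is needed.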
This idea is not very practical if we are dealing with a relatively small number of keys out of a huge collection of possible keys. For example, many American companies use their employees' 9-digit social security number as a key, even though they have nowhere near 10^9 employees. British National Insurance Numbers are even worse, because they are just as long and usually contain a mixture of letters and numbers. Clearly, it would be very inefficient, if not impossible, to reserve space for all the social security numbers which might occur.

Instead, we use a non-trivial function h, the so-called hash function, to map the space of possible keys to the set of indices of our array. For example, if we had to store entries about a few hundred employees, we might create an array with 1000 entries and use three digits from their social security number (maybe the first or last three) to determine the place in the array where the records for each particular employee should be stored.

This approach sounds like a good idea, but there is a pretty obvious problem with it: what happens if two employees happen to have the same three digits? This is called a collision between the two keys. Much of the remainder of this chapter will be spent on the various strategies for dealing with such collisions.

First of all, of course, one should try to avoid collisions. If the keys that are likely to actually occur are not evenly spread throughout the space of all possible keys, particular attention should be paid to choosing the hash function in such a way that collisions among them are less likely to occur. If, for example, the first three digits of a social security number had a geographical meaning, then employees would be particularly likely to have the three digits signifying the region where the company resides, and so choosing the first three digits as a hash function might result in many collisions. However, that problem might easily be avoided by a more prudent choice, such as the last three digits.

Collision Likelihoods and Load Factors for Hash Tables

One might be tempted to assume that collisions do not occur very often when only a small subset of the set of possible keys is chosen, but this assumption is mistaken.

The von Mises birthday paradox. As an example, consider a collection of people, and a hash function that gives their birthdays as the number of the day in the year, i.e. 1st January is 1, 2nd January is 2, ..., 31st December is 365. One might think that, if all we want to do is store a modest number of, say, 24 people in this way in an array with 365 locations, collisions will be rather unlikely. However, it turns out that the probability of a collision is bigger than 50%. This is so surprising at first sight that this phenomenon has become known as the von Mises birthday paradox, although it is not really a paradox in the strict sense.

It is easy to understand what is happening. Suppose we have a group of n people and want to find out how likely it is that two of them have the same birthday, assuming that the birthdays are uniformly distributed over the 365 days of the year. Let us call this probability p(n). It is actually easier to first compute the probability q(n) that no two of them share a birthday, and then p(n) = 1 - q(n). For n = 1 this probability is clearly q(1) = 1. For n = 2 we get q(2) = 364/365 because, for the added second person, 364 of the 365 days are not the birthday of the first person. For n = 3 we get
q(3) = (364/365) x (363/365) and, in general,

   q(n) = (364/365) x (363/365) x ... x ((366-n)/365).

It may be surprising that p(22) = 0.476 and p(23) = 0.507, which means that as soon as there are more than 22 people in a group, it is more likely that two of them share a birthday than not. Note that, in the real world, the distribution of birthdays over the year is not precisely uniform, but this only increases the probability that two people have the same birthday. In other words, birthday collisions are much more likely than one might think at first.

Implications for hash tables. If 24 random locations in a table of size 365 have a more than 50% chance of overlapping, it seems inevitable that collisions will occur in any hash table that does not waste an enormous amount of memory. And collisions will be even more likely if the hash function does not distribute the items randomly throughout the table. To compute the computational efficiency of a hash table, we need some way of quantifying how full the table is, so we can compute the probability of collisions, and hence determine how much effort will be required to deal with them.

The load factor of a hash table. Suppose we have a hash table of size m, and it currently has n entries. Then we call l = n/m the load factor of the hash table. This load factor is the obvious way of describing how full the table currently is: a hash table with load factor 0.25 is 25% full, one with load factor 0.50 is 50% full, and so on. Then, if we have a hash table with load factor l, the probability that a collision occurs for the next key we wish to insert is l. This assumes that each key from the key space is equally likely, and that the hash function h spreads the key space evenly over the set of indices of our array. If these optimistic assumptions fail, then the probability may be even higher.

Therefore, to minimize collisions, it is prudent to keep the load factor low. Fifty percent is an often quoted good maximum figure, while beyond an eighty percent load the performance deteriorates considerably. We shall see later exactly what effect the table's load factor has on the speed of the operations we are interested in.

A simple hash table in operation. Let us assume that we have a small data array we wish to use, of size 11, and that our set of possible keys is the set of 3-character strings, where each character is in the range from A to Z. Obviously, this example is designed to illustrate the principle; typical real-world hash tables are usually very much bigger, involving arrays that may have a size of thousands, millions, or tens of millions, depending on the problem.

We now have to define a hash function which maps each string to an integer in the range 0 to 10. Let us consider one of the many possibilities. We first map each string to a number as follows: each character is mapped to an integer from 0 to 25, using its place in the alphabet (A is the first letter, so it goes to 0, B the second, so it goes to 1, and so on, with Z getting the value 25). The string X1X2X3 therefore gives us three numbers from 0 to 25, say k1, k2 and k3. We can then map the whole string to the number calculated as

   k = k1 * 26^2 + k2 * 26 + k3 = k1 * 676 + k2 * 26 + k3
Now it is quite easy to go from any number k (rather than a string) to a number from 0 to 10: for example, we can take the remainder the number leaves when divided by 11. This is the C and Java modulus operation k % 11. So our hash function is

   h(X1X2X3) = (k1 * 676 + k2 * 26 + k3) % 11 = k % 11

This modulo operation, and modular arithmetic more generally, are widely used when constructing good hash functions.

As a simple example of a hash table in operation, assume that we now wish to insert the following three-letter airport acronyms as keys (in this order) into our hash table: PHL, ORY, GCM, HKG, GLA, AKL, FRA, LAX, DCA. To make this easier, it is a good idea to start by listing the values the hash function takes for each of the keys:

   Code   PHL  ORY  GCM  HKG  GLA  AKL  FRA  LAX  DCA
   h(k)     4    8    6    4    8    7    5    1    1

It is clear already that we will have hash collisions to deal with. We naturally start off with an empty table of the required size, i.e.:

   [ -, -, -, -, -, -, -, -, -, -, - ]

Clearly, we have to be able to tell whether a particular location in the array is still empty, or whether it has already been filled. We can assume that there is a unique key or entry (which is never associated with a record) which denotes that the position has not been filled yet. However, for clarity, this key will not appear in the pictures we use.

Now we can begin inserting the keys in order. The number associated with the first item PHL is 4, so we place it at index 4, giving:

   [ -, -, -, -, PHL, -, -, -, -, -, - ]

Next is ORY, which gives us the number 8, so we get:

   [ -, -, -, -, PHL, -, -, -, ORY, -, - ]

Then we have GCM, with value 6, giving:

   [ -, -, -, -, PHL, -, GCM, -, ORY, -, - ]

Then HKG, which also has value 4, results in our first collision, since the corresponding position has already been filled with PHL. Now we could, of course, try to deal with this by simply saying the table is full, but this gives such poor performance (due to the frequency with which collisions occur) that it is unacceptable.
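For concreteness, the primary hash function of this running example can be written in Java as follows; this is a sketch for exactly the 3-character upper-case keys described above, and the class and method names are our own:

   public class StringHash {
      static final int TABLESIZE = 11;
      // Map a 3-character string of letters A..Z to an index in 0..10,
      // treating the characters as digits of a base-26 number.
      static int h(String s) {
         int k1 = s.charAt(0) - 'A';   // A -> 0, B -> 1, ..., Z -> 25
         int k2 = s.charAt(1) - 'A';
         int k3 = s.charAt(2) - 'A';
         return (k1 * 676 + k2 * 26 + k3) % TABLESIZE;
      }
      public static void main(String[] args) {
         for (String code : new String[]{"PHL", "ORY", "GCM", "HKG"})
            System.out.println(code + " -> " + h(code));   // 4, 8, 6, 4
      }
   }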
Strategies for Dealing with Collisions

We now look at three standard approaches, of increasing complexity, for dealing with hash collisions.

Buckets. One obvious approach is to reserve a two-dimensional array from the start. We can think of each column as a bucket in which we throw all the elements which give a particular result when the hash function is applied, so the fifth column contains all the keys for which the hash function evaluates to 4. Then we could put HKG into the slot 'beneath' PHL, and GLA in the one beneath ORY, and continue filling the table in the order given until we reach:

   0     1     2     3     4     5     6     7     8     9     10
         LAX               PHL   FRA   GCM   AKL   ORY
         DCA               HKG                     GLA

The disadvantage of this approach is that it has to reserve quite a bit more space than will eventually be required, since it must take into account the likely maximal number of collisions. Even while the table is still quite empty overall, collisions will become increasingly likely. Moreover, when searching for a particular key, it will be necessary to search the entire column associated with its expected position, at least until an empty slot is reached. If there is an order on the keys, they can be stored in ascending order, which means we can use the more efficient binary search rather than linear search, but the ordering will have an overhead of its own. The average complexity of searching for a particular item depends on how many entries in the array have been filled already. This approach turns out to be slower than the other techniques we shall consider, so we shall not spend any more time on it, apart from noting that it does prove useful when the entries are held in slow external storage.

Direct chaining. Rather than reserving entire sub-arrays (the columns above) for keys that collide, one can instead create a linked list for the set of entries corresponding to each key. The result for the above example can be pictured something like this:

   index 1: LAX -> DCA
   index 4: PHL -> HKG
   index 5: FRA
   index 6: GCM
   index 7: AKL
   index 8: ORY -> GLA

This approach does not reserve any space that will not be taken up, but has the disadvantage that, in order to find a particular item, lists will have to be traversed. However, adding the hashing step still speeds up retrieval considerably.

We can compute the size of the average non-empty list occurring in the hash table as follows. With n items in an array of size m, the probability that no items land in a particular slot is

   q(n,m) = ((m-1)/m)^n

so the number of slots with at least one item falling in it is

   N(n,m) = m (1 - q(n,m)) = m (1 - ((m-1)/m)^n)
and the average number of items in each non-empty list is then

   A(n,m) = n / N(n,m) = n / ( m (1 - ((m-1)/m)^n) )

A linear search for an item in a list of size k takes on average (k + 1)/2 comparisons. It is difficult to visualize what these formulae mean in practice, but if we assume the hash table is large but not overloaded, i.e. n and m are both large with n << m, we can perform a Taylor approximation for small load factor l = n/m. That shows there are

   1 + l/2 + O(l^2)

comparisons on average for a successful search, i.e. that this has O(1) complexity.

For an unsuccessful search, we need the average list size including the empty slots. That will clearly be n/m = l, and so in an unsuccessful search the average number of comparisons made to decide that the item in question is not present will be l, which is again O(1).

Thus, neither the successful nor unsuccessful search times depend on the number of keys in the table, but only on the load factor, which can be kept low by choosing the size of the hash table to be big enough. Note also that insertion is done even more speedily, since all we have to do is to insert a new element at the front of the appropriate list. Hence, apart from traversal, the complexity class of all operations is constant, i.e. O(1). For traversal, we need to sort the keys, which can be done in O(n log n), as we know from the chapter on sorting. A variant would be to make each linked list sorted, which will speed up finding an item, as well as speed up traversal slightly, although this will not put either operation into a different complexity class. This speed-up would be paid for by making the insertion operation more expensive, i.e. take slightly longer, but it will still have constant complexity.

Overall, all the time complexities for this approach are clearly very impressive compared to those for sorted arrays or (balanced) binary search trees.

Open addressing. The last fundamentally different approach to collision avoidance is called open addressing, and that involves finding another open location for any entry which cannot be placed where its hash function points. We refer to that position as a key's primary position (so in the earlier example, ORY and GLA have the same primary position). The easiest strategy for achieving this is to search for open locations by simply decreasing the index considered by one until we find an empty space. If this reaches the beginning of the array, i.e. index 0, we start again at the end. This process is called linear probing. A better approach is to search for an empty location using a secondary hash function. This process is called double hashing. We will now look at both of these approaches in some detail.

Linear probing. We now proceed with the earlier example using linear probing. We had reached the stage:

   [ -, -, -, -, PHL, -, GCM, -, ORY, -, - ]
The next key, HKG, collides with PHL at index 4. Linear probing reduces the index by one to 3, and finds an empty location in that position, so we put HKG there, giving:

   [ -, -, -, HKG, PHL, -, GCM, -, ORY, -, - ]

Next we wish to insert GLA, with hash value 8, but the location with that index is already filled by ORY. Again, linear probing reduces the index by one, and since that slot one to the left is free, we insert GLA there:

   [ -, -, -, HKG, PHL, -, GCM, GLA, ORY, -, - ]

Then we have AKL, and although we have not had the value 7 before, the corresponding location is filled by GLA. So we try the next index down, but that contains GCM, so we continue to the next one at index 5, which is empty, so we put AKL there:

   [ -, -, -, HKG, PHL, AKL, GCM, GLA, ORY, -, - ]

We now continue in the same way with the remaining keys, eventually reaching:

   [ DCA, LAX, FRA, HKG, PHL, AKL, GCM, GLA, ORY, -, - ]

This looks quite convincing: all the keys have been inserted in a way that seems to make good use of the space we have reserved.

However, what happens now if we wish to find a particular key? It will no longer be good enough to simply apply the hash function to it and check there. Instead, we will have to follow its possible insertion locations until we hit an empty one, which tells us that the key we were looking for is not present, after all, because it would have been inserted there. This is why every hash table that uses open addressing should have at least one empty slot at any time, and be declared full when only one empty location is left. However, as we shall see, hash tables lose much of their speed advantage if they have a high load factor, so, as a matter of policy, many more locations should be kept empty.

So, to find the key AKL, we would first check at index 7, then at 6 and 5, where we are successful. Searching for JFK, on the other hand, we would start with its proper position, given by the hash function, and keep checking successive indices downwards until we find an empty space, which tells us that JFK is, in fact, not present at all. This looks pretty bad at first sight, but bear in mind that we said that we will aim towards keeping the load factor at around fifty percent, so there would be many more empty slots which effectively stop any further search.

But this idea brings another problem with it. Suppose we now delete GCM from the table and then search for AKL again. We would find the array empty at index 6 and stop searching, and therefore wrongly conclude that AKL is not present. This is clearly not acceptable, but, equally, we do not wish to have to search through the entire array to be sure that an entry is not there. The solution is that we reserve another key to mean that a position is empty, but that it did hold a key at some point. Let us assume that we use the character '!' for that. Then, after deleting GCM, the array would be:

   [ DCA, LAX, FRA, HKG, PHL, AKL, !, GLA, ORY, -, - ]
When we subsequently search for a key, a '!' must be treated as an occupied position, and the search continues past it. If, on the other hand, we are trying to insert a key, then we can ignore any exclamation marks and fill the position once again. This now does take care of all our problems, although, if we do a lot of deleting and inserting, we will end up with a table which is a bit of a mess. A large number of exclamation marks means that we have to keep looking for a long time to find a particular entry, despite the fact that the load factor may not be all that high. This happens if deletion is a frequent operation. In such cases, it may be better to re-fill a new hash table again from scratch, or use another implementation.

Search complexity. The complexity of open addressing with linear probing is rather difficult to compute, so we will not attempt to present a full account of it here. If l is once again the load factor of the table, then a successful search can be shown to take

   (1/2) (1 + 1/(1-l))

comparisons on average, while an unsuccessful search takes approximately

   (1/2) (1 + 1/(1-l)^2)

For relatively small load factors, this is quite impressive, and even for larger ones, it is not bad. Thus, the hash table time complexity for search is again constant, i.e. O(1).

Clustering. There is a particular problem with linear probing, namely what is known as primary and secondary clustering. Consider what happens if we try to insert two keys that have the same result when the hash function is applied to them. Take the above example with the hash table at the stage where we had just inserted GLA:

   [ -, -, -, HKG, PHL, -, GCM, GLA, ORY, -, - ]

If we next try to insert JFK, we note that the hash function evaluates to 8 once again, so we keep checking the same locations we only just checked in order to insert GLA. This seems a rather inefficient way of doing things. This effect is known as primary clustering, because the new key JFK will be inserted close to the previous key with the same primary position, GLA. It means that we get a continuous 'block' of filled slots, and whenever we try to insert any key which is sent into the block by the hash function, we will have to test all locations until we hit the end of the block, and then make such a block even bigger by appending another entry at its end. So these blocks, or clusters, keep growing, not only if we hit the same primary location repeatedly, but also if we hit anything that is part of the same cluster. The last effect is called secondary clustering. Note that searching for keys is also adversely affected by these clustering effects.

Double hashing. The obvious way to avoid the clustering problems of linear probing is to do something slightly more sophisticated than trying every position to the left until we find an empty one. This is known as double hashing. We apply a secondary hash function to tell us how many slots to jump to look for an empty slot if a key's primary position has been filled already.

Like the primary hash function, there are many possible choices of the secondary hash function. In the above example, one thing we could do is take the same number k associated with the three-character code, and use the result of integer division by 11, instead of the remainder, as the secondary hash function. However, the resulting value might be bigger than 10, so, to prevent the jump looping round back to, or beyond, the starting point, we first take the result modulo 11 of the number
divided by 11. Thus we would like to use as our secondary hash function h2(n) = (n div 11) mod 11. However, this has yet another problem: it might give zero at some point, and we obviously cannot test 'every zeroth location'. An easy solution is to simply make the secondary hash function one if the above would evaluate to zero, that is:

    h2(n) = 1                    if (n div 11) mod 11 = 0
    h2(n) = (n div 11) mod 11    otherwise

The values of this secondary hash function for our example set of keys are given in the following table:

    code s :  PHL  ORY  GCM  HKG  GLA  AKL  FRA  LAX  DCA  BHX
    h2(s)  :  ...  ...  ...   3    9   ...  ...  ...  ...  ...

We can then proceed from the situation we were in when the first collision occurred:

    index:  0    1    2    3    4    5    6    7    8    9    10
    key:    -    -    -    -    PHL  -    GCM  -    ORY  -    -

with HKG the next key to insert, which gives a collision with PHL. Since h2(HKG) = 3, we now try every third location to the left in order to find a free slot:

    index:  0    1    2    3    4    5    6    7    8    9    10
    key:    -    HKG  -    -    PHL  -    GCM  -    ORY  -    -

Note that this did not create a block. When we now try to insert GLA, we once again find its primary location blocked by ORY. Since h2(GLA) = 9, we now try every ninth location, counting to the left from ORY. That gets us (starting again from the back when we reach the first slot) to the last location overall, index 10:

    index:  0    1    2    3    4    5    6    7    8    9    10
    key:    -    HKG  -    -    PHL  -    GCM  -    ORY  -    GLA

Note that we still have not got any blocks, which is good. Further note that most keys which share the same primary location with ORY and GLA will follow a different route when trying to find an empty slot, thus avoiding primary clustering. Here is the result when filling the table with the remaining keys given, reading the occupied slots from left to right:

    HKG  DCA  PHL  FRA  GCM  AKL  ORY  LAX  GLA

Our example is too small to show convincingly that this method also avoids secondary clustering, but in general it does.

It is clear that the trivial secondary hash function h2(n) = 1 reduces this approach to that of linear probing. It is also worth noting that, in both cases, proceeding to secondary positions to the left is merely a convention; it could equally well be to the right, but obviously it has to be made clear which direction has been chosen for a particular hash table.

Search complexity. The efficiency of double hashing is even more difficult to compute than that of linear probing, and therefore we shall just give the results without a derivation. With load factor λ, a successful search requires (1/λ) ln(1/(1−λ)) comparisons on average, and an unsuccessful one requires 1/(1−λ). Note that it is the natural logarithm (to base e) that occurs here, rather than the usual logarithm to base 2. Thus, the hash table time complexity for search is again constant, i.e. O(1).
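Both probing strategies, together with the '!' markers used for deletion, are easy to express in code. The following Java sketch is an illustration under stated assumptions rather than anything prescribed above: the class name, the fixed table size of 11, and the use of Java's built-in hashCode in place of the three-character-code arithmetic are all choices made here, and making step return 1 would turn double hashing back into plain linear probing.

    class OpenAddressingTable {
        private static final String DELETED = "!";      // tombstone marker, as in the text
        private final String[] slots = new String[11];  // prime size; one slot must always stay empty

        private int hash(String key) { return Math.abs(key.hashCode()) % slots.length; }

        // Secondary hash function: must never be 0, and should share no common
        // divisor with the table size. Returning 1 here gives linear probing.
        private int step(String key) {
            int s = (Math.abs(key.hashCode()) / slots.length) % slots.length;
            return (s == 0) ? 1 : s;
        }

        // One probe to the left, wrapping round at the front of the table.
        private int left(int i, int d) {
            return ((i - d) % slots.length + slots.length) % slots.length;
        }

        public void insert(String key) {
            int i = hash(key);
            while (slots[i] != null && slots[i] != DELETED)  // tombstones may be refilled
                i = left(i, step(key));
            slots[i] = key;
        }

        public boolean search(String key) {
            int i = hash(key);
            while (slots[i] != null) {                       // a truly empty slot ends the search
                if (slots[i] != DELETED && slots[i].equals(key)) return true;
                i = left(i, step(key));                      // step over tombstones and mismatches
            }
            return false;
        }

        public void delete(String key) {
            int i = hash(key);
            while (slots[i] != null) {
                if (slots[i] != DELETED && slots[i].equals(key)) { slots[i] = DELETED; return; }
                i = left(i, step(key));
            }
        }
    }

Note that insert assumes the table is never allowed to become completely full, in line with the policy above of always keeping some slots empty.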
Choosing good hash functions. In principle, any convenient function can be used as a primary hash function. However, what is important when choosing a good hash function is to make sure that it spreads the space of possible keys onto the set of hash table indices as evenly as possible, or more collisions than necessary will occur. Secondly, it is advantageous if any potential clusters in the space of possible keys are broken up (something that the remainder in a division will not do), because in that case we could end up with a 'continuous run' and associated clustering problems in the hash table. Therefore, when defining hash functions of strings of characters, it is never a good idea to make the last (or even the first) few characters decisive.

When choosing secondary hash functions, in order to avoid primary clustering, one has to make sure that different keys with the same primary position give different results when the secondary hash function is applied. Secondly, one has to be careful to ensure that the secondary hash function cannot result in a number which has a common divisor with the size of the hash table. For example, if the hash table has an even size, and we get a secondary hash function which gives 2 (or 4 or 6) as a result, then only half of the locations will be checked, which might result in failure (an endless loop, for example) while the table is still half empty. Even for large hash tables, this can still be a problem if the secondary hash keys can be similarly large. A simple remedy for this is to always make the size of the hash table a prime number.

Complexity of hash tables. We have already seen that insert, search and delete all have O(1) time complexity if the load factor of the hash table is kept reasonably low, e.g. below 0.5, but having higher load factors can considerably slow down the operations.

The crucial search time complexity of a particular form of hash table is determined by counting the average number of location checks that are needed when searching for items in the table when it has a particular load factor, and that will depend on whether the item is found. The following table shows the average number of locations that need to be checked to conduct successful and unsuccessful searches in hash tables with different collision handling strategies, depending on the load factor given in the top row (the entries follow from the formulas given above). It shows how the different approaches and cases vary differently as the table becomes closer to fully loaded:

    load factor           0.10   0.25   0.50   0.75   0.90    0.99
    successful search
      direct chaining     1.05   1.12   1.25   1.37   1.45    1.49
      linear probing      1.06   1.17   1.50   2.50   5.50   50.50
      double hashing      1.05   1.15   1.39   1.85   2.56    4.65
    unsuccessful search
      direct chaining     0.10   0.25   0.50   0.75   0.90    0.99
      linear probing      1.12   1.39   2.50   8.50  50.50 5000.50
      double hashing      1.11   1.33   2.00   4.00  10.00  100.00

It also shows the considerable advantage that double hashing has over linear probing, particularly when the load factors become large. Whether or not double hashing is preferable to
linear probing (which is easier to implement and maintain) is dependent on the circumstances.

The following table shows a comparison of the average time complexities for the different possible implementations of the table interface:

                  sorted array    balanced BST    hash table
    search        O(log n)        O(log n)        O(1)
    insert        O(n)            O(log n)        O(1)
    delete        O(n)            O(log n)        O(1)
    traverse      O(n)            O(n)            O(n log n)

Hash tables are seen to perform rather well: the complexity of searching, updating and retrieving are all independent of the table size. In practice, however, when deciding what approach to use, it will depend on the mix of operations typically performed. For example, lots of repeated deletions and insertions can cause efficiency problems with some hash table strategies, as explained above. To give a concrete example: with n entries in a balanced binary search tree, a successful search takes on average about log2(n) comparisons, whereas with a hash table only a small constant number of comparisons is needed (fewer than two on average with double hashing), provided that we keep its load factor below 50 percent. Of course, despite their time advantage, we should never forget that hash tables have a considerable disadvantage in terms of the memory required to implement them efficiently.
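To make the advice above about choosing hash functions concrete, here is one common construction: a polynomial hash over all the characters of a string. It is a minimal sketch, not something prescribed by the text; the multiplier 31 and the recommendation of a prime table size are illustrative choices.

    // Every character influences the result, so neither the first nor the
    // last few characters dominate, and similar keys are spread apart.
    static int hash(String key, int tableSize) {   // tableSize is best chosen prime
        int h = 0;
        for (int i = 0; i < key.length(); i++)
            h = (31 * h + key.charAt(i)) % tableSize;
        return h;
    }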
Graphs

Often it is useful to represent information in a more general graphical form than considered so far, such as the following representation of the distances between towns:

[figure: a weighted graph whose vertices are the towns Glasgow, Edinburgh, Newcastle, Manchester, Birmingham, Swansea, London and Exeter, with numerical edge labels giving the distances between them]

With similar structures (maybe leaving out the distances, or replacing them by something else), we could represent many other situations, like an underground tunnel network, or a network of pipes (where the number labels might give the pipe diameters), or a railway map, or an indication of which cities are linked by flights, or ferries, or political alliances. Even if we assume it is a network of paths or roads, the numbers do not necessarily have to represent distances; they might be an indication of how long it takes to cover the distance in question on foot, so a given distance up a steep hill would take longer than on even ground.

There is much more that can be done with such a picture of a situation than just reading off which place is directly connected with another place: for example, we can ask ourselves
questions such as: what would be the shortest set of pipes connecting all the locations? There is also the famous travelling salesman problem, which involves finding the shortest route through the structure that visits each city precisely once.

Graph terminology. The kind of structure in the above figure is known formally as a graph. A graph consists of a series of nodes (also called vertices or points) and edges (also called lines, links or, in directed graphs, arcs), displayed as connections between the nodes. There exists quite a lot of terminology that allows us to specify graphs precisely.

A graph is said to be simple if it has no self-loops (i.e., edges connected at both ends to the same vertex) and no more than one edge connecting any pair of vertices. The remainder of this chapter will assume that, which is sufficient for most practical applications.

If there are labels on the edges (usually non-negative real numbers), we say that the graph is weighted.

We distinguish between directed and undirected graphs. In directed graphs (also called digraphs), each edge comes with one or two directions, which are usually indicated by arrows. Think of them as representing roads, where some roads may be one-way only. Or think of the associated numbers as applying to travel in one way only, such as going up a hill, which takes longer than coming down. An example of an unweighted digraph is:

[figure: an unweighted digraph]

and an example of a weighted digraph, because it has labels on its edges, is:

[figure: a weighted digraph]

In undirected graphs, we assume that every edge can be viewed as going both ways, that is, an edge between A and B goes from A to B as well as from B to A. The first graph given at the beginning of this chapter is weighted and undirected.

A path is a sequence of vertices v1, v2, ..., vn such that vi and vi+1 are connected by an edge, for all 1 <= i <= n-1. Note that, in a directed graph, the edge from vi to vi+1 is the one which has the corresponding direction. A circle is a non-empty path whose first vertex is the same as its last vertex. A path is simple if no vertex appears on it twice (with the exception of a circle, where the first and last vertex may be the same; this is because we have to 'cut open' the circle at some point to get a path, so this is inevitable).
An undirected graph is said to be connected if every pair of its vertices is linked by a path. For directed graphs, the notion of connectedness has two distinct versions: we say that a digraph is weakly connected if for every two vertices A and B there is either a path from A to B or a path from B to A. We say it is strongly connected if there are paths leading both ways. So, in a weakly connected digraph, there may be two vertices A and B such that there exists no path from A to B.

A graph clearly has many properties similar to a tree. In fact, any tree can be viewed as a simple graph of a particular kind, namely one that is connected and contains no circles. Because a graph, unlike a tree, does not come with a natural 'starting point' from which there is a unique path to each vertex, it does not make sense to speak of parents and children in a graph. Instead, if two vertices A and B are connected by an edge e, we say that they are neighbours, and the edge connecting them is said to be incident to A and B. Two edges that have a vertex in common (for example, one connecting A and B and one connecting B and C) are said to be adjacent.

Implementing graphs. All the data structures we have considered so far were designed to hold certain information, and we wanted to perform certain actions on them which mostly centred around inserting new items, deleting particular items, searching for particular items, and sorting the collection. At no time was there ever a connection between all the items represented, apart from the order in which their keys appeared. Moreover, that connection was never something that was inherent in the structure and that we therefore tried to represent somehow: it was just a property that we used to store the items in a way which made sorting and searching quicker. Now, on the other hand, it is the connections that are the crucial information we need to encode in the data structure. We are given a structure which comes with specified connections, and we need to design an implementation that efficiently keeps track of them.

Array-based implementation. The first underlying idea for array-based implementations is that we can conveniently rename the vertices of the graph so that they are labelled by non-negative integer indices, say from 0 to n-1, if they do not have these labels already. However, this only works if the graph is given explicitly, that is, if we know in advance how many vertices there will be, and which pairs will have edges between them. Then we only need to keep track of which vertex has an edge to which other vertex, and, for weighted graphs, what the weights on the edges are. For unweighted graphs, we can do this quite easily in an n x n two-dimensional binary array adj, also called a matrix: the so-called adjacency matrix. In the case of weighted graphs, we instead have an n x n weight matrix weights. The array/matrix representations for the two example graphs shown above are then:

[figure: the adjacency matrix of the unweighted digraph and the weight matrix of the weighted digraph]
In the first case, a 0 reads as false, that is, there is no edge from the vertex of that row to the vertex of that column; a 1, on the other hand, reads as true, indicating that there is an edge. It is often useful to use boolean values here, rather than the numbers 0 and 1, because it allows us to carry out operations on the booleans. In the second case, we have a weighted graph, and we have the real-valued weights in the matrix instead, using the infinity symbol to indicate when there is no edge.

For an undirected graph, if there is a 1 in the ith column and the jth row, we know that there is an edge from vertex i to the vertex with the number j, which means there is also an edge from vertex j to vertex i. This means that adj[i][j] == adj[j][i] will hold for all i and j from 0 to n-1, so there is some redundant information here. We say that such a matrix is symmetric: it equals its mirror image along the main diagonal.

Mixed implementation. There is a potential problem with the adjacency/weight matrix representation: if the graph has very many vertices, the associated array will be extremely large (e.g., 1,000,000 entries are needed if the graph has just 1000 vertices). Then, if the graph is sparse (i.e., has relatively few edges), the adjacency matrix contains many 0s and only a few 1s, and it is a waste of space to reserve so much memory for so little information.

A solution to this problem is to number all the vertices as before, but, rather than using a two-dimensional array, use a one-dimensional array that points to a linked list of neighbours for each vertex. For example, the above weighted graph can be represented as follows, with each triple consisting of a vertex name, a connection weight, and a pointer to the next triple:

[figure: an array of list headers, one per vertex, each pointing to a linked list of (name, weight, next) triples]

If there are very few edges, we will have very short lists at each entry of the array, thus saving space over the adjacency/weight matrix representation. This implementation is using so-called adjacency lists. Note that, if we are considering undirected graphs, there is still a certain amount of redundancy in this representation, since every edge is represented twice, once in each list corresponding to the two vertices it connects. In Java, this could be accomplished with something like:

    class Graph {
        Vertex[] heads;

        private class Vertex {
            int name;
            double weight;
            Vertex next;
            // ... methods for vertices
        }
        // ... methods for graphs
    }
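As a complement to that skeleton, the following self-contained sketch shows how such adjacency lists might be built and walked. The class and method names, and the choice of inserting each new triple at the head of its list, are illustrative assumptions made here, not the text's own code.

    // A minimal adjacency-list graph (vertex indices and weights are illustrative).
    class ListGraph {
        static class Edge { int to; double weight; Edge next; }

        Edge[] heads;

        ListGraph(int n) { heads = new Edge[n]; }

        // O(1) insertion at the head of the source vertex's list.
        void addEdge(int from, int to, double weight) {
            Edge e = new Edge();
            e.to = to; e.weight = weight; e.next = heads[from];
            heads[from] = e;
        }

        // Walk every list, printing each stored edge.
        void print() {
            for (int v = 0; v < heads.length; v++)
                for (Edge e = heads[v]; e != null; e = e.next)
                    System.out.println(v + " -> " + e.to + "  (weight " + e.weight + ")");
        }
    }

For an undirected graph, each edge would be added twice, once in each direction, which is exactly the redundancy noted above: g.addEdge(u, v, w); g.addEdge(v, u, w);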
Pointer-based implementation. The standard pointer-based implementation of trees, which is essentially a generalization of linked lists, can be generalized for graphs. In a language such as Java, a class Graph might have the following as an internal class:

    class Vertex {
        String name;
        Vertex[] neighbours;
        double[] weights;
    }

When each vertex is created, an array neighbours big enough to accommodate (pointers to) all its neighbours is allocated, with (for weighted graphs) an equal sized array weights to accommodate the associated weights. We then place the neighbours of each vertex into those arrays in some arbitrary order. Any entries in the neighbours array that are not needed will hold a null pointer, as usual. For example, the above weighted graph would be represented as follows, with each weight shown following the associated pointer:

[figure: each vertex object with its array of neighbour pointers and the associated weights]

Relations between graphs. Many important theorems about graphs rely on formal definitions of the relations between them, so we now define the main relevant concepts. Two graphs are said to be isomorphic if they contain the same number of vertices with the same pattern of adjacency, i.e. there is a bijection between their vertices which preserves the adjacency relations. A subgraph of a graph G is defined as any graph that has a vertex set which is a subset of that of G, with adjacency relations which are a subset of those of G. Conversely, a supergraph of a graph G is defined as any graph which has G as a subgraph. Finally, a graph G is said to contain another graph H if there exists a subgraph of G that is either H or isomorphic to H.

A subdivision of an edge e with endpoints u and v is simply the pair of edges e1, with endpoints u and w, and e2, with endpoints w and v, for some new vertex w. The reverse operation of smoothing removes a vertex w with exactly two edges e1 and e2, leaving an edge e connecting the two adjacent vertices u and v.
Two graphs G1 and G2 can then be defined as being homeomorphic if there is a graph isomorphism from some subdivision of G1 to some subdivision of G2.

An edge contraction removes an edge from a graph and merges the two vertices previously connected by it. This can lead to multiple edges between a pair of vertices, or self-loops connecting a vertex to itself. These are not allowed in simple graphs, in which case some edges may need to be deleted. Then an undirected graph H is said to be a minor of another undirected graph G if a graph isomorphic to H can be obtained from G by contracting some edges, deleting some edges, and deleting some isolated vertices.

Planarity. A planar graph is a graph that can be embedded in a plane. In other words, it can be drawn on a sheet of paper in such a way that no edges cross each other. This is important in applications such as printed circuit design. Note that it is clearly possible for planar graphs to be drawn in such a way that their edges do cross each other, but the crucial thing is that they can be transformed (by moving vertices and/or deforming the edges) into a form without any edges crossing. For example, the following three diagrams all represent the same planar graph:

[figure: three drawings of the same graph, only one of which has no crossing edges]

This graph is the fully connected graph with four vertices, known as K4. Clearly all sub-graphs of this will also be planar.

It is actually quite difficult to formulate general algorithms for determining whether a given graph is planar. For small graphs, it is easy to check systematically that there are no possible vertex repositionings or edge deformations that will bring the graph into explicitly planar form. Two slightly larger graphs than K4 that can be shown to be non-planar in this way are the fully connected graph with five vertices, known as K5, and the graph with three vertices fully connected to three other vertices, known as K3,3. Clearly, any larger graph that contains one of these two non-planar graphs as a subgraph must also be non-planar itself, and any subdivision or smoothing of edges will have no effect on that.
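As an illustration of the contraction operation just defined, here is a minimal sketch using the boolean adjacency-matrix representation from earlier. Merging v into u and leaving v as an isolated vertex (which the definition of a minor then allows to be deleted) is a simplifying choice made here, not a procedure given in the text.

    // Contract the edge (u,v) in a simple undirected graph held as a boolean
    // adjacency matrix: u inherits v's neighbours, v is left isolated, and the
    // self-loop that would arise on u is removed to keep the graph simple.
    static void contract(boolean[][] adj, int u, int v) {
        int n = adj.length;
        for (int w = 0; w < n; w++) {
            adj[u][w] = adj[u][w] || adj[v][w];   // u inherits v's neighbours
            adj[w][u] = adj[w][v] || adj[w][u];
            adj[v][w] = false;                    // v keeps no edges of its own
            adj[w][v] = false;
        }
        adj[u][u] = false;                        // simple graphs allow no self-loops
    }

Because the matrix is boolean, any multiple edges that contraction would create are merged automatically.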
Theorems about planarity. The most well-known of these is Kuratowski's theorem, which states that "a finite graph is planar if and only if it does not contain a subgraph that is homeomorphic to, or a subdivision of, K5 or K3,3". Another, based on the concept of minors, is Wagner's theorem, which states that "a finite graph is planar if and only if it does not have K5 or K3,3 as a minor". A good general approach for testing planarity is therefore to search for subgraphs of the given graph that can be transformed into K5 or K3,3. This is not entirely straightforward, but algorithms do exist which allow a graph with n vertices to be tested for planarity with time complexity O(n). Exercise: find out exactly how these algorithms work.

Traversals: systematically visiting all vertices. In order to traverse a graph, i.e. systematically visit all its vertices, we clearly need a strategy for exploring graphs which guarantees that we do not miss any edges or vertices. Because, unlike trees, graphs do not have a root vertex, there is no natural place to start a traversal, and therefore we assume that we are given, or randomly pick, a starting vertex.

There are two strategies for performing graph traversal. The first is known as breadth first traversal. We start with the given vertex. Then we visit its neighbours one by one (which must be possible no matter which implementation we use), placing them in an initially empty queue. We then remove the first vertex from the queue and one by one put its neighbours at the end of the queue. We then visit the next vertex in the queue and again put its neighbours at the end of the queue. We do this until the queue is empty.

However, there is no reason why this basic algorithm should ever terminate. If there is a circle in the graph, like A, B, C in the first unweighted graph above, we would revisit a vertex we have already visited, and thus we would run into an infinite loop (visiting A's neighbours puts B onto the queue, visiting B (eventually) gives us C, and once we reach C in the queue, we get A again). To avoid this, we create a second array done of booleans, where done[j] is true if we have already visited the vertex with number j, and it is false otherwise. In the above algorithm, we only add a vertex j to the queue if done[j] is false. Then we mark it as done by setting done[j] = true. This way, we will not visit any vertex more than once, and, for a finite graph, our algorithm is bound to terminate. In the example we are discussing, breadth first search starting at A might yield: A, B, D, C, E.

To see why this is called breadth first search, we can imagine a tree being built up in this way, where the starting vertex is the root, and the children of each vertex are its neighbours (that haven't already been visited). We would then first follow all the edges emanating from the root, leading to all the vertices on level 1, then find all the vertices on the level below, and so on, until we find all the vertices on the 'lowest' level.

The second traversal strategy is known as depth first traversal. Given a vertex to start from, we now put it on a stack rather than a queue (recall that in a stack, the next item to be removed at any time is the last one that was put on the stack). Then we take it from the stack, mark it as done as for breadth first traversal, look up its neighbours one after the other, and put them onto the stack. We then repeatedly pop the next vertex from the stack, mark it as done, and put its neighbours on the stack, provided they have not been marked as done, just as we did for breadth first traversal. For the example discussed above, we might (starting from A) get: A, B, C, E, D. Again, we can see why this is called depth first by imagining the tree that is built up, and noting the order in which the vertices are
added and processed. Note that, with both breadth first and depth first, the order of the vertices depends on the implementation: there is no reason why B's neighbour C should be visited before D in the example. So it is better to speak of a result of depth first or breadth first traversal, rather than of the result. Note also that the only vertices that will be listed are those in the same connected component as A. If we have to ensure that all vertices are visited, we may need to start the traversal process with a number of different starting vertices, each time choosing one that has not been marked as done when the previous traversal terminated.

Exercises: write algorithms, in pseudocode, to (1) visit all nodes of a graph, and (2) decide whether a given graph is connected or not. For (2) you will actually need two algorithms: one for the strong notion of connectedness, and another for the weak notion.

Shortest paths: Dijkstra's algorithm. A common graph based problem is that we have some situation represented as a weighted digraph with edges labelled by non-negative numbers and need to answer the following question: for two particular vertices, what is the shortest route from one to the other?

Here, by "shortest route" we mean a path which, when we add up the weights along its edges, gives the smallest overall weight for the path. This number is called the length of the path. Thus, a shortest path is one with minimal length. Note that there need not be a unique shortest path, since several paths might have the same length. In a disconnected graph there will not be a path between vertices in different components, but we can take care of this by using infinity once again to stand for "no path at all".

Note that the weights do not necessarily have to correspond to distances; they could, for example, be time (in which case we could speak of "quickest paths") or money (in which case we could speak of "cheapest paths"), among other possibilities. By considering "abstract" graphs in which the numerical weights are left uninterpreted, we can take care of all such situations and others. But notice that we do need to restrict the edge weights to be non-negative numbers, because if there are negative numbers and cycles, we can have increasingly long paths with lower and lower costs, and no path with minimal cost.

Applications of shortest-path algorithms include internet packet routing (because, if you send an email message from your computer to someone else, it has to go through various email routers until it reaches its final destination), train-ticket reservation systems (that need to figure out the best connecting stations), and driving route finders (that need to find an optimum route in some sense).

Dijkstra's algorithm. It turns out that, if we want to compute the shortest path from a given start node s to a given end node z, it is actually most convenient to compute the shortest paths from s to all other nodes, not just the given node z that we are interested in. Given the start node, Dijkstra's algorithm computes shortest paths starting from s and ending at each possible node. It maintains all the information it needs in simple arrays, which are iteratively updated until the solution is reached. Because the algorithm, although elegant and short, is fairly complicated, we shall consider it one component at a time.

Overestimation of shortest paths. We keep an array D of distances indexed by the vertices. The idea is that D[z] will hold the distance of the shortest path from s to z when the algorithm finishes. However, before it finishes, D[z] holds the best overestimate
we currently have of the distance from s to z. We initially have D[s] = 0, and set D[z] = infinity for all vertices z other than the start node s. Then the algorithm repeatedly decreases the overestimates until it is no longer possible to decrease them further. When this happens, the algorithm terminates, with each estimate fully constrained and said to be tight.

Improving estimates. The general idea is to look systematically for shortcuts. Suppose that, for two given vertices u and z, it happens that D[u] + weight[u][z] < D[z]. Then there is a way of going from s to u and then to z whose total length is smaller than the current overestimate D[z] of the distance from s to z, and hence we can replace D[z] by this better estimate. This corresponds to the code fragment

    if ( D[u] + weight[u][z] < D[z] )
        D[z] = D[u] + weight[u][z]

of the full algorithm given below. The problem is thus reduced to developing an algorithm that will systematically apply this improvement so that (1) we eventually get the tight estimates promised above, and (2) that is done as efficiently as possible.

Dijkstra's algorithm, version 1. The first version of such an algorithm is not as efficient as it could be, but it is relatively simple and certainly correct. (It is always a good idea to start with an inefficient simple algorithm, so that the results from it can be used to check the operation of a more complex efficient algorithm.) The general idea is that, at each stage of the algorithm's operation, if an entry D[u] of the array D has the minimal value among all the values recorded in D, then the overestimate D[u] must actually be tight, because the improvement algorithm discussed above cannot possibly find a shortcut. The following algorithm implements that idea:

    // Input:  A directed graph with weight matrix 'weight' and
    //         a start vertex 's'.
    // Output: An array 'D' of distances as explained above.

    // We begin by building the distance overestimates.
    D[s] = 0      // the shortest path from s to itself has length zero

    for ( each vertex z of the graph ) {
        if ( z is not the start vertex s )
            D[z] = infinity      // this is certainly an overestimate
    }

    // We use an auxiliary array 'tight', indexed by the vertices,
    // that records for which nodes the shortest path estimates
    // are 'known' to be tight by the algorithm.

    for ( each vertex z of the graph ) {
        tight[z] = false
    }

    // We now repeatedly update the arrays 'D' and 'tight' until
    // all entries in the array 'tight' hold the value true.

    repeat as many times as there are vertices in the graph {
        find a vertex u with tight[u] == false and minimal estimate D[u]
        tight[u] = true
        for ( each vertex z adjacent to u )
            if ( D[u] + weight[u][z] < D[z] )
                D[z] = D[u] + weight[u][z]      // a lower overestimate exists
    }

    // At this point, all entries of array 'D' hold tight estimates.

It is clear that when this algorithm finishes, the entries of D cannot hold under-estimates of the lengths of the shortest paths. What is perhaps not so clear is why the estimates it holds are actually tight, i.e. are the minimal path lengths. In order to understand why, first notice that an initial sub-path of a shortest path is itself a shortest path. To see this, suppose that you wish to navigate from vertex s to vertex z, and that the shortest path from s to z happens to go through a certain vertex u. Then your path from s to z can be split into two paths, one going from s to u (an initial sub-path) and the other going from u to z (a final sub-path). Given that the whole, unsplit path is a shortest path from s to z, the initial sub-path has to be a shortest path from s to u; for if not, then you could shorten your path from s to z by replacing the initial sub-path to u by a shorter path, which would not only give a shorter path from s to u but also from s to the final destination z.

Now it follows that, for any start vertex, there is a tree of shortest paths from that vertex to all other vertices. The reason is that shortest paths cannot have cycles. Implicitly, Dijkstra's algorithm constructs this tree starting from the root, that is, the start vertex.

If, as tends to be the case in practice, we also wish to compute the route of a shortest path, rather than just its length, we also need to introduce a third array pred to keep track of the 'predecessor' or 'previous vertex' of each vertex, so that the path can be followed back from the end point to the start point. The algorithm can clearly also be adapted to work with non-weighted graphs by assigning a suitable weight matrix of 1s for connected vertices and infinity for non-connected vertices.

The time complexity of this algorithm is clearly O(n^2), where n is the number of vertices, since there are operations of O(n) nested within the repeat of O(n).

A simple example. Suppose we want to compute the shortest path from A (node 0) to E (node 4) in the weighted graph we looked at before. Running the algorithm, and displaying the contents
of the three arrays at each intermediate stage, gives output of the following form, in which "oo" is used to represent the infinity symbol:

    Computing shortest paths from A:

       D     |   0    oo    oo    oo    oo
       tight |  no    no    no    no    no
       pred  | none  none  none  none  none

    Vertex A has minimal estimate, and so is tight.
    Its neighbours have their estimates decreased from oo, taking a shortcut via A.

[the trace continues in the same way through the remaining stages: at each stage, the vertex with the minimal non-tight estimate is reported as tight; neighbours that are already tight are left alone; every other neighbour has its estimate decreased, and its predecessor updated, whenever the shortcut via the newly tight vertex is shorter; the specific distance values depend on the edge weights in the graph]
    End of Dijkstra's computation.
    Shortest path from A to E: A ...

Once it is clear what is happening at each stage, it is usually more convenient to adopt a shorthand notation that allows the whole process to be represented in a single table. For example, using "*" to represent tight, the distance, status and predecessor for each node at each stage of the above example can be listed more concisely in a single table, with one row per stage (the compact table itself is omitted here).

Dijkstra's algorithm, version 2. The time complexity of Dijkstra's algorithm can be improved by making use of a priority queue (e.g., some form of heap) to keep track of which node's distance estimate becomes tight next. Here it is convenient to use the convention that lower numbers have higher priority. The previous algorithm then becomes:
    // Input:  A directed graph with weight matrix 'weight' and
    //         a start vertex 's'.
    // Output: An array 'D' of distances as explained above.

    // We begin by building the distance overestimates.
    D[s] = 0      // the shortest path from s to itself has length zero

    for ( each vertex z of the graph ) {
        if ( z is not the start vertex s )
            D[z] = infinity      // this is certainly an overestimate
    }

    // Then we set up a priority queue based on the overestimates.
    Create a priority queue containing all the vertices of the graph,
    with the entries of D as the priorities

    // Then we implicitly build the path tree discussed above.
    while ( priority queue is not empty ) {
        // The next vertex of the path tree is called u.
        u = remove vertex with smallest priority from queue
        for ( each vertex z in the queue which is adjacent to u )
            if ( D[u] + weight[u][z] < D[z] ) {
                D[z] = D[u] + weight[u][z]      // a lower overestimate exists
                Change the priority of vertex z in queue to D[z]
            }
    }

    // At this point, all entries of array 'D' hold tight estimates.

If the priority queue is implemented as a binary or binomial heap, initializing D and creating the priority queue both have complexity O(n), where n is the number of vertices of the graph, and that is negligible compared to the rest of the algorithm. Then removing vertices and changing the priorities of elements in the priority queue require some rearrangement of the heap tree by "bubbling up", and that takes O(log n) steps, because that is the maximum height of the tree. Removals happen O(n) times, and priority changes happen O(e) times, where e is the number of edges in the graph, so the cost of maintaining the queue and updating D is O((e + n) log n). Thus, the total time complexity of this form of Dijkstra's algorithm is O((e + n) log n). Using a Fibonacci heap for the priority queue allows priority updates of O(1) complexity, improving the overall complexity to O(e + n log n).

In a fully connected graph, the number of edges will be O(n^2), and hence the time complexity of this algorithm is O(n^2 log n) or O(n^2), depending on which kind of priority queue is used. So, in that case, the time complexity is actually greater than or equal to the previous simpler O(n^2) algorithm. However, in practice, many graphs tend to be much more sparse, with e = O(n), and in
this case the time complexity for both priority queue versions is O(n log n), which is a clear improvement over the previous O(n^2) algorithm.

Shortest paths: Floyd's algorithm. If we are not only interested in finding the shortest path from one specific vertex to all the others, but the shortest paths between every pair of vertices, we could, of course, apply Dijkstra's algorithm to every starting vertex. But there is actually a simpler way of doing this, known as Floyd's algorithm. This maintains a square matrix 'distance' which contains the overestimates of the shortest paths between every pair of vertices, and systematically decreases the overestimates using the same shortcut idea as above. If we also wish to keep track of the routes of the shortest paths, rather than just their lengths, we simply introduce a second square matrix 'predecessor' to keep track of all the 'previous vertices'. In the algorithm below, we attempt to decrease the estimate of the distance from each vertex s to each vertex z by going systematically via each possible vertex u to see whether that is a shortcut; and if it is, the overestimate of the distance is decreased to the smaller overestimate, and the predecessor updated:

    // Store initial estimates and predecessors.
    for ( each vertex s )
        for ( each vertex z ) {
            distance[s][z] = weight[s][z]
            predecessor[s][z] = s
        }

    // Improve them by considering all possible shortcuts u.
    for ( each vertex u )
        for ( each vertex s )
            for ( each vertex z )
                if ( distance[s][u] + distance[u][z] < distance[s][z] ) {
                    distance[s][z] = distance[s][u] + distance[u][z]
                    predecessor[s][z] = predecessor[u][z]
                }

As with Dijkstra's algorithm, this can easily be adapted to the case of non-weighted graphs by assigning a suitable weight matrix of 1s and infinities. The time complexity here is clearly O(n^3), since it involves three nested for loops of O(n). This is the same complexity as running the O(n^2) Dijkstra's algorithm once for each of the n possible starting vertices. In general, however, Floyd's algorithm will be faster than Dijkstra's, even though they are both in the same complexity class, because the former performs fewer instructions in each pass through its loops. If the graph is sparse, however,
then multiple runs of Dijkstra's algorithm, using the priority queue version, can be made to perform with time complexity O(ne log n), and be faster than Floyd's algorithm.

A simple example. Suppose we want to compute the lengths of the shortest paths between all vertices in the following undirected weighted graph:

[figure: an undirected weighted graph on the vertices A to E]

We start with a distance matrix based on the connection weights, and trivial predecessors. Then, for each vertex in turn, we test whether a shortcut via that vertex reduces any of the distances, and update the distance and predecessor arrays with any reductions found. The five steps, with the updated entries in quotes, are as follows:

[matrix traces: the distance and predecessor matrices after each of the five steps, with the updated entries shown in quotes; the specific values depend on the edge weights in the figure]
The algorithm finishes with the matrix of shortest distances and the matrix of associated predecessors, from which any shortest distance can simply be read off, and the corresponding route recovered by following the predecessor entries back from the end vertex (in the example, E, then D, then C), giving the path. Note that updating a distance does not necessarily mean updating the associated predecessor: when the improved route still arrives at the destination by the same final edge, the predecessor entry stays unchanged even though the distance is updated.

Minimal spanning trees. We now move on to another common graph-based problem. Suppose you have been given a weighted undirected graph such as the following:

[figure: a weighted undirected graph on the vertices A to E]

We could think of the vertices as representing houses, and the weights as the distances between them. Now imagine that you are tasked with supplying all these houses with some commodity such as water, gas, or electricity. For obvious reasons, you will want to keep the amount of digging and laying of pipes or cable to a minimum. So, what is the best pipe or cable layout that you can find, i.e. what layout has the shortest overall length?

Obviously, we will have to choose some of the edges to dig along, but not all of them. For example, if we have already chosen the edge between A and D, and the one between B and D, then there is no reason to also have the one between A and B. More generally, it is clear that we want to avoid circles. Also, assuming that we have only one feeding-in point (it is of no importance which of the vertices that is), we need the whole layout to be connected. We have seen already that a connected graph without circles is a tree.
A spanning tree of a graph is a subgraph that is a tree which connects all the vertices together, so it 'spans' the original graph but using fewer edges. Here, minimal refers to the sum of all the weights of the edges contained in that tree, so a minimal spanning tree has total weight less than or equal to the total weight of every other spanning tree. As we shall see, there will not necessarily be a unique minimal spanning tree for a given graph.

Observations concerning spanning trees. For the other graph algorithms we have covered so far, we started by making some observations which allowed us to come up with an idea for an algorithm, as well as a strategy for formulating a proof that the algorithm did indeed perform as desired. So, to come up with some ideas which will allow us to develop an algorithm for the minimal spanning tree problem, we shall need to make some observations about minimal spanning trees. Let us assume, for the time being, that all the weights in the above graph were equal, to give us some idea of what kind of shape a minimal spanning tree might have under those circumstances. Here are some examples:

[figure: several spanning trees of the same graph]

We can immediately notice that their general shape is such that if we add any of the remaining edges, we would create a circle. Then we can see that going from one spanning tree to another can be achieved by removing an edge and replacing it by another (to the vertex which would otherwise be unconnected), such that no circle is created. These observations are not quite sufficient to lead to an algorithm, but they are good enough to let us prove that the algorithms we find do actually work.

Greedy algorithms. We say that an algorithm is greedy if it makes its decisions based only on what is best from the point of view of 'local considerations', with no concern about how the decision might affect the overall picture. The general idea is to start with an approximation, as we did in Dijkstra's algorithm, and then refine it in a series of steps. The algorithm is greedy in the sense that the decision at each step is based only on what is best for that next step, and does not consider how that will affect the quality of the final overall solution. We shall now consider some greedy approaches to the minimal spanning tree problem.

Prim's algorithm: a greedy vertex-based approach. Suppose that we already have a spanning tree connecting some set of vertices S. Then we can consider all the edges which connect a vertex in S to one outside of S, and add to the tree one of those that has minimal weight. This cannot possibly create a circle, since it must add a vertex not yet in S. This process can be repeated, starting with any single vertex to be the sole element of S, which is a trivial minimal spanning tree containing no edges. This approach is known as Prim's algorithm.

When implementing Prim's algorithm, one can use either an array or a list to keep track of the set S of vertices reached so far. One could then maintain another array or list closest which, for each vertex z not yet in S, keeps track of the vertex in S closest to z. If we also keep a note of the
weights of those edges, we could save time, because we would then only have to check the weights mentioned in that array or list.

For the above graph, starting with S = {A}, the tree is built up as follows:

[figure: the sequence of partial spanning trees produced by Prim's algorithm]

It is slightly more challenging to produce a convincing argument that this algorithm really works than it has been for the other algorithms we have seen so far. It is clear that Prim's algorithm must result in a spanning tree, because it generates a tree that spans all the vertices, but it is not obvious that it is minimal. There are several possible proofs that it is, but none are straightforward. The simplest works by showing that the set of all possible minimal spanning trees Xi must include the output of Prim's algorithm.

Let U be the output of Prim's algorithm, and X1 be any minimal spanning tree. The following illustrates such a situation:

[figure: the two spanning trees U and X1 of the same graph]

We don't actually need to know what X1 is; we just need to know the properties it must satisfy, and then systematically work through all the possibilities, showing that U is a minimal spanning tree in each case. Clearly, if U = X1, then Prim's algorithm has generated a minimal spanning tree. Otherwise, let e be the first edge added to U that is not in X1. Then, since X1 is a spanning tree, it must include a path connecting the two endpoints of e, and, because circles are not allowed, there must be an edge on that path that is not in U, which we can call f. Since Prim's algorithm added e rather than f, we know weight(e) <= weight(f). Then create the tree X2 that is X1 with f replaced by e. Clearly X2 is connected, has the same number of edges as X1, spans all the vertices, and has total weight no greater than X1, so it must also be a minimal spanning tree. This replacement process can be repeated for each of the remaining
edges in U that are not in the current minimal spanning tree Xi, and we end up with the minimal spanning tree Xn = U, which completes the proof that U is a minimal spanning tree.

The time complexity of the standard Prim's algorithm is O(n^2), because at each step we need to choose a vertex to add to S, and then update the closest array, not dissimilar to the simplest form of Dijkstra's algorithm. However, as with Dijkstra's algorithm, a binary or binomial heap based priority queue can be used to speed things up by keeping track of which is the minimal weight vertex to be added next. With an adjacency list representation, this can bring the complexity down to O((e + n) log n). Finally, using the more sophisticated Fibonacci heap for the priority queue can improve this further to O(e + n log n). Thus, using the optimal approach in each case, Prim's algorithm is O(n log n) for sparse graphs that have e = O(n), and O(n^2) for highly connected graphs that have e = O(n^2).

Just as with Floyd's versus Dijkstra's algorithm, we should consider whether it really is necessary to process every vertex at each stage, because it could be sufficient to only check actually existing edges. We therefore now consider an alternative edge-based strategy:

Kruskal's algorithm: a greedy edge-based approach. This algorithm does not consider the vertices directly at all, but builds a minimal spanning tree by considering and adding edges as follows. Assume that we already have a collection of edges T. Then, from all the edges not yet in T, choose one with minimal weight such that its addition to T does not produce a circle, and add that to T. If we start with T being the empty set, and continue until no more edges can be added, a minimal spanning tree will be produced. This approach is known as Kruskal's algorithm.

For the same graph as used for Prim's algorithm, this algorithm proceeds as follows:

[figure: the sequence of edge sets produced by Kruskal's algorithm]

In practice, Kruskal's algorithm is implemented in a rather different way to Prim's algorithm. The general idea of the most efficient approaches is to start by sorting the edges according to their weights, and then simply go through that list of edges in order of increasing weight, and either add them to T, or reject them if they would produce a circle. There are implementations of that which can be achieved with overall time complexity O(e log e), which is dominated by the O(e log e) complexity of sorting the e edges in the first place.

This means that the choice between Prim's algorithm and Kruskal's algorithm depends on the connectivity of the particular graph under consideration. If the graph is sparse, i.e. the number of edges is near the number of vertices, Kruskal's algorithm will
have the same O(n log n) complexity as the optimal priority queue based versions of Prim's algorithm, but will be faster than the standard O(n^2) Prim's algorithm. However, if the graph is highly connected, i.e. the number of edges is near the square of the number of vertices, it will have complexity O(n^2 log n), and be slower than the optimal O(n^2) versions of Prim's algorithm.

Travelling salesmen and vehicle routing. Note that all the graph algorithms we have considered so far have had polynomial time complexity. There are further graph based problems that are even more complex. Probably the most well known of these is the travelling salesman problem, which involves finding the shortest path through a graph which visits each node precisely once. There are currently no known polynomial time algorithms for solving this. Since only algorithms with exponential complexity are known, this makes the travelling salesman problem difficult even for moderately sized n (e.g., all capital cities).

Exercise: write an algorithm in pseudocode that solves the travelling salesman problem, and determine its time complexity.

A variation of the shortest path problem with enormous practical importance in transportation is the vehicle routing problem. This involves finding a series of routes to service a number of customers with a fleet of vehicles with minimal cost, where that cost may be the number of vehicles required, the total distance covered, or the total driver time required. Often, for practical instances, there are conflicts between the various objectives, and there is a trade-off between the various costs which have to be balanced. In such cases, a multi-objective optimization approach is required which returns a Pareto front of non-dominated solutions, i.e. a set of solutions for which there are no other solutions which are better on all objectives. Also, in practice, there are usually various constraints involved, such as fixed delivery time-windows, or limited capacity vehicles, that must be satisfied, and that makes finding good solutions even more difficult.

Since exact solutions to these problems are currently impossible for all but the smallest cases, heuristic approaches are usually employed, such as evolutionary computation, which deliver solutions that are probably good but cannot be proved to be optimal. One popular approach is to maintain a whole population of solutions, and use simulated evolution by natural selection to iteratively improve the quality of those solutions. That has the additional advantage of being able to generate a whole Pareto front of solutions rather than just a single solution. This is currently still a very active research area.
Epilogue

Hopefully the reader will agree that these notes have achieved their objective of introducing the basic data structures used in computer science, and showing how they can be used in the design of useful and efficient algorithms.

The basic data structures (arrays, lists, stacks, queues and trees) have been employed throughout, and used as the basis of the crucial processes, such as storing, sorting and searching data, which underly many computer science applications. It has been demonstrated how ideas from combinatorics and probability theory can be used to compute the efficiency of algorithms as a function of problem size. We have seen how considerations of computational efficiency then drive the development of more complex data structures (such as binary search trees, heaps and hash tables) and associated algorithms. General ideas such as recursion and divide-and-conquer have been used to provide more efficient algorithms, and inductive definitions and invariants have been used to establish proofs of correctness of algorithms. Throughout, abstract data types and pseudo-code have allowed an implementation independent approach that facilitates application to any programming environment in the future.

Clearly, these notes have only been able to provide a brief introduction to the topic, and the algorithms and data structures and efficiency computations discussed have been relatively simple examples. However, having a good understanding of these fundamental ideas and design patterns allows them to be easily expanded and elaborated to deal with the more complex applications that will inevitably arise in the future.
Some useful formulae

The symbols a, b, c, r and s represent real numbers, m and n are positive integers, and indices i and j are non-negative integers.

Binomial formulae:

    (a + b)^2 = a^2 + 2ab + b^2
    (a - b)^2 = a^2 - 2ab + b^2
    (a + b)(a - b) = a^2 - b^2

Powers and roots:

    a^(-r) = 1 / a^r
    a^r a^s = a^(r+s)          a^s b^s = (ab)^s
    (a^r)^s = a^(rs) = a^(sr) = (a^s)^r
    a^r / a^s = a^(r-s)        a^s / b^s = (a/b)^s

and the following are special cases of the above, where a^(1/n) is the nth root of a:

    (ab)^(1/n) = a^(1/n) b^(1/n)
    (a/b)^(1/n) = a^(1/n) / b^(1/n)
    a^(m/n) = (a^m)^(1/n) = (a^(1/n))^m
    a^(-m/n) = 1 / a^(m/n)

Logarithms. Definition: the logarithm of c to base a, written as log_a c, is the real number b satisfying the equation c = a^b, in which we assume that c > 0 and a > 1.

There are two special cases worth noting, namely:

    log_a 1 = 0,  since a^0 = 1
    log_a a = 1,  since a^1 = a

From the definition, we immediately see that:

    a^(log_a c) = c        and        log_a (a^b) = b

and we can move easily from one base to another using:

    log_b c = log_a c / log_a b
Further rules for logarithms are:

    log_a (bc)  = log_a b + log_a c
    log_a (b/c) = log_a b - log_a c
    log_a (b^r) = r log_a b

and the following are special cases of those rules:

    log a^n = n log a
    log (1/n) = -log n

For large n we have the useful approximation:

    log n! = n log n - O(n)

Sums. We often find it useful to abbreviate a sum as follows:

    s_n = a_1 + a_2 + ... + a_n = sum_{i=1}^{n} a_i

We can view this as an algorithm or program: let s hold the sum at the end, and double[] a be an array holding the numbers we wish to add. Then:

    double s = 0;
    for ( int i = 0; i < n; i++ )
        s = s + a[i];

computes the sum. The most common use of sums for our purposes is when investigating the time complexity of an algorithm or program. For that, we often have to count a variant of 1 + 2 + ... + n, so it is helpful to know that:

    sum_{i=1}^{n} i = n(n+1) / 2

To illustrate this, consider the program in which k counts the instructions:

    for ( int i = 1; i <= n; i++ )
        for ( int j = 1; j <= i; j++ ) {
            k++;    // instruction 1
            k++;    // instruction 2
            k++;    // instruction 3
        }
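Counting how often the loop body runs makes the use of the summation formula explicit: for each value of i, the inner loop executes i times, so the body is executed sum_{i=1}^{n} i = n(n+1)/2 times. With three instructions per pass, the final count is k = 3n(n+1)/2, which is O(n^2).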
    Heapsort
    Binsort and Radix Sort
    An Empirical Comparison of Sorting Algorithms
    Lower Bounds for Sorting
    Further Reading
    Exercises
    Projects

File Processing and External Sorting
    Primary versus Secondary Storage
    Disk Drives
        Disk Drive Architecture
        Disk Access Costs
    Buffers and Buffer Pools
    The Programmer's View of Files
    External Sorting
        Simple Approaches to External Sorting
        Replacement Selection
        Multiway Merging
    Further Reading
    Exercises
    Projects

Searching
    Searching Unsorted and Sorted Arrays
    Self-Organizing Lists
    Bit Vectors for Representing Sets
    Hashing
        Hash Functions
        Open Hashing
        Closed Hashing
        Analysis of Closed Hashing
        Deletion
    Further Reading
    Exercises
    Projects

Indexing
    Linear Indexing
    ISAM
    Tree-based Indexing
    2-3 Trees
    B-Trees
        B+-Trees
        B-Tree Analysis
    Further Reading
    Exercises
    Projects

IV  Advanced Data Structures

Graphs
    Terminology and Representations
    Graph Implementations
    Graph Traversals
        Depth-First Search
        Breadth-First Search
        Topological Sort
    Shortest-Paths Problems
        Single-Source Shortest Paths
    Minimum-Cost Spanning Trees
        Prim's Algorithm
        Kruskal's Algorithm
    Further Reading
    Exercises
    Projects
Lists and Arrays Revisited
    Multilists
    Matrix Representations
    Memory Management
        Dynamic Storage Allocation
        Failure Policies and Garbage Collection
    Further Reading
    Exercises
    Projects

Advanced Tree Structures
    Tries
    Balanced Trees
        The AVL Tree
        The Splay Tree
    Spatial Data Structures
        The K-D Tree
        The PR Quadtree
        Other Point Data Structures
        Other Spatial Data Structures
    Further Reading
    Exercises
    Projects

V  Theory of Algorithms

Analysis Techniques
    Summation Techniques
    Recurrence Relations
        Estimating Upper and Lower Bounds
        Expanding Recurrences
        Divide and Conquer Recurrences
        Average-Case Analysis of Quicksort
    Amortized Analysis
    Further Reading
    Exercises
    Projects

Lower Bounds
    Introduction to Lower Bounds Proofs
    Lower Bounds on Searching Lists
        Searching in Unsorted Lists
        Searching in Sorted Lists
    Finding the Maximum Value
    Adversarial Lower Bounds Proofs
    State Space Lower Bounds Proofs
    Finding the ith Best Element
    Optimal Sorting
    Further Reading
    Exercises
    Projects

Patterns of Algorithms
    Dynamic Programming
        The Knapsack Problem
        All-Pairs Shortest Paths
    Randomized Algorithms
        Randomized Algorithms for Finding Large Values
        Skip Lists
    Numerical Algorithms
        Exponentiation
        Largest Common Factor
        Matrix Multiplication
        Random Numbers
        The Fast Fourier Transform
    Further Reading
    Exercises
    Projects

Limits to Computation
    Reductions
    Hard Problems
        The Theory of NP-Completeness
        NP-Completeness Proofs
        Coping with NP-Complete Problems
    Impossible Problems
        Uncountability
        The Halting Problem Is Unsolvable
    Further Reading
    Exercises
    Projects

Bibliography

Index
Preface

We study data structures so that we can learn to write more efficient programs. But why must programs be efficient when new computers are faster every year? The reason is that our ambitions grow with our capabilities. Instead of rendering efficiency needs obsolete, the modern revolution in computing power and storage capability merely raises the efficiency stakes as we computerize more complex tasks.

The quest for program efficiency need not and should not conflict with sound design and clear coding. Creating efficient programs has little to do with "programming tricks" but rather is based on good organization of information and good algorithms. A programmer who has not mastered the basic principles of clear design is not likely to write efficient programs. Conversely, "software engineering" cannot be used as an excuse to justify inefficient performance. Generality in design can and should be achieved without sacrificing performance, but this can only be done if the designer understands how to measure performance and does so as an integral part of the design and implementation process. Most computer science curricula recognize that good programming skills begin with a strong emphasis on fundamental software engineering principles. Then, once a programmer has learned the principles of clear program design and implementation, the next step is to study the effects of data organization and algorithms on program efficiency.

Approach: This book describes many techniques for representing data. These techniques are presented within the context of the following principles:

Each data structure and each algorithm has costs and benefits. Practitioners need a thorough understanding of how to assess costs and benefits to be able to adapt to new design challenges. This requires an understanding of the principles of algorithm analysis, and also an appreciation for the significant effects of the physical medium employed (e.g., data stored on disk versus main memory).
Related to costs and benefits is the notion of tradeoffs. For example, it is quite common to reduce time requirements at the expense of an increase in space requirements, or vice versa. Programmers face tradeoff issues regularly in all phases of software design and implementation, so the concept must become deeply ingrained.

Programmers should know enough about common practice to avoid reinventing the wheel. Thus, programmers need to learn the commonly used data structures, their related algorithms, and the most frequently encountered design patterns found in programming.

Data structures follow needs. Programmers must learn to assess application needs first, then find a data structure with matching capabilities. To do this requires competence in the principles above.

As I have taught data structures through the years, I have found that design issues have played an ever greater role in my courses. This can be traced through the various editions of this textbook by the increasing coverage for design patterns and generic interfaces. The first edition had no mention of design patterns. The second edition had limited coverage of a few example patterns, and introduced the dictionary ADT and comparator classes. With the third edition, there is explicit coverage of some design patterns that are encountered when programming the basic data structures and algorithms covered in the book.

Using the book in class: Data structures and algorithms textbooks tend to fall into one of two categories: teaching texts or encyclopedias. Books that attempt to do both usually fail at both. This book is intended as a teaching text. I believe it is more important for a practitioner to understand the principles required to select or design the data structure that will best solve some problem than it is to memorize a lot of textbook implementations. Hence, I have designed this as a teaching text that covers most standard data structures, but not all. A few data structures that are not widely adopted are included to illustrate important principles. Some relatively new data structures that should become widely used in the future are included.

Within an undergraduate program, this textbook is designed for use in either an advanced lower division (sophomore or junior level) data structures course, or for a senior level algorithms course. New material has been added in the third edition to support its use in an algorithms course. Normally, this text would be used in a course beyond the standard freshman level "CS2" course that often serves as the initial introduction to data structures. Readers of this book should have programming experience, typically two semesters or the equivalent of a structured programming language such as Pascal or C, and including at least some exposure to Java. Readers who are already familiar with recursion will have an advantage. Students of
data structures will also benefit from having first completed a good course in discrete mathematics. Nonetheless, this book attempts to give a reasonably complete survey of the prerequisite mathematical topics at the level necessary to understand their use here. Readers may wish to refer back to the appropriate sections as needed when encountering unfamiliar mathematical material.

A sophomore-level class where students have only a little background in basic data structures or analysis (that is, background equivalent to what would be had from a traditional CS2 course) might cover the early chapters in detail, as well as selected later topics. That is how I use the book for my own sophomore-level class. Students with greater background might skim the introductory chapters, keeping them for reference, and then cover the core chapters in detail, again with only certain later topics covered, depending on the programming assignments selected by the instructor. A senior-level algorithms course would focus on the more advanced chapters, one of which is intended in part as a source for larger programming exercises. I recommend that all students taking a data structures course be required to implement some advanced tree structure, or another dynamic structure of comparable difficulty such as the skip list or sparse matrix representations. None of these data structures are significantly more difficult to implement than the binary search tree, and any of them should be within a student's ability after completing the chapter that covers the binary search tree.

While I have attempted to arrange the presentation in an order that makes sense, instructors should feel free to rearrange the topics as they see fit. The book has been written so that, once the reader has mastered the core chapters, the remaining material has relatively few dependencies. Clearly, external sorting depends on understanding internal sorting and disk files. The section on the UNION/FIND algorithm is used in Kruskal's minimum-cost spanning tree algorithm. The section on self-organizing lists mentions the buffer replacement schemes covered elsewhere in the book. One chapter draws on examples from throughout the book, and one section relies on knowledge of graphs. Otherwise, most topics depend only on material presented earlier within the same chapter.

Most chapters end with a section entitled "Further Reading." These sections are not comprehensive lists of references on the topics presented. Rather, I include books and articles that, in my opinion, may prove exceptionally informative or entertaining to the reader. In some cases I include references to works that should become familiar to any well-rounded computer scientist.

Use of Java: The programming examples are written in Java, but I do not wish to discourage those unfamiliar with Java from reading this book. I have attempted to
make the examples as clear as possible while maintaining the advantages of Java. Java is used here strictly as a tool to illustrate data structures concepts. In particular, I make use of Java's support for hiding implementation details, including features such as classes, private class members, and interfaces. These features of the language support the crucial concept of separating logical design, as embodied in the abstract data type, from physical implementation, as embodied in the data structure.

As with any programming language, Java has both advantages and disadvantages. Java is a small language. There usually is only one language feature available to do something, and this has the happy tendency of encouraging a programmer toward clarity when used correctly. In this respect, it is superior to C or C++. Java serves nicely for defining and using most traditional data structures such as lists and trees. On the other hand, Java is quite poor when used to do file processing, being both cumbersome and inefficient. It is also a poor language when fine control of memory is required. As an example, applications requiring memory management, such as those discussed later in the book, are difficult to write in Java. Since I wish to stick to a single language throughout the text, like any programmer I must take the bad along with the good. The most important issue is to get the ideas across, whether or not those ideas are natural to a particular language of discourse. Most programmers will use a variety of programming languages throughout their career, and the concepts described in this book should prove useful in a variety of circumstances.

Inheritance, a key feature of object-oriented programming, is used sparingly in the code examples. Inheritance is an important tool that helps programmers avoid duplication, and thus minimize bugs. From a pedagogical standpoint, however, inheritance often makes code examples harder to understand since it tends to spread the description for one logical unit among several classes. Thus, my class definitions only use inheritance where inheritance is explicitly relevant to the point illustrated. This does not mean that a programmer should do likewise. Avoiding code duplication and minimizing errors are important goals. Treat the programming examples as illustrations of data structure principles, but do not copy them directly into your own programs.

One painful decision I had to make was whether to use generics in the code examples. In the first edition of this book, the decision was to leave generics out, as it was felt that their syntax obscures the meaning of the code for those not familiar with Java. In the years following, the use of Java in computer science curricula greatly expanded, and I now believe that readers of the text are likely to already be familiar with generic syntax. Thus, generics are now used extensively throughout the code examples.
My implementations are meant to provide concrete illustrations of data structure principles, as an aid to the textual exposition. Code examples should not be read or used in isolation from the associated text, because the bulk of each example's documentation is contained in the text, not the code. The code complements the text, not the other way around. The examples are not meant to be a series of commercial-quality class implementations. If you are looking for a complete implementation of a standard data structure for use in your own code, you would do well to do an Internet search.

For instance, the code examples provide less parameter checking than is sound programming practice, since including such checking would obscure rather than illuminate the text. Some parameter checking and testing for other constraints (e.g., whether a value is being removed from an empty container) is included in the form of calls to methods in a class Assert. These methods are modeled after the standard library function assert. Method Assert.notFalse takes a Boolean expression. If this expression evaluates to false, then a message is printed and the program terminates immediately. Method Assert.notNull takes a reference to a class object, and terminates the program if the value of the reference is null. (To be precise, these functions throw an IllegalArgumentException, which typically results in terminating the program unless the programmer takes action to handle the exception.) Terminating a program when a function receives a bad parameter is generally considered undesirable in real programs, but is quite adequate for understanding how a data structure is meant to operate. In real programming applications, Java's exception handling features should be used to deal with input data errors. However, assertions provide a simpler mechanism for indicating required conditions in a way that is both adequate for clarifying how a data structure is meant to operate, and is easily modified into true exception handling.

I make a distinction in the text between "Java implementations" and "pseudocode." Code labeled as a Java implementation has actually been compiled and tested on one or more Java compilers. Pseudocode examples often conform closely to Java syntax, but typically contain one or more lines of higher-level description. Pseudocode is used where I perceived a greater pedagogical advantage to a simpler, but less precise, description.

Exercises and Projects: Proper implementation and analysis of data structures cannot be learned simply by reading a book. You must practice by implementing real programs, constantly comparing different techniques to see what really works best in a given situation.
One of the most important aspects of a course in data structures is that it is where students really learn to program using pointers and dynamic memory allocation, by implementing data structures such as linked lists and trees. It is also where students truly learn recursion. In our curriculum, this is the first course where students do significant design, because it often requires real data structures to motivate significant design exercises. Finally, the fundamental differences between memory-based and disk-based data access cannot be appreciated without practical programming experience. For all of these reasons, a data structures course cannot succeed without a significant programming component. In our department, the data structures course is arguably the most difficult programming course in the curriculum.

Students should also work problems to develop their analytical abilities. I provide a large number of exercises and suggestions for programming projects. I urge readers to take advantage of them.

Contacting the Author and Supplementary Materials: A book such as this is sure to contain errors and have room for improvement. I welcome bug reports and constructive criticism. I can be reached by electronic mail via the Internet at shaffer@vt.edu. Alternatively, comments can be mailed to:

Cliff Shaffer
Department of Computer Science
Virginia Tech
Blacksburg, VA

A set of lecture notes for use in conjunction with this book can be obtained from the book's web site, and the code examples used in the book are also available at this site. Online web pages for Virginia Tech's sophomore-level data structures class can be found at the same site.

This book was originally typeset by the author with LaTeX. The bibliography was prepared using BibTeX. The index was prepared using makeindex. The figures were mostly drawn with Xfig. Some figures were partially created using Mathematica.

Acknowledgments: It takes a lot of help from a lot of people to make a book. I wish to acknowledge a few of those who helped to make this book possible. I apologize for the inevitable omissions.
Virginia Tech helped make this whole thing possible through a sabbatical research leave, enabling me to get the project off the ground. My department heads during the time I have written the various editions of this book, Dennis Kafura and Jack Carroll, provided unwavering moral support for this project. Mike Keenan, Lenny Heath, and Jeff Shaffer provided valuable input on early versions of the text. I also wish to thank Lenny Heath for many years of stimulating discussions about algorithms and analysis (and how to teach both to students). Steve Edwards deserves special thanks for spending so much time helping me on various redesigns of the C++ and Java code versions for the second and third editions, and for many hours of discussion on the principles of program design. Thanks to Layne Watson for his help with Mathematica, and to Bo Begole, Philip Isenhour, Jeff Nielsen, and Craig Struble for much technical assistance. Thanks to Bill McQuain, Mark Abrams, and Dennis Kafura for answering lots of silly questions about C++ and Java.

I am truly indebted to the many reviewers of the various editions of this manuscript. For the first edition these reviewers included David Bezek (University of Evansville), Douglas Campbell (Brigham Young University), Karen Davis (University of Cincinnati), Vijay Kumar Garg (University of Texas at Austin), Jim Miller (University of Kansas), Bruce Maxim (University of Michigan at Dearborn), Jeff Parker (Agile Networks/Harvard), Dana Richards (George Mason University), Jack Tan (University of Houston), and Lixin Tao (Concordia University). Without their help, this book would contain many more technical errors and many fewer insights.

For the second edition, I wish to thank these reviewers: Gurdip Singh (Kansas State University), Peter Allen (Columbia University), Robin Hill (University of Wyoming), Norman Jacobson (University of California at Irvine), Ben Keller (Eastern Michigan University), and Ken Bosworth (Idaho State University). In addition, I wish to thank Neil Stewart and Frank Thesen for their comments and ideas for improvement.

Third edition reviewers included Randall Lechlitner (University of Houston, Clear Lake) and Brian Hipp (York Technical College). I thank them for their comments.

Without the hard work of many people at Prentice Hall, none of this would be possible. Authors simply do not create printer-ready books on their own. Foremost thanks go to Kate Hargett, Petra Recter, Laura Steele, and Alan Apt, my editors over the years. My production editors, Irwin Zucker for the second edition, Kathleen Caren for the original C++ version, and Ed DeFelippis for the Java version, kept everything moving smoothly during that horrible rush at the end. Thanks to Bill Zobrist and Bruce Gregory (I think) for getting me into this in the first place.
Others at Prentice Hall who helped me along the way include Truly Donovan, Linda Behrens, and Phyllis Bregman. I am sure I owe thanks to many others at Prentice Hall for their help in ways that I am not even aware of.

I wish to express my appreciation to Hanan Samet for teaching me about data structures. I learned much of the philosophy presented here from him as well, though he is not responsible for any problems with the result. Thanks to my wife Terry, for her love and support, and to my daughters Irena and Kate for pleasant diversions from working too hard. Finally, and most importantly, to all of the data structures students over the years who have taught me what is important and what should be skipped in a data structures course, and the many new insights they have provided: this book is dedicated to them.

Clifford Shaffer
Blacksburg, Virginia
Preliminaries
Data Structures and Algorithms

How many cities with more than 250,000 people lie within 500 miles of Dallas, Texas? How many people in my company make over $100,000 per year? Can we connect all of our telephone customers with less than 1,000 miles of cable? To answer questions like these, it is not enough to have the necessary information. We must organize that information in a way that allows us to find the answers in time to satisfy our needs.

Representing information is fundamental to computer science. The primary purpose of most computer programs is not to perform calculations, but to store and retrieve information, usually as fast as possible. For this reason, the study of data structures and the algorithms that manipulate them is at the heart of computer science. And that is what this book is about: helping you to understand how to structure information to support efficient processing.

This book has three primary goals. The first is to present the commonly used data structures. These form a programmer's basic data structure "toolkit." For many problems, some data structure in the toolkit provides a good solution. The second goal is to introduce the idea of tradeoffs and reinforce the concept that there are costs and benefits associated with every data structure. This is done by describing, for each data structure, the amount of space and time required for typical operations. The third goal is to teach how to measure the effectiveness of a data structure or algorithm. Only through such measurement can you determine which data structure in your toolkit is most appropriate for a new problem. The techniques presented also allow you to judge the merits of new data structures that you or others might invent.

There are often many approaches to solving a problem. How do we choose between them? At the heart of computer program design are two (sometimes conflicting) goals:
1. To design an algorithm that is easy to understand, code, and debug.
2. To design an algorithm that makes efficient use of the computer's resources.

Ideally, the resulting program is true to both of these goals. We might say that such a program is "elegant." While the algorithms and program code examples presented here attempt to be elegant in this sense, it is not the purpose of this book to explicitly treat issues related to goal (1). These are primarily concerns of the discipline of software engineering. Rather, this book is mostly about issues relating to goal (2).

How do we measure efficiency? A later chapter describes a method for evaluating the efficiency of an algorithm or computer program, called asymptotic analysis. Asymptotic analysis also allows you to measure the inherent difficulty of a problem. The remaining chapters use asymptotic analysis techniques for every algorithm presented. This allows you to see how each algorithm compares to other algorithms for solving the same problem in terms of its efficiency.

This first chapter sets the stage for what is to follow, by presenting some higher-order issues related to the selection and use of data structures. We first examine the process by which a designer selects a data structure appropriate to the task at hand. We then consider the role of abstraction in program design. We briefly consider the concept of a design pattern and see some examples. The chapter ends with an exploration of the relationship between problems, algorithms, and programs.

A Philosophy of Data Structures

The Need for Data Structures

You might think that with ever more powerful computers, program efficiency is becoming less important. After all, processor speed and memory size still seem to double every couple of years. Won't any efficiency problem we might have today be solved by tomorrow's hardware?

As we develop more powerful computers, our history so far has always been to use that additional computing power to tackle more complex problems, be it in the form of more sophisticated user interfaces, bigger problem sizes, or new problems previously deemed computationally infeasible. More complex problems demand more computation, making the need for efficient programs even greater. Worse yet, as tasks become more complex, they become less like our everyday experience. Today's computer scientists must be trained to have a thorough understanding of the principles behind efficient program design, because their ordinary life experiences often do not apply when designing computer programs.
In the most general sense, a data structure is any data representation and its associated operations. Even an integer or floating point number stored on the computer can be viewed as a simple data structure. More typically, a data structure is meant to be an organization or structuring for a collection of data items. A sorted list of integers stored in an array is an example of such a structuring.

Given sufficient space to store a collection of data items, it is always possible to search for specified items within the collection, print or otherwise process the data items in any desired order, or modify the value of any particular data item. Thus, it is possible to perform all necessary operations on any data structure. However, using the proper data structure can make the difference between a program running in a few seconds and one requiring many days.

A solution is said to be efficient if it solves the problem within the required resource constraints. Examples of resource constraints include the total space available to store the data (possibly divided into separate main memory and disk space constraints) and the time allowed to perform each subtask. A solution is sometimes said to be efficient if it requires fewer resources than known alternatives, regardless of whether it meets any particular requirements. The cost of a solution is the amount of resources that the solution consumes. Most often, cost is measured in terms of one key resource such as time, with the implied assumption that the solution meets the other resource constraints.

It should go without saying that people write programs to solve problems. However, it is crucial to keep this truism in mind when selecting a data structure to solve a particular problem. Only by first analyzing the problem to determine the performance goals that must be achieved can there be any hope of selecting the right data structure for the job. Poor program designers ignore this analysis step and apply a data structure that they are familiar with but which is inappropriate to the problem. The result is typically a slow program. Conversely, there is no sense in adopting a complex representation to "improve" a program that can meet its performance goals when implemented using a simpler design.

When selecting a data structure to solve a problem, you should follow these steps:

1. Analyze your problem to determine the basic operations that must be supported. Examples of basic operations include inserting a data item into the data structure, deleting a data item from the data structure, and finding a specified data item.
2. Quantify the resource constraints for each operation.
3. Select the data structure that best meets these requirements.
This three-step approach to selecting a data structure operationalizes a data-centered view of the design process. The first concern is for the data and the operations to be performed on them, the next concern is the representation for those data, and the final concern is the implementation of that representation.

Resource constraints on certain key operations, such as search, inserting data records, and deleting data records, normally drive the data structure selection process. Many issues relating to the relative importance of these operations are addressed by the following three questions, which you should ask yourself whenever you must choose a data structure:

1. Are all data items inserted into the data structure at the beginning, or are insertions interspersed with other operations?
2. Can data items be deleted?
3. Are all data items processed in some well-defined order, or is search for specific data items allowed?

Typically, interspersing insertions with other operations, allowing deletion, and supporting search for specific data items all require more complex representations.

Costs and Benefits

Each data structure has associated costs and benefits. In practice, it is hardly ever true that one data structure is better than another for use in all situations. If one data structure or algorithm is superior to another in all respects, the inferior one will usually have long been forgotten. For nearly every data structure and algorithm presented in this book, you will see examples of where it is the best choice. Some of the examples might surprise you.

A data structure requires a certain amount of space for each data item it stores, a certain amount of time to perform a single basic operation, and a certain amount of programming effort. Each problem has constraints on available space and time. Each solution to a problem makes use of the basic operations in some relative proportion, and the data structure selection process must account for this. Only after a careful analysis of your problem's characteristics can you determine the best data structure for the task.

Example: A bank must support many types of transactions with its customers, but we will examine a simple model where customers wish to open accounts, close accounts, and add money or withdraw money from accounts. We can consider this problem at two distinct levels: (1) the requirements for the physical infrastructure and workflow process that the
bank uses in its interactions with its customers, and (2) the requirements for the database system that manages the accounts.

The typical customer opens and closes accounts far less often than he or she accesses the account. Customers are willing to wait many minutes while accounts are created or deleted, but are typically not willing to wait more than a brief time for individual account transactions such as a deposit or withdrawal. These observations can be considered as informal specifications for the time constraints on the problem.

It is common practice for banks to provide two tiers of service. Human tellers or automated teller machines (ATMs) support customer access to account balances and updates such as deposits and withdrawals. Special service representatives are typically provided (during restricted hours) to handle opening and closing accounts. Teller and ATM transactions are expected to take little time. Opening or closing an account can take much longer (perhaps up to an hour from the customer's perspective).

From a database perspective, we see that ATM transactions do not modify the database significantly. For simplicity, assume that if money is added or removed, this transaction simply changes the value stored in an account record. Adding a new account to the database is allowed to take several minutes. Deleting an account need have no time constraint, because from the customer's point of view all that matters is that all the money be returned (equivalent to a withdrawal). From the bank's point of view, the account record might be removed from the database system after business hours, or at the end of the monthly account cycle.

When considering the choice of data structure to use in the database system that manages customer accounts, we see that a data structure that has little concern for the cost of deletion, but is highly efficient for search and moderately efficient for insertion, should meet the resource constraints imposed by this problem. Records are accessible by unique account number (sometimes called an exact-match query). One data structure that meets these requirements is the hash table, described in a later chapter. Hash tables allow for extremely fast exact-match search. A record can be modified quickly when the modification does not affect its space requirements. Hash tables also support efficient insertion of new records. While deletions can also be supported efficiently, too many deletions lead to some degradation in performance for the remaining operations. However, the hash table can be reorganized periodically to restore the system to peak efficiency. Such reorganization can occur offline so as not to affect ATM transactions.
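To make this concrete, here is a minimal sketch of an account store with exactly these access patterns. It is not code from the book: it stands in for the hash table described above with the Java library's HashMap, and the class and method names (AccountStore, purgeClosed, and so on) are illustrative assumptions.

    import java.util.HashMap;

    // Sketch only: account records keyed by unique account number.
    // Exact-match search and in-place balance updates are fast; physical
    // deletion is deferred, mirroring the offline reorganization above.
    class AccountStore {
        static class Account {
            long number;
            long balanceCents;
            boolean closed;        // marked now, physically removed later

            Account(long number) { this.number = number; }
        }

        private final HashMap<Long, Account> byNumber = new HashMap<>();

        void open(long number) {               // a slow path is acceptable here
            byNumber.put(number, new Account(number));
        }

        Account find(long number) {            // exact-match query: expected O(1)
            return byNumber.get(number);
        }

        void deposit(long number, long cents) {
            find(number).balanceCents += cents;
        }

        void close(long number) {              // the customer-visible part is instant
            find(number).closed = true;
        }

        void purgeClosed() {                   // run offline, e.g., after business hours
            byNumber.values().removeIf(a -> a.closed);
        }
    }

The point is not the particular container, but the match between the cost profile of the operations and the constraints identified above.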
Example: A company is developing a database system containing information about cities and towns in the United States. There are many thousands of cities and towns, and the database program should allow users to find information about a particular place by name (another example of an exact-match query). Users should also be able to find all places that match a particular value or range of values for attributes such as location or population size. This is known as a range query.

A reasonable database system must answer queries quickly enough to satisfy the patience of a typical user. For an exact-match query, a few seconds is satisfactory. If the database is meant to support range queries that can return many cities that match the query specification, the entire operation may be allowed to take longer, perhaps on the order of a minute. To meet this requirement, it will be necessary to support operations that process range queries efficiently by processing all cities in the range as a batch, rather than as a series of operations on individual cities.

The hash table suggested in the previous example is inappropriate for implementing our city database, because it cannot perform efficient range queries. The B-tree, covered in a later chapter, supports large databases, insertion and deletion of data records, and range queries. However, a simple linear index, also described later, would be more appropriate if the database is created once and then never changed, such as an atlas distributed on a CD-ROM.

Abstract Data Types and Data Structures

The previous section used the terms "data item" and "data structure" without properly defining them. This section presents terminology and motivates the design process embodied in the three-step approach to selecting a data structure. This motivation stems from the need to manage the tremendous complexity of computer programs.

A type is a collection of values. For example, the Boolean type consists of the values true and false. The integers also form a type. An integer is a simple type because its values contain no subparts. A bank account record will typically contain several pieces of information such as name, address, account number, and account balance. Such a record is an example of an aggregate type or composite type. A data item is a piece of information or a record whose value is drawn from a type. A data item is said to be a member of a type.
A data type is a type together with a collection of operations to manipulate the type. For example, an integer variable is a member of the integer data type. Addition is an example of an operation on the integer data type.

A distinction should be made between the logical concept of a data type and its physical implementation in a computer program. For example, there are two traditional implementations for the list data type: the linked list and the array-based list. The list data type can therefore be implemented using a linked list or an array. Even the term "array" is ambiguous in that it can refer either to a data type or an implementation. "Array" is commonly used in computer programming to mean a contiguous block of memory locations, where each memory location stores one fixed-length data item. By this meaning, an array is a physical data structure. However, array can also mean a logical data type composed of a (typically homogeneous) collection of data items, with each data item identified by an index number. It is possible to implement arrays in many different ways. For example, a later chapter describes the data structure used to implement a sparse matrix, a large two-dimensional array that stores only relatively few non-zero values. This implementation is quite different from the physical representation of an array as contiguous memory locations.

An abstract data type (ADT) is the realization of a data type as a software component. The interface of the ADT is defined in terms of a type and a set of operations on that type. The behavior of each operation is determined by its inputs and outputs. An ADT does not specify how the data type is implemented. These implementation details are hidden from the user of the ADT and protected from outside access, a concept referred to as encapsulation.

A data structure is the implementation for an ADT. In an object-oriented language such as Java, an ADT and its implementation together make up a class. Each operation associated with the ADT is implemented by a member function or method. The variables that define the space required by a data item are referred to as data members. An object is an instance of a class, that is, something that is created and takes up storage during the execution of a computer program.

The term "data structure" often refers to data stored in a computer's main memory. The related term file structure often refers to the organization of data on peripheral storage, such as a disk drive or CD-ROM.

Example: The mathematical concept of an integer, along with operations that manipulate integers, form a data type. The Java int variable type is a physical representation of the abstract integer. The int variable type, along with the operations that act on an int variable, form an ADT.
Unfortunately, the int implementation is not completely true to the abstract integer, as there are limitations on the range of values an int variable can store. If these limitations prove unacceptable, then some other representation for the ADT "integer" must be devised, and a new implementation must be used for the associated operations.

Example: An ADT for a list of integers might specify the following operations:

- Insert a new integer at a particular position in the list.
- Return true if the list is empty.
- Reinitialize the list.
- Return the number of integers currently in the list.
- Delete the integer at a particular position in the list.

From this description, the input and output of each operation should be clear, but the implementation for lists has not been specified. (A sketch of this specification as a Java interface appears below.)

One application that makes use of some list ADT might use particular member functions of that ADT more than a second application, or the two applications might have different time requirements for the various operations. These differences in the requirements of applications are the reason why a given ADT might be supported by more than one implementation.

Example: Two popular implementations for large disk-based database applications are hashing and the B-tree. Both support efficient insertion and deletion of records, and both support exact-match queries. However, hashing is more efficient than the B-tree for exact-match queries. On the other hand, the B-tree can perform range queries efficiently, while hashing is hopelessly inefficient for range queries. Thus, if the database application limits searches to exact-match queries, hashing is preferred. On the other hand, if the application requires support for range queries, the B-tree is preferred. Despite these performance issues, both implementations solve versions of the same problem: updating and searching a large collection of records.

The concept of an ADT can help us to focus on key issues even in non-computing applications.
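Before turning to such an application, here is one way the integer-list specification above might be written down. This is my own illustrative sketch, under the assumption that a Java interface is used to capture the logical form; the interface and method names are not the book's.

    // Logical form only: nothing here commits to an array-based or
    // linked implementation of the list.
    interface IntList {
        void insert(int pos, int value); // insert value at the given position
        boolean isEmpty();               // true if the list holds no integers
        void clear();                    // reinitialize the list to empty
        int length();                    // number of integers currently stored
        void remove(int pos);            // delete the integer at the given position
    }

An array-based class and a linked-list class could both implement this interface unchanged, which is exactly the separation of logical form from physical form discussed in this section.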
Example: When operating a car, the primary activities are steering, accelerating, and braking. On nearly all passenger cars, you steer by turning the steering wheel, accelerate by pushing the gas pedal, and brake by pushing the brake pedal. This design for cars can be viewed as an ADT with operations "steer," "accelerate," and "brake." Two cars might implement these operations in radically different ways, say with different types of engine, or front- versus rear-wheel drive. Yet, most drivers can operate many different cars because the ADT presents a uniform method of operation that does not require the driver to understand the specifics of any particular engine or drive design. These differences are deliberately hidden.

The concept of an ADT is one instance of an important principle that must be understood by any successful computer scientist: managing complexity through abstraction. A central theme of computer science is complexity and techniques for handling it. Humans deal with complexity by assigning a label to an assembly of objects or concepts and then manipulating the label in place of the assembly. Cognitive psychologists call such a label a metaphor. A particular label might be related to other pieces of information or other labels. This collection can in turn be given a label, forming a hierarchy of concepts and labels. This hierarchy of labels allows us to focus on important issues while ignoring unnecessary details.

Example: We apply the label "hard drive" to a collection of hardware that manipulates data on a particular type of storage device, and we apply the label "CPU" to the hardware that controls execution of computer instructions. These and other labels are gathered together under the label "computer." Because even small home computers have millions of components, some form of abstraction is necessary to comprehend how a computer operates.

Consider how you might go about the process of designing a complex computer program that implements and manipulates an ADT. The ADT is implemented in one part of the program by a particular data structure. While designing those parts of the program that use the ADT, you can think in terms of operations on the data type without concern for the data structure's implementation. Without this ability to simplify your thinking about a complex program, you would have no hope of understanding or implementing it.
Example: Consider the design for a relatively simple database system stored on disk. Typically, records on disk in such a program are accessed through a buffer pool rather than directly. Variable-length records might use a memory manager to find an appropriate location within the disk file to place the record. Multiple index structures will typically be used to access records in various ways. Thus, we have a chain of classes, each with its own responsibilities and access privileges. A database query from a user is implemented by searching an index structure. This index requests access to the record by means of a request to the buffer pool. If a record is being inserted or deleted, such a request goes through the memory manager, which in turn interacts with the buffer pool to gain access to the disk file. A program such as this is far too complex for nearly any human programmer to keep all of the details in his or her head at once. The only way to design and implement such a program is through proper use of abstraction and metaphors. In object-oriented programming, such abstraction is handled using classes.

Data types have both a logical and a physical form. The definition of the data type in terms of an ADT is its logical form. The implementation of the data type as a data structure is its physical form. The figure below illustrates this relationship between logical and physical forms for data types. When you implement an ADT, you are dealing with the physical form of the associated data type. When you use an ADT elsewhere in your program, you are concerned with the associated data type's logical form. Some sections of this book focus on physical implementations for a given data structure. Other sections use the logical ADT for the data type in the context of a higher-level task.

[Figure: The relationship between data items, abstract data types, and data structures. The ADT defines the logical form of the data type (a type and its operations); the data structure implements the physical form (storage space and subroutines).]

Example: A particular Java environment might provide a library that includes a list class. The logical form of the list is defined by the public functions, their inputs, and their outputs that define the class. This might be all that you know about the list class implementation, and this should be all you need to know. Within the class, a variety of physical implementations for lists is possible. Several are described later in the book.

Design Patterns

At a higher level of abstraction than ADTs are abstractions for describing the design of programs, that is, the interactions of objects and classes.
Experienced software designers learn and reuse various techniques for combining software components. Such techniques are sometimes referred to as design patterns. A design pattern embodies and generalizes important design concepts for a recurring problem. A primary goal of design patterns is to quickly transfer the knowledge gained by expert designers to newer programmers. Another goal is to allow for efficient communication between programmers. It's much easier to discuss a design issue when you share a vocabulary relevant to the topic.

Specific design patterns emerge from the discovery that a particular design problem appears repeatedly in many contexts. They are meant to solve real problems. Design patterns are a bit like generics: they describe the structure for a design solution, with the details filled in for any given problem. Design patterns are a bit like data structures: each one provides costs and benefits, which implies that tradeoffs are possible. Therefore, a given design pattern might have variations on its application to match the various tradeoffs inherent in a given situation.

The rest of this section introduces a few simple design patterns that are used later in the book.

Flyweight

The Flyweight design pattern is meant to solve the following problem. You have an application with many objects. Some of these objects are identical in the information that they contain, and the role that they play. But they must be reached from various places, and conceptually they really are distinct objects. Because so much information is shared, we would like to take advantage of the opportunity to reduce memory cost by sharing space.
An example comes from representing the layout for a document. The letter "C" might reasonably be represented by an object that describes that character's strokes and bounding box. However, we don't want to create a separate "C" object everywhere in the document that a "C" appears. The solution is to allocate a single copy of the shared representation for the "C" object. Then, every place in the document that needs a "C" in a given font, size, and typeface will reference this single copy. The various instances of references to "C" are called flyweights.

We could imagine describing the layout of text on a page by using a tree structure. The root of the tree is a node representing the page. The page has multiple child nodes, one for each column. The column nodes have child nodes for each row. And the rows have child nodes for each character. These representations for characters are the flyweights. The flyweight includes the reference to the shared shape information, and might contain additional information specific to that instance. For example, each instance for "C" will contain a reference to the shared information about strokes and shapes, and it might also contain the exact location for that instance of the character on the page.

Flyweights are used in the implementation for the PR quadtree data structure for storing collections of point objects, described in a later chapter. In a PR quadtree, we again have a tree with leaf nodes. Many of these leaf nodes (the ones that represent empty areas) contain the same information. These identical nodes can be implemented using the Flyweight design pattern for better memory efficiency.

Visitor

Given a tree of objects to describe a page layout, we might wish to perform some activity on every node in the tree. A later chapter discusses tree traversal, which is the process of visiting every node in the tree in a defined order. A simple example for our text composition application might be to count the number of nodes in the tree that represents the page. At another time, we might wish to print a listing of all the nodes for debugging purposes.

We could write a separate traversal function for each such activity that we intend to perform on the tree. A better approach would be to write a generic traversal function, and pass in the activity to be performed at each node. This organization constitutes the Visitor design pattern. The Visitor design pattern is used later in the book in the sections on tree traversal and graph traversal.
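A minimal sketch of that generic-traversal idea, with the activity passed in as a parameter, might look as follows in Java. The type and class names here are assumptions for illustration, not code from the book.

    // One traversal, many interchangeable activities.
    interface Visitor<E> {
        void visit(E item);
    }

    class TreeNode<E> {
        E item;
        TreeNode<E> left, right;

        TreeNode(E item) { this.item = item; }
    }

    class Traversal {
        // Written once; the action performed at each node is passed in.
        static <E> void preorder(TreeNode<E> root, Visitor<E> v) {
            if (root == null) return;
            v.visit(root.item);
            preorder(root.left, v);
            preorder(root.right, v);
        }
    }

    class CountNodes<E> implements Visitor<E> {
        int count = 0;
        public void visit(E item) { count++; }                   // counting activity
    }

    class PrintNodes<E> implements Visitor<E> {
        public void visit(E item) { System.out.println(item); }  // debug listing
    }

Counting the nodes and printing them for debugging, the two activities mentioned above, then differ only in which visitor object is handed to the same traversal function.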
Composite

There are two fundamental approaches to dealing with the relationship between a collection of actions and a hierarchy of object types. First consider the typical procedural approach. Say we have a base class for page layout entities, with a subclass hierarchy to define specific subtypes (page, columns, rows, figures, characters, etc.). And say there are actions to be performed on a collection of such objects (such as rendering the objects to the screen). The procedural design approach is for each action to be implemented as a method that takes as a parameter a pointer to the base class type. Each such method will traverse through the collection of objects, visiting each object in turn. Each method contains something like a case statement that defines the details of the action for each subclass in the collection (e.g., page, column, row, character). We can cut the code down some by using the Visitor design pattern, so that we only need to write the traversal once, and then write a visitor subroutine for each action that might be applied to the collection of objects. But each such visitor subroutine must still contain logic for dealing with each of the possible subclasses.

In our page composition application, there are only a few activities that we would like to perform on the page representation. We might render the objects in full detail. Or we might want a "rough draft" rendering that prints only the bounding boxes of the objects. If we come up with a new activity to apply to the collection of objects, we do not need to change any of the code that implements the existing activities. But adding new activities won't happen often for this application. In contrast, there could be many object types, and we might frequently add new object types to our implementation. Unfortunately, adding a new object type requires that we modify each activity, and the subroutines implementing the activities get rather long case statements to distinguish the behavior of the many subclasses.

An alternative design is to have each object subclass in the hierarchy embody the action for each of the various activities that might be performed. Each subclass will have code to perform each activity (such as full rendering or bounding box rendering). Then, if we wish to apply the activity to the collection, we simply call the first object in the collection and specify the action (as a method call on that object). In the case of our page layout and its hierarchical collection of objects, those objects that contain other objects (such as a row object that contains letters) will call the appropriate method for each child. If we want to add a new activity with this organization, we have to change the code for every subclass. But this is relatively rare for our text compositing application. In contrast, adding a new object into the subclass hierarchy (which for this application is far more likely than adding a new rendering function) is easy.
Adding a new subclass does not require changing any of the existing subclasses. It merely requires that we define the behavior of each activity that can be performed on that subclass.

This second design approach of burying the functional activity in the subclasses is called the Composite design pattern. A detailed example for using the Composite design pattern is presented later in the book.

Strategy

Our final example of a design pattern lets us encapsulate and make interchangeable a set of alternative actions that might be performed as part of some larger activity. Again continuing our text compositing example, each output device that we wish to render to will require its own function for doing the actual rendering. That is, the objects will be broken down into constituent pixels or strokes, but the actual mechanics of rendering a pixel or stroke will depend on the output device. We don't want to build this rendering functionality into the object subclasses. Instead, we want to pass to the subroutine performing the rendering action a method or class that does the appropriate rendering details for that output device. That is, we wish to hand to the object the appropriate "strategy" for accomplishing the details of the rendering task. Thus, we call this approach the Strategy design pattern. The Strategy design pattern will be discussed further in the chapter on sorting. There, a sorting function is given a class (called a comparator) that understands how to extract and compare the key values for records to be sorted. In this way, the sorting function does not need to know any details of how its record type is implemented.

One of the biggest challenges to understanding design patterns is that many of them appear to be pretty much the same. For example, you might be confused about the difference between the Composite pattern and the Visitor pattern. The distinction is that the Composite design pattern is about whether to give control of the traversal process to the nodes of the tree or to the tree itself. Both approaches can make use of the Visitor design pattern to avoid rewriting the traversal function many times, by encapsulating the activity performed at each node.

But isn't the Strategy design pattern doing the same thing? The difference between the Visitor pattern and the Strategy pattern is more subtle. Here the difference is primarily one of intent and focus. In both the Strategy design pattern and the Visitor design pattern, an activity is being passed in as a parameter. The Strategy design pattern is focused on encapsulating an activity that is part of a larger process, so that different ways of performing that activity can be substituted. The Visitor design pattern is focused on encapsulating an activity that will be performed on all members of a collection, so that completely different activities can be substituted within a generic method that accesses all of the collection members.
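Since the comparator mentioned above is the text's own example of a strategy, here is a minimal sketch of the idea using the Java library's Comparator interface; the record type and its fields are illustrative assumptions, not the book's code.

    import java.util.Arrays;
    import java.util.Comparator;

    class SortByStrategy {
        static class Record {
            String name;
            int salary;

            Record(String name, int salary) {
                this.name = name;
                this.salary = salary;
            }
        }

        public static void main(String[] args) {
            Record[] staff = { new Record("Ann", 52000), new Record("Bob", 48000) };

            // The sort is written once; each comparator is an interchangeable
            // strategy, so the sort never inspects the record type directly.
            Arrays.sort(staff, Comparator.comparing((Record r) -> r.name));
            Arrays.sort(staff, Comparator.comparingInt((Record r) -> r.salary));
        }
    }

Swapping one comparator for another changes how records are ordered without touching the sorting code itself, which is exactly the substitution of strategies described above.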
Problems, Algorithms, and Programs

Programmers commonly deal with problems, algorithms, and computer programs. These are three distinct concepts.

Problems: As your intuition would suggest, a problem is a task to be performed. It is best thought of in terms of inputs and matching outputs. A problem definition should not include any constraints on how the problem is to be solved. The solution method should be developed only after the problem is precisely defined and thoroughly understood. However, a problem definition should include constraints on the resources that may be consumed by any acceptable solution. For any problem to be solved by a computer, there are always such constraints, whether stated or implied. For example, any computer program may use only the main memory and disk space available, and it must run in a "reasonable" amount of time.

Problems can be viewed as functions in the mathematical sense. A function is a matching between inputs (the domain) and outputs (the range). An input to a function might be a single value or a collection of information. The values making up an input are called the parameters of the function. A specific selection of values for the parameters is called an instance of the problem. For example, the input parameter to a sorting function might be an array of integers. A particular array of integers, with a given size and specific values for each position in the array, would be an instance of the sorting problem. Different instances might generate the same output. However, any problem instance must always result in the same output every time the function is computed using that particular input.

This concept of all problems behaving like mathematical functions might not match your intuition for the behavior of computer programs. You might know of programs to which you can give the same input value on two separate occasions, and two different outputs will result. For example, if you type "date" to a typical UNIX command line prompt, you will get the current date. Naturally the date will be different on different days, even though the same command is given. However, there is obviously more to the input for the date program than the command that you type to run the program. The date program computes a function. In other words, on any particular day there can only be a single answer returned by a properly running date program on a completely specified input. For all computer programs, the output is completely determined by the program's full set of inputs. Even a "random number generator" is completely determined by its inputs (although some random number generating systems appear to get around this by accepting a random input from a physical process beyond the user's control). The relationship between programs and functions is explored further in a later chapter.
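A small sketch may help fix this vocabulary. Treating sorting as a function, the array passed in is the parameter, one particular array is an instance, and a correct program maps each instance to exactly one output. The code below is illustrative, not from the book.

    import java.util.Arrays;

    class SortingAsFunction {
        // The problem viewed as a function from instances to outputs.
        static int[] solve(int[] instance) {
            int[] output = Arrays.copyOf(instance, instance.length);
            Arrays.sort(output);   // any correct algorithm computes this same function
            return output;
        }

        public static void main(String[] args) {
            int[] instance = {42, 7, 19, 7};
            // The same instance always yields the same output: [7, 7, 19, 42].
            System.out.println(Arrays.toString(solve(instance)));
        }
    }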
Algorithms: An algorithm is a method or a process followed to solve a problem. If the problem is viewed as a function, then an algorithm is an implementation for the function that transforms an input to the corresponding output. A problem can be solved by many different algorithms. A given algorithm solves only one problem (i.e., computes a particular function). This book covers many problems, and for several of these problems I present more than one algorithm. For the important problem of sorting I present nearly a dozen algorithms!

The advantage of knowing several solutions to a problem is that solution A might be more efficient than solution B for a specific variation of the problem, or for a specific class of inputs to the problem, while solution B might be more efficient than A for another variation or class of inputs. For example, one sorting algorithm might be the best for sorting a small collection of integers, another might be the best for sorting a large collection of integers, and a third might be the best for sorting a collection of variable-length strings.

By definition, an algorithm possesses several properties. Something can only be called an algorithm to solve a particular problem if it has all of the following properties:

1. It must be correct. In other words, it must compute the desired function, converting each input to the correct output. Note that every algorithm implements some function, because every algorithm maps every input to some output (even if that output is a system crash). At issue here is whether a given algorithm implements the intended function.

2. It is composed of a series of concrete steps. Concrete means that the action described by that step is completely understood, and doable, by the person or machine that must perform the algorithm. Each step must also be doable in a finite amount of time. Thus, the algorithm gives us a "recipe" for solving the problem by performing a series of steps, where each such step is within our capacity to perform. The ability to perform a step can depend on who or what is intended to execute the recipe. For example, the steps of a cookie recipe in a cookbook might be considered sufficiently concrete for instructing a human cook, but not for programming an automated cookie-making factory.

3. There can be no ambiguity as to which step will be performed next. Often it is the next step of the algorithm description. Selection (e.g., the if statements in Java) is normally a part of any language for describing algorithms. Selection allows a choice for which step will be performed next, but the selection process is unambiguous at the time when the choice is made.
4. It must be composed of a finite number of steps. If the description for the algorithm were made up of an infinite number of steps, we could never hope to write it down, nor implement it as a computer program. Most languages for describing algorithms (including English and "pseudocode") provide some way to perform repeated actions, known as iteration. Examples of iteration in programming languages include the while and for loop constructs of Java. Iteration allows for short descriptions, with the number of steps actually performed controlled by the input.

5. It must terminate. In other words, it may not go into an infinite loop.

Programs: We often think of a computer program as an instance, or concrete representation, of an algorithm in some programming language. In this book, nearly all of the algorithms are presented in terms of programs, or parts of programs. Naturally, there are many programs that are instances of the same algorithm, because any modern computer programming language can be used to implement the same collection of algorithms (although some programming languages can make life easier for the programmer). To simplify presentation throughout the remainder of the text, I often use the terms "algorithm" and "program" interchangeably, despite the fact that they are really separate concepts. By definition, an algorithm must provide sufficient detail that it can be converted into a program when needed.

The requirement that an algorithm must terminate means that not all computer programs meet the technical definition of an algorithm. Your operating system is one such program. However, you can think of the various tasks for an operating system (each with associated inputs and outputs) as individual problems, each solved by specific algorithms implemented by a part of the operating system program, and each one of which terminates once its output is produced.

To summarize: A problem is a function or a mapping of inputs to outputs. An algorithm is a recipe for solving a problem whose steps are concrete and unambiguous. The algorithm must be correct, of finite length, and must terminate for all inputs. A program is an instantiation of an algorithm in a computer programming language.

Further Reading

The first authoritative work on data structures and algorithms was the series of books The Art of Computer Programming by Donald Knuth, with Volumes 1 and 3 being most relevant to the study of data structures [Knu].
A modern encyclopedic approach to data structures and algorithms that should be easy to understand once you have mastered this book is Algorithms by Robert Sedgewick [Sed]. For an excellent and highly readable (but more advanced) teaching introduction to algorithms, their design, and their analysis, see Introduction to Algorithms: A Creative Approach by Udi Manber [Man]. For an advanced, encyclopedic approach, see Introduction to Algorithms by Cormen, Leiserson, and Rivest [CLRS]. Steven Skiena's The Algorithm Design Manual [Ski] provides pointers to many implementations for data structures and algorithms that are available on the web.

For a gentle introduction to ADTs and program specification, see Abstract Data Types: Their Specification, Representation, and Use by Thomas, Robinson, and Emms [TRE].

The claim that all modern programming languages can implement the same algorithms (stated more precisely, any function that is computable by one programming language is computable by any programming language with certain standard capabilities) is a key result from computability theory. For an easy introduction to this field, see James Hein, Discrete Structures, Logic, and Computability [Hei].

Much of computer science is devoted to problem solving. Indeed, this is what attracts many people to the field. How to Solve It by George Polya [Pol] is considered to be the classic work on how to improve your problem-solving abilities. If you want to be a better student (as well as a better problem solver in general), see Strategies for Creative Problem Solving by Fogler and LeBlanc [FL], Effective Problem Solving by Marvin Levine [Lev], and Problem Solving and Comprehension by Arthur Whimbey and Jack Lochhead [WL].

See The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes [Jay] for a good discussion on how humans use the concept of metaphor to handle complexity. More directly related to computer science education and programming, see "Cogito, Ergo Sum! Cognitive Processes of Students Dealing with Data Structures" by Dan Aharoni [Aha] for a discussion on moving from programming-context thinking to higher-level (and more design-oriented) programming-free thinking.

On a more pragmatic level, most people study data structures to write better programs. If you expect your program to work correctly and efficiently, it must first be understandable to yourself and your co-workers. Kernighan and Pike's The Practice of Programming [KP] discusses a number of practical issues related to programming, including good coding and documentation style. For an excellent (and entertaining!) introduction to the difficulties involved with writing large programs, read the classic The Mythical Man-Month: Essays on Software Engineering by Frederick Brooks [Bro].
If you want to be a successful Java programmer, you need good reference manuals close at hand. David Flanagan's Java in a Nutshell [Fla] provides a good reference for those familiar with the basics of the language.

After gaining proficiency in the mechanics of program writing, the next step is to become proficient in program design. Good design is difficult to learn in any discipline, and good design for object-oriented software is one of the most difficult of arts. The novice designer can jump-start the learning process by studying well-known and well-used design patterns. The classic reference on design patterns is Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, Helm, Johnson, and Vlissides [GHJV] (this is commonly referred to as the "gang of four" book). Unfortunately, this is an extremely difficult book to understand, in part because the concepts are inherently difficult. A number of Web sites are available that discuss design patterns, and which provide study guides for the Design Patterns book. Two other books that discuss object-oriented software design are Object-Oriented Software Design and Construction with C++ by Dennis Kafura [Kaf], and Object-Oriented Design Heuristics by Arthur Riel [Rie].

Exercises

The exercises for this chapter are different from those in the rest of the book. Most of these exercises are answered in the following chapters. However, you should not look up the answers in other parts of the book. These exercises are intended to make you think about some of the issues to be covered later on. Answer them to the best of your ability with your current knowledge.

Think of a program you have used that is unacceptably slow. Identify the specific operations that make the program slow. Identify other basic operations that the program performs quickly enough.

Most programming languages have a built-in integer data type. Normally this representation has a fixed size, thus placing a limit on how large a value can be stored in an integer variable. Describe a representation for integers that has no size restriction (other than the limits of the computer's available main memory), and thus no practical limit on how large an integer can be stored. Briefly show how your representation can be used to implement the operations of addition, multiplication, and exponentiation.

Define an ADT for character strings. Your ADT should consist of typical functions that can be performed on strings, with each function defined in
terms of its input and output. Then define two different physical representations for strings.

Define an ADT for a list of integers. First, decide what functionality your ADT should provide. The example ADTs given earlier in this chapter should give you some ideas. Then, specify your ADT in Java in the form of an abstract class declaration, showing the functions, their parameters, and their return types.

Briefly describe how integer variables are typically represented on a computer. (Look up one's complement and two's complement arithmetic in an introductory computer science textbook if you are not familiar with these.) Why does this representation for integers qualify as a data structure as defined earlier in this chapter?

Define an ADT for a two-dimensional array of integers. Specify precisely the basic operations that can be performed on such arrays. Next, imagine an application that stores an array with 1000 rows and 1000 columns, where less than 10,000 of the array values are non-zero. Describe two different implementations for such arrays that would be more space efficient than a standard two-dimensional array implementation requiring one million positions.

Imagine that you have been assigned to implement a sorting program. The goal is to make this program general purpose, in that you don't want to define in advance what record or key types are used. Describe ways to generalize a simple sorting algorithm (such as insertion sort, or any other sort you are familiar with) to support this generalization.

Imagine that you have been assigned to implement a simple sequential search on an array. The problem is that you want the search to be as general as possible. This means that you need to support arbitrary record and key types. Describe ways to generalize the search function to support this goal. Consider the possibility that the function will be used multiple times in the same program, on differing record types. Consider the possibility that the function will need to be used on different keys (possibly with the same or different types) of the same record. For example, a student data record might be searched by zip code, by name, by salary, or by GPA.

Does every problem have an algorithm? Does every algorithm have a Java program?

Consider the design for a spelling checker program meant to run on a home computer. The spelling checker should be able to handle quickly a document of less than twenty pages. Assume that the spelling checker comes with a dictionary of about 20,000 words. What primitive operations must be implemented on the dictionary, and what is a reasonable time constraint for each operation?
Imagine that you have been hired to design a database service containing information about cities and towns in the United States, as described in the example earlier in this chapter. Suggest two possible implementations for the database.

Imagine that you are given an array of records that is sorted with respect to some key field contained in each record. Give two different algorithms for searching the array to find the record with a specified key value. Which one do you consider "better" and why?

How would you go about comparing two proposed algorithms for sorting an array of integers? In particular:

(a) What would be appropriate measures of cost to use as a basis for comparing the two sorting algorithms?

(b) What tests or analysis would you conduct to determine how the two algorithms perform under these cost measures?

A common problem for compilers and text editors is to determine if the parentheses (or other brackets) in a string are balanced and properly nested. For example, the string "((())())()" contains properly nested pairs of parentheses, but the string ")()(" does not, and the string "())" does not contain properly matching parentheses.

(a) Give an algorithm that returns true if a string contains properly nested and balanced parentheses, and false otherwise. Hint: At no time while scanning a legal string from left to right will you have encountered more right parentheses than left parentheses.

(b) Give an algorithm that returns the position in the string of the first offending parenthesis if the string is not properly nested and balanced. That is, if an excess right parenthesis is found, return its position; if there are too many left parentheses, return the position of the first excess left parenthesis. Return -1 if the string is properly balanced and nested.

A graph consists of a set of objects (called vertices) and a set of edges, where each edge connects two vertices. Any given pair of vertices can be connected by only one edge. Describe at least two different ways to represent the connections defined by the vertices and edges of a graph.

Imagine that you are a shipping clerk for a large company. You have just been handed about 1000 invoices, each of which is a single sheet of paper with a large number in the upper right corner. The invoices must be sorted by this number, in order from lowest to highest. Write down as many different approaches to sorting the invoices as you can think of.
Imagine that you are a programmer who must write a function to sort an array of about 1000 integers from lowest value to highest value. Write down at least five approaches to sorting the array. Do not write algorithms in Java or pseudocode. Just write a sentence or two for each approach to describe how it would work.

Think of an algorithm to find the maximum value in an (unsorted) array. Now, think of an algorithm to find the second largest value in the array. Which is harder to implement? Which takes more time to run (as measured by the number of comparisons performed)? Now, think of an algorithm to find the third largest value. Finally, think of an algorithm to find the middle value. Which is the most difficult of these problems to solve?

An unsorted list of integers allows for constant-time insert simply by adding a new integer at the end of the list. Unfortunately, searching for the integer with key value X requires a sequential search through the unsorted list until you find X, which on average requires looking at half the list. On the other hand, a sorted array-based list of n integers can be searched in log n time by using binary search. Unfortunately, inserting a new integer requires a lot of time because many integers might be shifted in the array if we want to keep it sorted. How might data be organized to support both insertion and search in log n time?
Mathematical Preliminaries

This chapter presents mathematical notation, background, and techniques used throughout the book. This material is provided primarily for review and reference. You might wish to return to the relevant sections when you encounter unfamiliar notation or mathematical techniques in later chapters.

A later section on estimating might be unfamiliar to many readers. Estimating is not a mathematical technique, but rather a general engineering skill. It is enormously useful to computer scientists doing design work, because any proposed solution whose estimated resource requirements fall well outside the problem's resource constraints can be discarded immediately.

Sets and Relations

The concept of a set in the mathematical sense has wide application in computer science. The notations and techniques of set theory are commonly used when describing and implementing algorithms because the abstractions associated with sets often help to clarify and simplify algorithm design.

A set is a collection of distinguishable members or elements. The members are typically drawn from some larger population known as the base type. Each member of a set is either a primitive element of the base type or is a set itself. There is no concept of duplication in a set. Each value from the base type is either in the set or not in the set. For example, a set named P might be the three integers 7, 11, and 42. In this case, P's members are 7, 11, and 42, and the base type is integer.

The figure below shows the symbols commonly used to express sets and their relationships. Here are some examples of this notation in use. First define two sets, P and Q:

P = {2, 3, 5}    Q = {5, 10}
{1, 4}                         A set composed of the members 1 and 4
{x | x is a positive integer}  A set definition using a set former; for example, the set of all positive integers
x ∈ P                          x is a member of set P
x ∉ P                          x is not a member of set P
∅                              The null or empty set
|P|                            Cardinality: size of set P, or number of members for set P
P ⊆ Q, Q ⊇ P                   Set P is included in set Q; set P is a subset of set Q; set Q is a superset of set P
P ∪ Q                          Set Union: all elements appearing in P OR Q
P ∩ Q                          Set Intersection: all elements appearing in P AND Q
P - Q                          Set Difference: all elements of set P NOT in set Q

Figure: Set notation.

Here, |P| = 3 (because P has three members) and |Q| = 2 (because Q has two members). The union of P and Q, written P ∪ Q, is the set of elements in either P or Q, which is {2, 3, 5, 10}. The intersection of P and Q, written P ∩ Q, is the set of elements that appear in both P and Q, which is {5}. The set difference of P and Q, written P - Q, is the set of elements that occur in P but not in Q, which is {2, 3}. Note that P ∪ Q = Q ∪ P and that P ∩ Q = Q ∩ P, but in general P - Q ≠ Q - P. In this example, Q - P = {10}. Note that the set {5, 3, 2} is indistinguishable from set P, because sets have no concept of order. Likewise, set {2, 3, 2, 5} is also indistinguishable from P, because sets have no concept of duplicate elements.

The powerset of a set S is the set of all possible subsets for S. Consider the set S = {a, b, c}. The powerset of S is

{∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c}}.
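These basic set operations map directly onto Java's collection classes. The following is a minimal sketch of my own (not from the text) that uses java.util.HashSet to compute the union, intersection, difference, and cardinality of the example sets P and Q:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetOps {
  public static void main(String[] args) {
    Set<Integer> P = new HashSet<>(Arrays.asList(2, 3, 5));
    Set<Integer> Q = new HashSet<>(Arrays.asList(5, 10));

    Set<Integer> union = new HashSet<>(P);
    union.addAll(Q);                    // P union Q = {2, 3, 5, 10}

    Set<Integer> intersection = new HashSet<>(P);
    intersection.retainAll(Q);          // P intersect Q = {5}

    Set<Integer> difference = new HashSet<>(P);
    difference.removeAll(Q);            // P - Q = {2, 3}

    System.out.println("|P| = " + P.size());  // cardinality: 3
    System.out.println(union + " " + intersection + " " + difference);
  }
}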
Sometimes we wish to define a collection of elements with no order (like a set), but with duplicate-valued elements. Such a collection is called a bag. To distinguish bags from sets, I use square brackets [] around a bag's elements. (The object referred to here as a bag is sometimes called a multilist. But I reserve the term multilist for a list that may contain sublists, discussed later in the book.) For example, bag [3, 4, 5, 4] is distinct from bag [3, 4, 5], while set {3, 4, 5, 4} is indistinguishable from set {3, 4, 5}. However, bag [3, 4, 5, 4] is indistinguishable from bag [3, 4, 4, 5].

A sequence is a collection of elements with an order, and which may contain duplicate-valued elements. A sequence is also sometimes called a tuple or a vector. In a sequence, there is a 0th element, a 1st element, a 2nd element, and so on. I indicate a sequence by using angle brackets ⟨⟩ to enclose its elements. For example, ⟨3, 4, 5, 4⟩ is a sequence. Note that sequence ⟨3, 5, 4, 4⟩ is distinct from sequence ⟨3, 4, 5, 4⟩, and both are distinct from sequence ⟨3, 4, 5⟩.

A relation R over set S is a set of ordered pairs from S. As an example of a relation, if S is {a, b, c}, then {⟨a, c⟩, ⟨b, c⟩, ⟨c, b⟩} is a relation, and {⟨a, a⟩, ⟨a, c⟩, ⟨b, b⟩, ⟨b, c⟩, ⟨c, c⟩} is a different relation. If tuple ⟨x, y⟩ is in relation R, we may use the infix notation xRy. We often use relations such as the less than operator (<) on the natural numbers, which includes ordered pairs such as ⟨1, 3⟩ and ⟨2, 23⟩, but not ⟨3, 2⟩ or ⟨2, 2⟩. Rather than writing the relationship in terms of ordered pairs, we typically use an infix notation for such relations, writing 1 < 3.

Define the properties of relations as follows, where R is a binary relation over set S:

R is reflexive if aRa for all a ∈ S.
R is symmetric if whenever aRb, then bRa, for all a, b ∈ S.
R is antisymmetric if whenever aRb and bRa, then a = b, for all a, b ∈ S.
R is transitive if whenever aRb and bRc, then aRc, for all a, b, c ∈ S.

As examples, for the natural numbers, < is antisymmetric and transitive; ≤ is reflexive, antisymmetric, and transitive; and = is reflexive, antisymmetric, and transitive. For people, the relation "is a sibling of" is symmetric and transitive. If we define a person to be a sibling of himself, then it is reflexive; if we define a person not to be a sibling of himself, then it is not reflexive.

R is an equivalence relation on set S if it is reflexive, symmetric, and transitive. An equivalence relation can be used to partition a set into equivalence classes. If two elements a and b are equivalent to each other, we write a ≡ b. A partition of a set S is a collection of subsets that are disjoint from each other and whose union is S. An equivalence relation on set S partitions the set into subsets whose elements are equivalent. A discussion on how to represent equivalence classes on a set, and an application for disjoint sets, appear later in the book.
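To make these definitions concrete, here is a small illustrative sketch of my own (not from the text; the pair encoding and names are invented for illustration) that represents a relation over a finite set as a set of ordered pairs and tests the three properties that define an equivalence relation:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class RelationProps {
  // Encode the ordered pair <a,b> as the string "a,b" so it can live in a HashSet.
  static String pair(int a, int b) { return a + "," + b; }

  static boolean isEquivalence(int[] S, Set<String> R) {
    for (int a : S)                                  // reflexive: aRa for all a
      if (!R.contains(pair(a, a))) return false;
    for (int a : S) for (int b : S)                  // symmetric: aRb implies bRa
      if (R.contains(pair(a, b)) && !R.contains(pair(b, a))) return false;
    for (int a : S) for (int b : S) for (int c : S)  // transitive: aRb, bRc imply aRc
      if (R.contains(pair(a, b)) && R.contains(pair(b, c))
          && !R.contains(pair(a, c))) return false;
    return true;
  }

  public static void main(String[] args) {
    int[] S = {1, 2, 3};
    // "Congruent mod 2" on S: {<1,1>, <2,2>, <3,3>, <1,3>, <3,1>}
    Set<String> R = new HashSet<>(Arrays.asList("1,1", "2,2", "3,3", "1,3", "3,1"));
    System.out.println(isEquivalence(S, R));  // prints true
  }
}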
Example: For the integers, = is an equivalence relation that partitions each element into a distinct subset. In other words, for any integer a, three things are true: a = a; if a = b then b = a; and if a = b and b = c, then a = c. Of course, for distinct integers a, b, and c there are never cases where a = b, b = a, or b = c. So the claims that = is symmetric and transitive are vacuously true (there are never examples in the relation where these events occur), but because the requirements for symmetry and transitivity are not violated, the relation is symmetric and transitive.

Example: If we clarify the definition of sibling to mean that a person is a sibling of him- or herself, then the sibling relation is an equivalence relation that partitions the set of people.

Example: We can use the modulus function (defined in the next section) to define an equivalence relation for the set of integers. Use the modulus function to define a binary relation such that two numbers x and y are in the relation if and only if x mod m = y mod m. Thus, for m = 4, ⟨1, 5⟩ is in the relation because 1 mod 4 = 5 mod 4. We see that modulus used in this way defines an equivalence relation on the integers, and this relation can be used to partition the integers into m equivalence classes. This relation is an equivalence relation because x mod m = x mod m for all x; if x mod m = y mod m, then y mod m = x mod m; and if x mod m = y mod m and y mod m = z mod m, then x mod m = z mod m.
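As a quick illustration of this last example, the following sketch (my own; the class and variable names are invented for illustration) partitions a handful of nonnegative integers into the m equivalence classes induced by the mod m relation:

import java.util.ArrayList;
import java.util.List;

public class ModClasses {
  public static void main(String[] args) {
    int m = 4;                       // the modulus from the example above
    int[] values = {1, 5, 9, 2, 6, 3, 4, 8};

    // classes.get(r) holds every value x with x mod m == r
    List<List<Integer>> classes = new ArrayList<>();
    for (int r = 0; r < m; r++) classes.add(new ArrayList<>());
    // values here are nonnegative; Java's % behaves differently on
    // negative operands, as discussed in the next section
    for (int x : values) classes.get(x % m).add(x);

    for (int r = 0; r < m; r++)
      System.out.println("x mod 4 = " + r + ": " + classes.get(r));
    // e.g., 1, 5, and 9 land in the same class because 1 mod 4 = 5 mod 4 = 9 mod 4
  }
}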
A binary relation is called a partial order if it is antisymmetric and transitive. The set on which the partial order is defined is called a partially ordered set or a poset. Elements x and y of a set are comparable under a given relation if either xRy or yRx. If every pair of distinct elements in a partial order are comparable, then the order is called a total order or linear order. (Not all authors use this definition for partial order. I have seen at least three significantly different definitions in the literature. I have selected the one that lets < and ≤ both define partial orders on the integers, because this seems the most natural to me.)

Example: For the integers, the relations < and ≤ both define partial orders. Operation < is a total order because, for every pair of integers x and y such that x ≠ y, either x < y or y < x. Likewise, ≤ is a total order because, for every pair of integers x and y such that x ≠ y, either x ≤ y or y ≤ x.

Example: For the powerset of the integers, the subset operator defines a partial order (because it is antisymmetric and transitive). For example, {1, 2} ⊆ {1, 2, 3}. However, sets {1, 2} and {1, 3} are not comparable by the subset operator, because neither is a subset of the other. Therefore, the subset operator does not define a total order on the powerset of the integers.

Miscellaneous Notation

Units of measure: I use the following notation for units of measure. "B" will be used as an abbreviation for bytes, "b" for bits, "KB" for kilobytes (2^10 = 1024 bytes), "MB" for megabytes (2^20 bytes), "GB" for gigabytes (2^30 bytes), and "ms" for milliseconds (a millisecond is 1/1000 of a second). Spaces are not placed between the number and the unit abbreviation when a power of two is intended. Thus a disk drive of size 25 gigabytes (where a gigabyte is intended as 2^30 bytes) will be written as "25GB". Spaces are used when a decimal value is intended. An amount of 2000 bits would therefore be written "2 Kb", while "2Kb" represents 2048 bits. 2000 milliseconds is written as 2000 ms. Note that in this book large amounts of storage are nearly always measured in powers of two and times in powers of ten.

Factorial function: The factorial function, written n! for n an integer greater than 0, is the product of the integers between 1 and n, inclusive. Thus, 5! = 1 * 2 * 3 * 4 * 5 = 120. As a special case, 0! = 1. The factorial function grows quickly as n becomes larger. Because computing the factorial function directly is a time-consuming process, it can be useful to have an equation that provides a good approximation. Stirling's approximation states that n! ≈ sqrt(2πn) (n/e)^n, where e ≈ 2.71828 (e is the base for the system of natural logarithms). Thus we see that while n! grows slower than n^n (because sqrt(2πn)/e^n < 1), it grows faster than c^n for any positive integer constant c. (The symbol "≈" means "approximately equal.")
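To get a feel for how good Stirling's approximation is, here is a short sketch of my own (for illustration, not from the text) that compares n! with the approximation for a few values of n:

public class Stirling {
  static double factorial(int n) {   // n! computed directly
    double f = 1.0;
    for (int i = 2; i <= n; i++) f *= i;
    return f;
  }

  static double stirling(int n) {    // sqrt(2*pi*n) * (n/e)^n
    return Math.sqrt(2 * Math.PI * n) * Math.pow(n / Math.E, n);
  }

  public static void main(String[] args) {
    for (int n = 1; n <= 10; n++)
      System.out.printf("%2d! = %10.0f, Stirling ~ %10.1f%n",
                        n, factorial(n), stirling(n));
    // The relative error shrinks as n grows (it is about 1/(12n)).
  }
}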
Permutations: A permutation of a sequence S is simply the members of S arranged in some order. For example, a permutation of the integers 1 through n would be those values arranged in some order. If the sequence contains n distinct members, then there are n! different permutations for the sequence. This is because there are n choices for the first member in the permutation; for each choice of first member there are n-1 choices for the second member, and so on. Sometimes one would like to obtain a random permutation for a sequence, that is, one of the n! possible permutations is selected in such a way that each permutation has equal probability of being selected. A simple Java function for generating a random permutation is as follows. Here, the n values of the sequence are stored in positions 0 through n-1 of array A, function swap(A, i, j) exchanges elements i and j in array A, and random(n) returns an integer value in the range 0 to n-1 (see the Appendix for more information on swap and random).

// Randomly permute the values of array "A"
static void permute(int[] A) {
  for (int i = A.length; i > 0; i--)   // for each i
    swap(A, i-1, DSutil.random(i));    //   swap A[i-1] with a random element
}

Boolean variables: A Boolean variable is a variable (of type boolean in Java) that takes on one of the two values true and false. These two values are often associated with the values 1 and 0, respectively, although there is no reason why this needs to be the case. It is poor programming practice to rely on the correspondence between 0 and false, because these are logically distinct objects of different types.

Floor and ceiling: The floor of x (written ⌊x⌋) takes real value x and returns the greatest integer ≤ x. For example, ⌊3.4⌋ = 3, as does ⌊3.0⌋, while ⌊-3.4⌋ = -4 and ⌊-3.0⌋ = -3. The ceiling of x (written ⌈x⌉) takes real value x and returns the least integer ≥ x. For example, ⌈3.4⌉ = 4, as does ⌈4.0⌉, while ⌈-3.4⌉ = ⌈-3.0⌉ = -3.
Modulus operator: The modulus (or mod) function returns the remainder of an integer division. Sometimes written n mod m in mathematical expressions, the syntax for the Java modulus operator is n % m. From the definition of remainder, n mod m is the integer r such that n = qm + r for q an integer, and |r| < |m|. Therefore, the result of n mod m must be between 0 and m-1 when n and m are positive integers. For example, 5 mod 3 = 2, 25 mod 3 = 1, 5 mod 7 = 5, and 5 mod 5 = 0.

Unfortunately, there is more than one way to assign values to q and r, depending on how integer division is interpreted. The most common mathematical definition computes the mod function as n mod m = n - m⌊n/m⌋. In this case, -3 mod 5 = 2. However, Java and C++ compilers typically use the underlying processor's machine instruction for computing integer arithmetic. On many computers this is done by truncating the resulting fraction, meaning n mod m = n - m(trunc(n/m)). Under this definition, -3 mod 5 = -3.

Unfortunately, for many applications this is not what the user wants or expects. For example, many hash systems will perform some computation on a record's key value and then take the result modulo the hash table size. The expectation here would be that the result is a legal index into the hash table, not a negative number. Implementers of hash functions must either insure that the result of the computation is always positive, or else add the hash table size to the result of the modulo function when that result is negative.
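The difference between the two conventions is easy to demonstrate. The sketch below (my own illustration) contrasts Java's % operator with the mathematical definition; Math.floorMod, a standard library method since Java 8, implements the ⌊n/m⌋-based definition directly:

public class ModDemo {
  public static void main(String[] args) {
    int n = -3, m = 5;
    System.out.println(n % m);                // prints -3 (truncating division)
    System.out.println(Math.floorMod(n, m));  // prints  2 (mathematical definition)

    // The fix suggested above for hash indexing:
    int r = n % m;
    if (r < 0) r += m;                        // add the table size when negative
    System.out.println(r);                    // prints 2
  }
}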
Logarithms

A logarithm of base b for value y is the power to which b is raised to get y. Normally, this is written as log_b y = x. Thus, if log_b y = x then b^x = y, and b^(log_b y) = y. Logarithms are used frequently by programmers. Here are two typical uses.

Example: Many programs require an encoding for a collection of objects. What is the minimum number of bits needed to represent n distinct code values? The answer is ⌈log2 n⌉ bits. For example, if you have 1000 codes to store, you will require at least ⌈log2 1000⌉ = 10 bits to have 1000 different codes (10 bits provide 1024 distinct code values).

Example: Consider the binary search algorithm for finding a given value within an array sorted by value from lowest to highest. Binary search first looks at the middle element and determines if the value being searched for is in the upper half or the lower half of the array. The algorithm then continues splitting the appropriate subarray in half until the desired value is found. (Binary search is described in more detail in a later chapter.) How many times can an array of size n be split in half until only one element remains in the final subarray? The answer is ⌈log2 n⌉ times.

In this book, nearly all logarithms used have a base of two. This is because data structures and algorithms most often divide things in half, or store codes with binary bits. Whenever you see the notation log n in this book, either log2 n is meant or else the term is being used asymptotically and the actual base does not matter. If any base for the logarithm other than two is intended, then the base will be shown explicitly.

Logarithms have the following properties, for any positive values of m, n, and r, and any positive integers a and b:

1. log(nm) = log n + log m.
2. log(n/m) = log n - log m.
3. log(n^r) = r log n.
4. log_a n = log_b n / log_b a.

The first two properties state that the logarithm of two numbers multiplied (or divided) can be found by adding (or subtracting) the logarithms of the two numbers. Property (3) is simply an extension of property (1). Property (4) tells us that, for variable n and any two integer constants a and b, log_a n and log_b n differ by the constant factor log_b a, regardless of the value of n. Most runtime analyses in this book are of a type that ignores constant factors in costs. Property (4) says that such analyses need not be concerned with the base of the logarithm, because this can change the total cost only by a constant factor. Note that 2^(log n) = n.

(These properties are the idea behind the slide rule. Adding two numbers can be viewed as joining two lengths together and measuring their combined length. Multiplication is not so easily done. However, if the numbers are first converted to the lengths of their logarithms, then those lengths can be added and the inverse logarithm of the resulting length gives the answer for the multiplication; this is simply logarithm property (1). A slide rule measures the length of the logarithm for the numbers, lets you slide bars representing these lengths to add up the total length, and finally converts this total length to the correct numeric answer by taking the inverse of the logarithm for the result.)

When discussing logarithms, exponents often lead to confusion. Property (3) tells us that log n^2 = 2 log n. How do we indicate the square of the logarithm (as opposed to the logarithm of n^2)? This could be written as (log n)^2, but it is traditional to use log^2 n. On the other hand, we might want to take the logarithm of the logarithm of n. This is written log log n. A special notation is used in the rare case where we would like to know how many times we must take the log of a number before we reach a value ≤ 1. This quantity is written log* n. For example, log* 1024 = 4 because log 1024 = 10, log 10 ≈ 3.33, log 3.33 ≈ 1.74, and log 1.74 < 1, which is a total of 4 log operations.
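The two example uses of logarithms above are easy to compute in code. The following is a sketch of my own (Java has no built-in base-2 logarithm, so it is derived from property (4)) that computes ⌈log2 n⌉ and the iterated logarithm log* n:

public class LogDemo {
  static double log2(double n) {    // property (4): log2 n = ln n / ln 2
    return Math.log(n) / Math.log(2);
  }

  static int logStar(double n) {    // how many logs until the value is <= 1
    int count = 0;
    while (n > 1) { n = log2(n); count++; }
    return count;
  }

  public static void main(String[] args) {
    System.out.println(Math.ceil(log2(1000)));  // 10.0: bits needed for 1000 codes
    System.out.println(logStar(1024));          // 4, as in the example above
  }
}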
Summations and Recurrences

Most programs contain loop constructs. When analyzing running time costs for programs with loops, we need to add up the costs for each time the loop is executed. This is an example of a summation. Summations are simply the sum of costs for some function applied to a range of parameter values. Summations are typically written with the following "sigma" notation:

Σ_{i=1}^{n} f(i).

This notation indicates that we are summing the value of f(i) over some range of (integer) values. The parameter to the expression and its initial value are indicated below the Σ symbol. Here, the notation i = 1 indicates that the parameter is i and that it begins with the value 1. At the top of the Σ symbol is the expression n. This indicates the maximum value for the parameter i. Thus, this notation means to sum the values of f(i) as i ranges from 1 through n. This can also be written

f(1) + f(2) + ... + f(n-1) + f(n).

Within a sentence, sigma notation is typeset as Σ_{i=1}^{n} f(i).

Given a summation, you often wish to replace it with a direct equation with the same value as the summation. This is known as a closed-form solution, and the process of replacing the summation with its closed-form solution is known as solving the summation. For example, the summation Σ_{i=1}^{n} 1 is simply the expression "1" summed n times (remember that i ranges from 1 to n). Because the sum of n 1s is n, the closed-form solution is n. The following is a list of useful summations, along with their closed-form solutions:

Σ_{i=1}^{n} i = n(n+1)/2.

Σ_{i=1}^{n} i^2 = (2n^3 + 3n^2 + n)/6 = n(2n+1)(n+1)/6.

Σ_{i=1}^{log n} n = n log n.

Σ_{i=0}^{∞} a^i = 1/(1-a) for 0 < a < 1.
Σ_{i=0}^{n} a^i = (a^{n+1} - 1)/(a - 1) for a ≠ 1.

As special cases to the previous equation,

Σ_{i=1}^{n} 1/2^i = 1 - 1/2^n,

and

Σ_{i=0}^{n} 2^i = 2^{n+1} - 1.

As a corollary to this last equation,

Σ_{i=0}^{log n} 2^i = 2^{log n + 1} - 1 = 2n - 1.

Finally,

Σ_{i=1}^{n} i/2^i = 2 - (n+2)/2^n.

The sum of reciprocals from 1 to n, called the harmonic series and written H_n, has a value between log_e n and log_e n + 1. To be more precise, as n grows, the summation grows closer to

H_n ≈ log_e n + γ + 1/(2n),

where γ is Euler's constant and has the value 0.5772...

Most of these equalities can be proved easily by mathematical induction (see the section on mathematical induction later in this chapter). Unfortunately, induction does not help us derive a closed-form solution. It only confirms when a proposed closed-form solution is correct. Techniques for deriving closed-form solutions are discussed later in the book.
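Closed forms like these are easy to sanity-check numerically. Here is a small sketch of my own (for illustration only) that compares two of the summations above against their closed-form solutions:

public class SumCheck {
  public static void main(String[] args) {
    int n = 20;

    long sumI = 0;                  // sum of i, for i = 1..n
    for (int i = 1; i <= n; i++) sumI += i;
    System.out.println(sumI + " = " + n * (n + 1) / 2);   // 210 = 210

    double sumHalves = 0;           // sum of 1/2^i, for i = 1..n
    for (int i = 1; i <= n; i++) sumHalves += 1.0 / Math.pow(2, i);
    System.out.println(sumHalves + " = " + (1 - 1.0 / Math.pow(2, n)));
  }
}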
The running time for a recursive algorithm is most easily expressed by a recursive expression because the total time for the recursive algorithm includes the time to run the recursive call(s). A recurrence relation defines a function by means of an expression that includes one or more (smaller) instances of itself. A classic example is the recursive definition for the factorial function:

n! = (n-1)! * n for n > 1; 1! = 0! = 1.

Another standard example of a recurrence is the Fibonacci sequence:

Fib(n) = Fib(n-1) + Fib(n-2) for n > 2; Fib(1) = Fib(2) = 1.

From this definition we see that the first seven numbers of the Fibonacci sequence are

1, 1, 2, 3, 5, 8, and 13.

Notice that this definition contains two parts: the general definition for Fib(n) and the base cases for Fib(1) and Fib(2). Likewise, the definition for factorial contains a recursive part and base cases.

Recurrence relations are often used to model the cost of recursive functions. For example, the number of multiplications required by the function fact of the next section for an input of size n will be zero when n = 0 or n = 1 (the base cases), and it will be one plus the cost of calling fact on a value of n-1. This can be defined using the following recurrence:

T(n) = T(n-1) + 1 for n > 1; T(0) = T(1) = 0.

As with summations, we typically wish to replace the recurrence relation with a closed-form solution. One approach is to expand the recurrence by replacing any occurrences of T on the right-hand side with its definition.

Example: If we expand the recurrence T(n) = T(n-1) + 1, we get

T(n) = T(n-1) + 1 = (T(n-2) + 1) + 1.

We can expand the recurrence as many steps as we like, but the goal is to detect some pattern that will permit us to rewrite the recurrence in terms of a summation. In this example, we might notice that

(T(n-2) + 1) + 1 = T(n-2) + 2,

and if we expand the recurrence again, we get

T(n) = T(n-2) + 2 = T(n-3) + 1 + 2 = T(n-3) + 3,

which generalizes to the pattern T(n) = T(n-i) + i. We might conclude that

T(n) = T(n-(n-1)) + (n-1) = T(1) + (n-1) = n-1.
Because we have merely guessed at a pattern and not actually proved that this is the correct closed-form solution, we can use an induction proof to complete the process (see the examples in the section on mathematical induction).

Example: A slightly more complicated recurrence is

T(n) = T(n-1) + n; T(1) = 1.

Expanding this recurrence a few steps, we get

T(n) = T(n-1) + n = T(n-2) + (n-1) + n = T(n-3) + (n-2) + (n-1) + n.

We should then observe that this recurrence appears to have a pattern that leads to

T(n) = T(n-(n-1)) + (n-(n-2)) + ... + (n-1) + n = 1 + 2 + ... + (n-1) + n.

This is equivalent to the summation Σ_{i=1}^{n} i, for which we already know the closed-form solution. Techniques to find closed-form solutions for recurrence relations are discussed later in the book. Prior to that, recurrence relations are used infrequently in this book, and the corresponding closed-form solution and an explanation for how it was derived will be supplied at the time of use.

Recursion

An algorithm is recursive if it calls itself to do part of its work. For this approach to be successful, the "call to itself" must be on a smaller problem than the one originally attempted. In general, a recursive algorithm must have two parts: the base case, which handles a simple input that can be solved without resorting to a recursive call, and the recursive part, which contains one or more recursive calls to the algorithm where the parameters are in some sense "closer" to the base case than those of the original call.
Here is a recursive Java function to compute the factorial of n. A trace of fact's execution for a small value of n is presented later in the book.

static long fact(int n) { // Compute n! recursively
  // fact(20) is the largest value that fits in a long
  assert (n >= 0) && (n <= 20) : "n out of range";
  if (n <= 1) return 1;   // Base case: return base solution
  return n * fact(n-1);   // Recursive call for n > 1
}

The first two lines of the function constitute the base cases. If n ≤ 1, then one of the base cases computes a solution for the problem. If n > 1, then fact calls a function that knows how to find the factorial of n-1. Of course, the function that knows how to compute the factorial of n-1 happens to be fact itself. But we should not think too hard about this while writing the algorithm. The design for recursive algorithms can always be approached in this way. First write the base cases. Then think about solving the problem by combining the results of one or more smaller (but similar) subproblems. If the algorithm you write is correct, then certainly you can rely on it (recursively) to solve the smaller subproblems. The secret to success is: Do not worry about how the recursive call solves the subproblem. Simply accept that it will solve it correctly, and use this result to in turn correctly solve the original problem. What could be simpler?

Recursion has no counterpart in everyday problem solving. The concept can be difficult to grasp because it requires you to think about problems in a new way. To use recursion effectively, it is necessary to train yourself to stop analyzing the recursive process beyond the recursive call. The subproblems will take care of themselves. You just worry about the base cases and how to recombine the subproblems.

The recursive version of the factorial function might seem unnecessarily complicated to you because the same effect can be achieved by using a while loop.
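For comparison, here is the while-loop version the text alludes to (a minimal sketch of my own; the recursive version above is the one the book develops):

static long factIterative(int n) { // Compute n! with a simple loop
  assert (n >= 0) && (n <= 20) : "n out of range";
  long result = 1;
  while (n > 1) {   // multiply result by n, n-1, ..., 2
    result *= n;
    n--;
  }
  return result;
}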
Here is another example of recursion, based on a famous puzzle called "Towers of Hanoi." The natural algorithm to solve this problem has multiple recursive calls. It cannot be rewritten easily using while loops.

The Towers of Hanoi puzzle begins with three poles and n rings, where all rings start on the leftmost pole (labeled Pole 1). The rings each have a different size, and are stacked in order of decreasing size with the largest ring at the bottom, as shown in part (a) of the figure below. The problem is to move the rings from the leftmost pole to the rightmost pole (labeled Pole 3) in a series of steps. At each step the top ring on some pole is moved to another pole. There is one limitation on where rings may be moved: A ring can never be moved on top of a smaller ring.

Figure: Towers of Hanoi example. (a) The initial conditions for a problem with six rings. (b) A necessary intermediate step on the road to a solution.

How can you solve this problem? It is easy if you don't think too hard about the details. Instead, consider that all rings are to be moved from Pole 1 to Pole 3. It is not possible to do this without first moving the bottom (largest) ring to Pole 3. To do that, Pole 3 must be empty, and only the bottom ring can be on Pole 1. The remaining n-1 rings must be stacked up in order on Pole 2, as shown in part (b) of the figure. How can you do that? Assume that a function X is available to solve the problem of moving the top n-1 rings from Pole 1 to Pole 2. Then, move the bottom ring from Pole 1 to Pole 3. Finally, again use function X to move the remaining n-1 rings from Pole 2 to Pole 3. In both cases, "function X" is simply the Towers of Hanoi function called on a smaller version of the problem.

The secret to success is relying on the Towers of Hanoi algorithm to do the work for you. You need not be concerned about the gory details of how the Towers of Hanoi subproblem will be solved. That will take care of itself provided that two things are done. First, there must be a base case (what to do if there is only one ring) so that the recursive process will not go on forever. Second, the recursive call to Towers of Hanoi can only be used to solve a smaller problem, and then only one of the proper form (one that meets the original definition for the Towers of Hanoi problem, assuming appropriate renaming of the poles).

Here is a Java implementation for the recursive Towers of Hanoi algorithm. Function move(start, goal) takes the top ring from Pole start and moves it to Pole goal. If the move function were to print the values of its parameters, then the result of calling TOH would be a list of ring-moving instructions that solves the problem.
static void TOH(int n, Pole start, Pole goal, Pole temp) {
  if (n == 0) return;            // Base case
  TOH(n-1, start, temp, goal);   // Recursive call: n-1 rings
  move(start, goal);             // Move bottom disk to goal
  TOH(n-1, temp, goal, start);   // Recursive call: n-1 rings
}

Those who are unfamiliar with recursion might find it hard to accept that it is used primarily as a tool for simplifying the design and description of algorithms. A recursive algorithm usually does not yield the most efficient computer program for solving the problem because recursion involves function calls, which are typically more expensive than other alternatives such as a while loop. However, the recursive approach usually provides an algorithm that is reasonably efficient in the sense discussed earlier (but not always; see the exercises). If necessary, the clear, recursive solution can later be modified to yield a faster implementation, as described later in the book.

Many data structures are naturally recursive, in that they can be defined as being made up of self-similar parts. Tree structures are an example of this. Thus, the algorithms to manipulate such data structures are often presented recursively. Many searching and sorting algorithms are based on a strategy of divide and conquer. That is, a solution is found by breaking the problem into smaller (similar) subproblems, solving the subproblems, then combining the subproblem solutions to form the solution to the original problem. This process is often implemented using recursion. Thus, recursion plays an important role throughout this book, and many more examples of recursive functions will be given.

Mathematical Proof Techniques

Solving any problem has two distinct parts: the investigation and the argument. Students are too used to seeing only the argument in their textbooks and lectures. But to be successful in school (and in life after school), one needs to be good at both, and to understand the differences between these two phases of the process. To solve the problem, you must investigate successfully. That means engaging the problem, and working through until you find a solution. Then, to give the answer to your client (whether that "client" be your instructor when writing answers on a homework assignment or exam, or a written report to your boss), you need to be able to make the argument in a way that gets the solution across clearly and succinctly. The argument phase involves good technical writing skills: the ability to make a clear, logical argument.

Being conversant with standard proof techniques can help you in this process. Knowing how to write a good proof helps in many ways. First, it clarifies your