Shellsort

The next sort we consider is called Shellsort, named after its inventor, D.L. Shell. It is also sometimes called the diminishing increment sort. Unlike Insertion and Selection Sort, there is no real-life intuitive equivalent to Shellsort. Unlike the exchange sorts, Shellsort makes comparisons and swaps between non-adjacent elements. Shellsort also exploits the best-case performance of Insertion Sort. Shellsort's strategy is to make the list "mostly sorted" so that a final Insertion Sort can finish the job. When properly implemented, Shellsort will give substantially better performance than Θ(n²) in the worst case.

Shellsort uses a process that forms the basis for many of the sorts presented in the following sections: break the list into sublists, sort them, then recombine the sublists. Shellsort breaks the array of elements into "virtual" sublists. Each sublist is sorted using an Insertion Sort. Another group of sublists is then chosen and sorted, and so on.

During each iteration, Shellsort breaks the list into disjoint sublists so that each element in a sublist is a fixed number of positions apart. For example, let us assume for convenience that n, the number of values to be sorted, is a power of two. One possible implementation of Shellsort will begin by breaking the list into n/2 sublists of 2 elements each, where the array index of the elements in each sublist differs by n/2. If there are 16 elements in the array, indexed from 0 to 15, there would initially be 8 sublists of 2 elements each. The first sublist would be the elements in positions 0 and 8, the second in positions 1 and 9, and so on. Each list of two elements is sorted using Insertion Sort.

The second pass of Shellsort looks at fewer, bigger lists. For our example the second pass would have n/4 lists of size 4, with the elements in each list being n/4 positions apart. Thus, the second pass would have as its first sublist the elements in positions 0, 4, 8, and 12; the second sublist would have the elements in positions 1, 5, 9, and 13; and so on. Each sublist of four elements would also be sorted using an Insertion Sort.

The third pass would be made on two lists, one consisting of the odd positions and the other consisting of the even positions. The culminating pass in this example would be a "normal" Insertion Sort of all elements. The figure below illustrates the process for an array of 16 values where the sizes of the increments (the distances between elements in a sublist on the successive passes) are 8, 4, 2, and 1. Below is a Java implementation for Shellsort.
Figure: An example of Shellsort. Sixteen items are sorted in four passes. The first pass sorts 8 sublists of size 2 with increment 8. The second pass sorts 4 sublists of size 4 with increment 4. The third pass sorts 2 sublists of size 8 with increment 2. The fourth pass sorts 1 list of size 16 with increment 1 (a regular Insertion Sort).

static <E extends Comparable<? super E>> void sort(E[] A) {
  for (int i = A.length/2; i > 2; i /= 2)   // For each increment
    for (int j = 0; j < i; j++)             // Sort each sublist
      inssort2(A, j, i);
  inssort2(A, 0, 1);                        // Could call regular inssort here
}

// Modified version of Insertion Sort for varying increments
static <E extends Comparable<? super E>> void inssort2(E[] A, int start, int incr) {
  for (int i = start+incr; i < A.length; i += incr)
    for (int j = i; (j >= incr) && (A[j].compareTo(A[j-incr]) < 0); j -= incr)
      DSutil.swap(A, j, j-incr);
}

Shellsort will work correctly regardless of the size of the increments, provided that the final pass has increment 1 (i.e., provided the final pass is a regular Insertion Sort). If Shellsort will always conclude with a regular Insertion Sort, then how can it be any improvement on Insertion Sort? The expectation is that each of the (relatively cheap) sublist sorts will make the list "more sorted" than it was before. It is not necessarily the case that this will be true, but it is almost always true in practice. When the final Insertion Sort is conducted, the list should be "almost sorted," yielding a relatively cheap final Insertion Sort pass.
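For readers who want to experiment, here is a minimal, self-contained sketch of the same idea on a plain int array. The class name, sample values, and the use of in-place swaps instead of the DSutil helper are illustrative assumptions, not taken from the text; the sketch simply halves the increment on each pass and finishes with increment 1.

public class ShellsortDemo {
  static void shellsort(int[] a) {
    for (int gap = a.length / 2; gap > 0; gap /= 2)   // Halve the increment each pass
      for (int start = 0; start < gap; start++)       // One insertion sort per sublist
        for (int i = start + gap; i < a.length; i += gap)
          for (int j = i; j >= gap && a[j] < a[j - gap]; j -= gap) {
            int tmp = a[j]; a[j] = a[j - gap]; a[j - gap] = tmp;  // Swap out-of-order pair
          }
  }
  public static void main(String[] args) {
    int[] a = { 36, 20, 17, 13, 28, 14, 23, 15 };     // Hypothetical sample values
    shellsort(a);
    System.out.println(java.util.Arrays.toString(a)); // Prints the sorted array
  }
}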
Some choices for increments will make Shellsort run more efficiently than others. In particular, the choice of increments described above (powers of two: n/2, n/4, ..., 2, 1) turns out to be relatively inefficient. A better choice is a series based on repeated division by three, in which each increment is roughly one third of the previous one and the final increment is 1. The analysis of Shellsort is difficult, so we must accept without proof that the average-case performance of Shellsort (for "division by three" increments) is O(n^1.5). Other choices for the increment series can reduce this upper bound somewhat. Thus, Shellsort is substantially better than Insertion Sort, or any of the Θ(n²) sorts presented earlier. In fact, Shellsort is competitive with the asymptotically better sorts to be presented whenever n is of medium size. Shellsort illustrates how we can sometimes exploit the special properties of an algorithm (in this case Insertion Sort) even if in general that algorithm is unacceptably slow.

Mergesort

A natural approach to problem solving is divide and conquer. In terms of sorting, we might consider breaking the list to be sorted into pieces, processing the pieces, and then putting them back together somehow. A simple way to do this would be to split the list in half, sort the halves, and then merge the sorted halves together. This is the idea behind Mergesort.

Mergesort is one of the simplest sorting algorithms conceptually, and has good performance both in the asymptotic sense and in empirical running time. Surprisingly, even though it is based on a simple concept, it is relatively difficult to implement in practice. The figure below illustrates Mergesort. A pseudocode sketch of Mergesort is as follows:

List mergesort(List inlist) {
  if (inlist.length() <= 1) return inlist;
  List L1 = half of the items from inlist;
  List L2 = other half of the items from inlist;
  return merge(mergesort(L1), mergesort(L2));
}

Before discussing how to implement Mergesort, we will first examine the merge function. Merging two sorted sublists is quite simple. Function merge examines the first element of each sublist and picks the smaller value as the smallest element overall. This smaller value is removed from its sublist and placed into the output list. Merging continues in this way, comparing the front elements of the sublists and continually appending the smaller to the output list until no more input elements remain.
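As a concrete illustration of the merge step just described, here is a small, self-contained sketch that merges two already-sorted lists into a new one by repeatedly taking the smaller front element. The class and method names are illustrative, not from the text.

import java.util.ArrayList;
import java.util.List;

public class MergeDemo {
  static <E extends Comparable<? super E>> List<E> merge(List<E> left, List<E> right) {
    List<E> out = new ArrayList<>();
    int i = 0, j = 0;
    while (i < left.size() && j < right.size()) {       // Both lists still have elements
      if (left.get(i).compareTo(right.get(j)) <= 0) out.add(left.get(i++));
      else out.add(right.get(j++));
    }
    while (i < left.size()) out.add(left.get(i++));      // Append any leftovers
    while (j < right.size()) out.add(right.get(j++));
    return out;
  }
  public static void main(String[] args) {
    System.out.println(merge(List.of(2, 5, 8), List.of(1, 3, 9)));  // [1, 2, 3, 5, 8, 9]
  }
}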
Figure: An illustration of Mergesort. The first row shows eight numbers that are to be sorted. Mergesort will recursively subdivide the list into sublists of one element each, then recombine the sublists. The second row shows the four sublists of size 2 created by the first merging pass. The third row shows the two sublists of size 4 created by the next merging pass on the sublists of the second row. The last row shows the final sorted list created by merging those two sublists.

Implementing Mergesort presents a number of technical difficulties. The first decision is how to represent the lists. Mergesort lends itself well to sorting a singly linked list because merging does not require random access to the list elements. Thus, Mergesort is the method of choice when the input is in the form of a linked list. Implementing merge for linked lists is straightforward, because we need only remove items from the front of the input lists and append items to the output list. Breaking the input list into two equal halves presents some difficulty. Ideally we would just break the list into front and back halves. However, even if we know the length of the list in advance, it would still be necessary to traverse halfway down the linked list to reach the beginning of the second half. A simpler method, which does not rely on knowing the length of the list in advance, assigns elements of the input list alternately to the two sublists: the first element to the first sublist, the second element to the second sublist, the third to the first sublist, the fourth to the second sublist, and so on. This requires one complete pass through the input list to build the sublists.

When the input to Mergesort is an array, splitting the input into two subarrays is easy if we know the array bounds. Merging is also easy if we merge the subarrays into a second array. Note that this approach requires twice the amount of space as any of the sorting methods presented so far, which is a serious disadvantage for Mergesort. It is possible to merge the subarrays without using a second array, but this is extremely difficult to do efficiently and is not really practical. Merging the two subarrays into a second array, while simple to implement, presents another difficulty: the merge process ends with the sorted list in the auxiliary array. Consider how the recursive nature of Mergesort breaks the original array into subarrays. Mergesort is recursively called until subarrays of size 1 have been created, requiring log n levels of recursion.
These subarrays are merged into subarrays of size 2, which are in turn merged into subarrays of size 4, and so on. We need to avoid having each merge operation require a new array. With some difficulty, an algorithm can be devised that alternates between two arrays. A much simpler approach is to copy the sorted sublists to the auxiliary array first, and then merge them back to the original array. Here is a complete implementation for Mergesort following this approach:

static <E extends Comparable<? super E>> void mergesort(E[] A, E[] temp, int l, int r) {
  int mid = (l+r)/2;                // Select midpoint
  if (l == r) return;               // List has one element
  mergesort(A, temp, l, mid);       // Mergesort first half
  mergesort(A, temp, mid+1, r);     // Mergesort second half
  for (int i = l; i <= r; i++)      // Copy subarray to temp
    temp[i] = A[i];
  // Do the merge operation back to A
  int i1 = l;
  int i2 = mid + 1;
  for (int curr = l; curr <= r; curr++) {
    if (i1 == mid+1)                // Left sublist exhausted
      A[curr] = temp[i2++];
    else if (i2 > r)                // Right sublist exhausted
      A[curr] = temp[i1++];
    else if (temp[i1].compareTo(temp[i2]) < 0)  // Get smaller value
      A[curr] = temp[i1++];
    else
      A[curr] = temp[i2++];
  }
}

An optimized Mergesort implementation is shown next. It reverses the order of the second subarray during the initial copy. Now the current positions of the two subarrays work inwards from the ends, allowing the end of each subarray to act as a sentinel for the other. Unlike the previous implementation, no test is needed to check for when one of the two subarrays becomes empty. This version also uses Insertion Sort to sort small subarrays.
static <E extends Comparable<? super E>> void mergesort(E[] A, E[] temp, int l, int r) {
  int i, j, k, mid = (l+r)/2;       // Select the midpoint
  if (l == r) return;               // List has one element
  // THRESHOLD is the sublist-size cutoff below which Insertion Sort is used;
  // inssort(A, start, n) sorts the n records beginning at position start
  if ((mid-l) > THRESHOLD) mergesort(A, temp, l, mid);
  else inssort(A, l, mid-l+1);
  if ((r-mid) > THRESHOLD) mergesort(A, temp, mid+1, r);
  else inssort(A, mid+1, r-mid);
  // Do the merge operation. First, copy the two halves to temp.
  for (i = l; i <= mid; i++) temp[i] = A[i];
  for (j = 1; j <= r-mid; j++) temp[r-j+1] = A[j+mid];  // Second half copied in reverse
  // Merge sublists back to the array
  for (i = l, j = r, k = l; k <= r; k++)
    if (temp[i].compareTo(temp[j]) < 0) A[k] = temp[i++];
    else A[k] = temp[j--];
}

Analysis of Mergesort is straightforward, despite the fact that it is a recursive algorithm. The merging part takes time Θ(i) where i is the total length of the two subarrays being merged. The array to be sorted is repeatedly split in half until subarrays of size 1 are reached, at which time they are merged to be of size 2, these merged to subarrays of size 4, and so on, as shown in the figure above. Thus, the depth of the recursion is log n for n elements (assume for simplicity that n is a power of two). The first level of recursion can be thought of as working on one array of size n, the next level working on two arrays of size n/2, the next on four arrays of size n/4, and so on. The bottom of the recursion has n arrays of size 1. Thus, n arrays of size 1 are merged (requiring n total steps), then n/2 arrays of size 2 (again requiring n total steps), then n/4 arrays of size 4, and so on. At each of the log n levels of recursion, Θ(n) work is done, for a total cost of Θ(n log n). This cost is unaffected by the relative order of the values being sorted, so this analysis holds for the best, average, and worst cases.
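The same level-by-level argument can be written as a recurrence. This is a standard formulation consistent with the analysis above, not taken verbatim from the text:

    T(n) = 2 T(n/2) + cn,    T(1) = c    ⟹    T(n) ∈ Θ(n log n),

since there are log n levels of recursion and each level contributes cn work.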
Quicksort

While Mergesort uses the most obvious form of divide and conquer (split the list in half, then sort the halves), it is not the only way that we can break down the sorting problem. And we saw that doing the merge step for Mergesort when using an array implementation is not so easy. So perhaps a different divide and conquer strategy might turn out to be more efficient?

Quicksort is aptly named because, when properly implemented, it is the fastest known general-purpose in-memory sorting algorithm in the average case. It does not require the extra array needed by Mergesort, so it is space efficient as well. Quicksort is widely used, and is typically the algorithm implemented in a library sort routine such as the UNIX qsort function. Interestingly, Quicksort is hampered by exceedingly poor worst-case performance, making it inappropriate for certain applications.

Before we get to Quicksort, consider for a moment the practicality of using a Binary Search Tree for sorting. You could insert all of the values to be sorted into the BST one by one, then traverse the completed tree using an inorder traversal. The output would form a sorted list. This approach has a number of drawbacks, including the extra space required by BST pointers and the amount of time required to insert nodes into the tree. However, this method introduces some interesting ideas. First, the root of the BST (i.e., the first node inserted) splits the list into two sublists: the left subtree contains those values in the list less than the root value, while the right subtree contains those values in the list greater than or equal to the root value. Thus, the BST implicitly implements a "divide and conquer" approach to sorting the left and right subtrees. Quicksort implements this concept in a much more efficient way.

Quicksort first selects a value called the pivot. Assume that the input array contains k values less than the pivot. The records are then rearranged in such a way that the k values less than the pivot are placed in the first, or leftmost, k positions in the array, and the values greater than or equal to the pivot are placed in the last, or rightmost, n−k positions. This is called a partition of the array. The values placed in a given partition need not (and typically will not) be sorted with respect to each other. All that is required is that all values end up in the correct partition. The pivot value itself is placed in position k. Quicksort then proceeds to sort the resulting subarrays now on either side of the pivot, one of size k and the other of size n−k−1. How are these values sorted? Because Quicksort is such a good algorithm, using Quicksort on the subarrays would be appropriate.

Unlike some of the sorts that we have seen earlier in this chapter, Quicksort might not seem very "natural" in that it is not an approach that a person is likely to use to sort real objects. But it should not be too surprising that a really efficient sort for huge numbers of abstract objects on a computer would be rather different from our experiences with sorting relatively few physical objects.

The Java code for Quicksort is as follows. Parameters i and j define the left and right indices, respectively, for the subarray being sorted. The initial call to Quicksort would be qsort(array, 0, n-1).
static <E extends Comparable<? super E>> void qsort(E[] A, int i, int j) {  // Quicksort
  int pivotindex = findpivot(A, i, j);  // Pick a pivot
  DSutil.swap(A, pivotindex, j);        // Stick pivot at end
  // k will be the first position in the right subarray
  int k = partition(A, i-1, j, A[j]);
  DSutil.swap(A, k, j);                 // Put pivot in place
  if ((k-i) > 1) qsort(A, i, k-1);      // Sort left partition
  if ((j-k) > 1) qsort(A, k+1, j);      // Sort right partition
}

Function partition will move records to the appropriate partition and then return k, the first position in the right partition. Note that the pivot value is initially placed at the end of the array (position j). Thus, partition must not affect the value of array position j. After partitioning, the pivot value is placed in position k, which is its correct position in the final, sorted array. By doing so, we guarantee that at least one value (the pivot) will not be processed in the recursive calls to qsort. Even if a bad pivot is selected, yielding a completely empty partition to one side of the pivot, the larger partition will contain at most n−1 elements.

Selecting a pivot can be done in many ways. The simplest is to use the first key. However, if the input is sorted or reverse sorted, this will produce a poor partitioning with all values to one side of the pivot. It is better to pick a value at random, thereby reducing the chance of a bad input order affecting the sort. Unfortunately, using a random number generator is relatively expensive, and we can do nearly as well by selecting the middle position in the array. Here is a simple findpivot function:

static <E extends Comparable<? super E>> int findpivot(E[] A, int i, int j)
  { return (i+j)/2; }

We now turn to function partition. If we knew in advance how many keys are less than the pivot, partition could simply copy elements with key values less than the pivot to the low end of the array, and elements with larger keys to the high end. Because we do not know in advance how many keys are less than the pivot, we use a clever algorithm that moves indices inwards from the ends of the subarray, swapping values as necessary until the two indices meet. Here is a Java implementation for the partition step:
static <E extends Comparable<? super E>> int partition(E[] A, int l, int r, E pivot) {
  do {                      // Move bounds inward until they meet
    while (A[++l].compareTo(pivot) < 0);
    while ((r != 0) && (A[--r].compareTo(pivot) > 0));
    DSutil.swap(A, l, r);   // Swap out-of-place values
  } while (l < r);          // Stop when they cross
  DSutil.swap(A, l, r);     // Reverse last, wasted swap
  return l;                 // Return first position in right partition
}

The figure below illustrates partition. Initially, variables l and r are immediately outside the actual bounds of the subarray being partitioned. Each pass through the outer do loop moves the counters l and r inwards, until eventually they meet. Note that at each iteration of the inner while loops, the bounds are moved prior to checking against the pivot value. This ensures that progress is made by each while loop, even when the two values swapped on the last iteration of the do loop were equal to the pivot. Also note the check that r != 0 in the second while loop. This ensures that r does not run off the low end of the partition in the case where the pivot is the least value in that partition. Function partition returns the first index of the right partition so that the subarray bounds for the recursive calls to qsort can be determined. The second figure below illustrates the complete Quicksort algorithm.

To analyze Quicksort, we first analyze the findpivot and partition functions operating on a subarray of length s. Clearly, findpivot takes constant time. Function partition contains a do loop with two nested while loops. The total cost of the partition operation is constrained by how far l and r can move inwards. In particular, these two bounds variables together can move a total of s steps for a subarray of length s. However, this does not directly tell us how much work is done by the nested while loops. The do loop as a whole is guaranteed to move both l and r inward at least one position on each pass. Each while loop moves its variable at least once (except in the special case where r is at the left edge of the array, but this can happen only once). Thus, we see that the do loop can be executed at most s times, the total amount of work done moving l and r is s, and each while loop can fail its test (and thus terminate) only once per do-loop iteration. The total work for the entire partition function is therefore Θ(s).

Knowing the cost of findpivot and partition, we can determine the cost of Quicksort. We begin with a worst-case analysis. The worst case will occur when the pivot does a poor job of breaking the array, that is, when there are no elements in one partition and n−1 elements in the other. In this case, the divide and conquer strategy has done a poor job of dividing, so the conquer phase will work on a subproblem only one less than the size of the original problem.
Figure: The Quicksort partition step. The first row shows the initial positions for a collection of ten key values, with the pivot value swapped to the end of the array. The do loop makes three iterations, each time moving the counters l and r inwards, until they meet in the third pass. In the end, the left partition contains four values and the right partition contains six values. Function qsort will then place the pivot value into position k.

Figure: An illustration of Quicksort, showing the pivot chosen at each level of partitioning and the final sorted array.
If this happens at each partition step, then the total cost of the algorithm will be

    Σ_{k=1}^{n} k  ≈  n²/2.

In the worst case, Quicksort is Θ(n²). This is terrible, no better than Bubble Sort (the worst insult that I can think of for a sorting algorithm). When will this worst case occur? Only when each pivot yields a bad partitioning of the array. If the pivot values are selected at random, then this is extremely unlikely to happen. When selecting the middle position of the current subarray, it is still unlikely to happen. It does not take many good partitionings for Quicksort to work fairly well.

Quicksort's best case occurs when findpivot always breaks the array into two equal halves. Quicksort repeatedly splits the array into smaller partitions, as shown in the Quicksort illustration above. In the best case, the result will be log n levels of partitions, with the top level having one array of size n, the second level two arrays of size n/2, the next four arrays of size n/4, and so on. Thus, at each level, all partition steps for that level do a total of n work, for an overall cost of n log n work when Quicksort finds perfect pivots.

Quicksort's average-case behavior falls somewhere between the extremes of worst and best case. Average-case analysis considers the cost for all possible arrangements of input, summing the costs and dividing by the number of cases. We make one reasonable simplifying assumption: at each partition step, the pivot is equally likely to end up in any position in the (sorted) array. In other words, the pivot is equally likely to break an array into partitions of sizes 0 and n−1, or 1 and n−2, and so on. Given this assumption, the average-case cost is computed from the following equation:

    T(n) = cn + (1/n) Σ_{k=0}^{n-1} [ T(k) + T(n−1−k) ],    T(0) = T(1) = c.

This equation is in the form of a recurrence relation. Recurrence relations were introduced earlier in the book, and this one is solved in a later section. The equation says that there is one chance in n that the pivot breaks the array into subarrays of size 0 and n−1, one chance in n that the pivot breaks the array into subarrays of size 1 and n−2, and so on. The expression T(k) + T(n−1−k) is the cost for the two recursive calls to Quicksort on two arrays of size k and n−1−k.
The initial cn term is the cost of doing the findpivot and partition steps, for some constant c. The closed-form solution to this recurrence relation is Θ(n log n). Thus, Quicksort has average-case cost Θ(n log n).

This is an unusual situation, in that the average-case cost and the worst-case cost have asymptotically different growth rates. Consider what "average case" actually means. We compute an average cost for inputs of size n by summing, over every possible input of size n, the product of the running time cost of that input and the probability that that input will occur. To simplify things, we assumed that every permutation is equally likely to occur. Thus, finding the average means summing up the cost for every permutation and dividing by the number of inputs (n!). We know that some of these n! inputs cost O(n²). But the sum of all the permutation costs has to be (n!)(O(n log n)). Given the extremely high cost of the worst inputs, there must be very few of them. In fact, there cannot be a constant fraction of the inputs with cost O(n²); even, say, 1% of the inputs having cost O(n²) would lead to an average cost of O(n²). Thus, as n grows, the fraction of inputs with high cost must go toward a limit of zero. We can conclude that Quicksort will always have good behavior if we can avoid those very few bad input permutations.

The running time for Quicksort can be improved (by a constant factor), and much study has gone into optimizing this algorithm. The most obvious place for improvement is the findpivot function. Quicksort's worst case arises when the pivot does a poor job of splitting the array into equal-size subarrays. If we are willing to do more work searching for a better pivot, the effects of a bad pivot can be decreased or even eliminated. One good choice is the "median of three" approach, which uses as the pivot the median of three randomly selected values. Using a random number generator to choose the positions is relatively expensive, so a common compromise is to look at the first, middle, and last positions of the current subarray (a sketch of this variant appears below). However, our simple findpivot function that takes the middle value as its pivot has the virtue of making it highly unlikely to get a bad input by chance, and it is quite cheap to implement. This is in sharp contrast to selecting the first or last element as the pivot, which would yield bad performance for many permutations that are nearly sorted or nearly reverse sorted.
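Here is a hedged sketch of such a median-of-three pivot selector. The method name and structure are illustrative, not from the text: it examines the first, middle, and last positions of the subarray A[i..j] and returns the index holding the median of those three values, which could then be used in place of the simple findpivot above.

static <E extends Comparable<? super E>> int findpivotMedian3(E[] A, int i, int j) {
  int mid = (i + j) / 2;
  E a = A[i], b = A[mid], c = A[j];
  if (a.compareTo(b) <= 0) {
    if (b.compareTo(c) <= 0) return mid;        // a <= b <= c: median is b
    return (a.compareTo(c) <= 0) ? j : i;       // a <= c < b, or c < a <= b
  } else {
    if (a.compareTo(c) <= 0) return i;          // b < a <= c: median is a
    return (b.compareTo(c) <= 0) ? j : mid;     // b <= c < a, or c < b < a
  }
}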
A significant improvement can be gained by recognizing that Quicksort is relatively slow when n is small. This might not seem to be relevant if most of the time we sort large arrays, nor should it matter how long Quicksort takes in the rare instance when a small array is sorted, because it will be fast anyway. But you should notice that Quicksort itself sorts many, many small arrays! This happens as a natural by-product of the divide and conquer approach.

A simple improvement might then be to replace Quicksort with a faster sort for small numbers, say Insertion Sort or Selection Sort. However, there is an even better (and still simpler) optimization. When Quicksort partitions get below a certain size, do nothing! The values within such a partition will be out of order. However, we do know that all values in the array to the left of the partition are smaller than all values in the partition, and all values in the array to the right of the partition are greater than all values in the partition. Thus, even if Quicksort only gets the values to "nearly" the right locations, the array will be close to sorted. This is an ideal situation in which to take advantage of the best-case performance of Insertion Sort. The final step is a single call to Insertion Sort to process the entire array, putting the elements into final sorted order. Empirical testing shows that the subarrays should be left unordered whenever they get down to nine or fewer elements.

The last speedup to be considered reduces the cost of making recursive calls. Quicksort is inherently recursive, because each Quicksort operation must sort two sublists. Thus, there is no simple way to turn Quicksort into an iterative algorithm. However, Quicksort can be implemented using a stack to imitate recursion, as the amount of information that must be stored is small: we need not store copies of a subarray, only the subarray bounds. Furthermore, the stack depth can be kept small if care is taken on the order in which Quicksort's recursive calls are executed. We can also place the code for findpivot and partition inline to eliminate the remaining function calls. Note, however, that by not processing sublists of size nine or less as suggested above, about three quarters of the function calls will already have been eliminated. Thus, eliminating the remaining function calls will yield only a modest speedup.

Heapsort

Our discussion of Quicksort began by considering the practicality of using a Binary Search Tree for sorting. The BST requires more space than the other sorting methods and will be slower than Quicksort or Mergesort due to the relative expense of inserting values into the tree. There is also the possibility that the BST might be unbalanced, leading to a Θ(n²) worst-case running time. Subtree balance in the BST is closely related to Quicksort's partition step. Quicksort's pivot serves roughly the same purpose as the BST root value in that the left partition (subtree) stores values less than the pivot (root) value, while the right partition (subtree) stores values greater than or equal to the pivot (root).

A good sorting algorithm can be devised based on a tree structure more suited to the purpose.
In particular, we would like the tree to be balanced, space efficient, and fast. The algorithm should also take advantage of the fact that sorting is a special-purpose application in that all of the values to be stored are available at the start. This means that we do not necessarily need to insert one value at a time into the tree structure.

Heapsort is based on the heap data structure presented earlier in the book. Heapsort has all of the advantages just listed. The complete binary tree is balanced, its array representation is space efficient, and we can load all values into the tree at once, taking advantage of the efficient buildheap function. The asymptotic performance of Heapsort is Θ(n log n) in the best, average, and worst cases. It is not as fast as Quicksort in the average case (by a constant factor), but Heapsort has special properties that will make it particularly useful when sorting data sets too large to fit in main memory, as discussed in a later chapter.

A sorting algorithm based on max-heaps is quite straightforward. First we use the heap-building algorithm to convert the array into max-heap order. Then we repeatedly remove the maximum value from the heap, restoring the heap property each time that we do so, until the heap is empty. Note that each time we remove the maximum element from the heap, it is placed at the end of the array. Assume the n elements are stored in array positions 0 through n−1. After removing the maximum value from the heap and readjusting, the maximum value will be placed in position n−1 of the array, and the heap is now considered to be of size n−1. Removing the new maximum (root) value places the second largest value in position n−2 of the array. At the end of the process, the array will be properly sorted from least to greatest. This is why Heapsort uses a max-heap rather than a min-heap, as might have been expected. The figure below illustrates Heapsort. The complete Java implementation is as follows:

static <E extends Comparable<? super E>> void heapsort(E[] A) {  // Heapsort
  MaxHeap<E> H = new MaxHeap<E>(A, A.length, A.length);
  for (int i = 0; i < A.length; i++)  // Now sort
    H.removemax();                    // removemax places max at end of heap
}

Because building the heap takes Θ(n) time, and because n deletions of the maximum element each take Θ(log n) time, we see that the entire Heapsort operation takes Θ(n log n) time in the worst, average, and best cases. While typically slower than Quicksort by a constant factor, Heapsort has one special advantage over the other sorts studied so far. Building the heap is relatively cheap, requiring Θ(n) time, while removing the maximum element from the heap requires Θ(log n) time. Thus, if we wish to find the k largest elements in an array, we can do so in time Θ(n + k log n). If k is small, this is a substantial improvement over the time required to find the k largest elements using one of the other sorting methods described earlier.
Figure: An illustration of Heapsort. The top row shows the values in their original order. The second row shows the values after building the heap. The third row shows the result of the first remove-maximum operation; note that the largest key value is now at the end of the array. The fourth and fifth rows show the results of the second and third remove-maximum operations. At this point, the last three positions of the array hold the three greatest values in sorted order. Heapsort continues in this manner until the entire array is sorted.
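As a hedged illustration of the "k largest" idea just described, here is a sketch that assumes the same MaxHeap class used in the Heapsort code above (its constructor builds the heap in Θ(n) time, and removemax() removes and returns the current maximum in Θ(log n) time). The method name and the use of java.util.List for the result are illustrative assumptions, not from the text; note that the input array is rearranged as a side effect of heap construction and removal.

static <E extends Comparable<? super E>> java.util.List<E> kLargest(E[] A, int k) {
  MaxHeap<E> H = new MaxHeap<E>(A, A.length, A.length);  // Theta(n) heap construction (assumed)
  java.util.List<E> result = new java.util.ArrayList<>();
  for (int i = 0; i < k && i < A.length; i++)
    result.add(H.removemax());                           // Each removal Theta(log n) (assumed)
  return result;                                         // Total: Theta(n + k log n)
}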
One situation where we are able to take advantage of this concept is in the implementation of Kruskal's minimum-cost spanning tree (MST) algorithm. That algorithm requires that edges be visited in ascending order (so, use a min-heap), but this process stops as soon as the MST is complete. Thus, only a relatively small fraction of the edges need be sorted.

Binsort and Radix Sort

Imagine that for the past year, as you paid your various bills, you simply piled all the paperwork onto the top of a table somewhere. Now the year has ended and it is time to sort all of these papers by what the bill was for (phone, electricity, rent, etc.) and by date. A pretty natural approach is to make some space on the floor and, as you go through the pile of papers, put the phone bills into one pile, the electric bills into another pile, and so on. Once this initial assignment of bills to piles is done (in one pass), you can sort each pile by date relatively quickly because each pile is fairly small. This is the basic idea behind a Binsort.

An earlier section presented the following code fragment to sort a permutation of the numbers 0 through n−1:

for (i = 0; i < n; i++)
  B[A[i]] = A[i];

Here the key value is used to determine the position for a record in the final sorted array. This is the most basic example of a Binsort, where key values are used to assign records to bins. This algorithm is extremely efficient, taking Θ(n) time regardless of the initial ordering of the keys. This is far better than the performance of any sorting algorithm that we have seen so far. The only problem is that this algorithm has limited use because it works only for a permutation of the numbers from 0 to n−1.

We can extend this simple Binsort algorithm to be more useful. Because Binsort must perform direct computation on the key value (as opposed to just asking which of two records comes first, as our previous sorting algorithms did), we will assume that the records use an integer key type. The simplest extension is to allow for duplicate values among the keys. This can be done by turning the array slots into arbitrary-length bins, that is, by turning B into an array of linked lists. In this way, all records with key value i can be placed in bin B[i]. A second extension allows for a key range greater than n. For example, a set of n records might have keys drawn from a range much larger than n.
The only requirement is that each possible key value have a corresponding bin in B. The extended Binsort algorithm is as follows:

static void binsort(Integer A[]) {
  // B is an array of linked-list bins (LList is the list class used earlier in the book)
  List<Integer>[] B = (LList<Integer>[])new LList[MaxKeyValue];
  Integer item;
  for (int i = 0; i < MaxKeyValue; i++)
    B[i] = new LList<Integer>();
  for (int i = 0; i < A.length; i++)
    B[A[i]].append(A[i]);                 // Place each record into its bin
  for (int i = 0; i < MaxKeyValue; i++)   // Output the bins in order
    for (B[i].moveToStart(); (item = B[i].getValue()) != null; B[i].next())
      output(item);
}

This version of Binsort can sort any collection of records whose key values fall in the range from 0 to MaxKeyValue−1. The total work required is simply that needed to place each record into the appropriate bin and then take all of the records out of the bins. Thus, we need to process each record twice, for Θ(n) work.

Unfortunately, there is a crucial oversight in this analysis. Binsort must also look at each of the bins to see if it contains a record. The algorithm must process MaxKeyValue bins, regardless of how many actually hold records. If MaxKeyValue is small compared to n, then this is not a great expense. Suppose instead that MaxKeyValue = n². In this case, the total amount of work done will be Θ(n + n²) = Θ(n²). This results in a poor sorting algorithm, and the algorithm becomes even worse as the disparity between n and MaxKeyValue increases. In addition, a large key range requires an unacceptably large array B. Thus, even the extended Binsort is useful only for a limited key range.

A further generalization of Binsort yields a Bucket Sort. Each bin is associated with not just one key, but rather a range of key values. A Bucket Sort assigns records to bins and then relies on some other sorting technique to sort the records within each bin. The hope is that the relatively inexpensive bucketing process will put only a small number of records in each bin, and that a "cleanup sort" within the bins will then be relatively cheap.

There is a way to keep the number of bins and the related processing small while allowing the cleanup sort to be based on Binsort. Consider a sequence of records with keys in the range 0 to 99. If we have ten bins available, we can first assign records to bins by taking their key value modulo 10. Thus, every key will be assigned to the bin matching its rightmost decimal digit. We can then take these records from the bins in order and reassign them to the bins on the basis of their leftmost (10's place) digit (define values in the range 0 to 9 to have a leftmost digit of 0).
Figure: An example of Radix Sort for twelve two-digit numbers in base ten. Two passes are required to sort the list: the first pass distributes on the rightmost digit and the second pass on the leftmost digit; the figure shows the initial list and the result after each pass.

In other words, assign the ith record from array A to a bin using the formula A[i]/10. If we now gather the values from the bins in order, the result is a sorted list. The figure above illustrates this process.

In this example, we have ten bins and twelve keys in the range 0 to 99. The total computation is Θ(n), because we look at each record and each bin a constant number of times. This is a great improvement over the simple Binsort, where the number of bins must be as large as the key range. Note that the example uses base ten so as to make the bin computations easy to visualize: records were placed into bins based on the value of first the rightmost and then the leftmost decimal digits. Any number of bins would have worked. This is an example of a Radix Sort, so called because the bin computations are based on the radix, or the base, of the key values. This sorting algorithm can be extended to any number of keys in any key range. We simply assign records to bins based on the keys' digit values, working from the rightmost digit to the leftmost. If there are k digits, then this requires that we assign keys to bins k times.
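To make the bin idea concrete, here is a small, self-contained sketch that uses explicit lists of bins for non-negative int keys in base ten; the class name and sample values are illustrative assumptions, not from the text. The array-based implementation discussed next avoids the overhead of building list bins on every pass.

import java.util.ArrayList;
import java.util.List;

public class RadixListDemo {
  static void radixSort(int[] a, int digits) {
    int divisor = 1;
    for (int pass = 0; pass < digits; pass++, divisor *= 10) {
      List<List<Integer>> bins = new ArrayList<>();
      for (int d = 0; d < 10; d++) bins.add(new ArrayList<>());
      for (int v : a) bins.get((v / divisor) % 10).add(v);  // Distribute by current digit
      int i = 0;
      for (List<Integer> bin : bins)                        // Gather bins in order (stable)
        for (int v : bin) a[i++] = v;
    }
  }
  public static void main(String[] args) {
    int[] a = { 27, 91, 1, 97, 17, 23, 84, 28, 72, 5, 67, 25 };  // Hypothetical values
    radixSort(a, 2);
    System.out.println(java.util.Arrays.toString(a));
  }
}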
As with Mergesort, an efficient implementation of Radix Sort is somewhat difficult to achieve. In particular, we would prefer to sort an array of values and avoid processing linked lists. If we know how many values will be in each bin, then an auxiliary array of size n can be used to hold the bins. For example, if during the first pass the 0 bin will receive three records and the 1 bin will receive five records, then we could simply reserve the first three array positions for the 0 bin and the next five array positions for the 1 bin. Exactly this approach is taken by the following Java implementation. At the end of each pass, the records are copied back to the original array.

static void radix(Integer[] A, Integer[] B, int k, int r, int[] count) {
  // count[i] stores the number of records in bin[i]
  int i, j, rtok;
  for (i = 0, rtok = 1; i < k; i++, rtok *= r) {  // For k digits
    for (j = 0; j < r; j++) count[j] = 0;         // Initialize count
    // Count the number of records for each bin on this pass
    for (j = 0; j < A.length; j++)
      count[(A[j]/rtok)%r]++;
    // count[j] will be the index in B for the last slot of bin j
    for (j = 1; j < r; j++)
      count[j] = count[j-1] + count[j];
    // Put records into bins, working from the bottom of each bin.
    // Since bins fill from the bottom, j counts downwards.
    for (j = A.length-1; j >= 0; j--)
      B[--count[(A[j]/rtok)%r]] = A[j];
    for (j = 0; j < A.length; j++)
      A[j] = B[j];                                // Copy B back to A
  }
}

The first inner for loop initializes array count. The second loop counts the number of records to be assigned to each bin. The third loop sets the values in count to their proper indices within array B. Note that the index stored in count[j] is for the last slot of bin j; bins are filled from the bottom. The fourth loop assigns the records to the bins (within array B). The final loop simply copies the records back to array A to be ready for the next pass. Variable rtok stores r^i for use in the bin computation on the ith iteration. The figure below shows how this algorithm processes the example input of the earlier Radix Sort figure.

This algorithm requires k passes over the list of n numbers in base r, with Θ(n + r) work done at each pass. Thus the total work is Θ(nk + rk). What is this in terms of n? Because r is the size of the base, it might be rather small.
Figure: An example showing function radix applied to the input of the earlier Radix Sort figure. The figure shows the initial input array; then, for each of the two passes, the values in array count after the counting step, the index positions stored in count after the summing step, and the contents of the array at the end of the pass.
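A hedged usage sketch of the radix method above, assuming it is in scope, on hypothetical two-digit keys matching the shape of the example in the figure (k = 2 digits, base r = 10):

Integer[] A = { 27, 91, 1, 97, 17, 23, 84, 28, 72, 5, 67, 25 };  // Hypothetical values
Integer[] B = new Integer[A.length];  // Auxiliary array for the bins
int[] count = new int[10];            // One counter per bin
radix(A, B, 2, 10, count);            // After the call, A holds the keys in sorted order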
One could use base 2 or base 10. Base 26 would be appropriate for sorting character strings. For now, we will treat r as a constant value and ignore it for the purpose of determining asymptotic complexity. Variable k is related to the key range: it is the maximum number of digits that a key may have in base r. In some applications we can determine k to be of limited size, and so we might wish to consider it a constant. In that case, Radix Sort is Θ(n) in the best, average, and worst cases, making it the sort with the best asymptotic complexity that we have studied.

Is it a reasonable assumption to treat k as a constant, or is there some relationship between k and n? If the key range is limited and duplicate key values are common, there might be no relationship between k and n. To make this distinction clear, use N to denote the number of distinct key values used by the n records. Thus, N ≤ n. Because it takes a minimum of ⌈log_r N⌉ base-r digits to represent N distinct key values, we know that k ≥ ⌈log_r N⌉.

Now, consider the situation in which no keys are duplicated. If there are n unique keys (N = n), then it requires n distinct code values to represent them. Thus, k ≥ ⌈log_r n⌉. Because it requires at least Ω(log n) digits (within a constant factor) to distinguish between the n distinct keys, k is in Ω(log n). This yields an asymptotic complexity of Ω(n log n) for Radix Sort to process n distinct key values. It is possible that the key range is much larger; ⌈log_r n⌉ digits is merely the best case possible for n distinct values. Thus, the ⌈log_r n⌉ estimate for k could be overly optimistic. The moral of this analysis is that, for the general case of n distinct key values, Radix Sort is at best an Ω(n log n) sorting algorithm.

Radix Sort can be much improved by making the base r as large as possible. Consider the case of an integer key value, and set r = 2^i for some i. In other words, the value of r is related to the number of bits of the key processed on each pass. Each time the number of bits is doubled, the number of passes is cut in half. When processing an integer key value, setting r = 256 allows the key to be processed one byte at a time; processing a 32-bit key then requires only four passes. It is not unreasonable on most computers to use r = 2^16 = 64K, resulting in only two passes for a 32-bit key. Of course, this requires a count array of size 64K, and performance will be good only if the number of records is on the order of 64K or greater. In other words, the number of records must be large compared to the key size for Radix Sort to be efficient. In many sorting applications, Radix Sort can be tuned in this way to give good performance.

Radix Sort depends on the ability to make a fixed number of multiway choices based on a digit value, as well as on random access to the bins. Thus, Radix Sort might be difficult to implement for certain key types. For example, if the keys are real numbers or arbitrary-length strings, then some care will be necessary in the implementation.
In particular, Radix Sort will need to be careful about deciding when the "last digit" has been found, in order to distinguish among real numbers or to find the last character in variable-length strings. Implementing the concept of Radix Sort with the trie data structure (discussed in a later chapter) is most appropriate for these situations.

At this point, the perceptive reader might begin to question our earlier assumption that key comparison takes constant time. If the keys are "normal integer" values stored in, say, an integer variable, what is the size of this variable compared to n? In fact, it is almost certain that 32 (the number of bits in a standard int variable) is greater than log n for any practical computation. In this sense, comparison of two long integers requires Ω(log n) work.

Computers normally do arithmetic in units of a particular size, such as a 32-bit word. Regardless of the size of the variables, comparisons use this native word size and require a constant amount of time. In practice, comparisons of two 32-bit values take constant time, even though 32 is much greater than log n. To some extent, the truth of the proposition that there are constant-time operations (such as integer comparison) is in the eye of the beholder. At the gate level of computer architecture, individual bits are compared. However, constant-time comparison for integers is true in practice on most computers, and we rely on such assumptions as the basis for our analyses. In contrast, Radix Sort must do several arithmetic calculations on key values (each requiring constant time), where the number of such calculations is proportional to the key length. Thus, Radix Sort truly does Ω(n log n) work to process n distinct key values.

An Empirical Comparison of Sorting Algorithms

Which sorting algorithm is fastest? Asymptotic complexity analysis lets us distinguish between Θ(n²) and Θ(n log n) algorithms, but it does not help distinguish between algorithms with the same asymptotic complexity. Nor does asymptotic analysis say anything about which algorithm is best for sorting small lists. For answers to these questions, we can turn to empirical testing.

The table in the figure below shows timing results for actual implementations of the sorting algorithms presented in this chapter. The algorithms compared include Insertion Sort, Bubble Sort, Selection Sort, Shellsort, Quicksort, Mergesort, Heapsort, and Radix Sort. Shellsort is shown both in the basic version and in a version with increments based on division by three. Mergesort is shown both in the basic implementation and in the optimized version with calls to Insertion Sort for lists of length below nine. For Quicksort, two versions are compared: the basic implementation and an optimized version that does not partition sublists below length nine.
Figure: Empirical comparison of the sorting algorithms, run on an Intel Pentium CPU running Linux. The rows are Insertion, Bubble, Selection, Shell, Shell/O, Merge, Merge/O, Quick, Quick/O, Heap, Heap/O, and two Radix Sort variants that differ in the number of bits processed per pass; "/O" marks the optimized versions. The final two columns ("Up" and "Down") show times for inputs already in ascending and descending order. All times shown are in milliseconds.

The first Heapsort version uses the class definitions from the heap presentation earlier in the book. The second version removes all the class definitions and operates directly on the array, using inlined code for all access functions.

In all cases, the values sorted are random 32-bit numbers. The input to each algorithm is a random array of integers. This affects the timing for some of the sorting algorithms. For example, Selection Sort is not being used to best advantage because the record size is small, so it does not get the best possible showing. The Radix Sort implementation certainly takes advantage of this key range in that it does not look at more digits than necessary. On the other hand, it was not optimized to use bit shifting instead of division, even though the bases used would permit this.

The various sorting algorithms are shown for lists of several sizes, ranging from quite small to very large. The final two columns of the figure show the performance of the algorithms when run on inputs where the numbers are in ascending (sorted) and descending (reverse sorted) order, respectively. These columns demonstrate best-case performance for some algorithms and worst-case performance for others. They also show that for some algorithms, the order of input has little effect.

These figures show a number of interesting results. As expected, the Θ(n²) sorts are quite poor performers for large arrays. Insertion Sort is by far the best of this group, unless the array is already reverse sorted. Shellsort is clearly superior to any of these Θ(n²) sorts for lists of even modest length. Optimized Quicksort is clearly the best overall algorithm for all but the smallest lists. Even for small arrays, optimized Quicksort performs well because it does one partition step before calling Insertion Sort.
Compared to the other Θ(n log n) sorts, unoptimized Heapsort is quite slow due to the overhead of the class structure. When all of this is stripped away and the algorithm is implemented to manipulate an array directly, it is still somewhat slower than Mergesort. In general, optimizing the various algorithms makes a noticeable improvement for larger array sizes.

Overall, Radix Sort is a surprisingly poor performer. If the code had been tuned to use bit shifting on the key value, it would likely improve substantially, but this would seriously limit the range of element types that the sort could support.

Lower Bounds for Sorting

This book contains many analyses of algorithms. These analyses generally define the upper and lower bounds for algorithms in their worst and average cases. For most of the algorithms presented so far, analysis is easy. This section considers a more difficult task: an analysis of the cost of a problem as opposed to an algorithm. The upper bound for a problem can be defined as the asymptotic cost of the fastest known algorithm. The lower bound defines the best possible efficiency for any algorithm that solves the problem, including algorithms not yet invented. Once the upper and lower bounds for a problem meet, we know that no future algorithm can possibly be (asymptotically) more efficient.

A simple estimate for a problem's lower bound can be obtained by measuring the size of the input that must be read and the output that must be written. Certainly no algorithm can be more efficient than the problem's I/O time. From this we see that the sorting problem cannot be solved by any algorithm in less than Ω(n) time, because it takes at least n steps to read and write the n values to be sorted. Based on our current knowledge of sorting algorithms and the size of the input, we know that the problem of sorting is bounded by Ω(n) and O(n log n).

Computer scientists have spent much time devising efficient general-purpose sorting algorithms, but no one has ever found one that is faster than O(n log n) in the worst or average cases. Should we keep searching for a faster sorting algorithm, or can we prove that there is no faster sorting algorithm by finding a tighter lower bound?

This section presents one of the most important and most useful proofs in computer science: no sorting algorithm based on key comparisons can possibly be faster than Ω(n log n) in the worst case. This proof is important for three reasons. First, knowing that widely used sorting algorithms are asymptotically optimal is reassuring. In particular, it means that you need not bang your head against the wall searching for an O(n) sorting algorithm (or at least not one in any way based on key comparisons).
Second, this proof is one of the few non-trivial lower-bounds proofs that we have for any problem; that is, it provides one of the relatively few instances where our lower bound is tighter than simply measuring the size of the input and output. As such, it provides a useful model for proving lower bounds on other problems. Finally, knowing a lower bound for sorting gives us a lower bound in turn for other problems whose solution could be used as the basis for a sorting algorithm. The process of deriving asymptotic bounds for one problem from the asymptotic bounds of another is called a reduction, a concept explored further in a later chapter.

Except for Radix Sort and Binsort, all of the sorting algorithms presented in this chapter make decisions based on the direct comparison of two key values. For example, Insertion Sort sequentially compares the value to be inserted against values in the sorted list until a comparison against the next value in the list fails. In contrast, Radix Sort has no direct comparison of key values. All decisions are based on the values of specific digits in the key, so it is possible to take approaches to sorting that do not involve key comparisons. Of course, Radix Sort in the end does not provide a more efficient sorting algorithm than comparison-based sorting. Thus, empirical evidence suggests that comparison-based sorting is a good approach.(*)

The proof that any comparison sort requires Ω(n log n) comparisons in the worst case is structured as follows. First, you will see how comparison decisions can be modeled as the branches in a binary tree. This means that any sorting algorithm based on comparisons can be viewed as a binary tree whose nodes correspond to the results of making comparisons. Next, the minimum number of leaves in the resulting tree is shown to be the factorial of n. Finally, the minimum depth of a tree with n! leaves is shown to be in Ω(n log n).

Before presenting the proof of an Ω(n log n) lower bound for sorting, we first must define the concept of a decision tree. A decision tree is a binary tree that can model the processing for any algorithm that makes decisions. Each (binary) decision is represented by a branch in the tree. For the purpose of modeling sorting algorithms, we count all comparisons of key values as decisions. If two keys are compared and the first is less than the second, then this is modeled as a left branch in the decision tree. In the case where the first value is greater than the second, the algorithm takes the right branch.

The figure below shows the decision tree that models Insertion Sort on three input values. The first input value is labeled X, the second Y, and the third Z. They are initially stored in positions 0, 1, and 2, respectively, of input array A.

(*) The truth is stronger than this statement implies. In reality, Radix Sort relies on comparisons as well and so can be modeled by the technique used in this section. The result is an Ω(n log n) bound in the general case even for algorithms that look like Radix Sort.
Figure: Decision tree for Insertion Sort when processing three values labeled X, Y, and Z, initially stored at positions 0, 1, and 2, respectively, in input array A.

Consider the possible outputs. Initially, we know nothing about the final positions of the three values in the sorted output array. The correct output could be any permutation of the input values. For three values, there are 3! = 6 permutations. Thus, the root node of the decision tree lists all six permutations that might be the eventual result of the algorithm.

The first comparison made by Insertion Sort is between the second item in the input array (Y) and the first item in the array (X). There are two possibilities: either the value of Y is less than that of X, or the value of Y is not less than that of X. This decision is modeled by the first branch in the tree. If Y is less than X, then the left branch should be taken and Y must appear before X in the final output. Only three of the original six permutations have this property, so the left child of the root lists the three permutations where Y appears before X: YXZ, YZX, and ZYX. Likewise, if Y were not less than X, then the right branch would be taken, and only the three permutations in which Y appears after X are possible outcomes: XYZ, XZY, and ZXY. These are listed in the right child of the root.

Let us assume for the moment that Y is less than X, so the left branch is taken. In this case, Insertion Sort swaps the two values. At this point the array stores YXZ. Thus, in the figure, the left child of the root shows YXZ above the line. Next, the third value in the array is compared against the second (i.e., Z is compared with X).
Again, there are two possibilities. If Z is less than X, then these items should be swapped (the left branch). If Z is not less than X, then Insertion Sort is complete (the right branch). Note that the right branch reaches a leaf node, and that this leaf node contains only the permutation YXZ. This means that only permutation YXZ can be the outcome based on the results of the decisions taken to reach this node. In other words, Insertion Sort has "found" the single permutation of the original input that yields a sorted list. Likewise, if the second decision resulted in taking the left branch, a third comparison, regardless of the outcome, yields nodes in the decision tree containing only single permutations. Again, Insertion Sort has "found" the correct permutation that yields a sorted list.

Any sorting algorithm based on comparisons can be modeled by a decision tree in this way, regardless of the size of the input. Thus, all sorting algorithms can be viewed as algorithms to "find" the correct permutation of the input that yields a sorted list. Each algorithm based on comparisons can be viewed as proceeding by making branches in the tree based on the results of key comparisons, and each algorithm can terminate once a node with a single permutation has been reached.

How is the worst-case cost of an algorithm expressed by the decision tree? The decision tree shows the decisions made by an algorithm for all possible inputs of a given size. Each path through the tree from the root to a leaf is one possible series of decisions taken by the algorithm. The depth of the deepest node represents the longest series of decisions required by the algorithm to reach an answer.

There are many comparison-based sorting algorithms, and each will be modeled by a different decision tree. Some decision trees might be well-balanced, others might be unbalanced. Some trees will have more nodes than others (those with more nodes might be making "unnecessary" comparisons). In fact, a poor sorting algorithm might have an arbitrarily large number of nodes in its decision tree, with leaves of arbitrary depth. There is no limit to how slow the "worst" possible sorting algorithm could be. However, we are interested here in knowing what minimum cost the best sorting algorithm could have in the worst case. In other words, we would like to know the smallest possible depth for the deepest node in the tree for any sorting algorithm.

The smallest depth of the deepest node will depend on the number of nodes in the tree. Clearly we would like to "push up" the nodes in the tree, but there is limited room at the top: a tree of height 1 can store only one node (the root), a tree of height 2 can store three nodes, a tree of height 3 can store seven nodes, and so on. Here are some important facts worth remembering:
20,327 | A binary tree of height n can store at most 2^n - 1 nodes.
Equivalently, a tree with n nodes requires at least ⌈log(n + 1)⌉ levels.

What is the minimum number of nodes that must be in the decision tree for any comparison-based sorting algorithm for n values? Because sorting algorithms are in the business of determining which unique permutation of the input corresponds to the sorted list, all sorting algorithms must contain at least one leaf node for each possible permutation. There are n! permutations for a set of n numbers. Because there are at least n! nodes in the tree, we know that the tree must have Ω(log n!) levels. From Stirling's approximation, we know that log n! is in Ω(n log n). The decision tree for any comparison-based sorting algorithm must have nodes Ω(n log n) levels deep. Thus, in the worst case, any such sorting algorithm must require Ω(n log n) comparisons.

Any sorting algorithm requiring Ω(n log n) comparisons in the worst case requires Ω(n log n) running time in the worst case. Because any sorting algorithm requires Ω(n log n) running time, the problem of sorting also requires Ω(n log n) time. We already know of sorting algorithms with O(n log n) running time, so we can conclude that the problem of sorting requires Θ(n log n) time. As a corollary, we know that no comparison-based sorting algorithm can improve on existing Θ(n log n) time sorting algorithms by more than a constant factor.

Further Reading

The definitive reference on sorting is Donald E. Knuth's Sorting and Searching [Knu]. A wealth of details is covered there, including optimal sorts for small sizes and special-purpose sorting networks. It is a thorough (although somewhat dated) treatment of sorting. For an analysis of Quicksort and a thorough survey of its optimizations, see Robert Sedgewick's Quicksort [Sed]. Sedgewick's Algorithms [Sed] discusses most of the sorting algorithms described here and pays special attention to efficient implementation. The optimized Mergesort version given earlier in this chapter comes from Sedgewick.

While Ω(n log n) is the theoretical lower bound in the worst case for sorting, many times the input is sufficiently well ordered that certain algorithms can take advantage of this fact to speed the sorting process. A simple example is Insertion Sort's best-case running time. Sorting algorithms whose running time is based on the amount of disorder in the input are called adaptive. For more information on adaptive sorting algorithms, see "A Survey of Adaptive Sorting Algorithms" by Estivill-Castro and Wood [ECW].
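Before moving on to the exercises, it may help to make the Stirling step in the lower-bound argument above concrete. The following is a supplementary sketch, using only an elementary bound rather than the full form of Stirling's approximation, of why log n! must grow at least as fast as n log n:

\log n! \;=\; \sum_{i=1}^{n} \log i
        \;\ge\; \sum_{i=\lceil n/2 \rceil}^{n} \log i
        \;\ge\; \frac{n}{2} \log \frac{n}{2}
        \;=\; \frac{n}{2} \log n - \frac{n}{2}
        \;\in\; \Omega(n \log n).

Combined with the easy upper bound log n! ≤ n log n, this shows that log n! is in Θ(n log n), which is all that the argument above requires.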
20,328 | chap internal sorting exercises using inductionprove that insertion sort will always produce sorted array write an insertion sort algorithm for integer key values howeverhere' the catchthe input is stack (not an array)and the only variables that your algorithm may use are fixed number of integers and fixed number of stacks the algorithm should return stack containing the records in sorted order (with the least value being at the top of the stackyour algorithm should be th( in the worst case the bubble sort implementation has the following inner for loopfor (int = - >ij--consider the effect of replacing this with the following statementfor (int = - > --would the new implementation work correctlywould the change affect the asymptotic complexity of the algorithmhow would the change affect the running time of the algorithm when implementing insertion sorta binary search could be used to locate the position within the first elements of the array into which element should be inserted how would this affect the number of comparisons requiredhow would using such binary search affect the asymptotic running time for insertion sort figure shows the best-case number of swaps for selection sort as th(nthis is because the algorithm does not check to see if the ith record is already in the ith positionthat isit might perform unnecessary swaps (amodify the algorithm so that it does not make unnecessary swaps (bwhat is your prediction regarding whether this modification actually improves the running time(cwrite two programs to compare the actual running times of the original selection sort and the modified algorithm which one is actually faster recall that sorting algorithm is said to be stable if the original ordering for duplicate keys is preserved of the sorting algorithms insertion sortbubble sortselection sortshellsortquicksortmergesortheapsortbinsortand radix sortwhich of these are stableand which are notfor each onedescribe either why it is or is not stable if minor change to the implementation would make it stabledescribe the change |
20,329 | recall that sorting algorithm is said to be stable if the original ordering for duplicate keys is preserved we can make any algorithm stable if we alter the input keys so that (potentiallyduplicate key values are made unique in way that the first occurrence of the original duplicate value is less than the second occurrencewhich in turn is less than the thirdand so on in the worst caseit is possible that all input records have the same key value give an algorithm to modify the key values such that every modified key value is uniquethe resulting key values give the same sort order as the original keysthe result is stable (in that the duplicate original key values remain in their original order)and the process of altering the keys is done in linear time using only constant amount of additional space the discussion of quicksort in section described using stack instead of recursion to reduce the number of function calls made (ahow deep can the stack get in the worst case(bquicksort makes two recursive calls the algorithm could be changed to make these two calls in specific order in what order should the two calls be madeand how does this affect how deep the stack can become give permutation for the values through that will cause quicksort (as implemented in section to have its worst case behavior assume is an arraylength(lreturns the number of records in the arrayand qsort(lijsorts the records of from to (leaving the records sorted in lusing the quicksort algorithm what is the averagecase time complexity for each of the following code fragments(afor ( = < lengthi++qsort( )(bfor ( = < lengthi++qsort( length- ) modify quicksort to find the smallest values in an array of records your output should be the array modified so that the smallest values are sorted in the first positions of the array your algorithm should do the minimum amount of work necessary modify quicksort to sort sequence of variable-length strings stored one after the other in character arraywith second array (storing pointers to stringsused to index the strings your function should modify the index array so that the first pointer points to the beginning of the lowest valued stringand so on |
20,330 | chap internal sorting graph (nn log nf (nn and (nn in the range < < to visually compare their growth rates typicallythe constant factor in the running-time expression for an implementation of insertion sort will be less than the constant factors for shellsort or quicksort how many times greater can the constant factor be for shellsort to be faster than insertion sort when how many times greater can the constant factor be for quicksort to be faster than insertion sort when imagine that there exists an algorithm splitk that can split list of elements into sublistseach containing one or more elementssuch that sublist contains only elements whose values are less than all elements in sublist for < if kthen sublists are emptyand the rest are of length assume that splitk has time complexity (length of lfurthermoreassume that the lists can be concatenated again in constant time consider the following algorithmlist sortk(list llist sub[ ]/to hold the sublists if ( length( splitk(lsub)/splitk places sublists into sub for ( = <ki++sub[isortk(sub[ ])/sort each sublist concatenation of sublists in subreturn (awhat is the worst-case asymptotic running time for sortkwhy(bwhat is the average-case asymptotic running time of sortkwhy here is variation on sorting the problem is to sort collection of nuts and bolts by size it is assumed that for each bolt in the collectionthere is corresponding nut of the same sizebut initially we do not know which nut goes with which bolt the differences in size between two nuts or two bolts can be too small to see by eyeso you cannot rely on comparing the sizes of two nuts or two bolts directly insteadyou can only compare the sizes of nut and bolt by attempting to screw one into the other (assume this comparison to be constant time operationthis operation tells you that either the nut is bigger than the boltthe bolt is bigger than the nutor they are the same size what is the minimum number of comparisons needed to sort the nuts and bolts in the worst case |
20,331 | (adevise an algorithm to sort three numbers it should make as few comparisons as possible how many comparisons and swaps are required in the bestworstand average cases(bdevise an algorithm to sort five numbers it should make as few comparisons as possible how many comparisons and swaps are required in the bestworstand average cases(cdevise an algorithm to sort eight numbers it should make as few comparisons as possible how many comparisons and swaps are required in the bestworstand average cases devise an efficient algorithm to sort set of numbers with values in the range to , there are no duplicates keep memory requirements to minimum which of the following operations are best implemented by first sorting the list of numbersfor each operationbriefly describe an algorithm to implement itand state the algorithm' asymptotic complexity (afind the minimum value (bfind the maximum value (ccompute the arithmetic mean (dfind the median ( the middle value(efind the mode ( the value that appears the most times consider recursive mergesort implementation that calls insertion sort on sublists smaller than some threshold if there are calls to mergesorthow many calls will there be to insertion sortwhy implement mergesort for the case where the input is linked list counting sort (assuming the input key values are integers in the range to works by counting the number of records with each key value in the first passand then uses this information to place the records in order in second pass write an implementation of counting sort (see the implementation of radix sort for some ideaswhat can we say about the relative values of and for this to be effectiveif nwhat is the running time of this algorithm use an argument similar to that given in section to prove that log is worst-case lower bound for the problem of searching for given value in sorted array containing elements projects one possible improvement for bubble sort would be to add flag variable and test that determines if an exchange was made during the current iteration if no exchange was madethen the list is sorted and so the algorithm |
20,332 | chap internal sorting can stop early this makes the best case performance become ( (because if the list is already sortedthen no iterations will take place on the first passand the sort will stop right theremodify the bubble sort implementation to add this flag and test compare the modified implementation on range of inputs to determine if it does or does not improve performance in practice double insertion sort is variation on insertion sort that works from the middle of the array out at each iterationsome middle portion of the array is sorted on the next iterationtake the two adjacent elements to the sorted portion of the array if they are out of order with respect to each otherthan swap them nowpush the left element toward the right in the array so long as it is greater than the element to its right and push the right element toward the left in the array so long as it is less than the element to its left the algorithm begins by processing the middle two elements of the array if the array is even if the array is oddthen skip processing the middle item and begin with processing the elements to its immediate left and right firstexplain what the cost of double insertion sort will be in comparison to standard insertion sortand why (note that the two elements being processed in the current iterationonce initially swapped to be sorted with with respect to each othercannot cross as they are pushed into sorted position thenimplement double insertion sortbeing careful to properly handle both when the array is odd and when it is even compare its running time in practice against standard insertion sort finallyexplain how this speedup might affect the threshold level and running time for quicksort implementation starting with the java code for quicksort given in this write series of quicksort implementations to test the following optimizations on wide range of input data sizes try these optimizations in various combinations to try and develop the fastest possible quicksort implementation that you can (alook at more values when selecting pivot (bdo not make recursive call to qsort when the list size falls below given thresholdand use insertion sort to complete the sorting process test various values for the threshold size (celiminate recursion by using stack and inline functions write your own collection of sorting programs to implement the algorithms described in this and compare their running times be sure to implement optimized versionstrying to make each program as fast as possible do you get the same relative timings as shown in figure if notwhy do you think this happenedhow do your results compare with those of your |
20,333 | classmateswhat does this say about the difficulty of doing empirical timing studies perform study of shellsortusing different increments compare the version shown in section where each increment is half the previous onewith others in particulartry implementing "division by where the increments on list of length will be / / etc do other increment schemes work as well the implementation for mergesort given in section takes an array as input and sorts that array at the beginning of section there is simple pseudocode implementation for sorting linked list using mergesort implement both linked list-based version of mergesort and the array-based version of mergesortand compare their running times radix sort is typically implemented to support only radix that is power of two this allows for direct conversion from the radix to some number of bits in an integer key value for exampleif the radix is then -bit key will be processed in steps of bits each this can lead to more efficient implementation because bit shifting can replace the division operations shown in the implementation of section re-implement the radix sort code given in section to use bit shifting in place of division compare the running time of the old and new radix sort implementations it has been proposed that heapsort can be optimized by altering the heap' siftdown function call the value being sifted down siftdown does two comparisons per levelfirst the children of are comparedthen the winner is compared to if is too smallit is swapped with its larger child and the process repeated the proposed optimization dispenses with the test against insteadthe larger child automatically replaces xuntil reaches the bottom level of the heap at this pointx might be too large to remain in that position this is corrected by repeatedly comparing with its parent and swapping as necessary to "bubbleit up to its proper level the claim is that this process will save number of comparisons because most nodes when sifted down end up near the bottom of the tree anyway implement both versions of siftdownand do an empirical study to compare their running times |
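The last project above describes its two siftdown variants entirely in prose. As a concrete starting point, here is a minimal sketch of both, written for a bare array-based max-heap of int values rather than for any particular heap class; the method names siftdown and siftdownBubbleUp are chosen purely for illustration, and the empirical comparison the project asks for is left to the reader.

static void siftdown(int[] heap, int pos, int n) {
  // Standard version: two comparisons per level (the two children against
  // each other, then the larger child against the value being sifted down).
  while (2 * pos + 1 < n) {
    int child = 2 * pos + 1;                 // left child
    if (child + 1 < n && heap[child + 1] > heap[child])
      child++;                               // child is now the larger child
    if (heap[pos] >= heap[child]) return;    // value already in place
    int tmp = heap[pos]; heap[pos] = heap[child]; heap[child] = tmp;
    pos = child;                             // continue one level down
  }
}

static void siftdownBubbleUp(int[] heap, int pos, int n) {
  // Proposed variant: always promote the larger child on the way down (one
  // comparison per level), then bubble the saved value back up if needed.
  int x = heap[pos];
  int hole = pos;
  while (2 * hole + 1 < n) {                 // push the hole to the bottom
    int child = 2 * hole + 1;
    if (child + 1 < n && heap[child + 1] > heap[child]) child++;
    heap[hole] = heap[child];
    hole = child;
  }
  heap[hole] = x;                            // drop x in at the bottom level
  while (hole > pos) {                       // bubble x up to its proper level
    int parent = (hole - 1) / 2;
    if (heap[parent] >= heap[hole]) break;   // parent is larger: x is placed
    int tmp = heap[parent]; heap[parent] = heap[hole]; heap[hole] = tmp;
    hole = parent;
  }
}

The claimed savings come from the downward phase doing only one comparison per level; whether the extra upward comparisons pay for themselves is exactly what the timing study should reveal.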
20,334 | file processing and external sorting earlier presented basic data structures and algorithms that operate on data stored in main memory some applications require that large amounts of information be stored and processed -so much information that it cannot all fit into main memory in that casethe information must reside on disk and be brought into main memory selectively for processing you probably already realize that main memory access is much faster than access to data stored on disk or other storage devices the relative difference in access times is so great that efficient disk-based programs require different approach to algorithm design than most programmers are used to as resultmany programmers do poor job when it comes to file processing applications this presents the fundamental issues relating to the design of algorithms and data structures for disk-based applications we begin with description of the significant differences between primary memory and secondary storage section discusses the physical aspects of disk drives section presents basic methods for managing buffer pools section discusses the java model for random access to data stored on disk section discusses the basic principles for sorting collections of records too large to fit in main memory computer technology changes rapidly provide examples of disk drive specifications and other hardware performance numbers that are reasonably up to date as of the time when the book was written when you read itthe numbers might seem out of date howeverthe basic principles do not change the approximate ratios for timespaceand cost between memory and disk have remained surprisingly steady for over years |
20,335 | Figure: Price comparison table for some writeable electronic data storage media in common use. Prices are in US dollars per megabyte; the media compared are RAM, disk, flash drive, floppy disk, and tape.

Primary versus Secondary Storage

Computer storage devices are typically classified into primary or main memory and secondary or peripheral storage. Primary memory usually refers to Random Access Memory (RAM), while secondary storage refers to devices such as hard disk drives, removable "flash" drives, floppy disks, CDs, DVDs, and tape drives. Primary memory also includes registers, cache, and video memories, but we will ignore them for this discussion because their existence does not affect the principal differences between primary and secondary memory.

Along with a faster CPU, every new model of computer seems to come with more main memory. As memory size continues to increase, is it possible that relatively slow disk storage will be unnecessary? Probably not, because the desire to store and process larger files grows at least as fast as main memory size. Prices for both main memory and peripheral storage devices have dropped dramatically in recent years, as demonstrated by the figure. However, the cost for disk drive storage per megabyte is about two orders of magnitude less than RAM, and has been for many years.

There is now a wide range of removable media available for transferring data or storing data offline in relative safety. These include floppy disks (now largely obsolete), writable CDs and DVDs, "flash" drives, and magnetic tape. Optical storage such as CDs and DVDs costs roughly half the price of hard disk drive space per megabyte, and has become practical for use as backup storage within the past few years. Tape used to be much cheaper than other media, and was the preferred means of backup. "Flash" drives cost the most per megabyte, but due to their storage capacity and flexibility, they have now replaced floppy disks as the primary storage device for transferring data between computers when a direct network transfer is not available.

Secondary storage devices have at least two other advantages over RAM memory. Perhaps most importantly, disk and tape files are persistent, meaning that
20,336 | they are not erased from disk and tape when the power is turned off in contrastram used for main memory is usually volatile -all information is lost with the power second advantage is that floppy diskscd-romsand "flashdrives can easily be transferred between computers this provides convenient way to take information from one computer to another in exchange for reduced storage costspersistenceand portabilitysecondary storage devices pay penalty in terms of increased access time while not all accesses to disk take the same amount of time (more on this later)the typical time required to access byte of storage from disk drive in is around ms ( thousandths of secondthis might not seem slowbut compared to the time required to access byte from main memorythis is fantastically slow typical access time from standard personal computer ram in is about - nanoseconds ( - billionths of secondthusthe time to access byte of data from disk drive is about six orders of magnitude greater than that required to access byte from main memory while disk drive and ram access times are both decreasingthey have done so at roughly the same rate the relative speeds have remained the same for over twenty-five yearsin that the difference in access time between ram and disk drive has remained in the range between factor of , and , , to gain some intuition for the significance of this speed differenceconsider the time that it might take for you to look up the entry for disk drives in the index of this bookand then turn to the appropriate page call this your "primary memoryaccess time if it takes you about seconds to perform this accessthen an access taking , times longer would require months it is interesting to note that while processing speeds have increased dramaticallyand hardware prices have dropped dramaticallydisk and memory access times have improved by less than an order of magnitude over the past ten years howeverthe situation is really much better than that modest speedup would suggest during the same time periodthe size of both disk and main memory has increased by about three orders of magnitude thusthe access times have actually decreased in the face of massive increase in the density of these storage devices due to the relatively slow access time for data on disk as compared to main memorygreat care is required to create efficient applications that process diskbased information the million-to-one ratio of disk access time versus main memory access time makes the following rule of paramount importance when designing disk-based applicationsminimize the number of disk accesses |
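To put rough numbers on this rule, assume (as round figures chosen only for illustration, consistent with the six-orders-of-magnitude gap described above) that one random disk access costs about 10 ms and one main-memory access costs about 10 ns. A program making a million unbuffered random disk accesses then spends roughly

10^{6} \times 10\,\mathrm{ms} = 10^{4}\,\mathrm{s} \approx 2.8\ \mathrm{hours}

waiting on the disk, while the same million accesses satisfied from main memory would take about 10^{6} \times 10\,\mathrm{ns} = 10\,\mathrm{ms} in total. Cutting the number of disk accesses, even at the price of substantial extra computation, therefore dominates almost any other optimization.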
20,337 | chap file processing and external sorting there are generally two approaches to minimizing disk accesses the first is to arrange information so that if you do access data from secondary memoryyou will get what you need in as few accesses as possibleand preferably on the first access file structure is the term used for data structure that organizes data stored in secondary memory file structures should be organized so as to minimize the required number of disk accesses the other way to minimize disk accesses is to arrange information so that each disk access retrieves additional data that can be used to minimize the need for future accessesthat isto guess accurately what information will be needed later and retrieve it from disk nowif this can be done cheaply as you shall seethere is little or no difference in the time required to read several hundred contiguous bytes from disk as compared to reading one byteso this strategy is indeed practical one way to minimize disk accesses is to compress the information stored on disk section discusses the space/time tradeoff in which space requirements can be reduced if you are willing to sacrifice time howeverthe disk-based space/time tradeoff principle stated that the smaller you can make your disk storage requirementsthe faster your program will run this is because the time to read information from disk is enormous compared to computation timeso almost any amount of additional computation to unpack the data is going to be less than the disk read time saved by reducing the storage requirements this is precisely what happens when files are compressed cpu time is required to uncompress informationbut this time is likely to be much less than the time saved by reducing the number of bytes read from disk current file compression programs are not designed to allow random access to parts of compressed fileso the disk-based space/time tradeoff principle cannot easily be taken advantage of in normal processing using commercial disk compression utilities howeverin the future disk drive controllers might automatically compress and decompress files stored on diskthus taking advantage of the disk-based space/time tradeoff principle to save both space and time many cartridge tape drives (which must process data sequentiallyautomatically compress and decompress information during / disk drives java programmer views random access file stored on disk as contiguous series of byteswith those bytes possibly combining to form data records this is called the logical file the physical file actually stored on disk is usually not contiguous series of bytes it could well be in pieces spread all over the disk the file managera part of the operating systemis responsible for taking requests for data from logical file and mapping those requests to the physical location |
20,338 | of the data on disk likewisewhen writing to particular logical byte position with respect to the beginning of the filethis position must be converted by the file manager into the corresponding physical location on the disk to gain some appreciation for the the approximate time costs for these operationsyou need to understand the physical structure and basic workings of disk drive disk drives are often referred to as direct access storage devices this means that it takes roughly equal time to access any record in the file this is in contrast to sequential access storage devices such as tape driveswhich require the tape reader to process data from the beginning of the tape until the desired position has been reached as you will seethe disk drive is only approximately direct accessat any given timesome records are more quickly accessible than others disk drive architecture hard disk drive is composed of one or more round plattersstacked one on top of another and attached to central spindle platters spin continuously at constant rate each usable surface of each platter is assigned read/write head or / head through which data are read or writtensomewhat like the arrangement of phonograph player' arm "readingsound from phonograph record unlike phonograph needlethe disk read/write head does not actually touch the surface of hard disk insteadit remains slightly above the surfaceand any contact during normal operation would damage the disk this distance is very smallmuch smaller than the height of dust particle it can be likened to -kilometer airplane trip across the united stateswith the plane flying at height of one metera hard disk drive typically has several platters and several read/write headsas shown in figure (aeach head is attached to an armwhich connects to the boom the boom moves all of the heads in or out together when the heads are in some position over the plattersthere are data on each platter directly accessible to each head the data on single platter that are accessible to any one position of the head for that platter are collectively called trackthat isall data on platter that are fixed distance from the spindleas shown in figure (bthe collection of all tracks that are fixed distance from the spindle is called cylinder thusa cylinder is all of the data that can be read when the arms are in particular position each track is subdivided into sectors between each sector there are intersector gaps in which no data are stored these gaps allow the read head to recognize the end of sector note that each sector contains the same amount of data because the outer tracks have greater lengththey contain fewer bits per inch than do the inner tracks thusabout half of the potential storage space is wastedbecause only the innermost tracks are stored at the highest possible data density this |
20,339 | chap file processing and external sorting boom (armplatters read/write heads spindle (atrack (bfigure (aa typical disk drive arranged as stack of platters (bone track on disk drive platter intersector gaps bits of data sectors ( (bfigure the organization of disk platter dots indicate density of information (anominal arrangement of tracks showing decreasing data density when moving outward from the center of the disk (ba "zonedarrangement with the sector size and density periodically reset in tracks further away from the center arrangement is illustrated by figure disk drives today actually group tracks into "zonessuch that the tracks in the innermost zone adjust their data density going out to maintain the same radial data densitythen the tracks of the next zone reset the data density to make better use of their storage abilityand so on this arrangement is shown in figure in contrast to the physical layout of hard diska cd-rom consists of single spiral track bits of information along the track are equally spacedso the information density is the same at both the outer and inner portions of the track to keep |
20,340 | the information flow at constant rate along the spiralthe drive must speed up the rate of disk spin as the / head moves toward the center of the disk this makes for more complicated and slower mechanism three separate steps take place when reading particular byte or series of bytes of data from hard disk firstthe / head moves so that it is positioned over the track containing the data this movement is called seek secondthe sector containing the data rotates to come under the head when in use the disk is always spinning at the time of this writingtypical disk spin rates are rotations per minute (rpmthe time spent waiting for the desired sector to come under the / head is called rotational delay or rotational latency the third step is the actual transfer ( reading or writingof data it takes relatively little time to read information once the first byte is positioned under the / headsimply the amount of time required for it all to move under the head in factdisk drives are designed not to read one byte of databut rather to read an entire sector of data at each request thusa sector is the minimum amount of data that can be read or written at one time in generalit is desirable to keep all sectors for file together on as few tracks as possible this desire stems from two assumptions seek time is slow (it is typically the most expensive part of an / operation)and if one sector of the file is readthe next sector will probably soon be read assumption ( is called locality of referencea concept that comes up frequently in computer applications contiguous sectors are often grouped to form cluster cluster is the smallest unit of allocation for fileso all files are multiple of the cluster size the cluster size is determined by the operating system the file manager keeps track of which clusters make up each file in microsoft windows systemsthere is designated portion of the disk called the file allocation tablewhich stores information about which sectors belong to which file in contrastunix does not use clusters the smallest unit of file allocation and the smallest unit that can be read/written is sectorwhich in unix terminology is called block unix maintains information about file organization in certain disk blocks called -nodes group of physically contiguous clusters from the same file is called an extent ideallyall clusters making up file will be contiguous on the disk ( the file will consist of one extent)so as to minimize seek time required to access different portions of the file if the disk is nearly full when file is createdthere might not be an extent available that is large enough to hold the new file furthermoreif file |
20,341 | chap file processing and external sorting growsthere might not be free space physically adjacent thusa file might consist of several extents widely spaced on the disk the fuller the diskand the more that files on the disk changethe worse this file fragmentation (and the resulting seek timebecomes file fragmentation leads to noticeable degradation in performance as additional seeks are required to access data another type of problem arises when the file' logical record size does not match the sector size if the sector size is not multiple of the record size (or vice versa)records will not fit evenly within sector for examplea sector might be bytes longand logical record bytes this leaves room to store records with bytes left over either the extra space is wastedor else records are allowed to cross sector boundaries if record crosses sector boundarytwo disk accesses might be required to read it if the space is left empty insteadsuch wasted space is called internal fragmentation second example of internal fragmentation occurs at cluster boundaries files whose size is not an even multiple of the cluster size must waste some space at the end of the last cluster the worst case will occur when file size modulo cluster size is one (for examplea file of bytes and cluster of bytesthuscluster size is tradeoff between large files processed sequentially (where large cluster size is desirable to minimize seeksand small files (where small clusters are desirable to minimize wasted storageevery disk drive organization requires that some disk space be used to organize the sectorsclustersand so forth the layout of sectors within track is illustrated by figure typical information that must be stored on the disk itself includes the file allocation tablesector headers that contain address marks and information about the condition (whether usable or notfor each sectorand gaps between sectors the sector header also contains error detection codes to help verify that the data have not been corrupted this is why most disk drives have "nominalsize that is greater than the actual amount of user data that can be stored on the drive the difference is the amount of space required to organize the information on the disk additional space will be lost due to fragmentation disk access costs the primary cost when accessing information on disk is normally the seek time this assumes of course that seek is necessary when reading file in sequential order (if the sectors comprising the file are contiguous on disk)little seeking is necessary howeverwhen accessing random disk sectorseek time becomes the dominant cost for the data access while the actual seek time is highly variabledepending on the distance between the track where the / head currently is and |
20,342 | sec disk drives intersector gap sector header sector data sector header sector data intrasector gap figure an illustration of sector gaps within track each sector begins with sector header containing the sector address and an error detection code for the contents of that sector the sector header is followed by small intra-sector gapfollowed in turn by the sector data each sector is separated from the next sector by larger inter-sector gap the track where the head is moving towe will consider only two numbers one is the track-to-track costor the minimum time necessary to move from track to an adjacent track this is appropriate when you want to analyze access times for files that are well placed on the disk the second number is the average seek time for random access these two numbers are often provided by disk manufacturers typical example is the gb western digital wd caviar se serial ata drive the manufacturer' specifications indicate that the track-to-track time is ms and the average seek time is ms for many yearstypical rotation speed for disk drives was rpmor one rotation every ms most disk drives today have rotation speed of rpmor ms per rotation when reading sector at randomyou can expect that the disk will need to rotate halfway around to bring the desired sector under the / heador ms for -rpm disk drive once under the / heada sector of data can be transferred as fast as that sector rotates under the head if an entire track is to be readthen it will require one rotation ( ms at rpmto move the full track under the head if only part of the track is to be readthen proportionately less time will be required for exampleif there are sectors on the track and one sector is to be readthis will require trivial amount of time ( / of rotationexample assume that an older disk drive has total (nominalcapacity of gb spread among plattersyielding gb/platter each platter contains , tracks and each track contains (after formatting sectors of bytes/sector track-to-track seek time is ms and average seek time for random access is ms assume the operating system maintains cluster size of sectors per cluster ( kb)yielding clusters per track the disk rotation rate is rpm ( ms per rotationbased on this information we can estimate the cost for various file processing operations |
20,343 | chap file processing and external sorting how much time is required to read the trackon averageit will require half rotation to bring the first sector of the track under the / headand then one complete rotation to read the track how long will it take to read file of mb divided into sectorsized ( byterecordsthis file will be stored in clustersbecause each cluster holds sectors the answer to the question depends in large measure on how the file is stored on the diskthat iswhether it is all together or broken into multiple extents we will calculate both cases to see how much difference this makes if the file is stored so as to fill all of the sectors of eight adjacent tracksthen the cost to read the first sector will be the time to seek to the first track (assuming this requires random seek)then wait for the initial rotational delayand then the time to read this requires ms at this pointbecause we assume that the next seven tracks require only track-to-track seek because they are adjacenteach requires ms the total time required is therefore ms ms ms if the file' clusters are spread randomly across the diskthen we must perform seek for each clusterfollowed by the time for rotational delay once the first sector of the cluster comes under the / headvery little time is needed to read the cluster because only / of the track needs to rotate under the headfor total time of about ms for latency and read time thusthe total time required is about ( ms or close to seconds this is much longer than the time required when the file is all together on diskthis example illustrates why it is important to keep disk files from becoming fragmentedand why so-called "disk defragmenterscan speed up file processing time file fragmentation happens most commonly when the disk is nearly full and the file manager must search for free space whenever file is created or changed |
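To make the style of estimate used in the example above easy to reproduce, here is a small back-of-the-envelope calculator for the two layouts discussed (contiguous tracks versus randomly scattered clusters). All drive parameters below are hypothetical placeholder values rather than the specifications of any particular drive, and the seek + rotational delay + transfer model is a simplified version of the accounting sketched above; the class and method names are chosen here for illustration only.

// Rough estimates of file read time under a simple disk model:
// seek time + rotational delay + transfer time. All numbers are
// illustrative assumptions, not real drive specifications.
public class DiskAccessEstimate {
  static final double TRACK_TO_TRACK_MS   = 1.0;   // adjacent-track seek (assumed)
  static final double AVG_SEEK_MS         = 9.0;   // random seek (assumed)
  static final double ROTATION_MS         = 8.33;  // one rotation at 7200 rpm
  static final int    SECTORS_PER_TRACK   = 256;   // assumed geometry
  static final int    SECTORS_PER_CLUSTER = 8;

  // Reading one full track: a seek, then on average half a rotation of
  // delay, then one full rotation to transfer the track's data.
  static double readOneTrackMs(boolean randomSeek) {
    double seek = randomSeek ? AVG_SEEK_MS : TRACK_TO_TRACK_MS;
    return seek + 0.5 * ROTATION_MS + ROTATION_MS;
  }

  // A contiguous file filling "tracks" adjacent tracks: one random seek to
  // reach the first track, then only track-to-track seeks for the rest.
  static double contiguousFileMs(int tracks) {
    return readOneTrackMs(true) + (tracks - 1) * readOneTrackMs(false);
  }

  // The same amount of data scattered as randomly placed clusters: each
  // cluster costs a random seek, half a rotation of delay, and the fraction
  // of a rotation needed to pass one cluster's sectors under the head.
  static double scatteredFileMs(int clusters) {
    double perCluster = AVG_SEEK_MS + 0.5 * ROTATION_MS
        + ROTATION_MS * SECTORS_PER_CLUSTER / SECTORS_PER_TRACK;
    return clusters * perCluster;
  }

  public static void main(String[] args) {
    int tracks = 8;                                   // hypothetical file size
    int clusters = tracks * SECTORS_PER_TRACK / SECTORS_PER_CLUSTER;
    System.out.printf("Contiguous on %d tracks: %.1f ms%n",
                      tracks, contiguousFileMs(tracks));
    System.out.printf("Scattered in %d clusters: %.1f ms%n",
                      clusters, scatteredFileMs(clusters));
  }
}

Plugging in real specifications from a drive's data sheet shows the same qualitative result as the example: the scattered layout costs one full random seek plus rotational delay per cluster, which quickly swamps the contiguous case.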
20,344 | buffers and buffer pools given the specifications of the disk drive from example we find that it takes about + ms to read one track of data on average it takes about ++( / ) ms on average to read single sector of data this is good savings (slightly over half the time)but less than of the data on the track are read if we want to read only single byteit would save us effectively no time over that required to read an entire sector for this reasonnearly all disk drives automatically read or write an entire sector' worth of information whenever the disk is accessedeven when only one byte of information is requested once sector is readits information is stored in main memory this is known as buffering or caching the information if the next disk request is to that same sectorthen it is not necessary to read from disk again because the information is already stored in main memory buffering is an example of one method for minimizing disk accesses mentioned at the beginning of the bring off additional information from disk to satisfy future requests if information from files were accessed at randomthen the chance that two consecutive disk requests are to the same sector would be low howeverin practice most disk requests are close to the location (in the logical file at leastof the previous request this means that the probability of the next request "hitting the cacheis much higher than chance would indicate this principle explains one reason why average access times for new disk drives are lower than in the past not only is the hardware fasterbut information is also now stored using better algorithms and larger caches that minimize the number of times information needs to be fetched from disk this same concept is also used to store parts of programs in faster memory within the cpuusing the cpu cache that is prevalent in modern microprocessors sector-level buffering is normally provided by the operating system and is often built directly into the disk drive controller hardware most operating systems maintain at least two buffersone for input and one for output consider what would happen if there were only one buffer during byte-by-byte copy operation the sector containing the first byte would be read into the / buffer the output operation would need to destroy the contents of the single / buffer to write this byte then the buffer would need to be filled again from disk for the second byteonly to be destroyed during output the simple solution to this problem is to keep one buffer for inputand second for output most disk drive controllers operate independently from the cpu once an / request is received this is useful because the cpu can typically execute millions of instructions during the time required for single / operation technique that |
20,345 | chap file processing and external sorting takes maximum advantage of this micro-parallelism is double buffering imagine that file is being processed sequentially while the first sector is being readthe cpu cannot process that information and so must wait or find something else to do in the meantime once the first sector is readthe cpu can start processing while the disk drive (in parallelbegins reading the second sector if the time required for the cpu to process sector is approximately the same as the time required by the disk controller to read sectorit might be possible to keep the cpu continuously fed with data from the file the same concept can also be applied to outputwriting one sector to disk while the cpu is writing to second output buffer in memory thusin computers that support double bufferingit pays to have at least two input buffers and two output buffers available caching information in memory is such good idea that it is usually extended to multiple buffers the operating system or an application program might store many buffers of information taken from some backing storage such as disk file this process of using buffers as an intermediary between user and disk file is called buffering the file the information stored in buffer is often called pageand the collection of buffers is called buffer pool the goal of the buffer pool is to increase the amount of information stored in memory in hopes of increasing the likelihood that new information requests can be satisfied from the buffer pool rather than requiring new information to be read from disk as long as there is an unused buffer available in the buffer poolnew information can be read in from disk on demand when an application continues to read new information from diskeventually all of the buffers in the buffer pool will become full once this happenssome decision must be made about what information in the buffer pool will be sacrificed to make room for newly requested information when replacing information contained in the buffer poolthe goal is to select buffer that has "unnecessaryinformationthat isthat information least likely to be requested again because the buffer pool cannot know for certain what the pattern of future requests will look likea decision based on some heuristicor best guessmust be used there are several approaches to making this decision one heuristic is "first-infirst-out(fifothis scheme simply orders the buffers in queue the buffer at the front of the queue is used next to store new information and then placed at the end of the queue in this waythe buffer to be replaced is the one that has held its information the longestin hopes that this information is no longer needed this is reasonable assumption when processing moves along the file at some steady pace in roughly sequential order howevermany programs work with certain key pieces of information over and over againand the importance of information has little to do with how long ago the informa |
20,346 | tion was first accessed typically it is more important to know how many times the information has been accessedor how recently the information was last accessed another approach is called "least frequently used(lfulfu tracks the number of accesses to each buffer in the buffer pool when buffer must be reusedthe buffer that has been accessed the fewest number of times is considered to contain the "least importantinformationand so it is used next lfuwhile it seems intuitively reasonablehas many drawbacks firstit is necessary to store and update access counts for each buffer secondwhat was referenced many times in the past might now be irrelevant thussome time mechanism where counts "expireis often desirable this also avoids the problem of buffers that slowly build up big counts because they get used just often enough to avoid being replaced an alternative is to maintain counts for all sectors ever readnot just the sectors currently in the buffer pool the third approach is called "least recently used(lrulru simply keeps the buffers in list whenever information in buffer is accessedthis buffer is brought to the front of the list when new information must be readthe buffer at the back of the list (the one least recently usedis taken and its "oldinformation is either discarded or written to diskas appropriate this is an easily implemented approximation to lfu and is often the method of choice for managing buffer pools unless special knowledge about information access patterns for an application suggests special-purpose buffer management scheme the main purpose of buffer pool is to minimize disk / when the contents of block are modifiedwe could write the updated information to disk immediately but what if the block is changed againif we write the block' contents after every changethat might be lot of disk write operations that can be avoided it is more efficient to wait until either the file is to be closedor the buffer containing that block is flushed from the buffer pool when buffer' contents are to be replaced in the buffer poolwe only want to write the contents to disk if it is necessary that would be necessary only if the contents have changed since the block was read in originally from the file the way to insure that the block is written when necessarybut only when necessaryis to maintain boolean variable with the buffer (often referred to as the dirty bitthat is turned on when the buffer' contents are modified by the client at the time when the block is flushed from the buffer poolit is written to disk if and only if the dirty bit has been turned on modern operating systems support virtual memory virtual memory is technique that allows the programmer to pretend that there is more of the faster main memory (such as ramthan actually exists this is done by means of buffer pool |
20,347 | chap file processing and external sorting reading blocks stored on slowersecondary memory (such as on the disk drivethe disk stores the complete contents of the virtual memory blocks are read into main memory as demanded by memory accesses naturallyprograms using virtual memory techniques are slower than programs whose data are stored completely in main memory the advantage is reduced programmer effort because good virtual memory system provides the appearance of larger main memory without modifying the program example consider virtual memory whose size is ten sectorsand which has buffer pool of five buffers associated with it we will use lru replacement scheme the following series of memory requests occurs after the first five requeststhe buffer pool will store the sectors in the order because sector is already at the frontthe next request can be answered without reading new data from disk or reordering the buffers the request to sector requires emptying the contents of the least recently used bufferwhich contains sector the request to sector brings the buffer holding sector ' contents back to the front processing the remaining requests results in the buffer pool as shown in figure example figure illustrates buffer pool of five blocks mediating virtual memory of ten blocks at any given momentup to five sectors of information can be in main memory assume that sectors and are currently in the buffer poolstored in this orderand that we use the lru buffer replacement strategy if request for sector is then receivedthen one sector currently in the buffer pool must be replaced because the buffer containing sector is the least recently used bufferits contents will be copied back to disk at sector the contents of sector are then copied into this bufferand it is moved to the front of the buffer pool (leaving the buffer containing sector as the new least-recently used bufferif the next memory request were to sector no data would need to be read from disk insteadthe buffer containing sector would be moved to the front of the buffer pool when implementing buffer poolsthere are two basic approaches that can be taken regarding the transfer of information between the user of the buffer pool and |
20,348 | sec buffers and buffer pools secondary storage (on diskmain memory (in ram figure an illustration of virtual memory the complete collection of information resides in the slowersecondary storage (on diskthose sectors recently accessed are held in the fast main memory (in ramin this examplecopies of sectors and from secondary storage are currently stored in the main memory if memory access to sector is receivedone of the sectors currently in main memory must be replaced the buffer pool class itself the first approach is to pass "messagesbetween the two this approach is illustrated by the following abstract class/*adt for buffer pools using the message-passing style *public interface bufferpooladt /*copy "szbytes from "spaceto position "posin the buffered storage *public void insert(byte[spaceint szint pos)/*copy "szbytes from position "posof the buffered storage to "space*public void getbytes(byte[spaceint szint pos)this simple class provides an interface with two member functionsinsert and getbytes the information is passed between the buffer pool user and the buffer pool through the space parameter this is storage spaceprovided by the bufferpool client and at least sz bytes longwhich the buffer pool can take information from (the insert functionor put information into (the getbytes functionparameter pos indicates where the information will be placed in the |
20,349 | chap file processing and external sorting buffer pool' logical storage space physicallyit will actually be copied to the appropriate byte position in some buffer in the buffer pool example assume each sector of the disk file (and thus each block in the buffer poolstores bytes assume that the buffer pool is in the state shown in figure if the next request is to copy bytes beginning at position of the filethese bytes should be placed into sector (whose bytes go from position to position because sector is currently in the buffer poolwe simply copy the bytes contained in space to byte positions - the buffer containing sector is then moved to the buffer pool ahead of the buffer containing sector the alternative approach is to have the buffer pool provide to the user direct pointer to buffer that contains the necessary information such an interface might look as follows/*adt for buffer pools using the buffer-passing style *public interface bufferpooladt /*return pointer to the requested block *public byte[getblock(int block)/*set the dirty bit for the buffer holding "block*public void dirtyblock(int block)/tell the size of buffer public int blocksize()}in this approachthe user of the buffer pool is made aware that the storage space is divided into blocks of given sizewhere each block is the size of buffer the user requests specific blocks from the buffer poolwith pointer to the buffer holding the requested block being returned to the user the user might then read from or write to this space if the user writes to the spacethe buffer pool must be informed of this fact the reason is thatwhen given block is to be removed from the buffer poolthe contents of that block must be written to the backing storage if it has been modified if the block has not been modifiedthen it is unnecessary to write it out example we wish to write bytes beginning at logical position in the file assume that the buffer pool is in the state shown in figure using the second adtthe client would need to know that blocks (buffersare of size and therefore would request access to sector |
20,350 | pointer to the buffer containing sector would be returned by the call to getblock the client would then copy bytes to positions - of the bufferand call dirtyblock to warn the buffer pool that the contents of this block have been modified further problem with the second adt is the risk of stale pointers when the buffer pool user is given pointer to some buffer space at time that pointer does indeed refer to the desired data at that time as further requests are made to the buffer poolit is possible that the data in any given buffer will be removed and replaced with new data if the buffer pool user at later time then refers to the data referred to by the pointer given at time it is possible that the data are no longer valid because the buffer contents have been replaced in the meantime thus the pointer into the buffer pool' memory has become "stale to guarantee that pointer is not staleit should not be used if intervening requests to the buffer pool have taken place we can solve this problem by introducing the concept of user (or possibly multiple usersgaining access to bufferand then releasing the buffer when done we will add method acquirebuffer and releasebuffer for this purpose method acquirebuffer takes block id as input and returns pointer to the buffer that will be used to store this block the buffer pool will keep count of the number of requests currently active for this block method releasebuffer will reduce the count of active users for the associated block buffers associated with active blocks will not be eligible for flushing from the buffer pool this will lead to problem if the client neglects to release active blocks when they are no longer needed there would also be problem if there were more total active blocks than buffers in the buffer pool howeverthe buffer pool should always be initialized to include more buffers than will ever be active at one time an additional problem with both adts presented so far comes when the user intends to completely overwrite the contents of blockand does not need to read in the old contents already on disk howeverthe buffer pool cannot in general know whether the user wishes to use the old contents or not this is especially true with the message-passing approach where given message might overwrite only part of the block in this casethe block will be read into memory even when not neededand then its contents will be overwritten this inefficiency can be avoided (at least in the buffer-passing versionby separating the assignment of blocks to buffers from actually reading in data for the block in particularthe following revised buffer-passing adt does not actually read data in the acquirebuffer method users who wish to see the old con |
20,351 | chap file processing and external sorting tents must then issue readblock request to read the data from disk into the bufferand then getdatapointer request to gain direct access to the buffer' data contents /*improved adt for buffer pools using the buffer-passing style most user functionality is in the buffer classnot the buffer pool itself */* single buffer in the buffer pool *public interface bufferadt /*read the associated block from disk (if necessaryand return pointer to the data *public byte[readblock()/*return pointer to the buffer' data array (without reading from disk*public byte[getdatapointer()/*flag the buffer' contents as having changedso that flushing the block will write it back to disk *public void markdirty()/*release the block' access to this buffer further accesses to this buffer are illegal *public void releasebuffer()/*the bufferpool *public interface bufferpooladt /*relate block to bufferreturning pointer to buffer object *buffer acquirebuffer(int block)clearlythe buffer-passing approach places more obligations on the user of the buffer pool these obligations include knowing the size of blocknot corrupting the buffer pool' storage spaceand informing the buffer pool both when block has been modified and when it is no longer needed so many obligations make this approach prone to error an advantage is that there is no need to do an extra copy step when getting information from the user to the buffer if the size of the records stored is smallthis is not an important consideration if the size of the records is large (especially if the record size and the buffer size are the sameas typically is the case when implementing -treessee section )then this efficiency issue might become important note however that the in-memory copy time will always be far less than the time required to write the contents of buffer to disk for applications |
20,352 | where disk / is the bottleneck for the programeven the time to copy lots of information between the buffer pool user and the buffer might be inconsequential another advantage to buffer passing is the reduction in unnecessary read operations for data that will be overwritten anyway you should note that the implementations for class bufferpool above does not use generics insteadthe space parameter and the buffer pointer are declared to be byte[when class uses genericthat means that the record type is arbitrarybut that the class knows what the record type is in contrastusing byte[for the space means that not only is the record type arbitrarybut also the buffer pool does not even know what the user' record type is in facta given buffer pool might have many users who store many types of records in buffer poolthe user decides where given record will be stored but has no control over the precise mechanism by which data are transferred to the backing storage this is in contrast to the memory manager described in section in which the user passes record to the manager and has no control at all over where the record is stored the programmer' view of files the java programmer' logical view of random access file is single stream of bytes interaction with file can be viewed as communications channel for issuing one of three instructionsread bytes from the current position in the filewrite bytes to the current position in the fileand move the current position within the file you do not normally see how the bytes are stored in sectorsclustersand so forth the mapping from logical to physical addresses is done by the file systemand sector-level buffering is done automatically by the disk controller when processing records in disk filethe order of access can have great effect on / time random access procedure processes records in an order independent of their logical order within the file sequential access processes records in order of their logical appearance within the file sequential processing requires less seek time if the physical layout of the disk file matches its logical layoutas would be expected if the file were created on disk with high percentage of free space java provides several mechanisms for manipulating binary files one of the most commonly used is the randomaccessfile class the following class member functions are most commonly used to manipulate information in random access disk files |
20,353 | chap file processing and external sorting randomaccessfile(string namestring mode)class constructoropens disk file for processing read(byte[ )read some bytes from the current position in the file the current position moves forward as the bytes are read write(byte[ )write some bytes at the current position in the file (overwriting the bytes already at that positionthe current position moves forward as the bytes are written seek(long pos)move the current position in the file to pos this allows bytes at arbitrary places within the file to be read or written close()close file at the end of processing external sorting we now consider the problem of sorting collections of records too large to fit in main memory because the records must reside in peripheral or external memorysuch sorting methods are called external sorts this is in contrast to the internal sorts discussed in which assume that the records to be sorted are stored in main memory sorting large collections of records is central to many applicationssuch as processing payrolls and other large business databases as consequencemany external sorting algorithms have been devised years agosorting algorithm designers sought to optimize the use of specific hardware configurationssuch as multiple tape or disk drives most computing today is done on personal computers and low-end workstations with relatively powerful cpusbut only one or at most two disk drives the techniques presented here are geared toward optimized processing on single disk drive this approach allows us to cover the most important issues in external sorting while skipping many less important machine-dependent details readers who have need to implement efficient external sorting algorithms that take advantage of more sophisticated hardware configurations should consult the references in section when collection of records is too large to fit in main memorythe only practical way to sort it is to read some records from diskdo some rearrangingthen write them back to disk this process is repeated until the file is sortedwith each record read perhaps many times given the high cost of disk /oit should come as no surprise that the primary goal of an external sorting algorithm is to minimize the amount of information that must be read from or written to disk certain amount of additional cpu processing can profitably be traded for reduced disk access before discussing external sorting techniquesconsider again the basic model for accessing information from disk the file to be sorted is viewed by the programmer as sequential series of fixed-size blocks assume (for simplicitythat each |
20,354 | block contains the same number of fixed-size data records depending on the applicationa record might be only few bytes -composed of little or nothing more than the key -or might be hundreds of bytes with relatively small key field records are assumed not to cross block boundaries these assumptions can be relaxed for special-purpose sorting applicationsbut ignoring such complications makes the principles clearer as explained in section sector is the basic unit of / in other wordsall disk reads and writes are for one or more complete sectors sector sizes are typically power of twoin the range to bytesdepending on the operating system and the size and speed of the disk drive the block size used for external sorting algorithms should be equal to or multiple of the sector size under this modela sorting algorithm reads block of data into buffer in main memoryperforms some processing on itand at some future time writes it back to disk from section we see that reading or writing block from disk takes on the order of one million times longer than memory access based on this factwe can reasonably expect that the records contained in single block can be sorted by an internal sorting algorithm such as quicksort in less time than is required to read or write the block under good conditionsreading from file in sequential order is more efficient than reading blocks in random order given the significant impact of seek time on disk accessit might seem obvious that sequential processing is faster howeverit is important to understand precisely under what circumstances sequential file processing is actually faster than random accessbecause it affects our approach to designing an external sorting algorithm efficient sequential access relies on seek time being kept to minimum the first requirement is that the blocks making up file are in fact stored on disk in sequential order and close togetherpreferably filling small number of contiguous tracks at the very leastthe number of extents making up the file should be small users typically do not have much control over the layout of their file on diskbut writing file all at once in sequential order to disk drive with high percentage of free space increases the likelihood of such an arrangement the second requirement is that the disk drive' / head remain positioned over the file throughout sequential processing this will not happen if there is competition of any kind for the / head for exampleon multi-user time-shared computer the sorting process might compete for the / head with the processes of other users even when the sorting process has sole control of the / headit is still likely that sequential processing will not be efficient imagine the situation where all processing is done on single disk drivewith the typical arrangement |
20,355 | chap file processing and external sorting of single bank of read/write heads that move together over stack of platters if the sorting process involves reading from an input filealternated with writing to an output filethen the / head will continuously seek between the input file and the output file similarlyif two input files are being processed simultaneously (such as during merge process)then the / head will continuously seek between these two files the moral is thatwith single disk drivethere often is no such thing as efficient sequential processing of data file thusa sorting algorithm might be more efficient if it performs smaller number of non-sequential disk operations rather than larger number of logically sequential disk operations that require large number of seeks in practice as mentioned previouslythe record size might be quite large compared to the size of the key for examplepayroll entries for large business might each store hundreds of bytes of information including the nameidaddressand job title for each employee the sort key might be the id numberrequiring only few bytes the simplest sorting algorithm might be to process such records as wholereading the entire record whenever it is processed howeverthis will greatly increase the amount of / requiredbecause only relatively few records will fit into single disk block another alternative is to do key sort under this methodthe keys are all read and stored together in an index filewhere each key is stored along with pointer indicating the position of the corresponding record in the original data file the key and pointer combination should be substantially smaller than the size of the original recordthusthe index file will be much smaller than the complete data file the index file will then be sortedrequiring much less / because the index records are smaller than the complete records once the index file is sortedit is possible to reorder the records in the original database file this is typically not done for two reasons firstreading the records in sorted order from the record file requires random access for each record this can take substantial amount of time and is only of value if the complete collection of records needs to be viewed or processed in sorted order (as opposed to search for selected recordsseconddatabase systems typically allow searches to be done on multiple keys for exampletoday' processing might be done in order of id numbers tomorrowthe boss might want information sorted by salary thusthere might be no single "sortedorder for the full record insteadmultiple index files are often maintainedone for each sort key these ideas are explored further in |
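The key sort just described is easy to make concrete. The sketch below is a minimal in-memory illustration and not the text's implementation: the KeyPointer class, the use of an integer key, and the byte offsets are assumptions made purely for the example. Only the small key/position entries are sorted; the large records in the data file are never moved, and a record would be fetched afterwards by seeking to its stored offset.

import java.util.Arrays;

/** A minimal key sort sketch: sort a small index of (key, file position)
    pairs rather than the full records. Names and layout are hypothetical. */
class KeySortSketch {
  /** One index entry: the sort key plus the record's byte offset in the data file. */
  static class KeyPointer {
    int key;        // assume a small fixed-size key
    long filePos;   // where the full record lives in the data file
    KeyPointer(int k, long pos) { key = k; filePos = pos; }
  }

  /** Sort the index entries by key; the records themselves stay on disk, untouched. */
  static void keySort(KeyPointer[] index) {
    Arrays.sort(index, (a, b) -> Integer.compare(a.key, b.key));
  }

  public static void main(String[] args) {
    // A toy index standing in for one sequential pass over the data file.
    KeyPointer[] index = {
      new KeyPointer(42, 0L), new KeyPointer(7, 256L), new KeyPointer(19, 512L)
    };
    keySort(index);
    for (KeyPointer e : index)  // a record would be read with seek(e.filePos) if needed
      System.out.println(e.key + " -> offset " + e.filePos);
  }
}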
20,356 | simple approaches to external sorting if your operating system supports virtual memorythe simplest "externalsort is to read the entire file into virtual memory and run an internal sorting method such as quicksort this approach allows the virtual memory manager to use its normal buffer pool mechanism to control disk accesses unfortunatelythis might not always be viable option one potential drawback is that the size of virtual memory is usually limited to something much smaller than the disk space available thusyour input file might not fit into virtual memory limited virtual memory can be overcome by adapting an internal sorting method to make use of your own buffer pool more general problem with adapting an internal sorting algorithm to external sorting is that it is not likely to be as efficient as designing new algorithm with the specific goal of minimizing disk fetches consider the simple adaptation of quicksort to use buffer pool quicksort begins by processing the entire array of recordswith the first partition step moving indices inward from the two ends this can be implemented efficiently using buffer pool howeverthe next step is to process each of the subarraysfollowed by processing of sub-subarraysand so on as the subarrays get smallerprocessing quickly approaches random access to the disk drive even with maximum use of the buffer poolquicksort still must read and write each record log times on average we can do much better finallyeven if the virtual memory manager can give good performance using standard quicksortthis will come at the cost of using lot of the system' working memorywhich will mean that the system cannot use this space for other work better methods can save time while also using less memory our approach to external sorting is derived from the mergesort algorithm the simplest form of external mergesort performs series of sequential passes over the recordsmerging larger and larger sublists on each pass the first pass merges sublists of size into sublists of size the second pass merges the sublists of size into sublists of size and so on sorted sublist is called run thuseach pass is merging pairs of runs to form longer runs each pass copies the contents of the file to another file here is sketch of the algorithmas illustrated by figure split the original file into two equal-sized run files read one block from each run file into input buffers take the first record from each input bufferand write run of length two to an output buffer in sorted order take the next record from each input bufferand write run of length two to second output buffer in sorted order |
20,357 | chap file processing and external sorting runs of length runs of length runs of length figure simple external mergesort algorithm input records are divided equally between two input files the first runs from each input file are merged and placed into the first output file the second runs from each input file are merged and placed in the second output file merging alternates between the two output files until the input files are empty the roles of input and output files are then reversedallowing the runlength to be doubled with each pass repeat until finishedalternating output between the two output run buffers whenever the end of an input block is reachedread the next block from the appropriate input file when an output buffer is fullwrite it to the appropriate output file repeat steps through using the original output files as input files on the second passthe first two records of each input run file are already in sorted order thusthese two runs may be merged and output as single run of four elements each pass through the run files provides larger and larger runs until only one run remains example using the input of figure we first create runs of length one split between two input files we then process these two input files sequentiallymaking runs of length two the first run has the values and which are output to the first output file the next run has and which is output to the second file the run is sent to the first filethen run is sent to the second fileand so on once this pass has completedthe roles of the input files and output files are reversed the next pass will merge runs of length two into runs of length four runs and are merged to send to the first output file then runs and are merged to send run to the second output file in the final passthese runs are merged to form the final run this algorithm can easily take advantage of the double buffering techniques described in section note that the various passes read the input run files se |
20,358 | quentially and write the output run files sequentially for sequential processing and double buffering to be effectivehoweverit is necessary that there be separate / head available for each file this typically means that each of the input and output files must be on separate disk drivesrequiring total of four disk drives for maximum efficiency the external mergesort algorithm just described requires that log passes be made to sort file of records thuseach record must be read from disk and written to disk log times the number of passes can be significantly reduced by observing that it is not necessary to use mergesort on small runs simple modification is to read in block of datasort it in memory (perhaps using quicksort)and then output it as single sorted run example assume that we have blocks of size kband records are eight bytes with four bytes of data and -byte key thuseach block contains records standard mergesort would require nine passes to generate runs of recordswhereas processing each block as unit can be done in one pass with an internal sort these runs can then be merged by mergesort standard mergesort requires eighteen passes to process records using an internal sort to create initial runs of records reduces this to one initial pass to create the runs and nine merge passes to put them all togetherapproximately half as many passes we can extend this concept to improve performance even further available main memory is usually much more than one block in size if we process larger initial runsthen the number of passes required by mergesort is further reduced for examplemost modern computers can provide tens or even hundreds of megabytes of ram to the sorting program if all of this memory (excepting small amount for buffers and local variablesis devoted to building initial runs as large as possiblethen quite large files can be processed in few passes the next section presents technique for producing large runstypically twice as large as could fit directly into main memory another way to reduce the number of passes required is to increase the number of runs that are merged together during each pass while the standard mergesort algorithm merges two runs at timethere is no reason why merging needs to be limited in this way section discusses the technique of multiway merging over the yearsmany variants on external sorting have been presentedbut all are based on the following two steps break the file into large initial runs |
20,359 | chap file processing and external sorting merge the runs together to form single sorted file replacement selection this section treats the problem of creating initial runs as large as possible from disk fileassuming fixed amount of ram is available for processing as mentioned previouslya simple approach is to allocate as much ram as possible to large arrayfill this array from diskand sort the array using quicksort thusif the size of memory available for the array is recordsthen the input file can be broken into initial runs of length better approach is to use an algorithm called replacement selection thaton averagecreates runs of records in length replacement selection is actually slight variation on the heapsort algorithm the fact that heapsort is slower than quicksort is irrelevant in this context because / time will dominate the total running time of any reasonable external sorting algorithm building longer initial runs will reduce the total / time required replacement selection views ram as consisting of an array of size in addition to an input buffer and an output buffer (additional / buffers might be desirable if the operating system supports double bufferingbecause replacement selection does sequential processing on both its input and its output imagine that the input and output files are streams of records replacement selection takes the next record in sequential order from the input stream when neededand outputs runs one record at time to the output stream buffering is used so that disk / is performed one block at time block of records is initially read and held in the input buffer replacement selection removes records from the input buffer one at time until the buffer is empty at this point the next block of records is read in output to buffer is similaronce the buffer fills up it is written to disk as unit this process is illustrated by figure replacement selection works as follows assume that the main processing is done in an array of size records fill the array from disk set last build min-heap (recall that min-heap is defined such that the record at each node has key value less than the key values of its children repeat until the array is empty(asend the record with the minimum key value (the rootto the output buffer (blet be the next record in the input buffer if ' key value is greater than the key value just output then place at the root |
20,360 | sec external sorting input file input buffer ram output buffer output run file figure overview of replacement selection input records are processed sequentially initially ram is filled with records as records are processedthey are written to an output buffer when this buffer becomes fullit is written to disk meanwhileas replacement selection needs recordsit reads them from the input buffer whenever this buffer becomes emptythe next block of records is read from disk ii else replace the root with the record in array position lastand place at position last set last last (csift down the root to reorder the heap when the test at step (bis successfula new record is added to the heapeventually to be output as part of the run as long as records coming from the input file have key values greater than the last key value output to the runthey can be safely added to the heap records with smaller key values cannot be output as part of the current run because they would not be in sorted order such values must be stored somewhere for future processing as part of another run howeverbecause the heap will shrink by one element in this casethere is now free space where the last element of the heap used to bethusreplacement selection will slowly shrink the heap and at the same time use the discarded heap space to store records for the next run once the first run is complete ( the heap becomes empty)the array will be filled with records ready to be processed for the second run figure illustrates part of run being created by replacement selection it should be clear that the minimum length of run will be records if the size of the heap is because at least those records originally in the heap will be part of the run under good conditions ( if the input is sorted)then an arbitrarily long run is possible in factthe entire file could be processed as one run if conditions are bad ( if the input is reverse sorted)then runs of only size result what is the expected length of run generated by replacement selectionit can be deduced from an analogy called the snowplow argument imagine that snowplow is going around circular track during heavybut steadysnowstorm after the plow has been around at least oncesnow on the track must be as follows immediately behind the plowthe track is empty because it was just plowed the greatest level of snow on the track is immediately in front of the plowbecause |
20,361 | [Figure: three columns labeled Input, Memory, and Output trace several steps of replacement selection on a small example; the specific key values are not recoverable from this extraction.] Figure caption: Replacement selection example. After building the heap, the root value is output and an incoming value replaces it. That value is output next, replaced with another incoming value. The heap is reordered, with the new minimum rising to the root; that value is output next. A later incoming value is too small for this run and is placed at the end of the array, moving the value in the last heap position to the root. Reordering the heap brings the next eligible value to the root, which is output next.
20,362 | sec external sorting falling snow future snow existing snow snowplow movement start time figure the snowplow analogy showing the action during one revolution of the snowplow circular track is laid out straight for purposes of illustrationand is shown in cross section at any time the most snow is directly in front of the snowplow as the plow moves around the trackthe same amount of snow is always in front of the plow as the plow moves forwardless of this is snow that was in the track at time more is snow that has fallen since this is the place least recently plowed at any instantthere is certain amount of snow on the track snow is constantly falling throughout the track at steady ratewith some snow falling "in frontof the plow and some "behindthe plow (on circular trackeverything is actually "in frontof the plowbut figure illustrates the idea during the next revolution of the plowall snow on the track is removedplus half of what falls because everything is assumed to be in steady stateafter one revolution snow is still on the trackso snow must fall during revolutionand snow is removed during revolution (leaving snow behindat the beginning of replacement selectionnearly all values coming from the input file are greater ( "in front of the plow"than the latest key value output for this runbecause the run' initial key values should be small as the run progressesthe latest key value output becomes greater and so new key values coming from the input file are more likely to be too small ( "after the plow")such records go to the bottom of the array the total length of the run is expected to be twice the size of the array of coursethis assumes that incoming key values are evenly distributed within the key range (in terms of the snowplow analogywe assume that snow falls evenly throughout the tracksorted and reverse sorted inputs do not meet this expectation and so change the length of the run multiway merging the second stage of typical external sorting algorithm merges the runs created by the first stage assume that we have runs to merge if simple two-way merge is usedthen runs (regardless of their sizeswill require log passes through the file while should be much less than the total number of records (because |
20,363 | chap file processing and external sorting input runs output buffer figure illustration of multiway merge the first value in each input run is examined and the smallest sent to the output this value is removed from the input and the process repeated in this examplevalues and are compared first value is removed from the first run and sent to the output values and will be compared next after the first five values have been outputthe "currentvalue of each block is the one underlined the initial runs should each contain many records)we would like to reduce still further the number of passes required to merge the runs together note that twoway merging does not make good use of available memory because merging is sequential process on the two runsonly one block of records per run need be in memory at time keeping more than one block of run in memory at any time will not reduce the disk / required by the merge process thusmost of the space just used by the heap for replacement selection (typically many blocks in lengthis not being used by the merge process we can make better use of this space and at the same time greatly reduce the number of passes needed to merge the runs if we merge several runs at time multiway merging is similar to two-way merging if we have runs to mergewith block from each run available in memorythen the -way merge algorithm simply looks at values (the front-most value for each input runand selects the smallest one to output this value is removed from its runand the process is repeated when the current block for any run is exhaustedthe next block from that run is read from disk figure illustrates multiway merge conceptuallymultiway merge assumes that each run is stored in separate file howeverthis is not necessary in practice we only need to know the position of each run within single fileand use seek to move to the appropriate block whenever we need new data from particular run naturallythis approach destroys the ability to do sequential processing on the input file howeverif all runs were stored on single disk drivethen processing would not be truly sequential anyway because the / head would be alternating between the runs thusmultiway merging |
20,364 | replaces several (potentiallysequential passes with single random access pass if the processing would not be sequential anyway (such as when all processing is on single disk drive)no time is lost by doing so multiway merging can greatly reduce the number of passes required if there is room in memory to store one block for each runthen all runs can be merged in single pass thusreplacement selection can build initial runs in one passand multiway merging can merge all runs in one passyielding total cost of two passes howeverfor truly large filesthere might be too many runs for each to get block in memory if there is room to allocate blocks for -way mergeand the number of runs is greater than bthen it will be necessary to do multiple merge passes in other wordsthe first runs are mergedthen the next band so on these super-runs are then merged by subsequent passesb super-runs at time how big file can be merged in one passassuming blocks were allocated to the heap for replacement selection (resulting in runs of average length blocks)followed by -way mergewe can process on average file of size blocks in single multiway merge + blocks on average can be processed in bway merges to gain some appreciation for how quickly this growsassume that we have available mb of working memoryand that block is kbyielding blocks in working memory the average run size is mb (twice the working memory sizein one pass runs can be merged thusa file of size mb canon averagebe processed in two passes (one to build the runsone to do the mergewith only mb of working memory larger block size would reduce the size of the file that can be processed in one merge pass for fixed-size working memorya smaller block size or larger working memory would increase the file size that can be processed in one merge pass with mb of working memory and kb blocksa file of size gigabytes could be processed in two merge passeswhich is big enough for most applications thusthis is very effective algorithm for single disk drive external sorting figure shows comparison of the running time to sort various-sized files for the following implementations( standard mergesort with two input runs and two output runs( two-way mergesort with large initial runs (limited by the size of available memory)and ( -way mergesort performed after generating large initial runs in each casethe file was composed of series of four-byte records ( two-byte key and two-byte data value)or records per megabyte of file size we can see from this table that using even modest memory size (two blocksto create initial runs results in tremendous savings in time doing -way merges of the runs provides another considerable speeduphowever large-scale multi-way |
20,365 | merges beyond a handful of runs do not help much, because a lot of time is spent determining which is the next smallest element among the runs. [Table: for each of several file sizes (in megabytes), and for each of the three sorts, the working-memory size in blocks used to build runs; every entry gives the time in seconds and the total number of blocks read and written. The numeric entries are not recoverable from this extraction.] Table caption: Comparison of three external sorts on a collection of small records, for files of various sizes. Each entry shows time in seconds and the total number of blocks read and written by the program. For the third sorting algorithm, on one of the file sizes the last column uses a larger merge order, chosen because it is a root of the number of blocks in the file, thus allowing the same number of runs to be merged at every pass.

We see from this experiment that building large initial runs reduces the running time to slightly more than one third that of standard Mergesort, depending on file and memory sizes. Using a multiway merge further cuts the time nearly in half. In summary, a good external sorting algorithm will seek to do the following:
- Make the initial runs as long as possible.
- At all stages, overlap input, processing, and output as much as possible.
- Use as much working memory as possible. Applying more memory usually speeds processing. In fact, more memory will have a greater effect than a faster disk. A faster CPU is unlikely to yield much improvement in running time for external sorting, because disk I/O speed is the limiting factor.
- If possible, use additional disk drives for more overlapping of processing with I/O, and to allow for sequential file processing.

Further Reading
A good general text on file processing is Folk and Zoellick's File Structures: A Conceptual Toolkit [FZ98]. A somewhat more advanced discussion on key issues in file processing is Betty Salzberg's File Structures: An Analytical Approach [Sal88]. A
20,366 | great discussion on external sorting methods can be found in salzberg' book the presentation in this is similar in spirit to salzberg' for details on disk drive modeling and measurementsee the article by ruemmler and wilkes"an introduction to disk drive modeling[rw see andrew tanenbaum' structured computer organization [tan for an introduction to computer hardware and organization an excellentdetailed description of memory and hard disk drives can be found online at "the pc guide,by charles kozierok [koz (www pcguide comthe pc guide also gives detailed descriptions of the microsoft windows and unix (linuxfile systems see "outperforming lru with an adaptive replacement cache algorithmby megiddo and modha for an example of more sophisticated algorithm than lru for managing buffer pools the snowplow argument comes from donald knuth' sorting and searching [knu ]which also contains wide variety of external sorting algorithms exercises computer memory and storage prices change rapidly find out what the current prices are for the media listed in figure does your information change any of the basic conclusions regarding disk processing assume disk drive from the late is configured as follows the total storage is approximately mb divided among surfaces each surface has tracksthere are sectors/track bytes/sectorand sectors/cluster the disk turns at rpm the track-to-track seek time is msand the average seek time is ms now assume that there is kb file on the disk on averagehow long does it take to read all of the data in the fileassume that the first track of the file is randomly placed on the diskthat the entire file lies on adjacent tracksand that the file completely fills each track on which it is found seek must be performed each time the / head moves to new track show your calculations using the specifications for the disk drive given in exercise calculate the expected time to read one entire trackone sectorand one byte show your calculations using the disk drive specifications given in exercise calculate the time required to read mb file assuming (athe file is stored on series of contiguous tracksas few tracks as possible (bthe file is spread randomly across the disk in kb clusters |
20,367 | chap file processing and external sorting show your calculations assume that disk drive is configured as follows the total storage is approximately mb divided among surfaces each surface has tracksthere are sectors/track bytes/sectorand sectors/cluster the disk turns at rpm the track-to-track seek time is msand the average seek time is ms now assume that there is kb file on the disk on averagehow long does it take to read all of the data on the fileassume that the first track of the file is randomly placed on the diskthat the entire file lies on contiguous tracksand that the file completely fills each track on which it is found show your calculations using the specifications for the disk drive given in exercise calculate the expected time to read one entire trackone sectorand one byte show your calculations using the disk drive specifications given in exercise calculate the time required to read mb file assuming (athe file is stored on series of contiguous tracksas few tracks as possible (bthe file is spread randomly across the disk in kb clusters show your calculations typical disk drive from has the following specifications the total storage is approximately gb on platter surfaces or gb/platter each platter has tracks with sectors/track ( sector holds bytesand sectors/cluster the disk turns at rpm the track-to-track seek time is msand the average seek time is ms now assume that there is mb file on the disk on averagehow long does it take to read all of the data on the fileassume that the first track of the file is randomly placed on the diskthat the entire file lies on contiguous tracksand that the file completely fills each track on which it is found show your calculations using the specifications for the disk drive given in exercise calculate the expected time to read one entire trackone sectorand one byte show your calculations using the disk drive specifications given in exercise calculate the time required to read mb file assuming to make the exercise doablethis specification is completely fictitious with respect to the track and sector layout while sectors do have bytesand while the number of platters and amount of data per track is plausiblethe reality is that all modern drives use zoned organization to keep the data density from inside to outside of the disk reasonably high the rest of the numbers are typical for drive from |
20,368 | (athe file is stored on series of contiguous tracksas few tracks as possible (bthe file is spread randomly across the disk in kb clusters show your calculations at the end of the fastest disk drive could find specifications for was the maxtor atlas this drive had nominal capacity of gb using platters ( surfacesor gb/surface assume there are , tracks with an average of sectors/track and bytes/sector the disk turns at , rpm the track-to-track seek time is ms and the average seek time is ms how long will it take on average to read mb fileassuming that the first track of the file is randomly placed on the diskthat the entire file lies on contiguous tracksand that the file completely fills each track on which it is found show your calculations using the specifications for the disk drive given in exercise calculate the expected time to read one entire trackone sectorand one byte show your calculations using the disk drive specifications given in exercise calculate the time required to read mb file assuming (athe file is stored on series of contiguous tracksas few tracks as possible (bthe file is spread randomly across the disk in kb clusters show your calculations prove that two tracks selected at random from disk are separated on average by one third the number of tracks on the disk assume that file contains one million records sorted by key value query to the file returns single record containing the requested key value files are stored on disk in sectors each containing records assume that the average time to read sector selected at random is ms in contrastit takes only ms to read the sector adjacent to the current position of the / head the "batchalgorithm for processing queries is to first sort the queries by order of appearance in the fileand then read the entire file sequentiallyprocessing all queries in sequential order as the file is read this algorithm implies that the queries must all be available before processing begins the "interactivealgorithm is to process each query in order of its arrivalsearching for the requested sector each time (unless by chance two queries in row are to the same sectorcarefully define under what conditions the batch method is more efficient than the interactive method againthis track layout does does not account for the zoned arrangement on modern disk drives |
20,369 | chap file processing and external sorting assume that virtual memory is managed using buffer pool the buffer pool contains five buffers and each buffer stores one block of data memory accesses are by block id assume the following series of memory accesses takes place for each of the following buffer pool replacement strategiesshow the contents of the buffer pool at the end of the seriesand indicate how many times block was found in the buffer pool (instead of being read into memoryassume that the buffer pool is initially empty (afirst-infirst out (bleast frequently used (with counts kept only for blocks currently in memorycounts for page are lost when that page is removedand the oldest item with the smallest count is removed when there is tie(cleast frequently used (with counts kept for all blocksand the oldest item with the smallest count is removed when there is tie(dleast recently used (emost recently used (replace the block that was most recently accessed suppose that record is bytesa block is bytes (thusthere are records per block)and that working memory is mb (there is also additional space available for / buffersprogram variablesetc what is the expected size for the largest file that can be merged using replacement selection followed by single pass of multiway mergeexplain how you got your answer assume that working memory size is kb broken into blocks of bytes (there is also additional space available for / buffersprogram variablesetc what is the expected size for the largest file that can be merged using replacement selection followed by two passes of multiway mergeexplain how you got your answer prove or disprove the following propositiongiven space in memory for heap of recordsreplacement selection will completely sort file if no record in the file is preceded by or more keys of greater value imagine database containing ten million recordswith each record being bytes long provide an estimate of the time it would take (in secondsto sort the database on typical desktop or laptop computer assume that company has computer configuration satisfactory for processing their monthly payroll further assume that the bottleneck in payroll |
20,370 | processing is sorting operation on all of the employee recordsand that an external sorting algorithm is used the company' payroll program is so good that it plans to hire out its services to do payroll processing for other companies the president has an offer from second company with times as many employees she realizes that her computer is not up to the job of sorting times as many records in an acceptable amount of time describe what impact each of the following modifications to the computing system is likely to have in terms of reducing the time required to process the larger payroll database (aa factor of two speedup to the cpu (ba factor of two speedup to disk / time (ca factor of two speedup to main memory access time (da factor of two increase to main memory size how can the external sorting algorithm described in this be extended to handle variable-length records projects for database applicationassume it takes ms to read block from disk ms to search for record in block stored in memoryand that there is room in memory for buffer pool of blocks requests come in for recordswith the request specifying which block contains the record if block is accessedthere is probability for each of the next ten requests that the request will be to the same block what will be the expected performance improvement for each of the following modifications to the system(aget cpu that is twice as fast (bget disk drive that is twice as fast (cget enough memory to double the buffer pool size write simulation to analyze this problem pictures are typically stored as an arrayrow by rowon disk consider the case where the picture has colors thuseach pixel can be represented using bits if you allow bits per pixelno processing is required to unpack the pixels (because pixel corresponds to bytethe lowest level of addressing on most machinesif you pack two pixels per bytespace is saved but the pixels must be unpacked which takes more time to read from disk and access every pixel of the image bits per pixelor bits per pixel with pixels per byteprogram both and compare the times |
20,371 | chap file processing and external sorting implement disk-based buffer pool class based on the lru buffer pool replacement strategy disk blocks are numbered consecutively from the beginning of the file with the first block numbered as assume that blocks are bytes in sizewith the first bytes used to store the block id corresponding to that buffer use the first bufferpool abstract class given in section as the basis for your implementation implement an external sort based on replacement selection and multiway merging as described in this test your program both on files with small records and on files with large records for what size record do you find that key sorting would be worthwhile implement quicksort for large files on disk by replacing all array access in the normal quicksort application with access to virtual array implemented using buffer pool that iswhenever record in the array would be read or written by quicksortuse call to buffer pool function instead compare the running time of this implementation with implementations for external sorting based on mergesort as described in this section suggests that an easy modification to the basic -way mergesort is to read in large chunk of data into main memorysort it with quicksortand write it out for initial runs thena standard -way merge is used in series of passes to merge the runs together howeverthis makes use of only two blocks of working memory at time each block read is essentially random accessbecause the various files are read in an unknown ordereven though each of the input and output files is processed sequentially on each pass possible improvement would beon the merge passesto divide working memory into four equal sections one section is allocated to each of the two input files and two output files all reads during merge passes would be in full sectionsrather than single blocks while the total number of blocks read and written would be the same as regular -way mergesortit is possible that this would speed processing because series of blocks that are logically adjacent in the various input and output files would be read/written each time implement this variationand compare its running time against standard series of -way merge passes that read/write only single block at time before beginning implementationwrite down your hypothesis on how the running time will be affected by this change after implementingdid you find that this change has any meaningful effect on performance |
20,372 | searching organizing and retrieving information is at the heart of most computer applicationsand searching is surely the most frequently performed of all computing tasks search can be viewed abstractly as process to determine if an element with particular value is member of particular set the more common view of searching is an attempt to find the record within collection of records that has particular key valueor those records in collection whose key values meet some criterion such as falling within range of values we can define searching formally as follows suppose kn are distinct keysand that we have collection of records of the form ( )( )(kn in where ij is information associated with key kj for < < given particular key value kthe search problem is to locate the record (kj ij in such that kj (if one existssearching is systematic method for locating the record (or recordswith key value kj successful search is one in which record with key kj is found an unsuccessful search is one in which no record with kj is found (and no such record existsan exact-match query is search for the record whose key value matches specified key value range query is search for all records whose key value falls within specified range of key values we can categorize search algorithms into three general approaches sequential and list methods direct access by key value (hashing tree indexing methods |
20,373 | This and the following chapters treat these three approaches in turn. Any of these approaches are potentially suitable for implementing the Dictionary ADT introduced in Section . However, each has different performance characteristics that make it the method of choice in particular circumstances. The current chapter considers methods for searching data stored in lists and tables. A table is simply another term for an array. List in this context means any list implementation, including a linked list or an array. Most of these methods are appropriate for sequences (i.e., duplicate key values are allowed), although special techniques applicable to sets are discussed in Section . The techniques from the first three sections of this chapter are most appropriate for searching a collection of records stored in RAM. Section  discusses hashing, a technique for organizing data in a table such that the location of each record within the table is a function of its key value. Hashing is appropriate when records are stored either in RAM or on disk. Chapter  discusses tree-based methods for organizing information on disk, including a commonly used file structure called the B-tree. Nearly all programs that must organize large collections of records stored on disk use some variant of either hashing or the B-tree. Hashing is practical for only certain access functions (exact-match queries) and is generally appropriate only when duplicate key values are not allowed. B-trees are the method of choice for disk-based applications anytime hashing is not appropriate.

Searching Unsorted and Sorted Arrays
The simplest form of search has already been presented in Example : the sequential search algorithm. Sequential search on an unsorted list requires \Theta(n) time in the worst case. How many comparisons does linear search do on average? A major consideration is whether K is in list L at all. We can simplify our analysis by ignoring everything about the input except the position of K if it is found in L. Thus, we have n + 1 distinct possible events: that K is in one of the n positions in L (each with its own probability), or that it is not in L at all. We can express the probability that K is not in L as

p_0 = 1 - \sum_{i=1}^{n} P(L[i] = K)

where P(x) is the probability of event x.
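For reference while following this analysis, here is a minimal sketch of the sequential search being analyzed. It is not the text's own code: the method name, the use of int keys, and the convention of returning the array length when K is absent are assumptions of this sketch.

/** Sequential search sketch: return the position of K in L,
    or L.length if K is not present. */
static int seqSearch(int[] L, int K) {
  for (int i = 0; i < L.length; i++)
    if (L[i] == K) return i;  // found after i+1 comparisons
  return L.length;            // not found after n comparisons
}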
20,374 | Let p_i be the probability that K is in position i of L. When K is not in L, sequential search will require n comparisons. Let p_0 be the probability that K is not in L. Then the average cost T(n) will be

T(n) = n p_0 + \sum_{i=1}^{n} i p_i

What happens to the equation if we assume all the p_i's are equal (except p_0)?

T(n) = n p_0 + \sum_{i=1}^{n} i \frac{1 - p_0}{n}
     = n p_0 + \frac{1 - p_0}{n} \sum_{i=1}^{n} i
     = n p_0 + \frac{1 - p_0}{n} \cdot \frac{n(n+1)}{2}
     = n p_0 + (1 - p_0) \frac{n+1}{2}

Depending on the value of p_0, \frac{n+1}{2} \le T(n) \le n.

For large collections of records that are searched repeatedly, sequential search is unacceptably slow. One way to reduce search time is to preprocess the records by sorting them. Given a sorted array, an obvious improvement over simple linear search is to test if the current element in L is greater than K. If it is, then we know that K cannot appear later in the array, and we can quit the search early. But this still does not improve the worst-case cost of the algorithm.

We can also observe that if we look first at position 1 in sorted array L and find that K is bigger, then we also rule out position 0. Because more is often better, what if we look at position 2 in L and find that K is bigger yet? This rules out positions 0, 1, and 2 with one comparison. What if we carry this to the extreme and look first at the last position in L and find that K is bigger? Then we know in one comparison that K is not in L. This is very useful to know, but what is wrong with this approach? While we learn a lot sometimes (in one comparison we might learn that K is not in the list), usually we learn only a little bit (that the last element is not K). The question then becomes: What is the right amount to jump? This leads us to an algorithm known as Jump Search.
20,375 | For some value j, we check every j'th element in L; that is, we check elements L[j], L[2j], and so on. So long as K is greater than the values we are checking, we continue on. But when we reach a value in L greater than K, we do a linear search on the piece of length j - 1 that we know brackets K if it is in the list.

If we define m such that mj \le n < (m+1)j, then the total cost of this algorithm is at most m + j - 1 3-way comparisons. Therefore, the cost to run the algorithm on n items with a jump of size j is

T(n, j) = m + j - 1 \approx \frac{n}{j} + j - 1.

What is the best value that we can pick for j? We want to minimize the cost

\min_{1 \le j \le n} \left( \frac{n}{j} + j - 1 \right).

Take the derivative and solve for f'(j) = 0 to find the minimum, which is j = \sqrt{n}. In this case, the worst-case cost will be roughly 2\sqrt{n}.

This example teaches us some lessons about algorithm design. We want to balance the work done while selecting a sublist with the work done while searching a sublist. In general, it is a good strategy to make subproblems of equal effort. This is an example of a divide and conquer algorithm. What if we extend this idea to three levels? We would first make jumps of some size to find a sublist whose end values bracket value K. We would then work through this sublist by making jumps of some smaller size. Finally, once we find a bracketed sublist that is small enough, we would do a sequential search to complete the process. This probably sounds convoluted, to do two levels of jumping to be followed by a sequential search. While it might make sense to do a two-level algorithm (that is, jump search jumps to find a sublist and then does sequential search on the sublist), it almost never seems to make sense to do a three-level algorithm. Instead, when we go beyond two levels, we nearly always generalize by using recursion. This leads us to the most commonly used search algorithm for sorted arrays, the binary search described in Section . If we know nothing about the distribution of key values, then binary search is the best algorithm available for searching a sorted array (see Exercise ). However, sometimes we do know something about the expected key distribution. Consider the typical behavior of a person looking up a word in a large dictionary. Most people certainly do not use sequential search! Typically, people use a modified form of binary search, at least until they get close to the word that they are
20,376 | looking for the search generally does not start at the middle of the dictionary person looking for word starting with 'sgenerally assumes that entries beginning with 'sstart about three quarters of the way through the dictionary thushe or she will first open the dictionary about three quarters of the way through and then make decision based on what they find as to where to look next in other wordspeople typically use some knowledge about the expected distribution of key values to "computewhere to look next this form of "computedbinary search is called dictionary search or interpolation search in dictionary searchwe search at position that is appropriate to the value of as follows pk [ [nl[ the location of particular key within the key range is translated into the expected position for the corresponding record in the tableand this position is checked first as with binary searchthe value of the key found eliminates all records either above or below that position the actual value of the key found can then be used to compute new position within the remaining range of the table the next check is made based on the new computation this proceeds until either the desired record is foundor the table is narrowed until no records are left variation on dictionary search is known as quadratic binary search (qbs)and we will analyze this in detail because its analysis is easier than that of the general dictionary search qbs will first compute and hen examine [dpneif [dpnethen qbs will sequentially probe by steps of size nthat iswe step through [dpn ne] until we reach value less than or equal to similarly for [dpnewe will step forward by until we reach value in that is greater than we are now within positions of assume (for nowthat it takes constant number of comparisons to bracket within sublist of size we then take this sublist and repeat the process recursively what is the cost for qbsnote that cn cn/ and we will be repeatedly taking square roots of the current sublist size until we find the item we are looking for because log and we can cut log in half only log log timesthe cost is th(log log nif the number of probes on jump search is constant the number of comparisons needed is = ip(need exactly probes |
20,377 | chap searching this is equal tonpn (need at least probesi= ( ( pn ( pn ( pn ( pn npn we require at least two probes to set the boundsso the cost is (need at least probesi= we now make take advantage of useful fact known as cebysev' inequality cebysev' inequality states that (need probes)or pi is pi < ( ) <( ) ( ) because ( < / this assumes uniformly distributed data thusthe expected number of probes is = < = ( = is this better than binary searchtheoretically yesbecause cost of (log log ngrows slower than cost of (log nhoweverwe have situation here which illustrates the limits to the model of asymptotic complexity in some practical situations yesc log does grow faster than log log in factits exponentially fasterbut even sofor practical input sizesthe absolute cost difference is fairly small thusthe constant factors might play role let' compare lg lg to lg lg lg lg difference |
20,378 | sec searching unsorted and sorted arrays it is not always practical to reduce an algorithm' growth rate there is practicality window for every problemin that we have practical limit to how big an input we wish to solve for if our problem size never grows too bigit might not matter if we can reduce the cost by an extra log factorbecause the constant factors in the two algorithms might overwhelm the savings for our two algorithmslet us look further and check the actual number of comparisons used for binary searchwe need about log total comparisons quadratic binary search requires about lg lg comparisons if we incorporate this observation into our tablewe get different picture about the relative differences lg lg lg difference worse same but we still are not done this is only count of comparisonsbinary search is inherently much simpler than qbsbecause binary search only needs to calculate the midpoint position of the array before each comparisonwhile quadratic binary search must calculate an interpolation point which is more expensive so the constant factors for qbs are even higher not only are the constant factors worse on averagebut qbs is far more dependent than binary search on good data distribution to perform well for exampleimagine that you are searching telephone directory for the name "young normally you would look near the back of the book if you found name beginning with ' ,you might look just little ways toward the front if the next name you find also begins with ' ,you would look little further toward the front if this particular telephone directory were unusual in that half of the entries begin with ' ,then you would need to move toward the front many timeseach time eliminating relatively few records from the search in the extremethe performance of interpolation search might not be much better than sequential search if the distribution of key values is badly calculated while it turns out that qbs is not practical algorithmthis is not typical situation fortunatelyalgorithm growth rates are usually well behavedso that asymptotic algorithm analysis nearly always gives us practical indication for which of two algorithms is better |
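Both the jump search and the dictionary (interpolation) search described in this section are short enough to sketch in code. The versions below are illustrations under assumptions the text does not fix: a sorted int[] of distinct keys, with a return value of L.length signalling that K is absent. The quadratic binary search refinement, with its square-root-sized probe steps, is not shown.

/** Jump search: probe every j'th element with j = sqrt(n), then scan the
    bracketed piece. Returns the index of K, or L.length if K is absent. */
static int jumpSearch(int[] L, int K) {
  int n = L.length;
  if (n == 0) return 0;
  int j = (int) Math.sqrt(n);            // the jump size derived in the analysis above
  int prev = 0, curr = j;
  while (curr < n && L[curr - 1] < K) {  // jump while the block's last value is too small
    prev = curr;
    curr += j;
  }
  for (int i = prev; i < Math.min(curr, n); i++)  // linear scan of the bracketing block
    if (L[i] == K) return i;
  return n;
}

/** Dictionary (interpolation) search: each probe position is computed from the
    key's value relative to the remaining range, in the spirit of the formula
    for p quoted above. Returns the index of K, or L.length if K is absent. */
static int interpSearch(int[] L, int K) {
  int lo = 0, hi = L.length - 1;
  while (lo <= hi && K >= L[lo] && K <= L[hi]) {
    int pos;
    if (L[hi] == L[lo]) pos = lo;   // flat range: avoid dividing by zero
    else pos = lo + (int) (((long) (K - L[lo]) * (hi - lo)) / (L[hi] - L[lo]));
    if (L[pos] == K) return pos;
    if (L[pos] < K) lo = pos + 1;   // K must lie above the probe position
    else hi = pos - 1;              // K must lie below the probe position
  }
  return L.length;
}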
20,379 | chap searching self-organizing lists while lists are most commonly ordered by key valuethis is not the only viable option another approach to organizing lists to speed search is to order the records by expected frequency of access while the benefits might not be as great as when organized by key valuethe cost to organize by frequency of access is much cheaperand thus is of use in some situations assume that we knowfor each key ki the probability pi that the record with key ki will be requested assume also that the list is ordered so that the most frequently requested record is firstthen the next most frequently requested recordand so on search in the list will be done sequentiallybeginning with the first position over the course of many searchesthe expected number of comparisons required for one search is npn in other wordsthe cost to access the first record is one (because one key value is looked at)and the probability of this occurring is the cost to access the second record is two (because we must look at the first and the second recordskey values)with probability and so on for recordsassuming that all searches are for records that actually existthe probabilities through pn must sum to one certain probability distributions give easily computed results example calculate the expected cost to search list when each record has equal chance of being accessed (the classic sequential search through an unsorted listsetting pi / yields cn / ( )/ = this result matches our expectation that half the records will be accessed on average by normal sequential search if the records truly have equal access probabilitiesthen ordering records by frequency yields no benefit we saw in section the more general case where we must consider the probability (labeled that the search key does not match that for any record in the array in that casein accordance with our general formulawe get ( thusn+ < <ndepending on the value of |
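Written out in one place, the expected-cost computation just described is as follows (a restatement consistent with the surrounding definitions; the failed-search accounting below is one standard reading of the general formula referred to above):

$$\overline{C}_n = \sum_{i=1}^{n} i\,p_i .$$

With every record equally likely, $p_i = 1/n$, this gives $\overline{C}_n = \sum_{i=1}^{n} i/n = (n+1)/2$. If a failed search occurs with probability $p_0$ and costs $n$ comparisons, while the remaining probability is spread evenly, $p_i = (1-p_0)/n$, then

$$\overline{C}_n = n\,p_0 + \sum_{i=1}^{n} i\,\frac{1-p_0}{n} = \frac{n+1}{2} + p_0\,\frac{n-1}{2},$$

so $\frac{n+1}{2} \le \overline{C}_n \le n$, depending on the value of $p_0$.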
20,380 | sec self-organizing lists geometric probability distribution can yield quite different results example calculate the expected cost for searching list ordered by frequency when the probabilities are defined as / if < < pi / - if thencn ( / = for this examplethe expected number of accesses is constant this is because the probability for accessing the first record is highthe second is much lower but still much higher than for record threeand so on this shows that for some probability distributionsordering the list by frequency can yield an efficient search technique in many search applicationsreal access patterns follow rule of thumb called the / rule the / rule says that of the record accesses are to of the records the values of and are only estimatesevery application has its own values howeverbehavior of this nature occurs surprisingly often in practice (which explains the success of caching techniques widely used by disk drive and cpu manufacturers for speeding access to data stored in slower memorysee the discussion on buffer pools in section when the / rule applieswe can expect reasonable search performance from list ordered by frequency of access example the / rule is an example of zipf distribution naturally occurring distributions often follow zipf distribution examples include the observed frequency for the use of words in natural language such as englishand the size of the population for cities ( view the relative proportions for the populations as equivalent to the "frequency of use"zipf distributions are related to the harmonic series defined in equation define the zipf frequency for item in the distribution for records as /(ihn (see exercise the expected cost for the series whose members follow this zipf distribution will be cn = /ihn /hn nloge |
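A quick numerical check of these closed forms is sketched below; the class and method names are mine, and the probability definitions follow the two examples above.

class ExpectedSearchCost {
  // Expected comparisons for sequential search when record i (1-based)
  // is requested with probability p[i-1] and costs i comparisons.
  static double expectedCost(double[] p) {
    double cost = 0;
    for (int i = 1; i <= p.length; i++) cost += i * p[i-1];
    return cost;
  }

  public static void main(String[] args) {
    int n = 1000;

    double[] geom = new double[n];            // p_i = 1/2^i; last record takes the remainder
    for (int i = 1; i < n; i++) geom[i-1] = Math.pow(0.5, i);
    geom[n-1] = Math.pow(0.5, n-1);
    System.out.println(expectedCost(geom));   // prints a value close to 2

    double hn = 0;                            // harmonic number H_n
    for (int i = 1; i <= n; i++) hn += 1.0 / i;
    double[] zipf = new double[n];            // Zipf frequencies 1/(i * H_n)
    for (int i = 1; i <= n; i++) zipf[i-1] = 1.0 / (i * hn);
    System.out.println(expectedCost(zipf));   // about n / H_n, i.e., roughly n / ln n
  }
}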
20,381 | chap searching when frequency distribution follows the / rulethe average search looks at about one tenth of the records in table ordered by frequency in most applicationswe have no means of knowing in advance the frequencies of access for the data records to complicate matters furthercertain records might be accessed frequently for brief period of timeand then rarely thereafter thusthe probability of access for records might change over time (in most database systemsthis is to be expectedself-organizing lists seek to solve both of these problems self-organizing lists modify the order of records within the list based on the actual pattern of record access self-organizing lists use heuristic for deciding how to to reorder the list these heuristics are similar to the rules for managing buffer pools (see section in facta buffer pool is form of self-organizing list ordering the buffer pool by expected frequency of access is good strategybecause typically we must search the contents of the buffers to determine if the desired information is already in main memory when ordered by frequency of accessthe buffer at the end of the list will be the one most appropriate for reuse when new page of information must be read below are three traditional heuristics for managing self-organizing lists the most obvious way to keep list ordered by frequency would be to store count of accesses to each record and always maintain records in this order this method will be referred to as count count is similar to the least frequently used buffer replacement strategy whenever record is accessedit might move toward the front of the list if its number of accesses becomes greater than record preceding it thuscount will store the records in the order of frequency that has actually occurred so far besides requiring space for the access countscount does not react well to changing frequency of access over time once record has been accessed large number of times under the frequency count systemit will remain near the front of the list regardless of further access history bring record to the front of the list when it is foundpushing all the other records back one position this is analogous to the least recently used buffer replacement strategy and is called move-to-front this heuristic is easy to implement if the records are stored using linked list when records are stored in an arraybringing record forward from near the end of the array will result in large number of records changing position move-to-front' cost is bounded in the sense that it requires at most twice the number of accesses required by the optimal static ordering for records when at least |
20,382 | sec self-organizing lists searches are performed in other wordsif we had known the series of (at least nsearches in advance and had stored the records in order of frequency so as to minimize the total cost for these accessesthis cost would be at least half the cost required by the move-to-front heuristic (this will be proved using amortized analysis in section finallymove-to-front responds well to local changes in frequency of accessin that if record is frequently accessed for brief period of time it will be near the front of the list during that period of access move-to-front does poorly when the records are processed in sequential orderespecially if that sequential order is then repeated multiple times swap any record found with the record immediately preceding it in the list this heuristic is called transpose transpose is good for list implementations based on either linked lists or arrays frequently used records willover timemove to the front of the list records that were once frequently accessed but are no longer used will slowly drift toward the back thusit appears to have good properties with respect to changing frequency of access unfortunatelythere are some pathological sequences of access that can make transpose perform poorly consider the case where the last record of the list (call it xis accessed this record is then swapped with the next-to-last record (call it )making the last record if is now accessedit swaps with repeated series of accesses alternating between and will continually search to the end of the listbecause neither record will ever make progress toward the front howeversuch pathological cases are unusual in practice example assume that we have eight recordswith key values to hand that they are initially placed in alphabetical order nowconsider the result of applying the following access patternf if the list is organized by the count heuristicthe final list resulting from these accesses will be hand the total cost for the twelve accesses will be comparisons (assume that when record' frequency count goes upit moves forward in the list to become the last record with that value for its frequency count after the first two accessesf will be the first record and will be the second if the list is organized by the move-to-front heuristicthen the final list will be |
20,383 | chap searching and the total number of comparisons required is finallyif the list is organized by the transpose heuristicthen the final list will be hand the total number of comparisons required is while self-organizing lists do not generally perform as well as search trees or sorted listboth of which require (log nsearch timethere are many situations in which self-organizing lists prove valuable tool obviously they have an advantage over sorted lists in that they need not be sorted this means that the cost to insert new record is lowwhich could more than make up for the higher search cost when insertions are frequent self-organizing lists are simpler to implement than search trees and are likely to be more efficient for small lists nor do they require additional space finallyin the case of an application where sequential search is "almostfast enoughchanging an unsorted list to self-organizing list might speed the application enough at minor cost in additional code as an example of applying self-organizing listsconsider an algorithm for compressing and transmitting messages the list is self-organized by the move-to-front rule transmission is in the form of words and numbersby the following rules if the word has been seen beforetransmit the current position of the word in the list move the word to the front of the list if the word is seen for the first timetransmit the word place the word at the front of the list both the sender and the receiver keep track of the position of words in the list in the same way (using the move-to-front rule)so they agree on the meaning of the numbers that encode repeated occurrences of words for exampleconsider the following example message to be transmitted (for simplicityignore case in lettersthe car on the left hit the car left the first three words have not been seen beforeso they must be sent as full words the fourth word is the second appearance of "the,which at this point is the third word in the list thuswe only need to transmit the position value " the next two words have not yet been seenso must be sent as full words the seventh word is the third appearance of "the,which coincidentally is again in the third position the eighth word is the second appearance of "car,which is now in the fifth position of the list "iis new wordand the last word "leftis now in the fifth position thus the entire transmission would be |
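"the car on 3 left hit 3 5 i 5"

A sketch of this move-to-front encoder follows; the class and method names are mine, and only the transmission rule itself comes from the text. Running it on the example sentence reproduces the transmission above.

import java.util.LinkedList;

class MTFEncoder {
  private LinkedList<String> words = new LinkedList<>();

  // Encode one word: a previously seen word is replaced by its current
  // 1-based position; either way, the word moves to the front of the list.
  String encode(String w) {
    int pos = words.indexOf(w);
    if (pos >= 0) {                       // Seen before: transmit its position
      words.remove(pos);
      words.addFirst(w);
      return Integer.toString(pos + 1);
    }
    words.addFirst(w);                    // New word: transmit it in full
    return w;
  }

  public static void main(String[] args) {
    MTFEncoder enc = new MTFEncoder();
    String msg = "the car on the left hit the car i left";
    StringBuilder out = new StringBuilder();
    for (String w : msg.split(" "))
      out.append(enc.encode(w)).append(' ');
    System.out.println(out);              // the car on 3 left hit 3 5 i 5
  }
}

Both sender and receiver apply the same move-to-front rule, so a transmitted position always refers to the same word on both ends.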
20,384 | This approach to compression is similar in spirit to Ziv-Lempel coding, which is a class of coding algorithms commonly used in file compression utilities. Ziv-Lempel coding replaces repeated occurrences of strings with a pointer to the location in the file of the first occurrence of the string. The codes are stored in a self-organizing list in order to speed up the time required to search for a string that has previously been seen.

Bit Vectors for Representing Sets

[Figure: the bit array for the set of primes in the range 0 to 15 — 0011010100010100. The bit at position i is set to 1 if and only if i is prime.]

Determining whether a value is a member of a particular set is a special case of searching for keys in a sequence of records. Thus, any of the search methods discussed in this book can be used to check for set membership. However, we can also take advantage of the restricted circumstances imposed by this problem to develop another representation.

In the case where the set elements fall within a limited key range, we can represent the set using a bit array with a bit position allocated for each potential member. Those members actually in the set store a value of 1 in their corresponding bit; those members not in the set store a value of 0 in their corresponding bit. For example, consider the set of primes between 0 and 15. The figure above shows the corresponding bit array. To determine if a particular value is prime, we simply check the corresponding bit. This representation scheme is called a bit vector or a bitmap. The mark array used in several of the graph algorithms of Chapter 11 is an example of such a set representation.

If the set fits within a single computer word, then set union, intersection, and difference can be performed by logical bit-wise operations. The union of sets A and B is the bit-wise OR function (whose symbol is | in Java). The intersection of sets A and B is the bit-wise AND function (whose symbol is & in Java). For example, if we would like to compute the set of numbers between 0 and 15 that are both prime and odd, we need only compute the expression

    0011010100010100 & 0101010101010101
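A small sketch of these bit-vector operations in Java, using one int as a 16-bit set over {0, ..., 15}; the names are mine, and the operators |, & and ~ are the ones described above. Note that bit i here is the low-order bit shifted left i places, which is the reverse of the figure's left-to-right layout.

class BitVectorDemo {
  static int add(int set, int i)        { return set | (1 << i); } // put i in the set
  static boolean member(int set, int i) { return (set & (1 << i)) != 0; }

  public static void main(String[] args) {
    int primes = 0, odds = 0;
    for (int i : new int[] {2, 3, 5, 7, 11, 13}) primes = add(primes, i);
    for (int i = 1; i < 16; i += 2) odds = add(odds, i);

    int primeAndOdd = primes & odds;    // intersection
    int primeOrOdd  = primes | odds;    // union
    int oddNotPrime = odds & ~primes;   // difference
    for (int i = 0; i < 16; i++)
      if (member(primeAndOdd, i))
        System.out.print(i + " ");      // prints: 3 5 7 11 13
  }
}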
20,385 | chap searching the set difference can be implemented in java using the expression &~ (is the symbol for bit-wise negationfor larger sets that do not fit into single computer wordthe equivalent operations can be performed in turn on the series of words making up the entire bit vector this method of computing sets from bit vectors is sometimes applied to document retrieval consider the problem of picking from collection of documents those few which contain selected keywords for each keywordthe document retrieval system stores bit vector with one bit for each document if the user wants to know which documents contain certain three keywordsthe corresponding three bit vectors are and'ed together those bit positions resulting in value of correspond to the desired documents alternativelya bit vector can be stored for each document to indicate those keywords appearing in the document such an organization is called signature file the signatures can be manipulated to find documents with desired combinations of keywords hashing this section presents completely different approach to searching tablesby direct access based on key value the process of finding record using some computation to map its key value to position in the table is called hashing most hashing schemes place records in the table in whatever order satisfies the needs of the address calculationthus the records are not ordered by value or frequency the function that maps key values to positions is called hash function and is usually denoted by the array that holds the records is called the hash table and will be denoted by ht position in the hash table is also known as slot the number of slots in hash table ht will be denoted by the variable with slots numbered from to the goal for hashing system is to arrange things such thatfor any key value and some hash function hi (kis slot in the table such that < (km and we have the key of the record stored at ht[iequal to hashing only works to store sets that ishashing cannot be used for applications where multiple records with the same key value are permitted hashing is not good method for answering range searches in other wordswe cannot easily find all records (if anywhose key values fall within certain range nor can we easily find the record with the minimum or maximum key valueor visit the records in key order hashing is most appropriate for answering the question"what recordif anyhas key value ?for applications where access involves only exact-match querieshashing is usually the search method of choice because it is extremely efficient when implemented correctly as you will see in this sectionhoweverthere |
20,386 | are many approaches to hashing and it is easy to devise an inefficient implementation hashing is suitable for both in-memory and disk-based searching and is one of the two most widely used methods for organizing large databases stored on disk (the other is the -treewhich is covered in as simple (though unrealisticexample of hashingconsider storing recordseach with unique key value in the range to in this simple casea record with key can be stored in ht[ ]and the hash function is simply (kk to find the record with key value ksimply look in ht[ktypicallythere are many more values in the key range than there are slots in the hash table for more realistic examplesuppose that the key can take any value in the range to , ( the key is two-byte unsigned integer)and that we expect to store approximately records at any given time it is impractical in this situation to use hash table with , slotsbecause most of the slots will be left empty insteadwe must devise hash function that allows us to store the records in much smaller table because the possible key range is larger than the size of the tableat least some of the slots must be mapped to from multiple key values given hash function and two keys and if ( ( where is slot in the tablethen we say that and have collision at slot under hash function finding record with key value in database organized by hashing follows two-step procedure compute the table location ( starting with slot ( )locate the record containing key using (if necessarya collision resolution policy hash functions hashing generally takes records whose key values come from large range and stores those records in table with relatively small number of slots collisions occur when two records hash to the same slot in the table if we are careful--or lucky--when selecting hash functionthen the actual number of collisions will be few unfortunatelyeven under the best of circumstancescollisions are nearly unavoidable for exampleconsider classroom full of students what is the the exception to this is perfect hashing perfect hashing is system in which records are hashed such that there are no collisions hash function is selected for the specific set of records being hashedwhich requires that the entire collection of records be available before selecting the hash function perfect hashing is efficient because it always finds the record that we are looking for exactly where the hash function computes it to beso only one access is required selecting perfect hash function can be expensive but might be worthwhile when extremely efficient search |
20,387 | chap searching probability that some pair of students shares the same birthday ( the same day of the yearnot necessarily the same year)if there are studentsthen the odds are about even that two will share birthday this is despite the fact that there are days in which students can have birthdays (ignoring leap years)on most of which no student in the class has birthday with more studentsthe probability of shared birthday increases the mapping of students to days based on their birthday is similar to assigning records to slots in table (of size using the birthday as hash function note that this observation tells us nothing about which students share birthdayor on which days of the year shared birthdays fall to be practicala database organized by hashing must store records in hash table that is not so large that it wastes space typicallythis means that the hash table will be around half full because collisions are extremely likely to occur under these conditions (by chanceany record inserted into table that is half full will have collision half of the time)does this mean that we need not worry about the ability of hash function to avoid collisionsabsolutely not the difference between good hash function and bad hash function makes big difference in practice technicallyany function that maps all possible key values to slot in the hash table is hash function in the extreme caseeven function that maps all records to the same slot is hash functionbut it does nothing to help us find records during search operation we would like to pick hash function that stores the actual records in the collection such that each slot in the hash table has equal probability of being filled unfortunatelywe normally have no control over the key values of the actual recordsso how well any particular hash function does this depends on the distribution of the keys within the allowable key range in some casesincoming data are well distributed across their key range for exampleif the input is set of random numbers selected uniformly from the key rangeany hash function that assigns the key range so that each slot in the hash table receives an equal share of the range will likely also distribute the input records uniformly within the table howeverin many applications the incoming records are highly clustered or otherwise poorly distributed when input records are not well distributed throughout the key range it can be difficult to devise hash function that does good job of distributing the records throughout the tableespecially if the input distribution is not known in advance there are many reasons why data values might be poorly distributed performance is required an example is searching for data on read-only cd here the database will never changethe time for each access is expensiveand the database designer can build the hash table before issuing the cd |
20,388 | natural distributions are geometric for exampleconsider the populations of the largest cities in the united states if you plot these populations on number linemost of them will be clustered toward the low sidewith few outliers on the high side this is an example of zipf distribution (see section viewed the other waythe home town for given person is far more likely to be particular large city than particular small town collected data are likely to be skewed in some way field samples might be rounded tosaythe nearest ( all numbers end in or if the input is collection of common english wordsthe beginning letter will be poorly distributed note that in each of these exampleseither highor low-order bits of the key are poorly distributed when designing hash functionswe are generally faced with one of two situations we know nothing about the distribution of the incoming keys in this casewe wish to select hash function that evenly distributes the key range across the hash tablewhile avoiding obvious opportunities for clustering such as hash functions that are sensitive to the highor low-order bits of the key value we know something about the distribution of the incoming keys in this casewe should use distribution-dependent hash function that avoids assigning clusters of related key values to the same hash table slot for exampleif hashing english wordswe should not hash on the value of the first character because this is likely to be unevenly distributed below are several examples of hash functions that illustrate these points example consider the following hash function used to hash integers to table of sixteen slotsint (int xreturn( )the value returned by this hash function depends solely on the least significant four bits of the key because these bits are likely to be poorly distributed (as an examplea high percentage of the keys might be even numberswhich means that the low order bit is zero)the result will also be poorly distributed this example shows that the size of the table can have big effect on the performance of hash system because this value is typically used as the modulus |
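To see the problem concretely, the sketch below feeds the low-order-bits hash function from the example a made-up set of keys that all happen to be multiples of 8 (as might occur if the keys are machine addresses or record offsets). Only two of the sixteen slots are ever used; the key values and class name are illustrative assumptions.

class LowBitsDemo {
  static int h(int x) { return x & 15; }     // the hash function from the example

  public static void main(String[] args) {
    int[] used = new int[16];
    for (int key = 0; key < 1000; key += 8)  // keys 0, 8, 16, 24, ...
      used[h(key)]++;
    for (int slot = 0; slot < 16; slot++)    // only slots 0 and 8 are non-zero
      System.out.println(slot + ": " + used[slot]);
  }
}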
20,389 | Example: A good hash function for numerical values is the mid-square method. The mid-square method squares the key value, and then takes the middle r bits of the result, giving a value in the range 0 to 2^r − 1. This works well because most or all bits of the key value contribute to the result. For example, consider records whose keys are 4-digit numbers in base 10. The goal is to hash these key values to a table of size 100 (i.e., a range of 0 to 99). This range is equivalent to two digits in base 10; that is, r = 2. If the input is the number 4567, squaring yields an 8-digit number, 20,857,489. The middle two digits of this result are 57. All digits (equivalently, all bits when the number is viewed in binary) contribute to the middle two digits of the squared value. Thus, the result is not dominated by the distribution of the bottom digit or the top digit of the original key value.

Example: Here is a hash function for strings of characters.

int h(String x, int M) {
  char ch[];
  ch = x.toCharArray();
  int xlength = x.length();

  int i, sum;
  for (sum = 0, i = 0; i < xlength; i++)
    sum += ch[i];                  // Sum the character codes
  return sum % M;
}

This function sums the ASCII values of the letters in a string. If the hash table size M is small, this hash function should do a good job of distributing strings evenly among the hash table slots, because it gives equal weight to all characters. This is an example of the folding approach to designing a hash function. Note that the order of the characters in the string has no effect on the result. A similar method for integers would add the digits of the key value, assuming that there are enough digits to (1) keep any one or two digits with a bad distribution from skewing the result and (2) generate a sum much larger than M. As with many other hash functions, the final step is to apply the modulus operator to the result, using table size M to generate a value within the table range. If the sum is not sufficiently large, then the modulus operator will yield a poor distribution.
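Returning to the mid-square example above: the text describes the method but gives no code, so here is a minimal sketch for the 4-digit decimal case. The method name is mine, and it assumes the key is large enough that its square has 8 digits, as in the worked example.

static int midSquare(int key) {          // key assumed to be a 4-digit number
  long square = (long) key * key;        // up to 8 digits
  return (int)((square / 1000) % 100);   // drop the 3 low digits, keep the middle 2
}
// midSquare(4567) == 57, matching the worked example: 4567 * 4567 = 20,857,489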
20,390 | For example (continuing the point about the sum being too small): because the ASCII value for 'A' is 65 and 'Z' is 90, sum will always be in the range 650 to 900 for a string of ten upper-case letters. For a hash table of size 100 or less, a reasonable distribution results, with all slots in the table accepting either two or three of the values in the key range. For a hash table of size 1000, the distribution is terrible because only slots 650 to 900 can possibly be the home slot for some key value.

Example: Here is a much better hash function for strings.

// Use folding on a string, summed 4 bytes at a time
long sfold(String s, int M) {
  int intLength = s.length() / 4;
  long sum = 0;
  for (int j = 0; j < intLength; j++) {
    char c[] = s.substring(j * 4, (j * 4) + 4).toCharArray();
    long mult = 1;
    for (int k = 0; k < c.length; k++) {
      sum += c[k] * mult;
      mult *= 256;
    }
  }

  char c[] = s.substring(intLength * 4).toCharArray();
  long mult = 1;
  for (int k = 0; k < c.length; k++) {
    sum += c[k] * mult;
    mult *= 256;
  }

  return (Math.abs(sum) % M);
}

This function takes a string as input. It processes the string four bytes at a time, and interprets each of the four-byte chunks as a single (unsigned) long integer value. The integer values for the four-byte chunks are added together. In the end, the resulting sum is converted to the range 0 to M − 1 using the modulus operator.² For example, if the string "aaaabbbb" is passed to sfold, then the first four bytes ("aaaa") will be interpreted as the integer value 1,633,771,873

² Recall from the earlier discussion of the modulus operator that the implementation of n mod m on many C++ and Java compilers will yield a negative number if n is negative. Implementors of hash functions need to be careful that their hash function does not generate a negative number. This can be avoided either by ensuring that n is positive when computing n mod m, or by adding m to the result if n mod m is negative. All computation in sfold is done using long values (treated as if unsigned), in part to protect against taking the modulus of a negative number.
20,391 | chap searching and the next four bytes ("bbbb"will be interpreted as the integer value , , , their sum is , , , (when treated as an unsigned integerif the table size is then the modulus function will cause this key to hash to slot in the table note that for any sufficiently long stringthe sum for the integer quantities will typically cause -bit integer to overflow (thus losing some of the high-order bitsbecause the resulting values are so large but this causes no problems when the goal is to compute hash function open hashing while the goal of hash function is to minimize collisionscollisions are normally unavoidable in practice thushashing implementations must include some form of collision resolution policy collision resolution techniques can be broken into two classesopen hashing (also called separate chainingand closed hashing (also called open addressing the difference between the two has to do with whether collisions are stored outside the table (open hashing)or whether collisions result in storing one of the records at another slot in the table (closed hashingopen hashing is treated in this sectionand closed hashing in section the simplest form of open hashing defines each slot in the hash table to be the head of linked list all records that hash to particular slot are placed on that slot' linked list figure illustrates hash table where each slot stores one record and link pointer to the rest of the list records within slot' list can be ordered in several waysby insertion orderby key value orderor by frequency-of-access order ordering the list by key value provides an advantage in the case of an unsuccessful searchbecause we know to stop searching the list once we encounter key that is greater than the one being searched for if records on the list are unordered or ordered by frequencythen an unsuccessful search will need to visit every record on the list given table of size storing recordsthe hash function will (ideallyspread the records evenly among the positions in the tableyielding on average / records for each list assuming that the table has more slots than there are records to be storedwe can hope that few slots will contain more than one record in the case where list is empty or has only one recorda search requires only one access to the list thusthe average cost for hashing should be th( howeverif clustering causes many records to hash to only few of the slotsthen the cost to yesit is confusing when "open hashingmeans the opposite of "open addressing,but unfortunatelythat is the way it is |
20,392 | sec hashing figure an illustration of open hashing for seven numbers stored in ten-slot hash table using the hash function (kk mod the numbers are inserted in the order and two of the values hash to slot one value hashes to slot three of the values hash to slot and one value hashes to slot access record will be much higher because many elements on the linked list must be searched open hashing is most appropriate when the hash table is kept in main memorywith the lists implemented by standard in-memory linked list storing an open hash table on disk in an efficient way is difficultbecause members of given linked list might be stored on different disk blocks this would result in multiple disk accesses when searching for particular key valuewhich defeats the purpose of using hashing there are similarities between open hashing and binsort one way to view open hashing is that each record is simply placed in bin while multiple records may hash to the same binthis initial binning should still greatly reduce the number of records accessed by search operation in similar fashiona simple binsort reduces the number of records in each bin to small number that can be sorted in some other way closed hashing closed hashing stores all records directly in the hash table each record with key value kr has home position that is (kr )the slot computed by the hash function if is to be inserted and another record already occupies ' home positionthen |
20,393 | chap searching hash table overflow figure an illustration of bucket hashing for seven numbers stored in fivebucket hash table using the hash function (kk mod each bucket contains two slots the numbers are inserted in the order and two of the values hash to bucket three values hash to bucket one value hashes to bucket and one value hashes to bucket because bucket cannot hold three valuesthe third one ends up in the overflow bucket will be stored at some other slot in the table it is the business of the collision resolution policy to determine which slot that will be naturallythe same policy must be followed during search as during insertionso that any record not found in its home position can be recovered by repeating the collision resolution process bucket hashing one implementation for closed hashing groups hash table slots into buckets the slots of the hash table are divided into bucketswith each bucket consisting of / slots the hash function assigns each record to the first slot within one of the buckets if this slot is already occupiedthen the bucket slots are searched sequentially until an open slot is found if bucket is entirely fullthen the record is stored in an overflow bucket of infinite capacity at the end of the table all buckets share the same overflow bucket good implementation will use hash function that distributes the records evenly among the buckets so that as few records as possible go into the overflow bucket figure illustrates bucket hashing when searching for recordthe first step is to hash the key to determine which bucket should contain the record the records in this bucket are then searched if |
20,394 | the desired key value is not found and the bucket still has free slotsthen the search is complete if the bucket is fullthen it is possible that the desired record is stored in the overflow bucket in this casethe overflow bucket must be searched until the record is found or all records in the overflow bucket have been checked if many records are in the overflow bucketthis will be an expensive process simple variation on bucket hashing is to hash key value to some slot in the hash table as though bucketing were not being used if the home position is fullthen the collision resolution process is to move down through the table toward the end of the bucket will searching for free slot in which to store the record if the bottom of the bucket is reachedthen the collision resolution routine wraps around to the top of the bucket to continue the search for an open slot for exampleassume that buckets contain eight recordswith the first bucket consisting of slots through if record is hashed to slot the collision resolution process will attempt to insert the record into the table in the order and finally if all slots in this bucket are fullthen the record is assigned to the overflow bucket the advantage of this approach is that initial collisions are reducedbecause any slot can be home position rather than just the first slot in the bucket bucket methods are good for implementing hash tables stored on diskbecause the bucket size can be set to the size of disk block whenever search or insertion occursthe entire bucket is read into memory because the entire bucket is then in memoryprocessing an insert or search operation requires only one disk accessunless the bucket is full if the bucket is fullthen the overflow bucket must be retrieved from disk as well naturallyoverflow should be kept small to minimize unnecessary disk accesses linear probing we now turn to the most commonly used form of hashingclosed hashing with no bucketingand collision resolution policy that can potentially use any slot in the hash table during insertionthe goal of collision resolution is to find free slot in the hash table when the home position for the record is already occupied we can view any collision resolution method as generating sequence of hash table slots that can potentially hold the record the first slot in the sequence will be the home position for the key if the home position is occupiedthen the collision resolution policy goes to the next slot in the sequence if this is occupied as wellthen another slot must be foundand so on this sequence of slots is known as the probe sequenceand it is generated by some probe function that we will call the insert function is shown in figure insertion works as follows |
20,395 |
/** Insert record r with key k into hash table HT */
void hashInsert(Key k, E r) {
  int home;                          // Home position for r
  int pos = home = h(k);             // Initial position
  for (int i = 1; HT[pos] != null; i++) {
    assert HT[pos].key().compareTo(k) != 0 :
      "Duplicates not allowed";      // Occupied slot must hold a different key
    pos = (home + p(k, i)) % M;      // Next probe slot
  }
  HT[pos] = new KVpair<Key,E>(k, r); // Insert R
}

Figure: Insertion method for a dictionary implemented by a hash table.

/** Search in hash table HT for the record with key k */
E hashSearch(Key k) {
  int home;                          // Home position for k
  int pos = home = h(k);             // Initial position
  for (int i = 1;
       (HT[pos] != null) && (HT[pos].key().compareTo(k) != 0);
       i++)
    pos = (home + p(k, i)) % M;      // Next probe position
  if (HT[pos] == null) return null;  // Key not in hash table
  else return HT[pos].value();       // Found it
}

Figure: Search method for a dictionary implemented by a hash table.

Method hashInsert first checks to see if the home slot for the key is empty. If the home slot is occupied, then we use the probe function, p(k, i), to locate a free slot in the table. Function p has two parameters, the key k and a count i for where in the probe sequence we wish to be. That is, to get the first position in the probe sequence after the home slot for key K, we call p(K, 1). For the next slot in the probe sequence, call p(K, 2). Note that the probe function returns an offset from the original home position, rather than a slot in the hash table. Thus, the for loop in hashInsert computes positions in the table at each iteration by adding the value returned by the probe function to the home position. The ith call to p returns the ith offset to be used.

Searching in a hash table follows the same probe sequence that was followed when inserting records. In this way, a record not in its home position can be recovered. A Java implementation for the search procedure is shown in the second figure above.

Both the insert and the search routines assume that at least one slot on the probe sequence of every key will be empty. Otherwise they will continue in an infinite loop on unsuccessful searches. Thus, the dictionary should keep a count of the
20,396 | sec hashing number of records storedand refuse to insert into table that has only one free slot the discussion on bucket hashing presented simple method of collision resolution if the home position for the record is occupiedthen move down the bucket until free slot is found this is an example of technique for collision resolution known as linear probing the probe function for simple linear probing is (kii that isthe ith offset on the probe sequence is just imeaning that the ith step is simply to move down slots in the table once the bottom of the table is reachedthe probe sequence wraps around to the beginning of the table linear probing has the virtue that all slots in the table will be candidates for inserting new record before the probe sequence returns to the home position it is important to understand thatwhile linear probing is probably the first idea that comes to mind when considering collision resolution policiesit is not the only one possible probe function allows us many options for how to do collision resolution in factlinear probing is one of the worst collision resolution methods the main problem is illustrated by figure herewe see hash table of ten slots used to store four-digit numberswith hash function (kk mod in figure ( )five numbers have been placed in the tableleaving five slots remaining the ideal behavior for collision resolution mechanism is that each empty slot in the table will have equal probability of receiving the next record inserted (assuming that every slot in the table has equal probability of being hashed to initiallyin this examplethe hash function gives each slot (roughlyequal probability of being the home position for the next key howeverconsider what happens to the next record if its key has its home position at slot linear probing will send the record to slot the same will happen to records whose home position is at slot record with home position at slot will remain in slot thusthe probability is / that the next record inserted will end up in slot in similar mannerrecords hashing to slots or will end up in slot howeveronly records hashing to slot will be stored in slot yielding one chance in ten of this happening likewisethere is only one chance in ten that the next record will be stored in slot one chance in ten for slot and one chance in ten for slot thusthe resulting probabilities are not equal to make matters worseif the next record ends up in slot (which already has higher than normal chance of happening)then the following record will end up |
20,397 | chap searching ( (bfigure example of problems with linear probing (afour values are inserted in the order and using hash function (kk mod (bthe value is added to the hash table in slot with probability / this is illustrated by figure (bthis tendency of linear probing to cluster items together is known as primary clustering small clusters tend to merge into big clustersmaking the problem worse the objection to primary clustering is that it leads to long probe sequences improved collision resolution methods how can we avoid primary clusteringone possible improvement might be to use linear probingbut to skip slots by constant other than this would make the probe function (kiciand so the ith slot in the probe sequence will be ( (kicmod in this wayrecords with adjacent home positions will not follow the same probe sequence for exampleif we were to skip by twosthen our offsets from the home slot would be then then and so on one quality of good probe sequence is that it will cycle through all slots in the hash table before returning to the home position clearly linear probing (which "skipsslots by one each timedoes this unfortunatelynot all values for will make this happen for exampleif and the table contains an even number of slotsthen any key whose home position is in an even slot will have probe |
20,398 | sec hashing sequence that cycles through only the even slots likewisethe probe sequence for key whose home position is in an odd slot will cycle through the odd slots thusthis combination of table size and linear probing constant effectively divides the records into two sets stored in two disjoint sections of the hash table so long as both sections of the table contain the same number of recordsthis is not really important howeverjust from chance it is likely that one section will become fuller than the otherleading to more collisions and poorer performance for those records the other section would have fewer recordsand thus better performance but the overall system performance will be degradedas the additional cost to the side that is more full outweighs the improved performance of the less-full side constant must be relatively prime to to generate linear probing sequence that visits all slots in the table (that isc and must share no factorsfor hash table of size if is any one of or then the probe sequence will visit all slots for any key when any value for between and generates probe sequence that visits all slots for every key consider the situation where and we wish to insert record with key such that ( the probe sequence for is and so on if another key has home position at slot then its probe sequence will be and so on the probe sequences of and are linked together in manner that contributes to clustering in other wordslinear probing with value of does not solve the problem of primary clustering we would like to find probe function that does not link keys together in this way we would prefer that the probe sequence for after the first step on the sequence should not be identical to the probe sequence of insteadtheir probe sequences should diverge the ideal probe function would select the next position on the probe sequence at random from among the unvisited slotsthat isthe probe sequence should be random permutation of the hash table positions unfortunatelywe cannot actually select the next position in the probe sequence at randombecause then we would not be able to duplicate this same probe sequence when searching for the key howeverwe can do something similar called pseudo-random probing in pseudo-random probingthe ith slot in the probe sequence is ( (kri mod where ri is the ith value in random permutation of the numbers from to all insertion and search operations must use the same random permutation the probe function would be (kiperm[ ]where perm is an array of length containing random permutation of the values from to |
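For concreteness, here is how the two probe functions just described might look in the form used by hashInsert and hashSearch earlier, as methods inside the hash-table class: each call returns the ith offset from the home position. The field names c and perm are assumptions for illustration; perm must hold the same random permutation of 1 to M − 1 for every insertion and search.

// Linear probing by a constant c: the ith offset is c*i.
int probeLinearByC(Key k, int i) { return c * i; }

// Pseudo-random probing: the ith offset is the ith value of a fixed
// random permutation of the values 1..M-1, stored in perm[0..M-2].
int probePseudoRandom(Key k, int i) { return perm[i - 1]; }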
20,399 | chap searching example consider table of size with perm[ perm[ and perm[ assume that we have two keys and where ( and ( the probe sequence for is then then then the probe sequence for is then then then thuswhile will probe to ' home position as its second choicethe two keysprobe sequences diverge immediately thereafter another probe function that eliminates primary clustering is called quadratic probing here the probe function is some quadratic function (kic for some choice of constants and the simplest variation is (kii ( and then the ith value in the probe sequence would be ( (ki mod under quadratic probingtwo keys with different home positions will have diverging probe sequences example given hash table of size assume for keys and that ( and ( the probe sequence for is then then then the probe sequence for is then then then thuswhile will probe to ' home position as its second choicethe two keysprobe sequences diverge immediately thereafter unfortunatelyquadratic probing has the disadvantage that typically not all hash table slots will be on the probe sequence using (kii gives particularly inconsistent results for many hash table sizesthis probe function will cycle through relatively small number of slots if all slots on that cycle happen to be fullthen the record cannot be inserted at allfor exampleif our hash table has three slotsthen records that hash to slot can probe only to slots and (that isthe probe sequence will never visit slot in the tablethusif slots and are fullthen the record cannot be inserted even though the table is not fulla more realistic example is table with slots the probe sequence starting from any given slot will only visit other slots in the table if all of these slots should happen to be fulleven if other slots in the table are emptythen the record cannot be inserted because the probe sequence will continually hit only those same slots fortunatelyit is possible to get good results from quadratic probing at low cost the right combination of probe function and table size will visit many slots in the table in particularif the hash table size is prime number and the probe function is (kii then at least half the slots in the table will be visited thusif the table is less than half fullwe can be certain that free slot will be |