If we implement the priority queue P using a sorted list, then we improve the running time of phase 2 to O(n), for each operation removeMin on P now takes O(1) time. Unfortunately, phase 1 now becomes the bottleneck for the running time, since, in the worst case, each insert operation takes time proportional to the current size of P. This sorting algorithm is therefore better known as insertion-sort (see the figure), for the bottleneck in this sorting algorithm involves the repeated "insertion" of a new element at the appropriate position in a sorted list.

Figure: execution of insertion-sort on a sequence. In phase 1, we repeatedly remove the first element of S and insert it into P, by scanning the list implementing P until we find the correct place for this element. In phase 2, we repeatedly perform removeMin operations on P, each of which returns the first element of the list implementing P, and we add the element at the end of S.

Analyzing the running time of phase 1 of insertion-sort, we note that it is proportional to the sum 1 + 2 + ··· + n. Again, by recalling that this sum is O(n²), phase 1 runs in O(n²) time, and hence, so does the entire insertion-sort algorithm. Alternatively, we could change our definition of insertion-sort so that we insert elements starting from the end of the priority-queue list in phase 1, in which case performing insertion-sort on a sequence that is already sorted would run in O(n) time. Indeed, the running time of insertion-sort in this case is O(n + I), where I is the number of inversions in the sequence, that is, the number of pairs of elements that start out in the input sequence in the wrong relative order.
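The phase-1 behavior described above can be sketched as plain array-based insertion-sort (a minimal illustration of the same idea, not the book's list-based priority-queue code):

```java
import java.util.Arrays;

public class InsertionSortDemo {
    // Sort a[] in nondecreasing order by repeatedly inserting a[i]
    // into the already-sorted prefix a[0..i-1], scanning from the right.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int cur = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > cur) {  // shift larger elements right
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = cur;  // insert at its correct position
        }
    }

    public static void main(String[] args) {
        int[] a = {7, 4, 8, 2, 5, 3, 9};
        insertionSort(a);
        System.out.println(Arrays.toString(a)); // [2, 3, 4, 5, 7, 8, 9]
    }
}
```

Note that on an already-sorted input the inner while loop never executes, matching the O(n + I) bound discussed above.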
Heaps

The two implementations of the PriorityQueueSort scheme presented in the previous section suggest a possible way of improving the running time for priority-queue sorting. For one algorithm (selection-sort) achieves a fast running time for phase 1 but has a slow phase 2, whereas the other algorithm (insertion-sort) has a slow phase 1 but achieves a fast running time for phase 2. If we can somehow balance the running times of the two phases, we might be able to significantly speed up the overall running time for sorting. This is, in fact, exactly what we can achieve using the priority-queue implementation discussed in this section: the heap. This data structure allows us to perform both insertions and removals in logarithmic time, which is a significant improvement over the list-based implementations discussed in the previous section. The fundamental way the heap achieves this improvement is to abandon the idea of storing entries in a list and take the approach of storing entries in a binary tree instead.

The Heap Data Structure

A heap (see the figure) is a binary tree T that stores a collection of entries at its nodes and that satisfies two additional properties: a relational property, defined in terms of the way keys are stored in T, and a structural property, defined in terms of the nodes of T itself. We assume that a total order relation on the keys is given, for example, by a comparator.

The relational property of T, defined in terms of the way keys are stored, is the following:

Heap-Order Property: In a heap T, for every node v other than the root, the key stored at v is greater than or equal to the key stored at v's parent.

As a consequence of the heap-order property, the keys encountered on a path from the root to an external node of T are in nondecreasing order. Also, a minimum key is always stored at the root of T. This is the most important key and is informally said to be "at the top of the heap"; hence, the name "heap" for the data structure. By the way, the heap data structure defined here has nothing to do with the memory heap used in the run-time environment supporting a programming language like Java.

If we define our comparator to indicate the opposite of the standard total order relation between keys, then the root of the heap stores the largest key. This versatility comes essentially "for free" from our use of the comparator pattern. By defining the minimum key in terms of the comparator, the "minimum" key with a "reverse" comparator is in fact the largest.

Figure: example of a heap storing entries with integer keys, indicating the last node.
Recall that the heap-order property guarantees that a minimum key is always stored at the root of the heap. For the sake of efficiency, as will become clear later, we want the heap to have as small a height as possible. We enforce this requirement by insisting that the heap satisfy an additional structural property: it must be complete. Before we define this structural property, we need some definitions. We recall that level i of a binary tree T is the set of nodes of T that have depth i. Given nodes v and w on the same level of T, we say that v is to the left of w if v is encountered before w in an inorder traversal of T. That is, there is a node u of T such that v is in the left subtree of u and w is in the right subtree of u. In a standard drawing of a binary tree, the "to the left of" relation is visualized by the relative horizontal placement of the nodes.

Complete Binary Tree Property: A heap T with height h is a complete binary tree if levels 0, 1, 2, ..., h−1 of T have the maximum number of nodes possible (namely, level i has 2^i nodes, for 0 ≤ i ≤ h−1) and, in level h−1, all the internal nodes are to the left of the external nodes and there is at most one node with one child, which must be a left child.

By insisting that a heap T be complete, we identify another important node in a heap T, other than the root, namely, the last node of T, which we define to be the right-most, deepest external node of T (see the figure).

The Height of a Heap

Let h denote the height of T. Another way of defining the last node of T is that it is the node on level h such that all the other nodes of level h are to the left of it. Insisting that T be complete also has an important consequence, as shown in the following proposition.
Proposition: A heap T storing n entries has height h = ⌊log n⌋.

Justification: From the fact that T is complete, we know that the number of nodes of T is at least 1 + 2 + 4 + ··· + 2^(h−1) + 1 = (2^h − 1) + 1 = 2^h. This lower bound is achieved when there is only one node on level h. In addition, also following from T being complete, we have that the number of nodes of T is at most 1 + 2 + 4 + ··· + 2^h = 2^(h+1) − 1. This upper bound is achieved when level h has 2^h nodes. Since the number of nodes is equal to the number n of entries, we obtain 2^h ≤ n and n ≤ 2^(h+1) − 1. Thus, by taking logarithms of both sides of these two inequalities, we see that h ≤ log n and log(n+1) − 1 ≤ h. Since h is an integer, the two inequalities above imply that h = ⌊log n⌋.

This proposition has an important consequence, for it implies that if we can perform update operations on a heap in time proportional to its height, then those operations will run in logarithmic time. Let us therefore turn to the problem of how to efficiently perform various priority queue methods using a heap.
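As a quick sanity check of the bound h = ⌊log n⌋ (a throwaway demonstration, not part of the book's code), we can compare the height reached by filling levels one at a time against the closed-form value:

```java
public class HeapHeightCheck {
    // For a complete binary tree with n nodes, 2^h <= n <= 2^(h+1) - 1,
    // so the height is h = floor(log2 n). We verify this by simulation:
    // fill levels 0, 1, 2, ... and record the height reached for each n.
    static int heightByFilling(int n) {
        int level = 0, capacity = 1, placed = 0;
        while (placed + capacity < n) {  // current levels cannot hold all n nodes
            placed += capacity;
            capacity *= 2;               // next level holds twice as many nodes
            level++;
        }
        return level;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 16; n++) {
            int bySimulation = heightByFilling(n);
            int byFormula = 31 - Integer.numberOfLeadingZeros(n); // floor(log2 n)
            if (bySimulation != byFormula) throw new AssertionError("mismatch at n=" + n);
        }
        System.out.println("height matches floor(log2 n) for n = 1..16");
    }
}
```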
Complete Binary Trees and Their Representation

Let us discuss more about complete binary trees and how they are represented. As an abstract data type, a complete binary tree T supports all the methods of the binary tree ADT, plus the following two methods:

add(e): Add to T and return a new external node v storing element e, such that the resulting tree is a complete binary tree with last node v.

remove(): Remove the last node of T and return its element.

Using only these update operations guarantees that we will always have a complete binary tree. As shown in the figure, there are two cases for the effect of an add or remove. Specifically, for an add, we have the following (remove is similar). If the bottom level of T is not full, then add inserts a new node on the bottom level of T, immediately after the right-most node of this level (that is, the last node); hence, T's height remains the same. If the bottom level is full, then add inserts a new node as the left child of the left-most node of the bottom level of T; hence, T's height increases by one.

Figure: examples of operations add and remove on a complete binary tree, where w denotes the node inserted by add or deleted by remove. The trees shown in (b) and (d) are the results of performing add operations on the trees in (a) and (c), respectively. Likewise, the trees shown in (a) and (c) are the results of performing remove operations on the trees in (b) and (d), respectively.
The array-list binary tree representation is especially suitable for a complete binary tree T. We recall that in this implementation, the nodes of T are stored in an array list A such that node v in T is the element of A with index equal to the level number f(v) of v, defined as follows. If v is the root of T, then f(v) = 1. If v is the left child of node u, then f(v) = 2f(u). If v is the right child of node u, then f(v) = 2f(u) + 1. With this implementation, the nodes of T have contiguous indices in the range [1, n], and the last node of T is always at index n, where n is the number of nodes of T. The figure shows two examples illustrating this property of the last node.

Figure: two examples showing that the last node w of a heap with n nodes has level number n: (a) a heap with more than one node on the bottom level; (b) a heap with one node on the bottom level; (c) the array-list representation of the heap in (a); (d) the array-list representation of the heap in (b).
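The level-numbering arithmetic above can be captured in a few helper functions (an illustrative sketch; the names parent, left, and right are ours, not the book's):

```java
public class HeapIndices {
    // Level numbers as in the array-list representation described above:
    // the root is at index 1, and index 0 is unused (a place-holder).
    static int parent(int i) { return i / 2; }     // f(u) from f(v) = 2f(u) or 2f(u)+1
    static int left(int i)   { return 2 * i; }     // left child of node at index i
    static int right(int i)  { return 2 * i + 1; } // right child of node at index i

    public static void main(String[] args) {
        // The nodes of a complete tree with n nodes occupy indices 1..n,
        // and the last node is always at index n.
        System.out.println(parent(5)); // 2
        System.out.println(left(3));   // 6
        System.out.println(right(3));  // 7
    }
}
```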
Using an array list A also aids in the implementation of methods add and remove. Assuming that no array expansion is necessary, methods add and remove can be performed in O(1) time, for they simply involve adding or removing the last element of the array list. Moreover, the array list associated with T has n + 1 elements (the element at index 0 is a place-holder). If we use an extendable array that grows and shrinks for the implementation of the array list, the space used by the array-list representation of a complete binary tree with n nodes is O(n), and operations add and remove take O(1) amortized time.

A Java Implementation of a Complete Binary Tree

We represent the complete binary tree ADT in interface CompleteBinaryTree, shown in the code fragment below. We provide a Java class ArrayListCompleteBinaryTree that implements the CompleteBinaryTree interface with an array list and supports methods add and remove in O(1) time.

Code Fragment: Interface CompleteBinaryTree for a complete binary tree.

Code Fragment: Class ArrayListCompleteBinaryTree implementing interface CompleteBinaryTree using an array list.
Code Fragment: Class ArrayListCompleteBinaryTree implementing the complete binary tree ADT (continues in the next code fragment).
Code Fragment: Class ArrayListCompleteBinaryTree implementing the complete binary tree ADT; methods children and positions are omitted (continued from the previous code fragment).
Implementing a Priority Queue with a Heap

We now discuss how to implement a priority queue using a heap. Our heap-based representation for a priority queue P consists of the following (see the figure):

heap: A complete binary tree T whose nodes store the entries of P so that the heap-order property is satisfied. We assume T is implemented using an array list, as described above. For each node v of T, we denote the key of the entry stored at v as k(v).

comp: A comparator that defines the total order relation among the keys.

With this data structure, methods size and isEmpty take O(1) time, as usual. In addition, method min can also be easily performed in O(1) time by accessing the entry stored at the root of the heap (which is at index 1 in the array list).

Insertion

Let us consider how to perform insert on a priority queue implemented with a heap T. To store a new entry (k,x) into T, we add a new node z to T with operation add, so that this new node becomes the last node of T and stores entry (k,x). After this action, the tree T is complete, but it may violate the heap-order property. Hence, unless node z is the root of T (that is, the priority queue was empty before the insertion), we compare key k(z) with the key k(u) stored at the parent u of z. If k(z) ≥ k(u), the heap-order property is satisfied and the algorithm terminates. If instead k(z) < k(u), then we need to restore the heap-order property, which can be locally achieved by swapping the entries stored at z and u (see the figure). This swap causes the new entry to move up one level. Again, the heap-order property may be violated, and we continue swapping, going up in T until no violation of the heap-order property occurs (see the figure).

Figure: illustration of the heap-based implementation of a priority queue.
Figure: insertion of a new entry into the heap of the previous figure: (a) initial heap; (b) after performing operation add; (c and d) swap to locally restore the heap-order property; (e and f) another swap; (g and h) final swap.
Up-Heap Bubbling After an Insertion

The upward movement of the newly inserted entry by means of swaps is conventionally called up-heap bubbling. A swap either resolves the violation of the heap-order property or propagates it one level up in the heap. In the worst case, up-heap bubbling causes the new entry to move all the way up to the root of heap T (see the figure). Thus, in the worst case, the number of swaps performed in the execution of method insert is equal to the height of T, that is, it is ⌊log n⌋ by the proposition above.
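Up-heap bubbling can be sketched on a plain int array using the level numbering from earlier (root at index 1; a simplified sketch, not the book's HeapPriorityQueue code):

```java
import java.util.Arrays;

public class UpHeapDemo {
    // Array-based min-heap with the root at index 1 (index 0 unused).
    // upHeap restores the heap-order property after an insertion at index i
    // by swapping with the parent while the parent's key is larger.
    static void upHeap(int[] h, int i) {
        while (i > 1 && h[i / 2] > h[i]) {
            int tmp = h[i]; h[i] = h[i / 2]; h[i / 2] = tmp;
            i /= 2; // continue one level up
        }
    }

    public static void main(String[] args) {
        // Heap {4, 5, 6, 15, 9, 7, 20} in level order; key 2 was just added
        // as the last node (index 8), so we up-heap from there.
        int[] h = {0, 4, 5, 6, 15, 9, 7, 20, 2};
        upHeap(h, 8);
        System.out.println(Arrays.toString(h)); // [0, 2, 4, 6, 5, 9, 7, 20, 15]
    }
}
```

Here the new entry swaps three times, all the way to the root, which is the worst case described above.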
Removal

The algorithm for performing method removeMin using heap T is illustrated in the figure. We know that an entry with the smallest key is stored at the root r of T (even if there is more than one entry with the smallest key). However, unless r is the only node of T, we cannot simply delete node r, because this action would disrupt the binary tree structure. Instead, we access the last node w of T, copy its entry to the root r, and then delete the last node by performing operation remove of the complete binary tree ADT (see the figure).

Down-Heap Bubbling After a Removal

We are not necessarily done, however, for, even though T is now complete, T may now violate the heap-order property. If T has only one node (the root), then the heap-order property is trivially satisfied and the algorithm terminates. Otherwise, we distinguish two cases, where r denotes the root of T: if r has no right child, let s be the left child of r; otherwise (r has both children), let s be the child of r with the smallest key. If k(r) ≤ k(s), the heap-order property is satisfied and the algorithm terminates. If instead k(r) > k(s), then we need to restore the heap-order property, which can be locally achieved by swapping the entries stored at r and s (see the figure). (Note that we shouldn't swap r with s's sibling.) The swap we perform restores the heap-order property for node r and its children, but it may violate this property at s; hence, we may have to continue swapping down T until no violation of the heap-order property occurs (see the figure). This downward swapping process is called down-heap bubbling. A swap either resolves the violation of the heap-order property or propagates it one level down in the heap. In the worst case, an entry moves all the way down to the bottom level (see the figure). Thus, the number of swaps performed in the execution of method removeMin is, in the worst case, equal to the height of heap T, that is, it is ⌊log n⌋.

Figure: removal of the entry with the smallest key from a heap: (a and b) deletion of the last node, whose entry gets stored into the root; (c and d) swap to locally restore the heap-order property; (e and f) another swap; (g and h) final swap.
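Down-heap bubbling can be sketched in the same simplified int-array style (root at index 1; again a sketch, not the book's code):

```java
import java.util.Arrays;

public class DownHeapDemo {
    // Array-based min-heap with root at index 1; n is the number of entries.
    // removeMin copies the last entry to the root, shrinks the heap, and
    // down-heap bubbles: swap with the smaller child while out of order.
    static int removeMin(int[] h, int n) {
        int min = h[1];
        h[1] = h[n];              // move the last entry to the root
        n--;                      // the heap now has one fewer entry
        int i = 1;
        while (2 * i <= n) {
            int c = 2 * i;        // left child
            if (c + 1 <= n && h[c + 1] < h[c]) c++;  // pick the smaller child
            if (h[i] <= h[c]) break;                 // heap order restored
            int tmp = h[i]; h[i] = h[c]; h[c] = tmp;
            i = c;                // continue one level down
        }
        return min;
    }

    public static void main(String[] args) {
        int[] h = {0, 2, 5, 4, 15, 9, 7, 20};
        int min = removeMin(h, 7);
        System.out.println(min);  // 2
        // Index 7 is now stale; the heap occupies indices 1..6.
        System.out.println(Arrays.toString(h)); // [0, 4, 5, 7, 15, 9, 20, 20]
    }
}
```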
The table below shows the running time of the priority queue ADT methods for the heap implementation of a priority queue, assuming that two keys can be compared in O(1) time and that the heap is implemented with either an array list or a linked structure.
Table: performance of a priority queue realized by means of a heap, which is in turn implemented with an array list or a linked structure. We denote with n the number of entries in the priority queue at the time a method is executed. The space requirement is O(n). The running time of operations insert and removeMin is amortized for the array-list implementation of the heap and worst case for the linked representation.

Operation        Time
size, isEmpty    O(1)
min              O(1)
insert           O(log n)
removeMin        O(log n)

In short, each of the priority queue ADT methods can be performed in O(1) or in O(log n) time, where n is the number of entries at the time the method is executed. The analysis of the running time of the methods is based on the following: the heap T has n nodes, each storing a reference to an entry; operations add and remove on T take either O(1) amortized time (array-list representation) or O(log n) worst-case time (linked representation); in the worst case, up-heap and down-heap bubbling perform a number of swaps equal to the height of T; and the height of heap T is O(log n), since T is complete.
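For comparison, the Java standard library's java.util.PriorityQueue is itself a heap-based priority queue with the same bounds (offer and poll in O(log n), peek in O(1)):

```java
import java.util.PriorityQueue;

public class BuiltInHeapDemo {
    public static void main(String[] args) {
        // A min-heap of integers using natural ordering.
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.offer(4);
        pq.offer(1);
        pq.offer(3);
        System.out.println(pq.peek()); // 1 (minimum at the top of the heap)
        System.out.println(pq.poll()); // 1
        System.out.println(pq.poll()); // 3
    }
}
```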
Both realizations give O(log n)-time update methods for the priority queue ADT, independent of whether the heap is implemented with a linked structure or an array list. The heap-based implementation achieves fast running times for both insertion and removal, unlike the list-based priority queue implementations. Indeed, an important consequence of the efficiency of the heap-based implementation is that it can speed up priority-queue sorting to be much faster than the list-based insertion-sort and selection-sort algorithms.

A Java Heap Implementation

A Java implementation of a heap-based priority queue is shown in the code fragments below. To aid in modularity, we delegate the maintenance of the structure of the heap itself to a complete binary tree.

Code Fragment: Class HeapPriorityQueue, which implements a priority queue with a heap. A nested class MyEntry is used for the entries of the priority queue, which form the elements in the heap tree (continues in the next code fragment).
Code Fragment: Methods min, insert, and removeMin, and some auxiliary methods, of class HeapPriorityQueue (continues in the next code fragment).
Code Fragment: Remaining auxiliary methods of class HeapPriorityQueue (continued from the previous code fragment).
Heap-Sort

As we have just seen, the heap-based implementation of a priority queue has the advantage that all the methods in the priority queue ADT run in logarithmic time or better. Hence, this realization is suitable for applications where fast running times are sought for all the priority queue methods. Therefore, let us again consider the PriorityQueueSort sorting scheme, which uses a priority queue P to sort a sequence S with n elements.

During phase 1, the i-th insert operation (1 ≤ i ≤ n) takes O(1 + log i) time, since the heap has i entries after the operation is performed. Likewise, during phase 2, the j-th removeMin operation (1 ≤ j ≤ n) runs in time O(1 + log(n − j + 1)), since the heap has n − j + 1 entries at the time the operation is performed. Thus, each phase takes O(n log n) time, so the entire priority-queue sorting algorithm runs in O(n log n) time when we use a heap to implement the priority queue. This sorting algorithm is better known as heap-sort, and its performance is summarized in the following proposition.

Proposition: The heap-sort algorithm sorts a sequence S of n elements in O(n log n) time, assuming two elements of S can be compared in O(1) time.

Let us stress that the O(n log n) running time of heap-sort is considerably better than the O(n²) running time of selection-sort and insertion-sort.

Implementing Heap-Sort In-Place

If the sequence S to be sorted is implemented by means of an array, we can speed up heap-sort and reduce its space requirement by a constant factor using a portion of the sequence S itself to store the heap, thus avoiding the use of an external heap data structure. This is accomplished by modifying the algorithm as follows.

We use a reverse comparator, which corresponds to a heap where an entry with the largest key is at the top. At any time during the execution of the algorithm, we use the left portion of S, up to a certain index i − 1, to store the entries of the heap, and the right portion of S, from index i to n − 1, to store the elements of the sequence. Thus, the first i elements of S (at indices 0, ..., i − 1) provide the array-list representation of the heap (with modified level numbers starting at 0 instead of 1); that is, the element at index k is greater than or equal to its "children" at indices 2k + 1 and 2k + 2.

In the first phase of the algorithm, we start with an empty heap and move the boundary between the heap and the sequence from left to right, one step at a time. In step i (i = 1, ..., n), we expand the heap by adding the element at index i − 1.

In the second phase of the algorithm, we start with an empty sequence and move the boundary between the heap and the sequence from right to left, one step at a time. At step i (i = 1, ..., n), we remove a maximum element from the heap and store it at index n − i.
The above variation of heap-sort is said to be in-place because we use only a small amount of space in addition to the sequence itself. Instead of transferring elements out of the sequence and then back in, we simply rearrange them. We illustrate in-place heap-sort in the figure. In general, we say that a sorting algorithm is in-place if it uses only a small amount of memory in addition to the sequence storing the objects to be sorted.

Figure: first three steps of phase 1 of in-place heap-sort. The heap portion of the sequence is highlighted in blue. We draw next to the sequence a binary tree view of the heap, even though this tree is not actually constructed by the in-place algorithm.
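The two phases above can be sketched as a self-contained in-place heap-sort (0-indexed, so children of index i are at 2i + 1 and 2i + 2). One liberty taken in this sketch: the max-heap is built bottom-up, as in the next section, rather than by the successive additions described above.

```java
import java.util.Arrays;

public class InPlaceHeapSort {
    // In-place heap-sort: build a max-heap over a[0..n-1], then repeatedly
    // swap the maximum a[0] to the end of the heap region and shrink it.
    static void heapSort(int[] a) {
        int n = a.length;
        for (int i = n / 2 - 1; i >= 0; i--) downHeap(a, i, n); // build max-heap
        for (int end = n - 1; end > 0; end--) {
            int tmp = a[0]; a[0] = a[end]; a[end] = tmp; // move max into place
            downHeap(a, 0, end);                         // restore heap order
        }
    }

    static void downHeap(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int c = 2 * i + 1;                       // left child
            if (c + 1 < n && a[c + 1] > a[c]) c++;   // larger child (max-heap)
            if (a[i] >= a[c]) break;
            int tmp = a[i]; a[i] = a[c]; a[c] = tmp;
            i = c;
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 7, 2, 1, 3, 9, 5};
        heapSort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 7, 9]
    }
}
```

Only a constant number of local variables are used beyond the array itself, which is what makes the algorithm in-place.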
Bottom-Up Heap Construction

The analysis of the heap-sort algorithm shows that we can construct a heap storing n entries in O(n log n) time, by means of n successive insert operations, and then use that heap to extract the entries in order by nondecreasing key. However, if all the key-value pairs to be stored in the heap are given in advance, there is an alternative bottom-up construction method that runs in O(n) time. We describe this method in this section, observing that it could be included as one of the constructors of a class implementing a heap-based priority queue.

For simplicity of exposition, we describe bottom-up heap construction assuming the number n of keys is an integer of the type n = 2^(h+1) − 1. That is, the heap is a complete binary tree with every level being full, so the heap has height h = log(n + 1) − 1. Viewed nonrecursively, bottom-up heap construction consists of the following h + 1 steps.
In the first step (see the figure), we construct (n+1)/2 elementary heaps storing one entry each.

In the second step (see the figure), we form (n+1)/4 heaps, each storing three entries, by joining pairs of elementary heaps and adding a new entry. The new entry is placed at the root and may have to be swapped with the entry stored at a child to preserve the heap-order property.

In the third step (see the figure), we form (n+1)/8 heaps, each storing seven entries, by joining pairs of three-entry heaps (constructed in the previous step) and adding a new entry. The new entry is placed initially at the root, but may have to move down with down-heap bubbling to preserve the heap-order property.

In the generic i-th step, 2 ≤ i ≤ h, we form (n+1)/2^i heaps, each storing 2^i − 1 entries, by joining pairs of heaps storing 2^(i−1) − 1 entries (constructed in the previous step) and adding a new entry. The new entry is placed initially at the root, but may have to move down with down-heap bubbling to preserve the heap-order property.

In the last step (see the figure), we form the final heap, storing all n entries, by joining two heaps storing (n−1)/2 entries (constructed in the previous step) and adding a new entry. The new entry is placed initially at the root, but may have to move down with down-heap bubbling to preserve the heap-order property.

We illustrate bottom-up heap construction in the figure.

Figure: bottom-up construction of a heap: (a) we begin by constructing one-entry heaps on the bottom level; (b and c) we combine these heaps into three-entry heaps, and then (d and e) seven-entry heaps, until (f and g) we create the final heap. The paths of the down-heap bubblings are highlighted in blue. For simplicity, we only show the key within each node instead of the entire entry.
We can also describe bottom-up heap construction as a recursive algorithm, as shown in the code fragment below, which we call by passing a list storing the key-value pairs for which we wish to build a heap.

Code Fragment: Recursive bottom-up heap construction.
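Since the book's code fragment is not reproduced here, a rough nonrecursive sketch under the array conventions used earlier (root at index 1) may help: down-heap every internal node, from the last one up to the root.

```java
import java.util.Arrays;

public class BottomUpHeap {
    // Bottom-up min-heap construction (1-indexed, index 0 unused):
    // down-heap each internal node, from index n/2 down to the root.
    // Each node's down-heap cost is bounded by the path associated with it
    // in the text's analysis, giving O(n) total time.
    static void buildHeap(int[] h, int n) {
        for (int i = n / 2; i >= 1; i--) downHeap(h, i, n);
    }

    static void downHeap(int[] h, int i, int n) {
        while (2 * i <= n) {
            int c = 2 * i;                           // left child
            if (c + 1 <= n && h[c + 1] < h[c]) c++;  // smaller child (min-heap)
            if (h[i] <= h[c]) break;
            int tmp = h[i]; h[i] = h[c]; h[c] = tmp;
            i = c;
        }
    }

    public static void main(String[] args) {
        int[] h = {0, 9, 6, 7, 3, 8, 1, 4};  // 7 keys, so every level is full
        buildHeap(h, 7);
        System.out.println(Arrays.toString(h)); // [0, 1, 3, 4, 6, 8, 7, 9]
    }
}
```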
Bottom-up heap construction is asymptotically faster than repeatedly inserting n keys into an initially empty heap, as the following proposition shows.

Proposition: Bottom-up construction of a heap with n entries takes O(n) time, assuming two keys can be compared in O(1) time.

Justification: We analyze bottom-up heap construction using a "visual" approach, which is illustrated in the figure. Let T be the final heap, let v be a node of T, and let T(v) denote the subtree of T rooted at v. In the worst case, the time for forming T(v) from the two recursively formed subtrees rooted at v's children is proportional to the height of T(v). The worst case occurs when down-heap bubbling from v traverses a path from v all the way to a bottom-most node of T(v).

Now consider the path p(v) of T from node v to its inorder successor external node, that is, the path that starts at v, goes to the right child of v, and then goes down leftward until it reaches an external node. We say that path p(v) is associated with node v. Note that p(v) is not necessarily the path followed by down-heap bubbling when forming T(v). Clearly, the size (number of nodes) of p(v) is equal to the height of T(v) plus one. Hence, forming T(v) takes time proportional to the size of p(v), in the worst case. Thus, the total running time of bottom-up heap construction is proportional to the sum of the sizes of the paths associated with the nodes of T.

Observe that each node v of T distinct from the root belongs to exactly two such paths: the path p(v) associated with v itself and the path p(u) associated with the parent u of v (see the figure). Also, the root r of T belongs only to path p(r) associated with r itself. Therefore, the sum of the sizes of the paths associated with the internal nodes of T is O(n). We conclude that the bottom-up construction of heap T takes O(n) time.
Figure: visual justification of the linear running time of bottom-up heap construction, where the paths associated with the internal nodes have been highlighted with alternating colors. For example, the figure highlights the path associated with the root and the path associated with the right child of the root.

To summarize, the proposition above states that the running time for the first phase of heap-sort can be reduced to be O(n). Unfortunately, the running time of the second phase of heap-sort cannot be made asymptotically better than O(n log n); that is, it will always be Ω(n log n) in the worst case. We will not justify this lower bound until later, however. Instead, we conclude this discussion by presenting a design pattern that allows us to extend the priority queue ADT to have additional functionality.

Adaptable Priority Queues

The methods of the priority queue ADT are sufficient for most basic applications of priority queues, such as sorting. However, there are situations where additional methods would be useful, as shown in the scenarios below, which refer to the standby airline passenger application.

A standby passenger with a pessimistic attitude may become tired of waiting and decide to leave ahead of the boarding time, requesting to be removed from the waiting list.
Hence, we would like to remove from the priority queue the entry associated with this passenger. Operation removeMin is not suitable for this purpose, since the passenger leaving is unlikely to have first priority. Instead, we would like to have a new operation remove(e) that removes an arbitrary entry e.

Another standby passenger finds her gold frequent-flyer card and shows it to the agent. Thus, her priority has to be modified accordingly. To achieve this change of priority, we would like to have a new operation replaceKey(e,k) that replaces with k the key of entry e in the priority queue.

Finally, a third standby passenger notices her name is misspelled on the ticket and asks it to be corrected. To perform the change, we need to update the passenger's record. Hence, we would like to have a new operation replaceValue(e,x) that replaces with x the value of entry e in the priority queue.

Methods of the Adaptable Priority Queue ADT

The above scenarios motivate the definition of a new ADT that extends the priority queue ADT with methods remove, replaceKey, and replaceValue. Namely, an adaptable priority queue P supports the following methods in addition to those of the priority queue ADT:

remove(e): Remove from P and return entry e.

replaceKey(e,k): Replace with k and return the key of entry e of P; an error condition occurs if k is invalid (that is, k cannot be compared with other keys).

replaceValue(e,x): Replace with x and return the value of entry e of P.

Example: The following series of operations, performed on an initially empty adaptable priority queue, illustrates these methods.
(The table accompanying this example lists each operation, its output, and the resulting contents of the priority queue.)
Location-Aware Entries

In order to implement methods remove, replaceKey, and replaceValue of an adaptable priority queue P, we need a mechanism for finding the position of an entry of P. Namely, given the entry e of P passed as an argument to one of the above methods, we need to find the position storing e in the data structure implementing P (for example, a doubly linked list or a heap). This position is called the location of the entry.

Instead of searching for the location of a given entry e, we augment the entry object with an instance variable of type Position storing the location. This implementation of an entry that keeps track of its position is called a location-aware entry. A summary description of the use of location-aware entries for the sorted list and heap implementations of an adaptable priority queue is provided below. We denote with n the number of entries in the priority queue at the time an operation is performed.

Sorted list implementation: In this implementation, after an entry is inserted, we set the location of the entry to refer to the position of the list containing the entry. Also, we update the location of the entry whenever it changes position in the list. Operations remove(e) and replaceValue(e,x) take O(1) time, since we can obtain the position of entry e in O(1) time following the location reference stored with the entry. Instead, operation replaceKey(e,k) runs in O(n) time, because the modification of the key of entry e may require moving the entry to a different position in the list to preserve the ordering of the keys. The use of location-aware entries increases the running time of the standard priority queue operations by a constant factor.

Heap implementation: In this implementation, after an entry is inserted, we set the location of the entry to refer to the node of the heap containing the entry. Also, we update the location of the entry whenever it changes node in the heap (for example, because of the swaps in a down-heap or up-heap bubbling). Operation replaceValue(e,x) takes O(1) time, since we can obtain the position of entry e in O(1) time following the location reference stored with the entry. Operations remove(e) and replaceKey(e,k) run instead in O(log n) time (details are explored in the exercises). The use of location-aware entries increases the running time of operations insert and removeMin by a constant-factor overhead. The use of location-aware entries for the unsorted list implementation is explored in the exercises.

Performance of Adaptable Priority Queue Implementations
The performance of an adaptable priority queue implemented by means of our three data structures with location-aware entries is summarized in the table below.

Table: running times of the methods of an adaptable priority queue of size n, realized by means of an unsorted list, a sorted list, and a heap, respectively. The space requirement is O(n).

Method          Unsorted List   Sorted List   Heap
size, isEmpty   O(1)            O(1)          O(1)
insert          O(1)            O(n)          O(log n)
min             O(n)            O(1)          O(1)
removeMin       O(n)            O(1)          O(log n)
remove          O(1)            O(1)          O(log n)
replaceKey      O(1)            O(n)          O(log n)
replaceValue    O(1)            O(1)          O(1)

Implementing an Adaptable Priority Queue

In the code fragments below, we show a Java implementation of an adaptable priority queue based on a sorted list. This implementation is obtained by extending class SortedListPriorityQueue. In particular, the first code fragment shows how to realize a location-aware entry in Java by extending a regular entry.

Code Fragment: Java implementation of an adaptable priority queue by means of a sorted list storing location-aware entries. Class SortedListAdaptablePriorityQueue extends class SortedListPriorityQueue and implements interface AdaptablePriorityQueue (continues in the next code fragment).
Code Fragment: An adaptable priority queue implemented with a sorted list storing location-aware entries (continued from the previous code fragment). The location-aware entries extend the regular entries of class SortedListPriorityQueue shown earlier.
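Since the book's code fragments are not reproduced above, a minimal self-contained sketch of location-aware entries may help. This sketch is heap-based rather than sorted-list-based, and all class and method names here are ours, not the book's: each entry stores its current array index, which every swap keeps up to date, so replaceKey(e,k) can find e in O(1) time and then re-bubble in O(log n) time.

```java
import java.util.ArrayList;

public class LocationAwareHeap {
    static class Entry {
        int key; String value;
        int loc;  // location: current index of this entry in the heap array
        Entry(int k, String v) { key = k; value = v; }
    }

    ArrayList<Entry> heap = new ArrayList<>(); // 0-indexed min-heap

    Entry insert(int k, String v) {
        Entry e = new Entry(k, v);
        e.loc = heap.size();
        heap.add(e);          // new entry becomes the last node
        upHeap(e.loc);
        return e;
    }

    // O(log n): update the key, then bubble up or down as needed.
    void replaceKey(Entry e, int k) {
        e.key = k;
        upHeap(e.loc);
        downHeap(e.loc);      // e.loc was kept current by the swaps above
    }

    void swap(int i, int j) {
        Entry a = heap.get(i), b = heap.get(j);
        heap.set(i, b); heap.set(j, a);
        a.loc = j; b.loc = i; // keep both locations current
    }

    void upHeap(int i) {
        while (i > 0 && heap.get((i - 1) / 2).key > heap.get(i).key) {
            swap(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    void downHeap(int i) {
        int n = heap.size();
        while (2 * i + 1 < n) {
            int c = 2 * i + 1;
            if (c + 1 < n && heap.get(c + 1).key < heap.get(c).key) c++;
            if (heap.get(i).key <= heap.get(c).key) break;
            swap(i, c);
            i = c;
        }
    }

    public static void main(String[] args) {
        LocationAwareHeap pq = new LocationAwareHeap();
        Entry a = pq.insert(5, "A");
        pq.insert(3, "B");
        pq.insert(7, "C");
        pq.replaceKey(a, 1);                      // a bubbles up to the root
        System.out.println(pq.heap.get(0).value); // A
    }
}
```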
Exercises

For source code and help with exercises, please visit java.datastructures.net.

Reinforcement

R-: Suppose you label each node v of a binary tree T with a key equal to the preorder rank of v. Under what circumstances is T a heap?

R-: What is the output from the following sequence of priority queue ADT methods: insert(…), insert(…), insert(…), insert(…), removeMin(), insert(…), insert(…), removeMin(), removeMin(), insert(…), removeMin(), insert(…), removeMin(), removeMin()?

R-: An airport is developing a computer simulation of air-traffic control that handles events such as landings and takeoffs. Each event has a time-stamp that denotes the time when the event occurs. The simulation program needs to efficiently perform the following two fundamental operations: insert an event with a given time-stamp (that is, add a future event), and extract the event with smallest time-stamp (that is, determine the next event to process). Which data structure should be used for the above operations? Why?

R-: Although it is correct to use a "reverse" comparator with the priority queue ADT so that we retrieve and remove an entry with the maximum key each time, it is confusing to have an entry with maximum key returned by a method named "removeMin". Write a short adapter class that can take any priority queue P and an associated comparator and implement a priority queue that concentrates on the element with maximum key, using methods with names like removeMax.
R-: Illustrate the execution of the selection-sort algorithm on the input sequence ( , , , , , , , , , ).

R-: Illustrate the execution of the insertion-sort algorithm on the input sequence of the previous problem.

R-: Give an example of a worst-case sequence with n elements for insertion-sort, and show that insertion-sort runs in Ω(n²) time on such a sequence.

R-: At which nodes of a heap can an entry with the largest key be stored?

R-: In defining the relation "to the left of" for two nodes of a binary tree (Section ...), can we use a preorder traversal instead of an inorder traversal? How about a postorder traversal?

R-: Illustrate the execution of the heap-sort algorithm on the following input sequence: ( , , , , , , , , , ).

R-: Let T be a complete binary tree such that each node v stores an entry whose key is the level number of v. Is tree T a heap? Why or why not?

R-: Explain why the case where the node has a right child but not a left child was not considered in the description of down-heap bubbling.

R-: Is there a heap T storing seven entries with distinct keys such that a preorder traversal of T yields the entries of T in increasing or decreasing order by key? How about an inorder traversal? How about a postorder traversal? If so, give an example; if not, say why.

R-: Let H be a heap storing entries using the array-list representation of a complete binary tree. What is the sequence of indices of the array list that are
visited in a preorder traversal of H? An inorder traversal of H? What about a postorder traversal of H?

R-: Show that the sum Σ_{i=1}^{n} log i, which appears in the analysis of heap-sort, is Ω(n log n).

R-: Bill claims that a preorder traversal of a heap will list its keys in nondecreasing order. Draw an example of a heap that proves him wrong.

R-: Hillary claims that a postorder traversal of a heap will list its keys in nonincreasing order. Draw an example of a heap that proves her wrong.

R-: Show all the steps of the algorithm for removing the given key from the heap of Figure ...

R-: Show all the steps of the algorithm for replacing one key with another in the heap of Figure ...

R-: Draw an example of a heap whose keys are all the odd numbers from ... to ... (with no repeats), such that the insertion of an entry with the given key would cause up-heap bubbling to proceed all the way up to a child of the root (replacing that child's key).

R-: Complete Figure ... by showing all the steps of the in-place heap-sort algorithm. Show both the array and the associated heap at the end of each step.

R-: Give a pseudo-code description of a nonrecursive in-place heap-sort algorithm.
At each turn, the player with the most money must give half of his/her money to the player with the least amount of money. What data structure(s) should be used to play this game efficiently? Why?

Creativity

C-: An online computer system for trading stocks needs to process orders of the form "buy shares at $x each" or "sell shares at $y each". A buy order for $x can only be processed if there is an existing sell order with price $y such that y ≤ x. Likewise, a sell order for $y can only be processed if there is an existing buy order with price $x such that y ≤ x. If a buy or sell order is entered but cannot be processed, it must wait for a future order that allows it to be processed. Describe a scheme that allows for buy and sell orders to be entered in O(log n) time, independent of whether or not they can be immediately processed.

C-: Extend a solution to the previous problem so that users are allowed to update the prices for their buy or sell orders that have yet to be processed.

C-: Write a comparator for nonnegative integers that determines order based on the number of 1's in each integer's binary expansion, so that i < j if the number of 1's in the binary representation of i is less than the number of 1's in the binary representation of j.

C-: Show how to implement the stack ADT using only a priority queue and one additional integer instance variable.

C-: Show how to implement the (standard) queue ADT using only a priority queue and one additional integer instance variable.

C-: Describe in detail an implementation of a priority queue based on a sorted array. Show that this implementation achieves O(1) time for operations min and removeMin and O(n) time for operation insert.
space for instance variables in addition to an input array itself.

C-: Assuming the input to the sorting problem is given in an array A, describe how to implement the insertion-sort algorithm using only the array A and, at most, six additional (base-type) variables.

C-: Describe how to implement the heap-sort algorithm using, at most, six integer variables in addition to an input array itself.

C-: Describe a sequence of n insertions in a heap that requires Ω(n log n) time to process.

C-: An alternative method for finding the last node during an insertion in a heap T is to store, in the last node and each external node of T, a reference to the external node immediately to its right (wrapping to the first node in the next lower level for the right-most external node). Show how to maintain such references in O(1) time per operation of the priority queue ADT, assuming T is implemented as a linked structure.

C-: Describe an implementation of a complete binary tree T by means of a linked structure and a reference to the last node. In particular, show how to update the reference to the last node after operations add and remove in O(log n) time, where n is the current number of nodes of T. Be sure to handle all possible cases, as illustrated in Figure ...

Figure ...: Updating the last node in a complete binary tree after operation add or remove. Node w is the last node before operation add or after operation remove. Node z is the last node after operation add or before operation remove.
We can represent a path from the root to a given node of a binary tree by means of a binary string, where 0 means "go to the left child" and 1 means "go to the right child". For example, the path from the root to the node storing the entry ( ,W) in the heap of Figure ... is represented by a short binary string. Design an O(log n)-time algorithm for finding the last node of a complete binary tree with n nodes, based on this representation. Show how this algorithm can be used in the implementation of a complete binary tree by means of a linked structure that does not keep a reference to the last node.

C-: Given a heap T and a key k, give an algorithm to compute all the entries in T with key less than or equal to k. For example, given the heap of Figure ... and a query key, the algorithm should report all entries with keys at most that value (but not necessarily in this order). Your algorithm should run in time proportional to the number of entries returned.

C-: Provide a justification of the time bounds in Table ...

C-: Tamarindo Airlines wants to give a first-class upgrade coupon to their top log n frequent flyers, based on the number of miles accumulated, where n is the total number of the airline's frequent flyers. The algorithm they currently use, which runs in O(n log n) time, sorts the flyers by the number of miles flown and then scans the sorted list to pick the top log n flyers. Describe an algorithm that identifies the top log n flyers in O(n) time.

C-: Develop an algorithm that computes the kth smallest element of a set of n distinct integers in O(n + k log n) time.
Suppose two binary trees, T1 and T2, hold entries satisfying the heap-order property. Describe a method for combining T1 and T2 into a tree T whose internal nodes hold the union of the entries in T1 and T2 and also satisfy the heap-order property. Your algorithm should run in time O(h1 + h2), where h1 and h2 are the respective heights of T1 and T2.

C-: Give an alternative analysis of bottom-up heap construction by showing that the summation Σ_{i=1}^{h} (i/2^i) is O(1), for any positive integer h.

C-: Give an alternate description of the in-place heap-sort algorithm that uses a standard comparator instead of a reverse one.

C-: Describe efficient algorithms for performing operations remove(e) and replaceKey(e,k) on an adaptable priority queue realized by means of an unsorted list with location-aware entries.

C-: Describe efficient algorithms for performing operations remove(e) and replaceKey(e,k) on an adaptable priority queue realized by means of a heap with location-aware entries.

C-: Let S be a set of points in the plane with distinct integer x- and y-coordinates. Let T be a complete binary tree storing the points from S at its external nodes, such that the points are ordered left-to-right by increasing x-coordinates. For each node v in T, let S(v) denote the subset of S consisting of points stored in the subtree rooted at v. For the root r of T, define top(r) to be the point in S(r) with maximum y-coordinate. For every other node v, define top(v) to be the point in S with highest y-coordinate in S(v) that is not also the highest y-coordinate in S(u), where u is the parent of v in T (if such a point exists). Such labeling turns T into a priority search tree. Describe a linear-time algorithm for turning T into a priority search tree.

Projects
P-: Give a Java implementation of a priority queue based on an unsorted list.

P-: Write an applet or stand-alone graphical program that animates both the insertion-sort and selection-sort algorithms. Your animation should visualize the movement of elements to their correct locations.

P-: Write an applet or stand-alone graphical program that animates a heap. Your program should support all the priority queue operations and should visualize the swaps in the up-heap and down-heap bubblings. (Extra: visualize bottom-up heap construction as well.)

P-: Implement the heap-sort algorithm using bottom-up heap construction.

P-: Implement the in-place heap-sort algorithm. Experimentally compare its running time with that of the standard heap-sort that is not in-place.

P-: Implement a heap-based priority queue that supports the following additional operation in linear time: replaceComparator(c), which replaces the current comparator with c. (Hint: utilize the bottom-up heap construction algorithm.)

P-: Develop a Java implementation of an adaptable priority queue that is based on an unsorted list and supports location-aware entries.

P-: Develop a Java implementation of an adaptable priority queue that is based on a heap and supports location-aware entries.

P-: Write a program that can process a sequence of stock buy and sell orders as described in Exercise C-...
P-: One of the main applications of priority queues is in operating systems, for scheduling jobs on a CPU. In this project you are to build a program that schedules simulated CPU jobs. Your program should run in a loop, each iteration of which corresponds to a time slice for the CPU. Each job is assigned a priority, which is an integer between a smallest value (highest priority) and a largest value (lowest priority), inclusive. From among all jobs waiting to be processed in a time slice, the CPU must work on a job with highest priority. In this simulation, each job will also come with a length value, which is an integer indicating the number of time slices that are needed to process this job. For simplicity, you may assume jobs cannot be interrupted; once it is scheduled on the CPU, a job runs for a number of time slices equal to its length. Your simulator must output the name of the job running on the CPU in each time slice and must process a sequence of commands, one per time slice, each of which is of the form "add job name with length n and priority p" or "no new job this slice".

Chapter Notes

Knuth's book on sorting and searching [ ] describes the motivation and history for the selection-sort, insertion-sort, and heap-sort algorithms. The heap-sort algorithm is due to Williams [ ], and the linear-time heap construction algorithm is due to Floyd [ ]. Additional algorithms and analyses for heaps and heap-sort variations can be found in papers by Bentley [ ], Carlsson [ ], Gonnet and Munro [ ], McDiarmid and Reed [ ], and Schaffer and Sedgewick [ ]. The design pattern of using location-aware entries (also described in [ ]) appears to be new.
Maps and Dictionaries

Contents

The Map Abstract Data Type
  A Simple List-Based Map Implementation
Hash Tables
  Bucket Arrays
  Hash Functions
  Hash Codes
  Compression Functions
  Collision-Handling Schemes
  A Java Hash Table Implementation
  Load Factors and Rehashing
  Application: Counting Word Frequencies
The Dictionary Abstract Data Type
  List-Based Dictionaries and Audit Trails
  Hash Table Dictionary Implementation
  Ordered Search Tables and Binary Search
Skip Lists
  Search and Update Operations in a Skip List
  A Probabilistic Analysis of Skip Lists
Extensions and Applications of Dictionaries
  Supporting Location-Aware Dictionary Entries
  The Ordered Dictionary ADT
  Flight Databases and Maxima Sets
The Map Abstract Data Type

A map allows us to store elements so they can be located quickly using keys. The motivation for such searches is that each element typically stores additional useful information besides its search key, but the only way to get at that information is to use the search key. Specifically, a map stores key-value pairs (k,v), which we call entries, where k is the key and v is its corresponding value. In addition, the map ADT requires that each key be unique, so the association of keys to values defines a mapping. In order to achieve the highest level of generality, we allow both the keys and the values stored in a map to be of any object type (see Figure ...). In a map storing student records (such as the student's name, address, and course grades), the key might be the student's ID number. In some applications, the key and the value may be the same. For example, if we had a map storing prime numbers, we could use each number itself as both a key and its value.

Figure ...: A conceptual illustration of the map ADT. Keys (labels) are assigned to values (diskettes) by a user. The resulting entries (labeled diskettes) are inserted into the map (file cabinet). The keys can be used later to retrieve or remove values.

A map maps each key presented by the user to an associated value object. Thus, a map is most appropriate in situations where each key is to be viewed as a kind of unique index address for its value, that is, an object that serves as a kind of location for that value. For example, if we wish to store student records, we would probably want to use student ID objects as keys (and disallow two students having the same student ID). In other words, the key associated with an object can be viewed as an "address" for that object. Indeed, maps are sometimes referred to as associative stores, because the key associated with an object determines its "location" in the data structure.

The Map ADT

Since a map stores a collection of objects, it should be viewed as a collection of key-value pairs. As an ADT, a map M supports the following methods:

size(): Return the number of entries in M.

isEmpty(): Test whether M is empty.

get(k): If M contains an entry e with key equal to k, then return the value of e, else return null.

put(k,v): If M does not have an entry with key equal to k, then add entry (k,v) to M and return null; else, replace with v the existing value of the entry with key equal to k and return the old value.

remove(k): If M contains an entry e with key equal to k, then remove e from M and return
its value; if M has no such entry, then return null.

keys(): Return an iterable collection containing all the keys stored in M (so keys().iterator() returns an iterator of keys).

values(): Return an iterable collection containing all the values associated with keys stored in M (so values().iterator() returns an iterator of values).

entries(): Return an iterable collection containing all the key-value entries in M (so entries().iterator() returns an iterator of entries).

When operations get(k), put(k,v), and remove(k) are performed on a map M that has no entry with key equal to k, we use the convention of returning null. A special value such as this is known as a sentinel (see also Section ...). The disadvantage with using null as such a sentinel is that this choice can create ambiguity should we ever want to have an entry (k,null) with value null in the map. Another choice, of course, would be to throw an exception when someone requests a key that is not in our map. This would probably not be an appropriate use of an exception, however, since it is normal to ask for something that might not be in our map. Moreover, throwing and catching an exception is typically slower than a test against a sentinel; hence, using a sentinel is more efficient (and, in this case, conceptually more appropriate). So we use null as a sentinel for a value associated with a missing key.

Example: The effect of a series of operations, such as isEmpty, put, get, and remove, on an initially empty map M storing entries with integer keys and single-character values can be traced by listing, for each operation, its output and the resulting contents of M.

Maps in the java.util Package

The Java package java.util includes an interface for the map ADT, which is called java.util.Map. This interface is defined so that an implementing class enforces unique keys, and it includes all of the methods of the map ADT given above, except that it uses different method names in a couple of cases. The correspondences between the map ADT and the java.util.Map interface are shown in Table ...

Table ...: Correspondences between methods of the map ADT and the methods of the java.util.Map interface, which supports other methods as well.

Map ADT Methods      java.util.Map Methods
size()               size()
isEmpty()            isEmpty()
get(k)               get(k)
put(k,v)             put(k,v)
remove(k)            remove(k)
keys()               keySet()
values()             values()
entries()            entrySet()

A Simple List-Based Map Implementation

A simple way of implementing a map is to store its entries in a list S, implemented as a doubly linked list. Performing the fundamental methods, get(k), put(k,v), and remove(k), involves simple scans down S looking for an entry with key k. Pseudo-code for performing these methods on a map M is given in Code Fragment ... This list-based map implementation is simple, but it is only efficient for very small maps. Every one of the fundamental methods takes O(n) time on a map with n entries, because each method involves searching through the entire list in the worst case. Thus, we would like something faster.

Code Fragment ...: Algorithms for the fundamental map methods with a list.
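As a rough sketch of such a list-based map (the class and its names are illustrative assumptions, not the book's code fragment), the three fundamental methods each boil down to a linear scan:

```java
import java.util.LinkedList;

// A minimal list-based map sketch: every operation scans the list,
// so get, put, and remove all take O(n) time in the worst case.
class ListMap<K, V> {
    private static class Entry<K, V> {
        K key; V value;
        Entry(K k, V v) { key = k; value = v; }
    }

    private final LinkedList<Entry<K, V>> list = new LinkedList<>();

    public int size() { return list.size(); }
    public boolean isEmpty() { return list.isEmpty(); }

    // Scan the list for an entry with key k; null is the sentinel for a missing key.
    public V get(K k) {
        for (Entry<K, V> e : list)
            if (e.key.equals(k)) return e.value;
        return null;
    }

    // Replace the value if k is present (returning the old value), else append a new entry.
    public V put(K k, V v) {
        for (Entry<K, V> e : list)
            if (e.key.equals(k)) { V old = e.value; e.value = v; return old; }
        list.add(new Entry<>(k, v));
        return null;
    }

    // Remove and return the value associated with k, if any.
    public V remove(K k) {
        for (Entry<K, V> e : list)
            if (e.key.equals(k)) { list.remove(e); return e.value; }
        return null;
    }
}
```

Note how every method returns null for a missing key, matching the sentinel convention discussed above.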
Hash Tables

The keys associated with values in a map are typically thought of as "addresses" for those values. Examples of such applications include a compiler's symbol table and a registry of environment variables. Both of these structures consist of a collection of symbolic names where each name serves as the "address" for properties about a variable's type and value. One of the most efficient ways to implement a map in such circumstances is to use a hash table. Although, as we will see, the worst-case running time of map operations in a hash table is linear, a well-designed hash table can perform these operations in O(1) expected time. In general, a hash table consists of two major components, a bucket array and a hash function.

Bucket Arrays

A bucket array for a hash table is an array A of size N, where each cell of A is thought of as a "bucket" (that is, a collection of key-value pairs) and the integer N defines the capacity of the array. If the keys are integers well distributed in the range [0, N−1], this bucket array is all that is needed. An entry e with key k is simply inserted into the bucket A[k] (see Figure ...). To save space, an empty bucket may be replaced by a null object.

Figure ...: A bucket array for entries with unique integer keys in the range [0, N−1]; each bucket holds at most one entry.

Thus, searches, insertions, and removals in the bucket array take O(1) time. This sounds like a great achievement, but it has two drawbacks. First, the space used is proportional to N. Thus, if N is much larger than the number of entries actually present in the map, we have a waste of space. The second drawback is that keys are required to be integers in the range [0, N−1], which is often not the case. Because of these two drawbacks, we use the bucket array in conjunction with a "good" mapping from the keys to the integers in the range [0, N−1].

Hash Functions

The second part of a hash table structure is a function, h, called a hash function, that maps each key k in our map to an integer in the range [0, N−1], where N is the capacity of the bucket array for this table. Equipped with such a hash function, h, we can apply the bucket array method to arbitrary keys. The main idea of this approach is to use the hash function value, h(k), as an index into our bucket array, A, instead of the key k (which is most likely inappropriate for use as a bucket array index). That is, we store the entry (k,v) in the bucket A[h(k)].
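As a minimal sketch of this idea (the class, the modulus hash h(k) = k mod N, and the sample values are all illustrative assumptions, not the book's code), each entry lands in the bucket indexed by its hash value, with each bucket holding a small list so that colliding keys can coexist:

```java
import java.util.ArrayList;

// Illustrative sketch: a bucket array indexed by h(k) = k mod N.
// Each bucket is a small list of entries, so keys with equal hash
// values can share a bucket.
class BucketArrayDemo {
    static class Entry {
        int key; String value;
        Entry(int k, String v) { key = k; value = v; }
    }

    final int N;  // capacity of the bucket array
    final ArrayList<ArrayList<Entry>> buckets;

    BucketArrayDemo(int capacity) {
        N = capacity;
        buckets = new ArrayList<>();
        for (int i = 0; i < N; i++) buckets.add(new ArrayList<>());
    }

    // The hash function: maps any integer key into the range [0, N-1].
    int hash(int k) { return Math.floorMod(k, N); }

    void put(int k, String v) { buckets.get(hash(k)).add(new Entry(k, v)); }

    String get(int k) {
        for (Entry e : buckets.get(hash(k)))
            if (e.key == k) return e.value;
        return null;
    }
}
```

For instance, with N = 11, the keys 3 and 14 both hash to bucket 3, so the per-bucket list is what keeps both entries retrievable.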
If there are two or more keys with the same hash value, then two different entries will be mapped to the same bucket in A. In this case, we say that a collision has occurred. Clearly, if each bucket of A can store only a single entry, then we cannot associate more than one entry with a single bucket, which is a problem in the case of collisions. To be sure, there are ways of dealing with collisions, which we will discuss later, but the best strategy is to try to avoid them in the first place. We say that a hash function is "good" if it maps the keys in our map so as to minimize collisions as much as possible. For practical reasons, we also would like a hash function to be fast and easy to compute.

Following the convention in Java, we view the evaluation of a hash function, h(k), as consisting of two actions: mapping the key k to an integer, called the hash code, and mapping the hash code to an integer within the range of indices ([0, N−1]) of a bucket array, called the compression function (see Figure ...).

Figure ...: The two parts of a hash function: a hash code and a compression function.

Hash Codes

The first action that a hash function performs is to take an arbitrary key k in our map and assign it an integer value. The integer assigned to a key k is called the hash code for k. This integer value need not be in the range [0, N−1], and may even be negative, but we desire that the set of hash codes assigned to our keys should avoid collisions as much as possible. For if the hash codes of our keys cause collisions, then there is no hope for our compression function to avoid them. In addition, to be consistent with all of our keys, the hash code we use for a key k should be the same as the hash code for any key that is equal to k.

The generic Object class defined in Java comes with a default hashCode() method for mapping each object instance to an integer that is a "representation" of that object. Specifically, the hashCode() method returns a 32-bit integer of type int. Unless specifically overridden, this method is inherited by every object used in a Java program. We should be careful in using the default Object version of hashCode(), however, as this could just be an integer interpretation of the object's location in memory (as is the case in many Java implementations). This type of hash code works poorly with character strings, for example, because two different string objects in memory might actually be equal, in which case we would like them to have the same hash code. Indeed, the Java String class overrides the hashCode method of the Object class to be something more appropriate for character strings. Likewise, if we intend to use certain objects as keys in a map, then we should override the built-in hashCode() method for these objects, replacing it with a mapping that assigns well-spread, consistent integers to these types of objects.

Let us consider, then, several common data types and some example methods for assigning hash codes to objects of these types.

Casting to an Integer

To begin, we note that, for any data type X that is represented using at most as many bits as our integer hash codes, we can simply take as a hash code for X an integer interpretation of its bits. Thus, for Java base types byte, short, int, and char, we can achieve a good hash code simply by casting this type to int. Likewise, for a variable x of base type float, we can convert x to an integer using a call to Float.floatToIntBits(x), and then use this integer as x's hash code.

Summing Components

For base types, such as long and double, whose bit representation is double that of a hash code, the above scheme is not immediately applicable. Still, one possible hash code, and indeed one that is used by many Java implementations, is to simply cast a (long) integer representation of the type down to an integer the size of a hash code. This hash code, of course, ignores half of the information present in the original value, and if many of the keys in our map only differ in these bits, then they will collide using this simple hash code. An alternative hash code, then, which takes all the original bits into consideration, is to sum an integer representation of the high-order bits with an integer representation of the low-order bits. Such a hash code can be written in Java as follows:
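A sketch of such a method, consistent with the description above (the exact code is elided in this copy, so the wrapper class name here is an illustrative assumption):

```java
// Hash code for long values: sum the high-order 32 bits and the
// low-order 32 bits, so that all 64 bits contribute to the result.
final class LongHash {
    static int hashCode(long i) {
        return (int) ((i >> 32) + (int) i);
    }
}
```

For example, the value (1L << 32) + 5 has high part 1 and low part 5, so its hash code is 6, whereas simply casting the long down to an int would discard the high part entirely.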
Indeed, the approach of summing components can be extended to any object x whose binary representation can be viewed as a k-tuple (x0, x1, ..., x_{k−1}) of integers, for we can then form a hash code for x as the sum x0 + x1 + ... + x_{k−1}. For example, given any floating-point number, we can sum its mantissa and exponent as long integers, and then apply a hash code for long integers to the result.

Polynomial Hash Codes

The summation hash code, described above, is not a good choice for character strings or other variable-length objects that can be viewed as tuples of the form (x0, x1, ..., x_{k−1}), where the order of the xi's is significant. For example, consider a hash code for a character string s that sums the ASCII (or Unicode) values of the characters in s. This hash code unfortunately produces lots of unwanted collisions for common groups of strings. In particular, "temp01" and "temp10" collide using this function, as do "stop", "tops", "pots", and "spot". A better hash code should somehow take into consideration the positions of the xi's. An alternative hash code, which does exactly this, is to choose a nonzero constant, a ≠ 1, and use as a hash code the value

x0·a^{k−1} + x1·a^{k−2} + ... + x_{k−2}·a + x_{k−1}.

Mathematically speaking, this is simply a polynomial in a that takes the components (x0, x1, ..., x_{k−1}) of an object x as its coefficients. This hash code is therefore called a polynomial hash code. By Horner's rule (see Exercise C-...), this polynomial can be written as

x_{k−1} + a(x_{k−2} + a(x_{k−3} + ... + a(x1 + a·x0) ... )).

Intuitively, a polynomial hash code uses multiplication by the constant a as a way of "making room" for each component in a tuple of values, while also preserving a characterization of the previous components. Of course, on a typical computer, evaluating a polynomial will be done using the finite bit representation for a hash code; hence, the value will periodically overflow the bits used for an integer. Since we are more interested in a good spread of the object x with respect to other keys, we simply ignore such overflows. Still, we should be mindful that such overflows are occurring and choose the constant a so that it has some nonzero, low-order bits, which will serve to preserve some of the information content even as we are in an overflow situation.

We have done some experimental studies that suggest that 33, 37, 39, and 41 are particularly good choices for a when working with character strings that are English words. In fact, in a list of over 50,000 English words formed as the union
of the word lists provided in several common operating systems, taking a to be 33, 37, 39, or 41 produced less than 7 collisions in each case! It should come as no surprise, then, to learn that many Java implementations choose the polynomial hash function, using one of these constants for a, as a default hash code for strings. For the sake of speed, however, some Java implementations only apply the polynomial hash function to a fraction of the characters in long strings.

Cyclic Shift Hash Codes

A variant of the polynomial hash code replaces multiplication by a with a cyclic shift of a partial sum by a certain number of bits. Such a function, applied to character strings in Java, could, for example, look like the following:

static int hashCode(String s) {
    int h = 0;
    for (int i = 0; i < s.length(); i++) {
        h = (h << 5) | (h >>> 27); // 5-bit cyclic shift of the running sum
        h += (int) s.charAt(i);    // add in next character
    }
    return h;
}

As with the traditional polynomial hash code, using the cyclic-shift hash code requires some fine-tuning. In this case, we must wisely choose the amount to shift by for each new character. We show in Table ... the results of some experiments run on a list of just over 200,000 English words, which compare the number of collisions for various shift amounts. These and our previous experiments show that if we choose our constant a or our shift value wisely, then either the polynomial hash code or its cyclic-shift variant are suitable for any object that can be written as a tuple (x0, x1, ..., x_{k−1}), where the order in tuples matters.

Table ...: Comparison of collision behavior for the cyclic-shift variant of the polynomial hash code as applied to a list of English words. The "Total" column records the total number of collisions and the "Max" column records the maximum number
of collisions for any one hash code. Note that, with a cyclic shift of 0, this hash code reverts to the one that simply sums all the characters. [Table columns: Shift, Total, Max.]
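The methodology of those experiments can be re-created on a tiny scale (the word list and helper names below are illustrative assumptions, not the book's corpus or code): count, for a given shift amount, how many words hash to a code already produced by an earlier word.

```java
import java.util.HashSet;

// Illustrative re-creation of the collision-counting experiment on a tiny
// word list. With shift 0, the cyclic-shift hash degenerates to a plain
// character sum, so anagrams all collide; a nonzero shift separates them.
class CyclicShiftExperiment {
    // Cyclic-shift hash with a configurable shift amount (use 1..31).
    // With shift 0, (h >>> 32) == h in Java, so the code reverts to a sum.
    static int hash(String s, int shift) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = (h << shift) | (h >>> (32 - shift)); // cyclic shift of the running sum
            h += (int) s.charAt(i);                  // add in next character
        }
        return h;
    }

    // Count how many words produce a hash code already seen.
    static int countCollisions(String[] words, int shift) {
        HashSet<Integer> seen = new HashSet<>();
        int collisions = 0;
        for (String w : words)
            if (!seen.add(hash(w, shift))) collisions++;
        return collisions;
    }
}
```

On the anagram family "stop", "tops", "pots", "spot" from the text, shift 0 gives three collisions (all four sum to the same value), while a 5-bit shift gives none.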
Compression Functions

The hash code for a key k will typically not be suitable for immediate use with a bucket array, because the range of possible hash codes for our keys will typically exceed the range of legal indices of our bucket array A. That is, incorrectly using a hash code as an index into our bucket array may result in an array out-of-bounds exception being thrown, either because the index is negative or it exceeds the capacity of A. Thus, once we have determined an integer hash code for a key object k, there is still the issue of mapping that integer into the range [0, N−1]. This mapping is the second action that a hash function performs, and a good compression function is one that minimizes the possible number of collisions in a given set of hash codes.

The Division Method

One simple compression function is the division method, which maps an integer i to

|i| mod N,

where N, the size of the bucket array, is a fixed positive integer. Additionally, if we take N to be a prime number, then this compression function helps "spread out" the distribution of hashed values. Indeed, if N is not prime, then there is a higher likelihood that patterns in the distribution of hash codes will be repeated in the distribution of hash values, thereby causing collisions. For example, if we insert keys with hash codes {200, 205, 210, 215, 220, ..., 600} into a bucket array of size 100, then each hash code will collide with three others. But if we use a bucket array of size 101, then there will be no collisions. If a hash function is chosen well, it should ensure that the probability of two different keys getting hashed to the same bucket is 1/N. Choosing N to be a prime number is not always enough.
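A quick experiment (the code and the specific hash-code values are an illustrative sketch, not from the book) confirms this behavior: compressing the codes 200, 205, ..., 600 with the non-prime size 100 produces many bucket collisions, while the prime size 101 produces none.

```java
import java.util.HashSet;

// Illustrative experiment: compare collisions under the division method
// |i| mod N for a non-prime versus a prime bucket-array size.
class DivisionMethodDemo {
    // Count how many hash codes land in a bucket already used by an earlier code.
    static int collisions(int[] codes, int N) {
        HashSet<Integer> usedBuckets = new HashSet<>();
        int collisions = 0;
        for (int code : codes) {
            int bucket = Math.abs(code) % N;  // the division method: |i| mod N
            if (!usedBuckets.add(bucket)) collisions++;
        }
        return collisions;
    }

    // The 81 hash codes 200, 205, 210, ..., 600 (step 5).
    static int[] sampleCodes() {
        int[] codes = new int[81];
        for (int j = 0; j < 81; j++) codes[j] = 200 + 5 * j;
        return codes;
    }
}
```

With N = 100, the 81 codes fall into only 20 distinct buckets (the multiples of 5 below 100), so 61 of them collide; with the prime N = 101, all 81 land in distinct buckets.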
However, if there is a repeated pattern of hash codes of the form pN + q for several different p's, then there will still be collisions.

The MAD Method

A more sophisticated compression function, which helps eliminate repeated patterns in a set of integer keys, is the multiply add and divide (or "MAD") method. This method maps an integer i to

|ai + b| mod N,

where N is a prime number, and a (called the scaling factor) and b (called the shift) are integer constants randomly chosen at the time the compression function is determined, so that a mod N ≠ 0. This compression function is chosen in order to eliminate repeated patterns in the set of hash codes and get us closer to having a "good" hash function, that is, one such that the probability any two different keys collide is 1/N. This good behavior would be the same as we would have if these keys were "thrown" into A uniformly at random.

With a compression function such as this, which spreads integers fairly evenly in the range [0, N−1], and a hash code that transforms the keys in our map into integers, we have an effective hash function. Together, such a hash function and a bucket array define the main ingredients of the hash table implementation of the map ADT. But before we can give the details of how to perform such operations as put, get, and remove, we must first resolve the issue of how we will be handling collisions.

Collision-Handling Schemes

The main idea of a hash table is to take a bucket array, A, and a hash function, h, and use them to implement a map by storing each entry (k,v) in the "bucket" A[h(k)]. This simple idea is challenged, however, when we have two distinct keys, k1 and k2, such that h(k1) = h(k2). The existence of such collisions prevents us from simply inserting a new entry (k,v) directly in the bucket A[h(k)]. They also complicate our procedure for performing the get(k), put(k,v), and remove(k) operations.

Separate Chaining

A simple and efficient way for dealing with collisions is to have each bucket A[i] store a small map Mi, implemented using a list, as described in Section ..., holding entries (k,v) such that h(k) = i. That is, each separate Mi chains together the entries that hash to index i in a linked list. This collision resolution rule is known as separate chaining. Assuming that we initialize each bucket A[i] to be
an empty list-based map, we can easily use the separate chaining rule to perform the fundamental map operations, as shown in Code Fragment ...

Code Fragment ...: The fundamental methods of the map ADT, implemented with a hash table that uses separate chaining to resolve collisions among its entries.

For each fundamental map operation involving a key k, the separate-chaining approach delegates the handling of this operation to the miniature list-based map stored at A[h(k)]. So, put(k,v) will scan this list looking for an entry with key equal to k; if it finds one, it replaces its value with v, otherwise, it puts (k,v) at the end of this list. Likewise, get(k) will search through this list until it reaches the end or finds an entry with key equal to k. And remove(k) will perform a similar search but additionally remove an entry after it is found. We can "get away" with this simple list-based approach because the spreading properties of the hash function help keep each bucket's list small. Indeed, a good hash function will try to minimize collisions as much as possible, which will imply that most of our buckets are either empty or store just a single entry. This observation allows us to make a slight change to our implementation, so that, if a bucket A[i] is empty, it stores null, and if A[i] stores just a single entry (k,v), we can simply have A[i] point directly to the entry (k,v) rather than to a list-based map holding
only that entry; we leave the details of this optimization to an exercise. The figure below gives an illustration of a hash table with separate chaining.

Assuming we use a good hash function to index the n entries of our map in a bucket array of capacity N, we expect each bucket to be of size n/N. This value, called the load factor of the hash table (and denoted with λ), should be bounded by a small constant, preferably below 1. For, given a good hash function, the expected running time of operations get, put, and remove in a map implemented with a hash table that uses this function is O(⌈n/N⌉). Thus, we can implement these operations to run in O(1) expected time, provided n is O(N).

Figure: A hash table storing entries with integer keys, with collisions resolved by separate chaining. The compression function is h(k) = k mod N. For simplicity, we do not show the values associated with the keys.

Open Addressing

The separate chaining rule has many nice properties, such as allowing for simple implementations of map operations, but it nevertheless has one slight disadvantage: it requires the use of an auxiliary data structure (a list) to hold entries with colliding keys. We can handle collisions in other ways besides separate chaining. In particular, if space is at a premium (for example, if we
are writing a program for a small handheld device), then we can use the alternative approach of always storing each entry directly in a bucket, at most one entry per bucket. This approach saves space because no auxiliary structures are employed, but it requires a bit more complexity to deal with collisions. There are several variants of this approach, collectively referred to as open addressing schemes, which we discuss next. Open addressing requires that the load factor is always at most 1 and that entries are stored directly in the cells of the bucket array itself.

Linear Probing

A simple open addressing method for collision handling is linear probing. In this method, if we try to insert an entry (k, v) into a bucket A[i] that is already occupied, where i = h(k), then we try next at A[(i + 1) mod N]. If A[(i + 1) mod N] is also occupied, then we try A[(i + 2) mod N], and so on, until we find an empty bucket that can accept the new entry. Once this bucket is located, we simply insert the entry there. Of course, this collision resolution strategy requires that we change the implementation of the get(k) operation. In particular, to perform such a search, followed by either a replacement or an insertion, we must examine consecutive buckets, starting from A[h(k)], until we either find an entry with key equal to k or we find an empty bucket. The name "linear probing" comes from the fact that accessing a cell of the bucket array can be viewed as a "probe".

Figure: Insertion into a hash table with integer keys using linear probing. The hash function is h(k) = k mod N. Values associated with keys are not shown.

To implement remove(k), we might, at first, think we need to do a considerable amount of shifting of entries to make it look as though the entry with key k was never inserted, which would be very complicated. A typical way to get around this difficulty is to replace a deleted entry with a special "available" marker object. With this special marker possibly occupying buckets in our hash table, we modify our search algorithm for remove(k) or get(k) so that the search for a key k skips over cells containing the available marker and continues probing
until reaching the desired entry or an empty bucket (or returning back to where we started from). Additionally, our algorithm for put(k, v) should remember an available cell encountered during the search for k, since this is a valid place to put a new entry (k, v). Thus, linear probing saves space, but it complicates removals.

Even with the use of the available marker object, linear probing suffers from an additional disadvantage: it tends to cluster the entries of the map into contiguous runs, which may even overlap (particularly if more than half of the cells in the hash table are occupied). Such contiguous runs of occupied hash cells cause searches to slow down considerably.

Quadratic Probing

Another open addressing strategy, known as quadratic probing, involves iteratively trying the buckets A[(i + f(j)) mod N], for j = 0, 1, 2, ..., where f(j) = j^2, until finding an empty bucket. As with linear probing, the quadratic probing strategy complicates the removal operation, but it does avoid the kinds of clustering patterns that occur with linear probing. Nevertheless, it creates its own kind of clustering, called secondary clustering, where the set of filled array cells "bounces" around the array in a fixed pattern. If N is not chosen as a prime, then the quadratic probing strategy may not find an empty bucket in A even if one exists. In fact, even if N is prime, this strategy may not find an empty slot if the bucket array is at least half full; we explore the cause of this type of clustering in an exercise.

Double Hashing

Another open addressing strategy that does not cause clustering of the kind produced by linear probing or the kind produced by quadratic probing is the double hashing strategy. In this approach, we choose a secondary hash function, h', and if h maps some key k to a bucket A[i], with i = h(k), that is already occupied, then we iteratively try the buckets A[(i + f(j)) mod N] next, for j = 1, 2, 3, ..., where f(j) = j * h'(k). In this scheme, the secondary hash function is not allowed to evaluate to zero; a common choice is h'(k) = q - (k mod q), for some prime number q < N. Also, N should be a prime. Moreover, we should choose a
secondary hash function that will attempt to minimize clustering as much as possible.

These open addressing schemes save some space over the separate chaining method, but they are not necessarily faster. In experimental and theoretical analyses, the chaining method is either competitive or faster than the other methods, depending on the load factor of the bucket array. So, if memory space is not a major issue, the collision-handling method of choice seems to be separate chaining. Still, if memory space is in short supply, then one of these open addressing methods might be worth implementing, provided our probing strategy minimizes the clustering that can occur from open addressing.
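To make the three probe sequences concrete, the following small sketch computes, for a hypothetical table of capacity N = 11 (a prime), the first few cells each strategy would examine for a key k. The secondary hash h'(k) = q - (k mod q), with q = 7, follows the common choice mentioned above; the class and method names here are ours, chosen only for illustration.

```java
import java.util.Arrays;

public class ProbeSequences {
    static final int N = 11;  // hypothetical table capacity (a prime)

    // Linear probing: A[(h(k) + j) mod N] for j = 0, 1, 2, ...
    static int[] linear(int k, int probes) {
        int[] seq = new int[probes];
        for (int j = 0; j < probes; j++) seq[j] = (k % N + j) % N;
        return seq;
    }

    // Quadratic probing: A[(h(k) + j*j) mod N]
    static int[] quadratic(int k, int probes) {
        int[] seq = new int[probes];
        for (int j = 0; j < probes; j++) seq[j] = (k % N + j * j) % N;
        return seq;
    }

    // Double hashing: A[(h(k) + j*h'(k)) mod N] with h'(k) = q - (k mod q)
    static int[] doubleHash(int k, int probes) {
        int q = 7;            // prime smaller than N
        int d = q - (k % q);  // secondary hash value; by construction never zero
        int[] seq = new int[probes];
        for (int j = 0; j < probes; j++) seq[j] = (k % N + j * d) % N;
        return seq;
    }

    public static void main(String[] args) {
        // For k = 15, h(k) = 15 mod 11 = 4:
        System.out.println(Arrays.toString(linear(15, 4)));     // [4, 5, 6, 7]
        System.out.println(Arrays.toString(quadratic(15, 4)));  // [4, 5, 8, 2]
        System.out.println(Arrays.toString(doubleHash(15, 4))); // [4, 10, 5, 0]
    }
}
```

Note how linear probing examines a contiguous run of cells (the source of primary clustering), quadratic probing follows a fixed offset pattern, and double hashing's step size depends on the key itself.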
In the code fragments below we show a class, HashTableMap, which implements the map ADT using a hash table with linear probing to resolve collisions. These code fragments include the entire implementation of the map ADT, except for the methods values() and entries(), which we leave as an exercise. The main design elements of the Java class HashTableMap are as follows:

- We maintain, in instance variables, the size, n, of the map, the bucket array, A, and the capacity, N, of A.
- We use a method, hashValue, to compute the hash function of a key by means of the built-in hashCode method and the multiply-add-and-divide (MAD) compression function.
- We define a sentinel, AVAILABLE, as a marker for deactivated entries.
- We provide an optional constructor that allows us to specify the initial capacity of the bucket array.
- If the current bucket array is full and one tries to insert a new entry, we rehash the entire contents into a new array that is twice the size of the old version.
- The following (protected) auxiliary methods are used:
  - checkKey(k), which checks if the key k is valid. This method currently just checks that k is not null, but a class that extends HashTableMap can override this method with a more elaborate test.
  - rehash(), which computes a new MAD hash function with random parameters and rehashes the entries into a new array with double capacity.
  - findEntry(k), which looks for an entry with key equal to k, starting at the index A[hashValue(k)] and going through the array in a circular fashion. If the method finds a cell with such an entry, then it returns the index i of this cell; otherwise, it returns -i - 1, where i is the index of the last empty or available cell encountered.

Code Fragment: Class HashTableMap implementing the map ADT using a hash table with linear probing (continues in the next code fragment).
Code Fragment: Class HashTableMap implementing the map ADT using a hash table with linear probing (continued).
Code Fragment: Class HashTableMap implementing the map ADT using a hash table with linear probing (continued). We have omitted the values() and entries() methods in the listing above, as they are similar to keys().
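Since the HashTableMap listing itself did not survive in this copy, here is a much-reduced sketch in the same spirit: integer keys, a MAD-style compression function with randomly chosen parameters, linear probing, and an AVAILABLE sentinel for removals. All names are illustrative rather than the book's actual code, and there is no rehashing, so the sketch assumes the table never fills completely.

```java
import java.util.Random;

/** Minimal linear-probing map sketch: Integer keys, String values. */
public class ProbingMapSketch {
    private static final Object AVAILABLE = new Object();  // deactivated-slot marker
    private static final long PRIME = 109_345_121L;        // prime for MAD compression
    private final Object[] keys;    // each cell: Integer key, null, or AVAILABLE
    private final Object[] values;
    private final int capacity;
    private int n = 0;
    private final long scale, shift;  // random MAD parameters

    public ProbingMapSketch(int cap) {
        capacity = cap;
        keys = new Object[cap];
        values = new Object[cap];
        Random rand = new Random(1);                  // fixed seed, for reproducibility
        scale = rand.nextInt((int) PRIME - 1) + 1;    // scale mod PRIME != 0
        shift = rand.nextInt((int) PRIME);
    }

    /** MAD compression: |scale*k + shift| mod PRIME, reduced to a bucket index. */
    private int hashValue(int k) {
        return (int) ((Math.abs(k * scale + shift) % PRIME) % capacity);
    }

    /** Index of k, or -(a+1) where a is the first insertable slot encountered.
     *  Assumes the table is never completely full (no rehashing in this sketch). */
    private int findSlot(int k) {
        int avail = -1, i = hashValue(k);
        for (int count = 0; count < capacity; count++) {
            Object cur = keys[i];
            if (cur == null) return avail >= 0 ? -(avail + 1) : -(i + 1);
            if (cur == AVAILABLE) { if (avail < 0) avail = i; }
            else if ((Integer) cur == k) return i;
            i = (i + 1) % capacity;                   // linear probe
        }
        return -(avail + 1);
    }

    public void put(int k, String v) {
        int i = findSlot(k);
        if (i < 0) { i = -i - 1; n++; }               // new key: use the free slot
        keys[i] = k;
        values[i] = v;
    }

    public String get(int k) {
        int i = findSlot(k);
        return i < 0 ? null : (String) values[i];
    }

    public String remove(int k) {
        int i = findSlot(k);
        if (i < 0) return null;
        String old = (String) values[i];
        keys[i] = AVAILABLE;                          // leave marker, don't break probes
        values[i] = null;
        n--;
        return old;
    }

    public int size() { return n; }
}
```

The AVAILABLE marker is what lets a later get or put keep probing past a deleted cell, exactly as described above; setting the slot to null instead would incorrectly terminate searches early.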
In the hash table schemes described above, we should desire that the load factor, λ = n/N, be kept below 1. Experiments and average-case analyses suggest that we should maintain λ below a smaller constant for open addressing schemes and below a somewhat larger constant
for separate chaining. The built-in class java.util.HashMap, which implements the map ADT, uses a fixed threshold as its default maximum load factor and rehashes any time the load factor exceeds this (or an optional user-set load factor). The default is fine for separate chaining (which is the likely implementation in java.util.HashMap), but, as we explore in an exercise, some open addressing schemes can start to fail when λ reaches 1/2.

Although the details of the average-case analysis of hashing are beyond the scope of this book, its probabilistic basis is quite intuitive. If our hash function is good, then we expect the entries to be uniformly distributed in the N cells of the bucket array. Thus, to store n entries, the expected number of keys in a bucket would be ⌈n/N⌉, which is O(1) if n is O(N). With separate chaining, as λ gets very close to 1, the probability of a collision also approaches 1, which adds overhead to our operations, since we must revert to linear-time list-based methods in buckets that have collisions. Of course, in the worst case, a poor hash function could map every entry to the same bucket, which would result in linear-time performance for all map operations, but this is unlikely. With open addressing, on the other hand, as the load factor grows beyond 1/2 and starts approaching 1, clusters of entries in the bucket array start to grow as well. These clusters cause the probing strategies to "bounce around" the bucket array for a considerable amount of time before they can finish.

Thus, keeping the load factor below a certain threshold is vital for open addressing schemes and is also of concern with the separate chaining method. If the load factor of a hash table goes significantly above the specified threshold, then it is common to require that the table be resized (to regain the specified load factor) and all the objects inserted into this new table. When rehashing to a new table, it is a good requirement for the new array's size to be at least double the previous size. Once we have allocated this new bucket array, we must define a new hash function to go with
it, possibly computing new parameters. We then reinsert every entry from the old array into the new array using this new hash function. In our implementation of a hash table with linear probing given in the code fragments above, rehashing is used to keep the load factor at most 1/2.

Even with periodic rehashing, a hash table is an efficient means of implementing a map. Indeed, if we always double the size of the table with each rehashing operation, then we can amortize the cost of rehashing all the entries in the table against the time used to insert them in the first place (as discussed in an earlier section). Each rehashing will generally scatter the entries throughout the new bucket array.

Application: Counting Word Frequencies

As a miniature case study of using a hash table, consider the problem of counting the number of occurrences of different words in a document, which arises, for example, in several text-processing applications. A map is
an ideal data structure to use here, for we can use words as keys and word counts as values. We show such an application in the code fragment below.

Code Fragment: A program for counting word frequencies in a document, printing the most frequent word. The document is parsed using the Scanner class, for which we change the delimiter for separating tokens from whitespace to any nonletter. We also convert words to lowercase.
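The program listing itself is missing from this copy; the following is a sketch of such a program under the same approach: a java.util.HashMap with words as keys and counts as values, Scanner's delimiter changed from whitespace to any run of non-letters, and words lowercased. The class and method names here are ours.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class WordCount {
    /** Count occurrences of each word in the given document text. */
    public static Map<String, Integer> countWords(String document) {
        Map<String, Integer> freq = new HashMap<>();
        Scanner doc = new Scanner(document);
        doc.useDelimiter("[^a-zA-Z]+");           // tokens are maximal runs of letters
        while (doc.hasNext()) {
            String word = doc.next().toLowerCase();
            Integer count = freq.get(word);
            freq.put(word, count == null ? 1 : count + 1);
        }
        return freq;
    }

    /** Return a word with the highest count (ties broken arbitrarily). */
    public static String mostFrequent(String document) {
        String best = null;
        int bestCount = 0;
        for (Map.Entry<String, Integer> e : countWords(document).entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        String text = "It was the best of times, it was the worst of times.";
        System.out.println(countWords(text).get("times"));  // 2
    }
}
```

Each put and get on the HashMap runs in O(1) expected time, so counting runs in time proportional to the number of words in the document, in expectation.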
The Dictionary Abstract Data Type

Like a map, a dictionary stores key-value pairs (k, v), which we call entries, where k is the key and v is the value. Similarly, a dictionary allows for keys and values to be of any object type. But, whereas a map insists that entries have unique keys, a dictionary allows for multiple entries to have the same key, much like an English dictionary, which allows for multiple definitions for the same word.

We distinguish two types of dictionaries: unordered dictionaries and ordered dictionaries. In an ordered dictionary, we assume that a total order relation is defined among the keys, as in an earlier
section. In an unordered dictionary, however, no order relation is assumed on the keys; hence, only equality testing between keys is used.

As an ADT, an (unordered) dictionary D supports the following methods:

- size(): Return the number of entries in D.
- isEmpty(): Test whether D is empty.
- find(k): If D contains an entry with key equal to k, then return such an entry, else return null.
- findAll(k): Return an iterable collection containing all entries with key equal to k.
- insert(k, v): Insert an entry with key k and value v into D, returning the entry created.
- remove(e): Remove from D an entry e, returning the removed entry or null if e was not in D.
- entries(): Return an iterable collection of the key-value entries in D.

Notice that our dictionary operations use entries, which are the key-value pairs stored in the dictionary. We assume each entry comes equipped with getKey() and getValue() methods to access its key and value components, respectively. When the method find(k) is unsuccessful (that is, there is no entry with key equal to k), we use the convention of returning a sentinel, null. Another choice, of course, would be to throw an exception for an unsuccessful find(k), but that would probably not be an appropriate use of an exception, since it is normal to ask for a key that might not be in our dictionary. Moreover, throwing and catching an exception is typically slower than a test against a sentinel; hence, using a sentinel is more efficient. Note that, as we have defined it, a dictionary D can contain different entries with equal keys. In this case, operation find(k) returns an arbitrary entry (k, v) whose key is equal to k.

We mention, in passing, that our dictionary ADT should not be confused with the abstract class java.util.Dictionary, which actually corresponds to the map ADT given above and is now considered obsolete.

Example: In the following, we show a series of operations on an initially empty dictionary storing entries with integer keys and character values.
Operation          Output        Dictionary
insert(5,A)        (5,A)         {(5,A)}
insert(7,B)        (7,B)         {(5,A),(7,B)}
insert(2,C)        (2,C)         {(5,A),(7,B),(2,C)}
insert(8,D)        (8,D)         {(5,A),(7,B),(2,C),(8,D)}
insert(2,E)        (2,E)         {(5,A),(7,B),(2,C),(8,D),(2,E)}
find(7)            (7,B)         {(5,A),(7,B),(2,C),(8,D),(2,E)}
find(4)            null          {(5,A),(7,B),(2,C),(8,D),(2,E)}
find(2)            (2,C)         {(5,A),(7,B),(2,C),(8,D),(2,E)}
findAll(2)         (2,C),(2,E)   {(5,A),(7,B),(2,C),(8,D),(2,E)}
size()             5             {(5,A),(7,B),(2,C),(8,D),(2,E)}
remove(find(5))    (5,A)         {(7,B),(2,C),(8,D),(2,E)}
find(5)            null          {(7,B),(2,C),(8,D),(2,E)}

List-Based Dictionaries and Audit Trails

A simple way of realizing a dictionary uses an unordered list to store the key-value entries. Such an implementation is often called a log file or audit trail. The primary applications of audit trails are situations where we wish to archive structured data. For example, many operating systems store log files of the login requests they process. The typical scenario is that there are many insertions into the dictionary but few searches; for example, searching such an operating system log file typically occurs only after something goes wrong. Thus, a list-based dictionary supports simple and fast insertions, possibly at the expense of search time, by storing the entries of the dictionary in arbitrary order (see the figure below).

Figure: Realization of a dictionary by means of a log file. We show only the keys in this dictionary, so as to highlight its unordered list implementation.
We assume that the list used for a list-based dictionary is implemented with a doubly linked list. We give descriptions of the main dictionary methods for a list-based implementation in the code fragment below. In this simple implementation, we do not assume that an entry stores a reference to its location in the list.

Code Fragment: Some of the main methods for a dictionary, D, implemented with an unordered list, S.
Beginning with the memory usage, we note that the space required for a list-based dictionary with n entries is O(n), since the linked list data structure has memory usage proportional to its size. In addition, with this implementation of the dictionary ADT, we can realize operation insert(k, v) easily and efficiently, just by a single call to the addLast method on S, which simply adds the new entry to the end of the list. Thus, we achieve O(1) time for the insert(k, v) operation on the dictionary D.

Unfortunately, this implementation does not allow for an efficient execution of the find method. A find(k) operation requires, in the worst case, scanning through the entire list S, examining each of its entries. For example, we could use an iterator on the positions in S, stopping as soon as we encounter an entry with key equal to k (or reach the end of the list). The worst case for the running time of this method clearly occurs when the search is unsuccessful, and we reach the end of the list having examined all of its entries. Thus, the find method runs in O(n) time.

Similarly, time proportional to n is needed in the worst case to perform a remove(e) operation on D, if we assume that entries do not keep track of their positions in S; thus the running time for performing operation remove(e) is O(n). Alternatively, if we use location-aware entries that store their position in S, then we can perform operation remove(e) in O(1) time (see an earlier section).

The operation findAll always requires scanning through the entire list S, and therefore its running time is O(n). More precisely, using the big-Theta notation, we say that operation findAll runs in Θ(n) time, since it takes time proportional to n in both the best and worst case.

In conclusion, implementing a dictionary with an unordered list provides for fast insertions, but at the expense of slow searches and removals. Thus, we should only use this implementation where we either expect the dictionary to always be small or we expect the number of insertions to be large relative to the number of searches and removals. Of
course, archiving database and operating system transactions are precisely situations such as this. Nevertheless, there are many other scenarios where the number of insertions in a dictionary will be roughly proportional to the number of searches and removals, and in these cases the list implementation is clearly inappropriate. The unordered dictionary implementation we discuss next can often be used, however, to achieve fast insertions, removals, and searches in many such cases.

Hash Table Dictionary Implementation

We can use a hash table to implement the dictionary ADT, much in the same way as we did for the map ADT. The main difference, of course, is that a dictionary allows multiple entries to have the same key. If the load factor is
kept below 1, our hash function spreads entries fairly uniformly, and we use separate chaining to resolve collisions, then we can achieve O(1) expected-time performance for the find, remove, and insert methods and O(1 + s) expected-time performance for the findAll method, where s is the number of entries returned. In addition, we can simplify the algorithms for implementing this dictionary if we assume we have a list-based dictionary storing the entries at each cell in the bucket array. Such an assumption would be in keeping with our use of separate chaining, since each cell would be a list. This approach allows us to implement the main dictionary methods as shown in the code fragment below.

Code Fragment: Some of the main methods for a dictionary, D, implemented with a hash table that uses a bucket array, A, and an unordered list for each cell in A. We use n to denote the number of entries in D, N to denote the capacity of A, and λ to denote the maximum load factor for the hash table.
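As with the other listings, the dictionary code fragment is missing from this copy. The sketch below (our own names; integer keys; a fixed bucket capacity with no resizing) illustrates the one essential difference from the map version: insert appends unconditionally, so duplicate keys coexist, and findAll walks the key's bucket collecting every match.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

/** Separate-chaining dictionary sketch permitting duplicate keys. */
public class ChainedDictionary<V> {
    public static class Entry<V> {
        private final int key;
        private final V value;
        Entry(int key, V value) { this.key = key; this.value = value; }
        public int getKey() { return key; }
        public V getValue() { return value; }
    }

    private final List<LinkedList<Entry<V>>> buckets = new ArrayList<>();
    private final int capacity = 17;  // fixed prime capacity; no rehashing in this sketch
    private int n = 0;

    public ChainedDictionary() {
        for (int i = 0; i < capacity; i++) buckets.add(new LinkedList<>());
    }

    private int hash(int k) { return Math.floorMod(k, capacity); }

    /** Append unconditionally: entries with equal keys coexist in the bucket. */
    public Entry<V> insert(int k, V v) {
        Entry<V> e = new Entry<>(k, v);
        buckets.get(hash(k)).add(e);
        n++;
        return e;
    }

    /** Return an arbitrary entry with key k (the first found), or null. */
    public Entry<V> find(int k) {
        for (Entry<V> e : buckets.get(hash(k)))
            if (e.key == k) return e;
        return null;
    }

    /** Collect every entry in k's bucket whose key matches: O(1 + s) expected. */
    public List<Entry<V>> findAll(int k) {
        List<Entry<V>> out = new ArrayList<>();
        for (Entry<V> e : buckets.get(hash(k)))
            if (e.key == k) out.add(e);
        return out;
    }

    public int size() { return n; }
}
```

Because only one bucket's list is traversed, findAll takes time proportional to that bucket's length, which a good hash function keeps close to 1 + s in expectation.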
If the keys in a dictionary D come from a total order, we can store D's entries in an array list S by nondecreasing order of the keys (see the figure below). We specify that S is an array list, rather than a node list, for the ordering of the keys in the array list allows for faster searching than would be possible had S been, say, implemented with a linked list. Admittedly, a hash table has good expected running time for searching, but its worst-case time for searching is no better than that of a linked list, and in some applications, such as in real-time processing, we need to guarantee a worst-case searching bound. The fast algorithm for searching in an ordered array list, which we discuss in this subsection, has a good worst-case guarantee on its running time, so it might be preferred over a hash table in certain applications. We refer to this ordered array list implementation of a dictionary D as an ordered search table.
Figure: Realization of a dictionary by means of an ordered search table. We show only the keys for this dictionary, so as to highlight their ordering.

The space requirement of an ordered search table is O(n), which is similar to the list-based dictionary implementation, assuming we grow and shrink the array supporting the array list S to keep the size of this array proportional to the number of entries in S. Unlike an unordered list, however, performing updates in a search table takes a considerable amount of time. In particular, performing the insert(k, v) operation in a search table requires O(n) time, since we need to shift up all the entries in the array list with key greater than k to make room for the new entry (k, v). A similar observation applies to the operation remove(e), since it takes O(n) time to shift all the entries in the array list with key greater than k to close the "hole" left by the removed entry (or entries). The search table implementation is therefore inferior to the log file in terms of the worst-case running times of the dictionary update operations. Nevertheless, we can perform the find method much faster in a search table.

Binary Search

A significant advantage of using an ordered array list S to implement a dictionary D with n entries is that accessing an element of S by its index takes O(1) time. We recall from an earlier section that the index of an element in an array list is the number of elements preceding it. Thus, the first element in S has index 0, and the last element has index n - 1. The elements stored in S are the entries of dictionary D, and since S is ordered, the entry at index i has a key no smaller than the keys of the entries at indices 0, ..., i - 1, and no larger than the keys of the entries at indices i + 1, ..., n - 1.

This observation allows us to quickly "home in" on a search key k using a variant of the children's game "high-low". We call an entry of D a candidate if, at the current stage of the search, we cannot rule out that this entry has key equal to k. The algorithm maintains two parameters, low and high, such that all the candidate entries have index at least low and at most
high in S. Initially, low = 0 and high = n - 1. We then compare k to the key of the median candidate e, that is, the entry e with index mid = ⌊(low + high)/2⌋. We consider three cases:
- If k = e.getKey(), then we have found the entry we were looking for, and the search terminates successfully, returning e.
- If k < e.getKey(), then we recur on the first half of the array list, that is, on the range of indices from low to mid - 1.
- If k > e.getKey(), we recur on the range of indices from mid + 1 to high.

This search method is called binary search, and is given in pseudo-code in the code fragment below. Operation find(k), on an n-entry dictionary implemented with an ordered array list S, consists of calling binarySearch(S, k, 0, n - 1).

Code Fragment: Binary search in an ordered array list.

We illustrate the binary search algorithm in the figure below.

Figure: Example of a binary search to perform operation find(k) in a dictionary with integer keys, implemented with an ordered array list. For simplicity, we show the keys stored in the dictionary but not the whole entries.
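Since the pseudo-code fragment did not survive in this copy, the following is a direct Java rendering of the three-case recursion just described, on a sorted int array rather than an array list of entries; it returns the index of k, or -1 for an unsuccessful search.

```java
public class BinarySearch {
    /** Recursive binary search on the candidate range s[low..high]. */
    public static int binarySearch(int[] s, int k, int low, int high) {
        if (low > high) return -1;              // no candidates left: unsuccessful
        int mid = (low + high) / 2;
        if (k == s[mid]) return mid;            // found the entry
        else if (k < s[mid])
            return binarySearch(s, k, low, mid - 1);   // recur on the first half
        else
            return binarySearch(s, k, mid + 1, high);  // recur on the second half
    }

    /** find(k) on an n-element sorted array: binarySearch(s, k, 0, n-1). */
    public static int find(int[] s, int k) {
        return binarySearch(s, k, 0, s.length - 1);
    }
}
```

Each call either terminates or at least halves the candidate range high - low + 1, which is the heart of the O(log n) analysis that follows.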
Considering the running time of binary search, we observe that a constant number of primitive operations are executed at each recursive call of method binarySearch. Hence, the running time is proportional to the number of recursive calls performed. A crucial fact is that, with each recursive call, the number of candidate entries still to be searched in the array list S is given by the value high - low + 1. Moreover, the number of remaining candidates is reduced by at least one half with each recursive call. Specifically, from the definition of mid, the number of remaining candidates is either

mid - 1 - low + 1 = ⌊(low + high)/2⌋ - low ≤ (high - low + 1)/2

or

high - (mid + 1) + 1 = high - ⌊(low + high)/2⌋ ≤ (high - low + 1)/2.

Initially, the number of candidate entries is n; after the first call to binarySearch, it is at most n/2; after the second call, it is at most n/4; and so on. In general, after the i-th call to binarySearch, the number of candidate entries remaining is at most n/2^i. In the worst case (an unsuccessful search), the recursive calls stop when there are no more candidate entries. Hence, the maximum number of recursive calls performed is the smallest integer m such that n/2^m < 1. In other
words, we have m > log n; thus, m = ⌊log n⌋ + 1, which implies that binary search runs in O(log n) time.

There is a simple variation of binary search that performs findAll(k) in time O(log n + s), where s is the number of entries in the iterator returned. The details are left as an exercise. Thus, we can use an ordered search table to perform fast dictionary searches, but using such a table for lots of dictionary updates would take a considerable amount of time. For this reason, the primary applications for search tables are in situations where we expect few updates to the dictionary but many searches. Such a situation could arise, for example, in an ordered list of English words we use to order entries in an encyclopedia or help file.

Comparing Dictionary Implementations

The table below compares the running times of the methods of a dictionary realized by either an unordered list, a hash table, or an ordered search table. Note that an unordered list allows for fast insertions but slow searches and removals, whereas a search table allows for fast searches but slow insertions and removals. Incidentally, although we don't explicitly discuss it, we note that a sorted list implemented with a doubly linked list would be slow in performing almost all the dictionary operations (see the exercises).

Table: Comparison of the running times of the methods of a dictionary realized by means of an unordered list, a hash table, or an ordered search table. We let n denote the number of entries in the dictionary, N denote the capacity of the bucket array in the hash table implementations, and s denote the size of the collection returned by operation findAll. The space requirement of all the implementations is O(n), assuming that the arrays supporting the hash table and search table implementations are maintained such that their capacity is proportional to the number of entries in the dictionary.
Method           Unordered List   Hash Table                         Search Table
size, isEmpty    O(1)             O(1)                               O(1)
entries          O(n)             O(n)                               O(n)
find             O(n)             O(1) exp., O(n) worst-case         O(log n)
findAll          O(n)             O(1 + s) exp., O(n) worst-case     O(log n + s)
insert           O(1)             O(1)                               O(n)
remove           O(n)             O(1) exp., O(n) worst-case         O(n)

Skip Lists

An interesting data structure for efficiently realizing the dictionary ADT is the skip list. This data structure makes random choices in arranging the entries in such a way that search and update times are O(log n) on average, where n is the number of entries in the dictionary. Interestingly, the notion of average time complexity used here does not depend on the probability distribution of the keys in the input. Instead, it depends on the use of a random-number generator in the implementation of the insertions to help decide where to place the new entry. The running time is averaged over all possible outcomes of the random numbers used when inserting entries.

Because they are used extensively in computer games, cryptography, and computer simulations, methods that generate numbers that can be viewed as random numbers are built into most modern computers. Some methods, called pseudorandom number generators, generate random-like numbers deterministically, starting with an initial number called a seed. Other methods use hardware devices to extract "true" random numbers from nature. In any case, we will assume that our computer has access to numbers that are sufficiently random for our analysis.

The main advantage of using randomization in data structure and algorithm design is that the structures and methods that result are usually simple and efficient. We can devise a simple randomized data structure, called the skip list, which has the same logarithmic time bounds for searching as is achieved by the binary searching algorithm. Nevertheless, the bounds are expected for the skip list, while they are worst-case bounds for binary searching in a look-up table. On the other hand, skip lists are much faster than look-up tables for dictionary updates.

A skip list S for a dictionary D consists of a series of lists {S0, S1, ..., Sh}. Each list Si stores a subset of the entries of D sorted by nondecreasing key, plus entries with two special keys, denoted -∞ and +∞, where -∞ is smaller than every possible key that can be inserted in D and +∞ is larger than
every possible key that can be inserted in D. In addition, the lists in S satisfy the following:

- List S0 contains every entry of dictionary D (plus the special entries with keys -∞ and +∞).
- For i = 1, ..., h - 1, list Si contains (in addition to -∞ and +∞) a randomly generated subset of the entries in list Si-1.
- List Sh contains only -∞ and +∞.

It is customary to visualize a skip
list S with list S0 at the bottom and lists S1, ..., Sh above it. Also, we refer to h as the height of skip list S.

Figure: Example of a skip list storing several entries. For simplicity, we show only the keys of the entries.

Intuitively, the lists are set up so that Si+1 contains more or less every other entry in Si. As we shall see in the details of the insertion method, the entries in Si+1 are chosen at random from the entries in Si by picking each entry from Si to also be in Si+1 with probability 1/2. That is, in essence, we "flip a coin" for each entry in Si and place that entry in Si+1 if the coin comes up "heads". Thus, we expect S1 to have about n/2 entries, S2 to have about n/4 entries, and, in general, Si to have about n/2^i entries. In other words, we expect the height h of S to be about log n. The halving of the number of entries from one list to the next is not enforced as an explicit property of skip lists, however; instead, randomization is used.

Using the position abstraction used for lists and trees, we view a skip list as a two-dimensional collection of positions arranged horizontally into levels and vertically into towers. Each level is a list Si and each tower contains positions storing the same entry across consecutive lists. The positions in a skip list can be traversed using the following operations:

- next(p): Return the position following p on the same level.
- prev(p): Return the position preceding p on the same level.
- below(p): Return the position below p in the same tower.
- above(p): Return the position above p in the same tower.

We conventionally assume that the above operations return a null position if the position requested does not exist. Without going into the details, we note that we can easily implement a skip list by means of a linked structure such that the above traversal methods each take O(1) time, given a skip-list position p. Such a linked structure is essentially a collection of h doubly linked lists aligned at towers, which
are also doubly linked lists.

Search and Update Operations in a Skip List

The skip list structure allows for simple dictionary search and update algorithms. In fact, all of the skip list search and update algorithms are based on an elegant SkipSearch method that takes a key k and finds the position p of the entry e in list S0 such that e has the largest key (which is possibly -∞) less than or equal to k.

Searching in a Skip List

Suppose we are given a search key k. We begin the SkipSearch method by setting a position variable p to the top-most, left position in the skip list S, called the start position of S. That is, the start position is the position of Sh storing the special entry with key -∞. We then perform the following steps (see the figure below), where key(p) denotes the key of the entry at position p:

1. If below(p) is null, then the search terminates: we are at the bottom and have located the largest entry in S with key less than or equal to the search key k. Otherwise, we drop down to the next lower level in the present tower by setting p ← below(p).
2. Starting at position p, we move p forward until it is at the right-most position on the present level such that key(p) ≤ k. We call this the scan forward step. Note that such a position always exists, since each level contains the keys -∞ and +∞. In fact, after we perform the scan forward for this level, p may remain where it started. In any case, we then repeat the previous step.

Figure: Example of a search in a skip list. The positions visited when searching are highlighted in blue.
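To make the search and insertion procedures concrete, here is a compact, self-contained sketch of a skip list over integer keys (values omitted), with skipSearch implementing the drop-down/scan-forward loop above and insert using coin flips to grow the tower. The node design and names are ours, not the book's, and the sentinel keys use Integer.MIN_VALUE and MAX_VALUE rather than true -∞/+∞, so the sketch assumes inserted keys lie strictly between those two values.

```java
import java.util.Random;

public class SkipListSketch {
    static class Node {
        int key;
        Node prev, next, above, below;
        Node(int key) { this.key = key; }
    }

    static final int NEG_INF = Integer.MIN_VALUE, POS_INF = Integer.MAX_VALUE;
    private Node start;               // top-left sentinel (key -infinity)
    private int height = 0;
    private final Random coin = new Random();

    public SkipListSketch() {
        Node left = new Node(NEG_INF), right = new Node(POS_INF);
        left.next = right; right.prev = left;
        start = left;
    }

    /** Bottom-level node with the largest key <= k (possibly the -inf sentinel). */
    public Node skipSearch(int k) {
        Node p = start;
        while (true) {
            while (p.next.key <= k) p = p.next;   // scan forward
            if (p.below == null) return p;        // at level S0: done
            p = p.below;                          // drop down
        }
    }

    public boolean contains(int k) { return skipSearch(k).key == k; }

    public void insert(int k) {
        Node p = skipSearch(k);                   // predecessor at level 0
        Node q = insertAfter(p, k);
        int level = 0;
        while (coin.nextBoolean()) {              // grow the tower on each "heads"
            level++;
            if (level > height) addEmptyTopLevel();
            while (p.above == null) p = p.prev;   // walk left to the next tower up
            p = p.above;                          // predecessor at the new level
            Node r = insertAfter(p, k);
            r.below = q; q.above = r;             // link the tower vertically
            q = r;
        }
    }

    private Node insertAfter(Node p, int k) {
        Node r = new Node(k);
        r.prev = p; r.next = p.next;
        p.next.prev = r; p.next = r;
        return r;
    }

    private void addEmptyTopLevel() {
        Node left = new Node(NEG_INF), right = new Node(POS_INF);
        left.next = right; right.prev = left;
        Node oldRight = start;
        while (oldRight.next != null) oldRight = oldRight.next;  // old top's +inf
        left.below = start; start.above = left;
        right.below = oldRight; oldRight.above = right;
        start = left;
        height++;
    }
}
```

Note that in insert, walking left from the current predecessor until a node with an up-pointer is found lands exactly on the predecessor one level up, which is what makes the tower-building loop correct.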
We give a pseudo-code description of the skip-list search algorithm, SkipSearch, in the code fragment below. Given this method, it is now easy to implement the operation find(k): we simply perform p ← SkipSearch(k) and test whether or not key(p) = k. If these two keys are equal, we return p; otherwise, we return null.

Code Fragment: Search in a skip list S. Variable s holds the start position of S.

As it turns out, the expected running time of algorithm SkipSearch on a skip list with n entries is O(log n). We postpone the justification of this fact, however, until after we discuss the implementation of the update methods for skip lists.

Insertion in a Skip List

The insertion algorithm for skip lists uses randomization to decide the height of the tower for the new entry. We begin the insertion of a new entry (k, v) by performing a SkipSearch(k) operation. This gives us the position p of the bottom-level entry with the largest key less than or equal to k (note that p may hold the special entry with key -∞). We then insert (k, v) immediately after position p. After inserting the new entry at the bottom level, we "flip" a coin. If the flip comes up tails, then we stop here. Else (the flip comes up heads), we backtrack to the previous (next higher) level and insert (k, v) in this level at the appropriate position. We again flip a coin; if it comes up heads, we go to the next higher level and repeat. Thus, we continue to insert the new entry (k, v) in lists until we finally get a flip that comes up tails. We link together all the references to the new entry (k, v) created in this process to create the tower for the new entry. A coin flip can be simulated with Java's built-in pseudo-random number generator java.util.Random by calling nextInt(2), which returns 0 or 1, each with probability 1/2.

We give the insertion algorithm for a skip list S in the code fragment below and we illustrate it in the accompanying figure. The algorithm uses method insertAfterAbove(p, q, (k, v)), which inserts a position storing the entry (k, v) after position p (on the same
21,190
level as pand above position qreturning the position of the new entry (and setting internal references so that nextprevaboveand below methods will work correctly for pqand rthe expected running time of the insertion algorithm on skip list with entries is (logn)which we show in section code fragment insertion in skip list method coinflip(returns "headsor "tails"each with probability / variables nhand hold the number of entriesthe heightand the start node of the skip list figure insertion of an entry with key into the skip list of figure we assume that the random "coin flipsfor the new entry came up heads three times in rowfollowed by tails the positions visited are highlighted in blue the positions inserted to hold
21,191
positions preceding them are flagged.

Removal in a Skip List

Like the search and insertion algorithms, the removal algorithm for a skip list is quite simple. In fact, it is even easier than the insertion algorithm. That is, to perform a remove(k) operation, we begin by executing method p ← skipSearch(k). If the position p stores an entry with key different from k, we return null. Otherwise, we remove p and all the positions above p, which are easily accessed by using above operations to climb up the tower of this entry in S starting at position p. The removal algorithm is illustrated in a figure, and a detailed description of it is left as an exercise. As we show in the next subsection, operation remove in a skip list with n entries has O(log n) expected running time.

Before we give this analysis, however, there are some minor improvements to the skip-list data structure we would like to discuss. First, we don't actually need to store references to entries at the levels of the skip list above the bottom level, because all that is needed at these levels are references to keys. Second, we don't actually need the above method. In fact, we don't need the prev method either. We can perform entry insertion and removal in strictly top-down, scan-forward fashion, thus saving space for "up" and "prev" references. We explore the details of this optimization in an exercise. Neither of these optimizations improves the asymptotic performance of skip lists by more than a constant factor, but these improvements can, nevertheless, be meaningful in practice. In fact, experimental evidence suggests that optimized skip lists are faster in practice than AVL trees and other balanced search trees, which are discussed in a later chapter.

The expected running time of the removal algorithm is O(log n), which we show in the analysis below.

Figure: Removal of the entry with the given key from the skip list of the previous figure. The positions visited after the search are highlighted in blue. The positions removed are drawn with dashed lines.

Maintaining the Top-most Level

A skip list S must maintain a reference to the start position (the top-most, left position in S) as an instance variable, and must have a policy for any insertion that wishes to continue inserting a new entry past the top level of S. There are two possible courses of action we can take, both of which have their merits.

One possibility is to restrict the top level, h, to be kept at some fixed value that is a function of n, the number of entries currently in the dictionary. (From the analysis we will see that keeping h at the maximum of some small constant and 2 log n is a reasonable choice, and picking 3 log n is even safer.) Implementing this choice means that we must modify the insertion algorithm to stop inserting a new position once we reach the top-most level (unless the bound on the height increases with this insertion, in which case we can go at least one more level).

The other possibility is to let an insertion continue inserting a new position as long as heads keeps getting returned from the random number generator. This is the approach taken in algorithm skipInsert of the code fragment above. As we show in the analysis of skip lists, the probability that an insertion will go to a level higher than O(log n) is very low, so this design choice should also work.

Either choice will still result in the expected O(log n) time to perform search, insertion, and removal, however, which we show in the next section.

Probabilistic Analysis of Skip Lists

As we have shown above, skip lists provide a simple implementation of an ordered dictionary. In terms of worst-case performance, however, skip lists are not a superior data structure. In fact, if we don't officially prevent an insertion from continuing significantly past the current highest level, then the insertion algorithm can go into what is almost an infinite loop. It is not actually an infinite loop,
however, since the probability of having a fair coin repeatedly come up heads forever is 0. Moreover, we cannot infinitely add positions to a list without eventually running out of memory. In any case, if we terminate position insertion at the highest level h, then the worst-case running time for performing the find, insert, and remove operations in a skip list S with n entries and height h is O(n + h). This worst-case performance occurs when the tower of every entry reaches level h−1, where h is the height of S. However, this event has very low probability. Judging from this worst case, we might conclude that the skip-list structure is strictly inferior to the other dictionary implementations discussed earlier in this chapter. But this would not be a fair analysis, for this worst-case behavior is a gross overestimate.

Bounding the Height of a Skip List

Because the insertion step involves randomization, a more accurate analysis of skip lists involves a bit of probability. At first, this might seem like a major undertaking, for a complete and thorough probabilistic analysis could require deep mathematics (and, indeed, there are several such deep analyses that have appeared in the data-structures research literature). Fortunately, such an analysis is not necessary to understand the expected asymptotic behavior of skip lists. The informal and intuitive probabilistic analysis we give below uses only basic concepts of probability theory.

Let us begin by determining the expected value of the height h of a skip list S with n entries (assuming that we do not terminate insertions early). The probability that a given entry has a tower of height i ≥ 1 is equal to the probability of getting i consecutive heads when flipping a coin; that is, this probability is 1/2^i. Hence, the probability P_i that level i has at least one position is at most

    P_i ≤ n/2^i,

for the probability that any one of n different events occurs is at most the sum of the probabilities that each occurs.

The probability that the height h of S is larger than i is equal to the probability that level i has at least one position; that is, it is no more than P_i. This means that h is larger than, say, 3 log n with probability at most

    P_{3 log n} ≤ n/2^{3 log n} = n/n^3 = 1/n^2.

For example, if n = 1000, this probability is a one-in-a-million long shot. More generally, given a constant c > 1, h is larger than c log n with probability at most 1/n^{c−1}. That is, the probability that h is smaller than c log n is at least 1 − 1/n^{c−1}. Thus, with high probability, the height h of S is O(log n).
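The one-in-a-million figure above is easy to check empirically. The following sketch (illustrative only; the seed, the choice n = 1000, and the trial count are arbitrary assumptions, not from the text) uses java.util.Random coin flips, as suggested earlier for the insertion algorithm, to grow n geometric tower heights and counts how often the tallest one exceeds 3 log n.

```java
import java.util.Random;

// Empirical check of the height bound: each tower's height is
// 1 + (number of consecutive heads), so the skip list's height is the
// maximum over n such geometric draws.
class HeightBoundDemo {
    // Height of one tower: keep flipping until tails comes up.
    static int towerHeight(Random rng) {
        int h = 1;
        while (rng.nextInt(2) == 0) h++;   // treat 0 as "heads": grow tower
        return h;
    }

    // Height of a skip list holding n entries = tallest tower.
    static int skipListHeight(Random rng, int n) {
        int max = 0;
        for (int i = 0; i < n; i++) max = Math.max(max, towerHeight(rng));
        return max;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);       // fixed seed: arbitrary choice
        int n = 1000, trials = 10000, exceeded = 0;
        int bound = (int) Math.ceil(3 * Math.log(n) / Math.log(2)); // 3 log2 n
        for (int t = 0; t < trials; t++) {
            if (skipListHeight(rng, n) > bound) exceeded++;
        }
        // Theory: P[h > 3 log n] <= 1/n^2 = 1e-6 per trial, so "exceeded"
        // should be 0 in almost every run of 10,000 trials.
        System.out.println("exceeded bound in " + exceeded + " of " + trials + " trials");
    }
}
```

The same simulation with the bound lowered to 2 log n or raised to c log n for larger c illustrates the 1/n^(c−1) trade-off stated above.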
Next, consider the running time of a search in skip list S, and recall that such a search involves two nested while loops. The inner loop performs a scan forward on a level of S as long as the next key is no greater than the search key k, and the outer loop drops down to the next level and repeats the scan-forward iteration. Since the height h of S is O(log n) with high probability, the number of drop-down steps is O(log n) with high probability.

So we have yet to bound the number of scan-forward steps we make. Let n_i be the number of keys examined while scanning forward at level i. Observe that, after the key at the starting position, each additional key examined in a scan-forward at level i cannot also belong to level i+1: if any of these keys were on the previous (higher) level, we would have encountered them in the previous scan-forward step. Thus, the probability that any key is counted in n_i is 1/2. Therefore, the expected value of n_i is exactly equal to the expected number of times we must flip a fair coin before it comes up heads, and this expected value is 2. Hence, the expected amount of time spent scanning forward at any level i is O(1). Since S has O(log n) levels with high probability, a search in S takes expected time O(log n). By a similar analysis, we can show that the expected running time of an insertion or a removal is O(log n).

Space Usage in a Skip List

Finally, let us turn to the space requirement of a skip list S with n entries. As we observed above, the expected number of positions at level i is n/2^i, which means that the expected total number of positions in S is

    n + n/2 + n/4 + ... + n/2^h < 2n,

using the proposition on geometric summations, for all h ≥ 0. Hence, the expected space requirement of S is O(n).

The table below summarizes the performance of a dictionary realized by a skip list.

Table: Performance of a dictionary implemented with a skip list. We denote the number of entries in the dictionary at the time an operation is
performed with n, and the size of the collection returned by operation findAll with s. The expected space requirement is O(n).

Operation | Time
size, isEmpty | O(1)
entries | O(n)
find, insert, remove | O(log n) (expected)
findAll | O(log n + s) (expected)

Extensions and Applications of Dictionaries

In this section, we explore several extensions and applications of dictionaries.

Supporting Location-Aware Dictionary Entries

As we did for priority queues, we can also use location-aware entries to speed up the running time for some operations in a dictionary. In particular, a location-aware entry can greatly speed up entry removal in a dictionary: in removing a location-aware entry e, we can simply go directly to the place in our data structure where we are storing e and remove it. We could implement a location-aware entry, for example, by augmenting our entry class with a private location variable and protected methods location() and setLocation(p), which return and set this variable, respectively. We would then require that the location variable for an entry e always refer to e's position or index in the data structure implementing our dictionary. We would, of course, have to update this variable any time we moved an entry, so it would probably make the most sense for this entry class to be closely related to the class implementing the dictionary (the location-aware entry class could even be nested inside the dictionary class). Below we describe how to support location-aware entries in the dictionary structures presented in this chapter.

Unordered list: In an unordered list, L, implementing a dictionary, we can maintain the location variable of each entry e to point to e's position in the underlying linked list for L. This choice allows us to perform remove(e) as L.remove(e.location()), which would run in O(1) time.

Hash table with separate chaining: Consider a hash table, with bucket array A and hash function h, that uses separate chaining for handling collisions. We use the location variable of each entry e to point to e's position in the list L implementing the mini-map A[h(k)]. This choice allows us to perform the main work of a remove(e) as L.remove(e.location()), which would run in constant expected time.

Ordered search table: In an ordered table, T, implementing a dictionary, we should maintain the location variable of each entry e to be e's index in T. This choice would allow us to perform remove(e) as T.remove(e.location()). (Recall that location() now returns an integer.) This approach would run fast if entry e was stored near the end of T.

Skip list: In a skip list, S, implementing a dictionary, we should maintain the location variable of each entry e to point to e's position in the bottom level of S. This choice would allow us to skip the search step in our algorithm for performing remove(e) in a skip list.

We summarize the performance of entry removal in a dictionary with location-aware entries in the table below.

Table: Performance of the remove method in dictionaries implemented with location-aware entries. We use n to denote the number of entries in the dictionary.

list | O(1)
hash table | O(1) (expected)
search table | O(n)
skip list | O(log n) (expected)

The Ordered Dictionary ADT

In an ordered dictionary, we want to perform the usual dictionary operations, but also maintain an order relation for the keys in our dictionary. We can use a comparator to provide the order relation among keys, as we did for the ordered search table and skip-list dictionary implementations described above. Indeed, all of the dictionary implementations discussed in this chapter use a comparator to store the dictionary in nondecreasing key order.

When the entries of a dictionary are stored in order, we can provide efficient implementations for additional methods in the dictionary ADT. For example, we could consider adding the following methods to the dictionary ADT so as to define the ordered dictionary ADT:

first(): Return an entry with smallest key.
last(): Return an entry with largest key.
successors(k): Return an iterator of the entries with keys greater than or equal to k, in nondecreasing order.
predecessors(k): Return an iterator of the entries with keys less than or equal to k, in nonincreasing order.

Implementing an Ordered Dictionary

The ordered nature of the operations above makes the use of an unordered list or a hash table inappropriate for implementing the dictionary, because neither of these data structures maintains any ordering information for the keys in the dictionary. Indeed, hash tables achieve their best search speeds when their keys are distributed almost at random. Thus, we should consider an ordered search table or a skip list (or a data structure from a later chapter) when dealing with ordered dictionaries.

For example, using a skip list to implement an ordered dictionary, we can implement methods first() and last() in O(1) time by accessing the second and second-to-last positions of the bottom list. Also, methods successors(k) and predecessors(k) can be implemented to run in O(log n) expected time. Moreover, the iterators returned by the successors(k) and predecessors(k) methods could be implemented using a reference to a current position in the bottom list, so that the
methods of these iterators would each run in constant time using this approach.

The java.util.SortedMap Interface

Java provides an ordered version of the java.util.Map interface in its interface called java.util.SortedMap. This interface extends the java.util.Map interface with methods that take order into account. Like the parent interface, a SortedMap does not allow for duplicate keys.

Ignoring the fact that dictionaries allow for multiple entries with the same key, possible correspondences between methods of our ordered dictionary ADT and methods of interface java.util.SortedMap are shown in the table below.

Table: Loose correspondences between methods of the ordered dictionary ADT and methods of the java.util.SortedMap interface, which supports other methods as well. The java.util.SortedMap expression for predecessors(k) is not an exact correspondence, however, as the iterator returned would be by increasing keys and would not include the entry with key equal to k. There appears to be no efficient way of getting a true correspondence to predecessors(k) using java.util.SortedMap methods.

Ordered dictionary methods | java.util.SortedMap methods
first().getKey() | firstKey()
first().getValue() | get(firstKey())
last().getKey() | lastKey()
last().getValue() | get(lastKey())
successors(k) | tailMap(k).entrySet().iterator()
predecessors(k) | headMap(k).entrySet().iterator()

Flight Databases and Maxima Sets

As we have mentioned in the preceding sections, unordered and ordered dictionaries have many applications. In this section, we explore some specific applications of ordered dictionaries.

Flight Databases

There are several web sites on the Internet that allow users to perform queries on flight databases to find flights between various cities, typically with the intent to buy a ticket. To make a query, a user specifies origin and destination cities, a departure date, and a departure time. To support such queries, we can model the flight database as a dictionary, where keys are flight objects that contain fields corresponding to these four parameters. That is, a key is a tuple

    (origin, destination, date, time).

Additional information about a flight, such as the flight number, the number of seats still available in first (F) and coach (Y) class, the flight duration, and the fare, can be stored in the value object.

Finding a requested flight is not simply a matter of finding a key in the dictionary matching the requested query, however. The main difficulty is that, although a user typically wants to exactly match the origin and destination cities, as well as the departure date, he or she will probably be content with any departure time that is close to the requested departure time. We can handle such a query, of course, by ordering our keys lexicographically. Thus, given a user query key k, we can call successors(k) to return an iteration of all the flights between the desired cities on the desired date, with departure times in strictly increasing order from the requested departure time. A similar use of predecessors(k) would give us flights with times before the requested time. Therefore, an efficient