[Figure: hash table with double hashing, after each insertion of 89, 18, 49, 58, 69]

A function such as hash2(x) = R - (x mod R), with R a prime smaller than TableSize, will work well. If we choose R = 7, then the figure shows the results of inserting the same keys as before.

The first collision occurs when 49 is inserted. hash2(49) = 7 - 0 = 7, so 49 is inserted in position 6. hash2(58) = 7 - 2 = 5, so 58 is inserted at location 3. Finally, 69 collides and is inserted at a distance hash2(69) = 7 - 6 = 1 away. If we tried to insert 60 in position 0, we would have a collision. Since hash2(60) = 7 - 4 = 3, we would then try positions 3, 6, 9, and then 2 until an empty spot is found. It is generally possible to find some bad case, but there are not too many here.

As we have said before, the size of our sample hash table is not prime. We have done this for convenience in computing the hash function, but it is worth seeing why it is important to make sure the table size is prime when double hashing is used. If we attempt to insert 23 into the table, it would collide with 58. Since hash2(23) = 7 - 2 = 5, and the table size is 10, we essentially have only one alternative location, and it is already taken. Thus, if the table size is not prime, it is possible to run out of alternative locations prematurely. However, if double hashing is correctly implemented, simulations imply that the expected number of probes is almost the same as for a random collision resolution strategy. This makes double hashing theoretically interesting. Quadratic probing, however, does not require the use of a second hash function and is thus likely to be simpler and faster in practice, especially for keys like strings whose hash functions are expensive to compute.

Rehashing

If the table gets too full, the running time for the operations will start taking too long, and insertions might fail for open addressing hashing with quadratic resolution. This can also happen if there are too many removals intermixed with insertions. A solution, then, is to build another table that is about twice as big (with an associated new hash function) and scan down the entire original hash table, computing the new hash value for each (nondeleted) element and inserting it in the new table.
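To make the probe sequence concrete, here is a minimal sketch of a double-hashing lookup loop under the conventions above (my illustration, not the book's code; the std::optional table layout and the fixed R = 7 are assumptions, and the table is assumed not to be full):

#include <vector>
#include <optional>

// Probe sequence: hash(x), hash(x) + hash2(x), hash(x) + 2*hash2(x), ...
// with hash(x) = x mod TableSize and hash2(x) = R - (x mod R).
int findPosDoubleHash( int x, const std::vector<std::optional<int>> & table )
{
    int tableSize = static_cast<int>( table.size( ) );
    const int R = 7;                          // prime smaller than tableSize
    int pos  = x % tableSize;                 // first probe: hash(x)
    int step = R - ( x % R );                 // hash2(x); never 0, so we always advance
    while( table[ pos ] && *table[ pos ] != x )
        pos = ( pos + step ) % tableSize;     // next probe
    return pos;                               // slot holding x, or an empty slot
}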
[Figure: hash table with linear probing, with input 13, 15, 6, 24]

As an example, suppose the elements 13, 15, 24, and 6 are inserted into a linear probing hash table of size 7. The hash function is h(x) = x mod 7. The resulting hash table appears in the figure. If 23 is inserted into the table, the resulting table, shown next, will be over 70 percent full.

[Figure: hash table with linear probing, after 23 is inserted]

Because the table is so full, a new table is created. The size of this table is 17, because this is the first prime that is twice as large as the old table size. The new hash function is then h(x) = x mod 17. The old table is scanned, and elements 6, 15, 23, 24, and 13 are inserted into the new table. The resulting table appears in the next figure.

This entire operation is called rehashing. This is obviously a very expensive operation; the running time is O(N), since there are N elements to rehash and the table size is roughly 2N, but it is actually not all that bad, because it happens very infrequently. In particular, there must have been N/2 insertions prior to the last rehash, so it essentially adds a constant cost to each insertion. This is why the new table is made twice as large as the old table. If this data structure is part of the program, the effect is not noticeable. On the other hand, if the hashing is performed as part of an interactive system, then the unfortunate user whose insertion caused a rehash could see a slowdown.
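The constant-cost claim can be made precise with a short doubling argument (my restatement, not text from the book). Starting from an empty table, if every rehash roughly doubles the table and copies all current elements, then by the time N elements have been inserted the rehashes have copied at most

\[
1 + 2 + 4 + \cdots + N \;<\; 2N
\]

elements in total, so rehashing contributes O(1) amortized work per insertion.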
[Figure: hash table after rehashing]

Rehashing can be implemented in several ways with quadratic probing. One alternative is to rehash as soon as the table is half full. The other extreme is to rehash only when an insertion fails. A third, middle-of-the-road strategy is to rehash when the table reaches a certain load factor. Since performance does degrade as the load factor increases, the third strategy, implemented with a good cutoff, could be best. Rehashing for separate chaining hash tables is similar. The next figure shows that rehashing is simple to implement, and provides an implementation for separate chaining rehashing as well.

Hash Tables in the Standard Library

In C++11, the standard library includes hash table implementations of sets and maps, namely unordered_set and unordered_map, which parallel set and map. The items in the unordered_set (or the keys in the unordered_map) must provide an overloaded operator== and a hash function, as described earlier.
/*
 * Rehashing for quadratic probing hash table.
 */
void rehash( )
{
    vector<HashEntry> oldArray = array;

    // Create new double-sized, empty table
    array.resize( nextPrime( 2 * oldArray.size( ) ) );
    for( auto & entry : array )
        entry.info = EMPTY;

    // Copy table over
    currentSize = 0;
    for( auto & entry : oldArray )
        if( entry.info == ACTIVE )
            insert( std::move( entry.element ) );
}

/*
 * Rehashing for separate chaining hash table.
 */
void rehash( )
{
    vector<list<HashedObj>> oldLists = theLists;

    // Create new double-sized, empty table
    theLists.resize( nextPrime( 2 * theLists.size( ) ) );
    for( auto & thisList : theLists )
        thisList.clear( );

    // Copy table over
    currentSize = 0;
    for( auto & thisList : oldLists )
        for( auto & x : thisList )
            insert( std::move( x ) );
}

[Figure: rehashing for both separate chaining hash tables and probing hash tables]

Just as the set and map templates can also be instantiated with a function object that provides (or overrides a default) comparison function, unordered_set and unordered_map can be instantiated with function objects that provide a hash function and equality operator. Thus, for example, the next figure illustrates how an unordered set of case-insensitive strings can be maintained, assuming that some string operations are implemented elsewhere.
class CaseInsensitiveStringHash
{
  public:
    size_t operator( ) ( const string & s ) const
    {
        static hash<string> hf;
        return hf( toLower( s ) );              // toLower implemented elsewhere
    }

    bool operator( ) ( const string & lhs, const string & rhs ) const
    {
        return equalsIgnoreCase( lhs, rhs );    // equalsIgnoreCase is elsewhere
    }
};

unordered_set<string,CaseInsensitiveStringHash,CaseInsensitiveStringHash> s;

[Figure: creating a case-insensitive unordered_set]

These unordered classes can be used if it is not important for the entries to be viewable in sorted order. For instance, in the word-changing example earlier, there were three maps:

1. a map in which the key is a word length, and the value is a collection of all words of that word length
2. a map in which the key is a representative, and the value is a collection of all words with that representative
3. a map in which the key is a word, and the value is a collection of all words that differ in only one character from that word

Because the order in which word lengths are processed does not matter, the first map can be an unordered_map. Because the representatives are not even needed after the second map is built, the second map can be an unordered_map. The third map can also be an unordered_map, unless we want printHighChangeables to alphabetically list the subset of words that can be changed into a large number of other words. The performance of an unordered_map can often be superior to a map, but it is hard to know for sure without writing the code both ways.
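As a usage sketch (my example, not the book's): because both the hash and the equality test ignore case, strings differing only in case collapse to a single element.

#include <iostream>
#include <string>
#include <unordered_set>
using namespace std;

// Assumes the CaseInsensitiveStringHash function object from the figure above.
int main( )
{
    unordered_set<string,CaseInsensitiveStringHash,CaseInsensitiveStringHash> s;
    s.insert( "hello" );
    s.insert( "HeLLo" );             // equal to "hello", ignoring case
    cout << s.size( ) << endl;       // prints 1
    return 0;
}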
Hash Tables with Worst-Case O(1) Access

The hash tables that we have examined so far all have the property that, with reasonable load factors and appropriate hash functions, we can expect O(1) cost on average for insertions, removes, and searching. But what is the expected worst case for a search, assuming a reasonably well-behaved hash function?

For separate chaining, assuming a load factor of 1, this is one version of the classic balls and bins problem: Given N balls placed randomly (uniformly) in N bins, what is the expected number of balls in the most occupied bin? The answer is well known to be Θ(log N / log log N), meaning that on average, we expect some queries to take nearly logarithmic time. Similar types of bounds are observed (or provable) for the length of the longest expected probe sequence in a probing hash table.

We would like to obtain O(1) worst-case cost. In some applications, such as hardware implementations of lookup tables for routers and memory caches, it is especially important that the search have a definite (i.e., constant) amount of completion time. Let us assume that N is known in advance, so no rehashing is needed. If we are allowed to rearrange items as they are inserted, then O(1) worst-case cost is achievable for searches. In the remainder of this section we describe the earliest solution to this problem, namely perfect hashing, and then two more recent approaches that appear to offer promising alternatives to the classic hashing schemes that have been prevalent for many years.

Perfect Hashing

Suppose, for purposes of simplification, that all N items are known in advance. If a separate chaining implementation could guarantee that each list had at most a constant number of items, we would be done. We know that as we make more lists, the lists will on average be shorter, so theoretically if we have enough lists, then with reasonably high probability we might expect to have no collisions at all! But there are two fundamental problems with this approach: First, the number of lists might be unreasonably large; second, even with lots of lists, we might still get unlucky.

The second problem is relatively easy to address in principle. Suppose we choose the number of lists to be M (i.e., TableSize is M), which is sufficiently large to guarantee that with probability at least 1/2, there will be no collisions. Then if a collision is detected, we simply clear out the table and try again using a different hash function that is independent of the first. If we still get a collision, we try a third hash function, and so on. The expected number of trials will be at most 2 (since the success probability is 1/2), and this is all folded into the insertion cost. The crucial issue of how to produce additional hash functions is discussed later, in the section on universal hashing.

So we are left with determining how large M, the number of lists, needs to be. Unfortunately, M needs to be quite large. However, if M = N2, we can show that the table is collision free with probability at least 1/2, and this result can be used to make a workable modification to our basic approach.

Theorem. If N balls are placed into M = N2 bins, the probability that any bin has more than one ball is less than 1/2.

Proof. If a pair (i, j) of balls are placed in the same bin, we call that a collision. Let Ci,j be the expected number of collisions produced by any two balls (i, j). Clearly the probability that any two specified balls collide is 1/M, and thus Ci,j is 1/M, since the number of collisions that involve the pair (i, j) is either 0 or 1. Thus the expected number of collisions in the entire table is the sum of Ci,j over all pairs i < j. Since there are N(N - 1)/2 pairs, this sum is N(N - 1)/(2M) = N(N - 1)/(2N2) < 1/2. Since the expected number of collisions is below 1/2, the probability that there is even one collision must also be below 1/2.
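In symbols, the proof's counting argument combined with Markov's inequality (a restatement, not new material):

\[
E[\text{collisions}] \;=\; \binom{N}{2}\cdot\frac{1}{M} \;=\; \frac{N(N-1)}{2N^2} \;<\; \frac{1}{2},
\qquad
\Pr[\text{at least one collision}] \;\le\; E[\text{collisions}] \;<\; \frac{1}{2}.
\]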
Of course, using N2 lists is impractical. However, the preceding analysis suggests the following alternative: Use only N bins, but resolve the collisions in each bin by using hash tables instead of linked lists. The idea is that because the bins are expected to have only a few items each, the hash table that is used for each bin can be quadratic in the bin size. The figure shows the basic structure. Here, the primary hash table has ten bins. Several bins are empty and need no secondary table; each bin with one item is resolved by a secondary hash table with one position; each bin with two items is resolved into a secondary hash table with four (22) positions; and a bin with three items is resolved into a secondary hash table with nine (32) positions.

As with the original idea, each secondary hash table will be constructed using a different hash function until it is collision free. The primary hash table can also be constructed several times if the number of collisions that are produced is higher than required. This scheme is known as perfect hashing. All that remains to be shown is that the total size of the secondary hash tables is indeed expected to be linear.

Theorem. If N items are placed into a primary hash table containing N bins, then the total size of the secondary hash tables has expected value at most 2N.

[Figure: perfect hashing table using secondary hash tables]
Proof. Using the same logic as in the proof of the previous theorem, the expected number of pairwise collisions is at most N(N - 1)/(2N), or (N - 1)/2. Let bi be the number of items that hash to position i in the primary hash table; observe that bi2 space is used for this cell in the secondary hash table, and that this accounts for bi(bi - 1)/2 pairwise collisions, which we will call ci. Thus the amount of space, bi2, used for the ith secondary hash table is 2ci + bi. The total space is then 2 times the sum of the ci plus the sum of the bi. The total number of collisions is (N - 1)/2 (from the first sentence of this proof); the total number of items is of course N, so we obtain a total secondary space requirement of 2(N - 1)/2 + N < 2N.

Thus, the probability that the total secondary space requirement is more than 4N is at most 1/2 (since, otherwise, the expected value would be higher than 2N), so we can keep choosing hash functions for the primary table until we generate the appropriate secondary space requirement. Once that is done, each secondary hash table will itself require only an average of two trials to be collision free. After the tables are built, any lookup can be done in two probes.

Perfect hashing works if the items are all known in advance. There are dynamic schemes that allow insertions and deletions (dynamic perfect hashing), but instead we will investigate two newer alternatives that appear to be competitive in practice with the classic hashing algorithms.

Cuckoo Hashing

From our previous discussion, we know that in the balls and bins problem, if N items are randomly tossed into N bins, the size of the largest bin is expected to be Θ(log N / log log N). Since this bound has been known for a long time, and the problem has been well studied by mathematicians, it was surprising when, in the mid-1990s, it was shown that if, at each toss, two bins were randomly chosen and the item was tossed into the more empty bin (at the time), then the size of the largest bin would only be Θ(log log N), a significantly lower number. Quickly, a host of potential algorithms and data structures arose out of this new concept of the "power of two choices."

One of the ideas is cuckoo hashing. In cuckoo hashing, suppose we have N items. We maintain two tables, each more than half empty, and we have two independent hash functions that can assign each item to a position in each table. Cuckoo hashing maintains the invariant that an item is always stored in one of these two locations. As an example, the figure shows a potential cuckoo hash table for six items, with two tables of size 5 (these tables are too small, but serve well as an example), based on two randomly chosen hash functions.

[Figure: potential cuckoo hash table. Hash functions are shown on the right. For these six items, there are only three valid positions in Table 1 and three valid positions in Table 2, so it is not clear that this arrangement can easily be found.]
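Before walking through the example, note the payoff of the invariant: an item can only ever be in one of its two locations, so a lookup makes at most two probes. A minimal sketch (my code, with assumed names; deriving both hash functions from std::hash is only for illustration, since real implementations need genuinely independent functions):

#include <functional>
#include <optional>
#include <string>
#include <vector>
using namespace std;

struct TinyCuckoo
{
    vector<optional<string>> t1, t2;          // two small tables, as in the example

    TinyCuckoo( ) : t1( 5 ), t2( 5 ) { }

    size_t h1( const string & x ) const { return hash<string>{ }( x ) % t1.size( ); }
    size_t h2( const string & x ) const { return ( hash<string>{ }( x ) / 7 ) % t2.size( ); }

    bool contains( const string & x ) const   // worst case: exactly two probes
    {
        return ( t1[ h1( x ) ] && *t1[ h1( x ) ] == x )
            || ( t2[ h2( x ) ] && *t2[ h2( x ) ] == x );
    }
};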
Item A can be at either position 0 in Table 1 or position 2 in Table 2; item B can be at either position 0 in Table 1 or position 0 in Table 2; and so on. Immediately, this implies that a search in a cuckoo hash table requires at most two table accesses, and a remove is trivial once the item is located (lazy deletion is not needed now!).

But there is an important detail: How is the table built? For instance, in the figure, there are only three available locations in the first table for the six items, and there are only three available locations in the second table for the six items. So there are only six available locations for these six items, and thus we must find an ideal matching of slots for our six items. Clearly, if there were a seventh item, G, with locations 1 for Table 1 and 2 for Table 2, it could not be inserted into the table by any algorithm (the seven items would be competing for six table locations). One could argue that this means that the table would simply be too loaded (G would yield a 0.70 load factor), but at the same time, if the table had thousands of items, and were lightly loaded, but we had A, B, C, D, E, F, G with these hash positions, it would still be impossible to insert all seven of those items. So it is not at all obvious that this scheme can be made to work. The answer in this situation would be to pick another hash function, and this can be fine as long as it is unlikely that this situation occurs.

The cuckoo hashing algorithm itself is simple: To insert a new item, x, first make sure it is not already there. We can then use the first hash function, and if the (first) table location is empty, the item can be placed. So the figure shows the result of inserting A into an empty hash table.

Suppose now we want to insert B, which has hash locations 0 in Table 1 and 0 in Table 2. For the remainder of the algorithm, we will use (h1, h2) to specify the two locations, so B's locations are given by (0, 0). Table 1's position 0 is already occupied at this point. There are two options: One is to look in Table 2. The problem is that position 0 in Table 2 could also be occupied. It happens that in this case it is not, but the algorithm that the standard cuckoo hash table uses does not bother to look. Instead, it preemptively places the new item B in Table 1. In order to do so, it must displace A, so A moves to Table 2, using its Table 2 hash location, which is position 2. The result is shown in the figure. It is easy to insert C, and this is shown in the next figure.

Next we want to insert D, with hash locations (1, 0). But the Table 1 location (position 1) is already taken. Note also that the Table 2 location (position 0) is not already taken, but we don't look there. Instead, we have D replace C, and then C goes into Table 2 at position 4, as suggested by its second hash function. The resulting tables are shown in the next figure.

[Figure: cuckoo hash table after insertion of B]
[Figure: cuckoo hash table after insertion of C]
[Figure: cuckoo hash table after insertion of D]

After this is done, E can be easily inserted. So far, so good, but can we now insert F? The next figures show that this algorithm successfully inserts F, by displacing E, then A, and then B. Clearly, as we mentioned before, we cannot successfully insert G with hash locations (1, 2). If we were to try, we would displace D, then B, then A, E, F, and C, and then C would try to go back into Table 1, position 1, displacing G, which was placed there at the start.

[Figure: cuckoo hash table after insertion of E]
[Figure: cuckoo hash table, starting the insertion of F into the table in the previous figure. First, F displaces E.]
[Figure: continuing the insertion of F into the table in the previous figure. Next, E displaces A.]
[Figure: continuing the insertion of F into the table in the previous figure. Next, A displaces B.]
[Figure: completing the insertion of F into the table in the previous figure. Miraculously (?), B finds an empty position in Table 2.]

This would get us to the state shown in the next figure. So now G would try its alternate in Table 2 (location 2) and then displace A, which would displace B, which would displace D, which would displace C, which would displace F, which would displace E, which would now displace G from position 2. At this point, G would be in a cycle.

The central issue then concerns questions such as what is the probability of there being a cycle that prevents the insertion from completing, and what is the expected number of displacements required for a successful insertion? Fortunately, if the table's load factor is below 0.5, an analysis shows that the probability of a cycle is very low, that the expected number of displacements is a small constant, and that it is extremely unlikely that a successful insertion would require more than O(log N) displacements. As such, we can simply rebuild the tables with new hash functions after a certain number of displacements are detected.
[Figure: inserting G into the table. G displaces D, which displaces B, which displaces A, which displaces E, which displaces F, which displaces C, which displaces G. It is not yet hopeless, since when G is displaced, we would now try the other hash table, at position 2. However, while that could be successful in general, in this case there is a cycle and the insertion will not terminate.]

More precisely, the probability that a single insertion would require a new set of hash functions can be made to be O(1/N2); the new hash functions themselves generate N more insertions to rebuild the tables, but even so, this means the rebuilding cost is minimal. However, if the table's load factor is at 0.5 or higher, then the probability of a cycle becomes drastically higher, and this scheme is unlikely to work well at all.

After the publication of cuckoo hashing, numerous extensions were proposed. For instance, instead of two tables, we can use a higher number of tables, such as 3 or 4. While this increases the cost of a lookup, it also drastically increases the theoretical space utilization. In some applications the lookups through separate hash functions can be done in parallel and thus cost little to no additional time. Another extension is to allow each table cell to store multiple keys; again, this can increase space utilization, make it easier to do insertions, and be more cache-friendly. Various combinations are possible, as shown in the figure. And finally, often cuckoo hash tables are implemented as one giant table with two (or more) hash functions that probe the entire table, and some variations attempt to place an item in the second hash table immediately if there is an available spot, rather than starting a sequence of displacements.

Cuckoo Hash Table Implementation

Implementing cuckoo hashing requires a collection of hash functions; simply using hashCode to generate the collection of hash functions makes no sense, since any hashCode collisions will result in collisions in all the hash functions. The next figure shows a simple interface that can be used to send families of hash functions to the cuckoo hash table.

[Figure (table): maximum load factors for cuckoo hashing variations, for 2, 3, and 4 hash functions crossed with 1, 2, and 4 items per cell]
template <typename AnyType>
class CuckooHashFamily
{
  public:
    size_t hash( const AnyType & x, int which ) const;
    int getNumberOfFunctions( );
    void generateNewFunctions( );
};

[Figure: generic HashFamily interface for cuckoo hashing]

The next figure provides the class interface for cuckoo hashing. We will code a variant that allows an arbitrary number of hash functions (specified by the HashFamily template parameter type) and that uses a single array addressed by all the hash functions. Thus our implementation differs from the classic notion of two separately addressable hash tables. We can implement the classic version by making relatively minor changes to the code; however, the version provided in this section seems to perform better in tests using simple hash functions.

In the figure, we specify that the maximum load for the table is 0.40; if the load factor of the table is about to exceed this limit, an automatic table expansion is performed. We also define ALLOWED_REHASHES, which specifies how many rehashes we will perform if evictions take too long. In theory, ALLOWED_REHASHES can be infinite, since we expect only a small constant number of rehashes are needed; in practice, depending on several factors such as the number of hash functions, the quality of the hash functions, and the load factor, the rehashes could significantly slow things down, and it might be worthwhile to expand the table, even though this costs space. The data representation for the cuckoo hash table is straightforward: We store a simple array, the current size, and the collections of hash functions, represented in a HashFamily instance. We also maintain the number of hash functions, even though that is always obtainable from the HashFamily instance.

A later figure shows the constructor and makeEmpty methods, and these are straightforward. Next comes a pair of private methods. The first, myhash, is used to select the appropriate hash function and then scale it into a valid array index. The second, findPos, consults all the hash functions to return the index containing item x, or -1 if x is not found. findPos is then used by contains and remove, and we can see that those methods are easy to implement.

The difficult routine is insertion. The basic plan is to check to see if the item is already present, returning if so. Otherwise, we check to see if the table is fully loaded, and if so, we expand it. Finally we call a helper routine to do all the dirty work. The helper routine declares a variable, rehashes, to keep track of how many attempts have been made to rehash in this insertion. Our insertion routine is mutually recursive: If needed, insert eventually calls rehash, which eventually calls back into insert. Thus rehashes is declared in an outer scope for code simplicity.
template <typename AnyType, typename HashFamily>
class CuckooHashTable
{
  public:
    explicit CuckooHashTable( int size = 101 );

    void makeEmpty( );
    bool contains( const AnyType & x ) const;

    bool remove( const AnyType & x );
    bool insert( const AnyType & x );
    bool insert( AnyType && x );

  private:
    struct HashEntry
    {
        AnyType element;
        bool isActive;

        HashEntry( const AnyType & e = AnyType( ), bool a = false )
          : element{ e }, isActive{ a } { }
        HashEntry( AnyType && e, bool a = false )
          : element{ std::move( e ) }, isActive{ a } { }
    };

    bool insertHelper1( const AnyType & xx );
    bool insertHelper1( AnyType && xx );
    bool isActive( int currentPos ) const;

    size_t myhash( const AnyType & x, int which ) const;
    int findPos( const AnyType & x ) const;
    void expand( );
    void rehash( );
    void rehash( int newSize );

    static constexpr double MAX_LOAD = 0.40;
    static const int ALLOWED_REHASHES = 5;

    vector<HashEntry> array;
    int currentSize;
    int numHashFunctions;
    int rehashes;
    UniformRandom r;
    HashFamily hashFunctions;
};

[Figure: class interface for cuckoo hashing]
explicit CuckooHashTable( int size = 101 ) : array( nextPrime( size ) )
{
    numHashFunctions = hashFunctions.getNumberOfFunctions( );
    rehashes = 0;
    makeEmpty( );
}

void makeEmpty( )
{
    currentSize = 0;
    for( auto & entry : array )
        entry.isActive = false;
}

[Figure: routines to initialize and empty the cuckoo hash table]

/**
 * Compute the hash code for x using the specified function.
 */
size_t myhash( const AnyType & x, int which ) const
{
    return hashFunctions.hash( x, which ) % array.size( );
}

/**
 * Search all hash function places. Return the position
 * where the search terminates, or -1 if x is not found.
 */
int findPos( const AnyType & x ) const
{
    for( int i = 0; i < numHashFunctions; ++i )
    {
        int pos = myhash( x, i );
        if( isActive( pos ) && array[ pos ].element == x )
            return pos;
    }
    return -1;
}

[Figure: routines to find the location of an item in the cuckoo hash table and to compute the hash code for a given table]
/**
 * Return true if x is found.
 */
bool contains( const AnyType & x ) const
{
    return findPos( x ) != -1;
}

[Figure: routine to search a cuckoo hash table]

/**
 * Remove x from the hash table.
 * Return true if item was found and removed.
 */
bool remove( const AnyType & x )
{
    int currentPos = findPos( x );
    if( !isActive( currentPos ) )
        return false;

    array[ currentPos ].isActive = false;
    --currentSize;
    return true;
}

[Figure: routine to remove from a cuckoo hash table]

bool insert( const AnyType & x )
{
    if( contains( x ) )
        return false;

    if( currentSize >= array.size( ) * MAX_LOAD )
        expand( );

    return insertHelper1( x );
}

[Figure: public insert routine for cuckoo hashing]

Our basic logic is different from the classic scheme. We have already tested that the item to insert is not already present. In the helper routine, we check to see if any of the valid positions are empty; if so, we place our item in the first available position and we are done. Otherwise, we evict one of the existing items. However, there are some tricky issues, discussed after the code:
bool insertHelper1( const AnyType & xx )
{
    const int COUNT_LIMIT = 100;
    AnyType x = xx;

    while( true )
    {
        int lastPos = -1;
        int pos;

        for( int count = 0; count < COUNT_LIMIT; ++count )
        {
            for( int i = 0; i < numHashFunctions; ++i )
            {
                pos = myhash( x, i );
                if( !isActive( pos ) )
                {
                    array[ pos ] = std::move( HashEntry{ std::move( x ), true } );
                    ++currentSize;
                    return true;
                }
            }

            // None of the spots are available. Evict a random one.
            int i = 0;
            do
            {
                pos = myhash( x, r.nextInt( numHashFunctions ) );
            } while( pos == lastPos && i++ < 5 );

            lastPos = pos;
            std::swap( x, array[ pos ].element );
        }

        if( ++rehashes > ALLOWED_REHASHES )
        {
            expand( );       // Make the table bigger
            rehashes = 0;    // Reset the # of rehashes
        }
        else
            rehash( );       // Same table size, new hash functions
    }
}

[Figure: insertion routine for cuckoo hashing. This version chooses the item to evict randomly, attempting not to re-evict the last item. The table will attempt to select new hash functions (rehash) if there are too many evictions and will expand if there are too many rehashes.]
- Evicting the first item did not perform well in experiments.
- Evicting the last item did not perform well in experiments.
- Evicting the items in sequence (i.e., the first eviction uses hash function 0, the next uses hash function 1, etc.) did not perform well in experiments.
- Evicting the item purely randomly did not perform well in experiments; in particular, with only two hash functions, it tended to create cycles.

To alleviate the last problem, we maintain the last position that was evicted, and if our random item was the last evicted item, we select a new random item. This will loop forever if used with two hash functions, and both hash functions happen to probe to the same location, and that location was a prior eviction, so we limit the loop to five iterations (deliberately using an odd number).

The code for expand and rehash is shown in the figure. expand creates a larger array but keeps the same hash functions. The zero-parameter rehash leaves the array size unchanged but creates a new array that is populated with newly chosen hash functions.

void expand( )
{
    rehash( static_cast<int>( array.size( ) / MAX_LOAD ) );
}

void rehash( )
{
    hashFunctions.generateNewFunctions( );
    rehash( array.size( ) );
}

void rehash( int newSize )
{
    vector<HashEntry> oldArray = array;

    // Create new double-sized, empty table
    array.resize( nextPrime( newSize ) );
    for( auto & entry : array )
        entry.isActive = false;

    // Copy table over
    currentSize = 0;
    for( auto & entry : oldArray )
        if( entry.isActive )
            insert( std::move( entry.element ) );
}

[Figure: rehashing and expanding code for cuckoo hash tables]
template <int count>
class StringHashFamily
{
  public:
    StringHashFamily( ) : MULTIPLIERS( count )
    {
        generateNewFunctions( );
    }

    int getNumberOfFunctions( ) const
    {
        return count;
    }

    void generateNewFunctions( )
    {
        for( auto & mult : MULTIPLIERS )
            mult = r.nextInt( );
    }

    size_t hash( const string & x, int which ) const
    {
        const int multiplier = MULTIPLIERS[ which ];
        size_t hashVal = 0;

        for( auto ch : x )
            hashVal = multiplier * hashVal + ch;

        return hashVal;
    }

  private:
    vector<int> MULTIPLIERS;
    UniformRandom r;
};

[Figure: casual string hashing for cuckoo hashing; these hash functions do not provably satisfy the requirements needed for cuckoo hashing but offer decent performance if the table is not highly loaded and the alternate insertion routine is used]

Finally, the figure shows the StringHashFamily class, which provides a set of simple hash functions for strings. These hash functions replace the constant 37 in the earlier string hash function with randomly chosen numbers (not necessarily prime).
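Putting the pieces together, a usage sketch (my example, not the book's): the table template is instantiated with a family of three string hash functions, matching the interfaces in the figures above.

#include <iostream>
#include <string>
using namespace std;

// Assumes CuckooHashTable and StringHashFamily from the preceding figures.
int main( )
{
    CuckooHashTable<string,StringHashFamily<3>> t;

    t.insert( "hello" );
    t.insert( "world" );
    cout << t.contains( "hello" ) << endl;     // prints 1
    cout << t.contains( "goodbye" ) << endl;   // prints 0
    return 0;
}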
The benefits of cuckoo hashing include the worst-case constant lookup and deletion times, the avoidance of lazy deletion and extra data, and the potential for parallelism. However, cuckoo hashing is extremely sensitive to the choice of hash functions; the inventors of the cuckoo hash table reported that many of the standard hash functions that they attempted performed poorly in tests. Furthermore, although the insertion time is expected to be constant as long as the load factor is below 0.5, the bound that has been shown for the expected insertion cost for classic cuckoo hashing with two separate tables (both with load factor λ) is roughly 1/(1 - (4λ2)^(1/3)), which deteriorates rapidly as the load factor gets close to 0.5 (the formula itself makes no sense when λ equals or exceeds 0.5). Using lower load factors or more than two hash functions seems like a reasonable alternative.

Hopscotch Hashing

Hopscotch hashing is a new algorithm that tries to improve on the classic linear probing algorithm. Recall that in linear probing, cells are tried in sequential order, starting from the hash location. Because of primary and secondary clustering, this sequence can be long on average as the table gets loaded, and thus many improvements, such as quadratic probing, double hashing, and so forth, have been proposed to reduce the number of collisions. However, on some modern architectures, the locality produced by probing adjacent cells is a more significant factor than the extra probes, and linear probing can still be practical or even a best choice.

The idea of hopscotch hashing is to bound the maximal length of the probe sequence by a predetermined constant that is optimized to the underlying computer's architecture. Doing so would give constant-time lookups in the worst case, and, like cuckoo hashing, the lookup could be parallelized to simultaneously check the bounded set of possible locations.

If an insertion would place a new item too far from its hash location, then we efficiently go backward toward the hash location, evicting potential items. If we are careful, the evictions can be done quickly and guarantee that those evicted are not placed too far from their hash locations. The algorithm is deterministic in that, given a hash function, either the items can be evicted or they can't. The latter case implies that the table is likely too crowded, and a rehash is in order; but this would happen only at extremely high load factors, exceeding 0.9. For a table with a load factor of 0.5, the failure probability is almost zero (see the exercises).

Let MAX_DIST be the chosen bound on the maximum probe sequence. This means that item x must be found somewhere in the MAX_DIST positions listed in hash(x), hash(x) + 1, ..., hash(x) + (MAX_DIST - 1). In order to efficiently process evictions, we maintain, for each position pos, information telling which of the positions pos, pos + 1, ..., pos + (MAX_DIST - 1) are occupied by elements that hash to position pos.

As an example, the figure shows a fairly crowded hopscotch hash table, using MAX_DIST = 4. The bit array for position 6 shows that only position 6 has an item (C) with hash value 6: Only the first bit of Hop[6] is set. Hop[7] has the first two bits set, indicating that positions 7 and 8 (A and D) are occupied with items whose hash value is 7. And Hop[8] has only the third bit set, indicating that the item in position 10 (E) has hash value 8. If MAX_DIST is no more than 32, the Hop array is essentially an array of 32-bit integers, so the additional space requirement is not substantial. If Hop[pos] contains all 1s for some pos, then an attempt to insert an item whose hash value is pos will clearly fail, since there would now be MAX_DIST + 1 items trying to reside within MAX_DIST positions of pos, an impossibility.
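A minimal lookup sketch under these conventions (my code, with hypothetical types; wraparound at the end of the array is ignored): the hop bitmap restricts the search to a window of MAX_DIST slots, so a lookup is worst-case constant time.

#include <cstdint>
#include <optional>
#include <string>
#include <vector>
using namespace std;

const int MAX_DIST = 4;

struct HopEntry
{
    optional<string> item;    // element stored in this slot, if any
    uint32_t hop = 0;         // bit i set => slot pos+i holds an item hashing to pos
};

// Only the MAX_DIST slots starting at pos = hash(x) can hold x.
bool contains( const vector<HopEntry> & table, size_t pos, const string & x )
{
    for( int i = 0; i < MAX_DIST && pos + i < table.size( ); ++i )
        if( ( table[ pos ].hop >> i ) & 1 )
            if( table[ pos + i ].item && *table[ pos + i ].item == x )
                return true;
    return false;
}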
[Figure: hopscotch hashing table. The hops tell which of the positions in the block are occupied with cells containing this hash value. Thus Hop[8] = 0010 indicates that only position 10 currently contains an item whose hash value is 8, while positions 8, 9, and 11 do not.]

Continuing the example, suppose we now insert item H with hash value 9. Our normal linear probing would try to place it in position 13, but that is too far from the hash value of 9. So instead, we look to evict an item and relocate it to position 13. The only candidates to go into position 13 would be items with hash value of 10, 11, 12, or 13. If we examine Hop[10], we see that there are no candidates with hash value 10. But Hop[11] produces a candidate, G, with value 11 that can be placed into position 13. Since position 11 is now close enough to the hash value of H, we can now insert H. These steps, along with the changes to the Hop information, are shown in the figure.

Finally, we will attempt to insert I, whose hash value is 6. Linear probing suggests position 14, but of course that is too far away. Thus we look in Hop[11], and it tells us that G can move down, freeing up position 13. Now that 13 is vacant, we can look in Hop[10] to find another element to evict. But Hop[10] has all zeros in the first three positions, so there are no items with hash value 10 that can be moved. So we examine Hop[11]. There we find all zeros in the first two positions. So we try Hop[12], where we need the first position to be 1, which it is. Thus F can move down. These two steps are shown in the figure.

Notice that if this were not the case (for instance, if hash(F) were 9 instead of 12), we would be stuck and have to rehash. However, that is not a problem with our algorithm; instead, there would simply be no way to place all of C, I, A, D, E, B, H, and F (if F's hash value were 9): These items would all have hash values between 6 and 9, and would thus need to be placed in the seven spots between positions 6 and 12. But that would be eight items in seven spots, an impossibility. However, since this is not the case for our example, and we have evicted an item from position 12, we can now continue. The figure shows the remaining eviction from position 9 and the subsequent placement of I.
[Figure: hopscotch hashing table. Attempting to insert H: Linear probing suggests location 13, but that is too far, so we evict G from position 11 to find a closer position.]

[Figure: hopscotch hashing table. Attempting to insert I: Linear probing suggests position 14, but that is too far. Consulting Hop[11], we see that G can move down, leaving position 13 open. Consulting Hop[10] gives no suggestions. Hop[11] does not help either (why?), so Hop[12] suggests moving F.]

Hopscotch hashing is a relatively new algorithm, but the initial experimental results are very promising, especially for applications that make use of multiple processors and require significant parallelism and concurrency. It remains to be seen if either cuckoo hashing or hopscotch hashing emerges as a practical alternative to the classic separate chaining and linear/quadratic probing schemes.
[Figure: hopscotch hashing table. Insertion of I continues: Next, B is evicted, and finally, we have a spot that is close enough to the hash value and can insert I.]

Universal Hashing

Although hash tables are very efficient and have constant average cost per operation, assuming appropriate load factors, their analysis and performance depend on the hash function having two fundamental properties:

1. The hash function must be computable in constant time (i.e., independent of the number of items in the hash table).
2. The hash function must distribute its items uniformly among the array slots.

In particular, if the hash function is poor, then all bets are off, and the cost per operation can be linear. In this section, we discuss universal hash functions, which allow us to choose the hash function randomly in such a way that condition 2 above is satisfied. Although a strong motivation for the use of universal hash functions is to provide theoretical justification for the assumptions used in the classic hash table analyses, these functions can also be used in applications that require a high level of robustness, in which worst-case (or even substantially degraded) performance, perhaps based on inputs generated by a saboteur or hacker, simply cannot be tolerated. As before, we use M to represent TableSize.

Definition. A family H of hash functions is universal if, for any x ≠ y, the number of hash functions h in H for which h(x) = h(y) is at most |H|/M.

Notice that this definition holds for each pair of items, rather than being averaged over all pairs of items. The definition above means that if we choose a hash function randomly from a universal family H, then the probability of a collision between any two distinct items is at most 1/M, and when adding x into a table with N items, the probability of a collision at the initial point is at most N/M, or the load factor.
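Equivalently, in probabilistic form (my restatement): choosing h uniformly at random from H guarantees, for every fixed pair x ≠ y,

\[
\Pr_{h \in H}\bigl[h(x) = h(y)\bigr] \;=\; \frac{\bigl|\{\, h \in H : h(x) = h(y) \,\}\bigr|}{|H|} \;\le\; \frac{1}{M}.
\]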
The use of a universal hash function for separate chaining or hopscotch hashing would be sufficient to meet the assumptions used in the analysis of those data structures. However, it is not sufficient for cuckoo hashing, which requires a stronger notion of independence. In cuckoo hashing, we first see if there is a vacant location; if there is not, and we do an eviction, a different item is now involved in looking for a vacant location. This repeats until we find the vacant location, or decide to rehash (generally within O(log N) steps). In order for the analysis to work, each step must have a collision probability of 1/M independently, with a different item being subject to the hash function. We can formalize this independence requirement in the following definition.

Definition. A family H of hash functions is k-universal if, for any x1 ≠ y1, x2 ≠ y2, ..., xk ≠ yk, the number of hash functions h in H for which h(x1) = h(y1), h(x2) = h(y2), ..., and h(xk) = h(yk) is at most |H|/M^k.

With this definition, we see that the analysis of cuckoo hashing requires an O(log N)-universal hash function (after that many evictions, we give up and rehash). In this section we look only at universal hash functions.

To design a simple universal hash function, we will assume first that we are mapping very large integers into smaller integers ranging from 0 to M - 1. Let p be a prime larger than the largest input key. Our universal family H will consist of the following set of functions, where a and b are chosen randomly:

H = { H_{a,b}(x) = ((ax + b) mod p) mod M, where 1 ≤ a ≤ p - 1, 0 ≤ b ≤ p - 1 }

For example, in this family, three of the possible random choices of (a, b) yield three different hash functions:

H_{3,7}(x) = ((3x + 7) mod p) mod M
H_{4,1}(x) = ((4x + 1) mod p) mod M
H_{8,0}(x) = (8x mod p) mod M

Observe that there are p(p - 1) possible hash functions that can be chosen.

Theorem. The hash family H = { H_{a,b}(x) = ((ax + b) mod p) mod M, where 1 ≤ a ≤ p - 1, 0 ≤ b ≤ p - 1 } is universal.

Proof. Let x and y be distinct values, with x ≠ y, such that H_{a,b}(x) = H_{a,b}(y). Clearly if (ax + b) mod p is equal to (ay + b) mod p, then we will have a collision. However, this cannot happen: Subtracting equations yields a(x - y) ≡ 0 (mod p), which would mean that p divides a or p divides x - y, since p is prime. But neither can happen, since both a and x - y are between 1 and p - 1.

So let r = (ax + b) mod p and let s = (ay + b) mod p, and by the above argument, r ≠ s. Thus there are p possible values for r, and for each r, there are p - 1 possible values for s, for a total of p(p - 1) possible (r, s) pairs.
Notice that the number of (a, b) pairs and the number of (r, s) pairs is identical; thus each (r, s) pair will correspond to exactly one (a, b) pair if we can solve for (a, b) in terms of r and s. But that is easy: As before, subtracting equations yields a(x - y) ≡ (r - s) (mod p), which means that by multiplying both sides by the unique multiplicative inverse of (x - y) (which must exist, since x - y is not zero and p is prime), we obtain a, in terms of r and s. Then b follows.

Finally, this means that the probability that x and y collide is equal to the probability that r ≡ s (mod M), and the above analysis allows us to assume that r and s are chosen randomly, rather than a and b. Immediate intuition would place this probability at 1/M, but that would only be true if p were an exact multiple of M, and all possible (r, s) pairs were equally likely. Since p is prime, and r ≠ s, that is not exactly true, so a more careful analysis is needed. For a given r, the number of values of s that can collide mod M is at most ⌈p/M⌉ - 1 (the -1 is because r ≠ s). It is easy to see that this is at most (p - 1)/M. Thus the probability that x and y will generate a collision is at most 1/M (we divide by p - 1, because, as mentioned earlier in the proof, there are only p - 1 choices for s given r). This implies that the hash family is universal.

Implementation of this hash function would seem to require two mod operations: one mod p and the second mod M. The figure shows a simple implementation in C++, assuming that M is significantly less than 2^31. Because the computations must now be exactly as specified, and thus overflow is no longer acceptable, we promote to long long computations, which are at least 64 bits.

int universalHash( int x, int A, int B, int P, int M )
{
    return static_cast<int>( ( static_cast<long long>( A ) * x + B ) % P ) % M;
}

[Figure: simple implementation of universal hashing]
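A quick usage sketch (my example, with an arbitrarily chosen prime): drawing a random pair (a, b) selects one member of the family; rerunning with a new pair selects another.

#include <iostream>
#include <random>
using namespace std;

// Assumes universalHash( x, a, b, p, m ) from the figure above.
int main( )
{
    const int P = 104729;    // some prime larger than every key (an example value)
    const int M = 101;       // table size

    mt19937 gen( random_device{ }( ) );
    int a = uniform_int_distribution<int>{ 1, P - 1 }( gen );
    int b = uniform_int_distribution<int>{ 0, P - 1 }( gen );

    cout << universalHash( 372, a, b, P, M ) << endl;
    cout << universalHash( 373, a, b, P, M ) << endl;
    return 0;
}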
However, we are allowed to choose any prime p, as long as it is larger than M; hence, it makes sense to choose a prime that is most favorable for computations. One such prime is p = 2^31 - 1. Prime numbers of this form are known as Mersenne primes; other Mersenne primes include 2^5 - 1, 2^61 - 1, and 2^89 - 1. Just as multiplication by a Mersenne prime such as 31 can be implemented by a bit shift and a subtract, a mod operation involving a Mersenne prime can also be implemented by a bit shift and an addition: Suppose r = y mod p. If we divide y by (p + 1), then y = q(p + 1) + r', where q and r' are the quotient and remainder, respectively. Thus, r ≡ q(p + 1) + r' (mod p), and since (p + 1) ≡ 1 (mod p), we obtain r ≡ q + r' (mod p). The code below implements this idea, which is known as the Carter-Wegman trick; the bit shift computes the quotient and the bitwise-AND computes the remainder when dividing by (p + 1). These bitwise operations work because (p + 1) is an exact power of 2.

const int DIGS = 31;
const long long mersennep = ( 1LL << DIGS ) - 1;   // 2^31 - 1 (1LL avoids int overflow)

int universalHash( int x, int A, int B, int M )
{
    long long hashVal = static_cast<long long>( A ) * x + B;

    hashVal = ( hashVal >> DIGS ) + ( hashVal & mersennep );
    if( hashVal >= mersennep )
        hashVal -= mersennep;

    return static_cast<int>( hashVal ) % M;
}

[Figure: simple implementation of universal hashing with a Mersenne prime]

Since the remainder could be almost as large as p, the resulting sum might be larger than p, so we scale it back down before the final mod. Universal hash functions exist for strings also. First, choose any prime p larger than M (and larger than the largest character code). Then use our standard string hashing function, choosing the multiplier randomly between 1 and p - 1, and returning an intermediate hash value between 0 and p - 1, inclusive. Finally, apply a universal hash function to generate the final hash value between 0 and M - 1.

Extendible Hashing

Our last topic in this chapter deals with the case where the amount of data is too large to fit in main memory. As we saw earlier, the main consideration then is the number of disk accesses required to retrieve data. As before, we assume that at any point we have N records to store; the value of N changes over time. Furthermore, at most M records fit in one disk block. We will use M = 4 in this section.

If either probing hashing or separate chaining hashing is used, the major problem is that collisions could cause several blocks to be examined during a search, even for a well-distributed hash table. Furthermore, when the table gets too full, an extremely expensive rehashing step must be performed, which requires O(N) disk accesses.

A clever alternative, known as extendible hashing, allows a search to be performed in two disk accesses. Insertions also require few disk accesses.

We recall that a B-tree has depth O(log_{M/2} N). As M increases, the depth of a B-tree decreases. We could in theory choose M to be so large that the depth of the B-tree would be 1. Then any search after the first would take one disk access, since, presumably, the root node could be stored in main memory. The problem with this strategy is that the branching factor is so high that it would take considerable processing to determine which leaf the data was in. If the time to perform this step could be reduced, then we would have a practical scheme. This is exactly the strategy used by extendible hashing.
[Figure: extendible hashing, original data]

Let us suppose, for the moment, that our data consists of several six-bit integers. The figure shows an extendible hashing scheme for these data. The root of the "tree" contains four pointers determined by the leading two bits of the data. Each leaf has up to M = 4 elements. It happens that in each leaf the first two bits are identical; this is indicated by the number in parentheses. To be more formal, D will represent the number of bits used by the root, which is sometimes known as the directory. The number of entries in the directory is thus 2^D. dL is the number of leading bits that all the elements of some leaf L have in common. dL will depend on the particular leaf, and dL ≤ D.

Suppose that we want to insert the key 100100. This would go into the third leaf, but as the third leaf is already full, there is no room. We thus split this leaf into two leaves, which are now determined by the first three bits. This requires increasing the directory size to 3. These changes are reflected in the next figure.

Notice that all the leaves not involved in the split are now pointed to by two adjacent directory entries. Thus, although an entire directory is rewritten, none of the other leaves is actually accessed. If the key 000000 is now inserted, then the first leaf is split, generating two leaves with dL = 3. Since D = 3, the only change required in the directory is the updating of the 000 and 001 pointers. See the figure.

This very simple strategy provides quick access times for insert and search operations on large databases. There are a few important details we have not considered. First, it is possible that several directory splits will be required if the elements in a leaf agree in more than D leading bits. For instance, starting at the original example, with D = 2, if 111010, 111011, and finally 111100 are inserted, the directory size must be increased to 4 to distinguish between the five keys. This is an easy detail to take care of, but must not be forgotten. Second, there is the possibility of duplicate keys; if there are more than M duplicates, then this algorithm does not work at all. In this case, some other arrangements need to be made.
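A minimal sketch of the directory lookup (my code, with assumed types): the leading D bits of the key index the directory, and following the leaf pointer costs one disk access.

#include <cstdint>
#include <vector>
using namespace std;

struct Leaf;                  // a disk block holding up to M records (elsewhere)

struct Directory
{
    int D;                    // number of leading bits used by the root
    vector<Leaf*> entries;    // 2^D pointers; adjacent entries may share a leaf

    Leaf * find( uint32_t key, int keyBits = 6 ) const
    {
        uint32_t index = key >> ( keyBits - D );   // leading D bits of the key
        return entries[ index ];
    }
};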
[Figure: extendible hashing, after insertion of 100100 and directory split]
[Figure: extendible hashing, after insertion of 000000 and leaf split]

These possibilities suggest that it is important for the bits to be fairly random. This can be accomplished by hashing the keys into a reasonably long integer, hence the name extendible hashing.

We close by mentioning some of the performance properties of extendible hashing, which are derived after a very difficult analysis. These results are based on the reasonable assumption that the bit patterns are uniformly distributed.

The expected number of leaves is (N/M) log2 e. Thus the average leaf is ln 2 = 0.69 full. This is the same as for B-trees, which is not entirely surprising, since for both data structures new nodes are created when the (M + 1)th entry is added.
The more surprising result is that the expected size of the directory (in other words, 2^D) is O(N^(1 + 1/M)/M). If M is very small, then the directory can get unduly large. In this case, we can have the leaves contain pointers to the records instead of the actual records, thus increasing the value of M. This adds a second disk access to each search operation in order to maintain a smaller directory. If the directory is too large to fit in main memory, the second disk access would be needed anyway.

Summary

Hash tables can be used to implement the insert and contains operations in constant average time. It is especially important to pay attention to details such as load factor when using hash tables, since otherwise the time bounds are not valid. It is also important to choose the hash function carefully when the key is not a short string or integer.

For separate chaining hashing, the load factor should be close to 1, although performance does not significantly degrade unless the load factor becomes very large. For probing hashing, the load factor should not exceed 0.5, unless this is completely unavoidable. If linear probing is used, performance degenerates rapidly as the load factor approaches 1. Rehashing can be implemented to allow the table to grow (and shrink), thus maintaining a reasonable load factor. This is important if space is tight and it is not possible just to declare a huge hash table.

Other alternatives such as cuckoo hashing and hopscotch hashing can also yield good results. Because all these algorithms are constant time, it is difficult to make strong statements about which hash table implementation is the "best"; recent simulation results provide conflicting guidance and suggest that the performance can depend strongly on the types of items being manipulated, the underlying computer hardware, and the programming language.

Binary search trees can also be used to implement insert and contains operations. Although the resulting average time bounds are O(log N), binary search trees also support routines that require order and are thus more powerful. Using a hash table, it is not possible to find the minimum element. It is not possible to search efficiently for a string unless the exact string is known. A binary search tree could quickly find all items in a certain range; this is not supported by hash tables. Furthermore, the O(log N) bound is not necessarily that much more than O(1), especially since no multiplications or divisions are required by search trees.

On the other hand, the worst case for hashing generally results from an implementation error, whereas sorted input can make binary trees perform poorly. Balanced search trees are quite expensive to implement, so if no ordering information is required and there is any suspicion that the input might be sorted, then hashing is the data structure of choice.

Hashing applications are abundant. Compilers use hash tables to keep track of declared variables in source code. The data structure is known as a symbol table. Hash tables are the ideal application for this problem. Identifiers are typically short, so the hash function can be computed quickly, and alphabetizing the variables is often unnecessary.
A hash table is useful for any graph theory problem where the nodes have real names instead of numbers. Here, as the input is read, vertices are assigned integers from 1 onward by order of appearance. Again, the input is likely to have large groups of alphabetized entries. For example, the vertices could be computers. Then if one particular installation lists its computers as ibm1, ibm2, ibm3, ..., there could be a dramatic effect on efficiency if a search tree is used.

A third common use of hash tables is in programs that play games. As the program searches through different lines of play, it keeps track of positions it has seen by computing a hash function based on the position (and storing its move for that position). If the same position recurs, usually by a simple transposition of moves, the program can avoid expensive recomputation. This general feature of all game-playing programs is known as the transposition table.

Yet another use of hashing is in online spelling checkers. If misspelling detection (as opposed to correction) is important, an entire dictionary can be prehashed and words can be checked in constant time. Hash tables are well suited for this, because it is not important to alphabetize words; printing out misspellings in the order they occurred in the document is certainly acceptable.

Hash tables are often used to implement caches, both in software (for instance, the cache in your Internet browser) and in hardware (for instance, the memory caches in modern computers). They are also used in hardware implementations of routers.

We close by returning to the word puzzle problem. If the second algorithm described earlier is used, and we assume that the maximum word size is some small constant, then the time to read in the dictionary containing W words and put it in a hash table is O(W). This time is likely to be dominated by the disk I/O and not the hashing routines. The rest of the algorithm would test for the presence of a word for each ordered quadruple (row, column, orientation, number of characters). As each lookup would be O(1), and there are only a constant number of orientations (8) and characters per word, the running time of this phase would be O(R * C). The total running time would be O(R * C + W), which is a distinct improvement over the original O(R * C * W). We could make further optimizations, which would decrease the running time in practice; these are described in the exercises.

Exercises

1. Given input {4371, 1323, 6173, 4199, 4344, 9679, 1989} and a hash function h(x) = x mod 10, show the resulting:
   a. separate chaining hash table
   b. hash table using linear probing
   c. hash table using quadratic probing
   d. hash table with second hash function h2(x) = 7 - (x mod 7)

2. Show the result of rehashing the hash tables in Exercise 1.

3. Write a program to compute the number of collisions required in a long random sequence of insertions using linear probing, quadratic probing, and double hashing.
4. A large number of deletions in a separate chaining hash table can cause the table to be fairly empty, which wastes space. In this case, we can rehash to a table half as large. Assume that we rehash to a larger table when there are twice as many elements as the table size. How empty should the table be before we rehash to a smaller table?

5. Reimplement separate chaining hash tables using a vector of singly linked lists instead of vectors.

6. The isEmpty routine for quadratic probing has not been written. Can you implement it by returning the expression currentSize == 0?

7. In the quadratic probing hash table, suppose that instead of inserting a new item into the location suggested by findPos, we insert it into the first inactive cell on the search path (thus, it is possible to reclaim a cell that is marked deleted, potentially saving space).
   a. Rewrite the insertion algorithm to use this observation. Do this by having findPos maintain, with an additional variable, the location of the first inactive cell it encounters.
   b. Explain the circumstances under which the revised algorithm is faster than the original algorithm. Can it be slower?

8. Suppose instead of quadratic probing, we use "cubic probing"; here the ith probe is at hash(x) + i^3. Does cubic probing improve on quadratic probing?

9. Using a standard dictionary, and a table size that approximates a load factor of 1, compare the number of collisions produced by the hash function given earlier in the chapter and the hash function shown in the figure below.

/*
 * FNV-1a hash routine for string objects.
 */
unsigned int hash( const string & key, int tableSize )
{
    unsigned int hashVal = 2166136261;

    for( char ch : key )
        hashVal = ( hashVal ^ ch ) * 16777619;

    return hashVal % tableSize;
}

[Figure: alternative hash function for Exercise 9]

10. What are the advantages and disadvantages of the various collision resolution strategies?

11. Suppose that to mitigate the effects of secondary clustering we use as the collision resolution function f(i) = i * r(hash(x)), where hash(x) is the 32-bit hash value (not yet scaled to a suitable array index), and r(y) = |48271y mod (2^31 - 1)| mod TableSize.
mod tablesize (section describes method of performing this calculation without overflowsbut it is unlikely that overflow matters in this case explain why this strategy tends to avoid secondary clusteringand compare this strategy with both double hashing and quadratic probing rehashing requires recomputing the hash function for all items in the hash table since computing the hash function is expensivesuppose objects provide hash member function of their ownand each object stores the result in an additional data member the first time the hash function is computed for it show how such scheme would apply for the employee class in figure and explain under what circumstances the remembered hash value remains valid in each employee write program to implement the following strategy for multiplying two sparse polynomials of size and nrespectively each polynomial is represented as list of objects consisting of coefficient and an exponent we multiply each term in by term in for total of mn operations one method is to sort these terms and combine like termsbut this requires sorting mn recordswhich could be expensiveespecially in small-memory environments alternativelywe could merge terms as they are computed and then sort the result write program to implement the alternative strategy if the output polynomial has about ( ntermswhat is the running time of both methods describe procedure that avoids initializing hash table (at the expense of memory suppose we want to find the first occurrence of string pk in long input string an we can solve this problem by hashing the pattern stringobtaining hash value hp and comparing this value with the hash value formed from ak ak+ ak+ and so on until an- + an- + an if we have match of hash valueswe compare the strings character by character to verify the match we return the position (in aif the strings actually do matchand we continue in the unlikely event that the match is false show that if the hash value of + ai+ - is knownthen the hash value of ai+ ai+ ai+ can be computed in constant time show that the running time is ( nplus the time spent refuting false matches show that the expected number of false matches is negligible write program to implement this algorithm describe an algorithm that runs in ( nworst-case time  describe an algorithm that runs in ( /kaverage time nonstandard +extension adds syntax that allows switch statement to work with the string type (instead of the primitive integer typesexplain how hash tables can be used by the compiler to implement this language addition an (old-stylebasic program consists of series of statements numbered in ascending order control is passed by use of goto or gosub and statement number write program that reads in legal basic program and renumbers the statements so
A nonstandard C++ extension adds syntax that allows a switch statement to work with the string type (instead of the primitive integer types). Explain how hash tables can be used by the compiler to implement this language addition.

An (old-style) BASIC program consists of a series of statements numbered in ascending order. Control is passed by use of goto or gosub and a statement number. Write a program that reads in a legal BASIC program and renumbers the statements so that the first starts at some fixed number and each statement has a number a fixed amount higher than the previous statement. You may assume an upper limit on the number of statements, but the statement numbers in the input might be as large as a 32-bit integer. Your program must run in linear time.

a. Implement the word puzzle program using the algorithm described at the end of the chapter.
b. We can get a big speed increase by storing, in addition to each word W, all of W's prefixes. (If one of W's prefixes is another word in the dictionary, it is stored as a real word.) Although this may seem to increase the size of the hash table drastically, it does not, because many words have the same prefixes. When a scan is performed in a particular direction, if the word that is looked up is not even in the hash table as a prefix, then the scan in that direction can be terminated early. Use this idea to write an improved program to solve the word puzzle.
c. If we are willing to sacrifice the sanctity of the hash table ADT, we can speed up the program in part (b) by noting that if, for example, we have just computed the hash function for "excel," we do not need to compute the hash function for "excels" from scratch. Adjust your hash function so that it can take advantage of its previous calculation.
d. In an earlier chapter we suggested using binary search. Incorporate the idea of using prefixes into your binary search algorithm. The modification should be simple. Which algorithm is faster?

Under certain assumptions, the expected cost of an insertion into a hash table with secondary clustering is given by 1/(1 - lambda) - lambda - ln(1 - lambda). Unfortunately, this formula is not accurate for quadratic probing. However, assuming that it is, determine the following:
a. the expected cost of an unsuccessful search
b. the expected cost of a successful search

Implement a generic Map that supports the insert and lookup operations. The implementation will store a hash table of pairs (key, definition). You will lookup a definition by providing a key. The Dictionary skeleton figure below provides the Map specification (minus some details).

Implement a spelling checker by using a hash table. Assume that the dictionary comes from two sources: an existing large dictionary and a second file containing a personal dictionary. Output all misspelled words and the line numbers on which they occur. Also, for each misspelled word, list any words in the dictionary that are obtainable by applying any of the following rules:
a. Add one character.
b. Remove one character.
c. Exchange adjacent characters.

Prove Markov's inequality: If X is any random variable and a > 0, then Pr[|X| >= a] <= E[|X|]/a. Show how this inequality can be applied to the two theorems proved earlier in the chapter.

If a hopscotch table with parameter MAX_DIST has load factor 0.5, what is the approximate probability that an insertion requires a rehash?
    template <typename HashedObj, typename Object>
    class Pair
    {
        HashedObj key;
        Object    def;
        // Appropriate constructors, etc.
    };

    template <typename HashedObj, typename Object>
    class Dictionary
    {
      public:
        Dictionary( );

        void insert( const HashedObj & key, const Object & definition );
        const Object & lookup( const HashedObj & key ) const;
        bool isEmpty( ) const;
        void makeEmpty( );

      private:
        HashTable<Pair<HashedObj,Object>> items;
    };

Figure: Dictionary skeleton for the generic Map exercise above.

Implement a hopscotch hash table and compare its performance with linear probing, separate chaining, and cuckoo hashing.

Implement the classic cuckoo hash table in which two separate tables are maintained. The simplest way to do this is to use a single array and modify the hash function to access either the top half or the bottom half.

Extend the classic cuckoo hash table to use d hash functions.

Show the result of inserting the given keys into an initially empty extendible hashing data structure.

Write a program to implement extendible hashing. If the table is small enough to fit in main memory, how does its performance compare with separate chaining and open addressing hashing?

References

Despite the apparent simplicity of hashing, much of the analysis is quite difficult, and there are still many unresolved questions. There are also many interesting theoretical issues.
hashing hashing dates to at least when luhn wrote an internal ibm memorandum that used separate chaining hashing early papers on hashing are [ and [ wealth of information on the subjectincluding an analysis of hashing with linear probing under the assumption of totally random and independent hashingcan be found in [ more recent results have shown that linear probing requires only -independent hash functions [ an excellent survey on early classic hash tables methods is [ ][ contains suggestionsand pitfallsfor choosing hash functions precise analytic and simulation results for separate chaininglinear probingquadratic probingand double hashing can be found in [ howeverdue to changes (improvementsin computer architecture and compilerssimulation results tend to quickly become dated an analysis of double hashing can be found in [ and [ yet another collision resolution scheme is coalesced hashingdescribed in [ yao [ has shown the uniform hashingin which no clustering existsis optimal with respect to cost of successful searchassuming that items cannot move once placed universal hash functions were first described in [ and [ ]the latter paper introduces the "carter-wegman trickof using mersenne prime numbers to avoid expensive mod operations perfect hashing is described in [ ]and dynamic version of perfect hashing was described in [ [ is survey of some classic dynamic hashing schemes the (log nlog log nbound on the length of the longest list in separate chaining was shown (in precise formin [ the "power of two choices,showing that when the shorter of two randomly selected lists is chosenthen the bound on the length of the longest list is lowered to only (log log )was first described in [ an early example of the power of two choices is [ the classic work on cuckoo hashing is [ ]since the initial papera host of new results have appeared that analyze the amount of independence needed in the hash functions and describe alternative implementations [ ][ ][ ][ ][ ][ ][ ][ ][ and [ hopscotch hashing appeared in [ extendible hashing appears in [ ]with analysis in [ and [ exercise ( -dis from [ part (eis from [ ]and part (fis from [ the fnv- hash function described in exercise is due to fowlernolland vo arbitmanm naorand segev"de-amortized cuckoo hashingprovable worst-case performance and experimental results,proceedings of the th international colloquium on automatalanguages and programming ( ) - azara brodera karlinand upfal"balanced allocations,siam journal of computing ( ) - boyer and moore" fast string searching algorithm,communications of the acm ( ) - broder and mitzenmacher"using multiple hash functions to improve ip lookups,proceedings of the twentieth ieee infocom ( ) - carter and wegman"universal classes of hash functions,journal of computer and system sciences ( ) - cohen and kane"bounds on the independence required for cuckoo hashing,preprint devroye and morin"cuckoo hashingfurther analysis,information processing letters ( ) -
dietzfelbingera karlink melhornf meyer auf der heideh rohnertand tarjan"dynamic perfect hashingupper and lower bounds,siam journal on computing ( ) - dietzfelbinger and schellbach"on risks of using cuckoo hashing with simple universal hash classes,proceedings of the twentieth annual acm-siam symposium on discrete algorithms ( ) - dietzfelbinger and weidling"balanced allocation and dictionaries with tightly packed constant size bins,theoretical computer science ( ) - dumey"indexing for rapid random-access memory,computers and automation ( ) - enbody and du"dynamic hashing schemes,computing surveys ( ) - faginj nievergeltn pippengerand strong"extendible hashing-- fast access method for dynamic files,acm transactions on database systems ( ) - flajolet"on the performance evaluation of extendible hashing and trie searching,acta informatica ( ) - fotakisr paghp sandersand spirakis"space efficient hash tables with worst case constant access time,theory of computing systems ( ) - fredmanj komlosand szemeredi"storing sparse table with ( worst case access time,journal of the acm ( ) - friezep melstedand mitzenmacher"an analysis of random-walk cuckoo hashing,proceedings of the twelfth international workshop on approximation algorithms in combinatorial optimization (approx( ) - gonnet"expected length of the longest probe sequence in hash code searching,journal of the association for computing machinery ( ) - gonnet and baeza-yateshandbook of algorithms and data structures ed addison-wesleyreadingmass guibas and szemeredi"the analysis of double hashing,journal of computer and system sciences ( ) - herlihyn shavitand tzafrir"hopscotch hashing,proceedings of the twenty-second international symposium on distributed computing ( ) - karp and rabin"efficient randomized pattern-matching algorithms,aiken computer laboratory report tr- - harvard universitycambridgemass kirsch and mitzenmacher"the power of one movehashing schemes for hardware,proceedings of the th ieee international conference on computer communications (infocom( ) - kirschm mitzenmacherand wieder"more robust hashingcuckoo hashing with stash,proceedings of the sixteenth annual european symposium on algorithms ( ) - knuththe art of computer programmingvol sorting and searching ed addisonwesleyreadingmass knuthj morrisand pratt"fast pattern matching in strings,siam journal on computing ( ) - lueker and molodowitch"more analysis of double hashing,proceedings of the twentieth acm symposium on theory of computing ( ) - maurer and lewis"hash table methods,computing surveys ( ) -
hashing mckenzier harriesand bell"selecting hashing algorithm,software--practice and experience ( ) - pagh and rodler"cuckoo hashing,journal of algorithms ( ) - patrascu and thorup"on the -independence required by linear probing and minwise independence,proceedings of the th international colloquium on automatalanguagesand programming ( ) - peterson"addressing for random access storage,ibm journal of research and development ( ) - vitter"implementations for coalesced hashing,communications of the acm ( ) - vocking"how asymmetry helps load balancing,journal of the acm ( ) - wegman and carter"new hash functions and their use in authentication and set equality,journal of computer and system sciences ( ) - yao" note on the analysis of extendible hashing,information processing letters ( ) - yao"uniform hashing is optimal,journal of the acm ( ) -
Priority Queues (Heaps)

Although jobs sent to a printer are generally placed on a queue, this might not always be the best thing to do. For instance, one job might be particularly important, so it might be desirable to allow that job to be run as soon as the printer is available. Conversely, if, when the printer becomes available, there are several short jobs and one very long job, it might be reasonable to make the long job go last, even if it is not the last job submitted. (Unfortunately, most systems do not do this, which can be particularly annoying at times.)

Similarly, in a multiuser environment, the operating system scheduler must decide which of several processes to run. Generally, a process is allowed to run only for a fixed period of time. One algorithm uses a queue. Jobs are initially placed at the end of the queue. The scheduler will repeatedly take the first job on the queue, run it until either it finishes or its time limit is up, and place it at the end of the queue if it does not finish. This strategy is generally not appropriate, because very short jobs will seem to take a long time because of the wait involved to run. Generally, it is important that short jobs finish as fast as possible, so these jobs should have precedence over jobs that have already been running. Furthermore, some jobs that are not short are still very important and should also have precedence.

This particular application seems to require a special kind of queue, known as a priority queue. In this chapter, we will discuss efficient implementation of the priority queue ADT, uses of priority queues, and advanced implementations of priority queues. The data structures we will see are among the most elegant in computer science.

Model

A priority queue is a data structure that allows at least the following two operations: insert, which does the obvious thing, and deleteMin, which finds, returns, and removes the minimum element in the priority queue. The insert operation is the equivalent of enqueue, and deleteMin is the priority queue equivalent of the queue's dequeue operation.

The C++ code provides two versions of deleteMin. One removes the minimum; the other removes the minimum and stores the removed value in an object passed by reference.
Figure: Basic model of a priority queue (deleteMin removes an element; insert adds one).

As with most data structures, it is sometimes possible to add other operations, but these are extensions and not part of the basic model depicted in the figure above.

Priority queues have many applications besides operating systems. In a later chapter, we will see how priority queues are used for external sorting. Priority queues are also important in the implementation of greedy algorithms, which operate by repeatedly finding a minimum; we will see specific examples in later chapters. In this chapter, we will see a use of priority queues in discrete event simulation.

Simple Implementations

There are several obvious ways to implement a priority queue. We could use a simple linked list, performing insertions at the front in O(1) and traversing the list, which requires O(n) time, to delete the minimum. Alternatively, we could insist that the list be kept always sorted; this makes insertions expensive (O(n)) and deleteMins cheap (O(1)). The former is probably the better idea of the two, based on the fact that there are never more deleteMins than insertions.

Another way of implementing priority queues would be to use a binary search tree. This gives an O(log n) average running time for both operations. This is true in spite of the fact that although the insertions are random, the deletions are not. Recall that the only element we ever delete is the minimum. Repeatedly removing a node that is in the left subtree would seem to hurt the balance of the tree by making the right subtree heavy. However, the right subtree is random. In the worst case, where the deleteMins have depleted the left subtree, the right subtree would have at most twice as many elements as it should. This adds only a small constant to its expected depth. Notice that the bound can be made into a worst-case bound by using a balanced tree; this protects one against bad insertion sequences.

Using a search tree could be overkill because it supports a host of operations that are not required. The basic data structure we will use will not require links and will support both operations in O(log n) worst-case time. Insertion will actually take constant time on average, and our implementation will allow building a priority queue of n items in linear time, if no deletions intervene. We will then discuss how to implement priority queues to support efficient merging. This additional operation seems to complicate matters a bit and apparently requires the use of a linked structure.
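For readers who want to experiment before the implementation is developed, the standard library's priority_queue already realizes this model. By default it is a max-heap, so a comparator is supplied in the sketch below to obtain the minimum-first behavior used throughout this chapter.

    #include <functional>
    #include <iostream>
    #include <queue>
    #include <vector>
    using namespace std;

    int main( )
    {
        // greater<int> turns the default max-heap into a min-heap,
        // matching the insert/deleteMin model described above.
        priority_queue<int, vector<int>, greater<int>> pq;

        for( int x : { 6, 2, 9, 4 } )
            pq.push( x );              // insert

        while( !pq.empty( ) )
        {
            cout << pq.top( ) << " ";  // findMin
            pq.pop( );                 // deleteMin
        }
        cout << endl;                  // prints 2 4 6 9
    }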
Binary Heap

The implementation we will use is known as a binary heap. Its use is so common for priority queue implementations that, in the context of priority queues, when the word heap is used without a qualifier, it is generally assumed to be referring to this implementation of the data structure. In this section, we will refer to binary heaps merely as heaps. Like binary search trees, heaps have two properties, namely, a structure property and a heap-order property. As with AVL trees, an operation on a heap can destroy one of the properties, so a heap operation must not terminate until all heap properties are in order. This turns out to be simple to do.

Structure Property

A heap is a binary tree that is completely filled, with the possible exception of the bottom level, which is filled from left to right. Such a tree is known as a complete binary tree. The figure below shows an example.

It is easy to show that a complete binary tree of height h has between 2^h and 2^(h+1) - 1 nodes. This implies that the height of a complete binary tree is floor(log n), which is clearly O(log n).

An important observation is that because a complete binary tree is so regular, it can be represented in an array and no links are necessary. The array in the second figure below corresponds to the heap in the first.

Figure: A complete binary tree.
Figure: Array implementation of a complete binary tree.
    template <typename Comparable>
    class BinaryHeap
    {
      public:
        explicit BinaryHeap( int capacity = 100 );
        explicit BinaryHeap( const vector<Comparable> & items );

        bool isEmpty( ) const;
        const Comparable & findMin( ) const;

        void insert( const Comparable & x );
        void insert( Comparable && x );
        void deleteMin( );
        void deleteMin( Comparable & minItem );
        void makeEmpty( );

      private:
        int                currentSize;  // Number of elements in heap
        vector<Comparable> array;        // The heap array

        void buildHeap( );
        void percolateDown( int hole );
    };

Figure: Class interface for priority queue.

For any element in array position i, the left child is in position 2i, the right child is in the cell after the left child (2i + 1), and the parent is in position floor(i/2). Thus, not only are links not required, but the operations required to traverse the tree are extremely simple and likely to be very fast on most computers. The only problem with this implementation is that an estimate of the maximum heap size is required in advance, but typically this is not a problem (and we can resize if needed). The array has a position 0; more on this later.

A heap data structure will, then, consist of an array (of Comparable objects) and an integer representing the current heap size. The figure above shows a priority queue interface. Throughout this chapter, we shall draw the heaps as trees, with the implication that an actual implementation will use simple arrays.
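The index arithmetic just described is simple enough to capture in three one-line helpers; these names are illustrative and are not part of the interface above.

    // Index arithmetic for a complete binary tree stored in an array
    // with the root at position 1.
    inline int parent( int i )     { return i / 2;     }
    inline int leftChild( int i )  { return 2 * i;     }
    inline int rightChild( int i ) { return 2 * i + 1; }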
Heap-Order Property

The property that allows operations to be performed quickly is the heap-order property. Since we want to be able to find the minimum quickly, it makes sense that the smallest element should be at the root. If we consider that any subtree should also be a heap, then any node should be smaller than all of its descendants.

Applying this logic, we arrive at the heap-order property. In a heap, for every node X, the key in the parent of X is smaller than (or equal to) the key in X, with the exception of the root (which has no parent). In the figure below, the tree on the left is a heap, but the tree on the right is not (the dashed line shows the violation of heap order). By the heap-order property, the minimum element can always be found at the root. Thus, we get the extra operation, findMin, in constant time.

Figure: Two complete trees (only the left tree is a heap).

Basic Heap Operations

It is easy (both conceptually and practically) to perform the two required operations. All the work involves ensuring that the heap-order property is maintained.

insert

To insert an element X into the heap, we create a hole in the next available location, since otherwise, the tree will not be complete. If X can be placed in the hole without violating heap order, then we do so and are done. Otherwise, we slide the element that is in the hole's parent node into the hole, thus bubbling the hole up toward the root. We continue this process until X can be placed in the hole. The figures below show that to insert 14 we create a hole in the next available heap location. Inserting 14 in the hole would violate the heap-order property, so 31 is slid down into the hole. This strategy is continued until the correct location for 14 is found.

This general strategy is known as a percolate up; the new element is percolated up the heap until the correct location is found. Insertion is easily implemented with the code shown below.

Analogously, we can declare a (max)heap, which enables us to efficiently find and remove the maximum element, by changing the heap-order property. Thus, a priority queue can be used to find either a minimum or a maximum, but this needs to be decided ahead of time.
Figure: Attempt to insert 14: creating the hole, and bubbling the hole up.
Figure: The remaining two steps to insert 14 in previous heap.

    /**
     * Insert item x, allowing duplicates.
     */
    void insert( const Comparable & x )
    {
        if( currentSize == array.size( ) - 1 )
            array.resize( array.size( ) * 2 );

        // Percolate up
        int hole = ++currentSize;
        Comparable copy = x;

        array[ 0 ] = std::move( copy );
        for( ; x < array[ hole / 2 ]; hole /= 2 )
            array[ hole ] = std::move( array[ hole / 2 ] );
        array[ hole ] = std::move( array[ 0 ] );
    }

Figure: Procedure to insert into a binary heap.
We could have implemented the percolation in the insert routine by performing repeated swaps until the correct order was established, but a swap requires three assignment statements. If an element is percolated up d levels, the number of assignments performed by the swaps would be 3d. Our method uses d + 1 assignments.

If the element to be inserted is the new minimum, it will be pushed all the way to the top. At some point, hole will be 1 and we will want to break out of the loop. We could do this with an explicit test, or we can put a copy of the inserted item in position 0 in order to make the loop terminate. We elect to place X into position 0.

The time to do the insertion could be as much as O(log n), if the element to be inserted is the new minimum and is percolated all the way to the root. On average, the percolation terminates early; it has been shown that only a small constant number of comparisons is required on average to perform an insert, so the average insert moves an element up only a constant number of levels.

deleteMin

deleteMins are handled in a similar manner as insertions. Finding the minimum is easy; the hard part is removing it. When the minimum is removed, a hole is created at the root. Since the heap now becomes one smaller, it follows that the last element X in the heap must move somewhere in the heap. If X can be placed in the hole, then we are done. This is unlikely, so we slide the smaller of the hole's children into the hole, thus pushing the hole down one level. We repeat this step until X can be placed in the hole. Thus, our action is to place X in its correct spot along a path from the root containing minimum children.

In the figure below, the left picture shows a heap prior to the deleteMin. After 13 is removed, we must now try to place 31 in the heap. The value 31 cannot be placed in the hole, because this would violate heap order. Thus, we place the smaller child (14) in the hole, sliding the hole down one level. We repeat this again, and since 31 is larger than 19, we place 19 into the hole and create a new hole one level deeper. We then place 26 in the hole and create a new hole on the bottom level, since, once again, 31 is too large. Finally, we are able to place 31 in the hole. This general strategy is known as a percolate down. We use the same technique as in the insert routine to avoid the use of swaps in this routine.

A frequent implementation error in heaps occurs when there are an even number of elements in the heap, and the one node that has only one child is encountered.

Figure: Creation of the hole at the root.
Figure: Next two steps in deleteMin.
Figure: Last two steps in deleteMin.

You must make sure not to assume that there are always two children, so this usually involves an extra test. In the code shown below, we've done this test when computing the child index. One extremely tricky solution is always to ensure that your algorithm thinks every node has two children. Do this by placing a sentinel, of value higher than any in the heap, at the spot after the heap ends, at the start of each percolate down when the heap size is even. You should think very carefully before attempting this, and you must put in a prominent comment if you do use this technique. Although this eliminates the need to test for the presence of a right child, you cannot eliminate the requirement that you test when you reach the bottom, because this would require a sentinel for every leaf.

The worst-case running time for this operation is O(log n). On average, the element that is placed at the root is percolated almost to the bottom of the heap (which is the level it came from), so the average running time is O(log n).

Other Heap Operations

Notice that although finding the minimum can be performed in constant time, a heap designed to find the minimum element (also known as a (min)heap) is of no help whatsoever in finding the maximum element. In fact, a heap has very little ordering information.
    /**
     * Remove the minimum item.
     * Throws UnderflowException if empty.
     */
    void deleteMin( )
    {
        if( isEmpty( ) )
            throw UnderflowException{ };

        array[ 1 ] = std::move( array[ currentSize-- ] );
        percolateDown( 1 );
    }

    /**
     * Remove the minimum item and place it in minItem.
     * Throws UnderflowException if empty.
     */
    void deleteMin( Comparable & minItem )
    {
        if( isEmpty( ) )
            throw UnderflowException{ };

        minItem = std::move( array[ 1 ] );
        array[ 1 ] = std::move( array[ currentSize-- ] );
        percolateDown( 1 );
    }

    /**
     * Internal method to percolate down in the heap.
     * hole is the index at which the percolate begins.
     */
    void percolateDown( int hole )
    {
        int child;
        Comparable tmp = std::move( array[ hole ] );

        for( ; hole * 2 <= currentSize; hole = child )
        {
            child = hole * 2;
            if( child != currentSize && array[ child + 1 ] < array[ child ] )
                ++child;
            if( array[ child ] < tmp )
                array[ hole ] = std::move( array[ child ] );
            else
                break;
        }
        array[ hole ] = std::move( tmp );
    }

Figure: Method to perform deleteMin in a binary heap.
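Assuming the BinaryHeap class assembled from the preceding figures is available, a minimal driver might look like the following sketch.

    #include <iostream>
    using namespace std;
    // Assumes the BinaryHeap<Comparable> class from the preceding figures.

    int main( )
    {
        BinaryHeap<int> h( 100 );

        for( int x : { 13, 21, 16, 24, 31, 19, 68 } )
            h.insert( x );

        int minItem;
        h.deleteMin( minItem );                          // minItem is now 13
        cout << minItem << " " << h.findMin( ) << endl;  // prints 13 16
    }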
Figure: A very large complete binary tree.

Indeed, there is no way to find any particular element without a linear scan through the entire heap. To see this, consider the large heap structure (the elements are not shown) in the figure above, where we see that the only information known about the maximum element is that it is at one of the leaves. Half the elements, though, are contained in leaves, so this is practically useless information. For this reason, if it is important to know where elements are, some other data structure, such as a hash table, must be used in addition to the heap. (Recall that the model does not allow looking inside the heap.)

If we assume that the position of every element is known by some other method, then several other operations become cheap. The first three operations below all run in logarithmic worst-case time.

decreaseKey

The decreaseKey(p, D) operation lowers the value of the item at position p by a positive amount D. Since this might violate the heap order, it must be fixed by a percolate up. This operation could be useful to system administrators: they can make their programs run with highest priority.

increaseKey

The increaseKey(p, D) operation increases the value of the item at position p by a positive amount D. This is done with a percolate down. Many schedulers automatically drop the priority of a process that is consuming excessive CPU time.

remove

The remove(p) operation removes the node at position p from the heap. This is done by first performing decreaseKey(p, infinity) and then performing deleteMin(). When a process is terminated by a user (instead of finishing normally), it must be removed from the priority queue.
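The BinaryHeap class of the preceding figures does not implement these position-based operations; the following is a rough sketch of how they might be added, assuming the position p (1 <= p <= currentSize) is tracked externally. percolateUp is a hypothetical helper symmetric to percolateDown, and the key-changing routines are written here with explicit new values rather than the deltas used in the text; remove avoids needing an "infinity" value by sliding the hole all the way to the root.

    // Sketch only; written as if added inside the BinaryHeap class.
    void percolateUp( int hole )
    {
        Comparable tmp = std::move( array[ hole ] );
        for( ; hole > 1 && tmp < array[ hole / 2 ]; hole /= 2 )
            array[ hole ] = std::move( array[ hole / 2 ] );
        array[ hole ] = std::move( tmp );
    }

    void decreaseKey( int p, const Comparable & newVal )  // newVal < array[p]
    {
        array[ p ] = newVal;
        percolateUp( p );        // order may now be violated above p
    }

    void increaseKey( int p, const Comparable & newVal )  // newVal > array[p]
    {
        array[ p ] = newVal;
        percolateDown( p );      // order may now be violated below p
    }

    void remove( int p )
    {
        // Slide ancestors down into the vacated cell until the hole
        // reaches the root, then delete as in deleteMin.
        for( ; p > 1; p /= 2 )
            array[ p ] = std::move( array[ p / 2 ] );
        array[ 1 ] = std::move( array[ currentSize-- ] );
        percolateDown( 1 );
    }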
buildHeap

The binary heap is sometimes constructed from an initial collection of items. This constructor takes as input n items and places them into a heap. Obviously, this can be done with n successive inserts. Since each insert will take O(1) average and O(log n) worst-case time, the total running time of this algorithm would be O(n) average but O(n log n) worst-case. Since this is a special instruction and there are no other operations intervening, and we already know that the instruction can be performed in linear average time, it is reasonable to expect that with reasonable care a linear time bound can be guaranteed.

The general algorithm is to place the n items into the tree in any order, maintaining the structure property. Then, if percolateDown(i) percolates down from node i, the buildHeap routine below can be used by the constructor to create a heap-ordered tree.

The first tree in the figures that follow is the unordered tree. The seven remaining trees show the result of each of the seven percolateDowns. Each dashed line corresponds to two comparisons: one to find the smaller child and one to compare the smaller child with the node. Notice that there are only a few dashed lines in the entire algorithm (there could have been one more; where?), corresponding to twice as many comparisons.

To bound the running time of buildHeap, we must bound the number of dashed lines. This can be done by computing the sum of the heights of all the nodes in the heap, which is the maximum number of dashed lines. What we would like to show is that this sum is O(n).

    explicit BinaryHeap( const vector<Comparable> & items )
      : array( items.size( ) + 10 ), currentSize( items.size( ) )
    {
        for( int i = 0; i < currentSize; ++i )
            array[ i + 1 ] = items[ i ];
        buildHeap( );
    }

    /**
     * Establish heap order property from an arbitrary
     * arrangement of items. Runs in linear time.
     */
    void buildHeap( )
    {
        for( int i = currentSize / 2; i > 0; --i )
            percolateDown( i );
    }

Figure: buildHeap and constructor.
Figure: Left: initial heap; right: after percolateDown(7).
Figure: Left: after percolateDown(6); right: after percolateDown(5).
Figure: Left: after percolateDown(4); right: after percolateDown(3).

Theorem. For the perfect binary tree of height h containing 2^(h+1) - 1 nodes, the sum of the heights of the nodes is 2^(h+1) - 1 - (h + 1).

Proof. It is easy to see that this tree consists of 1 node at height h, 2 nodes at height h - 1, 4 nodes at height h - 2, and in general 2^i nodes at height h - i. The sum of the heights of all the nodes is then
Figure: Left: after percolateDown(2); right: after percolateDown(1).

    S = sum over i = 0 to h - 1 of 2^i (h - i)
      = h + 2(h - 1) + 4(h - 2) + 8(h - 3) + ... + 2^(h-1) (1)          (1)

Multiplying by 2 gives the equation

    2S = 2h + 4(h - 1) + 8(h - 2) + ... + 2^h (1)                        (2)

We subtract these two equations and obtain Equation (3). We find that certain terms almost cancel. For instance, we have 2h - 2(h - 1) = 2, 4(h - 1) - 4(h - 2) = 4, and so on. The last term in Equation (2), 2^h, does not appear in Equation (1); thus, it appears in Equation (3). The first term in Equation (1), h, does not appear in Equation (2); thus, -h appears in Equation (3). We obtain

    S = -h + 2 + 4 + 8 + ... + 2^(h-1) + 2^h = (2^(h+1) - 1) - (h + 1)   (3)

which proves the theorem.

A complete tree is not a perfect binary tree, but the result we have obtained is an upper bound on the sum of the heights of the nodes in a complete tree. Since a complete tree has between 2^h and 2^(h+1) nodes, this theorem implies that this sum is O(n), where n is the number of nodes.

Although the result we have obtained is sufficient to show that buildHeap is linear, the bound on the sum of the heights is not as strong as possible. For a complete tree with n = 2^h nodes, the bound we have obtained is roughly 2n. The sum of the heights can be shown by induction to be n - b(n), where b(n) is the number of 1s in the binary representation of n.

Applications of Priority Queues

We have already mentioned how priority queues are used in operating systems design. In a later chapter, we will see how priority queues are used to implement several graph algorithms efficiently. Here we will show how to use priority queues to obtain solutions to two problems.
The Selection Problem

The first problem we will examine is the selection problem from the opening chapter. Recall that the input is a list of n elements, which can be totally ordered, and an integer k. The selection problem is to find the kth largest element.

Two algorithms were given earlier, but neither is very efficient. The first algorithm, which we shall call algorithm 1A, is to read the elements into an array and sort them, returning the appropriate element. Assuming a simple sorting algorithm, the running time is O(n^2). The alternative algorithm, 1B, is to read k elements into an array and sort them. The smallest of these is in the kth position. We process the remaining elements one by one. As an element arrives, it is compared with the kth element in the array. If it is larger, then the kth element is removed, and the new element is placed in the correct place among the remaining k - 1 elements. When the algorithm ends, the element in the kth position is the answer. The running time is O(n * k) (why?). If k = ceil(n/2), then both algorithms are O(n^2). Notice that for any k, we can solve the symmetric problem of finding the (n - k + 1)th smallest element, so k = ceil(n/2) is really the hardest case for these algorithms. This also happens to be the most interesting case, since this value of k is known as the median.

We give two algorithms here, both of which run in O(n log n) in the extreme case of k = ceil(n/2), which is a distinct improvement.

Algorithm 6A

For simplicity, we assume that we are interested in finding the kth smallest element. The algorithm is simple. We read the n elements into an array. We then apply the buildHeap algorithm to this array. Finally, we perform k deleteMin operations. The last element extracted from the heap is our answer. It should be clear that by changing the heap-order property, we could solve the original problem of finding the kth largest element.

The correctness of the algorithm should be clear. The worst-case timing is O(n) to construct the heap, if buildHeap is used, and O(log n) for each deleteMin. Since there are k deleteMins, we obtain a total running time of O(n + k log n). If k = O(n/log n), then the running time is dominated by the buildHeap operation and is O(n). For larger values of k, the running time is O(k log n). If k = ceil(n/2), then the running time is O(n log n).

Notice that if we run this program for k = n and record the values as they leave the heap, we will have essentially sorted the input file in O(n log n) time. In a later chapter, we will refine this idea to obtain a fast sorting algorithm known as heapsort.

Algorithm 6B

For the second algorithm, we return to the original problem and find the kth largest element. We use the idea from algorithm 1B. At any point in time we will maintain a set S of the k largest elements. After the first k elements are read, when a new element is read it is compared with the kth largest element, which we denote by Sk. Notice that Sk is the smallest element in S. If the new element is larger, then it replaces Sk in S. S will then have a new smallest element, which may or may not be the newly added element. At the end of the input, we find the smallest element in S and return it as the answer.

This is essentially the same algorithm described earlier. Here, however, we will use a heap to implement S. The first k elements are placed into the heap in total time O(k) with a call to buildHeap. The time to process each of the remaining elements is O(1), to test if the element goes into S, plus O(log k), to delete Sk and insert the new element if this is necessary. Thus, the total time is O(k + (n - k) log k) = O(n log k). This algorithm also gives a bound of O(n log n) for finding the median. In later chapters, we will see how to solve this problem in O(n) average time, and an elegant, albeit impractical, algorithm that solves it in O(n) worst-case time.
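As an illustration of algorithm 6B, here is a compact sketch using the library priority_queue (rather than the chapter's BinaryHeap); the min-heap plays the role of the set S, and its top element is Sk.

    #include <functional>
    #include <iostream>
    #include <queue>
    #include <vector>
    using namespace std;

    // Maintain a min-heap of the k largest elements seen so far;
    // its minimum is the current kth largest.
    int kthLargest( const vector<int> & a, int k )
    {
        priority_queue<int, vector<int>, greater<int>> s;  // the set S

        for( int x : a )
            if( (int) s.size( ) < k )
                s.push( x );
            else if( x > s.top( ) )    // x displaces Sk, the smallest of S
            {
                s.pop( );
                s.push( x );
            }

        return s.top( );               // Sk is the kth largest
    }

    int main( )
    {
        cout << kthLargest( { 5, 1, 9, 3, 7, 2, 8 }, 3 ) << endl;  // prints 7
    }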
Event Simulation

In an earlier chapter, we described an important queuing problem. Recall that we have a system, such as a bank, where customers arrive and wait in a line until one of k tellers is available. Customer arrival is governed by a probability distribution function, as is the service time (the amount of time to be served once a teller is available). We are interested in statistics such as how long on average a customer has to wait or how long the line might be.

With certain probability distributions and values of k, these answers can be computed exactly. However, as k gets larger, the analysis becomes considerably more difficult, so it is appealing to use a computer to simulate the operation of the bank. In this way, the bank officers can determine how many tellers are needed to ensure reasonably smooth service.

A simulation consists of processing events. The two events here are (a) a customer arriving and (b) a customer departing, thus freeing up a teller.

We can use the probability functions to generate an input stream consisting of ordered pairs of arrival time and service time for each customer, sorted by arrival time. We do not need to use the exact time of day. Rather, we can use a quantum unit, which we will refer to as a tick.

One way to do this simulation is to start a simulation clock at zero ticks. We then advance the clock one tick at a time, checking to see if there is an event. If there is, then we process the event(s) and compile statistics. When there are no customers left in the input stream and all the tellers are free, then the simulation is over.

The problem with this simulation strategy is that its running time does not depend on the number of customers or events (there are two events per customer), but instead depends on the number of ticks, which is not really part of the input. To see why this is important, suppose we changed the clock units to milliticks and multiplied all the times in the input by 1,000. The result would be that the simulation would take 1,000 times longer!

The key to avoiding this problem is to advance the clock to the next event time at each stage. This is conceptually easy to do. At any point, the next event that can occur is either (a) the next customer in the input stream arrives or (b) one of the customers at a teller leaves. Since all the times when the events will happen are available, we just need to find the event that happens nearest in the future and process that event.

If the event is a departure, processing includes gathering statistics for the departing customer and checking the line (queue) to see whether there is another customer waiting. If so, we add that customer, process whatever statistics are required, compute the time when that customer will leave, and add that departure to the set of events waiting to happen.
If the event is an arrival, we check for an available teller. If there is none, we place the arrival on the line (queue); otherwise we give the customer a teller, compute the customer's departure time, and add the departure to the set of events waiting to happen.

The waiting line for customers can be implemented as a queue. Since we need to find the event nearest in the future, it is appropriate that the set of departures waiting to happen be organized in a priority queue. The next event is thus the next arrival or next departure (whichever is sooner); both are easily available.

It is then straightforward, although possibly time-consuming, to write the simulation routines. If there are C customers (and thus 2C events) and k tellers, then the running time of the simulation would be O(C log(k + 1)), because computing and processing each event takes O(log H), where H = k + 1 is the size of the heap. (We use O(C log(k + 1)) instead of O(C log k) to avoid confusion for the case k = 1.)
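To make the clock-jumping idea concrete, here is a minimal sketch under simplifying assumptions: a single statistic (total waiting time), k identical tellers represented only by the times at which they become free, and illustrative names throughout; none of this is code from the chapter.

    #include <algorithm>
    #include <queue>
    #include <utility>
    #include <vector>
    using namespace std;

    // customers: (arrivalTime, serviceTime) pairs in ticks, sorted by arrival.
    long long totalWait( const vector<pair<int,int>> & customers, int k )
    {
        // Min-heap of departure events: times at which a teller frees up.
        priority_queue<int, vector<int>, greater<int>> freeAt;
        long long wait = 0;

        for( const auto & c : customers )
        {
            int arrives = c.first, service = c.second;
            if( (int) freeAt.size( ) < k )
                freeAt.push( arrives + service );     // an idle teller; no wait
            else
            {
                int departs = freeAt.top( );          // nearest future departure
                freeAt.pop( );
                int start = max( departs, arrives );  // clock jumps; no ticking
                wait += start - arrives;
                freeAt.push( start + service );
            }
        }
        return wait;
    }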
d-Heaps

Binary heaps are so simple that they are almost always used when priority queues are needed. A simple generalization is a d-heap, which is exactly like a binary heap except that all nodes have d children (thus, a binary heap is a 2-heap). The figure below shows a 3-heap.

Figure: A d-heap (d = 3).

Notice that a d-heap is much shallower than a binary heap, improving the running time of inserts to O(log_d n). However, for large d, the deleteMin operation is more expensive, because even though the tree is shallower, the minimum of d children must be found, which takes d - 1 comparisons using a standard algorithm. This raises the time for this operation to O(d log_d n). If d is a constant, both running times are, of course, O(log n). Although an array can still be used, the multiplications and divisions to find children and parents are now by d, which, unless d is a power of 2, seriously increases the running time, because we can no longer implement division by a bit shift. d-heaps are interesting in theory, because there are many algorithms where the number of insertions is much greater than the number of deleteMins (and thus a theoretical speedup is possible). They are also of interest when the priority queue is too large to fit entirely in main memory. In this case, a d-heap can be advantageous in much the same way as B-trees. Finally, there is evidence suggesting that 4-heaps may outperform binary heaps in practice.

The most glaring weakness of the heap implementation, aside from the inability to perform finds, is that combining two heaps into one is a hard operation. This extra operation is known as a merge. There are quite a few ways to implement heaps so that the running time of a merge is O(log n). We will now discuss three data structures, of various complexity, that support the merge operation efficiently. We will defer any complicated analysis until a later chapter.

Leftist Heaps

It seems difficult to design a data structure that efficiently supports merging (that is, processes a merge in o(n) time) and uses only an array, as in a binary heap. The reason for this is that merging would seem to require copying one array into another, which would take Theta(n) time for equal-sized heaps. For this reason, all the advanced data structures that support efficient merging require the use of a linked data structure. In practice, we can expect that this will make all the other operations slower.

Like a binary heap, a leftist heap has both a structural property and an ordering property. Indeed, a leftist heap, like virtually all heaps used, has the same heap-order property we have already seen. Furthermore, a leftist heap is also a binary tree. The only difference between a leftist heap and a binary heap is that leftist heaps are not perfectly balanced, but actually attempt to be very unbalanced.

Leftist Heap Property

We define the null path length, npl(X), of any node X to be the length of the shortest path from X to a node without two children. Thus, the npl of a node with zero or one child is 0, while npl(nullptr) = -1. In the tree in the figure below, the null path lengths are indicated inside the tree nodes.

Notice that the null path length of any node is 1 more than the minimum of the null path lengths of its children. This applies to nodes with less than two children because the null path length of nullptr is -1.

The leftist heap property is that for every node X in the heap, the null path length of the left child is at least as large as that of the right child. This property is satisfied by only one of the trees in the figure below, namely, the tree on the left. This property actually goes out of its way to ensure that the tree is unbalanced, because it clearly biases the tree to get deep toward the left. Indeed, a tree consisting of a long path of left nodes is possible (and actually preferable to facilitate merging); hence the name leftist heap.

Because leftist heaps tend to have deep left paths, it follows that the right path ought to be short. Indeed, the right path down a leftist heap is as short as any in the heap. Otherwise, there would be a path that goes through some node X and takes the left child. Then X would violate the leftist property.

Theorem. A leftist tree with r nodes on the right path must have at least 2^r - 1 nodes.
Figure: Null path lengths for two trees; only the left tree is leftist.

Proof. The proof is by induction. If r = 1, there must be at least one tree node. Otherwise, suppose that the theorem is true for 1, 2, ..., r. Consider a leftist tree with r + 1 nodes on the right path. Then the root has a right subtree with r nodes on the right path, and a left subtree with at least r nodes on the right path (otherwise it would not be leftist). Applying the inductive hypothesis to these subtrees yields a minimum of 2^r - 1 nodes in each subtree. This plus the root gives at least 2^(r+1) - 1 nodes in the tree, proving the theorem.

From this theorem, it follows immediately that a leftist tree of n nodes has a right path containing at most floor(log(n + 1)) nodes.

The general idea for the leftist heap operations is to perform all the work on the right path, which is guaranteed to be short. The only tricky part is that performing inserts and merges on the right path could destroy the leftist heap property. It turns out to be extremely easy to restore the property.

Leftist Heap Operations

The fundamental operation on leftist heaps is merging. Notice that insertion is merely a special case of merging, since we may view an insertion as a merge of a one-node heap with a larger heap. We will first give a simple recursive solution and then show how this might be done nonrecursively. Our input is the two leftist heaps, H1 and H2, in the figure below. You should check that these heaps really are leftist. Notice that the smallest elements are at the roots. In addition to space for the data and left and right pointers, each node will have an entry that indicates the null path length.

If either of the two heaps is empty, then we can return the other heap. Otherwise, to merge the two heaps, we compare their roots. First, we recursively merge the heap with the larger root with the right subheap of the heap with the smaller root. In our example, this means we recursively merge H2 with the right subheap of H1, obtaining the heap in the second figure below.

Since this tree is formed recursively, and we have not yet finished the description of the algorithm, we cannot at this point show how this heap was obtained.
Figure: Two leftist heaps H1 and H2.
Figure: Result of merging H2 with H1's right subheap.

However, it is reasonable to assume that the resulting tree is a leftist heap, because it was obtained via a recursive step. This is much like the inductive hypothesis in a proof by induction. Since we can handle the base case (which occurs when one tree is empty), we can assume that the recursive step works as long as we can finish the merge; this is rule 3 of recursion, which we discussed earlier. We now make this new heap the right child of the root of H1.

Although the resulting heap satisfies the heap-order property, it is not leftist because the left subtree of the root has a null path length of 1 whereas the right subtree has a null path length of 2. Thus, the leftist property is violated at the root. However, it is easy to see that the remainder of the tree must be leftist. The right subtree of the root is leftist because of the recursive step. The left subtree of the root has not been changed, so it too must still be leftist. Thus, we need only to fix the root. We can make the entire tree leftist by merely swapping the root's left and right children and updating the null path length (the new null path length is 1 plus the null path length of the new right child), completing
Figure: Result of attaching leftist heap of previous figure as H1's right child.

the merge. Notice that if the null path length is not updated, then all null path lengths will be 0, and the heap will not be leftist but merely random. In this case, the algorithm will work, but the time bound we will claim will no longer be valid.

The description of the algorithm translates directly into code. The node class (shown in the next figure) is the same as the binary tree, except that it is augmented with the npl (null path length) data member. The leftist heap stores a pointer to the root as its data member.

Figure: Result of swapping children of H1's root.
    template <typename Comparable>
    class LeftistHeap
    {
      public:
        LeftistHeap( );
        LeftistHeap( const LeftistHeap & rhs );
        LeftistHeap( LeftistHeap && rhs );
        ~LeftistHeap( );

        LeftistHeap & operator=( const LeftistHeap & rhs );
        LeftistHeap & operator=( LeftistHeap && rhs );

        bool isEmpty( ) const;
        const Comparable & findMin( ) const;

        void insert( const Comparable & x );
        void insert( Comparable && x );
        void deleteMin( );
        void deleteMin( Comparable & minItem );
        void makeEmpty( );
        void merge( LeftistHeap & rhs );

      private:
        struct LeftistNode
        {
            Comparable   element;
            LeftistNode *left;
            LeftistNode *right;
            int          npl;

            LeftistNode( const Comparable & e, LeftistNode *lt = nullptr,
                         LeftistNode *rt = nullptr, int np = 0 )
              : element{ e }, left{ lt }, right{ rt }, npl{ np } { }

            LeftistNode( Comparable && e, LeftistNode *lt = nullptr,
                         LeftistNode *rt = nullptr, int np = 0 )
              : element{ std::move( e ) }, left{ lt }, right{ rt }, npl{ np } { }
        };

        LeftistNode *root;

        LeftistNode * merge( LeftistNode *h1, LeftistNode *h2 );
        LeftistNode * merge1( LeftistNode *h1, LeftistNode *h2 );
        void swapChildren( LeftistNode *t );
        void reclaimMemory( LeftistNode *t );
        LeftistNode * clone( LeftistNode *t ) const;
    };

Figure: Leftist heap type declarations.
We have seen earlier that when an element is inserted into an empty binary tree, the node referenced by the root will need to change; we use the usual technique of implementing private recursive methods to do the merging. The class skeleton is shown in the figure above.

The two merge routines (below) are drivers designed to remove special cases and ensure that H1 has the smaller root. The actual merging is performed in merge1 (shown after the drivers). The public merge method merges rhs into the controlling heap; rhs becomes empty. The alias test in the public method disallows h.merge(h).

The time to perform the merge is proportional to the sum of the length of the right paths, because constant work is performed at each node visited during the recursive calls. Thus we obtain an O(log n) time bound to merge two leftist heaps. We can also perform this operation nonrecursively by essentially performing two passes. In the first pass, we create a new tree by merging the right paths of both heaps. To do this, we arrange the nodes on the right paths of H1 and H2 in sorted order, keeping their respective left children. In our example, the resulting tree is shown in a figure on a following page.

    /**
     * Merge rhs into the priority queue.
     * rhs becomes empty. rhs must be different from this.
     */
    void merge( LeftistHeap & rhs )
    {
        if( this == &rhs )    // Avoid aliasing problems
            return;

        root = merge( root, rhs.root );
        rhs.root = nullptr;
    }

    /**
     * Internal method to merge two roots.
     * Deals with deviant cases and calls recursive merge1.
     */
    LeftistNode * merge( LeftistNode *h1, LeftistNode *h2 )
    {
        if( h1 == nullptr )
            return h2;
        if( h2 == nullptr )
            return h1;
        if( h1->element < h2->element )
            return merge1( h1, h2 );
        else
            return merge1( h2, h1 );
    }

Figure: Driving routines for merging leftist heaps.
    /**
     * Internal method to merge two roots.
     * Assumes trees are not empty, and h1's root contains smallest item.
     */
    LeftistNode * merge1( LeftistNode *h1, LeftistNode *h2 )
    {
        if( h1->left == nullptr )   // Single node
            h1->left = h2;          // Other fields in h1 already accurate
        else
        {
            h1->right = merge( h1->right, h2 );
            if( h1->left->npl < h1->right->npl )
                swapChildren( h1 );
            h1->npl = h1->right->npl + 1;
        }
        return h1;
    }

Figure: Actual routine to merge leftist heaps.

A second pass is made up the heap, and child swaps are performed at nodes that violate the leftist heap property. After the swaps, the same tree as before is obtained. The nonrecursive version is simpler to visualize but harder to code. We leave it to the reader to show that the recursive and nonrecursive procedures do the same thing.

Figure: Result of merging right paths of H1 and H2.
    /**
     * Inserts x; duplicates allowed.
     */
    void insert( const Comparable & x )
    {
        root = merge( new LeftistNode{ x }, root );
    }

Figure: Insertion routine for leftist heaps.

As mentioned above, we can carry out insertions by making the item to be inserted a one-node heap and performing a merge. To perform a deleteMin, we merely destroy the root, creating two heaps, which can then be merged. Thus, the time to perform a deleteMin is O(log n). These two routines are coded in the figures on this page.

Finally, we can build a leftist heap in O(n) time by building a binary heap (obviously using a linked implementation). Although a binary heap is clearly leftist, this is not necessarily the best solution, because the heap we obtain is the worst possible leftist heap. Furthermore, traversing the tree in reverse-level order is not as easy with links. The buildHeap effect can be obtained by recursively building the left and right subtrees and then percolating the root down. The exercises contain an alternative solution.

    /**
     * Remove the minimum item.
     * Throws UnderflowException if empty.
     */
    void deleteMin( )
    {
        if( isEmpty( ) )
            throw UnderflowException{ };

        LeftistNode *oldRoot = root;
        root = merge( root->left, root->right );
        delete oldRoot;
    }

    /**
     * Remove the minimum item and place it in minItem.
     * Throws UnderflowException if empty.
     */
    void deleteMin( Comparable & minItem )
    {
        minItem = findMin( );
        deleteMin( );
    }

Figure: deleteMin routine for leftist heaps.
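Assuming the LeftistHeap class assembled from the preceding figures is available, a short driver illustrating merge might look like the following sketch.

    #include <iostream>
    using namespace std;
    // Assumes the LeftistHeap<Comparable> class from the preceding figures.

    int main( )
    {
        LeftistHeap<int> h1, h2;

        for( int x : { 3, 10, 8, 21, 14 } )
            h1.insert( x );
        for( int x : { 6, 12, 7, 18 } )
            h2.insert( x );

        h1.merge( h2 );                   // h2 becomes empty
        cout << h1.findMin( ) << endl;    // prints 3
        h1.deleteMin( );
        cout << h1.findMin( ) << endl;    // prints 6
    }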
Skew Heaps

A skew heap is a self-adjusting version of a leftist heap that is incredibly simple to implement. The relationship of skew heaps to leftist heaps is analogous to the relation between splay trees and AVL trees. Skew heaps are binary trees with heap order, but there is no structural constraint on these trees. Unlike leftist heaps, no information is maintained about the null path length of any node. The right path of a skew heap can be arbitrarily long at any time, so the worst-case running time of all operations is O(n). However, as with splay trees, it can be shown (see a later chapter) that for any m consecutive operations, the total worst-case running time is O(m log n). Thus, skew heaps have O(log n) amortized cost per operation.

As with leftist heaps, the fundamental operation on skew heaps is merging. The merge routine is once again recursive, and we perform the exact same operations as before, with one exception. The difference is that for leftist heaps, we check to see whether the left and right children satisfy the leftist heap structure property and swap them if they do not. For skew heaps, the swap is unconditional; we always do it, with the one exception that the largest of all the nodes on the right paths does not have its children swapped. This one exception is what happens in the natural recursive implementation, so it is not really a special case at all. Furthermore, it is not necessary to prove the bounds, but since this node is guaranteed not to have a right child, it would be silly to perform the swap and give it one. (In our example, there are no children of this node, so we do not worry about it.)

Again, suppose our input is the same two heaps as before, shown in the figure below. If we recursively merge H2 with the right subheap of H1, we will get the heap in the figure that follows. Again, this is done recursively, so by the third rule of recursion we need not worry about how it was obtained. This heap happens to be leftist, but there is no guarantee that this is always the case. We make this heap the new left child of H1, and the old left child of H1 becomes the new right child. The entire tree is leftist, but it is easy to see that that is not always true; inserting a suitable new element into this new heap would destroy the leftist property.

We can perform all operations nonrecursively, as with leftist heaps, by merging the right paths and swapping left and right children for every node on the right path, with the exception of the last.

Figure: Two skew heaps H1 and H2.
Figure: Result of merging H2 with H1's right subheap.
Figure: Result of merging skew heaps H1 and H2.

After a few examples, it becomes clear that since all but the last node on the right path have their children swapped, the net effect is that this becomes the new left path (see the preceding example to convince yourself). This makes it very easy to merge two skew heaps visually. This is not exactly the same as the recursive implementation (but yields the same time bounds). If we only swap children for nodes on the right path that are above the point where the merging of right paths terminated due to exhaustion of one heap's right path, we get the same result as the recursive version.

The implementation of skew heaps is left as a (trivial) exercise; one possible sketch of the recursive merge appears below. Note that because a right path could be long, a recursive implementation could fail because of lack of stack space, even though performance would otherwise be acceptable. Skew heaps have the advantage that no extra space is required to maintain path lengths and no tests are required to determine when to swap children. It is an open problem to determine precisely the expected right path length of both leftist and skew heaps (the latter is undoubtedly more difficult). Such a comparison would make it easier to determine whether the slight loss of balance information is compensated by the lack of testing.
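Since the text leaves the implementation as an exercise, the following is one possible sketch of the recursive merge, assuming a bare node type (SkewNode, an illustrative name) holding only an element and two child pointers. Note the unconditional child swap, and that the deepest node, reached when one heap runs out, is returned unswapped, which is exactly the one exception described above.

    #include <utility>

    struct SkewNode
    {
        int element;
        SkewNode *left;
        SkewNode *right;
    };

    // Recursive skew-heap merge: same structure as the leftist merge,
    // but the swap is unconditional and no npl field is kept.
    SkewNode * merge( SkewNode *h1, SkewNode *h2 )
    {
        if( h1 == nullptr )
            return h2;      // base case: last node is returned unswapped
        if( h2 == nullptr )
            return h1;

        if( h2->element < h1->element )
            std::swap( h1, h2 );            // ensure h1 has the smaller root

        h1->right = merge( h1->right, h2 );
        std::swap( h1->left, h1->right );   // unconditional swap
        return h1;
    }

    // insert(x) would then be:
    //     root = merge( new SkewNode{ x, nullptr, nullptr }, root );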
Binomial Queues

Although both leftist and skew heaps support merging, insertion, and deleteMin all effectively in O(log n) time per operation, there is room for improvement because we know that binary heaps support insertion in constant average time per operation. Binomial queues support all three operations in O(log n) worst-case time per operation, but insertions take constant time on average.

Binomial Queue Structure

Binomial queues differ from all the priority queue implementations that we have seen in that a binomial queue is not a heap-ordered tree but rather a collection of heap-ordered trees, known as a forest. Each of the heap-ordered trees is of a constrained form known as a binomial tree (the reason for the name will be obvious later). There is at most one binomial tree of every height. A binomial tree of height 0 is a one-node tree; a binomial tree, B_k, of height k is formed by attaching a binomial tree, B_{k-1}, to the root of another binomial tree, B_{k-1}. The figure below shows binomial trees B_0, B_1, B_2, B_3, and B_4.

From the diagram we see that a binomial tree, B_k, consists of a root with children B_0, B_1, ..., B_{k-1}. Binomial trees of height k have exactly 2^k nodes, and the number of nodes at depth d is the binomial coefficient C(k, d). If we impose heap order on the binomial trees and allow at most one binomial tree of any height, we can represent a priority queue of any size by a collection of binomial trees. For instance, a priority queue of size 13 could be represented by the forest B_3, B_2, B_0. We might write this representation as 1101, which not only represents 13 in binary but also represents the fact that B_3, B_2, and B_0 are present in the representation and B_1 is not.

As an example, a priority queue of six elements could be represented as in the figure below.

Binomial Queue Operations

The minimum element can then be found by scanning the roots of all the trees. Since there are at most log n different trees, the minimum can be found in O(log n) time. Alternatively, we can maintain knowledge of the minimum and perform the operation in O(1) time if we remember to update the minimum when it changes during other operations.

Merging two binomial queues is a conceptually easy operation, which we will describe by example. Consider the two binomial queues, H1 and H2, with six and seven elements, respectively, pictured in the figure below.
Figure: Binomial trees B_0, B_1, B_2, B_3, and B_4.
Figure: Binomial queue H1 with six elements.
Figure: Two binomial queues H1 and H2.
The merge is performed by essentially adding the two queues together. Let H3 be the new binomial queue. Since H1 has no binomial tree of height 0 and H2 does, we can just use the binomial tree of height 0 in H2 as part of H3. Next, we add binomial trees of height 1. Since both H1 and H2 have binomial trees of height 1, we merge them by making the larger root a subtree of the smaller, creating a binomial tree of height 2, shown in the figure below. Thus, H3 will not have a binomial tree of height 1. There are now three binomial trees of height 2, namely, the original trees of H1 and H2 plus the tree formed by the previous step. We keep one binomial tree of height 2 in H3 and merge the other two, creating a binomial tree of height 3. Since H1 and H2 have no trees of height 3, this tree becomes part of H3 and we are finished. The resulting binomial queue is shown in the second figure below.

Since merging two binomial trees takes constant time with almost any reasonable implementation, and there are O(log n) binomial trees, the merge takes O(log n) time in the worst case. To make this operation efficient, we need to keep the trees in the binomial queue sorted by height, which is certainly a simple thing to do.

Insertion is just a special case of merging, since we merely create a one-node tree and perform a merge. The worst-case time of this operation is likewise O(log n). More precisely, if the priority queue into which the element is being inserted has the property that the smallest nonexistent binomial tree is B_i, the running time is proportional to i + 1. For example, if H3 is missing a binomial tree of height 1, the insertion will terminate in two steps. Since each tree in a binomial queue is present with probability 1/2, it follows that we expect an insertion to terminate in two steps, so the average time is constant. Furthermore, an analysis will show that performing n inserts on an initially empty binomial queue will take O(n) worst-case time. Indeed, it is possible to do this operation using only n - 1 comparisons; we leave this as an exercise.

Figure: Merge of the two trees of height 1 in H1 and H2.
Figure: Binomial queue H3: the result of merging H1 and H2.
Figure: After 1 is inserted
Figure: After 2 is inserted
Figure: After 3 is inserted
Figure: After 4 is inserted
Figure: After 5 is inserted
Figure: After 6 is inserted
Figure: After 7 is inserted
We merge 4 with B0, obtaining a new tree of height 1. We then merge this tree with B1, obtaining a tree of height 2, which is the new priority queue. We count this as three steps (two tree merges plus the stopping case). The next insertion, after 7 is inserted, is another bad case and would require three tree merges.

A deleteMin can be performed by first finding the binomial tree with the smallest root. Let this tree be Bk, and let the original priority queue be H. We remove the binomial tree Bk from the forest of trees in H, forming the new binomial queue H'. We also remove the root of Bk, creating binomial trees B0, B1, ..., Bk-1, which collectively form priority queue H''. We finish the operation by merging H' and H''.

As an example, suppose we perform a deleteMin on H3, which is shown again in the first figure below. The minimum root is 12, so we obtain the two priority queues H' and H'' in the second and third figures below. The binomial queue that results from merging H' and H'' is the final answer and is shown in the figure that follows them.

Figure: Binomial queue H3
Figure: Binomial queue H', containing all the binomial trees in H3 except B3
Figure: Binomial queue H'': B3 with 12 removed

For the analysis, note first that the deleteMin operation breaks the original binomial queue into two. It takes O(log N) time to find the tree containing the minimum element and to create the queues H' and H''. Merging these two queues takes O(log N) time, so the entire deleteMin operation takes O(log N) time.
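A helpful way to view the merge operation just described (our framing, consistent with the binary representation introduced earlier): merging binomial queues mirrors binary addition. For H1 with six elements and H2 with seven,

    110 (H1: B2, B1) + 111 (H2: B2, B1, B0) = 1101 (H3: B3, B2, B0)

where each carry in the addition corresponds to combining two equal-height binomial trees into one tree of the next height.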
Figure: Result of applying deleteMin to H3

Implementation of Binomial Queues

The deleteMin operation requires the ability to find all the subtrees of the root quickly, so the standard representation of general trees is required: The children of each node are kept in a linked list, and each node has a pointer to its first child (if any). This operation also requires that the children be ordered by the size of their subtrees.

We also need to make sure that it is easy to merge two trees. When two trees are merged, one of the trees is added as a child to the other. Since this new tree will be the largest subtree, it makes sense to maintain the subtrees in decreasing sizes. Only then will we be able to merge two binomial trees, and thus two binomial queues, efficiently. The binomial queue will be an array of binomial trees.

To summarize, then, each node in a binomial tree will contain the data, first child, and right sibling. The children in a binomial tree are arranged in decreasing rank. The first figure below shows how the binomial queue H3 is represented; the figure after it gives the type declarations for a node in the binomial tree and the binomial queue class interface.

Figure: Binomial queue H3 drawn as a forest
Figure: Representation of binomial queue H3
template <typename Comparable>
class BinomialQueue
{
  public:
    BinomialQueue( );
    BinomialQueue( const Comparable & item );
    BinomialQueue( const BinomialQueue & rhs );
    BinomialQueue( BinomialQueue && rhs );

    ~BinomialQueue( );

    BinomialQueue & operator=( const BinomialQueue & rhs );
    BinomialQueue & operator=( BinomialQueue && rhs );

    bool isEmpty( ) const;
    const Comparable & findMin( ) const;

    void insert( const Comparable & x );
    void insert( Comparable && x );
    void deleteMin( );
    void deleteMin( Comparable & minItem );

    void makeEmpty( );
    void merge( BinomialQueue & rhs );

  private:
    struct BinomialNode
    {
        Comparable    element;
        BinomialNode *leftChild;
        BinomialNode *nextSibling;

        BinomialNode( const Comparable & e, BinomialNode *lt, BinomialNode *rt )
          : element{ e }, leftChild{ lt }, nextSibling{ rt } { }

        BinomialNode( Comparable && e, BinomialNode *lt, BinomialNode *rt )
          : element{ std::move( e ) }, leftChild{ lt }, nextSibling{ rt } { }
    };

    const static int DEFAULT_TREES = 1;

    vector<BinomialNode *> theTrees;  // An array of tree roots
    int currentSize;                  // Number of items in the priority queue

    int findMinIndex( ) const;
    int capacity( ) const;
    BinomialNode * combineTrees( BinomialNode *t1, BinomialNode *t2 );
    void makeEmpty( BinomialNode * & t );
    BinomialNode * clone( BinomialNode * t ) const;
};

Figure: Binomial queue class interface and node definition
Figure: Merging two binomial trees

In order to merge two binomial queues, we need a routine to merge two binomial trees of the same size. The figure above shows how the links change when two binomial trees are merged; the code to do this is simple and is shown below.

We provide a simple implementation of the merge routine. H1 is represented by the current object and H2 is represented by rhs. The routine combines H1 and H2, placing the result in H1 and making H2 empty. At any point we are dealing with trees of rank i. t1 and t2 are the trees in H1 and H2, respectively, and carry is the tree carried from a previous step (it might be nullptr). Depending on each of the eight possible cases, the tree that results for rank i and the carry tree of rank i + 1 are formed. This process proceeds from rank 0 to the last rank in the resulting binomial queue. The code is shown in the figure after the next one; improvements to the code are suggested in the exercises.

The deleteMin routine for binomial queues is given afterward. We can extend binomial queues to support some of the nonstandard operations that binary heaps allow, such as decreaseKey and remove, when the position of the affected element is known. A decreaseKey is a percolateUp, which can be performed in O(log N) time if we add a data member to each node that stores a parent link. An arbitrary remove can be performed by a combination of decreaseKey and deleteMin in O(log N) time.

/**
 * Return the result of merging equal-sized t1 and t2.
 */
BinomialNode * combineTrees( BinomialNode *t1, BinomialNode *t2 )
{
    if( t2->element < t1->element )
        return combineTrees( t2, t1 );
    t2->nextSibling = t1->leftChild;
    t1->leftChild = t2;
    return t1;
}

Figure: Routine to merge two equal-sized binomial trees
/**
 * Merge rhs into the priority queue.
 * rhs becomes empty. rhs must be different from this.
 * See the exercises for a more efficient version of this operation.
 */
void merge( BinomialQueue & rhs )
{
    if( this == &rhs )    // Avoid aliasing problems
        return;

    currentSize += rhs.currentSize;

    if( currentSize > capacity( ) )
    {
        int oldNumTrees = theTrees.size( );
        int newNumTrees = max( theTrees.size( ), rhs.theTrees.size( ) ) + 1;
        theTrees.resize( newNumTrees );
        for( int i = oldNumTrees; i < newNumTrees; ++i )
            theTrees[ i ] = nullptr;
    }

    BinomialNode *carry = nullptr;
    for( int i = 0, j = 1; j <= currentSize; ++i, j *= 2 )
    {
        BinomialNode *t1 = theTrees[ i ];
        BinomialNode *t2 = i < rhs.theTrees.size( ) ? rhs.theTrees[ i ] : nullptr;

        int whichCase = t1 == nullptr ? 0 : 1;
        whichCase += t2 == nullptr ? 0 : 2;
        whichCase += carry == nullptr ? 0 : 4;

        switch( whichCase )
        {
          case 0: /* No trees */
          case 1: /* Only this */
            break;
          case 2: /* Only rhs */
            theTrees[ i ] = t2;
            rhs.theTrees[ i ] = nullptr;
            break;
          case 4: /* Only carry */
            theTrees[ i ] = carry;
            carry = nullptr;
            break;

Figure: Routine to merge two priority queues
          case 3: /* this and rhs */
            carry = combineTrees( t1, t2 );
            theTrees[ i ] = rhs.theTrees[ i ] = nullptr;
            break;
          case 5: /* this and carry */
            carry = combineTrees( t1, carry );
            theTrees[ i ] = nullptr;
            break;
          case 6: /* rhs and carry */
            carry = combineTrees( t2, carry );
            rhs.theTrees[ i ] = nullptr;
            break;
          case 7: /* All three */
            theTrees[ i ] = carry;
            carry = combineTrees( t1, t2 );
            rhs.theTrees[ i ] = nullptr;
            break;
        }
    }

    for( auto & root : rhs.theTrees )
        root = nullptr;
    rhs.currentSize = 0;
}

Figure (continued)

/**
 * Remove the minimum item and place it in minItem.
 * Throws UnderflowException if empty.
 */
void deleteMin( Comparable & minItem )
{
    if( isEmpty( ) )
        throw UnderflowException{ };

    int minIndex = findMinIndex( );
    minItem = theTrees[ minIndex ]->element;

Figure: deleteMin for binomial queues
    BinomialNode *oldRoot = theTrees[ minIndex ];
    BinomialNode *deletedTree = oldRoot->leftChild;
    delete oldRoot;

    // Construct H''
    BinomialQueue deletedQueue;
    deletedQueue.theTrees.resize( minIndex + 1 );
    deletedQueue.currentSize = ( 1 << minIndex ) - 1;
    for( int j = minIndex - 1; j >= 0; --j )
    {
        deletedQueue.theTrees[ j ] = deletedTree;
        deletedTree = deletedTree->nextSibling;
        deletedQueue.theTrees[ j ]->nextSibling = nullptr;
    }

    // Construct H'
    theTrees[ minIndex ] = nullptr;
    currentSize -= deletedQueue.currentSize + 1;

    merge( deletedQueue );
}

/**
 * Find index of tree containing the smallest item in the priority queue.
 * The priority queue must not be empty.
 * Return the index of tree containing the smallest item.
 */
int findMinIndex( ) const
{
    int i;
    int minIndex;

    for( i = 0; theTrees[ i ] == nullptr; ++i )
        ;

    for( minIndex = i; i < theTrees.size( ); ++i )
        if( theTrees[ i ] != nullptr &&
            theTrees[ i ]->element < theTrees[ minIndex ]->element )
            minIndex = i;

    return minIndex;
}

Figure (continued)
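A minimal usage sketch (our own driver, not from the book; it assumes the class above is completed, since insert is declared in the interface but its implementation is not shown in this excerpt), mirroring the insertion example in the figures:

int main( )
{
    BinomialQueue<int> h;
    for( int i = 1; i <= 7; ++i )
        h.insert( i );     // inserting 4 and 7 each trigger tree merges

    int x;
    h.deleteMin( x );      // x is 1; the remaining forest is rebuilt
    return 0;              // by merging H' and H'', as described above
}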
#include <iostream>
#include <vector>
#include <queue>
#include <functional>
#include <string>
using namespace std;

// Empty the priority queue and print its contents.
template <typename PriorityQueue>
void dumpContents( const string & msg, PriorityQueue & pq )
{
    cout << msg << ":" << endl;
    while( !pq.empty( ) )
    {
        cout << pq.top( ) << endl;
        pq.pop( );
    }
}

// Do some inserts and removes (done in dumpContents).
int main( )
{
    priority_queue<int> maxPQ;
    priority_queue<int,vector<int>,greater<int>> minPQ;

    minPQ.push( 4 ); minPQ.push( 3 ); minPQ.push( 5 );
    maxPQ.push( 4 ); maxPQ.push( 3 ); maxPQ.push( 5 );

    dumpContents( "minPQ", minPQ );   // 3 4 5
    dumpContents( "maxPQ", maxPQ );   // 5 4 3

    return 0;
}

Figure: Routine that demonstrates the STL priority_queue; the comment shows the expected order of output

Priority Queues in the Standard Library

The binary heap is implemented in the STL by the class template named priority_queue found in the standard header file queue. The STL implements a max-heap rather than a min-heap, so the largest rather than smallest item is the one that is accessed. The key member functions are:
void push( const Object & x );
const Object & top( ) const;
void pop( );
bool empty( );
void clear( );

push adds x to the priority queue; top returns the largest element in the priority queue; and pop removes the largest element from the priority queue. Duplicates are allowed; if there are several largest elements, only one of them is removed.

The priority queue template is instantiated with an item type, the container type (almost always you would want to use a vector that stores the items), and the comparator; defaults are allowed for the last two parameters, and the defaults yield a max-heap. Using a greater function object as the comparator yields a min-heap.

The figure above shows a test program that illustrates how the priority_queue class template can be used as both the default max-heap and a min-heap.

Summary

In this chapter we have seen various implementations and uses of the priority queue ADT. The standard binary heap implementation is elegant because of its simplicity and speed. It requires no links and only a constant amount of extra space, yet supports the priority queue operations efficiently.

We considered the additional merge operation and developed three implementations, each of which is unique in its own way. The leftist heap is a wonderful example of the power of recursion. The skew heap represents a remarkable data structure because of the lack of balance criteria. Its analysis, which we will perform in a later chapter, is interesting in its own right. The binomial queue shows how a simple idea can be used to achieve a good time bound.

We have also seen several uses of priority queues, ranging from operating systems scheduling to simulation. We will see their use again in later chapters.

Exercises

Can both insert and findMin be implemented in constant time?

a. Show the result of inserting 10, 12, 1, 14, 6, 5, 8, 15, 3, 9, 7, 4, 11, 13, and 2, one at a time, into an initially empty binary heap.
b. Show the result of using the linear-time algorithm to build a binary heap using the same input.

Show the result of performing three deleteMin operations in the heap of the previous exercise.

A complete binary tree of N elements uses array positions 1 to N. Suppose we try to use an array representation of a binary tree that is not complete. Determine how large the array must be for the following:
priority queues (heapsa binary tree that has two extra levels (that isit is very slightly unbalancedb binary tree that has deepest node at depth log binary tree that has deepest node at depth log the worst-case binary tree rewrite the binaryheap insert routine by placing copy of the inserted item in position how many nodes are in the large heap in figure prove that for binary heapsbuildheap does at most - comparisons between elements show that heap of eight elements can be constructed in eight comparisons between heap elements  give an algorithm to build binary heap in (log nelement compar isons show the following regarding the maximum item in the heapa it must be at one of the leaves there are exactly  / leaves every leaf must be examined to find it  show that the expected depth of the kth smallest element in large complete heap (you may assume is bounded by log give an algorithm to find all nodes less than some valuexin binary heap your algorithm should run in ( )where is the number of nodes output does your algorithm extend to any of the other heap structures discussed in this give an algorithm that finds an arbitrary item in binary heap using at most roughly / comparisons  propose an algorithm to insert nodes into binary heap on elements in ( log log log ntime prove your time bound write program to take elements and do the followinga insert them into heap one by one build heap in linear time compare the running time of both algorithms for sortedreverse-orderedand random inputs each deletemin operation uses log comparisons in the worst case propose scheme so that the deletemin operation uses only log log log ( comparisons between elements this need not imply less data movement  extend your scheme in part (aso that only log log log log ( comparisons are performed  how far can you take this idea do the savings in comparisons compensate for the increased complexity of your algorithmif -heap is stored as an arrayfor an entry located in position iwhere are the parents and children
suppose we need to perform percolateups and deletemins on -heap that initially has elements what is the total running time of all operations in terms of mnand db if what is the running time of all heap operationsc if ( )what is the total running timed what choice of minimizes the total running time suppose that binary heaps are represented using explicit links give simple algorithm to find the tree node that is at implicit position suppose that binary heaps are represented using explicit links consider the problem of merging binary heap lhs with rhs assume both heaps are perfect binary treescontaining and nodesrespectively give an (log nalgorithm to merge the two heaps if give an (log nalgorithm to merge the two heaps if | give an (log nalgorithm to merge the two heaps regardless of and min-max heap is data structure that supports both deletemin and deletemax in (log nper operation the structure is identical to binary heapbut the heaporder property is that for any nodexat even depththe element stored at is smaller than the parent but larger than the grandparent (where this makes sense)and for any nodexat odd depththe element stored at is larger than the parent but smaller than the grandparent see figure how do we find the minimum and maximum elementsb give an algorithm to insert new node into the min-max heap give an algorithm to perform deletemin and deletemax can you build min-max heap in linear time suppose we would like to support deletemindeletemaxand merge propose data structure to support all operations in (log ntime figure min-max heap
priority queues (heaps figure input for exercises and merge the two leftist heaps in figure show the result of inserting keys to in order into an initially empty leftist heap prove or disprovea perfectly balanced tree forms if keys to are inserted in order into an initially empty leftist heap give an example of input that generates the best leftist heap can leftist heaps efficiently support decreasekeyb what changesif any (if possible)are required to do this one way to delete nodes from known position in leftist heap is to use lazy strategy to delete nodemerely mark it deleted when findmin or deletemin is performedthere is potential problem if the root is marked deletedsince then the node has to be actually deleted and the real minimum needs to be foundwhich may involve deleting other marked nodes in this strategyremoves cost one unitbut the cost of deletemin or findmin depends on the number of nodes that are marked deleted suppose that after deletemin or findmin there are fewer marked nodes than before the operation show how to perform the deletemin in ( log ntime  propose an implementationwith an analysis to show that the time to perform the deletemin is ( log( / ) we can perform buildheap in linear time for leftist heaps by considering each element as one-node leftist heapplacing all these heaps on queueand performing the following stepuntil only one heap is on the queuedequeue two heapsmerge themand enqueue the result prove that this algorithm is (nin the worst case why might this algorithm be preferable to the algorithm described in the text merge the two skew heaps in figure show the result of inserting keys to in order into skew heap prove or disprovea perfectly balanced tree forms if the keys to - are inserted in order into an initially empty skew heap skew heap of elements can be built using the standard binary heap algorithm can we use the same merging strategy described in exercise for skew heaps to get an (nrunning timeprove that binomial treebk has binomial trees bk- as children of the root
figure input for exercise prove that binomial tree of height has dk nodes at depth merge the two binomial queues in figure show that inserts into an initially empty binomial queue take (ntime in the worst case give an algorithm to build binomial queue of elementsusing at most comparisons between elements propose an algorithm to insert nodes into binomial queue of elements in ( log nworst-case time prove your bound write an efficient routine to perform insert using binomial queues do not call merge  for the binomial queue modify the merge routine to terminate merging if there are no trees left in and the carry tree is nullptr modify the merge so that the smaller tree is always merged into the larger suppose we extend binomial queues to allow at most two trees of the same height per structure can we obtain ( worst-case time for insertion while retaining (log nfor the other operations suppose you have number of boxeseach of which can hold total weight and items in which weigh wn respectively the object is to pack all the items without placing more weight in any box than its capacity and using as few boxes as possible for instanceif and the items have weights then we can solve the problem with two boxes in generalthis problem is very hardand no efficient solution is known write programs to implement efficiently the following approximation strategiesa place the weight in the first box for which it fits (creating new box if there is no box with enough room(this strategy and all that follow would give three boxeswhich is suboptimal place the weight in the box with the most room for it place the weight in the most filled box that can accept it without overflowing  are any of these strategies enhanced by presorting the items by weight
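For concreteness, here is a minimal sketch of the first strategy in the bin-packing exercise above (our code, not the book's; firstFit is a hypothetical name). Each weight is placed into the first box with enough room, and a new box is opened when none fits:

#include <vector>
using namespace std;

// Returns the load of each box after packing the weights w into boxes
// of capacity c using the "first fit" strategy.
vector<double> firstFit( const vector<double> & w, double c )
{
    vector<double> boxes;                  // boxes[ b ] = weight in box b
    for( double weight : w )
    {
        size_t b = 0;
        while( b < boxes.size( ) && boxes[ b ] + weight > c )
            ++b;                           // find first box with enough room
        if( b == boxes.size( ) )
            boxes.push_back( 0 );          // open a new box
        boxes[ b ] += weight;
    }
    return boxes;                          // boxes.size( ) boxes were used
}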
priority queues (heapssuppose we want to add the decreaseallkeysoperation to the heap repertoire the result of this operation is that all keys in the heap have their value decreased by an amount for the heap implementation of your choiceexplain the necessary modifications so that all other operations retain their running times and decreaseallkeys runs in ( which of the two selection algorithms has the better time boundthe standard copy constructor and makeempty for leftist heaps can fail because of too many recursive calls although this was true for binary search treesit is more problematic for leftist heapsbecause leftist heap can be very deepeven while it has good worst-case performance for basic operations thus the copy constructor and makeempty need to be reimplemented to avoid deep recursion in leftist heaps do this as followsa reorder the recursive routines so that the recursive call to ->left follows the recursive call to ->right rewrite the routines so that the last statement is recursive call on the left subtree eliminate the tail recursion these functions are still recursive give precise bound on the depth of the remaining recursion explain how to rewrite the copy constructor and makeempty for skew heaps references the binary heap was first described in [ the linear-time algorithm for its construction is from [ the first description of -heaps was in [ recent results suggest that -heaps may improve binary heaps in some circumstances [ leftist heaps were invented by crane [ and described in knuth [ skew heaps were developed by sleator and tarjan [ binomial queues were invented by vuillemin [ ]brown provided detailed analysis and empirical study showing that they perform well in practice [ ]if carefully implemented exercise ( -cis taken from [ exercise (cis from [ method for constructing binary heaps that uses about comparisons on average is described in [ lazy deletion in leftist heaps (exercise is from [ solution to exercise can be found in [ min-max heaps (exercise were originally described in [ more efficient implementation of the operations is given in [ and [ alternative representations for double-ended priority queues are the deap and diamond deque details can be found in [ ][ ]and [ solutions to (eare given in [ and [ theoretically interesting priority queue representation is the fibonacci heap [ ]which we will describe in the fibonacci heap allows all operations to be performed in ( amortized timeexcept for deletionswhich are (log nrelaxed heaps [ achieve identical bounds in the worst case (with the exception of mergethe procedure of [ achieves optimal worst-case bounds for all operations another interesting implementation is the pairing heap [ ]which is described in finallypriority queues that work when the data consist of small integers are described in [ and [
atkinsonj sackn santoroand strothotte"min-max heaps and generalized priority queues,communications of the acm ( ) - bright"range restricted mergeable priority queues,information processing letters ( ) - brodal"worst-case efficient priority queues,proceedings of the seventh annual acmsiam symposium on discrete algorithms ( ) - brown"implementation and analysis of binomial queue algorithms,siam journal on computing ( ) - carlsson"the deap-- double-ended heap to implement double-ended priority queues,information processing letters ( ) - carlsson and chen"the complexity of heaps,proceedings of the third symposium on discrete algorithms ( ) - carlssonj chenand strothotte" note on the construction of the data structure 'deap',information processing letters ( ) - carlssonj munroand poblete"an implicit binomial queue with constant insertion time,proceedings of first scandinavian workshop on algorithm theory ( ) - chang and due"diamond dequea simple data structure for priority deques,information processing letters ( ) - cheriton and tarjan"finding minimum spanning trees,siam journal on computing ( ) - crane"linear lists and priority queues as balanced binary trees,technical report stan-cs- - computer science departmentstanford universitystanfordcalif ding and weiss"the relaxed min-max heapa mergeable double-ended priority queue,acta informatica ( ) - driscollh gabowr shrairmanand tarjan"relaxed heapsan alternative to fibonacci heaps with applications to parallel computation,communications of the acm ( ) - floyd"algorithm treesort ,communications of the acm ( ) fredmanr sedgewickd sleatorand tarjan"the pairing heapa new form of self-adjusting heap,algorithmica ( ) - fredman and tarjan"fibonacci heaps and their uses in improved network optimization algorithms,journal of the acm ( ) - gonnet and munro"heaps on heaps,siam journal on computing ( ) - hasham and sack"bounds for min-max heaps,bit ( ) - johnson"priority queues with update and finding minimum spanning trees,information processing letters ( ) - khoong and leong"double-ended binomial queues,proceedings of the fourth annual international symposium on algorithms and computation ( ) - knuththe art of computer programmingvol sorting and searching ed addisonwesleyreadingmass lamarca and ladner"the influence of caches on the performance of sorting,proceedings of the eighth annual acm-siam symposium on discrete algorithms ( ) -
priority queues (heaps mcdiarmid and reed"building heaps fast,journal of algorithms ( ) - sleator and tarjan"self-adjusting heaps,siam journal on computing ( ) - strothottep erikssonand vallner" note on constructing min-max heaps,bit ( ) - van emde boasr kaasand zijlstra"design and implementation of an efficient priority queue,mathematical systems theory ( ) - vuillemin" data structure for manipulating priority queues,communications of the acm ( ) - williams"algorithm heapsort,communications of the acm ( ) -
Sorting

In this chapter, we discuss the problem of sorting an array of elements. To simplify matters, we will assume in our examples that the array contains only integers, although our code will once again allow more general objects. For most of this chapter, we will also assume that the entire sort can be done in main memory, so that the number of elements is relatively small (less than a few million). Sorts that cannot be performed in main memory and must be done on disk or tape are also quite important. This type of sorting, known as external sorting, will be discussed at the end of the chapter.

Our investigation of internal sorting will show that:

- There are several easy algorithms to sort in O(N^2), such as insertion sort.
- There is an algorithm, Shellsort, that is very simple to code, runs in o(N^2), and is efficient in practice.
- There are slightly more complicated O(N log N) sorting algorithms.
- Any general-purpose sorting algorithm requires Ω(N log N) comparisons.

The rest of this chapter will describe and analyze the various sorting algorithms. These algorithms contain interesting and important ideas for code optimization as well as algorithm design. Sorting is also an example where the analysis can be precisely performed. Be forewarned that where appropriate, we will do as much analysis as possible.

Preliminaries

The algorithms we describe will all be interchangeable. Each will be passed an array containing the elements; we assume all array positions contain data to be sorted. We will assume that N is the number of elements passed to our sorting routines.

We will also assume the existence of the "<" operator, which can be used to place a consistent ordering on the input. Besides the assignment operator, these are the only operations allowed on the input data. Sorting under these conditions is known as comparison-based sorting.

This interface is not the same as in the STL sorting algorithms. In the STL, sorting is accomplished by use of the function template sort. The parameters to sort represent the start and endmarker of a (range in a) container and an optional comparator:

void sort( Iterator begin, Iterator end );
void sort( Iterator begin, Iterator end, Comparator cmp );
The iterators must support random access. The sort algorithm does not guarantee that equal items retain their original order (if that is important, use stable_sort instead of sort). As an example, in

std::sort( v.begin( ), v.end( ) );
std::sort( v.begin( ), v.end( ), greater<int>{ } );
std::sort( v.begin( ), v.begin( ) + ( v.end( ) - v.begin( ) ) / 2 );

the first call sorts the entire container, v, in nondecreasing order. The second call sorts the entire container in nonincreasing order. The third call sorts the first half of the container in nondecreasing order.

The sorting algorithm used is generally quicksort, which we describe later in this chapter. In the next section, we implement the simplest sorting algorithm using both our style of passing the array of comparable items, which yields the most straightforward code, and the interface supported by the STL, which requires more code.

Insertion Sort

One of the simplest sorting algorithms is the insertion sort.

The Algorithm

Insertion sort consists of N - 1 passes. For pass p = 1 through N - 1, insertion sort ensures that the elements in positions 0 through p are in sorted order. Insertion sort makes use of the fact that elements in positions 0 through p - 1 are already known to be in sorted order. The table below shows a sample array after each pass of insertion sort.

In pass p, we move the element in position p left until its correct place is found among the first p + 1 elements. The code in the figure that follows implements this strategy. The data movement is performed without the explicit use of swaps: The element in position p is saved in tmp, and all larger elements (prior to position p) are moved one spot to the right; then tmp is placed in the correct spot. This is the same technique that was used in the implementation of binary heaps.

Original        34   8  64  51  32  21    Positions Moved
After p = 1      8  34  64  51  32  21    1
After p = 2      8  34  64  51  32  21    0
After p = 3      8  34  51  64  32  21    1
After p = 4      8  32  34  51  64  21    3
After p = 5      8  21  32  34  51  64    4

Figure: Insertion sort after each pass
/**
 * Simple insertion sort.
 */
template <typename Comparable>
void insertionSort( vector<Comparable> & a )
{
    for( int p = 1; p < a.size( ); ++p )
    {
        Comparable tmp = std::move( a[ p ] );

        int j;
        for( j = p; j > 0 && tmp < a[ j - 1 ]; --j )
            a[ j ] = std::move( a[ j - 1 ] );
        a[ j ] = std::move( tmp );
    }
}

Figure: Insertion sort routine

STL Implementation of Insertion Sort

In the STL, instead of having the sort routines take an array of comparable items as a single parameter, the sort routines receive a pair of iterators that represent the start and endmarker of a range. A two-parameter sort routine uses just that pair of iterators and presumes that the items can be ordered, while a three-parameter sort routine has a function object as a third parameter.

Converting the algorithm in the figure above to use the STL introduces several issues. The obvious issues are:

1. We must write a two-parameter sort and a three-parameter sort. Presumably, the two-parameter sort invokes the three-parameter sort, with less<Object>{ } as the third parameter.
2. Array access must be converted to iterator access.
3. The original code requires that we create tmp, which in the new code will have type Object.

The first issue is the trickiest because the template type parameters (i.e., the generic types) for the two-parameter sort are both Iterator; however, Object is not one of the generic type parameters. Prior to C++11, one had to write extra routines to solve this problem. As shown in the first figure below, C++11 introduces decltype, which cleanly expresses the intent. The second figure shows the main sorting code that replaces array indexing with use of the iterator, and that replaces calls to operator< with calls to the lessThan function object.
/*
 * The two-parameter version calls the three-parameter version,
 * using C++11 decltype.
 */
template <typename Iterator>
void insertionSort( const Iterator & begin, const Iterator & end )
{
    insertionSort( begin, end, less<decltype(*begin)>{ } );
}

Figure: Two-parameter sort invokes three-parameter sort via C++11 decltype

template <typename Iterator, typename Comparator>
void insertionSort( const Iterator & begin, const Iterator & end,
                    Comparator lessThan )
{
    if( begin == end )
        return;

    Iterator j;

    for( Iterator p = begin+1; p != end; ++p )
    {
        auto tmp = std::move( *p );
        for( j = p; j != begin && lessThan( tmp, *( j-1 ) ); --j )
            *j = std::move( *( j-1 ) );
        *j = std::move( tmp );
    }
}

Figure: Three-parameter sort using iterators

Observe that once we actually code the insertionSort algorithm, every statement in the original code is replaced with a corresponding statement in the new code that makes straightforward use of iterators and the function object. The original code is arguably much simpler to read, which is why we use our simpler interface rather than the STL interface when coding our sorting algorithms.

Analysis of Insertion Sort

Because of the nested loops, each of which can take N iterations, insertion sort is O(N^2). Furthermore, this bound is tight, because input in reverse order can achieve this bound. A precise calculation shows that the number of tests in the inner loop is at most p + 1 for each value of p. Summing over all p gives a total of

    2 + 3 + 4 + ... + N = Θ(N^2)
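For completeness, the closed form of that sum (standard arithmetic, spelled out here rather than taken from the original text) is

    2 + 3 + 4 + ... + N = N(N + 1)/2 - 1 = Θ(N^2)

which confirms the quadratic worst case.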
On the other hand, if the input is presorted, the running time is O(N), because the test in the inner for loop always fails immediately. Indeed, if the input is almost sorted (this term will be more rigorously defined in the next section), insertion sort will run quickly. Because of this wide variation, it is worth analyzing the average-case behavior of this algorithm. It turns out that the average case is Θ(N^2) for insertion sort, as well as for a variety of other sorting algorithms, as the next section shows.

A Lower Bound for Simple Sorting Algorithms

An inversion in an array of numbers is any ordered pair (i, j) having the property that i < j but a[i] > a[j]. The example input of the last section, 34, 8, 64, 51, 32, 21, had nine inversions, namely (34, 8), (34, 32), (34, 21), (64, 51), (64, 32), (64, 21), (51, 32), (51, 21), and (32, 21). Notice that this is exactly the number of swaps that needed to be (implicitly) performed by insertion sort. This is always the case, because swapping two adjacent elements that are out of place removes exactly one inversion, and a sorted array has no inversions. Since there is O(N) other work involved in the algorithm, the running time of insertion sort is O(I + N), where I is the number of inversions in the original array. Thus, insertion sort runs in linear time if the number of inversions is O(N).

We can compute precise bounds on the average running time of insertion sort by computing the average number of inversions in a permutation. As usual, defining average is a difficult proposition. We will assume that there are no duplicate elements (if we allow duplicates, it is not even clear what the average number of duplicates is). Using this assumption, we can assume that the input is some permutation of the first N integers (since only relative ordering is important) and that all are equally likely. Under these assumptions, we have the following theorem:

Theorem. The average number of inversions in an array of N distinct elements is N(N - 1)/4.

Proof. For any list, L, of elements, consider Lr, the list in reverse order. The reverse list of the example is 21, 32, 51, 64, 8, 34. Consider any pair of two elements in the list (x, y) with y > x. Clearly, in exactly one of L and Lr this ordered pair represents an inversion. The total number of these pairs in a list L and its reverse Lr is N(N - 1)/2. Thus, an average list has half this amount, or N(N - 1)/4 inversions.

This theorem implies that insertion sort is quadratic on average. It also provides a very strong lower bound about any algorithm that only exchanges adjacent elements:

Theorem. Any algorithm that sorts by exchanging adjacent elements requires Ω(N^2) time on average.
Proof. The average number of inversions is initially N(N - 1)/4 = Ω(N^2). Each swap removes only one inversion, so Ω(N^2) swaps are required.

This is an example of a lower-bound proof. It is valid not only for insertion sort, which performs adjacent exchanges implicitly, but also for other simple algorithms such as bubble sort and selection sort, which we will not describe here. In fact, it is valid over an entire class of sorting algorithms, including those undiscovered, that perform only adjacent exchanges. Because of this, this proof cannot be confirmed empirically. Although this lower-bound proof is rather simple, in general proving lower bounds is much more complicated than proving upper bounds and in some cases resembles magic.

This lower bound shows us that in order for a sorting algorithm to run in subquadratic, or o(N^2), time, it must do comparisons and, in particular, exchanges between elements that are far apart. A sorting algorithm makes progress by eliminating inversions, and to run efficiently, it must eliminate more than just one inversion per exchange.

Shellsort

Shellsort, named after its inventor, Donald Shell, was one of the first algorithms to break the quadratic time barrier, although it was not until several years after its initial discovery that a subquadratic time bound was proven. As suggested in the previous section, it works by comparing elements that are distant; the distance between comparisons decreases as the algorithm runs until the last phase, in which adjacent elements are compared. For this reason, Shellsort is sometimes referred to as diminishing increment sort.

Shellsort uses a sequence, h1, h2, ..., ht, called the increment sequence. Any increment sequence will do as long as h1 = 1, but some choices are better than others (we will discuss that issue later). After a phase, using some increment hk, for every i we have a[i] <= a[i + hk] (where this makes sense); all elements spaced hk apart are sorted. The file is then said to be hk-sorted. For example, the table below shows an array after several phases of Shellsort. An important property of Shellsort (which we state without proof) is that an hk-sorted file that is then hk-1-sorted remains hk-sorted. If this were not the case, the algorithm would likely be of little value, since work done by early phases would be undone by later phases.

The general strategy to hk-sort is, for each position, i, in hk, hk + 1, ..., N - 1, place the element in the correct spot among i, i - hk, i - 2hk, and so on.

Original       81 94 11 96 12 35 17 95 28 58 41 75 15
After 5-sort   35 17 11 28 12 41 75 15 96 58 81 94 95
After 3-sort   28 12 11 35 15 41 58 17 94 75 81 96 95
After 1-sort   11 12 15 17 28 35 41 58 75 81 94 95 96

Figure: Shellsort after each pass
/**
 * Shellsort, using Shell's (poor) increments.
 */
template <typename Comparable>
void shellsort( vector<Comparable> & a )
{
    for( int gap = a.size( ) / 2; gap > 0; gap /= 2 )
        for( int i = gap; i < a.size( ); ++i )
        {
            Comparable tmp = std::move( a[ i ] );
            int j = i;

            for( ; j >= gap && tmp < a[ j - gap ]; j -= gap )
                a[ j ] = std::move( a[ j - gap ] );
            a[ j ] = std::move( tmp );
        }
}

Figure: Shellsort routine using Shell's increments (better increments are possible)

Although this strategy does not affect the implementation, a careful examination shows that the action of an hk-sort is to perform an insertion sort on hk independent subarrays. This observation will be important when we analyze the running time of Shellsort.

A popular (but poor) choice for the increment sequence is to use the sequence suggested by Shell: ht = ⌊N/2⌋, and hk = ⌊hk+1 / 2⌋. The figure above contains a function that implements Shellsort using this sequence. We shall see later that there are increment sequences that give a significant improvement in the algorithm's running time; even a minor change can drastically affect performance (see the exercises). The program in the figure avoids the explicit use of swaps in the same manner as our implementation of insertion sort.

Worst-Case Analysis of Shellsort

Although Shellsort is simple to code, the analysis of its running time is quite another story. The running time of Shellsort depends on the choice of increment sequence, and the proofs can be rather involved. The average-case analysis of Shellsort is a long-standing open problem, except for the most trivial increment sequences. We will prove tight worst-case bounds for two particular increment sequences.

Theorem. The worst-case running time of Shellsort using Shell's increments is Θ(N^2).

Proof. The proof requires showing not only an upper bound on the worst-case running time but also showing that there exists some input that actually takes Ω(N^2) time to run.
We prove the lower bound first by constructing a bad case. First, we choose N to be a power of 2. This makes all the increments even, except for the last increment, which is 1. Now, we will give as input an array with the N/2 largest numbers in the even positions and the N/2 smallest numbers in the odd positions (for this proof, the first position is position 1). As all the increments except the last are even, when we come to the last pass, the N/2 largest numbers are still all in even positions and the N/2 smallest numbers are still all in odd positions. The ith smallest number (i <= N/2) is thus in position 2i - 1 before the beginning of the last pass. Restoring the ith element to its correct place requires moving it i - 1 spaces in the array. Thus, to merely place the N/2 smallest elements in the correct place requires at least

    sum from i = 1 to N/2 of (i - 1) = Ω(N^2)

work. As an example, the table below shows a bad (but not the worst) input when N = 16. The number of inversions remaining after the 2-sort is exactly 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28; thus, the last pass will take considerable time.

Start          1  9  2 10  3 11  4 12  5 13  6 14  7 15  8 16
After 8-sort   1  9  2 10  3 11  4 12  5 13  6 14  7 15  8 16
After 4-sort   1  9  2 10  3 11  4 12  5 13  6 14  7 15  8 16
After 2-sort   1  9  2 10  3 11  4 12  5 13  6 14  7 15  8 16
After 1-sort   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16

Figure: Bad case for Shellsort with Shell's increments (positions are numbered 1 to 16)

To finish the proof, we show the upper bound of O(N^2). As we have observed before, a pass with increment hk consists of hk insertion sorts of about N/hk elements. Since insertion sort is quadratic, the total cost of a pass is O(hk (N/hk)^2) = O(N^2 / hk). Summing over all passes gives a total bound of O(sum over i of N^2 / hi) = O(N^2 sum over i of 1/hi). Because the increments form a geometric series with common ratio 2, and the largest term in the series is h1 = 1, the sum of the 1/hi is less than 2. Thus we obtain a total bound of O(N^2).

The problem with Shell's increments is that pairs of increments are not necessarily relatively prime, and thus the smaller increment can have little effect. Hibbard suggested a slightly different increment sequence, which gives better results in practice (and theoretically). His increments are of the form 1, 3, 7, ..., 2^k - 1. Although these increments are almost identical, the key difference is that consecutive increments have no common factors. We now analyze the worst-case running time of Shellsort for this increment sequence. The proof is rather complicated.

Theorem. The worst-case running time of Shellsort using Hibbard's increments is Θ(N^(3/2)).

Proof. We will prove only the upper bound and leave the proof of the lower bound as an exercise. The proof requires some well-known results from additive number theory. References to these results are provided at the end of the chapter.

For the upper bound, as before, we bound the running time of each pass and sum over all passes. For increments hk > N^(1/2), we will use the bound O(N^2 / hk) from the
previous theorem. Although this bound holds for the other increments, it is too large to be useful. Intuitively, we must take advantage of the fact that this increment sequence is special. What we need to show is that for any element a[p] in position p, when it is time to perform an hk-sort, there are only a few elements to the left of position p that are larger than a[p].

When we come to hk-sort the input array, we know that it has already been hk+1- and hk+2-sorted. Prior to the hk-sort, consider elements in positions p and p - i, i <= p. If i is a multiple of hk+1 or hk+2, then clearly a[p - i] < a[p]. We can say more, however. If i is expressible as a linear combination (in nonnegative integers) of hk+1 and hk+2, then a[p - i] < a[p]. As an example, when we come to 3-sort, the file is already 7- and 15-sorted. 52 is expressible as a linear combination of 7 and 15, because 52 = 1 * 7 + 3 * 15. Thus, a[100] cannot be larger than a[152] because

    a[100] <= a[107] <= a[122] <= a[137] <= a[152]

Now, hk+2 = 2hk+1 + 1, so hk+1 and hk+2 cannot share a common factor. In this case, it is possible to show that all integers that are at least as large as (hk+1 - 1)(hk+2 - 1) = 8hk^2 + 4hk can be expressed as a linear combination of hk+1 and hk+2 (see the reference at the end of the chapter).

This tells us that the body of the innermost for loop can be executed at most 8hk + 4 = O(hk) times for each of the N - hk positions. This gives a bound of O(N hk) per pass.

Using the fact that about half the increments satisfy hk < N^(1/2), and assuming that t is even, the total running time is then

    O( sum from k = 1 to t/2 of N hk  +  sum from k = t/2 + 1 to t of N^2 / hk )
      = O( N sum from k = 1 to t/2 of hk  +  N^2 sum from k = t/2 + 1 to t of 1/hk )

Because both sums are geometric series, and since h(t/2) = Θ(N^(1/2)), this simplifies to

    O( N h(t/2) + N^2 / h(t/2) ) = O( N^(3/2) )

The average-case running time of Shellsort, using Hibbard's increments, is thought to be O(N^(5/4)), based on simulations, but nobody has been able to prove this. Pratt has shown that the Θ(N^(3/2)) bound applies to a wide range of increment sequences.

Sedgewick has proposed several increment sequences that give an O(N^(4/3)) worst-case running time (also achievable). The average running time is conjectured to be O(N^(7/6)) for these increment sequences. Empirical studies show that these sequences perform significantly better in practice than Hibbard's. The best of these is the sequence {1, 5, 19, 41, 109, ...}, in which the terms are either of the form 9 * 4^i - 9 * 2^i + 1 or 4^i - 3 * 2^i + 1. This is most easily implemented by placing these values in an array. This increment sequence is the best known in practice, although there is a lingering possibility that some increment sequence might exist that could give a significant improvement in the running time of Shellsort.

There are several other results on Shellsort that (generally) require difficult theorems from number theory and combinatorics and are mainly of theoretical interest. Shellsort is a fine example of a very simple algorithm with an extremely complex analysis.
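As a sketch of the "place the values in an array" suggestion above (our code, not the book's; the gap table holds the first few terms of Sedgewick's sequence, and a production version would extend the table for larger inputs):

/**
 * Shellsort driven by a precomputed increment table.
 */
template <typename Comparable>
void shellsortSedgewick( vector<Comparable> & a )
{
    static const int gaps[ ] = { 1, 5, 19, 41, 109, 209, 505, 929, 2161, 3905 };
    int g = sizeof( gaps ) / sizeof( gaps[ 0 ] ) - 1;

    while( g > 0 && gaps[ g ] >= (int) a.size( ) )
        --g;                                          // skip gaps that are too large

    for( ; g >= 0; --g )                              // each pass is a gap-insertion sort
    {
        int gap = gaps[ g ];
        for( int i = gap; i < (int) a.size( ); ++i )
        {
            Comparable tmp = std::move( a[ i ] );
            int j = i;
            for( ; j >= gap && tmp < a[ j - gap ]; j -= gap )
                a[ j ] = std::move( a[ j - gap ] );
            a[ j ] = std::move( tmp );
        }
    }
}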
The performance of Shellsort is quite acceptable in practice, even for N in the tens of thousands. The simplicity of the code makes it the algorithm of choice for sorting up to moderately large input.

Heapsort

As mentioned in the previous chapter, priority queues can be used to sort in O(N log N) time. The algorithm based on this idea is known as heapsort and gives the best big-Oh running time we have seen so far.

Recall that the basic strategy is to build a binary heap of N elements. This stage takes O(N) time. We then perform N deleteMin operations. The elements leave the heap smallest first, in sorted order. By recording these elements in a second array and then copying the array back, we sort N elements. Since each deleteMin takes O(log N) time, the total running time is O(N log N).

The main problem with this algorithm is that it uses an extra array. Thus, the memory requirement is doubled. This could be a problem in some instances. Notice that the extra time spent copying the second array back to the first is only O(N), so that this is not likely to affect the running time significantly. The problem is space.

A clever way to avoid using a second array makes use of the fact that after each deleteMin, the heap shrinks by 1. Thus the cell that was last in the heap can be used to store the element that was just deleted. As an example, suppose we have a heap with six elements. The first deleteMin produces a1. Now the heap has only five elements, so we can place a1 in position 6. The next deleteMin produces a2. Since the heap will now only have four elements, we can place a2 in position 5.

Using this strategy, after the last deleteMin the array will contain the elements in decreasing sorted order. If we want the elements in the more typical increasing sorted order, we can change the ordering property so that the parent has a larger element than the child. Thus, we have a (max)heap.

In our implementation, we will use a (max)heap but avoid the actual ADT for the purposes of speed. As usual, everything is done in an array. The first step builds the heap in linear time. We then perform N - 1 deleteMaxes by swapping the last element in the heap with the first, decrementing the heap size, and percolating down. When the algorithm terminates, the array contains the elements in sorted order. For instance, consider the input sequence 31, 41, 59, 26, 53, 58, 97. The resulting heap appears in the first figure below; the second figure shows the heap that results after the first deleteMax. As the figures imply, the last element in the heap is 31; 97 has been placed in a part of the heap array that is technically no longer part of the heap. After 5 more deleteMax operations, the heap will actually have only one element, but the elements left in the heap array will be in sorted order.

The code to perform heapsort is given after the figures. The slight complication is that, unlike the binary heap, where the data begin at array index 1, the array for heapsort contains data in position 0. Thus the code is a little different from the binary heap code. The changes are minor.
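Since the heap root now sits at array index 0 rather than index 1, the index arithmetic shifts by one. A quick reference (our summary; the book's own leftChild helper appears in the code that follows, and the other two helpers here are our additions with hypothetical names):

inline int leftChild0( int i )  { return 2 * i + 1; }     // as in the book's code below
inline int rightChild0( int i ) { return 2 * i + 2; }     // our addition
inline int parent0( int i )     { return ( i - 1 ) / 2; } // our addition

Contrast this with the 1-based binary heap of the previous chapter, where the children of node i are at 2i and 2i + 1.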
Figure: (Max) heap after buildHeap phase
Figure: Heap after first deleteMax

Analysis of Heapsort

As we saw in the previous chapter, the first phase, which constitutes the building of the heap, uses less than 2N comparisons. In the second phase, the ith deleteMax uses at most 2⌊log(N - i + 1)⌋ comparisons, for a total of at most 2N log N - O(N) comparisons (assuming N >= 2). Consequently, in the worst case, at most 2N log N - O(N) comparisons are used by heapsort. An exercise asks you to show that it is possible for all of the deleteMax operations to achieve their worst case simultaneously.
/**
 * Standard heapsort.
 */
template <typename Comparable>
void heapsort( vector<Comparable> & a )
{
    for( int i = a.size( ) / 2 - 1; i >= 0; --i )  /* buildHeap */
        percDown( a, i, a.size( ) );
    for( int j = a.size( ) - 1; j > 0; --j )
    {
        std::swap( a[ 0 ], a[ j ] );               /* deleteMax */
        percDown( a, 0, j );
    }
}

/**
 * Internal method for heapsort.
 * i is the index of an item in the heap.
 * Returns the index of the left child.
 */
inline int leftChild( int i )
{
    return 2 * i + 1;
}

/**
 * Internal method for heapsort that is used in deleteMax and buildHeap.
 * i is the position from which to percolate down.
 * n is the logical size of the binary heap.
 */
template <typename Comparable>
void percDown( vector<Comparable> & a, int i, int n )
{
    int child;
    Comparable tmp;

    for( tmp = std::move( a[ i ] ); leftChild( i ) < n; i = child )
    {
        child = leftChild( i );
        if( child != n - 1 && a[ child ] < a[ child + 1 ] )
            ++child;
        if( tmp < a[ child ] )
            a[ i ] = std::move( a[ child ] );
        else
            break;
    }
    a[ i ] = std::move( tmp );
}

Figure: Heapsort
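A minimal driver (ours, not the book's) that runs the routine above on the input sequence from the text:

#include <iostream>
#include <vector>
using namespace std;

int main( )
{
    vector<int> v{ 31, 41, 59, 26, 53, 58, 97 };
    heapsort( v );
    for( int x : v )
        cout << x << ' ';    // prints: 26 31 41 53 58 59 97
    cout << endl;
    return 0;
}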
Experiments have shown that the performance of heapsort is extremely consistent: On average it uses only slightly fewer comparisons than the worst-case bound suggests. For many years, nobody had been able to show nontrivial bounds on heapsort's average running time. The problem, it seems, is that successive deleteMax operations destroy the heap's randomness, making the probability arguments very complex. Eventually, another approach proved successful.

Theorem. The average number of comparisons used to heapsort a random permutation of N distinct items is 2N log N - O(N log log N).

Proof. The heap construction phase uses O(N) comparisons on average, and so we only need to prove the bound for the second phase. We assume a permutation of {1, 2, ..., N}.

Suppose the ith deleteMax pushes the root element down di levels. Then it uses 2di comparisons. For heapsort on any input, there is a cost sequence D: d1, d2, ..., dN that defines the cost of phase 2. That cost is given by

    MD = d1 + d2 + ... + dN

and the number of comparisons used is thus 2MD.

Let f(N) be the number of heaps of N items. One can show (see the exercises) that f(N) > (N/(4e))^N (where e = 2.71828...). We will show that only an exponentially small fraction of these heaps (in particular, (N/16)^N) have a cost smaller than M = N(log N - log log N - 4). When this is shown, it follows that the average value of MD is at least M minus a lower-order term, and thus the average number of comparisons is at least 2M. Consequently, our basic goal is to show that there are very few heaps that have small cost sequences.

Because level di has at most 2^di nodes, there are 2^di possible places that the root element can go for any di. Consequently, for any sequence D, the number of distinct corresponding deleteMax sequences is at most

    SD = 2^d1 * 2^d2 * ... * 2^dN

A simple algebraic manipulation shows that for a given sequence D,

    SD = 2^MD

Because each di can assume any value between 1 and ⌊log N⌋, there are at most (log N)^N possible sequences D. It follows that the number of distinct deleteMax sequences that require cost exactly equal to M is at most the number of cost sequences of total cost M times the number of deleteMax sequences for each of these cost sequences. A bound of (log N)^N 2^M follows immediately.

The total number of heaps with cost sequence less than M is at most

    sum from i = 1 to M - 1 of (log N)^N 2^i < (log N)^N 2^M
If we choose M = N(log N - log log N - 4), then the number of heaps that have cost sequence less than M is at most (N/16)^N, and the theorem follows from our earlier comments.

Using a more complex argument, it can be shown that heapsort always uses at least N log N - O(N) comparisons and that there are inputs that can achieve this bound. The average-case analysis also can be improved to 2N log N - O(N) comparisons (rather than the nonlinear second term in the theorem above).

Mergesort

We now turn our attention to mergesort. Mergesort runs in O(N log N) worst-case running time, and the number of comparisons used is nearly optimal. It is a fine example of a recursive algorithm.

The fundamental operation in this algorithm is merging two sorted lists. Because the lists are sorted, this can be done in one pass through the input, if the output is put in a third list. The basic merging algorithm takes two input arrays A and B, an output array C, and three counters, Actr, Bctr, and Cctr, which are initially set to the beginning of their respective arrays:

    A: 1 13 24 26    B: 2 15 27 38    C: (empty)

The smaller of A[Actr] and B[Bctr] is copied to the next entry in C, and the appropriate counters are advanced. When either input list is exhausted, the remainder of the other list is copied to C.

If the array A contains 1, 13, 24, 26, and B contains 2, 15, 27, 38, then the algorithm proceeds as follows: First, a comparison is done between 1 and 2. 1 is added to C, and then 13 and 2 are compared:

    C: 1

2 is added to C, and then 13 and 15 are compared:

    C: 1 2
13 is added to C, and then 24 and 15 are compared:

    C: 1 2 13

15 is added to C, and then 24 and 27 are compared. This proceeds until 26 and 27 are compared:

    C: 1 2 13 15
    C: 1 2 13 15 24

26 is added to C, and the A array is exhausted:

    C: 1 2 13 15 24 26

The remainder of the B array is then copied to C:

    C: 1 2 13 15 24 26 27 38

The time to merge two sorted lists is clearly linear, because at most N - 1 comparisons are made, where N is the total number of elements. To see this, note that every comparison adds an element to C, except the last comparison, which adds at least two.

The mergesort algorithm is therefore easy to describe. If N = 1, there is only one element to sort, and the answer is at hand. Otherwise, recursively mergesort the first half and the second half. This gives two sorted halves, which can then be merged together using the merging algorithm described above. For instance, to sort the eight-element array 24, 13, 26, 1, 2, 27, 38, 15, we recursively sort the first four and last four elements, obtaining 1, 13, 24, 26, 2, 15, 27, 38. Then we merge the two halves as above, obtaining the final list 1, 2, 13, 15, 24, 26, 27, 38. This algorithm is a classic divide-and-conquer strategy. The problem is divided into smaller problems and solved recursively. The conquering phase consists of patching together the answers. Divide-and-conquer is a very powerful use of recursion that we will see many times.

An implementation of mergesort is provided in the figure that follows. The one-parameter mergeSort is just a driver for the four-parameter recursive mergeSort.

The merge routine is subtle. If a temporary array is declared locally for each recursive call of merge, then there could be log N temporary arrays active at any point. A close examination shows that since merge is the last line of mergeSort, there only needs to be one
temporary array active at any point, and that the temporary array can be created in the public mergeSort driver. Further, we can use any part of the temporary array; we will use the same portion as the input array. This allows the improvement described at the end of this section.

/**
 * Mergesort algorithm (driver).
 */
template <typename Comparable>
void mergeSort( vector<Comparable> & a )
{
    vector<Comparable> tmpArray( a.size( ) );

    mergeSort( a, tmpArray, 0, a.size( ) - 1 );
}

/**
 * Internal method that makes recursive calls.
 * a is an array of Comparable items.
 * tmpArray is an array to place the merged result.
 * left is the left-most index of the subarray.
 * right is the right-most index of the subarray.
 */
template <typename Comparable>
void mergeSort( vector<Comparable> & a,
                vector<Comparable> & tmpArray, int left, int right )
{
    if( left < right )
    {
        int center = ( left + right ) / 2;
        mergeSort( a, tmpArray, left, center );
        mergeSort( a, tmpArray, center + 1, right );
        merge( a, tmpArray, left, center + 1, right );
    }
}

Figure: Mergesort routines

The next figure implements the merge routine.

Analysis of Mergesort

Mergesort is a classic example of the techniques used to analyze recursive routines: We have to write a recurrence relation for the running time. We will assume that N is a power of 2 so that we always split into even halves. For N = 1, the time to mergesort is constant, which we will denote by 1. Otherwise, the time to mergesort N numbers is equal to the time to do two recursive mergesorts of size N/2, plus the time to merge, which is linear.
/**
 * Internal method that merges two sorted halves of a subarray.
 * a is an array of Comparable items.
 * tmpArray is an array to place the merged result.
 * leftPos is the left-most index of the subarray.
 * rightPos is the index of the start of the second half.
 * rightEnd is the right-most index of the subarray.
 */
template <typename Comparable>
void merge( vector<Comparable> & a, vector<Comparable> & tmpArray,
            int leftPos, int rightPos, int rightEnd )
{
    int leftEnd = rightPos - 1;
    int tmpPos = leftPos;
    int numElements = rightEnd - leftPos + 1;

    // Main loop
    while( leftPos <= leftEnd && rightPos <= rightEnd )
        if( a[ leftPos ] <= a[ rightPos ] )
            tmpArray[ tmpPos++ ] = std::move( a[ leftPos++ ] );
        else
            tmpArray[ tmpPos++ ] = std::move( a[ rightPos++ ] );

    while( leftPos <= leftEnd )    // Copy rest of first half
        tmpArray[ tmpPos++ ] = std::move( a[ leftPos++ ] );

    while( rightPos <= rightEnd )  // Copy rest of right half
        tmpArray[ tmpPos++ ] = std::move( a[ rightPos++ ] );

    // Copy tmpArray back
    for( int i = 0; i < numElements; ++i, --rightEnd )
        a[ rightEnd ] = std::move( tmpArray[ rightEnd ] );
}

Figure: merge routine

The following equations say this exactly:

    T(1) = 1
    T(N) = 2T(N/2) + N

This is a standard recurrence relation, which can be solved several ways. We will show two methods. The first idea is to divide the recurrence relation through by N. The reason for doing this will become apparent soon. This yields

    T(N)/N = T(N/2)/(N/2) + 1
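For completeness, the telescoping that this division sets up can be sketched (a standard step; the following lines are derived directly from the recurrence rather than taken from the truncated text):

    T(N)/N = T(N/2)/(N/2) + 1
    T(N/2)/(N/2) = T(N/4)/(N/4) + 1
    ...
    T(2)/2 = T(1)/1 + 1

Adding these log N equations, everything on the right cancels against the left except the 1s, giving T(N)/N = T(1)/1 + log N, and hence T(N) = N log N + N = O(N log N).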