Lists, Stacks, and Queues

Infix to Postfix Conversion

Not only can a stack be used to evaluate a postfix expression, but we can also use a stack to convert an expression in standard form (otherwise known as infix) into postfix. We will concentrate on a small version of the general problem by allowing only the operators +, *, (, ), and insisting on the usual precedence rules. We will further assume that the expression is legal. Suppose we want to convert the infix expression

    a + b * c + ( d * e + f ) * g

into postfix. A correct answer is a b c * + d e * f + g * +.

When an operand is read, it is immediately placed onto the output. Operators are not immediately output, so they must be saved somewhere. The correct thing to do is to place operators that have been seen, but not placed on the output, onto the stack. We will also stack left parentheses when they are encountered. We start with an initially empty stack.

If we see a right parenthesis, then we pop the stack, writing symbols until we encounter a (corresponding) left parenthesis, which is popped but not output.

If we see any other symbol (+, *, (), then we pop entries from the stack until we find an entry of lower priority. One exception is that we never remove a ( from the stack except when processing a ). For the purposes of this operation, + has lowest priority and ( highest. When the popping is done, we push the operator onto the stack.

Finally, if we read the end of input, we pop the stack until it is empty, writing symbols onto the output.

The idea of this algorithm is that when an operator is seen, it is placed on the stack. The stack represents pending operators. However, some of the operators on the stack that have high precedence are now known to be completed and should be popped, as they will no longer be pending. Thus, prior to placing the operator on the stack, operators that are on the stack, and which are to be completed prior to the current operator, are popped. This is illustrated in the following table:

    Expression    Stack when third operator is processed    Action
    a*b-c+d       -                                         - is completed; + is pushed
    a/b+c*d       +                                         Nothing is completed; * is pushed
    a-b*c/d       - *                                       * is completed; / is pushed
    a-b*c+d       - *                                       * and - are completed; + is pushed

Parentheses simply add an additional complication. We can view a left parenthesis as a high-precedence operator when it is an input symbol (so that pending operators remain pending) and a low-precedence operator when it is on the stack (so that it is not accidentally removed by an operator). Right parentheses are treated as the special case.

To see how this algorithm performs, we will convert the long infix expression above into its postfix form. First, the symbol a is read, so it is passed through to the output.
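The conversion procedure described above can be sketched in C++. This is a minimal illustration, not the text's own code; the function name infixToPostfix and the restriction to single-letter operands are assumptions, and - and / are included alongside + and *:

```cpp
#include <cctype>
#include <stack>
#include <string>

// Precedence of an operator as seen when it is on the stack.
// '(' gets the lowest value so no ordinary operator ever pops it.
static int prec( char op )
{
    switch( op )
    {
        case '(':           return 0;
        case '+': case '-': return 1;
        case '*': case '/': return 2;
        default:            return -1;
    }
}

// Convert an infix expression over single-letter operands into postfix.
std::string infixToPostfix( const std::string & infix )
{
    std::stack<char> ops;
    std::string out;

    for( char c : infix )
    {
        if( std::isalpha( static_cast<unsigned char>( c ) ) )
            out += c;                        // operands go straight to output
        else if( c == '(' )
            ops.push( c );                   // highest priority as an input symbol
        else if( c == ')' )
        {
            while( !ops.empty( ) && ops.top( ) != '(' )
                { out += ops.top( ); ops.pop( ); }
            if( !ops.empty( ) )
                ops.pop( );                  // discard the '(' without outputting it
        }
        else if( prec( c ) > 0 )
        {
            // Pop pending operators of greater or equal precedence.
            while( !ops.empty( ) && prec( ops.top( ) ) >= prec( c ) )
                { out += ops.top( ); ops.pop( ); }
            ops.push( c );
        }
        // anything else (e.g., spaces) is ignored in this sketch
    }
    while( !ops.empty( ) )                   // end of input: empty the stack
        { out += ops.top( ); ops.pop( ); }
    return out;
}
```

Because operators of equal precedence are popped before the new one is pushed, left-to-right association falls out automatically.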
Then + is read and pushed onto the stack. Next, b is read and passed through to the output. The state of affairs at this juncture is as follows:

    Stack: +        Output: a b

Next, a * is read. The top entry on the operator stack has lower precedence than *, so nothing is output and * is put on the stack. Next, c is read and output. Thus far, we have

    Stack: + *      Output: a b c

The next symbol is a +. Checking the stack, we find that we will pop a * and place it on the output; pop the other +, which is not of lower but equal priority; and then push the +.

    Stack: +        Output: a b c * +

The next symbol read is a (. Being of highest precedence, this is placed on the stack. Then d is read and output.

    Stack: + (      Output: a b c * + d

We continue by reading a *. Since open parentheses do not get removed except when a closed parenthesis is being processed, there is no output. Next, e is read and output.

    Stack: + ( *    Output: a b c * + d e
The next symbol read is a +. We pop and output * and then push the +. Then we read and output f.

    Stack: + ( +    Output: a b c * + d e * f

Now we read a ), so the stack is emptied back to the (. We output a +.

    Stack: +        Output: a b c * + d e * f +

We read a * next; it is pushed onto the stack. Then g is read and output.

    Stack: + *      Output: a b c * + d e * f + g

The input is now empty, so we pop and output symbols from the stack until it is empty.

    Stack: (empty)  Output: a b c * + d e * f + g * +

As before, this conversion requires only O(N) time and works in one pass through the input. We can add subtraction and division to this repertoire by assigning subtraction and addition equal priority and multiplication and division equal priority. A subtle point is that the expression a - b - c will be converted to a b - c - and not a b c - -. Our algorithm does the right thing, because these operators associate from left to right. This is not necessarily the case in general, since exponentiation associates right to left (2^(2^3) = 2^8 = 256, not (2^2)^3 = 64). We leave as an exercise the problem of adding exponentiation to the repertoire of operators.

Function Calls

The algorithm to check balanced symbols suggests a way to implement function calls in compiled procedural and object-oriented languages. The problem here is that when a call is made to a new function, all the variables local to the calling routine need to be saved by the system, since otherwise the new function will overwrite the memory used by the calling routine's variables. Furthermore, the current location in the routine must be saved
so that the new function knows where to go after it is done. The variables have generally been assigned by the compiler to machine registers, and there are certain to be conflicts (usually all functions get some variables assigned to register #1), especially if recursion is involved. The reason that this problem is similar to balancing symbols is that a function call and a function return are essentially the same as an open parenthesis and a closed parenthesis, so the same ideas should work.

When there is a function call, all the important information that needs to be saved, such as register values (corresponding to variable names) and the return address (which can be obtained from the program counter, which is typically in a register), is saved "on a piece of paper" in an abstract way and put at the top of a pile. Then control is transferred to the new function, which is free to replace the registers with its values. If it makes other function calls, it follows the same procedure. When the function wants to return, it looks at the "paper" at the top of the pile and restores all the registers. It then makes the return jump.

Clearly, all of this work can be done using a stack, and that is exactly what happens in virtually every programming language that implements recursion. The information saved is called either an activation record or a stack frame. Typically, a slight adjustment is made: The current environment is represented at the top of the stack. Thus, a return gives the previous environment (without copying). The stack in a real computer frequently grows from the high end of your memory partition downward, and on many systems there is no checking for overflow. There is always the possibility that you will run out of stack space by having too many simultaneously active functions. Needless to say, running out of stack space is always a fatal error.

In languages and systems that do not check for stack overflow, programs crash without an explicit explanation. In normal events, you should not run out of stack space; doing so is usually an
indication of runaway recursion (forgetting a base case). On the other hand, some perfectly legal and seemingly innocuous programs can cause you to run out of stack space. The routine in the figure below, which prints out a container, is perfectly legal and actually correct. It properly handles the base case of an empty container, and the recursion is fine. This program can be proven correct. Unfortunately, if the container

    /**
     * Print container from start up to but not including end.
     */
    template <typename Iterator>
    void print( Iterator start, Iterator end, ostream & out = cout )
    {
        if( start == end )
            return;
        out << *start++ << endl;    // Print and advance start
        print( start, end, out );
    }

    Figure: A bad use of recursion: printing a container
    /**
     * Print container from start up to but not including end.
     */
    template <typename Iterator>
    void print( Iterator start, Iterator end, ostream & out = cout )
    {
        while( true )
        {
            if( start == end )
                return;
            out << *start++ << endl;    // Print and advance start
        }
    }

    Figure: Printing a container without recursion; a compiler might do this (you should not)

contains many elements to print, there will be a stack of as many activation records representing the nested recursive calls to print. Activation records are typically large because of all the information they contain, so this program is likely to run out of stack space. (If the container is not large enough to make the program crash, make it larger.)

This program is an example of an extremely bad use of recursion known as tail recursion. Tail recursion refers to a recursive call at the last line. Tail recursion can be mechanically eliminated by enclosing the body in a while loop and replacing the recursive call with one assignment per function argument. This simulates the recursive call because nothing needs to be saved; after the recursive call finishes, there is really no need to know the saved values. Because of this, we can just go to the top of the function with the values that would have been used in a recursive call. The function in the figure above shows the mechanically improved version generated by this algorithm. Removal of tail recursion is so simple that some compilers do it automatically. Even so, it is best not to find out that yours does not.

Recursion can always be completely removed (compilers do so in converting to assembly language), but doing so can be quite tedious. The general strategy requires using a stack and is worthwhile only if you can manage to put the bare minimum on the stack. We will not dwell on this further, except to point out that although nonrecursive programs are certainly generally faster than equivalent recursive programs, the speed advantage rarely justifies the lack of clarity that results from removing the recursion.

The Queue ADT

Like stacks, queues are lists. With a queue, however, insertion is done at one end whereas deletion is performed at the other end.
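As a small illustration of the general strategy of removing recursion with an explicit stack, here is a sketch (not from the text) of an iterative factorial. It saves pending arguments the way the runtime would save activation records, then "returns" by popping them off:

```cpp
#include <stack>

// Iterative factorial using an explicit stack of pending arguments,
// mimicking the activation records a recursive version would create.
long factorialIterative( int n )
{
    std::stack<int> pending;        // saved arguments, like stack frames

    while( n > 1 )                  // "recursive calls": save state, shrink problem
    {
        pending.push( n );
        --n;
    }

    long result = 1;                // base case
    while( !pending.empty( ) )      // "returns": restore state, combine results
    {
        result *= pending.top( );
        pending.pop( );
    }
    return result;
}
```

Here the only state worth saving per "call" is one integer, which is exactly the point of the advice above: explicit removal of recursion pays off only when the bare minimum goes on the stack.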
Queue Model

The basic operations on a queue are enqueue, which inserts an element at the end of the list (called the rear), and dequeue, which deletes (and returns) the element at the start of the list (known as the front).

    [Figure: Model of a queue — enqueue inserts at the rear; dequeue removes from the front.]

Array Implementation of Queues

As with stacks, any list implementation is legal for queues. Like stacks, both the linked list and array implementations give fast O(1) running times for every operation. The linked list implementation is straightforward and left as an exercise. We will now discuss an array implementation of queues.

For each queue data structure, we keep an array, theArray, and the positions front and back, which represent the ends of the queue. We also keep track of the number of elements that are actually in the queue, currentSize. The following table shows a queue in some intermediate state:

    [Table: a queue in an intermediate state, with front marking the first element and back the last.]

The operations should be clear. To enqueue an element x, we increment currentSize and back, then set theArray[back] = x. To dequeue an element, we set the return value to theArray[front], decrement currentSize, and then increment front. Other strategies are possible (this is discussed later). We will comment on checking for errors presently.

There is one potential problem with this implementation. After some enqueues, the queue appears to be full, since back is now at the last array index, and the next enqueue would be in a nonexistent position. However, there might only be a few elements in the queue, because several elements may have already been dequeued. Queues, like stacks, frequently stay small even in the presence of a lot of operations.

The simple solution is that whenever front or back gets to the end of the array, it is wrapped around to the beginning. This is known as a circular array implementation. The following tables show the queue during some operations.
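The circular-array scheme just described can be sketched as a minimal C++ class. The class and method names are illustrative, not the text's interface, and an explicit currentSize data member is kept, as recommended below:

```cpp
#include <stdexcept>
#include <vector>

// A minimal circular-array queue with an explicit size counter.
template <typename T>
class ArrayQueue
{
  public:
    explicit ArrayQueue( int capacity = 8 )
        : theArray( capacity ), front( 0 ),
          back( capacity - 1 ), currentSize( 0 ) { }

    bool empty( ) const { return currentSize == 0; }
    int size( ) const { return currentSize; }

    void enqueue( const T & x )
    {
        if( currentSize == static_cast<int>( theArray.size( ) ) )
            throw std::overflow_error( "queue is full" );
        back = ( back + 1 ) % theArray.size( );   // wrap around if needed
        theArray[ back ] = x;
        ++currentSize;
    }

    T dequeue( )
    {
        if( empty( ) )
            throw std::underflow_error( "queue is empty" );
        T x = theArray[ front ];
        front = ( front + 1 ) % theArray.size( ); // wrap around if needed
        --currentSize;
        return x;
    }

  private:
    std::vector<T> theArray;
    int front;        // index of the first element
    int back;         // index of the last element
    int currentSize;  // explicit size avoids the full-vs-empty ambiguity
};
```

Keeping currentSize explicit sidesteps the tricky special cases, discussed below, that arise when the size must be inferred from front and back alone.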
    [Figure: the circular-array queue during a sequence of operations — the initial state; several enqueues, during which back wraps around past the end of the array; then successive dequeues, each returning the element at front, during which front wraps around as well.]
    [After a final dequeue, which returns the last remaining element and makes the queue empty, back sits immediately behind front.]

The extra code required to implement the wraparound is minimal (although it probably doubles the running time). If incrementing either back or front causes it to go past the array, the value is reset to the first position in the array.

Some programmers use different ways of representing the front and back of a queue. For instance, some do not use an entry to keep track of the size, because they rely on the base case that when the queue is empty, back = front - 1. The size is computed implicitly by comparing back and front. This is a very tricky way to go, because there are some special cases, so be very careful if you need to modify code written this way. If currentSize is not maintained as an explicit data member, then the queue is full when there are theArray.capacity() - 1 elements, since only theArray.capacity() different sizes can be differentiated and one of these is 0. Pick any style you like and make sure that all your routines are consistent. Since there are a few options for implementation, it is probably worth a comment or two in the code if you don't use the currentSize data member.

In applications where you are sure that the number of enqueues is not larger than the capacity of the queue, the wraparound is not necessary. As with stacks, dequeues are rarely performed unless the calling routines are certain that the queue is not empty. Thus error checks are frequently skipped for this operation, except in critical code. This is generally not justifiable, because the time savings that you are likely to achieve are minimal.

Applications of Queues

There are many algorithms that use queues to give efficient running times. Several of these are found in graph theory, and we will discuss them in a later chapter. For now, we will give some simple examples of queue usage.

When jobs are submitted to a printer, they are arranged in order of arrival. Thus, essentially, jobs sent to a printer are placed on a queue.

Virtually every real-life line is (supposed to be) a queue. For instance, lines at
ticket counters are queues, because service is first-come first-served.

Another example concerns computer networks. There are many network setups of personal computers in which the disk is attached to one machine, known as the file server. Users on other machines are given access to files on a first-come first-served basis, so the data structure is a queue.

(Footnote: We say essentially because jobs can be killed. This amounts to a deletion from the middle of the queue, which is a violation of the strict definition.)
Further examples include the following:

- Calls to large companies are generally placed on a queue when all operators are busy.
- In large universities, where resources are limited, students must sign a waiting list if all computers are occupied. The student who has been at a computer the longest is forced off first, and the student who has been waiting the longest is the next user to be allowed on.

A whole branch of mathematics known as queuing theory deals with computing, probabilistically, how long users expect to wait on a line, how long the line gets, and other such questions. The answer depends on how frequently users arrive to the line and how long it takes to process a user once the user is served. Both of these parameters are given as probability distribution functions. In simple cases, an answer can be computed analytically. An example of an easy case would be a phone line with one operator. If the operator is busy, callers are placed on a waiting line (up to some maximum limit). This problem is important for businesses, because studies have shown that people are quick to hang up the phone.

If there are k operators, then this problem is much more difficult to solve. Problems that are difficult to solve analytically are often solved by simulation. In our case, we would need to use a queue to perform the simulation. If k is large, we also need other data structures to do this efficiently. We shall see how to do this simulation in a later chapter. We could then run the simulation for several values of k and choose the minimum k that gives a reasonable waiting time.

Additional uses for queues abound, and as with stacks, it is staggering that such a simple data structure can be so important.

Summary

This chapter describes the concept of ADTs and illustrates the concept with three of the most common abstract data types. The primary objective is to separate the implementation of the ADTs from their function. The program must know what the operations do, but it is actually better off not knowing how it is done.

Lists, stacks, and queues are perhaps the
three fundamental data structures in all of computer science, and their use is documented through a host of examples. In particular, we saw how stacks are used to keep track of function calls and how recursion is actually implemented. This is important to understand, not just because it makes procedural languages possible, but because knowing how recursion is implemented removes a good deal of the mystery that surrounds its use. Although recursion is very powerful, it is not an entirely free operation; misuse and abuse of recursion can result in programs crashing.

Exercises

You are given a list, L, and another list, P, containing integers sorted in ascending order. The operation printLots(L,P) will print the elements in L that are in the positions specified by P. For instance, if P = 1, 3, 4, 6, the elements in positions 1, 3, 4, and 6 in L are printed. Write the procedure printLots(L,P). You may use only the public STL container operations. What is the running time of your procedure?
Swap two adjacent elements by adjusting only the links (and not the data), using
a. singly linked lists
b. doubly linked lists

Implement the STL find routine that returns the iterator containing the first occurrence of x in the range that begins at start and extends up to but not including end. If x is not found, end is returned. This is a nonclass (global) function with signature

    template <typename Iterator, typename Object>
    Iterator find( Iterator start, Iterator end, const Object & x );

Given two sorted lists, L1 and L2, write a procedure to compute L1 ∩ L2 using only the basic list operations.

Given two sorted lists, L1 and L2, write a procedure to compute L1 ∪ L2 using only the basic list operations.

The Josephus problem is the following game: N people, numbered 1 to N, are sitting in a circle. Starting at person 1, a hot potato is passed. After M passes, the person holding the hot potato is eliminated, the circle closes ranks, and the game continues with the person who was sitting after the eliminated person picking up the hot potato. The last remaining person wins. Thus, if M = 0 and N = 5, players are eliminated in order, and player 5 wins. If M = 1 and N = 5, the order of elimination is 2, 4, 1, 5.
a. Write a program to solve the Josephus problem for general values of M and N. Try to make your program as efficient as possible. Make sure you dispose of cells.
b. What is the running time of your program?
c. If M = 1, what is the running time of your program? How is the actual speed affected by the delete routine for large values of N?

Modify the Vector class to add bounds checks for indexing.

Add insert and erase to the Vector class.

According to the C++ standard, for the vector, a call to push_back, pop_back, insert, or erase invalidates (potentially makes stale) all iterators viewing the vector. Why?

Modify the Vector class to provide stringent iterator checking by making iterators class types rather than pointer variables. The hardest part is dealing with stale iterators, as described in the previous exercise.

Assume that a singly linked list is implemented with a header node, but no tail node, and that it maintains only a pointer to the header node. Write a class that includes methods to
a. return the size of the linked list
b. print the linked list
c. test if a value x is contained in the linked list
d. add a value x if it is not already contained in the linked list
e. remove a value x if it is contained in the linked list

Repeat the previous exercise, maintaining the singly linked list in sorted order.

Add support for operator-- to the List iterator classes.
Looking ahead in an STL iterator requires an application of operator++, which in turn advances the iterator. In some cases looking at the next item in the list, without advancing to it, may be preferable. Write the member function with the declaration

    const_iterator operator+( int k ) const;

to facilitate this in a general case. The binary operator+ returns an iterator that corresponds to k positions ahead of current.

Add the splice operation to the List class. The method declaration

    void splice( iterator position, List & lst );

removes all the items from lst, placing them prior to position in List *this. lst and *this must be different lists. Your routine must run in constant time.

Add reverse iterators to the STL List class implementation. Define reverse_iterator and const_reverse_iterator. Add the methods rbegin and rend to return appropriate reverse iterators representing the position prior to the endmarker and the position that is the header node. Reverse iterators internally reverse the meaning of the ++ and -- operators. You should be able to print a list L in reverse by using the code

    List<Object>::reverse_iterator itr = L.rbegin( );
    while( itr != L.rend( ) )
        cout << *itr++ << endl;

Modify the List class to provide stringent iterator checking by using the ideas suggested at the end of the List implementation section.

When an erase method is applied to a list, it invalidates any iterator that is referencing the removed node. Such an iterator is called stale. Describe an efficient algorithm that guarantees that any operation on a stale iterator acts as though the iterator's current is nullptr. Note that there may be many stale iterators. You must explain which classes need to be rewritten in order to implement your algorithm.

Rewrite the List class without using header and tail nodes and describe the differences between the class and the class provided in the chapter.

An alternative to the deletion strategy we have given is to use lazy deletion. To delete an element, we merely mark it deleted (using an extra bit field). The number of deleted and nondeleted elements in the list is kept as part of the data structure. If there are as many deleted elements as nondeleted elements, we traverse the entire list, performing the standard deletion algorithm on all marked nodes.
a. List the advantages and disadvantages of lazy deletion.
b. Write routines to implement the standard linked list operations using lazy deletion.

Write a program to check for balancing symbols in the following languages:
a. Pascal (begin/end, ( ), [ ], { }).
b. C++ (/* */, ( ), [ ], { }).
c. Explain how to print out an error message that is likely to reflect the probable cause.
Write a program to evaluate a postfix expression.

a. Write a program to convert an infix expression that includes (, ), +, -, *, and / to postfix.
b. Add the exponentiation operator to your repertoire.
c. Write a program to convert a postfix expression to infix.

Write routines to implement two stacks using only one array. Your stack routines should not declare an overflow unless every slot in the array is used.

a. Propose a data structure that supports the stack push and pop operations and a third operation findMin, which returns the smallest element in the data structure, all in O(1) worst-case time.
b. Prove that if we add the fourth operation deleteMin, which finds and removes the smallest element, then at least one of the operations must take Ω(log N) time. (This requires reading a later chapter.)

Show how to implement three stacks in one array.

If the recursive routine used earlier to compute Fibonacci numbers is run for a large N, is stack space likely to run out? Why or why not?

A deque is a data structure consisting of a list of items on which the following operations are possible:
push(x): Insert item x on the front end of the deque.
pop(): Remove the front item from the deque and return it.
inject(x): Insert item x on the rear end of the deque.
eject(): Remove the rear item from the deque and return it.
Write routines to support the deque that take O(1) time per operation.

Write an algorithm for printing a singly linked list in reverse, using only constant extra space. This instruction implies that you cannot use recursion, but you may assume that your algorithm is a list member function. Can such an algorithm be written if the routine is a constant member function?

a. Write an array implementation of self-adjusting lists. In a self-adjusting list, all insertions are performed at the front. A self-adjusting list adds a find operation, and when an element is accessed by a find, it is moved to the front of the list without changing the relative order of the other items.
b. Write a linked list implementation of self-adjusting lists.
c. Suppose each element has a fixed probability, pi, of being accessed. Show that the
elements with highest access probability are expected to be close to the front.

Efficiently implement a stack class using a singly linked list, with no header or tail nodes.

Efficiently implement a queue class using a singly linked list, with no header or tail nodes.

Efficiently implement a queue class using a circular array. You may use a vector (rather than a primitive array) as the underlying array structure.

A linked list contains a cycle if, starting from some node p, following a sufficient number of next links brings us back to node p. p does not have to be the first node
in the list. Assume that you are given a linked list that contains N nodes; however, the value of N is unknown.
a. Design an O(N) algorithm to determine if the list contains a cycle. You may use O(N) extra space.
b. Repeat part (a), but use only O(1) extra space. (Hint: Use two iterators that are initially at the start of the list but advance at different speeds.)

One way to implement a queue is to use a circular linked list. In a circular linked list, the last node's next pointer points at the first node. Assume the list does not contain a header and that we can maintain, at most, one iterator corresponding to a node in the list. For which of the following representations can all basic queue operations be performed in constant worst-case time? Justify your answers.
a. Maintain an iterator that corresponds to the first item in the list.
b. Maintain an iterator that corresponds to the last item in the list.

Suppose we have a pointer to a node in a singly linked list that is guaranteed not to be the last node in the list. We do not have pointers to any other nodes (except by following links). Describe an O(1) algorithm that logically removes the value stored in such a node from the linked list, maintaining the integrity of the linked list. (Hint: Involve the next node.)

Suppose that a singly linked list is implemented with both a header and a tail node. Describe constant-time algorithms to
a. insert item x before position p (given by an iterator)
b. remove the item stored at position p (given by an iterator)
Trees

For large amounts of input, the linear access time of linked lists is prohibitive. In this chapter, we look at a simple data structure for which the average running time of most operations is O(log N). We also sketch a conceptually simple modification to this data structure that guarantees the above time bound in the worst case and discuss a second modification that essentially gives an O(log N) running time per operation for a long sequence of instructions.

The data structure that we are referring to is known as a binary search tree. The binary search tree is the basis for the implementation of two library collections classes, set and map, which are used in many applications. Trees in general are very useful abstractions in computer science, so we will discuss their use in other, more general applications as well. In this chapter, we will:

- See how trees are used to implement the file system of several popular operating systems.
- See how trees can be used to evaluate arithmetic expressions.
- Show how to use trees to support searching operations in O(log N) average time and how to refine these ideas to obtain O(log N) worst-case bounds. We will also see how to implement these operations when the data are stored on a disk.
- Discuss and use the set and map classes.

Preliminaries

A tree can be defined in several ways. One natural way to define a tree is recursively. A tree is a collection of nodes. The collection can be empty; otherwise, a tree consists of a distinguished node, r, called the root, and zero or more nonempty (sub)trees T1, T2, ..., Tk, each of whose roots are connected by a directed edge from r. The root of each subtree is said to be a child of r, and r is the parent of each subtree root. The figure below shows a typical tree using the recursive definition.

From the recursive definition, we find that a tree is a collection of N nodes, one of which is the root, and N - 1 edges. That there are N - 1 edges follows from the fact that each edge connects some node to its parent, and every node except the root has one parent (see the figures below).
    [Figure: A generic tree — a root with subtrees T1 through Tk.]
    [Figure: A sample tree.]

In the sample tree, the root is A. Node F has A as a parent and K, L, and M as children. Each node may have an arbitrary number of children, possibly zero. Nodes with no children are known as leaves; the leaves in the tree above are B, C, H, I, P, Q, K, L, M, and N. Nodes with the same parent are siblings; thus, K, L, and M are all siblings. Grandparent and grandchild relations can be defined in a similar manner.

A path from node n1 to nk is defined as a sequence of nodes n1, n2, ..., nk such that ni is the parent of ni+1 for 1 ≤ i < k. The length of this path is the number of edges on the path, namely k - 1. There is a path of length zero from every node to itself. Notice that in a tree there is exactly one path from the root to each node.

For any node ni, the depth of ni is the length of the unique path from the root to ni. Thus, the root is at depth 0. The height of ni is the length of the longest path from ni to a leaf. Thus all leaves are at height 0. The height of a tree is equal to the height of the root. For the sample tree, E is at depth 1 and height 2; F is at depth 1 and height 1; the height of the tree is 3. The depth of a tree is equal to the depth of the deepest leaf; this is always equal to the height of the tree.

If there is a path from n1 to n2, then n1 is an ancestor of n2 and n2 is a descendant of n1. If n1 ≠ n2, then n1 is a proper ancestor of n2 and n2 is a proper descendant of n1.

Implementation of Trees

One way to implement a tree would be to have in each node, besides its data, a link to each child of the node. However, since the number of children per node can vary so greatly and is not known in advance, it might be infeasible to make the children direct links in the data
    struct TreeNode
    {
        Object element;
        TreeNode *firstChild;
        TreeNode *nextSibling;
    };

    Figure: Node declarations for trees

    [Figure: First child/next sibling representation of the tree shown earlier.]

structure, because there would be too much wasted space. The solution is simple: Keep the children of each node in a linked list of tree nodes. The declaration above is typical.

The figure shows how a tree might be represented in this implementation. Horizontal arrows that point downward are firstChild links. Arrows that go left to right are nextSibling links. Null links are not drawn, because there are too many. In that tree, node E has both a link to a sibling (F) and a link to a child (I), while some nodes have neither.

Tree Traversals with an Application

There are many applications for trees. One of the popular uses is the directory structure in many common operating systems, including UNIX and DOS. The figure below is a typical directory in the UNIX file system. The root of this directory is /usr. (The asterisk next to the name indicates that /usr is itself a directory.) /usr has three children, mark, alex, and bill, which are themselves directories. Thus, /usr contains three directories and no regular files. The filename /usr/mark/book/ch is obtained by following the leftmost child three times. Each / after the first indicates an edge; the result is the full pathname.

This hierarchical file system is very popular because it allows users to organize their data logically. Furthermore, two files in different directories can share the same name, because they must have different paths from the root and thus have different pathnames. A directory in the UNIX file system is just a file with a list of all its children, so the directories are structured almost exactly in accordance
    [Figure: A UNIX directory tree rooted at /usr, with subdirectories mark (containing book, course, and junk), alex (containing junk), and bill (containing work and course), and their files (ch chapters, fall/spr/sum terms with syl files, grades, and prog files).]

    void FileSystem::listAll( int depth = 0 ) const
    {
        printName( depth );               // Print the name of the object
        if( isDirectory( ) )
            for each file c in this directory (for each child)
                c.listAll( depth + 1 );
    }

    Figure: Pseudocode to list a directory in a hierarchical file system

with the type declaration above. Indeed, on some versions of UNIX, if the normal command to print a file is applied to a directory, then the names of the files in the directory can be seen in the output (along with other non-ASCII information).

Suppose we would like to list the names of all of the files in the directory. Our output format will be that files that are at depth di will have their names indented by di tabs. Our algorithm is given above as pseudocode.

The recursive function listAll needs to be started with a depth of 0 to signify no indenting for the root. This depth is an internal bookkeeping variable, and is hardly a parameter that a calling routine should be expected to know about. Thus, the default value of 0 is provided for depth.

The logic of the algorithm is simple to follow. The name of the file object is printed out with the appropriate number of tabs. If the entry is a directory, then we process all children recursively, one by one. These children are one level deeper, and thus need to be indented an extra tab. The output appears in the next figure.

This traversal strategy is known as a preorder traversal. In a preorder traversal, work at a node is performed before (pre) its children are processed. When this program is run, it is clear that the printName statement is executed exactly once per node, since each name is output once. Since printName is executed at most once per node, the isDirectory test must also be executed at most once per node.

(Footnote: Each directory in the UNIX file system also has one entry that points to itself and another entry that points to the parent of the directory. Thus, technically, the UNIX file system is not a tree, but is treelike.)
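A concrete, runnable version of the listAll pseudocode might look like the following sketch. The Node structure here is a simplified stand-in for the FileSystem class assumed by the pseudocode, and the function returns the listing as a string rather than printing it:

```cpp
#include <string>
#include <vector>

// Simplified stand-in for a file-system object: a name plus children
// (the children vector is empty for a plain file).
struct Node
{
    std::string name;
    std::vector<Node*> children;
};

// Preorder listing: emit this node's name, indented one tab per level,
// before recursively listing each child one level deeper.
std::string listAll( const Node *t, int depth = 0 )
{
    std::string out( depth, '\t' );   // indent by one tab per level
    out += t->name + "\n";            // preorder: name first
    for( const Node *c : t->children )
        out += listAll( c, depth + 1 );
    return out;
}
```

Because every node contributes one line and one constant-time step per child, the running time is O(N) for N names, as the analysis above argues.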
Figure: The (preorder) directory listing

Furthermore, the recursive call can be executed at most once for each child of each node. But the number of children is exactly one less than the number of nodes. Finally, the for loop iterates once per execution of the recursive call, plus once each time the loop ends. Thus, the total amount of work is constant per node. If there are N file names to be output, then the running time is O(N).

Another common method of traversing a tree is the postorder traversal. In a postorder traversal, the work at a node is performed after (post) its children are evaluated. As an example, the next figure represents the same directory structure as before, with the numbers in parentheses representing the number of disk blocks taken up by each file. Since the directories are themselves files, they have sizes too. Suppose we would like to calculate the total number of blocks used by all the files in the tree. The most natural way to do this would be to find the number of blocks contained in each of the subdirectories /usr/mark, /usr/alex, and /usr/bill. The total number of blocks is then the total in the
Figure: The UNIX directory with file sizes, obtained via postorder traversal

int FileSystem::size( ) const
{
    int totalSize = sizeOfThisFile( );
    if( isDirectory( ) )
        for each file c in this directory (for each child)
            totalSize += c.size( );
    return totalSize;
}

Figure: Pseudocode to calculate the size of a directory

subdirectories plus the one block used by /usr itself. The pseudocode method size in the figure above implements this strategy. If the current object is not a directory, then size merely returns the number of blocks it uses. Otherwise, the number of blocks used by the directory is added to the number of blocks (recursively) found in all the children. To see the difference between the postorder traversal strategy and the preorder traversal strategy, the next figure shows how the size of each directory or file is produced by the algorithm.

Binary Trees

A binary tree is a tree in which no node can have more than two children. The figure shows that a binary tree consists of a root and two subtrees, TL and TR, both of which could possibly be empty.

A property of a binary tree that is sometimes important is that the depth of an average binary tree is considerably smaller than N. An analysis shows that the average depth is O(sqrt(N)), and that for a special type of binary tree, namely the binary search tree, the average value of the depth is O(log N). Unfortunately, the depth can be as large as N - 1, as the example in the next figure shows.
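The postorder size computation is easy to make concrete. The File struct below is an assumption for this sketch (the book's version asks the file system for sizeOfThisFile); here each node simply stores its own block count.

```cpp
#include <cassert>
#include <vector>

struct File
{
    int blocks;                   // disk blocks used by this file or directory entry
    std::vector<File> children;   // empty for a regular file
};

// Postorder traversal: a node's total is known only after the
// totals of all of its children have been computed.
int totalSize( const File & f )
{
    int total = f.blocks;
    for( const auto & c : f.children )
        total += totalSize( c );
    return total;
}
```

A directory using 1 block whose children use 2 blocks and 3+4 blocks reports a total of 10.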
Figure: A trace of the size function

Figure: A generic binary tree, with root and subtrees TL and TR
Figure: Worst-case binary tree

Implementation

Because a binary tree node has at most two children, we can keep direct links to them. The declaration of tree nodes is similar in structure to that for doubly linked lists, in that a node is a structure consisting of the element information plus two pointers (left and right) to other nodes (see the figure below). We could draw the binary trees using the rectangular boxes that are customary for linked lists, but trees are generally drawn as circles connected by lines, because they are actually graphs. We also do not explicitly draw nullptr links when referring to trees, because every binary tree with N nodes would require N + 1 nullptr links.

Binary trees have many important uses not associated with searching. One of the principal uses of binary trees is in the area of compiler design, which we will now explore.

An Example: Expression Trees

The figure shows an example of an expression tree. The leaves of an expression tree are operands, such as constants or variable names, and the other nodes contain operators. This particular tree happens to be binary, because all the operators are binary, and although this is the simplest case, it is possible for nodes to have more than two children. It is also possible for a node to have only one child, as is the case with the unary minus operator. We can evaluate an expression tree, T, by applying the operator at the root to the values

struct BinaryNode
{
    Object element;       // The data in the node
    BinaryNode *left;     // Left child
    BinaryNode *right;    // Right child
};

Figure: Binary tree node class (pseudocode)
Figure: Expression tree for (a + b * c) + ((d * e + f) * g)

obtained by recursively evaluating the left and right subtrees. In our example, the left subtree evaluates to a + (b * c) and the right subtree evaluates to ((d * e) + f) * g. The entire tree therefore represents (a + (b * c)) + (((d * e) + f) * g).

We can produce an (overly parenthesized) infix expression by recursively producing a parenthesized left expression, then printing out the operator at the root, and finally recursively producing a parenthesized right expression. This general strategy (left, node, right) is known as an inorder traversal; it is easy to remember because of the type of expression it produces.

An alternate traversal strategy is to recursively print out the left subtree, the right subtree, and then the operator. If we apply this strategy to our tree above, the output is a b c * + d e * f + g * +, which is easily seen to be the postfix representation from the earlier discussion of stacks. This traversal strategy is generally known as a postorder traversal. We have seen this traversal strategy earlier in this chapter.

A third traversal strategy is to print out the operator first and then recursively print out the left and right subtrees. The resulting expression, + + a * b c * + * d e f g, is the less useful prefix notation, and the traversal strategy is a preorder traversal, which we have also seen earlier in this chapter. We will return to these traversal strategies later in the chapter.

Constructing an Expression Tree

We now give an algorithm to convert a postfix expression into an expression tree. Since we already have an algorithm to convert infix to postfix, we can generate expression trees from the two common types of input. The method we describe strongly resembles the postfix evaluation algorithm seen earlier. We read our expression one symbol at a time. If the symbol is an operand, we create a one-node tree and push a pointer to it onto a stack. If the symbol is an operator, we pop (pointers to) two trees T1 and T2 from the stack (T1 is popped first) and form a new tree whose root is the operator and whose left and right children point to T2 and T1, respectively. A pointer to this new tree is then pushed onto the stack.

As an example, suppose the input is a b + c d e + * *.
The first two symbols are operands, so we create one-node trees and push pointers to them onto a stack. Next, a + is read, so two pointers to trees are popped, a new tree is formed, and a pointer to it is pushed onto the stack. Next, c, d, and e are read, and for each one a one-node tree is created and a pointer to the corresponding tree is pushed onto the stack. Now a + is read, so two trees are merged. For convenience, we will have the stack grow from left to right in the diagrams.
Continuing, a * is read, so we pop two tree pointers and form a new tree with a * as root. Finally, the last symbol is read, two trees are merged, and a pointer to the final tree is left on the stack.
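The construction just described can be sketched as runnable C++. This is an illustration, not the book's code: tokens are single-character strings, only the four binary operators are recognized, and toPostfix is an added helper used to check that the tree round-trips back to the input.

```cpp
#include <cassert>
#include <stack>
#include <string>
#include <vector>

struct ExprNode
{
    std::string val;
    ExprNode *left  = nullptr;
    ExprNode *right = nullptr;
};

static bool isOperator( const std::string & s )
{
    return s == "+" || s == "-" || s == "*" || s == "/";
}

// Build an expression tree from a postfix token sequence.
// Operands become one-node trees; an operator pops two trees
// (the first popped becomes the right child) and pushes the merge.
ExprNode * buildFromPostfix( const std::vector<std::string> & tokens )
{
    std::stack<ExprNode *> s;
    for( const auto & tok : tokens )
    {
        if( isOperator( tok ) )
        {
            ExprNode *t1 = s.top( ); s.pop( );   // right child
            ExprNode *t2 = s.top( ); s.pop( );   // left child
            s.push( new ExprNode{ tok, t2, t1 } );
        }
        else
            s.push( new ExprNode{ tok } );
    }
    return s.top( );
}

// Added helper: a postorder print recovers the postfix expression.
std::string toPostfix( const ExprNode *t )
{
    if( t->left == nullptr )    // leaves (operands) have no children
        return t->val;
    return toPostfix( t->left ) + " " + toPostfix( t->right ) + " " + t->val;
}
```

For brevity the sketch leaks the nodes; a real implementation would recursively delete them, much as the binary search tree code later in the chapter does.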
The Search Tree ADT--Binary Search Trees

An important application of binary trees is their use in searching. Let us assume that each node in the tree stores an item. In our examples, we will assume, for simplicity, that these are integers, although arbitrarily complex items are easily handled in C++. We will also assume that all the items are distinct, and we will deal with duplicates later.

The property that makes a binary tree into a binary search tree is that for every node, X, in the tree, the values of all the items in its left subtree are smaller than the item in X, and the values of all the items in its right subtree are larger than the item in X. Notice that this implies that all the elements in the tree can be ordered in some consistent manner. In the figure, the tree on the left is a binary search tree, but the tree on the right is not: it has, in the left subtree of the root, a node whose item is larger than the root's item.

We now give brief descriptions of the operations that are usually performed on binary search trees. Note that because of the recursive definition of trees, it is common to write these routines recursively. Because the average depth of a binary search tree turns out to be O(log N), we generally do not need to worry about running out of stack space.

The figure shows the interface for the BinarySearchTree class template. There are several things worth noticing. Searching is based on the < operator that must be defined for the particular Comparable type. Specifically, item x matches y if both x<y and y<x are false. This allows Comparable to be a complex type (such as an employee record), with a comparison function defined on only part of the type (such as the social security number data member or the salary). An earlier section illustrates the general technique of designing a class that can be used as a Comparable. An alternative, described later in this section, is to allow a function object.

The data member is a pointer to the root node; this pointer is nullptr for empty trees. The public member functions use the general technique of calling private recursive functions. An example of how this is done for contains, insert, and remove is shown in the figure.

Figure: Two binary trees (only the left tree is a search tree)
template <typename Comparable>
class BinarySearchTree
{
  public:
    BinarySearchTree( );
    BinarySearchTree( const BinarySearchTree & rhs );
    BinarySearchTree( BinarySearchTree && rhs );
    ~BinarySearchTree( );

    const Comparable & findMin( ) const;
    const Comparable & findMax( ) const;
    bool contains( const Comparable & x ) const;
    bool isEmpty( ) const;
    void printTree( ostream & out = cout ) const;

    void makeEmpty( );
    void insert( const Comparable & x );
    void insert( Comparable && x );
    void remove( const Comparable & x );

    BinarySearchTree & operator=( const BinarySearchTree & rhs );
    BinarySearchTree & operator=( BinarySearchTree && rhs );

  private:
    struct BinaryNode
    {
        Comparable element;
        BinaryNode *left;
        BinaryNode *right;

        BinaryNode( const Comparable & theElement, BinaryNode *lt, BinaryNode *rt )
          : element{ theElement }, left{ lt }, right{ rt } { }

        BinaryNode( Comparable && theElement, BinaryNode *lt, BinaryNode *rt )
          : element{ std::move( theElement ) }, left{ lt }, right{ rt } { }
    };

    BinaryNode *root;

    void insert( const Comparable & x, BinaryNode * & t );
    void insert( Comparable && x, BinaryNode * & t );
    void remove( const Comparable & x, BinaryNode * & t );
    BinaryNode * findMin( BinaryNode *t ) const;
    BinaryNode * findMax( BinaryNode *t ) const;
    bool contains( const Comparable & x, BinaryNode *t ) const;
    void makeEmpty( BinaryNode * & t );
    void printTree( BinaryNode *t, ostream & out ) const;
    BinaryNode * clone( BinaryNode *t ) const;
};

Figure: Binary search tree class skeleton
/**
 * Returns true if x is found in the tree.
 */
bool contains( const Comparable & x ) const
{
    return contains( x, root );
}

/**
 * Insert x into the tree; duplicates are ignored.
 */
void insert( const Comparable & x )
{
    insert( x, root );
}

/**
 * Remove x from the tree. Nothing is done if x is not found.
 */
void remove( const Comparable & x )
{
    remove( x, root );
}

Figure: Illustration of a public member function calling a private recursive member function

Several of the private member functions use the technique of passing a pointer variable using call-by-reference. This allows the public member functions to pass a pointer to the root to the private recursive member functions. The recursive functions can then change the value of the root so that the root points to another node. We will describe the technique in more detail when we examine the code for insert.

We can now describe some of the private methods.

contains

This operation requires returning true if there is a node in tree T that has item X, or false if there is no such node. The structure of the tree makes this simple. If T is empty, then we can just return false. Otherwise, if the item stored at T is X, we can return true. Otherwise, we make a recursive call on a subtree of T, either left or right, depending on the relationship of X to the item stored in T. The code in the figure is an implementation of this strategy.
/**
 * Internal method to test if an item is in a subtree.
 * x is item to search for.
 * t is the node that roots the subtree.
 */
bool contains( const Comparable & x, BinaryNode *t ) const
{
    if( t == nullptr )
        return false;
    else if( x < t->element )
        return contains( x, t->left );
    else if( t->element < x )
        return contains( x, t->right );
    else
        return true;    // Match
}

Figure: contains operation for binary search trees

Notice the order of the tests. It is crucial that the test for an empty tree be performed first, since otherwise we would generate a run time error attempting to access a data member through a nullptr pointer. The remaining tests are arranged with the least likely case last. Also note that both recursive calls are actually tail recursions and can be easily removed with a while loop. The use of tail recursion is justifiable here because the simplicity of algorithmic expression compensates for the decrease in speed, and the amount of stack space used is expected to be only O(log N). The figure shows the trivial changes required to use a function object rather than requiring that the items be Comparable. This mimics the idioms discussed earlier.

findMin and findMax

These private routines return a pointer to the node containing the smallest and largest elements in the tree, respectively. To perform a findMin, start at the root and go left as long as there is a left child. The stopping point is the smallest element. The findMax routine is the same, except that branching is to the right child. Many programmers do not bother using recursion. We will code the routines both ways, by doing findMin recursively and findMax nonrecursively (see the figures). Notice how we carefully handle the degenerate case of an empty tree. Although this is always important to do, it is especially crucial in recursive programs. Also notice that it is safe to change t in findMax, since we are only working with a copy of a pointer. Always be extremely careful, however, because a statement such as t->right = t->right->right will make changes.
template <typename Object, typename Comparator>
class BinarySearchTree
{
  public:
    // Same methods, with Object replacing Comparable

  private:
    BinaryNode *root;
    Comparator isLessThan;

    // Same methods, with Object replacing Comparable

    /**
     * Internal method to test if an item is in a subtree.
     * x is item to search for.
     * t is the node that roots the subtree.
     */
    bool contains( const Object & x, BinaryNode *t ) const
    {
        if( t == nullptr )
            return false;
        else if( isLessThan( x, t->element ) )
            return contains( x, t->left );
        else if( isLessThan( t->element, x ) )
            return contains( x, t->right );
        else
            return true;    // Match
    }
};

Figure: Illustrates use of a function object to implement a binary search tree

insert

The insertion routine is conceptually simple. To insert X into tree T, proceed down the tree as you would with a contains. If X is found, do nothing. Otherwise, insert X at the last spot on the path traversed. The figure shows what happens: we traverse the tree as though a contains were occurring; at the last node on the path we need to branch toward a missing subtree, so the new item is not in the tree, and this is the correct spot to place it.

Duplicates can be handled by keeping an extra field in the node record indicating the frequency of occurrence. This adds some extra space to the entire tree but is better than putting duplicates in the tree (which tends to make the tree very deep). Of course,
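To make the function-object idea concrete, here is a small runnable sketch of a Comparator and the "match" test the tree relies on: x matches y exactly when neither isLessThan(x, y) nor isLessThan(y, x) holds. The CaseInsensitiveCompare functor is an assumption for this example, not part of the book's class.

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Hypothetical function object: orders strings ignoring case.
struct CaseInsensitiveCompare
{
    bool operator()( const std::string & lhs, const std::string & rhs ) const
    {
        return std::lexicographical_compare(
            lhs.begin( ), lhs.end( ), rhs.begin( ), rhs.end( ),
            []( char a, char b )
            { return std::tolower( (unsigned char) a ) <
                     std::tolower( (unsigned char) b ); } );
    }
};

// The tree's notion of equality: neither item is "less than" the other.
template <typename Object, typename Comparator>
bool matches( const Object & x, const Object & y, Comparator isLessThan )
{
    return !isLessThan( x, y ) && !isLessThan( y, x );
}
```

With this comparator, "Hello" and "HELLO" match, so a tree using it would treat them as duplicates.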
/**
 * Internal method to find the smallest item in a subtree t.
 * Return node containing the smallest item.
 */
BinaryNode * findMin( BinaryNode *t ) const
{
    if( t == nullptr )
        return nullptr;
    if( t->left == nullptr )
        return t;
    return findMin( t->left );
}

Figure: Recursive implementation of findMin for binary search trees

/**
 * Internal method to find the largest item in a subtree t.
 * Return node containing the largest item.
 */
BinaryNode * findMax( BinaryNode *t ) const
{
    if( t != nullptr )
        while( t->right != nullptr )
            t = t->right;
    return t;
}

Figure: Nonrecursive implementation of findMax for binary search trees

Figure: Binary search trees before and after an insertion
this strategy does not work if the key that guides the < operator is only part of a larger structure. If that is the case, then we can keep all of the structures that have the same key in an auxiliary data structure, such as a list or another search tree.

The figure shows the code for the insertion routine. The two recursive calls insert and attach x into the appropriate subtree. Notice that in the recursive routine, the only time that t changes is when a new leaf is created. When this happens, it means that the recursive routine has been called from some other node, p, which is to be the leaf's parent. The call

/**
 * Internal method to insert into a subtree.
 * x is the item to insert.
 * t is the node that roots the subtree.
 * Set the new root of the subtree.
 */
void insert( const Comparable & x, BinaryNode * & t )
{
    if( t == nullptr )
        t = new BinaryNode{ x, nullptr, nullptr };
    else if( x < t->element )
        insert( x, t->left );
    else if( t->element < x )
        insert( x, t->right );
    else
        ;  // Duplicate; do nothing
}

/**
 * Internal method to insert into a subtree.
 * x is the item to insert by moving.
 * t is the node that roots the subtree.
 * Set the new root of the subtree.
 */
void insert( Comparable && x, BinaryNode * & t )
{
    if( t == nullptr )
        t = new BinaryNode{ std::move( x ), nullptr, nullptr };
    else if( x < t->element )
        insert( std::move( x ), t->left );
    else if( t->element < x )
        insert( std::move( x ), t->right );
    else
        ;  // Duplicate; do nothing
}

Figure: Insertion into a binary search tree
will be insert( x, t->left ) or insert( x, t->right ). Either way, t is now a reference to either t->left or t->right, meaning that t->left or t->right will be changed to point at the new node. All in all, a slick maneuver.

remove

As is common with many data structures, the hardest operation is deletion. Once we have found the node to be deleted, we need to consider several possibilities. If the node is a leaf, it can be deleted immediately. If the node has one child, the node can be deleted after its parent adjusts a link to bypass the node (we will draw the link directions explicitly for clarity); see the figure.

The complicated case deals with a node with two children. The general strategy is to replace the data of this node with the smallest data of the right subtree (which is easily found) and recursively delete that node (which is now empty). Because the smallest node in the right subtree cannot have a left child, the second remove is an easy one. The figure shows an initial tree and the result of a deletion. The node to be deleted is the left child of the root. It is replaced with the smallest data in its right subtree, and then that node is deleted as before.

The code in the figure performs deletion. It is inefficient because it makes two passes down the tree to find and delete the smallest node in the right subtree when this is appropriate. It is easy to remove this inefficiency by writing a special removeMin method, and we have left it in only for simplicity.

If the number of deletions is expected to be small, then a popular strategy to use is lazy deletion: when an element is to be deleted, it is left in the tree and merely marked as being deleted. This is especially popular if duplicate items are present, because then the data member that keeps count of the frequency of appearance can be decremented. If the number of real nodes in the tree is the same as the number of "deleted" nodes, then the depth of the tree is only expected to go up by a small constant (why?), so there is a very small time penalty associated with lazy deletion. Also, if a deleted item is reinserted, the overhead of allocating a new cell is avoided.

Figure: Deletion of a node with one child, before and after
Figure: Deletion of a node with two children, before and after

/**
 * Internal method to remove from a subtree.
 * x is the item to remove.
 * t is the node that roots the subtree.
 * Set the new root of the subtree.
 */
void remove( const Comparable & x, BinaryNode * & t )
{
    if( t == nullptr )
        return;   // Item not found; do nothing
    if( x < t->element )
        remove( x, t->left );
    else if( t->element < x )
        remove( x, t->right );
    else if( t->left != nullptr && t->right != nullptr ) // Two children
    {
        t->element = findMin( t->right )->element;
        remove( t->element, t->right );
    }
    else
    {
        BinaryNode *oldNode = t;
        t = ( t->left != nullptr ) ? t->left : t->right;
        delete oldNode;
    }
}

Figure: Deletion routine for binary search trees
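The insertion and deletion logic can be exercised with a small free-standing version. This sketch mirrors the routines above but uses a bare int node rather than the class template; it is an illustration under that assumption, not the book's class.

```cpp
#include <cassert>

struct Node
{
    int key;
    Node *left  = nullptr;
    Node *right = nullptr;
};

void insert( int x, Node * & t )
{
    if( t == nullptr )
        t = new Node{ x };
    else if( x < t->key )
        insert( x, t->left );
    else if( t->key < x )
        insert( x, t->right );
    // duplicate: do nothing
}

Node * findMin( Node *t )
{
    if( t != nullptr )
        while( t->left != nullptr )
            t = t->left;
    return t;
}

void remove( int x, Node * & t )
{
    if( t == nullptr )
        return;                                   // not found; do nothing
    if( x < t->key )
        remove( x, t->left );
    else if( t->key < x )
        remove( x, t->right );
    else if( t->left != nullptr && t->right != nullptr )
    {
        t->key = findMin( t->right )->key;        // two children: copy successor
        remove( t->key, t->right );               // then delete the successor
    }
    else
    {
        Node *oldNode = t;
        t = ( t->left != nullptr ) ? t->left : t->right;
        delete oldNode;
    }
}

bool contains( int x, const Node *t )
{
    if( t == nullptr ) return false;
    if( x < t->key )   return contains( x, t->left );
    if( t->key < x )   return contains( x, t->right );
    return true;
}
```

Removing a node with two children replaces its key with the smallest key of its right subtree, exactly as in the figure.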
Destructor and Copy Constructor

As usual, the destructor calls makeEmpty. The public makeEmpty (not shown) simply calls the private recursive version. As shown in the figure, after recursively processing t's children, a call to delete is made for t. Thus all nodes are recursively reclaimed. Notice that at the end, t, and thus root, is changed to point at nullptr. The copy constructor, shown in the next figure, follows the usual procedure: first initializing root to nullptr and then making a copy of rhs. We use a very slick recursive function named clone to do all the dirty work.

Average-Case Analysis

Intuitively, we expect that all of the operations described in this section, except makeEmpty and copying, should take O(log N) time, because in constant time we descend a level in the tree, thus operating on a tree that is now roughly half as large. Indeed, the running time of all the operations (except makeEmpty and copying) is O(d), where d is the depth of the node containing the accessed item (in the case of remove, this may be the replacement node in the two-child case).

We prove in this section that the average depth over all nodes in a tree is O(log N) on the assumption that all insertion sequences are equally likely.

The sum of the depths of all nodes in a tree is known as the internal path length. We will now calculate the average internal path length of a binary search tree, where the average is taken over all possible insertion sequences into binary search trees.

/**
 * Destructor for the tree.
 */
~BinarySearchTree( )
{
    makeEmpty( );
}

/**
 * Internal method to make subtree empty.
 */
void makeEmpty( BinaryNode * & t )
{
    if( t != nullptr )
    {
        makeEmpty( t->left );
        makeEmpty( t->right );
        delete t;
    }
    t = nullptr;
}

Figure: Destructor and recursive makeEmpty member function
/**
 * Copy constructor.
 */
BinarySearchTree( const BinarySearchTree & rhs ) : root{ nullptr }
{
    root = clone( rhs.root );
}

/**
 * Internal method to clone subtree.
 */
BinaryNode * clone( BinaryNode *t ) const
{
    if( t == nullptr )
        return nullptr;
    else
        return new BinaryNode{ t->element, clone( t->left ), clone( t->right ) };
}

Figure: Copy constructor and recursive clone member function

Let D(N) be the internal path length for some tree T of N nodes; D(1) = 0. An N-node tree consists of an i-node left subtree and an (N - i - 1)-node right subtree, plus a root at depth zero, for 0 <= i < N. D(i) is the internal path length of the left subtree with respect to its root. In the main tree, all these nodes are one level deeper. The same holds for the right subtree. Thus, we get the recurrence

    D(N) = D(i) + D(N - i - 1) + N - 1

If all subtree sizes are equally likely, which is true for binary search trees (since the subtree size depends only on the relative rank of the first element inserted into the tree), but not binary trees, then the average value of both D(i) and D(N - i - 1) is (1/N) sum_{j=0}^{N-1} D(j). This yields

    D(N) = (2/N) [ sum_{j=0}^{N-1} D(j) ] + N - 1

This recurrence will be encountered and solved in a later chapter, obtaining an average value of D(N) = O(N log N). Thus, the expected depth of any node is O(log N). As an example, the randomly generated tree shown in the figure has its nodes at the expected depth.

It is tempting to say immediately that this result implies that the average running time of all the operations discussed in the previous section is O(log N), but this is not entirely true. The reason for this is that because of deletions, it is not clear that all binary search trees are equally likely. In particular, the deletion algorithm described above favors making the left subtrees deeper than the right, because we are always replacing a deleted node with a node from the right subtree. The exact effect of this strategy is still unknown, but
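The recurrence can be checked numerically. The sketch below tabulates D(N) = (2/N) sum_{j=0}^{N-1} D(j) + N - 1 with D(0) = D(1) = 0; the first few values work out exactly to D(2) = 1, D(3) = 8/3, and D(4) = 29/6. This is an added verification, not code from the book.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Tabulate the average internal path length D(N) for binary search
// trees, using the recurrence derived in the text.
std::vector<double> averageInternalPathLength( int maxN )
{
    std::vector<double> D( maxN + 1, 0.0 );   // D[0] = D[1] = 0
    double sum = 0.0;                         // running sum D[0] + ... + D[N-1]
    for( int N = 2; N <= maxN; ++N )
    {
        D[N] = 2.0 / N * sum + N - 1;         // average over all subtree splits
        sum += D[N];                          // extend the prefix sum
    }
    return D;
}
```

Dividing D(N) by N gives the expected depth of a node, which grows logarithmically, consistent with the O(N log N) solution claimed in the text.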
Figure: A randomly generated binary search tree

it seems only to be a theoretical novelty. It has been shown that if we alternate insertions and deletions Θ(N²) times, then the trees will have an expected depth of Θ(√N). After a quarter-million random insert/remove pairs, the tree that was somewhat right-heavy in the figure above looks decidedly unbalanced, with a noticeably larger average depth, in the figure below. We could try to eliminate the problem by randomly choosing between the smallest element in the right subtree and the largest in the left when replacing the deleted element. This apparently eliminates the bias and should keep the trees balanced, but nobody has

Figure: Binary search tree after Θ(N²) insert/remove pairs
actually proved this. In any event, this phenomenon appears to be mostly a theoretical novelty, because the effect does not show up at all for small trees, and, stranger still, if fewer insert/remove pairs are used, then the tree seems to gain balance!

The main point of this discussion is that deciding what "average" means is generally extremely difficult and can require assumptions that may or may not be valid. In the absence of deletions, or when lazy deletion is used, we can conclude that the average running times of the operations above are O(log N). Except for strange cases like the one discussed above, this result is very consistent with observed behavior.

If the input comes into a tree presorted, then a series of inserts will take quadratic time and give a very expensive implementation of a linked list, since the tree will consist only of nodes with no left children. One solution to the problem is to insist on an extra structural condition called balance: no node is allowed to get too deep. There are quite a few general algorithms to implement balanced trees. Most are quite a bit more complicated than a standard binary search tree, and all take longer on average for updates. They do, however, provide protection against the embarrassingly simple cases. Below, we will sketch one of the oldest forms of balanced search trees, the AVL tree.

A second method is to forgo the balance condition and allow the tree to be arbitrarily deep, but after every operation, a restructuring rule is applied that tends to make future operations efficient. These types of data structures are generally classified as self-adjusting. In the case of a binary search tree, we can no longer guarantee an O(log N) bound on any single operation but can show that any sequence of M operations takes total time O(M log N) in the worst case. This is generally sufficient protection against a bad worst case. The data structure we will discuss is known as a splay tree; its analysis is fairly intricate and is discussed later.

AVL Trees

An AVL (Adelson-Velskii and Landis) tree is a binary search tree with a balance condition. The balance condition must be easy to maintain, and it ensures that the depth of the tree is O(log N). The simplest idea is to require that the left and right subtrees have the same height. As the figure shows, this idea does not force the tree to be shallow.

Figure: A bad binary tree; requiring balance at the root is not enough
Another balance condition would insist that every node must have left and right subtrees of the same height. If the height of an empty subtree is defined to be -1 (as is usual), then only perfectly balanced trees with 2^k - 1 nodes would satisfy this criterion. Thus, although this guarantees trees of small depth, the balance condition is too rigid to be useful and needs to be relaxed.

An AVL tree is identical to a binary search tree, except that for every node in the tree, the height of the left and right subtrees can differ by at most 1. (The height of an empty tree is defined to be -1.) In the figure, the tree on the left is an AVL tree but the tree on the right is not. Height information is kept for each node (in the node structure). It can be shown that the height of an AVL tree is at most roughly 1.44 log(N + 2), but in practice it is only slightly more than log N. As an example, a smallest AVL tree of some height h is shown in the figure; its two subtrees are minimum-size AVL trees of heights h - 1 and h - 2. This tells us that the minimum number of nodes, S(h), in an AVL tree of height h is given by

    S(h) = S(h - 1) + S(h - 2) + 1,    with S(0) = 1 and S(1) = 2

The function S(h) is closely related to the Fibonacci numbers, from which the bound claimed above on the height of an AVL tree follows.

Thus, all the tree operations can be performed in O(log N) time, except possibly insertion and deletion. When we do an insertion, we need to update all the balancing information for the nodes on the path back to the root, but the reason that insertion is potentially difficult is that inserting a node could violate the AVL tree property. (For instance, an insertion into the AVL tree in the figure could destroy the balance condition at one of the nodes on the access path.) If this is the case, then the property has to be restored before the insertion step is considered over. It turns out that this can always be done with a simple modification to the tree, known as a rotation.

After an insertion, only nodes that are on the path from the insertion point to the root might have their balance altered, because only those nodes have their subtrees altered. As we follow the path up to the root and update the balancing information, we may find a node whose new balance violates the AVL condition. We will show how to rebalance the tree at the first (i.e., deepest) such node, and we will prove that this rebalancing guarantees that the entire tree satisfies the AVL property.

Figure: Two binary search trees; only the left tree is AVL
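The recurrence for minimum-size AVL trees is easy to tabulate, and doing so exposes the Fibonacci connection (each value is one less than a Fibonacci number). This is a small added check, not code from the book.

```cpp
#include <cassert>

// Minimum number of nodes in an AVL tree of height h:
// a root plus minimum-size subtrees of heights h-1 and h-2.
long long minAvlNodes( int h )
{
    if( h == 0 ) return 1;
    if( h == 1 ) return 2;
    return minAvlNodes( h - 1 ) + minAvlNodes( h - 2 ) + 1;
}
```

The values 1, 2, 4, 7, 12, 20, ... grow exponentially in h, which is why the height of an AVL tree with N nodes is O(log N): a tree of height h must contain at least minAvlNodes(h) nodes.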
Figure: Smallest AVL tree of a given height

Let us call the node that must be rebalanced α. Since any node has at most two children, and a height imbalance requires that α's two subtrees' heights differ by two, it is easy to see that a violation might occur in four cases:

1. An insertion into the left subtree of the left child of α.
2. An insertion into the right subtree of the left child of α.
3. An insertion into the left subtree of the right child of α.
4. An insertion into the right subtree of the right child of α.

Cases 1 and 4 are mirror image symmetries with respect to α, as are cases 2 and 3. Consequently, as a matter of theory, there are two basic cases. From a programming perspective, of course, there are still four cases.

The first case, in which the insertion occurs on the "outside" (i.e., left-left or right-right), is fixed by a single rotation of the tree. The second case, in which the insertion occurs on the "inside" (i.e., left-right or right-left), is handled by the slightly more complex double rotation. These are fundamental operations on the tree that we'll see used several times in balanced-tree algorithms. The remainder of this section describes these rotations, proves that they suffice to maintain balance, and gives a casual implementation of the AVL tree. A later chapter describes other balanced-tree methods with an eye toward a more careful implementation.
Single Rotation

The figure shows the single rotation that fixes case 1. The before picture is on the left and the after is on the right. Let us analyze carefully what is going on. Node k2 violates the AVL balance property because its left subtree is two levels deeper than its right subtree (the dashed lines in the middle of the diagram mark the levels). The situation depicted is the only possible case 1 scenario that allows k2 to satisfy the AVL property before an insertion but violate it afterwards. Subtree X has grown an extra level, causing it to be exactly two levels deeper than Z. Y cannot be at the same level as the new X, because then k2 would have been out of balance before the insertion, and Y cannot be at the same level as Z, because then k1 would be the first node on the path toward the root that was in violation of the AVL balancing condition.

To ideally rebalance the tree, we would like to move X up a level and Z down a level. Note that this is actually more than the AVL property would require. To do this, we rearrange nodes into an equivalent tree as shown in the second part of the figure. Here is an abstract scenario: Visualize the tree as being flexible, grab the child node k1, close your eyes, and shake it, letting gravity take hold. The result is that k1 will be the new root. The binary search tree property tells us that in the original tree k2 > k1, so k2 becomes the right child of k1 in the new tree. X and Z remain as the left child of k1 and the right child of k2, respectively. Subtree Y, which holds items that are between k1 and k2 in the original tree, can be placed as k2's left child in the new tree and satisfy all the ordering requirements.

As a result of this work, which requires only a few pointer changes, we have another binary search tree that is an AVL tree. This happens because X moves up one level, Y stays at the same level, and Z moves down one level. k1 and k2 not only satisfy the AVL requirements, but they also have subtrees that are exactly the same height. Furthermore, the new height of the entire subtree is exactly the same as the height of the original subtree prior to the insertion that caused X to grow. Thus no further updating of heights on the path to the root is needed, and consequently no further rotations are needed. The next figure shows that after an insertion into the original AVL tree on the left, one node becomes unbalanced. Thus, we do a single rotation between that node and its left child, obtaining the tree on the right.

As we mentioned earlier, case 4 represents a symmetric case; the figure shows how a single rotation is applied there. Let us work through a rather long example. Suppose we start with an initially empty AVL tree and insert a handful of items in sequential order. The first problem occurs when it is time to insert the third item, because the AVL

Figure: Single rotation to fix case 1
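A single rotation is only a few pointer changes plus the height updates. The sketch below is a casual free-standing version (the AvlNode layout and the key values in the example are assumptions for this illustration); k2 is the unbalanced node and k1 its left child, matching the case 1 picture.

```cpp
#include <algorithm>
#include <cassert>

struct AvlNode
{
    int key;
    AvlNode *left  = nullptr;
    AvlNode *right = nullptr;
    int height = 0;
};

int height( const AvlNode *t )
{
    return t == nullptr ? -1 : t->height;
}

// Case 1: rotate node k2 with its left child k1.
// Subtree Y (k1's right subtree) becomes k2's left subtree.
void rotateWithLeftChild( AvlNode * & k2 )
{
    AvlNode *k1 = k2->left;
    k2->left  = k1->right;                                         // Y moves over
    k1->right = k2;                                                // k2 drops a level
    k2->height = std::max( height( k2->left ), height( k2->right ) ) + 1;
    k1->height = std::max( height( k1->left ), k2->height ) + 1;
    k2 = k1;                                                       // k1 roots the subtree
}
```

After the rotation the former left child is the subtree root, and the heights of both rotated nodes have been recomputed bottom-up, in that order.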
[Figure: AVL property destroyed by insertion of 6½, then fixed by a single rotation]
[Figure: single rotation fixes case 4]

property is violated at the root. We perform a single rotation between the root and its left child to fix the problem. Here are the before and after trees; a dashed line joins the two nodes that are the subject of the rotation.

[before/after trees]

Next we insert 4, which causes no problems, but the insertion of 5 creates a violation at node 3 that is fixed by a single rotation. Besides the local change caused by the rotation, the programmer must remember that the rest of the tree has to be informed of this change. Here, this means that 2's right child must be reset to link to 4 instead of 3. Forgetting to do so is easy and would destroy the tree (4 would be inaccessible).
[before/after trees]

Next we insert 6. This causes a balance problem at the root, since its left subtree is of height 0 and its right subtree would be height 2. Therefore, we perform a single rotation at the root between 2 and 4.

[before/after trees]

The rotation is performed by making 2 a child of 4 and making 4's original left subtree the new right subtree of 2. Every item in this subtree must lie between 2 and 4, so this transformation makes sense. The next item we insert is 7, which causes another rotation.

[before/after trees]

Double Rotation

The algorithm described above has one problem: As the figure shows, it does not work for cases 2 or 3. The problem is that subtree Y is too deep, and a single rotation does not make it any less deep. The double rotation that solves the problem is shown in the next figure.

[Figure: single rotation fails to fix case 2]
[Figure: left-right double rotation to fix case 2]

The fact that subtree Y in the figure has had an item inserted into it guarantees that it is nonempty. Thus, we may assume that it has a root and two subtrees. Consequently, the tree may be viewed as four subtrees connected by three nodes. As the diagram suggests, exactly one of tree B or C is two levels deeper than D (unless all are empty), but we cannot be sure which one. It turns out not to matter; in the figure, both B and C are drawn at 1½ levels below D.

To rebalance, we see that we cannot leave k3 as the root, and a rotation between k3 and k1 was shown in the earlier figure to not work, so the only alternative is to place k2 as the new root. This forces k1 to be k2's left child and k3 to be its right child, and it also completely determines the resulting locations of the four subtrees. It is easy to see that the resulting tree satisfies the AVL tree property, and, as was the case with the single rotation, it restores the height to what it was before the insertion, thus guaranteeing that all rebalancing and height updating is complete. The figure shows that the symmetric case 3 can also be fixed

[Figure: right-left double rotation to fix case 3]
by a double rotation. In both cases, the effect is the same as rotating between X's child and grandchild, and then between X and its new child. We will continue our previous example by inserting 10 through 16 in reverse order, followed by 8 and then 9. Inserting 16 is easy, since it does not destroy the balance property, but inserting 15 causes a height imbalance at node 7. This is case 3, which is solved by a right-left double rotation. In our example, the right-left double rotation will involve 7, 16, and 15. In this case, k1 is the node with item 7, k3 is the node with item 16, and k2 is the node with item 15. Subtrees A, B, C, and D are empty.

[before/after trees]

Next we insert 14, which also requires a double rotation. Here the double rotation that will restore the tree is again a right-left double rotation that will involve 6, 15, and 7. In this case, k1 is the node with item 6, k3 is the node with item 15, and k2 is the node with item 7. Subtree A is the tree rooted at the node with item 5; subtree B is the empty subtree that was originally the left child of the node with item 7; subtree C is the tree rooted at the node with item 14; and finally, subtree D is the tree rooted at the node with item 16.

[before/after trees]

If 13 is now inserted, there is an imbalance at the root. Since 13 is not between 4 and 7, we know that the single rotation will work.
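The worked sequence above can be checked mechanically. Below is a minimal sketch of AVL insertion, for integer keys only; the rotation names follow the text, while the Node type and scaffolding are our own. Inserting 3, 2, 1 and then 4 through 7 should leave 4 at the root of a perfectly balanced tree of height 2.

```cpp
#include <algorithm>
#include <cassert>

// Minimal AVL node; a sketch for checking the worked example,
// not a full tree class.
struct Node
{
    int val;
    Node *left = nullptr;
    Node *right = nullptr;
    int height = 0;
    explicit Node( int v ) : val{ v } { }
};

int height( Node *t ) { return t == nullptr ? -1 : t->height; }

void update( Node *t )
{ t->height = std::max( height( t->left ), height( t->right ) ) + 1; }

void rotateWithLeftChild( Node * & k2 )   // single rotation, case 1
{
    Node *k1 = k2->left;
    k2->left = k1->right;
    k1->right = k2;
    update( k2 );
    update( k1 );
    k2 = k1;
}

void rotateWithRightChild( Node * & k1 )  // single rotation, case 4
{
    Node *k2 = k1->right;
    k1->right = k2->left;
    k2->left = k1;
    update( k1 );
    update( k2 );
    k1 = k2;
}

void doubleWithLeftChild( Node * & k3 )   // case 2: two single rotations
{ rotateWithRightChild( k3->left ); rotateWithLeftChild( k3 ); }

void doubleWithRightChild( Node * & k1 )  // case 3: two single rotations
{ rotateWithLeftChild( k1->right ); rotateWithRightChild( k1 ); }

void balance( Node * & t )
{
    if( t == nullptr )
        return;
    if( height( t->left ) - height( t->right ) > 1 )
    {
        if( height( t->left->left ) >= height( t->left->right ) )
            rotateWithLeftChild( t );
        else
            doubleWithLeftChild( t );
    }
    else if( height( t->right ) - height( t->left ) > 1 )
    {
        if( height( t->right->right ) >= height( t->right->left ) )
            rotateWithRightChild( t );
        else
            doubleWithRightChild( t );
    }
    update( t );
}

void insert( int x, Node * & t )
{
    if( t == nullptr )
    {
        t = new Node( x );
        return;
    }
    if( x < t->val )
        insert( x, t->left );
    else if( t->val < x )
        insert( x, t->right );
    balance( t );
}
```

Tracing the first seven insertions by hand (3, 2, 1 triggers a case-1 single rotation; 5, 6, 7 each trigger further single rotations) reproduces the perfectly balanced tree with 4 at the root.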
[before/after trees]

The insertion of 12 will also require a single rotation.

[before/after trees]

To insert 11, a single rotation needs to be performed, and the same is true for the subsequent insertion of 10. We insert 8 without a rotation, creating an almost perfectly balanced tree.

[before tree]
Finally, we will insert 9 to show the symmetric case of the double rotation. Notice that 9 causes the node containing 10 to become unbalanced. Since 9 is between 10 and 8 (which is 10's child on the path to 9), a double rotation needs to be performed, yielding the following tree.

[after tree]

Let us summarize what happens. The programming details are fairly straightforward except that there are several cases. To insert a new node with item X into an AVL tree T, we recursively insert X into the appropriate subtree of T (let us call this TLR). If the height of TLR does not change, then we are done. Otherwise, if a height imbalance appears in T, we do the appropriate single or double rotation depending on X and the items in T and TLR, update the heights (making the connection from the rest of the tree above), and we are done.

Since one rotation always suffices, a carefully coded nonrecursive version generally turns out to be faster than the recursive version, but on modern compilers the difference is not as significant as in the past. However, nonrecursive versions are quite difficult to code correctly, whereas a casual recursive implementation is easily readable.

Another efficiency issue concerns storage of the height information. Since all that is really required is the difference in height, which is guaranteed to be small, we could get by with two bits (to represent +1, 0, −1) if we really try. Doing so will avoid repetitive calculation of balance factors but results in some loss of clarity. The resulting code is somewhat more complicated than if the height were stored at each node. If a recursive routine is written, then speed is probably not the main consideration. In this case, the slight speed advantage obtained by storing balance factors hardly seems worth the loss of clarity and relative simplicity. Furthermore, since most machines will align this to at least an 8-bit boundary anyway, there is not likely to be any difference in the amount of space used. An 8-bit (signed) char will allow us to store absolute heights of up to 127. Since the tree is balanced, it is inconceivable that this would be insufficient (see the exercises).
    struct AvlNode
    {
        Comparable element;
        AvlNode   *left;
        AvlNode   *right;
        int        height;

        AvlNode( const Comparable & ele, AvlNode *lt, AvlNode *rt, int h = 0 )
          : element{ ele }, left{ lt }, right{ rt }, height{ h } { }

        AvlNode( Comparable && ele, AvlNode *lt, AvlNode *rt, int h = 0 )
          : element{ std::move( ele ) }, left{ lt }, right{ rt }, height{ h } { }
    };

[Figure: node declaration for AVL trees]

    /**
     * Return the height of node t, or -1 if nullptr.
     */
    int height( AvlNode *t ) const
    {
        return t == nullptr ? -1 : t->height;
    }

[Figure: function to compute the height of an AVL node]

With all this, we are ready to write the AVL routines. We show some of the code here; the rest is online. First, we need the AvlNode class; this is given in the first figure. We also need a quick function to return the height of a node. This function is necessary to handle the annoying case of a nullptr pointer, and is shown in the second figure. The basic insertion routine (see the next figure) adds only a single line at the end that invokes a balancing method. The balancing method applies a single or double rotation if needed, updates the height, and returns the resulting tree. For the trees pictured in the rotation figure, rotateWithLeftChild converts the tree on the left to the tree on the right, returning a pointer to the new root; rotateWithRightChild is symmetric. Similarly, the double rotation can be implemented by the code shown in its figure.

Since deletion in a binary search tree is somewhat more complicated than insertion, one can assume that deletion in an AVL tree is also more complicated. In a perfect world, one would hope that the deletion routine for binary search trees could easily be modified by changing the last line to return after calling the balance method, as was done for insertion. This change works: a deletion could cause one side
    /**
     * Internal method to insert into a subtree.
     * x is the item to insert.
     * t is the node that roots the subtree.
     * Set the new root of the subtree.
     */
    void insert( const Comparable & x, AvlNode * & t )
    {
        if( t == nullptr )
            t = new AvlNode{ x, nullptr, nullptr };
        else if( x < t->element )
            insert( x, t->left );
        else if( t->element < x )
            insert( x, t->right );

        balance( t );
    }

    static const int ALLOWED_IMBALANCE = 1;

    // Assume t is balanced or within one of being balanced
    void balance( AvlNode * & t )
    {
        if( t == nullptr )
            return;

        if( height( t->left ) - height( t->right ) > ALLOWED_IMBALANCE )
        {
            if( height( t->left->left ) >= height( t->left->right ) )
                rotateWithLeftChild( t );
            else
                doubleWithLeftChild( t );
        }
        else if( height( t->right ) - height( t->left ) > ALLOWED_IMBALANCE )
        {
            if( height( t->right->right ) >= height( t->right->left ) )
                rotateWithRightChild( t );
            else
                doubleWithRightChild( t );
        }

        t->height = max( height( t->left ), height( t->right ) ) + 1;
    }

[Figure: insertion into an AVL tree]
[Figure: single rotation]

    /**
     * Rotate binary tree node with left child.
     * For AVL trees, this is a single rotation for case 1.
     * Update heights, then set new root.
     */
    void rotateWithLeftChild( AvlNode * & k2 )
    {
        AvlNode *k1 = k2->left;
        k2->left = k1->right;
        k1->right = k2;
        k2->height = max( height( k2->left ), height( k2->right ) ) + 1;
        k1->height = max( height( k1->left ), k2->height ) + 1;
        k2 = k1;
    }

[Figure: routine to perform a single rotation]
[Figure: double rotation]

of the tree to become two levels shallower than the other side. The case-by-case analysis is similar to the imbalances that are caused by insertion, but not exactly the same. For instance, case 1, which would now reflect a deletion from tree Z (rather than an insertion into X), must be augmented with the possibility that tree Y could be as deep as tree X. Even so, it is easy to see that the rotation rebalances this case and the symmetric case 4. Thus the code for balance uses >=
    /**
     * Double rotate binary tree node: first left child
     * with its right child; then node k3 with new left child.
     * For AVL trees, this is a double rotation for case 2.
     * Update heights, then set new root.
     */
    void doubleWithLeftChild( AvlNode * & k3 )
    {
        rotateWithRightChild( k3->left );
        rotateWithLeftChild( k3 );
    }

[Figure: routine to perform a double rotation]

    /**
     * Internal method to remove from a subtree.
     * x is the item to remove.
     * t is the node that roots the subtree.
     * Set the new root of the subtree.
     */
    void remove( const Comparable & x, AvlNode * & t )
    {
        if( t == nullptr )
            return;   // Item not found; do nothing

        if( x < t->element )
            remove( x, t->left );
        else if( t->element < x )
            remove( x, t->right );
        else if( t->left != nullptr && t->right != nullptr ) // Two children
        {
            t->element = findMin( t->right )->element;
            remove( t->element, t->right );
        }
        else
        {
            AvlNode *oldNode = t;
            t = ( t->left != nullptr ) ? t->left : t->right;
            delete oldNode;
        }

        balance( t );
    }

[Figure: deletion in an AVL tree]
trees instead of specifically to ensure that single rotations are done in these cases rather than double rotations we leave verification of the remaining cases as an exercise splay trees we now describe relatively simple data structure known as splay tree that guarantees that any consecutive tree operations starting from an empty tree take at most ( log ntime although this guarantee does not preclude the possibility that any single operation might take (ntimeand thus the bound is not as strong as an (log nworstcase bound per operationthe net effect is the samethere are no bad input sequences generallywhen sequence of operations has total worst-case running time of (mf( ))we say that the amortized running time is ( ( )thusa splay tree has an (log namortized cost per operation over long sequence of operationssome may take moresome less splay trees are based on the fact that the (nworst-case time per operation for binary search trees is not badas long as it occurs relatively infrequently any one accesseven if it takes ( )is still likely to be extremely fast the problem with binary search trees is that it is possibleand not uncommonfor whole sequence of bad accesses to take place the cumulative running time then becomes noticeable search tree data structure with (nworst-case timebut guarantee of at most ( log nfor any consecutive operationsis certainly satisfactorybecause there are no bad sequences if any particular operation is allowed to have an (nworst-case time boundand we still want an (log namortized time boundthen it is clear that whenever node is accessedit must be moved otherwiseonce we find deep nodewe could keep performing accesses on it if the node does not change locationand each access costs ( )then sequence of accesses will cost ( nthe basic idea of the splay tree is that after node is accessedit is pushed to the root by series of avl tree rotations notice that if node is deepthere are many nodes on the path that are also relatively deepand by 
restructuring we can make future accesses cheaper on all these nodes. Thus, if the node is unduly deep, then we want this restructuring to have the side effect of balancing the tree (to some extent). Besides giving a good time bound in theory, this method is likely to have practical utility, because in many applications, when a node is accessed, it is likely to be accessed again in the near future. Studies have shown that this happens much more often than one would expect. Splay trees also do not require the maintenance of height or balance information, thus saving space and simplifying the code to some extent (especially when careful implementations are written).

A Simple Idea (That Does Not Work)

One way of performing the restructuring described above is to perform single rotations, bottom up. This means that we rotate every node on the access path with its parent. As an example, consider what happens after an access (a find) on k1 in the following tree:
The access path is dashed. First, we would perform a single rotation between k1 and its parent, obtaining the following tree. Then, we rotate between k1 and k3, obtaining the next tree. Then two more rotations are performed until we reach the root.
These rotations have the effect of pushing k1 all the way to the root, so that future accesses on k1 are easy (for a while). Unfortunately, it has pushed another node (k3) almost as deep as k1 used to be. An access on that node will then push another node deep, and so on. Although this strategy makes future accesses of k1 cheaper, it has not significantly improved the situation for the other nodes on the (original) access path. It turns out that it is possible to prove that using this strategy, there is a sequence of M operations requiring Ω(M · N) time, so this idea is not quite good enough. The simplest way to show this is to consider the tree formed by inserting keys 1, 2, . . . , N into an initially empty tree (work this example out). This gives a tree consisting of only left children. This is not necessarily bad, though, since the time to build this tree is O(N) total. The bad part is that accessing the node with key 1 takes N units of time, where each node on the access path counts as one unit. After the rotations are complete, an access of the node with key 2 takes N units of time, key 3 takes N − 1 units, and so on. The total for accessing all the keys in order is N + Σᵢ₌₂ᴺ i = Ω(N²). After they are accessed, the tree reverts to its original state, and we can repeat the sequence.

Splaying

The splaying strategy is similar to the rotation idea above, except that we are a little more selective about how rotations are performed. We will still rotate bottom up along the access
[Figure: zig-zag]
[Figure: zig-zig]

path. Let X be a (non-root) node on the access path at which we are rotating. If the parent of X is the root of the tree, we merely rotate X and the root. This is the last rotation along the access path. Otherwise, X has both a parent (P) and a grandparent (G), and there are two cases, plus symmetries, to consider. The first case is the zig-zag case (see the first figure). Here X is a right child and P is a left child (or vice versa). If this is the case, we perform a double rotation, exactly like an AVL double rotation. Otherwise, we have a zig-zig case: X and P are both left children (or, in the symmetric case, both right children). In that case, we transform the tree on the left of the zig-zig figure to the tree on the right.

As an example, consider the tree from the last example, with a contains on k1. The first splay step is at k1 and is clearly a zig-zag, so we perform a standard AVL double rotation using k1, k2, and k3. The resulting tree follows.
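A zig-zig step can be coded as two single rotations applied top-down: first rotate the grandparent with the parent, then rotate again to bring X up. The sketch below (our own toy node type; the subtree labels mirror the zig-zig figure) checks the resulting shape for the both-left-children case.

```cpp
#include <cassert>

// Toy splay-tree node, only for illustrating the zig-zig shape.
struct SplayNode
{
    int val;
    SplayNode *left = nullptr;
    SplayNode *right = nullptr;
    explicit SplayNode( int v ) : val{ v } { }
};

// Rotate a node with its left child (the AVL single rotation).
void rotateWithLeftChild( SplayNode * & k2 )
{
    SplayNode *k1 = k2->left;
    k2->left = k1->right;
    k1->right = k2;
    k2 = k1;
}

// Zig-zig for the case where x and its parent p are both left
// children of the grandparent g: rotate g with p, then rotate
// again to lift x to the top.
void zigZigLeft( SplayNode * & g )
{
    rotateWithLeftChild( g );   // p moves up, g moves down
    rotateWithLeftChild( g );   // x moves up, p moves down
}
```

After zigZigLeft, X is on top with P as its right child and G as P's right child, exactly the transformation drawn in the zig-zig figure.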
The next splay step at k1 is a zig-zig, so we do the zig-zig rotation with k1, k4, and k5, obtaining the final tree. Although it is hard to see from small examples, splaying not only moves the accessed node to the root but also has the effect of roughly halving the depth of most nodes on the access path (some shallow nodes are pushed down at most two levels).

To see the difference that splaying makes over simple rotation, consider again the effect of inserting items 1, 2, 3, . . . , N into an initially empty tree. This takes a total of O(N), as before, and yields the same tree as simple rotations. The figure shows the result of splaying at the node with item 1. The difference is that after an access of the node with item 1, which
takes N units, the access on the node with item 2 will only take about N/2 units instead of N units; there are no nodes quite as deep as before. An access on the node with item 2 will bring nodes to within N/4 of the root, and this is repeated until the depth becomes roughly log N (an example with N = 7 is too small to see the effect well). The following figures show the result of accessing items 1 through 9 in a 32-node tree that originally contains only left children. Thus we do not get the same bad behavior from splay trees that is prevalent in the simple rotation strategy. (Actually, this turns out to be a very good case. A rather complicated proof shows that for this example, the N accesses take a total of O(N) time.)

These figures highlight the fundamental and crucial property of splay trees. When access paths are long, thus leading to longer-than-normal search time, the rotations tend to be good for future operations. When accesses are cheap, the rotations are not as good and can be bad. The extreme case is the initial tree formed by the insertions. All the insertions were constant-time operations leading to a bad initial tree. At that point in time, we had a very bad tree, but we were running ahead of schedule and had the compensation of less total running time. Then a couple of really horrible accesses left a nearly balanced tree, but the cost was that we had to give back some of the time that had been saved. The main theorem, which we will prove in Chapter 11, is that we never fall behind a pace of O(log N) per operation: We are always on schedule, even though there are occasionally bad operations.

[Figure: result of splaying at node 1 a tree of all left children]
[Figures: results of splaying the previous tree at the next accessed nodes]

We can perform deletion by accessing the node to be deleted. This puts the node at the root. If it is deleted, we get two subtrees TL and TR (left and right). If we find the largest element in TL (which is easy), then this element is rotated to the root of TL, and TL will now have a root with no right child. We can finish the deletion by making TR the right child.
[Figures: results of splaying the previous tree at successive accessed nodes]
[Figure: result of splaying the previous tree at the last accessed node]

The analysis of splay trees is difficult, because it must take into account the ever-changing structure of the tree. On the other hand, splay trees are much simpler to program than most balanced search trees, since there are fewer cases to consider and no balance information to maintain. Some empirical evidence suggests that this translates into faster code in practice, although the case for this is far from complete. Finally, we point out that there are several variations of splay trees that can perform even better in practice. One variation is completely coded in Chapter 12.

Tree Traversals (Revisited)

Because of the ordering information in a binary search tree, it is simple to list all the items in sorted order. The recursive function in the figure does the real work; convince yourself that this function works. As we have seen before, this kind of routine, when applied to trees, is known as an inorder traversal (which makes sense, since it lists the items in order). The general strategy of an inorder traversal is to process the left subtree first, then perform processing at the current node, and finally process the right subtree.

The interesting part about this algorithm, aside from its simplicity, is that the total running time is O(N). This is because there is constant work being performed at every node in the tree. Each node is visited once, and the work performed at each node is testing against nullptr, setting up two function calls, and doing an output statement. Since there is constant work per node and N nodes, the running time is O(N).

Sometimes we need to process both subtrees first before we can process a node. For instance, to compute the height of a node, we need to know the height of the subtrees first. The code in the figure computes this. Since it is always a good idea to check the special cases, and crucial when recursion is involved, notice that the routine will declare the height of a leaf to be zero, which is correct. This general order of traversal, which we have also seen before, is known as a postorder traversal. Again, the total running time is O(N), because constant work is performed at each node.
    /**
     * Print the tree contents in sorted order.
     */
    void printTree( ostream & out = cout ) const
    {
        if( isEmpty( ) )
            out << "Empty tree" << endl;
        else
            printTree( root, out );
    }

    /**
     * Internal method to print a subtree rooted at t in sorted order.
     */
    void printTree( BinaryNode *t, ostream & out ) const
    {
        if( t != nullptr )
        {
            printTree( t->left, out );
            out << t->element << endl;
            printTree( t->right, out );
        }
    }

[Figure: routine to print a binary search tree in order]

    /**
     * Internal method to compute the height of a subtree rooted at t.
     */
    int height( BinaryNode *t )
    {
        if( t == nullptr )
            return -1;
        else
            return 1 + max( height( t->left ), height( t->right ) );
    }

[Figure: routine to compute the height of a tree using a postorder traversal]

The third popular traversal scheme that we have seen is preorder traversal. Here, the node is processed before the children. This could be useful, for example, if you wanted to label each node with its depth. The common idea in all of these routines is that you handle the nullptr case first and then the rest. Notice the lack of extraneous variables. These routines pass only the pointer
to the node that roots the subtree, and do not declare or pass any extra variables. The more compact the code, the less likely that a silly bug will turn up. A fourth, less often used, traversal (which we have not seen yet) is level-order traversal. In a level-order traversal, all nodes at depth D are processed before any node at depth D + 1. Level-order traversal differs from the other traversals in that it is not done recursively; a queue is used, instead of the implied stack of recursion.

B-Trees

So far, we have assumed that we can store an entire data structure in the main memory of a computer. Suppose, however, that we have more data than can fit in main memory, and, as a result, must have the data structure reside on disk. When this happens, the rules of the game change, because the Big-Oh model is no longer meaningful.

The problem is that a Big-Oh analysis assumes that all operations are equal. However, this is not true, especially when disk I/O is involved. Modern computers execute billions of instructions per second. That is pretty fast, mainly because the speed depends largely on electrical properties. On the other hand, a disk is mechanical. Its speed depends largely on the time it takes to spin the disk and to move a disk head. Many disks spin at 7,200 rpm. Thus, in 1 min it makes 7,200 revolutions; hence, one revolution occurs in 1/120 of a second, or 8.3 ms. On average, we might expect that we have to spin a disk halfway to find what we are looking for, but this is compensated by the time to move the disk head, so we get an access time of 9 ms. (This is a very charitable estimate; 9- to 11-ms access times are more common.) Consequently, we can do approximately 120 disk accesses per second. This sounds pretty good, until we compare it with the processor speed: in the same second we can execute billions of instructions but only about 120 disk accesses. Of course, everything here is a rough calculation, but the relative speeds are pretty clear: Disk accesses are incredibly expensive. Furthermore, processor speeds are increasing at a much faster rate than disk speeds (it is disk sizes that are increasing quite quickly). So we are willing to do lots of calculations just to save a disk access. In almost all cases, it is the number of disk accesses that will dominate the running time. Thus, if we halve the number of disk accesses, the running time will halve.

Here is how the typical search tree performs on disk. Suppose we want to access the driving records for citizens in the state of Florida. We assume that we have 10,000,000 items, that each key is 32 bytes (representing a name), and that a record is 256 bytes. We assume this does not fit in main memory and that we are 1 of 20 users on a system (so we have 1/20 of the resources). Thus, in 1 sec we can execute many millions of instructions or perform six disk accesses.

The unbalanced binary search tree is a disaster. In the worst case, it has linear depth and thus could require 10,000,000 disk accesses. On average, a successful search would require 1.38 log N disk accesses, and since log 10,000,000 ≈ 24, an average search would require 32 disk accesses, or 5 sec. In a typical randomly constructed tree, we would expect that a few nodes are three times deeper; these would require about 100 disk accesses, or 16 sec. An AVL tree is somewhat better. The worst case of 1.44 log N is unlikely to occur, and the typical case is very close to log N. Thus an AVL tree would use about 25 disk accesses on average, requiring 4 sec.
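As a sanity check, the two per-search averages quoted above can be recomputed directly. The constants (1.38 log N for a random BST, roughly log N for an AVL tree, N = 10,000,000) come from the discussion; the helper names below are our own.

```cpp
#include <cassert>
#include <cmath>

// Estimated disk accesses per successful search, N = number of records.
double avgRandomBst( double n ) { return 1.38 * std::log2( n ); } // random BST
double avgAvl( double n )       { return std::log2( n ); }        // AVL, ~log N
```

With N = 10,000,000, log₂ N ≈ 23.3 (about 24 levels), so a random BST averages roughly 32 accesses and an AVL tree roughly 23-25.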
[Figure: a 5-ary tree of 31 nodes has only three levels]

We want to reduce the number of disk accesses to a very small constant, such as three or four. We are willing to write complicated code to do this, because machine instructions are essentially free, as long as we are not ridiculously unreasonable. It should probably be clear that a binary search tree will not work, since the typical AVL tree is close to optimal height. We cannot go below log N using a binary search tree.

The solution is intuitively simple: If we have more branching, we have less height. Thus, while a perfect binary tree of 31 nodes has five levels, a 5-ary tree of 31 nodes has only three levels, as shown in the figure. An M-ary search tree allows M-way branching. As branching increases, the depth decreases. Whereas a complete binary tree has height that is roughly log₂ N, a complete M-ary tree has height that is roughly log_M N.

We can create an M-ary search tree in much the same way as a binary search tree. In a binary search tree, we need one key to decide which of two branches to take. In an M-ary search tree, we need M − 1 keys to decide which branch to take. To make this scheme efficient in the worst case, we need to ensure that the M-ary search tree is balanced in some way. Otherwise, like a binary search tree, it could degenerate into a linked list. Actually, we want an even more restrictive balancing condition. That is, we do not want an M-ary search tree to degenerate to even a binary search tree, because then we would be stuck with log N accesses.

One way to implement this is to use a B-tree. The basic B-tree is described here. Many variations and improvements are known, and an implementation is somewhat complex because there are quite a few cases. However, it is easy to see that, in principle, a B-tree guarantees only a few disk accesses.

A B-tree of order M is an M-ary tree with the following properties:
1. The data items are stored at leaves.
2. The nonleaf nodes store up to M − 1 keys to guide the searching; key i represents the smallest key in subtree i + 1.
3. The root is either a leaf or has between two and M children.
4. All nonleaf nodes (except the root) have between ⌈M/2⌉ and M children.
5. All leaves are at the same depth and have between ⌈L/2⌉ and L data items, for some L (the determination of L is described shortly).

What is described is popularly known as a B+-tree. Rules 3 and 5 must be relaxed for the first L insertions.
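Given these node-size constraints, choosing M and L for a particular disk block is a small calculation: grow M as long as M − 1 keys plus M branch pointers fit in a block, and fit as many records as possible in a leaf block. The sketch below uses illustrative sizes (8,192-byte blocks, 32-byte keys, 4-byte branches, 256-byte records); chooseOrder and chooseLeafSize are our own helper names.

```cpp
#include <cassert>

// Largest order M such that a nonleaf node fits in one disk block:
// (M - 1) keys plus M branches must not exceed blockBytes.
int chooseOrder( int blockBytes, int keyBytes, int branchBytes )
{
    int m = 2;
    // The candidate m + 1 needs m keys and m + 1 branches.
    while( m * keyBytes + ( m + 1 ) * branchBytes <= blockBytes )
        ++m;
    return m;
}

// Number of data records that fit in one leaf block.
int chooseLeafSize( int blockBytes, int recordBytes )
{
    return blockBytes / recordBytes;
}
```

With the sizes above, this yields M = 228 and L = 32, the values used in the worked example that follows.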
[Figure: B-tree of order 5]

An example of a B-tree of order 5 is shown in the figure. Notice that all nonleaf nodes have between three and five children (and thus between two and four keys); the root could possibly have only two children. Here, we have L = 5. It happens that L and M are the same in this example, but this is not necessary. Since L is 5, each leaf has between three and five data items. Requiring nodes to be half full guarantees that the B-tree does not degenerate into a simple binary tree. Although there are various definitions of B-trees that change this structure, mostly in minor ways, this definition is one of the popular forms.

Each node represents a disk block, so we choose M and L on the basis of the size of the items that are being stored. As an example, suppose one block holds 8,192 bytes. In our Florida example, each key uses 32 bytes. In a B-tree of order M, we would have M − 1 keys, for a total of 32M − 32 bytes, plus M branches. Since each branch is essentially a number of another disk block, we can assume that a branch is 4 bytes. Thus the branches use 4M bytes. The total memory requirement for a nonleaf node is thus 36M − 32. The largest value of M for which this is no more than 8,192 is 228. Thus we would choose M = 228. Since each data record is 256 bytes, we would be able to fit 32 records in a block. Thus we would choose L = 32. We are guaranteed that each leaf has between 16 and 32 data records and that each internal node (except the root) branches in at least 114 ways. Since there are 10,000,000 records, there are, at most, 625,000 leaves. Consequently, in the worst case, leaves would be on level 4. In more concrete terms, the worst-case number of accesses is given by approximately log_{M/2} N, give or take 1. (For example, the root and the next level could be cached in main memory, so that, over the long run, disk accesses would be needed only for level 3 and deeper.)

The remaining issue is how to add and remove items from the B-tree. The ideas involved are sketched next. Note that many of the themes seen before recur.

We begin by examining insertion. Suppose we want to insert 57 into the B-tree in the figure. A search down the tree reveals that it is not already in the tree. We can add it to the leaf as a fifth item. Note that we may have to reorganize all the data in the leaf to do this. However, the cost of doing this is negligible when compared to that of the disk access, which in this case also includes a disk write.

Of course, that was relatively painless, because the leaf was not already full. Suppose we now want to insert 55. The figure shows a problem: The leaf where 55 wants to go is already full. The solution is simple: Since we now have L + 1 items, we split them into two
[Figure: B-tree after insertion of 57 into the tree in the previous figure]

leaves, both guaranteed to have the minimum number of data records needed. We form two leaves with three items each. Two disk accesses are required to write these leaves, and a third disk access is required to update the parent. Note that in the parent, both keys and branches change, but they do so in a controlled way that is easily calculated. The resulting B-tree is shown in the figure. Although splitting nodes is time-consuming because it requires at least two additional disk writes, it is a relatively rare occurrence. If L is 32, for example, then when a node is split, two leaves with 16 and 17 items, respectively, are created. For the leaf with 17 items, we can perform 15 more insertions without another split. Put another way, for every split, there are roughly L/2 nonsplits.

The node splitting in the previous example worked because the parent did not have its full complement of children. But what would happen if it did? Suppose, for example, that we insert 40 into the B-tree in the figure. We must split the leaf containing the keys 35 through 39, and now 40, into two leaves. But doing this would give the parent six children, and it is allowed only five. The solution is to split the parent. The result of this is shown in the next figure. When the parent is split, we must update the values of the keys and also the parent's parent, thus incurring an additional two disk writes (so this insertion costs five disk writes). However, once again, the keys change in a very controlled manner, although the code is certainly not simple because of a host of cases.

[Figure: insertion of 55 into the B-tree in the previous figure causes a split into two leaves]
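The leaf split itself can be sketched as a simple in-memory operation (a toy model that ignores disk writes and the parent update; splitLeaf is our own name): an overfull leaf of L + 1 sorted items is divided into two leaves, both of which meet the ⌈L/2⌉ minimum.

```cpp
#include <cassert>
#include <vector>

// Split an overfull leaf into two; the caller re-links the parent.
// Returns the new right sibling; the argument keeps the left half.
std::vector<int> splitLeaf( std::vector<int> & leaf )
{
    std::vector<int> rightHalf( leaf.begin() + leaf.size() / 2, leaf.end() );
    leaf.resize( leaf.size() / 2 );
    return rightHalf;
}
```

For L = 5 this turns a six-item leaf into two three-item leaves, matching the example in the text.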
[Figure: insertion of 40 into the B-tree in the previous figure causes a split into two leaves and then a split of the parent node]

When a nonleaf node is split, as is the case here, its parent gains a child. What if the parent already has reached its limit of M children? Then we continue splitting nodes up the tree until either we find a parent that does not need to be split or we reach the root. If we split the root, then we have two roots. Obviously, this is unacceptable, but we can create a new root that has the split roots as its two children. This is why the root is granted the special two-child minimum exemption. It also is the only way that a B-tree gains height. Needless to say, splitting all the way up to the root is an exceptionally rare event. This is because a tree with four levels indicates that the root has been split three times throughout the entire sequence of insertions (assuming no deletions have occurred). In fact, the splitting of any nonleaf node is also quite rare.

There are other ways to handle the overflowing of children. One technique is to put a child up for adoption should a neighbor have room. To insert 29 into the B-tree in the figure, for example, we could make room by moving 32 to the next leaf. This technique requires a modification of the parent, because the keys are affected. However, it tends to keep nodes fuller and saves space in the long run.

[Figure: B-tree after the deletion of 99 from the B-tree in the previous figure]

We can perform deletion by finding the item that needs to be removed and then removing it. The problem is that if the leaf it was in had the minimum number of data items, then it is now below the minimum. We can rectify this situation by adopting a neighboring item, if the neighbor is not itself at its minimum. If it is, then we can combine with the neighbor to form a full leaf. Unfortunately, this means that the parent has lost a child. If this causes the parent to fall below its minimum, then it follows the same strategy. This process could
percolate all the way up to the root. The root cannot have just one child (and even if this were allowed, it would be silly). If a root is left with one child as a result of the adoption process, then we remove the root and make its child the new root of the tree. This is the only way for a B-tree to lose height. For example, suppose we want to remove a key from the B-tree in the figure. Since its leaf has only two items and its neighbor is already at its minimum of three, we combine the items into a new leaf of five items. As a result, the parent has only two children. However, it can adopt from a neighbor, because the neighbor has four children. As a result, both have three children. The result is shown in the figure.

Sets and Maps in the Standard Library

The STL containers discussed earlier, namely vector and list, are inefficient for searching. Consequently, the STL provides two additional containers, set and map, that guarantee logarithmic cost for basic operations such as insertion, deletion, and searching.

Sets

The set is an ordered container that does not allow duplicates. Many of the idioms used to access items in vector and list also work for a set. Specifically, nested in the set are iterator and const_iterator types that allow traversal of the set, and several methods from vector and list are identically named in set, including begin, end, size, and empty. The print function template described earlier will work if passed a set.

The unique operations required by the set are the abilities to insert, remove, and perform a basic search (efficiently). The insert routine is aptly named insert. However, because a set does not allow duplicates, it is possible for the insert to fail. As a result, we want the return type to be able to indicate this with a Boolean variable. However, insert has a more complicated return type than bool. This is because insert also returns an iterator that represents where x is when insert returns. This iterator represents either the newly inserted item or the existing item that caused the insert to fail, and it is useful, because knowing the position of the
item can make removing it more efficient by avoiding the search and getting directly to the node containing the item. The STL defines a class template called pair that is little more than a struct with members first and second to access the two items in the pair. There are two different insert routines:

pair<iterator,bool> insert( const Object & x );
pair<iterator,bool> insert( iterator hint, const Object & x );

The one-parameter insert behaves as described above. The two-parameter insert allows the specification of a hint, which represents the position where x should go. If the hint is accurate, the insertion is fast, often O(1). If not, the insertion is done using the normal insertion algorithm and performs comparably with the one-parameter insert. For instance, the following code might be faster using the two-parameter insert rather than the one-parameter insert:
set<int> s;
for( int i = 0; i < 1000000; ++i )
    s.insert( s.end( ), i );

There are several versions of erase:

int erase( const Object & x );
iterator erase( iterator itr );
iterator erase( iterator start, iterator end );

The first one-parameter erase removes x (if found) and returns the number of items actually removed, which is obviously either 0 or 1. The second one-parameter erase behaves the same as in vector and list: It removes the object at the position given by the iterator, returns an iterator representing the element that followed itr immediately prior to the call to erase, and invalidates itr, which becomes stale. The two-parameter erase behaves the same as in a vector or list, removing all the items starting at start, up to but not including the item at end.

For searching, rather than a contains routine that returns a Boolean variable, the set provides a find routine that returns an iterator representing the location of the item (or the endmarker if the search fails). This provides considerably more information, at no cost in running time. The signature of find is

iterator find( const Object & x ) const;

By default, ordering uses the less<Object> function object, which itself is implemented by invoking operator< for the Object. An alternative ordering can be specified by instantiating the set template with a function object type. For instance, we can create a set that stores string objects, ignoring case distinctions, by using the CaseInsensitiveCompare function object coded earlier. In the following code, the set s has size 1:

set<string,CaseInsensitiveCompare> s;
s.insert( "Hello" );
s.insert( "HeLLo" );
cout << "The size is: " << s.size( ) << endl;

Maps

A map is used to store a collection of ordered entries that consists of keys and their values. Keys must be unique, but several keys can map to the same values; thus values need not be unique. The keys in the map are maintained in logically sorted order. The map behaves like a set instantiated with a pair, whose comparison function refers only to the key. Thus it supports begin, end, size, and empty, but the underlying iterator is a key-value pair. In other words, for an iterator itr, *itr is of type pair<KeyType,ValueType>. The map also supports
insert, find, and erase. For insert, one must provide a pair<KeyType,ValueType> object. Although find requires only a key, the iterator it returns references a pair. Like the set, an optional template parameter can be used to specify a comparison function that differs from less<KeyType>.
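The key-value nature of the map iterator can be seen in a small self-contained sketch. This is a hypothetical example (the function name and parameters are inventions here, not STL API): iteration visits pairs in sorted key order, with the key read through itr->first and the value through itr->second.

```cpp
#include <map>
#include <string>

// Hypothetical sketch: iterate a map, whose *itr is a
// pair<const KeyType, ValueType>. Records the smallest key (the map is
// ordered, so begin() holds it) and sums the values.
int sumOfValuesInKeyOrder(const std::map<std::string, int>& m,
                          std::string& firstKey)
{
    int total = 0;
    for (auto itr = m.begin(); itr != m.end(); ++itr)
    {
        if (itr == m.begin())
            firstKey = itr->first;   // smallest key, by the ordering invariant
        total += itr->second;        // value associated with the key
    }
    return total;
}
```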
Using only these operations is often not worthwhile because the syntactic baggage can be expensive. Fortunately, the map has an important extra operation that yields simple syntax. The array-indexing operator is overloaded for maps as follows:

ValueType & operator[] ( const KeyType & key );

The semantics of operator[] are as follows. If key is present in the map, a reference to the corresponding value is returned. If key is not present in the map, it is inserted with a default value into the map, and then a reference to the inserted default value is returned. The default value is obtained by applying a zero-parameter constructor, or is zero for the primitive types. These semantics do not allow an accessor version of operator[], so operator[] cannot be used on a map that is constant. For instance, if a map is passed by constant reference, inside the routine, operator[] is unusable.

The code snippet below illustrates two techniques to access items in a map. First, observe that in the assignment to salaries[ "Pat" ], the left-hand side invokes operator[], thus inserting "Pat" and a default double into the map and returning a reference to that double; the assignment then changes that double inside the map. The next line outputs the stored value. Unfortunately, the line that prints salaries[ "Jan" ] inserts "Jan" and a default salary of 0.0 into the map and then prints it. This may or may not be the proper thing to do, depending on the application. If it is important to distinguish between items that are in the map and those not in the map, or if it is important not to insert into the map (because it is immutable), then the alternate approach shown in the rest of the snippet can be used. There we see a call to find. If the key is not found, the iterator is the endmarker and can be tested. If the key is found, we can access the second item in the pair referenced by the iterator, which is the value associated with the key. We could also assign to itr->second if, instead of a const_iterator, itr is an iterator.

Implementation of set and map

C++ requires that set and map support the basic insert, erase, and find operations in logarithmic worst-case time.
Consequently, the underlying implementation is a balanced

map<string,double> salaries;

salaries[ "Pat" ] = 75000.00;        // value shown here is illustrative
cout << salaries[ "Pat" ] << endl;
cout << salaries[ "Jan" ] << endl;

map<string,double>::const_iterator itr;
itr = salaries.find( "Chris" );
if( itr == salaries.end( ) )
    cout << "Not an employee of this company!" << endl;
else
    cout << itr->second << endl;

Figure: Accessing values in the map.
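The default-value semantics of operator[] enable a classic idiom: frequency counting in a single expression per item. A minimal sketch, with the function name wordFrequencies invented here:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the operator[] idiom: the first time a word is
// seen, counts[w] inserts it with the default int value 0 and returns a
// reference, which is then incremented; on later sightings the same
// expression finds the existing entry.
std::map<std::string, int> wordFrequencies(const std::vector<std::string>& words)
{
    std::map<std::string, int> counts;
    for (const auto& w : words)
        ++counts[w];    // inserts 0 on first sight, then increments
    return counts;
}
```

Note that this only works because operator[] inserts missing keys; on a const map one would have to fall back on find, as in the figure above.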
binary search tree. Typically, an AVL tree is not used; instead, top-down red-black trees, which are discussed later, are often used.

An important issue in implementing set and map is providing support for the iterator classes. Of course, internally, the iterator maintains a pointer to the "current" node in the iteration. The hard part is efficiently advancing to the next node. There are several possible solutions, some of which are listed here:

1. When the iterator is constructed, have each iterator store as its data an array containing the set items. This doesn't work: It makes it impossible to efficiently implement any of the routines that return an iterator after modifying the set, such as some of the versions of erase and insert.

2. Have the iterator maintain a stack storing nodes on the path to the current node. With this information, one can deduce the next node in the iteration, which is either the node in the current node's right subtree that contains the minimum item or the nearest ancestor that contains the current node in its left subtree. This makes the iterator somewhat large and makes the iterator code clumsy.

3. Have each node in the search tree store its parent in addition to the children. The iterator is not as large, but there is now extra memory required in each node, and the code to iterate is still clumsy.

4. Have each node maintain extra links: one to the next smaller and one to the next larger node. This takes space, but the iteration is very simple to do, and it is easy to maintain these links.

5. Maintain the extra links only for nodes that have nullptr left or right links, by using extra Boolean variables to allow the routines to tell if a left link is being used as a standard binary search tree left link or a link to the next smaller node, and similarly for the right link (see the exercises). This idea is called a threaded tree and is used in many of the STL implementations.

An Example That Uses Several Maps

Many words are similar to other words. For instance, by changing the first letter, the word wine can become
dine, fine, line, mine, nine, pine, or vine. By changing the third letter, wine can become wide, wife, wipe, or wire, among others. By changing the fourth letter, wine can become wind, wing, wink, or wins, among others. This gives 15 different words that can be obtained by changing only one letter in wine, and in fact there are more, some quite obscure. We would like to write a program to find all words that can be changed into at least some given number of other words by a single one-character substitution. We assume that we have a dictionary consisting of a large number of different words of varying lengths, with many words of each length from six to eleven letters. (In reality, the most changeable words are three-, four-, and five-letter words, but the longer words are the time-consuming ones to check.)

The most straightforward strategy is to use a map in which the keys are words and the values are vectors containing the words that can be changed from the key with a
void printHighChangeables( const map<string,vector<string>> & adjacentWords,
                           int minWords )
{
    for( auto & entry : adjacentWords )
    {
        const vector<string> & words = entry.second;

        if( words.size( ) >= minWords )
        {
            cout << entry.first << " (" << words.size( ) << "):";
            for( auto & str : words )
                cout << " " << str;
            cout << endl;
        }
    }
}

Figure: Given a map containing words as keys and a vector of words that differ in only one character as values, output words that have minWords or more words obtainable by a one-character substitution.

one-character substitution. The routine in the figure shows how the map that is eventually produced (we have yet to write code for that part) can be used to print the required answers. The code uses a range for loop to step through the map and views entries that are pairs consisting of a word and a vector of words. The constant references are used to replace complicated expressions and avoid making unneeded copies.

The main issue is how to construct the map from an array that contains the words in the dictionary. The routine in the next figure is a straightforward function to test if two words are identical except for a one-character substitution. We can use the routine to provide the simplest algorithm for the map construction, which is a brute-force test of all pairs of words. If we find a pair of words that differ in only one character, we can update the map. The idiom we are using is that adjWords[str] represents the vector of words that are identical to str, except for one character. If we have previously seen str, then it is in the map, and we need only add the new word to the vector in the map, which we do by calling push_back. If we have never seen str before, then the act of using operator[] places it in the map, with a vector of size 0, and returns this vector, so the push_back updates the vector to be size 1. All in all, a super-slick idiom for maintaining a map in which the value is a collection.

The problem with this algorithm is that it is slow, taking well over a minute on our dictionary. An obvious improvement is to avoid comparing words of different lengths. We can do this by
grouping words by their length and then running the previous algorithm on each of the separate groups. To do this, we can use a second map! Here the key is an integer representing a word length, and the value is a collection of all the words of that length. We can use a vector to
// Returns true if word1 and word2 are the same length
// and differ in only one character.
bool oneCharOff( const string & word1, const string & word2 )
{
    if( word1.length( ) != word2.length( ) )
        return false;

    int diffs = 0;

    for( int i = 0; i < word1.length( ); ++i )
        if( word1[ i ] != word2[ i ] )
            if( ++diffs > 1 )
                return false;

    return diffs == 1;
}

Figure: Routine to check if two words differ in only one character.

// Computes a map in which the keys are words and values are vectors of words
// that differ in only one character from the corresponding key.
// Uses a quadratic algorithm.
map<string,vector<string>> computeAdjacentWords( const vector<string> & words )
{
    map<string,vector<string>> adjWords;

    for( int i = 0; i < words.size( ); ++i )
        for( int j = i + 1; j < words.size( ); ++j )
            if( oneCharOff( words[ i ], words[ j ] ) )
            {
                adjWords[ words[ i ] ].push_back( words[ j ] );
                adjWords[ words[ j ] ].push_back( words[ i ] );
            }

    return adjWords;
}

Figure: Function to compute a map containing words as keys and a vector of words that differ in only one character as values. This version uses a quadratic algorithm and takes minutes on a large dictionary.

store each collection, and the same idiom applies. The code is shown in the next figure: The second map is declared and populated first, and then an extra loop is used to iterate over each group of words. Compared to the first algorithm, the second algorithm is only marginally more difficult to code and runs about six times as fast.
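The grouping step is worth seeing on its own, since it is the same operator[] idiom with an int key. This is a hypothetical, self-contained sketch (the function name groupByLength is invented here), not the book's figure:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the grouping step: the key is a word length, and
// operator[] creates the empty vector the first time a length is seen, so
// every word lands in its group in one line.
std::map<int, std::vector<std::string>>
groupByLength(const std::vector<std::string>& words)
{
    std::map<int, std::vector<std::string>> wordsByLength;
    for (const auto& w : words)
        wordsByLength[static_cast<int>(w.length())].push_back(w);
    return wordsByLength;
}
```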
// Computes a map in which the keys are words and values are vectors of words
// that differ in only one character from the corresponding key.
// Uses a quadratic algorithm, but speeds things up a little by
// maintaining an additional map that groups words by their length.
map<string,vector<string>> computeAdjacentWords( const vector<string> & words )
{
    map<string,vector<string>> adjWords;
    map<int,vector<string>> wordsByLength;

    // Group the words by their length
    for( auto & thisWord : words )
        wordsByLength[ thisWord.length( ) ].push_back( thisWord );

    // Work on each group separately
    for( auto & entry : wordsByLength )
    {
        const vector<string> & groupsWords = entry.second;

        for( int i = 0; i < groupsWords.size( ); ++i )
            for( int j = i + 1; j < groupsWords.size( ); ++j )
                if( oneCharOff( groupsWords[ i ], groupsWords[ j ] ) )
                {
                    adjWords[ groupsWords[ i ] ].push_back( groupsWords[ j ] );
                    adjWords[ groupsWords[ j ] ].push_back( groupsWords[ i ] );
                }
    }

    return adjWords;
}

Figure: Function to compute a map containing words as keys and a vector of words that differ in only one character as values. It splits words into groups by word length.

Our third algorithm is more complex and uses additional maps! As before, we group the words by word length and then work on each group separately. To see how this algorithm works, suppose we are working on words of length 4. First we want to find word pairs, such as wine and nine, that are identical except for the first letter. One way to do this is as follows: For each word of length 4, remove the first character, leaving a three-character word representative. Form a map in which the key is the representative, and the value is a vector of all words that have that representative. For instance, in considering the first character of the four-letter word group, representative "ine" corresponds to "dine", "fine", "wine", "nine", "mine", "vine", "pine", "line". Representative "oot" corresponds to "boot", "foot", "hoot", "loot", "soot", "zoot". Each individual vector that is a value in this latest map forms a clique of words in which any word can be changed to any other word by a one-character substitution, so after this latest map is constructed, it is easy to traverse it and add entries to the original map that is being computed.
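The representative idea for a single character position can be sketched in isolation. This is a hypothetical, self-contained fragment, not the book's figure; the name cliquesForPosition is invented here. Words that agree everywhere except at position p collapse to the same key, so each map value is a clique of mutually adjacent words.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: deleting position p from every word of a group maps
// words that differ only at p to the same representative, so each map value
// is a clique of mutually adjacent words.
std::map<std::string, std::vector<std::string>>
cliquesForPosition(const std::vector<std::string>& group, int p)
{
    std::map<std::string, std::vector<std::string>> repToWord;
    for (const auto& str : group)
    {
        std::string rep = str;
        rep.erase(p, 1);               // drop the character at position p
        repToWord[rep].push_back(str); // same representative => same clique
    }
    return repToWord;
}
```

Running this once per character position, and keeping only the values with two or more words, yields exactly the adjacency information the earlier quadratic scans computed.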
We would then proceed to the second character of the four-letter word group, with a new map, and then the third character, and finally the fourth character. The general outline is:

for each group g, containing words of length len
    for each position p (ranging from 0 to len-1)
    {
        make an empty map<string,vector<string>> repsToWords
        for each word w
        {
            obtain w's representative by removing position p
            update repsToWords
        }
        use cliques in repsToWords to update adjWords map
    }

The next figure contains an implementation of this algorithm. The running time improves to two seconds. It is interesting to note that although the use of the additional maps makes the algorithm faster, and the syntax is relatively clean, the code makes no use of the fact that the keys of the map are maintained in sorted order.

// Computes a map in which the keys are words and values are vectors of words
// that differ in only one character from the corresponding key.
// Uses an efficient algorithm that is O(N log N) with a map.
map<string,vector<string>> computeAdjacentWords( const vector<string> & words )
{
    map<string,vector<string>> adjWords;
    map<int,vector<string>> wordsByLength;

    // Group the words by their length
    for( auto & str : words )
        wordsByLength[ str.length( ) ].push_back( str );

    // Work on each group separately
    for( auto & entry : wordsByLength )
    {
        const vector<string> & groupsWords = entry.second;
        int groupNum = entry.first;

        // Work on each position in each group
        for( int i = 0; i < groupNum; ++i )
        {
            // Remove one character in specified position, computing
            // representative. Words with same representatives are
            // adjacent, so populate a map ...
            map<string,vector<string>> repToWord;

            for( auto & str : groupsWords )
            {
                string rep = str;
                rep.erase( i, 1 );
                repToWord[ rep ].push_back( str );
            }

            // ... and then look for map values with more than one string
            for( auto & entry : repToWord )
            {
                const vector<string> & clique = entry.second;
                if( clique.size( ) >= 2 )
                    for( int p = 0; p < clique.size( ); ++p )
                        for( int q = p + 1; q < clique.size( ); ++q )
                        {
                            adjWords[ clique[ p ] ].push_back( clique[ q ] );
                            adjWords[ clique[ q ] ].push_back( clique[ p ] );
                        }
            }
        }
    }

    return adjWords;
}

Figure: Function to compute a map containing words as keys and a vector of words that differ in only one character as values. This version uses an additional map per character position and is the fastest of the three.

As such, it is possible that a data structure that supports the map operations but does not guarantee sorted order can perform better, since it is being asked to do less. A later chapter explores this possibility and discusses the ideas behind the alternative map implementation that C++11 adds to the standard library, known as an unordered_map. An unordered map reduces the running time of the implementation still further.

Summary

We have seen uses of trees in operating systems, compiler design, and searching. Expression trees are a small example of a more general structure known as a parse tree, which is a central data structure in compiler design. Parse trees are not binary but are relatively simple extensions of expression trees (although the algorithms to build them are not quite so simple).
Search trees are of great importance in algorithm design. They support almost all the useful operations, and the logarithmic average cost is very small. Nonrecursive implementations of search trees are somewhat faster, but the recursive versions are sleeker, more elegant, and easier to understand and debug. The problem with search trees is that their performance depends heavily on the input being random. If this is not the case, the running time increases significantly, to the point where search trees become expensive linked lists.

We saw several ways to deal with this problem. AVL trees work by insisting that all nodes' left and right subtrees differ in heights by at most one. This ensures that the tree cannot get too deep. The operations that do not change the tree, as insertion does, can all use the standard binary search tree code. Operations that change the tree must restore the tree. This can be somewhat complicated, especially in the case of deletion. We showed how to restore the tree after insertions in O(log N) time.

We also examined the splay tree. Nodes in splay trees can get arbitrarily deep, but after every access the tree is adjusted in a somewhat mysterious manner. The net effect is that any sequence of M operations takes O(M log N) time, which is the same as a balanced tree would take.

B-trees are balanced M-way (as opposed to 2-way or binary) trees, which are well suited for disks; a special case is the 2-3 tree (M = 3), which is another way to implement balanced search trees.

In practice, the running time of all the balanced-tree schemes, while slightly faster for searching, is worse (by a constant factor) for insertions and deletions than the simple binary search tree, but this is generally acceptable in view of the protection being given against easily obtained worst-case input. A later chapter discusses some additional search tree data structures and provides detailed implementations.

A final note: By inserting elements into a search tree and then performing an inorder traversal, we obtain the elements in sorted order. This gives an O(N log N) algorithm to sort, which is a worst-case bound if any sophisticated search tree is used. We shall see better ways later, but none that have a lower time bound.

Exercises

For the tree in the figure below:
  a. Which node is the root?
  b. Which nodes are leaves?

For each node in the tree of the figure:
  a. Name the parent node.
  b. List the children.
  c. List the siblings.
  d. Compute the depth.
  e. Compute the height.

What is the depth of the tree in the figure?

Show that in a binary tree of N nodes, there are N + 1 nullptr links representing children.
Figure: Tree for the first group of exercises.

Show that the maximum number of nodes in a binary tree of height h is 2^(h+1) - 1.

A full node is a node with two children. Prove that the number of full nodes plus one is equal to the number of leaves in a nonempty binary tree.

Suppose a binary tree has leaves l1, l2, ..., lM at depths d1, d2, ..., dM, respectively. Prove that the sum of 2^(-di) for i = 1 to M is at most 1, and determine when the equality is true.

Give the prefix, infix, and postfix expressions corresponding to the tree in the figure.

a. Show the result of inserting a sequence of keys into an initially empty binary search tree.
b. Show the result of deleting the root.

Figure: Tree for the expression exercise.
Let f(N) be the average number of full nodes in an N-node binary search tree.
  a. Determine the values of f(0) and f(1).
  b. For N > 1, derive a recurrence for f(N) in terms of f(0), ..., f(N - 1) by conditioning on the rank of the root.
  c. Show (by induction) that your closed form is a solution to the equation in part (b), with the initial conditions in part (a).
  d. Use the results to determine the average number of leaves in an N-node binary search tree.

Write an implementation of the set class, with associated iterators, using a binary search tree. Add to each node a link to the parent node.

Write an implementation of the map class by storing a data member of type set.

Write an implementation of the set class, with associated iterators, using a binary search tree. Add to each node a link to the next smallest and next largest node. To make your code simpler, add a header and tail node which are not part of the binary search tree, but help make the linked list part of the code simpler.

Suppose you want to perform an experiment to verify the problems that can be caused by random insert/remove pairs. Here is a strategy that is not perfectly random, but close enough. You build a tree with N elements by inserting N elements chosen at random from the range 1 to M = αN. You then perform pairs of insertions followed by deletions. Assume the existence of a routine, randomInteger(a, b), which returns a uniform random integer between a and b inclusive.
  a. Explain how to generate a random integer between 1 and M that is not already in the tree (so a random insertion can be performed). In terms of N and α, what is the running time of this operation?
  b. Explain how to generate a random integer between 1 and M that is already in the tree (so a random deletion can be performed). What is the running time of this operation?
  c. What is a good choice of α? Why?

Write a program to evaluate empirically the following strategies for removing nodes with two children:
  a. Replace with the largest node, X, in TL and recursively remove X.
  b. Alternately replace with the largest node in TL and the smallest node in TR, and recursively remove the appropriate node.
  c. Replace with either the largest node in TL or the smallest node in TR (recursively removing the appropriate node), making the choice randomly.
Which strategy seems to give the most balance? Which takes the least CPU time to process the entire sequence?

Redo the binary search tree class to implement lazy deletion. Note carefully that this affects all of the routines. Especially challenging are findMin and findMax, which must now be done recursively.
Prove that the depth of a random binary search tree (depth of the deepest node) is O(log N), on average.

Give a precise expression for the minimum number of nodes in an AVL tree of height h, and evaluate your expression for a sample height.

Show the result of inserting a sequence of keys into an initially empty AVL tree.

Keys 1, 2, ..., 2^k - 1 are inserted in order into an initially empty AVL tree. Prove that the resulting tree is perfectly balanced.

Write the remaining procedures to implement AVL single and double rotations.

Design a linear-time algorithm that verifies that the height information in an AVL tree is correctly maintained and that the balance property is in order.

Write a nonrecursive function to insert into an AVL tree.

Show that the deletion algorithm given earlier is correct.

a. How many bits are required per node to store the height of a node in an N-node AVL tree?
b. What is the smallest AVL tree that overflows a fixed-width height counter?

Write the functions to perform the double rotation without the inefficiency of doing two single rotations.

Show the result of accessing the keys in order in the splay tree in the figure.

Show the result of deleting the element with the given key in the resulting splay tree for the previous exercise.

Show that if all nodes in a splay tree are accessed in sequential order, the resulting tree consists of a chain of left children.

Figure: Tree for the splay tree exercise.
Show that if all nodes in a splay tree are accessed in sequential order, then the total access time is O(N), regardless of the initial tree.

Write a program to perform random operations on splay trees. Count the total number of rotations performed over the sequence. How does the running time compare to AVL trees and unbalanced binary search trees?

Write efficient functions that take only a pointer to the root of a binary tree, T, and compute:
  a. The number of nodes in T.
  b. The number of leaves in T.
  c. The number of full nodes in T.
What is the running time of your routines?

Design a recursive linear-time algorithm that tests whether a binary tree satisfies the search tree order property at every node.

Write a recursive function that takes a pointer to the root node of a tree T and returns a pointer to the root node of the tree that results from removing all leaves from T.

Write a function to generate an N-node random binary search tree with distinct keys 1 through N. What is the running time of your routine?

Write a function to generate the AVL tree of height h with fewest nodes. What is the running time of your function?

Write a function to generate a perfectly balanced binary search tree of height h with keys 1 through 2^(h+1) - 1. What is the running time of your function?

Write a function that takes as input a binary search tree, T, and two keys, k1 and k2, which are ordered so that k1 <= k2, and prints all elements X in the tree such that k1 <= key(X) <= k2. Do not assume any information about the type of keys except that they can be ordered (consistently). Your program should run in O(K + log N) average time, where K is the number of keys printed. Bound the running time of your algorithm.

The larger binary trees in this chapter were generated automatically by a program. This was done by assigning an (x, y) coordinate to each tree node, drawing a circle around each coordinate (this is hard to see in some pictures), and connecting each node to its parent. Assume you have a binary search tree stored in memory (perhaps generated by one of the routines above) and that each node has two extra fields to store the coordinates.
  a. The x coordinate can be computed by assigning the inorder traversal number. Write a routine to do this for each node in the tree.
  b. The y coordinate can be computed by using the negative of the depth of the node. Write a routine to do this for each node in the tree.
  c. In terms of some imaginary unit, what will the dimensions of the picture be? How can you adjust the units so that the tree is always roughly two-thirds as high as it is wide?
  d. Prove that using this system no lines cross, and that for any node, X, all elements in X's left subtree appear to the left of X and all elements in X's right subtree appear to the right of X.
Write a general-purpose tree-drawing program that will convert a tree into the following graph-assembler instructions:
  a. circle(x, y)
  b. drawLine(i, j)
The first instruction draws a circle at (x, y), and the second instruction connects the ith circle to the jth circle (circles are numbered in the order drawn). You should either make this a program and define some sort of input language or make this a function that can be called from any program. What is the running time of your routine?

Write a routine to list out the nodes of a binary tree in level-order: List the root, then nodes at depth 1, followed by nodes at depth 2, and so on. You must do this in linear time. Prove your time bound.

a. Write a routine to perform insertion into a B-tree.
b. Write a routine to perform deletion from a B-tree. When an item is deleted, is it necessary to update information in the internal nodes?
c. Modify your insertion routine so that if an attempt is made to add into a node that already has M entries, a search is performed for a sibling with fewer than M children before the node is split.

A B*-tree of order M is a B-tree in which each interior node has between 2M/3 and M children. Describe a method to perform insertion into a B*-tree.

Show how the tree in the figure is represented using a child/sibling link implementation.

Write a procedure to traverse a tree stored with child/sibling links.

Two binary trees are similar if they are both empty or both nonempty and have similar left and right subtrees. Write a function to decide whether two binary trees are similar. What is the running time of your function?

Two trees, T1 and T2, are isomorphic if T1 can be transformed into T2 by swapping left and right children of (some of the) nodes in T1. For instance, the two trees in the figure are isomorphic because they are the same if the children of A, B, and G, but not the other nodes, are swapped.
  a. Give a polynomial time algorithm to decide if two trees are isomorphic.

Figure: Tree for the child/sibling exercise.
Figure: Two isomorphic trees.

  b. What is the running time of your program? (There is a linear solution.)

a. Show that via AVL single rotations, any binary search tree T1 can be transformed into another search tree T2 (with the same items).
b. Give an algorithm to perform this transformation using O(N log N) rotations on average.
c. Show that this transformation can be done with O(N) rotations, worst-case.

Suppose we want to add the operation findKth to our repertoire. The operation findKth(k) returns the kth smallest item in the tree. Assume all items are distinct. Explain how to modify the binary search tree to support this operation in O(log N) average time, without sacrificing the time bounds of any other operation.

Since a binary search tree with N nodes has N + 1 nullptr pointers, half the space allocated in a binary search tree for pointer information is wasted. Suppose that if a node has a nullptr left child, we make its left child link to its inorder predecessor, and if a node has a nullptr right child, we make its right child link to its inorder successor. This is known as a threaded tree, and the extra links are called threads.
  a. How can we distinguish threads from real children pointers?
  b. Write routines to perform insertion and deletion into a tree threaded in the manner described above.
  c. What is the advantage of using threaded trees?

Write a program that reads a C++ source code file and outputs a list of all identifiers (that is, variable names, but not keywords) that are not found in comments or string constants, in alphabetical order. Each identifier should be output with a list of line numbers on which it occurs.

Generate an index for a book. The input file consists of a set of index entries. Each line consists of the string IX:, followed by an index entry name enclosed in braces, followed by a page number that is enclosed in braces. Each ! in an index entry name represents a sublevel. A |( represents the start of a range, and a |) represents the end of the range. Occasionally, this range will be the same page; in that case, output only a single page number. Otherwise, do not collapse or expand ranges on your own. As an example, the figures below show a sample input and the corresponding output.
IX: {Series|(}              { }
IX: {Series!geometric|(}    { }
IX: {Euler's constant}      { }
IX: {Series!geometric|)}    { }
IX: {Series!arithmetic|(}   { }
IX: {Series!arithmetic|)}   { }
IX: {Series!harmonic|(}     { }
IX: {Euler's constant}      { }
IX: {Series!harmonic|)}     { }
IX: {Series|)}              { }

Figure: Sample input for the index exercise (page numbers omitted in this copy).

Euler's constant
Series
  arithmetic
  geometric
  harmonic

Figure: Sample output for the index exercise (page numbers and ranges omitted in this copy).

References

More information on binary search trees, and in particular the mathematical properties of trees, can be found in the two books by Knuth. Several papers deal with the lack of balance caused by biased deletion algorithms in binary search trees. Hibbard's paper proposed the original deletion algorithm and established that one deletion preserves the randomness of the trees. A complete analysis has been performed only for trees with three nodes and with four nodes. Eppinger's paper provided early empirical evidence of nonrandomness, and the papers by Culberson and Munro provided some analytical evidence (but not a complete proof for the general case of intermixed insertions and deletions).

Adelson-Velskii and Landis proposed AVL trees. Recently it was shown that for AVL trees, if rebalancing is performed only on insertions, and not on deletions, under certain circumstances the resulting structure still maintains a depth of O(log M), where M is the number of insertions. Simulation results for AVL trees, and variants in which the height imbalance is allowed to be at most k for various values of k, have been presented. An analysis of the average search cost in AVL trees is incomplete, but some partial results are known.

Self-adjusting trees like the type in this chapter were considered early on; splay trees are described by Sleator and Tarjan.

B-trees first appeared in the paper by Bayer and McCreight. The implementation described in the original paper allows data to be stored in internal nodes as well as leaves. The data structure we have described
is sometimes known as a B+-tree. A survey of the different types of B-trees is presented in [ ]. Empirical results of the various schemes are reported in [ ]. Analysis of 2-3 trees and B-trees can be found in [ ], [ ], and [ ].

One of the exercises is deceptively difficult; a solution can be found in [ ]. Another exercise is taken from [ ]. Information on B*-trees, described in one of the exercises, can be found in [ ]. Another exercise is from [ ]. A solution to the transformation exercise using rotations is given in [ ]. Using threads, a la the threaded-tree exercise, was first proposed in [ ]. k-d trees, which handle multidimensional data, were first proposed in [ ] and are discussed in a later chapter. Other popular balanced search trees are red-black trees [ ] and weight-balanced trees [ ]. More balanced-tree schemes can be found in the books [ ] and [ ].

Adelson-Velskii and Landis, "An Algorithm for the Organization of Information," Soviet Mat. Doklady.
Aho, Hopcroft, and Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, Reading, Mass.
Allen and Munro, "Self-Organizing Search Trees," Journal of the ACM.
Baeza-Yates, "Expected Behaviour of B+-trees under Random Insertions," Acta Informatica.
Baeza-Yates, "A Trivial Algorithm Whose Analysis Isn't: A Continuation," BIT.
Bayer and McCreight, "Organization and Maintenance of Large Ordered Indices," Acta Informatica.
Bentley, "Multidimensional Binary Search Trees Used for Associative Searching," Communications of the ACM.
Bitner, "Heuristics that Dynamically Organize Data Structures," SIAM Journal on Computing.
Comer, "The Ubiquitous B-tree," Computing Surveys.
Culberson and Munro, "Explaining the Behavior of Binary Search Trees under Prolonged Updates: A Model and Simulations," Computer Journal.
Culberson and Munro, "Analysis of the Standard Deletion Algorithms in Exact Fit Domain Binary Search Trees," Algorithmica.
Culik, Ottman, and Wood, "Dense Multiway Trees," ACM Transactions on Database Systems.
Eisenbarth, Ziviani, Gonnet, Mehlhorn, and Wood, "The Theory of Fringe Analysis and Its Application to 2-3 Trees and B-trees," Information and Control.
Eppinger, "An Empirical
Study of Insertion and Deletion in Binary Search Trees," Communications of the ACM.
Flajolet and Odlyzko, "The Average Height of Binary Trees and Other Simple Trees," Journal of Computer and System Sciences.
Gonnet and Baeza-Yates, Handbook of Algorithms and Data Structures, Addison-Wesley, Reading, Mass.
Gudes and Tsur, "Experiments with B-tree Reorganization," Proceedings of ACM SIGMOD Symposium on Management of Data.
Guibas and Sedgewick, "A Dichromatic Framework for Balanced Trees," Proceedings of the Nineteenth Annual IEEE Symposium on Foundations of Computer Science.
Hibbard, "Some Combinatorial Properties of Certain Trees with Applications to Searching and Sorting," Journal of the ACM.
Jonassen and Knuth, "A Trivial Algorithm Whose Analysis Isn't," Journal of Computer and System Sciences.
Karlton, Fuller, Scroggs, and Kaehler, "Performance of Height Balanced Trees," Communications of the ACM.
Knuth, The Art of Computer Programming, Vol. 1: Fundamental Algorithms, Addison-Wesley, Reading, Mass.
Knuth, The Art of Computer Programming, Vol. 3: Sorting and Searching, Addison-Wesley, Reading, Mass.
Mehlhorn, "A Partial Analysis of Height-Balanced Trees under Random Insertions and Deletions," SIAM Journal of Computing.
Mehlhorn, Data Structures and Algorithms: Sorting and Searching, Springer-Verlag, Berlin.
Nievergelt and Reingold, "Binary Search Trees of Bounded Balance," SIAM Journal on Computing.
Perlis and Thornton, "Symbol Manipulation in Threaded Lists," Communications of the ACM.
Sen and Tarjan, "Deletion Without Rebalancing in Balanced Binary Trees," Proceedings of the Twentieth Symposium on Discrete Algorithms.
Sleator and Tarjan, "Self-Adjusting Binary Search Trees," Journal of the ACM.
Sleator, Tarjan, and Thurston, "Rotation Distance, Triangulations, and Hyperbolic Geometry," Journal of the AMS.
Tarjan, "Sequential Access in Splay Trees Takes Linear Time," Combinatorica.
Yao, "On Random 2-3 Trees," Acta Informatica.
Hashing

In the previous chapter we discussed the search tree ADT, which allowed various operations on a set of elements. In this chapter we discuss the hash table ADT, which supports only a subset of the operations allowed by binary search trees. The implementation of hash tables is frequently called hashing. Hashing is a technique used for performing insertions, deletions, and finds in constant average time. Tree operations that require any ordering information among the elements are not supported efficiently. Thus, operations such as findMin, findMax, and the printing of the entire table in sorted order in linear time are not supported.

The central data structure in this chapter is the hash table. We will...
- See several methods of implementing the hash table.
- Compare these methods analytically.
- Show numerous applications of hashing.
- Compare hash tables with binary search trees.

General Idea

The ideal hash table data structure is merely an array of some fixed size containing the items. As discussed earlier, generally a search is performed on some part (that is, a data member) of the item. This is called the key. For instance, an item could consist of a string (that serves as the key) and additional data members (for instance, a name that is part of a large employee structure). We will refer to the table size as TableSize, with the understanding that this is part of a hash data structure and not merely some variable floating around globally. The common convention is to have the table run from 0 to TableSize - 1; we will see why shortly.

Each key is mapped into some number in the range 0 to TableSize - 1 and placed in the appropriate cell. The mapping is called a hash function, which ideally should be simple to compute and should ensure that any two distinct keys get different cells. Since there are a finite number of cells and a virtually inexhaustible supply of keys, this is clearly impossible, and thus we seek a hash function that distributes the keys evenly among the cells. The figure below is typical of a perfect situation. In this example, john, phil, dave, and mary all hash to distinct cells.
Figure: an ideal hash table.

This is the basic idea of hashing. The only remaining problems deal with choosing a function, deciding what to do when two keys hash to the same value (this is known as a collision), and deciding on the table size.

Hash Function

If the input keys are integers, then simply returning Key mod TableSize is generally a reasonable strategy, unless Key happens to have some undesirable properties. In this case, the choice of a hash function needs to be carefully considered. For instance, if the table size is 10 and the keys all end in zero, then the standard hash function is a bad choice. For reasons we shall see later, and to avoid situations like the one above, it is often a good idea to ensure that the table size is prime. When the input keys are random integers, then this function is not only very simple to compute but also distributes the keys evenly.

Usually, the keys are strings; in this case, the hash function needs to be chosen carefully. One option is to add up the ASCII values of the characters in the string. The routine in the first figure below implements this strategy.

That hash function is simple to implement and computes an answer quickly. However, if the table size is large, the function does not distribute the keys well. For instance, suppose that TableSize = 10,007 (10,007 is a prime number), and suppose all the keys are eight or fewer characters long. Since an ASCII character has an integer value that is always at most 127, the hash function typically can only assume values between 0 and 1,016, which is 127 * 8. This is clearly not an equitable distribution!

Another hash function is shown in the second figure below. This hash function assumes that Key has at least three characters. The value 27 represents the number of letters in the English alphabet, plus the blank, and 729 is 27². This function examines only the first three characters, but if these are random and the table size is 10,007, as before, then we would expect
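As a concrete sketch of the mapping just described, the toy function below (hypothetical, for illustration only) maps integer keys into cells with mod, and shows that two distinct keys can land in the same cell, that is, collide:

```cpp
#include <cassert>

// Hypothetical illustration: map an integer key into a cell index in
// [0, tableSize). Distinct keys may land in the same cell (a collision).
int hashMod( int key, int tableSize )
{
    return key % tableSize;
}
```

For example, 89 and 49 both end in 9, so with a table size of 10 they land in the same cell; resolving such collisions is the subject of the later sections.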
int hash( const string & key, int tableSize )
{
    int hashVal = 0;

    for( char ch : key )
        hashVal += ch;

    return hashVal % tableSize;
}

Figure: a simple hash function.

int hash( const string & key, int tableSize )
{
    return ( key[ 0 ] + 27 * key[ 1 ] + 729 * key[ 2 ] ) % tableSize;
}

Figure: another possible hash function (not too good).

a reasonably equitable distribution. Unfortunately, English is not random. Although there are 26³ = 17,576 possible combinations of three characters (ignoring blanks), a check of a reasonably large online dictionary reveals that the number of different combinations is actually only 2,851. Even if none of these combinations collide, only 28 percent of the table can actually be hashed to. Thus this function, although easily computable, is also not appropriate if the hash table is reasonably large.

The figure below shows a third attempt at a hash function. This hash function involves all characters in the key and can generally be expected to distribute well (it computes Σ Key[KeySize - i - 1] * 37^i, for i from 0 to KeySize - 1, and brings the result into proper range). The code computes a polynomial function (of 37) by use of Horner's rule. For instance, another way of computing h_k = k₀ + 37k₁ + 37²k₂ is by the formula h_k = ( ( k₂ ) * 37 + k₁ ) * 37 + k₀. Horner's rule extends this to an nth degree polynomial.

/**
 * A hash routine for string objects.
 */
unsigned int hash( const string & key, int tableSize )
{
    unsigned int hashVal = 0;

    for( char ch : key )
        hashVal = 37 * hashVal + ch;

    return hashVal % tableSize;
}

Figure: a good hash function.
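The Horner's-rule claim can be checked directly. The multiplier 37 below is an assumption for illustration (a small prime commonly used for this purpose); the sketch verifies that the running-loop form and an explicit polynomial evaluation produce identical values:

```cpp
#include <cassert>
#include <string>

// Horner's-rule form: hashVal = 37 * hashVal + ch, left to right.
unsigned int hornerHash( const std::string & key )
{
    unsigned int hashVal = 0;
    for( char ch : key )
        hashVal = 37 * hashVal + ch;
    return hashVal;
}

// Direct polynomial form: key[0]*37^(n-1) + key[1]*37^(n-2) + ... + key[n-1].
unsigned int directPoly( const std::string & key )
{
    unsigned int result = 0;
    unsigned int power = 1;
    for( int i = (int) key.size( ) - 1; i >= 0; --i )
    {
        result += key[ i ] * power;
        power *= 37;
    }
    return result;
}
```

The loop form avoids computing the powers of 37 explicitly, which is exactly the saving Horner's rule provides.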
The hash function takes advantage of the fact that overflow is allowed and uses unsigned int to avoid introducing a negative number.

The hash function described in the figure above is not necessarily the best with respect to table distribution, but it does have the merit of extreme simplicity and is reasonably fast. If the keys are very long, the hash function will take too long to compute. A common practice in this case is not to use all the characters. The length and properties of the keys would then influence the choice. For instance, the keys could be a complete street address. The hash function might include a couple of characters from the street address and perhaps a couple of characters from the city name and ZIP code. Some programmers implement their hash function by using only the characters in the odd spaces, with the idea that the time saved computing the hash function will make up for a slightly less evenly distributed function.

The main programming detail left is collision resolution. If, when an element is inserted, it hashes to the same value as an already inserted element, then we have a collision and need to resolve it. There are several methods for dealing with this. We will discuss two of the simplest, separate chaining and open addressing, and then we will look at some more recently discovered alternatives.

Separate Chaining

The first strategy, commonly known as separate chaining, is to keep a list of all elements that hash to the same value. We can use the Standard Library list implementation. If space is tight, it might be preferable to avoid their use (since these lists are doubly linked and waste space). We assume for this section that the keys are the first 10 perfect squares and that the hashing function is simply hash(x) = x mod 10. (The table size is not prime but is used here for simplicity.) The figure below shows the resulting separate chaining hash table.

To perform a search, we use the hash function to determine which list to traverse. We then search the appropriate list. To perform an insert, we check the appropriate list to
see whether the element is already in place (if duplicates are expected, an extra data member is

Figure: a separate chaining hash table.
template <typename HashedObj>
class HashTable
{
  public:
    explicit HashTable( int size = 101 );

    bool contains( const HashedObj & x ) const;

    void makeEmpty( );
    bool insert( const HashedObj & x );
    bool insert( HashedObj && x );
    bool remove( const HashedObj & x );

  private:
    vector<list<HashedObj>> theLists;   // The array of Lists
    int currentSize;

    void rehash( );
    size_t myhash( const HashedObj & x ) const;
};

Figure: type declaration for separate chaining hash table.

usually kept, and this data member would be incremented in the event of a match). If the element turns out to be new, it can be inserted at the front of the list, since it is convenient and also because frequently it happens that recently inserted elements are the most likely to be accessed in the near future.

The class interface for a separate chaining implementation is shown in the figure above. The hash table stores an array of linked lists, which are allocated in the constructor. The class interface illustrates a syntax point: Prior to C++11, in the declaration of theLists, a space was required between the two >s; since >> is a C++ token, and because it is longer than >, >> would be recognized as the token. In C++11, this is no longer the case.

Just as the binary search tree works only for objects that are Comparable, the hash tables in this chapter work only for objects that provide a hash function and equality operators (operator== or operator!=, or possibly both).

Instead of requiring hash functions that take both the object and the table size as parameters, we have our hash functions take only the object as the parameter and return an appropriate integral type. The standard mechanism for doing this uses function objects, and the protocol for hash tables was introduced in C++11. Specifically, in C++11, hash functions can be expressed by the function object template:

template <typename Key>
class hash
{
  public:
    size_t operator()( const Key & k ) const;
};
Default implementations of this template are provided for standard types such as int and string; thus, the hash function described in the figure above could be implemented as:

template <>
class hash<string>
{
  public:
    size_t operator()( const string & key )
    {
        size_t hashVal = 0;

        for( char ch : key )
            hashVal = 37 * hashVal + ch;

        return hashVal;
    }
};

The type size_t is an unsigned integral type that represents the size of an object; therefore, it is guaranteed to be able to store an array index. A class that implements a hash table algorithm can then use calls to the generic hash function object to generate an integral type size_t and then scale the result into a suitable array index. In our hash tables, this is manifested in the private member function myhash, shown in the figure below.

A nearby figure illustrates an Employee class that can be stored in the generic hash table, using the name member as the key. The Employee class implements the HashedObj requirements by providing equality operators and a hash function object.

The code to implement makeEmpty, contains, and remove is shown in a later figure. Next comes the insertion routine. If the item to be inserted is already present, then we do nothing; otherwise, we place it in the list. The element can be placed anywhere in the list; using push_back is most convenient in our case. whichList is a reference variable; see the earlier discussion of this use of reference variables.

Any scheme could be used besides linked lists to resolve the collisions; a binary search tree or even another hash table would work, but we expect that if the table is large and the hash function is good, all the lists should be short, so basic separate chaining makes no attempt to try anything complicated.

We define the load factor, λ, of a hash table to be the ratio of the number of elements in the hash table to the table size. In the example above, λ = 1.0. The average length of a list is λ. The effort required to perform a search is the constant time required to evaluate the hash function plus the time to traverse the list. In an unsuccessful search, the number

size_t myhash( const HashedObj & x ) const
{
    static hash<HashedObj> hf;
    return hf( x ) % theLists.size( );
}

Figure: myhash member function for hash tables.
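A minimal sketch of the function-object protocol just described, using the hypothetical name SimpleHash to avoid clashing with the standard template: a specialization provides operator(), and a myhash-style helper scales the result into an index. The multiplier 37 is the same assumption used earlier.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for the hash function-object template.
template <typename Key>
class SimpleHash;

// Specialization for string, using the polynomial hash from the text.
template <>
class SimpleHash<std::string>
{
  public:
    size_t operator()( const std::string & key ) const
    {
        size_t hashVal = 0;
        for( char ch : key )
            hashVal = 37 * hashVal + ch;
        return hashVal;
    }
};

// myhash-style scaling of the functor's result into a table index.
size_t scaleToIndex( const std::string & key, size_t tableSize )
{
    static SimpleHash<std::string> hf;
    return hf( key ) % tableSize;
}
```

Because the functor returns size_t, the table class never needs to know how the key type computes its hash; it only scales the result by the current table size.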
// Example of an Employee class
class Employee
{
  public:
    const string & getName( ) const
      { return name; }

    bool operator==( const Employee & rhs ) const
      { return getName( ) == rhs.getName( ); }
    bool operator!=( const Employee & rhs ) const
      { return !( *this == rhs ); }

    // Additional public members not shown

  private:
    string name;
    double salary;
    int seniority;

    // Additional private members not shown
};

template <>
class hash<Employee>
{
  public:
    size_t operator()( const Employee & item )
    {
        static hash<string> hf;
        return hf( item.getName( ) );
    }
};

Figure: example of a class that can be used as a HashedObj.

of nodes to examine is λ on average. A successful search requires that about 1 + (λ/2) links be traversed. To see this, notice that the list that is being searched contains the one node that stores the match plus zero or more other nodes. The expected number of "other nodes" in a table of N elements and M lists is (N - 1)/M, which is essentially λ, since M is presumed large. On average, half the "other nodes" are searched, so combined with the matching node, we obtain an average search cost of 1 + λ/2 nodes. This analysis shows that the table size is not really important but the load factor is. The general rule for separate chaining hashing is to make the table size about as large as the number of elements expected (in other words, let λ be close to 1). In the insert code, if the load factor exceeds 1, we expand the table size by calling rehash; rehash is discussed in a later section. It is also a good idea, as mentioned before, to keep the table size prime to ensure a good distribution.
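The separate-chaining cost formulas just derived are easy to tabulate; the helper names below are mine, not the text's:

```cpp
#include <cassert>

// Average number of nodes examined in an unsuccessful search: λ.
double unsuccessfulCost( double lambda )
{
    return lambda;
}

// Average cost of a successful search: the matching node plus about
// half of the λ "other nodes", i.e., 1 + λ/2.
double successfulCost( double lambda )
{
    return 1.0 + lambda / 2.0;
}

// Expected number of "other nodes" on the searched list: (N - 1) / M.
double expectedOtherNodes( int n, int m )
{
    return double( n - 1 ) / m;
}
```

At the recommended λ = 1, a successful search therefore examines about 1.5 nodes on average, independent of the table size.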
void makeEmpty( )
{
    for( auto & thisList : theLists )
        thisList.clear( );
}

bool contains( const HashedObj & x ) const
{
    auto & whichList = theLists[ myhash( x ) ];
    return find( begin( whichList ), end( whichList ), x ) != end( whichList );
}

bool remove( const HashedObj & x )
{
    auto & whichList = theLists[ myhash( x ) ];
    auto itr = find( begin( whichList ), end( whichList ), x );

    if( itr == end( whichList ) )
        return false;

    whichList.erase( itr );
    --currentSize;
    return true;
}

Figure: makeEmpty, contains, and remove routines for separate chaining hash table.

bool insert( const HashedObj & x )
{
    auto & whichList = theLists[ myhash( x ) ];
    if( find( begin( whichList ), end( whichList ), x ) != end( whichList ) )
        return false;
    whichList.push_back( x );

        // Rehash; see the section on rehashing
    if( ++currentSize > theLists.size( ) )
        rehash( );

    return true;
}

Figure: insert routine for separate chaining hash table.
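The insert/contains logic above can be exercised with a stripped-down, self-contained sketch for nonnegative int keys (hash(x) = x mod M, as in the running example); it is a toy version, not the text's full template:

```cpp
#include <algorithm>
#include <cassert>
#include <list>
#include <vector>

// Toy separate-chaining table for nonnegative int keys.
struct ChainedTable
{
    std::vector<std::list<int>> lists;

    explicit ChainedTable( int m ) : lists( m ) { }

    bool insert( int x )
    {
        auto & whichList = lists[ x % lists.size( ) ];
        if( std::find( whichList.begin( ), whichList.end( ), x ) != whichList.end( ) )
            return false;                 // duplicate: do nothing
        whichList.push_back( x );
        return true;
    }

    bool contains( int x ) const
    {
        auto & whichList = lists[ x % lists.size( ) ];
        return std::find( whichList.begin( ), whichList.end( ), x ) != whichList.end( );
    }
};

// Small demonstration: duplicates are rejected, lookups behave as expected.
bool chainedTableDemo( )
{
    ChainedTable t( 10 );
    return t.insert( 0 ) && t.insert( 81 ) && !t.insert( 81 )
        && t.contains( 81 ) && !t.contains( 5 );
}
```

The keys 0 and 81 are two of the perfect squares used in the section's example; 81 hashes to list 1 under mod 10.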
Hash Tables without Linked Lists

Separate chaining hashing has the disadvantage of using linked lists. This could slow the algorithm down a bit because of the time required to allocate new cells (especially in other languages) and essentially requires the implementation of a second data structure. An alternative to resolving collisions with linked lists is to try alternative cells until an empty cell is found. More formally, cells h0(x), h1(x), h2(x), ... are tried in succession, where hi(x) = (hash(x) + f(i)) mod TableSize, with f(0) = 0. The function f is the collision resolution strategy. Because all the data go inside the table, a bigger table is needed in such a scheme than for separate chaining hashing. Generally, the load factor should be below λ = 0.5 for a hash table that doesn't use separate chaining. We call such tables probing hash tables. We now look at three common collision resolution strategies.

Linear Probing

In linear probing, f is a linear function of i, typically f(i) = i. This amounts to trying cells sequentially (with wraparound) in search of an empty cell. The figure below shows the result of inserting keys {89, 18, 49, 58, 69} into a hash table using the same hash function as before and the collision resolution strategy f(i) = i.

The first collision occurs when 49 is inserted; it is put in the next available spot, namely, spot 0, which is open. The key 58 collides with 18, 89, and then 49 before an empty cell is found three away. The collision for 69 is handled in a similar manner. As long as the table is big enough, a free cell can always be found, but the time to do so can get quite large. Worse, even if the table is relatively empty, blocks of occupied cells start forming. This effect, known as primary clustering, means that any key that hashes into the cluster will require several attempts to resolve the collision, and then it will add to the cluster.

Although we will not perform the calculations here, it can be shown that the expected number of probes using linear probing is roughly (1/2)(1 + 1/(1 - λ)²) for insertions and

Figure: hash table with linear probing, after each insertion.
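The linear-probing trace just described can be reproduced with a short sketch; the key set {89, 18, 49, 58, 69} and hash(x) = x mod 10 follow the example above, and -1 marks an empty cell:

```cpp
#include <cassert>
#include <vector>

// Insert x using linear probing, f(i) = i; returns the cell finally used.
int linearProbeInsert( std::vector<int> & table, int x )
{
    int m = (int) table.size( );
    int pos = x % m;
    while( table[ pos ] != -1 )        // try cells sequentially,
        pos = ( pos + 1 ) % m;         // with wraparound
    table[ pos ] = x;
    return pos;
}

// Build the example table by inserting 89, 18, 49, 58, 69 in order.
std::vector<int> buildLinearExample( )
{
    std::vector<int> table( 10, -1 );  // -1 marks empty
    for( int x : { 89, 18, 49, 58, 69 } )
        linearProbeInsert( table, x );
    return table;
}
```

After all five insertions the occupied cells are 0:49, 1:58, 2:69, 8:18, 9:89, matching the trace in the text.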
unsuccessful searches, and (1/2)(1 + 1/(1 - λ)) for successful searches. The calculations are somewhat involved. It is easy to see from the code that insertions and unsuccessful searches require the same number of probes. A moment's thought suggests that, on average, successful searches should take less time than unsuccessful searches.

The corresponding formulas, if clustering is not a problem, are fairly easy to derive. We will assume a very large table and that each probe is independent of the previous probes. These assumptions are satisfied by a random collision resolution strategy and are reasonable unless λ is very close to 1. First, we derive the expected number of probes in an unsuccessful search. This is just the expected number of probes until we find an empty cell. Since the fraction of empty cells is 1 - λ, the number of cells we expect to probe is 1/(1 - λ). The number of probes for a successful search is equal to the number of probes required when the particular element was inserted. When an element is inserted, it is done as a result of an unsuccessful search. Thus, we can use the cost of an unsuccessful search to compute the average cost of a successful search.

The caveat is that λ changes from 0 to its current value, so that earlier insertions are cheaper and should bring the average down. For instance, in the linear probing example, λ = 0.5, but the cost of accessing 18 is determined when 18 is inserted; at that point, λ = 0.2. Since 18 was inserted into a relatively empty table, accessing it should be easier than accessing a recently inserted element, such as 69. We can estimate the average by using an integral to calculate the mean value of the insertion time, obtaining

I(λ) = (1/λ) ∫₀^λ 1/(1 - x) dx = (1/λ) ln(1/(1 - λ))

These formulas are clearly better than the corresponding formulas for linear probing. Clustering is not only a theoretical problem but actually occurs in real implementations. A later figure compares the performance of linear probing (dashed curves) with what would be expected from a more random collision resolution. Successful searches are indicated by an S, and unsuccessful searches and insertions are marked with U and I, respectively.

If λ = 0.75, then the formula above indicates that 8.5 probes are expected for an insertion in linear probing. If λ = 0.9, then 50 probes are expected, which is unreasonable. This compares with 4 and 10 probes for the respective load factors if clustering were not a problem. We see from these formulas that linear probing can be a bad idea if the table is expected to be more than half full. If λ = 0.5, however, only 2.5 probes are required on average for insertion, and only 1.5 probes are required, on average, for a successful search.

Quadratic Probing

Quadratic probing is a collision resolution method that eliminates the primary clustering problem of linear probing. Quadratic probing is what you would expect: the collision function is quadratic. The popular choice is f(i) = i². The figure below shows the resulting hash table with this collision function on the same input used in the linear probing example.

When 49 collides with 89, the next position attempted is one cell away. This cell is empty, so 49 is placed there. Next, 58 collides at position 8. Then the cell one away is
Figure: number of probes plotted against load factor for linear probing (dashed) and random strategy (S is successful search, U is unsuccessful search, and I is insertion).

Figure: hash table with quadratic probing, after each insertion.
23,496
hashing theorem if quadratic probing is usedand the table size is primethen new element can always be inserted if the table is at least half empty proof let the table sizetablesizebe an (oddprime greater than we show that the first tablesize/ alternative locations (including the initial location ( )are all distinct two of these locations are (xi (mod tablesizeand (xj (mod tablesize)where <ij <tablesize/ supposefor the sake of contradictionthat these locations are the samebut  then (xi (xj (mod tablesizei = (mod tablesizei - = (mod tablesize( )( (mod tablesize since tablesize is primeit follows that either ( jor ( jis equal to (mod tablesizesince and are distinctthe first option is not possible since <ij <tablesize/ the second option is also impossible thusthe first tablesize/ alternative locations are distinct if at most tablesize/ positions are takenthen an empty spot can always be found if the table is even one more than half fullthe insertion could fail (although this is extremely unlikelythereforeit is important to keep this in mind it is also crucial that the table size be prime if the table size is not primethe number of alternative locations can be severely reduced as an exampleif the table size were then the only alternative locations would be at distances or away standard deletion cannot be performed in probing hash tablebecause the cell might have caused collision to go past it for instanceif we remove then virtually all the remaining find operations will fail thusprobing hash tables require lazy deletionalthough in this case there really is no laziness implied the class interface required to implement probing hash tables is shown in figure instead of an array of listswe have an array of hash table entry cells the nested class hashentry stores the state of an entry in the info memberthis state is either activeemptyor deleted we use standard enumerated type enum entrytype activeemptydeleted }constructing the table (fig consists of setting the info member 
to empty for each cell contains( )shown in figure invokes private member functions isactive and findpos the private member function findpos performs the collision resolution we ensure in the insert routine that the hash table is at least twice as large as the number of elements in the tableso quadratic resolution will always work in the implementation if the table size is prime of the form and the quadratic collision resolution strategy ( +- is usedthen the entire table can be probed the cost is slightly more complicated routine
template <typename HashedObj>
class HashTable
{
  public:
    explicit HashTable( int size = 101 );

    bool contains( const HashedObj & x ) const;

    void makeEmpty( );
    bool insert( const HashedObj & x );
    bool insert( HashedObj && x );
    bool remove( const HashedObj & x );

    enum EntryType { ACTIVE, EMPTY, DELETED };

  private:
    struct HashEntry
    {
        HashedObj element;
        EntryType info;

        HashEntry( const HashedObj & e = HashedObj{ }, EntryType i = EMPTY )
          : element{ e }, info{ i } { }
        HashEntry( HashedObj && e, EntryType i = EMPTY )
          : element{ std::move( e ) }, info{ i } { }
    };

    vector<HashEntry> array;
    int currentSize;

    bool isActive( int currentPos ) const;
    int findPos( const HashedObj & x ) const;
    void rehash( );
    size_t myhash( const HashedObj & x ) const;
};

Figure: class interface for hash tables using probing strategies, including the nested HashEntry class.

In findPos, elements that are marked as DELETED count as being in the table. This can cause problems, because the table can get too full prematurely. We shall discuss this item presently.

The lines that compute the ith probe represent the fast way of doing quadratic resolution. From the definition of the quadratic resolution function, f(i) = f(i - 1) + 2i - 1, so the next cell to try is 2i - 1 away from the previous cell tried, and this distance increases by 2 on successive probes.
explicit HashTable( int size = 101 ) : array( nextPrime( size ) )
  { makeEmpty( ); }

void makeEmpty( )
{
    currentSize = 0;
    for( auto & entry : array )
        entry.info = EMPTY;
}

Figure: routines to initialize quadratic probing hash table.

bool contains( const HashedObj & x ) const
  { return isActive( findPos( x ) ); }

int findPos( const HashedObj & x ) const
{
    int offset = 1;
    int currentPos = myhash( x );

    while( array[ currentPos ].info != EMPTY &&
           array[ currentPos ].element != x )
    {
        currentPos += offset;   // Compute ith probe
        offset += 2;
        if( currentPos >= array.size( ) )
            currentPos -= array.size( );
    }

    return currentPos;
}

bool isActive( int currentPos ) const
  { return array[ currentPos ].info == ACTIVE; }

Figure: contains routine (and private helpers) for hashing with quadratic probing.

If the new location is past the array, it can be put back in range by subtracting TableSize. This is faster than the obvious method, because it avoids the multiplication and division that seem to be required. An important warning: The order of testing in the while condition is important. Don't switch it!

The final routine is insertion. As with separate chaining hashing, we do nothing if x is already present. It is a simple modification to do something else. Otherwise, we place it at the spot suggested by the findPos routine. The code is shown in the figure below. If the load
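The incremental trick in findPos (add an offset that starts at 1 and grows by 2 each probe) can be verified against the definition f(i) = i²: since f(i) - f(i - 1) = 2i - 1, the running sum of odd offsets reproduces the squares.

```cpp
#include <cassert>

// Check that repeatedly adding offsets 1, 3, 5, ... to h visits exactly
// the quadratic probe locations h + i*i (ignoring wraparound).
bool incrementMatchesSquares( int h, int n )
{
    int offset = 1;
    int pos = h;
    for( int i = 1; i <= n; ++i )
    {
        pos += offset;               // f(i) - f(i-1) = 2i - 1
        offset += 2;
        if( pos != h + i * i )
            return false;
    }
    return true;
}
```

This is why findPos needs no multiplication: each probe costs one addition, plus at most one subtraction for the wraparound.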
bool insert( const HashedObj & x )
{
        // Insert x as active
    int currentPos = findPos( x );
    if( isActive( currentPos ) )
        return false;

    array[ currentPos ].element = x;
    array[ currentPos ].info = ACTIVE;

        // Rehash; see the section on rehashing
    if( ++currentSize > array.size( ) / 2 )
        rehash( );

    return true;
}

bool remove( const HashedObj & x )
{
    int currentPos = findPos( x );
    if( !isActive( currentPos ) )
        return false;

    array[ currentPos ].info = DELETED;
    return true;
}

Figure: some insert and remove routines for hash tables with quadratic probing.

factor exceeds 0.5, the table is full and we enlarge the hash table. This is called rehashing and is discussed in a later section. The figure above also shows remove.

Although quadratic probing eliminates primary clustering, elements that hash to the same position will probe the same alternative cells. This is known as secondary clustering. Secondary clustering is a slight theoretical blemish. Simulation results suggest that it generally causes less than an extra half probe per search. The following technique eliminates this, but does so at the cost of computing an extra hash function.

Double Hashing

The last collision resolution method we will examine is double hashing. For double hashing, one popular choice is f(i) = i * hash2(x). This formula says that we apply a second hash function to x and probe at a distance hash2(x), 2 hash2(x), and so on. A poor choice of hash2(x) would be disastrous. For instance, the obvious choice hash2(x) = x mod 9 would not help if 99 were inserted into the input in the previous examples. Thus, the function must never evaluate to zero. It is also important to make sure all cells can be probed (this is not possible in the example below, because the table size is not prime). A function such
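The excerpt breaks off while naming a good second hash function. A commonly cited choice, assumed here rather than taken from the excerpt, is hash2(x) = R - (x mod R) for a prime R smaller than the table size; it can never evaluate to zero. The sketch below computes double-hashing probe locations with that choice.

```cpp
#include <cassert>

// Probe location for double hashing: h_i(x) = (hash(x) + i * hash2(x)) mod m,
// with hash(x) = x mod m and hash2(x) = R - (x mod R) (assumed choice; R prime).
int doubleHashProbe( int x, int i, int m, int R )
{
    int hash2 = R - ( x % R );       // in [1, R]: never zero
    return ( x % m + i * hash2 ) % m;
}
```

With R = 7 and a table of size 10, the key 49 (hash2 = 7) probes cells 9, 6, 3, ...; unlike quadratic probing, keys that hash to the same cell generally follow different probe sequences, which removes secondary clustering.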