Chap. Lists, Stacks, and Queues

    // A simple payroll entry with ID, name, and address fields
    class Payroll {
      private Integer ID;       // Employee ID
      private String name;      // Employee name
      private String address;   // Employee address

      // Constructor
      Payroll(int inID, String inname, String inaddr) {
        ID = inID;
        name = inname;
        address = inaddr;
      }

      // Data member access functions
      public Integer getID()  { return ID; }
      public String getname() { return name; }
      public String getaddr() { return address; }
    }

Figure: A payroll record implementation.

    // IDdict organizes payroll records by ID
    Dictionary<Integer,Payroll> IDdict =
      new UALdictionary<Integer,Payroll>();

    // namedict organizes payroll records by name
    Dictionary<String,Payroll> namedict =
      new UALdictionary<String,Payroll>();

    Payroll foo1 = new Payroll(5, "Joe", "Anytown");
    Payroll foo2 = new Payroll(10, "John", "Mytown");

    IDdict.insert(foo1.getID(), foo1);
    IDdict.insert(foo2.getID(), foo2);
    namedict.insert(foo1.getname(), foo1);
    namedict.insert(foo2.getname(), foo2);

    Payroll findfoo1 = IDdict.find(5);
    Payroll findfoo2 = namedict.find("John");

Figure: A dictionary search example. Here, payroll records are stored in two dictionaries, one organized by ID and the other organized by name. Both dictionaries are implemented with an unsorted array-based list.
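The two-dictionary idea above can be run directly against the standard library. The following is a minimal, self-contained sketch; `HashMap` stands in for the text's `UALdictionary`, and the class and method names here are illustrative, not from the text. The point it demonstrates is that the same record object is stored under two different, externally supplied keys, because the key is a property of the context rather than of the record.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the SAME record object stored in two dictionaries,
// one keyed by ID and one keyed by name.
public class PayrollDemo {
    static class Payroll {
        final int id;
        final String name;
        Payroll(int id, String name) { this.id = id; this.name = name; }
    }

    // Build both indexes, then look the same record up by either key.
    static boolean sameRecordBothWays() {
        Map<Integer, Payroll> byId = new HashMap<>();
        Map<String, Payroll> byName = new HashMap<>();

        Payroll joe  = new Payroll(5, "Joe");
        Payroll john = new Payroll(10, "John");
        for (Payroll p : new Payroll[] { joe, john }) {
            byId.put(p.id, p);      // keyed by ID
            byName.put(p.name, p);  // keyed by name
        }
        // Both finds return the identical object, not a copy.
        return byId.get(10) == byName.get("John");
    }

    public static void main(String[] args) {
        System.out.println(sameRecordBothWays()); // true
    }
}
```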
    /** Container for a key-value pair */
    class KVpair<Key, E> {
      private Key k;
      private E e;

      // Constructors
      KVpair() { k = null; e = null; }
      KVpair(Key kval, E eval) { k = kval; e = eval; }

      // Data member access functions
      public Key key()   { return k; }
      public E   value() { return e; }
    }

Figure: Implementation for a class representing a key-value pair.

We need some mechanism for extracting keys that is sufficiently general. One approach is to require all record types to support some particular method that returns the key value. For example, in Java the Comparable interface can be used to provide this effect. Unfortunately, this approach does not work when the same record type is meant to be stored in multiple dictionaries, each keyed by a different field of the record. This is typical in database applications.

Another, more general approach is to supply a class whose job is to extract the key from the record. Unfortunately, this solution also does not work in all situations, because there are record types for which it is not possible to write a key extraction method. The fundamental issue is that the key value for a record is not an intrinsic property of the record's class, or of any field within the class. The key for a record is actually a property of the context in which the record is used.

A truly general alternative is to explicitly store the key associated with a given record, as a separate field in the dictionary. That is, each entry in the dictionary will contain both a record and its associated key. Such entries are known as key-value pairs. It is typical that storing the key explicitly duplicates some field in the record. However, keys tend to be much smaller than records, so this additional

(Footnote: One example of such a situation occurs when we have a collection of records that describe books in a library. One of the fields for such a record might be a list of subject keywords, where the typical record stores a few keywords. Our dictionary might be implemented as a list of records sorted by keyword. If a book contains three keywords, it would appear three times on the list, once for each associated keyword. However, given the record, there is no simple way to determine which keyword on the keyword list triggered this appearance of the record. Thus, we cannot write a function that extracts the key from such a record.)
space overhead will not be great. A simple class for representing key-value pairs is shown in the figure above. The insert method of the dictionary class supports the key-value pair implementation because it takes two parameters: a record and its associated key for that dictionary.

Now that we have defined the dictionary ADT and settled on the design approach of storing key-value pairs for our dictionary entries, we are ready to consider ways to implement it. Two possibilities would be to use an array-based or linked list. The figure below shows an implementation for the dictionary using an (unsorted) array-based list.

Examining class UALdictionary (UAL stands for "unsorted array-based list"), we can easily see that insert is a constant-time operation, because it simply inserts the new record at the end of the list. However, find and remove both require Θ(n) time in the average and worst cases, because we need to do a sequential search. Method remove in particular must touch every record in the list, because once the desired record is found, the remaining records must be shifted down in the list to fill the gap. Method removeAny removes the last record from the list, so this is a constant-time operation.

As an alternative, we could implement the dictionary using a linked list. The implementation would be quite similar to that shown in the figure, and the cost of the functions should be the same asymptotically.

Another alternative would be to implement the dictionary with a sorted list. The advantage of this approach would be that we might be able to speed up the find operation by using a binary search. To do so, first we must define a variation on the List ADT to support sorted lists. A sorted list is somewhat different from an unsorted list in that it cannot permit the user to control where elements get inserted. Thus, the insert method must be quite different in a sorted list than in an unsorted list. Likewise, the user cannot be permitted to append elements onto the list. For these reasons, a sorted list cannot be implemented with straightforward inheritance from the List ADT.

The cost for find in a sorted list is Θ(log n) for a list of length n. This is a great improvement over the cost of find in an unsorted list. Unfortunately, the cost of insert changes from constant time in the unsorted list to Θ(n) time in the sorted list. Whether the sorted list implementation for the dictionary ADT is more or less efficient than the unsorted list implementation depends on the relative number of insert and find operations to be performed. If many more find operations than insert operations are used, then it might be worth using a sorted list to implement the dictionary. In both cases, remove requires Θ(n) time in the worst and average cases. Even if we used binary search to cut down on the time to
Sec. Dictionaries

    /** Dictionary implemented by an unsorted array-based list. */
    class UALdictionary<Key, E> implements Dictionary<Key, E> {
      private static final int defaultSize = 10; // Default size
      private AList<KVpair<Key,E>> list;         // To store dictionary

      // Constructors
      UALdictionary() { this(defaultSize); }
      UALdictionary(int sz) { list = new AList<KVpair<Key,E>>(sz); }

      public void clear() { list.clear(); }      // Reinitialize

      /** Insert an element: append to list */
      public void insert(Key k, E e) {
        KVpair<Key,E> temp = new KVpair<Key,E>(k, e);
        list.append(temp);
      }

      /** Use sequential search to find the element to remove */
      public E remove(Key k) {
        E temp = find(k);
        if (temp != null) list.remove();
        return temp;
      }

      /** Remove the last element */
      public E removeAny() {
        if (size() != 0) {
          list.moveToEnd();
          list.prev();
          KVpair<Key,E> e = list.remove();
          return e.value();
        }
        else return null;
      }

      // Find "k" using sequential search
      public E find(Key k) {
        for (list.moveToStart();
             list.currPos() < list.length();
             list.next()) {
          KVpair<Key,E> temp = list.getValue();
          if (k == temp.key()) return temp.value();
        }
        return null; // "k" does not appear in dictionary
      }

Figure: A dictionary implemented with an unsorted array-based list.
      public int size()   // Return list size
        { return list.length(); }
    }

Figure: (continued)

find the record prior to removal, we would still need to shift down the remaining records in the list to fill the gap left by the remove operation.

Given two keys, we have not properly addressed the issue of how to compare them. One possibility would be to simply use the basic == operator built into Java. This is the approach taken by our implementations for dictionaries shown in the figure. If the key type is int, for example, this will work fine. However, if the key is a pointer to a string or any other type of object, then this will not give the desired result. When we compare two strings we probably want to know which comes first in alphabetical order, but what we will get from the standard comparison operators is simply which object appears first in memory. Unfortunately, the code will compile fine, but the answers probably will not be fine.

In a language like C++ that supports operator overloading, we could require that the user of the dictionary overload the == operator for the given key type. This requirement then becomes an obligation on the user of the dictionary class. Unfortunately, this obligation is hidden within the code of the dictionary (and possibly in the user's manual) rather than exposed in the dictionary's interface. As a result, some users of the dictionary might neglect to implement the overloading, with unexpected results. Again, the compiler will not catch this problem.

The Java Comparable interface provides an approach to solving this problem. In a key-value pair implementation, the keys can be required to implement the Comparable interface. In other applications, the records might be required to implement Comparable.

The most general solution is to have users supply their own definition for comparing keys. The concept of a class that does comparison (called a comparator) is quite important. By making these operations be generic parameters, the requirement to supply the comparator class becomes part of the interface.
This design is an example of the Strategy design pattern, because the "strategies" for comparing and getting keys from records are provided by the client. In many cases, it is also possible for the comparator class to extract the key from the record type, as an alternative to storing key-value pairs. We will use the Comparable interface in later chapters to implement comparison in heaps and in sorting algorithms.
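The comparator-as-strategy idea can be illustrated with Java's standard java.util.Comparator: the client supplies the comparison (and, via Comparator.comparing, the key extraction) when invoking the operation, so the obligation is visible in the interface. The record type below is a hypothetical example, not the text's code:

```java
import java.util.Arrays;
import java.util.Comparator;

// Strategy pattern for comparison: the caller passes in the comparator,
// so "how keys are compared" is part of the method's interface.
public class ComparatorDemo {
    static class Payroll {                  // hypothetical record type
        final int id;
        final String name;
        Payroll(int id, String name) { this.id = id; this.name = name; }
    }

    // Return the record names in the order produced by the given strategy.
    static String[] sortedNames(Payroll[] recs, Comparator<Payroll> strategy) {
        Payroll[] copy = recs.clone();
        Arrays.sort(copy, strategy);        // strategy decides the order
        String[] names = new String[copy.length];
        for (int i = 0; i < copy.length; i++) names[i] = copy[i].name;
        return names;
    }

    public static void main(String[] args) {
        Payroll[] recs = { new Payroll(10, "Adam"), new Payroll(5, "Zed") };
        // Two strategies for the same record type: by ID and by name.
        // Comparator.comparing also performs the key extraction.
        String[] byId   = sortedNames(recs,
                Comparator.comparingInt((Payroll r) -> r.id));
        String[] byName = sortedNames(recs,
                Comparator.comparing((Payroll r) -> r.name));
        System.out.println(Arrays.toString(byId));   // [Zed, Adam]
        System.out.println(Arrays.toString(byName)); // [Adam, Zed]
    }
}
```

Note that sorting the same array by two different fields requires no change to the record class itself, which is exactly the flexibility the key-extraction approaches above lack.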
Further Reading

For more discussion on the choice of functions used to define the List ADT, see the work of the Reusable Software Research Group from Ohio State. Their definition for the List ADT can be found in [SWH]. More information about designing such classes can be found in [SW].

Exercises

Assume a list has the following configuration: ⟨ ... ⟩. Write a series of Java statements using the List ADT of the figure to delete the element with the specified value.

Show the list configuration resulting from each series of list operations using the List ADT of the figure. Assume that lists L1 and L2 are empty at the beginning of each series. Show where the current position is in the list.
(a) L1.append( ); L1.append( ); L1.append( );
(b) L2.append( ); L2.append( ); L2.append( ); L2.moveToStart(); L2.insert( ); L2.next(); L2.insert( );

Write a series of Java statements that uses the List ADT of the figure to create a list capable of holding twenty elements and which actually stores the list with the following configuration: ⟨ ... ⟩.

Using the List ADT of the figure, write a function to interchange the current element and the one following it.
In the linked list implementation presented earlier, the current position is implemented using a pointer to the element ahead of the logical current node. The more "natural" approach might seem to be to have curr point directly to the node containing the current element. However, if this was done, then the pointer of the node preceding the current one cannot be updated properly, because there is no access to this node from curr. An alternative is to add a new node after the current element, copy the value of the current element to this new node, and then insert the new value into the old current node.
(a) What happens if curr is at the end of the list already? Is there still a way to make this work? Is the resulting code simpler or more complex than the implementation presented earlier?
(b) Will deletion always work in constant time if curr points directly to the current node?

Add to the LList class implementation a member function to reverse the order of the elements on the list. Your algorithm should run in Θ(n) time for a list of n elements.

Write a function to merge two linked lists. The input lists have their elements in sorted order, from smallest to highest. The output list should also be sorted from highest to lowest. Your algorithm should run in linear time on the length of the output list.

A circular linked list is one in which the next field for the last link node of the list points to the first link node of the list. This can be useful when you wish to have a relative positioning for elements, but no concept of an absolute first or last position.
(a) Modify the code of the figure to implement circular singly linked lists.
(b) Modify the code of the figure to implement circular doubly linked lists.

An earlier section states that "the space required by the array-based list implementation is Ω(n), but can be greater." Explain why this is so.

An earlier section presents an equation for determining the break-even point for the space requirements of two implementations of lists. The variables are D, E, P, and n. What are the dimensional units for each variable? Show that both sides of the equation balance in terms of their dimensional units.

Use the space equation of the earlier section to determine the break-even point for an array-based list and linked list implementation for lists when the sizes for the data field, a pointer, and the array-based list's array are as specified.
(a) The data field is eight bytes, a pointer is four bytes, and the array holds twenty elements.
(b) The data field is two bytes, a pointer is four bytes, and the array holds thirty elements.
(c) The data field is one byte, a pointer is four bytes, and the array holds thirty elements.
(d) The data field is ⟨ ... ⟩ bytes, a pointer is four bytes, and the array holds forty elements.

Determine the size of an int variable, a double variable, and a pointer on your computer.
(a) Calculate the break-even point, as a function of n, beyond which the array-based list is more space efficient than the linked list for lists whose elements are of type int.
(b) Calculate the break-even point, as a function of n, beyond which the array-based list is more space efficient than the linked list for lists whose elements are of type double.

Modify the code of the figure to implement two stacks sharing the same array, as shown in the corresponding figure.

Modify the array-based queue definition of the figure to use a separate Boolean member to keep track of whether the queue is empty, rather than require that one array position remain empty.

A palindrome is a string that reads the same forwards as backwards. Using only a fixed number of stacks and queues, the stack and queue ADT functions, and a fixed number of int and char variables, write an algorithm to determine if a string is a palindrome. Assume that the string is read from standard input one character at a time. The algorithm should output true or false as appropriate.

Re-implement function fibr from the earlier exercise, using a stack to replace the recursive call as described earlier in this chapter.

Write a recursive algorithm to compute the value of the recurrence relation T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + n; T(1) = 1. Then, rewrite your algorithm to simulate the recursive calls with a stack.

Let Q be a non-empty queue, and let S be an empty stack. Using only the stack and queue ADT functions and a single element variable X, write an algorithm to reverse the order of the elements in Q.
A common problem for compilers and text editors is to determine if the parentheses (or other brackets) in a string are balanced and properly nested. For example, the string "((())())()" contains properly nested pairs of parentheses, but the string ")()(" does not, and the string "())" does not contain properly matching parentheses.
(a) Give an algorithm that returns true if a string contains properly nested and balanced parentheses, and false otherwise. Use a stack to keep track of the number of left parentheses seen so far. Hint: At no time while scanning a legal string from left to right will you have encountered more right parentheses than left parentheses.
(b) Give an algorithm that returns the position in the string of the first offending parenthesis if the string is not properly nested and balanced. That is, if an excess right parenthesis is found, return its position; if there are too many left parentheses, return the position of the first excess left parenthesis. Return -1 if the string is properly balanced and nested. Use a stack to keep track of the number and positions of left parentheses seen so far.

Imagine that you are designing an application where you need to perform the operations insert, delete maximum, and delete minimum. For this application, the cost of inserting is not important, because it can be done off-line prior to startup of the time-critical section, but the performance of the two deletion operations are critical. Repeated deletions of either kind must work as fast as possible. Suggest a data structure that can support this application, and justify your suggestion. What is the time complexity for each of the three key operations?

Write a function that reverses the order of an array of n items.

Projects

A deque (pronounced "deck") is like a queue, except that items may be added and removed from both the front and the rear. Write either an array-based or linked implementation for the deque.

One solution to the problem of running out of space for an array-based list implementation is to replace the array with a larger array whenever the original array overflows. A good rule that leads to an implementation that is both space and time efficient is to double the current size of the array when there is an overflow. Re-implement the array-based List class of the figure to support this array-doubling rule.
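The array-doubling rule in the last project can be sketched as follows. This is a minimal stand-alone version (not the text's AList class; the initial capacity of 4 is an arbitrary choice). Because capacities grow as 4, 8, 16, ..., a sequence of n appends performs fewer than 2n element copies in total, giving amortized constant time per append:

```java
import java.util.Arrays;

// Minimal sketch of an array-based list that doubles its array
// whenever an append overflows the current capacity.
public class DoublingList {
    private Object[] data = new Object[4];  // small initial capacity (arbitrary)
    private int size = 0;

    public void append(Object it) {
        if (size == data.length)            // overflow:
            data = Arrays.copyOf(data, 2 * data.length); // double capacity
        data[size++] = it;
    }

    public Object get(int i) {
        if (i < 0 || i >= size) throw new IndexOutOfBoundsException();
        return data[i];
    }

    public int size() { return size; }
    public int capacity() { return data.length; } // exposed for illustration

    public static void main(String[] args) {
        DoublingList list = new DoublingList();
        for (int i = 0; i < 100; i++) list.append(i);
        System.out.println(list.size() + " " + list.capacity()); // 100 128
    }
}
```

Note that the wasted space is at most half the array, which is why this rule is considered both space and time efficient.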
Use singly linked lists to implement integers of unlimited size. Each node of the list should store one digit of the integer. You should implement addition, subtraction, multiplication, and exponentiation operations. Limit exponents to be positive integers. What is the asymptotic running time for each of your operations, expressed in terms of the number of digits for the two operands of each function?

Implement doubly linked lists by storing the sum of the next and prev pointers in a single pointer variable, as described in the earlier example.

Implement a city database using unordered lists. Each database record contains the name of the city (a string of arbitrary length) and the coordinates of the city expressed as integer x and y coordinates. Your database should allow records to be inserted, deleted by name or coordinate, and searched by name or coordinate. Another operation that should be supported is to print all records within a given distance of a specified point. Implement the database using an array-based list implementation, and then a linked list implementation. Collect running time statistics for each operation in both implementations. What are your conclusions about the relative advantages and disadvantages of the two implementations? Would storing records on the list in alphabetical order by city name speed any of the operations? Would keeping the list in alphabetical order slow any of the operations?

Modify the code of the figure to support storing variable-length strings of at most some fixed number of characters. The stack array should have type char. A string is represented by a series of characters (one character per stack element), with the length of the string stored in the stack element immediately above the string itself, as illustrated by the figure. The push operation would store an element requiring i storage units in the i positions beginning with the current value of top, and store the size in the position i storage units above top. The value of top would then be reset above the newly inserted element. The pop operation need only look at the size value stored in position top and then pop off the appropriate number of units. You may store the string on the stack in reverse order if you prefer, provided that when it is popped from the stack, it is returned in its proper order.

Implement a collection of freelists for variable-length strings, as described at the end of the earlier section. For each such freelist, you will need an access function to get it if it exists, and implement it if it does not. A major design consideration is how to organize the collection of freelists, which are distinguished by the length of the strings. Essentially, what is needed is a dictionary of freelists, organized by string lengths.
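One plausible organization for the "dictionary of freelists" in the last project is a map from string length to a stack of free buffers. The following Java sketch is one possible design under that assumption (the class and method names, and the use of HashMap and ArrayDeque, are illustrative choices, not the text's):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Sketch: freelists for variable-length strings, organized as a
// dictionary keyed by string length.
public class Freelists {
    // One freelist (a stack of free char buffers) per string length.
    private final Map<Integer, ArrayDeque<char[]>> lists = new HashMap<>();

    // Access function: get the freelist for this length, creating it
    // on first use (the "implement it if it does not exist" behavior).
    private ArrayDeque<char[]> listFor(int len) {
        return lists.computeIfAbsent(len, k -> new ArrayDeque<char[]>());
    }

    // Reuse a free buffer of the right length if one exists;
    // otherwise allocate a fresh one.
    public char[] getBuffer(int len) {
        ArrayDeque<char[]> free = listFor(len);
        return free.isEmpty() ? new char[len] : free.pop();
    }

    // Return a buffer to the freelist matching its length.
    public void release(char[] buf) {
        listFor(buf.length).push(buf);
    }

    public int freeCount(int len) { return listFor(len).size(); }
}
```

A released buffer is handed back by the next getBuffer call of the same length, so repeated allocate/release cycles touch the allocator only once per distinct length.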
Figure: An array-based stack storing variable-length strings. Each position stores either one character or the length of the string immediately to the left of it in the stack.

Define an ADT for a bag (see the earlier section) and create an array-based implementation for bags. Be sure that your bag ADT does not rely in any way on knowing or controlling the position of an element. Then, implement the dictionary ADT of the figure using your bag implementation.

Implement the dictionary ADT of the figure using an unsorted linked list as defined by class LList in the figure. Make the implementation as efficient as you can, given the restriction that your implementation must use the unsorted linked list and its access operations to implement the dictionary. State the asymptotic time requirements for each function member of the dictionary ADT under your implementation.

Implement the dictionary ADT of the figure based on stacks. Your implementation should declare and use two stacks.

Implement the dictionary ADT of the figure based on queues. Your implementation should declare and use two queues.
Binary Trees

The list representations of the previous chapter have a fundamental limitation: Either search or insert can be made efficient, but not both at the same time. Tree structures permit both efficient access and update to large collections of data. Binary trees in particular are widely used and relatively easy to implement. But binary trees are useful for many things besides searching. Just a few examples of applications that trees can speed up include prioritizing jobs, describing mathematical expressions and the syntactic elements of computer programs, or organizing the information needed to drive data compression algorithms.

This chapter begins by presenting definitions and some key properties of binary trees. It then discusses how to process all nodes of the binary tree in an organized manner, and presents various methods for implementing binary trees and their nodes. Later sections present three examples of binary trees used in specific applications: the Binary Search Tree (BST) for implementing dictionaries, heaps for implementing priority queues, and Huffman coding trees for text compression. The BST, heap, and Huffman coding tree each have distinctive features that affect their implementation and use.

Definitions and Properties

A binary tree is made up of a finite set of elements called nodes. This set either is empty or consists of a node called the root together with two binary trees, called the left and right subtrees, which are disjoint from each other and from the root. (Disjoint means that they have no nodes in common.) The roots of these subtrees are children of the root. There is an edge from a node to each of its children, and a node is said to be the parent of its children.

If n1, n2, ..., nk is a sequence of nodes in the tree such that ni is the parent of ni+1 for 1 ≤ i < k, then this sequence is called a path from n1 to nk. The length
Figure: An example binary tree. Node A is the root. Nodes B and C are A's children. Nodes B and D together form a subtree. Node B has two children: Its left child is the empty tree and its right child is D. Nodes A, C, and E are ancestors of G. Nodes D, E, and F make up level 2 of the tree; node A is at level 0. The edges from A to C to E to G form a path of length 3. Nodes D, G, H, and I are leaves. Nodes A, B, C, E, and F are internal nodes. The depth of I is 3. The height of this tree is 4.

of the path is k − 1. If there is a path from node R to node M, then R is an ancestor of M, and M is a descendant of R. Thus, all nodes in the tree are descendants of the root of the tree, while the root is the ancestor of all nodes. The depth of a node M in the tree is the length of the path from the root of the tree to M. The height of a tree is one more than the depth of the deepest node in the tree. All nodes of depth d are at level d in the tree. The root is the only node at level 0, and its depth is 0. A leaf node is any node that has two empty children. An internal node is any node that has at least one non-empty child.

The figure above illustrates the various terms used to identify parts of a binary tree. The next figure illustrates an important point regarding the structure of binary trees. Because all binary tree nodes have two children (one or both of which might be empty), the two binary trees shown there are not the same.

Two restricted forms of binary tree are sufficiently important to warrant special names. Each node in a full binary tree is either (1) an internal node with exactly two non-empty children or (2) a leaf. A complete binary tree has a restricted shape obtained by starting at the root and filling the tree by levels from left to right. In the complete binary tree of height d, all levels except possibly level d − 1 are completely full. The bottom level has its nodes filled in from the left side.
Figure: Two different binary trees. (a) A binary tree whose root has a non-empty left child. (b) A binary tree whose root has a non-empty right child. (c) The binary tree of (a) with the missing right child made explicit. (d) The binary tree of (b) with the missing left child made explicit.

Figure: Examples of full and complete binary trees. (a) This tree is full (but not complete). (b) This tree is complete (but not full).

The second figure illustrates the differences between full and complete binary trees. There is no particular relationship between these two tree shapes; that is, the tree of part (a) is full but not complete while the tree of part (b) is complete but not full. The heap data structure (presented later) is an example of a complete binary tree. The Huffman coding tree (presented later) is an example of a full binary tree.

While these definitions for full and complete binary tree are the ones most commonly used, they are not universal. Some textbooks even reverse these definitions. Because the common meaning of the words "full" and "complete" are quite similar, there is little that you can do to distinguish between them other than to memorize the definitions. Here is a memory aid that you might find useful: "Complete" is a wider word than "full," and complete binary trees tend to be wider than full binary trees because each level of a complete binary tree is as wide as possible.
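The two definitions can be made concrete as code: a tree is full when every node has zero or two non-empty children, and complete when a level-order scan of its nodes encounters no node after the first missing child. The following self-contained Java sketch uses its own minimal node class (not the BinNode ADT defined later in the chapter):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class TreeShape {
    static class Node {
        Node left, right;
        Node(Node l, Node r) { left = l; right = r; }
    }

    // Full: every node is a leaf or has exactly two non-empty children.
    static boolean isFull(Node rt) {
        if (rt == null) return true;  // empty subtree raises no violation
        if ((rt.left == null) != (rt.right == null)) return false;
        return isFull(rt.left) && isFull(rt.right);
    }

    // Complete: in a level-order scan, once a missing child is seen,
    // no later non-empty node may appear (levels fill left to right).
    static boolean isComplete(Node rt) {
        if (rt == null) return true;
        Queue<Node> q = new ArrayDeque<>();
        q.add(rt);
        boolean sawGap = false;
        while (!q.isEmpty()) {
            Node n = q.remove();
            for (Node child : new Node[] { n.left, n.right }) {
                if (child == null) sawGap = true;
                else {
                    if (sawGap) return false;  // non-empty node after a gap
                    q.add(child);
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Full but not complete: a leaf left child, and a right child
        // with two leaf children.
        Node fullTree = new Node(new Node(null, null),
                new Node(new Node(null, null), new Node(null, null)));
        System.out.println(isFull(fullTree) + " " + isComplete(fullTree));
        // prints: true false
    }
}
```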
Figure: A tree containing n internal nodes (arranged in a chain of any length) and a single leaf.

The Full Binary Tree Theorem

Some binary tree implementations store data only at the leaf nodes, using the internal nodes to provide structure to the tree. More generally, binary tree implementations might require some amount of space for internal nodes, and a different amount for leaf nodes. Thus, to analyze the space required by such implementations, it is useful to know the minimum and maximum fraction of the nodes that are leaves in a tree containing n internal nodes.

Unfortunately, this fraction is not fixed. A binary tree of n internal nodes might have only one leaf. This occurs when the internal nodes are arranged in a chain ending in a single leaf, as shown in the figure. In this case, the number of leaves is low because each internal node has only one non-empty child. To find an upper bound on the number of leaves for a tree of n internal nodes, first note that the upper bound will occur when each internal node has two non-empty children, that is, when the tree is full. However, this observation does not tell what shape of tree will yield the highest percentage of non-empty leaves. It turns out not to matter, because all full binary trees with n internal nodes have the same number of leaves. This fact allows us to compute the space requirements for a full binary tree implementation whose leaves require a different amount of space from its internal nodes.

Theorem (Full Binary Tree Theorem): The number of leaves in a non-empty full binary tree is one more than the number of internal nodes.

Proof: The proof is by mathematical induction on n, the number of internal nodes. This is an example of an induction proof where we reduce from an arbitrary instance of size n to an instance of size n − 1 that meets the induction hypothesis.

Base cases: The non-empty tree with zero internal nodes has one leaf node. A full binary tree with one internal node has two leaf nodes. Thus, the base cases for n = 0 and n = 1 conform to the theorem.

Induction hypothesis: Assume that any full binary tree T containing n − 1 internal nodes has n leaves.
Induction step: Given tree T with n internal nodes, select an internal node I whose children are both leaf nodes. Remove both of I's children, making I a leaf node. Call the new tree T′. T′ has n − 1 internal nodes. From the induction hypothesis, T′ has n leaves. Now, restore I's two children. We once again have tree T with n internal nodes. How many leaves does T have? Because T′ has n leaves, adding the two children yields n + 2. However, node I counted as one of the leaves in T′ and has now become an internal node. Thus, tree T has n + 1 leaf nodes and n internal nodes. By mathematical induction the theorem holds for all values of n ≥ 0.

When analyzing the space requirements for a binary tree implementation, it is useful to know how many empty subtrees a tree contains. A simple extension of the Full Binary Tree Theorem tells us exactly how many empty subtrees there are in any binary tree, whether full or not. Here are two approaches to proving the following theorem, and each suggests a useful way of thinking about binary trees.

Theorem: The number of empty subtrees in a non-empty binary tree is one more than the number of nodes in the tree.

Proof 1: Take an arbitrary binary tree T and replace every empty subtree with a leaf node. Call the new tree T′. All nodes originally in T will be internal nodes in T′ (because even the leaf nodes of T have children in T′). T′ is a full binary tree, because every internal node of T now must have two children in T′, and each leaf node in T must have two children in T′ (the leaves just added). The Full Binary Tree Theorem tells us that the number of leaves in a full binary tree is one more than the number of internal nodes. Thus, the number of new leaves that were added to create T′ is one more than the number of nodes in T. Each leaf node in T′ corresponds to an empty subtree in T. Thus, the number of empty subtrees in T is one more than the number of nodes in T.

Proof 2: By definition, every node in a binary tree T has two children, for a total of 2n children in a tree of n nodes. Every node except the root node has one parent, for a total of n − 1 nodes with parents. In other words, there are n − 1 non-empty children. Because the total number of children is 2n, the remaining n + 1 children must be empty.

Binary Tree Node ADT

Just as we developed a generic list ADT on which to build specialized list implementations, we would like to define a generic binary tree ADT based on those
    /** ADT for binary tree nodes */
    public interface BinNode<E> {
      /** Return and set the element value */
      public E element();
      public void setElement(E v);

      /** Return the left child */
      public BinNode<E> left();

      /** Return the right child */
      public BinNode<E> right();

      /** Return true if this is a leaf node */
      public boolean isLeaf();
    }

Figure: A binary tree node ADT.

aspects of binary trees that are relevant to all applications. For example, we must be able to initialize a binary tree, and we might wish to determine if the tree is empty. Some activities might be unique to the application. For example, we might wish to combine two binary trees by making their roots be the children of a new root node. Other activities are centered around the nodes. For example, we might need access to the left or right child of a node, or to the node's data value.

Clearly there are activities that relate to nodes (e.g., reach a node's child or get a node's value), and activities that relate to trees (e.g., tree initialization). This indicates that nodes and trees should be implemented as separate classes. For now, we concentrate on the class to implement binary tree nodes. This class will be used by some of the binary tree structures presented later.

The figure above shows an ADT for binary tree nodes, called BinNode. Class BinNode is generic with parameter E, which is the type for the data record stored in the node. Member functions are provided that set or return the element value, return a reference to the left child, return a reference to the right child, or indicate whether the node is a leaf.

Binary Tree Traversals

Often we wish to process a binary tree by "visiting" each of its nodes, each time performing a specific action such as printing the contents of the node. Any process for visiting all of the nodes in some order is called a traversal. Any traversal that lists every node in the tree exactly once is called an enumeration of the tree's nodes. Some applications do not require that the nodes be visited in any particular order as long as each node is visited precisely once. For other applications, nodes
must be visited in an order that preserves some relationship. For example, we might wish to make sure that we visit any given node before we visit its children. This is called a preorder traversal.

Example: The preorder enumeration for the example tree is A B D C E G F H I. The first node printed is the root. Then all nodes of the left subtree are printed (in preorder) before any node of the right subtree.

Alternatively, we might wish to visit each node only after we visit its children (and their subtrees). For example, this would be necessary if we wish to return all nodes in the tree to free store. We would like to delete the children of a node before deleting the node itself. But to do that requires that the children's children be deleted first, and so on. This is called a postorder traversal.

Example: The postorder enumeration for the example binary tree is D B G E H I F C A.

An inorder traversal first visits the left child (including its entire subtree), then visits the node, and finally visits the right child (including its entire subtree). The binary search tree, discussed later in this chapter, makes use of this traversal.

Example: The inorder enumeration for the example binary tree is B D A G E C H F I.

A traversal routine is naturally written as a recursive function. Its input parameter is a pointer to a node, which we will call rt, because each node can be viewed as the root of some subtree. The initial call to the traversal function passes in a pointer to the root node of the tree. The traversal function visits rt and its children (if any) in the desired order. For example, a preorder traversal specifies that rt be visited before its children. This can easily be implemented as follows:

  void preorder(BinNode rt) { // rt is the root of the subtree
    if (rt == null) return;   // Empty subtree
    visit(rt);
    preorder(rt.left());
    preorder(rt.right());
  }
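The three enumerations above can be checked by running all of the traversals over the same tree. The sketch below is self-contained: it uses its own minimal Node class as a stand-in for the BinNode ADT (not the book's class), and builds a tree whose preorder enumeration is A B D C E G F H I.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal node class standing in for the BinNode ADT of the text.
class Node {
    char val;
    Node left, right;
    Node(char v, Node l, Node r) { val = v; left = l; right = r; }
}

public class Traversals {
    // Each traversal appends visited values to out instead of printing,
    // so the three orders are easy to compare.
    static void preorder(Node rt, List<Character> out) {
        if (rt == null) return;        // empty subtree
        out.add(rt.val);               // visit before the children
        preorder(rt.left, out);
        preorder(rt.right, out);
    }
    static void inorder(Node rt, List<Character> out) {
        if (rt == null) return;
        inorder(rt.left, out);
        out.add(rt.val);               // visit between the children
        inorder(rt.right, out);
    }
    static void postorder(Node rt, List<Character> out) {
        if (rt == null) return;
        postorder(rt.left, out);
        postorder(rt.right, out);
        out.add(rt.val);               // visit after the children
    }

    // Run one traversal and return the visit order as a string.
    static String run(java.util.function.BiConsumer<Node, List<Character>> t, Node root) {
        List<Character> out = new ArrayList<>();
        t.accept(root, out);
        StringBuilder sb = new StringBuilder();
        for (char c : out) sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        // A tree consistent with the example enumerations:
        // A(B(-,D), C(E(G,-), F(H,I)))
        Node root = new Node('A',
            new Node('B', null, new Node('D', null, null)),
            new Node('C',
                new Node('E', new Node('G', null, null), null),
                new Node('F', new Node('H', null, null), new Node('I', null, null))));
        System.out.println(run(Traversals::preorder, root));   // ABDCEGFHI
        System.out.println(run(Traversals::postorder, root));  // DBGEHIFCA
        System.out.println(run(Traversals::inorder, root));    // BDAGECHFI
    }
}
```

Note how only the position of `out.add` changes between the three functions; the recursive skeleton is identical.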
Function preorder first checks that the tree is not empty (if it is, then the traversal is done and preorder simply returns). Otherwise, preorder makes a call to visit, which processes the root node (i.e., prints the value or performs some computation as required by the application). Function preorder is then called recursively on the left subtree, which will visit all nodes in that subtree. Finally, preorder is called on the right subtree, visiting all remaining nodes in the tree. Postorder and inorder traversals are similar. They simply change the order in which the node and its children are visited, as appropriate.

An important design decision regarding the implementation of any recursive function on trees is when to check for an empty subtree. Function preorder first checks to see if the value for rt is null. If not, it will recursively call itself on the left and right children of rt. In other words, preorder makes no attempt to avoid calling itself on an empty child. Some programmers attempt an alternate design in which the left and right pointers of the current node are checked so that the recursive call is made only on non-empty children. Such a design typically looks as follows:

  void preorder2(BinNode rt) {
    visit(rt);
    if (rt.left() != null) preorder2(rt.left());
    if (rt.right() != null) preorder2(rt.right());
  }

At first it might appear that preorder2 is more efficient than preorder, because it makes only half as many recursive calls (why?). On the other hand, preorder2 must access the left and right child pointers twice as often. The net result is little or no performance improvement.

In reality, the design of preorder2 is generally inferior to that of preorder for two reasons. First, while it is not apparent in this simple example, for more complex traversals it can become awkward to place the check for the null pointer in the calling code. Even in this simple example we had to write two tests for null, rather than the one needed by the first design. The second and more important problem with preorder2 is that it tends to be error prone. While preorder2 ensures that no recursive calls will be made on empty subtrees, it will fail if the initial call passes in a null pointer. This would occur if the original tree is empty. To avoid the bug, either preorder2 needs an additional test for a null pointer at the beginning (making the subsequent tests redundant after all), or the caller of preorder2 has a hidden obligation to pass in a non-empty tree, which is unreliable design. The net result is that many programmers forget to test for the possibility that the empty tree
is being traversed by using the first designwhich explicitly supports processing of empty subtreesthe problem is avoided another issue to consider when designing traversal is how to define the visitor function that is to be executed on every node one approach is simply to write new version of the traversal for each such visitor function as needed the disadvantage to this is that whatever function does the traversal must have access to the binnode class it is probably better design to permit only the tree class to have access to the binnode class another approach is for the tree class to supply generic traversal function which takes the visitor as function parameter this is known as the visitor design pattern major constraint on this approach is that the signature for all visitor functionsthat istheir return type and parametersmust be fixed in advance thusthe designer of the generic traversal function must be able to adequately judge what parameters and return type will likely be needed by potential visitor functions properly handling information flow between parts of program can often be significant design challenge this issue tends to be particularly confusing when dealing with recursive functions such as tree traversals in generalwe can run into trouble either with passing in the correct information needed by the function to do its workor with returning information to the recursive function' caller we will see many examples throughout the book that illustrate methods for passing information in and out of recursive functions as they traverse tree structure before leaving this sectionwe will study few simple examples let us first consider the simple case where computation requires that we communicate information back up the tree to the end user example we wish to write function that counts the number of nodes in binary tree the key insight is that the total count for any (non-emptysubtree is one for the root plus the counts for the left and right subtrees we can 
implement the function as follows:

  int count(BinNode rt) {
    if (rt == null) return 0;  // Nothing to count
    return 1 + count(rt.left()) + count(rt.right());
  }

Another problem that occurs when recursively processing data collections is controlling which members of the collection will be visited. For example, some tree "traversals" might in fact visit only some tree nodes, while avoiding processing of others. An exercise later in the chapter must solve exactly this problem in the context of the binary search tree. It must visit only those children of a given node that might possibly
[Figure caption: To be a binary search tree, the left child of a node must take its value from the range determined by that node and its ancestors.]

fall within a given range of values. Fortunately, it requires only a simple local calculation to determine which child(ren) to visit.

A more difficult situation is illustrated by the following problem. Given an arbitrary binary tree, we wish to determine if, for every node A, all nodes in A's left subtree are less than the value of A, and all nodes in A's right subtree are greater than the value of A. (This happens to be the definition for a binary search tree, described later in this chapter.) Unfortunately, to make this decision we need to know some context that is not available just by looking at the node's parent or children. As shown by the figure, it is not enough to verify that A's left child has a value less than that of A, and that A's right child has a greater value. Nor is it enough to verify that A has a value consistent with that of its parent. In fact, we need to know information about what range of values is legal for a given node. That information might come from any ancestor of the node. Thus, relevant range information must be passed down the tree. We can implement this function as follows:

  boolean checkBST(BSTNode<Integer,Integer> root, Integer low, Integer high) {
    if (root == null) return true;   // Empty subtree
    Integer rootkey = root.key();
    if ((rootkey < low) || (rootkey > high))
      return false;                  // Out of range
    if (!checkBST(root.left(), low, rootkey))
      return false;                  // Left side failed
    return checkBST(root.right(), rootkey, high);
  }

Binary Tree Node Implementations

In this section we will examine ways to implement binary tree nodes. We begin with some options for pointer-based binary tree node implementations. Then comes
discussion on techniques for determining the space requirements for given binary tree node implementation the section concludes with an introduction to the arraybased implementation for complete binary trees pointer-based node implementations by definitionall binary tree nodes have two childrenthough one or both children can be empty binary tree nodes normally contain value fieldwith the type of the field depending on the application the most common node implementation includes value field and pointers to the two children figure shows simple implementation for the binnode abstract classwhich we will name bstnode class bstnode includes data member of type (which is the second generic parameterfor the element type to support search structures such as the binary search treean additional field is includedwith corresponding access methodsstore key value (whose purpose is explained in section its type is determined by the first generic parameternamed key every bstnode object also has two pointersone to its left child and another to its right child figure shows an illustration of the bstnode implementation some programmers find it convenient to add pointer to the node' parentallowing easy upward movement in the tree using parent pointer is somewhat analogous to adding link to the previous node in doubly linked list in practicethe parent pointer is almost always unnecessary and adds to the space overhead for the tree implementation it is not just problem that parent pointers take space more importantlymany uses of the parent pointer are driven by improper understanding of recursion and so indicate poor programming if you are inclined toward using parent pointerconsider if there is more efficient implementation possible an important decision in the design of pointer-based node implementation is whether the same class definition will be used for leaves and internal nodes using the same class for both will simplify the implementationbut might be an inefficient use of space 
some applications require data values only for the leaves other applications require one type of value for the leaves and another for the internal nodes examples include the binary trie of section the pr quadtree of section the huffman coding tree of section and the expression tree illustrated by figure by definitiononly internal nodes have non-empty children if we use the same node implementation for both internal and leaf nodesthen both must store the child pointers but it seems wasteful to store child pointers in the leaf nodes thusthere are many reasons why it can save space to have separate implementations for internal and leaf nodes
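To make the leaf/internal distinction concrete, the following sketch evaluates a small expression tree of the kind described above: leaves store only an operand (no child pointers), while internal nodes store only an operator and two children. All class and method names here (ExprNode, OperandNode, OperatorNode, eval) are invented for this example; they are not the classes of the figures.

```java
import java.util.Map;

// Base type for expression tree nodes; leaves and internal nodes differ.
interface ExprNode {
    double eval(Map<String, Double> vars);
}

// A leaf stores only an operand (a number or a variable name) -- no child pointers.
class OperandNode implements ExprNode {
    private final String operand;
    OperandNode(String operand) { this.operand = operand; }
    public double eval(Map<String, Double> vars) {
        return vars.containsKey(operand)
            ? vars.get(operand)              // variable: look up its value
            : Double.parseDouble(operand);   // otherwise parse as a literal
    }
}

// An internal node stores a one-character operator and exactly two children.
class OperatorNode implements ExprNode {
    private final char op;
    private final ExprNode left, right;
    OperatorNode(char op, ExprNode l, ExprNode r) { this.op = op; left = l; right = r; }
    public double eval(Map<String, Double> vars) {
        double l = left.eval(vars), r = right.eval(vars);
        switch (op) {
            case '+': return l + r;
            case '-': return l - r;
            case '*': return l * r;
            default:  return l / r;  // anything else is treated as '/'
        }
    }
}

public class ExprDemo {
    public static void main(String[] args) {
        // The tree for 4x * (2x + a) - c
        ExprNode tree = new OperatorNode('-',
            new OperatorNode('*',
                new OperatorNode('*', new OperandNode("4"), new OperandNode("x")),
                new OperatorNode('+',
                    new OperatorNode('*', new OperandNode("2"), new OperandNode("x")),
                    new OperandNode("a"))),
            new OperandNode("c"));
        // With x=1, a=3, c=2: 4*1 * (2*1 + 3) - 2 = 18
        System.out.println(tree.eval(Map.of("x", 1.0, "a", 3.0, "c", 2.0)));
    }
}
```

The point of the design is visible in the field lists: OperandNode has no left/right fields at all, so no space is wasted on null child pointers in the leaves.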
chap binary trees /*binary tree node implementationpointers to children *class bstnode implements binnode private key key/key for this node private element/element for this node private bstnode left/pointer to left child private bstnode right/pointer to right child /*constructors *public bstnode({left right nullpublic bstnode(key ke valleft right nullkey kelement valpublic bstnode(key ke valbstnode lbstnode rleft lright rkey kelement val/*return and set the key value *public key key(return keypublic void setkey(key kkey /*return and set the element value *public element(return elementpublic void setelement( velement /*return and set the left child *public bstnode left(return leftpublic void setleft(bstnode pleft /*return and set the right child *public bstnode right(return rightpublic void setright(bstnode pright /*return true if this is leaf node *public boolean isleaf(return (left =null&(right =null)figure binary tree node class implementation as an example of tree that stores different information at the leaf and internal nodesconsider the expression tree illustrated by figure the expression tree represents an algebraic expression composed of binary operators such as additionsubtractionmultiplicationand division internal nodes store operatorswhile the leaves store operands the tree of figure represents the expression ( ac the storage requirements for leaf in an expression tree are quite different from those of an internal node internal nodes store one of small set of operatorsso internal nodes could store either small code identifying the operator or single byte for the operator' character symbol in contrastleaves must store variable names or numbers thusthe leaf node value field must be considerably
sec binary tree node implementations figure illustration of typical pointer-based binary tree implementationwhere each node stores two child pointers and value figure an expression tree for ( ac larger to handle the wider range of possible values at the same timeleaf nodes need not store child pointers java allows us to differentiate leaf from internal nodes through the use of class inheritance as we have seen with lists and with class binnodea base class provides general definition for an objectand subclass modifies base class to add more detail base class can be declared for nodes in generalwith subclasses defined for the internal and leaf nodes the base class of figure is named varbinnode it includes virtual member function named isleafwhich indicates the node type subclasses for the internal and leaf node types each implement isleaf internal nodes store child pointers of the base class typethey do not distinguish their children' actual subclass whenever node is examinedits version of isleaf indicates the node' subclass
chap binary trees public interface varbinnode public boolean isleaf()class varleafnode implements varbinnode /leaf node private string operand/operand value public varleafnode(string valoperand valpublic boolean isleaf(return truepublic string value(return operand}/*internal node *class varintlnode implements varbinnode private varbinnode left/left child private varbinnode right/right child private character operator/operator value public varintlnode(character opvarbinnode lvarbinnode roperator opleft lright rpublic boolean isleaf(return falsepublic varbinnode leftchild(return leftpublic varbinnode rightchild(return rightpublic character value(return operator/*preorder traversal *public static void traverse(varbinnode rtif (rt =nullreturn/nothing to visit if (rt isleaf()/process leaf node visit visitleafnode(((varleafnode)rtvalue())else /process internal node visit visitinternalnode(((varintlnode)rtvalue())traverse(((varintlnode)rtleftchild())traverse(((varintlnode)rtrightchild())figure an implementation for separate internal and leaf node representations using java class inheritance and virtual functions
figure presents sample node implementation it includes two classes derived from class varbinnodenamed leafnode and intlnode class intlnode accesses its children through pointers of type varbinnode function traverse illustrates the use of these classes when traverse calls method isleafjava' runtime environment determines which subclass this particular instance of root happens to be and calls that subclass' version of isleaf method isleaf then provides the actual node type to its caller the other member functions for the derived subclasses are accessed by type-casting the base class pointer as appropriateas shown in function traverse there is another approach that we can take to represent separate leaf and internal nodesalso using virtual base class and separate node classes for the two types this is to implement nodes using the composite design pattern this approach is noticeably different from the one of figure in that the node classes themselves implement the functionality of traverse figure shows the implementation herebase class varbinnode declares member function traverse that each subclass must implement each subclass then implements its own appropriate behavior for its role in traversal the whole traversal process is called by invoking traverse on the root nodewhich in turn invokes traverse on its children when comparing the implementations of figures and each has advantages and disadvantages the first does not require that the node classes know about the traverse function with this approachit is easy to add new methods to the tree class that do other traversals or other operations on nodes of the tree howeverwe see that traverse in figure does need to be familiar with each node subclass adding new node subclass would therefore require modifications to the traverse function in contrastthe approach of figure requires that any new operation on the tree that requires traversal also be implemented in the node subclasses on the other handthe approach of figure 
avoids the need for the traverse function to know anything about the distinct abilities of the node subclasses those subclasses handle the responsibility of performing traversal on themselves secondary benefit is that there is no need for traverse to explicitly enumerate all of the different node subclassesdirecting appropriate action for each with only two node classes this is minor point but if there were many such subclassesthis could become bigger problem disadvantage is that the traversal operation must not be called on null pointerbecause there is no object to catch the call this problem could be avoided by using flyweight to implement empty nodes typicallythe version of figure would be preferred in this example if traverse is member function of the tree classand if the node subclasses are
chap binary trees public interface varbinnode public boolean isleaf()public void traverse()class varleafnode implements varbinnode /leaf node private string operand/operand value public varleafnode(string valoperand valpublic boolean isleaf(return truepublic string value(return operandpublic void traverse(visit visitleafnode(operand)class varintlnode implements varbinnode /internal node private varbinnode left/left child private varbinnode right/right child private character operator/operator value public varintlnode(character opvarbinnode lvarbinnode roperator opleft lright rpublic boolean isleaf(return falsepublic varbinnode leftchild(return leftpublic varbinnode rightchild(return rightpublic character value(return operatorpublic void traverse(visit visitinternalnode(operator)if (left !nullleft traverse()if (right !nullright traverse()/*preorder traversal *public static void traverse(varbinnode rtif (rt !nullrt traverse()figure second implementation for separate internal and leaf node representations using java class inheritance and virtual functions using the composite design pattern herethe functionality of traverse is embedded into the node subclasses
sec binary tree node implementations hidden from users of that tree class on the other handif the nodes are objects that have meaning to users of the tree separate from their existence as nodes in the treethen the version of figure might be preferred because hiding the internal behavior of the nodes becomes more important space requirements this section presents techniques for calculating the amount of overhead required by binary tree implementation recall that overhead is the amount of space necessary to maintain the data structure in other wordsit is any space not used to store data records the amount of overhead depends on several factors including which nodes store data values (all nodesor just the leaves)whether the leaves store child pointersand whether the tree is full binary tree in simple pointer-based implementation for the binary tree such as that of figure every node has two pointers to its children (even when the children are nullthis implementation requires total space amounting to ( dfor tree of nodes herep stands for the amount of space required by pointerand stands for the amount of space required by data value the total overhead space will be for the entire tree thusthe overhead fraction will be /( dthe actual value for this expression depends on the relative size of pointers versus data fields if we arbitrarily assume that dthen full tree has about two thirds of its total space taken up in overhead worse yettheorem tells us that about half of the pointers are "wastednull values that serve only to indicate tree structurebut which do not provide access to new data if only leaves store data valuesthen the fraction of total space devoted to overhead depends on whether the tree is full if the tree is not fullthen conceivably there might only be one leaf node at the end of series of internal nodes thusthe overhead can be an arbitrarily high percentage for non-full binary trees the overhead fraction drops as the tree becomes closer to fullbeing lowest 
when the tree is truly full. In this case, about one half of the nodes are internal.

Great savings can be had by eliminating the pointers from leaf nodes in full binary trees. Because about half of the nodes are leaves and half are internal nodes, and because only internal nodes now have overhead, the overhead fraction in this case will be approximately

  (n/2)(2p) / ((n/2)(2p) + dn) = p / (p + d)
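The overhead fractions can be sanity-checked numerically. This sketch assumes, arbitrarily, that a pointer and a data field each occupy 4 bytes (p = d); the byte sizes are illustrative assumptions, not from the text.

```java
public class Overhead {
    // All n nodes store a data field and two child pointers:
    // overhead fraction = 2p / (2p + d).
    static double allNodes(double p, double d) {
        return 2 * p / (2 * p + d);
    }

    // Full tree with leaf pointers eliminated: only the n/2 internal nodes
    // carry 2 pointers each, while all n nodes carry a data field:
    // (n/2)(2p) / ((n/2)(2p) + dn) = p / (p + d); note that n cancels out.
    static double fullNoLeafPointers(double p, double d) {
        return p / (p + d);
    }

    public static void main(String[] args) {
        // With the (arbitrary) assumption p = d = 4:
        System.out.println(allNodes(4, 4));           // about two thirds is overhead
        System.out.println(fullNoLeafPointers(4, 4)); // exactly one half is overhead
    }
}
```

Because n cancels in both fractions, the overhead ratio is a property of the node layout alone, independent of tree size.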
If p = d, the overhead drops to about one half of the total space. However, if only leaf nodes store useful information, the overhead fraction for this implementation is actually three quarters of the total space, because half of the "data" space is unused.

If a full binary tree needs to store data only at the leaf nodes, a better implementation would have the internal nodes store two pointers and no data field, while the leaf nodes store only a data field. This implementation requires (n/2)(2p) + (n/2)d units of space. If p = d, then the overhead is about 2p/(2p + d) = 2/3. It might seem counter-intuitive that the overhead ratio has gone up while the total amount of space has gone down. The reason is that we have changed our definition of "data" to refer only to what is stored in the leaf nodes, so while the overhead fraction is higher, it is a fraction of a total storage requirement that is lower.

There is one serious flaw with this analysis. When using separate implementations for internal and leaf nodes, there must be a way to distinguish between the node types. When separate node types are implemented via Java subclasses, the runtime environment stores information with each object, allowing it to determine, for example, the correct subclass to use when the isLeaf virtual function is called. Thus, each node requires additional space. Only one bit is truly necessary to distinguish the two possibilities. In rare applications where space is a critical resource, implementors can often find a spare bit within the node's value field in which to store the node type indicator. An alternative is to use a spare bit within a node pointer to indicate node type. For example, this is often possible when the compiler requires that structures and objects start on word boundaries, leaving the last bit of a pointer value always zero. Thus, this bit can be used to store the node-type flag and is reset to zero before the pointer is dereferenced. Another alternative when the leaf value field is smaller than a pointer is to replace the pointer to a leaf with that leaf's value. When
space is limited, such techniques can make the difference between success and failure. In any other situation, such "bit packing" tricks should be avoided because they are difficult to debug and understand at best, and are often machine dependent at worst. (In the early to mid-1990s, I worked on a geographic information system that stored spatial data in quadtrees. At the time, space was a critical resource, so we used a bit-packing approach where we stored the node-type flag as the last bit in the parent node's pointer. This worked perfectly on various 32-bit workstations. Unfortunately, in those days IBM PC-compatibles used 16-bit pointers. We never did figure out how to port our code to the 16-bit machine.)

Array Implementation for Complete Binary Trees

The previous section points out that a large fraction of the space in a typical binary tree node implementation is devoted to structural overhead, not to storing data.
This section presents a simple, compact implementation for complete binary trees. Recall that complete binary trees have all levels except the bottom filled out completely, and the bottom level has all of its nodes filled in from left to right. Thus, a complete binary tree of n nodes has only one possible shape. You might think that the complete binary tree is such an unusual occurrence that there is no reason to develop a special implementation for it. However, the complete binary tree has practical uses, the most important being the heap data structure. Heaps are often used to implement priority queues and for external sorting algorithms.

We begin by assigning numbers to the node positions in the complete binary tree, level by level, from left to right, as shown in Figure (a). An array can store the tree's data values efficiently, placing each data value in the array position corresponding to that node's position within the tree. Figure (b) lists the array indices for the children, parent, and siblings of each node in Figure (a). From Figure (b), you should see a pattern regarding the positions of a node's relatives within the array. Simple formulae can be derived for calculating the array index for each relative of a node r from r's index. No explicit pointers are necessary to reach a node's left or right child. This means there is no overhead to the array implementation if the array is selected to be of size n for a tree of n nodes.

The formulae for calculating the array indices of the various relatives of a node are as follows. The total number of nodes in the tree is n. The index of the node in question is r, which must fall in the range 0 to n − 1.

  Parent(r) = ⌊(r − 1)/2⌋ if r ≠ 0.
  Leftchild(r) = 2r + 1 if 2r + 1 < n.
  Rightchild(r) = 2r + 2 if 2r + 2 < n.
  Leftsibling(r) = r − 1 if r is even.
  Rightsibling(r) = r + 1 if r is odd and r + 1 < n.

Binary Search Trees

An earlier section presented the dictionary ADT, along with dictionary implementations based on sorted and unsorted lists. When implementing the dictionary with an unsorted list, inserting a new record into the dictionary can be performed quickly by putting
it at the end of the list howeversearching an unsorted list for particular record requires th(ntime in the average case for large databasethis is probably much too slow alternativelythe records can be stored in sorted list if the list is implemented using linked listthen no speedup to the search operation will result from storing the records in sorted order on the other handif we use sorted
[Figure caption: A complete binary tree and its array implementation. (a) The complete binary tree with twelve nodes; each node has been labeled with its position in the tree. (b) The positions for the relatives of each node; a dash indicates that the relative does not exist.]

array-based list to implement the dictionary, then binary search can be used to find a record in only Θ(log n) time. However, insertion will now require Θ(n) time on average because, once the proper location for the new record in the sorted list has been found, many records might be shifted to make room for the new record.

Is there some way to organize a collection of records so that inserting records and searching for records can both be done quickly? This section presents the binary search tree (BST), which allows an improved solution to this problem.

A BST is a binary tree that conforms to the following condition, known as the binary search tree property: all nodes stored in the left subtree of a node whose key value is K have key values less than K; all nodes stored in the right subtree of a node whose key value is K have key values greater than or equal to K. The accompanying figure shows two BSTs for the same collection of values. One consequence of the binary search tree property is that if the BST nodes are printed using an inorder traversal, the resulting enumeration will be in sorted order from lowest to highest.
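The relatives table in the figure can be reproduced directly from the index formulas. This sketch assumes the standard 0-based, level-by-level numbering described above, and uses -1 as a hypothetical "no such relative" marker (the table's dash).

```java
public class CompleteTree {
    // Array-index formulas for a complete binary tree of n nodes,
    // positions numbered 0..n-1 level by level, left to right.
    // A return value of -1 means the relative does not exist.
    static int parent(int r)              { return (r > 0) ? (r - 1) / 2 : -1; }
    static int leftChild(int r, int n)    { return (2 * r + 1 < n) ? 2 * r + 1 : -1; }
    static int rightChild(int r, int n)   { return (2 * r + 2 < n) ? 2 * r + 2 : -1; }
    static int leftSibling(int r)         { return (r % 2 == 0 && r > 0) ? r - 1 : -1; }
    static int rightSibling(int r, int n) { return (r % 2 == 1 && r + 1 < n) ? r + 1 : -1; }

    public static void main(String[] args) {
        int n = 12; // the twelve-node tree of the figure
        // Prints one row of the relatives table per node position.
        for (int r = 0; r < n; r++)
            System.out.printf("%2d: parent=%2d left=%2d right=%2d lsib=%2d rsib=%2d%n",
                r, parent(r), leftChild(r, n), rightChild(r, n),
                leftSibling(r), rightSibling(r, n));
    }
}
```

No pointers are stored anywhere: each relationship is recomputed in constant time from the index alone, which is exactly why the array implementation has no structural overhead.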
sec binary search trees ( (bfigure two binary search trees for collection of values tree (aresults if values are inserted in the order tree (bresults if the same values are inserted in the order figure shows class declaration for the bst that implements the dictionary adt the public member functions include those required by the dictionary adtalong with constructor and destructor to find record with key value in bstbegin at the root if the root stores record with key value kthen the search is over if notthen we must search deeper in the tree what makes the bst efficient during search is that we need search only one of the node' two subtrees if is less than the root node' key valuewe search only the left subtree if is greater than the root node' key valuewe search only the right subtree this process continues until record with key value is foundor we reach leaf node if we reach leaf node without encountering kthen no record exists in the bst whose key value is example consider searching for the node with key value in the tree of figure (abecause is less than the root value of the search proceeds to the left subtree because is greater than we search in ' right subtree at this point the node containing is found if the search value were the same path would be followed to the node containing because this node has no childrenwe know that is not in the bst notice that in figure public member function find calls private member function named findhelp method find takes the search key as an explicit parameter and its bst as an implicit parameteralong with space to place copy of
chap binary trees import java lang comparable/*binary search tree implementation for dictionary adt *class bsteimplements dictionary private bstnode root/root of the bst int nodecount/number of nodes in the bst /*constructor *bst(root nullnodecount /*reinitialize tree *public void clear(root nullnodecount /*insert record into the tree @param key value of the record @param the record to insert *public void insert(key ke eroot inserthelp(rootke)nodecount++/*remove record from the tree @param key value of record to remove @return the record removedor null if there is none *public remove(key ke temp findhelp(rootk)/first find it if (temp !nullroot removehelp(rootk)/now remove it nodecount--return tempfigure class declaration for the binary search tree the record if it is found howeverthe find operation is most easily implemented as recursive function whose parameters are the root of bst subtreethe search keyand space for the element once it is found member findhelp is of the desired form for this recursive subroutine and is implemented as followsprivate findhelp(bstnode rtkey kif (rt =nullreturn nullif (rt key(compareto( return findhelp(rt left() )else if (rt key(compareto( = return rt element()else return findhelp(rt right() )
  /** Remove and return the root node from the dictionary.
      @return The record removed, null if tree is empty. */
  public E removeAny() {
    if (root != null) {
      E temp = root.element();
      root = removehelp(root, root.key());
      nodecount--;
      return temp;
    }
    else return null;
  }

  /** @return Record with key value k, null if none exist.
      @param k The key value to find. */
  public E find(Key k) { return findhelp(root, k); }

  /** @return The number of records in the dictionary. */
  public int size() { return nodecount; }

Figure (continued)

Once the desired record is found, it is passed back up the chain of recursive calls to findhelp as the function's return value (with null returned when no suitable element exists).

Inserting a record with key value k requires that we first find where that record would have been if it were in the tree. This takes us either to a leaf node, or to an internal node with no child in the appropriate direction. Call this node the insertion point. We then add a new node containing the new record as a child of the insertion point. The figure illustrates this operation, with the new value added as a child of the node where the search fell off the tree. Here is the implementation for inserthelp:

  private BSTNode<Key,E> inserthelp(BSTNode<Key,E> rt, Key k, E e) {
    if (rt == null) return new BSTNode<Key,E>(k, e);
    if (rt.key().compareTo(k) > 0)
      rt.setLeft(inserthelp(rt.left(), k, e));
    else
      rt.setRight(inserthelp(rt.right(), k, e));
    return rt;
  }

This assumes that no node has a key value equal to the one being inserted. If we find a node that duplicates the key value to be inserted, we have two options. If the application does not allow nodes with equal keys, then this insertion should be treated as an error (or ignored). If duplicate keys are allowed, our convention will be to insert the duplicate in the right subtree.
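The search and insert routines can be exercised together. This standalone sketch mirrors the structure of findhelp and inserthelp with a simplified node class and integer keys; the class names and key values are invented for the example, not taken from the figures, and duplicates follow the right-subtree convention just described.

```java
import java.util.ArrayList;
import java.util.List;

class TreeNode {
    int key;
    String elem;
    TreeNode left, right;
    TreeNode(int k, String e) { key = k; elem = e; }
}

public class BSTDemo {
    // Mirrors inserthelp: returns the (possibly new) root of the subtree,
    // so each node on the search path reassigns its child pointer.
    static TreeNode insert(TreeNode rt, int k, String e) {
        if (rt == null) return new TreeNode(k, e);
        if (rt.key > k) rt.left = insert(rt.left, k, e);
        else            rt.right = insert(rt.right, k, e); // duplicates go right
        return rt;
    }

    // Mirrors findhelp: at each node, descend into only one subtree.
    static String find(TreeNode rt, int k) {
        if (rt == null) return null;       // fell off the tree: k is absent
        if (rt.key > k) return find(rt.left, k);
        else if (rt.key == k) return rt.elem;
        else return find(rt.right, k);
    }

    // Inorder traversal of a BST yields the keys in sorted order.
    static void inorder(TreeNode rt, List<Integer> out) {
        if (rt == null) return;
        inorder(rt.left, out);
        out.add(rt.key);
        inorder(rt.right, out);
    }

    public static void main(String[] args) {
        TreeNode root = null;
        int[] keys = {37, 24, 42, 7, 32, 40, 42, 120, 2};  // arbitrary example keys
        for (int k : keys) root = insert(root, k, "v" + k);
        System.out.println(find(root, 32));  // the stored element for key 32
        System.out.println(find(root, 35));  // null: the search falls off at a leaf
        List<Integer> order = new ArrayList<>();
        inorder(root, order);
        System.out.println(order);           // sorted, duplicate 42 included
    }
}
```

The inorder printout is a direct check of the binary search tree property: if any insertion had violated it, the enumeration would not come out sorted.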
Figure: An example of BST insertion. A record is inserted into the BST of (a); the node reached by the search becomes the parent of the new leaf node.

You should pay careful attention to the implementation for inserthelp. Note that inserthelp returns a pointer to a BSTNode. What is being returned is a subtree identical to the old subtree, except that it has been modified to contain the new record being inserted. Each node along the path from the root to the parent of the new node added to the tree will have its appropriate child pointer assigned to it. Except for the last node in the path, none of these nodes will actually change their child's pointer value. In that sense, many of the assignments seem redundant. However, the cost of these additional assignments is worth paying to keep the insertion process simple. The alternative is to check if a given assignment is necessary, which is probably more expensive than the assignment itself.

The shape of a BST depends on the order in which elements are inserted. A new element is added to the BST as a new leaf node, potentially increasing the depth of the tree. Figure illustrates two BSTs for a collection of values. It is possible for a BST containing n nodes to be a chain of nodes with height n. This would happen if, for example, all elements were inserted in sorted order. In general, it is preferable for a BST to be as shallow as possible, because this keeps the average cost of a BST operation low.

Removing a node from a BST is a bit trickier than inserting a node, but it is not complicated if all of the possible cases are considered individually. Before tackling the general node removal process, let us first discuss how to remove from a given subtree the node with the smallest key value. This routine will be used later by the general node removal function. To remove the node with the minimum key value from a subtree, first find that node by continuously moving down the left link until there is no further left link to follow. Call this node S. To remove S, simply have the parent of S change its pointer to point to the right child of S. We know that S has no left child (because if S did have a left child, S would not be the node with
minimum key value). Thus, changing the pointer as described will maintain a BST, with S removed. The code for this method, named deletemin, is as follows:

  private BSTNode<Key,E> deletemin(BSTNode<Key,E> rt) {
    if (rt.left() == null) return rt.right();
    rt.setLeft(deletemin(rt.left()));
    return rt;
  }

Example: Figure illustrates the deletemin process. Beginning at the root node, deletemin follows the left link until there is no further left link, reaching the node with the minimum value. The parent of that node is changed to point to the minimum node's right child (indicated in the figure by a dashed line). The return value of the deletemin method is the subtree of the current node with the minimum-valued node in the subtree removed. As with method inserthelp, each node on the path back to the root has its left child pointer reassigned to the subtree resulting from its call to the deletemin method.

A useful companion method is getmin, which returns a reference to the node containing the minimum value in the subtree:

  private BSTNode<Key,E> getmin(BSTNode<Key,E> rt) {
    if (rt.left() == null) return rt;
    else return getmin(rt.left());
  }

Removing a node with a given key value from the BST requires that we first find that node and then remove it from the tree. So, the first part of the remove operation is a search to find the node. Once it is found, there are several possibilities. If the node has no children, then its parent has its pointer set to null. If it has one child, then its parent has its pointer set to that child (similar to deletemin). The problem comes if the node has two children. One simple approach, though expensive, is to set the parent to point to one of the node's subtrees, and then reinsert the remaining subtree's nodes one at a time. A better alternative is to find a value in one of the subtrees that can replace the value in the node being removed.
Figure: An example of deleting the node with minimum value. In this tree, the node with minimum value is the left child of the root. Thus, the root's left pointer is changed to point to that node's right child.

Thus, the question becomes: Which value can substitute for the one being removed? It cannot be an arbitrary value, because we must preserve the BST property without making major changes to the structure of the tree. Which value is most like the one being removed? The answer is the least key value greater than (or equal to) the one being removed, or else the greatest key value less than the one being removed. If either of these values replaces the one being removed, then the BST property is maintained.

Example: Assume that we wish to remove the value stored at the root of the BST of Figure (a). Instead of removing the root node, we remove the node with the least value in the right subtree (using the deletemin operation). This value can then replace the value in the root. Figure illustrates this process.

When duplicate node values do not appear in the tree, it makes no difference whether the replacement is the greatest value from the left subtree or the least value from the right subtree. If duplicates are stored, then we must select the replacement from the right subtree. To see why, call the greatest value in the left subtree G. If multiple nodes in the left subtree have value G, selecting G as the replacement value for the root of the subtree will result in a tree with equal values to the left of the node now containing G. Precisely this situation occurs if we replace a value with the greatest value in the left subtree of Figure (b). Selecting the least value
Figure: An example of removing a value from the BST. The node containing this value has two children. We replace it with the least value from the node's right subtree.

  /** Remove a node with key value k.
      @return The tree with the node removed. */
  private BSTNode<Key,E> removehelp(BSTNode<Key,E> rt, Key k) {
    if (rt == null) return null;
    if (rt.key().compareTo(k) > 0)
      rt.setLeft(removehelp(rt.left(), k));
    else if (rt.key().compareTo(k) < 0)
      rt.setRight(removehelp(rt.right(), k));
    else { // Found it
      if (rt.left() == null) return rt.right();
      else if (rt.right() == null) return rt.left();
      else { // Two children
        BSTNode<Key,E> temp = getmin(rt.right());
        rt.setElement(temp.element());
        rt.setKey(temp.key());
        rt.setRight(deletemin(rt.right()));
      }
    }
    return rt;
  }

Figure: Implementation for the BST removehelp method.

from the right subtree does not have a similar problem, because it does not violate the binary search tree property if equal values appear in the right subtree. From the above, we see that if we want to remove the record stored in a node with two children, then we simply call deletemin on the node's right subtree and substitute the record returned for the record being removed. Figure shows the code for removehelp.

The cost for findhelp and inserthelp is the depth of the node found or inserted. The cost for removehelp is the depth of the node being removed, or
in the case when this node has two children, the depth of the node with the smallest value in its right subtree. Thus, in the worst case, the cost of any one of these operations is the depth of the deepest node in the tree. This is why it is desirable to keep BSTs balanced, that is, with least possible height. If a binary tree is balanced, then the height for a tree of n nodes is approximately log n. However, if the tree is completely unbalanced, for example in the shape of a linked list, then the height for a tree of n nodes can be as great as n. Thus, a balanced BST will in the average case have operations costing Θ(log n), while a badly unbalanced BST can have operations in the worst case costing Θ(n).

Consider the situation where we construct a BST of n nodes by inserting the nodes one at a time. If we are fortunate to have them arrive in an order that results in a balanced tree (a "random" order is likely to be good enough for this purpose), then each insertion will cost on average Θ(log n), for a total cost of Θ(n log n). However, if the nodes are inserted in order of increasing value, then the resulting tree will be a chain of height n. The cost of insertion in this case will be 1 + 2 + ... + n = n(n + 1)/2, which is Θ(n²).

Traversing a BST costs Θ(n) regardless of the shape of the tree. Each node is visited exactly once, and each child pointer is followed exactly once. Below is an example traversal, named printhelp. It performs an inorder traversal on the BST to print the node values in ascending order.

  private void printhelp(BSTNode<Key,E> rt) {
    if (rt == null) return;
    printhelp(rt.left());
    printVisit(rt.element());
    printhelp(rt.right());
  }

While the BST is simple to implement and efficient when the tree is balanced, the possibility of its being unbalanced is a serious liability. There are techniques for organizing a BST to guarantee good performance; two examples are the AVL tree and the splay tree, covered in a later section. Other search trees, such as the 2-3 tree, are guaranteed to remain balanced.

Heaps and Priority Queues

There are many situations, both in real life and in computing applications, where we wish to choose the next "most important" from a collection of people, tasks, or objects. For example, doctors in a hospital emergency room often choose to see next the "most critical" patient rather than the one who arrived first. When scheduling programs for execution in a multitasking operating system, at any given moment there might be several programs (usually called jobs) ready to run. The
next job selected is the one with the highest priority. Priority is indicated by a particular value associated with the job (and might change while the job remains in the wait list).

When a collection of objects is organized by importance or priority, we call this a priority queue. A normal queue data structure will not implement a priority queue efficiently because search for the element with highest priority will take Θ(n) time. A list, whether sorted or not, will also require Θ(n) time for either insertion or removal. A BST that organizes records by priority could be used, with the total of n inserts and n remove operations requiring Θ(n log n) time in the average case. However, there is always the possibility that the BST will become unbalanced, leading to bad performance. Instead, we would like to find a data structure that is guaranteed to have good performance for this special application. This section presents the heap data structure.

A heap is defined by two properties. First, it is a complete binary tree, so heaps are nearly always implemented using the array representation for complete binary trees presented earlier. Second, the values stored in a heap are partially ordered. This means that there is a relationship between the value stored at any node and the values of its children. There are two variants of the heap, depending on the definition of this relationship.

A max-heap has the property that every node stores a value that is greater than or equal to the value of either of its children. Because the root has a value greater than or equal to its children, which in turn have values greater than or equal to their children, the root stores the maximum of all values in the tree.

A min-heap has the property that every node stores a value that is less than or equal to that of its children. Because the root has a value less than or equal to its children, which in turn have values less than or equal to their children, the root stores the minimum of all values in the tree.

Note that there is no necessary relationship between the value of a node and that of its sibling in either the min-heap or the max-heap. For example, it is possible that the values for all nodes in the left subtree of the root are greater than the values for every node of the right subtree. We can contrast BSTs and heaps by the strength of their ordering relationships. A BST defines a total order on its nodes in that, given the positions for any two nodes in the tree, the one to the "left" (equivalently, the one appearing earlier in an inorder traversal) has a smaller key value than the one to the "right." In contrast, a heap implements a partial order. Given their positions, we can determine the relative order for the key values of two nodes in the heap only if one is a descendant of the other.

(The term "heap" is also sometimes used to refer to a memory pool.)
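The partial order just described can be checked directly on the array representation. The sketch below is illustrative (not from the book) and assumes the usual 0-based layout in which the children of position i sit at positions 2i+1 and 2i+2:

```java
// Check the max-heap property on an array-based complete binary tree:
// every internal node i must be >= its children at 2i+1 and 2i+2.
public class HeapCheck {
  static boolean isMaxHeap(int[] a) {
    for (int i = 0; i < a.length / 2; i++) { // Internal nodes only
      int l = 2*i + 1, r = 2*i + 2;
      if (a[l] > a[i]) return false;
      if (r < a.length && a[r] > a[i]) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isMaxHeap(new int[]{7, 6, 4, 3, 5, 2, 1})); // true
    System.out.println(isMaxHeap(new int[]{1, 2, 3, 4, 5, 6, 7})); // false
  }
}
```

Note that the first array is a valid max-heap even though a value in the right subtree (4's subtree) is smaller than values in the left subtree: siblings and cousins are unordered relative to each other.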
Min-heaps and max-heaps both have their uses. For example, Heapsort uses the max-heap, while the Replacement Selection algorithm uses a min-heap. The examples in the rest of this section will use a max-heap.

Be careful not to confuse the logical representation of a heap with its physical implementation by means of the array-based complete binary tree. The two are not synonymous, because the logical view of the heap is actually a tree structure, while the typical physical implementation uses an array.

Figure shows an implementation for heaps. The class is generic with one type parameter, E, which defines the type for the data elements stored in the heap. E must extend the Comparable interface, and we use the compareTo method for comparing records in the heap.

This class definition makes two concessions to the fact that an array-based implementation is used. First, heap nodes are indicated by their logical position within the heap rather than by a pointer to the node. In practice, the logical heap position corresponds to the identically numbered physical position in the array. Second, the constructor takes as input a pointer to the array to be used. This approach provides the greatest flexibility for using the heap, because all data values can be loaded into the array directly by the client. The advantage of this comes during the heap construction phase, as explained below. The constructor also takes an integer parameter indicating the initial size of the heap (based on the number of elements initially loaded into the array) and a second integer parameter indicating the maximum size allowed for the heap (the size of the array).

Method heapsize returns the current size of the heap. isLeaf(pos) will return true if position pos is a leaf, and false otherwise. Members leftchild, rightchild, and parent return the position (actually, the array index) for the left child, right child, and parent of the position passed, respectively.

One way to build a heap is to insert the elements one at a time. Method insert will insert a new element V into the heap. You might expect the heap insertion process to be similar to the insert function for a BST, starting at the root and working down through the heap. However, this approach is not likely to work, because the heap must maintain the shape of a complete binary tree. Equivalently, if the heap takes up the first n positions of its array prior to the call to insert, it must take up the first n + 1 positions after. To accomplish this, insert first places V at position n of the array. Of course, V is unlikely to be in the correct position. To move V to the right place, it is compared to its parent's value. If the value of V is less than or equal to the value of its parent, then it is in the correct place and the insert
  import java.lang.Comparable;

  /** Max-heap implementation */
  public class MaxHeap<E extends Comparable<? super E>> {
    private E[] Heap;  // Pointer to the heap array
    private int size;  // Maximum size of the heap
    private int n;     // Number of things in heap

    public MaxHeap(E[] h, int num, int max)
      { Heap = h; n = num; size = max; buildheap(); }

    /** Return current size of the heap */
    public int heapsize() { return n; }

    /** Is pos a leaf position? */
    public boolean isLeaf(int pos)
      { return (pos >= n/2) && (pos < n); }

    /** Return position for left child of pos */
    public int leftchild(int pos) {
      assert pos < n/2 : "Position has no left child";
      return 2*pos + 1;
    }

    /** Return position for right child of pos */
    public int rightchild(int pos) {
      assert pos < (n-1)/2 : "Position has no right child";
      return 2*pos + 2;
    }

    /** Return position for parent */
    public int parent(int pos) {
      assert pos > 0 : "Position has no parent";
      return (pos-1)/2;
    }

    /** Heapify contents of Heap */
    public void buildheap()
      { for (int i = n/2 - 1; i >= 0; i--) siftdown(i); }

    /** Insert val into heap */
    public void insert(E val) {
      assert n < size : "Heap is full";
      int curr = n++;
      Heap[curr] = val; // Start at end of heap
      // Now sift up until curr's parent's key > curr's key
      while ((curr != 0) &&
             (Heap[curr].compareTo(Heap[parent(curr)]) > 0)) {
        DSutil.swap(Heap, curr, parent(curr));
        curr = parent(curr);
      }
    }

Figure: An implementation for the heap.
  /** Put element in its correct place */
  private void siftdown(int pos) {
    assert (pos >= 0) && (pos < n) : "Illegal heap position";
    while (!isLeaf(pos)) {
      int j = leftchild(pos);
      if ((j < (n-1)) && (Heap[j].compareTo(Heap[j+1]) < 0))
        j++; // j is now index of child with greater value
      if (Heap[pos].compareTo(Heap[j]) >= 0) return;
      DSutil.swap(Heap, pos, j);
      pos = j; // Move down
    }
  }

  /** Remove maximum value */
  public E removemax() {
    assert n > 0 : "Removing from empty heap";
    DSutil.swap(Heap, 0, --n); // Swap maximum with last value
    if (n != 0)                // Not on last element
      siftdown(0);             // Put new heap root val in correct place
    return Heap[n];
  }

  /** Remove element at specified position */
  public E remove(int pos) {
    assert (pos >= 0) && (pos < n) : "Illegal heap position";
    DSutil.swap(Heap, pos, --n); // Swap with last value
    // If we just swapped in a big value, push it up
    while ((pos != 0) &&
           (Heap[pos].compareTo(Heap[parent(pos)]) > 0)) {
      DSutil.swap(Heap, pos, parent(pos));
      pos = parent(pos);
    }
    if (n != 0) siftdown(pos); // If it is little, push down
    return Heap[n];
  }

Figure: (continued)
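To see the pieces working together, here is a compact, self-contained sketch of the same array-based max-heap logic, specialized to int values so it runs on its own (the generic MaxHeap class and DSutil.swap are not needed). It builds a heap from an unordered array and then repeatedly removes the maximum:

```java
// Self-contained sketch of the array-based max-heap logic above,
// specialized to int values. Illustrative, not the book's MaxHeap.
import java.util.Arrays;

public class MaxHeapSketch {
  private int[] heap;
  private int n;

  public MaxHeapSketch(int[] h) { heap = h; n = h.length; buildheap(); }

  private void swap(int i, int j)
    { int t = heap[i]; heap[i] = heap[j]; heap[j] = t; }

  // Push the value at pos down until the max-heap property holds.
  private void siftdown(int pos) {
    while (pos < n/2) {                            // While pos is internal
      int j = 2*pos + 1;                           // Left child
      if (j < n-1 && heap[j] < heap[j+1]) j++;     // Greater child
      if (heap[pos] >= heap[j]) return;
      swap(pos, j);
      pos = j;
    }
  }

  // Heapify from the last internal node down to the root.
  private void buildheap()
    { for (int i = n/2 - 1; i >= 0; i--) siftdown(i); }

  // Remove and return the maximum (root) value.
  public int removemax() {
    swap(0, --n);             // Swap maximum with last value
    if (n != 0) siftdown(0);  // Re-heapify the shrunken heap
    return heap[n];
  }

  public static void main(String[] args) {
    MaxHeapSketch h = new MaxHeapSketch(new int[]{1, 5, 7, 3, 6, 2, 4});
    int[] out = new int[7];
    for (int i = 0; i < 7; i++) out[i] = h.removemax();
    System.out.println(Arrays.toString(out)); // [7, 6, 5, 4, 3, 2, 1]
  }
}
```

Repeatedly calling removemax drains the values in descending order, which is exactly the idea behind Heapsort.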
Sec. Heaps and Priority Queues

routine is finished. If the value of V is greater than that of its parent, then the two elements swap positions. From here, the process of comparing V to its (current) parent continues until V reaches its correct position.

Each call to insert takes Θ(log n) time in the worst case, because the value being inserted can move at most the distance from the bottom of the tree to the top of the tree. Thus, the time to insert n values into the heap, if we insert them one at a time, will be Θ(n log n) in the worst case.

If all n values are available at the beginning of the building process, we can build the heap faster than just inserting the values into the heap one by one. Consider Figure (a), which shows one series of exchanges that will result in a heap. Note that this figure shows the input in its logical form as a complete binary tree, but you should realize that these values are physically stored in an array. All exchanges are between a node and one of its children. The heap is formed as a result of this exchange process. Figure (b) shows an alternate series of exchanges that also forms a heap, but much more efficiently. From this example, it is clear that the heap for any given set of numbers is not unique, and we see that some rearrangements of the input values require fewer exchanges than others to build the heap. So, how do we pick the best rearrangement?

One good algorithm stems from induction. Suppose that the left and right subtrees of the root are already heaps, and R is the name of the element at the root. This situation is illustrated by the figure. In this case there are two possibilities. (1) R has a value greater than or equal to its two children. In this case, construction is complete. (2) R has a value less than one or both of its children. In this case, R should be exchanged with the child that has greater value. The result will be a heap, except that R might still be less than one or both of its (new) children. In this case, we simply continue the process of "pushing down" R until it reaches a level where it is greater than its children, or is a leaf node. This process is implemented by the private method siftdown of the heap class. The siftdown operation is illustrated by Figure.

This approach assumes that the subtrees are already heaps, suggesting that a complete algorithm can be obtained by visiting the nodes in some order such that the children of a node are visited before the node itself. One simple way to do this is simply to work from the high index of the array to the low index. Actually, the build process need not visit the leaf nodes (they can never move down because they are already at the bottom), so the building algorithm can start in the middle of the
Figure: Two series of exchanges to build a max-heap. (a) This heap is built by a series of nine exchanges. (b) This heap is built by a series of four exchanges.

Figure: Final stage in the heap-building algorithm. Both subtrees of node R are heaps. All that remains is to push R down to its proper level in the heap.

array, with the first internal node. The exchanges shown in Figure (b) result from this process. Method buildheap implements the building algorithm.

What is the cost of buildheap? Clearly it is the sum of the costs for the calls to siftdown. Each siftdown operation can cost at most the number of levels it takes for the node being sifted to reach the bottom of the tree. In any complete tree, approximately half of the nodes are leaves and so cannot be moved downward at all. One quarter of the nodes are one level above the leaves, and so their elements can move down at most one level. At each step up the tree we get half the number of nodes as were at the previous level, and an additional height of one. The maximum
Figure: The siftdown operation. The subtrees of the root are assumed to be heaps. (a) The partially completed heap. (b) The root value and its greater child are swapped. (c) A second swap forms the final heap.

sum of total distances that elements can go is therefore

    sum for i = 1 to log n of (i - 1) * n / 2^i

This summation has a closed-form solution that is linear in n, so the algorithm takes Θ(n) time in the worst case. This is far better than building the heap one element at a time, which would cost Θ(n log n) in the worst case. It is also faster than the Θ(n log n) average-case time and Θ(n²) worst-case time required to build the BST.

Removing the maximum (root) value from a heap containing n elements requires that we maintain the complete binary tree shape, and that the remaining n - 1 node values conform to the heap property. We can maintain the proper shape by moving the element in the last position in the heap (the current last element in the array) to the root position. We now consider the heap to be one element smaller. Unfortunately, the new root value is probably not the maximum value in the new heap. This problem is easily solved by using siftdown to reorder the heap. Because the heap is log n levels deep, the cost of deleting the maximum element is Θ(log n) in the average and worst cases.

The heap is a natural implementation for the priority queues discussed at the beginning of this section. Jobs can be added to the heap (using their priority value as the ordering key) when needed. Method removemax can be called whenever a new job is to be executed. Some applications of priority queues require the ability to change the priority of an object already stored in the queue. This might require that the object's position in the heap representation be updated. Unfortunately, a max-heap is not efficient
when searching for an arbitrary value; it is only good for finding the maximum value. However, if we already know the index for an object within the heap, it is a simple matter to update its priority (including changing its position to maintain the heap property) or remove it. The remove method takes as input the position of the node to be removed from the heap. A typical implementation for priority queues requiring updating of priorities will need to use an auxiliary data structure that supports efficient search for objects (such as a BST). Records in the auxiliary data structure will store the object's heap index, so that the object can be deleted from the heap and reinserted with its new priority. Later sections present applications for a priority queue with priority updating.

Huffman Coding Trees

The space/time tradeoff principle suggests that one can often gain an improvement in space requirements in exchange for a penalty in running time. There are many situations where this is a desirable tradeoff. A typical example is storing files on disk. If the files are not actively used, the owner might wish to compress them to save space. Later, they can be uncompressed for use, which costs some time, but only once.

We often represent a set of items in a computer program by assigning a unique code to each item. For example, the standard ASCII coding scheme assigns a unique eight-bit value to each character. It takes a certain minimum number of bits to provide unique codes for each character. For example, it takes ⌈log 128⌉ or seven bits to provide the 128 unique codes needed to represent the 128 symbols of the ASCII character set.¹

The requirement for ⌈log n⌉ bits to represent n unique code values assumes that all codes will be the same length, as are ASCII codes. This is called a fixed-length coding scheme. If all characters were used equally often, then a fixed-length coding scheme is the most space-efficient method. However, you are probably aware that not all characters are used equally often. Figure shows the relative frequencies of the letters of the alphabet. From this table we can see that the letter 'E' appears far more often than the least frequent letters. In normal ASCII, the words "DEED" and "MUCK" require the same amount of space (four bytes). It would seem that words such as "DEED," which are composed of relatively common letters, should be storable in less space than words such as "MUCK," which are composed of relatively uncommon letters.

¹ The ASCII standard is eight bits, not seven, even though there are only 128 characters represented. The eighth bit is used either to check for transmission errors, or to support extended ASCII codes with an additional 128 characters.
Sec. Huffman Coding Trees

Figure: Relative frequencies for the letters of the alphabet as they appear in a selected set of English documents. "Frequency" represents the expected relative frequency of occurrence, ignoring case.

If some characters are used more frequently than others, is it possible to take advantage of this fact and somehow assign them shorter codes? The price could be that other characters require longer codes, but this might be worthwhile if such characters appear rarely enough. This concept is at the heart of file compression techniques in common use today. The next section presents one such approach to assigning variable-length codes, called Huffman coding. While it is not commonly used in its simplest form for file compression (there are better methods), Huffman coding gives the flavor of such coding schemes.

Building Huffman Coding Trees
20,248
Figure: The relative frequencies for eight selected letters.

the weighted path length of a leaf to be its weight times its depth. The binary tree with minimum external path weight is the one with the minimum sum of weighted path lengths for the given set of leaves. A letter with high weight should have low depth, so that it will count the least against the total path length. As a result, another letter might be pushed deeper in the tree if it has less weight.

The process of building the Huffman tree for n letters is quite simple. First, create a collection of n initial Huffman trees, each of which is a single leaf node containing one of the letters. Put the n partial trees onto a min-heap (a priority queue) organized by weight (frequency). Next, remove the first two trees (the ones with lowest weight) from the heap. Join these two trees together to create a new tree whose root has the two trees as children, and whose weight is the sum of the weights of the two trees. Put this new tree back on the heap. This process is repeated until all of the partial Huffman trees have been combined into one.

Example: Figure illustrates part of the Huffman tree construction process for the eight selected letters, ordered by frequency (breaking ties arbitrarily by alphabetical order). Because the first two trees on the list have the lowest weights, they are selected to be joined together first. They become the children of a root node whose weight is the sum of theirs, and the resulting tree is placed back on the list in the position determined by its weight. The next step is to take the two lowest-weight trees off the heap (here, the partial tree with two leaf nodes built in the last step, and the partial tree storing the next letter) and join them together. The resulting tree is again placed back into the heap according to its weight. This process continues until a single tree containing the total weight of all the letters is built. This tree is shown in a later figure.

Figure shows an implementation for Huffman tree nodes. This implementation is similar to the VarBinNode implementation of an earlier figure. There is an abstract base class, named HuffNode, and two subclasses, named LeafNode
Figure: The first five steps of the building process for a sample Huffman tree.
Figure: A Huffman tree for the selected letters.

and IntlNode. This implementation reflects the fact that leaf and internal nodes contain distinctly different information. Figure shows the implementation for the Huffman tree nodes of the tree, which store key-value pairs where the key is the weight and the value is the character. Figure shows the Java code for the tree-building process.

Huffman tree building is an example of a greedy algorithm. At each step, the algorithm makes a "greedy" decision to merge the two subtrees with least weight. This makes the algorithm simple, but does it give the desired result? This section concludes with a proof that the Huffman tree indeed gives the most efficient arrangement for the set of letters. The proof requires the following lemma.

Lemma: For any Huffman tree built by function buildHuff containing at least two letters, the two letters with least frequency are stored in sibling nodes whose depth is at least as deep as any other leaf nodes in the tree.

Proof: Call the two letters with least frequency l1 and l2. They must be siblings because buildHuff selects them in the first step of the construction process. Assume that l1 and l2 are not the deepest nodes in the tree. In this case, the Huffman tree must either look as shown in Figure, or in some sense be symmetrical to this. For this situation to occur, the parent of l1 and l2, labeled V, must have greater weight than the deeper node labeled X. Otherwise, function buildHuff would
  /** Binary tree node implementation with just an element field
      (no key) and pointers to children */
  class HuffNode<E> implements BinNode<E> {
    private E element;         // Element for this node
    private HuffNode<E> left;  // Pointer to left child
    private HuffNode<E> right; // Pointer to right child

    /** Constructors */
    public HuffNode() { left = right = null; }
    public HuffNode(E val) { left = right = null; element = val; }
    public HuffNode(E val, HuffNode<E> l, HuffNode<E> r)
      { left = l; right = r; element = val; }

    /** Return and set the element value */
    public E element() { return element; }
    public void setElement(E v) { element = v; }

    /** Return and set the left child */
    public HuffNode<E> left() { return left; }
    public HuffNode<E> setLeft(HuffNode<E> p) { return left = p; }

    /** Return and set the right child */
    public HuffNode<E> right() { return right; }
    public HuffNode<E> setRight(HuffNode<E> p) { return right = p; }

    /** Return true if this is a leaf node */
    public boolean isLeaf() { return (left == null) && (right == null); }
  }

Figure: Implementation for Huffman tree nodes. Internal nodes and leaf nodes can also be represented by separate classes, each derived from an abstract base class.

have selected node V in place of the deeper node as the child of its parent. However, this is impossible, because l1 and l2 are the letters with least frequency.

Theorem: Function buildHuff builds the Huffman tree with the minimum external path weight for the given set of letters.

Proof: The proof is by induction on n, the number of letters.

Base case: For n = 2, the Huffman tree must have the minimum external path weight because there are only two possible trees, each with identical weighted path lengths for the two leaves.
/** A Huffman coding tree */
class HuffTree {
  private HuffNode<LettFreq> root;  // Root of the tree

  public HuffTree(LettFreq val)
    { root = new HuffNode<LettFreq>(val); }
  public HuffTree(LettFreq val, HuffTree l, HuffTree r)
    { root = new HuffNode<LettFreq>(val, l.root(), r.root()); }

  public HuffNode<LettFreq> root() { return root; }
  public int weight()  // Weight of tree is weight of root
    { return root.element().weight(); }
}

Figure: Class declarations for the Huffman tree.

Induction Hypothesis: Assume that any tree created by buildHuff that contains n − 1 leaves has minimum external path length.

Induction Step: Given a Huffman tree T built by buildHuff with n leaves, n ≥ 2, suppose that w1 ≤ w2 ≤ ... ≤ wn, where w1 to wn are the weights of the letters. Call V the parent of the letters with frequencies w1 and w2. From the lemma, we know that the leaf nodes containing the letters with frequencies w1 and w2 are as deep as any nodes in T. If any other leaf nodes in the tree were deeper, we could reduce their weighted path length by swapping them with w1 or w2. But the lemma tells us that no such deeper nodes exist. Call T' the Huffman tree that is identical to T except that node V is replaced with a leaf node V' whose weight is w1 + w2. By the induction hypothesis, T' has minimum external path length. Returning the children to V' restores tree T, which must also have minimum external path length.

Thus by mathematical induction, function buildHuff creates the Huffman tree with minimum external path length.

Assigning and Using Huffman Codes

Once the Huffman tree has been constructed, it is an easy matter to assign codes to individual letters. Beginning at the root, we assign either a '0' or a '1' to each edge in the tree. '0' is assigned to edges connecting a node with its left child, and '1' to edges connecting a node with its right child. This process is illustrated by the tree figure. The Huffman code for a letter is simply a binary number determined by the path from the root to the leaf corresponding to that letter. Thus, the code for E is '0' because the path from the root to the leaf node for E takes a single left branch. The code for K is '111101' because the path to the node for K takes four
Sec. Huffman Coding Trees

/** Build a Huffman tree from the sorted list hufflist */
static HuffTree buildTree(List<HuffTree> hufflist) {
  HuffTree tmp1, tmp2, tmp3 = null;
  LettFreq tmpnode;

  while (hufflist.length() > 1) {  // While at least two items left
    hufflist.moveToStart();
    tmp1 = hufflist.remove();
    tmp2 = hufflist.remove();
    tmpnode = new LettFreq(tmp1.weight() + tmp2.weight());
    tmp3 = new HuffTree(tmpnode, tmp1, tmp2);

    // Return the new tree to the list in sorted order
    for (hufflist.moveToStart();
         hufflist.currPos() < hufflist.length(); hufflist.next())
      if (tmp3.weight() <= hufflist.getValue().weight()) {
        hufflist.insert(tmp3);  // Put tmp3 in the list
        break;
      }
    if (hufflist.currPos() >= hufflist.length())
      hufflist.append(tmp3);    // This is the heaviest value
  }
  hufflist.moveToStart();
  return hufflist.remove();     // This is the only tree on the list
}

Figure: Implementation of the Huffman tree construction function. buildTree takes as input a list of partial Huffman trees, sorted by weight, which initially are single leaf nodes. The body of the function consists mainly of a loop. On each iteration of the loop, the first two partial trees are taken off the list and placed in variables tmp1 and tmp2. A tree is created (tmp3) such that the left and right subtrees are tmp1 and tmp2, respectively. Finally, tmp3 is returned to the list in sorted order.

Figure: An impossible Huffman tree, showing the situation where the two nodes with least weight, l1 and l2, are not the deepest nodes in the tree. Triangles represent subtrees.
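For readers who want to run the greedy construction outside the book's List ADT, here is a minimal self-contained sketch using java.util.PriorityQueue to hold the partial trees; the Node class and method names are illustrative stand-ins, not the book's HuffTree and HuffNode classes.

```java
import java.util.PriorityQueue;

/** Sketch of greedy Huffman construction with a priority queue
    standing in for the sorted list of partial trees. */
class Huff {
  static class Node {
    final int weight; final char letter; final Node left, right;
    Node(char c, int w)                 // leaf node for one letter
      { letter = c; weight = w; left = right = null; }
    Node(Node l, Node r)                // internal node: merged subtrees
      { letter = '\0'; weight = l.weight + r.weight; left = l; right = r; }
    boolean isLeaf() { return left == null; }
  }

  static Node build(char[] letters, int[] freqs) {
    PriorityQueue<Node> pq =
        new PriorityQueue<>((a, b) -> a.weight - b.weight);
    for (int i = 0; i < letters.length; i++)
      pq.add(new Node(letters[i], freqs[i]));
    while (pq.size() > 1)               // merge the two lightest trees
      pq.add(new Node(pq.poll(), pq.poll()));
    return pq.poll();                   // the completed Huffman tree
  }
}
```

The queue plays the same role as the sorted list in the figure above: each merge removes the two minimum-weight trees and reinserts their combination.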
Letter  Freq  Code    Bits
C        32   1110     4
D        42   101      3
E       120   0        1
K         7   111101   6
L        42   110      3
M        24   11111    5
U        37   100      3
Z         2   111100   6

Figure: The Huffman codes for the letters of the frequency table.

right branches, then a left, and finally one last right. The figure lists the codes for all eight letters.

Given codes for the letters, it is a simple matter to use these codes to encode a text message. We simply replace each letter in the string with its binary code. A lookup table can be used for this purpose.

Example: Using the code generated by our example Huffman tree, the word "deed" is represented by the bit string "10100101" and the word "muck" is represented by the bit string "111111001110111101".

Decoding the message is done by looking at the bits in the coded string from left to right until a letter is decoded. This can be done by using the Huffman tree in a reverse process from that used to generate the codes. Decoding a bit string begins at the root of the tree. We take branches depending on the bit value (left for '0' and right for '1') until reaching a leaf node. This leaf contains the first character in the message. We then process the next bit in the code, restarting at the root, to begin the next character.

Example: To decode the bit string "1011001110111101" we begin at the root of the tree and take a right branch for the first bit, which is '1'. Because the next bit is '0', we take a left branch. We then take another right branch (for the third bit '1'), arriving at the leaf node corresponding to the letter D. Thus, the first letter of the coded word is D. We then begin again at the root of the tree to process the fourth bit, which is '1'. Taking a right branch, then two left branches (for the next two bits, which are '00'), we reach the leaf node corresponding to the letter U. Thus, the second letter is U. In a similar manner we complete the decoding process to find that the last two letters are C and K, spelling the word "duck".
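The table-lookup encoding just described can be sketched as follows; the hard-coded codes are the ones derived for the example alphabet, and the class name is only illustrative.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: encode a message by replacing each letter with its
    Huffman code from a lookup table. */
class HuffEncoder {
  private final Map<Character, String> codes = new HashMap<>();

  HuffEncoder() {
    // Codes for the example alphabet, as derived in the text
    codes.put('C', "1110");   codes.put('D', "101");
    codes.put('E', "0");      codes.put('K', "111101");
    codes.put('L', "110");    codes.put('M', "11111");
    codes.put('U', "100");    codes.put('Z', "111100");
  }

  String encode(String msg) {
    StringBuilder out = new StringBuilder();
    for (char c : msg.toUpperCase().toCharArray())
      out.append(codes.get(c));  // append this letter's code
    return out.toString();
  }
}
```

Decoding would walk the tree as described above, or equivalently match the unique codeword that prefixes the remaining bits, which is possible precisely because the code set is prefix-free.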
A set of codes is said to meet the prefix property if no code in the set is the prefix of another. The prefix property guarantees that there will be no ambiguity in how a bit string is decoded. In other words, once we reach the last bit of a code during the decoding process, we know which letter it is the code for. Huffman codes certainly have the prefix property because any prefix for a code would correspond to an internal node, while all codes correspond to leaf nodes. For example, the code for M is '11111'. Taking five right branches in the Huffman tree brings us to the leaf node containing M. We can be sure that no letter can have code '1111' because this corresponds to an internal node of the tree, and the tree-building process places letters only at the leaf nodes.

How efficient is Huffman coding? In theory, it is an optimal coding method whenever the true frequencies are known, and the frequency of a letter is independent of the context of that letter in the message. In practice, the frequencies of letters do change depending on context. For example, while E is the most commonly used letter of the alphabet in English documents, T is more common as the first letter of a word. This is why most commercial compression utilities do not use Huffman coding as their primary coding method, but instead use techniques that take advantage of the context for the letters.

Another factor that affects the compression efficiency of Huffman coding is the relative frequencies of the letters. Some frequency patterns will save no space as compared to fixed-length codes; others can result in great compression. In general, Huffman coding does better when there is large variation in the frequencies of letters. In the particular case of the frequencies shown in our example, we can determine the expected savings from Huffman coding if the actual frequencies of a coded message match the expected frequencies.

Example: Because the sum of the frequencies in the table is 306 and E has frequency 120, we expect it to appear 120 times in a message containing 306 letters. An actual message might or might not meet this expectation. Letters D, L, and U have code lengths of three, and together are expected to appear 121 times in 306 letters. Letter C has a code length of four, and is expected to appear 32 times in 306 letters. Letter M has a code length of five, and is expected to appear 24 times in 306 letters. Finally, letters Z and K have code lengths of six, and together are expected to appear only 9 times in 306 letters. The average expected cost per character is simply the sum of the cost for each character (ci) times the probability of
its occurring (pi), or

    c1 p1 + c2 p2 + ... + cn pn.

This can be reorganized as

    (c1 f1 + c2 f2 + ... + cn fn) / fT,

where fi is the (relative) frequency of letter i and fT is the total for all letter frequencies. For this set of frequencies, the expected cost per letter is

    [(1 x 120) + (3 x 121) + (4 x 32) + (5 x 24) + (6 x 9)] / 306 = 785 / 306 = approximately 2.57.

A fixed-length code for these eight characters would require log 8 = 3 bits per letter, as opposed to about 2.57 bits per letter for Huffman coding. Thus, Huffman coding is expected to save about 14% for this set of letters.

Huffman coding for all ASCII symbols should do better than this. The letters of our example are atypical in that there are too many common letters compared to the number of rare letters. Huffman coding for all 26 letters would yield an expected cost of about 4.3 bits per letter. The equivalent fixed-length code would require about five bits. This is somewhat unfair to fixed-length coding, because there is actually room for 32 codes in five bits, but only 26 letters. More generally, Huffman coding of a typical text file will save around 40% over ASCII coding if we charge ASCII coding at eight bits per character. Huffman coding for a binary file (such as a compiled executable) would have a very different set of distribution frequencies and so would have a different space savings. Most commercial compression programs use two or three coding schemes to adjust to different types of files.

In the preceding example, "deed" was coded in 8 bits, a saving of 33% over the twelve bits required by fixed-length coding. However, "muck" requires 18 bits, more space than required by the corresponding fixed-length coding. The problem is that "muck" is composed of letters that are not expected to occur often. If the message does not match the expected frequencies of the letters, then the length of the encoding will not be as expected either.

Further Reading

See Shaffer and Brown [SB] for an example of a tree implementation where an internal node pointer field stores the value of its child instead of a pointer to its child when the child is a leaf node.
Many techniques exist for maintaining reasonably balanced BSTs in the face of an unfriendly series of insert and delete operations. One example is the AVL tree of Adelson-Velskii and Landis, which is discussed by Knuth [Knu]. The AVL tree is actually a BST whose insert and delete routines reorganize the tree structure so as to guarantee that the subtrees rooted by the children of any node will differ in height by at most one. Another example is the splay tree [ST], also discussed later in the book.

See Bentley's Programming Pearl "Thanks, Heaps" [Ben] for a good discussion on the heap data structure and its uses. The proof that the Huffman coding tree has minimum external path weight is from Knuth [Knu]. For more information on data compression techniques, see Managing Gigabytes by Witten, Moffat, and Bell [WMB], and Codes and Cryptography by Dominic Welsh [Wel]. The letter-frequency tables are derived from Welsh [Wel].

Exercises

An earlier section claims that a full binary tree has the highest number of leaf nodes among all trees with n internal nodes. Prove that this is true.

Define the degree of a node as the number of its non-empty children. Prove by induction that the number of degree-2 nodes in any binary tree is one less than the number of leaves.

Define the internal path length for a tree as the sum of the depths of all internal nodes, while the external path length is the sum of the depths of all leaf nodes in the tree. Prove by induction that if tree T is a full binary tree with n internal nodes, I is T's internal path length, and E is T's external path length, then E = I + 2n for n >= 1.

Explain why function preorder2 makes half as many recursive calls as function preorder. Explain why it makes twice as many accesses to left and right children.

(a) Modify the preorder traversal given earlier to perform an inorder traversal of a binary tree. (b) Modify the preorder traversal given earlier to perform a postorder traversal of a binary tree.

Write a recursive function named search that takes as input the pointer to the root of a binary tree (not a BST!) and a value K, and returns true if value K appears in the tree and false otherwise.

Write an algorithm that takes as input the pointer to the root of a binary tree and prints the node values of the tree in level order. Level order first
chap binary trees prints the rootthen all nodes of level then all nodes of level and so on hintpreorder traversals make use of stack through recursive calls consider making use of another data structure to help implement the levelorder traversal write recursive function that returns the height of binary tree write recursive function that returns count of the number of leaf nodes in binary tree assume that given bst stores integer values in its nodes write recursive function that sums the values of all nodes in the tree assume that given bst stores integer values in its nodes write recursive function that traverses binary treeand prints the value of every node who' grandparent has value that is multiple of five write recursive function that traverses binary treeand prints the value of every node which has at least four great-grandchildren compute the overhead fraction for each of the following full binary tree implementations (aall nodes store datatwo child pointersand parent pointer the data field requires four bytes and each pointer requires four bytes (ball nodes store data and two child pointers the data field requires sixteen bytes and each pointer requires four bytes (call nodes store data and parent pointerand internal nodes store two child pointers the data field requires eight bytes and each pointer requires four bytes (donly leaf nodes store datainternal nodes store two child pointers the data field requires eight bytes and each pointer requires four bytes why is the bst property defined so that nodes with values equal to the value of the root appear only in the right subtreerather than allow equal-valued nodes to appear in either subtree (ashow the bst that results from inserting the values and (in that order(bshow the enumerations for the tree of (athat result from doing preorder traversalan inorder traversaland postorder traversal draw the bst that results from adding the value to the bst shown in figure ( draw the bst that results from deleting the 
value from the bst of figure ( write function that prints out the node values for bst in sorted order from highest to lowest
sec exercises write recursive function named smallcount thatgiven the pointer to the root of bst and key kreturns the number of nodes having key values less than or equal to function smallcount should visit as few nodes in the bst as possible write recursive function named printrange thatgiven the pointer to the root of bsta low key valueand high key valueprints in sorted order all records whose key values fall between the two given keys function printrange should visit as few nodes in the bst as possible write recursive function named checkbst thatgiven the pointer to the root of binary treewill return true if the tree is bstand false if it is not describe simple modification to the bst that will allow it to easily support finding the kth smallest value in th(log naverage case time then write pseudo-code function for finding the kth smallest value in your modified bst what are the minimum and maximum number of elements in heap of height where in max-heap might the smallest element reside show the max-heap that results from running buildheap on the following values stored in an array (ashow the heap that results from deleting the maximum value from the max-heap of figure (bshow the heap that results from deleting the element with value from the max-heap of figure revise the heap definition of figure to implement min-heap the member function removemax should be replaced by new function called removemin build the huffman coding tree and determine the codes for the following set of letters and weightsletter frequency what is the expected length in bits of message containing characters for this frequency distribution
chap binary trees what will the huffman coding tree look like for set of sixteen characters all with equal weightwhat is the average code length for letter in this casehow does this differ from the smallest possible fixed length code for sixteen characters set of characters with varying weights is assigned huffman codes if one of the characters is assigned code then(adescribe all codes that cannot have been assigned (bdescribe all codes that must have been assigned assume that sample alphabet has the following weightsletter frequency (afor this alphabetwhat is the worst-case number of bits required by the huffman code for string of letterswhat string(shave the worstcase performance(bfor this alphabetwhat is the best-case number of bits required by the huffman code for string of letterswhat string(shave the bestcase performance(cwhat is the average number of bits required by character using the huffman code for this alphabet you must keep track of some data your options are( linked-list maintained in sorted order ( linked-list of unsorted records ( binary search tree ( an array-based list maintained in sorted order ( an array-based list of unsorted records for each of the following scenarioswhich of these choices would be bestexplain your answer (athe records are guaranteed to arrive already sorted from lowest to highest ( whenever record is insertedits key value will always be greater than that of the last record inserteda total of inserts will be interspersed with searches (bthe records arrive with values having uniform random distribution (so the bst is likely to be well balanced , , insertions are performedfollowed by searches (cthe records arrive with values having uniform random distribution (so the bst is likely to be well balanced insertions are interspersed with searches
(dthe records arrive with values having uniform random distribution (so the bst is likely to be well balanced insertions are performedfollowed by , , searches projects re-implement the composite design for the binary tree node class of figure using flyweight in place of null pointers to empty nodes one way to deal with the "problemof null pointers in binary trees is to use that space for some other purpose one example is the threaded binary tree extending the node implementation of figure the threaded binary tree stores with each node two additional bit fields that indicate if the child pointers lc and rc are regular pointers to child nodes or threads if lc is not pointer to non-empty child ( if it would be null in regular binary tree)then it instead stores pointer to the inorder predecessor of that node the inorder predecessor is the node that would be printed immediately before the current node in an inorder traversal if rc is not pointer to childthen it instead stores pointer to the node' inorder successor the inorder successor is the node that would be printed immediately after the current node in an inorder traversal the main advantage of threaded binary trees is that operations such as inorder traversal can be implemented without using recursion or stack re-implement the bst as threaded binary treeand include non-recursive version of the preorder traversal implement city database using bst to store the database records each database record contains the name of the city ( string of arbitrary lengthand the coordinates of the city expressed as integer xand -coordinates the bst should be organized by city name your database should allow records to be inserteddeleted by name or coordinateand searched by name or coordinate another operation that should be supported is to print all records within given distance of specified point collect running-time statistics for each operation which operations can be implemented reasonably efficiently ( in th(log ntime in the 
average caseusing bstcan the database system be made more efficient by using one or more additional bsts to organize the records by location create binary tree adt that includes generic traversal methods that take visitoras described in section write functions count and bstcheck of section as visitors to be used with the generic traversal method
chap binary trees implement priority queue class based on the max-heap class implementation of figure the following methods should be supported for manipulating the priority queuevoid enqueue(int objectidint priority)int dequeue()void changeweight(int objectidint newpriority)method enqueue inserts new object into the priority queue with id number objectid and priority priority method dequeue removes the object with highest priority from the priority queue and returns its object id method changeweight changes the priority of the object with id number objectid to be newpriority the type for should be class that stores the object id and the priority for that object you will need mechanism for finding the position of the desired object within the heap use an arraystoring the object with objectid in position (be sure in your testing to keep the objectids within the array bounds you must also modify the heap implementation to store the object' position in the auxiliary array so that updates to objects in the heap can be updated as well in the array the huffman coding tree function buildhuff of figure manipulates sorted list this could result in th( algorithmbecause placing an intermediate huffman tree on the list could take th(ntime revise this algorithm to use priority queue based on min-heap instead of list complete the implementation of the huffman coding treebuilding on the code presented in section include function to compute and store in table the codes for each letterand functions to encode and decode messages this project can be further extended to support file compression to do so requires adding two steps( read through the input file to generate actual frequencies for all letters in the fileand ( store representation for the huffman tree at the beginning of the encoded output file to be used by the decoding function if you have trouble with devising such representationsee section
Non-Binary Trees

Many organizations are hierarchical in nature, such as the military and most businesses. Consider a company with a president and some number of vice presidents who report to the president. Each vice president has some number of direct subordinates, and so on. If we wanted to model this company with a data structure, it would be natural to think of the president in the root node of a tree, the vice presidents at level 1, and their subordinates at lower levels in the tree as we go down the organizational hierarchy.

Because the number of vice presidents is likely to be more than two, this company's organization cannot easily be represented by a binary tree. We need instead to use a tree whose nodes have an arbitrary number of children. Unfortunately, when we permit trees to have nodes with an arbitrary number of children, they become much harder to implement than binary trees. We consider such trees in this chapter. To distinguish them from the more commonly used binary tree, we use the term general tree.

This chapter first presents general tree terminology. It then presents a simple representation for solving the important problem of processing equivalence classes, followed by several pointer-based implementations for general trees. Aside from general trees and binary trees, there are also uses for trees whose internal nodes have a fixed number of children, K, where K is something other than two. Such trees are known as K-ary trees; the properties of binary trees generalize to them. Sequential representations, useful for applications such as storing trees on disk, are covered at the end of the chapter.

General Tree Definitions and Terminology

A tree T is a finite set of one or more nodes such that there is one designated node R, called the root of T. If the set (T - {R}) is not empty, these nodes are partitioned
Figure: Notation for general trees. Node R is the parent of nodes V, S1, and S2. Thus, V, S1, and S2 are children of R. Nodes R and P are ancestors of V. Nodes V, S1, and S2 are called siblings. The oval surrounds the subtree having V as its root.

into n >= 0 disjoint subsets T0, T1, ..., Tn-1, each of which is a tree, and whose roots R1, R2, ..., Rn, respectively, are children of R. The subsets Ti (0 <= i < n) are said to be subtrees of T. These subtrees are ordered in that Ti is said to come before Tj if i < j. By convention, the subtrees are arranged from left to right, with subtree T0 called the leftmost child of R. A node's out degree is the number of children for that node. A forest is a collection of one or more trees. The figure presents further tree notation, generalized from the notation for binary trees presented in the previous chapter.

Each node in a tree has precisely one parent, except for the root, which has no parent. From this observation, it immediately follows that a tree with n nodes must have n - 1 edges, because each node, aside from the root, has one edge connecting that node to its parent.

An ADT for General Tree Nodes

Before discussing general tree implementations, we should first make precise what operations such implementations must support. Any implementation must be able to initialize a tree. Given a tree, we need access to the root of that tree. There must be some way to access the children of a node. In the case of the ADT for binary tree nodes, this was done by providing member functions that give explicit
/** General tree ADT */
interface GenTree<E> {
  public void clear();      // Clear the tree
  public GTNode<E> root();  // Return the root
  // Make the tree have a new root, give first child and sib
  public void newRoot(E value, GTNode<E> first, GTNode<E> sib);
  public void newLeftChild(E value);  // Add left child
}

Figure: The general tree node and general tree classes.

access to the left and right child pointers. Unfortunately, because we do not know in advance how many children a given node will have in the general tree, we cannot give explicit functions to access each child. An alternative must be found that works for any number of children.

One choice would be to provide a function that takes as its parameter the index for the desired child. That, combined with a function that returns the number of children for a given node, would support the ability to access any node or process all children of a node. Unfortunately, this view of access tends to bias the choice for node implementations in favor of an array-based approach, because these functions favor random access to a list of children. In practice, an implementation based on a linked list is often preferred.

An alternative is to provide access to the first (or leftmost) child of a node, and to provide access to the next (or right) sibling of a node. The figure shows class declarations for general trees and their nodes based on these two access functions. The children of a node can be traversed like a list. Trying to find the next sibling of the rightmost sibling would return null.

General Tree Traversals

In the previous chapter, three tree traversals were presented for binary trees: preorder, postorder, and inorder. For general trees, preorder and postorder traversals are defined with meanings similar to their binary tree counterparts. A preorder traversal of a general tree first visits the root of the tree, then performs a preorder traversal of each subtree from left to right. A postorder traversal of a general tree performs a postorder traversal of the root's subtrees from left to right, then visits the root. An inorder traversal does not have a natural definition for the general tree, because there is no particular number of children for an internal node. An arbitrary definition, such as visit the leftmost subtree in inorder, then the root, then visit the remaining subtrees in inorder, can be invented. However, inorder traversals are generally not useful with general trees.
Figure: An example of a general tree.

Example: A preorder traversal of the tree in the figure visits the nodes in order R A C D E B F. A postorder traversal of this tree visits the nodes in order C D E A F B R.

To perform a preorder traversal, it is necessary to visit each of the children for a given node (say R) from left to right. This is accomplished by starting at R's leftmost child (call it T). From T, we can move to T's right sibling, and then to that node's right sibling, and so on.

Using the general tree ADT, here is a Java implementation to print the nodes of a general tree in preorder. Note the loop at the end, which processes the list of children by beginning with the leftmost child, then repeatedly moving to the next child until calling next returns null.

/** Preorder traversal for general trees */
static <E> void preorder(GTNode<E> rt) {
  printNode(rt);
  if (!rt.isLeaf()) {
    GTNode<E> temp = rt.leftmostChild();
    while (temp != null) {
      preorder(temp);
      temp = temp.rightSibling();
    }
  }
}

The Parent Pointer Implementation

Perhaps the simplest general tree implementation is to store for each node only a pointer to that node's parent. We will call this the parent pointer implementation.
Clearly this implementation is not general purpose, because it is inadequate for such important operations as finding the leftmost child or the right sibling for a node. Thus, it may seem to be a poor idea to implement a general tree in this way. However, the parent pointer implementation stores precisely the information required to answer the following useful question: "Given two nodes, are they in the same tree?" To answer the question, we need only follow the series of parent pointers from each node to its respective root. If both nodes reach the same root, then they must be in the same tree. If the roots are different, then the two nodes are not in the same tree. The process of finding the ultimate root for a given node we will call FIND.

The parent pointer representation is most often used to maintain a collection of disjoint sets. Two disjoint sets share no members in common (their intersection is empty). A collection of disjoint sets partitions some objects such that every object is in exactly one of the disjoint sets. There are two basic operations that we wish to support: (1) determine if two objects are in the same set, and (2) merge two sets together. Because two merged sets are united, the merging operation is usually called UNION, and the whole process of determining if two objects are in the same set and then merging the sets goes by the name "UNION/FIND."

To implement UNION/FIND, we represent each disjoint set with a separate general tree. Two objects are in the same disjoint set if they are in the same tree. Every node of the tree (except for the root) has precisely one parent. Thus, each node requires the same space to represent it. The collection of objects is typically stored in an array, where each element of the array corresponds to one object, and each element stores the object's value. The objects also correspond to nodes in the various disjoint trees (one tree for each disjoint set), so we also store the parent value with each object in the array. Those nodes that are the roots of their respective trees store an appropriate indicator. Note that this representation means that a single array is being used to implement a collection of trees. This makes it easy to merge trees together with UNION operations.

The figure shows the parent pointer implementation for the general tree, called ParPtrTree. This class is greatly simplified from the earlier declarations, because we need only a subset of the general tree operations. Instead of implementing a separate node class, ParPtrTree simply stores an array where each array element corresponds to a node of the tree. Each position of the array stores the value for a node and the array position for the parent of that node. Class ParPtrTree is given two new functions, differ and union. Function differ checks if two
objects are in different sets, and function union merges two sets together. A private function find is used to find the ultimate root for an object.

An application using the UNION/FIND operations should store a set of n objects, where each object is assigned a unique index in the range 0 to n - 1. The indices refer to the corresponding parent pointers in the array. Class ParPtrTree creates and initializes the UNION/FIND array, and functions differ and union take array indices as inputs.

The figure illustrates the parent pointer implementation. Note that the nodes can appear in any order within the array, and the array can store up to n separate trees. For example, the figure shows two trees stored in the same array. Thus, a single array can store a collection of items distributed among an arbitrary (and changing) number of disjoint subsets.

Consider the problem of assigning the members of a set to disjoint subsets called equivalence classes. Recall that an equivalence relation is reflexive, symmetric, and transitive. Thus, if objects A and B are equivalent, and objects B and C are equivalent, we must be able to recognize that objects A and C are also equivalent.

There are many practical uses for disjoint sets and representing equivalences. For example, consider the figure showing a graph of ten nodes. Notice that for nodes A through I, there is some series of edges that connects any pair of the nodes, but node J is disconnected from the rest of the nodes. Such a graph might be used to represent connections such as wires between components on a circuit board, or roads between cities. We can consider two nodes of the graph to be equivalent if there is a path between them. Thus, nodes A, H, and E would be equivalent in the figure, but J is not equivalent to any other. A subset of equivalent (connected) edges in a graph is called a connected component. The goal is to quickly classify the objects into disjoint sets that correspond to the connected components. Another application for UNION/FIND occurs in Kruskal's algorithm for computing the minimal cost spanning tree for a graph.

The input to the UNION/FIND algorithm is typically a series of equivalence pairs. In the case of the connected components example, the equivalence pairs would simply be the set of edges in the graph. An equivalence pair might say that object C is equivalent to object A. If so, C and A are placed in the same subset. If a later equivalence relates A and B, then by implication C is also equivalent to B. Thus, an equivalence pair may cause two subsets to merge, each of which contains several objects.
sec the parent pointer implementation

/** General tree class implementation for union/find */
class ParPtrTree {
  private Integer[] array;          // Node array

  public ParPtrTree(int size) {
    array = new Integer[size];      // Create node array
    for (int i = 0; i < size; i++)
      array[i] = null;
  }

  /** Determine if nodes are in different trees */
  public boolean differ(int a, int b) {
    Integer root1 = find(a);        // Find root of node a
    Integer root2 = find(b);        // Find root of node b
    return root1 != root2;          // Compare roots
  }

  /** Merge two subtrees */
  public void union(int a, int b) {
    Integer root1 = find(a);        // Find root of node a
    Integer root2 = find(b);        // Find root of node b
    if (root1 != root2)
      array[root2] = root1;         // Merge
  }

  String print() {
    StringBuffer out = new StringBuffer(100);
    for (int i = 0; i < array.length; i++) {
      if (array[i] == null)
        out.append("-1 ");
      else {
        Integer temp = array[i];
        for (int j = 0; j < array.length; j++)
          if (temp == array[j]) {
            out.append(j + " ");
            break;
          }
      }
    }
    return out.toString();
  }

  public Integer find(Integer curr) {
    if (array[curr] == null) return curr;   // At root
    while (array[curr] != null)
      curr = array[curr];
    return curr;
  }
}

figure: general tree implementation using parent pointers for the union/find algorithm
chap non-binary trees parent' index node index label figure the parent pointer array implementation each node corresponds to position in the node array stores its value and pointer to its parent the parent pointers are represented by the position in the array of the parent the root of any tree stores rootrepresented graphically by slash in the "parent' indexbox this figure shows two trees stored in the same parent pointer arrayone rooted at rand the other rooted at figure graph with two connected components equivalence classes can be managed efficiently with the union/find algorithm initiallyeach object is at the root of its own tree an equivalence pair is processed by checking to see if both objects of the pair are in the same tree using function differ if they are in the same treethen no change need be made because the objects are already in the same equivalence class otherwisethe two equivalence classes should be merged by the union function example as an example of solving the equivalence class problemconsider the graph of figure initiallywe assume that each node of the graph is in distinct equivalence class this is represented by storing each as the root of its own tree figure (ashows this initial configuration
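The processing scheme just described can be sketched as a short self-contained class. This condenses the ParPtrTree figure from earlier in the section; the class name, the static components helper, and the 0..9 indexing standing in for the example graph's node labels A..J are illustrative assumptions, not code from the text:

```java
// A condensed sketch of equivalence-class processing with the parent
// pointer representation. Indices 0..9 stand in for labels A..J of the
// example graph; names here are illustrative.
class EquivDemo {
    private final Integer[] parent;   // parent[i] == null means i is a root

    public EquivDemo(int n) { parent = new Integer[n]; }

    public int find(int curr) {       // follow parent pointers to the root
        while (parent[curr] != null) curr = parent[curr];
        return curr;
    }

    public boolean differ(int a, int b) { return find(a) != find(b); }

    public void union(int a, int b) { // merge the two trees if distinct
        int ra = find(a), rb = find(b);
        if (ra != rb) parent[rb] = ra;
    }

    // Process a list of equivalence pairs (here, the edges of a graph)
    public static EquivDemo components(int n, int[][] edges) {
        EquivDemo uf = new EquivDemo(n);
        for (int[] e : edges)
            if (uf.differ(e[0], e[1]))  // already equivalent? then skip
                uf.union(e[0], e[1]);
        return uf;
    }
}
```

Fed the edges of the ten-node example graph, every node among A..I ends up sharing one root, while J remains a connected component of its own.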
sec the parent pointer implementation ( ( ( (dfigure an example of equivalence processing (ainitial configuration for the ten nodes of the graph in figure the nodes are placed into ten independent equivalence classes (bthe result of processing five edges(ab)(ch)(gf)(de)and (if(cthe result of processing two more edges(haand (eg(dthe result of processing edge (hed
chap non-binary trees using the parent pointer array representation nowconsider what happens when equivalence relationship (abis processed the root of the tree containing is aand the root of the tree containing is to make them equivalentone of these two nodes is set to be the parent of the other in this case it is irrelevant which points to whichso we arbitrarily select the first in alphabetical order to be the root this is represented in the parent pointer array by setting the parent field of (the node in array position of the arrayto store pointer to equivalence pairs (ch)(gf)and (deare processed in similar fashion when processing the equivalence pair (if)because and are both their own rootsi is set to point to note that this also makes equivalent to the result of processing these five equivalences is shown in figure (bthe parent pointer representation places no limit on the number of nodes that can share parent to make equivalence processing as efficient as possiblethe distance from each node to the root of its respective tree should be as small as possible thuswe would like to keep the height of the trees small when merging two equivalence classes together ideallyeach tree would have all nodes pointing directly to the root achieving this goal all the time would require too much additional processing to be worth the effortso we must settle for getting as close as possible low-cost approach to reducing the height is to be smart about how two trees are joined together one simple techniquecalled the weighted union rulejoins the tree with fewer nodes to the tree with more nodes by making the smaller tree' root point to the root of the bigger tree this will limit the total depth of the tree to (log )because the depth of nodes only in the smaller tree will now increase by oneand the depth of the deepest node in the combined tree can only be at most one deeper than the deepest node before the trees were combined the total number of nodes in the combined tree is 
therefore at least twice the number in the smaller subtree thusthe depth of any node can be increased at most log times when equivalences are processed example when processing equivalence pair (ifin figure ( ) is the root of tree with two nodes while is the root of tree with only one node thusi is set to point to rather than the other way around figure (cshows the result of processing two more equivalence pairs(haand (egfor the first pairthe root for is while the root for is itself both trees contain two nodesso it is an arbitrary decision
as to which node is set to be the root for the combined tree. In the case of equivalence pair (E, G), the root of E is D while the root of G is F. Because F is the root of the larger tree, node D is set to point to F. Not all equivalences will combine two trees. If edge (F, G) is processed when the representation is in the state shown in figure (c), no change will be made because F is already the root for G.

The weighted union rule helps to minimize the depth of the tree, but we can do better than this. Path compression is a method that tends to create extremely shallow trees. Path compression takes place while finding the root for a given node X. Call this root R. Path compression resets the parent of every node on the path from X to R to point directly to R. This can be implemented by first finding R. A second pass is then made along the path from X to R, assigning the parent field of each node encountered to R. Alternatively, a recursive algorithm can be implemented as follows. This version of find not only returns the root of the current node, but also makes all ancestors of the current node point to the root.

  public Integer find(Integer curr) {
    if (array[curr] == null) return curr;   // At root
    array[curr] = find(array[curr]);
    return array[curr];
  }

example: figure (d) shows the result of processing equivalence pair (H, E) on the representation shown in figure (c), using the standard weighted union rule without path compression. The next figure illustrates the path compression process for the same equivalence pair. After locating the root for node H, we can perform path compression to make H point directly to root object A. Likewise, E is set to point directly to its root, F. Finally, object A is set to point to root object F.

Note that path compression takes place during the find operation, not during the merge operation. In the figure, this means that nodes B, C, and H have node A remain as their parent, rather than changing their parent to be F. While we might prefer to have these nodes point to F, to accomplish this would require that additional information from the find operation be passed back to the union
operation. This would not be practical. Path compression keeps the cost of each find operation very close to constant.

To be more precise about what is meant by "very close to constant," the cost of path compression for n find operations on n nodes (when combined with the weighted union rule for joining sets) is Θ(n log* n). The notation log* n means the number of times that the log of n must be taken before n ≤ 1. For example, log* 65536 is 4 because log 65536 = 16, log 16 = 4, log 4 = 2, and finally log 2 = 1. Thus, log* n grows very slowly, so the cost for a series of n find operations is very close to n.

figure: an example of path compression, showing the result of processing equivalence pair (H, E) on the representation of figure (c).

Note that this does not mean that the tree resulting from processing n equivalence pairs necessarily has depth Θ(log* n). One can devise a series of equivalence operations that yields Θ(log n) depth for the resulting tree. However, many of the equivalences in such a series will look only at the roots of the trees being merged, requiring little processing time. The total amount of processing time required for n operations will be Θ(n log* n), yielding nearly constant time for each equivalence operation. This is an example of the technique of amortized analysis, discussed further in a later section.

general tree implementations

We now tackle the problem of devising an implementation for general trees that allows efficient processing of all member functions of the ADT shown in the earlier figure. This section presents several approaches to implementing general trees. Each implementation yields advantages and disadvantages in the amount of space required to store a node and the relative ease with which key operations can be performed. General tree implementations should place no restriction on how many children a node may have. In some applications, once a node is created, the number of children never changes. In such cases, a fixed amount of space can be allocated for the node when it is created, based on the number of children for the node. Matters become more complicated if children can be added to or deleted from a node, requiring that the node's space allocation be adjusted accordingly.
sec general tree implementations index val par figure the "list of childrenimplementation for general trees the column of numbers to the left of the node array labels the array indices the column labeled "valstores node values the column labeled "parstores pointers to the parents for claritythese pointers are shown as array indices the last column stores pointers to the linked list of children for each internal node each element on the linked list stores pointer to one of the node' children (shown as the array index of the target nodelist of children our first attempt to create general tree implementation is called the "list of childrenimplementation for general trees it simply stores with each internal node linked list of its childrenin order from left to right this is illustrated by figure the "list of childrenimplementation stores the tree nodes in an array each node contains valuea pointer to its parentand pointer to linked list of the node' childrenstored from left to right each list element contains pointer to one child thusthe leftmost child of node can be found directly because it is the first element in the linked list howeverto find the right sibling for node is more difficult consider the case of node and its parent to find ' right siblingwe must move down the child list of until the linked list element storing the pointer to has been found going one step further takes us to the linked list element that stores pointer to ' right sibling thusin the worst caseto find ' right sibling requires that all children of ' parent be searched combining trees using this representation is difficult if each tree is stored in separate node array if the nodes of both trees are stored in single node arraythen
chap non-binary trees left val par right figure the "left-child/right-siblingimplementation adding tree as subtree of node is done by simply adding the root of to ' list of children the left-child/right-sibling implementation with the "list of childrenimplementationit is difficult to access node' right sibling figure presents an improvement hereeach node stores its value and pointers to its parentleftmost childand right sibling thuseach of the basic adt operations can be implemented by reading value directly from the node if two trees are stored within the same node arraythen adding one as the subtree of the other simply requires setting three pointers combining trees in this way is illustrated by figure this implementation is more space efficient than the "list of childrenimplementationand each node requires fixed amount of space in the node array dynamic node implementations the two general tree implementations just described use an array to store the collection of nodes in contrastour standard implementation for binary trees stores each node as separate dynamic object containing its value and pointers to its two children unfortunatelynodes of general tree can have any number of childrenand this number may change during the life of the node general tree node implementation must support these properties one solution is simply to limit the number
sec general tree implementations left val par right figure combining two trees that use the "left-child/right-siblingimplementation the subtree rooted at in figure now becomes the first child of three pointers are adjusted in the node arraythe left-child field of now points to node rwhile the right-sibling field for points to node the parent field of node points to node of children permitted for any node and allocate pointers for exactly that number of children there are two major objections to this firstit places an undesirable limit on the number of childrenwhich makes certain trees unrepresentable by this implementation secondthis might be extremely wasteful of space because most nodes will have far fewer children and thus leave some pointer positions empty the alternative is to allocate variable space for each node there are two basic approaches one is to allocate an array of child pointers as part of the node in essenceeach node stores an array-based list of child pointers figure illustrates the concept this approach assumes that the number of children is known when the node is createdwhich is true for some applications but not for others it also works best if the number of children does not change if the number of children does change (especially if it increases)then some special recovery mechanism must be provided to support change in the size of the child pointer array one possibility is to allocate new node of the correct size from free store and return the old copy of the node to free store for later reuse this works especially well in language with built-in garbage collection such as java for exampleassume that node initially has two childrenand that space for two child pointers is allocated when is created if third child is added to mspace for new node with three child pointers can be allocatedthe contents of is copied over to the new
chap non-binary trees val size (af (bfigure dynamic general tree representation with fixed-size arrays for the child pointers (athe general tree (bthe tree representation for each nodethe first field stores the node value while the second field stores the size of the child pointer array spaceand the old space is then returned to free store as an alternative to relying on the system' garbage collectora memory manager for variable size storage units can be implementedas described in section another possibility is to use collection of free listsone for each array sizeas described in section note in figure that the current number of children for each node is stored explicitly in size field the child pointers are stored in an array with size elements another approach that is more flexiblebut which requires more spaceis to store linked list of child pointers with each node as illustrated by figure this implementation is essentially the same as the "list of childrenimplementation of section but with dynamically allocated nodes rather than storing the nodes in an array dynamic "left-child/right-siblingimplementation the "left-child/right-siblingimplementation of section stores fixed number of pointers with each node this can be readily adapted to dynamic implementation in essencewe substitute binary tree for general tree each node of the "left-child/right-siblingimplementation points to two "childrenin new binary tree structure the left child of this new structure is the node' first child in the general tree the right child is the node' right sibling we can easily extend this conversion to forest of general treesbecause the roots of the trees can be considered siblings converting from forest of general trees to single binary tree is illustrated by figure here we simply include links from each node to its right sibling and remove links to all children except the leftmost child figure shows how this might look in an implementation with two pointers at each node
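A dynamic left-child/right-sibling node might be sketched as follows. This is an illustrative minimal class; the names LCRSNode, addChild, and numChildren are assumptions for this sketch, not the book's ADT:

```java
// Sketch of a dynamic "left-child/right-sibling" general tree node.
// Each node stores a fixed two pointers, whatever its number of children.
class LCRSNode<E> {
    E value;
    LCRSNode<E> leftChild;     // first child in the general tree
    LCRSNode<E> rightSibling;  // next child of this node's parent

    LCRSNode(E value) { this.value = value; }

    // Append a new rightmost child; cost is linear in the child count
    void addChild(LCRSNode<E> child) {
        if (leftChild == null) { leftChild = child; return; }
        LCRSNode<E> curr = leftChild;
        while (curr.rightSibling != null) curr = curr.rightSibling;
        curr.rightSibling = child;
    }

    // Count children by walking the sibling chain
    int numChildren() {
        int count = 0;
        for (LCRSNode<E> c = leftChild; c != null; c = c.rightSibling)
            count++;
        return count;
    }
}
```

Note how adding a subtree needs only pointer assignments, exactly as in the merging figure above: a new child is linked in through leftChild or a rightSibling field, and no per-node space ever has to grow.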
(ac (bfigure dynamic general tree representation with linked lists of child pointers (athe general tree (bthe tree representation root ( (bfigure converting from forest of general trees to single binary tree each node stores pointers to its left child and right sibling the tree roots are assumed to be siblings for the purpose of converting compared with the implementation illustrated by figure which requires overhead of three pointers/nodethe implementation of figure only requires two pointers per node because each node of the general tree now contains fixed number of pointersand because each function of the general tree adt can now be implemented efficientlythe dynamic "left-child/right-siblingimplementation is preferred to the other general tree implementations described in sections to -ary trees -ary trees are trees whose internal nodes all have exactly children thusa full binary tree is -ary tree the pr quadtree discussed in section is an
chap non-binary trees (af (bfigure general tree converted to the dynamic "left-child/right-siblingrepresentation compared to the representation of figure this representation requires less space ( (bfigure full and complete -ary trees (athis tree is full (but not complete(bthis tree is complete (but not fullexample of -ary tree because -ary tree nodes have fixed number of childrenunlike general treesthey are relatively easy to implement in generalk-ary trees bear many similarities to binary treesand similar implementations can be used for -ary tree nodes note that as becomes largethe potential number of null pointers growsand the difference between the required sizes for internal nodes and leaf nodes increases thusas becomes largerthe need to choose different implementations for the internal and leaf nodes becomes more pressing full and complete -ary trees are analogous to full and complete binary treesrespectively figure shows full and complete -ary trees for in practicemost applications of -ary trees limit them to be either full or complete many of the properties of binary trees extend to -ary trees equivalent theorems to those in section regarding the number of null pointers in -ary tree and the relationship between the number of leaves and the number of internal
nodes in -ary tree can be derived we can also store complete -ary tree in an arraysimilar to the approach shown in section sequential tree implementations next we consider fundamentally different approach to implementing trees the goal is to store series of node values with the minimum information needed to reconstruct the tree structure this approachknown as sequential tree implementationhas the advantage of saving space because no pointers are stored it has the disadvantage that accessing any node in the tree requires sequentially processing all nodes that appear before it in the node list in other wordsnode access must start at the beginning of the node listprocessing nodes sequentially in whatever order they are stored until the desired node is reached thusone primary virtue of the other implementations discussed in this section is lostefficient access (typically th(log ntimeto arbitrary nodes in the tree sequential tree implementations are ideal for archiving trees on disk for later use because they save spaceand the tree structure can be reconstructed as needed for later processing sequential tree implementations can also be used to serialize tree structure serialization is the process of storing an object as series of bytestypically so that the data structure can be transmitted between computers this capability is important when using data structures in distributed processing environment sequential tree implementation stores the node values as they would be enumerated by preorder traversalalong with sufficient information to describe the tree' shape if the tree has restricted formfor example if it is full binary treethen less information about structure typically needs to be stored general treebecause it has the most flexible shapetends to require the most additional shape information there are many possible sequential tree implementation schemes we will begin by describing methods appropriate to binary treesthen generalize to an implementation appropriate 
to a general tree structure.

Because every node of a binary tree is either a leaf or has two (possibly empty) children, we can take advantage of this fact to implicitly represent the tree's structure. The most straightforward sequential tree implementation lists every node value as it would be enumerated by a preorder traversal. Unfortunately, the node values alone do not provide enough information to recover the shape of the tree. In particular, as we read the series of node values, we do not know when a leaf node has been reached. However, we can treat all non-empty nodes as internal nodes with two (possibly empty) children. Only null values will be interpreted as leaf nodes, and these can be listed explicitly. Such an augmented node list provides enough information to recover the tree structure.
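The augmented preorder scheme can be sketched with a small encoder and decoder. The node class and the method names here are illustrative assumptions for the sketch:

```java
// Sketch of the sequential (preorder) representation for binary trees:
// every node value is emitted in preorder, with "/" marking a null child.
import java.util.Iterator;
import java.util.Arrays;

class SeqTree {
    static class Node {
        String val; Node left, right;
        Node(String v) { val = v; }
    }

    // Encode: preorder traversal, nulls written explicitly as "/"
    static void encode(Node n, StringBuilder out) {
        if (n == null) { out.append('/'); return; }
        out.append(n.val);
        encode(n.left, out);
        encode(n.right, out);
    }

    // Decode: consume tokens in the same preorder order
    static Node decode(Iterator<String> tokens) {
        String t = tokens.next();
        if (t.equals("/")) return null;
        Node n = new Node(t);
        n.left = decode(tokens);
        n.right = decode(tokens);
        return n;
    }
}
```

Because every non-null node is followed by exactly two (possibly "/") entries, the decoder can rebuild the shape with no extra bookkeeping, which is the property the text relies on.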
figure: sample binary tree for the sequential tree implementation examples.

example: for the binary tree of the figure above, the corresponding sequential representation would be as follows (assuming that '/' stands for null):

AB/D//CEG///FH//I//

To reconstruct the tree structure from this node list, we begin by setting node A to be the root. A's left child will be node B. Node B's left child is a null pointer, so node D must be B's right child. Node D has two null children, so node C must be the right child of node A.

To illustrate the difficulty involved in using the sequential tree representation for processing, consider searching for the right child of the root node. We must first move sequentially through the node list of the left subtree. Only at this point do we reach the value of the root's right child. Clearly the sequential representation is space efficient, but not time efficient for descending through the tree along some arbitrary path.

Assume that each node value takes a constant amount of space. An example would be if the node value is a positive integer and null is indicated by the value zero. From the full binary tree theorem, we know that the size of the node list will be about twice the number of nodes (i.e., the overhead fraction is 1/2). The extra space is required by the null pointers. We should be able to store the node list more compactly. However, any sequential implementation must recognize when a leaf node has been reached; that is, a leaf node indicates the end of a subtree. One way to do this is to explicitly list with each node whether it is an internal node or a leaf. If a node X is an internal node, then we know that its
sec sequential tree implementations two children (which may be subtreesimmediately follow in the node list if is leaf nodethen the next node in the list is the right child of some ancestor of xnot the right child of in particularthe next node will be the child of ' most recent ancestor that has not yet seen its right child howeverthis assumes that each internal node does in fact have two childrenin other wordsthat the tree is full empty children must be indicated in the node list explicitly assume that internal nodes are marked with prime ( and that leaf nodes show no mark empty children of internal nodes are indicated by '/'but the (emptychildren of leaf nodes are not represented at all note that full binary tree stores no null values with this implementationand so requires less overhead example we can represent the tree of figure as followsa /dc / hi ( note that slashes are needed for the empty children because this is not full binary tree storing bits can be considerable savings over storing null values in example each node is shown with mark if it is internalor no mark if it is leaf this requires that each node value has space to store the mark bit this might be true iffor examplethe node value were stored as -byte integer but the range of the values sored was small enough so that not all bits are used an example would be if all node values must be positive then the high-order (signbit of the integer value could be used as the mark bit another approach is to store separate bit vector to represent the status of each node in this caseeach node of the tree corresponds to one bit in the bit vector value of ' could indicate an internal nodeand ' could indicate leaf node example the bit vector for the tree if figure would be ( storing general trees by means of sequential implementation requires that more explicit structural information be included with the node list not only must the general tree implementation indicate whether node is leaf or internalit must also 
indicate how many children the node has alternativelythe implementation can indicate when node' child list has come to an end the next example dispenses with marks for internal or leaf nodes instead it includes special mark (we
chap non-binary trees will use the ")symbolto indicate the end of child list all leaf nodes are followed by ")symbol because they have no children leaf node that is also the last child for its parent would indicate this by two or more successive ")symbols example for the general tree of figure we get the sequential representation rac) ) ))bf))( note that is followed by three ")marksbecause it is leafthe last node of ' rightmost subtreeand the last node of ' rightmost subtree note that this representation for serializing general trees cannot be used for binary trees this is because binary tree is not merely restricted form of general tree with at most two children every binary tree node has left and right childthough either or both might be empty for examplethe representation of example cannot let us distinguish whether node in figure is the left or right child of node further reading the expression logn cited in section is closely related to the inverse of ackermann' function for more information about ackermann' function and the cost of path compressionsee robert tarjan' paper "on the efficiency of good but not linear set merging algorithm[tar the article "data structures and algorithms for disjoint set union problemsby galil and italiano [gi covers many aspects of the equivalence class problem foundations of multidimensional and metric data structures by hanan samet [sam treats various implementations of tree structures in detail within the context of -ary trees samet covers sequential implementations as well as the linked and array implementations such as those described in this and while these books are ostensibly concerned with spatial data structuresmany of the concepts treated are relevant to anyone who must implement tree structures exercises write an algorithm to determine if two general trees are identical make the algorithm as efficient as you can analyze your algorithm' running time
write an algorithm to determine if two binary trees are identical when the ordering of the subtrees for node is ignored for exampleif tree has root node with value rleft child with value and right child with value bthis would be considered identical to another tree with root node value rleft child value band right child value make the algorithm as efficient as you can analyze your algorithm' running time how much harder would it be to make this algorithm work on general tree write postorder traversal function for general treessimilar to the preorder traversal function named print given in section write function that takes as input general tree and returns the number of nodes in that tree write your function to use the gentree and gtnode adts of figure describe how to implement the weighted union rule efficiently in particulardescribe what information must be stored with each node and how this information is updated when two trees are merged modify the implementation of figure to support the weighted union rule potential alternative to the weighted union rule for combining two trees is the height union rule the height union rule requires that the root of the tree with greater height become the root of the union explain why the height union rule can lead to worse average time behavior than the weighted union rule using the weighted union rule and path compressionshow the array for the parent pointer implementation that results from the following series of equivalences on set of objects indexed by the values through initiallyeach element in the set should be in separate equivalence class when two trees to be merged are the same sizemake the root with greater index value be the child of the root with lesser index value ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( using the weighted union rule and path compressionshow the array for the parent pointer implementation that results from the following series of equivalences on set of objects indexed by the values through initiallyeach 
element in the set should be in separate equivalence class when two trees to be merged are the same sizemake the root with greater index value be the child of the root with lesser index value ( ( ( ( ( ( ( ( ( ( ( ( ( (
chap non-binary trees devise series of equivalence statements for collection of sixteen items that yields tree of height when both the weighted union rule and path compression are used what is the total number of parent pointers followed to perform this series one alternative to path compression that gives similar performance gains is called path halving in path halvingwhen the path is traversed from the node to the rootwe make the grandparent of every other node on the path the new parent of write version of find that implements path halving your find operation should work as you move up the treerather than require the two passes needed by path compression analyze the fraction of overhead required by the "list of childrenimplementationthe "left-child/right-siblingimplementationand the two linked implementations of section how do these implementations compare in space efficiency using the general tree adt of figure write function that takes as input the root of general tree and returns binary tree generated by the conversion process illustrated by figure use mathematical induction to prove that the number of leaves in nonempty full -ary tree is ( ) where is the number of internal nodes derive the formulae for computing the relatives of non-empty complete -ary tree node stored in the complete tree representation of section find the overhead fraction for full -ary tree implementation with space requirements as follows(aall nodes store datak child pointersand parent pointer the data field requires four bytes and each pointer requires four bytes (ball nodes store data and child pointers the data field requires sixteen bytes and each pointer requires four bytes (call nodes store data and parent pointerand internal nodes store child pointers the data field requires eight bytes and each pointer requires four bytes (donly leaf nodes store dataonly internal nodes store child pointers the data field requires four bytes and each pointer requires two bytes (awrite out the 
sequential representation for figure using the coding illustrated by example (bwrite out the sequential representation for figure using the coding illustrated by example
Sec. Exercises

Figure: A sample tree for the exercises.

Draw the binary tree representing the following sequential representation for binary trees illustrated by the example: abd// // / /

Draw the binary tree representing the following sequential representation for full binary trees illustrated by the example: / / / . Show the bit vector for leaf and internal nodes (as illustrated by the example) for this tree.

Draw the general tree represented by the following sequential representation for general trees illustrated by the example: xpc) )rv) )))

(a) Write a function to decode the sequential representation for binary trees illustrated by the example. The input should be the sequential representation and the output should be a pointer to the root of the resulting binary tree.
(b) Write a function to decode the sequential representation for full binary trees illustrated by the example. The input should be the sequential representation and the output should be a pointer to the root of the resulting binary tree.
(c) Write a function to decode the sequential representation for general trees illustrated by the example. The input should be the sequential representation and the output should be a pointer to the root of the resulting general tree.

Devise a sequential representation for Huffman coding trees suitable for use as part of a file compression utility (see the related project).
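A decoder of the kind asked for in part (a) can be sketched as follows, assuming the preorder coding in which each non-'/' character is a node value and '/' marks an empty subtree (the example string, class name, and field names here are illustrative assumptions).

```java
// Sketch: decode a preorder sequential representation of a binary tree
// in which '/' marks an empty subtree, e.g. "AB/D//CEG///FH//I//".
class SeqDecode {
    static class BinNode {
        char val;
        BinNode left, right;
        BinNode(char v) { val = v; }
    }

    private static int pos; // cursor into the representation string

    static BinNode decode(String rep) {
        pos = 0;
        return build(rep);
    }

    private static BinNode build(String rep) {
        char c = rep.charAt(pos++);
        if (c == '/') return null;  // empty subtree
        BinNode n = new BinNode(c);
        n.left = build(rep);        // preorder: left subtree next,
        n.right = build(rep);       // then right subtree
        return n;
    }
}
```

The recursion mirrors the encoding: because the representation is a preorder traversal with explicit null marks, a single left-to-right scan reconstructs the tree unambiguously.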
Projects

Write classes that implement the general tree class declarations of the figure using the dynamic "left-child/right-sibling" representation described in the chapter.

Write classes that implement the general tree class declarations of the figure using the linked general tree implementation with child pointer arrays. Your implementation should support only fixed-size nodes that do not change their number of children once they are created. Then, reimplement these classes with the linked list of children representation. How do the two implementations compare in space and time efficiency and ease of implementation?

Write classes that implement the general tree class declarations of the figure using the linked general tree implementation with child pointer arrays. Your implementation must be able to support changes in the number of children for a node. When created, a node should be allocated with only enough space to store its initial set of children. Whenever a new child is added to a node such that the array overflows, allocate a new array from free store that can store twice as many children.

Implement a BST file archiver. Your program should take a BST created in main memory using the implementation described earlier and write it out to disk using one of the sequential representations of this chapter. It should also be able to read in disk files using your sequential representation and create the equivalent main-memory representation.

Use the union/find algorithm to implement a solution to the following problem. Given a set of points represented by their x- and y-coordinates, assign the points to clusters. Any two points are defined to be in the same cluster if they are within a specified distance d of each other. For the purpose of this problem, clustering is an equivalence relationship. In other words, points A, B, and C are defined to be in the same cluster if the distance between A and B is less than d and the distance between A and C is also less than d, even if the distance between B and C is greater than d. To solve the problem, compute the distance between each pair of points, using the equivalence processing algorithm to merge clusters whenever two points are within the specified distance. What is the asymptotic complexity of this algorithm? Where is the bottleneck in processing?

In this project, you will run some empirical tests to determine if some variations on path compression in the union/find algorithm will lead to improved performance. You should compare the following four implementations:
(a) Standard union/find with path compression and weighted union.
(b) Path compression and weighted union, except that path compression is done after the union, instead of during the find operation. That is, make all nodes along the paths traversed in both trees point directly to the root of the larger tree.
(c) Weighted union and a simplified form of path compression. At the end of every FIND operation, make the node point directly to its tree's root (but don't change the pointers for the other nodes along the path).
(d) Weighted union and a simplified form of path compression. Both nodes in the equivalence will be set to point directly to the root of the larger tree after the union operation. For example, consider processing the equivalence (A, B), where X is the root of the tree containing A and Y is the root of the tree containing B. Assume the tree with root X is bigger than the tree with root Y. At the end of the union/find operation, nodes A, B, and Y will all point directly to X.
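The clustering project above can be sketched with a straightforward union/find. This is a minimal sketch under stated assumptions: the class name, method names, and the parallel-array point representation are all illustrative, and it uses path compression in FIND.

```java
// Sketch: assign points to clusters; two points join the same cluster
// whenever some chain of pairwise distances < d connects them.
class PointCluster {
    // Returns a label per point; equal labels mean same cluster.
    static int[] cluster(double[] x, double[] y, double d) {
        int n = x.length;
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        // Compare every pair of points: Theta(n^2) distance checks,
        // which is the bottleneck the project asks about.
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double dx = x[i] - x[j], dy = y[i] - y[j];
                if (Math.sqrt(dx * dx + dy * dy) < d)
                    parent[find(parent, i)] = find(parent, j); // UNION
            }
        // Normalize: label each point with its cluster's root.
        int[] label = new int[n];
        for (int i = 0; i < n; i++) label[i] = find(parent, i);
        return label;
    }

    // FIND with path compression.
    static int find(int[] parent, int i) {
        if (parent[i] != i) parent[i] = find(parent, parent[i]);
        return parent[i];
    }
}
```

Because every pair must be examined once, the distance computations dominate; the union/find operations themselves are nearly constant-time amortized.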
Sorting and Searching
Internal Sorting

We sort many things in our everyday lives: a handful of cards when playing bridge, bills and other piles of paper, jars of spices, and so on. And we have many intuitive strategies that we can use to do the sorting, depending on how many objects we have to sort and how hard they are to move around. Sorting is also one of the most frequently performed computing tasks. We might sort the records in a database so that we can search the collection efficiently. We might sort the records by zip code so that we can print and mail them more cheaply. We might use sorting as an intrinsic part of an algorithm to solve some other problem, such as when computing the minimum-cost spanning tree.

Because sorting is so important, naturally it has been studied intensively and many algorithms have been devised. Some of these algorithms are straightforward adaptations of schemes we use in everyday life. Others are totally alien to how humans do things, having been invented to sort thousands or even millions of records stored on the computer. After years of study, there are still unsolved problems related to sorting. New algorithms are still being developed and refined for special-purpose applications.

While introducing this central problem in computer science, this chapter has a secondary purpose of illustrating many important issues in algorithm design and analysis. The collection of sorting algorithms presented will illustrate that divide-and-conquer is a powerful approach to solving a problem, and that there are multiple ways to do the dividing. Mergesort divides a list in half. Quicksort divides a list into big values and small values. And Radix Sort divides the problem by working on one digit of the key at a time.

Sorting algorithms will be used to illustrate a wide variety of analysis techniques in this chapter. We'll find that it is possible for an algorithm to have an average case whose growth rate is significantly smaller than its worst case (Quicksort). We'll see how it is possible to speed up sorting algorithms (both Shellsort and Quicksort) by taking advantage of the best-case behavior of another algorithm (Insertion Sort). We'll see several examples of how we can tune an algorithm for better performance. We'll see that special-case behavior by some algorithms makes them the best solution for special niche applications (Heapsort). Sorting provides an example of a significant technique for analyzing the lower bound for a problem. Sorting will also be used to motivate the introduction to file processing presented in a later chapter.

The present chapter covers several standard algorithms appropriate for sorting a collection of records that fit in the computer's main memory. It begins with a discussion of three simple, but relatively slow, algorithms requiring Θ(n²) time in the average and worst cases. Several algorithms with considerably better performance are then presented, some with Θ(n log n) worst-case running time. The final sorting method presented requires only Θ(n) worst-case time under special conditions. The chapter concludes with a proof that sorting in general requires Ω(n log n) time in the worst case.

Sorting Terminology and Notation

Except where noted otherwise, input to the sorting algorithms presented in this chapter is a collection of records stored in an array. Records are compared to one another by requiring that their type implement the Comparable interface. This will require that the class implements the compareTo method, which returns a value less than zero, equal to zero, or greater than zero depending on its relationship to the record being compared to. The compareTo method is defined to extract the appropriate key field from the record. We also assume that for every record type there is a swap function that can interchange the contents of two records in the array.

Given a set of records r1, r2, ..., rn with key values k1, k2, ..., kn, the sorting problem is to arrange the records into any order s such that records rs1, rs2, ..., rsn have keys obeying the property ks1 ≤ ks2 ≤ ... ≤ ksn. In other words, the sorting problem is to arrange a set of records so that the values of their key fields are in non-decreasing order.

As defined, the sorting problem allows input with two or more records that have the same key value. Certain applications require that input not contain duplicate key values. The sorting algorithms presented in this chapter and the next can handle duplicate key values unless noted otherwise.

When duplicate key values are allowed, there might be an implicit ordering to the duplicates, typically based on their order of occurrence within the input. It might be desirable to maintain this initial ordering among duplicates.
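The comparison convention described in this section can be sketched with a small record class. This is a minimal sketch: the Payroll name and its fields are illustrative, and the key here is simply the id field.

```java
// Sketch: a record type usable by this chapter's sorting routines.
// compareTo extracts and compares the key field (here, id).
class Payroll implements Comparable<Payroll> {
    int id;        // the key field
    String name;

    Payroll(int id, String name) {
        this.id = id;
        this.name = name;
    }

    // Returns <0, 0, or >0 as this record's key compares to p's key.
    public int compareTo(Payroll p) {
        return Integer.compare(id, p.id);
    }
}
```

A sorting routine written against `Comparable` never needs to know which field is the key; that decision is encapsulated in `compareTo`.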
A sorting algorithm is said to be stable if it does not change the relative ordering of records with identical key values. Many, but not all, of the sorting algorithms presented in this chapter are stable, or can be made stable with minor changes.

When comparing two sorting algorithms, the most straightforward approach would seem to be simply to program both and measure their running times. An example of such timings is presented in a later figure. However, such a comparison can be misleading because the running time for many sorting algorithms depends on specifics of the input values. In particular, the number of records, the size of the keys and the records, the allowable range of the key values, and the amount by which the input records are "out of order" can all greatly affect the relative running times for sorting algorithms.

When analyzing sorting algorithms, it is traditional to measure the number of comparisons made between keys. This measure is usually closely related to the running time for the algorithm and has the advantage of being machine- and datatype-independent. However, in some cases records might be so large that their physical movement might take a significant fraction of the total running time. If so, it might be appropriate to measure the number of swap operations performed by the algorithm. In most applications we can assume that all records and keys are of fixed length, and that a single comparison or a single swap operation requires a constant amount of time regardless of which keys are involved. Some special situations "change the rules" for comparing sorting algorithms. For example, an application with records or keys having widely varying length (such as sorting a sequence of variable-length strings) will benefit from a special-purpose sorting technique. Some applications require that a small number of records be sorted, but that the sort be performed frequently. An example would be an application that repeatedly sorts groups of five numbers. In such cases, the constants in the runtime equations that are usually ignored in an asymptotic analysis now become crucial. Finally, some situations require that a sorting algorithm use as little memory as possible. We will note which sorting algorithms require significant extra memory beyond the input array.

Three Θ(n²) Sorting Algorithms

This section presents three simple sorting algorithms. While easy to understand and implement, we will soon see that they are unacceptably slow when there are many records to sort. Nonetheless, there are situations where one of these simple algorithms is the best tool for the job.
Figure: An illustration of Insertion Sort. Each column shows the array after the iteration with the indicated value of i in the outer for loop. Values above the line in each column have been sorted. Arrows indicate the upward motions of records through the array.

Insertion Sort

Imagine that you have a stack of phone bills from the past two years and that you wish to organize them by date. A fairly natural way to do this might be to look at the first two bills and put them in order. Then take the third bill and put it into the right order with respect to the first two, and so on. As you take each bill, you would add it to the sorted pile that you have already made. This naturally intuitive process is the inspiration for our first sorting algorithm, called Insertion Sort. Insertion Sort iterates through a list of records. Each record is inserted in turn at the correct position within a sorted list composed of those records already processed. The following is a Java implementation. The input is an array of records stored in array A.

static <E extends Comparable<? super E>> void Sort(E[] A) {
  for (int i=1; i<A.length; i++)   // Insert i'th record
    for (int j=i; (j>0) && (A[j].compareTo(A[j-1]) < 0); j--)
      DSutil.swap(A, j, j-1);
}

Consider the case where the sort function is processing the ith record, which has key value X. The record is moved upward in the array as long as X is less than the key value immediately above it. As soon as a key value less than or equal to X is encountered, the function is done with that record, because all records above it in the array must have smaller keys. The figure above illustrates how Insertion Sort works.

The body of the sort function is made up of two nested for loops. The outer for loop is executed n − 1 times. The inner for loop is harder to analyze because the number of times it executes depends on how many keys in positions 1 to i − 1 have a value less than that of the key in position i. In the worst case, each record
must make its way to the top of the array. This would occur if the keys are initially arranged from highest to lowest, in the reverse of sorted order. In this case, the number of comparisons will be one the first time through the for loop, two the second time, and so on. Thus, the total number of comparisons will be

Σ (i − 1) for i = 2 to n, which is n(n − 1)/2 = Θ(n²).

In contrast, consider the best-case cost. This occurs when the keys begin in sorted order from lowest to highest. In this case, every pass through the inner for loop will fail immediately, and no values will be moved. The total number of comparisons will be n − 1, which is the number of times the outer for loop executes. Thus, the cost for Insertion Sort in the best case is Θ(n).

While the best case is significantly faster than the worst case, the worst case is usually a more reliable indication of the "typical" running time. However, there are situations where we can expect the input to be in sorted or nearly sorted order. One example is when an already sorted list is slightly disordered; restoring sorted order using Insertion Sort might be a good idea if we know that the disordering is slight. Examples of algorithms that take advantage of Insertion Sort's best-case running time are the Shellsort and Quicksort algorithms presented later in this chapter.

What is the average-case cost of Insertion Sort? When record i is processed, the number of times through the inner for loop depends on how far "out of order" the record is. In particular, the inner for loop is executed once for each key greater than the key of record i that appears in array positions 0 through i − 1. For example, in the leftmost column of the figure, the bottom value is preceded by five values greater than it. Each such occurrence is called an inversion. The number of inversions (i.e., the number of values greater than a given value that occur prior to it in the array) will determine the number of comparisons and swaps that must take place. We need to determine what the average number of inversions will be for the record in position i. We expect on average that half of the keys in the first i array positions will have a value greater than that of the key at position i. Thus, the average case should be about half the cost of the worst case, which is still Θ(n²). So, the average case is no better than the worst case in asymptotic complexity.

Counting comparisons or swaps yields similar results. Each time through the inner for loop yields both a comparison and a swap, except the last (i.e., the comparison that fails the inner for loop's test), which has no swap. Thus, the number of swaps for the entire sort operation is n − 1 less than the number of comparisons. This is 0 in the best case, and Θ(n²) in the average and worst cases.
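The link between inversions and the work done by insertion sort can be checked directly: the number of swaps performed equals the number of inversions in the input. A minimal sketch (class and method names are illustrative, using plain int arrays rather than records):

```java
// Sketch: inversions vs. insertion sort's swap count.
class Inversions {
    // Count inversions: pairs (j, i) with j < i and a[j] > a[i].
    static int countInversions(int[] a) {
        int inv = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < i; j++)
                if (a[j] > a[i]) inv++;
        return inv;
    }

    // Insertion sort that reports how many swaps it performed.
    static int sortCountingSwaps(int[] a) {
        int swaps = 0;
        for (int i = 1; i < a.length; i++)
            for (int j = i; j > 0 && a[j] < a[j - 1]; j--) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
                swaps++;
            }
        return swaps;
    }
}
```

Each swap removes exactly one inversion, and the sort finishes when no inversions remain, which is why the two counts agree.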
Figure: An illustration of Bubble Sort. Each column shows the array after the iteration with the indicated value of i in the outer for loop. Values above the line in each column have been sorted. Arrows indicate the swaps that take place during a given iteration.

Bubble Sort

Our next sort is called Bubble Sort. Bubble Sort is often taught to novice programmers in introductory computer science courses. This is unfortunate, because Bubble Sort has no redeeming features whatsoever. It is a relatively slow sort, it is no easier to understand than Insertion Sort, it does not correspond to any intuitive counterpart in "everyday" use, and it has a poor best-case running time. However, Bubble Sort serves as the basis for a better sort that will be presented later.

Bubble Sort consists of a simple double for loop. The first iteration of the inner for loop moves through the record array from bottom to top, comparing adjacent keys. If the lower-indexed key's value is greater than its higher-indexed neighbor, then the two values are swapped. Once the smallest value is encountered, this process will cause it to "bubble" up to the top of the array. The second pass through the array repeats this process. However, because we know that the smallest value reached the top of the array on the first pass, there is no need to compare the top two elements on the second pass. Likewise, each succeeding pass through the array compares adjacent elements, looking at one less value than the preceding pass. The figure above illustrates Bubble Sort. A Java implementation is as follows:

static <E extends Comparable<? super E>> void Sort(E[] A) {
  for (int i=0; i<A.length-1; i++)   // Bubble up i'th record
    for (int j=A.length-1; j>i; j--)
      if (A[j].compareTo(A[j-1]) < 0)
        DSutil.swap(A, j, j-1);
}
Determining Bubble Sort's number of comparisons is easy. Regardless of the arrangement of the values in the array, the ith pass of the inner for loop always makes n − i − 1 comparisons, leading to a total cost of

Σ i for i = 1 to n − 1, which is n(n − 1)/2 = Θ(n²).

Bubble Sort's running time is roughly the same in the best, average, and worst cases.

The number of swaps required depends on how often a value is less than the one immediately preceding it in the array. We can expect this to occur for about half the comparisons in the average case, leading to Θ(n²) for the expected number of swaps. The actual number of swaps performed by Bubble Sort will be identical to that performed by Insertion Sort.

Selection Sort

Consider again the problem of sorting a pile of phone bills for the past year. Another intuitive approach might be to look through the pile until you find the bill for January, and pull that out. Then look through the remaining pile until you find the bill for February, and add that behind January. Proceed through the ever-shrinking pile of bills to select the next one in order until you are done. This is the inspiration for our last Θ(n²) sort, called Selection Sort. The ith pass of Selection Sort "selects" the ith smallest key in the array, placing that record into position i. In other words, Selection Sort first finds the smallest key in an unsorted list, then the second smallest, and so on. Its unique feature is that there are few record swaps. To find the next smallest key value requires searching through the entire unsorted portion of the array, but only one swap is required to put the record in place. Thus, the total number of swaps required will be n − 1 (we get the last record in place "for free").

Figure: An example of Selection Sort. Each column shows the array after the iteration with the indicated value of i in the outer for loop. Numbers above the line in each column have been sorted and are in their final positions.

Below is a Java implementation:

static <E extends Comparable<? super E>> void Sort(E[] A) {
  for (int i=0; i<A.length-1; i++) { // Select i'th record
    int lowindex = i;                // Remember its index
    for (int j=A.length-1; j>i; j--) // Find the least value
      if (A[j].compareTo(A[lowindex]) < 0)
        lowindex = j;
    DSutil.swap(A, i, lowindex);     // Put it in place
  }
}

Selection Sort is essentially a Bubble Sort, except that rather than repeatedly swapping adjacent values to get the next smallest record into place, we instead remember the position of the element to be selected and do one swap at the end. Thus, the number of comparisons is still Θ(n²), but the number of swaps is much less than that required by Bubble Sort. Selection Sort is particularly advantageous when the cost to do a swap is high, for example, when the elements are long strings or other large records. Selection Sort is more efficient than Bubble Sort (by a constant factor) in most other situations as well.

There is another approach to keeping the cost of swapping records low that can be used by any sorting algorithm even when the records are large. This is to have each element of the array store a pointer to a record rather than store the record itself. In this implementation, a swap operation need only exchange the pointer values; the records themselves do not move. This technique is illustrated by the following figure. Additional space is needed to store the pointers, but the return is a faster swap operation.

Figure: An example of swapping pointers to records. (a) A series of four records, each reached through a pointer. (b) The four records after the top two pointers have been swapped: the ordering seen through the pointers has changed, although the records themselves have not moved.
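The pointer-swapping idea can be sketched by sorting an array of indices (Java's stand-in for pointers) instead of the records themselves. This is a minimal sketch: `sortIndices` is an illustrative name, and the key array stands in for an array of large records.

```java
// Sketch: indirect sorting. The records (here, keys[]) never move;
// only the small index values are rearranged.
class IndirectSort {
    // Returns a permutation perm such that
    // keys[perm[0]] <= keys[perm[1]] <= ... in non-decreasing order.
    static Integer[] sortIndices(final int[] keys) {
        Integer[] perm = new Integer[keys.length];
        for (int i = 0; i < keys.length; i++) perm[i] = i;
        // Compare indices by the keys they point at.
        java.util.Arrays.sort(perm,
            (a, b) -> Integer.compare(keys[a], keys[b]));
        return perm;
    }
}
```

Traversing `keys[perm[0]], keys[perm[1]], ...` visits the records in sorted order; each "swap" during the sort moved only a small index, not a whole record.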
Figure: A comparison of the asymptotic complexities for three simple sorting algorithms.

                 Insertion   Bubble     Selection
  Comparisons:
   Best case     Θ(n)        Θ(n²)      Θ(n²)
   Average case  Θ(n²)       Θ(n²)      Θ(n²)
   Worst case    Θ(n²)       Θ(n²)      Θ(n²)
  Swaps:
   Best case     0           0          Θ(n)
   Average case  Θ(n²)       Θ(n²)      Θ(n)
   Worst case    Θ(n²)       Θ(n²)      Θ(n)

The Cost of Exchange Sorting

The figure above summarizes the cost of Insertion, Bubble, and Selection Sort in terms of their required number of comparisons and swaps in the best, average, and worst cases. The running time for each of these sorts is Θ(n²) in the average and worst cases.

The remaining sorting algorithms presented in this chapter are significantly better than these three under typical conditions. But before continuing on, it is instructive to investigate what makes these three sorts so slow. The crucial bottleneck is that only adjacent records are compared. Thus, comparisons and moves (in all but Selection Sort) are by single steps. Swapping adjacent records is called an exchange. Thus, these sorts are sometimes referred to as exchange sorts. The cost of any exchange sort can be at best the total number of steps that the records in the array must move to reach their "correct" location (i.e., the number of inversions for each record).

What is the average number of inversions? Consider a list L containing n values. Define Lr to be L in reverse. L has n(n − 1)/2 distinct pairs of values, each of which could potentially be an inversion. Each such pair must either be an inversion in L or in Lr. Thus, the total number of inversions in L and Lr together is exactly n(n − 1)/2, for an average of n(n − 1)/4 per list. We therefore know with certainty that any sorting algorithm which limits comparisons to adjacent items will cost at least n(n − 1)/4 = Ω(n²) in the average case.

There is a slight anomaly with Selection Sort. The supposed advantage of Selection Sort is its low number of swaps required, yet Selection Sort's best-case number of swaps is worse than that for Insertion Sort or Bubble Sort. This is because the implementation given for Selection Sort does not avoid a swap in the case where record i is already in position i. The reason is that it usually takes more time to repeatedly check for this situation than would be saved by avoiding such swaps.
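The pairing argument above can be checked directly: for distinct keys, every one of the n(n − 1)/2 distinct pairs is an inversion in exactly one of L and its reverse, so the two inversion counts always sum to n(n − 1)/2. A minimal sketch (class and method names are illustrative):

```java
// Sketch: inversions of a list plus inversions of its reverse equal
// n(n-1)/2 when all values are distinct.
class InversionAverage {
    // Count inversions: pairs (j, i) with j < i and a[j] > a[i].
    static int inversions(int[] a) {
        int inv = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < i; j++)
                if (a[j] > a[i]) inv++;
        return inv;
    }

    static int[] reversed(int[] a) {
        int[] r = new int[a.length];
        for (int i = 0; i < a.length; i++)
            r[i] = a[a.length - 1 - i];
        return r;
    }
}
```

Averaging over L and Lr then gives the n(n − 1)/4 expected inversions per list quoted in the text.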