Advanced Tree Structures

This chapter introduces several tree structures designed for use in specialized applications. The trie of the first section is commonly used to store and search collections of strings. It also serves to illustrate the concept of key space decomposition. The AVL tree and splay tree of the second section are variants on the BST. They are examples of self-balancing search trees and have guaranteed good performance regardless of the insertion order for records. An introduction to several spatial data structures used to organize point data by xy-coordinates is presented in the final section. Descriptions of the fundamental operations are given for each data structure. Because an important goal for this chapter is to provide material for class programming projects, detailed implementations are left for the reader.

Tries

Recall that the shape of a BST is determined by the order in which its data records are inserted. One permutation of the records might yield a balanced tree while another might yield an unbalanced tree in the shape of a linked list. The reason is that the value of the key stored in the root node splits the key range into two parts: those key values less than the root's key value, and those key values greater than the root's key value. Depending on the relationship between the root node's key value and the distribution of the key values for the other records in the tree, the resulting BST might be balanced or unbalanced. Thus, the BST is an example of a data structure whose organization is based on an object space decomposition, so called because the decomposition of the key range is driven by the objects (i.e., the key values of the data records) stored in the tree.

The alternative to object space decomposition is to predefine the splitting position within the key range for each node in the tree. In other words, the root could be predefined to split the key range into two equal halves, regardless of the particular values or order of insertion for the data records.
Those records with keys in the lower half of the key range will be stored in the left subtree, while those records with keys in the upper half of the key range will be stored in the right subtree. While such a decomposition rule will not necessarily result in a balanced tree (the tree will be unbalanced if the records are not well distributed within the key range), at least the shape of the tree will not depend on the order of key insertion. Furthermore, the depth of the tree will be limited by the resolution of the key range; that is, the depth of the tree can never be greater than the number of bits required to store a key value. For example, if the keys are integers in the range 0 to 1023, then the resolution for the key is ten bits. Thus, two keys might be identical only until the tenth bit. In the worst case, two keys will follow the same path in the tree only until the tenth branch. As a result, the tree will never be more than ten levels deep. In contrast, a BST containing n records could be as much as n levels deep.

Decomposition based on a predetermined subdivision of the key range is called key space decomposition. In computer graphics, a related technique is known as image space decomposition, and this term is sometimes applied to data structures based on key space decomposition as well. Any data structure based on key space decomposition is called a trie. Folklore has it that "trie" comes from "retrieval." Unfortunately, that would imply that the word is pronounced "tree," which would lead to confusion with the regular use of the word "tree." "Trie" is actually pronounced as "try."

Like the B+-tree, a trie stores data records only in leaf nodes. Internal nodes serve as placeholders to direct the search process. The figure below illustrates the trie concept. Upper and lower bounds must be imposed on the key values so that we can compute the middle of the key range: a range from 0 up to one less than the smallest power of two greater than the largest value inserted is assumed. The binary value of the key determines whether to select the left or right branch at any given point during the search, with the most significant bit determining the branch direction at the root. The figure shows a binary trie, so called because in this example the trie structure is based on the value of the key interpreted as a binary number, which results in a binary tree.

The Huffman coding tree discussed earlier in this book is another example of a binary trie. All data values in the Huffman tree are at the leaves, and each branch splits the range of possible letter codes in half. The Huffman codes are actually derived from the letter positions within the trie. These are examples of binary tries, but tries can be built with any branching factor. Normally the branching factor is determined by the alphabet used. For binary numbers, the alphabet is {0, 1} and a binary trie results. Other alphabets lead to other branching factors.
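To make the bit-by-bit branching concrete, here is a minimal sketch of search in a binary trie. The node class and the fixed key width K are our own assumptions for illustration; the book leaves implementations to the reader.

class TrieNode {
  TrieNode left, right;  // 0-branch and 1-branch (used by internal nodes)
  Integer key;           // non-null only at leaf nodes
}

class BinaryTrie {
  static final int K = 10;  // assumed number of bits per key

  // Follow the key's bits from most significant to least significant
  // until a leaf (or null) is reached, then make one full-key comparison.
  static boolean find(TrieNode rt, int key) {
    int level = 0;
    while (rt != null && rt.key == null) {
      int bit = (key >> (K - 1 - level)) & 1;
      rt = (bit == 0) ? rt.left : rt.right;
      level++;
    }
    return rt != null && rt.key == key;
  }
}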
Figure: The binary trie for a collection of values. All data values are stored in the leaf nodes. Edges are labeled with the value of the bit used to determine the branching direction of each node. The binary form of the key value determines the path to the record, assuming that each key is represented as a fixed-length bit string for a number within the assumed range.

One application for tries is storing a dictionary of words. Such a trie will be referred to as an alphabet trie. For simplicity, our examples will ignore case in letters. We add a special character ($) to the 26 standard English letters. The $ character is used to represent the end of a string. Thus, the branching factor for each node is (up to) 27. Once constructed, the alphabet trie is used to determine if a given word is in the dictionary. Consider searching for a word in the alphabet trie of the figure below. The first letter of the search word determines which branch to take from the root, the second letter determines which branch to take at the next level, and so on. Only the letters that lead to a word are shown as branches. In part (b) of the figure the leaf nodes of the trie store a copy of the actual words, while in part (a) the word is built up from the letters associated with each branch.

One way to implement a node of the alphabet trie is as an array of 27 pointers indexed by letter. Because most nodes have branches to only a small fraction of the possible letters in the alphabet, an alternate implementation is to use a linked list of pointers to the child nodes.

The depth of a leaf node in the alphabet trie of part (b) of the figure has little to do with the number of nodes in the trie. Rather, a node's depth depends on the number of characters required to distinguish this node's word from any other. For example, if the words "anteater" and "antelope" are both stored in the trie, it is not until the fifth letter that the two words can be distinguished. Thus, these words must be stored at least as deep as level five. In general, the limiting factor on the depth of nodes in the alphabet trie is the length of the words stored.
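A sketch of the linked-list node representation just described. The class layout is our own assumption; each node keeps links only for the letters that actually occur.

class AlphaTrieNode {
  char letter;             // edge label leading into this node
  String word;             // non-null only at '$' (end-of-word) nodes
  AlphaTrieNode child;     // first child at the next level
  AlphaTrieNode sibling;   // next alternative letter at this level

  // Return the child reached by letter c, or null if no stored word
  // continues this way.
  AlphaTrieNode step(char c) {
    for (AlphaTrieNode cur = child; cur != null; cur = cur.sibling)
      if (cur.letter == c) return cur;
    return null;
  }
}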
Figure: Two variations on the alphabet trie representation for a set of ten words (chicken, ant, deer, duck, horse, goat, goldfish, goose, anteater, antelope). (a) Each node contains a set of links corresponding to single letters, and each letter in the set of words has a corresponding link. "$" is used to indicate the end of a word. Internal nodes direct the search and also spell out the word one letter per link; the word need not be stored explicitly. "$" is needed to recognize the existence of words that are prefixes to other words, such as "ant" in this example. (b) Here the trie extends only far enough to discriminate between the words. Leaf nodes of the trie each store a complete word; internal nodes merely direct the search.
Figure: The PAT trie for a collection of values. Contrast this with the binary trie of the earlier figure. In the PAT trie, all data values are stored in the leaf nodes, while internal nodes store the bit position used to determine the branching decision, assuming that each key is represented as a fixed-length bit string for a number within the assumed range. Some of the branches in this PAT trie have been labeled to indicate the binary representation for all values in that subtree. For example, all values in the left subtree of a node branching on the first bit must have a bit pattern of the form 0xxxxxx (where x means that the bit can be either a 0 or a 1). However, we can skip branching on a bit for a subtree when all values currently stored there have the same value for that bit.

Poor balance and clumping can result when certain prefixes are heavily used. For example, an alphabet trie storing the common words in the English language would have many words in the "th" branch of the tree, but none in the "zq" branch. Any multiway branching trie can be replaced with a binary trie by replacing the original trie's alphabet with an equivalent binary code. Alternatively, we can use the techniques discussed earlier for converting a general tree to a binary tree, without modifying the alphabet.

The trie implementations illustrated by the earlier figures are potentially quite inefficient, as certain key sets might lead to a large number of nodes with only a single child. A variant on trie implementation is known as PATRICIA, which stands for "Practical Algorithm To Retrieve Information Coded In Alphanumeric." In the case of a binary alphabet, a PATRICIA trie (referred to hereafter as a PAT trie) is a full binary tree that stores data records in the leaf nodes. Internal nodes store only the position within the key's bit pattern that is used to decide on the next branching point. In this way, internal nodes with single children (equivalently, bit positions within the key that do not distinguish any of the keys within the current subtree) are eliminated. A PAT trie corresponding to the values of the earlier figure is shown in the figure above.
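A minimal sketch of PAT trie search, under the same assumptions as the binary trie sketch earlier (fixed key width K, hypothetical node class). Note that each internal node tests only its stored bit position; skipped positions are never examined.

class PatNode {
  int bitpos;            // which bit to test (internal nodes only)
  PatNode left, right;
  Integer key;           // non-null only at leaf nodes
}

class PatTrie {
  static final int K = 10;  // assumed number of bits per key

  static boolean find(PatNode rt, int key) {
    while (rt != null && rt.key == null) {
      int bit = (key >> (K - 1 - rt.bitpos)) & 1;
      rt = (bit == 0) ? rt.left : rt.right;
    }
    // The single full-key comparison, made once a leaf is reached.
    return rt != null && rt.key == key;
  }
}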
Example: Consider a search in the PAT trie of the figure. The root node indicates that bit position 0 (the leftmost bit) is checked first. Depending on the value of that bit in the search key, the search takes the left or right branch. At each subsequent level, the search branches on the bit whose position is stored in the current node. When a node stores a bit position that skips over one or more positions, the skipped bits are irrelevant, because all values stored in that subtree have the same value at those bit positions; the single-child branches that would appear in the equivalent binary trie are simply skipped. When the search reaches a leaf node, the search key is compared against the key stored in that node. If they match, then the desired record has been found. Note that during the search process, only a single bit of the search key is compared at each internal node. This is significant, because the search key could be quite large. Search in the PAT trie requires only a single full-key comparison, which takes place once a leaf node has been reached.

Example: Consider the situation where we need to store a library of DNA sequences. A DNA sequence is a series of letters, usually many thousands of characters long, with the string coming from an alphabet of only four letters that stand for the four bases making up a DNA strand. Similar DNA sequences might have long sections of their strings that are identical. The PAT trie would avoid making multiple full-key comparisons when searching for a specific sequence.

Balanced Trees

We have noted several times that the BST has a high risk of becoming unbalanced, resulting in excessively expensive search and update operations. One solution to this problem is to adopt another search tree structure such as the 2-3 tree. An alternative is to modify the BST access functions in some way to guarantee that the tree performs well. This is an appealing concept, and it works well for heaps, whose access functions maintain the heap in the shape of a complete binary tree. Unfortunately, requiring that the BST always be in the shape of a complete binary tree requires excessive modification to the tree during update, as discussed earlier.
If we are willing to weaken the balance requirements, we can come up with alternative update routines that perform well both in terms of cost for the update and in balance for the resulting tree structure. The AVL tree works in this way, using insertion and deletion routines altered from those of the BST to ensure that, for every node, the depths of the left and right subtrees differ by at most one. The AVL tree is described below.

A different approach to improving the performance of the BST is to not require that the tree always be balanced, but rather to expend some effort toward making the BST more balanced every time it is accessed. This is a little like the idea of path compression used by the UNION/FIND algorithm presented earlier. One example of such a compromise is called the splay tree, also described below.

The AVL Tree

The AVL tree (named for its inventors Adelson-Velskii and Landis) should be viewed as a BST with the following additional property: for every node, the heights of its left and right subtrees differ by at most 1. As long as the tree maintains this property, if the tree contains n nodes, then it has a depth of at most O(log n). As a result, a search for any node will cost O(log n), and if the updates can be done in time proportional to the depth of the node inserted or deleted, then updates will also cost O(log n), even in the worst case.

The key to making the AVL tree work is to make the proper alterations to the insert and delete routines so as to maintain the balance property. Of course, to be practical, we must be able to implement the revised update routines in Θ(log n) time. Consider what happens when we insert a node, as shown in the figure below. The tree on the left meets the AVL tree balance requirements. After the insertion, two nodes no longer meet the requirements. Because the original tree met the balance requirement, nodes in the new tree can only be unbalanced by a difference of at most 2 in the subtrees. For the bottommost unbalanced node, call it S, there are 4 cases:

1. The extra node is in the left child of the left child of S.
2. The extra node is in the right child of the left child of S.
3. The extra node is in the left child of the right child of S.
4. The extra node is in the right child of the right child of S.

Cases 1 and 4 are symmetrical, as are cases 2 and 3. Note also that the unbalanced nodes must be on the path from the root to the newly inserted node.
Figure: Example of an insert operation that violates the AVL tree balance property. Prior to the insert operation, all nodes of the tree are balanced (i.e., the depths of the left and right subtrees for every node differ by at most one). After the insertion, two of the new node's ancestors are no longer balanced.

Figure: A single rotation in an AVL tree. This operation occurs when the excess node (in subtree A) is in the left child of the left child of the unbalanced node labeled S. By rearranging the nodes as shown, we preserve the BST property, as well as re-balance the tree to preserve the AVL tree balance property. The case where the excess node is in the right child of the right child of the unbalanced node is handled in the same way.

Our problem now is how to balance the tree in O(log n) time. It turns out that we can do this using a series of local operations known as rotations. Cases 1 and 4 can be fixed using a single rotation, as shown in the single-rotation figure. Cases 2 and 3 can be fixed using a double rotation, as shown in the double-rotation figure. The AVL tree insert algorithm begins with a normal BST insert. Then as the recursion unwinds up the tree, we perform the appropriate rotation on any node that is found to be unbalanced.
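The rotations themselves are small pointer manipulations. The following is a sketch under our own assumed node class; the balance bookkeeping and the recursive unwinding are omitted.

class AvlNode {
  int key;
  AvlNode left, right;
}

class AvlRotations {
  // Single rotation for case 1: the excess node is in the left child
  // of the left child of s. Promotes s's left child; that child's
  // right subtree crosses over to become s's new left subtree.
  static AvlNode rotateRight(AvlNode s) {
    AvlNode child = s.left;
    s.left = child.right;
    child.right = s;
    return child;  // new root of this subtree
  }

  // Mirror image for case 4.
  static AvlNode rotateLeft(AvlNode s) {
    AvlNode child = s.right;
    s.right = child.left;
    child.left = s;
    return child;
  }

  // Double rotation for case 2 is two single rotations: first at the
  // left child, then at s itself. Case 3 is the mirror image.
  static AvlNode rotateLeftRight(AvlNode s) {
    s.left = rotateLeft(s.left);
    return rotateRight(s);
  }
}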
Figure: A double rotation in an AVL tree. This operation occurs when the excess node (in subtree B) is in the right child of the left child of the unbalanced node labeled S. By rearranging the nodes as shown, we preserve the BST property, as well as re-balance the tree to preserve the AVL tree balance property. The case where the excess node is in the left child of the right child of S is handled in the same way.

Deletion is similar; however, consideration for unbalanced nodes must begin at the level of the deletemin operation.

Example: In the insertion figure, the bottom-most unbalanced node has the excess node in the right subtree of its left child, so we have an example of case 2. This requires a double rotation to fix. After the rotation, the grandchild is promoted to take the unbalanced node's place in the tree: the old left child becomes the promoted node's left child, and the formerly unbalanced node becomes its right child.

The Splay Tree

Like the AVL tree, the splay tree is not actually a distinct data structure per se, but rather is a collection of rules for improving the performance of a BST. These rules govern modifications made to the BST whenever a search, insert, or delete operation is performed. Their purpose is to provide guarantees on the time required by a series of operations, thereby avoiding the worst-case linear time behavior of standard BST operations. No single operation in the splay tree is guaranteed to be efficient. Instead, the splay tree access rules guarantee that a series of operations will take
O(m log n) time for a series of m operations on a tree of n nodes whenever m ≥ n. Thus, a single insert or search operation could take O(n) time. However, m such operations are guaranteed to require a total of O(m log n) time, for an average cost of O(log n) per access operation. This is a desirable performance guarantee for any search-tree structure.

Unlike the AVL tree, the splay tree is not guaranteed to be height balanced. What is guaranteed is that the total cost of the entire series of accesses will be cheap. Ultimately, it is the cost of the series of operations that matters, not whether the tree is balanced. Maintaining balance is really done only for the sake of reaching this time efficiency goal.

The splay tree access functions operate in a manner reminiscent of the move-to-front rule for self-organizing lists and of the path compression technique for managing parent-pointer trees, both presented earlier. These access functions tend to make the tree more balanced, but an individual access will not necessarily result in a more balanced tree.

Whenever a node S is accessed (e.g., when S is inserted, deleted, or is the goal of a search), the splay tree performs a process called splaying. Splaying moves S to the root of the BST. When S is being deleted, splaying moves the parent of S to the root. As in the AVL tree, a splay of node S consists of a series of rotations. A rotation moves S higher in the tree by adjusting its position with respect to its parent and grandparent. A side effect of the rotations is a tendency to balance the tree. There are three types of rotation.

A single rotation is performed only if S is a child of the root node. The single rotation is illustrated by the first figure below. It basically switches S with its parent in a way that retains the BST property. While the figure is drawn slightly differently from the AVL single-rotation figure, in fact the splay tree single rotation is identical to the AVL tree single rotation.

Unlike the AVL tree, the splay tree requires two types of double rotation. Double rotations involve S, its parent (call it P), and S's grandparent (call it G). The effect of a double rotation is to move S up two levels in the tree.

The first double rotation is called a zigzag rotation. It takes place when either of the following two conditions is met:

1. S is the left child of P, and P is the right child of G.
2. S is the right child of P, and P is the left child of G.

In other words, a zigzag rotation is used when G, P, and S form a zigzag. The zigzag rotation is illustrated by the second figure below. The other double rotation is known as a zigzig rotation.
Figure: Splay tree single rotation. This rotation takes place only when the node being splayed is a child of the root. Here, node S is promoted to the root, rotating with node P. Because the value of S is less than the value of P, P must become S's right child. The positions of subtrees A, B, and C are altered as appropriate to maintain the BST property, but the contents of these subtrees remain unchanged. (a) The original tree with P as the parent. (b) The tree after the rotation takes place. Performing a single rotation a second time will return the tree to its original shape. Equivalently, if (b) is the initial configuration of the tree (i.e., S is at the root and P is its right child), then (a) shows the result of a single rotation to splay P to the root.

Figure: Splay tree zigzag rotation. (a) The original tree with S, P, and G in zigzag formation. (b) The tree after the rotation takes place. The positions of subtrees A, B, C, and D are altered as appropriate to maintain the BST property.
Figure: Splay tree zigzig rotation. (a) The original tree with S, P, and G in zigzig formation. (b) The tree after the rotation takes place. The positions of subtrees A, B, C, and D are altered as appropriate to maintain the BST property.

A zigzig rotation takes place when either of the following two conditions is met:

1. S is the left child of P, which is in turn the left child of G.
2. S is the right child of P, which is in turn the right child of G.

Thus, a zigzig rotation takes place in those situations where a zigzag rotation is not appropriate. The zigzig rotation is illustrated by the figure above. While the zigzag figure appears somewhat different from the AVL double-rotation figure, in fact the zigzag rotation is identical to the AVL tree double rotation.

Note that zigzag rotations tend to make the tree more balanced, because they bring subtrees B and C up one level while moving subtree D down one level. The result is often a reduction of the tree's height by one. Zigzig promotions do not typically reduce the height of the tree; they merely bring the newly accessed record toward the root.

Splaying node S involves a series of double rotations until S reaches either the root or the child of the root. Then, if necessary, a single rotation makes S the root. This process tends to re-balance the tree. In any case, it will make frequently accessed nodes stay near the top of the tree, resulting in reduced access cost. Proof that the splay tree does in fact meet the guarantee of O(m log n) is beyond the scope of this book. Such a proof can be found in the references listed in the further reading section.
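The two double rotations can be sketched with the same kind of pointer surgery. One symmetric case of each is shown (using our own assumed node class); note that the zigzig applies its two single rotations in the opposite order from the AVL double rotation.

class SplayNode {
  int key;
  SplayNode left, right;
}

class SplayRotations {
  static SplayNode rotateRight(SplayNode g) {  // promote g's left child
    SplayNode p = g.left;
    g.left = p.right;
    p.right = g;
    return p;
  }

  static SplayNode rotateLeft(SplayNode g) {   // promote g's right child
    SplayNode p = g.right;
    g.right = p.left;
    p.left = g;
    return p;
  }

  // Zigzag: S is the right child of P, and P is the left child of G.
  // Promote S over P first, then promote S over G.
  static SplayNode zigzag(SplayNode g) {
    g.left = rotateLeft(g.left);
    return rotateRight(g);
  }

  // Zigzig: S is the left child of P, which is the left child of G.
  // Rotate at the grandparent first, then at the parent.
  static SplayNode zigzig(SplayNode g) {
    SplayNode p = rotateRight(g);  // promotes P over G
    return rotateRight(p);         // promotes S over P
  }
}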
Example: Consider a search for a value deep in the splay tree of the figure below. The splay tree's search operation is identical to searching in a BST. However, once the value has been found, it is splayed to the root. Three rotations are required in this example. The first is a zigzig rotation, whose result is shown in part (b) of the figure. The second is a zigzag rotation, whose result is shown in part (c). The final step is a single rotation resulting in the tree of part (d). Notice that the splaying process has made the tree shallower.

Spatial Data Structures

All of the search trees discussed so far (BSTs, AVL trees, splay trees, 2-3 trees, B-trees, and tries) are designed for searching on a one-dimensional key. A typical example is an integer key, whose one-dimensional range can be visualized as a number line. These various tree structures can be viewed as dividing this one-dimensional number line into pieces.

Some databases require support for multiple keys. In other words, records can be searched for using any one of several key fields, such as name or ID number. Typically, each such key has its own one-dimensional index, and any given search query searches one of these independent indices as appropriate.

A multidimensional search key presents a rather different concept. Imagine that we have a database of city records, where each city has a name and an xy-coordinate. A BST or splay tree provides good performance for searches on city name, which is a one-dimensional key. Separate BSTs could be used to index the x- and y-coordinates. This would allow us to insert and delete cities, and locate them by name or by one coordinate. However, a search on one of the two coordinates is not a natural way to view a search in two-dimensional space. Another option is to combine the xy-coordinates into a single key, say by concatenating the two coordinates, and index cities by the resulting key in a BST. That would allow search by coordinate, but would not allow for efficient two-dimensional range queries such as searching for all cities within a given distance of a specified point. The problem is that the BST only works well for one-dimensional keys, while a coordinate is a two-dimensional key where neither dimension is more important than the other.

Multidimensional range queries are the defining feature of a spatial application. Because a coordinate gives a position in space, it is called a spatial attribute. To implement spatial applications efficiently requires the use of spatial data structures. Spatial data structures store data objects organized by position and are an important class of data structures used in geographic information systems, computer graphics, robotics, and many other fields.

This section presents two spatial data structures for storing point data in two or more dimensions: the k-d tree and the PR quadtree.
Figure: Example of splaying after performing a search in a splay tree. After finding the node with the search key, that node is splayed to the root by performing three rotations. (a) The original splay tree. (b) The result of performing a zigzig rotation on the found node in the tree of (a). (c) The result of performing a zigzag rotation on that node in the tree of (b). (d) The result of performing a single rotation on that node in the tree of (c). If the search key had not been present, the search would have been unsuccessful, with the node storing the nearest key value being the last one visited. In that case, the same splay operations would take place.
The k-d tree is a natural extension of the BST to multiple dimensions. It is a binary tree whose splitting decisions alternate among the key dimensions. Like the BST, the k-d tree uses object space decomposition. The PR quadtree uses key space decomposition and so is a form of trie. It is a binary tree only for one-dimensional keys (in which case it is a trie with a binary alphabet). For d dimensions it has 2^d branches. Thus, in two dimensions, the PR quadtree has four branches (hence the name "quadtree"), splitting space into four equal-sized quadrants at each branch. A later subsection briefly mentions two other variations on these data structures, the bintree and the point quadtree. These four structures cover all four combinations of object versus key space decomposition on the one hand, and multi-level binary versus four-way branching on the other. The final subsection briefly discusses spatial data structures for storing other types of spatial data.

The K-D Tree

The k-d tree is a modification to the BST that allows for efficient processing of multidimensional keys. The k-d tree differs from the BST in that each level of the k-d tree makes branching decisions based on a particular search key associated with that level, called the discriminator. We define the discriminator at level i to be i mod k for k dimensions. For example, assume that we store data organized by xy-coordinates. In this case, k is 2 (there are two coordinates), with the x-coordinate field arbitrarily designated key 0 and the y-coordinate field designated key 1. At each level, the discriminator alternates between x and y. Thus, a node N at level 0 (the root) would have in its left subtree only nodes whose x values are less than N_x (because x is search key 0, and 0 mod 2 = 0). The right subtree would contain nodes whose x values are greater than N_x. A node M at level 1 would have in its left subtree only nodes whose y values are less than M_y. There is no restriction on the relative values of M_x and the x values of M's descendants, because branching decisions made at M are based solely on the y coordinate.

The figure below shows an example of how a collection of two-dimensional points would be stored in a k-d tree. In the figure the region containing the points is (arbitrarily) restricted to a square, and each internal node splits the search space. Each split is shown by a line, vertical for nodes with x discriminators and horizontal for nodes with y discriminators. The root node splits the space into two parts; its children further subdivide the space into smaller parts. The children's split lines do not cross the root's split line. Thus, each node in the k-d tree helps to decompose the space into rectangles that show the extent of where nodes can fall in the various subtrees.
Figure: Example of a k-d tree. (a) The k-d tree decomposition for a square region containing seven data points. (b) The k-d tree for the region of (a).

Searching a k-d tree for the record with a specified xy-coordinate is like searching a BST, except that each level of the k-d tree is associated with a particular discriminator.

Example: Consider searching the k-d tree for a record P located at a given coordinate. First compare P's coordinate with the point stored at the root (record A in the figure). If it matches the location of A, then the search is successful. In this example the positions do not match (P's location is not the same as A's), so the search must continue. The x value of P is compared with that of A to determine in which direction to branch. Because A_x is less than P_x, we branch to the right subtree (all cities with x value greater than or equal to A_x are in the right subtree). A_y does not affect the decision on which way to branch at this level. At the second level, P does not match record C's position, so another branch must be taken. However, at this level we branch based on the relative y values of point P and record C (because 1 mod 2 = 1, which corresponds to the y-coordinate). Because C_y is less than P_y, we branch to the right. At this point, P is compared against the position stored in the next node. A match is made and the search is successful.

As with a BST, if the search process reaches a null pointer, then the search point is not contained in the tree. Here is an implementation for k-d tree search, equivalent to the findhelp function of the BST class.
Note that the KD class private member D stores the key's dimension.

private E findhelp(KDNode<E> rt, int[] key, int level) {
  if (rt == null) return null;
  E it = rt.element();
  int[] itkey = rt.key();
  if ((itkey[0] == key[0]) && (itkey[1] == key[1]))
    return it;
  if (itkey[level] > key[level])
    return findhelp(rt.left(), key, (level+1)%D);
  else
    return findhelp(rt.right(), key, (level+1)%D);
}

Inserting a new node into the k-d tree is similar to BST insertion. The k-d tree search procedure is followed until a null pointer is found, indicating the proper place to insert the new node.

Example: Inserting a record at a given location in the k-d tree of the figure first requires a search to the node that will become its parent. At this point, the new record is inserted into that node's appropriate subtree.

Deleting a node from a k-d tree is similar to deleting from a BST, but slightly harder. As with deleting from a BST, the first step is to find the node (call it N) to be deleted. It is then necessary to find a descendant of N which can be used to replace N in the tree. If N has no children, then N is replaced with a null pointer. Note that if N has one child that in turn has children, we cannot simply assign N's parent to point to N's child as would be done in the BST. To do so would change the level of all nodes in the subtree, and thus the discriminator used for a search would also change. The result is that the subtree would no longer be a k-d tree, because a node's children might now violate the BST property for that discriminator.

Similar to BST deletion, the record stored in N should be replaced either by the record in N's right subtree with the least value of N's discriminator, or by the record in N's left subtree with the greatest value for this discriminator. Assume that N was at an odd level and therefore y is the discriminator. N could then be replaced by the record in its right subtree with the least y value (call it Ymin). The problem is that Ymin is not necessarily the leftmost node, as it would be in the BST. A modified search procedure to find the least y value in the right subtree must be used to find it instead. The implementation for findmin is shown in the next figure. A recursive call to the delete routine will then remove Ymin from the tree. Finally, Ymin's record is substituted for the record in node N.
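Both findhelp above and findmin below assume a node class roughly like the following sketch (the field names are our own; the book leaves the node implementation to the reader).

class KDNode<E> {
  private int[] k;        // the multidimensional key
  private E e;            // the data record
  private KDNode<E> lc;   // left child
  private KDNode<E> rc;   // right child

  KDNode(int[] key, E elem) { k = key; e = elem; }
  int[] key()       { return k; }
  E element()       { return e; }
  KDNode<E> left()  { return lc; }
  KDNode<E> right() { return rc; }
}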
private KDNode<E> findmin(KDNode<E> rt, int descrim, int level) {
  KDNode<E> temp1, temp2;
  int[] key1 = null;
  int[] key2 = null;
  if (rt == null) return null;
  temp1 = findmin(rt.left(), descrim, (level+1)%D);
  if (temp1 != null) key1 = temp1.key();
  if (descrim != level) {
    temp2 = findmin(rt.right(), descrim, (level+1)%D);
    if (temp2 != null) key2 = temp2.key();
    if ((temp1 == null) ||
        ((temp2 != null) && (key2[descrim] < key1[descrim]))) {
      temp1 = temp2;
      key1 = key2;
    }
  } // Now temp1 has the smaller value
  int[] rtkey = rt.key();
  if ((temp1 == null) || (key1[descrim] > rtkey[descrim]))
    return rt;
  else
    return temp1;
}

Figure: The k-d tree findmin method. On levels using the minimum value's discriminator, branching is to the left; on other levels, both children's subtrees must be visited. Helper function min takes two nodes and a discriminator as input, and returns the node with the smaller value in that discriminator.

Note that we can replace the node to be deleted with the least-valued node from the right subtree only if the right subtree exists. If it does not, then a suitable replacement must be found in the left subtree. Unfortunately, it is not satisfactory to replace N's record with the record having the greatest value for the discriminator in the left subtree, because this new value might be duplicated. If so, then we would have equal values for the discriminator in N's left subtree, which violates the ordering rules for the k-d tree. Fortunately, there is a simple solution to the problem. We first move the left subtree of node N to become the right subtree (i.e., we simply swap the values of N's left and right child pointers). At this point, we proceed with the normal deletion process, replacing the record of N to be deleted with the record containing the least value of the discriminator from what is now N's right subtree.

Assume that we want to print out a list of all records that are within a certain distance d of a given point. We will use Euclidean distance; that is, point P is defined to be within distance d of point N if

$\sqrt{(p_x - n_x)^2 + (p_y - n_y)^2} \leq d.$

A more efficient computation is

$(p_x - n_x)^2 + (p_y - n_y)^2 \leq d^2.$

This avoids performing the square root function.
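The InCircle test used by the region search method below can be sketched as follows (a minimal version of our own, assuming integer coordinates; the book leaves its implementation to the reader).

// Assumed to be a member of the KD tree class, alongside rshelp below.
private boolean InCircle(int[] point, int radius, int[] rtkey) {
  long dx = point[0] - rtkey[0];
  long dy = point[1] - rtkey[1];
  // Compare squared distances, avoiding the square root.
  return dx * dx + dy * dy <= (long) radius * radius;
}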
Figure: Function InCircle must check the Euclidean distance between a record and the query point. It is possible for a record A to have x- and y-coordinates each within the query distance of the query point C, yet have the record itself lie outside the query circle.

If the search process reaches a node whose key value for the discriminator is more than d above the corresponding value in the search key, then it is not possible that any record in the right subtree can be within distance d of the search key, because all key values in that dimension are always too great. Similarly, if the current node's key value in the discriminator is more than d less than that for the search key value, then no record in the left subtree can be within the radius. In such cases, the subtree in question need not be searched, potentially saving much time. In the average case, the number of nodes that must be visited during a range query is linear on the number of data records that fall within the query circle.

Example: Find all cities in the k-d tree of the figure below within a given distance of a query point. The search begins with the root node, which happens to lie exactly that distance from the search point, so it will be reported. The search procedure then determines which branches of the tree to take. The search circle extends to both the left and the right of the root's (vertical) dividing line, so both branches of the tree must be searched. The left subtree is processed first. Here, a record is checked and found to fall within the search circle. Because the node storing it has no children, processing of the left subtree is complete. Processing of the root's right subtree now begins. The coordinates of record C are checked and found not to fall within the circle. Thus, it should not be reported. However, it is possible that cities within C's subtrees could fall within the search circle even if C does not. As C is at level 1, the discriminator at this level is the y-coordinate. Because the search circle does not extend past C's horizontal dividing line, no record in C's left subtree (i.e., the records above C) could possibly be in the search circle. Thus, C's left subtree (if it had one) need not be searched. However, cities in C's right subtree could fall within the circle.
Figure: Searching in the k-d tree of the earlier example. (a) The k-d tree decomposition for a square region containing seven data points. (b) The k-d tree for the region of (a).

Thus, search proceeds to the node containing record D. Again, D is outside the search circle. Because no record in D's right subtree could be within the search circle, only D's left subtree need be searched. This leads to comparing the remaining record's coordinates against the search circle. That record falls outside the search circle, and processing is complete. So we see that we only search subtrees whose rectangles intersect the search circle.

The next figure shows an implementation for the region search method. When a node is visited, function InCircle is used to check the Euclidean distance between the node's record and the query point. It is not enough to simply check that the differences between the x- and y-coordinates are each less than the query distance, because the record could still be outside the search circle, as illustrated by the figure above.

The PR Quadtree

In the Point-Region quadtree (hereafter referred to as the PR quadtree) each node either has exactly four children or is a leaf. That is, the PR quadtree is a full four-way branching (4-ary) tree in shape. The PR quadtree represents a collection of data points in two dimensions by decomposing the region containing the data points into four equal quadrants, subquadrants, and so on, until no leaf node contains more than a single point. In other words, if a region contains zero or one data points, then it is represented by a PR quadtree consisting of a single leaf node.
private void rshelp(KDNode<E> rt, int[] point, int radius, int lev) {
  if (rt == null) return;
  int[] rtkey = rt.key();
  if (InCircle(point, radius, rtkey))
    System.out.println(rt.element());
  if (rtkey[lev] > (point[lev] - radius))
    rshelp(rt.left(), point, radius, (lev+1)%D);
  if (rtkey[lev] < (point[lev] + radius))
    rshelp(rt.right(), point, radius, (lev+1)%D);
}

Figure: The k-d tree region search method.

If the region contains more than a single data point, then the region is split into four equal quadrants. The corresponding PR quadtree then contains an internal node and four subtrees, each subtree representing a single quadrant of the region, which might in turn be split into subquadrants. Each internal node of a PR quadtree represents a single split of the two-dimensional region. The four quadrants of the region (or equivalently, the corresponding subtrees) are designated (in order) NW, NE, SW, and SE. Each quadrant containing more than a single point would in turn be recursively divided into subquadrants until each leaf of the corresponding PR quadtree contains at most one point. For example, consider the region of part (a) of the figure below and the corresponding PR quadtree in part (b). The decomposition process demands a fixed key range; in this example, the region is assumed to be a square whose side length is a power of two. Note that the internal nodes of the PR quadtree are used solely to indicate decomposition of the region; internal nodes do not store data records. Because the decomposition lines are predetermined (i.e., key-space decomposition is used), the PR quadtree is a trie.

Search for a record matching point Q in the PR quadtree is straightforward. Beginning at the root, we continuously branch to the quadrant that contains Q until our search reaches a leaf node. If the root is a leaf, then just check to see if the node's data record matches point Q. If the root is an internal node, proceed to the child that contains the search coordinate. For example, the NW quadrant of the figure contains points whose x and y values each fall in the lower half of the key range. The NE quadrant contains points whose x value falls in the upper half of the key range and whose y value falls in the lower half. If the root's child is a leaf node, then that child is checked to see if Q has been found. If the child is another internal node, the search process continues through the tree until a leaf node is found. If this leaf node stores a record whose position matches Q, then the query is successful; otherwise Q is not in the tree.
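A sketch of PR quadtree search, assuming a hypothetical node type. Following the design discussion later in this section, a node's square is passed in extrinsically as its upper-left corner (x, y) and side length size (a power of two), and an empty leaf is represented here by a null pointer.

class PRNode {
  int px, py;       // the stored point (leaf nodes only)
  PRNode[] kids;    // four children; non-null only for internal nodes

  boolean isInternal() { return kids != null; }
}

class PRSearch {
  static boolean find(PRNode rt, int qx, int qy, int x, int y, int size) {
    if (rt == null) return false;            // empty leaf
    if (!rt.isInternal())                    // full leaf: one comparison
      return rt.px == qx && rt.py == qy;
    int half = size / 2;
    int col = (qx < x + half) ? 0 : 1;       // 0 = west,  1 = east
    int row = (qy < y + half) ? 0 : 1;       // 0 = north, 1 = south
    return find(rt.kids[row * 2 + col], qx, qy,
                x + col * half, y + row * half, half);
  }
}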
Figure: Example of a PR quadtree. (a) A map of data points. We define the region to be a square with origin at the upper-left-hand corner and sides of a fixed power-of-two length. (b) The PR quadtree for the points in (a). Part (a) also shows the block decomposition imposed by the PR quadtree for this region.

Inserting record P into the PR quadtree is performed by first locating the leaf node that contains the location of P. If this leaf node is empty, then P is stored at this leaf. If the leaf already contains P (or a record with P's coordinates), then a duplicate record should be reported. If the leaf node already contains another record, then the node must be repeatedly decomposed until the existing record and P fall into different leaf nodes. The next figure shows an example of such an insertion.

Deleting a record P is performed by first locating the node N of the PR quadtree that contains P. Node N is then changed to be empty. The next step is to look at N's three siblings. N and its siblings must be merged together to form a single node if only one point is contained among them. This merging process continues until some level is reached at which at least two points are contained in the subtrees represented by node N and its siblings. For example, if a point is deleted from the PR quadtree representing part (b) of the insertion figure, the resulting node must be merged with its siblings, and that larger node again merged with its siblings, to restore the PR quadtree to the decomposition of part (a).

Region search is easily performed with the PR quadtree. To locate all points within radius r of query point Q, begin at the root. If the root is an empty leaf node, then no data points are found. If the root is a leaf containing a data record, then the location of the data point is examined to determine if it falls within the circle. If the root is an internal node, then the process is performed recursively, but only on those subtrees containing some part of the search circle.
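Insertion with the repeated decomposition described above can be sketched by continuing the hypothetical PRNode representation from the search example (again an assumed design; a real implementation would report the duplicate case rather than ignore it).

class PRInsert {
  static PRNode insert(PRNode rt, int px, int py, int x, int y, int size) {
    if (rt == null) {                        // empty leaf: store here
      PRNode leaf = new PRNode();
      leaf.px = px;
      leaf.py = py;
      return leaf;
    }
    if (!rt.isInternal()) {                  // occupied leaf: decompose
      if (rt.px == px && rt.py == py)
        return rt;                           // duplicate point
      PRNode split = new PRNode();
      split.kids = new PRNode[4];
      insertIntoChild(split, rt.px, rt.py, x, y, size);  // old point
      rt = split;
    }
    insertIntoChild(rt, px, py, x, y, size); // new point; the recursion
    return rt;                               // splits again if needed
  }

  static void insertIntoChild(PRNode rt, int px, int py,
                              int x, int y, int size) {
    int half = size / 2;
    int col = (px < x + half) ? 0 : 1;
    int row = (py < y + half) ? 0 : 1;
    int i = row * 2 + col;                   // 0=NW, 1=NE, 2=SW, 3=SE
    rt.kids[i] = insert(rt.kids[i], px, py,
                        x + col * half, y + row * half, half);
  }
}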
Figure: PR quadtree insertion example. (a) The initial PR quadtree containing two data points. (b) The result of inserting a third point. The block containing it must be decomposed into four sub-blocks. The two points would still be in the same block if only one subdivision takes place, so a second decomposition is required to separate them.

Let us now consider how the structure of the PR quadtree affects the design of its node representation. The PR quadtree is actually a trie (as defined earlier). Decomposition takes place at the mid-points for internal nodes, regardless of where the data points actually fall. The placement of the data points does determine whether a decomposition for a node takes place, but not where the decomposition for the node takes place. Internal nodes of the PR quadtree are quite different from leaf nodes, in that internal nodes have children (leaf nodes do not) and leaf nodes have data fields (internal nodes do not). Thus, it is likely to be beneficial to represent internal nodes differently from leaf nodes. Finally, there is the fact that approximately half of the leaf nodes will contain no data field.

Another issue to consider is: how does a routine traversing the PR quadtree get the coordinates for the square represented by the current PR quadtree node? One possibility is to store with each node its spatial description (such as upper-left corner and width). However, this will take a lot of space, perhaps as much as the space needed for the data records, depending on what information is being stored. Another possibility is to pass in the coordinates when the recursive call is made. For example, consider the search process. Initially, the search visits the root node of the tree, which has origin at (0, 0) and whose width is the full size of the space being covered. When the appropriate child is visited, it is a simple matter for the search routine to determine the origin for the child, and the width of the square is simply half that of the parent. Not only does passing in the size and position information for a node save considerable space, but avoiding storing such information in the nodes enables a good design choice for empty leaf nodes, as discussed next.
How should we represent empty leaf nodes? On average, half of the leaf nodes in a PR quadtree are empty (i.e., do not store a data point). One implementation option is to use a null pointer in internal nodes to represent empty nodes. This will solve the problem of excessive space requirements. There is an unfortunate side effect: using a null pointer requires the PR quadtree processing methods to understand this convention. In other words, you are breaking encapsulation on the node representation, because the tree now must know things about how the nodes are implemented. This is not too horrible for this particular application, because the node class can be considered private to the tree class, in which case the node implementation is completely invisible to the outside world. However, it is undesirable if there is another reasonable alternative.

Fortunately, there is a good alternative. It is called the flyweight design pattern. In the PR quadtree, a flyweight is a single empty leaf node that is reused in all places where an empty leaf node is needed. You simply have all of the internal nodes with empty leaf children point to the same node object. This node object is created once at the beginning of the program, and is never removed. The node class recognizes from the pointer value that the flyweight is being accessed, and acts accordingly.

Note that when using the flyweight design pattern, you cannot store coordinates for the node in the node. This is an example of the concept of intrinsic versus extrinsic state. Intrinsic state for an object is state information stored in the object. If you stored the coordinates for a node in the node object, those coordinates would be intrinsic state. Extrinsic state is state information about an object stored elsewhere in the environment, such as in global variables or passed to the method. If your recursive calls that process the tree pass in the coordinates for the current node, then the coordinates will be extrinsic state. A flyweight can have in its intrinsic state only information that is accurate for all instances of the flyweight. Clearly coordinates do not qualify, because each empty leaf node has its own location. So, if you want to use a flyweight, you must pass in coordinates.
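A sketch of the flyweight idea for empty leaves (the class names are illustrative): one shared, stateless instance stands in for every empty leaf, while a square's coordinates remain extrinsic state passed in by the tree's recursive routines.

abstract class QNode { }

class EmptyLeaf extends QNode {
  // The one shared flyweight instance, created once and never removed.
  static final EmptyLeaf FLYWEIGHT = new EmptyLeaf();
  private EmptyLeaf() { }  // no other instances can be created
}

class LeafNode extends QNode {
  int px, py;  // intrinsic state: the stored data point
  // Deliberately no square coordinates here: those are extrinsic
  // state, supplied by the recursive calls that process the tree.
}

class InternalNode extends QNode {
  QNode nw = EmptyLeaf.FLYWEIGHT, ne = EmptyLeaf.FLYWEIGHT,
        sw = EmptyLeaf.FLYWEIGHT, se = EmptyLeaf.FLYWEIGHT;
}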
Another design choice is: who controls the work, the node class or the tree class? For example, on an insert operation, you could have the tree class control the flow down the tree, looking at (querying) the nodes to see their type and reacting accordingly. This is the approach used by the BST implementation presented earlier. An alternate approach is to have the node class do the work. That is, you have an insert method for the nodes. If the node is internal, it passes the city record to the appropriate child (recursively). If the node is a flyweight, it replaces itself with a new leaf node. If the node is a full node, it replaces itself with a subtree. This is an example of the composite design pattern, discussed earlier.

Other Point Data Structures

The differences between the k-d tree and the PR quadtree illustrate many of the design choices encountered when creating spatial data structures. The k-d tree provides an object space decomposition of the region, while the PR quadtree provides a key space decomposition (thus, it is a trie). The k-d tree stores records at all nodes, while the PR quadtree stores records only at the leaf nodes. Finally, the two trees have different structures: the k-d tree is a binary tree, while the PR quadtree is a full tree with four branches (in the two-dimensional case).

Consider the extension of this concept to three dimensions. A k-d tree for three dimensions would alternate the discriminator through the x, y, and z dimensions. The three-dimensional equivalent of the PR quadtree would be a tree with 2^3 or eight branches. Such a tree is called an octree.

We can also devise a binary trie based on a key space decomposition in each dimension, or a quadtree that uses the two-dimensional equivalent to an object space decomposition. The bintree is a binary trie that uses key space decomposition and alternates discriminators at each level in a manner similar to the k-d tree. The bintree for the points of the earlier example is shown in the first figure below. Alternatively, we can use a four-way decomposition of space centered on the data points. The tree resulting from such a decomposition is called a point quadtree. The point quadtree for the same data points is shown in the second figure below.

Other Spatial Data Structures

This section has barely scratched the surface of the field of spatial data structures. Dozens of distinct spatial data structures have been invented by now, many with variations and alternate implementations. Spatial data structures exist for storing many forms of spatial data other than points. The most important distinctions between them are the tree structure (binary or not, regular decompositions or not) and the decomposition rule used to decide when the data contained within a region is so complex that the region must be subdivided. Perhaps the best known spatial data structure is the "region quadtree" for storing images where the pixel values tend to be blocky, such as a map of the countries of the world. The region quadtree uses a four-way regular decomposition scheme similar to the PR quadtree. The decomposition rule is simply to divide any node containing pixels of more than one color or value.
Figure: An example of the bintree, a binary trie using key space decomposition and discriminators rotating among the dimensions. Compare this with the k-d tree and the PR quadtree of the earlier figures.

Figure: An example of the point quadtree, a 4-ary tree using object space decomposition. Compare this with the PR quadtree of the earlier figure.
Spatial data structures can also be used to store line objects, rectangle objects, or objects of arbitrary shape (such as polygons in two dimensions or polyhedra in three dimensions). A simple, yet effective, data structure for storing rectangles or arbitrary polygonal shapes can be derived from the PR quadtree: pick a threshold value c, and subdivide any region into four quadrants if it contains more than c objects. A special case must be dealt with when more than c objects intersect, since no amount of subdivision can then separate them.

Some of the most interesting developments in spatial data structures have to do with adapting them for disk-based applications. However, all such disk-based implementations boil down to storing the spatial data structure within some variant on either B-trees or hashing.

Further Reading

PATRICIA tries and other trie implementations are discussed in Information Retrieval: Data Structures & Algorithms, Frakes and Baeza-Yates, eds. [FBY]. See Knuth [Knu] for a discussion of the AVL tree. For further reading on splay trees, see "Self-Adjusting Binary Search Trees" by Sleator and Tarjan [ST].

The world of spatial data structures is rich and rapidly evolving. For a good introduction, see Foundations of Multidimensional and Metric Data Structures by Hanan Samet [Sam]. This is also the best reference for more information on the PR quadtree. The k-d tree was invented by John Louis Bentley. For further information on the k-d tree, in addition to [Sam], see [Ben]. For information on using a quadtree to store arbitrary polygonal objects, see [SH]. For a discussion of the relative space requirements for two-way versus multiway branching, see "A Generalized Comparison of Quadtree and Bintree Storage Requirements" by Shaffer, Juvvadi, and Heath [SJH].

Closely related to spatial data structures are data structures for storing multidimensional data (which might not necessarily be spatial in nature). A popular data structure for storing such data is the R-tree, which was originally proposed by Guttman [Gut].

Exercises

1. Show the binary trie (as illustrated by the earlier figure) for a given collection of values.

2. Show the PAT trie (as illustrated by the earlier figure) for a given collection of values.

3. Write the insertion routine for a binary trie such as that shown in the earlier figure.

4. Write the deletion routine for a binary trie such as that shown in the earlier figure.
5. For each of the four values given in parts (a) through (d), show the result (including appropriate rotations) of inserting that value into the AVL tree on the left in the earlier figure.

6. Show the splay tree that results from searching for a given value in the splay tree of the earlier figure.

7. Show the splay tree that results from searching for another given value in the same splay tree.

8. Some applications do not permit storing two records with duplicate key values. In such a case, an attempt to insert a duplicate-keyed record into a tree structure such as a splay tree should result in a failure on insert. What is the appropriate action to take in a splay tree implementation when the insert routine is called with a duplicate-keyed record?

9. Show the result of deleting a given point from the k-d tree of the earlier figure.

10. (a) Show the result of building a k-d tree from a given list of points (inserted in the order given).
    (b) Show the result of deleting a given point from the tree you built in part (a).

11. (a) Show the result of deleting a given point from the PR quadtree of the earlier figure.
    (b) Show the result of deleting two given records from the PR quadtree of the earlier figure.

12. (a) Show the result of building a PR quadtree from a given list of points (inserted in the order given). Assume the tree is representing a square space.
    (b) Show the result of deleting a given point from the tree you built in part (a).
    (c) Show the result of deleting another given point from the resulting tree in part (b).

13. On average, how many leaf nodes of a PR quadtree will typically be empty? Explain why.

14. When performing a region search on a PR quadtree, we need only search those subtrees of an internal node whose corresponding square falls within the query circle. This is most easily computed by comparing the x and y ranges of the query circle against the x and y ranges of the square corresponding to the subtree. However, as illustrated by the earlier figure, the x and y ranges might overlap without the circle actually intersecting the square. Write a function that accurately determines if a circle and a square intersect.
15. (a) Show the result of building a bintree from a given list of points (inserted in the order given). Assume the tree is representing a square space.
    (b) Show the result of deleting a given point from the tree you built in part (a).
    (c) Show the result of deleting another given point from the resulting tree in part (b).

16. Compare the trees constructed for Exercises 12 and 15 in terms of the number of internal nodes, full leaf nodes, empty leaf nodes, and total depths of the two trees.

17. Show the result of building a point quadtree from a given list of points (inserted in the order given). Assume the tree is representing a square space.

Projects

1. Use the trie data structure to devise a program to sort variable-length strings. The program's running time should be proportional to the total number of letters in all of the strings. Note that some strings might be very long while most are short.

2. Define the set of suffix strings for a string S to be S, S without its first character, S without its first two characters, and so on. For example, the complete set of suffix strings for "HELLO" would be {HELLO, ELLO, LLO, LO, O}. A suffix tree is a PAT trie that contains all of the suffix strings for a given string, and associates each suffix with the complete string. The advantage of a suffix tree is that it allows a search for strings using "wildcards." For example, the search key "TH*" means to find all strings with "TH" as the first two characters. This can easily be done with a regular trie. Searching for "*TH" is not efficient in a regular trie, but it is efficient in a suffix tree. Implement the suffix tree for a dictionary of words or phrases, with support for wildcard search.

3. Revise the BST class presented earlier to use the AVL tree rotations. Your new implementation should not modify the original BST class ADT. Compare your AVL tree against an implementation of the standard BST over a wide variety of input data. Under what conditions does the AVL tree actually save time?
4. Revise the BST class presented earlier to use the splay tree rotations. Your new implementation should not modify the original BST class ADT. Compare your splay tree against an implementation of the standard BST over a wide variety of input data. Under what conditions does the splay tree actually save time?

5. Implement a city database using the k-d tree. Each database record contains the name of the city (a string of arbitrary length) and the coordinates of the city expressed as integer x- and y-coordinates. Your database should allow records to be inserted, deleted by name or coordinate, and searched by name or coordinate. You should also support region queries, that is, a request to print all records within a given distance of a specified point.

6. Implement a city database using the PR quadtree, with the same requirements as Project 5.

7. Implement a city database using the bintree, with the same requirements as Project 5.

8. Implement a city database using the point quadtree, with the same requirements as Project 5.

9. Use the PR quadtree to implement an efficient solution to an earlier point-equivalencing problem. That is, store the set of points in a PR quadtree. For each point, the PR quadtree is used to find those points within distance d that should be equivalenced. What is the asymptotic complexity of this solution?

10. Select any two of the point representations described in this chapter (i.e., the k-d tree, the PR quadtree, the bintree, and the point quadtree). Implement your two choices and compare them over a wide range of data sets. Describe which is easier to implement, which appears to be more space efficient, and which appears to be more time efficient.
Implement a representation for a collection of (two-dimensional) rectangles using a quadtree based on regular decomposition. Assume that the space being represented is a square whose width and height are some power of two. Rectangles are assumed to have integer coordinates and integer width and height. Pick some value c, and use as a decomposition rule that a region is subdivided into four equal-sized regions whenever it contains more than c rectangles. A special case occurs if all of these rectangles intersect at some point within the current region (because decomposing such a node would never reach termination). In this situation, the node simply stores pointers to more than c rectangles. Try your representation on data sets of rectangles with varying values of c.
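As a starting point for the decomposition rule in the last project, the test below sketches when a node should split: only when it holds more than c rectangles and those rectangles do not all intersect at a common point. This is an illustrative sketch under assumed names, not a prescribed design.

    class Rect {                        // Axis-aligned rectangle with
      int x, y, w, h;                   //   integer position and size
      Rect(int x, int y, int w, int h) { this.x=x; this.y=y; this.w=w; this.h=h; }
    }

    class RectQuadNode {
      static final int C = 4;           // Assumed threshold; the project
                                        //   says to try varying values of c
      java.util.List<Rect> rects = new java.util.ArrayList<>();

      boolean shouldSplit() {
        if (rects.size() <= C) return false;
        // Compute the common intersection of all stored rectangles.
        int left = Integer.MIN_VALUE, right = Integer.MAX_VALUE;
        int bottom = Integer.MIN_VALUE, top = Integer.MAX_VALUE;
        for (Rect r : rects) {
          left = Math.max(left, r.x);      right = Math.min(right, r.x + r.w);
          bottom = Math.max(bottom, r.y);  top = Math.min(top, r.y + r.h);
        }
        // If all rectangles share a point, splitting would never terminate,
        // so the node simply keeps them all.
        boolean allIntersect = (left <= right) && (bottom <= top);
        return !allIntersect;
      }
    }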
20,531 | theory of algorithms |
20,532 | analysis techniques this book contains many examples of asymptotic analysis of the time requirements for algorithms and the space requirements for data structures often it is easy to invent an equation to model the behavior of the algorithm or data structure in questionand also easy to derive closed-form solution for the equation should it contain recurrence or summation sometimes an analysis proves more difficult it may take clever insight to derive the right modelsuch as the snowplow argument for analyzing the average run length resulting from replacement selection (section in this caseonce the snowplow argument is understoodthe resulting equations are simple sometimesdeveloping the model is straightforward but analyzing the resulting equations is not an example is the average-case analysis for quicksort the equation given in section simply enumerates all possible cases for the pivot positionsumming corresponding costs for the recursive calls to quicksort howeverderiving closed-form solution for the resulting recurrence relation is not as easy many iterative algorithms require that we compute summation to determine the cost of loop techniques for finding closed-form solutions to summations are presented in section time requirements for many algorithms based on recursion are best modeled by recurrence relations discussion of techniques for solving recurrences is provided in section these sections extend the introduction to summations and recurrences provided in section so the reader should already be familiar with that material section provides an introduction to the topic of amortized analysis amortized analysis deals with the cost of series of operations perhaps single operation in the series has high costbut as result the cost of the remaining operations is limited in such way that the entire series can be done efficiently amortized analysis has been used successfully to analyze several of the algorithms presented in |
this book, including the cost of a series of union/find operations, the cost of a series of splay tree operations, and the cost of a series of operations on self-organizing lists. A later section discusses the topic in more detail.

Summation Techniques

We begin our study of techniques for finding the closed-form solution to a summation by considering the simple example

    Σ_{i=1}^{n} i.

Earlier it was proved by induction that this summation has the well-known closed form n(n + 1)/2. But while induction is a good technique for proving that a proposed closed-form expression is correct, how do we find a candidate closed-form expression to test in the first place? Let us try to approach this summation from first principles, as though we had never seen it before.

A good place to begin analyzing a summation is to give an estimate of its value for a given n. Observe that the biggest term for this summation is n, and there are n terms being summed up. So the total must be less than n². Actually, most terms are much less than n, and the sizes of the terms grow linearly. If we were to draw a picture with bars for the size of the terms, their heights would form a line, and we could enclose them in a box n units wide and n units high. It is easy to see from this that a closer estimate for the summation is about n²/2. Having this estimate in hand helps us when trying to determine an exact closed-form solution, because we will hopefully recognize if our proposed solution is badly wrong.

Let us now consider some ways that we might hit upon an exact value for the closed-form solution to this summation. One particularly clever approach we can take is to observe that we can "pair up" the first and last terms, the second and (n − 1)th terms, and so on. Each pair sums to n + 1. The number of pairs is n/2. Thus, the solution is n(n + 1)/2. This is pretty, and there's no doubt about it being correct. The problem is that it is not a useful technique for solving many other summations.

Now let us try to do something a bit more general. We already recognized that, because the largest term is n and there are n terms, the summation is less than n². If we are lucky, the closed-form solution is a polynomial. Using that as a working assumption, we can invoke a technique called guess-and-test. We will guess that the closed-form solution for this summation is a polynomial of the form
    c₁n² + c₂n + c₃

for some constants c₁, c₂, and c₃. If this is the case, we can plug in the answers to small cases of the summation to solve for the coefficients. For this example, substituting 1, 2, and 3 for n leads to three simultaneous equations. Because the summation when n = 1 is just 1, c₁ + c₂ + c₃ must be 1. For n = 2 and n = 3 we get the two equations

    4c₁ + 2c₂ + c₃ = 3
    9c₁ + 3c₂ + c₃ = 6,

which in turn yield c₁ = 1/2, c₂ = 1/2, and c₃ = 0. Thus, if the closed-form solution for the summation is a polynomial, it can only be

    n²/2 + n/2,

which is more commonly written n(n + 1)/2.

At this point, we still must do the "test" part of the guess-and-test approach. We can use an induction proof to verify whether our candidate closed-form solution is correct. In this case it is indeed correct, as a simple induction proof shows. The induction proof is necessary because our initial assumption that the solution is a simple polynomial could be wrong. For example, it might have been possible that the true solution includes a logarithmic term, such as c₁n² + c₂n log n. The process shown here is essentially fitting a curve to a fixed number of points. Because there is always an n-degree polynomial that fits n + 1 points, we have not done enough work to be sure that we know the true equation without the induction proof.

Guess-and-test is useful whenever the solution is a polynomial expression. In particular, similar reasoning can be used to solve for Σ_{i=1}^{n} i², or more generally Σ_{i=1}^{n} i^c for any positive integer c. Why is this not a universal approach to solving summations? Because many summations do not have a polynomial as their closed form.

A more general approach is based on the subtract-and-guess or divide-and-guess strategies. One form of subtract-and-guess is known as the shifting method. The shifting method subtracts the summation from a variation on the summation. The variation selected for the subtraction should be one that makes most of the terms cancel out. To solve sum f, we pick a known function g and find a pattern in terms of f(n) − g(n) or f(n)/g(n).
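Whichever strategy produces the guess, the "test" step can begin mechanically: before investing in an induction proof, a few lines of code can compare the candidate against a brute-force evaluation of the summation. The sketch below is illustrative (the method name is invented); it checks the candidate n(n + 1)/2.

    /** Check the candidate closed form n(n+1)/2 against brute force.
     *  Illustrative only: a passing check suggests the guess is right;
     *  induction is still needed to prove it. */
    static boolean checkGuess(long limit) {
      long sum = 0;
      for (long n = 1; n <= limit; n++) {
        sum += n;                       // Brute-force running summation
        if (sum != n * (n + 1) / 2)     // Candidate closed form
          return false;
      }
      return true;
    }

A failed check falsifies the guess immediately; a passing check over many values is only evidence, which is why the induction proof remains the final word.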
Example: Find the closed-form solution for Σ_{i=1}^{n} i using the divide-and-guess approach. We will try two example functions to illustrate the divide-and-guess method: dividing by n and dividing by f(n − 1). Our goal is to find patterns that we can use to guess a closed-form expression as our candidate for testing with an induction proof. To aid us in finding such patterns, we can construct a table showing the first few numbers of each function, and the result of dividing one by the other, as follows.

    n            1    2    3    4      5      6
    f(n)         1    3    6    10     15     21
    f(n)/n       1    3/2  2    5/2    3      7/2
    f(n − 1)     0    1    3    6      10     15
    f(n)/f(n−1)  —    3/1  6/3  10/6   15/10  21/15

Dividing by both n and f(n − 1) happen to give us useful patterns to work with:

    f(n)/n = (n + 1)/2,   and   f(n)/f(n − 1) = (n + 1)/(n − 1).

Of course, lots of other approaches do not work. For example, f(n) − n = f(n − 1). Knowing that f(n) − n = f(n − 1) is not useful for determining the closed-form solution to this summation. Or consider f(n) − f(n − 1) = n. Again, knowing that f(n) − f(n − 1) = n is not useful. Finding the right combination of equations can be like finding a needle in a haystack.

In our first example, we can see directly what the closed-form solution should be. Because f(n)/n = (n + 1)/2, obviously f(n) = n(n + 1)/2.

Dividing f(n) by f(n − 1) does not give so obvious a result, but it provides another useful illustration:

    f(n)/f(n − 1) = (n + 1)/(n − 1)
    (n − 1)f(n) = (n + 1)f(n − 1)
    (n − 1)f(n) = (n + 1)(f(n) − n)      [because f(n − 1) = f(n) − n]
    (n − 1)f(n) = (n + 1)f(n) − n(n + 1)
    2f(n) = n(n + 1)
    f(n) = n(n + 1)/2.
Once again, we still do not have a proof that f(n) = n(n + 1)/2. Why? Because we did not prove that f(n)/n = (n + 1)/2 nor that f(n)/f(n − 1) = (n + 1)/(n − 1). We merely hypothesized patterns from looking at a few terms. Fortunately, it is easy to check our hypothesis with induction.

Example: Solve the summation

    Σ_{i=1}^{n} 1/2^i.

We will begin by writing out a table of the first few values of the summation and see if we can detect a pattern.

    n         1    2    3    4      5      6
    f(n)      1/2  3/4  7/8  15/16  31/32  63/64
    1 − f(n)  1/2  1/4  1/8  1/16   1/32   1/64

By direct inspection of the second line of the table, we might recognize the pattern f(n) = (2^n − 1)/2^n. A simple induction proof can then prove that this always holds true. Alternatively, consider if we hadn't noticed the pattern for the form of f(n). We might observe that f(n) appears to be reaching an asymptote at one. In which case, we might consider looking at the difference between f(n) and the expected asymptote, by subtracting f(n) from 1. The result is shown in the last line of the table, which has a clear pattern of 1/2^n. From this we can easily deduce a guess that f(n) = 1 − 1/2^n. Again, a simple induction proof will verify the guess.

Example: Solve the summation

    f(n) = Σ_{i=0}^{n} ar^i = a + ar + ar² + ⋯ + ar^n.

This is called a geometric series. Our goal is to find some variation for f(n) such that subtracting one from the other leaves us with an easily manipulated equation. Because the difference between consecutive terms of
the summation is a factor of r, we can shift terms if we multiply the entire expression by r:

    r f(n) = r Σ_{i=0}^{n} ar^i = ar + ar² + ar³ + ⋯ + ar^{n+1}.

We can now subtract the one equation from the other, as follows:

    f(n) − r f(n) = a + ar + ar² + ⋯ + ar^n − (ar + ar² + ⋯ + ar^n) − ar^{n+1}.

The result leaves only the end terms:

    f(n) − r f(n) = Σ_{i=0}^{n} ar^i − r Σ_{i=0}^{n} ar^i
    (1 − r) f(n) = a − ar^{n+1}.

Thus, we get the result

    f(n) = (a − ar^{n+1}) / (1 − r),

where r ≠ 1.

Example: For our second example of the shifting method, we solve

    f(n) = Σ_{i=1}^{n} i·2^i = 1·2¹ + 2·2² + 3·2³ + ⋯ + n·2^n.

We can achieve our goal if we multiply by two:

    2 f(n) = 2 Σ_{i=1}^{n} i·2^i = Σ_{i=1}^{n} i·2^{i+1} = 1·2² + 2·2³ + 3·2⁴ + ⋯ + n·2^{n+1}.

The ith term of 2f(n) is i·2^{i+1}, while the (i + 1)th term of f(n) is (i + 1)·2^{i+1}. Subtracting one expression from the other yields the summation of 2^i and a few non-canceled terms:

    2f(n) − f(n) = Σ_{i=1}^{n} i·2^{i+1} − Σ_{i=1}^{n} i·2^i.

Shift i's value in the second summation, substituting i + 1 for i:
    = Σ_{i=1}^{n} i·2^{i+1} − Σ_{i=0}^{n−1} (i + 1)·2^{i+1}.

Break the second summation into two parts:

    = Σ_{i=1}^{n} i·2^{i+1} − Σ_{i=0}^{n−1} i·2^{i+1} − Σ_{i=0}^{n−1} 2^{i+1}.

Cancel like terms:

    = n·2^{n+1} − Σ_{i=0}^{n−1} 2^{i+1}.

Again shift i's value in the summation, substituting i for i + 1:

    = n·2^{n+1} − Σ_{i=1}^{n} 2^i.

Replace the new summation with a solution that we already know:

    = n·2^{n+1} − (2^{n+1} − 2).

Finally, reorganize the equation:

    f(n) = (n − 1)·2^{n+1} + 2.

Recurrence Relations

Recurrence relations are often used to model the cost of recursive functions. For example, the standard mergesort takes a list of size n, splits it in half, performs mergesort on each half, and finally merges the two sublists in n steps. The cost for this can be modeled as

    T(n) = 2T(n/2) + n.

In other words, the cost of the algorithm on input of size n is two times the cost for input of size n/2 (due to the two recursive calls to mergesort) plus n (the time to merge the sublists together again).

There are many approaches to solving recurrence relations, and we briefly consider three here. The first is an estimation technique: guess the upper and lower bounds for the recurrence, use induction to prove the bounds, and tighten as required. The second approach is to expand the recurrence to convert it to a summation and then use summation techniques. The third approach is to take advantage
20,539 | chap analysis techniques of already proven theorems when the recurrence is of suitable form in particulartypical divide and conquer algorithms such as mergesort yield recurrences of form that fits pattern for which we have ready solution estimating upper and lower bounds the first approach to solving recurrences is to guess the answer and then attempt to prove it correct if correct upper or lower bound estimate is givenan easy induction proof will verify this fact if the proof is successfulthen try to tighten the bound if the induction proof failsthen loosen the bound and try again once the upper and lower bounds matchyou are finished this is useful technique when you are only looking for asymptotic complexities when seeking precise closed-form solution ( you seek the constants for the expression)this method will not be appropriate example use the guessing technique to find the asymptotic bounds for mergesortwhose running time is described by the equation ( ( / nt( we begin by guessing that this recurrence has an upper bound in ( to be more preciseassume that ( < we prove this guess is correct by induction in this proofwe assume that is power of twoto make the calculations easy for the base caset( < for the induction stepwe need to show that ( < implies that ( the induction hypothesis is ( < for all < it follows that ( ( < < <( ) which is what we wanted to prove thust(nis in ( is ( good estimatein the next-to-last step we went from + to the much larger this suggests that ( is high estimate if we guess something smallersuch as ( <cn for some constant cit should be clear that this cannot work because cn and there is no room for |
20,540 | sec recurrence relations the extra cost to join the two pieces together thusthe true cost must be somewhere between cn and let us now try ( < log for the base casethe definition of the recurrence sets ( <( log assume (induction hypothesisthat ( < log thent( ( < log < (log < log which is what we seek to prove in similar fashionwe can prove that (nis in ohm( log nthust(nis also th( log nexample we know that the factorial function grows exponentially how does it compare to to nn do they all grow "equally fast(in an asymptotic sense)we can begin by looking at few initial terms nn we can also look at these functions in terms of their recurrences = nn( ) = - ( > = - ( > at this pointour intuition should be telling us pretty clearly the relative growth rates of these three functions but how do we prove formally which grows the fastestand how do we decide if the differences are significant in an asymptotic senseor just constant factor differenceswe can use logarithms to help us get an idea about the relative growth rates of these functions clearlylog equally clearlylog nn log we can easily see from this that is (nn )that isnn grows asymptotically faster than |
20,541 | chap analysis techniques how does nfit into thiswe can again take advantage of logarithms obviously <nn so we know that log nis ( log nbut what about lower bound for the factorial functionconsider the following nn ( > ** ** / therefore log >log) / log in other wordslog nis in ohm( log nthuslog nth( log nnote that this does not mean that nth(nn because log log nit follows that log th(log but th( the log function often works as "flattenerwhen dealing with asymptotics (and the antilog works as booster that iswhenever log (nis in (log ( )we know that (nis in ( ( )but knowing that log (nth(log ( )does not necessarily mean that (nth( ( )example what is the growth rate of the fibonacci sequence (nf ( ( for > ( ( in this case it is useful to compare the ratio of (nto ( the following table shows the first few values ( ( )/ ( following this out few more termsit appears to settle to ratio of approximately assuming ( )/ ( really does tend to fixed valuewe can determine what that value must be (nf ( ( - + ( ( ( this comes from knowing that (nf ( ( we divide by ( to make the second term go awayand we also get something |
20,542 | sec recurrence relations useful in the first term remember that the goal of such manipulations is to give us an equation that relates (nto something without recursive calls for large nwe also observe thatf (nf (nf ( ( ( ( as gets big this comes from multiplying ( )/ ( by ( )/ ( and rearranging if existsthen using the quadratic equationthe only solution greater than one is this expression also has the name ph what does this say about the growth rate of the fibonacci sequenceit is exponentialwith (nth(phn more preciselyf (nconverges to phn ( ph) expanding recurrences estimating bounds is effective if you only need an approximation to the answer more precise techniques are required to find an exact solution one such technique is called expanding the recurrence in this methodthe smaller terms on the right side of the equation are in turn replaced by their definition this is the expanding step these terms are again expandedand so onuntil full series with no recurrence results this yields summationand techniques for solving summations can then be used couple of simple expansions were shown in section more complex example is given below example find the solution for ( ( / ( for simplicity we assume that is power of twoso we will rewrite it as this recurrence can be expanded as followst( ( / |
20,543 | chap analysis techniques ( ( / ( / ) ( ( ( / ( / ) ( / ) ( - - this last expression can best be represented by summation as follows - / = - / = from equation we have / - ( / this is the exact solution to the recurrence for power of two at this pointwe should use simple induction proof to verify that our solution is indeed correct example our next example comes from the algorithm to build heap recall from section that to build heapwe first heapify the two subheapsthen push down the root to its proper position the cost isf ( < ( / log let us find closed form solution for this recurrence we can expand the recurrence few times to see that ( < ( / log < [ ( / log / log < [ ( ( / log / log / log |
We can deduce from this expansion that this recurrence is equivalent to the following summation and its derivation:

    f(n) ≤ Σ_{i=0}^{log n − 1} 2^{i+1} (log n − i)
         = 2 Σ_{i=0}^{log n − 1} 2^i (log n − i)
         = 2 Σ_{i=1}^{log n} i·2^{log n − i}
         = 2n Σ_{i=1}^{log n} i/2^i
         = 4n − 2 log n − 4.

Divide and Conquer Recurrences

The third approach to solving recurrences is to take advantage of known theorems that describe the solution for classes of recurrences. One useful example is a theorem that gives the answer for a class known as divide and conquer recurrences. These have the form

    T(n) = aT(n/b) + cn^k,   T(1) = c,

where a, b, c, and k are constants. In general, this recurrence describes a problem of size n divided into a subproblems of size n/b, while cn^k is the amount of work necessary to combine the partial solutions. Mergesort is an example of a divide and conquer algorithm, and its recurrence fits this form. So does binary search. We use the method of expanding recurrences to derive the general solution for any divide and conquer recurrence, assuming that n = b^m:

    T(n) = a(aT(n/b²) + c(n/b)^k) + cn^k
         = a^m T(1) + a^{m−1} c (n/b^{m−1})^k + ⋯ + a c (n/b)^k + cn^k
         = c a^m + Σ_{i=1}^{m} a^{m−i} c b^{ik}
         = c a^m (1 + Σ_{i=1}^{m} (b^k/a)^i).
Note that

    a^m = a^{log_b n} = n^{log_b a}.

The summation is a geometric series whose sum depends on the ratio r = b^k/a. There are three cases.

(1) r < 1. The geometric series is then bounded above by the constant 1/(1 − r):

    Σ_{i=1}^{m} r^i < 1/(1 − r), a constant.

Thus,

    T(n) = Θ(a^m) = Θ(n^{log_b a}).

(2) r = 1. Because r = b^k/a, we know that a = b^k. From the definition of logarithms it follows immediately that k = log_b a. We also note that m = log_b n. Thus,

    Σ_{i=1}^{m} r^i = m = log_b n.

Because a^m = n^{log_b a} = n^k, we have

    T(n) = Θ(n^{log_b a} log n) = Θ(n^k log n).

(3) r > 1. From the geometric series solution,

    Σ_{i=1}^{m} r^i = (r^{m+1} − r)/(r − 1) = Θ(r^m).

Thus,

    T(n) = Θ(a^m r^m) = Θ(a^m (b^k/a)^m) = Θ(b^{km}) = Θ(n^k).

We can summarize the above derivation as the following theorem, sometimes referred to as the Master Theorem.

Theorem (The Master Theorem): For any recurrence relation of the form T(n) = aT(n/b) + cn^k, T(1) = c, the following relationships hold:

    T(n) = Θ(n^{log_b a})   if a > b^k,
    T(n) = Θ(n^k log n)     if a = b^k,
    T(n) = Θ(n^k)           if a < b^k.
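The three cases translate directly into a small decision procedure. The sketch below is an illustrative helper (the name is invented, and it returns a descriptive string rather than a symbolic expression) that reports which case of the theorem applies to the constants a, b, and k.

    /** Report the asymptotic class given by the Master Theorem for
     *  T(n) = aT(n/b) + cn^k. Illustrative sketch with an invented name. */
    static String masterCase(double a, double b, double k) {
      double bk = Math.pow(b, k);
      if (Math.abs(a - bk) < 1e-9)        // Case (2): a = b^k
        return "Theta(n^" + k + " log n)";
      else if (a > bk)                    // Case (1): a > b^k
        return "Theta(n^log_b(a)), log_b(a) = " + (Math.log(a) / Math.log(b));
      else                                // Case (3): a < b^k
        return "Theta(n^" + k + ")";
    }

For instance, masterCase(3, 5, 2) and masterCase(2, 2, 1) report Θ(n²) and Θ(n log n) respectively, matching the two examples that follow.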
This theorem may be applied whenever appropriate, rather than re-deriving the solution for the recurrence.

Example: Apply the theorem to solve

    T(n) = 3T(n/5) + 8n².

Because a = 3, b = 5, c = 8, and k = 2, we find that 3 < 5² = 25. Applying case (3) of the theorem, T(n) = Θ(n²).

Example: Use the theorem to solve the recurrence relation for mergesort:

    T(n) = 2T(n/2) + n,   T(1) = 1.

Because a = 2, b = 2, c = 1, and k = 1, we find that 2 = 2¹. Applying case (2) of the theorem, T(n) = Θ(n log n).

Average-Case Analysis of Quicksort

Earlier we determined that the average-case analysis of Quicksort had the following recurrence:

    T(n) = cn + (1/n) Σ_{k=0}^{n−1} [T(k) + T(n − 1 − k)],   T(0) = T(1) = c.

The cn term is an upper bound on the findpivot and partition steps. This equation comes from assuming that the partitioning element is equally likely to occur in any position k. It can be simplified by observing that the two recurrence terms T(k) and T(n − 1 − k) are equivalent, because one simply counts up from T(0) to T(n − 1) while the other counts down from T(n − 1) to T(0). This yields

    T(n) = cn + (2/n) Σ_{k=0}^{n−1} T(k).

This form is known as a recurrence with full history. The key to solving such a recurrence is to cancel out the summation terms. The shifting method for summations provides a way to do this: multiply both sides by n and subtract the result
from the formula for n·T(n):

    n T(n) = cn² + 2 Σ_{k=0}^{n−1} T(k)
    (n + 1) T(n + 1) = c(n + 1)² + 2 Σ_{k=0}^{n} T(k).

Subtracting n T(n) from both sides yields:

    (n + 1) T(n + 1) − n T(n) = c(n + 1)² − cn² + 2 T(n)
    (n + 1) T(n + 1) − n T(n) = c(2n + 1) + 2 T(n)
    (n + 1) T(n + 1) = c(2n + 1) + (n + 2) T(n)
    T(n + 1) = c(2n + 1)/(n + 1) + ((n + 2)/(n + 1)) T(n).

At this point, we have eliminated the summation and can now use our normal methods for solving recurrences to get a closed-form solution. Note that c(2n + 1)/(n + 1) < 2c, so we can simplify the result. Expanding the recurrence, we get

    T(n + 1) ≤ 2c + ((n + 2)/(n + 1)) T(n)
             = 2c + ((n + 2)/(n + 1)) (2c + ((n + 1)/n) T(n − 1))
             = 2c + ((n + 2)/(n + 1)) (2c + ((n + 1)/n) (2c + (n/(n − 1)) T(n − 2)))
             = 2c (1 + (n + 2)/(n + 1) + ((n + 2)/(n + 1))((n + 1)/n) + ⋯
                   + ((n + 2)/(n + 1))((n + 1)/n) ⋯ (3/2))
             = 2c (1 + (n + 2)(1/(n + 1) + 1/n + ⋯ + 1/2))
             = 2c + 2c(n + 2)(H_{n+1} − 1),

for H_{n+1} the harmonic series. Because H_{n+1} ∈ Θ(log n), the final solution is Θ(n log n).
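Because the full-history recurrence is awkward to manipulate, it is reassuring to tabulate it directly and watch the predicted Θ(n log n) behavior emerge. The sketch below is an illustrative numerical check, not a proof; it assumes c = 1 and base cases T(0) = T(1) = 1.

    /** Tabulate T(n) = cn + (2/n)*sum_{k=0}^{n-1} T(k) with c = 1 assumed,
     *  and report T(n)/(n ln n), which should settle near 2c = 2. */
    static void tabulateQuicksort(int max) {
      double[] T = new double[max + 1];
      T[0] = 1;  T[1] = 1;             // Assumed base cases (c = 1)
      double sum = T[0] + T[1];        // Running sum of T(0)..T(n-1)
      for (int n = 2; n <= max; n++) {
        T[n] = n + 2.0 * sum / n;      // The recurrence with c = 1
        sum += T[n];
        if (n % 1000 == 0)             // Print the ratio occasionally
          System.out.println(n + "  " + T[n] / (n * Math.log(n)));
      }
    }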
Amortized Analysis

This section presents the concept of amortized analysis, which is the analysis for a series of operations taken as a whole. In particular, amortized analysis allows us to deal with the situation where the worst-case cost for n operations is less than n times the worst-case cost of any one operation. Rather than focusing on the individual cost of each operation independently and summing them, amortized analysis looks at the cost of the entire series and "charges" each individual operation with a share of the total cost.

We can apply the technique of amortized analysis in the case of a series of sequential searches in an unsorted array. For n random searches, the average-case cost for each search is n/2, and so the expected total cost for the series is n²/2. Unfortunately, in the worst case all of the searches would be to the last item in the array. In this case, each search costs n, for a total worst-case cost of n². Compare this to the cost for a series of n searches such that each item in the array is searched for precisely once. In this situation, some of the searches must be expensive, but also some searches must be cheap. The total number of searches, in the best, average, and worst case, for this problem must be Σ_{i=1}^{n} i ≈ n²/2. This is a factor of two better than the more pessimistic analysis that charges each operation in the series with its worst-case cost.

As another example of amortized analysis, consider the process of incrementing a binary counter. The algorithm is to move from the lower-order (rightmost) bit toward the high-order (leftmost) bit, changing 1s to 0s until the first 0 is encountered. This 0 is changed to a 1, and the increment operation is done. Below is Java code to implement the increment operation, assuming that a binary number of length n is stored in array A of length n.

    int i;
    for (i = 0; ((i < A.length) && (A[i] == 1)); i++)
      A[i] = 0;                // Change 1s to 0s, right to left
    if (i < A.length)
      A[i] = 1;                // Change the first 0 to a 1
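To see the cost distribution concretely, the driver below (an illustrative addition, not from the text) performs 2^k increments on an n-bit counter and counts every bit flip; the total comes out just under 2·2^k, or about two flips per increment.

    /** Count total bit flips over 2^k increments of an n-bit counter
     *  (requires n > k so the counter does not overflow). Illustrative. */
    static long totalFlips(int n, int k) {
      int[] A = new int[n];            // Counter, initially all zero
      long flips = 0;
      for (long step = 0; step < (1L << k); step++) {
        int i;
        for (i = 0; (i < A.length) && (A[i] == 1); i++) {
          A[i] = 0;  flips++;          // Change 1s to 0s
        }
        if (i < A.length) { A[i] = 1; flips++; }  // First 0 becomes 1
      }
      return flips;                    // Equals 2^(k+1) - 1 here
    }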
If we count from 0 through 2^n − 1 (requiring a counter with at least n bits), what is the average cost for an increment operation in terms of the number of bits processed? Naive worst-case analysis says that if all n bits are 1 (except for the high-order bit), then n bits need to be processed. Thus, if there are 2^n increments, then the cost is n·2^n. However, this is much too high, because it is rare for so many bits to be processed. In fact, half of the time the low-order bit is 0, and so only that bit is processed. One quarter of the time, the low-order two bits are 01, and so only the low-order two bits are processed. Another way to view this is that the low-order bit is always flipped, the bit to its left is flipped half the time, the next bit one quarter of the time, and so on. We can capture this with the summation (charging costs to bits going from right to left)

    Σ_{i=0}^{n−1} 1/2^i < 2.

In other words, the average number of bits flipped on each increment is about 2, leading to a total cost of only about 2·2^n for a series of 2^n increments.

A useful concept for amortized analysis is illustrated by a simple variation on the stack data structure, where the pop function is slightly modified to take a second parameter k indicating that k pop operations are to be performed. This revised pop function, called multipop, might look as follows:

    // Pop k elements from stack
    void multipop(int k);

The "local" worst-case analysis for multipop is Θ(n) for n elements in the stack. Thus, if there are m₁ calls to push and m₂ calls to multipop, then the naive worst-case cost for the series of operations is m₁ + m₂·m₁. This analysis is unreasonably pessimistic. Clearly it is not really possible to pop m₁ elements each time multipop is called. Analysis that focuses on single operations cannot deal with this global limit, and so we turn to amortized analysis to model the entire series of operations.

The key to an amortized analysis of this problem lies in the concept of potential. At any given time, a certain number of items may be on the stack. The cost for multipop can be no more than this number of items. Each call to push places another item on the stack, which can be removed by only a single multipop operation. Thus, each call to push raises the potential of the stack by one item. The sum of costs for all calls to multipop can never be more than the total potential of the stack (aside from a constant time cost associated with each call to multipop itself).

The amortized cost for any series of push and multipop operations is the sum of three costs. First, each of the push operations takes constant time. Second, each multipop operation takes a constant time in overhead, regardless of the number of items popped on that call. Finally, we count the sum of the potentials expended by all multipop operations, which is at most m₁, the number of push operations. This total cost can therefore be expressed as

    m₁ + (m₂ + m₁) = Θ(m₁ + m₂).
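A concrete realization of the structure just analyzed helps make the potential argument tangible. In the sketch below (illustrative, not the text's code) each push deposits one unit of potential, and multipop can never remove more items than earlier pushes deposited.

    /** Minimal array-based stack with the multipop operation.
     *  Illustrative sketch for the potential argument above. */
    class MultiStack {
      private int[] data = new int[16];
      private int top = 0;                  // Number of items on the stack

      void push(int it) {
        if (top == data.length)             // Grow when full
          data = java.util.Arrays.copyOf(data, 2 * data.length);
        data[top++] = it;                   // One push = one unit of potential
      }

      void multipop(int k) {                // Pop k elements (or all, if fewer)
        top = Math.max(0, top - k);         // Work bounded by prior pushes
      }
    }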
20,550 | similar argument was used in our analysis for the partition function in the quicksort algorithm (section while on any given pass through the while loop the left or right pointers might move all the way through the remainder of the partitiondoing so would reduce the number of times that the while loop can be further executed our final example uses amortized analysis to prove relationship between the cost of the move-to-front self-organizing list heuristic from section and the cost for the optimal static ordering of the list recall thatfor series of search operationsthe minimum cost for static list results when the list is sorted by frequency of access to its records this is the optimal ordering for the records if we never allow the positions of records to changebecause the most frequently accessed record is first (and thus has least cost)followed by the next most frequently accessed recordand so on theorem the total number of comparisons required by any series of or more searches on self-organizing list of length using the move-to-front heuristic is never more than twice the total number of comparisons required when series is applied to the list stored in its optimal static order proofeach comparison of the search key with record in the list is either successful or unsuccessful for searchesthere must be exactly successful comparisons for both the self-organizing list and the static list the total number of unsuccessful comparisons in the self-organizing list is the sumover all pairs of distinct keysof the number of unsuccessful comparisons made between that pair consider particular pair of keys and for any sequence of searches sthe total number of (unsuccessfulcomparisons between and is identical to the number of comparisons between and required for the subsequence of made up only of searches for or call this subsequence sab in other wordsincluding searches for other keys does not affect the relative position of and and so does not affect the relative contribution to the total cost of the unsuccessful comparisons between and the number of unsuccessful comparisons between and made by the moveto-front heuristic on subsequence sab is at most twice the number of unsuccessful comparisons between and required when sab is applied to the optimal static ordering for the list to see thisassume that sab contains as and bswith < under the optimal static orderingi unsuccessful comparisons are required because must appear before in the list (because its access frequency is highermove-tofront will yield an unsuccessful comparison whenever the request sequence changes |
20,551 | chap analysis techniques from to or from to the total number of such changes possible is because each change involves an and each can be part of at most two changes because the total number of unsuccessful comparisons required by move-tofront for any given pair of keys is at most twice that required by the optimal static orderingthe total number of unsuccessful comparisons required by move-to-front for all pairs of keys is also at most twice as high because the number of successful comparisons is the same for both methodsthe total number of comparisons required by move-to-front is less than twice the number of comparisons required by the optimal static ordering further reading good introduction to solving recurrence relations appears in applied combinatorics by fred roberts [rob for more advanced treatmentsee concrete mathematics by grahamknuthand patashnik [gkp cormenleisersonand rivest provide good discussion on various methods for performing amortized analysis in introduction to algorithms [clrs for an amortized analysis that the splay tree requires log time to perform series of operations on nodes when nsee "self-adjusting binary search treesby sleator and tarjan [st the proof for theorem comes from "amortized analysis of self-organizing sequential search heuristicsby bentley and mcgeoch [bm exercises use the technique of guessing polynomial and deriving the coefficients to solve the summation = use the technique of guessing polynomial and deriving the coefficients to solve the summation = findand prove correcta closed-form solution for = |
20,552 | sec exercises use subtract-and-guess or divide-and-guess to find the closed form solution for the following summation you must first find pattern from which to deduce potential closed form solutionand then prove that the proposed solution is correct / = use the shifting method to solve the summation = use the shifting method to solve the summation = use the shifting method to solve the summation - = consider the following code fragment sum inc for ( = <=ni++for ( = <=ij++sum sum incinc++(adetermine summation that defines the final value for variable sum as function of (bdetermine closed-form solution for your summation chocolate company decides to promote its chocolate bars by including coupon with each bar bar costs dollarand with coupons you get free bar so depending on the value of cyou get more than one bar of chocolate for dollar when considering the value of the coupons how much chocolate is dollar worth (as function of ) write and solve recurrence relation to compute the number of times fibr is called in the fibr function of exercise |
20,553 | chap analysis techniques give and prove the closed-form solution for the recurrence relation (nt( ( give and prove the closed-form solution for the recurrence relation (nt( ct( prove by induction that the closed-form solution for the recurrence relation ( ( / nt( is in ohm( log for the following recurrencegive closed-form solution you should not give an exact solutionbut only an asymptotic solution ( using th notationyou may assume that is power of prove that your answer is correct (nt( / for ( using the technique of expanding the recurrencefind the exact closed-form solution for the recurrence relation ( ( / nt( you may assume that is power of section provides an asymptotic analysis for the worst-case cost of function buildheap give an exact worst-case analysis for buildheap for each of the following recurrencesfind and then prove (using inductionan exact closed-form solution when convenientyou may assume that is power of (at(nt( / for ( (bt( ( / for ( use theorem to prove that binary search requires th(log ntime recall that when hash table gets to be more than about one half fullits performance quickly degrades one solution to this problem is to reinsert all elements of the hash table into new hash table that is twice as large assuming that the (expectedaverage case cost to insert into hash table is th( )prove that the average cost to insert is still th( when this reinsertion policy is used given - tree with nodesprove that inserting additional nodes requires ( node splits |
20,554 | one approach to implementing an array-based list where the list size is unknown is to let the array grow and shrink this is known as dynamic array when necessarywe can grow or shrink the array by copying the array' contents to new array if we are careful about the size of the new arraythis copy operation can be done rarely enough so as not to affect the amortized cost of the operations (awhat is the amortized cost of inserting elements into the list if the array is initially of size and we double the array size whenever the number of elements that we wish to store exceeds the size of the arrayassume that the insert itself cost ( time per operation and so we are just concerned with minimizing the copy time to the new array (bconsider an underflow strategy that cuts the array size in half whenever the array falls below half full give an example where this strategy leads to bad amortized cost againwe are only interested in measuring the time of the array copy operations (cgive better underflow strategy than that suggested in part (byour goal is to find strategy whose amortized analysis shows that array copy requires (ntime for series of operations recall that two vertices in an undirected graph are in the same connected component if there is path connecting them good algorithm to find the connected components of an undirected graph begins by calling dfs on the first vertex all vertices reached by the dfs are in the same connected component and are so marked we then look through the vertex mark array until an unmarked vertex is found again calling the dfs on iall vertices reachable from are in second connected component we continue working through the mark array until all vertices have been assigned to some connected component sketch of the algorithm is as followsstatic void concom(graph gint ifor ( = < () ++/for vertices in graph setmark( )/vertex in no component int comp /current component for ( = < () ++if ( getmark( = /start new component dfs component(gicomp++)for ( = < () ++out append( getmark( ")static void dfs component(graph gint vint comp |
20,555 | chap analysis techniques setmark(vcomp)for (int first( ) () next(vw)if ( getmark( = dfs component(gwcomp)use the concept of potential from amortized analysis to explain why the total cost of this algorithm is th(| | |(note that this will not be true amortized analysis because this algorithm does not allow an arbitrary series of dfs operations but rather is fixed to do single call to dfs from each vertex give proof similar to that used for theorem to show that the total number of comparisons required by any series of or more searches on self-organizing list of length using the count heuristic is never more than twice the total number of comparisons required when series is applied to the list stored in its optimal static order use mathematical induction to prove that ib(if ib( for > = use mathematical induction to prove that fib(iis even if and only if is divisible by use mathematical induction to prove that for > ib( ( / ) - find closed forms for each of the following recurrences (af (nf ( ( (bf ( ( ) ( (cf ( ( ( (df ( nf ( ) ( (ef ( ( ) ( (ff ( - = ( ) ( find th for each of the following recurrence relations (at ( ( / (bt ( ( / (ct ( ( / (dt ( ( / (et ( ( / (ft ( ( / (gt ( ( / (ht ( ( / log (it ( ( / log |
20,556 | projects implement the union/find algorithm of section using both path compression and the weighted union rule count the total number of node accesses required for various series of equivalences to determine if the actual performance of the algorithm matches the expected cost of th( logn |
20,557 | lower bounds how do know if have good algorithm to solve problemif my algorithm runs in th( log ntimeis that goodit would be if were sorting the records stored in an array but it would be terrible if were searching the array for the largest element the value of an algorithm must be determined in relation to the inherent complexity of the problem at hand in section we defined the upper bound for problem to be the upper bound of the best algorithm we know for that problemand the lower bound to be the tightest lower bound that we can prove over all algorithms for that problem while we usually can recognize the upper bound for given algorithmfinding the tightest lower bound for all possible algorithms is often difficultespecially if that lower bound is more than the "triviallower bound determined by measuring the amount of input that must be processed the benefits of being able to discover strong lower bound are significant in particularwhen we can make the upper and lower bounds for problem meetthis means that we truly understand our problem in theoretical sense it also saves us the effort of attempting to discover more efficient algorithms when no such algorithm can exist often the most effective way to determine the lower bound for problem is to find reduction to another problem whose lower bound is already known this is the subject of howeverthis approach does not help us when we cannot find suitable "similar problem our focus in this is discovering and proving lower bounds from first principles significant example of lower bounds argument is the proof from section that the problem of sorting is ( log nin the worst case section reviews the concept of lower bound for problem and presents the basic "algorithmfor finding good algorithm section discusses lower |
20,558 | chap lower bounds bounds on searching in listsboth those that are unordered and those that are ordered section deals with finding the maximum value in listand presents model for selection based on building partially ordered set section presents the concept of an adversarial lower bounds proof section illustrates the concept of state space lower bound section presents linear time worst-case algorithm for finding the ith biggest element on list section continues our discussion of sorting with quest for the algorithm that requires the absolute fewest number of comparisons needed to sort list introduction to lower bounds proofs the lower bound for the problem is the tightest (highestlower bound that we can prove for all possible algorithms that solve the problem this can be difficult bargiven that we cannot possibly know all algorithms for any problembecause there are theoretically an infinite number howeverwe can often recognize simple lower bound based on the amount of input that must be examined for examplewe can argue that the lower bound for any algorithm to find the maximum-valued element in an unsorted list must be ohm(nbecause any algorithm must examine all of the inputs to be sure that it actually finds the maximum value in the case of maximum findingthe fact that we know of simple algorithm that runs in (ntimecombined with the fact that any algorithm needs ohm(ntimeis significant because our upper and lower bounds meet (within constant factor)we know that we do have "goodalgorithm for solving the problem it is possible that someone can develop an implementation that is "littlefaster than an existing oneby constant factor but we know that its not possible to develop one that is asymptotically better we must be careful about how we interpret this last statehowever the world is certainly better off for the invention of quicksorteven though mergesort was available at the time quicksort is not asymptotically faster than mergesortyet is not merely "tuningof mergesort either quicksort is substantially different approach to sorting so even when our upper and lower bounds for problem meetthere are still benefits to be gained from newclever algorithm so now we have an answer to the question "how do know if have good algorithm to solve problem?an algorithm is good (asymptotically speakingif its upper bound matches the problem' lower bound if they matchwe know to throughout this discussionit should be understood that any mention of bounds must specify what class of inputs are being considered do we mean the bound for the worst case inputthe average cost over all inputsregardless of which class of inputs we considerall of the issues raised apply equally |
20,559 | stop trying to find an (asymptoticallyfaster algorithm what if the (knownupper bound for our algorithm does not match the (knownlower bound for the problemin this casewe might not know what to do is our upper bound flawedand the algorithm is really faster than we can proveis our lower bound weakand the true lower bound for the problem is greateror is our algorithm simply not the bestnow we know precisely what we are aiming for when designing an algorithmwe want to find an algorithm who' upper bound matches the lower bound of the problem putting together all that we know so far about algorithmswe can organize our thinking into the following "algorithm for designing algorithms " if the upper and lower bounds matchthen stopelse if the bounds are close or the problem isn' importantthen stopelse if the problem definition focuses on the wrong thingthen restate itelse if the algorithm is too slowthen find faster algorithmelse if lower bound is too weakthen generate stronger bound we can repeat this process until we are satisfied or exhausted this brings us smack up against one of the toughest tasks in analysis lower bounds proofs are notoriously difficult to construct the problem is coming up with arguments that truly cover all of the things that any algorithm possibly could do the most common fallacy is to argue from the point of view of what some good algorithm actually does doand claim that any algorithm must do the same this simply is not trueand any lower bounds proof that refers to specific behavior that must take place should be viewed with some suspicion let us consider the towers of hanoi problem again recall from section that our basic algorithm is to move disks (recursivelyto the middle polemove the bottom disk to the third poleand then move - disks (again recursivelyfrom the middle to the third pole this algorithm generates the recurrence ( ( sothe upper bound for our algorithm is but is this the best algorithm for the problemwhat is the lower bound for the problemfor our first try at lower bounds proofthe "triviallower bound is that we must move every disk at least oncefor minimum cost of slightly better is to this is minor reformulation of the "algorithmgiven by gregory rawlins in his book "compared to what? |
20,560 | chap lower bounds observe that to get the bottom disk to the third polewe must move every other disk at least twice (once to get them off the bottom diskand once to get them over to the third polethis yields cost of which still is not good match for our algorithm is the problem in the algorithm or in the lower boundwe can get to the correct lower bound by the following reasoningto move the biggest disk from first to the last polewe must first have all of the other disks out of the wayand the only way to do that is to move them all to the middle pole (for cost of at least ( )we then must move the bottom disk (for cost of at least oneafter thatwe must move the remaining disks from the middle pole to the third pole (for cost of at least ( )thusno possible algorithm can solve the problem in less than steps thusour algorithm is optimal of coursethere are variations to given problem changes in the problem definition might or might not lead to changes in the lower bound two possible changes to the standard towers of hanoi problem arenot all disks need to start on the first pole multiple disks can be moved at one time the first variation does not change the lower bound (at least not asymptoticallythe second one does lower bounds on searching lists in section we presented an important lower bounds proof to show that the problem of sorting is th( log nin the worst case in we discussed number of algorithms to search in sorted and unsorted listsbut we did not provide any lower bounds proofs to this important problem we will extend our pool of techniques for lower bounds proofs in this section by studying lower bounds for searching unsorted and sorted lists searching in unsorted lists given an (unsortedlist of elements and search key kwe seek to identify one element in which has key value kif any exist for the rest of this discussionwe will assume that the key values for the elements in are uniquethat the set of all possible keys is totally ordered (that isthe operations are defined for all pairs of key values)and that comparison is our only way to find the relative recalling the advice to be suspicious of any lower bounds proof that argues given behavior "musthappenthis proof should be raising red flags howeverin this particular case the problem is so constrained that there really is no (betteralternative to this particular sequence of events |
20,561 | ordering of two keys our goal is to solve the problem using the minimum number of comparisons given this definition for searchingwe can easily come up with the standard sequential search algorithmand we can also see that the lower bound for this problem is "obviouslyn comparisons (keep in mind that the key might not actually appear in the list howeverlower bounds proofs are bit slipperyand it is instructive to see how they can go wrong theorem the lower bound for the problem of searching in an unsorted list is comparisons here is our first attempt at proving the theorem proofwe will try proof by contradiction assume an algorithm exists that requires only (or lesscomparisons of with elements of because there are elements of la must have avoided comparing with [ifor some value we can feed the algorithm an input with in position such an input is legal in our modelso the algorithm is incorrect is this proof correctunfortunately no first of allany given algorithm need not necessarily consistently skip any given position in its searches for exampleit is not necessary that all algorithms search the list from left to right it is not even necessary that all algorithms search the same positions first each time through the list we can try to dress up the proof as followson any given run of the algorithmsome element position (call it position igets skipped it is possible that is in position at that timeand will not be found unfortunatelythere is another error that needs to be fixed it is not true that all algorithms for solving the problem must work by comparing elements of against an algorithm might make useful progress by comparing elements of against each other for exampleif we compare two elements of lthen compare the greater against and find that it is less than kwe know that the other element is also less than it seems intuitively obvious that such comparisons won' actually lead to faster algorithmbut how do we know for surewe somehow need to generalize the proof to account for this approach we will now present useful technique for expressing the state of knowledge for the value relationships among set of objects total order defines relationships within collection of objects such that for every pair of objectsone is greater than the other partially ordered set or poset is set on which only partial order is defined that isthere can be pairs of elements for which we cannot decide which is "greaterfor our purpose herethe partial order is the state of our |
20,562 | chap lower bounds figure illustration of using poset to model our current knowledge of the relationships among collection of objects directed acyclic graph (dagis used to draw the poset (assume all edges are directed downwardin this exampleour knowledge is such that we don' know how or relate to any of the other objects howeverwe know that both and are greater than and furtherwe know that is greater than dand that is greater than current knowledge about the objectssuch that zero or more of the order relations between pairs of elements are known we can represent this knowledge by drawing directed acyclic graphs (dagsshowing the known relationshipsas illustrated by figure initiallywe know nothing about the relative order of the elements in lor their relationship to so initiallywe can view the elements in as being in separate partial orders any comparison between two elements in can affect the structure of the partial orders this is somewhat similar to the union/find algorithm implemented using parent pointer treesdescribed in section nowevery comparison between elements in can at best combine two of the partial orders together any comparison between and an elementsay ain can at best eliminate the partial order that contains thusif we spend comparisons comparing elements in we have at least partial orders every such partial order needs at least one comparison against to make sure that is not somewhere in that partial order thusany algorithm must make at least comparisons in the worst case searching in sorted lists we will now assume that list is sorted in this caseis linear search still optimalclearly nobut why notbecause we have additional information to work with that we do not have when the list is unsorted we know that the standard binary search algorithm has worst case cost of (log ncan we do better than thiswe can prove that this is the best possible in the worst case with proof similar to that used to show the lower bound on sorting again we use the decision tree to model our algorithm unlike when searching an unsorted listcomparisons between elements of tell us nothing new about their |
20,563 | relative orderso we consider only comparisons between and an element in at the root of the decision treeour knowledge rules out no positions in lso all are potential candidates as we take branches in the decision tree based on the result of comparing to an element in lwe gradually rule out potential candidates eventually we reach leaf node in the tree representing the single position in that can contain there must be at least nodes in the tree because we have distinct positions that can be in (any position in lplus not in at allsome path in the tree must be at least log levels deepand the deepest node in the tree represents the worst case for that algorithm thusany algorithm on sorted array requires at least ohm(log ncomparisons in the worst case we can modify this proof to find the average cost lower bound againwe model algorithms using decision trees except now we are interested not in the depth of the deepest node (the worst caseand therefore the tree with the leastdeepest node insteadwe are interested in knowing what the minimum possible is for the "average depthof the leaf nodes define the total path length as the sum of the levels for each node the cost of an outcome is the level of the corresponding node plus the average cost of the algorithm is the average cost of the outcomes (total path length/nwhat is the tree with the least average depththis is equivalent to the tree that corresponds to binary search thusbinary search is optimal in the average case while binary search is indeed an optimal algorithm for sorted list in the worst and average cases when searching sorted arraythere are number of circumstances that might lead us to selecting another algorithm instead one possibility is that we know something about the distribution of the data in the array we saw in section that if each position in is equally likely to hold (equivalentlythe data are well distributed along the full key range)then an interpolation search is (log log nin the average case if the data are not sortedthen using binary search requires us to pay the cost of sorting the list in advancewhich is only worthwhile if many searches will be performed on the list binary search also requires that the list (even if sortedbe implemented using an array or some other structure that supports random access to all elements with equal cost finallyif we know all search requests in advancewe might prefer to sort the list by frequency and do linear search in extreme search distributionsas discussed in section finding the maximum value how can we find the ith largest value in sorted listobviously we just go to the ith position but what if we have an unsorted listcan we do better than to sort itif we are looking for the minimum or maximum valuecertainly we can do |
better than sorting the list. Is this true for the second biggest value? For the median value? In later sections we will examine those questions. For this section, we will continue our examination of lower bounds proofs by reconsidering the simple problem of finding the maximum value in an unsorted list. Lower bounds on other selection problems will be covered in later sections of this chapter.

Here is a simple algorithm for finding the largest value:

    // Return position of largest value in "A"
    static int largest(int[] A) {
      int currlarge = 0;                // Holds largest element position
      for (int i = 1; i < A.length; i++) // For each element
        if (A[currlarge] < A[i])        // if A[i] is larger
          currlarge = i;                //   remember its position
      return currlarge;                 // Return largest position
    }

Obviously this algorithm requires n − 1 comparisons. Is this optimal? It should be intuitively obvious that it is, but let us try to prove it. (Before reading further, you might try writing down your own proof.)

Proof: The winner must compare against all other elements, so there must be n − 1 comparisons.

This proof is clearly wrong, because the winner does not need to explicitly compare against all other elements to be recognized. For example, a standard single-elimination playoff sports tournament requires only n − 1 comparisons, and the winner does not play every opponent. So let's try again.

Proof: Only the winner does not lose. There are n − 1 losers. A single comparison generates (at most) one (new) loser. Therefore, there must be n − 1 comparisons.

This proof is sound. However, it will be useful later to abstract this by introducing the concept of posets as we did earlier in this chapter. We can view the maximum-finding problem as starting with a poset where there are no known relationships, so every member of the collection is in its own separate DAG of one element.

Proof: To find the largest value, we start with a poset of n DAGs each with a single element, and we must build a poset having all elements in one DAG such that there is one maximum value (and by implication, n − 1 losers). We wish to connect the elements of the poset into a single DAG with the minimum number of links. This requires at least n − 1 links. A comparison provides at most one new link. Thus, a minimum of n − 1 comparisons must be made.

What is the average cost of largest? Because it always does the same number of comparisons, clearly it must cost n − 1 comparisons. We can also consider
20,565 | sec adversarial lower bounds proofs how many assignments that largest must do function largest might do an assignment on any iteration of the for loop because this event does happenor does not happenif we are given no information about distribution we could guess that an assignment is made after each comparison with probability of one half but this is clearly wrong in factlargest does an assignment on the ith iteration if and only if [iis the biggest of the the first elements assuming all permutations are equally likelythe probability of this being true is / thusthe average number of assignments done is = = which is the harmonic series hn hn th(log nmore exactlyhn is close to loge how "reliableis this averagethat ishow much will given run of the program deviate from the mean costaccording to cebysev' inequalityan observation will fall within two standard deviations of the mean at least of the time for largestthe variance is loge the standard deviation is thus about logepn so of the observations are between loge loge and loge loge is this narrow spread or wide spreadcompared to the mean valuethis spread is pretty widemeaning that the number of assignments varies widely from run to run of the program hn adversarial lower bounds proofs our next problem will be finding the second largest in collection of objects consider what happens in standard single-elimination tournament even if we assume that the "bestteam wins in every gameis the second best the one who loses in the finalsnot necessarily we might expect that the second best must lose to the bestbut they might meet at any time let us go through our standard "algorithm for finding algorithmsby first proposing an algorithmthen lower boundand seeing if they match unlike our analysis for most problemsthis time we are going to count the exact number of comparisons involved and attempt to minimize them simple algorithm for finding the second largest is to first find the maximum (in comparisons)discard itand then find the maximum of the remaining elements (in comparisons |
20,566 | chap lower bounds for total cost of comparisons is this optimalthat seems doubtfulbut let us now proceed to the step of attempting to prove lower bound theorem the lower bound for finding the second largest value is proofany element that loses to anything other than the maximum cannot be second sothe only candidates for second place are those that lost to the maximum function largest might compare the maximum element to others thuswe might need additional comparisons to find the second largest this proof is wrong it exhibits the necessity fallacy"our algorithm does somethingtherefore all algorithms solving the problem must do the same this leaves us with our best lower bounds argument at the moment being that finding the second largest must cost at least as much as finding the largestor let us take another try at finding better algorithm by adopting strategy of divide and conquer what if we break the list into halvesand run largest on each halfwe can then compare the two winners (we have now used total of comparisons)and remove the winner from its half another call to largest on the winner' half yields its second best final comparison against the winner of the other half gives us the true second place winner the total cost is / is this optimalwhat if we break the list into four piecesthe best would be / what if we break the list into eight piecesthen the cost would be / pushing this idea to its extremethe only candidates for second place are losers to the eventual winnerand our goal is to have as few of these as possible so we need to keep track of the set of elements that have lost in direct comparison to the (eventualwinner we also observe that we learn the most from comparison when both competitors are known to be larger than the same number of other values so we would like to arrange our comparisons to be against "equally strongcompetitors we can do all of this with binomial tree binomial tree of height has nodes either it is single node (if )or else it is two height binomial trees with one tree' root becoming child of the other figure illustrates how binomial tree with eight nodes would be constructed the resulting algorithm is simple in principlebuild the binomial tree for all elementsand then compare the dlog ne children of the root to find second place we could store the binomial tree as an explicit tree structureand easily build it in time linear on the number of comparisons as each comparison requires one link be added because the shape of binomial tree is heavily constrainedwe can also store the binomial tree implicitly in an arraymuch as we do for heap assume that two treeseach with nodesare in the array the first tree is in positions |
Figure: An example of building a binomial tree. Pairs of elements are combined by choosing one of the parents to be the root of the entire tree. Given two trees of size four, one of the roots is chosen to be the root for the combined tree of eight nodes.

The resulting algorithm is simple in principle: Build the binomial tree for all n elements, and then compare the ⌈log n⌉ children of the root to find second place. We could store the binomial tree as an explicit tree structure, and easily build it in time linear on the number of comparisons, as each comparison requires that one link be added. Because the shape of a binomial tree is heavily constrained, we can also store the binomial tree implicitly in an array, much as we do for a heap. Assume that two trees, each with 2^k nodes, are in the array. The first tree is in positions 1 to 2^k. The second tree is in positions 2^k + 1 to 2^{k+1}. The root of each subtree is in the final array position for that subtree.

To join two trees, we simply compare the roots of the subtrees. If necessary, swap the subtrees so that the tree with the larger root element becomes the second subtree. This trades space (we need only the space for the data values, with no pointers) for time (in the worst case, all of the data swapping might cost O(n log n), though this does not affect the number of comparisons required).

Because the binomial tree's root has ⌈log n⌉ children, and building the tree requires n − 1 comparisons, the total number of comparisons required by this algorithm is n + ⌈log n⌉ − 2.
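To make the tournament idea concrete, here is a sketch (the heap-shaped array layout is ours, chosen for simplicity instead of the implicit binomial tree described above). It uses n − 1 comparisons to build the bracket, plus comparisons among the winner's ⌈log n⌉ direct opponents:

/** A sketch of the tournament idea using an explicit heap-shaped
    bracket. Assumes n = A.length is a power of two and n >= 2. */
static int secondLargest(int[] A) {
  int n = A.length;
  int[] T = new int[2 * n];              // Internal nodes in T[1..n-1]
  System.arraycopy(A, 0, T, n, n);       // Leaves in T[n..2n-1]
  for (int i = n - 1; i >= 1; i--)       // n-1 comparisons
    T[i] = Math.max(T[2 * i], T[2 * i + 1]);
  int second = Integer.MIN_VALUE;
  int pos = 1;                           // Retrace the winner's path
  while (pos < n) {
    // The child holding the winner's value continues the path;
    // its sibling lost directly to the eventual winner.
    int next = (T[2 * pos] == T[pos]) ? 2 * pos : 2 * pos + 1;
    second = Math.max(second, T[next ^ 1]);  // next^1 is the sibling
    pos = next;
  }
  return second;
}

The equality tests that retrace the winner's path are not order comparisons between previously uncompared elements, so only the Math.max calls count against the comparison bound.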
This is clearly better than our previous algorithm. But is it optimal?

We now go back to trying to improve the lower bounds proof. To do this, we introduce the concept of an adversary. The adversary's job is to make an algorithm's cost as high as possible. Imagine that the adversary keeps a list of all possible inputs. We view the algorithm as asking the adversary for information about the algorithm's input. The adversary may never lie, but it is permitted to "rearrange" the input as it sees fit in order to drive the total cost for the algorithm as high as possible. In particular, when the algorithm asks a question, the adversary answers in a way that is consistent with at least one remaining input. The adversary then crosses out all remaining inputs inconsistent with that answer. Keep in mind that there is not really an entity within the computer program that is the adversary, and we don't actually modify the program. The adversary operates merely as an analysis device, to help us reason about the program.

As an example of an adversary, consider the standard game of Hangman. Player A picks a word and tells player B how many letters the word has. Player B guesses various letters. If B guesses a letter in the word, then A will indicate which position(s) in the word have the letter. Player B is permitted to make only so many guesses of letters not in the word before losing.

In the Hangman game example, the adversary is imagined to hold a dictionary of words of some selected length. Each time the player guesses a letter, the adversary consults the dictionary and decides if more words will be eliminated by accepting the letter (and indicating which positions it holds) or by saying that it is not in the word. The adversary can make any decision it chooses, so long as at least one word in the dictionary is consistent with all of the decisions. In this way, the adversary can hope to make the player guess as many letters as possible.

Before explaining how the adversary plays a role in our lower bounds proof, first observe that at least n − 1 values must lose at least once. This requires at least n − 1 compares. In addition, at least k − 1 values must lose to the second largest value. That is, k direct losers to the winner must be compared. There must be at least n + k − 2 comparisons. The question is: How low can we make k?

Call the strength of element A[i] the number of elements that A[i] is (known to be) bigger than. If A[i] has strength a, and A[j] has strength b, then the winner has strength a + b + 1. What strategy by the adversary would cause the algorithm to learn the least from any given comparison? It should minimize the rate at which any element improves its strength. It can do this by making the element with the greater strength win at every comparison. This is a "fair" use of an adversary in that it represents the results of providing a worst-case input for that given algorithm.

To minimize the effects of worst-case behavior, the algorithm's best strategy is to maximize the minimum improvement in strength by balancing the strengths of any two competitors. From the algorithm's point of view, the best outcome is that an element doubles in strength. This happens whenever a = b, where a and b are the strengths of the two elements being compared. All strengths begin at zero, so the winner must make at least k comparisons when 2^{k−1} < n. Thus, there must be at least n + ⌈log n⌉ − 2 comparisons. Our algorithm is optimal.

State Space Lower Bounds Proofs

We now consider the problem of finding both the minimum and the maximum from an (unsorted) list of n values. This might be useful if we want to know the range of a collection of values to be plotted, for the purpose of drawing the plot's scales. Of course we could find them independently, in 2n − 2 comparisons. A slight modification is to find the maximum in n − 1 comparisons, remove it from the list, and then find the minimum in n − 2 further comparisons, for a total of 2n − 3 comparisons. Can we do better than this?

Before continuing, think a moment about how this problem of finding the minimum and the maximum compares to the problem of the last section, that of finding the second biggest value (and, by implication, the maximum). Which of these two problems do you think is harder?
It is probably not at all obvious to you that one problem is harder or easier than the other. There is intuition that argues for either case. On the one hand, intuition might argue that the process of finding the maximum should tell you something about the second biggest value, more than that process should tell you about the minimum value. On the other hand, any given comparison tells you something about which of two values can be a candidate for maximum value, and which can be a candidate for minimum value, thus making progress in both directions.

We will start by considering a simple divide-and-conquer approach to finding the minimum and maximum. Split the list into two parts and find the minimum and maximum elements in each part. Then compare the two minimums and the two maximums to each other, with a further two comparisons, to get the final result. The algorithm is as follows:

/** Return the minimum and maximum values in A
    between positions l and r */
static void MinMax(int A[], int l, int r, int Out[]) {
  if (l == r) {                  // n = 1
    Out[0] = A[r];
    Out[1] = A[r];
  }
  else if (l + 1 == r) {         // n = 2
    Out[0] = Math.min(A[l], A[r]);
    Out[1] = Math.max(A[l], A[r]);
  }
  else {                         // n > 2
    int[] Out1 = new int[2];
    int[] Out2 = new int[2];
    int mid = (l + r) / 2;
    MinMax(A, l, mid, Out1);
    MinMax(A, mid + 1, r, Out2);
    Out[0] = Math.min(Out1[0], Out2[0]);
    Out[1] = Math.max(Out1[1], Out2[1]);
  }
}

The cost of this algorithm can be modeled by the following recurrence:

T(n) = 0                               if n = 1,
T(n) = 1                               if n = 2,
T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2         if n > 2.

This is a rather interesting recurrence, and its solution ranges between ⌈3n/2⌉ − 2 (when n = 2^i or n = 2^i + 1) and about 5n/3 − 2 (when n = 3 · 2^i). We can infer from this behavior that how we divide the list affects the performance of the algorithm.
For example, what if we have six items in the list? If we break the list into two sublists of three elements, the cost would be 8. If we break the list into a sublist of size two and another of size four, then the cost would only be 7. With divide and conquer, the best algorithm is the one that minimizes the work, not necessarily the one that balances the input sizes. One lesson to learn from this example is that it can be important to pay attention to what happens for small sizes of n, because any division of the list will eventually produce many small lists.

We can model all possible divide-and-conquer strategies for this problem with the following recurrence:

T(n) = 0                                        if n = 1,
T(n) = 1                                        if n = 2,
T(n) = min_{1 ≤ k ≤ n−1} {T(k) + T(n − k)} + 2  if n > 2.

That is, we want to find a way to break up the list that will minimize the total work. If we examine various ways of breaking up small lists, we will eventually recognize that breaking the list into a sublist of size 2 and a sublist of size n − 2 will always produce results as good as any other division. This strategy yields the following recurrence:

T(n) = 0               if n = 1,
T(n) = 1               if n = 2,
T(n) = T(n − 2) + 3    if n > 2.

This recurrence (and the corresponding algorithm) yields T(n) = ⌈3n/2⌉ − 2 comparisons.
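Unwinding that recurrence gives a simple iterative form of the algorithm: repeatedly peel off a pair of elements, spend one comparison ordering the pair, and two more updating the running minimum and maximum. A sketch (our coding of this strategy):

/** Find both the minimum and maximum in ceil(3n/2) - 2 comparisons
    by processing the elements in pairs. Assumes A.length >= 1. */
static void minMaxPairs(int[] A, int[] out) {
  int n = A.length, i, min, max;
  if (n % 2 == 1) { min = max = A[0]; i = 1; }   // Odd: seed with A[0]
  else if (A[0] < A[1]) { min = A[0]; max = A[1]; i = 2; }
  else                  { min = A[1]; max = A[0]; i = 2; }
  for (; i + 1 < n; i += 2) {          // Three comparisons per pair
    int lo, hi;
    if (A[i] < A[i + 1]) { lo = A[i];     hi = A[i + 1]; }
    else                 { lo = A[i + 1]; hi = A[i];     }
    if (lo < min) min = lo;
    if (hi > max) max = hi;
  }
  out[0] = min; out[1] = max;
}

For even n this costs 1 + 3(n − 2)/2 = 3n/2 − 2 comparisons, and for odd n it costs 3(n − 1)/2 = ⌈3n/2⌉ − 2.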
Is this optimal? We now introduce yet another tool to our collection of lower bounds proof techniques: the state space proof. We will model our algorithm by defining a state that the algorithm must be in at any given instant. We can then define the start state, the end state, and the transitions between states that any algorithm can support. From this, we will reason about the minimum number of states that the algorithm must go through to get from the start to the end, to reach a state space lower bound.

At any given instant, we can track the following four categories of elements:

• Untested: elements that have not been tested.
• Winners: elements that have won at least once, and never lost.
• Losers: elements that have lost at least once, and never won.
• Middle: elements that have both won and lost at least once.

We define the current state to be a vector of four values, (U, W, L, M), for untested, winners, losers, and middles, respectively. For a set of n elements, the initial state of the algorithm is (n, 0, 0, 0) and the end state is (0, 1, 1, n − 2). Thus, every run for any algorithm must go from state (n, 0, 0, 0) to state (0, 1, 1, n − 2). We also observe that once an element is identified to be a middle, it can then be ignored, because it can neither be the minimum nor the maximum.

Given that there are four types of elements, there are 10 types of comparison. Comparing with a middle cannot be more efficient than the other comparisons, so we should ignore those, leaving six comparisons of interest. We can enumerate the effects of each comparison type as follows. If we are in state (i, j, k, l) and we make a comparison, then the state changes are:

U : U   (i − 2, j + 1, k + 1, l)
W : W   (i, j − 1, k, l + 1)
L : L   (i, j, k − 1, l + 1)
U : W   (i − 1, j, k + 1, l)  or  (i − 1, j, k, l + 1)
U : L   (i − 1, j + 1, k, l)  or  (i − 1, j, k, l + 1)
W : L   (i, j, k, l)  or  (i, j − 1, k − 1, l + 2)

Now, let us consider what an adversary will do for the various comparisons. The adversary will make sure that each comparison does the least possible amount of work in taking the algorithm toward the goal state. For example, comparing a winner to a loser is of no value, because the worst-case result is always to learn nothing new (the winner remains a winner and the loser remains a loser). Thus, only the following five transitions are of interest:

U : U   (i − 2, j + 1, k + 1, l)
U : W   (i − 1, j, k + 1, l)
U : L   (i − 1, j + 1, k, l)
W : W   (i, j − 1, k, l + 1)
L : L   (i, j, k − 1, l + 1)

Only the last two transition types increase the number of middles, so there must be n − 2 of these. The number of untested elements must go to 0, and the first transition is the most efficient way to do this, so ⌈n/2⌉ of those are required. Our conclusion is that the minimum possible number of transitions (comparisons) is (n − 2) + ⌈n/2⌉ = ⌈3n/2⌉ − 2. Thus, our algorithm is optimal.
Figure: The poset that represents the minimum information necessary to determine the ith element in a list. We need to know which element has i − 1 values less and n − i values more, but we do not need to know the relationships among the elements with values less or greater than the ith element.

Finding the ith Best Element

We now tackle the problem of finding the ith best element in a list. As observed earlier, one solution is to sort the list and simply look in the ith position. However, this process provides considerably more information than we need. The minimum amount of information that we actually need to know can be visualized as shown in the accompanying figure. That is, all we need to know is the i − 1 items less than our desired value, and the n − i items greater. We do not care about the relative order within the upper and lower groups. So, can we find the required information faster than by first sorting? Looking at the lower bound, can we tighten that beyond the trivial lower bound of n − 1 comparisons? We will focus on the specific question of finding the median element (i.e., the element with rank n/2), because the resulting algorithm can easily be modified to find the ith largest value for any i.

Looking at the Quicksort algorithm might give us some insight into solving the median problem. Recall that Quicksort works by selecting a pivot value, partitioning the array into those elements less than the pivot and those greater than the pivot, and moving the pivot to its proper location in the array. If the pivot is in position i, then we are done. If not, we can solve the subproblem recursively by considering only one of the sublists. That is, if the pivot ends up in position k > i, then we simply solve by finding the ith best element in the left partition. If the pivot is at position k < i, then we wish to find the (i − k)th best element in the right partition.

What is the worst-case cost of this algorithm? As with Quicksort, we get bad performance if the pivot is the first or last element in the array. This would lead to possibly O(n²) performance. However, if the pivot were to always cut the array in half, then our cost would be modeled by the recurrence T(n) = T(n/2) + n, or about 2n comparisons. Finding the average cost requires us to use a recurrence with full history, similar to the one we used to model the cost of Quicksort. If we do this, we will find that T(n) is in O(n) in the average case.
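A sketch of this partition-based selection (commonly called quickselect); the Lomuto-style partition helper and the middle-element pivot choice here are our own additions:

/** Return the value of rank i (0-based) in A[l..r], partially
    reordering A as a side effect. */
static int select(int[] A, int l, int r, int i) {
  while (l < r) {
    swap(A, (l + r) / 2, r);          // Move the pivot to the end
    int pivot = A[r];
    int k = l;                        // A[l..k-1] holds values < pivot
    for (int j = l; j < r; j++)
      if (A[j] < pivot) swap(A, j, k++);
    swap(A, k, r);                    // Pivot lands in position k
    if (i == k) return A[k];
    else if (i < k) r = k - 1;        // Keep only the left partition
    else l = k + 1;                   // Keep only the right partition
  }
  return A[l];
}

static void swap(int[] A, int x, int y) {
  int tmp = A[x]; A[x] = A[y]; A[y] = tmp;
}

To find the median, call select(A, 0, A.length − 1, A.length/2). With a pivot chosen this way the worst case remains O(n²); the refinement below removes that risk.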
Figure: A method for finding a pivot for partitioning a list that guarantees that at least a fixed fraction of the list will be in each partition. We divide the list into groups of five elements, and find the median for each group. We then recursively find the median of these ⌈n/5⌉ medians. The median of five elements is guaranteed to have at least two in each partition. The median of three medians, from a collection of 15 elements, is guaranteed to have at least five elements in each partition.

Is it possible to modify our algorithm to get worst-case linear time? To do this, we need to pick a pivot that is guaranteed to discard a fixed fraction of the elements. We cannot just choose a pivot at random, because doing so will not meet this guarantee. The ideal situation would be if we could pick the median value for the pivot each time. But that is essentially the same problem that we are trying to solve to begin with. Notice, however, that if we choose any constant c, and then pick the median from a sample of size n/c, then we can guarantee that we will discard at least n/2c elements. The accompanying figure illustrates this idea. This observation leads directly to the following algorithm:

1. Choose the ⌈n/5⌉ medians for groups of five elements from the list. Choosing the median of five items can be done in constant time.
2. Recursively, select M, the median of the ⌈n/5⌉ medians-of-fives.
3. Partition the list into those elements larger and smaller than M.

While selecting the median in this way is guaranteed to eliminate a fraction of the elements, we still need to be sure that our recursion yields a linear-time algorithm. We model the algorithm by the following recurrence:

T(n) ≤ T(⌈n/5⌉) + T(⌈(7n + 5)/10⌉) + 6⌈n/5⌉ + n − 1.

We will prove that this recurrence is linear by assuming that it is true for some constant r, and then showing that T(n) ≤ rn for all n greater than some bound.
Assuming that T(k) ≤ rk holds for all k < n, expanding the recurrence gives

T(n) ≤ r⌈n/5⌉ + r⌈(7n + 5)/10⌉ + 6⌈n/5⌉ + n − 1
     ≤ r(n/5 + 1) + r(7n/10 + 3/2) + 6(n/5 + 1) + n − 1
     = (9r/10)n + (11/5)n + 5r/2 + 5
     ≤ rn,

where the last step holds provided that n(r/10 − 11/5) ≥ 5r/2 + 5, which is true for any r > 22 once n is beyond a fixed bound. This provides the base case that allows us to use induction to prove that for all n, T(n) ≤ rn.

In reality, this algorithm is not practical, because its constant factor costs are so high. So much work is being done to guarantee linear-time performance that it is more efficient on average to rely on chance to select the pivot, perhaps by picking it at random or picking the middle value out of the current subarray.

Optimal Sorting

We conclude this section with an effort to find the sorting algorithm with the absolute fewest possible comparisons. It might well be that the result will not be practical for a general-purpose sorting algorithm. But recall our analogy earlier to sports tournaments. In sports, a "comparison" between two teams or individuals means holding a competition between the two. This is fairly expensive (at least compared to some minor bookkeeping in a computer), and it might be worth trading a fair amount of bookkeeping to cut down on the number of games that need to be played. What if we want to figure out how to hold a tournament that will give us the exact ordering for all teams in the fewest number of total games? Of course, we are assuming that the results of each game will be "accurate," in that we assume not only that the outcome of one team playing another would always be the same (at least over the time period of the tournament), but that transitivity in the results also holds. In practice these are unrealistic assumptions, but such assumptions are implicitly part of many tournament organizations. Like most tournament organizers, we can simply accept these assumptions and come up with an algorithm for playing the games that gives us some rank ordering based on the results we obtain.

Recall insertion sort, where we put element i into a sorted sublist of the first i − 1 elements. What if we modify the standard insertion sort algorithm to use binary search to locate where the ith element goes in the sorted sublist? This algorithm is called binary insert sort. As a general-purpose sorting algorithm, this is not practical because we then have to (on average) move about i/2 elements to make room for the newly inserted element in the sorted sublist. But if we count only comparisons, binary insert sort is pretty good. And we can use some ideas from binary insert sort to get closer to an algorithm that uses the absolute minimum number of comparisons needed to sort.
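A sketch of binary insert sort (our coding of the idea just described):

/** Binary insert sort. Binary search finds where A[i] belongs
    among the sorted prefix A[0..i-1] in about log i comparisons,
    though the shifting still costs Theta(n^2) moves. */
static void binaryInsertSort(int[] A) {
  for (int i = 1; i < A.length; i++) {
    int val = A[i];
    int lo = 0, hi = i;             // Insertion point lies in [lo, hi]
    while (lo < hi) {               // Binary search
      int mid = (lo + hi) / 2;
      if (A[mid] <= val) lo = mid + 1;
      else hi = mid;
    }
    for (int j = i; j > lo; j--)    // Shift to make room (the expensive part)
      A[j] = A[j - 1];
    A[lo] = val;
  }
}

The comparisons cost about log i per insertion; it is only the data movement that disqualifies this as a practical general-purpose sort.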
Consider what happens when we run binary insert sort on five elements. How many comparisons do we need to do? We can insert the second element with one comparison, the third with two comparisons, and the fourth with two comparisons. When we insert the fifth element into the sorted list of four elements, we need to do three comparisons in the worst case. Notice exactly what happens when we attempt to do this insertion. We compare the fifth element against the second. If the fifth is bigger, we have to compare it against the third, and if it is bigger we have to compare it against the fourth. In general, when is binary search most efficient? When we have 2^i − 1 elements in the list. It is least efficient when we have 2^i elements in the list. So, we can do a bit better if we arrange our insertions to avoid inserting an element into a list of size 2^i if possible.

The accompanying figure illustrates a different organization for the comparisons that we might do. First we compare the first and second elements, and the third and fourth elements. The two winners are then compared, yielding a binomial tree. We can view this as a (sorted) chain of three elements, with element A hanging off from the root. If we then insert element B into the sorted chain of three elements, we will end up with one of the two posets shown on the right side of the figure, at a cost of two comparisons. We can then merge A into the chain, for a cost of two comparisons (because we already know that it is smaller than either one or two elements, we are actually merging it into a list of two or three elements). Thus, the total number of comparisons needed to sort the five elements is at most seven instead of eight.

If we have ten elements to sort, we can first make five pairs of elements (using five compares) and then sort the five winners using the algorithm just described (using seven more compares). Now all we need to do is to deal with the original losers. We can generalize this process for any number of elements as:

1. Pair up all the nodes with ⌊n/2⌋ comparisons.
2. Recursively sort the winners.
3. Fold in the losers.

We use binary insert to place the losers. However, we are free to choose the best ordering for inserting, keeping in mind the fact that binary search has the same cost for 2^i through 2^{i+1} − 1 items. For example, binary search requires three comparisons in the worst case for lists of size 4 through 7.
Figure: Organizing comparisons for sorting five elements. First we order two pairs of elements, and then compare the two winners to form a binomial tree of four elements. The original loser to the root is labeled A, and the remaining three elements form a sorted chain. We then insert element B into the sorted chain (forming a sorted chain of four elements). Finally, we put A into the resulting chain to yield a final sorted list.

Figure: Merge insert sort for ten elements. First, five pairs of elements are compared. The five winners are then sorted. This leaves the elements labeled 1 through 4 to be sorted into the chain made by the remaining six elements.

So we pick the order of inserts to optimize the binary searches, which means picking an order that avoids growing a sublist size such that it crosses the boundary on list size that would require an additional comparison. This sort is called merge insert sort, and is also known as the Ford and Johnson sort.

For ten elements, given the poset shown in the merge insert sort figure, we fold in the last four elements (labeled 1 to 4) in the order Element 3, Element 4, Element 1, and finally Element 2. Element 3 will be inserted into a list of size three, costing two comparisons. Depending on where Element 3 then ends up in the list, Element 4 will now be inserted into a list of size 2 or 3, costing two comparisons in either case. Depending on where Elements 3 and 4 are in the list, Element 1 will now be inserted into a list of size 5, 6, or 7, all of which requires three comparisons to place in sort order. Finally, Element 2 will be inserted into a list of size 5, 6, or 7.
Merge insert sort is pretty good. But is it optimal? Recall from the earlier discussion of sorting lower bounds that no sorting algorithm can be faster than Ω(n log n). To be precise, the information-theoretic lower bound for sorting can be proved to be ⌈log n!⌉. That is, we can prove a lower bound of exactly ⌈log n!⌉ comparisons. Merge insert sort gives us a number of comparisons equal to this information-theoretic lower bound for all values up to n = 12. At n = 12, merge insert sort requires 30 comparisons while the information-theoretic lower bound is only 29 comparisons. However, for such a small number of elements, it is possible to do an exhaustive study of every possible arrangement of comparisons. It turns out that there is in fact no possible arrangement of comparisons that makes the lower bound less than 30 comparisons when n = 12. Thus, the information-theoretic lower bound is an underestimate in this case, because 30 really is the best that can be done.

Call the optimal worst cost for n elements S(n). We know that S(n + 1) ≤ S(n) + ⌈log(n + 1)⌉, because we could sort n elements and use binary insert for the last one. For all n and m, S(n + m) ≤ S(n) + S(m) + M(m, n), where M(m, n) is the best time to merge two sorted lists of sizes m and n. For n = 47, it turns out that we can do better by splitting the list into pieces of size 5 and 42, and then merging. Thus, merge insert sort is not quite optimal. But it is extremely good, and nearly optimal for smallish numbers of elements.

Further Reading

Much of the material in this book is also covered in many other textbooks on data structures and algorithms. The biggest exception is that not many other textbooks cover lower bounds proofs in any significant detail, as is done here. Those that do focus on the same example problems (search and selection), because they tell such a tight and compelling story regarding related topics, while showing off the major techniques for lower bounds proofs. Two examples of such textbooks are Computer Algorithms by Baase and Van Gelder [BG00], and Compared to What? by Gregory Rawlins [Raw92]. Fundamentals of Algorithmics by Brassard and Bratley [BB96] also covers lower bounds proofs.

Exercises

1. Consider the so-called "algorithm for algorithms" discussed at the start of this chapter. Is this really an algorithm? Review the definition of an algorithm. Which parts of the definition apply, and which do not? Is the "algorithm for algorithms" a heuristic for finding a good algorithm? Why or why not?
2. Single-elimination tournaments are notorious for their scheduling difficulties. Imagine that you are organizing a tournament for n basketball teams (you may assume that n = 2^i for some integer i). We will further simplify things by assuming that each game takes less than an hour, and that each team can be scheduled for a game every hour if necessary. (Note that everything said here about basketball courts is also true about processors in a parallel algorithm to solve the maximum-finding problem.)
(a) How many basketball courts do we need to ensure that every team can play whenever we want, so as to minimize the total tournament time?
(b) How long will the tournament be in this case?
(c) What is the total number of "court-hours" available? How many total hours are courts being used? How many total court-hours are unused?
(d) Modify the algorithm in such a way as to reduce the total number of courts needed, by perhaps not letting every team play whenever possible. This will increase the total hours of the tournament, but try to keep the increase as low as possible. For your new algorithm, how long is the tournament, how many courts are needed, how many total court-hours are available, how many court-hours are used, and how many are unused?

3. Explain why the cost of splitting a list of six into two lists of three to find the minimum and maximum elements requires eight comparisons, while splitting the list into a list of two and a list of four costs only seven comparisons.

4. Write out a table showing the number of comparisons required to find the minimum and maximum for all divisions, for all small values of n.

5. Present an adversary argument as a lower bounds proof to show that n − 1 comparisons are necessary to find the maximum of n values in the worst case.

6. Present an adversary argument as a lower bounds proof to show that n comparisons are necessary in the worst case when searching for an element with value X (if one exists) from among n elements.

7. The section on finding the ith best element claims that by picking a pivot that always discards at least a fixed fraction c of the remaining array, the resulting algorithm will be linear. Explain why this is true. Hint: The Master Theorem might help you.

8. Show that any comparison-based algorithm for finding the median must use at least n − 1 comparisons.

9. Show that any comparison-based algorithm for finding the second-smallest of n values can be extended to find the smallest value also, without requiring any more comparisons to be performed.
10. Show that any comparison-based algorithm for sorting can be modified to remove all duplicates without requiring any more comparisons to be performed.

11. Show that any comparison-based algorithm for removing duplicates from a list of values must use Ω(n log n) comparisons.

12. Given a list of n elements, an element of the list is a majority if it appears more than n/2 times.
(a) Assume that the input is a list of n integers. Design an algorithm that is linear in the number of integer-integer comparisons in the worst case that will find and report the majority if one exists, and report that there is no majority if no such integer exists in the list.
(b) Assume that the input is a list of n elements that have no relative ordering, such as colors or fruit. So all that you can do when you compare two elements is ask if they are the same or not. Design an algorithm that is linear in the number of element-element comparisons in the worst case that will find a majority if one exists, and report that there is no majority if no such element exists in the list.

13. Given an undirected graph G, the problem is to determine whether or not G is connected. Use an adversary argument to prove that it is necessary to look at all n(n − 1)/2 potential edges in the worst case.

14. (a) Write an equation to describe the average cost for finding the median.
(b) Solve your equation from part (a).

15. (a) Write an equation to describe the average cost for finding the ith-smallest value in an array. This will be a function of both n and i, T(n, i).
(b) Solve your equation from part (a).

16. Suppose that you have n objects that have identical weight, except for one that is a bit heavier than the others. You have a balance scale. You can place objects on each side of the scale and see which collection is heavier. Your goal is to find the heavier object, with the minimum number of weighings. Find and prove matching upper and lower bounds for this problem.

17. Imagine that you are organizing a basketball tournament for 10 teams. You know that the merge insert sort will give you a full ranking of the 10 teams with the minimum number of games played. Assume that each game can be played in less than an hour, and that any team can play as many games in a row as necessary. Show a schedule for this tournament that also attempts to minimize the number of total hours for the tournament and the number of courts used. If you have to make a tradeoff between the two, then attempt to minimize the total number of hours that basketball courts are idle.
18. Write the complete algorithm for the merge insert sort sketched out in the section on optimal sorting.

19. Here is a suggestion for what might be a truly optimal sorting algorithm. Pick the best set of comparisons for input lists of size 2. Then pick the best set of comparisons for size 3, size 4, size 5, and so on. Combine them together into one program with a big case statement. Is this an algorithm?

Projects

1. Implement the median-finding algorithm from the section on finding the ith best element. Then, modify this algorithm to allow finding the ith element for any value i < n.
Patterns of Algorithms

This chapter presents several fundamental topics related to the theory of algorithms. These include dynamic programming, randomized algorithms, and the concept of a transform. Each of these can be viewed as an example of an "algorithmic pattern" that is commonly used for a wide variety of applications. In addition, we will discuss a number of numerical algorithms.

The material on randomized algorithms includes the skip list, a probabilistic data structure that can be used to implement the dictionary ADT. The skip list is comparable in complexity to the BST, yet it often outperforms the BST, because the skip list's efficiency is not tied to the values of the dataset being stored.

Dynamic Programming

Consider again the recursive function for computing the nth Fibonacci number:

int fibr(int n) {
  if (n <= 1) return 1;            // Base case
  return fibr(n-1) + fibr(n-2);    // Recursive call
}

The cost of this algorithm (in terms of function calls) is the size of the nth Fibonacci number itself, which our analysis showed to be exponential (approximately 1.62^n). Why is this so expensive? It is expensive primarily because two recursive calls are made by the function, and they are largely redundant. That is, each of the two calls is recomputing most of the series, as is each sub-call, and so on. Thus, the smaller values of the function are being recomputed a huge number of times.
If we could eliminate this redundancy, the cost would be greatly reduced. The approach that we will use can also improve any algorithm that spends most of its time recomputing common subproblems. One way to accomplish this goal is to keep a table of values, and first check the table to see if the computation can be avoided. Here is a straightforward example of doing so:

int fibrt(int n, int[] Values) {
  // Assume Values has at least n slots, and all
  // slots are initialized to 0
  if (n <= 1) return 1;                   // Base case
  if (Values[n] != 0) return Values[n];   // Already computed
  Values[n] = fibrt(n-1, Values) + fibrt(n-2, Values);
  return Values[n];
}

This version of the algorithm will not compute a value more than once, so its cost should be linear. Of course, we didn't actually need to use a table storing all of the values, since future computations do not need access to all prior values. Instead, we could build the value by working from 0 and 1 up to n rather than backwards from n down to 0 and 1. Going up from the bottom, we only need to store the previous two values of the function, as is done by our iterative version:

long fibi(int n) {
  long past, prev, curr;
  past = prev = curr = 1;         // curr holds Fib(i)
  for (int i = 2; i <= n; i++) {  // Compute next value
    past = prev;                  // past holds Fib(i-2)
    prev = curr;                  // prev holds Fib(i-1)
    curr = past + prev;           // curr now holds Fib(i)
  }
  return curr;
}

This issue of recomputing subproblems comes up frequently. In many cases, arbitrary subproblems (or at least a wide variety of subproblems) might need to be recomputed, so that storing subresults in a fixed number of variables will not work. Thus, there are many times when storing a table of subresults can be useful.

This approach to designing an algorithm that works by storing a table of results for subproblems is called dynamic programming. The name is somewhat arcane, because it doesn't bear much obvious similarity to the process that is taking place when storing subproblems in a table. However, it comes originally from the field of dynamic control systems, which got its start before what we think of as computer programming. The act of storing precomputed values in a table for later reuse is referred to as "programming" in that field.
Dynamic programming is a powerful alternative to the standard principle of divide and conquer. In divide and conquer, a problem is split into subproblems, the subproblems are solved (independently), and then recombined into a solution for the problem being solved. Dynamic programming is appropriate whenever the subproblems to be solved are overlapping in some way, and whenever we can find a suitable way of doing the necessary bookkeeping. Dynamic programming algorithms are usually not implemented by simply using a table to store subproblems for recursive calls (i.e., going backwards as is done by fibrt). Instead, such algorithms are typically implemented by building the table of subproblems from the bottom up. Thus, fibi better represents the most common form of dynamic programming than does fibrt, even though it doesn't need the actual table.

The Knapsack Problem

We will next consider a problem that appears with many variations in a variety of industrial settings. Many businesses need to package items with the greatest efficiency. One way to describe this basic idea is in terms of packing items into a knapsack, and so we will refer to this as the knapsack problem. We first define a particular form of the problem, and then discuss an algorithm for it based on dynamic programming. We will see other versions of the problem in the exercises and in a later chapter.

Assume that we have a knapsack with a certain amount of space that we will define using the integer value K. We also have n items, each with a certain size, such that item i has integer size k_i. The problem is to find a subset of the n items whose sizes exactly sum to K, if one exists. For example, if our knapsack has capacity K = 5 and the two items are of size k_1 = 2 and k_2 = 4, then no such subset exists. But if we add a third item of size k_3 = 3, then we can fill the knapsack exactly with the first and third items. We can define the problem more formally as: Find S ⊆ {1, 2, ..., n} such that

Σ_{i∈S} k_i = K.

Example: Assume that we are given a knapsack of size K = 163 and 10 items of sizes {4, 9, 15, 19, 27, 44, 54, 68, 73, 101}. Can we find a subset of the items that exactly fills the knapsack? You should take a few minutes and try to do this before reading on and looking at the answer.
One solution to the problem is S = {19, 27, 44, 73}.

Example: Having solved the previous example for a knapsack of size 163, how hard is it now to solve for a knapsack of size 164? Unfortunately, knowing the answer for 163 is of almost no use at all when solving for 164. One solution is S = {9, 54, 101}.

If you tried solving these examples, you probably found yourself doing a lot of trial and error and a lot of backtracking. To come up with an algorithm, we want an organized way to go through the possible subsets. Is there a way to make the problem smaller, so as to apply divide and conquer? We essentially have two parts to the input: the knapsack size K and the n items. It probably will not do us much good to try to break the knapsack into pieces and solve the sub-pieces (since we just saw that knowing the answer for a knapsack of size 163 did nothing to help us solve the problem for a knapsack of size 164).

So, what can we say about solving the problem with or without the nth item? This seems to lead to a way to break down the problem. If the nth item is not needed for a solution (that is, if we can solve the problem with the first n − 1 items), then we can also solve the problem when the nth item is available (we just ignore it). On the other hand, if we do include the nth item as a member of the solution subset, then we now would need to solve the problem with the first n − 1 items and a knapsack of size K − k_n, since the nth item is taking up k_n space in the knapsack.

To organize this process, we can define the problem in terms of two parameters: the knapsack size K and the number of items n. Denote a given instance of the problem as P(n, K). Now we can say that P(n, K) has a solution if and only if there exists a solution for either P(n − 1, K) or P(n − 1, K − k_n). That is, we can solve P(n, K) only if we can solve one of the subproblems where we use or do not use the nth item. Of course, the ordering of the items is arbitrary. We just need to give them some order to keep things straight.

Continuing this idea, to solve any subproblem of size n, we need only to solve two subproblems of size n − 1, and so on, until we are down to only one item that either fills the knapsack or not. This naturally leads to a cost expressed by the recurrence relation T(n) = 2T(n − 1) + c, whose solution is Θ(2^n). That can be pretty expensive! But we should quickly realize that there are only n(K + 1) distinct subproblems to solve! Clearly, there is the potential for many subproblems being solved repeatedly. This is a natural opportunity to apply dynamic programming: we simply build an array of size n × (K + 1) to contain the solutions for all subproblems P(i, k), 1 ≤ i ≤ n, 0 ≤ k ≤ K.
There are two approaches to actually solving the problem. One is to start with our problem of size P(n, K) and make recursive calls to solve the subproblems, each time checking the array to see if a subproblem has already been solved, and filling in the array whenever we get a new subproblem solution. The other is to start filling the array for row 1 (which indicates a successful solution only for a knapsack of size k_1). We then fill in the succeeding rows from row 2 to row n, left to right, as follows:

If P(n − 1, K) has a solution,
  then P(n, K) has a solution,
else if P(n − 1, K − k_n) has a solution,
  then P(n, K) has a solution,
else P(n, K) has no solution.

In other words, a new slot in the array gets its solution by looking at two slots in the preceding row. Since filling each slot in the array takes constant time, the total cost of the algorithm is Θ(nK).

Example: Solve the knapsack problem for K = 10 and five items with sizes k = {9, 2, 7, 4, 1}. We do this by building the following array:

          0  1  2  3  4  5  6  7    8  9    10
  k1=9    O  −  −  −  −  −  −  −    −  I    −
  k2=2    O  −  I  −  −  −  −  −    −  O    −
  k3=7    O  −  O  −  −  −  −  I    −  I/O  −
  k4=4    O  −  O  −  I  −  I  O    −  O    −
  k5=1    O  I  O  I  O  I  O  I/O  I  O    I

Key:
−: no solution for P(i, k).
O: solution(s) for P(i, k) with i omitted.
I: solution(s) for P(i, k) with i included.
I/O: solutions for P(i, k) both with i included and with i omitted.

For example, P(3, 9) stores value I/O. It contains O because P(2, 9) has a solution. It contains I because P(2, 2) = P(3 − 1, 9 − 7) has a solution. Since P(5, 10) is marked with an I, it has a solution. We can determine what that solution actually is by recognizing that it includes the 5th item (of size 1), which then leads us to look at the solution for P(4, 9). This in turn has a solution that omits the 4th item, leading us to P(3, 9). At this point, we can either use the third item or not. We can find a solution by taking one branch. We can find all solutions by following all branches when there is a choice.
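A sketch of the row-by-row fill just traced (the integer encoding of −, O, and I, and the solution-recovery loop, are our own; where both an O and an I mark would apply, this sketch records only the O, which is enough to recover one solution):

/** Solve the exact knapsack problem by filling the table of
    subproblems P(i, k) row by row. Returns one solution subset
    as a boolean array, or null if none exists. Cost: Theta(nK). */
static boolean[] knapsack(int[] k, int K) {
  final byte NONE = 0, OMIT = 1, INCL = 2;   // -, O, I
  int n = k.length;
  byte[][] P = new byte[n + 1][K + 1];
  P[0][0] = OMIT;                 // The empty set fills a knapsack of size 0
  for (int i = 1; i <= n; i++)
    for (int c = 0; c <= K; c++) {
      if (P[i - 1][c] != NONE) P[i][c] = OMIT;   // Solvable without item i
      else if (c >= k[i - 1] && P[i - 1][c - k[i - 1]] != NONE)
        P[i][c] = INCL;                          // Solvable using item i
    }
  if (P[n][K] == NONE) return null;
  boolean[] used = new boolean[n];
  for (int i = n, c = K; i > 0; i--)    // Trace one solution back
    if (P[i][c] == INCL) { used[i - 1] = true; c -= k[i - 1]; }
  return used;
}

For the example above, knapsack(new int[] {9, 2, 7, 4, 1}, 10) reports a solution using the items of sizes 9 and 1.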
All-Pairs Shortest Paths

We next consider the problem of finding the shortest distance between all pairs of vertices in the graph, called the all-pairs shortest-paths problem. To be precise, for every u, v ∈ V, calculate d(u, v).

One solution is to run Dijkstra's algorithm for finding the single-source shortest path |V| times, each time computing the shortest path from a different start vertex. If G is sparse (that is, |E| = Θ(|V|)), then this is a good solution, because the total cost will be Θ(|V|² + |V||E| log |V|) = Θ(|V|² log |V|) for the version of Dijkstra's algorithm based on priority queues. For a dense graph, the priority queue version of Dijkstra's algorithm yields a cost of Θ(|V|³ log |V|), but the version using MinVertex yields a cost of Θ(|V|³).

Another solution that limits processing time to Θ(|V|³) regardless of the number of edges is known as Floyd's algorithm. This is another example of dynamic programming. The chief problem is organizing the search process so that we do not repeatedly solve the same subproblems. Define a k-path from vertex v to vertex u to be any path whose intermediate vertices (aside from v and u) all have indices less than k. A 0-path is defined to be a direct edge from v to u. The accompanying figure illustrates the concept of k-paths.

Define D_k(v, u) to be the length of the shortest k-path from vertex v to vertex u. Assume that we already know the shortest k-path from v to u. The shortest (k + 1)-path either goes through vertex k or it does not. If it does go through k, then the best path is the best k-path from v to k followed by the best k-path from k to u. Otherwise, we should keep the best k-path seen before. Floyd's algorithm simply checks all of the possibilities in a triple loop. Here is the implementation for Floyd's algorithm; at the end of the algorithm, array D stores the all-pairs shortest distances.
Figure: An example of k-paths in Floyd's algorithm. A direct edge between two vertices is a 0-path by definition. A path whose only intermediate vertex is vertex 0 is not a 0-path, but it is a 1-path (as well as a 2-path, a 3-path, and a 4-path), because the largest intermediate vertex is 0. A path whose only intermediate vertex is vertex 3 is a 4-path, but not a 3-path. All paths in this graph are 4-paths.

/** Compute all-pairs shortest paths.
    Assumes that D is initialized to Integer.MAX_VALUE
    in every position. */
static void Floyd(Graph G, int[][] D) {
  for (int i = 0; i < G.n(); i++)       // Initialize D with weights
    for (int j = 0; j < G.n(); j++)
      if (G.weight(i, j) != 0) D[i][j] = G.weight(i, j);
  for (int k = 0; k < G.n(); k++)       // Compute all k-paths
    for (int i = 0; i < G.n(); i++)
      for (int j = 0; j < G.n(); j++)
        if ((D[i][k] != Integer.MAX_VALUE) &&
            (D[k][j] != Integer.MAX_VALUE) &&
            (D[i][j] > (D[i][k] + D[k][j])))
          D[i][j] = D[i][k] + D[k][j];
}

Clearly this algorithm requires Θ(|V|³) running time, and it is the best choice for dense graphs because it is (relatively) fast and easy to implement.
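Because the function assumes that D comes in pre-filled with Integer.MAX_VALUE (standing in for infinity), a short calling sketch may help; the zero diagonal is our convention for the distance from a vertex to itself:

/** A sketch of setting up and calling Floyd's algorithm. */
static int[][] allPairs(Graph G) {
  int n = G.n();
  int[][] D = new int[n][n];
  for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
      D[i][j] = Integer.MAX_VALUE;  // "No path known yet"
  for (int i = 0; i < n; i++)
    D[i][i] = 0;                    // Each vertex reaches itself for free
  Floyd(G, D);                      // Now D[i][j] holds d(i, j)
  return D;
}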
Randomized Algorithms

In this section, we will consider how introducing randomness into our algorithms can lead to speeding things up, perhaps at the expense of accuracy. But often we can reduce the possibility for error to be as low as we like.

Randomized Algorithms for Finding Large Values

Earlier we determined that the lower bound cost of finding the maximum value in an unsorted list is Ω(n). This is the least time needed to be certain that we have found the maximum value. But what if we are willing to relax our requirement for certainty? The first question is what we mean by this. There are many aspects to "certainty," and we might relax the requirement in various ways. There are several possible guarantees that we might require from an algorithm that produces X as the maximum value, when the true maximum is Y:

• X = Y.
• X's rank is "close to" Y's rank: rank(X) ≤ rank(Y) + "small."
• X is "usually" Y: P(X = Y) ≥ "large."
• X's rank is "usually" "close to" Y's rank.

Each of these guarantees leads to a type of algorithm, as follows:

• Exact or deterministic algorithm.
• Approximation algorithm.
• Probabilistic algorithm.
• Heuristic.

There are also ways that we might choose to sacrifice reliability for speed. These types of algorithms also have names:

• Las Vegas algorithms: We always find the maximum value, and "usually" we find it fast. Such algorithms have a guaranteed result, but not a guaranteed fast running time.
• Monte Carlo algorithms: We find the maximum value fast, or we don't get an answer at all (but fast). Such algorithms have a good running time, but their result is not guaranteed.

Here is an example of an algorithm for finding a large value that gives up its guarantee of getting the best value in exchange for an improved running time. This is an example of a probabilistic algorithm, since it includes steps that are affected by random events. Choose m elements at random, and pick the best one of those as the answer. For large n, if m ≈ log n, the answer is pretty good. The cost is m − 1 compares (since we must find the maximum of m values). But we don't know for sure what we will get. However, we can estimate that the resulting rank will be about mn/(m + 1).

Next, consider a slightly different problem, where the goal is to pick a number in the upper half of the n values. We could pick the maximum from among the first n/2 + 1 values, for a cost of n/2 comparisons. Can we do better than this? Not if we want to guarantee getting the correct answer.
But if we are willing to accept near certainty instead of absolute certainty, we can gain a lot in terms of speed.

As an alternative, consider this probabilistic algorithm: Pick 2 numbers and choose the greater. This will be in the upper half with probability 3/4 (because it fails to be in the upper half only when both numbers we choose happen to be in the lower half). Is a probability of 3/4 not good enough? Then we simply pick more numbers! For k numbers, the greatest is in the upper half with probability 1 − (1/2)^k, regardless of the number n that we pick from, so long as n is much larger than k (otherwise the chances might become even better). If we pick ten numbers, then the chance of failure is only one in 2^10 = 1024. What if we really want to be sure, because lives depend on drawing a number from the upper half? If we pick 30 numbers, we can fail only one time in a billion. If we pick enough numbers, then the chance of picking a small number is less than the chance of the power failing during the computation. Picking 100 numbers means that we can fail only one time in 2^100, which is a smaller chance than any disaster that you can imagine intervening.
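A sketch of this probabilistic algorithm (our coding; the rnd parameter is a java.util.Random):

/** Return a value that falls in the upper half of A with
    probability about 1 - (1/2)^k (sampling with replacement). */
static int probablyUpperHalf(int[] A, int k, java.util.Random rnd) {
  int best = A[rnd.nextInt(A.length)];
  for (int i = 1; i < k; i++) {             // k - 1 more comparisons
    int candidate = A[rnd.nextInt(A.length)];
    if (candidate > best) best = candidate;
  }
  return best;
}

The cost is k − 1 comparisons no matter how large n is, which is the whole point of the tradeoff.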
Skip Lists

This section presents a probabilistic search structure called the skip list. Like the BST, skip lists are designed to overcome a basic limitation of array-based and linked lists: either search or update operations require linear time. The skip list is an example of a probabilistic data structure, because it makes some of its decisions at random.

Skip lists provide an alternative to the BST and related tree structures. The primary problem with the BST is that it may easily become unbalanced. The 2-3 tree is guaranteed to remain balanced regardless of the order in which data values are inserted, but it is rather complicated to implement. The AVL tree and the splay tree are also guaranteed to provide good performance, but at the cost of added complexity as compared to the BST. The skip list is easier to implement than known balanced tree structures. The skip list is not guaranteed to provide good performance (where good performance is defined as Θ(log n) search, insertion, and deletion time), but it will provide good performance with extremely high probability (unlike the BST, which has a good chance of performing poorly). As such, it represents a good compromise between difficulty of implementation and performance.

The accompanying figure illustrates the concept behind the skip list. Part (a) shows a simple linked list whose nodes are ordered by key value. To search a sorted linked list requires that we move down the list one node at a time, visiting Θ(n) nodes in the average case. Imagine that we add a pointer to every other node that lets us skip alternating nodes, as shown in part (b). Define nodes with only a single pointer as level 0 skip list nodes, while nodes with two pointers are level 1 skip list nodes. To search, follow the level 1 pointers until a value greater than the search key has been found, then revert to a level 0 pointer to travel one more node if necessary. This effectively cuts the work in half. We can continue adding pointers to selected nodes in this way (give a third pointer to every fourth node, give a fourth pointer to every eighth node, and so on) until we reach the ultimate of log n pointers in the first and middle nodes for a list of n nodes, as illustrated in part (c). To search, start with the top row of pointers, going as far as possible while skipping many nodes at a time. Then, shift down to shorter and shorter steps as required. With this arrangement, the worst-case number of accesses is Θ(log n).

To implement skip lists, we store with each skip list node an array named forward that stores the pointers as shown in part (c) of the figure: position forward[0] stores a level 0 pointer, forward[1] stores a level 1 pointer, and so on. The skip list class definition includes a data member level that stores the highest level for any node currently in the skip list. The skip list is assumed to store a header node named head with pointers at every level.

The find function is shown below. Searching for a node with value 62 in the skip list of part (c) begins at the header node. Follow the header node's pointer at level, which in this example is level 2. This points to the node with value 31. Because 31 is less than 62, we next try the pointer from forward[2] of 31's node to reach 69. Because 69 is greater than 62, we cannot go forward, but must instead decrement the current level counter to 1. We next try to follow forward[1] of 31 to reach the node with value 58. Because 58 is smaller than 62, we follow 58's forward[1] pointer to 69. Because 69 is too big, we follow 58's level 0 pointer to 62. Because 62 is not less than 62, we fall out of the while loop and move one step forward to the node with value 62.

The ideal skip list of part (c) has been organized so that (if the first and last nodes are not counted) half of the nodes have only one pointer, one quarter have two, one eighth have three, and so on, with the distances equally spaced; in effect this is a "perfectly balanced" skip list. Maintaining such balance would be expensive during the normal process of insertions and deletions. The key to skip lists is that we do not worry about any of this. Whenever inserting a node, we assign it a level (i.e., some number of pointers). The assignment is random, using a geometric distribution yielding a 50% probability that the node will have one pointer, a 25% probability that it will have two, and so on.
Figure: Illustration of the skip list concept, built over the values 5, 25, 30, 31, 42, 58, 62, and 69. (a) A simple linked list. (b) Augmenting the linked list with additional pointers at every other node. To find the node with key value 62, we visit the nodes with values 25, 31, 58, and 69, and then move from the node with key value 58 to the one with value 62. (c) The ideal skip list, guaranteeing O(log n) search time. To find the node with key value 62, we visit the nodes in the order 31, then 69, then 31 again, then 58, and finally 62.

public E find(Key searchKey) {              // Skiplist search
  SkipNode<Key,E> x = head;                 // Dummy header node
  for (int i = level; i >= 0; i--)          // For each level...
    while ((x.forward[i] != null) &&        // go forward
           (searchKey.compareTo(x.forward[i].key()) > 0))
      x = x.forward[i];                     // Go one last step
  x = x.forward[0];     // Move to actual record, if it exists
  if ((x != null) && (searchKey.compareTo(x.key()) == 0))
    return x.element();                     // Got it
  else return null;                         // It's not there
}

Figure: Implementation for the skip list find function.
The following function determines the level for a new node, based on such a distribution:

/** Pick a level using an exponential distribution */
int randomLevel() {
  int lev;
  for (lev = 0; DSutil.random(2) == 0; lev++)
    ;  // Do nothing
  return lev;
}

Once the proper level for the node has been determined, the next step is to find where the node should be inserted, and to link it in as appropriate at all of its levels. The insert function is shown below.

/** Insert a record into the skiplist */
public void insert(Key k, E newValue) {
  int newLevel = randomLevel();        // New node's level
  if (newLevel > level)                // If new node is deeper,
    AdjustHead(newLevel);              //   adjust the header
  // Track the end of each level
  SkipNode<Key,E>[] update =
    (SkipNode<Key,E>[]) new SkipNode[level+1];
  SkipNode<Key,E> x = head;            // Start at header node
  for (int i = level; i >= 0; i--) {   // Find insert position
    while ((x.forward[i] != null) &&
           (k.compareTo(x.forward[i].key()) > 0))
      x = x.forward[i];
    update[i] = x;                     // Track end at level i
  }
  x = new SkipNode<Key,E>(k, newValue, newLevel);
  for (int i = 0; i <= newLevel; i++) {    // Splice into list
    x.forward[i] = update[i].forward[i];   // Who x points to
    update[i].forward[i] = x;              // Who points to x
  }
  size++;                       // Increment dictionary size
}

Figure: Implementation for the skip list insert function.

The accompanying figure illustrates the skip list insertion process. In this example, we begin by inserting a node with value 10 into an empty skip list. Assume that randomLevel returns a value of 1 (i.e., the node is at level 1, with 2 pointers). Because the empty skip list has no nodes, the level of the list (and thus the level of the header node) must be set to 1. The new node is inserted, yielding the skip list of part (a) of the figure.

Next, insert the value 20. Assume this time that randomLevel returns 0. The search process goes to the node with value 10, and the new node is inserted after, as shown in part (b). The third node inserted has value 5, and again assume that randomLevel returns 0. This yields the skip list of part (c).

The fourth node inserted has value 2, and assume that randomLevel returns 3. This means that the level of the skip list must rise, causing the header node to gain an additional two (null) pointers.
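AdjustHead is the one helper not shown above; a minimal sketch of it (ours, assuming the same SkipNode constructor used by insert) simply rebuilds the header with more pointers:

/** Grow the header so that it has newLevel+1 forward pointers. */
@SuppressWarnings("unchecked")  // Generic array creation
private void AdjustHead(int newLevel) {
  SkipNode<Key,E> temp = head;
  head = new SkipNode<Key,E>(null, null, newLevel);
  for (int i = 0; i <= level; i++)    // Preserve the existing links
    head.forward[i] = temp.forward[i];
  level = newLevel;                   // New top levels remain null
}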
Figure: Illustration of the skip list insertion process. (a) The skip list after inserting the initial value 10 at level 1. (b) The skip list after inserting value 20 at level 0. (c) The skip list after inserting value 5 at level 0. (d) The skip list after inserting value 2 at level 3. (e) The final skip list after inserting value 30 at level 2.
At this point, the new node is added to the front of the list, as shown in part (d) of the figure.

Finally, insert a node with value 30 at level 2. This time, let us take a close look at what the array update is used for. It stores the farthest node reached at each level during the search for the proper location of the new node. The search process begins in the header node at level 3 and proceeds to the node storing value 2. Because forward[3] for this node is null, we cannot go further at this level. Thus, update[3] stores a pointer to the node with value 2. Likewise, we cannot proceed at level 2, so update[2] also stores a pointer to the node with value 2. At level 1, we proceed to the node storing value 10. This is as far as we can go at level 1, so update[1] stores a pointer to the node with value 10. Finally, at level 0 we end up at the node with value 20.

At this point, we can add in the new node with value 30. For each value i, the new node's forward[i] pointer is set to be update[i].forward[i], and the nodes stored in update[i] for indices 0 through 2 have their forward[i] pointers changed to point to the new node. This "splices" the new node into the skip list at all levels.

The remove function is left as an exercise. It is similar to inserting, in that the update array is built as part of searching for the record to be deleted; then those nodes specified by the update array have their forward pointers adjusted to point around the node being deleted.

A newly inserted node could have a high level generated by randomLevel, or a low level. It is possible that many nodes in the skip list could have many pointers, leading to unnecessary insert cost and yielding poor (i.e., Θ(n)) performance during search, because not many nodes will be skipped. Conversely, too many nodes could have a low level. In the worst case, all nodes could be at level 0, equivalent to a regular linked list. If so, search will again require Θ(n) time. However, the probability that performance will be poor is quite low. There is only one chance in 1024 that ten nodes in a row will be at level 0. The motto of probabilistic data structures such as the skip list is "Don't worry, be happy." We simply accept the results of randomLevel and expect that probability will eventually work in our favor. The advantage of this approach is that the algorithms are simple, while requiring only Θ(log n) time for all operations in the average case.

In practice, the skip list will probably have better performance than a BST. The BST can have bad performance caused by the order in which data are inserted. For example, if n nodes are inserted into a BST in ascending order of their key value, then the BST will look like a linked list with the deepest node at depth n − 1. The skip list's performance does not depend on the order in which values are inserted into the list. As the number of nodes in the skip list increases, the probability of encountering the worst case decreases geometrically.
Thus, the skip list illustrates a tension between the theoretical worst case (in this case, Θ(n) for a skip list operation) and a rapidly increasing probability of average-case performance of Θ(log n), a tension that characterizes probabilistic data structures.

Numerical Algorithms

This section presents a variety of algorithms related to mathematical computations on numbers. Examples are activities like multiplying two numbers or raising a number to a given power. In particular, we are concerned with situations where built-in integer or floating-point operations cannot be used because the values being operated on are too large. Often, similar operations are applied to polynomials or matrices.

Since we cannot rely on the hardware to process the inputs in a single constant-time operation, we are concerned with how to most effectively process the operation to minimize the time cost. This begs a question as to how we can apply our normal measures of asymptotic cost in terms of growth rates on input size. First, what is an instance of addition or multiplication? Each value of the operands yields a different problem instance. And what is the input size when multiplying two numbers? If we view the input size as two (since two numbers are input), then any non-constant-time algorithm has a growth rate that is infinitely high compared to the growth of the input. This makes no sense, especially in light of the fact that we know from grade school arithmetic that adding or multiplying numbers does seem to get more difficult as the value of the numbers involved increases. In fact, we know from standard grade school algorithms that the cost of standard addition is linear on the number of digits being added, and multiplication has cost n × m when multiplying an n-digit number by an m-digit number.

The number of digits for the operands does appear to be a key consideration when we are performing an algorithm that is sensitive to input size. The number of digits is simply the log of the value, for a suitable base of the log. Thus, for the purpose of calculating asymptotic growth rates of algorithms, we will consider the "size" of an input value to be the log of that value. Given this view, there are a number of features that seem to relate such operations:

• Arithmetic operations on large values are not cheap.
• There is only one instance of the value n.
• There are 2^n instances of length n or less.
• The size (length) of value n is log n.
• The cost of a particular algorithm can decrease when n increases in value (say, when going from a value of 2^k − 1 to 2^k to 2^k + 1), but it generally increases when n increases in length.

Exponentiation

We will start our examination of standard numerical algorithms by considering how to perform exponentiation. That is, how do we compute m^n? We could multiply by m a total of n − 1 times. Can we do better? Yes: there is a simple divide-and-conquer approach that we can use. We can recognize that, when n is even, m^n = m^{n/2} · m^{n/2}. If n is odd, then m^n = m^{⌊n/2⌋} · m^{⌊n/2⌋} · m. This leads to the following recursive algorithm:

int Power(int base, int exp) {
  if (exp == 0) return 1;                  // Base case
  int half = Power(base, exp/2);           // Integer division of exp
  half = half * half;
  if ((exp % 2) == 1) half = half * base;  // exp was odd
  return half;
}

Power has the recurrence relation

f(n) = 0                            if n = 1,
f(n) = f(⌊n/2⌋) + 1 + (n mod 2)     otherwise,

whose solution is

f(n) = ⌊log n⌋ + β(n) − 1,

where β(n) is the number of 1's in the binary representation of n. For example, computing m^13 (13 is 1101 in binary) costs ⌊log 13⌋ + β(13) − 1 = 3 + 3 − 1 = 5 multiplications: m², m³, m⁶, m¹², and m¹³.

How does this cost compare with the problem size? The original problem size is log m + log n, and the number of multiplications required is about log n. This is far better (in fact, exponentially better) than performing n − 1 multiplications.

Largest Common Factor

We will next present Euclid's algorithm for finding the largest common factor (LCF) of two integers. The LCF is the largest integer that divides both inputs evenly.

First we make this observation: If k divides n and m, then k divides n − m. We know this is true because if k divides n then n = ak for some integer a, and if k divides m then m = bk for some integer b, so n − m = ak − bk = (a − b)k.
Largest Common Factor

We will next present Euclid's algorithm for finding the largest common factor (LCF) of two integers. The LCF is the largest integer that divides both inputs evenly.

First we make this observation: If k divides n and m, then k divides n - m. We know this is true because if k divides n then n = ak for some integer a, and if k divides m then m = bk for some integer b. So, n - m = ak - bk = (a - b)k. Thus,

    LCF(n, m) = LCF(n - m, n) = LCF(m, n - m) = LCF(m, n).

Now, for any value n, there exist k and l such that

    n = km + l    where m > l ≥ 0.

From the definition of the mod function, we can derive the fact that

    n = ⌊n/m⌋ m + n mod m.

Since the LCF is a factor of both n and m, and since n = km + l, the LCF must therefore be a factor of both km and l, and also the largest common factor of each of these terms. As a consequence, LCF(n, m) = LCF(m, l) = LCF(m, n mod m).

This observation leads to a simple algorithm. We will assume that n ≥ m. At each iteration we replace n with m and m with n mod m until we have driven m to zero.

    int LCF(int n, int m) {
      if (m == 0) return n;
      return LCF(m, n % m);
    }

To determine how expensive this algorithm is, we need to know how much progress we are making at each step. Note that after two iterations, we have replaced n with n mod m. So the key question becomes: How big is n mod m relative to n?

    n ≥ m  ⟹  n/m ≥ 1
           ⟹  2⌊n/m⌋ > n/m
           ⟹  m⌊n/m⌋ > n/2
           ⟹  n - n/2 > n - m⌊n/m⌋ = n mod m
           ⟹  n/2 > n mod m

Thus, function LCF will halve its first parameter in no more than two iterations. The total cost is then Θ(log n).
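As a worked example, with values chosen purely for illustration, consider the call LCF(63, 36):

    LCF(63, 36) = LCF(36, 27)    since 63 mod 36 = 27
                = LCF(27, 9)     since 36 mod 27 = 9
                = LCF(9, 0)      since 27 mod 9 = 0
                = 9

Indeed, 9 divides both 63 and 36. Note also that two iterations take the first parameter from 63 down to 27, which is less than 63/2, just as the analysis promises.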
Matrix Multiplication

The standard algorithm for multiplying two n × n matrices requires Θ(n^3) time. It is possible to do better than this by rearranging and grouping the multiplications in various ways. One example of this is known as Strassen's matrix multiplication algorithm.

For simplicity, we will assume that n is a power of two. In the following, A and B are n × n arrays, while Aij and Bij refer to the four n/2 × n/2 subarrays of A and B, respectively. Using this notation, we can think of matrix multiplication using divide and conquer in the following way:

    [ A11 A12 ] [ B11 B12 ]   [ A11·B11 + A12·B21   A11·B12 + A12·B22 ]
    [ A21 A22 ] [ B21 B22 ] = [ A21·B11 + A22·B21   A21·B12 + A22·B22 ]

Of course, each of the multiplications and additions on the right side of this equation are recursive calls on arrays of half size, and additions of arrays of half size, respectively. The recurrence relation for this algorithm is

    T(n) = 8T(n/2) + 4(n/2)^2 = Θ(n^3).

This closed form solution can easily be obtained by applying the Master Theorem.

Strassen's algorithm carefully rearranges the way that the various terms are multiplied and added together. It does so in a particular order, as expressed by the following equation:

    [ A11 A12 ] [ B11 B12 ]   [ s1 + s4 - s5 + s7    s3 + s5            ]
    [ A21 A22 ] [ B21 B22 ] = [ s2 + s4              s1 - s2 + s3 + s6  ]

In other words, the result of the multiplication for an n × n array is obtained by a different series of matrix multiplications and additions for n/2 × n/2 arrays. Multiplications between subarrays also use Strassen's algorithm, and the addition of two subarrays requires Θ(n^2) time. The subfactors are defined as follows:

    s1 = (A11 + A22) · (B11 + B22)
    s2 = (A21 + A22) · B11
    s3 = A11 · (B12 - B22)
    s4 = A22 · (B21 - B11)
    s5 = (A11 + A12) · B22
    s6 = (A21 - A11) · (B11 + B12)
    s7 = (A12 - A22) · (B21 + B22)

With a little effort, you should be able to verify that this peculiar combination of operations does in fact produce the correct answer (a mechanical check appears at the end of this section).

Now, looking at the list of operations to compute the s factors, and then the additions and subtractions needed to put them together to get the final answers, we see that we need a total of seven (array) multiplications and 18 (array) additions/subtractions to do the job. This leads to the recurrence

    T(n) = 7T(n/2) + 18(n/2)^2
    T(n) = Θ(n^(log 7)) = Θ(n^2.81).

We obtained the closed form solution again by applying the Master Theorem.

Unfortunately, while Strassen's algorithm does in fact reduce the asymptotic complexity over the standard algorithm, the cost of the large number of addition and subtraction operations raises the constant factor involved considerably. This means that an extremely large array size is required to make Strassen's algorithm practical in actual applications.
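One way to carry out that verification is mechanical. The following C program (an added sketch, not the book's code) treats each subarray as a single number, computes the seven subfactors as defined above, and compares the Strassen combination against the definition of matrix multiplication; both lines print the same result, 19 22 43 50.

    #include <stdio.h>

    /* Check Strassen's seven products on a 2x2 case, where each
       "subarray" A11..A22, B11..B22 is a single number. */
    int main(void) {
        int A11 = 1, A12 = 2, A21 = 3, A22 = 4;   /* arbitrary test values */
        int B11 = 5, B12 = 6, B21 = 7, B22 = 8;

        int s1 = (A11 + A22) * (B11 + B22);
        int s2 = (A21 + A22) * B11;
        int s3 = A11 * (B12 - B22);
        int s4 = A22 * (B21 - B11);
        int s5 = (A11 + A12) * B22;
        int s6 = (A21 - A11) * (B11 + B12);
        int s7 = (A12 - A22) * (B21 + B22);

        printf("Strassen: %d %d %d %d\n",
               s1 + s4 - s5 + s7, s3 + s5, s2 + s4, s1 - s2 + s3 + s6);
        printf("Direct:   %d %d %d %d\n",
               A11*B11 + A12*B21, A11*B12 + A12*B22,
               A21*B11 + A22*B21, A21*B12 + A22*B22);
        return 0;
    }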
Random Numbers

The success of randomized algorithms such as the skip list presented earlier in this chapter depends on having access to a good random number generator. While modern compilers are likely to include a random number generator that is good enough for most purposes, it is helpful to understand how they work, and even to be able to construct your own in case you don't trust the one provided; this is easy to do, as sketched at the end of this section.

First, let us consider what a random sequence is. From the following list, which one appears to be a sequence of "random" numbers?

    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...
    2, 7, 1, 8, 2, 8, 1, 8, 2, 8, ...

In fact, all three happen to be the beginning of some sequence in which one could continue the pattern to generate more values (in case you do not recognize it, the third one is the initial digits of the irrational constant e). Viewed as a series of digits, ideally every possible sequence has equal probability of being generated (even the three sequences above). In fact, definitions of randomness generally have features such as:

- One cannot predict the next item. The series is unpredictable.
- The series cannot be described more briefly than simply listing it out. This is the equidistribution property.
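As a preview of how one might construct such a generator, a classic family is the linear congruential generator, which produces each value from the previous one by the rule r(i) = (r(i-1) · b + c) mod t. The sketch below is an added illustration, not the book's code; it uses the multiplicative special case c = 0 with the well-known "minimal standard" constants of Park and Miller, b = 16807 and t = 2^31 - 1.

    #include <stdio.h>

    static long long seed = 1;   /* any value in [1, t-1] works as a seed */

    /* Return the next value in the series r(i) = (r(i-1) * b) mod t. */
    long long randnext(void) {
        const long long b = 16807;        /* multiplier */
        const long long t = 2147483647;   /* modulus: 2^31 - 1, a prime */
        seed = (seed * b) % t;            /* the product fits in 64 bits */
        return seed;
    }

    int main(void) {
        for (int i = 0; i < 5; i++)
            printf("%lld\n", randnext());
        return 0;
    }

The quality of such a generator depends critically on the choice of b and t; poor choices yield short cycles or obvious patterns, which is exactly what the definitions of randomness above rule out.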