The Euler Tour Traversal of a Binary Tree

In an earlier section we introduced the concept of an Euler tour traversal of a general graph, using the template method pattern in designing the EulerTour class. That class provided methods _hook_previsit and _hook_postvisit that could be overridden to customize a tour. In the code fragment below we provide a BinaryEulerTour specialization that includes an additional _hook_invisit that is called once for each position, after its left subtree is traversed but before its right subtree is traversed.

Our implementation of BinaryEulerTour replaces the original _tour utility to specialize to the case in which a node has at most two children. If a node has only one child, a tour differentiates between whether that is a left child or a right child, with the "in visit" taking place after the visit of a sole left child, but before the visit of a sole right child. In the case of a leaf, the three hooks are called in succession.

class BinaryEulerTour(EulerTour):
  """Abstract base class for performing Euler tour of a binary tree.

  This version includes an additional _hook_invisit that is called after the tour
  of the left subtree (if any), yet before the tour of the right subtree (if any).

  Note: Right child is always assigned index 1 in path, even if it has no left sibling.
  """
  def _tour(self, p, d, path):
    results = [None, None]                      # will update with results of recursions
    self._hook_previsit(p, d, path)             # "pre visit" for p
    if self._tree.left(p) is not None:          # consider left child
      path.append(0)
      results[0] = self._tour(self._tree.left(p), d + 1, path)
      path.pop()
    self._hook_invisit(p, d, path)              # "in visit" for p
    if self._tree.right(p) is not None:         # consider right child
      path.append(1)
      results[1] = self._tour(self._tree.right(p), d + 1, path)
      path.pop()
    answer = self._hook_postvisit(p, d, path, results)   # "post visit" for p
    return answer

  def _hook_invisit(self, p, d, path):          # can be overridden
    pass

Code Fragment: BinaryEulerTour base class providing a specialized tour for binary trees. The original EulerTour base class was given in an earlier code fragment.
Figure: An inorder drawing of a binary tree.

To demonstrate use of the BinaryEulerTour framework, we develop a subclass that computes a graphical layout of a binary tree, as shown in the figure above. The geometry is determined by an algorithm that assigns x- and y-coordinates to each position p of a binary tree T using the following two rules:

* x(p) is the number of positions visited before p in an inorder traversal of T.
* y(p) is the depth of p in T.

In this application, we take the convention common in computer graphics that x-coordinates increase left to right and y-coordinates increase top to bottom, so the origin is in the upper left corner of the computer screen.

The code fragment below provides an implementation of a BinaryLayout subclass that implements the above algorithm for assigning (x, y) coordinates to the element stored at each position of a binary tree. We adapt the BinaryEulerTour framework by introducing additional state in the form of a _count instance variable that represents the number of "in visits" that we have performed. The x-coordinate for each position is set according to that counter.

class BinaryLayout(BinaryEulerTour):
  """Class for computing (x,y) coordinates for each node of a binary tree."""
  def __init__(self, tree):
    super().__init__(tree)              # must call the parent constructor
    self._count = 0                     # initialize count of processed nodes

  def _hook_invisit(self, p, d, path):
    p.element().setX(self._count)       # x-coordinate serialized by count
    p.element().setY(d)                 # y-coordinate is depth
    self._count += 1                    # advance count of processed nodes

Code Fragment: A BinaryLayout class that computes coordinates at which to draw positions of a binary tree. We assume that the element type for the original tree supports setX and setY methods.
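To illustrate how the layout might be driven, here is a minimal sketch. It assumes the LinkedBinaryTree, EulerTour/BinaryEulerTour, and BinaryLayout classes shown in this chapter are in scope, and it introduces a small hypothetical Coordinate element class (not part of the book's code) that supplies the setX and setY methods the layout expects.

class Coordinate:
  """Hypothetical element type supporting the setX/setY calls used by BinaryLayout."""
  def __init__(self, label):
    self._label = label
    self._x = None
    self._y = None
  def setX(self, x): self._x = x
  def setY(self, y): self._y = y
  def __repr__(self):
    return '{0}@({1},{2})'.format(self._label, self._x, self._y)

# build a tiny tree using the nonpublic update methods of LinkedBinaryTree:
#        A
#       / \
#      B   C
T = LinkedBinaryTree()
root = T._add_root(Coordinate('A'))
T._add_left(root, Coordinate('B'))
T._add_right(root, Coordinate('C'))

BinaryLayout(T).execute()      # EulerTour's execute() starts the tour at the root
for p in T.positions():
  print(p.element())           # each element now carries its inorder (x, y) coordinates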
Case Study: An Expression Tree

In an earlier example we introduced the use of a binary tree to represent the structure of an arithmetic expression. In this section, we define a new ExpressionTree class that provides support for constructing such trees, and for displaying and evaluating the arithmetic expression that such a tree represents. Our ExpressionTree class is defined as a subclass of LinkedBinaryTree, and we rely on the nonpublic mutators to construct such trees. Each internal node must store a string that defines a binary operator (e.g., '+'), and each leaf must store a numeric value (or a string representing a numeric value).

Our eventual goal is to build arbitrarily complex expression trees for compound arithmetic expressions such as '(((3+1)x4)/((9-5)+2))'. However, it suffices for the ExpressionTree class to support two basic forms of initialization:

ExpressionTree(value): Create a tree storing the given value at the root.
ExpressionTree(op, E1, E2): Create a tree storing string op at the root (e.g., '+'), and with the structures of existing ExpressionTree instances E1 and E2 as the left and right subtrees of the root, respectively.

Such a constructor for the ExpressionTree class is given in the code fragment that follows. The class formally inherits from LinkedBinaryTree, so it has access to all the nonpublic update methods that were defined earlier. We use _add_root to create an initial root of the tree storing the token provided as the first parameter. Then we perform run-time checking of the parameters to determine whether the caller invoked the one-parameter version of the constructor (in which case, we are done), or the three-parameter form. In that case, we use the inherited _attach method to incorporate the structure of the existing trees as subtrees of the root.

Composing a Parenthesized String Representation

A string representation of an existing expression tree instance, for example, '(((3+1)x4)/((9-5)+2))', can be produced by displaying tree elements using an inorder traversal, but with opening and closing parentheses inserted with a preorder and postorder step, respectively. In the context of an ExpressionTree class, we support a special __str__ method (see an earlier section) that returns the appropriate string. Because it is more efficient to first build a sequence of individual strings to be joined together (see the discussion of "composing strings" in an earlier section), the implementation of __str__ relies on a nonpublic, recursive method named _parenthesize_recur that appends a series of strings to a list. These methods are included in the code that follows.
class ExpressionTree(LinkedBinaryTree):
  """An arithmetic expression tree."""

  def __init__(self, token, left=None, right=None):
    """Create an expression tree.

    In a single parameter form, token should be a leaf value (e.g., '42'),
    and the expression tree will have that value at an isolated node.

    In a three-parameter version, token should be an operator,
    and left and right should be existing ExpressionTree instances
    that become the operands for the binary operator.
    """
    super().__init__()                          # LinkedBinaryTree initialization
    if not isinstance(token, str):
      raise TypeError('Token must be a string')
    self._add_root(token)                       # use inherited, nonpublic method
    if left is not None:                        # presumably three-parameter form
      if token not in '+-*x/':
        raise ValueError('token must be valid operator')
      self._attach(self.root(), left, right)    # use inherited, nonpublic method

  def __str__(self):
    """Return string representation of the expression."""
    pieces = []                 # sequence of piecewise strings to compose
    self._parenthesize_recur(self.root(), pieces)
    return ''.join(pieces)

  def _parenthesize_recur(self, p, result):
    """Append piecewise representation of p's subtree to resulting list."""
    if self.is_leaf(p):
      result.append(str(p.element()))                  # leaf value as a string
    else:
      result.append('(')                               # opening parenthesis
      self._parenthesize_recur(self.left(p), result)   # left subtree
      result.append(p.element())                       # operator
      self._parenthesize_recur(self.right(p), result)  # right subtree
      result.append(')')                               # closing parenthesis

Code Fragment: The beginning of an ExpressionTree class.
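As a quick illustration of the two constructor forms and the __str__ method, the following is a minimal usage sketch; it assumes the ExpressionTree class from the code fragment above (and its LinkedBinaryTree parent) is available in scope.

# build the subexpression ((3+1)x4) bottom-up from smaller trees
three = ExpressionTree('3')                  # one-parameter form: a single leaf
one = ExpressionTree('1')
sum_tree = ExpressionTree('+', three, one)   # three-parameter form: operator at root
product = ExpressionTree('x', sum_tree, ExpressionTree('4'))

print(product)                               # prints ((3+1)x4)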
Expression Tree Evaluation

The numeric evaluation of an expression tree can be accomplished with a simple application of a postorder traversal. If we know the values represented by the two subtrees of an internal position, we can calculate the result of the computation that position designates. Pseudo-code for the recursive evaluation of the value represented by a subtree rooted at position p is given below.

Algorithm evaluate_recur(p):
  if p is a leaf then
    return the value stored at p
  else
    let o be the operator stored at p
    x = evaluate_recur(left(p))
    y = evaluate_recur(right(p))
    return x o y

Code Fragment: Algorithm evaluate_recur for evaluating the expression represented by a subtree of an arithmetic expression tree rooted at position p.

To implement this algorithm in the context of a Python ExpressionTree class, we provide a public evaluate method that is invoked on instance T as T.evaluate(). The code fragment below provides such an implementation, relying on a nonpublic _evaluate_recur method that computes the value of a designated subtree.

  def evaluate(self):
    """Return the numeric result of the expression."""
    return self._evaluate_recur(self.root())

  def _evaluate_recur(self, p):
    """Return the numeric result of subtree rooted at p."""
    if self.is_leaf(p):
      return float(p.element())                # we assume element is numeric
    else:
      op = p.element()
      left_val = self._evaluate_recur(self.left(p))
      right_val = self._evaluate_recur(self.right(p))
      if op == '+':
        return left_val + right_val
      elif op == '-':
        return left_val - right_val
      elif op == '/':
        return left_val / right_val
      else:                                    # treat 'x' or '*' as multiplication
        return left_val * right_val

Code Fragment: Support for evaluating an ExpressionTree instance.
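Continuing the earlier sketch, and again assuming the ExpressionTree class above is in scope, evaluation follows the same recursive structure:

expr = ExpressionTree('x',
                      ExpressionTree('+', ExpressionTree('3'), ExpressionTree('1')),
                      ExpressionTree('4'))
print(expr)              # ((3+1)x4)
print(expr.evaluate())   # 16.0, since leaves are converted with float the result is a float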
Building an Expression Tree

The constructor for the ExpressionTree class, from the preceding code fragment, provides basic functionality for combining existing trees to build larger expression trees. However, the question still remains how to construct a tree that represents an expression for a given string, such as '(((3+1)x4)/((9-5)+2))'.

To automate this process, we rely on a bottom-up construction algorithm, assuming that a string can first be tokenized so that multidigit numbers are treated atomically (see the related exercise), and that the expression is fully parenthesized. The algorithm uses a stack S while scanning tokens of the input expression E to find values, operators, and right parentheses (left parentheses are ignored).

* When we see an operator o, we push that string on the stack.
* When we see a literal value v, we create a single-node expression tree T storing v, and push T on the stack.
* When we see a right parenthesis ')', we pop the top three items from the stack S, which represent a subexpression (E1 o E2). We then construct a tree T using trees for E1 and E2 as subtrees of the root storing o, and push the resulting tree T back on the stack.

We repeat this until the expression E has been processed, at which time the top element on the stack is the expression tree for E. The total running time is O(n).

An implementation of this algorithm is given in the code fragment below, in the form of a stand-alone function named build_expression_tree, which produces and returns an appropriate ExpressionTree instance, assuming the input has been tokenized.

def build_expression_tree(tokens):
  """Returns an ExpressionTree based upon a tokenized expression."""
  S = []                                          # we use Python list as stack
  for t in tokens:
    if t in '+-x*/':                              # t is an operator symbol
      S.append(t)                                 # push the operator symbol
    elif t not in '()':                           # consider t to be a literal
      S.append(ExpressionTree(t))                 # push trivial tree storing value
    elif t == ')':        # compose a new tree from three constituent parts
      right = S.pop()                             # right subtree as per LIFO
      op = S.pop()                                # operator symbol
      left = S.pop()                              # left subtree
      S.append(ExpressionTree(op, left, right))   # repush tree
    # we ignore a left parenthesis
  return S.pop()

Code Fragment: Implementation of a build_expression_tree that produces an ExpressionTree from a sequence of tokens representing an arithmetic expression.
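A short usage sketch, assuming the build_expression_tree function and ExpressionTree class above are in scope. Because the sample expression uses only single-character tokens, the raw string itself can serve as the token sequence:

expr = build_expression_tree('(((3+1)x4)/((9-5)+2))')
print(expr)              # (((3+1)x4)/((9-5)+2))
print(expr.evaluate())   # 2.666..., since ((3+1)*4) / ((9-5)+2) = 16 / 6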
Exercises

For help with exercises, please visit the site: www.wiley.com/college/goodrich

Reinforcement

R-: The following questions refer to the tree of Figure ___.
  a. Which node is the root?
  b. What are the internal nodes?
  c. How many descendants does node cs016/ have?
  d. How many ancestors does node cs016/ have?
  e. What are the siblings of node homeworks/?
  f. Which nodes are in the subtree rooted at node projects/?
  g. What is the depth of node papers/?
  h. What is the height of the tree?
R-: Show a tree achieving the worst-case running time for algorithm depth.
R-: Give a justification of Proposition ___.
R-: What is the running time of a call to T._height2(p) when called on a position p distinct from the root of T? (See Code Fragment ___.)
R-: Describe an algorithm, relying only on the BinaryTree operations, that counts the number of leaves in a binary tree that are the left child of their respective parent.
R-: Let T be an n-node binary tree that may be improper. Describe how to represent T by means of a proper binary tree T' with O(n) nodes.
R-: What are the minimum and maximum number of internal and external nodes in an improper binary tree with n nodes?
R-: Answer the following questions so as to justify Proposition ___.
  a. What is the minimum number of external nodes for a proper binary tree with height h? Justify your answer.
  b. What is the maximum number of external nodes for a proper binary tree with height h? Justify your answer.
  c. Let T be a proper binary tree with height h and n nodes. Show that log((n+1)/2) <= h <= (n-1)/2.
  d. For which values of n and h can the above lower and upper bounds on h be attained with equality?
R-: Give a proof by induction of Proposition ___.
R-: Give a direct implementation of the num_children method within the class BinaryTree.
R-: Find the value of the arithmetic expression associated with each subtree of the binary tree of Figure ___.
R-: Draw an arithmetic expression tree that has four external nodes, storing the numbers ___ (with each number stored in a distinct external node, but not necessarily in this order), and has three internal nodes, each storing an operator from the set {+, -, *, /}, so that the value of the root is ___. The operators may return and act on fractions, and an operator may be used more than once.
R-: Draw the binary tree representation of the following arithmetic expression: "((( ( ))/(( (( ) )".
R-: Justify Table ___, summarizing the running time of the methods of a tree represented with a linked structure, by providing, for each method, a description of its implementation, and an analysis of its running time.
R-: The LinkedBinaryTree class provides only nonpublic versions of the update methods discussed earlier. Implement a simple subclass named MutableLinkedBinaryTree that provides public wrapper functions for each of the inherited nonpublic update methods.
R-: Let T be a binary tree with n nodes, and let f() be the level numbering function of the positions of T, as given in Section ___.
  a. Show that, for every position p of T, f(p) <= 2^n - 2.
  b. Show an example of a binary tree with seven nodes that attains the above upper bound on f(p) for some position p.
R-: Show how to use the Euler tour traversal to compute the level number f(p), as defined in Section ___, of each position in a binary tree T.
R-: Let T be a binary tree with n positions that is realized with an array representation A, and let f() be the level numbering function of the positions of T, as given in Section ___. Give pseudo-code descriptions of each of the methods root, parent, left, right, is_leaf, and is_root.
R-: Our definition of the level numbering function f(p), as given in Section ___, began with the root having number 0. Some authors prefer to use a level numbering g(p) in which the root is assigned number 1, because it simplifies the arithmetic for finding neighboring positions. Redo Exercise ___, but assuming that we use a level numbering g(p) in which the root is assigned number 1.
R-: Draw a binary tree T that simultaneously satisfies the following:
  * Each internal node of T stores a single character.
  * A preorder traversal of T yields EXAMFUN.
  * An inorder traversal of T yields MAFXUEN.
R-: In what order are positions visited during a preorder traversal of the tree of Figure ___?
R-: In what order are positions visited during a postorder traversal of the tree of Figure ___?
R-: Let T be an ordered tree with more than one node. Is it possible that the preorder traversal of T visits the nodes in the same order as the postorder traversal of T? If so, give an example; otherwise, explain why this cannot occur. Likewise, is it possible that the preorder traversal of T visits the nodes in the reverse order of the postorder traversal of T? If so, give an example; otherwise, explain why this cannot occur.
R-: Answer the previous question for the case when T is a proper binary tree with more than one node.
R-: Consider the example of a breadth-first traversal given in Figure ___. Using the annotated numbers from that figure, describe the contents of the queue before each pass of the while loop in Code Fragment ___. To get started, the queue has contents {1} before the first pass, and contents {2, 3, 4} before the second pass.
R-: The collections.deque class supports an extend method that adds a collection of elements to the end of the queue at once. Reimplement the breadthfirst method of the Tree class to take advantage of this feature.
R-: Give the output of the function parenthesize(T, T.root()), as described in Code Fragment ___, when T is the tree of Figure ___.
R-: What is the running time of parenthesize(T, T.root()), as given in Code Fragment ___, for a tree T with n nodes?
R-: Describe, in pseudo-code, an algorithm for computing the number of descendants of each node of a binary tree. The algorithm should be based on the Euler tour traversal.
R-: The build_expression_tree method of the ExpressionTree class requires input that is an iterable of string tokens. We used a convenient example, '(((3+1)x4)/((9-5)+2))', in which each character is its own token, so that the string itself sufficed as input to build_expression_tree. In general, a string, such as '(35 + 14)', must be explicitly tokenized into a list so as to ignore whitespace and to recognize multidigit numbers as a single token. Write a utility method, tokenize(raw), that returns such a list of tokens for a raw string.

Creativity

C-: Define the internal path length, I(T), of a tree T to be the sum of the depths of all the internal positions in T. Likewise, define the external path length, E(T), of a tree T to be the sum of the depths of all the external positions in T. Show that if T is a proper binary tree with n positions, then E(T) = I(T) + n - 1.
C-: Let T be a (not necessarily proper) binary tree with n nodes, and let D be the sum of the depths of all the external nodes of T. Show that if T has the minimum number of external nodes possible, then D is O(n), and if T has the maximum number of external nodes possible, then D is O(n log n).
C-: Let T be a (possibly improper) binary tree with n nodes, and let D be the sum of the depths of all the external nodes of T. Describe a configuration for T such that D is Omega(n^2). Such a tree would be the worst case for the asymptotic running time of method _height1 (Code Fragment ___).
C-: For a tree T, let n_I denote the number of its internal nodes, and let n_E denote the number of its external nodes. Show that if every internal node in T has exactly 3 children, then n_E = 2n_I + 1.
C-: Two ordered trees T' and T'' are said to be isomorphic if one of the following holds:
  * Both T' and T'' are empty.
  * The roots of T' and T'' have the same number k >= 0 of subtrees, and the ith such subtree of T' is isomorphic to the ith such subtree of T'', for i = 1, ..., k.
  Design an algorithm that tests whether two given ordered trees are isomorphic. What is the running time of your algorithm?
C-: Show that there are more than 2^n improper binary trees with n internal nodes such that no pair are isomorphic (see Exercise ___).
C-: If we exclude isomorphic trees (see Exercise ___), exactly how many proper binary trees exist with exactly four leaves?
C-: Add support in LinkedBinaryTree for a method, _delete_subtree(p), that removes the entire subtree rooted at position p, making sure to maintain the count on the size of the tree. What is the running time of your implementation?
C-: Add support in LinkedBinaryTree for a method, _swap(p,q), that has the effect of restructuring the tree so that the node referenced by p takes the place of the node referenced by q, and vice versa. Make sure to properly handle the case when the nodes are adjacent.
C-: We can simplify parts of our LinkedBinaryTree implementation if we make use of a single sentinel node, referenced as the _sentinel member of the tree instance, such that the sentinel is the parent of the real root of the tree, and the root is referenced as the left child of the sentinel. Furthermore, the sentinel will take the place of None as the value of the _left or _right member for a node without such a child. Give a new implementation of the update methods _delete and _attach, assuming such a representation.
C-: Describe how to clone a LinkedBinaryTree instance representing a proper binary tree, with use of the _attach method.
C-: Describe how to clone a LinkedBinaryTree instance representing a (not necessarily proper) binary tree, with use of the _add_left and _add_right methods.
C-: We can define a binary tree representation T' for an ordered general tree T as follows (see Figure ___):
  * For each position p of T, there is an associated position p' of T'.
  * If p is a leaf of T, then p' in T' does not have a left child; otherwise the left child of p' is q', where q is the first child of p in T.
  * If p has a sibling q ordered immediately after it in T, then q' is the right child of p' in T'; otherwise p' does not have a right child.
  Given such a representation T' of a general ordered tree T, answer each of the following questions:
  a. Is a preorder traversal of T' equivalent to a preorder traversal of T?
  b. Is a postorder traversal of T' equivalent to a postorder traversal of T?
  c. Is an inorder traversal of T' equivalent to one of the standard traversals of T? If so, which one?

Figure ___: Representation of a tree with a binary tree: (a) tree T; (b) binary tree T' for T. The dashed edges connect nodes of T' that are siblings in T.

C-: Give an efficient algorithm that computes and prints, for every position p of a tree T, the element of p followed by the height of p's subtree.
C-: Give an O(n)-time algorithm for computing the depths of all positions of a tree T, where n is the number of nodes of T.
C-: The path length of a tree T is the sum of the depths of all positions in T. Describe a linear-time method for computing the path length of a tree T.
C-: The balance factor of an internal position p of a proper binary tree is the difference between the heights of the right and left subtrees of p. Show how to specialize the Euler tour traversal of Section ___ to print the balance factors of all the internal nodes of a proper binary tree.
C-: Given a proper binary tree T, define the reflection of T to be the binary tree T' such that each node v in T is also in T', but the left child of v in T is v's right child in T' and the right child of v in T is v's left child in T'. Show that a preorder traversal of a proper binary tree T is the same as the postorder traversal of T's reflection, but in reverse order.
C-: Let the rank of a position p during a traversal be defined such that the first element visited has rank 1, the second element visited has rank 2, and so on. For each position p in a tree T, let pre(p) be the rank of p in a preorder traversal of T, let post(p) be the rank of p in a postorder traversal of T, let depth(p) be the depth of p, and let desc(p) be the number of descendants of p, including p itself. Derive a formula defining post(p) in terms of desc(p), depth(p), and pre(p), for each node p in T.
C-: Design algorithms for the following operations for a binary tree T:
  * preorder_next(p): Return the position visited after p in a preorder traversal of T (or None if p is the last node visited).
  * inorder_next(p): Return the position visited after p in an inorder traversal of T (or None if p is the last node visited).
  * postorder_next(p): Return the position visited after p in a postorder traversal of T (or None if p is the last node visited).
  What are the worst-case running times of your algorithms?
C-: To implement the preorder method of the LinkedBinaryTree class, we relied on the convenience of Python's generator syntax and the yield statement. Give an alternative implementation of preorder that returns an explicit instance of a nested iterator class. (See Section ___ for a discussion of iterators.)
C-: Algorithm preorder_draw draws a binary tree T by assigning x- and y-coordinates to each position p such that x(p) is the number of nodes preceding p in the preorder traversal of T and y(p) is the depth of p in T.
  a. Show that the drawing of T produced by preorder_draw has no pairs of crossing edges.
  b. Redraw the binary tree of Figure ___ using preorder_draw.
C-: Redo the previous problem for the algorithm postorder_draw that is similar to preorder_draw except that it assigns x(p) to be the number of nodes preceding position p in the postorder traversal.
C-: Design an algorithm for drawing general trees, using a style similar to the inorder traversal approach for drawing binary trees.
C-: Exercise ___ described the walk function of the os module. This function performs a traversal of the implicit tree represented by the file system. Read the formal documentation for the function, and in particular its use of an optional Boolean parameter named topdown. Describe how its behavior relates to tree traversal algorithms described in this chapter.
Figure ___: (a) Tree T; (b) indented parenthetic representation of T.

C-: The indented parenthetic representation of a tree T is a variation of the parenthetic representation of T (see Code Fragment ___) that uses indentation and line breaks as illustrated in Figure ___. Give an algorithm that prints this representation of a tree.
C-: Let T be a binary tree with n positions. Define a Roman position to be a position p in T, such that the number of descendants in p's left subtree differ from the number of descendants in p's right subtree by at most 5. Describe a linear-time method for finding each position p of T, such that p is not a Roman position, but all of p's descendants are Roman.
C-: Let T be a tree with n positions. Define the lowest common ancestor (LCA) between two positions p and q as the lowest position in T that has both p and q as descendants (where we allow a position to be a descendant of itself). Given two positions p and q, describe an efficient algorithm for finding the LCA of p and q. What is the running time of your algorithm?
C-: Let T be a binary tree with n positions, and, for any position p in T, let d_p denote the depth of p in T. The distance between two positions p and q in T is d_p + d_q - 2*d_a, where a is the lowest common ancestor (LCA) of p and q. The diameter of T is the maximum distance between two positions in T. Describe an efficient algorithm for finding the diameter of T. What is the running time of your algorithm?
C-: Suppose each position p of a binary tree T is labeled with its value f(p) in a level numbering of T. Design a fast method for determining f(a) for the lowest common ancestor (LCA), a, of two positions p and q in T, given f(p) and f(q). You do not need to find position a, just the value f(a).
C-: Give an alternative implementation of the build_expression_tree method of the ExpressionTree class that relies on recursion to perform an implicit Euler tour of the tree that is being built.
C-: Note that the build_expression_tree function of the ExpressionTree class is written in such a way that a leaf token can be any string; for example, it parses the expression '( *( + ))'. However, within the evaluate method, an error would occur when attempting to convert a leaf token to a number. Modify the evaluate method to accept an optional Python dictionary that can be used to map such string variables to numeric values, with a syntax such as T.evaluate({ : , : , : }). In this way, the same algebraic expression can be evaluated using different values.
C-: As mentioned in Exercise ___, postfix notation is an unambiguous way of writing an arithmetic expression without parentheses. It is defined so that if "(exp1) op (exp2)" is a normal (infix) fully parenthesized expression with operation op, then its postfix equivalent is "pexp1 pexp2 op", where pexp1 is the postfix version of exp1 and pexp2 is the postfix version of exp2. The postfix version of a single number or variable is just that number or variable. So, for example, the postfix version of the infix expression "((5+2)*(8-3))/4" is "5 2 + 8 3 - * 4 /". Implement a postfix method of the ExpressionTree class of Section ___ that produces the postfix notation for the given expression.

Projects

P-: Implement the binary tree ADT using the array-based representation described in Section ___.
P-: Implement the tree ADT using a linked structure as described in Section ___. Provide a reasonable set of update methods for your tree.
P-: The memory usage for the LinkedBinaryTree class can be streamlined by removing the parent reference from each node, and instead having each Position instance keep a member, _path, that is a list of nodes representing the entire path from the root to that position. (This generally saves memory because there are typically relatively few stored position instances.) Reimplement the LinkedBinaryTree class using this strategy.
P-: A slicing floor plan divides a rectangle with horizontal and vertical sides using horizontal and vertical cuts. (See Figure ___a.) A slicing floor plan can be represented by a proper binary tree, called a slicing tree, whose internal nodes represent the cuts, and whose external nodes represent the basic rectangles into which the floor plan is decomposed by the cuts. (See Figure ___b.) The compaction problem for a slicing floor plan is defined as follows. Assume that each basic rectangle of a slicing floor plan is assigned a minimum width w and a minimum height h. The compaction problem is to find the smallest possible height and width for each rectangle of the slicing floor plan that is compatible with the minimum dimensions of the basic rectangles.
Figure ___: (a) Slicing floor plan; (b) slicing tree associated with the floor plan.

  Namely, this problem requires the assignment of values w(p) and h(p) to each position p of the slicing tree such that:

    w(p) =
      w                   if p is a leaf whose basic rectangle has minimum width w
      max(w(l), w(r))     if p is an internal position, associated with a horizontal cut, with left child l and right child r
      w(l) + w(r)         if p is an internal position, associated with a vertical cut, with left child l and right child r

    h(p) =
      h                   if p is a leaf whose basic rectangle has minimum height h
      h(l) + h(r)         if p is an internal position, associated with a horizontal cut, with left child l and right child r
      max(h(l), h(r))     if p is an internal position, associated with a vertical cut, with left child l and right child r

  Design a data structure for slicing floor plans that supports the operations:
  * Create a floor plan consisting of a single basic rectangle.
  * Decompose a basic rectangle by means of a horizontal cut.
  * Decompose a basic rectangle by means of a vertical cut.
  * Assign minimum height and width to a basic rectangle.
  * Draw the slicing tree associated with the floor plan.
  * Compact and draw the floor plan.
P-: Write a program that can play tic-tac-toe effectively. (See Section ___.) To do this, you will need to create a game tree T, which is a tree where each position corresponds to a game configuration, which, in this case, is a representation of the tic-tac-toe board. (See Section ___.) The root corresponds to the initial configuration. For each internal position p in T, the children of p correspond to the game states we can reach from p's game state in a single legal move for the appropriate player, A (the first player) or B (the second player). Positions at even depths correspond to moves for A and positions at odd depths correspond to moves for B. Leaves are either final game states or are at a depth beyond which we do not want to explore. We score each leaf with a value that indicates how good this state is for player A. In large games, like chess, we have to use a heuristic scoring function, but for small games, like tic-tac-toe, we can construct the entire game tree and score leaves as +1, 0, -1, indicating whether player A has a win, draw, or lose in that configuration. A good algorithm for choosing moves is minimax. In this algorithm, we assign a score to each internal position p in T, such that if p represents A's turn, we compute p's score as the maximum of the scores of p's children (which corresponds to A's optimal play from p). If an internal node p represents B's turn, then we compute p's score as the minimum of the scores of p's children (which corresponds to B's optimal play from p).
P-: Implement the tree ADT using the binary tree representation described in Exercise ___. You may adapt the LinkedBinaryTree implementation.
P-: Write a program that takes as input a general tree T and a position p of T and converts T to another tree with the same set of position adjacencies, but now with p as its root.

Chapter Notes

Discussions of the classic preorder, inorder, and postorder tree traversal methods can be found in Knuth's Fundamental Algorithms book [ ]. The Euler tour traversal technique comes from the parallel algorithms community; it is introduced by Tarjan and Vishkin [ ] and is discussed by JaJa [ ] and by Karp and Ramachandran [ ]. The algorithm for drawing a tree is generally considered to be part of the "folklore" of graph-drawing algorithms. The reader interested in graph drawing is referred to the book by Di Battista, Eades, Tamassia, and Tollis [ ] and the survey by Tamassia and Liotta [ ]. The puzzle in Exercise ___ was communicated by Micha Sharir.
Priority Queues

Contents
  The Priority Queue Abstract Data Type
    Priorities
    The Priority Queue ADT
  Implementing a Priority Queue
    The Composition Design Pattern
    Implementation with an Unsorted List
    Implementation with a Sorted List
  Heaps
    The Heap Data Structure
    Implementing a Priority Queue with a Heap
    Array-Based Representation of a Complete Binary Tree
    Python Heap Implementation
    Analysis of a Heap-Based Priority Queue
    Bottom-Up Heap Construction
    Python's heapq Module
  Sorting with a Priority Queue
    Selection-Sort and Insertion-Sort
    Heap-Sort
  Adaptable Priority Queues
    Locators
    Implementing an Adaptable Priority Queue
  Exercises
The Priority Queue Abstract Data Type

Priorities

In an earlier chapter we introduced the queue ADT as a collection of objects that are added and removed according to the first-in, first-out (FIFO) principle. A company's customer call center embodies such a model in which waiting customers are told "calls will be answered in the order that they were received." In that setting, a new call is added to the back of the queue, and each time a customer service representative becomes available, he or she is connected with the call that is removed from the front of the call queue.

In practice, there are many applications in which a queue-like structure is used to manage objects that must be processed in some way, but for which the first-in, first-out policy does not suffice. Consider, for example, an air-traffic control center that has to decide which flight to clear for landing from among many approaching the airport. This choice may be influenced by factors such as each plane's distance from the runway, time spent waiting in a holding pattern, or amount of remaining fuel. It is unlikely that the landing decisions are based purely on a FIFO policy.

There are other situations in which a "first come, first serve" policy might seem reasonable, yet for which other priorities come into play. To use another airline analogy, suppose a certain flight is fully booked an hour prior to departure. Because of the possibility of cancellations, the airline maintains a queue of standby passengers hoping to get a seat. Although the priority of a standby passenger is influenced by the check-in time of that passenger, other considerations include the fare paid and frequent-flyer status. So it may be that an available seat is given to a passenger who has arrived later than another, if such a passenger is assigned a better priority by the airline agent.

In this chapter, we introduce a new abstract data type known as a priority queue. This is a collection of prioritized elements that allows arbitrary element insertion, and allows the removal of the element that has first priority. When an element is added to a priority queue, the user designates its priority by providing an associated key. The element with the minimum key will be the next to be removed from the queue (thus, an element with key 1 will be given priority over an element with key 2). Although it is quite common for priorities to be expressed numerically, any Python object may be used as a key, as long as the object type supports a consistent meaning for the test a < b, for any instances a and b, so as to define a natural order of the keys. With such generality, applications may develop their own notion of priority for each element. For example, different financial analysts may assign different ratings (i.e., priorities) to a particular asset, such as a share of stock.
The Priority Queue ADT

Formally, we model an element and its priority as a key-value pair. We define the priority queue ADT to support the following methods for a priority queue P:

P.add(k, v): Insert an item with key k and value v into priority queue P.
P.min(): Return a tuple, (k,v), representing the key and value of an item in priority queue P with minimum key (but do not remove the item); an error occurs if the priority queue is empty.
P.remove_min(): Remove an item with minimum key from priority queue P, and return a tuple, (k,v), representing the key and value of the removed item; an error occurs if the priority queue is empty.
P.is_empty(): Return True if priority queue P does not contain any items.
len(P): Return the number of items in priority queue P.

A priority queue may have multiple entries with equivalent keys, in which case methods min and remove_min may report an arbitrary choice of item having minimum key. Values may be any type of object.

In our initial model for a priority queue, we assume that an element's key remains fixed once it has been added to a priority queue. In a later section we consider an extension that allows a user to update an element's key within the priority queue.

Example: The following table shows a series of operations and their effects on an initially empty priority queue P. The "Priority Queue" column is somewhat deceiving since it shows the entries as tuples and sorted by key. Such an internal representation is not required of a priority queue.

Operation         Return Value   Priority Queue
P.add(5,A)                       {(5,A)}
P.add(9,C)                       {(5,A), (9,C)}
P.add(3,B)                       {(3,B), (5,A), (9,C)}
P.add(7,D)                       {(3,B), (5,A), (7,D), (9,C)}
P.min()           (3,B)          {(3,B), (5,A), (7,D), (9,C)}
P.remove_min()    (3,B)          {(5,A), (7,D), (9,C)}
P.remove_min()    (5,A)          {(7,D), (9,C)}
len(P)            2              {(7,D), (9,C)}
P.remove_min()    (7,D)          {(9,C)}
P.remove_min()    (9,C)          {}
P.is_empty()      True           {}
P.remove_min()    "error"        {}
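The same sequence of operations can be exercised directly in Python. This is a minimal sketch assuming one of the concrete priority queue classes developed later in this chapter (here the HeapPriorityQueue) is in scope; the ADT behavior is identical regardless of which implementation backs it.

P = HeapPriorityQueue()       # any implementation of the priority queue ADT would do
P.add(5, 'A')
P.add(9, 'C')
P.add(3, 'B')
P.add(7, 'D')
print(P.min())                # (3, 'B'), reported but not removed
print(P.remove_min())         # (3, 'B')
print(P.remove_min())         # (5, 'A')
print(len(P))                 # 2
print(P.remove_min())         # (7, 'D')
print(P.remove_min())         # (9, 'C')
print(P.is_empty())           # True
P.remove_min()                # raises the Empty exception, since no items remain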
Implementing a Priority Queue

In this section, we show how to implement a priority queue by storing its entries in a positional list L. (See an earlier section.) We provide two realizations, depending on whether or not we keep the entries in L sorted by key.

The Composition Design Pattern

One challenge in implementing a priority queue is that we must keep track of both an element and its key, even as items are relocated within our data structure. This is reminiscent of a case study from an earlier section in which we maintain access counts with each element. In that setting, we introduced the composition design pattern, defining an _Item class that assured that each element remained paired with its associated count in our primary data structure.

For priority queues, we will use composition to store items internally as pairs consisting of a key k and a value v. To implement this concept for all priority queue implementations, we provide a PriorityQueueBase class (see the code fragment below) that includes a definition for a nested class named _Item. We define the syntax a < b for item instances a and b to be based upon the keys.

class PriorityQueueBase:
  """Abstract base class for a priority queue."""

  class _Item:
    """Lightweight composite to store priority queue items."""
    __slots__ = '_key', '_value'

    def __init__(self, k, v):
      self._key = k
      self._value = v

    def __lt__(self, other):
      return self._key < other._key      # compare items based on their keys

  def is_empty(self):            # concrete method assuming abstract __len__
    """Return True if the priority queue is empty."""
    return len(self) == 0

Code Fragment: A PriorityQueueBase class with a nested _Item class that composes a key and a value into a single object. For convenience, we provide a concrete implementation of is_empty that is based on a presumed __len__ implementation.
Implementation with an Unsorted List

In our first concrete implementation of a priority queue, we store entries within an unsorted list. Our UnsortedPriorityQueue class is given in the code fragment below, inheriting from the PriorityQueueBase class introduced above. For internal storage, key-value pairs are represented as composites, using instances of the inherited _Item class. These items are stored within a PositionalList, identified as the _data member of our class. We assume that the positional list is implemented with a doubly-linked list, as in an earlier section, so that all operations of that ADT execute in O(1) time.

We begin with an empty list when a new priority queue is constructed. At all times, the size of the list equals the number of key-value pairs currently stored in the priority queue. For this reason, our priority queue __len__ method simply returns the length of the internal _data list. By the design of our PriorityQueueBase class, we inherit a concrete implementation of the is_empty method that relies on a call to our __len__ method.

Each time a key-value pair is added to the priority queue, via the add method, we create a new _Item composite for the given key and value, and add that item to the end of the list. Such an implementation takes O(1) time.

The remaining challenge is that when min or remove_min is called, we must locate the item with minimum key. Because the items are not sorted, we must inspect all entries to find one with a minimum key. For convenience, we define a nonpublic _find_min utility that returns the position of an item with minimum key. Knowledge of the position allows the remove_min method to invoke the delete method on the positional list. The min method simply uses the position to retrieve the item when preparing a key-value tuple to return. Due to the loop for finding the minimum key, both min and remove_min methods run in O(n) time, where n is the number of entries in the priority queue.

A summary of the running times for the UnsortedPriorityQueue class is given in the table below.

Operation      Running Time
len            O(1)
is_empty       O(1)
add            O(1)
min            O(n)
remove_min     O(n)

Table: Worst-case running times of the methods of a priority queue of size n, realized by means of an unsorted, doubly linked list. The space requirement is O(n).
class UnsortedPriorityQueue(PriorityQueueBase):   # base class defines _Item
  """A min-oriented priority queue implemented with an unsorted list."""

  def _find_min(self):                  # nonpublic utility
    """Return Position of item with minimum key."""
    if self.is_empty():                 # is_empty inherited from base class
      raise Empty('Priority queue is empty')
    small = self._data.first()
    walk = self._data.after(small)
    while walk is not None:
      if walk.element() < small.element():
        small = walk
      walk = self._data.after(walk)
    return small

  def __init__(self):
    """Create a new empty Priority Queue."""
    self._data = PositionalList()

  def __len__(self):
    """Return the number of items in the priority queue."""
    return len(self._data)

  def add(self, key, value):
    """Add a key-value pair."""
    self._data.add_last(self._Item(key, value))

  def min(self):
    """Return but do not remove (k,v) tuple with minimum key."""
    p = self._find_min()
    item = p.element()
    return (item._key, item._value)

  def remove_min(self):
    """Remove and return (k,v) tuple with minimum key."""
    p = self._find_min()
    item = self._data.delete(p)
    return (item._key, item._value)

Code Fragment: An implementation of a priority queue using an unsorted list. The parent class PriorityQueueBase is given in the preceding code fragment, and the PositionalList class is from an earlier section.
Implementation with a Sorted List

An alternative implementation of a priority queue uses a positional list, yet maintaining entries sorted by nondecreasing keys. This ensures that the first element of the list is an entry with the smallest key.

Our SortedPriorityQueue class is given in the code fragment below. The implementation of min and remove_min are rather straightforward given knowledge that the first element of a list has a minimum key. We rely on the first method of the positional list to find the position of the first item, and the delete method to remove the entry from the list. Assuming that the list is implemented with a doubly linked list, operations min and remove_min take O(1) time.

This benefit comes at a cost, however, for method add now requires that we scan the list to find the appropriate position to insert the new item. Our implementation starts at the end of the list, walking backward until the new key is smaller than an existing item; in the worst case, it progresses until reaching the front of the list. Therefore, the add method takes O(n) worst-case time, where n is the number of entries in the priority queue at the time the method is executed. In summary, when using a sorted list to implement a priority queue, insertion runs in linear time, whereas finding and removing the minimum can be done in constant time.

Comparing the Two List-Based Implementations

The table below compares the running times of the methods of a priority queue realized by means of a sorted and unsorted list, respectively. We see an interesting tradeoff when we use a list to implement the priority queue ADT. An unsorted list supports fast insertions but slow queries and deletions, whereas a sorted list allows fast queries and deletions, but slow insertions.

Operation      Unsorted List   Sorted List
len            O(1)            O(1)
is_empty       O(1)            O(1)
add            O(1)            O(n)
min            O(n)            O(1)
remove_min     O(n)            O(1)

Table: Worst-case running times of the methods of a priority queue of size n, realized by means of an unsorted or sorted list, respectively. We assume that the list is implemented by a doubly linked list. The space requirement is O(n).
class SortedPriorityQueue(PriorityQueueBase):   # base class defines _Item
  """A min-oriented priority queue implemented with a sorted list."""

  def __init__(self):
    """Create a new empty Priority Queue."""
    self._data = PositionalList()

  def __len__(self):
    """Return the number of items in the priority queue."""
    return len(self._data)

  def add(self, key, value):
    """Add a key-value pair."""
    newest = self._Item(key, value)            # make new item instance
    walk = self._data.last()                   # walk backward looking for smaller key
    while walk is not None and newest < walk.element():
      walk = self._data.before(walk)
    if walk is None:
      self._data.add_first(newest)             # new key is smallest
    else:
      self._data.add_after(walk, newest)       # newest goes after walk

  def min(self):
    """Return but do not remove (k,v) tuple with minimum key."""
    if self.is_empty():
      raise Empty('Priority queue is empty.')
    p = self._data.first()
    item = p.element()
    return (item._key, item._value)

  def remove_min(self):
    """Remove and return (k,v) tuple with minimum key."""
    if self.is_empty():
      raise Empty('Priority queue is empty.')
    item = self._data.delete(self._data.first())
    return (item._key, item._value)

Code Fragment: An implementation of a priority queue using a sorted list. The parent class PriorityQueueBase is given in an earlier code fragment, and the PositionalList class is from an earlier section.
Heaps

The two strategies for implementing a priority queue ADT in the previous section demonstrate an interesting trade-off. When using an unsorted list to store entries, we can perform insertions in O(1) time, but finding or removing an element with minimum key requires an O(n)-time loop through the entire collection. In contrast, if using a sorted list, we can trivially find or remove the minimum element in O(1) time, but adding a new element to the queue may require O(n) time to restore the sorted order.

In this section, we provide a more efficient realization of a priority queue using a data structure called a binary heap. This data structure allows us to perform both insertions and removals in logarithmic time, which is a significant improvement over the list-based implementations discussed in the previous section. The fundamental way the heap achieves this improvement is to use the structure of a binary tree to find a compromise between elements being entirely unsorted and perfectly sorted.

The Heap Data Structure

A heap (see the figure below) is a binary tree T that stores a collection of items at its positions and that satisfies two additional properties: a relational property defined in terms of the way keys are stored in T, and a structural property defined in terms of the shape of T itself. The relational property is the following:

Heap-Order Property: In a heap T, for every position p other than the root, the key stored at p is greater than or equal to the key stored at p's parent.

As a consequence of the heap-order property, the keys encountered on a path from the root to a leaf of T are in nondecreasing order. Also, a minimum key is always stored at the root of T. This makes it easy to locate such an item when min or remove_min is called, as it is informally said to be "at the top of the heap" (hence, the name "heap" for the data structure). By the way, the heap data structure defined here has nothing to do with the memory heap (see an earlier section) used in the run-time environment supporting a programming language like Python.

For the sake of efficiency, as will become clear later, we want the heap to have as small a height as possible. We enforce this requirement by insisting that the heap T satisfy an additional structural property: it must be what we term complete.

Complete Binary Tree Property: A heap T with height h is a complete binary tree if levels 0, 1, 2, ..., h-1 of T have the maximum number of nodes possible (namely, level i has 2^i nodes, for 0 <= i <= h-1) and the remaining nodes at level h reside in the leftmost possible positions at that level.
Figure: Example of a heap storing 13 entries with integer keys. The last position is the one storing entry (13,W).

The tree in the figure above is complete because levels 0, 1, and 2 are full, and the six nodes in level 3 are in the six leftmost possible positions at that level. In formalizing what we mean by the leftmost possible positions, we refer to the discussion of level numbering from an earlier section, in the context of an array-based representation of a binary tree. (In fact, in a later section we will discuss the use of an array to represent a heap.) A complete binary tree with n elements is one that has positions with level numbering 0 through n-1. For example, in an array-based representation of the above tree, its 13 entries would be stored consecutively from A[0] to A[12].

The Height of a Heap

Let h denote the height of T. Insisting that T be complete also has an important consequence, as shown in the proposition below.

Proposition: A heap T storing n entries has height h = floor(log n).

Justification: From the fact that T is complete, we know that the number of nodes in levels 0 through h-1 of T is precisely 1 + 2 + 4 + ... + 2^(h-1) = 2^h - 1, and that the number of nodes in level h is at least 1 and at most 2^h. Therefore

  n >= 2^h - 1 + 1 = 2^h    and    n <= 2^h - 1 + 2^h = 2^(h+1) - 1.

By taking the logarithm of both sides of the inequality 2^h <= n, we see that height h <= log n. By rearranging terms and taking the logarithm of both sides of the inequality n <= 2^(h+1) - 1, we see that log(n+1) - 1 <= h. Since h is an integer, these two inequalities imply that h = floor(log n).
Implementing a Priority Queue with a Heap

The proposition above has an important consequence, for it implies that if we can perform update operations on a heap in time proportional to its height, then those operations will run in logarithmic time. Let us therefore turn to the problem of how to efficiently perform various priority queue methods using a heap.

We will use the composition pattern from earlier to store key-value pairs as items in the heap. The __len__ and is_empty methods can be implemented based on examination of the tree, and the min operation is equally trivial because the heap property assures that the element at the root of the tree has a minimum key. The interesting algorithms are those for implementing the add and remove_min methods.

Adding an Item to the Heap

Let us consider how to perform add(k,v) on a priority queue implemented with a heap T. We store the pair (k,v) as an item at a new node of the tree. To maintain the complete binary tree property, that new node should be placed at a position p just beyond the rightmost node at the bottom level of the tree, or as the leftmost position of a new level, if the bottom level is already full (or if the heap is empty).

Up-Heap Bubbling After an Insertion

After this action, the tree T is complete, but it may violate the heap-order property. Hence, unless position p is the root of T (that is, the priority queue was empty before the insertion), we compare the key at position p to that of p's parent, which we denote as q. If key k_p >= k_q, the heap-order property is satisfied and the algorithm terminates. If instead k_p < k_q, then we need to restore the heap-order property, which can be locally achieved by swapping the entries stored at positions p and q. (See the figure below.) This swap causes the new item to move up one level. Again, the heap-order property may be violated, so we repeat the process, going up in T until no violation of the heap-order property occurs.

The upward movement of the newly inserted entry by means of swaps is conventionally called up-heap bubbling. A swap either resolves the violation of the heap-order property or propagates it one level up in the heap. In the worst case, up-heap bubbling causes the new entry to move all the way up to the root of heap T. Thus, in the worst case, the number of swaps performed in the execution of method add is equal to the height of T. By the proposition above, that bound is floor(log n).
Figure: Insertion of a new entry into the heap of the preceding figure: (a) initial heap; (b) after performing operation add; (c and d) swap to locally restore the partial order property; (e and f) another swap; (g and h) final swap.
Removing the Item with Minimum Key

Let us now turn to method remove_min of the priority queue ADT. We know that an entry with the smallest key is stored at the root r of T (even if there is more than one entry with the smallest key). However, in general we cannot simply delete node r, because this would leave two disconnected subtrees.

Instead, we ensure that the shape of the heap respects the complete binary tree property by deleting the leaf at the last position p of T, defined as the rightmost position at the bottommost level of the tree. To preserve the item from the last position p, we copy it to the root r (in place of the item with minimum key that is being removed by the operation). The figure below illustrates an example of these steps, with minimal item (4,C) being removed from the root and replaced by item (13,W) from the last position. The node at the last position is removed from the tree.

Down-Heap Bubbling After a Removal

We are not yet done, however, for even though T is now complete, it likely violates the heap-order property. If T has only one node (the root), then the heap-order property is trivially satisfied and the algorithm terminates. Otherwise, we distinguish two cases, where p initially denotes the root of T:

* If p has no right child, let c be the left child of p.
* Otherwise (p has both children), let c be a child of p with minimal key.

If key k_p <= k_c, the heap-order property is satisfied and the algorithm terminates. If instead k_p > k_c, then we need to restore the heap-order property. This can be locally achieved by swapping the entries stored at p and c. It is worth noting that when p has two children, we intentionally consider the smaller key of the two children. Not only is the key of c smaller than that of p, it is at least as small as the key at c's sibling. This ensures that the heap-order property is locally restored when that smaller key is promoted above the key that had been at p and that at c's sibling.

Having restored the heap-order property for node p relative to its children, there may be a violation of this property at c; hence, we may have to continue swapping down T until no violation of the heap-order property occurs. This downward swapping process is called down-heap bubbling. A swap either resolves the violation of the heap-order property or propagates it one level down in the heap. In the worst case, an entry moves all the way down to the bottom level. Thus, the number of swaps performed in the execution of method remove_min is, in the worst case, equal to the height of heap T, that is, it is floor(log n) by the proposition above.
Figure: Removal of the entry with the smallest key from a heap: (a and b) deletion of the last node, whose entry gets stored into the root; (c and d) swap to locally restore the heap-order property; (e and f) another swap; (g and h) final swap.
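Before looking at the book's full class, the swapping mechanics can be illustrated in isolation. The following is a minimal, illustrative sketch (not the book's implementation) that applies up-heap and down-heap bubbling to a plain Python list of keys, using the 0-indexed level numbering formalized in the next subsection, where the parent of index j sits at index (j-1)//2.

def upheap(keys, j):
  """Illustrative up-heap bubbling of keys[j] toward the root."""
  while j > 0 and keys[j] < keys[(j - 1) // 2]:
    parent = (j - 1) // 2
    keys[j], keys[parent] = keys[parent], keys[j]    # swap with parent
    j = parent                                       # continue from parent's index

def downheap(keys, j):
  """Illustrative down-heap bubbling of keys[j], always descending to the smaller child."""
  n = len(keys)
  while 2 * j + 1 < n:                               # while index j has at least a left child
    small = 2 * j + 1
    if small + 1 < n and keys[small + 1] < keys[small]:
      small = small + 1                              # right child holds the smaller key
    if keys[j] <= keys[small]:
      break                                          # heap-order restored
    keys[j], keys[small] = keys[small], keys[j]
    j = small

heap = [4, 5, 6, 15, 9, 7, 20]    # keys of a complete tree, stored level by level
heap.append(2)                    # add: place the new key at the next leaf position ...
upheap(heap, len(heap) - 1)       # ... and bubble it up; heap[0] is now 2

heap[0] = heap.pop()              # remove_min: move the last key to the root (old min discarded) ...
downheap(heap, 0)                 # ... and bubble it down; heap[0] is now 4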
Array-Based Representation of a Complete Binary Tree

The array-based representation of a binary tree (described in an earlier section) is especially suitable for a complete binary tree T. We recall that in this implementation, the elements of T are stored in an array-based list A such that the element at position p in T is stored in A with index equal to the level number f(p) of p, defined as follows:

* If p is the root of T, then f(p) = 0.
* If p is the left child of position q, then f(p) = 2 f(q) + 1.
* If p is the right child of position q, then f(p) = 2 f(q) + 2.

With this implementation, the elements of T have contiguous indices in the range [0, n-1] and the last position of T is always at index n-1, where n is the number of positions of T. For example, the figure below illustrates the array-based representation of the heap structure portrayed earlier.

Figure: An array-based representation of the heap from the earlier figure.

Implementing a priority queue using an array-based heap representation allows us to avoid some complexities of a node-based tree structure. In particular, the add and remove_min operations of a priority queue both depend on locating the last index of a heap of size n. With the array-based representation, the last position is at index n-1 of the array. Locating the last position of a complete binary tree implemented with a linked structure requires more effort. (See Exercise ___.)

If the size of a priority queue is not known in advance, use of an array-based representation does introduce the need to dynamically resize the array on occasion, as is done with a Python list. The space usage of such an array-based representation of a complete binary tree with n nodes is O(n), and the time bounds of methods for adding or removing elements become amortized. (See an earlier section.)

Python Heap Implementation

We provide a Python implementation of a heap-based priority queue in the two code fragments that follow. We use an array-based representation, maintaining a Python list of _Item composites. Although we do not formally use the binary tree ADT, the first code fragment includes nonpublic utility functions that compute the level numbering of a parent or child of another. This allows us to describe the rest of our algorithms using tree-like terminology of parent, left, and right. However, the relevant variables are integer indexes (not "position" objects). We use recursion to implement the repetition in the _upheap and _downheap utilities.
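The level-numbering arithmetic can be checked directly on a plain Python list. The following small sketch (an illustration, not part of the book's class) verifies the heap-order property of an array-based heap by comparing each index with its parent index (j-1)//2.

def is_min_heap(A):
  """Return True if list A satisfies the heap-order property under the level numbering above."""
  return all(A[(j - 1) // 2] <= A[j] for j in range(1, len(A)))

# keys of a complete binary tree stored level by level, left to right
print(is_min_heap([4, 5, 6, 15, 9, 7, 20, 16, 25]))   # True
print(is_min_heap([4, 5, 6, 3]))                       # False: index 3 is smaller than its parent at index 1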
22,031 | class HeapPriorityQueue(PriorityQueueBase):   # base class defines _Item
  """A min-oriented priority queue implemented with a binary heap."""

  #------------------------------ nonpublic behaviors ------------------------------
  def _parent(self, j):
    return (j - 1) // 2

  def _left(self, j):
    return 2*j + 1

  def _right(self, j):
    return 2*j + 2

  def _has_left(self, j):
    return self._left(j) < len(self._data)      # index beyond end of list?

  def _has_right(self, j):
    return self._right(j) < len(self._data)     # index beyond end of list?

  def _swap(self, i, j):
    """Swap the elements at indices i and j of the array."""
    self._data[i], self._data[j] = self._data[j], self._data[i]

  def _upheap(self, j):
    parent = self._parent(j)
    if j > 0 and self._data[j] < self._data[parent]:
      self._swap(j, parent)
      self._upheap(parent)                       # recur at position of parent

  def _downheap(self, j):
    if self._has_left(j):
      left = self._left(j)
      small_child = left                         # although right may be smaller
      if self._has_right(j):
        right = self._right(j)
        if self._data[right] < self._data[left]:
          small_child = right
      if self._data[small_child] < self._data[j]:
        self._swap(j, small_child)
        self._downheap(small_child)              # recur at position of small child

Code fragment: an implementation of a priority queue using an array-based heap (continued in the next code fragment). This extends the PriorityQueueBase class presented earlier.
22,032 |   #------------------------------ public behaviors ------------------------------
  def __init__(self):
    """Create a new empty Priority Queue."""
    self._data = []

  def __len__(self):
    """Return the number of items in the priority queue."""
    return len(self._data)

  def add(self, key, value):
    """Add a key-value pair to the priority queue."""
    self._data.append(self._Item(key, value))
    self._upheap(len(self._data) - 1)            # upheap newly added position

  def min(self):
    """Return but do not remove (k,v) tuple with minimum key.

    Raise Empty exception if empty.
    """
    if self.is_empty():
      raise Empty('Priority queue is empty.')
    item = self._data[0]
    return (item._key, item._value)

  def remove_min(self):
    """Remove and return (k,v) tuple with minimum key.

    Raise Empty exception if empty.
    """
    if self.is_empty():
      raise Empty('Priority queue is empty.')
    self._swap(0, len(self._data) - 1)           # put minimum item at the end
    item = self._data.pop()                      # and remove it from the list
    self._downheap(0)                            # then fix new root
    return (item._key, item._value)

Code fragment: an implementation of a priority queue using an array-based heap (continued from the preceding code fragment).
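As a quick illustration of the class above, the following hedged usage sketch exercises the public behaviors; it assumes the class (and the Empty exception and is_empty method inherited from its base class) can be imported from a module named priority_queues, a name chosen here purely for illustration.

# minimal usage sketch; the module name 'priority_queues' is assumed
from priority_queues import HeapPriorityQueue

pq = HeapPriorityQueue()
pq.add(5, 'A')
pq.add(9, 'C')
pq.add(3, 'B')
pq.add(7, 'D')
print(pq.min())          # (3, 'B')  -- the smallest key sits at the root
print(pq.remove_min())   # (3, 'B')
print(pq.remove_min())   # (5, 'A')
print(len(pq))           # 2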
22,033 | Analysis of a Heap-Based Priority Queue

The table below shows the running time of the priority queue ADT methods for the heap implementation of a priority queue, assuming that two keys can be compared in O(1) time and that the heap is implemented with either an array-based or linked tree representation.

In short, each of the priority queue ADT methods can be performed in O(1) or in O(log n) time, where n is the number of entries at the time the method is executed. The analysis of the running time of the methods is based on the following:

- The heap T has n nodes, each storing a reference to a key-value pair.
- The height of T is O(log n), since T is complete (by the earlier proposition).
- The min operation runs in O(1) because the root of the tree contains such an element.
- Locating the last position of a heap, as required for add and remove_min, can be performed in O(1) time for an array-based representation, or O(log n) time for a linked-tree representation (see the exercises).
- In the worst case, up-heap and down-heap bubbling perform a number of swaps equal to the height of T.

  Operation               Running Time
  len(P), P.is_empty()    O(1)
  P.min()                 O(1)
  P.add()                 O(log n)*
  P.remove_min()          O(log n)*
  *amortized, if array-based

Table: performance of a priority queue, P, realized by means of a heap. We let n denote the number of entries in the priority queue at the time an operation is executed. The space requirement is O(n). The running times of operations add and remove_min are amortized for an array-based representation, due to occasional resizing of a dynamic array; those bounds are worst case with a linked tree structure.

We conclude that the heap data structure is a very efficient realization of the priority queue ADT, independent of whether the heap is implemented with a linked structure or an array. The heap-based implementation achieves fast running times for both insertion and removal, unlike the implementations that were based on using an unsorted or sorted list.
22,034 | Bottom-Up Heap Construction

If we start with an initially empty heap, n successive calls to the add operation will run in O(n log n) time in the worst case. However, if all n key-value pairs to be stored in the heap are given in advance, such as during the first phase of the heap-sort algorithm, there is an alternative bottom-up construction method that runs in O(n) time. (Heap-sort, however, still requires Θ(n log n) time because of the second phase, in which we repeatedly remove the remaining element with smallest key.)

In this section, we describe the bottom-up heap construction, and provide an implementation that can be used by the constructor of a heap-based priority queue.

For simplicity of exposition, we describe this bottom-up heap construction assuming the number of keys, n, is an integer such that n = 2^(h+1) - 1. That is, the heap is a complete binary tree with every level being full, so the heap has height h = log(n + 1) - 1. Viewed nonrecursively, bottom-up heap construction consists of the following h + 1 = log(n + 1) steps:

1. In the first step, we construct (n + 1)/2 elementary heaps storing one entry each.
2. In the second step, we form (n + 1)/4 heaps, each storing three entries, by joining pairs of elementary heaps and adding a new entry. The new entry is placed at the root and may have to be swapped with the entry stored at a child to preserve the heap-order property.
3. In the third step, we form (n + 1)/8 heaps, each storing 7 entries, by joining pairs of 3-entry heaps (constructed in the previous step) and adding a new entry. The new entry is placed initially at the root, but may have to move down with a down-heap bubbling to preserve the heap-order property.
   ...
i. In the generic ith step, 2 <= i <= h, we form (n + 1)/2^i heaps, each storing 2^i - 1 entries, by joining pairs of heaps storing 2^(i-1) - 1 entries (constructed in the previous step) and adding a new entry. The new entry is placed initially at the root, but may have to move down with a down-heap bubbling to preserve the heap-order property.
   ...
h + 1. In the last step, we form the final heap, storing all n entries, by joining two heaps storing (n - 1)/2 entries (constructed in the previous step) and adding a new entry. The new entry is placed initially at the root, but may have to move down with a down-heap bubbling to preserve the heap-order property.

We illustrate bottom-up heap construction in the accompanying figure.
22,035 | [figure omitted: bottom-up construction of a heap. (a and b) we begin by constructing 1-entry heaps on the bottom level; (c and d) we combine these heaps into 3-entry heaps, and then (e and f) 7-entry heaps, until (g and h) we create the final heap. The paths of the down-heap bubblings are highlighted in (d), (f), and (h). For simplicity, only the key within each node is shown instead of the entire entry.]
22,036 | Python Implementation of Bottom-Up Heap Construction

Implementing bottom-up heap construction is quite easy, given the existence of a "down-heap" utility function. The "merging" of two equally sized heaps that are subtrees of a common position p, as described in the opening of this section, can be accomplished simply by down-heaping p's entry. For example, that is what happened to the root key in going from part (f) to part (g) of the figure above.

With our array-based representation of a heap, if we initially store all n items in arbitrary order within the array, we can implement the bottom-up heap construction process with a single loop that makes a call to _downheap from each position of the tree, as long as those calls are ordered starting with the deepest level and ending with the root of the tree. In fact, that loop can start with the deepest nonleaf, since there is no effect when down-heap is called at a leaf position.

In the code fragment below, we augment the original HeapPriorityQueue class to provide support for the bottom-up construction of an initial collection. We introduce a nonpublic utility method, _heapify, that calls _downheap on each nonleaf position, beginning with the deepest and concluding with a call at the root of the tree. We have redesigned the constructor of the class to accept an optional parameter that can be any sequence of (k,v) tuples. Rather than initializing self._data to an empty list, we use a list comprehension to create an initial list of item composites based on the given contents. We declare an empty sequence as the default parameter value so that the default syntax HeapPriorityQueue() continues to result in an empty priority queue.

  def __init__(self, contents=()):
    """Create a new priority queue.

    By default, queue will be empty. If contents is given, it should be as an
    iterable sequence of (k,v) tuples specifying the initial contents.
    """
    self._data = [self._Item(k, v) for k, v in contents]   # empty by default
    if len(self._data) > 1:
      self._heapify()

  def _heapify(self):
    start = self._parent(len(self) - 1)          # start at PARENT of last leaf
    for j in range(start, -1, -1):               # going to and including the root
      self._downheap(j)

Code fragment: revision to the HeapPriorityQueue class (from the two preceding code fragments) to support linear-time construction given an initial sequence of entries.
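To illustrate the revised constructor, here is a brief, hedged usage sketch; it again assumes the class is importable from a hypothetical priority_queues module.

from priority_queues import HeapPriorityQueue

# bottom-up construction from an existing collection of (key, value) pairs
contents = [(9, 'I'), (4, 'D'), (7, 'G'), (1, 'A'), (5, 'E')]
pq = HeapPriorityQueue(contents)      # O(n) heapify rather than n successive adds

while not pq.is_empty():
  print(pq.remove_min())              # keys come out in nondecreasing order: 1, 4, 5, 7, 9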
22,037 | asymptotic analysis of bottom-up heap construction bottom-up heap construction is asymptotically faster than incrementally inserting keys into an initially empty heap intuitivelywe are performing single downheap operation at each position in the treerather than single up-heap operation from each since more nodes are closer to the bottom of tree than the topthe sum of the downward paths is linearas shown in the following proposition proposition bottom-up construction of heap with entries takes (ntimeassuming two keys can be compared in ( time justificationthe primary cost of the construction is due to the down-heap steps performed at each nonleaf position let pv denote the path of from nonleaf node to its "inorder successorleafthat isthe path that starts at vgoes to the right child of vand then goes down leftward until it reaches leaf althoughpv is not necessarily the path followed by the down-heap bubbling step from vthe length pv (its number of edgesis proportional to the height of the subtree rooted at vand thus bound on the complexity of the down-heap operation at we can bound the total running time of the bottom-up heap construction algorithm based on the sum of the sizes of pathsv pv for intuitionfigure illustrates the justification "visually,marking each edge with the label of the nonleaf node whose path pv contains that edge we claim that the paths pv for all nonleaf are edge-disjointand thus the sum of the path lengths is bounded by the number of total edges in the treehence (nto show thiswe consider what we term "right-leaningand "left-leaningedges ( those going from parent to rightrespectively leftchilda particular rightleaning edge can only be part of the path pv for node that is the parent in the relationship represented by left-leaning edges can be partitioned by considering the leaf that is reached if continuing down leftward until reaching leaf each nonleaf node only uses left-leaning edges in the group leading to that nonleaf node' inorder successor since each nonleaf node must have different inorder successorno two such paths can contain the same left-leaning edge we conclude that the bottom-up construction of heap takes (ntime figure visual justification of the linear running time of bottom-up heap construction each edge is labeled with node for which pv contains (if any |
22,038 | python' heapq module python' standard distribution includes heapq module that provides support for heap-based priority queues that module does not provide any priority queue classinstead it provides functions that allow standard python list to be managed as heap its model is essentially the same as our ownwith elements stored in list cells [ through [ ]based on the level-numbering indices with the smallest element at the root in [ we note that heapq does not separately manage associated valueselements serve as their own key the heapq module supports the following functionsall of which presume that existing list satisfies the heap-order property prior to the callheappush(le)push element onto list and restore the heap-order property the function executes in (log ntime heappop( )pop and return the element with smallest value from list land reestablish the heap-order property the operation executes in (log ntime heappushpop(le)push element on list and then pop and return the smallest item the time is (log )but it is slightly more efficient than separate calls to push and pop because the size of the list never changes if the newly pushed element becomes the smallestit is immediately returned otherwisethe new element takes the place of the popped element at the root and down-heap is performed heapreplace(le)similar to heappushpopbut equivalent to the pop being performed before the push (in other wordsthe new element cannot be returned as the smallestagainthe time is (log )but it is more efficient that two separate operations the module supports additional functions that operate on sequences that do not previously satisfy the heap-order property heapify( )transform unordered list to satisfy the heap-order property this executes in (ntime by using the bottom-up construction algorithm nlargest(kiterable)produce list of the largest values from given iterable this can be implemented to run in ( log ntimewhere we use to denote the length of the iterable (see exercise - nsmallest(kiterable)produce list of the smallest values from given iterable this can be implemented to run in ( log ntimeusing similar technique as with nlargest |
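To make the heapq interface concrete, the following self-contained snippet demonstrates several of the functions described above; since heapq manages a plain Python list, no special class is needed, and the sample values are arbitrary.

import heapq

data = [9, 5, 7, 1, 8, 3]
heapq.heapify(data)                 # bottom-up construction, O(n); data now satisfies heap order
heapq.heappush(data, 2)             # O(log n)
print(data[0])                      # 1  -- the smallest element is always at index 0
print(heapq.heappop(data))          # 1
print(heapq.heappushpop(data, 0))   # 0  -- new element is already smallest, returned immediately
print(heapq.nsmallest(3, data))     # [2, 3, 5]
print(heapq.nlargest(2, data))      # [9, 8]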
22,039 | Sorting with a Priority Queue

In defining the priority queue ADT, we noted that any type of object can be used as a key, but that any pair of keys must be comparable to each other, and that the set of keys be naturally ordered. In Python, it is common to rely on the < operator to define such an order, in which case the following properties must be satisfied:

- Irreflexive property: k < k never holds.
- Transitive property: if k1 < k2 and k2 < k3, then k1 < k3.

Formally, such a relationship defines what is known as a strict weak order, as it allows for keys to be considered equal to each other, but the broader equivalence classes are totally ordered, as they can be uniquely arranged from smallest to largest due to the transitive property.

As our first application of priority queues, we demonstrate how they can be used to sort a collection C of comparable elements. That is, we can produce a sequence of elements of C in increasing order (or at least in nondecreasing order if there are duplicates). The algorithm is quite simple: we insert all elements into an initially empty priority queue, and then we repeatedly call remove_min to retrieve the elements in nondecreasing order. An implementation of this algorithm is given below, assuming that C is a positional list. We use an original element of the collection as both key and value when calling P.add(element, element).

def pq_sort(C):
  """Sort a collection of elements stored in a positional list."""
  n = len(C)
  P = PriorityQueue()
  for j in range(n):
    element = C.delete(C.first())
    P.add(element, element)            # use element as key and value
  for j in range(n):
    (k, v) = P.remove_min()
    C.add_last(v)                      # store smallest remaining element in C

Code fragment: an implementation of the pq_sort function, assuming an appropriate implementation of a PriorityQueue class. Note that each element of the input list serves as its own key in the priority queue.

With a minor modification to this code, we can provide more general support, sorting elements according to an ordering other than the default. For example, when working with strings, the < operator defines lexicographic ordering, which is an extension of the alphabetic ordering to Unicode. For example, we have that '12' < '4' because of the order of the first character of each string, just as 'apple' < 'banana'.
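As a rough usage sketch (not part of the original text), the following assumes that pq_sort has been defined as above in the same script, that a PositionalList class as described earlier in the book is available from a hypothetical module positional_list, and that some concrete priority queue implementation has been bound to the name PriorityQueue.

# hedged sketch: module names and the PriorityQueue alias are assumptions for illustration
from positional_list import PositionalList
from priority_queues import HeapPriorityQueue as PriorityQueue

L = PositionalList()
for x in (85, 24, 63, 45, 17):
  L.add_last(x)

pq_sort(L)                    # L now holds 17, 24, 45, 63, 85 in nondecreasing order
print([x for x in L])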
22,040 | Suppose that we have an application in which we have a list of strings that are all known to represent integral values, and our goal is to sort the strings according to those integral values. In Python, the standard approach for customizing the order for a sorting algorithm is to provide, as an optional parameter to the sorting function, an object that is itself a one-parameter function that computes a key for a given element (see the earlier discussions of this approach in the context of the built-in max function). For example, with a list of (numeric) strings, we might wish to use the value of int(s) as a key for a string s of the list. In this case, the constructor for the int class can serve as the one-parameter function for computing a key. In that way, the string '4' will be ordered before the string '12' because its key int('4') < int('12'). We leave it as an exercise to support such an optional key parameter for the pq_sort function.

Selection-Sort and Insertion-Sort

Our pq_sort function works correctly given any valid implementation of the priority queue class. However, the running time of the sorting algorithm depends on the running times of the operations add and remove_min for the given priority queue class. We next discuss a choice of priority queue implementations that in effect cause the pq_sort computation to behave as one of several classic sorting algorithms.

Selection-Sort

If we implement P with an unsorted list, then phase 1 of pq_sort takes O(n) time, for we can add each element in O(1) time. In phase 2, the running time of each remove_min operation is proportional to the size of P. Thus, the bottleneck computation is the repeated "selection" of the minimum element in phase 2. For this reason, this algorithm is better known as selection-sort.

As noted above, the bottleneck is in phase 2, where we repeatedly remove an entry with smallest key from the priority queue P. The size of P starts at n and incrementally decreases with each remove_min until it becomes 0. Thus, the first operation takes time O(n), the second one takes time O(n - 1), and so on. Therefore, the total time needed for the second phase is

O(n + (n - 1) + ... + 2 + 1) = O(sum of i for i = 1 to n).

Since the sum of i for i = 1 to n equals n(n + 1)/2, phase 2 takes O(n^2) time, as does the entire selection-sort algorithm.
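To make the key-function idea concrete, the following snippet uses Python's built-in sorted function, whose optional key parameter plays exactly the role described above; the sample values are illustrative only.

numeric_strings = ['12', '4', '100', '7']

print(sorted(numeric_strings))            # ['100', '12', '4', '7']  -- lexicographic order
print(sorted(numeric_strings, key=int))   # ['4', '7', '12', '100']  -- numeric order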
22,041 | [figure omitted: execution of selection-sort on a small collection, showing the contents of the collection and of the (unsorted) priority queue after each step of phase 1 and phase 2.]

Insertion-Sort

If we implement the priority queue P using a sorted list, then we improve the running time of phase 2 to O(n), for each remove_min operation on P now takes O(1) time. Unfortunately, phase 1 becomes the bottleneck for the running time, since, in the worst case, each add operation takes time proportional to the current size of P. This sorting algorithm is better known as insertion-sort; in fact, our implementation for adding an element to a priority queue is almost identical to a step of insertion-sort as presented earlier in the book.

The worst-case running time of phase 1 of insertion-sort is

O(n + (n - 1) + ... + 2 + 1) = O(sum of i for i = 1 to n).

Again, this implies a worst-case O(n^2) time for phase 1, and thus for the entire insertion-sort algorithm. However, unlike selection-sort, insertion-sort has a best-case running time of O(n).

[figure omitted: execution of insertion-sort on a small collection, showing the contents of the collection and of the (sorted) priority queue after each step of phase 1 and phase 2.]
22,042 | heap-sort as we have previously observedrealizing priority queue with heap has the advantage that all the methods in the priority queue adt run in logarithmic time or better hencethis realization is suitable for applications where fast running times are sought for all the priority queue methods thereforelet us again consider the pq sort schemethis time using heap-based implementation of the priority queue during phase the ith add operation takes (log itimesince the heap has entries after the operation is performed therefore this phase takes ( log ntime (it could be improved to (nwith the bottom-up heap construction described in section during the second phase of pq sortthe jth remove min operation runs in (log( ))since the heap has entries at the time the operation is performed summing over all jthis phase takes ( log ntimeso the entire priority-queue sorting algorithm runs in ( log ntime when we use heap to implement the priority queue this sorting algorithm is better known as heap-sortand its performance is summarized in the following proposition proposition the heap-sort algorithm sorts collection of elements in ( log ntimeassuming two elements of can be compared in ( time let us stress that the ( log nrunning time of heap-sort is considerably better than the ( running time of selection-sort and insertion-sort (section implementing heap-sort in-place if the collection to be sorted is implemented by means of an array-based sequencemost notably as python listwe can speed up heap-sort and reduce its space requirement by constant factor using portion of the list itself to store the heapthus avoiding the use of an auxiliary heap data structure this is accomplished by modifying the algorithm as follows we redefine the heap operations to be maximum-oriented heapwith each position' key being at least as large as its children this can be done by recoding the algorithmor by adjusting the notion of keys to be negatively oriented at any time during the execution of the algorithmwe use the left portion of cup to certain index to store the entries of the heapand the right portion of cfrom index to to store the elements of the sequence thusthe first elements of (at indices provide the array-list representation of the heap in the first phase of the algorithmwe start with an empty heap and move the boundary between the heap and the sequence from left to rightone step at time in step ifor nwe expand the heap by adding the element at index |
22,043 | In the second phase of the algorithm, we start with an empty sequence and move the boundary between the heap and the sequence from right to left, one step at a time. At step i, for i = 1, ..., n, we remove a maximum element from the heap and store it at index n - i.

In general, we say that a sorting algorithm is in-place if it uses only a small amount of memory in addition to the sequence storing the objects to be sorted. The variation of heap-sort above qualifies as in-place; instead of transferring elements out of the sequence and then back in, we simply rearrange them. We illustrate the second phase of in-place heap-sort in the figure below.

[figure omitted: phase 2 of an in-place heap-sort. The heap portion of each sequence representation is highlighted. The binary tree that each sequence (implicitly) represents is diagrammed with the most recent path of down-heap bubbling highlighted.]
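The description above can be turned into code fairly directly. The following is a minimal sketch (not the book's implementation) of an in-place heap-sort on a Python list; it uses a maximum-oriented down-heap so that the result ends up in nondecreasing order, and, for brevity, phase 1 uses the bottom-up construction described earlier rather than the left-to-right insertions.

def heap_sort_in_place(data):
  """Sort list data in nondecreasing order using in-place heap-sort (a sketch)."""
  n = len(data)

  def downheap_max(j, size):
    # restore max-heap order within data[0:size], starting from index j
    while True:
      left, right = 2*j + 1, 2*j + 2
      big = j
      if left < size and data[left] > data[big]:
        big = left
      if right < size and data[right] > data[big]:
        big = right
      if big == j:
        break
      data[j], data[big] = data[big], data[j]
      j = big

  # phase 1: build a max-heap over the entire list (bottom-up construction)
  for j in range(n // 2 - 1, -1, -1):
    downheap_max(j, n)

  # phase 2: repeatedly move the maximum to the end of the shrinking heap
  for end in range(n - 1, 0, -1):
    data[0], data[end] = data[end], data[0]
    downheap_max(0, end)

  return data

print(heap_sort_in_place([9, 2, 7, 4, 5, 3]))   # [2, 3, 4, 5, 7, 9]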
22,044 | adaptable priority queues the methods of the priority queue adt given in section are sufficient for most basic applications of priority queuessuch as sorting howeverthere are situations in which additional methods would be usefulas shown by the scenarios below involving the standby airline passenger application standby passenger with pessimistic attitude may become tired of waiting and decide to leave ahead of the boarding timerequesting to be removed from the waiting list thuswe would like to remove from the priority queue the entry associated with this passenger operation remove min does not suffice since the passenger leaving does not necessarily have first priority insteadwe want new operationremovethat removes an arbitrary entry another standby passenger finds her gold frequent-flyer card and shows it to the agent thusher priority has to be modified accordingly to achieve this change of prioritywe would like to have new operation update allowing us to replace the key of an existing entry with new key we will see another application of adaptable priority queues when implementing certain graph algorithms in sections and in this sectionwe develop an adaptable priority queue adt and demonstrate how to implement this abstraction as an extension to our heap-based priority queue locators in order to implement methods update and remove efficientlywe need mechanism for finding user' element within priority queue that avoids performing linear search through the entire collection to support our goalwhen new element is added to the priority queuewe return special object known as locator to the caller we then require the user to provide an appropriate locator as parameter when invoking the update or remove methodas followsfor priority queue pp update(lockv)replace key and value for the item identified by locator loc remove(loc)remove the item identified by locator loc from the priority queue and return its (key,valuepair the locator abstraction is somewhat akin to the position abstraction used in our positional list adt from section and our tree adt from howeverwe differentiate between locator and position because locator for priority queue does not represent tangible placement of an element within the structure in our priority queuean element may be relocated within our data structure during an operation that does not seem directly relevant to that element locator for an item will remain validas long as that item remains somewhere in the queue |
22,045 | Implementing an Adaptable Priority Queue

In this section, we provide a Python implementation of an adaptable priority queue as an extension of our HeapPriorityQueue class from the preceding section. To implement a Locator class, we will extend the existing _Item composite to add an additional field designating the current index of the element within the array-based representation of our heap, as shown in the first figure below.

[figure omitted: representing a heap using a sequence of locators. The third element of each locator instance corresponds to the index of the item within the array. The identifier token is presumed to be a locator reference in the user's scope.]

The list is a sequence of references to locator instances, each of which stores a key, value, and the current index of the item within the list. The user will be given a reference to the locator instance for each inserted element, as portrayed by the token identifier in the figure above.

When we perform priority queue operations on our heap, and items are relocated within our structure, we reposition the locator instances within the list and we update the third field of each locator to reflect its new index within the list. As an example, the next figure shows the state of such a heap after a call to remove_min. The heap operation caused the minimum entry to be removed, and the entry from the last position to be temporarily moved to the root, followed by a down-heap bubble phase.

[figure omitted: the result of a call to remove_min on the heap portrayed in the previous figure. The identifier token continues to reference the same locator instance as in the original configuration, but the placement of that locator in the list has changed, as has the third field of the locator.]

During the down-heap, the element that had been moved to the root was swapped
22,046 | with its left child( , )at index of the listthen swapped with its right child( , )at index of the list in the final configurationthe locator instances for all affected elements have been modified to reflect their new location it is important to emphasize that the locator instances have not changed identity the user' token referenceportrayed in figures and continues to reference the same instancewe have simply changed the third field of that instanceand we have changed where that instance is referenced within the list sequence with this new representationproviding the additional support for the adaptable priority queue adt is rather straightforward when locator instance is sent as parameter to update or removewe may rely on the third field of that structure to designate where the element resides in the heap with that knowledgethe update of key may simply require an up-heap or down-heap bubbling step to reestablish the heap-order property (the complete binary tree property remains intact to implement the removal of an arbitrary elementwe move the element at the last position to the vacated locationand again perform an appropriate bubbling step to satisfy the heap-order property python implementation code fragments and present python implementation of an adaptable priority queueas subclass of the heappriorityqueue class from section our modifications to the original class are relatively minor we define public locator class that inherits from the nonpublic item class and augments it with an additional index field we make it public class because we will be using locators as return values and parametershoweverthe public interface for the locator class does not include any other functionality for the user to update locators during the flow of our heap operationswe rely on an intentional design decision that our original class uses nonpublic swap method for all data movement we override that utility to execute the additional step of updating the stored indices within the two swapped locator instances we provide new bubble utility that manages the reinstatement of the heaporder property when key has changed at an arbitrary position within the heapeither due to key updateor the blind replacement of removed element with the item from the last position of the tree the bubble utility determines whether to apply up-heap or down-heap bubblingdepending on whether the given location has parent with smaller key (if an updated key coincidentally remains valid for its current locationwe technically call downheap but no swaps result the public methods are provided in code fragment the existing add method is overriddenboth to make use of locator instance rather than an item instance for storage of the new elementand to return the locator to the caller the remainder of that method is similar to the originalwith the management of locator indices enacted by the use of the new version of swap there is no reason to over |
22,047 | ride the remove min method because the only change in behavior for the adaptable priority queue is again provided by the overridden swap method the update and remove methods provide the core new functionality for the adaptable priority queue we perform robust checking of the validity of locator that is sent by caller (although in the interest of spaceour displayed code does not do preliminary type-checking to ensure that the parameter is indeed locator instanceto ensure that locator is associated with current element of the given priority queuewe examine the index that is encapsulated within the locator objectand then verify that the entry of the list at that index is the very same locator in conclusionthe adaptable priority queue provides the same asymptotic efficiency and space usage as the nonadaptive versionand provides logarithmic performance for the new locator-based update and remove methods summary of the performance is given in table class adaptableheappriorityqueue(heappriorityqueue) """ locator-based priority queue implemented with binary heap "" nested locator class class locator(heappriorityqueue item) """token for locating an entry of the priority queue ""slots _index add index as additional field def init (selfkvj) superinit ( , self index nonpublic behaviors override swap to record new indices def swap(selfij)perform the swap superswap( ,jreset locator index (post-swap self data[iindex reset locator index (post-swap self data[jindex def bubble(selfj) if and self data[jself data[self parent( )] self upheap( else self downheap(jcode fragment an implementation of an adaptable priority queue (continued in code fragment this extends the heappriorityqueue class of code fragments and |
22,048 | def add(selfkeyvalue)"""add key-value pair ""token self locator(keyvaluelen(self data)initiaize locator index self data append(tokenself upheap(len(self data return token def update(selflocnewkeynewval)"""update the key and value for the entry identified by locator loc "" loc index if not ( < len(selfand self data[jis loc)raise valueerrorinvalid locator loc key newkey loc value newval self bubble(jdef remove(selfloc)"""remove and return the ( ,vpair identified by locator loc "" loc index if not ( < len(selfand self data[jis loc)raise valueerrorinvalid locator if =len(self item at last position just remove it self data popelseswap item to the last position self swap(jlen(self)- remove it from the list self data popfix item displaced by the swap self bubble(jreturn (loc keyloc valuecode fragment an implementation of an adaptable priority queue (continued from code fragment operation running time len( ) is empty) mino( add( ,vo(log ) update(lockvo(log np remove(loco(log ) remove mino(log )amortized with dynamic array table running times of the methods of an adaptable priority queuepof size nrealized by means of our array-based heap representation the space requirement is ( |
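A short, hedged usage sketch of the adaptable priority queue follows, returning to the standby-passenger scenario; the module name is again an assumption made only for illustration.

from priority_queues import AdaptableHeapPriorityQueue

pq = AdaptableHeapPriorityQueue()
loc_a = pq.add(10, 'passenger A')     # add returns a locator for the new item
loc_b = pq.add(20, 'passenger B')
loc_c = pq.add(30, 'passenger C')

pq.update(loc_b, 5, 'passenger B')    # B shows a gold card: raise priority (smaller key)
print(pq.remove_min())                # (5, 'passenger B')

print(pq.remove(loc_c))               # (30, 'passenger C') -- C leaves the waiting list
print(pq.min())                       # (10, 'passenger A')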
22,049 | exercises for help with exercisesplease visit the sitewww wiley com/college/goodrich reinforcement - how long would it take to remove the log nsmallest elements from heap that contains entriesusing the remove min operationr- suppose you label each position of binary tree with key equal to its preorder rank under what circumstances is heapr- what does each remove min call return within the following sequence of priority queue adt methodsadd( , )add( , )add( , )add( , )remove min)add( , )add( , )remove min)remove min)add( , )remove min)add( , )remove min)remove min) - an airport is developing computer simulation of air-traffic control that handles events such as landings and takeoffs each event has time stamp that denotes the time when the event will occur the simulation program needs to efficiently perform the following two fundamental operationsinsert an event with given time stamp (that isadd future eventextract the event with smallest time stamp (that isdetermine the next event to processwhich data structure should be used for the above operationswhyr- the min method for the unsortedpriorityqueue class executes in (ntimeas analyzed in table give simple modification to the class so that min runs in ( time explain any necessary modifications to other methods of the class - can you adapt your solution to the previous problem to make remove min run in ( time for the unsortedpriorityqueue classexplain your answer - illustrate the execution of the selection-sort algorithm on the following input sequence( - illustrate the execution of the insertion-sort algorithm on the input sequence of the previous problem - give an example of worst-case sequence with elements for insertionsortand show that insertion-sort runs in ( time on such sequence - at which positions of heap might the third smallest key be storedr- at which positions of heap might the largest key be stored |
22,050 | - consider situation in which user has numeric keys and wishes to have priority queue that is maximum-oriented how could standard (minorientedpriority queue be used for such purposer- illustrate the execution of the in-place heap-sort algorithm on the following input sequence( - let be complete binary tree such that position stores an element with key ( )where (pis the level number of (see section is tree heapwhy or why notr- explain why the description of down-heap bubbling does not consider the case in which position has right child but not left child - is there heap storing seven entries with distinct keys such that preorder traversal of yields the entries of in increasing or decreasing order by keyhow about an inorder traversalhow about postorder traversalif sogive an exampleif notsay why - let be heap storing entries using the array-based representation of complete binary tree what is the sequence of indices of the array that are visited in preorder traversal of hwhat about an inorder traversal of hwhat about postorder traversal of hr- show that the sum log ii= which appears in the analysis of heap-sortis ( log nr- bill claims that preorder traversal of heap will list its keys in nondecreasing order draw an example of heap that proves him wrong - hillary claims that postorder traversal of heap will list its keys in nonincreasing order draw an example of heap that proves her wrong - show all the steps of the algorithm for removing the entry ( from the heap of figure assuming the entry had been identified with locator - show all the steps of the algorithm for replacing key of entry ( awith in the heap of figure assuming the entry had been identified with locator - draw an example of heap whose keys are all the odd numbers from to (with no repeats)such that the insertion of an entry with key would cause up-heap bubbling to proceed all the way up to child of the root (replacing that child' key with - describe sequence of insertions in heap that requires ( log ntime to process - complete figure by showing all the steps of the in-place heap-sort algorithm show both the array and the associated heap at the end of each step |
22,051 | creativity - show how to implement the stack adt using only priority queue and one additional integer instance variable - show how to implement the fifo queue adt using only priority queue and one additional integer instance variable - professor idle suggests the following solution to the previous problem whenever an item is inserted into the queueit is assigned key that is equal to the current size of the queue does such strategy result in fifo semanticsprove that it is so or provide counterexample - reimplement the sortedpriorityqueue using python list make sure to maintain remove min' ( performance - give nonrecursive implementation of the upheap method for the class heappriorityqueue - give nonrecursive implementation of the downheap method for the class heappriorityqueue - assume that we are using linked representation of complete binary tree and an extra reference to the last node of that tree show how to update the reference to the last node after operations add or remove min in (log ntimewhere is the current number of nodes of be sure and handle all possible casesas illustrated in figure - when using linked-tree representation for heapan alternative method for finding the last node during an insertion in heap is to storein the last node and each leaf node of reference to the leaf node immediately to its right (wrapping to the first node in the next lower level for the rightmost leaf nodeshow how to maintain such references in ( time per operation of the priority queue adt assuming that is implemented with linked structure ( , ( , ( , ( , ( , ( , ( , ( , ( , ( , ( ,fw ( , ( , ( , ( , ( , ( , ( , ( , ( , ( ,qw ( , ( , ( , ( ,bz ( ,hz ( (bfigure updating the last node in complete binary tree after operation add or remove node is the last node before operation add or after operation remove node is the last node after operation add or before operation remove |
22,052 | - we can represent path from the root to given node of binary tree by means of binary stringwhere means "go to the left childand means "go to the right child for examplethe path from the root to the node storing ( , in the heap of figure is represented by " design an (log )-time algorithm for finding the last node of complete binary tree with nodesbased on the above representation show how this algorithm can be used in the implementation of complete binary tree by means of linked structure that does not keep reference to the last node - given heap and key kgive an algorithm to compute all the entries in having key less than or equal to for examplegiven the heap of figure and query the algorithm should report the entries with keys and (but not necessarily in this orderyour algorithm should run in time proportional to the number of entries returnedand should not modify the heap - provide justification of the time bounds in table - give an alternative analysis of bottom-up heap construction by showing the following summation is ( )for any positive integer hh / = - suppose two binary treest and hold entries satisfying the heap-order property (but not necessarily the complete binary tree propertydescribe method for combining and into binary tree whose nodes hold the union of the entries in and and also satisfy the heap-order property your algorithm should run in time ( where and are the respective heights of and - implement heappushpop method for the heappriorityqueue classwith semantics akin to that described for the heapq module in section - implement heapreplace method for the heappriorityqueue classwith semantics akin to that described for the heapq module in section - tamarindo airlines wants to give first-class upgrade coupon to their top log frequent flyersbased on the number of miles accumulatedwhere is the total number of the airlinesfrequent flyers the algorithm they currently usewhich runs in ( log ntimesorts the flyers by the number of miles flown and then scans the sorted list to pick the top log flyers describe an algorithm that identifies the top logn flyers in (ntime - explain how the largest elements from an unordered collection of size can be found in time ( log nusing maximum-oriented heap - explain how the largest elements from an unordered collection of size can be found in time ( log kusing (kauxiliary space |
22,053 | - given classpriorityqueuethat implements the minimum-oriented priority queue adtprovide an implementation of maxpriorityqueue class that adapts to provide maximum-oriented abstraction with methods addmaxand remove max your implementation should not make any assumption about the internal workings of the original priorityqueue classnor the type of keys that might be used - write key function for nonnegative integers that determines order based on the number of ' in each integer' binary expansion - give an alternative implementation of the pq sort functionfrom code fragment that accepts key function as an optional parameter - describe an in-place version of the selection-sort algorithm for an array that uses only ( space for instance variables in addition to the array - assuming the input to the sorting problem is given in an array adescribe how to implement the insertion-sort algorithm using only the array and at most constant number of additional variables - give an alternate description of the in-place heap-sort algorithm using the standard minimum-oriented priority queue (instead of maximumoriented onec- an online computer system for trading stocks needs to process orders of the form "buy shares at $ eachor "sell shares at $ each buy order for $ can only be processed if there is an existing sell order with price $ such that < likewisea sell order for $ can only be processed if there is an existing buy order with price $ such that < if buy or sell order is entered but cannot be processedit must wait for future order that allows it to be processed describe scheme that allows buy and sell orders to be entered in (log ntimeindependent of whether or not they can be immediately processed - extend solution to the previous problem so that users are allowed to update the prices for their buy or sell orders that have yet to be processed - group of children want to play gamecalled unmonopolywhere in each turn the player with the most money must give half of his/her money to the player with the least amount of money what data structure(sshould be used to play this game efficientlywhyprojects - implement the in-place heap-sort algorithm experimentally compare its running time with that of the standard heap-sort that is not in-place - use the approach of either exercise - or - to reimplement the top method of the favoriteslistmtf class from section make sure that results are generated from largest to smallest |
22,054 | - write program that can process sequence of stock buy and sell orders as described in exercise - - let be set of points in the plane with distinct integer xand ycoordinates let be complete binary tree storing the points from at its external nodessuch that the points are ordered left to right by increasing -coordinates for each node in let (vdenote the subset of consisting of points stored in the subtree rooted at for the root of define top(rto be the point in (rwith maximum -coordinate for every other node vdefine top(rto be the point in with highest ycoordinate in (vthat is not also the highest -coordinate in ( )where is the parent of in (if such point existssuch labeling turns into priority search tree describe linear-time algorithm for turning into priority search tree implement this approach - one of the main applications of priority queues is in operating systems-for scheduling jobs on cpu in this project you are to build program that schedules simulated cpu jobs your program should run in loopeach iteration of which corresponds to time slice for the cpu each job is assigned prioritywhich is an integer between - (highest priorityand (lowest priority)inclusive from among all jobs waiting to be processed in time slicethe cpu must work on job with highest priority in this simulationeach job will also come with length valuewhich is an integer between and inclusiveindicating the number of time slices that are needed to process this job for simplicityyou may assume jobs cannot be interrupted--once it is scheduled on the cpua job runs for number of time slices equal to its length your simulator must output the name of the job running on the cpu in each time slice and must process sequence of commandsone per time sliceeach of which is of the form "add job name with length and priority por "no new job this slicep- develop python implementation of an adaptable priority queue that is based on an unsorted list and supports location-aware entries notes knuth' book on sorting and searching [ describes the motivation and history for the selection-sortinsertion-sortand heap-sort algorithms the heap-sort algorithm is due to williams [ ]and the linear-time heap construction algorithm is due to floyd [ additional algorithms and analyses for heaps and heap-sort variations can be found in papers by bentley [ ]carlsson [ ]gonnet and munro [ ]mcdiarmid and reed [ ]and schaffer and sedgewick [ |
22,055 | mapshash tablesand skip lists contents maps and dictionaries the map adt applicationcounting word frequencies python' mutablemapping abstract base class our mapbase class simple unsorted map implementation hash tables hash functions collision-handling schemes load factorsrehashingand efficiency python hash table implementation sorted maps sorted search tables two applications of sorted maps skip lists search and update operations in skip list probabilistic analysis of skip lists setsmultisetsand multimaps the set adt python' mutableset abstract base class implementing setsmultisetsand multimaps exercises |
22,056 | maps and dictionaries python' dict class is arguably the most significant data structure in the language it represents an abstraction known as dictionary in which unique keys are mapped to associated values because of the relationship they express between keys and valuesdictionaries are commonly known as associative arrays or maps in this bookwe use the term dictionary when specifically discussing python' dict classand the term map when discussing the more general notion of the abstract data type as simple examplefigure illustrates map from the names of countries to their associated units of currency turkey lira spain greece euro china united states india yuan dollar rupee figure map from countries (the keysto their units of currency (the valueswe note that the keys (the country namesare assumed to be uniquebut the values (the currency unitsare not necessarily unique for examplewe note that spain and greece both use the euro for currency maps use an array-like syntax for indexingsuch as currencygreece to access value associated with given key or currencygreece drachma to remap it to new value unlike standard arrayindices for map need not be consecutive nor even numeric common applications of maps include the following university' information system relies on some form of student id as key that is mapped to that student' associated record (such as the student' nameaddressand course gradesserving as the value the domain-name system (dnsmaps host namesuch as www wiley comto an internet-protocol (ipaddresssuch as social media site typically relies on (nonnumericusername as key that can be efficiently mapped to particular user' associated information computer graphics system may map color namesuch as turquoise to the triple of numbers that describes the color' rgb (red-green-bluerepresentationsuch as ( , , python uses dictionary to represent each namespacemapping an identifying stringsuch as pi to an associated objectsuch as in this and the next we demonstrate that map may be implemented so that search for keyand its associated valuecan be performed very efficientlythereby supporting fast lookup in such applications |
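As a simple, concrete illustration of the mapping just described, the country-to-currency map can be expressed directly with Python's dict class; the entries below mirror the figure, and the reassignment mirrors the remapping example in the text.

currency = {'Spain': 'euro', 'Greece': 'euro', 'Turkey': 'lira',
            'China': 'yuan', 'United States': 'dollar', 'India': 'rupee'}

print(currency['Greece'])       # 'euro'  -- indexing by a (nonnumeric) key
currency['Greece'] = 'drachma'  # remap an existing key to a new value
print(len(currency))            # 6 -- keys are unique; values need not be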
22,057 | the map adt in this sectionwe introduce the map adtand define its behaviors to be consistent with those of python' built-in dict class we begin by listing what we consider the most significant five behaviors of map as followsm[ ]return the value associated with key in map mif one existsotherwise raise keyerror in pythonthis is implemented with the special method getitem [kvassociate value with key in map mreplacing the existing value if the map already contains an item with key equal to in pythonthis is implemented with the special method setitem del [ ]remove from map the item with key equal to kif has no such itemthen raise keyerror in pythonthis is implemented with the special method delitem len( )return the number of items in map in pythonthis is implemented with the special method len iter( )the default iteration for map generates sequence of keys in the map in pythonthis is implemented with the special method iter and it allows loops of the formfor in we have highlighted the above five behaviors because they demonstrate the core functionality of map--namelythe ability to queryaddmodifyor delete keyvalue pairand the ability to report all such pairs for additional conveniencemap should also support the following behaviorsk in mreturn true if the map contains an item with key in pythonthis is implemented with the special contains method get(kd=none)return [kif key exists in the mapotherwise return default value this provides form to query [kwithout risk of keyerror setdefault(kd)if key exists in the mapsimply return [ ]if key does not existset [kd and return that value pop(kd=none)remove the item associated with key from the map and return its associated value if key is not in the mapreturn default value (or raise keyerror if parameter is none |
22,058 | popitem)remove an arbitrary key-value pair from the mapand return ( ,vtuple representing the removed pair if map is emptyraise keyerror clear)remove all key-value pairs from the map keys)return set-like view of all keys of values)return set-like view of all values of items)return set-like view of ( ,vtuples for all entries of update( )assign [kv for every ( ,vpair in map = return true if maps and have identical key-value associations ! return true if maps and do not have identical keyvalue associations example in the followingwe show the effect of series of operations on an initially empty map storing items with integer keys and single-character values we use the literal syntax for python' dict class to describe the map contents operation len(mmk mb mu mv mk mb mx getf getf getk len(mdel mv popk keysm valuesm itemsm setdefaultb setdefaulta popitemreturn value keyerror none ) map { |
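Because Python's dict class supports all of the behaviors listed above, a short session like the following demonstrates their semantics; the particular keys and values here are chosen purely for illustration.

M = {}
M['K'] = 2
M['B'] = 4
M['U'] = 2
print(M.get('F'))            # None -- 'F' absent, so the default (None) is returned
print(M.get('F', 5))         # 5
print(M.setdefault('B', 1))  # 4    -- key exists, so its existing value is returned
print(M.setdefault('A', 1))  # 1    -- key absent, so M['A'] is set to 1
print(M.pop('K'))            # 2    -- item removed and its value returned
print('K' in M)              # False
print(list(M.keys()))        # e.g., ['B', 'U', 'A']
M.clear()
print(len(M))                # 0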
22,059 | applicationcounting word frequencies as case study for using mapconsider the problem of counting the number of occurrences of words in document this is standard task when performing statistical analysis of documentfor examplewhen categorizing an email or news article map is an ideal data structure to use herefor we can use words as keys and word counts as values we show such an application in code fragment we break apart the original document using combination of file and string methods that results in loop over lowercased version of all whitespace separated pieces of the document we omit all nonalphabetic characters so that parenthesesapostrophesand other such punctuation are not considered part of word in terms of map operationswe begin with an empty python dictionary named freq during the first phase of the algorithmwe execute the command freq[word freq get(word for each word occurrence we use the get method on the right-hand side because the current word might not exist in the dictionarythe default value of is appropriate in that case during the second phase of the algorithmafter the full document has been processedwe examine the contents of the frequency maplooping over freq itemsto determine which word has the most occurrences freq for piece in open(filenamereadlowersplit)only consider alphabetic characters within this piece word join( for in piece if isalpha)if wordrequire at least one alphabetic character freq[word freq get(word max word max count for ( ,cin freq items)(keyvaluetuples represent (wordcountif max countmax word max count printthe most frequent word is max wordprintits number of occurrences is max countcode fragment program for counting word frequencies in documentand reporting the most frequent word we use python' dict class for the map we convert the input to lowercase and ignore any nonalphabetic characters |
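Python's standard library also offers the collections.Counter class, a dict subclass specialized for exactly this kind of tally. The following hedged sketch (an alternative, not the book's code) shows how the counting phase above might be expressed with it; the function name and error handling are purely illustrative.

from collections import Counter

def most_frequent_word(filename):
  """Return the most common alphabetic-only word in the file (a sketch)."""
  freq = Counter()
  for piece in open(filename).read().lower().split():
    word = ''.join(c for c in piece if c.isalpha())   # strip punctuation
    if word:                                          # require at least one letter
      freq[word] += 1                                 # missing keys default to 0
  (word, count) = freq.most_common(1)[0]              # highest-count (word, count) pair
  return word, count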
22,060 | python' mutablemapping abstract base class section provides an introduction to the concept of an abstract base class and the role of such classes in python' collections module methods that are declared to be abstract in such base class must be implemented by concrete subclasses howeveran abstract base class may provide concrete implementation of other methods that depend upon use of the presumed abstract methods (this is an example of the template method design pattern the collections module provides two abstract base classes that are relevant to our current discussionthe mapping and mutablemapping classes the mapping class includes all nonmutating methods supported by python' dict classwhile the mutablemapping class extends that to include the mutating methods what we define as the map adt in section is akin to the mutablemapping abstract base class in python' collections module the significance of these abstract base classes is that they provide framework to assist in creating user-defined map class in particularthe mutablemapping class provides concrete implementations for all behaviors other than the first five outlined in section getitem setitem delitem len and iter as we implement the map abstraction with various data structuresas long as we provide the five core behaviorswe can inherit all other derived behaviors by simply declaring mutablemapping as parent class to better understand the mutablemapping classwe provide few examples of how concrete behaviors can be derived from the five core abstractions for examplethe contains methodsupporting the syntax in mcould be implemented by making guarded attempt to retrieve self[kto determine if the key exists def contains (selfk)tryself[kreturn true except keyerrorreturn false access via getitem (ignore resultattempt failed similar approach might be used to provide the logic of the setdefault method def setdefault(selfkd)tryreturn self[kexcept keyerrorself[kd return if getitem succeedsreturn value otherwiseset default value with setitem and return that newly assigned value we leave as exercises the implementations of the remaining concrete methods of the mutablemapping class |
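In the same spirit, here is one possible (hedged) derivation of the get method using only the core __getitem__ behavior; the actual MutableMapping mixin may be implemented differently, and this is meant only to illustrate the pattern.

def get(self, k, d=None):
  """Return self[k] if key k exists; otherwise return default d."""
  try:
    return self[k]                 # access via __getitem__
  except KeyError:
    return d                       # attempt failed; fall back to the default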
22,061 | our mapbase class we will be providing many different implementations of the map adtin the remainder of this and nextusing variety of data structures demonstrating trade-off of advantages and disadvantages figure provides preview of those classes the mutablemapping abstract base classfrom python' collections module and discussed in the preceding pagesis valuable tool when implementing map howeverin the interest of greater code reusewe define our own mapbase classwhich is itself subclass of the mutablemapping class our mapbase class provides additional support for the composition design pattern this is technique we introduced when implementing priority queue (see section in order to group key-value pair as single instance for internal use more formallyour mapbase class is defined in code fragment extending the existing mutablemapping abstract base class so that we inherit the many useful concrete methods that class provides we then define nonpublic nested item classwhose instances are able to store both key and value this nested class is reasonably similar in design to the item class that was defined within our priorityqueuebase class in section except that for map we provide support for both equality tests and comparisonsboth of which rely on the item' key the notion of equality is necessary for all of our map implementationsas way to determine whether key given as parameter is equivalent to one that is already stored in the map the notion of comparisons between keysusing the operatorwill become relevant when we later introduce sorted map adt (section mutablemapping (collections modulemapbase (section unsortedtablemap (section hashmapbase (section sortedtablemap (section treemap ((additional subclasseschainhashmap (section probehashmap (section figure our hierarchy of map types (with references to where they are defined |
22,062 | mapshash tablesand skip lists class mapbase(mutablemapping) """our own abstract base class that includes nonpublic item class "" nested item class class item """lightweight composite to store key-value pairs as map items ""slots _key _value def init (selfkv) self key self value def eq (selfother)compare items based on their keys return self key =other key def ne (selfother) return not (self =otheropposite of eq def lt (selfother)compare items based on their keys return self key other key code fragment extending the mutablemapping abstract base class to provide nonpublic item class for use in our various map implementations simple unsorted map implementation we demonstrate the use of the mapbase class with very simple concrete implementation of the map adt code fragment presents an unsortedtablemap class that relies on storing key-value pairs in arbitrary order within python list an empty table is initialized as self table within the constructor for our map when new key is entered into the mapvia line of the setitem methodwe create new instance of the nested item classwhich is inherited from our mapbase class this list-based map implementation is simplebut it is not particularly efficient each of the fundamental methodsgetitem setitem and delitem relies on for loop to scan the underlying list of items in search of matching key in best-case scenariosuch match may be found near the beginning of the listin which case the loop terminatesin the worst casethe entire list will be examined thereforeeach of these methods runs in (ntime on map with items |
22,063 | class unsortedtablemap(mapbase) """map implementation using an unordered list "" def init (self) """create an empty map ""list of item' self table def getitem (selfk) """return value associated with key (raise keyerror if not found"" for item in self table if =item key return item value raise keyerrorkey errorrepr( ) def setitem (selfkv) """assign value to key koverwriting existing value if present "" for item in self tablefound match if =item keyreassign value item value return and quit did not find match for key self table append(self item( , ) def delitem (selfk) """remove item associated with key (raise keyerror if not found"" for in range(len(self table))found match if =self table[jkeyremove item self table pop( return and quit raise keyerrorkey errorrepr( ) def len (self) """return number of items in the map "" return len(self table def iter (self) """generate iteration of the map keys "" for item in self tableyield the key yield item key code fragment an implementation of map using python list as an unsorted table parent class mapbase is given in code fragment |
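As a quick hypothetical usage session (the module name unsorted_table_map is our assumption; the class could live in any module), the following exercises the five core behaviors and a few of the methods inherited from MutableMapping.

# assumes UnsortedTableMap (and its MapBase parent) are importable;
# the module name used here is hypothetical
from unsorted_table_map import UnsortedTableMap

scores = UnsortedTableMap()
scores['alice'] = 90           # __setitem__ appends a new _Item
scores['bob'] = 75
scores['alice'] = 95           # existing key: value is overwritten, not duplicated

print(len(scores))             # 2
print(scores['alice'])         # 95
print('carol' in scores)       # False -- inherited __contains__ relies on __getitem__
print(scores.get('carol', 0))  # 0     -- get is inherited from MutableMapping

del scores['bob']              # __delitem__ scans for the key and pops it
print(list(scores))            # ['alice'] -- __iter__ yields keys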
22,064 | hash tables in this sectionwe introduce one of the most practical data structures for implementing mapand the one that is used by python' own implementation of the dict class this structure is known as hash table intuitivelya map supports the abstraction of using keys as indices with syntax such as [kas mental warm-upconsider restricted setting in which map with items uses keys that are known to be integers in range from to for some > in this casewe can represent the map using lookup table of length nas diagrammed in figure figure lookup table with length for map containing items ( , )( , )( , )and ( ,qin this representationwe store the value associated with key at index of the table (presuming that we have distinct way to represent an empty slotbasic map operations of getitem setitem and delitem can be implemented in ( worst-case time there are two challenges in extending this framework to the more general setting of map firstwe may not wish to devote an array of length if it is the case that secondwe do not in general require that map' keys be integers the novel concept for hash table is the use of hash function to map general keys to corresponding indices in table ideallykeys will be well distributed in the range from to by hash functionbut in practice there may be two or more distinct keys that get mapped to the same index as resultwe will conceptualize our table as bucket arrayas shown in figure in which each bucket may manage collection of items that are sent to specific index by the hash function (to save spacean empty bucket may be replaced by none ( , ( , ( , ( , ( , ( , ( , figure bucket array of capacity with items ( , )( , )( , )( , )( , )( , )and ( , )using simple hash function |
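The restricted setting just described can be sketched directly in Python; the class name and the particular items below are ours, chosen only to illustrate that each core operation reduces to a single list access.

class LookupTableMap:
    """Sketch of a map whose keys are integers known to lie in range(N)."""
    _EMPTY = object()                       # marker distinguishing truly empty slots

    def __init__(self, N):
        self._table = [LookupTableMap._EMPTY] * N

    def __setitem__(self, k, v):            # O(1) worst case
        self._table[k] = v

    def __getitem__(self, k):               # O(1) worst case
        v = self._table[k]
        if v is LookupTableMap._EMPTY:
            raise KeyError(k)
        return v

    def __delitem__(self, k):               # O(1) worst case
        if self._table[k] is LookupTableMap._EMPTY:
            raise KeyError(k)
        self._table[k] = LookupTableMap._EMPTY

m = LookupTableMap(11)
m[1], m[3], m[6], m[7] = 'D', 'Z', 'C', 'Q'   # store values at their key indices
print(m[6])                                    # C
del m[3]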
22,065 | hash functions the goal of hash functionhis to map each key to an integer in the range [ ]where is the capacity of the bucket array for hash table equipped with such hash functionhthe main idea of this approach is to use the hash function valueh( )as an index into our bucket arrayainstead of the key (which may not be appropriate for direct use as an indexthat iswe store the item (kvin the bucket [ ( )if there are two or more keys with the same hash valuethen two different items will be mapped to the same bucket in in this casewe say that collision has occurred to be surethere are ways of dealing with collisionswhich we will discuss laterbut the best strategy is to try to avoid them in the first place we say that hash function is "goodif it maps the keys in our map so as to sufficiently minimize collisions for practical reasonswe also would like hash function to be fast and easy to compute it is common to view the evaluation of hash functionh( )as consisting of two portions-- hash code that maps key to an integerand compression function that maps the hash code to an integer within range of indices[ ]for bucket array (see figure arbitrary objects hash code - - compression function - figure two parts of hash functiona hash code and compression function the advantage of separating the hash function into two such components is that the hash code portion of that computation is independent of specific hash table size this allows the development of general hash code for each object that can be used for hash table of any sizeonly the compression function depends upon the table size this is particularly convenientbecause the underlying bucket array for hash table may be dynamically resizeddepending on the number of items currently stored in the map (see section |
22,066 | Hash codes
The first action that a hash function performs is to take an arbitrary key k in our map and compute an integer that is called the hash code for k; this integer need not be in the range [0, N−1], and may even be negative. We desire that the set of hash codes assigned to our keys should avoid collisions as much as possible, for if the hash codes of our keys cause collisions, then there is no hope for our compression function to avoid them. In this subsection, we begin by discussing the theory of hash codes; following that, we discuss practical implementations of hash codes in Python.

Treating the bit representation as an integer: To begin, we note that, for any data type X that is represented using at most as many bits as our integer hash codes, we can simply take as a hash code for X an integer interpretation of its bits. For example, the hash code for an integer key could simply be that integer itself, and the hash code for a floating-point number could be based upon an interpretation of the bits of the floating-point representation as an integer. For a type whose bit representation is longer than a desired hash code, the above scheme is not immediately applicable. For example, Python relies on 32-bit hash codes; if a floating-point number uses a 64-bit representation, its bits cannot be viewed directly as a hash code. One possibility is to use only the high-order 32 bits (or the low-order 32 bits). This hash code, of course, ignores half of the information present in the original key, and if many of the keys in our map differ only in these bits, then they will collide using this simple hash code. A better approach is to combine in some way the high-order and low-order portions of a 64-bit key to form a 32-bit hash code, which takes all the original bits into consideration. A simple implementation is to add the two components as 32-bit numbers (ignoring overflow), or to take the exclusive-or of the two components. These approaches of combining components can be extended to any object x whose binary representation can be viewed as an n-tuple (x0, x1, ..., xn−1) of 32-bit integers, for example, by forming a hash code for x as the sum x0 + x1 + ··· + xn−1, or as x0 ⊕ x1 ⊕ ··· ⊕ xn−1, where the symbol ⊕ represents the bitwise exclusive-or operation (which is ^ in Python).

Polynomial hash codes: The summation and exclusive-or hash codes, described above, are not good choices for character strings or other variable-length objects that can be viewed as tuples of the form (x0, x1, ..., xn−1), where the order of the xi's is significant. For example, consider a 16-bit hash code for a character string s that sums the Unicode values of the characters in s. This hash code unfortunately produces lots of unwanted
22,067 | collisions for common groups of strings. In particular, "temp01" and "temp10" collide using this function, as do "stop", "tops", "pots", and "spot". A better hash code should somehow take into consideration the positions of the xi's. An alternative hash code, which does exactly this, is to choose a nonzero constant, a ≠ 1, and use as a hash code the value

x0·a^(n−1) + x1·a^(n−2) + ··· + x(n−2)·a + x(n−1).

Mathematically speaking, this is simply a polynomial in a that takes the components (x0, x1, ..., xn−1) of an object x as its coefficients. This hash code is therefore called a polynomial hash code. By Horner's rule (see the exercises), this polynomial can be computed as

x(n−1) + a(x(n−2) + a(x(n−3) + ··· + a(x2 + a(x1 + a·x0)) ··· )).

Intuitively, a polynomial hash code uses multiplication by different powers of a as a way to spread out the influence of each component across the resulting hash code. Of course, on a typical computer, evaluating a polynomial will be done using the finite bit representation for a hash code; hence, the value will periodically overflow the bits used for an integer. Since we are more interested in a good spread of the object x with respect to other keys, we simply ignore such overflows. Still, we should be mindful that such overflows are occurring and choose the constant a so that it has some nonzero, low-order bits, which will serve to preserve some of the information content even as we are in an overflow situation. We have done some experimental studies that suggest that 33, 37, 39, and 41 are particularly good choices for a when working with character strings that are English words. In fact, in a list of over 50,000 English words formed as the union of the word lists provided in two variants of Unix, we found that taking a to be 33, 37, 39, or 41 produced fewer than 7 collisions in each case.

Cyclic-shift hash codes: A variant of the polynomial hash code replaces multiplication by a with a cyclic shift of a partial sum by a certain number of bits. For example, a 5-bit cyclic shift of a 32-bit value is achieved by taking the leftmost five bits and placing those on the rightmost side of the representation. While this operation has little natural meaning in terms of arithmetic, it accomplishes the goal of varying the bits of the calculation. In Python, a cyclic shift of bits can be accomplished through careful use of the bitwise operators << and >>, taking care to truncate results to 32-bit integers.
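To make the preceding ideas concrete, here is a small sketch of both kinds of hash code; the function names are ours, and the constant a = 33 is just one of the values reported to work well above. The first two functions combine the high-order and low-order 32-bit halves of a 64-bit key; the third evaluates the polynomial hash code with Horner's rule, masking to 32 bits to mimic the overflow behavior discussed above.

def hash_by_sum(x):
    """Combine the high- and low-order 32 bits of a 64-bit key by addition."""
    return ((x >> 32) + (x & 0xFFFFFFFF)) & 0xFFFFFFFF     # ignore overflow past 32 bits

def hash_by_xor(x):
    """Combine the high- and low-order 32 bits of a 64-bit key by exclusive-or."""
    return ((x >> 32) ^ x) & 0xFFFFFFFF

def polynomial_hash(s, a=33):
    """Polynomial hash code for string s, evaluated via Horner's rule."""
    mask = (1 << 32) - 1                  # keep only 32 bits, ignoring overflow
    h = 0
    for character in s:
        h = (h * a + ord(character)) & mask
    return h

print(polynomial_hash('stop') != polynomial_hash('pots'))            # True: order now matters
print(sum(ord(c) for c in 'stop') == sum(ord(c) for c in 'pots'))    # True: a plain summation collides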
22,068 | mapshash tablesand skip lists an implementation of cyclic-shift hash code computation for character string in python appears as followsdef hash code( )mask ( < = for character in sh ( +ord(characterreturn limit to -bit integers -bit cyclic shift of running sum add in value of next character as with the traditional polynomial hash codefine-tuning is required when using cyclic-shift hash codeas we must wisely choose the amount to shift by for each new character our choice of -bit shift is justified by experiments run on list of just over , english wordscomparing the number of collisions for various shift amounts (see table shift collisions total max table comparison of collision behavior for the cyclic-shift hash code as applied to list of , english words the "totalcolumn records the total number of words that collide with at least one otherand the "maxcolumn records the maximum number of words colliding at any one hash code note that with cyclic shift of this hash code reverts to the one that simply sums all the characters |
22,069 | hash codes in python the standard mechanism for computing hash codes in python is built-in function with signature hash(xthat returns an integer value that serves as the hash code for object howeveronly immutable data types are deemed hashable in python this restriction is meant to ensure that particular object' hash code remains constant during that object' lifespan this is an important property for an object' use as key in hash table problem could occur if key were inserted into the hash tableyet later search were performed for that key based on different hash code than that which it had when insertedthe wrong bucket would be searched among python' built-in data typesthe immutable intfloatstrtupleand frozenset classes produce robust hash codesvia the hash functionusing techniques similar to those discussed earlier in this section hash codes for character strings are well crafted based on technique similar to polynomial hash codesexcept using exclusive-or computations rather than additions if we repeat the experiment described in table using python' built-in hash codeswe find that only strings out of the set of more than , collide with another hash codes for tuples are computed with similar technique based upon combination of the hash codes of the individual elements of the tuple when hashing frozensetthe order of the elements should be irrelevantand so natural option is to compute the exclusive-or of the individual hash codes without any shifting if hash(xis called for an instance of mutable typesuch as lista typeerror is raised instances of user-defined classes are treated as unhashable by defaultwith typeerror raised by the hash function howevera function that computes hash codes can be implemented in the form of special method named hash within class the returned hash code should reflect the immutable attributes of an instance it is common to return hash code that is itself based on the computed hash of the combination of such attributes for examplea color class that maintains three numeric redgreenand blue components might implement the method asdef hash (self)return hash(self redself greenself bluehash combined tuple an important rule to obey is that if class defines equivalence through eq then any implementation of hash must be consistentin that if =ythen hash( =hash(ythis is important because if two instances are considered to be equivalent and one is used as key in hash tablea search for the second instance should result in the discovery of the first it is therefore important that the hash code for the second match the hash code for the firstso that the proper bucket is examined this rule extends to any well-defined comparisons between objects of different classes for examplesince python treats the expression = as trueit ensures that hash( and hash( are the same |
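As an illustration of this rule, one plausible way to complete a Color-like class is sketched below; the attribute names and the isinstance check are our choices, not the book's. Equivalent instances compare equal, produce equal hash codes, and therefore behave as a single key.

class Color:
    """Sketch of an immutable color class whose __hash__ is consistent with its __eq__."""
    __slots__ = '_red', '_green', '_blue'

    def __init__(self, r, g, b):
        self._red, self._green, self._blue = r, g, b

    def __eq__(self, other):
        return (isinstance(other, Color) and
                (self._red, self._green, self._blue) ==
                (other._red, other._green, other._blue))

    def __hash__(self):
        # hash of the combined tuple, as in the snippet above
        return hash((self._red, self._green, self._blue))

c1 = Color(255, 0, 0)
c2 = Color(255, 0, 0)
print(c1 == c2)                 # True
print(hash(c1) == hash(c2))     # True: required whenever x == y
print(len({c1, c2}))            # 1 -- equivalent keys map to the same bucket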
22,070 | Compression functions
The hash code for a key k will typically not be suitable for immediate use with a bucket array, because the integer hash code may be negative or may exceed the capacity of the bucket array. Thus, once we have determined an integer hash code for a key object k, there is still the issue of mapping that integer into the range [0, N−1]. This computation, known as a compression function, is the second action performed as part of an overall hash function. A good compression function is one that minimizes the number of collisions for a given set of distinct hash codes.

The division method: A simple compression function is the division method, which maps an integer i to i mod N, where N, the size of the bucket array, is a fixed positive integer. Additionally, if we take N to be a prime number, then this compression function helps "spread out" the distribution of hashed values. Indeed, if N is not prime, then there is a greater risk that patterns in the distribution of hash codes will be repeated in the distribution of hash values, thereby causing collisions. For example, if we insert keys with hash codes {200, 205, 210, 215, 220, ..., 595} into a bucket array of size 100, then each hash code will collide with three others. But if we use a bucket array of size 101, then there will be no collisions. If a hash function is chosen well, it should ensure that the probability of two different keys getting hashed to the same bucket is 1/N. Choosing N to be a prime number is not always enough, however, for if there is a repeated pattern of hash codes of the form pN + q for several different p's, then there will still be collisions.

The MAD method: A more sophisticated compression function, which helps eliminate repeated patterns in a set of integer keys, is the multiply-add-and-divide (or "MAD") method. This method maps an integer i to [(ai + b) mod p] mod N, where N is the size of the bucket array, p is a prime number larger than N, and a and b are integers chosen at random from the interval [0, p−1], with a > 0. This compression function is chosen in order to eliminate repeated patterns in the set of hash codes and get us closer to having a "good" hash function, that is, one such that the probability any two different keys collide is 1/N. This good behavior would be the same as we would have if these keys were "thrown" into A uniformly at random.
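The two compression functions can be sketched as follows; the helper name, the sample table size N = 11, and the particular prime p are our choices for illustration.

from random import randrange

def make_mad_compression(N, p=109345121):
    """Return a MAD compression function mapping hash codes into [0, N-1].
    p should be a prime much larger than N; the default here is just an example."""
    a = randrange(1, p)      # scale: 1 <= a <= p-1, so a is nonzero
    b = randrange(p)         # shift: 0 <= b <= p-1
    return lambda i: ((a * i + b) % p) % N

N = 11
division = lambda i: i % N                  # the division method
mad = make_mad_compression(N)
for code in (5, 16, 27, 38):                # hash codes in a repeated pattern pN + 5
    print(division(code), mad(code))        # division maps all four to 5; MAD typically spreads them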
22,071 | Collision-handling schemes
The main idea of a hash table is to take a bucket array, A, and a hash function, h, and use them to implement a map by storing each item (k, v) in the "bucket" A[h(k)]. This simple idea is challenged, however, when we have two distinct keys, k1 and k2, such that h(k1) = h(k2). The existence of such collisions prevents us from simply inserting a new item (k, v) directly into the bucket A[h(k)]. It also complicates our procedure for performing insertion, search, and deletion operations.

Separate chaining: A simple and efficient way for dealing with collisions is to have each bucket A[j] store its own secondary container, holding items (k, v) such that h(k) = j. A natural choice for the secondary container is a small map instance implemented using a list, as described earlier in this chapter. This collision resolution rule is known as separate chaining, and is illustrated in the accompanying figure.

(Figure: A hash table with integer keys and collisions resolved by separate chaining; the compression function is h(k) = k mod N for the pictured table size N. For simplicity, we do not show the values associated with the keys.)

In the worst case, operations on an individual bucket take time proportional to the size of the bucket. Assuming we use a good hash function to index the n items of our map in a bucket array of capacity N, the expected size of a bucket is n/N. Therefore, if given a good hash function, the core map operations run in O(⌈n/N⌉) time. The ratio λ = n/N, called the load factor of the hash table, should be bounded by a small constant, preferably below 1. As long as λ is O(1), the core operations on the hash table run in O(1) expected time.
22,072 | open addressing the separate chaining rule has many nice propertiessuch as affording simple implementations of map operationsbut it nevertheless has one slight disadvantageit requires the use of an auxiliary data structure-- list--to hold items with colliding keys if space is at premium (for exampleif we are writing program for small handheld device)then we can use the alternative approach of always storing each item directly in table slot this approach saves space because no auxiliary structures are employedbut it requires bit more complexity to deal with collisions there are several variants of this approachcollectively referred to as open addressing schemeswhich we discuss next open addressing requires that the load factor is always at most and that items are stored directly in the cells of the bucket array itself linear probing and its variants simple method for collision handling with open addressing is linear probing with this approachif we try to insert an item (kvinto bucket ajthat is already occupiedwhere ( )then we next try [ mod nif [ mod nis also occupiedthen we try [ mod ]and so onuntil we find an empty bucket that can accept the new item once this bucket is locatedwe simply insert the item there of coursethis collision resolution strategy requires that we change the implementation when searching for an existing key--the first step of all getitem setitem or delitem operations in particularto attempt to locate an item with key equal to kwe must examine consecutive slotsstarting from [ ( )]until we either find an item with that key or we find an empty bucket (see figure the name "linear probingcomes from the fact that accessing cell of the bucket array can be viewed as "probe must probe times before finding empty slot new element with key to be inserted figure insertion into hash table with integer keys using linear probing the hash function is (kk mod values associated with keys are not shown |
22,073 | to implement deletionwe cannot simply remove found item from its slot in the array for exampleafter the insertion of key portrayed in figure if the item with key were trivially deleteda subsequent search for would fail because that search would start by probing at index then index and then index at which an empty cell is found typical way to get around this difficulty is to replace deleted item with special "availablemarker object with this special marker possibly occupying spaces in our hash tablewe modify our search algorithm so that the search for key will skip over cells containing the available marker and continue probing until reaching the desired item or an empty bucket (or returning back to where we started fromadditionallyour algorithm for setitem should remember an available cell encountered during the search for ksince this is valid place to put new item (kv)if no existing item is found although use of an open addressing scheme can save spacelinear probing suffers from an additional disadvantage it tends to cluster the items of map into contiguous runswhich may even overlap (particularly if more than half of the cells in the hash table are occupiedsuch contiguous runs of occupied hash cells cause searches to slow down considerably another open addressing strategyknown as quadratic probingiteratively tries the buckets [( (kf ( )mod ]for where (ii until finding an empty bucket as with linear probingthe quadratic probing strategy complicates the removal operationbut it does avoid the kinds of clustering patterns that occur with linear probing neverthelessit creates its own kind of clusteringcalled secondary clusteringwhere the set of filled array cells still has non-uniform patterneven if we assume that the original hash codes are distributed uniformly when is prime and the bucket array is less than half fullthe quadratic probing strategy is guaranteed to find an empty slot howeverthis guarantee is not valid once the table becomes at least half fullor if is not chosen as prime numberwe explore the cause of this type of clustering in an exercise ( - an open addressing strategy that does not cause clustering of the kind produced by linear probing or the kind produced by quadratic probing is the double hashing strategy in this approachwe choose secondary hash functionh and if maps some key to bucket [ ( )that is already occupiedthen we iteratively try the buckets [( (kf ( )mod nnextfor where (ii (kin this schemethe secondary hash function is not allowed to evaluate to zeroa common choice is (kq ( mod )for some prime number alson should be prime another approach to avoid clustering with open addressing is to iteratively try buckets [( (kf ( )mod nwhere (iis based on pseudo-random number generatorproviding repeatablebut somewhat arbitrarysequence of subsequent probes that depends upon bits of the original hash code this is the approach currently used by python' dictionary class |
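The open addressing strategies differ only in the offset function f(i). The following sketch (names and sample values ours) lists the first few probe indices for linear probing, quadratic probing, and double hashing with the secondary hash h'(k) = q − (k mod q) described above.

def probe_sequence(k, N, f, trials=5):
    """Return the first few probe indices (h(k) + f(i)) mod N, with h(k) = k mod N."""
    h = k % N                                   # simple division compression for the demo
    return [(h + f(i)) % N for i in range(trials)]

N, q = 11, 7                                    # table size and a prime q < N
k = 15
linear    = lambda i: i                         # f(i) = i
quadratic = lambda i: i * i                     # f(i) = i**2
h2        = q - (k % q)                         # secondary hash h'(k) = q - (k mod q)
double    = lambda i: i * h2                    # f(i) = i * h'(k)

print(probe_sequence(k, N, linear))             # [4, 5, 6, 7, 8]
print(probe_sequence(k, N, quadratic))          # [4, 5, 8, 2, 9]
print(probe_sequence(k, N, double))             # [4, 10, 5, 0, 6]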
22,074 | load factorsrehashingand efficiency in the hash table schemes described thus farit is important that the load factorl /nbe kept below with separate chainingas gets very close to the probability of collision greatly increaseswhich adds overhead to our operationssince we must revert to linear-time list-based methods in buckets that have collisions experiments and average-case analyses suggest that we should maintain for hash tables with separate chaining with open addressingon the other handas the load factor grows beyond and starts approaching clusters of entries in the bucket array start to grow as well these clusters cause the probing strategies to "bounce aroundthe bucket array for considerable amount of time before they find an empty slot in exercise - we explore the degradation of quadratic probing when > experiments suggest that we should maintain for an open addressing scheme with linear probingand perhaps only bit higher for other open addressing schemes (for examplepython' implementation of open addressing enforces that / if an insertion causes the load factor of hash table to go above the specified thresholdthen it is common to resize the table (to regain the specified load factorand to reinsert all objects into this new table although we need not define new hash code for each objectwe do need to reapply new compression function that takes into consideration the size of the new table each rehashing will generally scatter the items throughout the new bucket array when rehashing to new tableit is good requirement for the new array' size to be at least double the previous size indeedif we always double the size of the table with each rehashing operationthen we can amortize the cost of rehashing all the entries in the table against the time used to insert them in the first place (as with dynamic arrayssee section efficiency of hash tables although the details of the average-case analysis of hashing are beyond the scope of this bookits probabilistic basis is quite intuitive if our hash function is goodthen we expect the entries to be uniformly distributed in the cells of the bucket array thusto store entriesthe expected number of keys in bucket would be / which is ( if is (nthe costs associated with periodic rehashingto resize table after occasional insertions or deletions can be accounted for separatelyleading to an additional ( amortized cost for setitem and getitem in the worst casea poor hash function could map every item to the same bucket this would result in linear-time performance for the core map operations with separate chainingor with any open addressing model in which the secondary sequence of probes depends only on the hash code summary of these costs is given in table |
22,075 | operation list getitem setitem delitem len iter (no(no(no( (nhash table expected worst case ( (no( (no( (no( ( (no(ntable comparison of the running times of the methods of map realized by means of an unsorted list (as in section or hash table we let denote the number of items in the mapand we assume that the bucket array supporting the hash table is maintained such that its capacity is proportional to the number of items in the map in practicehash tables are among the most efficient means for implementing mapand it is essentially taken for granted by programmers that their core operations run in constant time python' dict class is implemented with hashingand the python interpreter relies on dictionaries to retrieve an object that is referenced by an identifier in given namespace (see sections and the basic command involves two calls to getitem in the dictionary for the local namespace to retrieve the values identified as and band call to setitem to store the result associated with name in that namespace in our own algorithm analysiswe simply presume that such dictionary operations run in constant timeindependent of the number of entries in the namespace (admittedlythe number of entries in typical namespace can almost surely be bounded by constant in academic paper [ ]researchers discuss the possibility of exploiting hash table' worst-case performance to cause denial-of-service (dosattack of internet technologies for many published algorithms that compute hash codesthey note that an attacker could precompute very large number of moderate-length strings that all hash to the identical -bit hash code (recall that by any of the hashing schemes we describeother than double hashingif two keys are mapped to the same hash codethey will be inseparable in the collision resolution in late another team of researchers demonstrated an implementation of just such an attack [ web servers allow series of key-value parameters to be embedded in url using syntax such as ?key =val &key =val &key =val typicallythose key-value pairs are immediately stored in map by the serverand limit is placed on the length and number of such parameters presuming that storage time in the map will be linear in the number of entries if all keys were to collidethat storage requires quadratic time (causing the server to perform an inordinate amount of workin spring of python developers distributed security patch that introduces randomization into the computation of hash codes for stringsmaking it less tractable to reverse engineer set of colliding strings |
22,076 | python hash table implementation in this sectionwe develop two implementations of hash tableone using separate chaining and the other using open addressing with linear probing while these approaches to collision resolution are quite differentthere are great many commonalities to the hashing algorithms for that reasonwe extend the mapbase class (from code fragment )to define new hashmapbase class (see code fragment )providing much of the common functionality to our two hash table implementations the main design elements of the hashmapbase class arethe bucket array is represented as python listnamed self tablewith all entries initialized to none we maintain an instance variable self that represents the number of distinct items that are currently stored in the hash table if the load factor of the table increases beyond we double the size of the table and rehash all items into the new table we define hash function utility method that relies on python' built-in hash function to produce hash codes for keysand randomized multiplyadd-and-divide (madformula for the compression function what is not implemented in the base class is any notion of how "bucketshould be represented with separate chainingeach bucket will be an independent structure with open addressinghoweverthere is no tangible container for each bucketthe "bucketsare effectively interleaved due to the probing sequences in our designthe hashmapbase class presumes the following to be abstract methodswhich must be implemented by each concrete subclassbucket getitem(jkthis method should search bucket for an item having key kreturning the associated valueif foundor else raising keyerror bucket setitem(jkvthis method should modify bucket so that key becomes associated with value if the key already existsthe new value overwrites the existing value otherwisea new item is inserted and this method is responsible for incrementing self bucket delitem(jkthis method should remove the item from bucket having key kor raise keyerror if no such item exists (self is decremented after this method iter this is the standard map method to iterate through all keys of the map our base class does not delegate this on per-bucket basis because "bucketsin open addressing are not inherently disjoint |
22,077 | class hashmapbase(mapbase) """abstract base class for map using hash-table with mad compression "" def init (selfcap= = ) """create an empty hash-table map "" self table cap none number of entries in the map self prime for mad compression self prime scale from to - for mad self scale randrange( - shift from to - for mad self shift randrange( def hash function(selfk) return (hash(kself scale self shiftself prime len(self table def len (self) return self def getitem (selfk) self hash function(kmay raise keyerror return self bucket getitem(jk def setitem (selfkv) self hash function(ksubroutine maintains self self bucket setitem(jkvkeep load factor < if self len(self table/ number ^ is often prime self resize( len(self table def delitem (selfk) self hash function(kmay raise keyerror self bucket delitem(jk self - resize bucket array to capacity def resize(selfc) old list(self items)use iteration to record existing items then reset table to desired capacity self table [nonen recomputed during subsequent adds self for ( ,vin old self[kv reinsert old key-value pair code fragment base class for our hash table implementationsextending our mapbase class from code fragment |
22,078 | separate chaining code fragment provides concrete implementation of hash table with separate chainingin the form of the chainhashmap class to represent single bucketit relies on an instance of the unsortedtablemap class from code fragment the first three methods in the class use index to access the potential bucket in the bucket arrayand check for the special case in which that table entry is none the only time we need new bucket structure is when bucket setitem is called on an otherwise empty slot the remaining functionality relies on map behaviors that are already supported by the individual unsortedtablemap instances we need bit of forethought to determine whether the application of setitem on the chain causes net increase in the size of the map (that iswhether the given key is new class chainhashmap(hashmapbase) """hash map implemented with separate chaining for collision resolution "" def bucket getitem(selfjk) bucket self table[ if bucket is noneno match found raise keyerrorkey errorrepr( ) return bucket[kmay raise keyerror def bucket setitem(selfjkv) if self table[jis nonebucket is new to the table self table[junsortedtablemap oldsize len(self table[ ] self table[ ][kv key was new to the table if len(self table[ ]oldsizeincrease overall map size self + def bucket delitem(selfjk) bucket self table[ if bucket is noneno match found raise keyerrorkey errorrepr( ) del bucket[kmay raise keyerror def iter (self) for bucket in self table if bucket is not nonea nonempty slot for key in bucket yield key code fragment concrete hash map class with separate chaining |
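A brief hypothetical session with the class (the module name chain_hash_map is our assumption) shows that it behaves like any other map, with the chaining entirely hidden inside the buckets.

# assumes ChainHashMap (with HashMapBase, MapBase, and UnsortedTableMap) is importable;
# the module name is hypothetical
from chain_hash_map import ChainHashMap

counts = ChainHashMap()
for word in 'the quick brown fox jumps over the lazy dog the end'.split():
    counts[word] = counts.get(word, 0) + 1     # get is inherited from MutableMapping

print(counts['the'])                           # 3
print(len(counts))                             # 9 distinct words
del counts['fox']
print(sorted(counts))                          # remaining keys, via the inherited iteration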
22,079 | linear probing our implementation of probehashmap classusing open addressing with linear probingis given in code fragments and in order to support deletionswe use technique described in section in which we place special marker in table location at which an item has been deletedso that we can distinguish between it and location that has always been empty in our implementationwe declare class-level attributeavailas sentinel (we use an instance of the built-in object class because we do not care about any behaviors of the sentineljust our ability to differentiate it from other objects the most challenging aspect of open addressing is to properly trace the series of probes when collisions occur during an insertion or search for an item to this endwe define nonpublic utilityfind slotthat searches for an item with key in "bucketj (that iswhere is the index returned by the hash function for key class probehashmap(hashmapbase) """hash map implemented with linear probing for collision resolution ""avail objectsentinal marks locations of previous deletions def is available(selfj) """return true if index is available in table "" return self table[jis none or self table[jis probehashmap avail def find slot(selfjk) """search for key in bucket at index return (successindextupledescribed as follows if match was foundsuccess is true and index denotes its location if no match foundsuccess is false and index denotes first available slot "" firstavail none while true if self is available( ) if firstavail is none firstavail mark this as first avail if self table[jis none return (falsefirstavailsearch has failed elif =self table[jkey return (truejfound match keep looking (cyclically ( len(self tablecode fragment concrete probehashmap class that uses linear probing for collision resolution (continued in code fragment |
22,080 | def bucket getitem(selfjk)founds self find slot(jkif not foundraise keyerrorkey errorreturn self table[svalue repr( )def bucket setitem(selfjkv)founds self find slot(jkif not foundself table[sself item( ,vself + elseself table[svalue def bucket delitem(selfjk)founds self find slot(jkif not foundraise keyerrorkey errorrepr( )self table[sprobehashmap avail def iter (self)for in range(len(self table))if not self is available( )yield self table[jkey no match found insert new item size has increased overwrite existing no match found mark as vacated scan entire table code fragment concrete probehashmap class that uses linear probing for collision resolution (continued from code fragment the three primary map operations each rely on the find slot utility when attempting to retrieve the value associated with given keywe must continue probing until we find the keyor until we reach table slot with the none value we cannot stop the search upon reaching an avail sentinelbecause it represents location that may have been filled when the desired item was once inserted when key-value pair is being assigned in the mapwe must attempt to find an existing item with the given keyso that we might overwrite its valuebefore adding new item to the map thereforewe must search beyond any occurrences of the avail sentinel when inserting howeverif no match is foundwe prefer to repurpose the first slot marked with availif anywhen placing the new element in the table the find slot method enacts this logiccontinuing the search until truly empty slotbut returning the index of the first available slot for an insertion when deleting an existing item within bucket delitemwe intentionally set the table entry to the avail sentinel in accordance with our strategy |
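The following hypothetical session (module name ours) illustrates that deletions and reinsertions behave correctly from the caller's point of view, even though vacated slots are only marked with the _AVAIL sentinel internally.

# assumes ProbeHashMap (with HashMapBase and MapBase) is importable; module name hypothetical
from probe_hash_map import ProbeHashMap

m = ProbeHashMap()
for k in (13, 26, 39, 52):       # a few integer keys (collisions depend on the randomized MAD parameters)
    m[k] = k * k

del m[26]                        # the slot is marked _AVAIL rather than emptied
print(39 in m)                   # True: probing correctly skips past the _AVAIL marker
m[26] = 0                        # a vacated slot may be reused for the new item
print(len(m), m[26])             # 4 0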
22,081 | sorted maps the traditional map adt allows user to look up the value associated with given keybut the search for that key is form known as an exact search for examplecomputer systems often maintain information about events that have occurred (such as financial transactions)organizing such events based upon what are known as time stamps if we can assume that time stamps are unique for particular systemthen we might organize map with time stamp serving as the keyand record about the event that occurred at that time as the value particular time stamp could serve as reference id for an eventin which case we can quickly retrieve information about that event from the map howeverthe map adt does not provide any way to get list of all events ordered by the time at which they occuror to search for which event occurred closest to particular time in factthe fast performance of hash-based implementations of the map adt relies on the intentionally scattering of keys that may seem very "nearto each other in the original domainso that they are more uniformly distributed in hash table in this sectionwe introduce an extension known as the sorted map adt that includes all behaviors of the standard mapplus the followingm find min)return the (key,valuepair with minimum key (or noneif map is emptym find max)return the (key,valuepair with maximum key (or noneif map is emptym find lt( )return the (key,valuepair with the greatest key that is strictly less than (or noneif no such item existsm find le( )return the (key,valuepair with the greatest key that is less than or equal to (or noneif no such item existsm find gt( )return the (key,valuepair with the least key that is strictly greater than (or noneif no such item existsm find ge( )return the (key,valuepair with the least key that is greater than or equal to (or noneif no such itemm find range(startstop)iterate all (key,valuepairs with start <key stop if start is noneiteration begins with minimum keyif stop is noneiteration concludes with maximum key iter( )iterate all keys of the map according to their natural orderfrom smallest to largest reversed( )iterate all keys of the map in reverse orderin pythonthis is implemented with the reversed method |
22,082 | sorted search tables several data structures can efficiently support the sorted map adtand we will examine some advanced techniques in section and in this sectionwe begin by exploring simple implementation of sorted map we store the map' items in an array-based sequence so that they are in increasing order of their keysassuming the keys have naturally defined order (see figure we refer to this implementation of map as sorted search table figure realization of map by means of sorted search table we show only the keys for this mapso as to highlight their ordering as was the case with the unsorted table map of section the sorted search table has space requirement that is ( )assuming we grow and shrink the array to keep its size proportional to the number of items in the map the primary advantage of this representationand our reason for insisting that be array-basedis that it allows us to use the binary search algorithm for variety of efficient operations binary search and inexact searches we originally presented the binary search algorithm in section as means for detecting whether given target is stored within sorted sequence in our original presentation (code fragment on page ) binary search function returned true of false to designate whether the desired target was found while such an approach could be used to implement the contains method of the map adtwe can adapt the binary search algorithm to provide far more useful information when performing forms of inexact search in support of the sorted map adt the important realization is that while performing binary searchwe can determine the index at or near where target might be found during successful searchthe standard implementation determines the precise index at which the target is found during an unsuccessful searchalthough the target is not foundthe algorithm will effectively determine pair of indices designating elements of the collection that are just less than or just greater than the missing target as motivating exampleour original simulation from figure on page shows successful binary search for target of using the same data we portray in figure had we instead been searching for the first four steps of the algorithm would be the same the subsequent difference is that we would make an additional call with inverted parameters high= and low= effectively concluding that the missing target lies in the gap between values and in that example |
22,083 | implementation in code fragments through we present complete implementation of classsortedtablemapthat supports the sorted map adt the most notable feature of our design is the inclusion of find index utility function this method using the binary search algorithmbut by convention returns the index of the leftmost item in the search interval having key greater than or equal to thereforeif the key is presentit will return the index of the item having that key (recall that keys are unique in map when the key is missingthe function returns the index of the item in the search interval that is just beyond where the key would have been located as technicalitythe method returns index high to indicate that no items of the interval had key greater than we rely on this utility method when implementing the traditional map operations and the new sorted map operations the body of each of the getitem setitem and delitem methods begins with call to find index to determine candidate index at which matching key might be found for getitem we simply check whether that is valid index containing the target to determine the result for setitem recall that the goal is to replace the value of an existing itemif one with key is foundbut otherwise to insert new item into the map the index returned by find index will be the index of the matchif one existsor otherwise the exact index at which the new item should be inserted for delitem we again rely on the convenience of find index to determine the location of the item to be poppedif any our find index utility is equally valuable when implementing the various inexact search methods given in code fragment for each of the methods find ltfind lefind gtand find gewe begin with call to find index utilitywhich locates the first index at which there is an element with key >kif any this is precisely what we want for find geif validand just beyond the index we want for find lt for find gt and find le we need some extra case analysis to distinguish whether the indicated index has key equal to for exampleif the indicated item has matching keyour find gt implementation increments the index before continuing with the process (we omit the implementation of find lefor brevity in all caseswe must properly handle boundary casesreporting none when unable to find key with the desired property our strategy for implementing find range is to use the find index utility to locate the first item with key >start (assuming start is not nonewith that knowledgewe use while loop to sequentially report items until reaching one that has key greater than or equal to the stopping value (or until reaching the end of the tableit is worth noting that the while loop may trivially iterate zero items if the first key that is greater than or equal to start also happens to be greater than or equal to stop this represents an empty range in the map |
22,084 | mapshash tablesand skip lists class sortedtablemap(mapbase) """map implementation using sorted table "" nonpublic behaviors def find index(selfklowhigh) """return index of the leftmost item with key greater than or equal to return high if no such item qualifies that isj will be returned such that all items of slice table[low:jhave key all items of slice table[ :high+ have key > "" if high low return high no element qualifies else mid (low high/ if =self table[midkey return mid found exact match elif self table[midkeynotemay return mid return self find index(klowmid else return self find index(kmid highanswer is right of mid public behaviors def init (self) """create an empty map "" self table def len (self) """return number of items in the map "" return len(self table def getitem (selfk) """return value associated with key (raise keyerror if not found"" self find index( len(self table if =len(self tableor self table[jkey ! raise keyerrorkey errorrepr( ) return self table[jvalue code fragment an implementation of sortedtablemap class (continued in code fragments and |
22,085 | def setitem (selfkv)"""assign value to key koverwriting existing value if present "" self find index( len(self table if len(self tableand self table[jkey =kreassign value self table[jvalue elseadds new item self table insert(jself item( , )def delitem (selfk)"""remove item associated with key (raise keyerror if not found"" self find index( len(self table if =len(self tableor self table[jkey !kraise keyerrorkey errorrepr( )delete item self table pop(jdef iter (self)"""generate keys of the map ordered from minimum to maximum ""for item in self tableyield item key def reversed (self)"""generate keys of the map ordered from maximum to minimum ""for item in reversed(self table)yield item key def find min(self)"""return (key,valuepair with minimum key (or none if empty""if len(self table return (self table[ keyself table[ valueelsereturn none def find max(self)"""return (key,valuepair with maximum key (or none if empty""if len(self table return (self table[- keyself table[- valueelsereturn none code fragment an implementation of sortedtablemap class (together with code fragments and |
22,086 | def find ge(selfk)"""return (key,valuepair with least key greater than or equal to "" key > self find index( len(self table if len(self table)return (self table[jkeyself table[jvalueelsereturn none def find lt(selfk)"""return (key,valuepair with greatest key strictly less than "" key > self find index( len(self table if return (self table[ - keyself table[ - valuenote use of - elsereturn none def find gt(selfk)"""return (key,valuepair with least key strictly greater than "" key > self find index( len(self table if len(self tableand self table[jkey =kj + advanced past match if len(self table)return (self table[jkeyself table[jvalueelsereturn none def find range(selfstartstop)"""iterate all (key,valuepairs such that start <key stop if start is noneiteration begins with minimum key of map if stop is noneiteration continues through the maximum key of map ""if start is nonej= elsefind first result self find index(start len(self table)- while len(self tableand (stop is none or self table[jkey stop)yield (self table[jkeyself table[jvaluej + code fragment an implementation of sortedtablemap class (continued from code fragments and we omit the find le method due to space |
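A short hypothetical session with the completed class (module name ours) demonstrates the sorted iteration and the inexact search methods.

# assumes SortedTableMap (and MapBase) is importable; module name hypothetical
from sorted_table_map import SortedTableMap

m = SortedTableMap()
for k in (25, 5, 15, 30, 10):
    m[k] = chr(64 + k % 26)                   # arbitrary values for the demo

print(list(m))                                # [5, 10, 15, 25, 30] -- keys kept in sorted order
print(m.find_min(), m.find_max())             # smallest and largest (key, value) pairs
print(m.find_ge(17))                          # pair with least key >= 17, i.e., key 25
print(m.find_lt(15))                          # pair with greatest key < 15, i.e., key 10
print([k for k, v in m.find_range(8, 27)])    # [10, 15, 25]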
22,087 | analysis we conclude by analyzing the performance of our sortedtablemap implementation summary of the running times for all methods of the sorted map adt (including the traditional map operationsis given in table it should be clear that the len find minand find max methods run in ( timeand that iterating the keys of the table in either direction can be peformed in (ntime the analysis for the various forms of search all depend on the fact that binary search on table with entries runs in (log ntime this claim was originally shown as proposition in section and that analysis clearly applies to our find index method as well we therefore claim an (log nworst-case running time for methods getitem find ltfind gtfind leand find ge each of these makes single call to find indexfollowed by constant number of additional steps to determine the appropriate answer based on the index the analysis of find range is bit more interesting it begins with binary search to find the first item within the range (if anyafter thatit executes loop that takes ( time per iteration to report subsequent values until reaching the end of the range if there are items reported in the rangethe total running time is ( log nin contrast to the efficient search operationsupdate operations for sorted table may take considerable time although binary search can help identify the index at which an update occursboth insertions and deletions requirein the worst casethat linearly many existing elements be shifted in order to maintain the sorted order of the table specificallythe potential call to table insert from within setitem and table pop from within delitem lead to (nworst-case time (see the discussion of corresponding operations of the list class in section in conclusionsorted tables are primarily used in situations where we expect many searches but relatively few updates operation len(mk in [kv del [km find min) find maxm find lt( ) find gt(km find le( ) find ge(km find range(startstopiter( )reversed(mrunning time ( (log no(nworst caseo(log nif existing (nworst case ( (log no( log nwhere items are reported (ntable performance of sorted mapas implemented with sortedtablemap we use to denote the number of items in the map at the time the operation is performed the space requirement is ( |
22,088 | two applications of sorted maps in this sectionwe explore applications in which there is particular advantage to using sorted map rather than traditional (unsortedmap to apply sorted mapkeys must come from domain that is totally ordered furthermoreto take advantage of the inexact or range searches afforded by sorted mapthere should be some reason why nearby keys have relevance to search flight databases there are several web sites on the internet that allow users to perform queries on flight databases to find flights between various citiestypically with the intent to buy ticket to make querya user specifies origin and destination citiesa departure dateand departure time to support such querieswe can model the flight database as mapwhere keys are flight objects that contain fields corresponding to these four parameters that isa key is tuple (origindestinationdatetimeadditional information about flightsuch as the flight numberthe number of seats still available in first (fand coach (yclassthe flight durationand the farecan be stored in the value object finding requested flight is not simply matter of finding an exact match for requested query although user typically wants to exactly match the origin and destination citieshe or she may have flexibility for the departure dateand certainly will have some flexibility for the departure time on specific day we can handle such query by ordering our keys lexicographically thenan efficient implementation for sorted map would be good way to satisfy usersqueries for instancegiven user query key kwe could call find ge(kto return the first flight between the desired citieshaving departure date and time matching the desired query or later better yetwith well-constructed keyswe could use find range( to find all flights within given range of times for exampleif (ordpvd may : )and (ordpvd may : ) respective call to find range( might result in the following sequence of key-value pairs(ordpvd may : (ordpvd may : (ordpvd may : (ordpvd may : (aa : )(aa : )(aa : )(aa : |
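A minimal sketch of this idea, using the SortedTableMap developed above with lexicographically ordered tuple keys, might look as follows; the module name, flight numbers, and seat counts are invented for illustration.

# a minimal sketch; flight data invented for illustration, module name hypothetical
from sorted_table_map import SortedTableMap

db = SortedTableMap()
db[('ORD', 'PVD', '09May', '09:30')] = ('AA 1840', 'F5', 'Y15')
db[('ORD', 'PVD', '09May', '13:29')] = ('AA 600',  'F2', 'Y0')
db[('ORD', 'PVD', '09May', '17:39')] = ('AA 416',  'F3', 'Y9')
db[('ORD', 'PVD', '09May', '19:50')] = ('AA 1828', 'F2', 'Y25')
db[('ORD', 'MSP', '09May', '11:10')] = ('AA 723',  'F1', 'Y7')

# all ORD -> PVD departures in the early afternoon of 09 May
k1 = ('ORD', 'PVD', '09May', '12:00')
k2 = ('ORD', 'PVD', '09May', '18:00')
for key, value in db.find_range(k1, k2):
    print(key, value)          # reports only the 13:29 and 17:39 flights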
22,089 | Maxima sets
Life is full of trade-offs. We often have to trade off a desired performance measure against a corresponding cost. Suppose, for the sake of an example, we are interested in maintaining a database rating automobiles by their maximum speeds and their cost. We would like to allow someone with a certain amount of money to query our database to find the fastest car they can possibly afford. We can model such a trade-off problem by using a key-value pair to model the two parameters that we are trading off, which in this case would be the pair (cost, speed) for each car. Notice that some cars are strictly better than other cars using this measure. For example, a car that is both cheaper and faster than another is strictly better. At the same time, there are some cars that are not strictly dominated by another car; for example, a cheaper but slower car may be better or worse than a more expensive, faster one, depending on how much money we have to spend (see the accompanying figure).

(Figure: Illustrating the cost-performance trade-off with pairs represented by points in the plane; cost increases along the horizontal axis and performance along the vertical axis. Notice that point p is strictly better than points c, d, and e, but may be better or worse than points a, b, f, g, and h, depending on the price we are willing to pay. Thus, if we were to add p to our set, we could remove the points c, d, and e, but not the others.)

Formally, we say a cost-performance pair (a, b) dominates a pair (c, d) ≠ (a, b) if a ≤ c and b ≥ d, that is, if the first pair has no greater cost and at least as good performance. A pair (a, b) is called a maximum pair if it is not dominated by any other pair. We are interested in maintaining the set of maxima of a collection of cost-performance pairs. That is, we would like to add new pairs to this collection (for example, when a new car is introduced), and to query this collection for a given dollar amount, d, to find the fastest car that costs no more than d dollars.
22,090 | mapshash tablesand skip lists maintaining maxima set with sorted map we can store the set of maxima pairs in sorted mapmso that the cost is the key field and performance (speedis the value field we can then implement operations add(cp)which adds new cost-performance pair (cp)and best( )which returns the best pair with cost at most cas shown in code fragment class costperformancedatabase """maintain database of maximal (cost,performancepairs "" def init (self) """create an empty database ""or more efficient sorted map self sortedtablemap def best(selfc) """return (cost,performancepair with largest cost not exceeding return none if there is no such pair "" return self find le( def add(selfcp) """add new entry with cost and performance "" determine if ( ,pis dominated by an existing pair other is at least as cheap as other self find le( if other is not none and other[ >pif its performance is as good return ( ,pis dominatedso ignore elseadd ( ,pto database self [cp and now remove any pairs that are dominated by ( ,pother more expensive than other self find gt( while other is not none and other[ < del self [other[ ] other self find gt(ccode fragment an implementation of class maintaining set of maxima cost-performance pairs using sorted map unfortunatelyif we implement using the sortedtablemapthe add behavior has (nworst-case running time ifon the other handwe implement using skip listwhich we next describewe can perform best(cqueries in (log nexpected time and add(cpupdates in (( rlog nexpected timewhere is the number of points removed |
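A brief hypothetical session (module name and car data ours) shows the maxima-set behavior of the class above.

# assumes CostPerformanceDatabase (and SortedTableMap) is importable; data invented
from cost_performance import CostPerformanceDatabase

db = CostPerformanceDatabase()
db.add(20000, 100)        # (cost, speed)
db.add(30000, 90)         # dominated by (20000, 100): silently ignored
db.add(30000, 120)        # kept: more expensive, but faster
db.add(25000, 140)        # kept, and it removes (30000, 120), which is now dominated

print(db.best(22000))     # (20000, 100): fastest car costing at most 22000
print(db.best(40000))     # (25000, 140)
print(db.best(15000))     # None: nothing affordable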
22,091 | skip lists an interesting data structure for realizing the sorted map adt is the skip list in section we saw that sorted array will allow (log )-time searches via the binary search algorithm unfortunatelyupdate operations on sorted array have (nworst-case running time because of the need to shift elements in we demonstrated that linked lists support very efficient update operationsas long as the position within the list is identified unfortunatelywe cannot perform fast searches on standard linked listfor examplethe binary search algorithm requires an efficient means for direct accessing an element of sequence by index skip lists provide clever compromise to efficiently support search and update operations skip list for map consists of series of lists { sh each list si stores subset of the items of sorted by increasing keysplus items with two sentinel keys denoted and +where is smaller than every possible key that can be inserted in and is larger than every possible key that can be inserted in in additionthe lists in satisfy the followinglist contains every item of the map (plus sentinels and +for list si contains (in addition to and + randomly generated subset of the items in list si- list sh contains only and an example of skip list is shown in figure it is customary to visualize skip list with list at the bottom and lists sh above it alsowe refer to as the height of skip list intuitivelythe lists are set up so that si+ contains more or less alternate items of si as we shall see in the details of the insertion methodthe items in si+ are chosen at random from the items in si by picking each item from si to also be in si+ with probability / that isin essencewe "flip coinfor each item in si figure example of skip list storing items for simplicitywe show only the itemskeysnot their associated values |
22,092 | mapshash tablesand skip lists and place that item in si+ if the coin comes up "heads thuswe expect to have about / itemss to have about / itemsandin generalsi to have about / items in other wordswe expect the height of to be about log the halving of the number of items from one list to the next is not enforced as an explicit property of skip listshowever insteadrandomization is used functions that generate numbers that can be viewed as random numbers are built into most modern computersbecause they are used extensively in computer gamescryptographyand computer simulationssome functionscalled pseudorandom number generatorsgenerate random-like numbersstarting with an initial seed (see discusion of random module in section other methods use hardware devices to extract "truerandom numbers from nature in any casewe will assume that our computer has access to numbers that are sufficiently random for our analysis the main advantage of using randomization in data structure and algorithm design is that the structures and functions that result are usually simple and efficient the skip list has the same logarithmic time bounds for searching as is achieved by the binary search algorithmyet it extends that performance to update methods when inserting or deleting items neverthelessthe bounds are expected for the skip listwhile binary search has worst-case bound with sorted table skip list makes random choices in arranging its structure in such way that search and update times are (log non averagewhere is the number of items in the map interestinglythe notion of average time complexity used here does not depend on the probability distribution of the keys in the input insteadit depends on the use of random-number generator in the implementation of the insertions to help decide where to place the new item the running time is averaged over all possible outcomes of the random numbers used when inserting entries using the position abstraction used for lists and treeswe view skip list as two-dimensional collection of positions arranged horizontally into levels and vertically into towers each level is list si and each tower contains positions storing the same item across consecutive lists the positions in skip list can be traversed using the following operationsnext( )return the position following on the same level prev( )return the position preceding on the same level below( )return the position below in the same tower above( )return the position above in the same tower we conventionally assume that the above operations return none if the position requested does not exist without going into the detailswe note that we can easily implement skip list by means of linked structure such that the individual traversal methods each take ( timegiven skip-list position such linked structure is essentially collection of doubly linked lists aligned at towerswhich are also doubly linked lists |
22,093 | Search and Update Operations in a Skip List

The skip-list structure affords simple map search and update algorithms. In fact, all of the skip-list search and update algorithms are based on an elegant SkipSearch method that takes a key k and finds the position p of the item in list S0 that has the largest key less than or equal to k (which is possibly −∞).

Searching in a Skip List

Suppose we are given a search key k. We begin the SkipSearch method by setting a position variable p to the topmost, left position in the skip list S, called the start position of S. That is, the start position is the position of Sh storing the special entry with key −∞. We then perform the following steps (see the accompanying figure), where key(p) denotes the key of the item at position p:

1. If below(p) is None, then the search terminates--we are at the bottom and have located the item in S with the largest key less than or equal to the search key k. Otherwise, we drop down to the next lower level in the present tower by setting p = below(p).
2. Starting at position p, we move p forward until it is at the rightmost position on the present level such that key(p) ≤ k. We call this the scan forward step. Note that such a position always exists, since each level contains the keys −∞ and +∞. It may be that p remains where it started after we perform such a forward scan for this level.
3. Return to step 1.

Figure: Example of a search in a skip list. The positions examined when searching for the given key are highlighted.

We give a pseudo-code description of the skip-list search algorithm, SkipSearch, in the code fragment that follows. Given this method, the map operation M[k] is performed by computing p = SkipSearch(k) and testing whether or not key(p) == k. If these two keys are equal, we return the associated value; otherwise, we raise a KeyError.
22,094 | Algorithm SkipSearch(k):
  Input: A search key k
  Output: Position p in the bottom list S0 with the largest key such that key(p) ≤ k
  p = start                                {begin at start position}
  while below(p) is not None do
    p = below(p)                           {drop down}
    while k ≥ key(next(p)) do
      p = next(p)                          {scan forward}
  return p

Code Fragment: Algorithm to search a skip list S for key k.

As it turns out, the expected running time of algorithm SkipSearch on a skip list with n entries is O(log n). We postpone the justification of this fact, however, until after we discuss the implementation of the update methods for skip lists. Navigation starting at the position identified by SkipSearch(k) can be easily used to provide the additional forms of searches in the sorted map ADT (e.g., find_gt, find_range).
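Before turning to insertion, note that the SkipSearch pseudo-code above translates almost line for line into Python on the hypothetical _SkipNode layout sketched earlier. This is an illustrative sketch, not the book's implementation; skip_search is an assumed name.

def skip_search(start, k):
    """Return the bottom-level node holding the largest key <= k.

    start must be the topmost, left sentinel (key -infinity) of a skip list
    built from _SkipNode objects as in the earlier sketch.
    """
    p = start
    while p.below is not None:                    # not yet on the bottom list
        p = p.below                               # drop down
        while p.next is not None and k >= p.next.key:
            p = p.next                            # scan forward
    return p

Because every level ends with a +∞ sentinel, the inner comparison stops before running off a level; the extra is-not-None check merely guards a partially built structure.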
22,095 | Insertion in a Skip List

The execution of the map operation M[k] = v begins with a call to SkipSearch(k). This gives us the position p of the bottom-level item with the largest key less than or equal to k (note that p may hold the special item with key −∞). If key(p) == k, the associated value is overwritten with v. Otherwise, we need to create a new tower for item (k,v). We insert (k,v) immediately after position p within S0. After inserting the new item at the bottom level, we use randomization to decide the height of the tower for the new item. We "flip" a coin, and if the flip comes up tails, then we stop here. Else (the flip comes up heads), we backtrack to the previous (next higher) level and insert (k,v) in this level at the appropriate position. We again flip a coin; if it comes up heads, we go to the next higher level and repeat. Thus, we continue to insert the new item (k,v) in lists until we finally get a flip that comes up tails. We link together all the references to the new item (k,v) created in this process to create its tower. A coin flip can be simulated with Python's built-in pseudo-random number generator from the random module by calling randrange(2), which returns 0 or 1, each with probability 1/2.

We give the insertion algorithm for a skip list S in the code fragment that follows, and we illustrate it in the accompanying figure. The algorithm uses an insertAfterAbove(p, q, (k,v)) method that inserts a position storing the item (k,v) after position p (on the same level as p) and above position q, returning the new position r (and setting internal references so that next, prev, above, and below methods will work correctly for p, q, and r). The expected running time of the insertion algorithm on a skip list with n entries is O(log n), which we show in the analysis section below.

Algorithm SkipInsert(k,v):
  Input: Key k and value v
  Output: Topmost position of the item inserted in the skip list
  p = SkipSearch(k)
  q = None                                 {q will represent top node in new item's tower}
  i = −1
  repeat
    i = i + 1
    if i ≥ h then
      h = h + 1                            {add a new level to the skip list}
      t = next(s)
      s = insertAfterAbove(None, s, (−∞, None))   {grow leftmost tower}
      insertAfterAbove(s, t, (+∞, None))          {grow rightmost tower}
    while above(p) is None do
      p = prev(p)                          {scan backward}
    p = above(p)                           {jump up to higher level}
    q = insertAfterAbove(p, q, (k,v))      {increase height of new item's tower}
  until coinFlip() == tails
  n = n + 1
  return q

Code Fragment: Insertion in a skip list. Method coinFlip() returns "heads" or "tails", each with probability 1/2. Instance variables n, h, and s hold the number of entries, the height, and the start node of the skip list.

Figure: Insertion of a new entry into the skip list of the previous figure. We assume that the random "coin flips" for the new entry came up heads three times in a row, followed by tails. The positions visited are highlighted. The positions inserted to hold the new entry are drawn with thick lines, and the positions preceding them are flagged.
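The two building blocks of SkipInsert, the coin flip and insertAfterAbove, can be sketched in Python over the same hypothetical node layout introduced earlier. The function names and the use of randrange are illustrative choices, not the book's own code.

from random import randrange

def coin_flip_is_heads():
    """Simulate one fair coin flip; treat 1 as heads and 0 as tails."""
    return randrange(2) == 1

def insert_after_above(p, q, key, value=None):
    """Create a node storing (key, value) after p (same level) and above q.

    Either p or q may be None; all prev/next/above/below references are
    rewired so that the four traversal operations keep working.
    """
    node = _SkipNode(key, value)          # _SkipNode is from the earlier sketch
    if p is not None:                     # splice into p's level, right after p
        node.next = p.next
        node.prev = p
        if p.next is not None:
            p.next.prev = node
        p.next = node
    if q is not None:                     # stack the node on top of q's tower
        node.below = q
        q.above = node
    return node

With these helpers, the repeat loop of SkipInsert keeps calling insert_after_above one level higher until coin_flip_is_heads() returns False.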
22,096 | Removal in a Skip List

Like the search and insertion algorithms, the removal algorithm for a skip list is quite simple. In fact, it is even easier than the insertion algorithm. That is, to perform the map operation del M[k] we begin by executing method SkipSearch(k). If the position p stores an entry with key different from k, we raise a KeyError. Otherwise, we remove p and all the positions above p, which are easily accessed by using above operations to climb up the tower of this entry in S starting at position p. While removing levels of the tower, we reestablish links between the horizontal neighbors of each removed position. The removal algorithm is illustrated in the accompanying figure, and a detailed description of it is left as an exercise. As we show in the next subsection, a deletion operation in a skip list with n entries has O(log n) expected running time.

Before we give this analysis, however, there are some minor improvements to the skip-list data structure we would like to discuss. First, we do not actually need to store references to values at the levels of the skip list above the bottom level, because all that is needed at these levels are references to keys. In fact, we can more efficiently represent a tower as a single object, storing the key-value pair and maintaining as many previous references and next references as the number of levels the tower reaches. Second, for the horizontal axes, it is possible to keep the list singly linked, storing only the next references. We can then perform insertions and removals in strictly top-down, scan-forward fashion. We explore the details of this optimization in an exercise. Neither of these optimizations improves the asymptotic performance of skip lists by more than a constant factor, but these improvements can, nevertheless, be meaningful in practice. In fact, experimental evidence suggests that optimized skip lists are faster in practice than AVL trees and other balanced search trees, which are discussed in a later chapter.

Figure: Removal of an entry from the skip list of the previous figure. The positions visited after the search for the position p of S0 holding the entry are highlighted. The positions removed are drawn with dashed lines.
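A compact sketch of the removal logic on the hypothetical node layout used earlier; skip_remove is an assumed name, built on the skip_search and _SkipNode sketches above, and is not the exercise's official solution.

def skip_remove(start, k):
    """Delete the entry with key k from a skip list of _SkipNode objects.

    Raises KeyError if k is absent; otherwise climbs the entry's tower and
    splices each level's node out of its horizontal list.
    """
    p = skip_search(start, k)             # bottom node with largest key <= k
    if p.key != k:
        raise KeyError(repr(k))
    while p is not None:
        p.prev.next = p.next              # the -infinity sentinel guarantees p.prev exists
        if p.next is not None:
            p.next.prev = p.prev
        p = p.above                       # continue with the position above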
22,097 | Maintaining the Topmost Level

A skip list S must maintain a reference to the start position (the topmost, left position in S) as an instance variable, and must have a policy for any insertion that wishes to continue inserting a new entry past the top level of S. There are two possible courses of action we can take, both of which have their merits.

One possibility is to restrict the top level, h, to be kept at some fixed value that is a function of n, the number of entries currently in the map (from the analysis we will see that a small constant multiple of ⌈log n⌉ is a reasonable choice, and a slightly larger multiple is even safer). Implementing this choice means that we must modify the insertion algorithm to stop inserting a new position once we reach the topmost level (unless ⌈log n⌉ < ⌈log(n+1)⌉, in which case we can now go at least one more level, since the bound on the height is increasing).

The other possibility is to let an insertion continue inserting a new position as long as heads keeps getting returned from the random number generator. This is the approach taken by algorithm SkipInsert of the code fragment above. As we show in the analysis of skip lists, the probability that an insertion will go to a level that is more than O(log n) is very low, so this design choice should also work.

Either choice will still result in the expected O(log n) time to perform search, insertion, and removal, however, which we show in the next section.

Probabilistic Analysis of Skip Lists

As we have shown above, skip lists provide a simple implementation of a sorted map. In terms of worst-case performance, however, skip lists are not a superior data structure. In fact, if we do not officially prevent an insertion from continuing significantly past the current highest level, then the insertion algorithm can go into what is almost an infinite loop (it is not actually an infinite loop, however, since the probability of having a fair coin repeatedly come up heads forever is 0). Moreover, we cannot infinitely add positions to a list without eventually running out of memory. In any case, if we terminate position insertion at the highest level h, then the worst-case running time for performing the __getitem__, __setitem__, and __delitem__ map operations in a skip list S with n entries and height h is O(n + h). This worst-case performance occurs when the tower of every entry reaches level h−1, where h is the height of S. However, this event has very low probability. Judging from this worst case, we might conclude that the skip-list structure is strictly inferior to the other map implementations discussed earlier in this chapter. But this would not be a fair analysis, for this worst-case behavior is a gross overestimate.
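If one adopts the height-capping policy mentioned above, the cap might be computed from n along these lines; the specific constants (the floor of 10 and the factor of 2) are illustrative choices of this sketch, not values mandated by the text.

from math import ceil, log2

def height_cap(n):
    """Illustrative cap on the top level for a skip list holding n entries."""
    if n <= 1:
        return 10                          # arbitrary floor for very small maps
    return max(10, 2 * ceil(log2(n)))      # a small constant multiple of ceil(log n)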
22,098 | Bounding the Height of a Skip List

Because the insertion step involves randomization, a more accurate analysis of skip lists involves a bit of probability. At first, this might seem like a major undertaking, for a complete and thorough probabilistic analysis could require deep mathematics (and, indeed, there are several such deep analyses that have appeared in the data structures research literature). Fortunately, such an analysis is not necessary to understand the expected asymptotic behavior of skip lists. The informal and intuitive probabilistic analysis we give below uses only basic concepts of probability theory.

Let us begin by determining the expected value of the height h of a skip list S with n entries (assuming that we do not terminate insertions early). The probability that a given entry has a tower of height i ≥ 1 is equal to the probability of getting i consecutive heads when flipping a coin, that is, this probability is 1/2^i. Hence, the probability P_i that level i has at least one position is at most

P_i ≤ n/2^i,

for the probability that any one of n different events occurs is at most the sum of the probabilities that each occurs.

The probability that the height h of S is larger than i is equal to the probability that level i has at least one position, that is, it is no more than P_i. This means that h is larger than, say, 3 log n with probability at most

P_{3 log n} ≤ n/2^{3 log n} = n/n^3 = 1/n^2.

For example, if n = 1000, this probability is a one-in-a-million long shot. More generally, given a constant c > 1, h is larger than c log n with probability at most 1/n^{c−1}. That is, the probability that h is smaller than c log n is at least 1 − 1/n^{c−1}. Thus, with high probability, the height h of S is O(log n).
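The bound can also be checked empirically. The short simulation below grows one tower per entry by fair coin flips and records how often the resulting height exceeds 3 log n; it is an illustrative experiment, not part of the analysis itself.

from math import log2
from random import randrange

def simulated_height(n):
    """Height reached when n entries each grow a tower by fair coin flips."""
    top = 0
    for _ in range(n):
        tower = 0
        while randrange(2) == 1:           # keep growing the tower on heads
            tower += 1
        top = max(top, tower)
    return top + 1                         # the top level holds only sentinels

if __name__ == '__main__':
    n, trials = 1000, 1000
    exceeded = sum(simulated_height(n) > 3 * log2(n) for _ in range(trials))
    print(f'height exceeded 3 log n in {exceeded} of {trials} trials')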
22,099 | Analyzing Search Time in a Skip List

Next, consider the running time of a search in skip list S, and recall that such a search involves two nested while loops. The inner loop performs a scan forward on a level of S as long as the next key is no greater than the search key k, and the outer loop drops down to the next level and repeats the scan forward iteration. Since the height h of S is O(log n) with high probability, the number of drop-down steps is O(log n) with high probability.

So we have yet to bound the number of scan-forward steps we make. Let n_i be the number of keys examined while scanning forward at level i. Observe that, after the key at the starting position, each additional key examined in a scan-forward at level i cannot also belong to the level above; if any of these keys were on the previous level, we would have encountered them in the previous scan-forward step. Thus, the probability that any key is counted in n_i is 1/2. Therefore, the expected value of n_i is exactly equal to the expected number of times we must flip a fair coin before it comes up heads. This expected value is 2. Hence, the expected amount of time spent scanning forward at any level i is O(1). Since S has O(log n) levels with high probability, a search in S takes expected time O(log n). By a similar analysis, we can show that the expected running time of an insertion or removal is O(log n).

Space Usage in a Skip List

Finally, let us turn to the space requirement of a skip list S with n entries. As we observed above, the expected number of positions at level i is n/2^i, which means that the expected total number of positions in S is

sum_{i=0}^{h} n/2^i = n * sum_{i=0}^{h} 1/2^i.

Using the proposition on geometric summations, we have

n * sum_{i=0}^{h} 1/2^i = n * (1 − (1/2)^{h+1}) / (1 − 1/2) = 2n * (1 − (1/2)^{h+1}) < 2n

for any h ≥ 0. Hence, the expected space requirement of S is O(n).

The table below summarizes the performance of a sorted map realized by a skip list.

Operation                                                  Running Time
len(M)                                                     O(1)
k in M                                                     O(log n) expected
M[k] = v                                                   O(log n) expected
del M[k]                                                   O(log n) expected
M.find_min(), M.find_max()                                 O(1)
M.find_lt(k), M.find_gt(k), M.find_le(k), M.find_ge(k)     O(log n) expected
M.find_range(start, stop)                                  O(s + log n) expected, with s items reported
iter(M), reversed(M)                                       O(n)

Table: Performance of a sorted map implemented with a skip list. We use n to denote the number of entries in the dictionary at the time the operation is performed. The expected space requirement is O(n).
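A quick numerical check of the geometric-sum bound, purely for illustration; the helper name is hypothetical.

def expected_positions(n, h):
    """Expected total number of positions: the sum over levels 0..h of n / 2**i."""
    return sum(n / 2 ** i for i in range(h + 1))

for h in (1, 5, 10, 20):
    total = expected_positions(1000, h)
    assert total < 2 * 1000                # the 2n bound holds for every height h
    print(h, round(total, 3))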