The Java Virtual Machine (JVM) maintains a stack whose elements are descriptors of the currently active (that is, nonterminated) invocations of methods. These descriptors are called frames. A frame for some invocation of method "fool" stores the current values of the local variables and parameters of method fool, as well as information on method "cool" that called fool and on what needs to be returned to method cool.

Figure: An example of a Java method stack. Method fool has just been called by method cool, which itself was previously called by method main. Note the values of the program counter, parameters, and local variables stored in the stack frames. When the invocation of method fool terminates, the invocation of method cool will resume its execution at the instruction obtained by incrementing the value of the program counter stored in the stack frame.
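As a small illustration of these frames, the sketch below (reusing the method names from the figure; the program itself is only an illustration) prints the chain of active invocations by asking the current thread for its stack trace, which reports the frames the JVM is already maintaining.

public class MethodStackDemo {
    public static void main(String[] args) {
        cool(3);
    }

    static void cool(int i) {
        fool(i + 1);  // cool's frame stays suspended until fool returns
    }

    static void fool(int j) {
        // The trace lists getStackTrace itself, then fool (the running method),
        // then cool and main (the suspended methods below it on the stack).
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println(frame.getMethodName());
        }
    }
}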
The JVM keeps a special variable, called the program counter, to maintain the address of the statement the JVM is currently executing in the program. When method cool invokes another method fool, the current value of the program counter is recorded in the frame of the current invocation of cool (so the JVM will know where to return to when method fool is done).

At the top of the Java stack is the frame of the running method, that is, the method that currently has control of the execution. The remaining elements of the stack are frames of the suspended methods, that is, methods that have invoked another method and are currently waiting for it to return control to them upon its termination. The order of the elements in the stack corresponds to the chain of invocations of the currently active methods. When a new method is invoked, a frame for this method is pushed onto the stack. When it terminates, its frame is popped from the stack and the JVM resumes the execution of the previously suspended method.
Understanding Call-by-Value Parameter Passing

The JVM uses the Java stack to perform parameter passing to methods. Specifically, Java uses the call-by-value parameter-passing protocol. This means that the current value of a variable (or expression) is what is passed as an argument to a called method.

In the case of a variable x of a primitive type, such as an int or float, the current value of x is simply the number that is associated with x. When such a value is passed to the called method, it is assigned to a local variable in the called method's frame. (This simple assignment is also illustrated in the figure.) Note that if the called method changes the value of this local variable, it will not change the value of the variable in the calling method.

In the case of a variable x that refers to an object, however, the current value of x is the memory address of object x. (We say more about where this address actually is in a later section.) Thus, when object x is passed as a parameter to some method, the address of x is actually passed. When this address is assigned to some local variable y in the called method, y will refer to the same object that x refers to. Therefore, if the called method changes the internal state of the object that y refers to, it will simultaneously be changing the internal state of the object that x refers to (which is the same object). Nevertheless, if the called program changes y to refer to some other object, x will remain unchanged: it will still refer to the same object it was referencing before.

Thus, the Java method stack is used by the JVM to implement method calls and parameter passing. Incidentally, method stacks are not a specific feature of Java. They are used in the run-time environment of most modern programming languages, including C and C++.

The Operand Stack

Interestingly, there is actually another place where the JVM uses a stack. Arithmetic expressions, such as ((a + b) * (c + d))/e, are evaluated by the JVM using an operand stack. A simple binary operation, such as a + b, is computed by pushing a on the stack, pushing b on the stack, and then calling an instruction that pops the top two items from the stack, performs the binary operation on them, and pushes the result back onto the stack. Likewise, instructions for writing and reading elements to and from memory involve the use of pop and push methods for the operand stack. Thus, the JVM uses a stack to evaluate arithmetic expressions in Java.
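As a rough illustration of the operand-stack idea (a plain Java simulation, not actual JVM bytecode, and with arbitrary sample values), the following sketch evaluates ((a + b) * (c + d))/e by pushing operands and replacing the top two items with the result of each binary operation.

import java.util.ArrayDeque;
import java.util.Deque;

public class OperandStackDemo {
    // Pop the top two operands, apply the operator, push the result back.
    static void apply(Deque<Integer> stack, char op) {
        int right = stack.pop();
        int left = stack.pop();
        switch (op) {
            case '+': stack.push(left + right); break;
            case '*': stack.push(left * right); break;
            case '/': stack.push(left / right); break;
            default: throw new IllegalArgumentException("unknown operator: " + op);
        }
    }

    public static void main(String[] args) {
        int a = 2, b = 3, c = 4, d = 6, e = 5;
        Deque<Integer> operands = new ArrayDeque<>();
        operands.push(a); operands.push(b); apply(operands, '+');  // a + b
        operands.push(c); operands.push(d); apply(operands, '+');  // c + d
        apply(operands, '*');                                      // (a + b) * (c + d)
        operands.push(e); apply(operands, '/');                    // ... / e
        System.out.println(operands.pop());                        // prints 10
    }
}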
Evaluating an expression in this way corresponds to a postorder traversal of the expression tree, which is exactly the algorithm the JVM uses. We described that algorithm in a recursive way, however, not in a way that explicitly uses an operand stack. Nevertheless, this recursive description is equivalent to a nonrecursive version based on using an operand stack.

Implementing Recursion

One of the benefits of using a stack to implement method invocation is that it allows programs to use recursion. That is, it allows a method to call itself, as discussed earlier. Interestingly, early programming languages, such as COBOL and Fortran, did not originally use run-time stacks to implement method and procedure calls. But because of the elegance and efficiency that recursion allows, all modern programming languages, including the modern versions of classic languages like COBOL and Fortran, utilize a run-time stack for method and procedure calls.

In the execution of a recursive method, each box of the recursion trace corresponds to a frame of the Java method stack. Also, the content of the Java method stack corresponds to the chain of boxes from the initial method invocation to the current one.

To better illustrate how a run-time stack allows for recursive methods, let us consider a Java implementation of the classic recursive definition of the factorial function, n! = n (n - 1) (n - 2) ... 1, as shown in the code fragment below (a recursive method factorial). The first time we call method factorial(n), its stack frame includes a local variable storing the value n. Method factorial(n) recursively calls itself to compute (n - 1)!, which pushes a new frame on the Java run-time stack. In turn, this recursive invocation calls itself to compute (n - 2)!, and so on. The chain of recursive invocations ends when we reach factorial(0).
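A minimal version of such a recursive factorial method (a sketch of the classic definition, not necessarily the book's exact listing):

public class Factorial {
    /** Computes n! recursively; each call pushes a new frame on the Java method stack. */
    public static long factorial(long n) {
        if (n < 0)
            throw new IllegalArgumentException("n must be nonnegative");
        if (n == 0)
            return 1;                 // base case: 0! = 1, no further recursive frames
        return n * factorial(n - 1);  // recursive case: n! = n * (n - 1)!
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}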
Calling factorial(0) returns 1 immediately, without invoking itself recursively. The run-time stack allows for method factorial to exist simultaneously in several active frames (as many as n + 1 at some point). Each frame stores the value of its parameter as well as the value to be returned. Eventually, when the first recursive call terminates, it returns (n - 1)!, which is then multiplied by n to compute n! for the original call of the factorial method.

Allocating Space in the Memory Heap

We have already discussed how the Java Virtual Machine allocates a method's local variables in that method's frame on the Java run-time stack. The Java stack is not the only kind of memory available for program data in Java, however.

Dynamic Memory Allocation

Memory for an object can also be allocated dynamically during a method's execution, by having that method utilize the special new operator built into Java. For example, the following Java statement creates an array of integers whose size is given by the value of variable k:

int[] items = new int[k];

The size of the array above is known only at runtime. Moreover, the array may continue to exist even after the method that created it terminates. Thus, the memory for this array cannot be allocated on the Java stack.

The Memory Heap

Instead of using the Java stack for this object's memory, Java uses memory from another area of storage: the memory heap (which should not be confused with the "heap" data structure). We illustrate this memory area, together with the other memory areas in the Java Virtual Machine, in the figure below. The storage available in the memory heap is divided into blocks, which are contiguous array-like "chunks" of memory that may be of variable or fixed sizes. To simplify the discussion, let us assume that blocks in the memory heap are of a fixed size and that one block is big enough for any object we might want to create. (Efficiently handling the more general case is actually an interesting research problem.)

Figure: A schematic view of the layout of memory addresses in the Java Virtual Machine.
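The sketch below (with illustrative names not taken from the text) shows why such an array must live in the memory heap: the array created inside makeArray outlives that method's stack frame and is still usable from the caller.

public class HeapAllocationDemo {
    // The int[] object is allocated in the memory heap with `new`;
    // only the reference to it lives among this frame's local variables.
    static int[] makeArray(int k) {
        int[] items = new int[k];   // size known only at runtime
        for (int i = 0; i < k; i++) {
            items[i] = i * i;
        }
        return items;               // the frame is popped, but the array survives
    }

    public static void main(String[] args) {
        int[] squares = makeArray(5);
        System.out.println(squares[4]); // prints 16: the heap object outlived makeArray's frame
    }
}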
The Java Virtual Machine definition requires that the memory heap be able to quickly allocate memory for new objects, but it does not specify the data structure that we should use to do this. One popular method is to keep contiguous "holes" of available free memory in a doubly linked list, called the free list. The links joining these holes are stored inside the holes themselves, since their memory is not being used. As memory is allocated and deallocated, the collection of holes in the free list changes, with the unused memory being separated into disjoint holes divided by blocks of used memory. This separation of unused memory into separate holes is known as fragmentation. Of course, we would like to minimize fragmentation as much as possible.

There are two kinds of fragmentation that can occur. Internal fragmentation occurs when a portion of an allocated memory block is not actually used. For example, a program may request an array of size 1000 but only use the first 100 cells of this array. There isn't much that a run-time environment can do to reduce internal fragmentation. External fragmentation, on the other hand, occurs when there is a significant amount of unused memory between several contiguous blocks of allocated memory. Since the run-time environment has control over where to allocate memory when it is requested (for example, when the new keyword is used in Java), the run-time environment should allocate memory in a way that tries to reduce external fragmentation as much as reasonably possible.

Several heuristics have been suggested for allocating memory from the heap so as to minimize external fragmentation. The best-fit algorithm searches the entire free list to find the hole whose size is closest to the amount of memory being requested. The first-fit algorithm searches from the beginning of the free list for the first hole that is large enough. The next-fit algorithm is similar, in that it also searches the free list for the first hole that is large enough, but it begins its search from where it left off previously, viewing the free list as a circularly linked list. The worst-fit algorithm searches the free list to find the largest hole of available memory, which might be done faster than a search of the entire free list if this list were maintained as a priority queue. In each algorithm, the requested amount of memory is subtracted from the chosen memory hole and the leftover part of that hole is returned to the free list.

Although it might sound good at first, the best-fit algorithm tends to produce the worst external fragmentation, since the leftover parts of the chosen holes tend to be small. The first-fit algorithm is fast, but it tends to produce a lot of external fragmentation at the front of the free list, which slows down future searches.
The next-fit algorithm spreads fragmentation more evenly throughout the memory heap, thus keeping search times low. This spreading also makes it more difficult to allocate large blocks, however. The worst-fit algorithm attempts to avoid this problem by keeping contiguous sections of free memory as large as possible.

Garbage Collection

In some languages, like C and C++, the memory space for objects must be explicitly deallocated by the programmer, which is a duty often overlooked by beginning programmers and is the source of frustrating programming errors even for experienced programmers. Instead, the designers of Java placed the burden of memory management entirely on the run-time environment.

As mentioned above, memory for objects is allocated from the memory heap, and the space for the instance variables of a running Java program is placed in its method stacks, one for each running thread. (For the simple programs discussed in this book, there is typically just one running thread.) Since instance variables in the method stacks can refer to objects in the memory heap, all the variables and objects in the method stacks of running threads are called root objects. All those objects that can be reached by following object references that start from a root object are called live objects. The live objects are the active objects currently being used by the running program; these objects should not be deallocated. For example, a running Java program may store, in a variable, a reference to a sequence S that is implemented using a doubly linked list. The reference variable to S is a root object, while the object for S is a live object, as are all the node objects that are referenced from this object and all the elements that are referenced from these node objects.

From time to time, the Java Virtual Machine (JVM) may notice that available space in the memory heap is becoming scarce. At such times, the JVM can elect to reclaim the space that is being used for objects that are no longer live, and return the reclaimed memory to the free list. This reclamation process is known as garbage collection. There are several different algorithms for garbage collection, but one of the most used is the mark-sweep algorithm.

In the mark-sweep garbage collection algorithm, we associate a "mark" bit with each object that identifies whether that object is live or not. When we determine at some point that garbage collection is needed, we suspend all other running threads and clear the mark bits of all the objects currently allocated in the memory heap. We then trace through the Java stacks of the currently running threads and we mark all the (root) objects in these stacks as "live." We must then determine all the other live objects, the ones that are reachable from the root objects. To do this efficiently, we can use the directed-graph version of the depth-first search traversal. In this case, each object in the memory heap is viewed as a vertex in a directed graph, and the reference from one object to another is viewed as a directed edge. By performing a directed DFS from each root object, we can correctly identify and mark each live object.
Once this marking process has completed, we then scan through the memory heap and reclaim any space that is being used for an object that has not been marked. At this time, we can also optionally coalesce all the allocated space in the memory heap into a single block, thereby eliminating external fragmentation for the time being. This scanning and reclamation process is known as the "sweep" phase, and when it completes, we resume running the suspended threads. Thus, the mark-sweep garbage collection algorithm will reclaim unused space in time proportional to the number of live objects and their references plus the size of the memory heap.

Performing DFS In-Place

The mark-sweep algorithm correctly reclaims unused space in the memory heap, but there is an important issue we must face during the mark phase. Since we are reclaiming memory space at a time when available memory is scarce, we must take care not to use extra space during the garbage collection itself. The trouble is that the DFS algorithm, in the recursive way we have described it, can use space proportional to the number of vertices in the graph. In the case of garbage collection, the vertices in our graph are the objects in the memory heap; hence, we probably don't have this much memory to use. So our only alternative is to find a way to perform DFS in-place rather than recursively, that is, we must perform DFS using only a constant amount of additional storage.

The main idea for performing DFS in-place is to simulate the recursion stack using the edges of the graph (which in the case of garbage collection correspond to object references). When we traverse an edge from a visited vertex v to a new vertex w, we change the edge (v, w) stored in v's adjacency list to point back to v's parent in the DFS tree. When we return back to v (simulating the return from the "recursive" call at w), we can then switch the edge we modified to point back to w. Of course, we need to have some way of identifying which edge we need to change back. One possibility is to number the references going out of v as 1, 2, and so on, and store, in addition to the mark bit (which we are using for the "visited" tag in our DFS), a count identifier that tells us which edges we have modified.

Using a count identifier requires an extra word of storage per object. This extra word can be avoided in some implementations, however. For example, many implementations of the Java Virtual Machine represent an object as a composition of a reference to a type identifier (which indicates if this object is an Integer or some other type) and references to the other objects or data fields for this object. Since the type reference is always supposed to be the first element of the composition in such implementations, we can use this reference to "mark" the edge we changed when leaving an object v and going to some object w. We simply swap the reference at v that refers to the type of v with the reference at v that refers to w. When we return to v, we can quickly identify the edge (v, w) we changed, because it will be the first reference in the composition for v, and the position where the type reference now sits tells us where that edge belongs in v's adjacency list. Thus, whether we use this edge-swapping trick or a count identifier, we can implement DFS in-place without affecting its asymptotic running time.
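For orientation, here is a minimal sketch of the mark phase written as an ordinary recursive DFS over object references (the class and field names are illustrative, not part of any real JVM). It deliberately ignores the in-place, edge-swapping refinement discussed above, so unlike a real collector it uses call-stack space proportional to the depth of the reference chains.

import java.util.List;

class HeapObject {
    boolean marked;              // the "mark" bit, cleared before each collection
    List<HeapObject> references; // outgoing references, i.e., edges of the object graph

    HeapObject(List<HeapObject> references) {
        this.references = references;
    }
}

class MarkPhase {
    /** Marks every object reachable from the given roots; unmarked objects are garbage. */
    static void mark(List<HeapObject> roots) {
        for (HeapObject root : roots) {
            dfs(root);
        }
    }

    private static void dfs(HeapObject obj) {
        if (obj == null || obj.marked) {
            return;          // already visited, or no object at all
        }
        obj.marked = true;   // this object is live
        for (HeapObject next : obj.references) {
            dfs(next);       // recursive call: this is exactly the stack usage the text avoids
        }
    }
}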
External Memory and Caching

There are several computer applications that must deal with a large amount of data. Examples include the analysis of scientific data sets, the processing of financial transactions, and the organization and maintenance of databases (such as telephone directories). In fact, the amount of data that must be dealt with is often too large to fit entirely in the internal memory of a computer.

The Memory Hierarchy

In order to accommodate large data sets, computers have a hierarchy of different kinds of memories, which vary in terms of their size and distance from the CPU. Closest to the CPU are the internal registers that the CPU itself uses. Access to such locations is very fast, but there are relatively few such locations. At the second level in the hierarchy is the cache memory. This memory is considerably larger than the register set of a CPU, but accessing it takes longer (and there may even be multiple caches with progressively slower access times). At the third level in the hierarchy is the internal memory, which is also known as main memory or core memory. The internal memory is considerably larger than the cache memory, but also requires more time to access. Finally, at the highest level in the hierarchy is the external memory, which usually consists of disks, CD drives, DVD drives, and/or tapes. This memory is very large, but it is also very slow. Thus, the memory hierarchy for computers can be viewed as consisting of four levels, each of which is larger and slower than the previous level (see the figure below).

In most applications, however, only two levels really matter: the one that can hold all the data items and the level just below that one. Bringing data items in and out of the higher memory that can hold all items will typically be the computational bottleneck in this case.

Figure: The memory hierarchy.
Specifically, the two levels that matter most depend on the size of the problem we are trying to solve. For a problem that can fit entirely in main memory, the two most important levels are the cache memory and the internal memory. Access times for internal memory can be as much as 10 to 100 times longer than those for cache memory. It is desirable, therefore, to be able to perform most memory accesses in cache memory. For a problem that does not fit entirely in main memory, on the other hand, the two most important levels are the internal memory and the external memory. Here the differences are even more dramatic, for access times for disks, the usual general-purpose external-memory device, are typically many thousands of times longer than those for internal memory.

To put this latter figure into perspective, imagine there is a student in Baltimore who wants to send a request-for-money message to his parents in Chicago. If the student sends his parents an e-mail message, it can arrive at their home computer in about five seconds. Think of this mode of communication as corresponding to an internal-memory access by the CPU. A mode of communication corresponding to an external-memory access that is hundreds of thousands of times slower would be for the student to walk to Chicago and deliver his message in person, which would take about a month. Thus, we should make as few accesses to external memory as possible.

Caching Strategies

Most algorithms are not designed with the memory hierarchy in mind, in spite of the great variance between access times for the different levels. Indeed, all of the algorithm analyses described in this book so far have assumed that all memory accesses are equal. This assumption might seem, at first, to be a great oversight, and yet
it is actually a reasonable assumption to make. One justification for this assumption is that it is often necessary to assume that all memory accesses take the same amount of time, since specific device-dependent information about memory sizes is often hard to come by. In fact, information about memory size may be impossible to get. For example, a Java program that is designed to run on many different computer platforms cannot be defined in terms of a specific computer architecture configuration. We can certainly use architecture-specific information if we have it (and we will show how to exploit such information later), but once we have optimized our software for a certain architecture configuration, our software will no longer be device-independent. Fortunately, such optimizations are not always necessary, primarily because of the second justification for the equal-time memory-access assumption.

Caching and Blocking

Another justification for the memory-access equality assumption is that operating system designers have developed general mechanisms that allow for most memory accesses to be fast. These mechanisms are based on two important locality-of-reference properties that most software possesses:

Temporal locality: If a program accesses a certain memory location, then it is likely to access this location again in the near future. For example, it is quite common to use the value of a counter variable in several different expressions, including one to increment the counter's value. In fact, a common adage among computer architects is that "a program spends ninety percent of its time in ten percent of its code."

Spatial locality: If a program accesses a certain memory location, then it is likely to access other locations that are near this one. For example, a program using an array is likely to access the locations of this array in a sequential or near-sequential manner.

Computer scientists and engineers have performed extensive software profiling experiments to justify the claim that most software possesses both of these kinds of locality-of-reference. For example, a for-loop used to scan through an array will exhibit both kinds of locality.

Temporal and spatial localities have, in turn, given rise to two fundamental design choices for two-level computer memory systems (which are present in the interface between cache memory and internal memory, and also in the interface between internal memory and external memory).

The first design choice is called virtual memory. This concept consists of providing an address space as large as the capacity of the secondary-level memory, and of transferring data located in the secondary level into the primary level when they are addressed. Virtual memory does not limit the programmer to the constraint of
the internal memory size. The concept of bringing data into primary memory is called caching, and it is motivated by temporal locality. For, by bringing data into primary memory, we are hoping that it will be accessed again soon, and we will be able to respond quickly to all the requests for this data that come in the near future.

The second design choice is motivated by spatial locality. Specifically, if data stored at a secondary-level memory location l is accessed, then we bring into primary-level memory a large block of contiguous locations that include the location l (see the figure below). This concept is known as blocking, and it is motivated by the expectation that other secondary-level memory locations close to l will soon be accessed. In the interface between cache memory and internal memory, such blocks are often called cache lines, and in the interface between internal memory and external memory, such blocks are often called pages.

Figure: Blocks in external memory.

When implemented with caching and blocking, virtual memory often allows us to perceive secondary-level memory as being faster than it really is. There is still a problem, however. Primary-level memory is much smaller than secondary-level memory. Moreover, because memory systems use blocking, any program of substance will likely reach a point where it requests data from secondary-level memory, but the primary memory is already full of blocks. In order to fulfill the request and maintain our use of caching and blocking, we must remove some block from primary memory to make room for a new block from secondary memory. Deciding how to do this eviction brings up a number of interesting data structure and algorithm design issues.

Caching Algorithms

There are several web applications that must deal with revisiting information presented in web pages. These revisits have been shown to exhibit localities of reference, both in time and in space. To exploit these localities of reference, it is often advantageous to store copies of web pages in a cache memory, so these pages
can be quickly retrieved when requested again. For concreteness, suppose we have a cache memory that has m "slots" that can contain web pages. We assume that a web page can be placed in any slot of the cache. This is known as a fully associative cache.

As a browser executes, it requests different web pages. Each time the browser requests such a web page l, the browser determines (using a quick test) if l is unchanged and currently contained in the cache. If l is contained in the cache, then the browser satisfies the request using the cached copy. If l is not in the cache, however, the page for l is requested over the Internet and transferred into the cache. If one of the m slots in the cache is available, then the browser assigns l to one of the empty slots. But if all the m cells of the cache are occupied, then the computer must determine which previously viewed web page to evict before bringing in l to take its place. There are, of course, many different policies that can be used to determine the page to evict.

Page Replacement Algorithms

Some of the better-known page replacement policies include the following (see the figure below):

First-in, first-out (FIFO): Evict the page that has been in the cache the longest, that is, the page that was transferred to the cache furthest in the past.

Least recently used (LRU): Evict the page whose last request occurred furthest in the past.

In addition, we can consider a simple and purely random strategy:

Random: Choose a page at random to evict from the cache.

Figure: The Random, FIFO, and LRU page replacement policies.
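As a concrete illustration of the FIFO policy just listed, here is a minimal Java sketch of a fully associative cache with m slots that evicts in first-in, first-out order; the class and method names are illustrative, and the "page" is reduced to a key-value pair for simplicity.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** A tiny fully associative cache of at most m pages using FIFO eviction. */
class FifoCache<K, V> {
    private final int m;                                       // number of slots
    private final Map<K, V> slots = new HashMap<>();           // cached pages
    private final Deque<K> arrivalOrder = new ArrayDeque<>();  // oldest page at the head

    FifoCache(int m) { this.m = m; }

    /** Returns the cached page, or null on a cache miss. */
    V get(K key) { return slots.get(key); }

    /** Inserts a page, evicting the oldest one if all m slots are occupied. */
    void put(K key, V page) {
        if (slots.containsKey(key)) {              // already cached: nothing to evict
            slots.put(key, page);
            return;
        }
        if (slots.size() == m) {
            K oldest = arrivalOrder.removeFirst(); // the page that arrived furthest in the past
            slots.remove(oldest);
        }
        slots.put(key, page);
        arrivalOrder.addLast(key);
    }
}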
The Random strategy is one of the easiest policies to implement, for it only requires a random or pseudo-random number generator. The overhead involved in implementing this policy is an O(1) additional amount of work per page replacement. Moreover, there is no additional overhead for each page request, other than to determine whether a page request is in the cache or not. Still, this policy makes no attempt to take advantage of any temporal or spatial localities that a user's browsing exhibits.

The FIFO strategy is quite simple to implement, as it only requires a queue Q to store references to the pages in the cache. Pages are enqueued in Q when they are referenced by a browser, and then are brought into the cache. When a page needs to be evicted, the computer simply performs a dequeue operation on Q to determine which page to evict. Thus, this policy also requires O(1) additional work per page replacement. Also, the FIFO policy incurs no additional overhead for page requests. Moreover, it tries to take some advantage of temporal locality.

The LRU strategy goes a step further than the FIFO strategy, for the LRU strategy explicitly takes advantage of temporal locality as much as possible, by always evicting the page that was least recently used. From a policy point of view, this is an excellent approach, but it is costly from an implementation point of view; that is, its way of optimizing temporal and spatial locality is fairly costly.
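In practice, one common way to realize LRU in Java (a different approach from the priority-queue implementation discussed next, and only a sketch) is to rely on a LinkedHashMap kept in access order:

import java.util.LinkedHashMap;
import java.util.Map;

/** A tiny LRU cache of at most m pages built on an access-ordered LinkedHashMap. */
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int m; // number of cache slots

    LruCache(int m) {
        // accessOrder = true: iteration order becomes least- to most-recently accessed
        super(16, 0.75f, true);
        this.m = m;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each insertion; returning true evicts the least recently used page.
        return size() > m;
    }
}

With this class, cache.get(page) both returns the cached copy and records the access, so the page evicted when a new page is inserted into a full cache is always the least recently used one.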
Implementing the LRU strategy requires the use of a priority queue Q that supports updating the priorities of existing pages, for example, using special pointers or "locators." If Q is implemented with a sorted sequence based on a linked list, then the overhead for each page request and page replacement is O(1). When we insert a page in Q or update its key, the page is assigned the highest key in Q and is placed at the end of the list, which can also be done in O(1) time. Even though the LRU strategy has constant-time overhead using the implementation above, the constant factors involved, in terms of the additional time overhead and the extra space for the priority queue Q, make this policy less attractive from a practical point of view.

Since these different page replacement policies have different trade-offs between implementation difficulty and the degree to which they seem to take advantage of localities, it is natural for us to ask for some kind of comparative analysis of these methods to see which one, if any, is the best.

From a worst-case point of view, the FIFO and LRU strategies have fairly unattractive competitive behavior. For example, suppose we have a cache containing m pages, and consider the FIFO and LRU methods for performing page replacement for a program that has a loop that repeatedly requests m + 1 pages in a cyclic order. Both the FIFO and LRU policies perform badly on such a sequence of page requests, because they perform a page replacement on every page request. Thus, from a worst-case point of view, these policies are almost the worst we can imagine: they require a page replacement on every page request.

This worst-case analysis is a little too pessimistic, however, for it focuses on each protocol's behavior for one bad sequence of page requests. An ideal analysis would be to compare these methods over all possible page-request sequences. Of course, this is impossible to do exhaustively, but there have been a great number of experimental simulations done on page-request sequences derived from real programs. Based on these experimental comparisons, the LRU strategy has been shown to be usually superior to the FIFO strategy, which is usually better than the Random strategy.

External Searching and B-Trees

Consider the problem of implementing a dictionary for a large collection of items that do not fit in main memory. Since one of the main uses of large dictionaries is in databases, we refer to the secondary-memory blocks as disk blocks. Likewise, we refer to the transfer of a block between secondary memory and primary memory as a disk transfer. Recalling the great time difference that exists between main memory accesses and disk accesses, the main goal of maintaining a dictionary in external memory is to minimize the number of disk transfers needed to perform a query or update. In fact, the difference in speed between disk and internal memory is so great that we should be willing to perform a considerable number of internal-memory accesses if they allow us to avoid a few disk transfers. Let us, therefore, analyze the performance of dictionary implementations by counting the number of disk transfers.
We refer to this count as the I/O complexity of the algorithms involved.

Some Inefficient External-Memory Dictionaries

Let us first consider the simple dictionary implementations that use a list to store n entries. If the list is implemented as an unsorted, doubly linked list, then insert operations can be performed with O(1) transfers each, but removals and searches require n transfers in the worst case, since each link hop we perform could access a different block. This search time can be improved to O(n/B) transfers (see the exercises), where B denotes the number of nodes of the list that can fit into a block, but this is still poor performance.

We could alternately implement the sequence using a sorted array. In this case, a search performs O(log n) transfers, via binary search, which is a nice improvement. But this solution requires O(n/B) transfers to implement an insert or remove operation in the worst case, for we may have to access all blocks to move elements up or down. Thus, list-based dictionary implementations are not efficient in external memory.

Since these simple implementations are I/O inefficient, we should consider the logarithmic-time internal-memory strategies that use balanced binary trees (for example, AVL trees or red-black trees) or other search structures with logarithmic average-case query and update times (for example, skip lists or splay trees). These methods store the dictionary items at the nodes of a binary tree or of a graph. Typically, each node accessed for a query or update in one of these structures will be in a different block. Thus, these methods all require O(log n) transfers in the worst case to perform a query or update operation. This performance is good, but we can do better. In particular, we can perform dictionary queries and updates using only O(log_B n) = O(log n / log B) transfers.

(a,b) Trees

To reduce the importance of the performance difference between internal-memory accesses and external-memory accesses for searching, we can represent our dictionary using a multi-way search tree. This approach gives rise to a generalization of the (2,4) tree data structure known as the (a,b) tree. An (a,b) tree is a multi-way search tree such that each node has between a and b children and stores between a - 1 and b - 1 entries. The algorithms for searching, inserting, and removing entries in an (a,b) tree are straightforward generalizations of the corresponding ones for (2,4) trees. The advantage of generalizing (2,4) trees to (a,b) trees is that a generalized class of trees provides a flexible search structure, where the size of the nodes and the running time of the various dictionary operations depend on the parameters a and b. By setting the parameters a and b appropriately with respect to the size of disk blocks, we can derive a data structure that achieves good external-memory performance.
An (a,b) tree, where a and b are integers such that 2 <= a <= (b + 1)/2, is a multi-way search tree T with the following additional restrictions:

Size property: Each internal node has at least a children, unless it is the root, and has at most b children.

Depth property: All the external nodes have the same depth.

Proposition: The height of an (a,b) tree storing n entries is Omega(log n / log b) and O(log n / log a).

Justification: Let T be an (a,b) tree storing n entries, and let h be the height of T. We justify the proposition by establishing the following bounds on h. By the size and depth properties, the number n'' of external nodes of T is at least 2a^(h-1) and at most b^h. Since n'' = n + 1, we thus have 2a^(h-1) <= n + 1 <= b^h. Taking the logarithm in base 2 of each term, we get (h - 1) log a + 1 <= log(n + 1) <= h log b.

Search and Update Operations

We recall that in a multi-way search tree T, each node v of T holds a secondary structure D(v), which is itself a dictionary. If T is an (a,b) tree, then D(v) stores at most b - 1 entries. Let f(b) denote the time for performing a search in a D(v) dictionary. The search algorithm in an (a,b) tree is exactly like the one for multi-way search trees. Hence, searching in an (a,b) tree T with n entries takes O((f(b)/log a) log n) time. Note that if b is a constant (and thus a is also), then the search time is O(log n).

The main application of (a,b) trees is for dictionaries stored in external memory. Namely, to minimize disk accesses, we select the parameters a and b so that each tree node occupies a single disk block (so that f(b) = 1, if we wish to simply count block transfers). Providing the right a and b values in this context gives rise to a data structure known as the B-tree, which we will describe shortly. Before we describe B-trees, however, let us discuss how insertions and removals are
handled in (a,b) trees. The insertion algorithm for an (a,b) tree is similar to that for a (2,4) tree. An overflow occurs when an entry is inserted into a b-node v, which becomes an illegal (b + 1)-node. (Recall that a node in a multi-way tree is a d-node if it has d children.) To remedy an overflow, we split node v by moving the median entry of v into the parent of v and replacing v with a node having ceiling((b + 1)/2) children and a node having floor((b + 1)/2) children. We can now see the reason for requiring a <= (b + 1)/2 in the definition of an (a,b) tree. Note that, as a consequence of the split, we need to build the secondary structures D(v') and D(v'').

Removing an entry from an (a,b) tree is similar to what was done for (2,4) trees. An underflow occurs when a key is removed from an a-node v, distinct from the root, which causes v to become an illegal (a - 1)-node. To remedy an underflow, we perform a transfer with a sibling of v that is not an a-node, or we perform a fusion of v with a sibling that is an a-node. The new node resulting from the fusion is a (2a - 1)-node, which is another reason for requiring a <= (b + 1)/2.

The table below shows the performance of a dictionary realized with an (a,b) tree. We assume that the secondary structures at the nodes of T support search in f(b) time, and split and fusion operations in g(b) time, for some functions f(b) and g(b), which can be made to be O(1) when we are only counting disk transfers.

Operation | Time
search    | O((f(b)/log a) log n)
insert    | O(((f(b) + g(b))/log a) log n)
remove    | O(((f(b) + g(b))/log a) log n)

B-Trees

A version of the (a,b) tree data structure, which is the best-known method for maintaining a dictionary in external memory, is called the "B-tree" (see the figure below). A B-tree of order d is an (a,b) tree with a = ceiling(d/2) and b = d. Since we
discussed the search and update operations for (a,b) trees above, we restrict our discussion here to the I/O complexity of B-trees.

Figure: A B-tree of order d.

An important property of B-trees is that we can choose d so that the d children references and the d - 1 keys stored at a node can all fit into a single disk block, implying that d is proportional to B. This choice allows us to assume that a and b are also proportional to B in the analysis of the search and update operations on (a,b) trees. Thus, f(b) and g(b) are both O(1), for each time we access a node to perform a search or an update operation, we need only perform a single disk transfer.

As we have already observed above, each search or update requires that we examine at most O(1) nodes for each level of the tree. Therefore, any dictionary search or update operation on a B-tree requires only O(log_(d/2) n), that is, O(log n / log B), disk transfers. For example, an insert operation proceeds down the B-tree to locate the node in which to insert the new entry. If the node would overflow (to have d + 1 children) because of this addition, then this node is split into two nodes that have floor((d + 1)/2) and ceiling((d + 1)/2) children, respectively. This process is then repeated at the next level up, and will continue for at most O(log_B n) levels. Likewise, if a remove operation results in a node underflow (to have fewer than ceiling(d/2) children), then we move references from a sibling node with more than ceiling(d/2) children, or we need to perform a fusion operation of this node with its sibling (and repeat this computation at the parent). As with the insert operation, this will continue up the B-tree for at most O(log_B n) levels. The requirement that each internal node have at least ceiling(d/2) children implies that each disk block used to support a B-tree is at least half full. Thus, we have the following:

Proposition: A B-tree with n entries has I/O complexity O(log_B n) for search or update operations, and uses O(n/B) blocks, where B is the size of a block.
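To make the choice of d concrete, the short sketch below estimates the order of a B-tree node and the resulting search depth for some assumed sizes; the 4 KB block, 8-byte keys, and 8-byte child references are illustrative assumptions, not values taken from the text.

public class BTreeOrderEstimate {
    public static void main(String[] args) {
        int blockBytes = 4096;  // assumed disk block size (in bytes)
        int keyBytes = 8;       // assumed size of one key
        int refBytes = 8;       // assumed size of one child reference

        // A node of order d holds roughly d references and d - 1 keys:
        // d * refBytes + (d - 1) * keyBytes <= blockBytes.
        int d = (blockBytes + keyBytes) / (refBytes + keyBytes);

        long n = 1_000_000_000L;  // one billion entries
        // Height is about log base ceil(d/2) of n, i.e., the number of levels a search visits.
        double height = Math.log(n) / Math.log(Math.ceil(d / 2.0));

        System.out.println("order d ~ " + d);                // prints 256
        System.out.println("levels ~ " + Math.ceil(height)); // prints 5.0: only a handful of disk transfers
    }
}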
External-Memory Sorting

In addition to data structures, such as dictionaries, that need to be implemented in external memory, there are many algorithms that must also operate on input sets that are too large to fit entirely in internal memory. In this case, the goal is to solve the algorithmic problem using as few block transfers as possible. The most classic domain for such external-memory algorithms is the sorting problem.

Multi-Way Merge-Sort

An efficient way to sort a set S of n objects in external memory amounts to a simple external-memory variation on the familiar merge-sort algorithm. The main idea behind this variation is to merge many recursively sorted lists at a time, thereby reducing the number of levels of recursion. Specifically, a high-level description of this multi-way merge-sort method is to divide S into d subsets S_1, S_2, ..., S_d of roughly equal size, recursively sort each subset S_i, and then simultaneously merge all d sorted lists into a sorted representation of S. If we can perform the merge process using only O(n/B) disk transfers, then, for large enough values of n, the total number of transfers performed by this algorithm satisfies the following recurrence:

t(n) = d t(n/d) + cn/B,

for some constant c >= 1. We can stop the recursion when n <= B, since we can perform a single block transfer at this point, getting all of the objects into internal memory, and then sort the set with an efficient internal-memory algorithm. Thus, the stopping criterion for t(n) is t(n) = 1 if n/B <= 1. This implies a closed-form solution that t(n) is O((n/B) log_d(n/B)), which is O((n/B) log(n/B)/log d).

Thus, if we can choose d to be proportional to M/B, where M is the size of the internal memory, then the worst-case number of block transfers performed by this multi-way merge-sort algorithm will be quite low. The only aspect of this algorithm left to specify, then, is how to perform the d-way merge using only O(n/B) block transfers.

Multi-Way Merging

We perform the d-way merge by running a "tournament." We let T be a complete binary tree with d external nodes, and we keep T entirely in internal memory. We associate each external node i of T with a different sorted list S_i. We initialize T by reading into each external node i the first object in S_i. This has the effect of reading into internal memory the first block of each sorted list S_i. For each internal node
v of T, we then compare the objects stored at v's children and we associate the smaller of the two with v. We repeat this comparison test at the next level up in T, and the next, and so on. When we reach the root r of T, we will associate the smallest object from among all the lists with r. This completes the initialization for the d-way merge (see the figure below).

Figure: A d-way merge. We show a five-way merge.

In a general step of the d-way merge, we move the object o associated with the root r of T into an array we are building for the merged list S'. We then trace down T, following the path to the external node i that o came from. We then read into T the next object in the list S_i. If o was not the last element in its block, then this next object is already in internal memory. Otherwise, we read in the next block of S_i to access this new object (if S_i is now empty, we associate the node i with a pseudo-object with key +infinity). We then repeat the minimum computations for each of the internal nodes from i to the root of T. This again gives us the complete tree T. We then repeat this process of moving the object from the root of T to the merged list S', and rebuilding T, until T is empty of objects.

Each step in the merge takes O(log d) time; hence, the internal time for the d-way merge is O(n log d). The number of transfers performed in a merge is O(n/B), since we scan each list S_i in order once, and we write out the merged list S' once. Thus, we have:

Proposition: Given an array-based sequence S of n elements stored in external memory, we can sort S using O((n/B) log(n/B)/log(M/B)) transfers and O(n log n) internal CPU time, where M is the size of the internal memory and B is the size of a block.
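The sketch below illustrates the d-way merging idea in plain Java, using a heap of per-list cursors instead of the complete binary tree described above and ignoring all block-transfer bookkeeping; it is a simplified in-memory model, not the external-memory procedure itself.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class MultiWayMerge {
    // One cursor per sorted list: the current smallest unconsumed element and its iterator.
    private record Cursor(int value, Iterator<Integer> rest) {}

    /** Merges d sorted lists into one sorted list. */
    static List<Integer> merge(List<List<Integer>> sortedLists) {
        PriorityQueue<Cursor> heap =
                new PriorityQueue<>((x, y) -> Integer.compare(x.value(), y.value()));
        for (List<Integer> list : sortedLists) {
            Iterator<Integer> it = list.iterator();
            if (it.hasNext()) {
                heap.add(new Cursor(it.next(), it)); // like loading the first object of each S_i
            }
        }
        List<Integer> merged = new ArrayList<>();
        while (!heap.isEmpty()) {
            Cursor smallest = heap.poll();           // the object at the "root" of the tournament
            merged.add(smallest.value());
            if (smallest.rest().hasNext()) {         // advance only the list the winner came from
                heap.add(new Cursor(smallest.rest().next(), smallest.rest()));
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        List<List<Integer>> lists = List.of(List.of(1, 4, 9), List.of(2, 3, 8), List.of(5, 6, 7));
        System.out.println(merge(lists)); // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}

Each removal from the heap costs O(log d) time, matching the O(log d) per-step cost of the tournament tree in the text.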
Exercises

For source code and help with exercises, please visit java.datastructures.net.

- Describe, in detail, the insertion and removal algorithms for an (a,b) tree.

- Suppose T is a multi-way tree in which each internal node has at least five and at most eight children. For what values of a and b is T a valid (a,b) tree?

- For what values of d is the tree T of the previous exercise an order-d B-tree?

- Show each level of recursion in performing a four-way, external-memory merge-sort of the sequence given in the previous exercise.

- Consider an initially empty memory cache consisting of four pages. How many page misses does the LRU algorithm incur on the following page request sequence: (...)?

- Consider an initially empty memory cache consisting of four pages. How many page misses does the FIFO algorithm incur on the following page request sequence: (...)?

- Consider an initially empty memory cache consisting of four pages. How many page misses can the Random algorithm incur on the following page request sequence: (...)? Show all of the random choices your algorithm made in this case.

- Draw the result of inserting, into an initially empty order-d B-tree, entries with keys (...), in this order.

- Show each level of recursion in performing a four-way merge-sort of the sequence given in the previous exercise.
- Show how to implement a dictionary in external memory, using an unordered sequence, so that insertions require only O(1) transfers and searches require O(n/B) transfers in the worst case, where n is the number of elements and B is the number of list nodes that can fit into a disk block.

- Change the rules that define red-black trees so that each red-black tree T has a corresponding (4,8) tree, and vice versa.

- Describe a modified version of the B-tree insertion algorithm so that each time we create an overflow because of a split of a node v, we redistribute keys among all of v's siblings, so that each sibling holds roughly the same number of keys (possibly cascading the split up to the parent of v). What is the minimum fraction of each block that will always be filled using this scheme?

- Another possible external-memory dictionary implementation is to use a skip list, but to collect consecutive groups of O(B) nodes, in individual blocks, on any level in the skip list. In particular, we define an order-B skip list to be such a representation of a skip-list structure, where each block contains at least B/2 list nodes and at most B list nodes. Let us also choose B in this case to be the maximum number of list nodes from a level of the skip list that can fit into one block. Describe how we should modify the skip-list insertion and removal algorithms for such a B-skip list so that the expected height of the structure is O(log n / log B).

- Describe an external-memory data structure to implement the queue ADT so that the total number of disk transfers needed to process a sequence of n enqueue and dequeue operations is O(n/B).

- Solve the previous problem for the deque ADT.
- Describe how to modify the union-find partition structure described earlier so that the union and find operations each use at most O(log n / log B) disk transfers.

- Suppose we are given a sequence S of n elements with integer keys such that some elements in S are colored "blue" and some elements in S are colored "red." In addition, say that a red element pairs with a blue element if they have the same key value. Describe an efficient external-memory algorithm for finding all the red-blue pairs in S. How many disk transfers does your algorithm perform?

- Consider the page caching problem where the memory cache can hold m pages, and we are given a sequence P of n requests taken from a pool of m + 1 possible pages. Describe the optimal strategy for the offline algorithm and show that it causes at most m + n/m page misses in total, starting from an empty cache.

- Consider the page caching strategy based on the least frequently used (LFU) rule, where the page in the cache that has been accessed the least often is the one that is evicted when a new page is requested. If there are ties, LFU evicts the least frequently used page that has been in the cache the longest. Show that there is a sequence P of n requests that causes LFU to miss Omega(n) times for a cache of m pages, whereas the optimal algorithm will miss only O(m) times.

- Suppose that instead of having the node-search function f(d) = 1 in an order-d B-tree T, we have f(d) = log d. What does the asymptotic running time of performing a search in T now become?

- Describe an external-memory algorithm that determines (using O(n/B) transfers) whether a list of n integers contains a value occurring more than n/2 times.

Projects

- Write a Java class that implements all the methods of the ordered dictionary ADT by means of an (a,b) tree, where a and b are integer constants passed as parameters to a constructor.
- Implement the B-tree data structure, assuming a block size of 1,024 and integer keys. Test the number of "disk transfers" needed to process a sequence of dictionary operations.

- Implement an external-memory sorting algorithm and compare it experimentally to any of the internal-memory sorting algorithms described in this book.

Notes

Knuth has a very nice discussion about external-memory sorting and searching, and Ullman discusses external memory structures for database systems. The reader interested in the study of the architecture of hierarchical memory systems is referred to the book by Burger et al. or the book by Hennessy and Patterson. The handbook by Gonnet and Baeza-Yates compares the performance of a number of different sorting algorithms, many of which are external-memory algorithms. B-trees were invented by Bayer and McCreight, and Comer provides a very nice overview of this data structure. The books by Mehlhorn and by Samet also have nice discussions about B-trees and their variants. Aggarwal and Vitter study the I/O complexity of sorting and related problems, establishing upper and lower bounds, including the lower bound for sorting given here. Goodrich et al. study the I/O complexity of several computational geometry problems. The reader interested in further study of I/O-efficient algorithms is encouraged to examine the survey paper of Vitter.
Useful Mathematical Facts

In this appendix we give several useful mathematical facts. We begin with some combinatorial definitions and facts.

Logarithms and Exponents

The logarithm function is defined as log_b a = c if a = b^c. The following identities hold for logarithms and exponents:

1. log_b(ac) = log_b a + log_b c
2. log_b(a/c) = log_b a - log_b c
3. log_b(a^c) = c log_b a
4. log_b a = (log_c a)/(log_c b)
5. b^(log_c a) = a^(log_c b)
6. (b^a)^c = b^(ac)
7. b^a b^c = b^(a+c)
8. b^a / b^c = b^(a-c)

In addition, we have the following:

Proposition: If a > 0, b > 0, and c > a + b, then log a + log b < 2 log c.
The natural logarithm function is ln x = log_e x, where e is the value of the following progression:

e = 1 + 1/1! + 1/2! + 1/3! + ...

In addition,

e^x = 1 + x + x^2/2! + x^3/3! + ...

ln(1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ...

There are a number of useful inequalities relating to these functions (which derive from these definitions).

Proposition: If x > -1, then x/(1 + x) <= ln(1 + x) <= x.

Proposition: For 0 <= x < 1, 1 + x <= e^x <= 1/(1 - x).

Proposition: For any two positive real numbers x and n, (1 + x/n)^n <= e^x <= (1 + x/n)^(n + x/2).

Integer Functions and Relations

The "floor" and "ceiling" functions are defined respectively as follows:

floor(x) = the largest integer less than or equal to x
ceiling(x) = the smallest integer greater than or equal to x

The modulo operator is defined for integers a >= 0 and b > 0 as

a mod b = a - floor(a/b) b.
The factorial function is defined as n! = 1 * 2 * 3 * ... * (n - 1) * n.

The binomial coefficient C(n, k) = n! / (k! (n - k)!) is equal to the number of different combinations one can define by choosing k different items from a collection of n items (where the order does not matter). The name "binomial coefficient" derives from the binomial expansion:

(a + b)^n = sum_{k=0}^{n} C(n, k) a^k b^(n-k).

We also have the following relationships.

Proposition: If 0 <= k <= n, then (n/k)^k <= C(n, k) <= n^k / k!.

Proposition (Stirling's Approximation): n! = sqrt(2 pi n) (n/e)^n (1 + 1/(12n) + epsilon(n)), where epsilon(n) is O(1/n^2).

The Fibonacci progression is a numeric progression such that F_0 = 0, F_1 = 1, and F_n = F_{n-1} + F_{n-2} for n >= 2.

Proposition: If F_n is defined by the Fibonacci progression, then F_n is Theta(g^n), where g = (1 + sqrt(5))/2 is the so-called golden ratio.
Summations

There are a number of useful facts about summations.

Proposition (factoring summations): sum_{i=1}^{n} a f(i) = a sum_{i=1}^{n} f(i), provided a does not depend upon i.

Proposition (reversing the order): sum_{i=1}^{n} sum_{j=1}^{m} f(i, j) = sum_{j=1}^{m} sum_{i=1}^{n} f(i, j).

One special form of summation is a telescoping sum, sum_{i=1}^{n} (f(i) - f(i - 1)) = f(n) - f(0), which arises often in the amortized analysis of a data structure or algorithm.

The following are some other facts about summations that arise often in the analysis of data structures and algorithms.

Proposition: sum_{i=1}^{n} i = n(n + 1)/2.

Proposition: sum_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6.

Proposition: If k >= 1 is an integer constant, then sum_{i=1}^{n} i^k is Theta(n^(k+1)).

Another common summation is the geometric sum sum_{i=0}^{n} a^i, for any fixed real number 0 < a != 1.

Proposition: sum_{i=0}^{n} a^i = (a^(n+1) - 1)/(a - 1), for any real number 0 < a != 1.
Proposition: For any real number 0 < a < 1, sum_{i=0}^{infinity} a^i = 1/(1 - a).

There is also a combination of the two common forms, called the linear exponential summation, which has the following expansion.

Proposition: For 0 < a != 1 and n >= 2, sum_{i=1}^{n} i a^i = (a - (n + 1) a^(n+1) + n a^(n+2)) / (1 - a)^2.

The nth harmonic number H_n is defined as H_n = sum_{i=1}^{n} 1/i.

Proposition: If H_n is the nth harmonic number, then H_n is ln n + Theta(1).

Basic Probability

We review some basic facts from probability theory. The most basic is that any statement about probability is defined upon a sample space S, which is defined as the set of all possible outcomes from some experiment. We leave the terms "outcomes" and "experiment" undefined in any formal sense.

Example: Consider an experiment that consists of the outcome from flipping a coin five times. This sample space has 2^5 = 32 different outcomes, one for each different ordering of possible flips that can occur.

Sample spaces can also be infinite, as the following example illustrates.

Example: Consider an experiment that consists of flipping a coin until it comes up heads. This sample space is infinite, with each outcome being a sequence of i tails followed by a single flip that comes up heads, for i = 0, 1, 2, 3, ....

A probability space is a sample space S together with a probability function Pr that maps subsets of S to real numbers in the interval [0, 1]. It captures mathematically the notion of the probability of certain "events" occurring. Formally, each subset A of S is called an event, and the probability function Pr is assumed to possess the following
basic properties with respect to events defined from S:

1. Pr(empty set) = 0.
2. Pr(S) = 1.
3. 0 <= Pr(A) <= 1, for any A contained in S.
4. If A, B are contained in S and A intersect B is empty, then Pr(A union B) = Pr(A) + Pr(B).

Two events A and B are independent if Pr(A intersect B) = Pr(A) * Pr(B). A collection of events {A_1, A_2, ..., A_n} is mutually independent if Pr(A_{i1} intersect A_{i2} intersect ... intersect A_{ik}) = Pr(A_{i1}) Pr(A_{i2}) ... Pr(A_{ik}) for any subset {A_{i1}, A_{i2}, ..., A_{ik}}.

The conditional probability that an event A occurs, given an event B, is denoted as Pr(A|B), and is defined as the ratio Pr(A intersect B)/Pr(B), assuming that Pr(B) > 0.

An elegant way for dealing with events is in terms of random variables. Intuitively, random variables are variables whose values depend upon the outcome of some experiment. Formally, a random variable is a function X that maps outcomes from some sample space S to real numbers. An indicator random variable is a random variable that maps outcomes to the set {0, 1}. Often in data structure and algorithm analysis we use a random variable X to characterize the running time of a randomized algorithm. In this case, the sample space S is defined by all possible outcomes of the random sources used in the algorithm. We are most interested in the typical, average, or "expected" value of such a random variable. The expected value of a random variable X is defined as E(X) = sum_x x Pr(X = x), where the summation is defined over the range of X (which in this case is assumed to be discrete).

Proposition (The Linearity of Expectation): Let X and Y be two random variables and let c be a number. Then E(X + Y) = E(X) + E(Y)
and E(cX) = cE(X).

Example: Let X be a random variable that assigns the outcome of the roll of two fair dice to the sum of the number of dots showing. Then E(X) = 7.

Justification: To justify this claim, let X_1 and X_2 be random variables corresponding to the number of dots on each die. Thus, X = X_1 + X_2 (they are two instances of the same function) and E(X) = E(X_1 + X_2) = E(X_1) + E(X_2). Each outcome of the roll of a fair die occurs with probability 1/6. Thus E(X_i) = 1/6 + 2/6 + 3/6 + 4/6 + 5/6 + 6/6 = 7/2, for i = 1, 2. Therefore, E(X) = 7.

Two random variables X and Y are independent if Pr(X = x | Y = y) = Pr(X = x), for all real numbers x and y.

Proposition: If two random variables X and Y are independent, then E(XY) = E(X) E(Y).

Example: Let Y be a random variable that assigns the outcome of a roll of two fair dice to the product of the number of dots showing. Then E(Y) = 49/4.

Justification: Let X_1 and X_2 be random variables denoting the number of dots on each die. The variables X_1 and X_2 are clearly independent; hence E(Y) = E(X_1 X_2) = E(X_1) E(X_2) = (7/2)^2 = 49/4.

The following bound, and corollaries that follow from it, are known as Chernoff bounds.

Proposition: Let X be the sum of a finite number of independent 0/1 random variables and let mu > 0 be the expected value of X. Then, for delta > 0, Pr(X > (1 + delta) mu) < (e^delta / (1 + delta)^(1 + delta))^mu.

Useful Mathematical Techniques

To compare the growth rates of different functions, it is sometimes helpful to apply
the following rule.

Proposition (L'Hopital's Rule): If we have lim_{n -> infinity} f(n) = +infinity and lim_{n -> infinity} g(n) = +infinity, then lim_{n -> infinity} f(n)/g(n) = lim_{n -> infinity} f'(n)/g'(n), where f'(n) and g'(n) respectively denote the derivatives of f(n) and g(n).

In deriving an upper or lower bound for a summation, it is often useful to split a summation as follows:

sum_{i=1}^{n} f(i) = sum_{i=1}^{j} f(i) + sum_{i=j+1}^{n} f(i).

Another useful technique is to bound a sum by an integral. If f is a nondecreasing function, then, assuming the following terms are defined,

integral from a-1 to b of f(x) dx <= sum_{i=a}^{b} f(i) <= integral from a to b+1 of f(x) dx.

There is a general form of recurrence relation that arises in the analysis of divide-and-conquer algorithms:

T(n) = aT(n/b) + f(n),

for constants a >= 1 and b > 1.

Proposition: Let T(n) be defined as above. Then:

1. If f(n) is O(n^(log_b a - epsilon)) for some constant epsilon > 0, then T(n) is Theta(n^(log_b a)).
2. If f(n) is Theta(n^(log_b a) log^k n), for a fixed nonnegative integer k >= 0, then T(n) is Theta(n^(log_b a) log^(k+1) n).
3. If f(n) is Omega(n^(log_b a + epsilon)) for some constant epsilon > 0, and if a f(n/b) <= c f(n) for some c < 1, then T(n) is Theta(f(n)).

This proposition is known as the master method for characterizing divide-and-conquer recurrence relations asymptotically.
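As a quick worked application of the master method (an illustration added here, not an example from the appendix), consider the merge-sort recurrence:

T(n) = 2\,T(n/2) + n, \qquad a = 2,\; b = 2,\; f(n) = n.

n^{\log_b a} = n^{\log_2 2} = n, \qquad f(n) \in \Theta\!\left(n^{\log_b a} \log^0 n\right),

\text{so case 2 applies with } k = 0, \text{ and therefore } T(n) \in \Theta(n \log n).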
Bibliography

G. M. Adelson-Velskii and E. M. Landis, "An algorithm for the organization of information," Doklady Akademii Nauk SSSR, 1962. English translation in Soviet Math. Doklady.

A. Aggarwal and J. S. Vitter, "The input/output complexity of sorting and related problems," Communications of the ACM, vol. 31, pp. 1116-1127, 1988.

A. V. Aho, "Algorithms for finding patterns in strings," in Handbook of Theoretical Computer Science (J. van Leeuwen, ed.), vol. A: Algorithms and Complexity, Amsterdam: Elsevier, 1990.

A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Reading, MA: Addison-Wesley, 1974.

A. V. Aho, J. E. Hopcroft, and J. D. Ullman, Data Structures and Algorithms. Reading, MA: Addison-Wesley, 1983.

R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, Network Flows: Theory, Algorithms, and Applications. Englewood Cliffs, NJ: Prentice Hall, 1993.
A data structure whose elements are connected in ways that do not necessarily follow a hierarchical structure is termed a graph.

An array is a container which can hold a fixed number of items, and these items should be of the same type. Most of the data structures make use of arrays to implement their algorithms. Following are the important terms to understand the concept of an array.

Element: Each item stored in an array is called an element.

Index: Each location of an element in an array has a numerical index, which is used to identify the element.

Array Representation (Storage Structure)

Arrays can be declared in various ways in different languages. For illustration, let's take a C array declaration. As per the above illustration, following are the important points to be considered.

Index starts with 0.
Array length is 10, which means it can store 10 elements.
Each element can be accessed via its index.

Basic Operations

Following are the basic operations supported by an array.

Traverse: Print all the array elements one by one.
Insertion: Adds an element at the given index.
Deletion: Deletes an element at the given index.
Search: Searches an element using the given index or by the value.
Update: Updates an element at the given index.

In C, when an array is initialized with size, then it assigns default values to its elements in the following order.

Data Type | Default Value
bool      | false
int       | 0
float     | 0
double    | 0
void      | -
wchar_t   | 0

Insertion Operation

Insert operation is to insert one or more data elements into an array. Based on the requirement, a new element can be added at the beginning, end, or any given index of the array.

Here, we see a practical implementation of the insertion operation, where we add data at a given position of the array.

Algorithm

Let LA be a linear array (unordered) with N elements and K is a positive integer such that K <= N. Following is the algorithm where ITEM is inserted into the Kth position of LA.

1. Start
2. Set J = N
3. Set N = N + 1
4. Repeat steps 5 and 6 while J >= K
5.    Set LA[J+1] = LA[J]
6.    Set J = J - 1
7. Set LA[K] = ITEM
8. Stop

Example

Following is the implementation of the above algorithm. Note that, unlike the 1-based pseudocode, C arrays are 0-based, so the array is declared one slot larger than n and the shifting starts at index n - 1.

#include <stdio.h>

int main() {
   int LA[6] = {1, 3, 5, 7, 8};   /* sample values; room for one extra element */
   int item = 10, k = 3, n = 5;
   int i, j;

   printf("The original array elements are :\n");
   for (i = 0; i < n; i++) {
      printf("LA[%d] = %d \n", i, LA[i]);
   }
whilej >kla[ + la[ ] la[kitemprintf("the array elements after insertion :\ ")for( <ni++printf("la[% % \ "ila[ ])when we compile and execute the above programit produces the following result output the original array elements are la[ la[ la[ la[ la[ the array elements after insertion la[ la[ la[ la[ la[ la[ deletion operation deletion refers to removing an existing element from the array and re-organizing all elements of an array algorithm consider la is linear array with elements and is positive integer such that <= following is the algorithm to delete an element available at the kth position of la start set repeat steps and while set la[jla[ set + set - stop example
lve demo #include void main(int la[{ , , , , }int int ijprintf("the original array elements are :\ ")for( <ni++printf("la[% % \ "ila[ ]) kwhilej nla[ - la[ ] - printf("the array elements after deletion :\ ")for( <ni++printf("la[% % \ "ila[ ])when we compile and execute the above programit produces the following result output the original array elements are la[ la[ la[ la[ la[ the array elements after deletion la[ la[ la[ la[
search operation you can perform search for an array element based on its value or its index algorithm consider la is linear array with elements and is positive integer such that <= following is the algorithm to find an element with value of item using sequential search start set repeat steps and while if la[jis equal item then goto step set + print jitem stop example following is the implementation of the above algorithm live demo #include void main(int la[{ , , , , }int item int printf("the original array elements are :\ ")for( <ni++printf("la[% % \ "ila[ ])whilej )ifla[ =item breakj printf("found element % at position % \ "itemj+ )when we compile and execute the above programit produces the following result output the original array elements are la[ la[ la[
la[ found element at position update operation update operation refers to updating an existing element from the array at given index algorithm consider la is linear array with elements and is positive integer such that <= following is the algorithm to update an element available at the th position of la start set la[ - item stop example following is the implementation of the above algorithm live demo #include void main(int la[{ , , , , }int item int ijprintf("the original array elements are :\ ")for( <ni++printf("la[% % \ "ila[ ])la[ - itemprintf("the array elements after updation :\ ")for( <ni++printf("la[% % \ "ila[ ])when we compile and execute the above programit produces the following result output the original array elements are la[ la[ la[ la[ la[
la[ la[ la[ la[ la[
sparse matrix and its representations matrix is two-dimensional data object made of rows and columnstherefore having total values if most of the elements of the matrix have valuethen it is called sparse matrix why to use sparse matrix instead of simple matrix storagethere are lesser non-zero elements than zeros and thus lesser memory can be used to store only those elements computing timecomputing time can be saved by logically designing data structure traversing only non-zero elements example representing sparse matrix by array leads to wastage of lots of memory as zeroes in the matrix are of no use in most of the cases soinstead of storing zeroes with non-zero elementswe only store non-zero elements this means storing non-zero elements with triples(rowcolumnvaluesparse matrix representations can be done in many ways following are two common representations array representation linked list representation method using arrays #include int main(/assume sparse matrix int sparsematrix[ ][ { }{ }{ }{ }int size for (int ++for (int ++if (sparsematrix[ ][ ! size++int compactmatrix[ ][size]/making of new matrix
for (int ++for (int ++if (sparsematrix[ ][ ! compactmatrix[ ][kicompactmatrix[ ][kjcompactmatrix[ ][ksparsematrix[ ][ ] ++for (int = < ++for (int = <sizej++printf("% "compactmatrix[ ][ ])printf("\ ")return
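The text above lists a linked-list representation as the second common way to store a sparse matrix but only shows the array method. The sketch below is a minimal version of that second option; the node layout (row, col, value, next), the function names, and the example matrix are my own choices, not taken from the text.

#include <stdio.h>
#include <stdlib.h>

/* One node per non-zero element: (row, column, value). */
struct spnode {
    int row, col, value;
    struct spnode *next;
};

/* Append a non-zero element to the end of the list. */
void appendNode(struct spnode **head, int row, int col, int value) {
    struct spnode *n = (struct spnode *)malloc(sizeof(struct spnode));
    n->row = row; n->col = col; n->value = value; n->next = NULL;
    if (*head == NULL) { *head = n; return; }
    struct spnode *p = *head;
    while (p->next != NULL) p = p->next;
    p->next = n;
}

int main() {
    int sparseMatrix[4][5] = {
        {0, 0, 3, 0, 4},
        {0, 0, 5, 7, 0},
        {0, 0, 0, 0, 0},
        {0, 2, 6, 0, 0}
    };
    struct spnode *head = NULL;

    /* Store only the non-zero elements as (row, col, value) nodes. */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 5; j++)
            if (sparseMatrix[i][j] != 0)
                appendNode(&head, i, j, sparseMatrix[i][j]);

    /* Traverse only the stored (non-zero) elements. */
    for (struct spnode *p = head; p != NULL; p = p->next)
        printf("row %d col %d value %d\n", p->row, p->col, p->value);
    return 0;
}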
stack stack is an abstract data type (adt)commonly used in most programming languages it is named stack as it behaves like real-world stackfor example deck of cards or pile of platesetc real-world stack allows operations at one end only for examplewe can place or remove card or plate from the top of the stack only likewisestack adt allows all data operations at one end only at any given timewe can only access the top element of stack this feature makes it lifo data structure lifo stands for last-in-first-out herethe element which is placed (inserted or addedlastis accessed first in stack terminologyinsertion operation is called push operation and removal operation is called pop operation stack representation the following diagram depicts stack and its operations stack can be implemented by means of arraystructurepointerand linked list stack can either be fixed size one or it may have sense of dynamic resizing herewe are going to implement stack using arrayswhich makes it fixed size stack implementation basic operations stack operations may involve initializing the stackusing it and then de-initializing it apart from these basic stuffsa stack is used for the following two primary operations push(pushing (storingan element on the stack
when data is pushed onto stack to use stack efficientlywe need to check the status of stack as well for the same purposethe following functionality is added to stacks peek(get the top data element of the stackwithout removing it isfull(check if stack is full isempty(check if stack is empty at all timeswe maintain pointer to the last pushed data on the stack as this pointer always represents the top of the stackhence named top the top pointer provides top value of the stack without actually removing it first we should learn about procedures to support stack functions peek(algorithm of peek(function begin procedure peek return stack[topend procedure implementation of peek(function in programming language example int peek(return stack[top]isfull(algorithm of isfull(function begin procedure isfull if top equals to maxsize return true else return false endif end procedure implementation of isfull(function in programming language example bool isfull(if(top =maxsizereturn trueelse return false
algorithm of isempty(function begin procedure isempty if top less than return true else return false endif end procedure implementation of isempty(function in programming language is slightly different we initialize top at - as the index in array starts from so we check if the top is below zero or - to determine if the stack is empty here' the code example bool isempty(if(top =- return trueelse return falsepush operation the process of putting new data element onto stack is known as push operation push operation involves series of steps step checks if the stack is full step if the stack is fullproduces an error and exit step if the stack is not fullincrements top to point next empty space step adds data element to the stack locationwhere top is pointing step returns success if the linked list is used to implement the stackthen in step we need to allocate space dynamically algorithm for push operation simple algorithm for push operation can be derived as follows begin procedure pushstackdata if stack is full
endif top top stack[topdata end procedure implementation of this algorithm in cis very easy see the following code example void push(int dataif(!isfull()top top stack[topdataelse printf("could not insert datastack is full \ ")pop operation accessing the content while removing it from the stackis known as pop operation in an array implementation of pop(operationthe data element is not actually removedinstead top is decremented to lower position in the stack to point to the next value but in linked-list implementationpop(actually removes data element and deallocates memory space pop operation may involve the following steps step checks if the stack is empty step if the stack is emptyproduces an error and exit step if the stack is not emptyaccesses the data element at which top is pointing step decreases the value of top by step returns success algorithm for pop operation
begin procedure popstack if stack is empty return null endif data stack[toptop top return data end procedure implementation of this algorithm in cis as follows example int pop(int dataif(!isempty()data stack[top]top top return dataelse printf("could not retrieve datastack is empty \ ")
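The push(), pop(), peek(), isfull() and isempty() fragments above rely on a few globals (the stack array, top, MAXSIZE) that never appear together in one place. The sketch below collects them into one runnable program; the array size and test values are mine, and it treats the stack as full when top reaches the last valid index MAXSIZE - 1.

#include <stdio.h>
#include <stdbool.h>

#define MAXSIZE 8

int stack[MAXSIZE];
int top = -1;                 /* -1 means the stack is empty */

bool isempty() { return top == -1; }
bool isfull()  { return top == MAXSIZE - 1; }

void push(int data) {
    if (!isfull())
        stack[++top] = data;
    else
        printf("Could not insert data, stack is full.\n");
}

int pop() {
    if (!isempty())
        return stack[top--];
    printf("Could not retrieve data, stack is empty.\n");
    return -1;
}

int main() {
    push(3); push(5); push(9);
    printf("Top element is %d\n", stack[top]);   /* peek without removing */
    while (!isempty())
        printf("Popped %d\n", pop());            /* prints 9, 5, 3 (LIFO order) */
    return 0;
}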
Stack applications

Three applications of stacks are presented here. These examples are central to many activities that a computer must do and deserve time spent with them.

1. Expression evaluation
2. Backtracking (game playing, finding paths, exhaustive searching)
3. Memory management, run-time environment for nested language features

Expression evaluation

In particular we will consider arithmetic expressions. Understand that there are boolean and logical expressions that can be evaluated in the same way; control structures can also be treated similarly in a compiler. This study of arithmetic expression evaluation is an example of problem solving where you solve a simpler problem and then transform the actual problem to the simpler one.

Aside: the NP-complete problem. There is a set of apparently intractable problems -- finding the shortest route in a graph (traveling salesman problem), bin packing, linear programming, etc. -- that are similar enough that if a polynomial solution is ever found (exponential solutions abound) for one of these problems, then the solution can be applied to all of them.

Infix, prefix and postfix notation

We are accustomed to writing arithmetic expressions with the operation between the two operands: a+b or c/d. If we write a+b*c, however, we have to apply precedence rules to avoid an ambiguous evaluation (add first or multiply first?). There is no real reason to put the operation between the variables or values; it can just as well precede or follow the operands. You should note the advantage of prefix and postfix: the need for precedence rules and parentheses is eliminated.

Infix          Prefix        Postfix
a+b            +ab           ab+
a+b*c          +a*bc         abc*+
(a+b)*(c-d)    *+ab-cd       ab+cd-*
b*b-4*a*c      -*bb**4ac     bb*4a*c*-

Postfix expressions are easily evaluated with the aid of a stack.

Postfix evaluation algorithm

Assume we have a string of operands and operators. An informal, by-hand process is:

1. Scan the expression left to right.
2. Skip values or variables (operands).
3. When an operator is found, apply the operation to the preceding two operands.
4. Replace the two operands and operator with the calculated value (three symbols are replaced with one operand).
5. Continue scanning until only a value remains -- the result of the expression.

The time complexity is O(n) because each operand is scanned once and each operation is performed once.

A more formal algorithm:

    create a new stack
    while (input stream is not empty) {
        token = getNextToken();
        if (token instanceof operand) {
            push(token);
        } else if (token instanceof operator) {
            op2 = pop();
            op1 = pop();
            result = calc(token, op1, op2);
            push(result);
        }
    }
    return pop();

Demonstration: infix transformation to postfix

This process uses a stack as well. We have to hold information that is expressed inside parentheses while scanning to find the closing ')'. We also have to hold information on operations that are of lower precedence on the stack. The algorithm is:

1. Create an empty stack and an empty postfix output string/stream.
2. Scan the infix input string/stream left to right.
3. If the current input token is an operand, simply append it to the output string (note from the examples above that the operands remain in the same order).
4. If the current input token is an operator, pop off all operators that have equal or higher precedence and append them to the output string, then push the current operator onto the stack; the order of popping is the order in the output.
5. If the current input token is '(', push it onto the stack.
6. If the current input token is ')', pop off all operators and append them to the output string until '(' is popped; discard the '('.
7. If the end of the input string is found, pop all operators and append them to the output string.

This algorithm doesn't handle errors in the input, although careful analysis of parentheses (or the lack of them) could point to such error determination. Apply the algorithm to the above expressions.

Backtracking

Backtracking is used in algorithms in which there are steps along some path (state) from some starting point to some goal:

- find your way through a maze
- find a path from one point in a graph (roadmap) to another point
- play a game in which there are moves to be made (checkers, chess)

In all of these cases, there are choices to be made among a number of options. We need some way to remember these decision points in case we want or need to come back and try the alternative.

Consider the maze. At a point where a choice is made, we may discover that the choice leads to a dead-end. We want to retrace back to that decision point and then try the other (next) alternative. Again, stacks can be used as part of the solution. Recursion is another, typically more favored, solution, which is actually implemented by a stack.

Memory management

Any modern computer environment uses a stack as the primary memory management model for a running program. Whether it is native code (e.g., Sun, VAX) or the JVM, a stack is at the center of the run-time environment for Java, C++, Ada, FORTRAN, etc. The discussion of the JVM in the text is consistent with NT, Solaris, VMS and UNIX runtime environments. Each program that is running in a computer system has its own memory allocation containing the typical layout as shown below.
when method/function is called an activation record is createdits size depends on the number and size of the local variables and parameters the base pointer value is saved in the special location reserved for it the program counter value is saved in the return address location the base pointer is now reset to the new base (top of the call stack prior to the creation of the ar the program counter is set to the location of the first bytecode of the method being called copies the calling parameters into the parameter region initializes local variables in the local variable region while the method executesthe local variables and parameters are simply found by adding constant associated with each variable/parameter to the base pointer when method returns get the program counter from the activation record and replace what' in the pc get the base pointer value from the ar and replace what' in the bp pop the ar entirely from the stack
queue queue is an abstract data structuresomewhat similar to stacks unlike stacksa queue is open at both its ends one end is always used to insert data (enqueueand the other is used to remove data (dequeuequeue follows first-in-first-out methodologyi the data item stored first will be accessed first real-world example of queue can be single-lane one-way roadwhere the vehicle enters firstexits first more real-world examples can be seen as queues at the ticket windows and busstops queue representation as we now understand that in queuewe access both ends for different reasons the following diagram given below tries to explain queue representation as data structure as in stacksa queue can also be implemented using arrayslinked-listspointers and structures for the sake of simplicitywe shall implement queues using one-dimensional array basic operations queue operations may involve initializing or defining the queueutilizing itand then completely erasing it from the memory here we shall try to understand the basic operations associated with queues enqueue(add (storean item to the queue dequeue(remove (accessan item from the queue few more functions are required to make the above-mentioned queue operation efficient these are peek(gets the element at the front of the queue without removing it isfull(checks if the queue is full isempty(checks if the queue is empty in queuewe always dequeue (or accessdatapointed by front pointer and while enqueing (or storingdata in the queue we take help of rear pointer let' first learn about supportive functions of queue peek(
as follows algorithm begin procedure peek return queue[frontend procedure implementation of peek(function in programming language example int peek(return queue[front]isfull(as we are using single dimension array to implement queuewe just check for the rear pointer to reach at maxsize to determine that the queue is full in case we maintain the queue in circular linked-listthe algorithm will differ algorithm of isfull(function algorithm begin procedure isfull if rear equals to maxsize return true else return false endif end procedure implementation of isfull(function in programming language example bool isfull(if(rear =maxsize return trueelse return falseisempty(algorithm of isempty(function algorithm begin procedure isempty if front is less than min or front is greater than rear return true
return false endif end procedure if the value of front is less than min or it tells that the queue is not yet initializedhence empty here' the programming code example bool isempty(if(front rearreturn trueelse return falseenqueue operation queues maintain two data pointersfront and rear thereforeits operations are comparatively difficult to implement than that of stacks the following steps should be taken to enqueue (insertdata into queue step check if the queue is full step if the queue is fullproduce overflow error and exit step if the queue is not fullincrement rear pointer to point the next empty space step add data element to the queue locationwhere the rear is pointing step return success sometimeswe also check to see if queue is initialized or notto handle any unforeseen situations algorithm for enqueue operation procedure enqueue(dataif queue is full return overflow
rear rear queue[reardata return true end procedure implementation of enqueue(in programming language example int enqueue(int dataif(isfull()return rear rear queue[reardatareturn end procedure dequeue operation accessing data from the queue is process of two tasks access the data where front is pointing and remove the data after access the following steps are taken to perform dequeue operation step check if the queue is empty step if the queue is emptyproduce underflow error and exit step if the queue is not emptyaccess the data where front is pointing step increment front pointer to point to the next available data element step return success algorithm for dequeue operation procedure dequeue
return underflow end if data queue[frontfront front return true end procedure implementation of dequeue(in programming language example int dequeue(if(isempty()return int data queue[front]front front return data
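As with the stack, the enqueue() and dequeue() fragments above assume globals (the queue array, front, rear, MAXSIZE) that never appear together. Below is a consolidated sketch; note that I add an item counter and wrap the indices so the array works as a circular buffer and can be reused, which differs slightly from the plain linear front/rear scheme in the fragments above. Sizes and test values are mine.

#include <stdio.h>
#include <stdbool.h>

#define MAXSIZE 8

int queue[MAXSIZE];
int front = 0;                /* index of the next item to dequeue        */
int rear  = -1;               /* index of the most recently enqueued item */
int itemCount = 0;            /* number of items currently stored         */

bool isempty() { return itemCount == 0; }
bool isfull()  { return itemCount == MAXSIZE; }

void enqueue(int data) {
    if (!isfull()) {
        rear = (rear + 1) % MAXSIZE;   /* wrap around: circular buffer */
        queue[rear] = data;
        itemCount++;
    }
}

int dequeue() {
    int data = queue[front];
    front = (front + 1) % MAXSIZE;
    itemCount--;
    return data;
}

int main() {
    enqueue(3); enqueue(5); enqueue(9);
    printf("Front element is %d\n", queue[front]);   /* peek */
    while (!isempty())
        printf("Dequeued %d\n", dequeue());          /* prints 3, 5, 9 (FIFO order) */
    return 0;
}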
linked list linked list is sequence of data structureswhich are connected together via links linked list is sequence of links which contains items each link contains connection to another link linked list is the second most-used data structure after array following are the important terms to understand the concept of linked list link each link of linked list can store data called an element next each link of linked list contains link to the next link called next linkedlist linked list contains the connection link to the first link called first linked list representation linked list can be visualized as chain of nodeswhere every node points to the next node as per the above illustrationfollowing are the important points to be considered linked list contains link element called first each link carries data field(sand link field called next each link is linked with its next link using its next link last link carries link as null to mark the end of the list types of linked list following are the various types of linked list simple linked list item navigation is forward only doubly linked list items can be navigated forward and backward circular linked list last item contains link of the first element as next and the first element has link to the last element as previous basic operations following are the basic operations supported by list insertion adds an element at the beginning of the list deletion deletes an element at the beginning of the list display displays the complete list search searches an element using the given key delete deletes an element using the given key insertion operation adding new node in linked list is more than one step activity we shall learn this with diagrams here firstcreate node using the same structure and find the location where it has to be inserted
and (rightnodethen point next to between (leftnodenewnode next -rightnodeit should look like this nowthe next node at the left should point to the new node leftnode next -newnodethis will put the new node in the middle of the two the new list should look like this similar steps should be taken if the node is being inserted at the beginning of the list while inserting it at the endthe second last node of the list should point to the new node and the new node will point to null
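The diagrams referred to above are not reproduced here, so a small sketch of the middle-insertion case may help. The struct matches the full program given later in this chapter; the helper function and its name are my own.

#include <stdlib.h>

struct node {
    int data;
    int key;
    struct node *next;
};

/* Insert newNode between leftNode and its current right neighbour,
   exactly as described above:
   1. point the new node at the right neighbour (NewNode.next -> RightNode),
   2. point the left neighbour at the new node  (LeftNode.next -> NewNode).  */
void insertAfter(struct node *leftNode, struct node *newNode) {
    if (leftNode == NULL || newNode == NULL)
        return;                         /* nothing sensible to do */
    newNode->next = leftNode->next;
    leftNode->next = newNode;
}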
deletion is also more than one step process we shall learn with pictorial representation firstlocate the target node to be removedby using searching algorithms the left (previousnode of the target node now should point to the next node of the target node leftnode next -targetnode nextthis will remove the link that was pointing to the target node nowusing the following codewe will remove what the target node is pointing at targetnode next -nullwe need to use the deleted node we can keep that in memory otherwise we can simply deallocate memory and wipe off the target node completely reverse operation this operation is thorough one we need to make the last node to be pointed by the head node and reverse the whole linked list
make it point to its previous node we have to make sure that the last node is not the lost node so we'll have some temp nodewhich looks like the head node pointing to the last node nowwe shall make all left side nodes point to their previous nodes one by one except the node (first nodepointed by the head nodeall nodes should point to their predecessormaking them their new successor the first node will point to null we'll make the head node point to the new first node by using the temp node the linked list is now reversed program#include #include #include #include struct node int dataint keystruct node *next}
struct node *current null//display the list void printlist(struct node *ptr headprintf("\ ")//start from the beginning while(ptr !nullprintf("(% ,% ",ptr->key,ptr->data)ptr ptr->nextprintf(]")//insert link at the first location void insertfirst(int keyint data//create link struct node *link (struct node*malloc(sizeof(struct node))link->key keylink->data data//point it to old first node link->next head//point first to new first node head link//delete first item struct nodedeletefirst(//save reference to first link struct node *templink head//mark next to first link as first head head->next//return the deleted link
//is list empty bool isempty(return head =nullint length(int length struct node *currentfor(current headcurrent !nullcurrent current->nextlength++return length//find link with given key struct nodefind(int key//start from the first link struct nodecurrent head//if list is empty if(head =nullreturn null//navigate through list while(current->key !key//if it is last node if(current->next =nullreturn nullelse //go to next link current current->next
return current//delete link with given key struct nodedelete(int key//start from the first link struct nodecurrent headstruct nodeprevious null//if list is empty if(head =nullreturn null//navigate through list while(current->key !key//if it is last node if(current->next =nullreturn nullelse //store reference to current link previous current//move to next link current current->next//found matchupdate the link if(current =head//change first to point to next link head head->nextelse //bypass the current link previous->next current->nextreturn current
int ijktempkeytempdatastruct node *currentstruct node *nextint size length() size for size ++ -current headnext head->nextfor +if current->data next->data tempdata current->datacurrent->data next->datanext->data tempdatatempkey current->keycurrent->key next->keynext->key tempkeycurrent current->nextnext next->nextvoid reverse(struct node*head_refstruct nodeprev nullstruct nodecurrent *head_refstruct nodenextwhile (current !nullnext current->nextcurrent->next prevprev currentcurrent next
void main(insertfirst( , )insertfirst( , )insertfirst( , )insertfirst( , )insertfirst( , )insertfirst( , )printf("original list")//print list printlist()while(!isempty()struct node *temp deletefirst()printf("\ndeleted value:")printf("(% ,% ",temp->key,temp->data)printf("\nlist after deleting all items")printlist()insertfirst( , )insertfirst( , )insertfirst( , )insertfirst( , )insertfirst( , )insertfirst( , )printf("\nrestored list")printlist()printf("\ ")struct node *foundlink find( )if(foundlink !nullprintf("element found")printf("(% ,% ",foundlink->key,foundlink->data)printf("\ ")
printf("element not found ")delete( )printf("list after deleting an item")printlist()printf("\ ")foundlink find( )if(foundlink !nullprintf("element found")printf("(% ,% ",foundlink->key,foundlink->data)printf("\ ")else printf("element not found ")printf("\ ")sort()printf("list after sorting the data")printlist()reverse(&head)printf("\nlist after reversing the data")printlist()if we compile and run the above programit will produce the following result output original list( , ( , ( , ( , ( , ( , deleted value:( , deleted value:( , deleted value:( , deleted value:( , deleted value:( , deleted value:( , list after deleting all items[restored list( , ( , ( , ( , ( , ( ,
list after deleting an item( , ( , ( , ( , ( , element not found list after sorting the data( , ( , ( , ( , ( , list after reversing the data( , ( , ( , ( , ( ,
Polynomial list

A polynomial p(x) is an expression in a variable x of the form a·x^n + b·x^(n-1) + ... + j·x + k, where the coefficients a, b, ..., j, k are real numbers and n is a non-negative integer called the degree of the polynomial. An important characteristic of a polynomial is that each term in the polynomial expression consists of two parts: one is the coefficient, the other is the exponent. For example, in a term written as c·x^m, c is the coefficient and m is the exponent.

Points to keep in mind while working with polynomials:
The sign of each coefficient and exponent is stored within the coefficient and the exponent itself.
More than one term with the same exponent may occur.
The storage allocation for each term in the polynomial should be done in ascending or descending order of the exponents.

Representation of polynomial

A polynomial can be represented in various ways, including:
By the use of arrays
By the use of a linked list

Representation of polynomials using arrays

There may arise situations where you need to evaluate many polynomial expressions and perform basic arithmetic operations like addition and subtraction with them. For this you will have to find a way to represent those polynomials. The simple way is to represent a polynomial of degree n by storing the coefficients of its n+1 terms in an array, so that every array element consists of two values: coefficient and exponent.

Representation of polynomials using linked lists

A polynomial can be thought of as an ordered list of non-zero terms. Each non-zero term is a two-tuple which holds two pieces of information:
the exponent part the coefficient part adding two polynomials using linked list given two polynomial numbers represented by linked list write function that add these lists means add the coefficients who have same variable powers exampleinput st number ^ ^ ^ nd number ^ ^ output ^ ^ ^ input st number ^ ^ ^ nd number ^ ^ output ^ ^ ^ ^
int coeffint powstruct node *next}void create_node(int xint ystruct node **tempstruct node * *zz *tempif( =nullr =(struct node*)malloc(sizeof(struct node)) ->coeff xr->pow *temp rr->next (struct node*)malloc(sizeof(struct node)) ->nextr->next nullelse ->coeff xr->pow
->nextr->next nullvoid polyadd(struct node *poly struct node *poly struct node *polywhile(poly ->next &poly ->nextif(poly ->pow poly ->powpoly->pow poly ->powpoly->coeff poly ->coeffpoly poly ->nextelse if(poly ->pow powpoly->pow poly ->powpoly->coeff poly ->coeffpoly poly ->nextelse poly->pow poly ->powpoly->coeff poly ->coeff+poly ->coeff
poly poly ->nextpoly->next (struct node *)malloc(sizeof(struct node))poly poly->nextpoly->next nullwhile(poly ->next |poly ->nextif(poly ->nextpoly->pow poly ->powpoly->coeff poly ->coeffpoly poly ->nextif(poly ->nextpoly->pow poly ->powpoly->coeff poly ->coeffpoly poly ->nextpoly->next (struct node *)malloc(sizeof(struct node))poly poly->nextpoly->next null
void show(struct node *nodewhile(node->next !nullprintf("%dx^% "node->coeffnode->pow)node node->nextif(node->next !nullprintf(")int main(struct node *poly null*poly null*poly null/create first list of ^ ^ ^ create_node( , ,&poly )create_node( , ,&poly )create_node( , ,&poly )/create second list of ^ ^ create_node( , ,&poly )create_node( , ,&poly )printf(" st number")show(poly )printf("\ nd number")show(poly )
/function add two polynomial numbers polyadd(poly poly poly)/display resultant list printf("\nadded polynomial")show(poly)return output st number ^ ^ ^ nd number ^ ^ added polynomial ^ ^ ^
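The start of this chapter mentions evaluating polynomial expressions, but only addition is implemented above. Below is a hedged sketch of evaluating a polynomial list at a value x, reusing the same node layout (coeff, pow, next); the function name polyeval and the use of pow() from math.h are my own choices (link with -lm). The lists built by create_node() above keep an extra placeholder node at the tail, which is why the loop stops when next is NULL, just as show() does.

#include <math.h>
#include <stddef.h>

struct node {
    int coeff;
    int pow;
    struct node *next;
};

/* Evaluate the polynomial stored in the list at x by summing
   coeff * x^pow over every real term (the trailing placeholder
   node is skipped).                                             */
double polyeval(struct node *poly, double x) {
    double result = 0.0;
    while (poly != NULL && poly->next != NULL) {
        result += poly->coeff * pow(x, poly->pow);
        poly = poly->next;
    }
    return result;
}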
Doubly linked list

A doubly linked list (DLL) contains an extra pointer, typically called the previous pointer, together with the next pointer and data which are there in a singly linked list. Following is a representation of a DLL node in C language:

/* Node of a doubly linked list */
struct node {
    int data;
    struct node *next;   /* pointer to next node in DLL     */
    struct node *prev;   /* pointer to previous node in DLL */
};

Following are advantages/disadvantages of a doubly linked list over a singly linked list.

Advantages over a singly linked list:
A DLL can be traversed in both forward and backward direction.
The delete operation in a DLL is more efficient if a pointer to the node to be deleted is given.
We can quickly insert a new node before a given node.
In a singly linked list, to delete a node, a pointer to the previous node is needed; to get this previous node, sometimes the list has to be traversed. In a DLL, we can get the previous node using the previous pointer.

Disadvantages over a singly linked list:
Every node of a DLL requires extra space for a previous pointer. (It is possible to implement a DLL with a single pointer field per node, though.)
All operations require the extra pointer, previous, to be maintained. For example, in insertion, we need to modify previous pointers together with next pointers, so the insertion functions at different positions below need a few extra steps to set the previous pointer.

Insertion

A node can be added in four ways:
1. At the front of the DLL
2. After a given node
3. At the end of the DLL
4. Before a given node

Add a node at the front: the new node is always added before the head of the given linked list, and the newly added node becomes the new head of the DLL. For example, if the given linked list is:
let us call the function that adds at the front of the list is push(the push(must receive pointer to the head pointerbecause push must change the head pointer to point to the new node add node after given node ( steps processwe are given pointer to node as prev_nodeand the new node is inserted after the given node add node at the end( steps processthe new node is always added after the last node of the given linked list for example if the given dll is and we add an item at the endthen the dll becomes since linked list is typically represented by the head of itwe have to traverse the list till end and then change the next of last node to new node add node before given nodesteps let the pointer to this given node be next_node and the data of the new node to be added as new_data
because any new node can not be added before null allocate memory for the new nodelet it be called new_node set new_node->data new_data set the previous pointer of this new_node as the previous node of the next_nodenew_node->prev next_node->prev set the previous pointer of the next_node as the new_nodenext_node->prev new_node set the next pointer of this new_node as the next_nodenew_node->next next_node if the previous node of the new_node is not nullthen set the next pointer of this previous node as new_nodenew_node->prev->next new_node
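The steps above map almost line-for-line onto C. A hedged sketch follows; the function name insertBefore and the head-pointer update are my own additions (a node inserted before the current head must also become the new head, which the steps above do not mention).

#include <stdlib.h>

struct node {
    int data;
    struct node *next;   /* pointer to next node in DLL     */
    struct node *prev;   /* pointer to previous node in DLL */
};

/* Insert a new node carrying new_data immediately before next_node. */
void insertBefore(struct node **head_ref, struct node *next_node, int new_data) {
    if (next_node == NULL)                 /* cannot add before NULL        */
        return;

    struct node *new_node = (struct node *)malloc(sizeof(struct node)); /* allocate */
    new_node->data = new_data;             /* set the data                  */
    new_node->prev = next_node->prev;      /* new_node->prev = next_node->prev */
    next_node->prev = new_node;            /* next_node->prev = new_node    */
    new_node->next = next_node;            /* new_node->next = next_node    */

    if (new_node->prev != NULL)            /* previous node exists          */
        new_node->prev->next = new_node;
    else
        *head_ref = new_node;              /* new node becomes the new head */
}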
circular linked list circular linked list is linked list where all nodes are connected to form circle there is no null at the end circular linked list can be singly circular linked list or doubly circular linked list advantages of circular linked lists any node can be starting point we can traverse the whole list by starting from any point we just need to stop when the first visited node is visited again useful for implementation of queue unlike this implementationwe don' need to maintain two pointers for front and rear if we use circular linked list we can maintain pointer to the last inserted node and front can always be obtained as next of last circular lists are useful in applications to repeatedly go around the list for examplewhen multiple applications are running on pcit is common for the operating system to put the running applications on list and then to cycle through themgiving each of them slice of time to executeand then making them wait while the cpu is given to another application it is convenient for the operating system to use circular list so that when it reaches the end of the list it can cycle around to the front of the list circular doubly linked lists are used for implementation of advanced data structures like fibonacci heap insertion in an empty list initially when the list is emptylast pointer will be null after inserting node tafter insertiont is the last node so pointer last points to node and node is first and last nodeso is pointing to itself function to insert node in an empty liststruct node *addtoempty(struct node *lastint data/this function is only for empty list if (last !nullreturn last
struct node *last (struct node*)malloc(sizeof(struct node))/assigning the data last -data data/note list was empty we link single node /to itself last -next lastreturn lastrun on ide insertion at the beginning of the list to insert node at the beginning of the listfollow these step create nodesay make -next last -next last -next after insertionfunction to insert node in the beginning of the liststruct node *addbegin(struct node *lastint dataif (last =nullreturn addtoempty(lastdata)/creating node dynamically struct node *temp (struct node *)malloc(sizeof(struct node))/assigning the data temp -data data/adjusting the links temp -next last -nextlast -next tempreturn last
insertion at the end of the list to insert node at the end of the listfollow these step create nodesay make -next last -next last -next last after insertionfunction to insert node in the end of the liststruct node *addend(struct node *lastint dataif (last =nullreturn addtoempty(lastdata)/creating node dynamically
(struct node *)malloc(sizeof(struct node))/assigning the data temp -data data/adjusting the links temp -next last -nextlast -next templast tempreturn lastinsertion in between the nodes to insert node at the end of the listfollow these step create nodesay search the node after which need to be insertsay that node be make -next -next -next suppose need to be insert after node having value after searching and insertion
struct node *addafter(struct node *lastint dataint itemif (last =nullreturn nullstruct node *temp*pp last -next/searching the item do if ( ->data =itemtemp (struct node *)malloc(sizeof(struct node))/assigning the data temp -data data/adjusting the links temp -next -next/adding newly allocated node after -next temp/checking for the last node if ( =lastlast tempreturn lastp -nextwhile ( !last -next)cout <item <not present in the list <endlreturn last
lecture- memory allocationwhenever new node is createdmemory is allocated by the system this memory is taken from list of those memory locations which are free not allocated this list is called avail list similarlywhenever node is deletedthe deleted space becomes reusable and is added to the list of unused space to avail list this unused space can be used in future for memory allocation memory allocation is of two types static memory allocation dynamic memory allocation static memory allocationwhen memory is allocated during compilation timeit is called 'static memory allocationthis memory is fixed and cannot be increased or decreased after allocation if more memory is allocated than requirementthen memory is wasted if less memory is allocated than requirementthen program will not run successfully so exact memory requirements must be known in advance dynamic memory allocationwhen memory is allocated during run/execution timeit is called 'dynamic memory allocationthis memory is not fixed and is allocated according to our requirements thus in it there is no wastage of memory so there is no need to know exact memory requirements in advance garbage collectionwhenever node is deletedsome memory space becomes reusable this memory space should be available for future use one way to do this is to immediately insert the free space into availability list but this method may be time consuming for the operating system so another method is used which is called 'garbage collectionthis method is described belowin this method the os collects the deleted space time to time onto the availability list this process happens in two steps in first stepthe os goes through all the lists and tags all those cells which are currently being used in the second stepthe
space to availability list the garbage collection may occur when small amount of free space is left in the system or no free space is left in the system or when cpu is idle and has time to do the garbage collection compaction one preferable solution to garbage collection is compaction the process of moving all marked nodes to one end of memory and all available memory to other end is called compaction algorithm which performs compaction is called compacting algorithm
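To make the static/dynamic distinction described in this lecture concrete, here is a minimal C sketch; the fixed size of 100 and the prompt are arbitrary choices of mine.

#include <stdio.h>
#include <stdlib.h>

int main() {
    /* Static allocation: the size is fixed at compile time.
       Guess too high and memory is wasted; too low and the
       program cannot grow the array.                         */
    int fixed[100];
    fixed[0] = 42;

    /* Dynamic allocation: the size is chosen at run time,
       so only what is actually needed is requested.          */
    int n;
    printf("How many integers? ");
    if (scanf("%d", &n) != 1 || n <= 0)
        return 1;

    int *dynamic = (int *)malloc(n * sizeof(int));
    if (dynamic == NULL)
        return 1;                  /* allocation failed */
    for (int i = 0; i < n; i++)
        dynamic[i] = i;

    /* Freeing the block returns it to the free (avail) list,
       where it can be reused for later allocations.          */
    free(dynamic);
    return 0;
}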
infix to postfix conversion #include char stack[ ]int top - void push(char xstack[++topxchar pop(if(top =- return - else return stack[top--]int priority(char xif( ='('return if( ='+| ='-'return if( ='*| ='/'return main(char exp[ ]char *exprintf("enter the expression :")scanf("% ",exp) expwhile(* !'\ 'if(isalnum(* )printf("% ",* )else if(* ='('push(* )else if(* =')'
while(( pop()!'('printf("% " )else while(priority(stack[top]>priority(* )printf("% ",pop())push(* ) ++while(top !- printf("% ",pop())outputenter the expression : + * abc*enter the expression :( + )* +( -aab+ *da-
#include int stack[ ] int top - void push(int stack[++topx int pop( return stack[top--] int main( char exp[ ] char * int , , ,num printf("enter the expression :") scanf("% ",exp) exp while(* !'\ ' if(isdigit(* )
num * push(num) else pop() pop() switch(* case '+' break case '-' break case '*' break
case '/' break push( ) ++ printf("\nthe result of expression % % \ \ ",exp,pop()) return outputenter the expression : +the result of expression +
binary tree binary tree consists of finite set of nodes that is either emptyor consists of one specially designated node called the root of the binary treeand the elements of two disjoint binary trees called the left subtree and right subtree of the root note that the definition above is recursivewe have defined binary tree in terms of binary trees this is appropriate since recursion is an innate characteristic of tree structures diagram binary tree binary tree terminology tree terminology is generally derived from the terminology of family trees (specificallythe type of family tree called lineal charteach root is said to be the parent of the roots of its subtrees two nodes with the same parent are said to be siblingsthey are the children of their parent the root node has no parent great deal of tree processing takes advantage of the relationship between parent and its childrenand we commonly say directed edge (or simply an edgeextends from parent to its children thus edges connect root with the roots of each subtree an undirected edge extends in both directions between parent and child
grandparent and grandchild relations can be defined in similar mannerwe could also extend this terminology further if we wished (designating nodes as cousinsas an uncle or auntetc other tree terms the number of subtrees of node is called the degree of the node in binary treeall nodes have degree or node of degree zero is called terminal node or leaf node non-leaf node is often called branch node the degree of tree is the maximum degree of node in the tree binary tree is degree directed path from node to nk is defined as sequence of nodes nk such that ni is the parent of ni+ for < an undirected path is similar sequence of undirected edges the length of this path is the number of edges on the pathnamely ( the number of nodes there is path of length zero from every node to itself notice that in binary tree there is exactly one path from the root to each node the level or depth of node with respect to tree is defined recursivelythe level of the root is zeroand the level of any other node is one higher than that of its parent or to put it another waythe level or depth of node ni is the length of the unique path from the root to ni the height of ni is the length of the longest path from ni to leaf thus all leaves in the tree are at height the height of tree is equal to the height of the root the depth of tree is equal to the level or depth of the deepest leafthis is always equal to the height of the tree if there is directed path from to then is an ancestor of and is descendant of
Special forms of binary trees

There are a few special forms of binary tree worth mentioning.

If every non-leaf node in a binary tree has nonempty left and right subtrees, the tree is termed a strictly binary tree. Or, to put it another way, all of the nodes in a strictly binary tree are of degree zero or two, never degree one. A strictly binary tree with n leaves always contains 2n - 1 nodes. Some texts call this a "full" binary tree.

A complete binary tree of depth d is the strictly binary tree all of whose leaves are at level d. The total number of nodes in a complete binary tree of depth d equals 2^(d+1) - 1. Since all leaves in such a tree are at level d, the tree contains 2^d leaves and, therefore, 2^d - 1 internal nodes.

Diagram: complete binary tree

A binary tree of depth d is an almost complete binary tree if:
1. each leaf in the tree is either at level d or at level d - 1, and
2. for any node nd in the tree with a right descendant at level d, all the left descendants of nd that are leaves are also at level d.

Diagram: an almost complete binary tree
An almost complete strictly binary tree with n leaves has 2n - 1 nodes, as does any other strictly binary tree with n leaves. An almost complete binary tree with n leaves that is not strictly binary has 2n nodes. There are two distinct almost complete binary trees with n leaves, one of which is strictly binary and one of which is not. There is only a single almost complete binary tree with n nodes; this tree is strictly binary if and only if n is odd.

Representing binary trees in memory

Array representation

For a complete or almost complete binary tree, storing the binary tree as an array may be a good choice. One way to do this is to store the root of the tree in the first element of the array. Then, for each node in the tree that is stored at subscript k, the node's left child can be stored at subscript 2k + 1 and the right child can be stored at subscript 2k + 2. For example, the almost complete binary tree shown in the diagram above can be stored in an array in exactly this way. However, if this scheme is used to store a binary tree that is not complete or almost complete, we can end up with a great deal of wasted space in the array.
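Under the root-at-subscript-0 scheme just described, moving between a node and its children is pure index arithmetic. A small sketch (the helper names are mine):

#include <stdio.h>

/* Index arithmetic for a binary tree stored in an array,
   with the root kept at subscript 0.                      */
int leftChild(int k)  { return 2 * k + 1; }
int rightChild(int k) { return 2 * k + 2; }
int parent(int k)     { return (k - 1) / 2; }  /* not meaningful for the root */

int main() {
    /* The children of the node at subscript 1 sit at 3 and 4,
       and its parent is the root at subscript 0.              */
    printf("%d %d %d\n", leftChild(1), rightChild(1), parent(1));
    return 0;
}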
A tree that is not complete or almost complete, stored with this technique, would occupy an array with many unused positions.

Linked representation

If a binary tree is not complete or almost complete, a better choice for storing it is to use a linked representation similar to the linked list structures covered earlier in the semester.
The tree is accessed through a pointer to the root node of the tree (labeled root in the diagram above). Any pointer in the tree structure that does not point to a node will normally contain the value NULL. A linked tree with n nodes will always contain n + 1 null links.
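The linked representation just described typically uses a node with a data field and two child pointers. A minimal sketch in the style of the earlier struct definitions (the field and function names are mine):

#include <stdlib.h>

struct treenode {
    int data;
    struct treenode *left;    /* left subtree, NULL if absent  */
    struct treenode *right;   /* right subtree, NULL if absent */
};

/* Allocate a leaf node; both child links start out NULL,
   which is why a tree with n nodes holds n + 1 null links. */
struct treenode *newNode(int data) {
    struct treenode *n = (struct treenode *)malloc(sizeof(struct treenode));
    n->data = data;
    n->left = NULL;
    n->right = NULL;
    return n;
}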
tree traversaltraversal is process to visit all the nodes of tree and may print their values too becauseall nodes are connected via edges (linkswe always start from the root (headnode that iswe cannot randomly access node in tree there are three ways which we use to traverse tree in-order traversal pre-order traversal post-order traversal generallywe traverse tree to search or locate given item or key in the tree or to print all the values it contains in-order traversal in this traversal methodthe left subtree is visited firstthen the root and later the right sub-tree we should always remember that every node may represent subtree itself if binary tree is traversed in-orderthe output will produce sorted key values in an ascending order we start from aand following in-order traversalwe move to its left subtree is also traversed in-order the process goes on until all the nodes are visited the output of inorder traversal of this tree will be - - - - - -
until all nodes are traversed step recursively traverse left subtree step visit root node step recursively traverse right subtree pre-order traversal in this traversal methodthe root node is visited firstthen the left subtree and finally the right subtree we start from aand following pre-order traversalwe first visit itself and then move to its left subtree is also traversed pre-order the process goes on until all the nodes are visited the output of pre-order traversal of this tree will be - - - - - - algorithm until all nodes are traversed step visit root node step recursively traverse left subtree step recursively traverse right subtree
in this traversal methodthe root node is visited lasthence the name first we traverse the left subtreethen the right subtree and finally the root node we start from aand following post-order traversalwe first visit the left subtree is also traversed post-order the process goes on until all the nodes are visited the output of post-order traversal of this tree will be - - - - - - algorithm until all nodes are traversed step recursively traverse left subtree step recursively traverse right subtree step visit root node
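The three traversal orders above translate directly into short recursive C functions. A hedged sketch using a node layout like the one in the previous section; the tiny test tree in main() is my own example, not the tree from the missing figures.

#include <stdio.h>
#include <stdlib.h>

struct treenode {
    int data;
    struct treenode *left, *right;
};

struct treenode *newNode(int data) {
    struct treenode *n = (struct treenode *)malloc(sizeof(struct treenode));
    n->data = data; n->left = n->right = NULL;
    return n;
}

void inorder(struct treenode *t) {             /* left, root, right */
    if (t == NULL) return;
    inorder(t->left); printf("%d ", t->data); inorder(t->right);
}

void preorder(struct treenode *t) {            /* root, left, right */
    if (t == NULL) return;
    printf("%d ", t->data); preorder(t->left); preorder(t->right);
}

void postorder(struct treenode *t) {           /* left, right, root */
    if (t == NULL) return;
    postorder(t->left); postorder(t->right); printf("%d ", t->data);
}

int main() {
    /* Build the tree:   2
                        / \
                       1   3        */
    struct treenode *root = newNode(2);
    root->left = newNode(1);
    root->right = newNode(3);

    inorder(root);   printf("\n");   /* 1 2 3 (sorted order for a BST) */
    preorder(root);  printf("\n");   /* 2 1 3 */
    postorder(root); printf("\n");   /* 1 3 2 */
    return 0;
}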
AVL trees

An AVL tree is another balanced binary search tree. Named after their inventors, Adelson-Velskii and Landis, they were the first dynamically balanced trees to be proposed. Like red-black trees, they are not perfectly balanced, but pairs of sub-trees differ in height by at most 1, maintaining an O(log n) search time. Addition and deletion operations also take O(log n) time.

Definition of an AVL tree: an AVL tree is a binary search tree which has the following properties:
1. The sub-trees of every node differ in height by at most one.
2. Every sub-tree is an AVL tree.

Balance requirement for an AVL tree: the left and right sub-trees differ by at most 1 in height. You need to be careful with this definition: it permits some apparently unbalanced trees. For example, a tree in which every left sub-tree has height 1 greater than its right sub-tree still satisfies the definition and is an AVL tree.
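The balance requirement above can be checked with a simple recursive height computation. A hedged sketch (node layout and function names are mine); it only verifies the property and does not perform the rebalancing rotations a real AVL tree applies on insertion and deletion.

#include <stdbool.h>
#include <stddef.h>

struct treenode {
    int data;
    struct treenode *left, *right;
};

/* Height measured in edges: an empty tree has height -1, a leaf 0. */
int height(struct treenode *t) {
    if (t == NULL) return -1;
    int lh = height(t->left);
    int rh = height(t->right);
    return 1 + (lh > rh ? lh : rh);
}

/* A tree satisfies the AVL balance property if, at every node, the
   left and right subtree heights differ by at most 1.              */
bool isBalanced(struct treenode *t) {
    if (t == NULL) return true;
    int diff = height(t->left) - height(t->right);
    if (diff < -1 || diff > 1) return false;
    return isBalanced(t->left) && isBalanced(t->right);
}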