Alternatively, if the hash table size is a power of two and the probe function is p(K, i) = (i² + i)/2, then every slot in the table will be visited by the probe function.

Both pseudo-random probing and quadratic probing eliminate primary clustering, which is the problem of keys sharing substantial segments of a probe sequence. If two keys hash to the same home position, however, then they will always follow the same probe sequence for every collision resolution method that we have seen so far. The probe sequences generated by pseudo-random and quadratic probing (for example) are entirely a function of the home position, not the original key value. This is because the probe function p ignores its input parameter K for these collision resolution methods. If the hash function generates a cluster at a particular home position, then the cluster remains under pseudo-random and quadratic probing. This problem is called secondary clustering.

To avoid secondary clustering, we need to have the probe sequence make use of the original key value in its decision-making process. A simple technique for doing this is to return to linear probing by a constant step size for the probe function, but to have that constant be determined by a second hash function, h2. Thus, the probe sequence would be of the form p(K, i) = i · h2(K). This method is called double hashing.

Example: Assume a hash table of size M and three keys k1, k2, and k3, with home positions h(k1), h(k2), h(k3) and step sizes h2(k1), h2(k2), h2(k3). The probe sequence for k1 will be h(k1), h(k1) + h2(k1), h(k1) + 2h2(k1), and so on; the probe sequence for k2 will be h(k2), h(k2) + h2(k2), h(k2) + 2h2(k2), and so on; likewise for k3. Thus, so long as two keys differ in either their home position or their h2 value, they do not share substantial portions of the same probe sequence. Of course, if a fourth key has both the same home position and the same h2 value as one of the other keys, then it will follow that key's probe sequence exactly. Pseudo-random or quadratic probing can be combined with double hashing to solve this problem.

A good implementation of double hashing should ensure that all of the probe sequence constants are relatively prime to the table size M. This can be achieved easily. One way is to select M to be a prime number and have h2 return a value in the range 1 ≤ h2(K) ≤ M − 1. Another way is to set M = 2^m for some value m and have h2 return an odd value between 1 and 2^m.

The figure shows an implementation of the dictionary ADT by means of a hash table. The simplest hash function is used, with collision resolution by linear probing, as the basis for the structure of the hash table implementation.
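To make the probe sequence p(K, i) = i · h2(K) concrete, here is a minimal sketch of a double-hashing probe function. It is illustrative only and not the book's code: the table size, the particular choices of h and h2, and the key used in main are assumptions, chosen so that every step size is relatively prime to M.

class DoubleHashProbe {
  private final int M;                          // Table size; assumed to be prime
  DoubleHashProbe(int m) { M = m; }
  int h(int key)  { return key % M; }           // Home position
  int h2(int key) { return 1 + key % (M - 1); } // Step size in the range 1..M-1
  int probe(int key, int i) {                   // Slot visited on the i-th probe
    return (h(key) + i * h2(key)) % M;
  }
  public static void main(String[] args) {
    DoubleHashProbe p = new DoubleHashProbe(101);  // 101 is prime
    for (int i = 0; i < 4; i++)                    // First few slots probed for one key
      System.out.println(p.probe(9530, i));
  }
}

Because M is prime here, every value that h2 can return is relatively prime to M, so the probe sequence for any key eventually visits every slot in the table.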
A suggested project at the end of this chapter asks you to improve the implementation with other hash functions and collision resolution policies.

Analysis of Closed Hashing

How efficient is hashing? We can measure hashing performance in terms of the number of record accesses required when performing an operation. The primary operations of concern are insertion, deletion, and search. It is useful to distinguish between successful and unsuccessful searches. Before a record can be deleted, it must be found. Thus, the number of accesses required to delete a record is equivalent to the number required to successfully search for it. To insert a record, an empty slot along the record's probe sequence must be found. This is equivalent to an unsuccessful search for the record (recall that a successful search for the record during insertion should generate an error, because two records with the same key are not allowed to be stored in the table).

When the hash table is empty, the first record inserted will always find its home position free. Thus, it will require only one record access to find a free slot. If all records are stored in their home positions, then successful searches will also require only one record access. As the table begins to fill up, the probability that a record can be inserted in its home position decreases. If a record hashes to an occupied slot, then the collision resolution policy must locate another slot in which to store it. Finding records not stored in their home position also requires additional record accesses as the record is searched for along its probe sequence. As the table fills up, more and more records are likely to be located ever further from their home positions.

From this discussion, we see that the expected cost of hashing is a function of how full the table is. Define the load factor for the table as α = N/M, where N is the number of records currently in the table.

An estimate of the expected cost for an insertion (or an unsuccessful search) can be derived analytically as a function of α, in the case where we assume that the probe sequence follows a random permutation of the slots in the hash table. Assuming that every slot in the table has equal probability of being the home slot for the next record, the probability of finding the home position occupied is α. The probability of finding both the home position occupied and the next slot on the probe sequence occupied is N(N − 1)/(M(M − 1)). The probability of i collisions is

N(N − 1) · · · (N − i + 1) / (M(M − 1) · · · (M − i + 1)).
/** Dictionary implemented using hashing. */
class HashDictionary<Key extends Comparable<? super Key>, E>
                     implements Dictionary<Key, E> {
  private static final int defaultSize = 10;  // Default capacity
  private HashTable<Key,E> T;   // The hash table
  private int count;            // # of records now in table
  private int maxsize;          // Maximum size of dictionary

  HashDictionary() { this(defaultSize); }
  HashDictionary(int sz) {
    T = new HashTable<Key,E>(sz);
    count = 0;
    maxsize = sz;
  }

  public void clear() {         // Reinitialize
    T = new HashTable<Key,E>(maxsize);
    count = 0;
  }

  public void insert(Key k, E e) {   // Insert an element
    assert count < maxsize : "Hash table is full";
    T.hashInsert(k, e);
    count++;
  }

  public E remove(Key k) {      // Remove an element
    E temp = T.hashRemove(k);
    if (temp != null) count--;
    return temp;
  }

  public E removeAny() {        // Remove some element
    if (count != 0) {
      count--;
      return T.hashRemoveAny();
    }
    else return null;
  }

  /** Find the record with key value "k" */
  public E find(Key k) { return T.hashSearch(k); }

  /** Return the number of values in the hash table */
  public int size() { return count; }
}

Figure: A partial implementation for the dictionary ADT using a hash table. This uses a poor hash function and a poor collision resolution policy (linear probing), which can easily be replaced. Member functions hashInsert and hashSearch appear in later figures.
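As a quick illustration (not part of the book's figures), a client might exercise the class above as follows. This assumes the supporting HashTable class referenced in the figure is available, and the key and record values are made up.

class HashDictionaryDemo {
  public static void main(String[] args) {
    HashDictionary<Integer,String> dict = new HashDictionary<Integer,String>(16);
    dict.insert(9530, "record with key 9530");
    System.out.println(dict.find(9530));   // Prints the stored record
    dict.remove(9530);                     // Delete it again
    System.out.println(dict.find(9530));   // Prints null: the record is gone
  }
}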
If N and M are large, then this is approximately (N/M)^i. The expected number of probes is one plus the sum over i ≥ 1 of the probability of i collisions, which is approximately

1 + Σ (from i = 1 to ∞) (N/M)^i = 1/(1 − α).

A successful search (or a deletion) has the same cost as originally inserting that record. However, the expected value for the insertion cost depends on the value of α not at the time of deletion, but rather at the time of the original insertion. We can derive an estimate of this cost (essentially an average over all the insertion costs) by integrating from 0 to the current value of α, yielding a result of

(1/α) ∫ from 0 to α of 1/(1 − x) dx = (1/α) log_e (1/(1 − α)).

It is important to realize that these equations represent the expected cost for operations using the unrealistic assumption that the probe sequence is based on a random permutation of the slots in the hash table (thus avoiding all expense resulting from clustering). Thus, these costs are lower-bound estimates in the average case. The true average cost under linear probing is ½(1 + 1/(1 − α)²) for insertions or unsuccessful searches, and ½(1 + 1/(1 − α)) for deletions or successful searches. Proofs for these results can be found in the references cited in the Further Reading section.

The figure shows the graphs of these four equations to help you visualize the expected performance of hashing based on the load factor. The two solid lines show the costs in the case of a "random" probe sequence for (1) insertion or unsuccessful search and (2) deletion or successful search. As expected, the cost for insertion or unsuccessful search grows faster, because these operations typically search further down the probe sequence. The two dashed lines show the equivalent costs for linear probing. As expected, the cost of linear probing grows faster than the cost for "random" probing.

From the figure we see that the cost for hashing when the table is not too full is typically close to one record access. This is extraordinarily efficient, much better than binary search, which requires log n record accesses. As α increases, so does the expected cost. For small values of α, the expected cost is low. It remains below two until the hash table is about half full. When the table is nearly empty, adding a new record to the table does not increase the cost of future search operations by much. However, the additional search cost caused by each additional insertion increases rapidly once the table becomes half full. Based on this analysis, the rule of thumb is to design a hashing system so that the hash table never gets above half full.
Figure: Growth of expected record accesses with α. The horizontal axis is the value for α, the vertical axis is the expected number of accesses to the hash table. Solid lines show the cost for "random" probing (a theoretical lower bound on the cost), while dashed lines show the cost for linear probing (a relatively poor collision resolution strategy). The two leftmost lines show the cost for insertion (equivalently, unsuccessful search); the two rightmost lines show the cost for deletion (equivalently, successful search).

Beyond that point, performance will degrade rapidly. This requires that the implementor have some idea of how many records are likely to be in the table at maximum loading, and select the table size accordingly.

You might notice that the recommendation to never let a hash table become more than half full contradicts the disk-based space/time tradeoff principle, which strives to minimize disk space to increase information density. Hashing represents an unusual situation in that there is no benefit to be expected from locality of reference. In a sense, the hashing system implementor does everything possible to eliminate the effects of locality of reference. Given the disk block containing the last record accessed, the chance of the next record access coming to the same disk block is no better than random chance in a well-designed hash system. This is because a good hashing implementation breaks up relationships between search keys. Instead of improving performance by taking advantage of locality of reference, hashing trades increased hash table space for an improved chance that the record will be in its home position. Thus, the more space available for the hash table, the more efficient hashing should be.
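The four curves in the figure come directly from the equations above. The following sketch (not from the book) simply tabulates them for a few load factors, which is an easy way to see how quickly linear probing degrades once α passes one half.

class HashCostCurves {
  // "Random" probing: theoretical lower bounds
  static double insertRandom(double a) { return 1.0 / (1.0 - a); }
  static double searchRandom(double a) { return (1.0 / a) * Math.log(1.0 / (1.0 - a)); }
  // Linear probing: true average costs
  static double insertLinear(double a) { return 0.5 * (1.0 + 1.0 / ((1.0 - a) * (1.0 - a))); }
  static double searchLinear(double a) { return 0.5 * (1.0 + 1.0 / (1.0 - a)); }

  public static void main(String[] args) {
    for (double a = 0.1; a < 0.95; a += 0.1)   // Load factors 0.1 through 0.9
      System.out.printf("a=%.1f  random: %.2f / %.2f   linear: %.2f / %.2f%n",
          a, insertRandom(a), searchRandom(a), insertLinear(a), searchLinear(a));
  }
}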
Depending on the pattern of record accesses, it might be possible to reduce the expected cost of access even in the face of collisions. Recall the 80/20 rule: 80% of the accesses will come to 20% of the data. In other words, some records are accessed more frequently than others. If two records hash to the same home position, which would be better placed in the home position, and which in a slot further down the probe sequence? The answer is that the record with the higher frequency of access should be placed in the home position, because this will reduce the total number of record accesses. Ideally, records along a probe sequence will be ordered by their frequency of access.

One approach to approximating this goal is to modify the order of records along the probe sequence whenever a record is accessed. If a search is made for a record that is not in its home position, a self-organizing list heuristic can be used. For example, if the linear probing collision resolution policy is used, then whenever a record is located that is not in its home position, it can be swapped with the record preceding it in the probe sequence. That other record will now be further from its home position, but hopefully it will be accessed less frequently. Note that this approach will not work for the other collision resolution policies presented in this section, because swapping a pair of records to improve access to one might remove the other from its probe sequence.

Another approach is to keep access counts for records and periodically rehash the entire table. The records should be inserted into the hash table in frequency order, ensuring that records that were frequently accessed during the last series of requests have the best chance of being near their home positions.

Deletion

When deleting records from a hash table, there are two important considerations.

1. Deleting a record must not hinder later searches. In other words, the search process must still pass through the newly emptied slot to reach records whose probe sequence passed through this slot. Thus, the delete process cannot simply mark the slot as empty, because this will isolate records further down the probe sequence. For example, suppose two keys hash to the same home slot and the second is placed in the next slot by the collision resolution policy. If the first is deleted from the table, a search for the second must still pass through the home slot as it probes onward.

2. We do not want to make positions in the hash table unusable because of deletion. The freed slot should be available to a future insertion.

Both of these problems can be resolved by placing a special mark in place of the deleted record, called a tombstone. The tombstone indicates that a record once occupied the slot, but does so no longer.
If a tombstone is encountered when searching along a probe sequence, the search procedure continues with the search. When a tombstone is encountered during insertion, that slot can be used to store the new record. However, to avoid inserting duplicate keys, it will still be necessary for the search procedure to follow the probe sequence until a truly empty position has been found, simply to verify that a duplicate is not in the table. The new record would then actually be inserted into the slot of the first tombstone encountered.

The use of tombstones allows searches to work correctly and allows reuse of deleted slots. However, after a series of intermixed insertion and deletion operations, some slots will contain tombstones. This will tend to lengthen the average distance from a record's home position to the record itself, beyond where it could be if the tombstones did not exist. A typical database application will first load a collection of records into the hash table and then progress to a phase of intermixed insertions and deletions. After the table is loaded with the initial collection of records, the first few deletions will lengthen the average probe sequence distance for records (they will add tombstones). Over time, the average distance will reach an equilibrium point, because insertions will tend to decrease the average distance by filling in tombstone slots. For example, after initially loading records into the database, the average number of accesses per search beyond the home position will have some small value; after a series of insertions and deletions, the total path length might seem only slightly longer, yet the average distance beyond the home position can be three times what it was before the deletions.

Two possible solutions to this problem are:

1. Do a local reorganization upon deletion to try to shorten the average path length. For example, after deleting a key, continue to follow the probe sequence of that key and swap records further down the probe sequence into the slot of the recently deleted record (being careful not to remove any key from its own probe sequence). This will not work for all collision resolution policies.

2. Periodically rehash the table by reinserting all records into a new hash table. Not only will this remove the tombstones, but it also provides an opportunity to place the most frequently accessed records into their home positions.

Further Reading

For a comparison of the efficiencies of various self-organizing techniques, see Bentley and McGeoch, "Amortized Analysis of Self-Organizing Sequential Search Heuristics" [BM]. The text compression example earlier in this chapter comes from
Bentley et al., "A Locally Adaptive Data Compression Scheme" [BSTW]. For more on Ziv-Lempel coding, see Data Compression: Methods and Theory by James Storer [Sto]. Knuth covers self-organizing lists and Zipf distributions in Volume 3 of The Art of Computer Programming [Knu].

Introduction to Modern Information Retrieval by Salton and McGill [SM] is an excellent source for more information about document retrieval techniques.

See the paper "Practical Minimal Perfect Hash Functions for Large Databases" by Fox et al. [FHCD] for an introduction and a good algorithm for perfect hashing. For further details on the analysis of various collision resolution policies, see Knuth, Volume 3 [Knu], and Concrete Mathematics: A Foundation for Computer Science by Graham, Knuth, and Patashnik [GKP].

The model of hashing presented in this chapter has been of a fixed-size hash table. A problem not addressed is what to do when the hash table gets half full and more records must be inserted. This is the domain of dynamic hashing methods. A good introduction to this topic is "Dynamic Hashing Schemes" by Enbody and Du [ED].

Exercises

1. Create a graph showing expected cost versus the probability of an unsuccessful search when performing sequential search (see the analysis earlier in this chapter). What can you say qualitatively about the rate of increase in expected cost as the probability of unsuccessful search grows?

2. Modify the binary search routine presented earlier to implement interpolation search. Assume that keys lie within a known range and that all key values within the range are equally likely to occur.

3. Write an algorithm to find the kth smallest value in an unsorted array of n numbers (k ≤ n). Your algorithm should require Θ(n) time in the average case. Hint: Your algorithm should look similar to Quicksort.

4. An earlier example discusses a distribution where the relative frequencies of the records match the harmonic series. That is, for every occurrence of the first record, the second record will appear half as often, the third will appear one third as often, the fourth one quarter as often, and so on. The actual probability for the ith record was defined to be 1/(i H_n). Explain why this is correct.

5. Graph the equations T(n) = log n and T(n) = n/log_e n. Which gives the better performance, binary search on a sorted list, or sequential search on a
list ordered by frequency, where the frequency conforms to a Zipf distribution? Characterize the difference in running times.

6. Assume that a small set of values is stored in a self-organizing list, initially in ascending order. Consider the three self-organizing list heuristics: count, move-to-front, and transpose. For count, assume that the record is moved ahead in the list, passing over any other record that its count is now greater than. For each heuristic, show the resulting list and the total number of comparisons required for a given series of accesses.

7. For each of the three self-organizing list heuristics (count, move-to-front, and transpose), describe a series of record accesses for which it would require the greatest number of comparisons of the three.

8. Write an algorithm to implement the frequency count self-organizing list heuristic, assuming that the list is implemented using an array. In particular, write a function freqCount that takes as input a value to be searched for and adjusts the list appropriately. If the value is not already in the list, add it to the end of the list with a frequency count of one.

9. Write an algorithm to implement the move-to-front self-organizing list heuristic, assuming that the list is implemented using an array. In particular, write a function moveToFront that takes as input a value to be searched for and adjusts the list appropriately. If the value is not already in the list, add it to the beginning of the list.

10. Write an algorithm to implement the transpose self-organizing list heuristic, assuming that the list is implemented using an array. In particular, write a function transpose that takes as input a value to be searched for and adjusts the list appropriately. If the value is not already in the list, add it to the end of the list.

11. Write functions for computing union, intersection, and set difference on arbitrarily long bit vectors used to represent set membership, as described earlier in the book. Assume that for each operation both vectors are of equal length.

12. Compute the probabilities for the following situations. These probabilities can be computed analytically, or you may write a computer program to generate the probabilities by simulation.
(a) Out of a group of students of a given size, what is the probability that two students share the same birthday?
(b) Answer the same question for a group of a different size.
(c) How many students must be in the class for the probability to be at least 50% that there are two who share a birthday in the same month?

13. Assume that you are hashing a key to a hash table of n slots (indexed from 0 to n − 1). For each of the following functions h(k), is the function acceptable as a hash function (i.e., would the hash program work correctly for both insertions and searches), and if so, is it a good hash function? Function Random(n) returns a random integer between 0 and n − 1, inclusive.
(a) h(k) = k/n, where k and n are integers.
(b) h(k) = 1.
(c) h(k) = (k + Random(n)) mod n.
(d) h(k) = k mod p, where p is a prime number.

14. Assume that you have a seven-slot closed hash table (the slots are numbered 0 through 6). Show the final hash table that would result if you used the hash function h(k) = k mod 7 and linear probing on the given list of numbers. After inserting the final record, list for each empty slot the probability that it will be the next one filled.

15. Assume that you have a ten-slot closed hash table (the slots are numbered 0 through 9). Show the final hash table that would result if you used the hash function h(k) = k mod 10 and quadratic probing on the given list of numbers. After inserting the final record, list for each empty slot the probability that it will be the next one filled.

16. Assume that you have a ten-slot closed hash table (the slots are numbered 0 through 9). Show the final hash table that would result if you used the hash function h(k) = k mod 10 and pseudo-random probing on the given list of numbers, using the given permutation of offsets for the pseudo-random probing. After inserting the final record, list for each empty slot the probability that it will be the next one filled.

17. What is the result of running sfold from earlier in this chapter on the following strings? Assume a hash table of M slots.
(a) HELLO WORLD
(b) NOW HEAR THIS
(c) HEAR THIS NOW

18. Using closed hashing, with double hashing to resolve collisions, insert the following keys into a hash table of thirteen slots (the slots are numbered 0 through 12). The hash functions to be used are h1 and h2, defined below. You should show the hash table after all eight keys have been inserted.
Be sure to indicate how you are using h1 and h2 to do the hashing. Function Rev(k) reverses the decimal digits of k. The hash functions are h1(k) = k mod 13, and h2(k) defined in terms of Rev(k), reduced modulo a smaller constant and offset by one so that it is never zero.

19. Write an algorithm for a deletion function for hash tables that replaces the record with a special value indicating a tombstone. Modify the functions hashInsert and hashSearch to work correctly with tombstones.

20. Consider a particular permutation of the numbers 1 to 6. Analyze what will happen if this permutation is used by an implementation of pseudo-random probing on a hash table of size seven. Will this permutation solve the problem of primary clustering? What does this say about selecting a permutation for use when implementing pseudo-random probing?

Projects

1. Implement binary search and the quadratic binary search described earlier in this chapter. Run your implementations over a large range of problem sizes, timing the results for each algorithm. Graph and compare these timing results.

2. Implement the three self-organizing list heuristics count, move-to-front, and transpose. Compare the cost of running the three heuristics on various input data. The cost metric should be the total number of comparisons required when searching the list. It is important to compare the heuristics using input data for which self-organizing lists are reasonable, that is, on frequency distributions that are uneven. One good approach is to read text files. The list should store individual words in the text file. Begin with an empty list, as was done for the text compression example earlier in this chapter. Each time a word is encountered in the text file, search for it in the self-organizing list. If the word is found, reorder the list as appropriate. If the word is not in the list, add it to the end of the list and then reorder as appropriate.

3. Implement the text compression system described earlier in this chapter.

4. Implement a system for managing document retrieval. Your system should have the ability to insert (abstract references to) documents into the system, associate keywords with a given document, and search for documents with specified keywords.
5. Implement a database stored on disk using bucket hashing. Define records to be of a fixed length, with a fixed-size key field and a fixed-size data field; the remaining bytes of each record are available for you to store information needed to support the hash table. A bucket in the hash table is a fixed multiple of the record length, so each bucket has space for several records. The hash table consists of a fixed number of buckets (with slots indexed by consecutive record positions), followed by the overflow bucket at the end of the file. The hash function for key value k should be k mod the number of home slots (note that this means the last three slots in the table will not be home positions for any record). The collision resolution function should be linear probing with wrap-around within the bucket. For example, if a record is hashed to some slot within a bucket, the collision resolution process will attempt to insert the record into each of the remaining slots of that bucket in turn, wrapping around to the beginning of the bucket if necessary. If a bucket is full, the record should be placed in the overflow section at the end of the file. Your hash table should implement the dictionary ADT. When you do your testing, assume that the system is meant to store a modest number of records at a time.

6. Implement the dictionary ADT by means of a hash table with linear probing as the collision resolution policy. You might wish to begin with the code of the figure given earlier. Using empirical simulation, determine the cost of insert and delete as α grows (i.e., reconstruct the dashed lines of the expected-cost figure). Then, repeat the experiment using quadratic probing and pseudo-random probing. What can you say about the relative performance of these three collision resolution policies?
Indexing

Many large-scale computing applications are centered around data sets that are too large to fit into main memory. The classic example is a large database of records with multiple search keys, requiring the ability to insert, delete, and search for records. Hashing provides outstanding performance for such situations, but only in the limited case in which all searches are of the form "find the record with key value K." Unfortunately, many applications require more general search capabilities. One example is a range query: a search for all records whose key lies within some range. Other queries might involve visiting all records in order of their key value, or finding the record with the greatest key value. Hash tables are not organized to support any of these queries efficiently.

This chapter introduces file structures used to organize a large collection of records stored on disk. Such file structures support efficient insertion, deletion, and search operations, for exact-match queries, range queries, and largest/smallest key value searches.

Before discussing such file structures, we must become familiar with some basic file-processing terminology. An entry-sequenced file stores records in the order that they were added to the file. Entry-sequenced files are the disk-based equivalent to an unsorted list and so do not support efficient search. The natural solution is to sort the records by order of the search key. However, a typical database, such as a collection of employee or customer records maintained by a business, might contain multiple search keys. To answer a question about a particular customer might require a search on the name of the customer. Businesses often wish to sort and output the records by zip code order for a bulk mailing. Government paperwork might require the ability to search by social security number. Thus, there might not be a single "correct" order in which to store the records.

Indexing is the process of associating a key with the location of a corresponding data record. An earlier section discussed the concept of a key sort, in which an index
file is created whose records consist of key/pointer pairs. Here, each key is associated with a pointer to a complete record in the main database file. The index file could be sorted or organized using a tree structure, thereby imposing a logical order on the records without physically rearranging them. One database might have several associated index files, each supporting efficient access through a different key field.

Each record of a database normally has a unique identifier, called the primary key. For example, the primary key for a set of personnel records might be the social security number or ID number for the individual. Unfortunately, the ID number is generally an inconvenient value on which to perform a search, because the searcher is unlikely to know it. Instead, the searcher might know the desired employee's name. Alternatively, the searcher might be interested in finding all employees whose salary is in a certain range. If these are typical search requests to the database, then the name and salary fields deserve separate indices. However, key values in the name and salary indices are not likely to be unique.

A key field such as salary, where a particular key value might be duplicated in multiple records, is called a secondary key. Most searches are performed using a secondary key. The secondary key index (or, more simply, secondary index) will associate a secondary key value with the primary key of each record having that secondary key value. At this point, the full database might be searched directly for the record with that primary key, or there might be a primary key index (or primary index) that relates each primary key value with a pointer to the actual record on disk. In the latter case, only the primary index provides the location of the actual record on disk, while the secondary indices refer to the primary index.

Indexing is an important technique for organizing large databases, and many indexing methods have been developed. Direct access through hashing was discussed in the previous chapter. A simple list sorted by key value can also serve as an index to the record file; indexing disk files by sorted lists is discussed in the following section. Unfortunately, a sorted list does not perform well for insert and delete operations.

A third approach to indexing is the tree index. Trees are typically used to organize large databases that must support record insertion, deletion, and key range searches. This chapter briefly describes ISAM, a tentative step toward solving the problem of storing a large database that must support insertion and deletion of records. Its shortcomings help to illustrate the value of tree indexing techniques. The chapter then introduces the basic issues related to tree indexing and presents the 2-3 tree, a balanced tree structure that is a simple form of the B-tree. B-trees are the most widely used indexing method for large disk-based databases, and many variations have been invented. The treatment of B-trees
begins with a discussion of the variant normally referred to simply as a "B-tree," and then presents the most widely implemented variant, the B+-tree.

Figure: Linear indexing for variable-length records. Each record in the index file is of fixed length and contains a pointer to the beginning of the corresponding record in the database file.

Linear Indexing

A linear index is an index file organized as a sequence of key/pointer pairs where the keys are in sorted order and the pointers either (1) point to the position of the complete record on disk, (2) point to the position of the primary key in the primary index, or (3) are actually the value of the primary key. Depending on its size, a linear index might be stored in main memory or on disk.

A linear index provides a number of advantages. It provides convenient access to variable-length database records, because each entry in the index file contains a fixed-length key field and a fixed-length pointer to the beginning of a (variable-length) record, as shown in the figure. A linear index also allows for efficient search and random access to database records, because it is amenable to binary search.

If the database contains enough records, the linear index might be too large to store in main memory. This makes binary search of the index more expensive, because many disk accesses would typically be required by the search process. One solution to this problem is to store a second-level linear index in main memory that indicates which disk block in the index file stores a desired key. For example, the linear index on disk might reside in a series of fixed-size disk blocks, each holding some number of key/pointer pairs. The second-level index, stored in main memory, consists of a simple table storing the value of the key in the first position of each block in the linear index file. This arrangement is shown in the next figure. Because the second-level index needs only one entry per disk block of the index file, it remains small even when the linear index itself is large. To find which disk block contains a desired search key value, first search through the second-level index to find the greatest value less than or equal to the search key.
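The two-step lookup just described might look as sketched below. This is an illustrative sketch only, not the book's implementation: integer keys are assumed, each index "block" is modeled as a sorted array held in memory, and the pointer part of each key/pointer pair is omitted for brevity.

class TwoLevelIndex {
  int[] secondLevel;    // In memory: first key of each index block, sorted
  int[][] indexBlocks;  // Stand-in for the disk-resident index blocks

  /** Return the index block that could hold key k, using the in-memory table. */
  int[] findBlock(int k) {
    int lo = 0, hi = secondLevel.length - 1, pos = 0;
    while (lo <= hi) {              // Binary search for greatest entry <= k
      int mid = (lo + hi) / 2;
      if (secondLevel[mid] <= k) { pos = mid; lo = mid + 1; }
      else hi = mid - 1;
    }
    return indexBlocks[pos];        // In a real system: one disk read
  }

  /** Return the position within the block whose key equals k, or -1. */
  int searchBlock(int[] block, int k) {
    int lo = 0, hi = block.length - 1;
    while (lo <= hi) {              // Binary search within the block
      int mid = (lo + hi) / 2;
      if (block[mid] == k) return mid;
      if (block[mid] < k) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
  }
}

In a real system, findBlock's array access would be the single disk read of the index file, and a successful searchBlock would yield the pointer to the record in the database file rather than an array position.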
Figure: A simple two-level linear index. The linear index is stored on disk; the smaller, second-level index is stored in main memory. Each element in the second-level index stores the first key value in the corresponding disk block of the index file. In this example, the first disk block of the linear index stores one range of keys and the second disk block stores the next range; thus, the first entry of the second-level index is the first key in the first block of the linear index, while the second entry of the second-level index is the first key of the second block.

Figure: A two-dimensional linear index. Each row lists the primary keys associated with a particular secondary key value. In this example, the secondary key is a name (Jones, Smith, Zukowski, and so on); the primary key is a unique four-character code.

This directs the search to the proper block in the index file, which is then read into memory. At this point, a binary search within this block will produce a pointer to the actual record in the database. Because the second-level index is stored in main memory, accessing a record by this method requires two disk reads: one from the index file and one from the database file for the actual record.

Every time a record is inserted to or deleted from the database, all associated secondary indices must be updated. Updates to a linear index are expensive, because the entire contents of the array might be shifted by one position. Another problem is that multiple records with the same secondary key each duplicate that key value within the index. When the secondary key field has many duplicates, such as when it has a limited range (e.g., a field to indicate job category from among a small number of possible job categories), this duplication might waste considerable space.

One improvement on the simple sorted array is a two-dimensional array where each row corresponds to a secondary key value. A row contains the primary keys whose records have the indicated secondary key value.
The two-dimensional figure above illustrates this approach. Now there is no duplication of secondary key values, possibly yielding a considerable space savings. The cost of insertion and deletion is reduced, because only one row of the table need be adjusted. Note that a new row is added to the array when a new secondary key value is added. This might lead to moving many records, but this will happen infrequently in applications suited to using this arrangement.

A drawback to this approach is that the array must be of fixed size, which imposes an upper limit on the number of primary keys that might be associated with a particular secondary key. Furthermore, those secondary keys with fewer records than the width of the array will waste the remainder of their row. A better approach is to have a one-dimensional array of secondary key values, where each secondary key is associated with a linked list. This works well if the index is stored in main memory, but not so well when it is stored on disk, because the linked list for a given key might be scattered across several disk blocks.

Consider a large database of employee records. If the primary key is the employee's ID number and the secondary key is the employee's name, then each record in the name index associates a name with one or more ID numbers. The ID number index in turn associates an ID number with a unique pointer to the full record on disk. The secondary key index in such an organization is also known as an inverted list or inverted file. It is inverted in that searches work backwards from the secondary key to the primary key to the actual data record. It is called a list because each secondary key value has (conceptually) a list of primary keys associated with it. The first of the two figures that follow illustrates this arrangement. Here, we have last names as the secondary key. The primary key is a four-character unique identifier.

The second figure shows a better approach to storing inverted lists. An array of secondary key values is shown as before. Associated with each secondary key is a pointer into an array of primary keys. The primary key array uses a linked-list implementation. This approach combines the storage for all of the secondary key lists into a single array, probably saving space. Each record in this array consists of a primary key value and a pointer to the next element on the list. It is easy to insert and delete secondary keys from this array, making this a good implementation for disk-based inverted files.

ISAM

How do we handle large databases that require frequent update? The main problem with the linear index is that it is a single, large array that does not lend itself to updates, because a single update can require changing the position of every key in the index.
Figure: Illustration of an inverted list. Each secondary key value (here, the names Jones, Smith, and Zukowski) is stored in the secondary key list. Each secondary key value on the list has a pointer to a list of the primary keys whose associated records have that secondary key value.

Figure: An inverted list implemented as an array of secondary keys and combined lists of primary keys. Each record in the secondary key array contains a pointer to a record in the primary key array. The next field of the primary key array indicates the next record with that secondary key value.
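The layout in the second figure might be sketched as follows. This is illustrative only: the field names, the use of array indices as "pointers," and the value -1 as a list terminator are assumptions, not the book's code.

class InvertedList {
  static class SecondaryEntry { String secondaryKey; int head; }  // head: index into primary array, -1 if empty
  static class PrimaryEntry   { String primaryKey;  int next; }   // next: next entry with same secondary key, -1 at end

  SecondaryEntry[] secondary;   // Sorted by secondary key value
  PrimaryEntry[]   primary;     // Combined storage for all of the primary-key lists

  /** Print every primary key whose record has the given secondary key value. */
  void lookup(String skey) {
    for (SecondaryEntry s : secondary)             // Linear scan; could be a binary search
      if (s.secondaryKey.equals(skey))
        for (int i = s.head; i != -1; i = primary[i].next)
          System.out.println(primary[i].primaryKey);
  }
}

Because all of the primary-key lists share one array, adding a new primary key for an existing secondary key only requires filling in one array element and adjusting one next field.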
Figure: Illustration of the ISAM indexing system. An in-memory table of cylinder keys refers to the cylinders on disk; each cylinder holds its own cylinder index, its records, and a cylinder overflow area, and a system overflow area backs up the cylinder overflows.

Inverted lists reduce this problem, but they are only suitable for secondary key indices with many fewer secondary key values than records. The linear index would perform well as a primary key index if it could somehow be broken into pieces such that individual updates affect only a part of the index. This concept will be pursued throughout the rest of this chapter, eventually culminating in the B+-tree, the most widely used indexing method today. But first, we begin by studying ISAM, an early attempt to solve the problem of large databases requiring frequent update. Its weaknesses help to illustrate why the B+-tree works so well.

Before the invention of effective tree indexing schemes, a variety of disk-based indexing methods were in use. All were rather cumbersome, largely because no adequate method for handling updates was known. Typically, updates would cause the index to degrade in performance. ISAM is one example of such an index and was widely used by IBM prior to adoption of the B-tree.

ISAM is based on a modified form of the linear index, as illustrated by the figure. Records are stored in sorted order by primary key. The disk file is divided among a number of cylinders on disk (recall that a cylinder is all of the tracks readable from a particular placement of the heads on the multiple platters of a disk drive). Each cylinder holds a section of the list in sorted order. Initially, each cylinder is not filled to capacity, and the extra space is set aside in the cylinder overflow. In memory is a table listing the lowest key value stored in each cylinder of the file. Each cylinder contains a table listing the lowest key value for each block in that cylinder, called the cylinder index.
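Given this structure, a lookup consults the in-memory table, then one cylinder's index and data block, falling back to the overflow areas; the prose that follows walks through the same steps. A minimal illustrative sketch is given below; all of the names and types here are assumptions, not the book's code.

class IsamFile {
  long[] cylinderFirstKey;        // In memory: lowest key in each cylinder
  Cylinder[] cylinders;           // Disk-resident cylinders
  Record[] systemOverflow;        // System-wide overflow area

  static class Cylinder {
    long[] blockFirstKey;         // Cylinder index: lowest key in each block
    Record[][] blocks;            // Data blocks, each sorted by key
    Record[] overflow;            // Cylinder overflow area
  }
  static class Record { long key; /* data fields omitted */ }

  Record search(long k) {
    int c = greatestAtMost(cylinderFirstKey, k);   // Pick the cylinder (in memory)
    Cylinder cyl = cylinders[c];                   // Disk read 1: the cylinder index
    int b = greatestAtMost(cyl.blockFirstKey, k);
    for (Record r : cyl.blocks[b])                 // Disk read 2: the data block
      if (r.key == k) return r;
    for (Record r : cyl.overflow)                  // Only if not found in the block
      if (r.key == k) return r;
    for (Record r : systemOverflow)                // Last resort
      if (r.key == k) return r;
    return null;
  }

  static int greatestAtMost(long[] a, long k) {    // Index of greatest entry <= k
    int i = 0;
    while (i + 1 < a.length && a[i + 1] <= k) i++;
    return i;
  }
}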
When new records are inserted, they are placed in the correct cylinder's overflow area (in effect, a cylinder acts as a bucket). If a cylinder's overflow area fills completely, then a system-wide overflow area is used. A search proceeds by determining the proper cylinder from the system-wide table kept in main memory. The cylinder's block table is brought in from disk and consulted to determine the correct block. If the record is found in that block, then the search is complete. Otherwise, the cylinder's overflow area is searched. If that is full, and the record is not found, then the system-wide overflow is searched.

After initial construction of the database, so long as no new records are inserted or deleted, access is efficient because it requires only two disk fetches. The first disk fetch recovers the block table for the desired cylinder. The second disk fetch recovers the block that, under good conditions, contains the record. After many inserts, the overflow list becomes too long, resulting in significant search time as the cylinder overflow area fills up. Under extreme conditions, many searches might eventually lead to the system overflow area. The "solution" to this problem is to periodically reorganize the entire database. This means re-balancing the records among the cylinders, sorting the records within each cylinder, and updating both the system index table and the within-cylinder block table. Such reorganization was typical of early database systems and would normally be done each night or weekly.

Tree-based Indexing

Linear indexing is efficient when the database is static, that is, when records are inserted and deleted rarely or never. ISAM is adequate for a limited number of updates, but not for frequent changes. Because it has essentially two levels of indexing, ISAM will also break down for a truly large database where the number of cylinders is too great for the top-level index to fit in main memory.

In their most general form, database applications have the following characteristics:

1. Large sets of records that are frequently updated.
2. Search is by one or a combination of several keys.
3. Key range queries or min/max queries are used.

For such databases, a better organization must be found. One approach would be to use the binary search tree (BST) to store primary and secondary key indices. BSTs can store duplicate key values, they provide efficient insertion and deletion as well as efficient search, and they can perform efficient range queries.
When there is enough main memory, the BST is a viable option for implementing both primary and secondary key indices.

Unfortunately, the BST can become unbalanced. Even under relatively good conditions, the depth of leaf nodes can easily vary by a factor of two. This might not be a significant concern when the tree is stored in main memory, because the time required is still Θ(log n) for search and update. When the tree is stored on disk, however, the depth of nodes in the tree becomes crucial. Every time a BST node is visited, it is necessary to visit all nodes along the path from the root to that node. Each node on this path must be retrieved from disk. Each disk access returns a block of information. If a node is on the same block as its parent, then the cost to find that node is trivial once its parent is in main memory. Thus, it is desirable to keep subtrees together on the same block. Unfortunately, many times a node is not on the same block as its parent. Thus, each access to a BST node could potentially require that another block be read from disk. Using a buffer pool to store multiple blocks in memory can mitigate disk access problems if BST accesses display good locality of reference, but a buffer pool cannot eliminate disk I/O entirely. The problem becomes greater if the BST is unbalanced, because nodes deep in the tree have the potential of causing many disk blocks to be read. Thus, there are two significant issues that must be addressed to have efficient search from a disk-based BST. The first is how to keep the tree balanced. The second is how to arrange the nodes on blocks so as to keep the number of blocks encountered on any path from the root to the leaves at a minimum.

We could select a scheme for balancing the BST and allocating BST nodes to blocks in a way that minimizes disk I/O, as illustrated by the first of the two figures that follow. However, maintaining such a scheme in the face of insertions and deletions is difficult. In particular, the tree should remain balanced when an update takes place, but doing so might require much reorganization. Each update should affect only a few disk blocks, or its cost will be too high. As you can see from the second of those figures, adopting a rule such as requiring the BST to be complete can cause a great deal of rearranging of data within the tree.

We can solve these problems by selecting another tree structure that automatically remains balanced after updates, and which is amenable to storing in blocks. There are a number of widely used balanced tree data structures, and there are also techniques for keeping BSTs balanced. Examples are the AVL and splay trees discussed earlier in the book. As an alternative, the next section presents the 2-3 tree, which has the property that its leaves are always at the same level. The main reason for discussing the 2-3 tree here in preference to the other balanced search trees is that it naturally leads to the B-tree, which is by far the most widely used indexing method today.
Figure: Breaking the BST into blocks. The BST is divided among disk blocks, each with space for three nodes. The path from the root to any leaf is contained on two blocks.

Figure: An attempt to re-balance a BST after insertion can be expensive. (a) A BST with six nodes in the shape of a complete binary tree. (b) A node is inserted into the BST of (a). To maintain both the complete binary tree shape and the BST property, a major reorganization of the tree is required.

2-3 Trees

This section presents a data structure called the 2-3 tree. The 2-3 tree is not a binary tree, but instead its shape obeys the following definition:

1. A node contains one or two keys.
2. Every internal node has either two children (if it contains one key) or three children (if it contains two keys). Hence the name.
3. All leaves are at the same level in the tree, so the tree is always height balanced.

In addition to these shape properties, the 2-3 tree has a search tree property analogous to that of a BST. For every node, the values of all descendants in the left subtree are less than the value of the first key, while values in the center subtree are greater than or equal to the value of the first key.
Figure: A 2-3 tree.

If there is a right subtree (equivalently, if the node stores two keys), then the values of all descendants in the center subtree are less than the value of the second key, while values in the right subtree are greater than or equal to the value of the second key. To maintain these shape and search properties requires that special action be taken when nodes are inserted and deleted. The 2-3 tree has the advantage over the BST in that the 2-3 tree can be kept height balanced at relatively low cost.

The figure illustrates the 2-3 tree. Nodes are indicated as rectangular boxes with two key fields. (These nodes actually would contain complete records or pointers to complete records, but the figures will show only the keys.) Internal nodes with only two children have an empty right key field. Leaf nodes might contain either one or two keys. The next figure is a class declaration for the 2-3 tree node. Note that this sample declaration does not distinguish between leaf and internal nodes and so is space inefficient, because leaf nodes store three pointers each. The techniques described earlier for separating leaf and internal node implementations can be applied here to implement separate internal and leaf node types.

From the defining rules for 2-3 trees we can derive relationships between the number of nodes in the tree and the depth of the tree. A 2-3 tree of height k has at least 2^(k−1) leaves, because if every internal node has two children it degenerates to the shape of a complete binary tree. A 2-3 tree of height k has at most 3^(k−1) leaves, because each internal node can have at most three children.

Searching for a value in a 2-3 tree is similar to searching in a BST. Search begins at the root. If the root does not contain the search key K, then the search progresses to the only subtree that can possibly contain the value; the key value(s) stored in the root node determine which is the correct subtree. For example, when searching the tree of the figure for a value that falls between the root's two keys, the search can only proceed to the middle subtree, and searching the middle child of the root node yields the desired record. If instead the search key is less than the root's first key, the first (left) branch is taken.
/** 2-3 tree node implementation */
class TTNode<Key extends Comparable<? super Key>, E> {
  private E lval;                // The left record
  private Key lkey;              // The node's left key
  private E rval;                // The right record
  private Key rkey;              // The node's right key
  private TTNode<Key,E> left;    // Pointer to left child
  private TTNode<Key,E> center;  // Pointer to middle child
  private TTNode<Key,E> right;   // Pointer to right child

  public TTNode() { center = left = right = null; }
  public TTNode(Key lk, E lv, Key rk, E rv,
                TTNode<Key,E> p1, TTNode<Key,E> p2, TTNode<Key,E> p3) {
    lkey = lk;  rkey = rk;
    lval = lv;  rval = rv;
    left = p1;  center = p2;  right = p3;
  }

  public boolean isLeaf() { return left == null; }
  public TTNode<Key,E> lchild() { return left; }
  public TTNode<Key,E> rchild() { return right; }
  public TTNode<Key,E> cchild() { return center; }
  public Key lkey() { return lkey; }   // Left key
  public E lval() { return lval; }     // Left value
  public Key rkey() { return rkey; }   // Right key
  public E rval() { return rval; }     // Right value
  public void setLeft(Key k, E e) { lkey = k; lval = e; }
  public void setRight(Key k, E e) { rkey = k; rval = e; }
  public void setLeftChild(TTNode<Key,E> it) { left = it; }
  public void setCenterChild(TTNode<Key,E> it) { center = it; }
  public void setRightChild(TTNode<Key,E> it) { right = it; }
}

Figure: The 2-3 tree node implementation.

taken. At the next level, the appropriate branch is taken again to reach the leaf node that would hold the key. If the search key is stored there, it is found; if the search key is a value not stored in the tree, then upon encountering that leaf we would find that the search key is not in the tree. Below is an implementation for the 2-3 tree search method.
Figure: A simple insert into the 2-3 tree of the earlier figure. The new value is inserted at a leaf node that has room for a second key, so it is simply added to the node, with the existing key shifted as needed to keep the keys in sorted order.

private E findhelp(TTNode<Key,E> root, Key k) {
  if (root == null) return null;             // Value not found
  if (k.compareTo(root.lkey()) == 0) return root.lval();
  if ((root.rkey() != null) && (k.compareTo(root.rkey()) == 0))
    return root.rval();
  if (k.compareTo(root.lkey()) < 0)          // Search left
    return findhelp(root.lchild(), k);
  else if (root.rkey() == null)              // Search center
    return findhelp(root.cchild(), k);
  else if (k.compareTo(root.rkey()) < 0)     // Search center
    return findhelp(root.cchild(), k);
  else return findhelp(root.rchild(), k);    // Search right
}

Insertion into a 2-3 tree is similar to insertion into a BST to the extent that the new record is placed in the appropriate leaf node. Unlike BST insertion, a new child is not created to hold the record being inserted; that is, the 2-3 tree does not grow downward. The first step is to find the leaf node that would contain the record if it were in the tree. If this leaf node contains only one value, then the new record can be added to that node with no further modification to the tree, as illustrated in the figure above. In this example, searching from the root, we come to the appropriate leaf node and add the new key as the left value (pushing the record already there to the rightmost position).

If we insert the new record into a leaf node that already contains two records, then more space must be created. Consider the two records of node N and the record to be inserted, without further concern for which two were already in N and which is the new record. The first step is to split N into two nodes. Thus, a new node (call it N') must be created from free store. N receives the record with the least of the three key values. N' receives the greatest of the three. The record with the middle of the three key values is passed up to the parent node, along with a pointer to N'. This is called a promotion.
Figure: A simple node-splitting insert for a 2-3 tree. A new value is added to a leaf that is already full, which makes that node split and promotes its middle value to the parent node.

The promoted key is then inserted into the parent. If the parent currently contains only one record (and thus has only two children), then the promoted record and the pointer to N' are simply added to the parent node. If the parent is full, then the split-and-promote process is repeated. The figure above illustrates a simple promotion; the next figure illustrates what happens when promotions require the root to split, adding a new level to the tree. In either case, all leaf nodes continue to have equal depth.

The following figures present an implementation for the insertion process. Note that inserthelp takes three parameters. The first is a pointer to the root of the current subtree, named rt. The second is the key for the record to be inserted, and the third is the record itself. The return value for inserthelp is a pointer to a 2-3 tree node. If rt is unchanged, then a pointer to rt is returned. If rt is changed (due to the insertion causing the node to split), then a pointer to the new subtree root is returned, with the key value and record value in the leftmost fields and a pointer to the (single) subtree in the center pointer field. This revised node will then be added to the parent, as illustrated in the root-split figure.

When deleting a record from the 2-3 tree, there are three cases to consider. The simplest occurs when the record is to be removed from a leaf node containing two records. In this case, the record is simply removed, and no other nodes are affected. The second case occurs when the only record in a leaf node is to be removed. The third case occurs when a record is to be removed from an internal node. In both the second and the third cases, the deleted record is replaced with another that can take its place while maintaining the correct order, similar to removing a node from a BST. If the tree is sparse enough, there is no such record available that will allow all nodes to still maintain at least one record. In this situation, sibling nodes are merged together. The delete operation for the 2-3 tree is excessively complex and will not be described further here. Instead, a complete discussion of deletion will be postponed until the next section, where it can be generalized for a particular variant of the B-tree.
Figure: An example of inserting a record that causes the 2-3 tree root to split. (a) The new value is added to the tree, causing a leaf node to split and promoting its middle value. (b) This in turn causes the internal node above it to split, promoting a value again. (c) Finally, the root node splits, and the promoted value becomes the left record in the new root. The result is that the tree becomes one level higher.

The 2-3 tree insert and delete routines do not add new nodes at the bottom of the tree. Instead they cause leaf nodes to split or merge, possibly causing a ripple effect moving up the tree to the root. If necessary, the root will split, causing a new root node to be created and making the tree one level deeper. On deletion, if the last two children of the root merge, then the root node is removed and the tree will lose a level. In either case, all leaf nodes are always at the same level. When all leaf nodes are at the same level, we say that a tree is height balanced. Because the 2-3 tree is height balanced, and every internal node has at least two children, we know that the maximum depth of the tree is O(log n). Thus, all 2-3 tree insert, find, and delete operations require Θ(log n) time.
private TTNode<Key,E> inserthelp(TTNode<Key,E> rt, Key k, E e) {
  TTNode<Key,E> retval;
  if (rt == null)      // Empty tree: create a leaf node for root
    return new TTNode<Key,E>(k, e, null, null, null, null, null);
  if (rt.isLeaf())     // At leaf node: insert here
    return rt.add(new TTNode<Key,E>(k, e, null, null, null, null, null));
  // Add to internal node
  if (k.compareTo(rt.lkey()) < 0) {          // Insert left
    retval = inserthelp(rt.lchild(), k, e);
    if (retval == rt.lchild()) return rt;
    else return rt.add(retval);
  }
  else if ((rt.rkey() == null) || (k.compareTo(rt.rkey()) < 0)) {
    retval = inserthelp(rt.cchild(), k, e);  // Insert center
    if (retval == rt.cchild()) return rt;
    else return rt.add(retval);
  }
  else {                                     // Insert right
    retval = inserthelp(rt.rchild(), k, e);
    if (retval == rt.rchild()) return rt;
    else return rt.add(retval);
  }
}

Figure: The 2-3 tree insert routine.

B-Trees

This section presents the B-tree. B-trees are usually attributed to R. Bayer and E. McCreight, who described the B-tree in a 1972 paper. Within a decade, B-trees had replaced virtually all large-file access methods other than hashing. B-trees, or some variant of B-trees, are the standard file organization for applications requiring insertion, deletion, and key range searches. B-trees address effectively all of the major problems encountered when implementing disk-based search trees:

1. B-trees are always height balanced, with all leaf nodes at the same level.
2. Update and search operations affect only a few disk blocks. The fewer the number of disk blocks affected, the less disk I/O is required.
3. B-trees keep related records (that is, records with similar key values) on the same disk block, which helps to minimize disk I/O on searches due to locality of reference.
sec -trees /*add new key/value pair to the node there might be subtree associated with the record being added this information comes in the form of - tree node with one key and (possibly nullsubtree through the center pointer field *public ttnode add(ttnode itif (rkey =null/only one keyadd here if (lkey compareto(it lkey() rkey it lkey()rval it lval()right centercenter it cchild()else rkey lkeyrval lvalright centerlkey it lkey()lval it lval()center it cchild()return thiselse if (lkey compareto(it lkey()> /add left center new ttnode(rkeyrvalnullnullcenterrightnull)rkey nullrval nullright nullit setleftchild(left)left itreturn thiselse if (rkey compareto(it lkey() /add center it setcenterchild(new ttnode(rkeyrvalnullnullit cchild()rightnull))it setleftchild(this)rkey nullrval nullright nullreturn itelse /add right ttnode new ttnode(rkeyrvalnullnullthisitnull)it setleftchild(right)right nullrkey nullrval nullreturn figure the - tree node add method
Figure: A B-tree of order four.

B-trees guarantee that every node in the tree will be full at least to a certain minimum percentage. This improves space efficiency while reducing the typical number of disk fetches necessary during a search or update operation. A B-tree of order m is defined to have the following shape properties:

1. The root is either a leaf or has at least two children.
2. Each internal node, except for the root, has between ⌈m/2⌉ and m children.
3. All leaves are at the same level in the tree, so the tree is always height balanced.

The B-tree is a generalization of the 2-3 tree. Put another way, a 2-3 tree is a B-tree of order three. Normally, the size of a node in the B-tree is chosen to fill a disk block. A B-tree node implementation typically allows a hundred or more children. Thus, a B-tree node is equivalent to a disk block, and a "pointer" value stored in the tree is actually the number of the block containing the child node (usually interpreted as an offset from the beginning of the corresponding disk file). In a typical application, the B-tree's block I/O will be managed using a buffer pool and a block-replacement scheme such as LRU (see the earlier discussion of buffer pools).

The figure above shows a B-tree of order four. Each node contains up to three keys, and internal nodes have up to four children.

Search in a B-tree is a generalization of search in a 2-3 tree. It is an alternating two-step process, beginning with the root node of the B-tree.

1. Perform a binary search on the records in the current node. If a record with the search key is found, then return that record. If the current node is a leaf node and the key is not found, then report an unsuccessful search.
2. Otherwise, follow the proper branch and repeat the process.

For example, consider a search for a record with a given key value in the tree of the figure above. The root node is examined and the second (right) branch taken. After
examining the node at level the third branch is taken to the next level to arrive at the leaf node containing record with key value -tree insertion is generalization of - tree insertion the first step is to find the leaf node that should contain the key to be insertedspace permitting if there is room in this nodethen insert the key if there is notthen split the node into two and promote the middle key to the parent if the parent becomes fullthen it is split in turnand its middle key promoted note that this insertion process is guaranteed to keep all nodes at least half full for examplewhen we attempt to insert into full internal node of -tree of order fourthere will now be five children that must be dealt with the node is split into two nodes containing two keys eachthus retaining the -tree property the middle of the five children is promoted to its parent -trees the previous section mentioned that -trees are universally used to implement large-scale disk-based systems actuallythe -tree as described in the previous section is almost never implementednor is the - tree as described in section what is most commonly implemented is variant of the -treecalled the -tree when greater efficiency is requireda more complicated variant known as the -tree is used when data are staticit is an extremely efficient way to search the problem is those pesky inserts and deletes imagine that we want to keep the idea of storing sorted listbut make it more flexible by breaking the list into manageable chunks that are more easily updated how might we do thatfirstwe need to decide how big the chunks should be since the data are on diskit seems reasonable to store chunk that is the size of disk blockor small multiple of the disk block size we could insert new record with chunk that hasn' filled its block but what if the chunk fills up the entire block that contains itwe could just split it in half what if we want to delete recordwe could just take the deleted record out of the chunkbut we might not want lot of near-empty chunks so we could put adjacent chunks together if they have only small amount of data between them or we could shuffle data between adjacent chunks that together contain more data the big problem would be how to find the desired chunk when processing record with given key perhaps some sort of tree-like structure could be used to locate the appropriate chunk these ideas are exactly what motivate the -tree the -tree is essentially mechanism for managing list broken into chunks the most significant difference between the -tree and the bst or the - tree is that the -tree stores records only at the leaf nodes internal nodes store key
chap indexing valuesbut these are used solely as placeholders to guide the search this means that internal nodes are significantly different in structure from leaf nodes internal nodes store keys to guide the searchassociating each key with pointer to child -tree node leaf nodes store actual recordsor else keys and pointers to actual records in separate disk file if the -tree is being used purely as an index depending on the size of record as compared to the size of keya leaf node in -tree of order might have enough room to store more or less than records the requirement is simply that the leaf nodes store enough records to remain at least half full the leaf nodes of -tree are normally linked together to form doubly linked list thusthe entire collection of records can be traversed in sorted order by visiting all the leaf nodes on the linked list here is java-like pseudocode representation for the -tree node interface leaf node and internal node subclasses would implement this interface /*interface for btree nodes *public interface bpnode public boolean isleaf()public int numrecs()public key[keys()an important implementation detail to note is that while figure shows internal nodes containing three keys and four pointersclass bpnode is slightly different in that it stores key/pointer pairs figure shows the -tree as it is traditionally drawn to simplify implementation in practicenodes really do associate key with each pointer each internal node should be assumed to hold in the leftmost position an additional key that is less than or equal to any possible key value in the node' leftmost subtree -tree implementations typically store an additional dummy record in the leftmost leaf node whose key value is less than any legal key value -trees are exceptionally good for range queries once the first record in the range has been foundthe rest of the records with keys in the range can be accessed by sequential processing of the remaining records in the first nodeand then continuing down the linked list of leaf nodes as far as necessary figure illustrates the -tree search in -tree is nearly identical to search in regular -treeexcept that the search must always continue to the proper leaf node even if the search-key value is found in an internal nodethis is only placeholder and does not provide access to the actual record to find record with key value in the -tree of figure search begins at the root the value stored in the root merely serves as placeholderindicating that keys with values greater than or equal to
sec -trees figure example of -tree of order four internal nodes must store between two and four children for this examplethe record size is assumed to be such that leaf nodes store between three and five records are found in the second subtree from the second child of the rootthe first branch is taken to reach the leaf node containing the actual record (or pointer to the actual recordwith key value here is pseudocode sketch of the -tree search algorithmprivate findhelp(bpnode rtkey kint currec binaryle(rt keys()rt numrecs() )if (rt isleaf()if ((((bpleaf)rtkeys())[currec=kreturn ((bpleaf)rtrecs(currec)else return nullelse return findhelp(((bpinternal)rtpointers(currec) ) -tree insertion is similar to -tree insertion firstthe leaf that should contain the record is found if is not fullthen the new record is addedand no other -tree nodes are affected if is already fullsplit it in two (dividing the records evenly among the two nodesand promote copy of the least-valued key in the newly formed right node as with the - treepromotion might cause the parent to split in turnperhaps eventually leading to splitting the root and causing the -tree to gain new level -tree insertion keeps all leaf nodes at equal depth figure illustrates the insertion process through several examples figure shows java-like pseudocode sketch of the -tree insert algorithm to delete record from the -treefirst locate the leaf that contains if is more than half fullthen we need only remove rleaving still at least half full this is demonstrated by figure if deleting record reduces the number of records in the node below the minimum threshold (called an underflow)then we must do something to keep the node sufficiently full the first choice is to look at the node' adjacent siblings to
chap indexing ( ( ( (dfigure examples of -tree insertion (aa -tree containing five records (bthe result of inserting record with key value into the tree of (athe leaf node splitscausing creation of the first internal node (cthe -tree of (bafter further insertions (dthe result of inserting record with key value into the tree of (cthe second leaf node splitswhich causes the internal node to split in turncreating new root private bpnode inserthelp(bpnode rtkey ke ebpnode retvalif (rt isleaf()/at leaf nodeinsert here return ((bpleaf)rtadd(ke)/add to internal node int currec binaryle(rt keys()rt numrecs() )bpnode temp inserthelp((bpinternal)rootpointers(currec)ke)if (temp !((bpinternal)rtpointers(currec)return ((bpinternal)rtadd((bpinternal)temp)else return rtfigure java-like pseudocode sketch of the -tree insert algorithm
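Both the B+-tree search and insert sketches call a helper binaryle that is not shown in this excerpt. Presumably it performs a binary search over a node's key array and returns the position of the greatest key less than or equal to the search key; the version below is a minimal sketch under that assumption, returning 0 when every stored key is greater (which matches the convention of keeping a dummy minimal key in the leftmost position).

// A sketch of the binaryle helper assumed by findhelp and inserthelp above:
// return the index of the greatest key in keys[0..n-1] that is <= k, or 0 if
// every key is greater than k.
static <Key extends Comparable<? super Key>> int binaryle(Key[] keys, int n, Key k) {
  int lo = 0, hi = n - 1, pos = 0;
  while (lo <= hi) {
    int mid = (lo + hi) / 2;
    if (keys[mid].compareTo(k) <= 0) { pos = mid; lo = mid + 1; }  // Candidate; look right
    else hi = mid - 1;                                             // Too big; look left
  }
  return pos;
}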
sec -trees figure simple deletion from -tree the record with key value is removed from the tree of figure note that even though is also placeholder used to direct search in the parent nodethat value need not be removed from internal nodes even if no record in the tree has key value thusthe leftmost node at level one in this example retains the key with value after the record with key value has been removed from the second leaf node figure deletion from the -tree of figure via borrowing from sibling the key with value is deleted from the leftmost leafcausing the record with key value to shift to the leftmost leaf to take its place note that the parent must be updated to properly indicate the key range within the subtrees in this examplethe parent node has its leftmost key value changed to determine if they have spare record that can be used to fill the gap if sothen enough records are transferred from the sibling so that both nodes have about the same number of records this is done so as to delay as long as possible the next time when delete causes this node to underflow again this process might require that the parent node has its placeholder key value revised to reflect the true first key value in each node figure illustrates the process if neither sibling can lend record to the under-full node (call it )then must give its records to sibling and be removed from the tree there is certainly room to do thisbecause the sibling is at most half full (remember that it had no records to contribute to the current node)and has become less than half full because it is under-flowing this merge process combines two subtrees of the parentwhich might cause it to underflow in turn if the last two children of the root merge togetherthen the tree loses level figure illustrates the node-merge
chap indexing ( (bfigure deleting the record with key value from the -tree of figure via collapsing siblings (athe two leftmost leaf nodes merge together to form single leaf unfortunatelythe parent node now has only one child (bbecause the left subtree has spare leaf nodethat node is passed to the right subtree the placeholder values of the root and the right internal node are updated to reflect the changes value moves to the rootand old root value moves to the rightmost internal node /*delete record with the given key valueand return true if the root underflows *private boolean removehelp(bpnode rtkey kint currec binaryle(rt keys()rt numrecs() )if (rt isleaf()if (((bpleaf)rtkeys()[currec=kreturn ((bpleaf)rtdelete(currec)else return falseelse /process internal node if (removehelp(((bpinternal)rtpointers(currec) )/child will merge if necessary return ((bpinternal)rtunderflow(currec)else return falsefigure java-like pseudocode for the -tree delete algorithm deletion process figure shows java-like pseudocode for the -tree delete algorithm the -tree requires that all nodes be at least half full (except for the rootthusthe storage utilization must be at least this is satisfactory for many implementationsbut note that keeping nodes fuller will result both in less space
required (because there is less empty space in the disk fileand in more efficient processing (fewer blocks on average will be read into memory because the amount of information in each block is greaterbecause -trees have become so popularmany algorithm designers have tried to improve -tree performance one method for doing so is to use the -tree variant known as the -tree the -tree is identical to the -treeexcept for the rules used to split and merge nodes instead of splitting node in half when it overflowsthe -tree gives some records to its neighboring siblingif possible if the sibling is also fullthen these two nodes split into three similarlywhen node underflowsit is combined with its two siblingsand the total reduced to two nodes thusthe nodes are always at least two thirds full -tree analysis the asymptotic cost of searchinsertionand deletion of records from -treesb-treesand -trees is th(log nwhere is the total number of records in the tree howeverthe base of the log is the (averagebranching factor of the tree typical database applications use extremely high branching factorsperhaps or more thusin practice the -tree and its variants are extremely shallow as an illustrationconsider -tree of order and leaf nodes that contain up to records one-level -tree can have at most records two-level -tree must have at least records ( leaves with records eachit has at most , records ( leaves with records eacha three-level -tree must have at least records (two second-level nodes with children containing records eachand at most one million records ( second-level nodes with full children eacha four-level -tree must have at least , records and at most million records thusit would require an extremely large database to generate -tree of more than four levels we can reduce the number of disk fetches required for the -tree even more by using the following methods firstthe upper levels of the tree can be stored in main memory at all times because the tree branches so quicklythe top two levels (levels and require relatively little space if the -tree is only four levels deepthen at most two disk fetches (internal nodes at level two and leaves at level threeare required to reach the pointer to any given record this concept can be extended further if higher space utilization is required howeverthe update routines become much more complicated once worked on project where we implemented -for- node split and merge routines this gave better performance than the -for- node split and merge routines of the -tree howeverthe spitting and merging routines were so complicated that even their author could no longer understand them once they were completed
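The level-by-level record counts discussed in the analysis above follow directly from the minimum and maximum fan-out of each node. The small program below recomputes such bounds; the order and leaf capacity used in main (100 and 100) are illustrative values of my own choosing, and the occupancy rules encoded here (root with at least two children, all other nodes at least half full) simply restate the B+-tree shape properties given earlier.

// Rough capacity bounds for a B+-tree under the shape rules discussed above.
public class BPlusCapacity {
  static long minRecords(int m, int c, int levels) {
    if (levels == 1) return 1;              // A single leaf serving as the root
    long leaves = 2;                        // The root has at least two children
    for (int i = 0; i < levels - 2; i++)
      leaves *= (m + 1) / 2;                // Each lower internal level at least half full
    return leaves * ((c + 1) / 2);          // Each leaf at least half full
  }
  static long maxRecords(int m, int c, int levels) {
    long leaves = 1;
    for (int i = 0; i < levels - 1; i++)
      leaves *= m;                          // Every internal node completely full
    return leaves * c;                      // Every leaf completely full
  }
  public static void main(String[] args) {
    int m = 100, c = 100;                   // Illustrative order and leaf capacity
    for (int d = 1; d <= 4; d++)
      System.out.println(d + " level(s): " + minRecords(m, c, d)
                         + " to " + maxRecords(m, c, d) + " records");
  }
}

With these illustrative parameters the four-level maximum already reaches one hundred million records, which is why B+-trees in practice are so shallow.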
chap indexing as mentioned earliera buffer pool should be used to manage nodes of the -tree several nodes of the tree would typically be in main memory at one time the most straightforward approach is to use standard method such as lru to do node replacement howeversometimes it might be desirable to "lockcertain nodes such as the root into the buffer pool in generalif the buffer pool is even of modest size (say at least twice the depth of the tree)no special techniques for node replacement will be required because the upper-level nodes will naturally be accessed frequently further reading for an expanded discussion of the issues touched on in this see general file processing text such as file structuresa conceptual toolkit by folk and zoellick [fz in particularfolk and zoellick provide good discussion of the relationship between primary and secondary indices the most thorough discussion on various implementations for the -tree is the survey article by comer [com also see [sal for further details on implementing -trees see shaffer and brown [sb for discussion of buffer pool management strategies for -tree-like data structures exercises assume that computer system has disk blocks of bytesand that you are storing records that have -byte keys and -byte data fields the records are sorted and packed sequentially into the disk file (aassume that linear index uses bytes to store the key and bytes to store the block id for the associated records what is the greatest number of records that can be stored in the file if linear index of size kb is used(bwhat is the greatest number of records that can be stored in the file if the linear index is also stored on disk (and thus its size is limited only by the second-level indexwhen using second-level index of bytes ( key valuesas illustrated by figure each element of the second-level index references the smallest key value for disk block of the linear index assume that computer system has disk blocks of bytesand that you are storing records that have -byte keys and -byte data fields the records are sorted and packed sequentially into the disk file
sec exercises (aassume that linear index uses bytes to store the key and bytes to store the block id for the associated records what is the greatest number of records that can be stored in the file if linear index of size mb is used(bwhat is the greatest number of records that can be stored in the file if the linear index is also stored on disk (and thus its size is limited only by the second-level indexwhen using second-level index of bytes ( key valuesas illustrated by figure each element of the second-level index references the smallest key value for disk block of the linear index modify the function binary of section so as to support variable-length records with fixed-length keys indexed by simple linear index as illustrated by figure assume that database stores records consisting of -byte integer key and variable-length data field consisting of string show the linear index (as illustrated by figure for the following collection of records hello worldxyz this string is rather long this is shorter abc hello new world each of the following series of records consists of four-digit primary key (with no duplicatesand four-character secondary key (with many duplicates deer deer duck deer goat duck frog deer duck frog (ashow the inverted list (as illustrated by figure for this collection of records
chap indexing (bshow the improved inverted list (as illustrated by figure for this collection of records under what conditions will isam be more efficient than -tree implementation prove that the number of leaf nodes in - tree with levels is between - and - show the result of inserting the values and into the - tree of figure you are given series of records whose keys are letters the records arrive in the following ordercsdtampibwngurkeholj show the - tree that results from inserting these records you are given series of records whose keys are letters the records are inserted in the following ordercsdtampibwngurkeholj show the tree that results from inserting these records when the - tree is modified to be - treethat isthe internal nodes act only as placeholders assume that the leaf nodes are capable of holding up to two records show the result of inserting the value into the -tree of figure show the result of inserting the values and (in that orderinto the -tree of figure show the result of deleting the values and (in that orderfrom the -tree of figure you are given series of records whose keys are letters the records are inserted in the following ordercsdtampibwngurkeholj show the -tree of order four that results from inserting these records assume that the leaf nodes are capable of storing up to three records assume that you have -tree whose internal nodes can store up to children and whose leaf nodes can store up to records what are the minimum and maximum number of records that can be stored by the -tree for and levels assume that you have -tree whose internal nodes can store up to children and whose leaf nodes can store up to records what are the minimum and maximum number of records that can be stored by the -tree for and levels
projects implement two-level linear index for variable-length records as illustrated by figures and assume that disk blocks are bytes in length records in the database file should typically range between and bytesincluding -byte key value each record of the index file should store key value and the byte offset in the database file for the first byte of the corresponding record the top-level index (stored in memoryshould be simple array storing the lowest key value on the corresponding block in the index file implement the - treethat isa - tree where the internal nodes act only as placeholders your - tree should implement the dictionary interface of section implement the dictionary adt of section for large file stored on disk by means of the -tree of section assume that disk blocks are bytesand thus both leaf nodes and internal nodes are also bytes records should store -byte (intkey value and -byte data field internal nodes should store key value/pointer pairs where the "pointeris actually the block number on disk for the child node both internal nodes and leaf nodes will need room to store various information such as count of the records stored on that nodeand pointer to the next node on that level thusleaf nodes will store recordsand internal nodes will have room to store about to children depending on how you implement them use buffer pool (section to manage access to the nodes stored on disk
advanced data structures
graphs graphs provide the ultimate in data structure flexibility graphs can model both real-world systems and abstract problemsso they are used in hundreds of applications here is small sampling of the range of problems that graphs are applied to modeling connectivity in computer and communications networks representing map as set of locations with distances between locationsused to compute shortest routes between locations modeling flow capacities in transportation networks finding path from starting condition to goal conditionfor examplein artificial intelligence problem solving modeling computer algorithmsshowing transitions from one program state to another finding an acceptable order for finishing subtasks in complex activitysuch as constructing large buildings modeling relationships such as family treesbusiness or military organizationsand scientific taxonomies we begin in section with some basic graph terminology and then define two fundamental representations for graphsthe adjacency matrix and adjacency list section presents graph adt and simple implementations based on the adjacency matrix and adjacency list section presents the two most commonly used graph traversal algorithmscalled depth-first and breadth-first searchwith application to topological sorting section presents algorithms for solving some problems related to finding shortest routes in graph finallysection presents algorithms for finding the minimum-cost spanning treeuseful for determining lowest-cost connectivity in network besides being useful and interesting in their own rightthese algorithms illustrate the use of some data structures presented in earlier
chap graphs ( ( (cfigure examples of graphs and terminology (aa graph (ba directed graph (digraph(ca labeled (directedgraph with weights associated with the edges in this examplethere is simple path from vertex to vertex containing vertices and vertices and also form pathbut not simple path because vertex appears twice vertices and form simple cycle terminology and representations graph (veconsists of set of vertices and set of edges esuch that each edge in is connection between pair of vertices in the number of vertices is written | |and the number of edges is written | |ecan range from zero to maximum of | | |va graph with relatively few edges is called sparsewhile graph with many edges is called dense graph containing all possible edges is said to be complete graph with edges directed from one vertex to another (as in figure ( )is called directed graph or digraph graph whose edges are not directed is called an undirected graph (as illustrated by figure ( ) graph with labels associated with its vertices (as in figure ( )is called labeled graph two vertices are adjacent if they are joined by an edge such vertices are also called neighbors an edge connecting vertices and is written (uvsuch an edge is said to be incident on vertices and associated with each edge may be cost or weight graphs whose edges have weights (as in figure ( )are said to be weighted sequence of vertices vn forms path of length if there exist edges from vi to vi+ for < path is simple if all vertices on the path are distinct the length of path is the number of edges it contains cycle is path of length three or more that connects some vertex to itself cycle is simple if the path is simpleexcept for the first and last vertices being the same some graph applications require that given pair of vertices can have multiple edges connecting themor that vertex can have an edge to itself howeverthe applications discussed in this book do not require either of these special casesso for simplicity we will assume that they cannot occur
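Because several symbols in the paragraph above did not survive extraction, the two key quantitative statements are restated here in standard notation; this is only a restatement of the definitions just given, not new material.

\[ 0 \;\le\; |E| \;\le\; |V|^2 - |V| \]

A sequence of vertices $v_1, v_2, \ldots, v_n$ forms a path of length $n-1$ if there exist edges from $v_i$ to $v_{i+1}$ for $1 \le i < n$.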
sec terminology and representations figure an undirected graph with three connected components vertices and form one connected component vertices and form second connected component vertex by itself forms third connected component subgraph is formed from graph by selecting subset vs of ' vertices and subset es of ' edges such that for every edge in es both of its vertices are in vs an undirected graph is connected if there is at least one path from any vertex to any other the maximally connected subgraphs of an undirected graph are called connected components for examplefigure shows an undirected graph with three connected components graph without cycles is called acyclic thusa directed graph without cycles is called directed acyclic graph or dag free tree is connectedundirected graph with no simple cycles an equivalent definition is that free tree is connected and has | edges there are two commonly used methods for representing graphs the adjacency matrix is illustrated by figure (bthe adjacency matrix for graph is |vx |varray assume that |vn and that the vertices are labeled from through vn- row of the adjacency matrix contains entries for vertex vi column in row is marked if there is an edge from vi to vj and is not marked otherwise thusthe adjacency matrix requires one bit at each position alternativelyif we wish to associate number with each edgesuch as the weight or distance between two verticesthen each matrix position must store that number in either casethe space requirements for the adjacency matrix are th(| | the second common representation for graphs is the adjacency listillustrated by figure (cthe adjacency list is an array of linked lists the array is |vitems longwith position storing pointer to the linked list of edges for vertex vi this linked list represents the edges by the vertices that are adjacent to vertex vi the adjacency list is therefore generalization of the "list of childrenrepresentation for trees described in section
chap graphs ( ( (cfigure two graph representations (aa directed graph (bthe adjacency matrix for the graph of ( (cthe adjacency list for the graph of (aexample the entry for vertex in figure (cstores and because there are two edges in the graph leaving vertex with one going to vertex and one going to vertex the list for vertex stores an entry for vertex because there is an edge from vertex to vertex but no entry for vertex because this edge comes into vertex rather than going out the storage requirements for the adjacency list depend on both the number of edges and the number of vertices in the graph there must be an array entry for each vertex (even if the vertex is not adjacent to any other vertex and thus has no elements on its linked list)and each edge must appear on one of the lists thusthe cost is th(| | |both the adjacency matrix and the adjacency list can be used to store directed or undirected graphs each edge of an undirected graph connecting vertices and is represented by two directed edgesone from to and one from to figure illustrates the use of the adjacency matrix and the adjacency list for undirected graphs
sec terminology and representations ( ( (cfigure using the graph representations for undirected graphs (aan undirected graph (bthe adjacency matrix for the graph of ( (cthe adjacency list for the graph of (awhich graph representation is more space efficient depends on the number of edges in the graph the adjacency list stores information only for those edges that actually appear in the graphwhile the adjacency matrix requires space for each potential edgewhether it exists or not howeverthe adjacency matrix requires no overhead for pointerswhich can be substantial costespecially if the only information stored for an edge is one bit to indicate its existence as the graph becomes denserthe adjacency matrix becomes relatively more space efficient sparse graphs are likely to have their adjacency list representation be more space efficient example assume that the vertex index requires two bytesa pointer requires four bytesand an edge weight requires two bytes then the adjacency matrix for the graph of figure requires | bytes while the adjacency list requires | | bytes for the graph of figure the adjacency matrix requires the same space as beforewhile the adjacency list requires | | bytes (because there are now edges instead of
chap graphs the adjacency matrix often requires higher asymptotic cost for an algorithm than would result if the adjacency list were used the reason is that it is common for graph algorithm to visit each neighbor of each vertex using the adjacency listonly the actual edges connecting vertex to its neighbors are examined howeverthe adjacency matrix must look at each of its |vpotential edgesyielding total cost of th(| |time when the algorithm might otherwise require only th(| |+| |time this is considerable disadvantage when the graph is sparsebut not when the graph is closer to full graph implementations we next turn to the problem of implementing graph class figure shows an abstract class defining an adt for graphs vertices are defined by an integer index value in other wordsthere is vertex vertex and so on we can assume that graph application stores any additional information of interest about given vertex elsewheresuch as name or application-dependent value note that this adt is not implemented using genericbecause it is the graph class usersresponsibility to maintain information related to the vertices themselves the graph class has no knowledge of the type or content of the information associated with vertexonly the index number for that vertex abstract class graph has methods to return the number of vertices and edges (methods and erespectivelyfunction weight returns the weight of given edgewith that edge identified by its two incident vertices for examplecalling weight( on the graph of figure (cwould return if no such edge existsthe weight is defined to be so calling weight( on the graph of figure (cwould return functions setedge and deledge set the weight of an edge and remove an edge from the graphrespectively againan edge is identified by its two incident vertices setedge does not permit the user to set the weight to be because this value is used to indicate non-existent edgenor are negative edge weights permitted functions getmark and setmark get and setrespectivelya requested value in the mark array (described belowfor vertex nearly every graph algorithm presented in this will require visits to all neighbors of given vertex two methods are provided to support this they work in manner similar to linked list access functions function first takes as input vertex vand returns the edge to the first neighbor for (we assume the neighbor list is sorted by vertex numberfunction next takes as input vertices and and returns the index for the vertex forming the next edge with after on '
sec graph implementations /*graph adt *public interface graph /graph class adt /*initialize the graph @param the number of vertices *public void init(int )/*@return the number of vertices *public int ()/*@return the current number of edges *public int ()/*@return ' first neighbor *public int first(int )/*@return ' next neighbor *public int next(int vint )/*set the weight for an edge @param , the vertices @param wght edge weight *public void setedge(int iint jint wght)/*delete an edge @param , the vertices *public void deledge(int iint )/*determine if an edge is in the graph @param , the vertices @return true if edge , has non-zero weight *public boolean isedge(int iint )/*return an edge' weight @param , the vertices @return the weight of edge ,jor zero *public int weight(int iint )/*set the mark value for vertex @param the vertex @param val the value to set *public void setmark(int vint val)/*get the mark value for vertex @param the vertex @return the value of the mark *public int getmark(int )figure graph adt this adt assumes that the number of vertices is fixed when the graph is createdbut that edges can be added and removed it also supports mark array to aid graph traversal algorithms
chap graphs edge list function next will return value of |vonce the end of the edge list for has been reached the following line appears in many graph algorithmsfor ( =>first( ) () ->next( , )this for loop gets the first neighbor of vthen works through the remaining neighbors of until value equal to -> (is returnedsignaling that all neighbors of have been visited for examplefirst( in figure would return next( would return next( would return next( would return which is not vertex in the graph it is reasonably straightforward to implement our graph and edge adts using either the adjacency list or adjacency matrix the sample implementations presented here do not address the issue of how the graph is actually created the user of these implementations must add functionality for this purposeperhaps reading the graph description from file the graph can be built up by using the setedge function provided by the adt figure shows an implementation for the adjacency matrix array mark stores the information manipulated by the setmark and getmark functions the edge matrix is implemented as an integer array of size for graph of vertices position (ijin the matrix stores the weight for edge (ijif it exists weight of zero for edge (ijis used to indicate that no edge connects vertices and given vertex vfunction first locates the position in matrix of the first edge (if anyof by beginning with edge ( and scanning through row until an edge is found if no edge is incident on vthen first returns function next locates the edge following edge (ij(if anyby continuing down the row of vertex starting at position looking for an edge if no such edge existsnext returns functions setedge and deledge adjust the appropriate value in the array function weight returns the value stored in the appropriate position in the array figure presents an implementation of the adjacency list representation for graphs its main data structure is an array of linked listsone linked list for each vertex these linked lists store objects of type edgewhich merely stores the index for the vertex pointed to by the edgealong with the weight of the edge
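Before turning to the concrete implementations, here is a small example of client code written purely against the Graph ADT, using the first/next idiom just described. It is a sketch of my own (the vertex-count method is written as n(), matching its use in the traversal code later in the chapter); for a directed graph it prints each edge once, and for an undirected graph each edge appears once in each direction.

// Print every edge of g using only the Graph ADT and the first/next idiom.
static void printEdges(Graph g) {
  for (int v = 0; v < g.n(); v++)                          // For each vertex...
    for (int w = g.first(v); w < g.n(); w = g.next(v, w))  // ...walk its edge list
      System.out.println("(" + v + ", " + w + ") with weight " + g.weight(v, w));
}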
sec graph implementations class graphm implements graph /graphadjacency matrix private int[][matrix/the edge matrix private int numedge/number of edges public int[mark/the mark array public graphm({public graphm(int ninit( )/constructor public void init(int nmark new int[ ]matrix new int[ ][ ]numedge public int (return mark length/of vertices public int (return numedge/of edges public int first(int /get ' first neighbor for (int = <mark lengthi++if (matrix[ ][ ! return ireturn mark length/no edge for this vertex public int next(int vint /get ' next edge for (int = + <mark lengthi++if (matrix[ ][ ! return ireturn mark length/no next edge/set edge weight public void setedge(int iint jint wtassert wt!= "cannot set weight to "if (matrix[ ][ = numedge++matrix[ ][jwtpublic void deledge(int iint /delete edge (ijif (matrix[ ][ ! numedge--matrix[ ][ public boolean isedge(int iint /is (ijan edgereturn matrix[ ][ ! figure an implementation for the adjacency matrix implementation
chap graphs public int weight(int iint /return edge weight return matrix[ ][ ]/get and set marks public void setmark(int vint valmark[vvalpublic int getmark(int vreturn mark[ ]figure (continued/edge class for adjacency list graph representation class edge private int vertwtpublic edge(int vint /constructor vert vwt wpublic int vertex(return vertpublic int weight(return wtimplementation for graphl member functions is straightforward in principlewith the key functions being setedgedeledgeand weight the simplest implementation would start at the beginning of the adjacency list and move along it until the desired vertex has been found howevermany graph algorithms work by taking advantage of the first and next functions to process all edges extending from given vertex in turn thusthere is significant time savings if setedgedeledgeand weight first check to see if the desired edge is the current one on the relevant linked list the implementation of figure does exactly this graph traversals often it is useful to visit the vertices of graph in some specific order based on the graph' topology this is known as graph traversal and is similar in concept to tree traversal recall that tree traversals visit every node exactly oncein some specified order such as preorderinorderor postorder multiple tree traversals exist because various applications require the nodes to be visited in particular order for exampleto print bst' nodes in ascending order requires an inorder traversal as opposed to some other traversal standard graph traversal orders also exist each is appropriate for solving certain problems for examplemany problems in artificial intelligence programming are modeled using graphs the problem domain may consist of large collection of stateswith connections between various pairs of states solving the problem may require getting from specified start state to
sec graph traversals /adjacency list graph implementation class graphl implements graph private graphlist[vertexprivate int numedgepublic int[markpublic graphl({public graphl(int ninit( )/the vertex list /number of edges /the mark array /constructor public void init(int nmark new int[ ]vertex new graphlist[ ]for (int = <ni++vertex[inew graphlist()numedge public int (return mark length/of vertices public int (return numedge/of edges public int first(int /get ' first neighbor if (vertex[vlength(= return mark length/no neighbor vertex[vmovetostart()edge it vertex[vgetvalue()return it vertex()public int next(int vint /get next neighbor edge it nullif (isedge(vw)vertex[vnext()it vertex[vgetvalue()if (it !nullreturn it vertex()else return mark lengthfigure an implementation for the adjacency list
chap graphs /store edge weight public void setedge(int iint jint weightassert weight ! "may not set weight to "edge curredge new edge(jweight)if (isedge(ij)/edge already exists in graph vertex[iremove()vertex[iinsert(curredge)else /keep neighbors sorted by vertex index numedge++for (vertex[imovetostart()vertex[icurrpos(vertex[ilength()vertex[inext()if (vertex[igetvalue(vertex(jbreakvertex[iinsert(curredge)public void deledge(int iint /delete edge if (isedge(ij)vertex[iremove()numedge--public boolean isedge(int vint /is ( ,jan edgeedge it vertex[vgetvalue()if ((it !null&(it vertex(= )return truefor (vertex[vmovetostart()vertex[vcurrpos(vertex[vlength()vertex[vnext()/check whole list if (vertex[vgetvalue(vertex(=wreturn truereturn falsepublic int weight(int iint /return weight of edge if (isedge(ij)return vertex[igetvalue(weight()return /set and get marks public void setmark(int vint valmark[vvalpublic int getmark(int vreturn mark[ ]figure (continued
specified goal state by moving between states only through the connections typicallythe start and goal states are not directly connected to solve this problemthe vertices of the graph must be searched in some organized manner graph traversal algorithms typically begin with start vertex and attempt to visit the remaining vertices from there graph traversals must deal with number of troublesome cases firstit may not be possible to reach all vertices from the start vertex this occurs when the graph is not connected secondthe graph may contain cyclesand we must make sure that cycles do not cause the algorithm to go into an infinite loop graph traversal algorithms can solve both of these problems by maintaining mark bit for each vertex on the graph at the beginning of the algorithmthe mark bit for all vertices is cleared the mark bit for vertex is set when the vertex is first visited during the traversal if marked vertex is encountered during traversalit is not visited second time this keeps the program from going into an infinite loop when it encounters cycle once the traversal algorithm completeswe can check to see if all vertices have been processed by checking the mark bit array if not all vertices are markedwe can continue the traversal from another unmarked vertex note that this process works regardless of whether the graph is directed or undirected to ensure visiting all verticesgraphtraverse could be called as follows on graph gvoid graphtraverse(graph gint vfor ( = < () ++ setmark(vunvisited)/initialize for ( = < () ++if ( getmark( =unvisiteddotraverse(gv)function "dotraversemight be implemented by using one of the graph traversals described in this section depth-first search the first method of organized graph traversal is called depth-first search (dfswhenever vertex is visited during the searchdfs will recursively visit all of ' unvisited neighbors equivalentlydfs will add all edges leading out of to stack the next vertex to be visited is determined by popping the stack and following that edge the effect is to follow one branch through the graph to its conclusionthen it will back up and follow another branchand so on the dfs process can be used to define depth-first search tree this tree is composed of
chap graphs ( (bfigure (aa graph (bthe depth-first search tree for the graph when starting at vertex the edges that were followed to any new (unvisitedvertex during the traversal and leaves out the edges that lead to already visited vertices dfs can be applied to directed or undirected graphs here is an implementation for the dfs algorithmstatic void dfs(graph gint /depth first search previsit(gv)/take appropriate action setmark(vvisited)for (int first( ) ( next(vw)if ( getmark( =unvisiteddfs(gw)postvisit(gv)/take appropriate action this implementation contains calls to functions previsit and postvisit these functions specify what activity should take place during the search just as preorder tree traversal requires action before the subtrees are visitedsome graph traversals require that vertex be processed before ones further along in the dfs alternativelysome applications require activity after the remaining vertices are processedhence the call to function postvisit this would be natural opportunity to make use of the visitor design pattern described in section figure shows graph and its corresponding depth-first search tree figure illustrates the dfs process for the graph of figure (adfs processes each edge once in directed graph in an undirected graphdfs processes each edge from both directions each vertex must be visitedbut only onceso the total cost is th(| | |
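The DFS routine above leaves previsit and postvisit unspecified. One minimal way to flesh them out, purely for illustration, is to record the order in which vertices are entered and left; the list names below are my own. Reversing the post-order list is exactly the idea behind the DFS-based topological sort presented later in this section.

import java.util.ArrayList;
import java.util.List;

// Illustrative previsit/postvisit actions: record DFS entry and exit orders.
static List<Integer> preOrder  = new ArrayList<Integer>();
static List<Integer> postOrder = new ArrayList<Integer>();

static void previsit(Graph g, int v)  { preOrder.add(v); }   // v is first reached
static void postvisit(Graph g, int v) { postOrder.add(v); }  // v's neighbors are finished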
sec graph traversals call dfs on mark process (acprint (acand call dfs on mark process (cac process (cbprint (cband call dfs on mark process (bcc process (bfprint (bfand call dfs on mark process (fbb process (fcc process (fdprint (fdand call dfs on mark process (dcprocess (dfa pop process (feprint (feand call dfs on mark process (eaprocess (efa pop done with pop continue with process (ceprocess (cfpop done with pop continue with process (aepop dfs complete figure detailed illustration of the dfs process for the graph of figure (astarting at vertex the steps leading to each change in the recursion stack are described
chap graphs /breadth first (queue-basedsearch static void bfs(graph gint startqueue new aqueue( ()) enqueue(start) setmark(startvisited)while ( length( /process each vertex on int dequeue()previsit(gv)/take appropriate action for (int first( ) () next(vw)if ( getmark( =unvisited/put neighbors on setmark(wvisited) enqueue( )postvisit(gv)/take appropriate action figure implementation for the breadth-first graph traversal algorithm breadth-first search our second graph traversal algorithm is known as breadth-first search (bfsbfs examines all vertices connected to the start vertex before visiting vertices further away bfs is implemented similarly to dfsexcept that queue replaces the recursion stack note that if the graph is tree and the start vertex is at the rootbfs is equivalent to visiting vertices level by level from top to bottom figure provides an implementation for the bfs algorithm figure shows graph and the corresponding breadth-first search tree figure illustrates the bfs process for the graph of figure (atopological sort assume that we need to schedule series of taskssuch as classes or construction jobswhere we cannot start one task until after its prerequisites are completed we wish to organize the tasks into linear order that allows us to complete them one at time without violating any prerequisites we can model the problem using dag the graph is directed because one task is prerequisite of another -the vertices have directed relationship it is acyclic because cycle would indicate conflicting series of prerequisites that could not be completed without violating at least one prerequisite the process of laying out the vertices of dag in linear order to meet the prerequisite rules is called topological sort figure illustrates the problem an acceptable topological sort for this example is
sec graph traversals ( (bfigure (aa graph (bthe breadth-first search tree for the graph when starting at vertex topological sort may be found by performing dfs on the graph when vertex is visitedno action is taken ( function previsit does nothingwhen the recursion pops back to that vertexfunction postvisit prints the vertex this yields topological sort in reverse order it does not matter where the sort startsas long as all vertices are visited in the end here is an implementation for the dfs-based algorithm static void topsort(graph /recursive topological sort for (int = < () ++/initialize mark array setmark(iunvisited)for (int = < () ++/process all vertices if ( getmark( =unvisitedtophelp(gi)/recursive helper function /topsort helper function static void tophelp(graph gint vg setmark(vvisited)for (int first( ) () next(vw)if ( getmark( =unvisitedtophelp(gw)printout( )/postvisit for vertex using this algorithm starting at and visiting adjacent neighbors in alphabetic ordervertices of the graph in figure are printed out in the order when reversedthis yields the legal topological sort
chap graphs initial call to bfs on mark and put on the queue dequeue process (caignore process (cbmark and enqueue print (cbprocess (cdmark and enqueue print (cdprocess (cfmark and enqueue print (cfd dequeue process (bcignore process (bfignore dequeue process (acmark and enqueue print (acprocess (aemark and enqueue print(aeb dequeue process (eaignore process (efignore dequeue process (dcignore process (dfignore dequeue process (fbignore process (fcignore process (fdignore bfs is complete figure detailed illustration of the bfs process for the graph of figure (astarting at vertex the steps leading to each change in the queue are described
sec shortest-paths problems figure an example graph for topological sort seven tasks have dependencies as shown by the directed graph we can also implement topological sort using queue instead of recursion to do sowe first visit all edgescounting the number of edges that lead to each vertex ( count the number of prerequisites for each vertexall vertices with no prerequisites are placed on the queue we then begin processing the queue when vertex is taken off of the queueit is printedand all neighbors of (that isall vertices that have as prerequisitehave their counts decremented by one any neighbor whose count is now zero is placed on the queue if the queue becomes empty without printing all of the verticesthen the graph contains cycle ( there is no possible ordering for the tasks that does not violate some prerequisitethe printed order for the vertices of the graph in figure using the queue version of topological sort is figure shows an implementation for the queue-based topological sort algorithm shortest-paths problems on road mapa road connecting two towns is typically labeled with its distance we can model road network as directed graph whose edges are labeled with real numbers these numbers represent the distance (or other cost metricsuch as travel timebetween two vertices these labels may be called weightscostsor distancesdepending on the application given such grapha typical problem is to find the total length of the shortest path between two specified vertices this is not trivial problembecause the shortest path may not be along the edge (if anyconnecting two verticesbut rather may be along path involving one or more intermediate vertices for examplein figure the cost of the path from to to is the cost of the edge directly from to is the cost of the path from to to to is thusthe shortest path from to is (not along the edge connecting to dwe use the notation (ad to indicate that the shortest distance from to is in figure there is no path from to bso we set (ebwe define (ad to be the weight of edge (ad)that
chap graphs static void topsort(graph /topological sortqueue queue new aqueue( ())int[count new int[ ()]int vfor ( = < () ++count[ /initialize for ( = < () ++/process every edge for (int first( ) () next(vw)count[ ]++/add to ' prereq count for ( = < () ++/initialize queue if (count[ = / has no prerequisites enqueue( )while ( length( /process the vertices dequeue(intvalue()printout( )/previsit for vertex for (int first( ) () next(vw)count[ ]--/one less prerequisite if (count[ = /this vertex is now free enqueue( )figure queue-based topological sort algorithm figure example graph for shortest-path definitions isthe weight of the direct connection from to because there is no edge from to bw(ebnote that (dabecause the graph of figure is directed we assume that all weights are positive single-source shortest paths this section presents an algorithm to solve the single-source shortest-paths problem given vertex in graph gfind shortest path from to every other vertex in we might want only the shortest path between two verticess and however in the worst casewhile finding the shortest path from to twe might find
the shortest paths from S to every other vertex as well. So there is no better algorithm (in the worst case) for finding the shortest path to a single vertex than to find the shortest paths to all vertices. The algorithm described here will only compute the distance to every such vertex, rather than recording the actual path. Recording the path requires modifications to the algorithm that are left as an exercise.

Computer networks provide an application for the single-source shortest-paths problem. The goal is to find the cheapest way for one computer to broadcast a message to all other computers on the network. The network can be modeled by a graph with edge weights indicating the time or cost to send a message to a neighboring computer.

For unweighted graphs (or whenever all edges have the same cost), the single-source shortest paths can be found using a simple breadth-first search. When weights are added, BFS will not give the correct answer.

One approach to solving this problem when the edges have differing weights might be to process the vertices in a fixed order. Label the vertices v0 to vn−1, with S = v0. When processing vertex v1, we take the edge connecting v0 and v1. When processing v2, we consider the shortest distance from v0 to v2 and compare that to the shortest distance from v0 to v1 to v2. When processing vertex vi, we consider the shortest path for vertices v0 through vi−1 that have already been processed. Unfortunately, the true shortest path to vi might go through vertex vj for j > i. Such a path will not be considered by this algorithm. However, the problem would not occur if we process the vertices in order of distance from S.

Assume that we have processed, in order of distance from S, the first i − 1 vertices that are closest to S; call this set of vertices S. We are now about to process the ith-closest vertex; call it X. A shortest path from S to X must have its next-to-last vertex in S. Thus,

    d(S, X) = min_{U in S} ( d(S, U) + w(U, X) ).

In other words, the shortest path from S to X is the minimum over all paths that go from S to U, then have an edge from U to X, where U is some vertex in S.

This solution is usually referred to as Dijkstra's algorithm. It works by maintaining a distance estimate D(X) for all vertices X in V. The elements of D are initialized to the value INFINITE. Vertices are processed in order of distance from S. Whenever a vertex v is processed, D(X) is updated for every neighbor X of v. The figure that follows shows an implementation for Dijkstra's algorithm. At the end, array D will contain the shortest-distance values.

There are two reasonable solutions to the key issue of finding the unvisited vertex with minimum distance value during each pass through the main for loop.
chap graphs /compute shortest path distances from sstore them in static void dijkstra(graph gint sint[dfor (int = < () ++/initialize [iinteger max valued[ for (int = < () ++/process the vertices int minvertex(gd)/find next-closest vertex setmark(vvisited)if ( [ =integer max valuereturn/unreachable for (int first( ) () next(vw)if ( [ ( [vg weight(vw)) [wd[vg weight(vw)figure an implementation for dijkstra' algorithm the first method is simply to scan through the list of |vvertices searching for the minimum valueas followsstatic int minvertex(graph gint[dint /initialize to any unvisited vertexfor (int = < () ++if ( getmark( =unvisitedv ibreakfor (int = < () ++/now find smallest value if (( getmark( =unvisited&( [id[ ]) ireturn vbecause this scan is done |vtimesand because each edge requires constanttime update to dthe total cost for this approach is th(| | | |th(| | )because |eis in (| | the second method is to store unprocessed vertices in min-heap ordered by distance values the next-closest vertex can be found in the heap in th(log | |time every time we modify ( )we could reorder in the heap by deleting and reinserting it this is an example of priority queue with priority updateas described in section to implement true priority updatingwe would need to store with each vertex its array index within the heap simpler approach is to add the new (smallerdistance value for given vertex as new record in the heap the smallest value for given vertex currently in the heap will be found firstand greater distance values found later will be ignored because the vertex will already be marked as visited the only disadvantage to repeatedly inserting distance values is that it will raise the number of elements in the heap from th(| |to th(| |in the worst case the time complexity is th((| | |log | |)because for each edge we must reorder the heap because the objects stored on the heap need to
know both their vertex number and their distancewe create simple class for the purpose called dijkelemas follows dijkelem is quite similar to the edge class used by the adjacency list representation import java lang comparableclass dijkelem implements comparable private int vertexprivate int weightpublic dijkelem(int invint inwvertex invweight inwpublic dijkelem({vertex weight public int key(return weightpublic int vertex(return vertexpublic int compareto(dijkelem thatif (weight that key()return - else if (weight =that key()return else return figure shows an implementation for dijkstra' algorithm using the priority queue using minvertex to scan the vertex list for the minimum value is more efficient when the graph is densethat iswhen |eapproaches | | using priority queue is more efficient when the graph is sparse because its cost is th((| | |log | |howeverwhen the graph is densethis cost can become as great as th(| | log | |th(| | log | |figure illustrates dijkstra' algorithm the start vertex is all vertices except have an initial value of after processing vertex aits neighbors have their estimates updated to be the direct distance from after processing (the closest vertex to )vertices and are updated to reflect the shortest path through the remaining vertices are processed in order bdand minimum-cost spanning trees this section presents two algorithms for determining the minimum-cost spanning tree (mstfor graph the mst problem takes as input connectedundirected graph gwhere each edge has distance or weight measure attached the mst is the graph containing the vertices of along with the subset of ' edges that ( has minimum total cost as measured by summing the values for all of the edges
chap graphs /dijkstra' shortest-pathspriority queue version static void dijkstra(graph gint sint[dint /the current vertex dijkelem[ new dijkelem[ ()]/heap for edges [ new dijkelem( )/initial vertex minheap new minheap( ())for (int = < () ++/initialize distance [iinteger max valued[ for (int = < () ++/for each vertex do ( removemin()vertex()/get position while ( getmark( =visited) setmark(vvisited)if ( [ =integer max valuereturn/unreachable for (int first( ) () next(vw)if ( [ ( [vg weight(vw))/update [wd[vg weight(vw) insert(new dijkelem(wd[ ]))figure an implementation for dijkstra' algorithm using priority queue initial process process process process process figure listing for the progress of dijkstra' algorithm operating on the graph of figure the start vertex is in the subsetand ( keeps the vertices connected applications where solution to this problem is useful include soldering the shortest set of wires needed to connect set of terminals on circuit boardand connecting set of cities by telephone lines in such way as to require the least amount of cable the mst contains no cycles if proposed set of edges did have cyclea cheaper mst could be had by removing any one of the edges in the cycle thusthe mst is free tree with | edges the name "minimum-cost spanning treecomes from the fact that the required set of edges forms treeit spans the vertices
(i.e., it connects them together), and it has minimum cost. The figure below shows the MST for an example graph.

Figure: A graph and its MST. All edges appear in the original graph. Those edges drawn with heavy lines indicate the subset making up the MST. Note that edge (C, F) could be replaced with edge (D, F) to form a different MST with equal cost.

Prim's Algorithm

The first of our two algorithms for finding MSTs is commonly referred to as Prim's algorithm. Prim's algorithm is very simple. Start with any vertex N in the graph, setting the MST to be N initially. Pick the least-cost edge connected to N. This edge connects N to another vertex; call this M. Add Vertex M and Edge (N, M) to the MST. Next, pick the least-cost edge coming from either N or M to any other vertex in the graph. Add this edge and the new vertex it reaches to the MST. This process continues, at each step expanding the MST by selecting the least-cost edge from a vertex currently in the MST to a vertex not currently in the MST.

Prim's algorithm is quite similar to Dijkstra's algorithm for finding the single-source shortest paths. The primary difference is that we are seeking not the next-closest vertex to the start vertex, but rather the next-closest vertex to any vertex currently in the MST. Thus we replace the lines

  if (D[w] > (D[v] + G.weight(v, w)))
    D[w] = D[v] + G.weight(v, w);

in Dijkstra's algorithm with the lines

  if (D[w] > G.weight(v, w))
    D[w] = G.weight(v, w);

in Prim's algorithm.
chap graphs /compute minimal-cost spanning tree static void prim(graph gint sint[dint[vfor (int = < () ++/initialize [iinteger max valued[ for (int = < () ++/process the vertices int minvertex(gd) setmark(vvisited)if ( !saddedgetomst( [ ] )if ( [ =integer max valuereturn/unreachable for (int first( ) () next(vw)if ( [wg weight(vw) [wg weight(vw) [wvfigure an implementation for prim' algorithm figure shows an implementation for prim' algorithm that searches the distance matrix for the next closest vertex for each vertex iwhen is processed by prim' algorithman edge going to is added to the mst that we are building array [istores the previously visited vertex that is closest to vertex this information lets us know which edge goes into the mst when vertex is processed the implementation of figure also contains calls to addedgetomst to indicate which edges are actually added to the mst alternativelywe can implement prim' algorithm using priority queue to find the next closest vertexas shown in figure as with the priority queue version of dijkstra' algorithmthe heap' elem type stores dijkelem object prim' algorithm is an example of greedy algorithm at each step in the for loopwe select the least-cost edge that connects some marked vertex to some unmarked vertex the algorithm does not otherwise check that the mst really should include this least-cost edge this leads to an important questiondoes prim' algorithm work correctlyclearly it generates spanning tree (because each pass through the for loop adds one edge and one unmarked vertex to the spanning tree until all vertices have been added)but does this tree have minimum costtheorem prim' algorithm produces minimum-cost spanning tree proofwe will use proof by contradiction let (vebe graph for which prim' algorithm does not generate an mst define an ordering on the vertices according to the order in which they were added by prim' algorithm to the mst
/prims' mst algorithmpriority queue version static void prim(graph gint sint[dint[vint /the current vertex dijkelem[ new dijkelem[ ()]/heap for edges [ new dijkelem( )/initial vertex minheap new minheap( ())for (int = < () ++/initialize [iinteger max value/distances [ for (int = < () ++/nowget distances do ( removemin()vertex()/get position while ( getmark( =visited) setmark(vvisited)if ( !saddedgetomst( [ ] )/add edge to mst if ( [ =integer max valuereturn/unreachable for (int first( ) () next(vw)if ( [wg weight(vw)/update [wg weight(vw) [wv/where it came from insert(new dijkelem(wd[ ]))figure an implementation of prim' algorithm using priority queue vn- let edge ei connect (vx vi for some let ej be the lowest numbered (firstedge added by prim' algorithm such that the set of edges selected so far cannot be extended to form an mst for in other wordsej is the first edge where prim' algorithm "went wrong let be the "truemst call vp ( jthe vertex connected by edge ej that isej (vp vj because is treethere exists some path in connecting vp and vj there must be some edge in this path connecting vertices vu and vw with and > because ej is not part of tadding edge ej to forms cycle edge must be of lower cost than edge ej because prim' algorithm did not generate an mst this situation is illustrated in figure howeverprim' algorithm would have selected the least-cost edge available it would have selected not ej thusit is contradiction that prim' algorithm would have selected the wrong edgeand thusprim' algorithm must be correct example for the graph of figure assume that we begin by marking vertex from athe least-cost edge leads to vertex vertex and edge (acare added to the mst at this pointour candidate edges connecting the mst (vertices and cwith the rest of the graph are (ae)
chap graphs marked vertices vi vu vp unmarked vertices vi > "correctedge ej prim' edge vw vj figure prim' mst algorithm proof the left oval contains that portion of the graph where prim' mst and the "truemst agree the right oval contains the rest of the graph the two portions of the graph are connected by (at leastedges ej (selected by prim' algorithm to be in the mstand (the "correctedge to be placed in the mstnote that the path from vw to vj cannot include any marked vertex vi <jbecause to do so would form cycle (cb)(cd)and (cffrom these choicesthe least-cost edge from the mst is (cdso we add vertex to the mst for the next iterationour edge choices are (ae)(cb)(cf)and (dfbecause edges (cfand (dfhappen to have equal costit is an arbitrary decision as to which gets selected let' pick (cfthe next step marks vertex and adds edge (feto the mst following in this mannervertex (through edge (cb)is marked at this pointthe algorithm terminates kruskal' algorithm our next mst algorithm is commonly referred to as kruskal' algorithm kruskal' algorithm is also simplegreedy algorithm we first partition the set of vertices into |vequivalence classes (see section )each consisting of one vertex we then process the edges in order of weight an edge is added to the mstand the two equivalence classes combinedif the edge connects two vertices in different equivalence classes this process is repeated until only one equivalence class remains
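The only bookkeeping this description needs is a way to maintain the equivalence classes. As a rough illustration (a sketch only, not the book's UNION/FIND implementation; the class and method names here are assumptions made for the example), the classes can be kept in a parent-pointer array with path compression:

// Minimal sketch of the equivalence classes Kruskal's algorithm needs.
// Each vertex starts in its own class; find() returns a class
// representative, and union() merges two classes.
class EquivClasses {
  private final int[] parent;            // parent[i] == i marks a root

  EquivClasses(int numVertices) {
    parent = new int[numVertices];
    for (int i = 0; i < numVertices; i++) parent[i] = i;
  }

  int find(int v) {                      // Follow parent pointers to the root,
    if (parent[v] != v)                  // compressing the path as we return
      parent[v] = find(parent[v]);
    return parent[v];
  }

  boolean differ(int a, int b) { return find(a) != find(b); }

  void union(int a, int b) { parent[find(a)] = find(b); }
}

An edge (v, w) is accepted only when differ(v, w) reports that its endpoints lie in different classes, after which union(v, w) merges them. The book's version additionally uses weighted union, which this sketch omits for brevity.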
example figure shows the first three steps of kruskal' algorithm for the graph of figure edge (cdhas the least costand because and are currently in separate mststhey are combined we next select edge (efto processand combine these vertices into single mst the third edge we process is (cf)which causes the mst containing vertices and to merge with mst containing vertices and the next edge to process is (dfbut because vertices and are currently in the same mstthis edge is rejected the algorithm will continue on to accept edges (bcand (acinto the mst the edges can be processed in order of weight by using min-heap this is generally faster than sorting the edges firstbecause in practice we need only visit small fraction of the edges before completing the mst this is an example of finding only few smallest elements in listas discussed in section the only tricky part to this algorithm is determining if two vertices belong to the same equivalence class fortunatelythe ideal algorithm is available for the purpose -the union/find algorithm based on the parent pointer representation for trees described in section figure shows an implementation for the algorithm class kruskalelem is used to store the edges on the min-heap kruskal' algorithm is dominated by the time required to process the edges the differ and union functions are nearly constant in time if path compression and weighted union is used thusthe total cost of the algorithm is th(|elog | |in the worst casewhen nearly all edges must be processed before all the edges of the spanning tree are found and the algorithm can stop more often the edges of the spanning tree are the shorter ones,and only about |vedges must be processed if sothe cost is often close to th(|vlog | |in the average case further reading many interesting properties of graphs can be investigated by playing with the programs in the stanford graphbase this is collection of benchmark databases and graph processing programs the stanford graphbase is documented in [knu exercises prove by induction that graph with vertices has at most ( - )/ edges prove the following implications regarding free trees (aif an undirected graph is connected and has no simple cyclesthen the graph has | edges
chap graphs initial step process edge (cdd step process edge (efd step process edge (cfd figure illustration of the first three steps of kruskal' mst algorithm as applied to the graph of figure (bif an undirected graph has | edges and no cyclesthen the graph is connected (adraw the adjacency matrix representation for the graph of figure (bdraw the adjacency list representation for the same graph (cif pointer requires four bytesa vertex label requires two bytesand an edge weight requires two byteswhich representation requires more space for this graph(dif pointer requires four bytesa vertex label requires one byteand an edge weight requires two byteswhich representation requires more space for this graph show the dfs tree for the graph of figure starting at vertex
class kruskalelem implements comparable private int vwweightpublic kruskalelem(int inweightint invint inwweight inweightv invw inwpublic int (return vpublic int (return wpublic int key(return weightpublic int compareto(kruskalelem thatif (weight that key()return - else if (weight =that key()return else return static void kruskal(graph /kruskal' mst algorithm parptrtree new parptrtree( ())/equivalence array kruskalelem[ new kruskalelem[ ()]/minheap array int edgecnt /count of edges for (int = < () ++/put edges in the array for (int first( ) () next(iw) [edgecnt++new kruskalelem( weight(iw)iw)minheap new minheap(eedgecntedgecnt)int nummst ()/initially classes for (int = nummst> ++/combine equiv classes kruskalelem temp removemin()/next cheapest int temp ()int temp ()if ( differ(vu)/if in different classes union(vu)/combine equiv classes addedgetomst(vu)/add this edge to mst nummst--/one less mst figure an implementation for kruskal' algorithm wright pseudocode algorithm to create dfs tree for an undirectedconnected graph starting at specified vertex show the bfs tree for the graph of figure starting at vertex wright pseudocode algorithm to create bfs tree for an undirectedconnected graph starting at specified vertex the bfs topological sort algorithm can report the existence of cycle if one is encountered modify this algorithm to print the vertices possibly appearing in cycles (if there are any cycles
chap graphs figure example graph for exercises explain whyin the worst casedijkstra' algorithm is (asymptoticallyas efficient as any algorithm for finding the shortest path from some vertex to another vertex show the shortest paths generated by running dijkstra' shortest-paths algorithm on the graph of figure beginning at vertex show the values as each vertex is processedas in figure modify the algorithm for single-source shortest paths to actually store and return the shortest paths rather than just compute the distances the root of dag is vertex such that every vertex of the dag can be reached by directed path from write an algorithm that takes directed graph as input and determines the root (if there is onefor the graph the running time of your algorithm should be th(| | | write an algorithm to find the longest path in dagwhere the length of the path is measured by the number of edges that it contains what is the asymptotic complexity of your algorithm write an algorithm to determine whether directed graph of |vvertices contains cycle your algorithm should run in th(| | |time write an algorithm to determine whether an undirected graph of |vvertices contains cycle your algorithm should run in th(| |time the single-destination shortest-paths problem for directed graph is to find the shortest path from every vertex to specified vertex write an algorithm to solve the single-destination shortest-paths problem list the order in which the edges of the graph in figure are visited when running prim' mst algorithm starting at vertex show the final mst list the order in which the edges of the graph in figure are visited when running kruskal' mst algorithm each time an edge is added to the
mstshow the result on the equivalence array( show the array as in figure write an algorithm to find maximum cost spanning treethat isthe spanning tree with highest possible cost when can prim' and kruskal' algorithms yield different msts prove thatif the costs for the edges of graph are distinctthen only one mst exists for does either prim' or kruskal' algorithm work if there are negative edge weights consider the collection of edges selected by dijkstra' algorithm as the shortest paths to the graph' vertices from the start vertex do these edges form spanning tree (not necessarily of minimum cost)do these edges form an mstexplain why or why not prove that tree is bipartite graph prove that any tree ( connectedundirected graph with no cyclescan be two-colored ( graph can be two colored if every vertex can be assigned one of two colors such that no adjacent vertices have the same color write an algorithm that determines if an arbitrary undirected graph is bipartite graph if the graph is bipartitethen your algorithm should also identify the vertices as to which of the two partitions each belongs to projects design format for storing graphs in files then implement two functionsone to read graph from file and the other to write graph to file test your functions by implementing complete mst program that reads an undirected graph in from fileconstructs the mstand then writes to second file the graph representing the mst an undirected graph need not explicitly store two separate directed edges to represent single undirected edge an alternative would be to store only single undirected edge (ijto connect vertices and howeverwhat if the user asks for edge (ji)we can solve this problem by consistently storing the edge such that the lesser of and always comes first thusif we have an edge connecting vertices and requests for edge ( and ( both map to ( because looking at the adjacency matrixwe notice that only the lower triangle of the array is used thus we could cut the space required by the adjacency matrix from | | positions to | |(| )/ positions read section on
chap graphs triangular matrices the re-implement the adjacency matrix representation of figure to implement undirected graphs using triangular array while the underlying implementation (whether adjacency matrix or adjacency listis hidden behind the graph adtthese two implementations can have an impact on the efficiency of the resulting program for dijkstra' shortest paths algorithmtwo different implementations were given in section that provide different ways for determining the next closest vertex at each iteration of the algorithm the relative costs of these two variants depend on who sparse or dense the graph is they might also depend on whether the graph is implemented using an adjacency list or adjacency matrix design and implement study to compare the effects on performance for three variables(ithe two graph representations (adjacency list and adjacency matrix)(iithe two implementations for djikstra' shortest paths algorithm (searching the table of vertex distances or using priority queue to track the distances)and (iiisparse versus dense graphs be sure to test your implementations on variety of graphs that are sufficiently large to generate meaningful times the example implementations for dfs and bfs show calls to functions previsit and postvisit re-implement the bfs and dfs functions to make use of the visitor design pattern to handle the pre/post visit functionality write program to label the connected components for an undirected graph in other wordsall vertices of the first component are given the first component' labelall vertices of the second component are given the second component' labeland so on your algorithm should work by defining any two vertices connected by an edge to be members of the same equivalence class once all of the edges have been processedall vertices in given equivalence class will be connected use the union/find implementation from section to implement equivalence classes
lists and arrays revisited simple lists and arrays are the right tool for the many applications other situations require support for operations that cannot be implemented efficiently by the standard list representations of this presents advanced implementations for lists and arrays that overcome some of the problems of simple linked list and contiguous array representations wide range of topics are coveredwhose unifying thread is that the data structures are all listor array-like this should also serve to reinforce the concept of logical representation versus physical implementationas some of the "listimplementations have quite different organizations internally section describes series of representations for multilistswhich are lists that may contain sublists section discusses representations for implementing sparse matriceslarge matrices where most of the elements have zero values section discusses memory management techniqueswhich are essentially way of allocating variable-length sections from large array multilists recall from that list is finiteordered sequence of items of the form hx xn- where > we can represent the empty list by null or hi in we assumed that all list elements had the same data type in this sectionwe extend the definition of lists to allow elements to be arbitrary in nature in generallist elements are one of two types an atomwhich is data record of some type such as numbersymbolor string another listwhich is called sublist
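To make the two cases concrete, here is a minimal Java sketch of this definition; the names ListElem, Atom, and Sublist are illustrative assumptions and are not types used elsewhere in the chapter.

// A minimal sketch of the multilist element definition: an element is
// either an atom (a data record) or a sublist of further elements.
import java.util.List;

interface ListElem { }                       // Marker for "element of a list"

class Atom implements ListElem {             // An atom holds a plain value
  final Object value;
  Atom(Object value) { this.value = value; }
}

class Sublist implements ListElem {          // A sublist holds more elements
  final List<ListElem> elems;
  Sublist(List<ListElem> elems) { this.elems = elems; }
}

Under this definition a multilist is simply a Sublist whose elements may themselves be Atoms or further Sublists.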
chap lists and arrays revisited figure example of multilist represented by tree figure example of reentrant multilist the shape of the structure is dag (all edges point downwarda list containing sublists will be written as hx hy ha iy ihz ix in this examplethe list has four elements the second element is the sublist hy ha iy and the third is the sublist hz the sublist hy ha iy itself contains sublist if list has one or more sublistswe call multilist lists with no sublists are often referred to as linear lists or chains note that this definition for multilist fits well with our definition of sets from definition where set' members can be either primitive elements or sets we can restrict the sublists of multilist in various waysdepending on whether the multilist should have the form of treea dagor generic graph pure list is list structure whose graph corresponds to treesuch as in figure in other wordsthere is exactly one path from the root to any nodewhich is equivalent to saying that no object may appear more than once in the list in the pure listeach pair of angle brackets corresponds to an internal node of the tree the members of the list correspond to the children for the node atoms on the list correspond to leaf nodes reentrant list is list structure whose graph corresponds to dag nodes might be accessible from the root by more than one pathwhich is equivalent to saying that objects (including sublistsmay appear multiple times in the list as long as no cycles are formed all edges point downwardfrom the node representing list or sublist to its elements figure illustrates reentrant list to write out
sec multilists figure example of cyclic list the shape of the structure is directed graph this list in bracket notationwe can duplicate nodes as necessary thusthe bracket notation for the list of figure could be written hhhabiihhabicihcdeiheii for conveniencewe will adopt convention of allowing sublists and atoms to be labeledsuch as " :whenever label is repeatedthe element corresponding to that label will be substituted when we write out the list thusthe bracket notation for the list of figure could be written hhl habiihl cihl dl eihl ii cyclic list is list structure whose graph corresponds to any directed graphpossibly containing cycles figure illustrates such list labels are required to write this in bracket notation here is the notation for the list of figure hl hl hal iihl bihl cdil hl ii multilists can be implemented in number of ways most of these should be familiar from implementations suggested earlier in the book for listtreeand graph data structures one simple approach is to use an array representation this works well for chains with fixed-length elementsequivalent to the simple array-based list of we can view nested sublists as variable-length elements to use this approachwe require some indication of the beginning and end of each sublist in essencewe are using sequential tree implementation as discussed in section this should be no surprisebecause the pure list is equivalent to general tree structure unfortunatelyas with any sequential representationaccess to the nth sublist must be done sequentially from the beginning of the list because pure lists are equivalent to treeswe can also use linked allocation methods to support direct access to the list of children simple linear lists are
chap lists and arrays revisited root figure linked representation for the pure list of figure the first field in each link node stores tag bit if the tag bit stores "+,then the data field stores an atom if the tag bit stores "-,then the data field stores pointer to sublist root figure lisp-like linked representation for the cyclic multilist of figure each link node stores two pointers pointer either points to an atomor to another link node link nodes are represented by two boxesand atoms by circles represented by linked lists pure lists can be represented as linked lists with an additional tag field to indicate whether the node is an atom or sublist if it is sublistthe data field points to the first element on the sublist this is illustrated by figure another approach is to represent all list elements with link nodes storing two pointer fieldsexcept for atoms atoms just contain data this is the system used by the programming language lisp figure illustrates this representation either the pointer contains tag bit to identify what it points toor the object being pointed to stores tag bit to identify itself tags distinguish atoms from list nodes this implementation can easily support reentrant and cyclic listsbecause non-atoms can point to any other node
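As a rough sketch of how this LISP-like scheme might look in Java (the class names AtomNode and PairNode are assumptions made for illustration, not the book's types), each non-atom cell holds two references, and a tag on the object itself distinguishes atoms from cells:

// A minimal sketch of the LISP-like representation: every non-atom is a
// pair of references, and the tag distinguishes atoms from pair cells.
// Because a reference may point to any cell, the same structure also
// supports reentrant and cyclic lists.
abstract class Node {
  abstract boolean isAtom();                 // The tag
}

class AtomNode extends Node {
  final Object value;
  AtomNode(Object value) { this.value = value; }
  boolean isAtom() { return true; }
}

class PairNode extends Node {
  Node head;                                 // First pointer field
  Node rest;                                 // Second pointer field
  PairNode(Node head, Node rest) { this.head = head; this.rest = rest; }
  boolean isAtom() { return false; }
}

For example, a cyclic list can be formed by creating a PairNode and later assigning one of its fields to refer back to an ancestor cell; nothing in the representation itself prevents this.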
Matrix Representations

Figure: Triangular matrices. (a) A lower triangular matrix. (b) An upper triangular matrix.

Some applications must represent a large, two-dimensional matrix where many of the elements have a value of zero. One example is the lower triangular matrix that results from solving systems of simultaneous equations. A lower triangular matrix stores zero values at positions [r][c] such that r < c, as shown in Figure (a). Thus, the upper-right triangle of the matrix is always zero. Another example is the representation of undirected graphs in an adjacency matrix (see the related project at the end of the previous chapter). Because all edges between Vertices i and j go in both directions, there is no need to store both. Instead we can just store one edge going from the higher-indexed vertex to the lower-indexed vertex. In this case, only the lower triangle of the matrix can have non-zero values.

We can take advantage of this fact to save space. Instead of storing n(n + 1)/2 pieces of information in an n × n array, it would save space to use a list of length n(n + 1)/2. This is only practical if some means can be found to locate within the list the element that would correspond to position [r][c] in the original matrix. To derive an equation to do this computation, note that row 0 of the matrix has one non-zero value, row 1 has two non-zero values, and so on. Thus, row r is preceded by r rows with a total of sum_{k=1}^{r} k = (r² + r)/2 non-zero elements. Adding c to reach the cth position in the rth row yields the following equation to convert position [r][c] in the original matrix to the correct position in the list:

  matrix[r][c] = list[(r² + r)/2 + c]

A similar equation can be used to store an upper triangular matrix, that is, a matrix with zero values at positions [r][c] such that r > c, as shown in Figure (b). For an upper triangular matrix, the equation would be:

  matrix[r][c] = list[rn − (r² + r)/2 + c]

A more difficult situation arises when the vast majority of values stored in an n × n matrix are zero, but there is no restriction on which positions are zero and which are non-zero. This is known as a sparse matrix.
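Before moving on to sparse matrices, it may help to see the triangular-matrix index arithmetic in code. The following minimal Java sketch is an illustration only (the class LowerTriangularMatrix and its methods are assumed names, not code from this book); it maps position [r][c] of the lower triangle to the one-dimensional list.

// A minimal sketch of a lower triangular matrix stored in a
// one-dimensional array of length n(n+1)/2.
class LowerTriangularMatrix {
  private final int n;
  private final int[] list;          // Holds only the lower triangle

  LowerTriangularMatrix(int n) {
    this.n = n;
    this.list = new int[n * (n + 1) / 2];
  }

  // Map position [r][c] (with r >= c) to its list index: (r*r + r)/2 + c
  private int index(int r, int c) {
    return (r * r + r) / 2 + c;
  }

  int get(int r, int c) {            // Upper-right triangle is always zero
    return (c > r) ? 0 : list[index(r, c)];
  }

  void set(int r, int c, int value) {
    if (c > r) throw new IllegalArgumentException("upper triangle must stay zero");
    list[index(r, c)] = value;
  }
}

For example, with n = 5, position [3][1] maps to list index (9 + 3)/2 + 1 = 7, consistent with the equation above.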
chap lists and arrays revisited cols rows figure the orthogonal list sparse matrix representation one approach to representing sparse matrix is to concatenate (or otherwise combinethe row and column coordinates into single value and use this as key in hash table thusif we want to know the value of particular position in the matrixwe search the hash table for the appropriate key if value for this position is not foundit is assumed to be zero this is an ideal approach when all queries to the matrix are in terms of access by specified position howeverif we wish to find the first non-zero element in given rowor the next non-zero element below the current one in given columnthen the hash table requires us to check sequentially through all possible positions in some row or column another approach is to implement the matrix as an orthogonal listas illustrated in figure here we have list of row headerseach of which contains pointer to list of matrix records second list of column headers also contains pointers to matrix records each non-zero matrix element stores pointers to its non-zero neighbors in the rowboth following and preceding it each non-zero
element also stores pointers to its non-zero neighbors following and preceding it in the column. Thus, each non-zero element stores its own value, its position within the matrix, and four pointers. Non-zero elements are found by traversing a row or column list. Note that the first non-zero element in a given row could be in any column; likewise, the neighboring non-zero element in any row or column list could be at any (higher) row or column in the array. Thus, each non-zero element must also store its row and column position explicitly.

To find if a particular position in the matrix contains a non-zero element, we traverse the appropriate row or column list. For example, when looking for the element at a given row and column, we can traverse the list either for that row or for that column. When traversing a row or column list, if we come to an element with the correct position, then its value is non-zero. If we encounter an element with a higher position, then the element we are looking for is not in the sparse matrix, and its value is zero. For example, when traversing the list for some row in the matrix of the figure, we first reach that row's first stored element. If this is what we are looking for, then the search can stop. If we are looking for an element in a later column, the search proceeds along the row list; as soon as it reaches an element with a higher column number than the one sought, we know that the desired element is not stored in the sparse matrix. Insertion and deletion can be performed by working in a similar way to insert or delete elements within the appropriate row and column lists.

Each non-zero element stored in the sparse matrix representation takes much more space than an element stored in a simple matrix. When is the sparse matrix more space efficient than the standard representation? To calculate this, we need to determine how much space the standard matrix requires, and how much the sparse matrix requires. The size of the sparse matrix depends on the number of non-zero elements, while the size of the standard matrix representation does not vary. We need to know the (relative) sizes of a pointer and a data value. For simplicity, our calculation will ignore the space taken up by the row and column headers (which is not much affected by the number of elements in the sparse array).

As an example, assume that a data value, a row or column index, and a pointer each require four bytes. An n × m matrix requires 4nm bytes. The sparse matrix requires 28 bytes per non-zero element (four pointers, two array indices, and one data value). If we set X to be the fraction of non-zero elements, we can solve for the value of X below which the sparse matrix representation is more space efficient, using the equation

  28mnX = 4mn
Solving for X, we find that the sparse matrix using this implementation is more space efficient when X < 1/7, that is, when less than about 14% of the elements are non-zero. Different values for the relative sizes of data values, pointers, or matrix indices can lead to a different break-even point for the two implementations.

The time required to process a sparse matrix depends on the number of non-zero elements stored. When searching for an element, the cost is the number of elements preceding the desired element on its row or column list. The cost for operations such as adding two matrices should be Θ(n + m) in the worst case when one matrix stores n non-zero elements and the other stores m non-zero elements.

Memory Management

Most of the data structure implementations described in this book store and access objects of uniform size, such as integers stored in a list or a tree. A few simple methods have been described for storing variable-size records in an array or a stack. This section discusses memory management techniques for the general problem of handling space requests of variable size.

The basic model for memory management is that we have a (large) block of contiguous memory locations, which we will call the memory pool. Periodically, memory requests are issued for some amount of space in the pool. The memory manager must find a contiguous block of locations of at least the requested size from somewhere within the memory pool. Honoring such a request is called memory allocation. The memory manager will typically return some piece of information that permits the user to recover the data that were just stored. This piece of information is called a handle. Previously allocated memory might be returned to the memory manager at some future time. This is called memory deallocation. We can define an ADT for the memory manager as shown in the figure below.

The user of the MemManADT provides a pointer (in parameter space) to space that holds some message to be stored or retrieved. This is similar to the basic Java file read/write methods presented earlier. The fundamental idea is that the client gives messages to the memory manager for safe keeping. The memory manager returns a "receipt" for the message in the form of a MemHandle object. The client holds the MemHandle until it wishes to get the message back.

Method insert lets the client tell the memory manager the length and contents of the message to be stored. This ADT assumes that the memory manager will remember the length of the message associated with a given handle, thus method get does not include a length parameter but instead returns the length of the mes
/*memory manager interface *interface memmanadt /*store record and return handle to it *public memhandle insert(byte[info)/request space /*get back copy of stored record *public byte[get(memhandle )/retrieve data /*release the space associated with record *public void release(memhandle )/release space figure simple adt for memory manager sage actually stored method release allows the client to tell the memory manager to release the space that stores given message when all inserts and releases follow simple patternsuch as last requestedfirst released (stack order)or first requestedfirst released (queue order)memory management is fairly easy we are concerned in this section with the general case where blocks of any size might be requested and released in any order this is known as dynamic storage allocation one example of dynamic storage allocation is managing free store for compiler' runtime environmentsuch as the systemlevel new operations in java another example is managing main memory in multitasking operating system herea program might require certain amount of spaceand the memory manager must keep track of which programs are using which parts of the main memory yet another example is the file manager for disk drive when disk file is createdexpandedor deletedthe file manager must allocate or deallocate disk space block of memory or disk space managed in this way is sometimes referred to as heap the term "heapis being used here in different way than the heap data structure discussed in section here "heaprefers to the memory controlled by dynamic memory management scheme in the rest of this sectionwe first study techniques for dynamic memory management we then tackle the issue of what to do when no single block of memory in the memory pool is large enough to honor given request dynamic storage allocation for the purpose of dynamic storage allocationwe view memory as single array broken into series of variable-size blockswhere some of the blocks are free and some are reserved or already allocated the free blocks are linked together to form
chap lists and arrays revisited figure dynamic storage allocation model memory is made up of series of variable-size blockssome allocated and some free in this exampleshaded areas represent memory currently allocated and unshaded areas represent unused memory available for future allocation small blockexternal fragmentation unused space in allocated blockinternal fragmentation figure an illustration of internal and external fragmentation freelist used for servicing future memory requests figure illustrates the situation that can arise after series of memory allocations and deallocations when memory request is received by the memory managersome block on the freelist must be found that is large enough to service the request if no such block is foundthen the memory manager must resort to failure policy such as discussed in section if there is request for wordsand no block exists of exactly size mthen larger block must be used instead one possibility in this case is that the entire block is given away to the memory allocation request this might be desirable when the size of the block is only slightly larger than the request this is because saving tiny block that is too small to be useful for future memory request might not be worthwhile alternativelyfor free block of size kwith mup to space may be retained by the memory manager to form new free blockwhile the rest is used to service the request memory managers can suffer from two types of fragmentation external fragmentation occurs when series of memory requests result in lots of small free blocksno one of which is useful for servicing typical requests internal fragmentation occurs when more than words are allocated to request for wordswasting free storage this is equivalent to the internal fragmentation that occurs when files are allocated in multiples of the cluster size the difference between internal and external fragmentation is illustrated by figure some memory management schemes sacrifice space to internal fragmentation to make memory management easier (and perhaps reduce external fragmentationfor exampleexternal fragmentation does not happen in file management systems
that allocate file space in clusters another example of sacrificing space to internal fragmentation so as to simplify memory management is the buddy method described later in this section the process of searching the memory pool for block large enough to service the requestpossibly reserving the remaining space as free blockis referred to as sequential fit method sequential-fit methods sequential-fit methods attempt to find "goodblock to service storage request the three sequential-fit methods described here assume that the free blocks are organized into doubly linked listas illustrated by figure there are two basic approaches to implementing the freelist the simpler approach is to store the freelist separately from the memory pool in other wordsa simple linked-list implementation such as described in can be usedwhere each node of the linked list contains pointer to single free block in the memory pool this is fine if there is space available for the linked list itselfseparate from the memory pool the second approach to storing the freelist is more complicated but saves space because the free space is freeit can be used by the memory manager to help it do its jobthat isthe memory manager temporarily "borrowsspace within the free blocks to maintain its doubly linked list to do soeach unallocated block must be large enough to hold these pointers in additionit is usually worthwhile to let the memory manager add few bytes of space to each reserved block for its own purposes in other wordsa request for bytes of space might result in slightly more than bytes being allocated by the memory managerwith the extra bytes used by the memory manager itself rather than the requester we will assume that all memory blocks are organized as shown in figure with space for tags and linked list pointers herefree and reserved blocks are distinguished by tag bit at both the beginning and the end of the blockfor reasons that will be explained in additionboth free and reserved blocks have size indicator immediately after the tag bit at the beginning of the block to indicate how large the block is free blocks have second size indicator immediately preceding the tag bit at the end of the block finallyfree blocks have left and right pointers to their neighbors in the free block list the information fields associated with each block permit the memory manager to allocate and deallocate blocks as needed when request comes in for words of storagethe memory manager searches the linked list of free blocks until it finds "suitableblock for allocation how it determines which block is suitable will
chap lists and arrays revisited figure doubly linked list of free blocks as seen by the memory manager shaded areas represent allocated memory unshaded areas are part of the freelist tag sizellink rlink tag size size tag (atag (bfigure blocks as seen by the memory manager each block includes additional information such as freelist link pointersstart and end tagsand size field (athe layout for free block the beginning of the block contains the tag bit fieldthe block size fieldand two pointers for the freelist the end of the block contains second tag field and second block size field (ba reserved block of bytes the memory manager adds to these bytes an additional tag bit field and block size field at the beginning of the blockand second tag field at the end of the block be discussed below if the block contains exactly words (plus space for the tag and size fields)then it is removed from the freelist if the block (of size kis large enoughthen the remaining words are reserved as block on the freelistin the current location when block is freedit must be merged into the freelist if we do not care about merging adjacent free blocksthen this is simple insertion into the doubly linked list of free blocks howeverwe would like to merge adjacent blocksbecause this allows the memory manager to serve requests of the largest possible size merging is easily done due to the tag and size fields stored at the ends of each blockas illustrated by figure herethe memory manager first checks the unit of memory immediately preceding block to see if the preceding block (call it pis also free if it isthen the memory unit before ' tag bit stores the size of pthus indicating the position for the beginning of the block in memory can
sec memory management figure adding block to the freelist the word immediately preceding the start of in the memory pool stores the tag bit of the preceding block if is freemerge into we find the end of by using ' size field the word following the end of is the tag field for block if is freemerge it into then simply have its size extended to include block if block is not freethen we just add block to the freelist finallywe also check the bit following the end of block if this bit indicates that the following block (call it sis freethen is removed from the freelist and the size of is extended appropriately we now consider how "suitablefree block is selected to service memory request to illustrate the processassume there are four blocks on the freelist of sizes and (in that orderassume that request is made for units of storage for our exampleswe ignore the overhead imposed for the taglinkand size fields discussed above the simplest method for selecting block would be to move down the free block list until block of size at least is found any remaining space in this block is left on the freelist if we begin at the beginning of the list and work down to the first free block at least as large as we select the block of size because this approach selects the first block with enough spaceit is called first fit simple variation that will improve performance isinstead of always beginning at the head of the freelistremember the last position reached in the previous search and start from there when the end of the freelist is reachedsearch begins again at the head of the freelist this modification reduces the number of unnecessary searches through small blocks that were passed over by previous requests there is potential disadvantage to first fitit might "wastelarger blocks by breaking them upand so they will not be available for large requests later strategy that avoids using large blocks unnecessarily is called best fit best fit looks at the entire list and picks the smallest block that is at least as large as the request ( the "bestor closest fit to the requestcontinuing with the preceding examplethe best fit for request of units is the block of size leaving
chap lists and arrays revisited remainder of size best fit has the disadvantage that it requires that the entire list be searched another problem is that the remaining portion of the best-fit block is likely to be smalland thus useless for future requests in other wordsbest fit tends to maximize problems of external fragmentation while it minimizes the chance of not being able to service an occasional large request strategy contrary to best fit might make sense because it tends to minimize the effects of external fragmentation this is called worst fitwhich always allocates the largest block on the list hoping that the remainder of the block will be useful for servicing future request in our examplethe worst fit is the block of size leaving remainder of size if there are few unusually large requeststhis approach will have less chance of servicing them if requests generally tend to be of the same sizethen this might be an effective strategy like best fitworst fit requires searching the entire freelist at each memory request to find the largest block alternativelythe freelist can be ordered from largest to smallest free blockpossibly by using priority queue implementation which strategy is bestit depends on the expected types of memory requests if the requests are of widely ranging sizebest fit might work well if the requests tend to be of similar sizewith rare large and small requestsfirst or worst fit might work well unfortunatelythere are always request patterns that one of the three sequential fit methods will servicebut which the other two will not be able to service for exampleif the series of requests is made to freelist containing blocks (in that order)the requests can all be serviced by first fitbut not by best fit alternativelythe series of requests can be serviced by best fit but not by first fit on this same freelist buddy methods sequential-fit methods rely on linked list of free blockswhich must be searched for suitable block at each memory request thusthe time to find suitable free block would be th(nin the worst case for freelist containing blocks merging adjacent free blocks is somewhat complicated finallywe must either use additional space for the linked listor use space within the memory pool to support the memory manager operations in the second optionboth free and reserved blocks require tag and size fields fields in free blocks do not cost any space (because they are stored in memory that is not otherwise being used)but fields in reserved blocks create additional overhead the buddy system solves most of these problems searching for block of the proper size is efficientmerging adjacent free blocks is simpleand no tag or other information fields need be stored within reserved blocks the buddy system
assumes that memory is of size 2^N for some integer N. Both free and reserved blocks will always be of size 2^k for some k ≤ N. At any given time, there might be both free and reserved blocks of various sizes. The buddy system keeps a separate list for free blocks of each size. There can be at most N such lists, because there can only be N distinct block sizes.

When a request comes in for m words, we first determine the smallest value of k such that 2^k ≥ m. A block of size 2^k is selected from the free list for that block size if one exists. The buddy system does not worry about internal fragmentation: the entire block of size 2^k is allocated. If no block of size 2^k exists, the next larger block is located. This block is split in half (repeatedly if necessary) until the desired block of size 2^k is created. Any other blocks generated as a by-product of this splitting process are placed on the appropriate freelists.

The disadvantage of the buddy system is that it allows internal fragmentation. For example, a request for slightly more than 2^(k−1) words must still be serviced by an entire block of size 2^k, wasting nearly half of that block. The primary advantages of the buddy system are (1) there is less external fragmentation; (2) search for a block of the right size is cheaper than, say, best fit because we need only find the first available block on the block list for blocks of size 2^k; and (3) merging adjacent free blocks is easy.

The reason why this method is called the buddy system is because of the way that merging takes place. The buddy for any block of size 2^k is another block of the same size, and with the same address (i.e., the byte position in memory, read as a binary value) except that the kth bit is reversed. For example, a block of size 2^k beginning at some address has as its buddy the block of size 2^k whose address is identical except in that one bit, as the figure below illustrates. If free blocks are sorted by address value, the buddy can be found by searching the correct block size list. Merging simply requires that the address for the combined buddies be moved to the freelist for the next larger block size.

Other Memory Allocation Methods

In addition to sequential-fit and buddy methods, there are many ad hoc approaches to memory management. If the application is sufficiently complex, it might be desirable to break available memory into several memory zones, each with a different memory management scheme. For example, some zones might have a simple memory access pattern of first-in, first-out. Such a zone can therefore be managed efficiently by using a simple stack. Another zone might allocate only records of fixed size, and so can be managed with a simple freelist as described earlier. Other zones might need one of the general-purpose memory allocation methods
chap lists and arrays revisited buddies buddies buddies ( (bfigure example of the buddy system (ablocks of size (bblocks of size discussed in this section the advantage of zones is that some portions of memory can be managed more efficiently the disadvantage is that one zone might fill up while other zones have excess memory if the zone sizes are chosen poorly another approach to memory management is to impose standard size on all memory requests we have seen an example of this concept already in disk file managementwhere all files are allocated in multiples of the cluster size this approach leads to internal fragmentationbut managing files composed of clusters is easier than managing arbitrarily sized files the cluster scheme also allows us to relax the restriction that the memory request be serviced by contiguous block of memory most disk file managers and operating system main memory managers work on cluster or page system block management is usually done with buffer pool to allocate available blocks in main memory efficiently failure policies and garbage collection at some point during processinga memory manager could encounter request for memory that it cannot satisfy in some situationsthere might be nothing that can be donethere simply might not be enough free memory to service the requestand the application may require that the request be serviced immediately in this casethe memory manager has no option but to return an errorwhich could in turn lead to failure of the application program howeverin many cases there are alternatives to simply returning an error the possible options are referred to collectively as failure policies in some casesthere might be sufficient free memory to satisfy the requestbut it is scattered among small blocks this can happen when using sequential
sec memory management handle memory block figure using handles for dynamic memory management the memory manager returns the address of the handle in response to memory request the handle stores the address of the actual memory block in this waythe memory block might be moved (with its address updated in the handlewithout disrupting the application program fit memory allocation methodwhere external fragmentation has led to series of small blocks that collectively could service the request in this caseit might be possible to compact memory by moving the reserved blocks around so that the free space is collected into single block problem with this approach is that the application must somehow be able to deal with the fact that all of its data have now been moved to different locations if the application program relies on the absolute positions of the data in any waythis would be disastrous one approach for dealing with this problem is the use of handles handle is second level of indirection to memory location the memory allocation routine does not return pointer to the block of storagebut rather pointer to variable that in turn points to the storage this variable is the handle the handle never moves its positionbut the position of the block might be moved and the value of the handle updated figure illustrates the concept another failure policy that might work in some applications is to defer the memory request until sufficient memory becomes available for examplea multitasking operating system could adopt the strategy of not allowing process to run until there is sufficient memory available while such delay might be annoying to the userit is better than halting the entire system the assumption here is that other processes will eventually terminatefreeing memory another option might be to allocate more memory to the memory manager in zoned memory allocation system where the memory manager is part of larger systemthis might be viable option in java program that implements its own memory managerit might be possible to get more memory from the system-level new operatorsuch as is done by the freelist of section the last failure policy that we will consider is garbage collection consider the following series of statements
chap lists and arrays revisited freelist figure example of lisp list variablesincluding the system freelist int[ new int[ ]int[ new int[ ] qwhile in java this would be no problem (due to automatic garbage collection)in languages such as +this would be considered bad form because the original space allocated to is lost as result of the third assignment this space cannot be used again by the program such lost memory is referred to as garbagealso known as memory leak when no program variable points to block of spaceno future access to that space is possible of courseif another variable had first been assigned to point to ' spacethen reassigning would not create garbage some programming languages take different view towards garbage in particularthe lisp programming language uses the multilist representation of figure and all storage is in the form either of internal nodes with two pointers or atoms figure shows typical collection of lisp structuresheaded by variables named aband calong with freelist in lisplist objects are constantly being put together in various ways as temporary variablesand then all reference to them is lost when the object is no longer needed thusgarbage is normal in lispand in fact cannot be avoided during normal processing when lisp runs out of memoryit resorts to garbage collection process to recover the space tied up in garbage garbage collection consists of examining the managed memory pool to determine which parts are still being used
and which parts are garbage in particulara list is kept of all program variablesand any memory locations not reachable from one of these variables are considered to be garbage when the garbage collector executesall unused memory locations are placed in free store for future access this approach has the advantage that it allows for easy collection of garbage it has the disadvantagefrom user' point of viewthat every so often the system must halt while it performs garbage collection for examplegarbage collection is noticeable in the emacs text editorwhich is normally implemented in lisp occasionally the user must wait for moment while the memory management system performs garbage collection the java programming language also makes use of garbage collection as in lispit is common practice in java to allocate dynamic memory as neededand to later drop all references to that memory the garbage collector is responsible for reclaiming such unused space as necessary this might require extra time when running the programbut it makes life considerably easier for the programmer in contrastmany large applications written in +(even commonly used commercial softwarecontain memory leaks that will in time cause the program to fail several algorithms have been used for garbage collection one is the reference count algorithm hereevery dynamically allocated memory block includes space for count field whenever pointer is directed to memory blockthe reference count is increased whenever pointer is directed away from memory blockthe reference count is decreased if the count ever becomes zerothen the memory block is considered garbage and is immediately placed in free store this approach has the advantage that it does not require an explicit garbage collection phasebecause information is put in free store immediately when it becomes garbage the reference count algorithm is used by the unix file system files can have multiple namescalled links the file system keeps count of the number of links to each file whenever file is "deleted,in actuality its link field is simply reduced by one if there is another link to the filethen no space is recovered by the file system whenever the number of links goes to zerothe file' space becomes available for reuse reference counts have several major disadvantages firsta reference count must be maintained for each memory object this works well when the objects are largesuch as file howeverit will not work well in system such as lisp where the memory objects typically consist of two pointers or value (an atomanother major problem occurs when garbage contains cycles consider figure here each memory object is pointed to oncebut the collection of objects is still garbage because no pointer points to the collection thusreference counts only work when
figure: a garbage cycle example. all memory elements in the cycle have non-zero reference counts because each element has one pointer to it, even though the entire cycle is garbage.
another approach to garbage collection is the mark/sweep strategy. here, each memory object needs only a single mark bit rather than a reference counter field. when free store is exhausted, a separate garbage collection phase takes place as follows.
1. clear all mark bits.
2. perform depth-first search (dfs), following pointers from each variable on the system's list of variables. each memory element encountered during the dfs has its mark bit turned on.
3. a "sweep" is made through the memory pool, visiting all elements. unmarked elements are considered garbage and placed in free store.
the advantages of the mark/sweep approach are that it needs less space than is necessary for reference counts, and it works for cycles. however, there is a major disadvantage: a "hidden" space requirement needed to do the processing. dfs is a recursive algorithm; either it must be implemented recursively, in which case the compiler's runtime system maintains a stack, or else the memory manager can maintain its own stack. what happens if all memory is contained in a single linked list? then the depth of the recursion (or the size of the stack) is the number of memory cells! unfortunately, the space for the dfs stack must be available at the worst conceivable time, that is, when free memory has been exhausted.
fortunately, a clever technique allows dfs to be performed without requiring additional space for a stack. instead, the structure being traversed is used to hold the stack. at each step deeper into the traversal, instead of storing a pointer on the stack, we "borrow" the pointer being followed. this pointer is set to point back to the node we just came from in the previous step, as illustrated by figure . each borrowed pointer stores an additional bit to tell us whether we came down the left branch or the right branch of the link node being pointed to. at any given instant we have passed down only one path from the root, and we can follow the trail of pointers back up. as we return (equivalent to popping the recursion stack), we set the pointer back to its original position so as to return the structure to its original condition. this is known as the deutsch-schorr-waite garbage collection algorithm.
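as an illustration only, here is a minimal java sketch of the three mark/sweep steps over a pool of two-pointer nodes. the names MSNode, MarkSweep, and collect are assumptions made for this sketch, and the recursive mark method is the straightforward dfs whose hidden stack the deutsch-schorr-waite pointer-reversal technique is designed to avoid.
// a minimal sketch (assumed names, not from the text) of the mark/sweep
// strategy over a pool of two-pointer memory objects.
import java.util.ArrayList;
import java.util.List;

class MSNode {
  boolean mark;        // the single mark bit
  MSNode left, right;  // the two pointers of an internal node
}

class MarkSweep {
  List<MSNode> pool = new ArrayList<>();       // every allocated node
  List<MSNode> freeStore = new ArrayList<>();  // reclaimed nodes

  void collect(List<MSNode> programVariables) {
    for (MSNode n : pool) n.mark = false;             // step 1: clear all mark bits
    for (MSNode root : programVariables) mark(root);  // step 2: dfs from each variable
    List<MSNode> garbage = new ArrayList<>();         // step 3: sweep the pool
    for (MSNode n : pool) {
      if (!n.mark) garbage.add(n);                    // unmarked nodes are garbage
    }
    pool.removeAll(garbage);
    freeStore.addAll(garbage);                        // place garbage in free store
  }

  private void mark(MSNode n) {                       // dfs using the recursion stack
    if (n == null || n.mark) return;
    n.mark = true;
    mark(n.left);
    mark(n.right);
  }
}
the sketch makes the space problem easy to see: marking a single long linked list drives the recursion as deep as the number of nodes in the list, which is precisely what the pointer-reversal trick avoids.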
figure: example of the deutsch-schorr-waite garbage collection algorithm. (a) the initial multilist structure. (b) the multilist structure of (a) at the instant when a link node is being processed by the garbage collection algorithm. a chain of pointers stretching from variable prev to the head node of the structure has been (temporarily) created by the garbage collection algorithm.
further reading
an introductory text on operating systems covers many topics relating to memory management issues, including layout of files on disk and caching of information in main memory. all of the topics covered here on memory management, buffer pools, and paging are relevant to operating system implementation. for example, see operating systems by william stallings [sta]. for information on lisp, see the little lisper by friedman and felleisen [ff]. another good lisp reference is common lisp: the language by guy steele [ste]. for information on emacs, which is both an excellent text editor and a fully developed programming environment, see the gnu emacs manual by richard stallman [sta]. you can get more information about java's garbage collection system from the java programming language by ken arnold and james gosling [ag].
figure: some example multilists.
exercises
for each of the following bracket notation descriptions, draw the equivalent multilist in graphical form such as shown in figure .
(a) ⟨a, b, ⟨c, d, e⟩, ⟨f, ⟨g⟩, h⟩, i⟩
(b) habhcdl eil
(c) hl al hl bil hl ii
(a) show the bracket notation for the list of figure (a).
(b) show the bracket notation for the list of figure (b).
(c) show the bracket notation for the list of figure (c).
given the linked representation of a pure list such as ⟨x, ⟨y, ⟨z⟩, y⟩, ⟨w⟩, x⟩, write an in-place reversal algorithm to reverse the sublists at all levels, including the topmost level. for this example, the result would be a linked representation corresponding to ⟨x, ⟨w⟩, ⟨y, ⟨z⟩, y⟩, x⟩.
what fraction of the values in a matrix must be zero for the sparse matrix representation of section to be more space efficient than the standard two-dimensional matrix representation, when data values require eight bytes, array indices require two bytes, and pointers require four bytes?
write a function to add an element at a given position to the sparse matrix representation of section .
write a function to delete an element from a given position in the sparse matrix representation of section .
write a function to transpose a sparse matrix as represented in section .
write a function to add two sparse matrices as represented in section .
write memory manager allocation and deallocation routines for the situation where all requests and releases follow a last-requested, first-released (stack) order.
write memory manager allocation and deallocation routines for the situation where all requests and releases follow a last-requested, last-released (queue) order.
show the result of allocating the following blocks from a memory pool of size using first fit. for each series of block requests, state if a given request cannot be satisfied.
(a) take (call this block a), take, release a, take, take
(b) take (call this block a), take, release a, take, take
(c) take (call this block a), take, release a, take, take
show the result of allocating the following blocks from a memory pool of size using best fit. for each series of block requests, state if a given request cannot be satisfied.
(a) take (call this block a), take, release a, take, take
(b) take (call this block a), take, release a, take, take
(c) take (call this block a), take, release a, take, take
show the result of allocating the following blocks from a memory pool of size using worst fit. for each series of block requests, state if a given request cannot be satisfied.
(a) take (call this block a), take, release a, take, take
(b) take (call this block a), take, release a, take, take
(c) take (call this block a), take, release a, take, take
assume that the memory pool contains three blocks of free storage. their sizes are , , and . give examples of storage requests for which
(a) first-fit allocation will work, but not best fit or worst fit.
(b) best-fit allocation will work, but not first fit or worst fit.
(c) worst-fit allocation will work, but not first fit or best fit.
projects
implement the sparse matrix representation of section . your implementation should support the following operations on the matrix: insert an element at a given position, delete an element from a given position, return the value of the element at a given position, take the transpose of a matrix, and
add two matrices.
implement the memmanager adt shown at the beginning of section . use a separate linked list to implement the freelist. your implementation should work for any of the three sequential-fit methods: first fit, best fit, and worst fit. test your system empirically to determine under what conditions each method performs well.
implement the memmanager adt shown at the beginning of section . do not use separate memory for the free list, but instead embed the free list into the memory pool, as shown in figure . your implementation should work for any of the three sequential-fit methods: first fit, best fit, and worst fit. test your system empirically to determine under what conditions each method performs well.
implement the memmanager adt shown at the beginning of section , using the buddy method of section . your system should support requests for blocks of a specified size and release of previously requested blocks.
implement the deutsch-schorr-waite garbage collection algorithm that is illustrated by figure .