[Figure: a) before split, b) after split of a 4-node; c) before color flip, d) after color flip — a 4-node split and color flip]

Thus we see that splitting a 4-node during the insertion process in a 2-3-4 tree is equivalent to performing color flips during the insertion process in a red-black tree.

3-Nodes and Rotations

When a 3-node in a 2-3-4 tree is transformed into its red-black equivalent, two arrangements are possible, as we showed earlier. Either of the two data items can become the parent. Depending on which one is chosen, the child will be either a left child or a right child, and the slant of the line connecting parent and child will be either left or right. Both arrangements are valid; however, they may not contribute equally to balancing the tree. Let's look at the situation in a slightly larger context.

The figure shows a 2-3-4 tree, and two equivalent red-black trees derived from it by applying the transformation rules. The difference between them is the choice of which of the two data items in the 3-node to make the parent: in b) one of the items is the parent; in c) it's the other.
2-3-4 Trees and External Storage

[Figure: a) a 2-3-4 tree; b) left slant (color change); c) right slant (color change and rotation) — a 3-node and rotation]

Although these arrangements are equally valid, you can see that the tree in b) is not balanced, while the one in c) is. Given the red-black tree in b), we would want to rotate it to the right (and perform two color changes) to balance it. Amazingly, this rotation results in the exact same tree shown in c). Thus we see an equivalence between rotations in red-black trees and the choice of which node to make the parent when transforming 2-3-4 trees into red-black trees. Although we don't show it, a similar equivalence can be seen for the double rotation necessary for inside grandchildren.
Efficiency of 2-3-4 Trees

It's harder to analyze the efficiency of a 2-3-4 tree than of a red-black tree, but the equivalence of red-black trees and 2-3-4 trees gives us a starting point.

Speed

As we saw earlier, in a red-black tree one node on each level must be visited during a search, whether to find an existing node or to insert a new one. The number of levels in a red-black tree (a balanced binary tree) is about log2(N+1), so search times are proportional to this.

One node must be visited at each level in a 2-3-4 tree as well, but the 2-3-4 tree is shorter (has fewer levels) than a red-black tree with the same number of data items. Refer to the earlier figure, where the 2-3-4 tree has three levels and the red-black tree has five. More specifically, in 2-3-4 trees there are up to four children per node. If every node were full, the height of the tree would be proportional to log4 N. Logarithms to the base 2 and to the base 4 differ by a constant factor of 2. Thus, the height of a 2-3-4 tree would be about half that of a red-black tree, provided that all the nodes were full. Because they aren't all full, the height of the 2-3-4 tree is somewhere between log2(N+1) and log2(N+1)/2.

The reduced height of the 2-3-4 tree decreases search times slightly compared with red-black trees. On the other hand, there are more items to examine in each node, which increases the search time. Because the data items in the node are examined using a linear search, this multiplies the search times by an amount proportional to M, the average number of items per node. The result is a search time proportional to M*log4 N.

Some nodes contain one item, some two, and some three. If we estimate that the average is two, search times will be proportional to 2*log4 N. This is a small constant number that can be ignored in Big O notation. Thus, for 2-3-4 trees the increased number of items per node tends to cancel out the decreased height of the tree. The search times for a 2-3-4 tree and for a balanced binary tree such as a red-black tree are approximately equal, and both are O(log N).

Storage Requirements

Each node in a 2-3-4 tree contains storage for three references to data items and four references to its
children. This space may be in the form of arrays, as shown in the tree234.java program, or of individual variables. Not all this storage is used. A node with only one data item will waste 2/3 of the space for data and 1/2 of the space for children. A node with two data items will waste 1/3 of the space for data and 1/4 of the space for children; or, put another way, it will use 5/7 of the available space.
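As a rough sketch of this array-based layout (the class and method names here are our own illustration, not necessarily those used in the book's tree234.java listing), a node holds three data slots and four child slots, and inserting into a non-full node shifts larger items right:

```java
// Sketch of a 2-3-4 tree node using arrays, as described above.
// Names are illustrative, not taken from any particular listing.
class DataItem {
    long key;
    DataItem(long key) { this.key = key; }
}

class Node234 {
    private static final int ORDER = 4;
    private int numItems;
    private final DataItem[] items = new DataItem[ORDER - 1]; // 3 data slots
    private final Node234[] children = new Node234[ORDER];    // 4 child slots

    boolean isFull()  { return numItems == ORDER - 1; }
    boolean isLeaf()  { return children[0] == null; }
    int getNumItems() { return numItems; }
    DataItem getItem(int index) { return items[index]; }

    // Insert into this node's sorted item array (node assumed not full).
    void insertItem(DataItem newItem) {
        int j = numItems - 1;
        while (j >= 0 && newItem.key < items[j].key) {
            items[j + 1] = items[j];   // shift larger items one slot right
            j--;
        }
        items[j + 1] = newItem;
        numItems++;
    }
}
```

The unused slots in a one-item or two-item node are exactly the wasted space tallied above.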
If we take two data items per node as the average utilization, about 2/7 of the available storage is wasted. You might imagine using linked lists instead of arrays to hold the child and data references, but the overhead of a linked list compared with an array, for only three or four items, would probably not make this a worthwhile approach.

Because they're balanced, red-black trees contain few nodes that have only one child, so almost all the storage for child references is used. Also, every node contains the maximum number of data items, which is one. This makes red-black trees more efficient than 2-3-4 trees in terms of memory usage. In Java, which stores references to objects instead of the objects themselves, this difference in storage may not be important, and the programming is certainly simpler for 2-3-4 trees. However, in languages that don't use references this way, the difference in storage efficiency between red-black trees and 2-3-4 trees may be significant.

2-3 Trees

We'll discuss 2-3 trees briefly here because they are historically important and because they are still used in many applications. Also, some of the techniques used with 2-3 trees are applicable to B-trees, which we'll examine in the next section. Finally, it's interesting to see how a small change in the number of children per node can cause a large change in the tree's algorithms.

2-3 trees are similar to 2-3-4 trees except that, as you might have guessed from the name, they hold one less data item and have one less child. They were the first multiway tree, invented by Hopcroft in 1970. B-trees (of which the 2-3 tree is a special case) were not invented until 1972.

In many respects the operation of 2-3 trees is similar to that of 2-3-4 trees. Nodes can hold one or two data items and can have zero, one, two, or three children. Otherwise, the arrangement of the key values of the parent and its children is the same. Inserting a data item into a node is potentially simplified because fewer comparisons and moves are necessary. As in 2-3-4 trees, all
insertions are made into leaf nodes, and all leaf nodes are on the bottom level.

Node Splits

Searching for an existing data item is handled just as it is in a 2-3-4 tree, except for the number of data items and children. You might guess that insertion is also similar to a 2-3-4 tree, but there is a surprising difference in the way splits are handled. Here's why the splits are so different. In either kind of tree a node split requires three data items: one to be kept in the node being split, one to move right into the new
node, and one to move up to the parent node. A full node in a 2-3-4 tree has three data items, which are moved to these three destinations. However, a full node in a 2-3 tree has only two data items. Where can we get a third item? We must use the new item, the one being inserted in the tree.

In a 2-3-4 tree the new item is inserted after all the splits have taken place. In the 2-3 tree it must participate in the split. It must be inserted in a leaf, so no splits are possible on the way down. If the leaf node where the new item should be inserted is not full, the new item can be inserted immediately; but if the leaf node is full, it must be split. Its two items and the new item are distributed among three nodes: the existing node, the new node, and the parent node. If the parent is not full, the operation is complete (after connecting the new node). [Figure: insertion with a non-full parent]

However, if the parent is full, it too must be split. Its two items and the item passed up from its recently split child must be distributed among the parent, a new sibling of the parent, and the parent's parent. [Figure: insertion with a full parent]

If the parent's parent (the grandparent of the leaf node) is full, it too must be split. The splitting process ripples upward until either a non-full parent or the root is encountered. If the root is full, a new root is created that is the parent of the old root, as shown in the figure.
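The distribution step just described, in which the leaf's two items and the new item are sent to three destinations, can be illustrated in isolation. This is our own sketch, not a complete 2-3 tree implementation:

```java
import java.util.Arrays;

// Illustrates the distribution step of a 2-3 tree leaf split described
// above: the two items in the full leaf plus the new item are sorted,
// then sent to three destinations.
public class TwoThreeSplit {
    // Returns { item kept in old node, item promoted to parent,
    //           item moved to the new right node }
    static long[] distribute(long leafItem0, long leafItem1, long newKey) {
        long[] three = { leafItem0, leafItem1, newKey };
        Arrays.sort(three);
        // smallest stays, middle goes up, largest goes right
        return three;
    }

    public static void main(String[] args) {
        // a leaf holding 30 and 70 receives 50: 50 is promoted
        System.out.println(Arrays.toString(distribute(30, 70, 50))); // [30, 50, 70]
    }
}
```

Note that the new item may end up in any of the three destinations, depending on how its key compares with the two already in the leaf.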
[Figure: splitting the root]

Another figure shows a node split that ripples up through a 2-3 tree until it reaches the root. [Figure: splits rippling up a tree]

Implementation

We'll leave a complete Java implementation of a 2-3 tree as an exercise. However, we'll finish with some hints on how to handle splits. This is only one approach; another involves allowing each node to hold a phantom fourth child.

On the way down, the insertion routine doesn't care whether the nodes it encounters are full or not. It searches down through the tree until it finds the appropriate leaf. If the leaf is not full, it inserts the new value. However, if the leaf is full, it must rearrange the tree to make room. To do this, it calls a split() method. Arguments to this
method can be the full leaf node and the new item. It will be the responsibility of split() to make the split and insert the new item in the new leaf. If split() finds that the leaf's parent is full, it calls itself recursively to split the parent. It keeps calling itself until a non-full parent or the root is found. The return value of split() is the new right node, which can be used by the previous incarnation of split().

Coding the splitting process is complicated by several factors. In a 2-3-4 tree the three items to be distributed are already sorted, but in the 2-3 tree the new item's key must be compared with the two items in the leaf; the three are then distributed according to the results of the comparison. Also, splitting the parent creates a second parent, so now we have a left (the original) parent and a new right parent. We need to change the connections from a single parent with three children to two parents with two children each. There are three cases, depending on which child (0, 1, or 2) is being split. [Figure: connecting the children when child 0, child 1, or child 2 is split]
In this figure the new nodes created as the result of a split are shaded, and new connections are shown as wiggly lines.

External Storage

2-3-4 trees are examples of multiway trees, which have more than two children and more than one data item. Another kind of multiway tree, the B-tree, is useful when data resides in external storage. External storage typically refers to some kind of disk system, such as the hard disk found in most desktop computers or servers.

In this section we'll begin by describing various aspects of external file handling. We'll talk about a simple approach to organizing external data: sequential ordering. Then we'll discuss B-trees and explain why they work so well with disk files. We'll finish with another approach to external storage, indexing, which can be used alone or with a B-tree. We'll also touch on other aspects of external storage, such as searching techniques. In the next chapter we'll mention a different approach to external storage: hashing.

The details of external storage techniques depend on the operating system, the language, and even the hardware used in a particular installation. As a consequence, our discussion in this section will be considerably more general than for most topics in this book.

Accessing External Data

The data structures we've discussed so far are all based on the assumption that data is stored entirely in main memory (often called RAM, for random access memory). However, in many situations the amount of data to be processed is too large to fit in main memory all at once. In this case a different kind of storage is necessary. Disk files generally have a much larger capacity than main memory; this is made possible by their lower cost per byte of storage.

Of course, disk files have another advantage: their permanence. When you turn off your computer (or the power fails), the data in main memory is lost. Disk files can retain data indefinitely with the power off. However, it's mostly the size difference that we'll be concerned with here. The disadvantage of
external storage is that it's much slower than main memory. This speed difference means that different techniques must be used to handle it efficiently.

As an example of external storage, imagine that you're writing a database program to handle the data found in the phone book for a medium-sized city: perhaps 500,000 entries. Each entry includes a name, address, phone number, and various other data
used internally by the phone company. Let's say an entry is stored as a record requiring 512 bytes. The result is a file size of 500,000 times 512, which is 256,000,000 bytes, or 256 megabytes. We'll assume that on the target machine this is too large to fit in main memory but small enough to fit on your disk drive.

Thus, you have a large amount of data on your disk drive. How do you structure it to provide the usual desirable characteristics: quick search, insertion, and deletion? In investigating the answers, you must keep in mind two facts. First, accessing data on a disk drive is much slower than accessing it in main memory. Second, you must access many records at once. Let's explore these points.

Very Slow Access

A computer's main memory works electronically. Any byte can be accessed just as fast as any other byte, in a fraction of a microsecond (a millionth of a second).

Things are more complicated with disk drives. Data is arranged in circular tracks on a spinning disk, something like the tracks on a compact disc (CD) or the grooves in an old-style phonograph record. To access a particular piece of data on a disk drive, the read-write head must first be moved to the correct track. This is done with a stepping motor or similar device; it's a mechanical activity that requires several milliseconds (thousandths of a second).

Once the correct track is found, the read-write head must wait for the data to rotate into position. On the average, this takes half a revolution. Even if the disk is spinning at 10,000 revolutions per minute, a few more milliseconds pass before the data can be read. Once the read-write head is positioned, the actual reading (or writing) process begins; this might take a few more milliseconds.

Thus, disk access times of around 10 milliseconds are common. This is something like 100,000 times slower than main memory. Technological progress is reducing disk access times every year, but main memory access times are being reduced faster, so the disparity between disk access and main memory access times will grow even larger in the future.

One Block at a Time

When the read-write head is
correctly positioned and the reading (or writing) process begins, the drive can transfer a large amount of data to main memory fairly quickly. For this reason, and to simplify the drive control mechanism, data is stored on the disk in chunks called blocks, pages, allocation units, or some other name, depending on the system. We'll call them blocks.

The disk drive always reads or writes a minimum of one block of data at a time. Block size varies, depending on the operating system, the size of the disk drive, and other factors, but it is usually a power of 2. For our phone book example, let's
assume a block size of 8,192 bytes (2^13). Thus, our phone book database will require 256,000,000 bytes divided by 8,192 bytes per block, which is 31,250 blocks.

Your software is most efficient when it specifies a read or write operation that's a multiple of the block size. If you ask to read fewer bytes than a block, the system will still read one entire block, 8,192 bytes, and throw away what you didn't ask for. Or if you ask to read slightly more than one block, it will read two blocks, or 16,384 bytes, and throw away almost half of them. By organizing your software so that it works with a block of data at a time, you can optimize its performance.

Assuming our phone book record size of 512 bytes, you can store 16 records in a block (8,192 divided by 512). Thus, for maximum efficiency it's important to read 16 records at a time (or multiples of this number). [Figure: blocks and records. A 256,000,000-byte file is divided into 8,192-byte blocks, each holding 16 records of 512 bytes (last name, first name, address, phone number, and so on).]

Notice that it's also useful to make your record size a power of 2. That way, an integral number of records will always fit in a block. Of course, the sizes shown in our phone book example for records, blocks, and so on are only illustrative; they will vary widely depending on the number and size of records and other software and hardware constraints. Blocks containing hundreds of records are common, and records may be much larger or smaller than 512 bytes.
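The block arithmetic above can be checked in a few lines; the values are those of the running example:

```java
// Reproduces the block arithmetic of the phone book example above.
public class BlockMath {
    public static void main(String[] args) {
        long numRecords = 500_000;
        long recordSize = 512;     // bytes per record
        long blockSize  = 8_192;   // bytes per block (2^13)

        long fileSize = numRecords * recordSize;
        System.out.println("file size:     " + fileSize);              // 256000000
        System.out.println("blocks:        " + fileSize / blockSize);  // 31250
        System.out.println("records/block: " + blockSize / recordSize); // 16
    }
}
```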
Once the read-write head is positioned as described earlier, reading a block is fairly fast, requiring only a few milliseconds. Thus, a disk access to read or write a block is not very dependent on the size of the block. It follows that the larger the block, the more efficiently you can read or write a single record (assuming you use all the records in the block).

Sequential Ordering

One way to arrange the phone book data in the disk file would be to order all the records according to some key, say alphabetically by last name. The record for Joseph Aardvark would come first, and so on. [Figure: sequential ordering. Records for Aardvark, Aaron, Abbot, Able, Abrams, Abrell, and so on, arranged in blocks.]

Searching

To search a sequentially ordered file for a particular last name such as Smith, you could use a binary search. You would start by reading a block of records from the middle of the file. The records in the block are all read at once into an 8,192-byte buffer in main memory. If the keys of these records are too early in the alphabet (Keller, for example), you would go to the 3/4 point in the file (Prince) and read a block there; if the keys were too late, you'd go to the 1/4 point (DeLeon). By continually dividing the range in half, you would eventually find the record you were looking for.

As we saw earlier, a binary search in main memory takes log2 N comparisons, which for 500,000 items would be about 19. If every comparison took, say, 10 microseconds, this would be 190 microseconds, or about 2/10,000 of a second: less than an eye blink. However, we're now dealing with data stored on a disk. Because each disk access is so time-consuming, it's more important to focus on how many disk accesses are necessary than on how many individual records there are. The time to read a block of records will be very much larger than the time to search the 16 records in the block once they're in memory. Disk accesses are much slower than memory accesses, but on the other hand we access a block at a time, and there are far fewer blocks than records.
In our example there are 31,250 blocks. Log2 of this number is about 15, so in theory we'll need about 15 disk accesses to find the record we want.
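The block-at-a-time binary search just described can be simulated, with the disk faked as an in-memory array and a counter standing in for disk accesses. This is our own sketch, and the file here is much smaller than the phone book example:

```java
// Simulates a block-at-a-time binary search over a sorted "file",
// counting block reads (the expensive disk accesses). The disk is
// faked as an in-memory array; keys and sizes are illustrative.
public class BlockBinarySearch {
    static int diskAccesses = 0;
    static final int RECORDS_PER_BLOCK = 16;

    // Each "record" is just its key; block b holds the b-th run of 16 keys.
    static long[] readBlock(long[] file, int blockNum) {
        diskAccesses++;                 // one slow disk access per block read
        long[] block = new long[RECORDS_PER_BLOCK];
        System.arraycopy(file, blockNum * RECORDS_PER_BLOCK,
                         block, 0, RECORDS_PER_BLOCK);
        return block;
    }

    static boolean search(long[] file, long key) {
        int lo = 0, hi = file.length / RECORDS_PER_BLOCK - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            long[] block = readBlock(file, mid);
            if (key < block[0])                          hi = mid - 1;
            else if (key > block[RECORDS_PER_BLOCK - 1]) lo = mid + 1;
            else {                       // key falls inside this block's range
                for (long k : block) if (k == key) return true;
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int numBlocks = 2048;            // a small "file" of 2,048 blocks
        long[] file = new long[numBlocks * RECORDS_PER_BLOCK];
        for (int i = 0; i < file.length; i++) file[i] = 2L * i; // sorted keys
        boolean found = search(file, 40_000L);
        System.out.println(found + ", disk accesses: " + diskAccesses);
    }
}
```

With 2,048 blocks the search needs on the order of log2(2,048) = 11 block reads, mirroring the 15-access estimate for 31,250 blocks.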
In practice this number is reduced somewhat because we read 16 records at once. In the beginning stages of a binary search, it doesn't help to have multiple records in memory, because the next access will be in a distant part of the file. However, when we get close to the desired record, the next record we want may already be in memory because it's part of the same block of 16. This may reduce the number of comparisons by two or so. Thus, we'll need about 13 disk accesses (15 - 2), which at 10 milliseconds per access requires about 130 milliseconds, or about 1/7 of a second. This is much slower than in-memory access, but still not too bad.

Insertion

Unfortunately, the picture is much worse if we want to insert (or delete) an item in a sequentially ordered file. Because the data is ordered, both operations require moving half the records on the average, and therefore about half the blocks. Moving each block requires two disk accesses: one read and one write. When the insertion point is found, the block containing it is read into a memory buffer. The last record in the block is saved, and the appropriate number of records are shifted up to make room for the new one, which is inserted. Then the buffer contents are written back to the disk file.

Next, the second block is read into the buffer. Its last record is saved, all the other records are shifted up, and the last record from the previous block is inserted at the beginning of the buffer. Then the buffer contents are again written back to disk. This process continues until all the blocks beyond the insertion point have been rewritten.

Assuming there are 31,250 blocks, we must read and write (on the average) 15,625 of them, which at 10 milliseconds per read and write requires more than 5 minutes to insert a single entry. This won't be satisfactory if you have thousands of new names to add to the phone book.

Another problem with sequential ordering is that it works quickly for only one key. Our file is arranged by last names. But suppose you wanted to search for a particular phone number. You can't
use a binary search, because the data is ordered by name. You would need to go through the entire file, block by block, using sequential access. This search would require reading an average of half the blocks, which would take several minutes: very poor performance for a simple search. It would be nice to have a more efficient way to store disk data.

B-Trees

How can the records of a file be arranged to provide fast search, insertion, and deletion times? We've seen that trees are a good approach to organizing in-memory data. Will trees work with files?
They will, but a different kind of tree must be used for external data than for in-memory data. The appropriate tree is a multiway tree somewhat like a 2-3-4 tree, but with many more data items per node; it's called a B-tree. B-trees were first conceived as appropriate structures for external storage by Bayer and McCreight in 1972. (Strictly speaking, 2-3 trees and 2-3-4 trees are B-trees of order 3 and 4, respectively, but the term B-tree is often taken to mean a tree with many more children per node.)

One Block Per Node

Why do we need so many items per node? We've seen that disk access is most efficient when data is read or written one block at a time. In a tree, the entity containing data is a node. It makes sense, then, to store an entire block of data in each node of the tree. This way, reading a node accesses the maximum amount of data in the shortest time.

How much data can be put in a node? When we simply stored the 512-byte data records for our phone book example, we could fit 16 into an 8,192-byte block. In a tree, however, we also need to store the links to other nodes (which means links to other blocks, because a node corresponds to a block). In an in-memory tree, such as those we've discussed previously, these links are references (or pointers, in languages like C++) to nodes in other parts of memory. For a tree stored in a disk file, the links are block numbers in the file (from 0 to 31,249 in our phone book example). For block numbers we can use a field of type int, a 4-byte type, which can point to more than 2 billion possible blocks: probably enough for most files.

Now we can no longer squeeze 16 512-byte records into a block, because we need room for the links to child nodes. We could reduce the number of records to 15 to make room for the links, but it's most efficient to have an even number of records per node, so (after appropriate negotiation with management) we reduce the record size to 507 bytes. There will be 17 child links (one more than the number of data items), so the links will require 68 bytes (17 times 4). This leaves room for 16 507-byte records with 12 bytes left over (8,192 - 8,112 - 68). A block in such a tree, and the corresponding node
representation, is shown in the figure. Within each node the data is ordered sequentially by key, as in a 2-3-4 tree. In fact, the structure of a B-tree is similar to that of a 2-3-4 tree, except that there are more data items per node and more links to children. The order of a B-tree is the number of children each node can potentially have. In our example this is 17, so the tree is an order 17 B-tree.

Searching

A search for a record with a specified key is carried out in much the same way as in an in-memory 2-3-4 tree. First, the block containing the root is read into memory. The search algorithm then starts examining each of the 16 records (or, if the node is not full,
as many as the node actually holds), starting at 0. When it finds a record with a greater key, it knows to go to the child whose link lies between this record and the preceding one. [Figure: a node in a B-tree of order 17, holding records such as Smith, Smoot, Smyth, and Snell, with block numbers linking to children]

This process continues until the correct node is found. If a leaf is reached without finding the specified key, the search is unsuccessful.

Insertion

The insertion process in a B-tree is more like an insertion in a 2-3 tree than in a 2-3-4 tree. Recall that in a 2-3-4 tree many nodes are not full, and in fact may contain only one data item. In particular, a node split always produces two nodes with one item in each. This is not an optimum approach in a B-tree.

In a B-tree it's important to keep the nodes as full as possible so that each disk access, which reads an entire node, can acquire the maximum amount of data. To help achieve this end, the insertion process differs from that of a 2-3-4 tree in three ways. First, a node split divides the data items equally: half go to the newly created node, and half remain in the old one. Second, node splits are performed from the bottom up, as in a 2-3 tree, rather than from the top down.
Third, again as in a 2-3 tree, it's not the middle item in a node that's promoted upward, but the middle item in the sequence formed from the items in the node plus the new item.

We'll demonstrate these features of the insertion process by building a small B-tree, as shown in the figure. There isn't room to show a realistic number of records per node, so we'll show only four; thus, the tree is an order 5 B-tree.

The figure begins with a root node that's already full; four items have already been inserted into the tree. A new data item is then inserted, resulting in a node split. Here's how the split is accomplished. Because it's the root that's being split, two new nodes are created (as in a 2-3-4 tree): a new root and a new node to the right of the one being split.

To decide where the data items go, the insertion algorithm arranges their five keys in order, in an internal buffer. Four of these keys are from the node being split, and the fifth is from the new item being inserted. In the figure, these five-item sequences are shown to the side of the tree. The center item in the sequence is promoted to the new root node. (In the figure, an arrow indicates that the center item will go upward.) All the items to the left of center remain in the node being split, and all the items to the right go into the new right-hand node. (In our phone book example, eight items would go into each child node, rather than the two shown in the figure.)

Next, two more items are inserted; they fill up the left child. The next item to be inserted splits this left child, and its middle item is promoted upward into the root. Then three more items are inserted into the tree: the first two fill up the third child, and the third splits it, causing the creation of a new node and the promotion of the middle item to the root. Again three items are added to the tree. The first two fill up the second child, and the third
one splits it, causing the creation of a new node and the promotion of the middle item to the root, as shown in the figure. Now the root is full. However, subsequent insertions don't necessarily cause a node split, because nodes are split only when a new item is inserted into a full node, not when a full node is encountered in the search down the tree. Thus, the next two items are inserted in the second child without causing any splits, as shown in the figure.
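Before continuing, the node layout and the in-node linear search described in the Searching section can be sketched as follows. The field names are our own; the essential point is that child links are block numbers, not memory references:

```java
// Sketch of a B-tree node as described above: up to 16 keys and 17
// child links stored as *block numbers* within the disk file.
// (A real node would also hold the records' data; a real search
// would first check whether the key itself is in this node.)
class BTreeNode {
    static final int ORDER = 17;
    int numItems;
    long[] keys = new long[ORDER - 1];   // up to 16 keys, kept sorted
    int[] childBlocks = new int[ORDER];  // 17 links: block numbers, 0..31,249

    // Linear search within the node: returns the index of the child
    // block to follow for a key that is not in this node. The chosen
    // child's subtree lies between keys[i-1] and keys[i].
    int findChildIndex(long searchKey) {
        int i = 0;
        while (i < numItems && searchKey > keys[i])
            i++;
        return i;
    }
}
```

The caller would then read the block numbered childBlocks[i] from disk and repeat, one disk access per level.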
[Figure: building a B-tree]
However, the next item to be inserted does cause a split; in fact, it causes two of them. The second child node is full, so it's split, as shown in the figure. However, the item promoted from this split has no place to go, because the root is full. Therefore, the root must be split as well, resulting in the arrangement shown in the figure. Notice that throughout the insertion process no node (except the root) is ever less than half full, and many are more than half full. As we noted, this promotes efficiency, because a file access that reads a node always acquires a substantial amount of data.

Efficiency of B-Trees

Because there are so many records per node, and so many nodes per level, operations on B-trees are very fast, considering that the data is stored on disk. In our phone book example there are 500,000 records. All the nodes in the B-tree are at least half full, so they contain at least 8 records and links to 9 children. The height of the tree is thus somewhat less than log9 N (logarithm to the base 9 of N), where N is 500,000. This is about 6, so there will be about six levels in the tree.
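The level counts quoted in this section can be checked directly. This small sketch (our own illustration) evaluates the relevant logarithms for N = 500,000:

```java
// Compares tree heights for N = 500,000 items: a binary tree (base 2),
// a 2-3-4 tree with full nodes (base 4), and a half-full order 17
// B-tree (base 9), as discussed in the surrounding text.
public class TreeLevels {
    static double logBase(double base, double n) {
        return Math.log(n) / Math.log(base);
    }
    public static void main(String[] args) {
        double n = 500_000;
        System.out.printf("binary tree: %.0f levels%n", Math.ceil(logBase(2, n))); // 19
        System.out.printf("2-3-4 tree:  %.0f levels%n", Math.ceil(logBase(4, n))); // 10
        System.out.printf("B-tree:      %.0f levels%n", Math.ceil(logBase(9, n))); // 6
    }
}
```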
Thus, using a B-tree, only six disk accesses are necessary to find any record in a file of 500,000 records. At 10 milliseconds per access, this takes about 60 milliseconds, or 6/100 of a second. This is dramatically faster than the binary search of a sequentially ordered file.

The more records there are in a node, the fewer levels there are in the tree. We've seen that there are 6 levels in our B-tree, even though the nodes hold only 16 records. In contrast, a binary tree with 500,000 items would have about 19 levels, and a 2-3-4 tree would have about 10. If we use blocks with hundreds of records, we can reduce the number of levels in the tree and further improve access times.

Although searching is faster in B-trees than in sequentially ordered disk files, it's for insertion and deletion that B-trees show the greatest advantage.

Let's first consider a B-tree insertion in which no nodes need to be split. This is the most likely scenario, because of the large number of records per node. In our phone book example, as we've seen, only 6 accesses are required to find the insertion point. Then one more access is required to write the block containing the newly inserted record back to the disk, for a total of 7 accesses.

Next let's see how things look if a node must be split. The node being split must be read, have half its records removed, and be written back to disk. The newly created node must be written to the disk, and the parent must be read and, following the insertion of the promoted record, written back to disk. This is 5 accesses in addition to the 6 necessary to find the insertion point, for a total of 11. This is a major improvement over the tens of thousands of accesses required for insertion in a sequential file.

In some versions of the B-tree, only leaf nodes contain records. Non-leaf nodes contain only keys and block numbers. This may result in faster operation because each block can hold many more block numbers. The resulting higher-order tree will have fewer levels, and access speed will be increased. However, programming may be complicated because there are two kinds of nodes: leaves and
non-leaves.

Indexing

A different approach to speeding up file access is to store records in sequential order but use a file index along with the data itself. A file index is a list of key/block pairs, arranged with the keys in order. Recall that in our original phone book example we had 500,000 records of 512 bytes each, stored 16 records to a block, in 31,250 blocks. Assuming our search key is the last name, every entry in the index contains two items: the key, like Jones, and the number of the block where the Jones record is located within the file. These block numbers run from 0 to 31,249.
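Such an index held in memory can be sketched with a few lines of Java. Here we store the key/block pairs in the library's TreeMap (a red-black tree) rather than in a sequential array; the names and block numbers are illustrative:

```java
import java.util.TreeMap;

// Sketch of an in-memory file index: key/block pairs kept in key
// order. A TreeMap (a red-black tree) avoids the cost of shifting
// entries that a sequential array would incur on each insertion.
public class FileIndex {
    // last name -> number of the block holding the full record
    private final TreeMap<String, Integer> index = new TreeMap<>();

    void add(String lastName, int blockNumber) {
        index.put(lastName, blockNumber);
    }

    // Returns the block to read from disk, or -1 if the key is absent.
    int findBlock(String lastName) {
        Integer block = index.get(lastName);
        return (block == null) ? -1 : block;
    }

    public static void main(String[] args) {
        FileIndex idx = new FileIndex();
        idx.add("Jones", 27);        // say the Jones record sits in block 27
        idx.add("Aardvark", 0);
        idx.add("Smith", 31_249);
        System.out.println(idx.findBlock("Jones"));  // prints 27
    }
}
```

Only the final disk read, fetching the block whose number the lookup returns, touches external storage.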
Let's say we use a string 28 bytes long for the key (big enough for most last names) and 4 bytes for the block number (a type int in Java). Each entry in our index thus requires 32 bytes. This is only 1/16 the amount necessary for each record.

The entries in the index are arranged sequentially by last name. The original records on the disk can be arranged in any convenient order. This usually means that new records are simply appended to the end of the file, so the records are ordered by time of insertion. [Figure: a file index. Index entries (Jones, Jordan, Joslyn, Joyce, Jung, and so on) pair a 28-byte key with a 4-byte block number, while the records themselves (Jones, Benson, Lloyd, Keller, Parker, Abbot, Duncan, and so on) sit in the main file in insertion order.]

Index File in Memory

The index is much smaller than the file containing the actual records. It may even be small enough to fit entirely in main memory. In our example there are 500,000 records. Each one has a 32-byte entry in the index, so the index will be 500,000 times 32, or 16,000,000 bytes long (16 megabytes). In modern computers there's no problem fitting this in memory. The index can be stored on the disk but read into memory whenever the database program is started up. From then on, operations on the index can take place in memory. At the end of each day (or perhaps more frequently), the index can be written back to disk for permanent storage.

Searching

The index-in-memory approach allows much faster operations on the phone book file than are possible with a file in which the records themselves are arranged sequentially. For example, a binary search requires about 19 index accesses. At 10 microseconds per access, that's only about 2/10,000 of a second. Then there's (inevitably) the
23,019 | trees and external storage time to read the actual record from the fileonce its block number has been found in the index howeverthis is only one disk access of (say milliseconds insertion to insert new item in an indexed filetwo steps are necessary we first insert the item' full record into the main filethen we insert an entryconsisting of the key and the block number where the new record is storedinto the index because the index is in sequential orderto insert new itemwe need to move half the index entrieson the average figuring microseconds to move byte in memorywe have , times times or about seconds to insert new entry this compares with minutes for the unindexed sequential file (note that we don' need to move any records in the main filewe simply append the new record at the end of the file of courseyou can use more sophisticated approach to storing the index in memory you could store it as binary treetreeor red-black treefor example any of these would significantly reduce insertion and deletion times in any case the index-in-memory approach is much faster than the sequential-file approach in some cases it will also be faster than -tree the only actual disk accesses necessary for an insertion into an indexed file involve the new record itself usuallythe last block in the file is read into memorythe new record is appendedand the block is written back out this process involves only two file accesses multiple indexes an advantage of the indexed approach is that multiple indexeseach with different keycan be created for the same file in one index the keys can be last namesin anothertelephone numbersin anotheraddresses because the indexes are small compared with the filethis doesn' increase the total data storage very much of courseit does present more of challenge when items are deleted from the file because entries must be deleted from all the indexesbut we won' get into that here index too large for memory if the index is too large to fit in memoryit must 
be broken into blocks and stored on the disk for large files storing the index itself as -tree may then be profitable in the main file the records are stored in any convenient order this arrangement can be very efficient appending records to the end of the main file is fast operationand inserting the index entry for the new record is also quick because the index is tree the result is very fast searching and insertion for large files |
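The key/block index described above can be sketched in a few lines of Java. This is a minimal illustration, not the book's own code: the class and method names (FileIndex, findBlock) are invented for the example, the "index" is just a sorted in-memory list, and the main data file is not modeled at all. It shows the two costs discussed in the text: binary search makes lookup fast, while keeping the list sorted makes insertion shift later entries.

```java
import java.util.ArrayList;
import java.util.Collections;

// Sketch of an in-memory file index: key/block pairs kept sorted by key.
public class FileIndex {
    static class Entry implements Comparable<Entry> {
        String key;    // e.g., a last name
        int blockNum;  // block in the main file holding the record
        Entry(String k, int b) { key = k; blockNum = b; }
        public int compareTo(Entry other) { return key.compareTo(other.key); }
    }

    private final ArrayList<Entry> entries = new ArrayList<>();

    // Insert an index entry, keeping the list sorted by key.
    public void insert(String key, int blockNum) {
        Entry e = new Entry(key, blockNum);
        int pos = Collections.binarySearch(entries, e);
        if (pos < 0) pos = -pos - 1;  // insertion point for a missing key
        entries.add(pos, e);          // shifts later entries: the O(n) cost
    }

    // Return the block number for a key, or -1 if the key isn't indexed.
    public int findBlock(String key) {
        int pos = Collections.binarySearch(entries, new Entry(key, 0));
        return pos >= 0 ? entries.get(pos).blockNum : -1;
    }

    public static void main(String[] args) {
        FileIndex index = new FileIndex();
        index.insert("jones", 27);
        index.insert("abbot", 3);
        index.insert("parker", 14);
        System.out.println(index.findBlock("jones"));  // prints 27
        System.out.println(index.findBlock("smith"));  // prints -1
    }
}
```

A real implementation would read this index from disk at startup and write it back periodically, as the text describes; replacing the sorted list with a tree would remove the entry-shifting cost of insertion.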
Note that when an index is arranged as a B-tree, each node contains a number of child pointers and one fewer data items. The child pointers are the block numbers of other nodes in the index. The data items consist of a key value and a pointer to a block in the main file. Don't confuse these two kinds of block pointers.

Complex Search Criteria

In complex searches the only practical approach may be to read every block in a file sequentially. Suppose in our phone book example we wanted a list of all entries with the first name Frank, who lived in Springfield, and who had a phone number with three 7 digits in it. (These were perhaps clues found scrawled on a scrap of paper clutched in the hand of a victim of foul play.) A file organized by last names would be no help at all. Even if there were index files ordered by first names and cities, there would be no convenient way to find which files contained both Frank and Springfield. In such cases (which are quite common in many kinds of databases) the fastest approach is probably to read the file sequentially, block by block, checking each record to see whether it meets the criteria.

Sorting External Files

Mergesort is the preferred algorithm for sorting external data. This is because, more so than with most sorting techniques, disk accesses tend to occur in adjacent records rather than in random parts of the file.

Recall from Chapter 6, "Recursion," that mergesort works recursively by calling itself to sort smaller and smaller sequences. Once two of the smallest sequences (one byte each in the internal-memory version) have been sorted, they are merged into a sorted sequence twice as long. Larger and larger sequences are merged, until eventually the entire file is sorted.

The approach for external storage is similar. However, the smallest sequence that can be read from the disk is a block of records. Thus, a two-stage process is necessary.

In the first phase, a block is read, its records are sorted internally, and the resulting sorted block is written back to disk. The next block is similarly sorted and written back to disk. This process continues until all the blocks are internally sorted.

In the second phase, two sorted blocks are read, merged into a two-block sequence, and written back to disk. This process continues until all pairs of blocks have been merged. Next, each pair of two-block sequences is merged into a four-block sequence. Each time, the size of the sorted sequences doubles, until the entire file is sorted.

The accompanying figure shows the mergesort process on an external file. The file consists of four blocks of four records each, for a total of 16 records; only three blocks can fit in internal memory. (Of course, all these sizes would be much larger in a real situation.) The figure shows the file before sorting; the number in each record is its key value.

[Figure: Mergesort on an external file -- (a) the unsorted file, (b) each block sorted internally into File 2, (c) pairs of blocks merged into two-block sequences, (d) the final merge producing the fully sorted file.]

Internal Sort of Blocks

In the first phase all the blocks in the file are sorted internally. This is done by reading each block into memory and sorting it with any appropriate internal sorting algorithm, such as quicksort (or, for smaller numbers of records, Shellsort or insertion sort). The result of sorting the blocks internally is shown in part b of the figure. A second file may be used to hold the sorted blocks, and we assume that availability of external storage is not a problem. It's often desirable to avoid modifying the original file.

Merging

In the second phase we want to merge the sorted blocks. In the first pass, every pair of sorted blocks is merged into a sorted two-block sequence; a third file is necessary to hold the result of this merge step. In the second pass, the two 8-record sequences are merged into a 16-record sequence, which can be written back to the other file. Now the sort is complete. Of course, more merge steps would be required to sort larger files; the number of such steps is proportional to log2(N). The merge steps can alternate between the two files.

Internal Arrays

Because the computer's internal memory has room for only three blocks, the merging process must take place in stages. Let's say there are three arrays, called arr1, arr2, and arr3, each of which can hold one block.

In the first merge, the first sorted block is read into arr1 and the second sorted block into arr2. These two arrays are then merged into arr3. However, because arr3 holds only one block, it becomes full before the merge is completed. When it becomes full, its contents are written to disk. The merge then continues, filling up arr3 again; this completes the merge, and arr3 is again written to disk. The following lists show the details of each of the three merges.

Merge 1:
1. Read the first sorted block into arr1.
2. Read the second sorted block into arr2.
3. Merge into arr3; write to disk.
4. Merge into arr3; write to disk.

Merge 2:
1. Read the third sorted block into arr1.
2. Read the fourth sorted block into arr2.
3. Merge into arr3; write to disk.
4. Merge into arr3; write to disk.

Merge 3 (merging the two 8-record sequences):
1. Read the first block of sequence 1 into arr1.
2. Read the first block of sequence 2 into arr2.
3. Merge into arr3; write to disk.
4. Merge into arr3 until arr1 (say) is empty.
5. Read the second block of sequence 1 into arr1.
6. Merge into arr3; write to disk.
7. Merge into arr3 until arr2 is empty.
8. Read the second block of sequence 2 into arr2.
9. Merge into arr3; write to disk.
10. Merge the remaining records into arr3; write to disk.

This last sequence of steps is rather lengthy, so it may be helpful to examine the details of the array contents as the steps are completed. The accompanying figure shows how the arrays look at various stages of the third merge.

[Figure: Array contents (arr1, arr2, arr3) at successive steps of the third merge, with arr3 written to disk each time it fills.]
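The second-phase merge above can be sketched in Java. This is an illustrative model only: the "disk" is just an output array, the class and method names are invented, and the run lengths are assumed to be whole numbers of blocks. It shows the essential mechanism -- merge from two input buffers, and flush the one-block output buffer (the arr3 of the text) whenever it fills.

```java
import java.util.Arrays;

// Sketch of one external-mergesort merge pass: two sorted runs are merged
// through a one-block output buffer, which is "written to disk" (copied to
// the output array) each time it fills.
public class ExternalMerge {
    static final int BLOCK = 4;  // records per block, as in the figure

    // Merge two sorted runs (each a whole number of blocks) into out[].
    public static void merge(int[] runA, int[] runB, int[] out) {
        int a = 0, b = 0, o = 0;
        int[] arr3 = new int[BLOCK];  // output buffer, one block long
        int fill = 0;
        while (a < runA.length || b < runB.length) {
            int next;
            if (b >= runB.length || (a < runA.length && runA[a] <= runB[b]))
                next = runA[a++];     // take the smaller front record
            else
                next = runB[b++];
            arr3[fill++] = next;
            if (fill == BLOCK) {      // buffer full: "write block to disk"
                System.arraycopy(arr3, 0, out, o, BLOCK);
                o += BLOCK;
                fill = 0;
            }
        }
    }

    public static void main(String[] args) {
        int[] runA = {2, 9, 11, 14};   // one sorted block
        int[] runB = {4, 12, 13, 30};  // another sorted block
        int[] out = new int[8];
        merge(runA, runB, out);
        System.out.println(Arrays.toString(out));
        // [2, 4, 9, 11, 12, 13, 14, 30]
    }
}
```

A full external sort would also refill the input buffers block by block from disk when they empty, as steps 4-8 of the third merge describe; here the runs are small enough to sit in memory.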
Summary

- A multiway tree has more keys and children than a binary tree.
- A 2-3-4 tree is a multiway tree with up to three keys and four children per node.
- In a multiway tree, the keys in a node are arranged in ascending order.
- In a 2-3-4 tree, all insertions are made in leaf nodes, and all leaf nodes are on the same level.
- Three kinds of nodes are possible in a 2-3-4 tree: a 2-node has one key and two children, a 3-node has two keys and three children, and a 4-node has three keys and four children.
- There is no 1-node in a 2-3-4 tree.
- In a search in a 2-3-4 tree, at each node the keys are examined. If the search key is not found, the next node will be child 0 if the search key is less than key 0; child 1 if the search key is between key 0 and key 1; child 2 if the search key is between key 1 and key 2; and child 3 if the search key is greater than key 2.
- Insertion into a 2-3-4 tree requires that any full node be split on the way down the tree, during the search for the insertion point.
- Splitting the root creates two new nodes; splitting any other node creates one new node.
- The height of a 2-3-4 tree can increase only when the root is split.
- There is a one-to-one correspondence between a 2-3-4 tree and a red-black tree.
- To transform a 2-3-4 tree into a red-black tree, make each 2-node into a black node, make each 3-node into a black parent with a red child, and make each 4-node into a black parent with two red children.
- When a 3-node is transformed into a parent and child, either node can become the parent.
- Splitting a node in a 2-3-4 tree is the same as performing a color flip in a red-black tree.
- A rotation in a red-black tree corresponds to changing between the two possible orientations (slants) when transforming a 3-node.
- The height of a 2-3-4 tree is less than log2(N).
- Search times are proportional to the height.
- The 2-3-4 tree wastes space because many nodes are not even half full.
- A 2-3 tree is similar to a 2-3-4 tree, except that a node can have only one or two data items and one, two, or three children.
- Insertion in a 2-3 tree involves finding the appropriate leaf and then performing splits from the leaf upward, until a non-full node is found.
- External storage means storing data outside of main memory, usually on a disk.
- External storage is larger, cheaper (per byte), and slower than main memory.
- Data in external storage is typically transferred to and from main memory a block at a time.
- Data can be arranged in external storage in sequential key order. This gives fast search times but slow insertion (and deletion) times.
- A B-tree is a multiway tree in which each node may have dozens or hundreds of keys and children.
- There is always one more child than there are keys in a B-tree node.
- For the best performance, a B-tree is typically organized so that a node holds one block of data.
- If the search criteria involve many keys, a sequential search of all the records in a file may be the most practical approach.

Questions

These questions are intended as a self-test for readers. Answers may be found in Appendix C.

1. A 2-3-4 tree is so named because a node can have
   a. three children and four data items.
   b. two, three, or four children.
   c. two parents, three children, and four items.
   d. two parents, three items, and four children.

2. A 2-3-4 tree is superior to a binary search tree in that it is __________.

3. Imagine a parent node with data items __, __, and __. If one of its child nodes had items with values __ and __, it would be the child numbered __________.

4. True or false: data items are located exclusively in leaf nodes.

5. Which of the following is not true each time a node is split?
   a. Exactly one new node is created.
   b. Exactly one new data item is added to the tree.
   c. One data item moves from the split node to its parent.
   d. One data item moves from the split node to its new sibling.

6. A 2-3-4 tree increases its number of levels when __________.

7. Searching a 2-3-4 tree does not involve
   a. splitting nodes on the way down if necessary.
   b. picking the appropriate child to go to, based on data items in a node.
   c. ending up at a leaf node if the search key is not found.
   d. examining at least one data item in any node visited.

8. After a non-root node of a 2-3-4 tree is split, does its new right child contain the item previously numbered 0, 1, or 2?

9. A 4-node split in a 2-3-4 tree is equivalent to a __________ in a red-black tree.

10. Which of the following statements about a node-splitting operation in a 2-3 tree (not a 2-3-4 tree) is not true?
    a. The parent of a split node must also be split if it is full.
    b. The smallest item in the node being split always stays in that node.
    c. When the parent is split, child __ must always be disconnected from its old parent and connected to the new parent.
    d. The splitting process starts at a leaf and works upward.

11. What is the big O efficiency of a 2-3 tree?

12. In accessing data on a disk drive,
    a. inserting data is slow but finding the place to write data is fast.
    b. moving data to make room for more data is fast because so many items can be accessed at once.
    c. deleting data is unusually fast.
    d. finding the place to write data is comparatively slow but a lot of data can be written quickly.

13. In a B-tree each node contains __________ data items.

14. True or false: node splits in a B-tree have similarities to node splits in a 2-3 tree.

15. In external storage, indexing means keeping a file of
    a. keys and their corresponding blocks.
    b. records and their corresponding blocks.
    c. keys and their corresponding records.
    d. last names and their corresponding keys.

Experiments

Carrying out these experiments will help to provide insights into the topics covered in the chapter. No programming is involved.

1. Draw by hand what a 2-3-4 tree looks like after each of the following insertions: __. Don't use the Tree234 Workshop applet.

2. Draw by hand what a 2-3 tree looks like after inserting the same sequence of values as in Experiment 1.

3. Think about how you would remove a node from a 2-3-4 tree.

Programming Projects

Writing programs to solve the Programming Projects helps to solidify your understanding of the material and demonstrates how the chapter's concepts are applied. (As noted in the Introduction, qualified instructors may obtain completed solutions to the Programming Projects on the publisher's Web site.)

1. This project should be easy. Write a method that returns the minimum value in a 2-3-4 tree.

2. Write a method that does an inorder traverse of a 2-3-4 tree. It should display all the items in order.

3. A 2-3-4 tree can be used as a sorting machine. Write a sort() method that's passed an array of key values from main() and writes them back to the array in sorted order.

4. Modify the tree234.java program (Listing 10.1) so that it creates and works with 2-3 trees instead. It should display the tree and allow searches. It should also allow items to be inserted, but only if the parent of the leaf node (which is being split) does not also need to be split. This implies that the split() routine need not be recursive. In writing insert(), remember that no splits happen until the appropriate leaf has been located; then the leaf is split if it's full. You'll need to be able to split the root too, but only when it's a leaf. With this limited routine you can insert fewer than nine items before the program crashes.

5. Extend the program in Programming Project 4 so that the split() routine is recursive and can handle situations with a full parent of a full child. This will allow the insertion of an unlimited number of items. Note that in the revised split() routine you'll need to split the parent before you can decide where the items go and where to attach the children.
Hash Tables

In this chapter:
- Introduction to Hashing
- Open Addressing
- Separate Chaining
- Hash Functions
- Hashing Efficiency
- Hashing and External Storage

A hash table is a data structure that offers very fast insertion and searching. When you first hear about them, hash tables sound almost too good to be true. No matter how many data items there are, insertion and searching (and sometimes deletion) can take close to constant time: O(1) in big O notation. In practice this is just a few machine instructions.

For a human user of a hash table, this is essentially instantaneous. It's so fast that computer programs typically use hash tables when they need to look up tens of thousands of items in less than a second (as in spelling checkers). Hash tables are significantly faster than trees, which, as we learned in the preceding chapters, operate in relatively fast O(log N) time. Not only are they fast, hash tables are relatively easy to program.

Hash tables do have several disadvantages. They're based on arrays, and arrays are difficult to expand after they've been created. For some kinds of hash tables, performance may degrade catastrophically when a table becomes too full, so the programmer needs to have a fairly accurate idea of how many data items will need to be stored (or be prepared to periodically transfer data to a larger hash table, a time-consuming process).

Also, there's no convenient way to visit the items in a hash table in any kind of order (such as from smallest to largest). If you need this capability, you'll need to look elsewhere. However, if you don't need to visit items in order, and you can predict in advance the size of your database, hash tables are unparalleled in speed and convenience.
Introduction to Hashing

In this section we'll introduce hash tables and hashing. One important concept is how a range of key values is transformed into a range of array index values. In a hash table this is accomplished with a hash function. However, for certain kinds of keys, no hash function is necessary; the key values can be used directly as array indices. We'll look at this simpler situation first and then go on to show how hash functions can be used when keys aren't distributed in such an orderly fashion.

Employee Numbers as Keys

Suppose you're writing a program to access employee records for a small company with, say, 1,000 employees. Each employee record requires 1,000 bytes of storage. Thus, you can store the entire database in only 1 megabyte, which will easily fit in your computer's memory.

The company's personnel director has specified that she wants the fastest possible access to any individual record. Also, every employee has been given a number from 1 (for the founder) to 1,000 (for the most recently hired worker). These employee numbers can be used as keys to access the records; in fact, access by other keys is deemed unnecessary. Employees are seldom laid off, but even when they are, their records remain in the database for reference (concerning retirement benefits and so on). What sort of data structure should you use in this situation?

Index Numbers as Keys

One possibility is a simple array. Each employee record occupies one cell of the array, and the index number of the cell is the employee number for that record. This type of array is shown in the accompanying figure.

[Figure: Employee numbers as array indices -- Longsmith, Norman (CEO); Vega, Teresa (VP); Alcazar, Herman (technician); Voss, Heinrich (salesman); and so on, each stored at the cell whose index is the employee number.]

As you know, accessing a specified array element is very fast if you know its index number. The clerk looking up Herman Alcazar knows his employee number, so she enters that number, and the program goes instantly to that index in the array. A single program statement is all that's necessary:

EmpRecord rec = databaseArray[empNumber];

Adding a new item is also very quick: you insert it just past the last occupied element. The next new record -- for Jim Chan, the newly hired employee number 1,001 -- would go in cell 1,001. Again, a single statement inserts the new record:

databaseArray[totalEmployees++] = newRecord;

Presumably, the array is made somewhat larger than the current number of employees, to allow room for expansion, but not much expansion is anticipated.

Not Always So Orderly

The speed and simplicity of data access using this array-based database make it very attractive. However, our example works only because the keys are unusually well organized:

- They run sequentially from 1 to a known maximum, and this maximum is a reasonable size for an array.
- There are no deletions, so memory-wasting gaps don't develop in the sequence.
- New items can be added sequentially at the end of the array, and the array doesn't need to be very much larger than the current number of items.

A Dictionary

In many situations the keys are not so well behaved as in the employee database just described. The classic example is a dictionary. If you want to put every word of an English-language dictionary, from a to zyzzyva (yes, it's a word), into your computer's memory so the words can be accessed quickly, a hash table is a good choice.

A similar widely used application for hash tables is in computer-language compilers, which maintain a symbol table in a hash table. The symbol table holds all the variable and function names made up by the programmer, along with the address where they can be found in memory. The program needs to access these names very quickly, so a hash table is the preferred data structure.

Let's say we want to store a 50,000-word English-language dictionary in main memory. You would like every word to occupy its own cell in a 50,000-cell array, so you can access the word using an index number. This will make access very fast. But what's the relationship of these index numbers to the words? Given the word morphosis, for example, how do we find its index number?
Converting Words to Numbers

What we need is a system for turning a word into an appropriate index number. To begin, we know that computers use various schemes for representing individual characters as numbers. One such scheme is the ASCII code, in which a is 97, b is 98, and so on, up to 122 for z. However, the ASCII code runs from 0 to 127 to accommodate capitals, punctuation, and so on. There are really only 26 letters in English words, so let's devise our own code, a simpler one that can potentially save memory space. Let's say a is 1, b is 2, c is 3, and so on up to 26 for z. We'll also say a blank is 0, so we have 27 characters. (Uppercase letters aren't used in this dictionary.)

How do we combine the digits from individual letters into a number that represents an entire word? There are all sorts of approaches. We'll look at two representative ones, and their advantages and disadvantages.

Adding the Digits

A simple approach to converting a word to a number might be to simply add the code numbers for each character. Say we want to convert the word cats to a number. First, we convert the characters to digits using our homemade code:

c = 3
a = 1
t = 20
s = 19

Then we add them: 3 + 1 + 20 + 19 = 43. Thus, in our dictionary the word cats would be stored in the array cell with index 43. All the other English words would likewise be assigned an array index calculated by this process.

How well would this work? For the sake of argument, let's restrict ourselves to 10-letter words. Then (remembering that a blank is 0) the first word in the dictionary, a, would be coded by

1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 1

The last potential word in the dictionary would be zzzzzzzzzz (10 z's). Our code, obtained by adding its letters, would be

26 + 26 + 26 + 26 + 26 + 26 + 26 + 26 + 26 + 26 = 260

Thus, the total range of word codes is from 1 to 260. Unfortunately, there are 50,000 words in the dictionary, so there aren't enough index numbers to go around. Each array element will need to hold about 192 words (50,000 divided by 260).

Clearly, this presents problems if we're thinking in terms of our one-word-per-array-element scheme. Maybe we could put a subarray or linked list of words at each array element. Unfortunately, such an approach would seriously degrade the access speed. Accessing the array element would be quick, but searching through the words to find the one we wanted would be slow.

So our first attempt at converting words to numbers leaves something to be desired. Too many words have the same index. (For example, was, tin, give, tend, moan, tick, bails, dredge, and hundreds of other words add to 43, as cats does.) We conclude that this approach doesn't discriminate enough, so the resulting array has too few elements. We need to spread out the range of possible indices.

Multiplying by Powers

Let's try a different way to map words to numbers. If our array was too small before, let's make sure it's big enough. What would happen if we created an array in which every word, in fact every potential word, from a to zzzzzzzzzz, was guaranteed to occupy its own unique array element? To do this, we need to be sure that every character in a word contributes in a unique way to the final number.

We'll begin by thinking about an analogous situation with numbers instead of words. Recall that in an ordinary multi-digit number, each digit position represents a value 10 times as big as the position to its right. Thus 7,546 really means

7*1000 + 5*100 + 4*10 + 6*1

or, writing the multipliers as powers of 10:

7*10^3 + 5*10^2 + 4*10^1 + 6*10^0

(An input routine in a computer program performs a similar series of multiplications and additions to convert a sequence of digits, entered at the keyboard, into a number stored in memory.)

In this system we break a number into its digits, multiply them by appropriate powers of 10 (because there are 10 possible digits), and add the products. In a similar way we can decompose a word into its letters, convert the letters to their numerical equivalents, multiply them by appropriate powers of 27 (because there are 27 possible characters, including the blank), and add the results. This gives a unique number for every word.

Say we want to convert the word cats to a number. We convert the digits to numbers as shown earlier. Then we multiply each number by the appropriate power of 27 and add the results:

3*27^3 + 1*27^2 + 20*27^1 + 19*27^0

Calculating the powers gives

3*19,683 + 1*729 + 20*27 + 19*1

and multiplying the letter codes times the powers yields

59,049 + 729 + 540 + 19

which sums to 60,337.

This process does indeed generate a unique number for every potential word. We just calculated a 4-letter word. What happens with larger words? Unfortunately, the range of numbers becomes rather large. The largest 10-letter word, zzzzzzzzzz, translates into

26*27^9 + 26*27^8 + 26*27^7 + 26*27^6 + 26*27^5 + 26*27^4 + 26*27^3 + 26*27^2 + 26*27^1 + 26*27^0

Just by itself, 27^9 is more than 7,000,000,000,000, so you can see that the sum will be huge. An array stored in memory can't possibly have this many elements.

[Figure: An index for every potential word -- consecutive cells for fira, firb, firc, fird, fire, firf, firg, and so on; only fire is an actual English word, so almost all the cells are wasted.]

The problem is that this scheme assigns an array element to every potential word, whether it's an actual English word or not. Thus, there are cells for aaaaaaaaaa, aaaaaaaaab, aaaaaaaaac, and so on, up to zzzzzzzzzz. Only a small fraction of these cells are necessary for real words, so most array cells are empty. This situation is shown in the figure.

Our first scheme -- adding the numbers -- generated too few indices. This latest scheme -- adding the numbers times powers of 27 -- generates too many.
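The powers-of-27 calculation above can be written compactly in Java. This is an illustrative sketch (the class and method names are invented, not from the book's listings); it uses the Horner form of the sum, in which multiplying the running total by 27 at each step is equivalent to weighting each letter by its power of 27. Note that even a long overflows for longer words, which is exactly the problem the next section's modulo trick addresses.

```java
// Sketch of the multiply-by-powers scheme: a=1 ... z=26, blank=0,
// each letter weighted by a power of 27 (Horner form).
public class WordNumber {
    public static long wordToNumber(String word) {
        long total = 0;
        for (int i = 0; i < word.length(); i++) {
            int code = word.charAt(i) - 'a' + 1;  // a=1 ... z=26
            total = total * 27 + code;            // same as the powers-of-27 sum
        }
        return total;
    }

    public static void main(String[] args) {
        // 3*27^3 + 1*27^2 + 20*27 + 19 = 60,337
        System.out.println(wordToNumber("cats"));  // prints 60337
    }
}
```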
Hashing

What we need is a way to compress the huge range of numbers we obtain from the numbers-multiplied-by-powers system into a range that matches a reasonably sized array.

How big an array are we talking about for our English dictionary? If we have only 50,000 words, you might assume our array should have approximately this many elements. However, it turns out we're going to need an array with about twice this many cells. (It will become clear later why this is so.) So we need an array with 100,000 elements.

Thus, we look for a way to squeeze a range of 0 to more than 7,000,000,000,000 into the range 0 to 100,000. A simple approach is to use the modulo operator (%), which finds the remainder when one number is divided by another.

To see how this approach works, let's look at a smaller and more comprehensible range. Suppose we squeeze numbers in the range 0 to 199 (we'll represent them by the variable largeNumber) into the range 0 to 9 (the variable smallNumber). There are 10 numbers in the range of small numbers, so we'll say that the variable smallRange has the value 10. It doesn't really matter what the large range is (unless it overflows the program's variable size). The Java expression for the conversion is

smallNumber = largeNumber % smallRange;

The remainders when any number is divided by 10 are always in the range 0 to 9; for example, 13 % 10 gives 3, and 157 % 10 is 7. This is shown in the accompanying figure: we've squeezed the range 0-199 into the range 0-9, a 20-to-1 compression ratio.

A similar expression can be used to compress the really huge numbers that uniquely represent every English word into index numbers that fit in our dictionary array:

arrayIndex = hugeNumber % arraySize;

This is an example of a hash function. It hashes (converts) a number in a large range into a number in a smaller range. This smaller range corresponds to the index numbers in an array. An array into which data is inserted using a hash function is called a hash table. (We'll talk more about the design of hash functions later in the chapter.)

To review: we convert a word into a huge number by multiplying each character in the word by an appropriate power of 27:

hugeNumber = ch0*27^9 + ch1*27^8 + ch2*27^7 + ch3*27^6 + ch4*27^5 + ch5*27^4 + ch6*27^3 + ch7*27^2 + ch8*27^1 + ch9*27^0

[Figure: Range conversion -- the large range 0-199 mapped by the modulo operator onto the small range 0-9.]

Then, using the modulo operator (%), we squeeze the resulting huge range of numbers into a range about twice as big as the number of items we want to store. This is an example of a hash function:

arraySize = numberWords * 2;
arrayIndex = hugeNumber % arraySize;

In the huge range, each number represents a potential data item (an arrangement of letters), but few of these numbers represent actual data items (English words). A hash function transforms these large numbers into the index numbers of a much smaller array. In this array we expect that, on the average, there will be one word for every two cells. Some cells will have no words, and others more than one.

A practical implementation of this scheme runs into trouble because hugeNumber will probably overflow its variable size, even for type long. We'll see how to deal with this problem later.
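The two steps above -- build the powers-of-27 number, then take it modulo the array size -- can be combined in one loop. The sketch below is illustrative (names are invented); applying % at each step of the Horner loop keeps the running total small, which is one standard way around the overflow problem the text just mentioned, anticipating its later treatment.

```java
// Sketch of the dictionary hash function: powers-of-27 value squeezed
// into an array index with %. Taking % at each step avoids overflow.
public class HashFunc {
    static final int ARRAY_SIZE = 100_000;  // about twice the 50,000 words

    public static int hash(String word) {
        int hashVal = 0;
        for (int i = 0; i < word.length(); i++) {
            int code = word.charAt(i) - 'a' + 1;         // a=1 ... z=26
            hashVal = (hashVal * 27 + code) % ARRAY_SIZE; // keep it small
        }
        return hashVal;  // an index in 0 .. ARRAY_SIZE-1
    }

    public static void main(String[] args) {
        System.out.println(hash("cats"));  // 60337 % 100000 = 60337
        System.out.println(hash("a"));     // 1
    }
}
```

Because (a*b + c) % m equals ((a % m)*b + c) % m, this per-step modulo produces the same index as hashing the full huge number would.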
Collisions

We pay a price for squeezing a large range into a small one. There's no longer a guarantee that two words won't hash to the same array index.

This is similar to what happened when we added the letter codes, but the situation is nowhere near as bad. When we added the letters, there were only 260 possible results (for words up to 10 letters). Now we're spreading this out into 100,000 possible results. Even so, it's impossible to avoid hashing several different words into the same array location, at least occasionally. We had hoped that we could have one data item per index number, but this turns out not to be possible. The best we can do is hope that not too many words will hash to the same index.

Perhaps you want to insert the word melioration into the array. You hash the word to obtain its index number but find that the cell at that number is already occupied by the word demystify, which happens to hash to the exact same number (for a certain size array). This situation, shown in the accompanying figure, is called a collision.

[Figure: A collision -- melioration hashes to a cell already occupied by demystify; nearby cells hold parchment, slander, and quixotic.]

It may appear that the possibility of collisions renders the hashing scheme impractical, but in fact we can work around the problem in a variety of ways.
23,038 | hash tables remember that we've specified an array with twice as many cells as data items thusperhaps half the cells are empty one approachwhen collision occursis to search the array in some systematic way for an empty cell and insert the new item thereinstead of at the index specified by the hash function this approach is called open addressing if cats hashes to , but this location is already occupied by parsnipthen we might try to insert cats in , for example second approach (mentioned earlieris to create an array that consists of linked lists of words instead of the words themselves thenwhen collision occursthe new item is simply inserted in the list at that index this is called separate chaining in the balance of this we'll discuss open addressing and separate chainingand then return to the question of hash functions so far we've focused on hashing strings this is realisticbecause many hash tables are used for storing strings howevermany other hash tables hold numbersas in our employee-number example in the discussion that followsand in the workshop appletswe use numbers--rather than strings--as keys this makes things easier to understand and simplifies the programming examples keep in mindhoweverthat in many situations these numbers would be derived from strings open addressing in open addressingwhen data item can' be placed at the index calculated by the hash functionanother location in the array is sought we'll explore three methods of open addressingwhich vary in the method used to find the next vacant cell these methods are linear probingquadratic probingand double hashing linear probing in linear probing we search sequentially for vacant cells if , is occupied when we try to insert data item therewe go to , then , and so onincrementing the index until we find an empty cell this is called linear probing because it steps sequentially along the line of cells the hash workshop applet the hash workshop applet demonstrates linear probing when you start 
this appletyou'll see screen similar to figure in this applet the range of keys runs from to the initial size of the array is the hash function has to squeeze the range of keys down to match the array size it does this with the modulo operator (%)as we've seen beforearrayindex key arraysize |
23,039 | figure the hash workshop applet at startup for the initial array size of this is arrayindex key this hash function is simple enough that you can solve it mentally for given keykeep subtracting until you get number less than for exampleto hash subtract giving and then againgiving this is the index number where the algorithm will place thusyou can easily check that the algorithm has hashed key to the correct address (an array size of is even easier to figure outas key' last digit is the index it will hash to as with other appletsoperations are carried out by repeatedly pressing the same button for exampleto find data item with specified numberclick the find button repeatedly rememberfinish sequence with one button before using another button for exampledon' switch from clicking fill to some other button until the press any key message is displayed all the operations require you to type numerical value at the beginning of the sequence the find button requires you to type key valuefor examplewhile new requires the size of the new table the new button you can create new hash table of size you specify by using the new button the maximum size is this limitation results from the number of cells that can be viewed in the applet window the initial size is also we use this number because it makes it easy to check whether the hash values are correctbut as we'll see laterin general-purpose hash tablethe array size should be prime numberso would be better choice |
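The modulo hash function described above can be sketched in a few lines of Java. The class and method names here are illustrative, and 60 is used only as a sample array size:

```java
// A minimal sketch of the modulo hash function: squeeze a large
// key range into the array's index range.
public class SimpleHash {
    // Maps any non-negative key into the range 0..arraySize-1.
    public static int hashFunc(int key, int arraySize) {
        return key % arraySize;
    }

    public static void main(String[] args) {
        System.out.println(hashFunc(660, 60));  // 0 (660 is a multiple of 60)
        System.out.println(hashFunc(143, 60));  // 23 (143 - 60 - 60 = 23)
    }
}
```

You can check the result by repeated subtraction, just as the text suggests: keep subtracting the array size until the remainder is smaller than the array size.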
23,040 | hash tables the fill button initiallythe hash table contains itemsso it' half full howeveryou can also fill it with specified number of data items using the fill button keep clicking filland when promptedtype the number of items to fill hash tables work best when they are not more than half or at the most two-thirds full ( items in -cell tableyou'll see that the filled cells aren' evenly distributed in the cells sometimes there' sequence of several empty cells and sometimes sequence of filled cells let' call sequence of filled cells in hash table filled sequence as you add more and more itemsthe filled sequences become longer this is called clusteringand is shown in figure cluster figure an example of clustering when you use the appletnote that it may take long time to fill hash table if you try to fill it too full (for exampleif you try to put items in -cell tableyou may think the program has stoppedbut be patient it' extremely inefficient at filling an almost-full array |
23,041 | alsonote that if the hash table becomes completely fullthe algorithms all stop workingin this applet they assume that the table has at least one empty cell the find button the find button starts by applying the hash function to the key value you type into the number box this results in an array index the cell at this index may be the key you're looking forthis is the optimum situationand success will be reported immediately howeverit' also possible that this cell is already occupied by data item with some other key this is collisionyou'll see the red arrow pointing to an occupied cell following collisionthe search algorithm will look at the next cell in sequence the process of finding an appropriate cell following collision is called probe following collisionthe find algorithm simply steps along the array looking at each cell in sequence if it encounters an empty cell before finding the key it' looking forit knows the search has failed there' no use looking further because the insertion algorithm would have inserted the item at this cell (if not earlierfigure shows successful and unsuccessful linear probes initial probe asuccessful search for figure initial probe bunsuccessful search for linear probes the ins button the ins button inserts data itemwith key value that you type into the number boxinto the hash table it uses the same algorithm as the find button to locate the appropriate cell if the original cell is occupiedit will probe linearly for vacant cell when it finds oneit inserts the item |
23,042 | hash tables try inserting some new data items type in three-digit number and watch what happens most items will go into the first cell they trybut some will suffer collisions and need to step along to find an empty cell the number of steps they take is the probe length most probe lengths are only few cells long sometimeshoweveryou may see probe lengths of four or five cellsor even longer as the array becomes excessively full notice which keys hash to the same index if the array size is the keys and so on up to all hash to index try inserting this sequence or similar one such sequences will demonstrate the linear probe the del button the del button deletes an item whose key is typed by the user deletion isn' accomplished by simply removing data item from cellleaving it empty why notremember that during insertion the probe process steps along series of cellslooking for vacant one if cell is made empty in the middle of this sequence of full cellsthe find routine will give up when it sees the empty celleven if the desired cell can eventually be reached for this reason deleted item is replaced by an item with special key value that identifies it as deleted in this applet we assume all legitimate key values are positiveso the deleted value is chosen as - deleted items are marked with the special key *delthe insert button will insert new item at the first available empty cell or in *delitem the find button will treat *delitem as an existing item for the purposes of searching for another item further along if there are many deletionsthe hash table fills up with these ersatz *deldata itemswhich makes it less efficient for this reason many hash table implementations don' allow deletion if it is implementedit should be used sparingly duplicates allowedcan you allow data items with duplicate keys to be used in hash tablesthe fill routine in the hash workshop applet doesn' allow duplicatesbut you can insert them with the insert button if you like then you'll see that 
only the first one can be accessed. The only way to access a second item with the same key is to delete the first one. This isn't too convenient. You could rewrite the Find algorithm to look for all items with the same key instead of just the first one. However, it would then need to search through all the cells of every linear sequence it encountered. This wastes time for all table accesses, even when no duplicates are involved. In the majority of cases you probably want to forbid duplicates.
23,043 | clustering try inserting more items into the hash table in the hash workshop applet as it gets more fullthe clusters grow larger clustering can result in very long probe lengths this means that accessing cells at the end of the sequence is very slow the more full the array isthe worse clustering becomes it' not usually problem when the array is half fulland still not too bad when it' two-thirds full beyond thishoweverperformance degrades seriously as the clusters grow larger and larger for this reason it' critical when designing hash table to ensure that it never becomes more than halfor at the most two-thirdsfull (we'll discuss the mathematical relationship between how full the hash table is and probe lengths at the end of this java code for linear probe hash table it' not hard to create methods to handle searchinsertionand deletion with linear probe hash tables we'll show the java code for these methods and then complete hash java program that puts them in context the find(method the find(method first calls hashfunc(to hash the search key to obtain the index number hashval the hashfunc(method applies the operator to the search key and the array sizeas we've seen before nextin while conditionfind(checks whether the item at this index is empty (nullif notit checks whether the item contains the search key if the item does contain the keyfind(returns the item if it doesn'tfind(increments hashval and goes back to the top of the while loop to check whether the next cell is occupied here' the code for find()public dataitem find(int key/(assumes table not fullint hashval hashfunc(key)/find item with key /hash the key while(hasharray[hashval!null/until empty cell/found the keyif(hasharray[hashvalgetkey(=keyreturn hasharray[hashval]/yesreturn item ++hashval/go to next cell hashval %arraysize/wrap around if necessary return null/can' find item |
23,044 | hash tables as hashval steps through the arrayit eventually reaches the end when this happenswe want it to wrap around to the beginning we could check for this with an if statementsetting hashval to whenever it equaled the array size howeverwe can accomplish the same thing by applying the operator to hashval and the array size cautious programmers might not want to assume the table is not fullas is done here the table should not be allowed to become fullbut if it didthis method would loop forever for simplicity we don' check for this situation the insert(method the insert(methodshown hereuses about the same algorithm as find(to locate where data item should go howeverit' looking for an empty cell or deleted item (key - )rather than specific item when such an empty cell has been locatedinsert(places the new item into it public void insert(dataitem item/insert dataitem /(assumes table not fullint key item getkey()/extract key int hashval hashfunc(key)/hash the key /until empty cell or - while(hasharray[hashval!null &hasharray[hashvalidata !- ++hashval/go to next cell hashval %arraysize/wrap around if necessary hasharray[hashvalitem/insert item /end insert(the delete(method the following delete(method finds an existing item using code similar to find(when the item is founddelete(writes over it with the special data item nonitemwhich is predefined with key of - public dataitem delete(int keyint hashval hashfunc(key)/delete dataitem /hash the key while(hasharray[hashval!null/until empty cell/found the keyif(hasharray[hashvalgetkey(=key |
23,045 | dataitem temp hasharray[hashval]/save item hasharray[hashvalnonitem/delete item return temp/return item ++hashval/go to next cell hashval %arraysize/wrap around if necessary return null/can' find item /end delete(the hash java program listing shows the complete hash java program in this program dataitem object contains just one fieldan integer that is its key as in other data structures we've discussedthese objects could contain more data or reference to an object of another class (such as employee or partnumberthe major field in class hashtable is an array called hasharray other fields are the size of the array and the special nonitem object used for deletions listing the hash java program /hash java /demonstrates hash table with linear probing /to run this programc:>java hashtableapp import java io *///////////////////////////////////////////////////////////////class dataitem /(could have more dataprivate int idata/data item (key//public dataitem(int ii/constructor idata ii//public int getkey(return idata///end class dataitem ///////////////////////////////////////////////////////////////class hashtable private dataitem[hasharray/array holds hash table private int arraysize |
23,046 | listing hash tables continued private dataitem nonitem/for deleted items /public hashtable(int size/constructor arraysize sizehasharray new dataitem[arraysize]nonitem new dataitem(- )/deleted item key is - /public void displaytable(system out print("table")for(int = <arraysizej++if(hasharray[ !nullsystem out print(hasharray[jgetkey(")else system out print("*")system out println("")/public int hashfunc(int keyreturn key arraysize/hash function /public void insert(dataitem item/insert dataitem /(assumes table not fullint key item getkey()/extract key int hashval hashfunc(key)/hash the key /until empty cell or - while(hasharray[hashval!null &hasharray[hashvalgetkey(!- ++hashval/go to next cell hashval %arraysize/wraparound if necessary hasharray[hashvalitem/insert item /end insert( |
23,047 | listing continued /public dataitem delete(int key/delete dataitem int hashval hashfunc(key)/hash the key while(hasharray[hashval!null/until empty cell/found the keyif(hasharray[hashvalgetkey(=keydataitem temp hasharray[hashval]/save item hasharray[hashvalnonitem/delete item return temp/return item ++hashval/go to next cell hashval %arraysize/wraparound if necessary return null/can' find item /end delete(/public dataitem find(int key/find item with key int hashval hashfunc(key)/hash the key while(hasharray[hashval!null/until empty cell/found the keyif(hasharray[hashvalgetkey(=keyreturn hasharray[hashval]/yesreturn item ++hashval/go to next cell hashval %arraysize/wraparound if necessary return null/can' find item //end class hashtable ///////////////////////////////////////////////////////////////class hashtableapp public static void main(string[argsthrows ioexception dataitem adataitemint akeysizenkeyspercell |
23,048 | listing hash tables continued /get sizes system out print("enter size of hash table")size getint()system out print("enter initial number of items") getint()keyspercell /make table hashtable thehashtable new hashtable(size)for(int = <nj++/insert data akey (int)(java lang math random(keyspercell size)adataitem new dataitem(akey)thehashtable insert(adataitem)while(true/interact with user system out print("enter first letter of ")system out print("showinsertdeleteor find")char choice getchar()switch(choicecase ' 'thehashtable displaytable()breakcase ' 'system out print("enter key value to insert")akey getint()adataitem new dataitem(akey)thehashtable insert(adataitem)breakcase ' 'system out print("enter key value to delete")akey getint()thehashtable delete(akey)breakcase ' 'system out print("enter key value to find")akey getint() |
23,049 | listing continued adataitem thehashtable find(akey)if(adataitem !nullsystem out println("found akey)else system out println("could not find akey)breakdefaultsystem out print("invalid entry\ ")/end switch /end while /end main(//public static string getstring(throws ioexception inputstreamreader isr new inputstreamreader(system in)bufferedreader br new bufferedreader(isr)string br readline()return //public static char getchar(throws ioexception string getstring()return charat( )//public static int getint(throws ioexception string getstring()return integer parseint( )///end class hashtableapp ///////////////////////////////////////////////////////////////the main(routine in the hashtableapp class contains user interface that allows the user to show the contents of the hash table (enter )insert an item ( )delete an item ( )or find an item ( |
23,050 | hash tables initiallythe program asks the user to input the size of the hash table and the number of items in it you can make it almost any sizefrom few items to , (building larger tables than this may take little time don' use the (for showoption on tables of more than few hundred itemsthey scroll off the screen and displaying them takes long time variable in main()keyspercellspecifies the ratio of the range of keys to the size of the array in the listingit' set to this means that if you specify table size of the keys will range from to if you want to see what' going onit' best to create tables with fewer than about items so that all the items can be displayed on one line here' some sample interaction with hash javaenter size of hash table enter initial number of items enter first letter of showinsertdeleteor finds table ** * * enter first letter of showinsertdeleteor findf enter key value to find found enter first letter of showinsertdeleteor findi enter key value to insert enter first letter of showinsertdeleteor finds table * * * enter first letter of showinsertdeleteor findd enter key value to delete enter first letter of showinsertdeleteor finds table *- * * key values run from to ( times minus the *symbol indicates that cell is empty the item with key is inserted at location (the first item is numbered because % is notice how changes to - when this item is deleted expanding the array one option when hash table becomes too full is to expand its array in javaarrays have fixed size and can' be expanded your program must create newlarger arrayand then insert the contents of the old small array into the new large one |
23,051 | remember that the hash function calculates the location of given data item based on the array sizeso items won' be located in the same place in the large array as they were in the small array you can' therefore simply copy the items from one array to the other you'll need to go through the old array in sequencecell by cellinserting each item you find into the new array with the insert(method this is called rehashing it' time-consuming processbut necessary if the array is to be expanded the expanded array is usually made twice the size of the original array actuallybecause the array size should be prime numberthe new array will need to be bit more than twice as big calculating the new array size is part of the rehashing process here are some routines to help find the new array size (or the original array sizeif you don' trust the user to pick prime numberwhich is usually the caseyou start off with the specified size and then look for the next prime larger than that the getprime(method gets the next prime larger than its argument it calls isprime(to check each of the numbers above the specified size private int getprime(int min/returns st prime min for(int min+ truej++/for all min ifisprime( /is primereturn /yesreturn it /private boolean isprime(int /is primefor(int = ( * < ) ++/for all ifn = /divides evenly by jreturn false/yesso not prime return true/noso prime these routines are not the ultimate in sophistication for examplein getprime(you could check and then odd numbers from then oninstead of every number howeversuch refinements don' matter much because you usually find prime after checking only few numbers java offers class vector that is an array-like data structure that can be expanded howeverit' not much help because of the need to rehash all data items when the table changes size |
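Putting getPrime() and isPrime() to work, a rehashing pass might look like the following sketch. The table here holds bare int keys with -1 marking an empty cell, and plain linear probing is used for re-insertion; these simplifications are mine, not the book's hash.java code:

```java
import java.util.Arrays;

// Sketch of rehashing: build a new array a bit more than twice as big
// (rounded up to the next prime) and re-insert every old item.
public class Rehash {
    public static boolean isPrime(int n) {       // true if n is prime
        for (int j = 2; j * j <= n; j++)
            if (n % j == 0) return false;
        return true;
    }

    public static int getPrime(int min) {        // first prime > min
        for (int j = min + 1; true; j++)
            if (isPrime(j)) return j;
    }

    // Returns a new, larger table containing the old items, re-hashed.
    public static int[] rehash(int[] oldTable) {
        int newSize = getPrime(oldTable.length * 2);
        int[] newTable = new int[newSize];
        Arrays.fill(newTable, -1);               // -1 marks an empty cell
        for (int key : oldTable)
            if (key != -1) {                     // re-insert, probing linearly
                int hashVal = key % newSize;
                while (newTable[hashVal] != -1)
                    hashVal = (hashVal + 1) % newSize;
                newTable[hashVal] = key;
            }
        return newTable;
    }

    public static void main(String[] args) {
        int[] old = {21, -1, 9, 10};             // size 4 -> next prime after 8 is 11
        System.out.println(rehash(old).length);  // 11
    }
}
```

Note that every surviving key must be run through the hash function again, because key % newSize generally lands in a different cell than key % oldSize did.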
Quadratic Probing

We've seen that clusters can occur in the linear probe approach to open addressing. Once a cluster forms, it tends to grow larger: items that hash to any value in the range of the cluster will step along and insert themselves at the end of the cluster, thus making it even bigger. The bigger the cluster gets, the faster it grows. It's like the crowd that gathers when someone faints at the shopping mall. The first arrivals come because they saw the victim fall; later arrivals gather because they wondered what everyone else was looking at. The larger the crowd grows, the more people are attracted to it.

The ratio of the number of items in a table to the table's size is called the load factor. For example, a table with 10,000 cells and 6,667 items has a load factor of 2/3:

loadFactor = nItems / arraySize

Clusters can form even when the load factor isn't high. Parts of the hash table may consist of big clusters, while others are sparsely inhabited. Clusters reduce performance.

Quadratic probing is an attempt to keep clusters from forming. The idea is to probe more widely separated cells, instead of those adjacent to the primary hash site. In a linear probe, if the primary hash index is x, subsequent probes go to x+1, x+2, x+3, and so on. In quadratic probing, probes go to x+1, x+4, x+9, x+16, x+25, and so on. The distance from the initial probe is the square of the step number. The figure shows some quadratic probes.

It's as if the quadratic probe became increasingly desperate as its search lengthened. At first it calmly picks the adjacent cell. If that's occupied, it thinks it may be in a small cluster, so it tries something 4 cells away. If that's occupied, it becomes a little concerned, thinking it may be in a larger cluster, and tries 9 cells away. If that's occupied, it feels the first tinges of panic and jumps 16 cells away. Pretty soon, it's flying hysterically all over the place, as you can see if you try searching with the HashDouble Workshop applet when the table is almost full.

The HashDouble Applet with
Quadratic Probes

The HashDouble Workshop applet allows two different kinds of collision handling: quadratic probes and double hashing. (We'll look at double hashing in the next section.) This applet generates a display much like that of the Hash Workshop applet, except that it includes radio buttons to select quadratic probing or double hashing.
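The quadratic step pattern just described is easy to compute directly: the j-th probe lands j squared cells past the initial index, wrapping around the array. This helper is a sketch of mine, not part of the applet's code:

```java
// Sketch of the quadratic-probe sequence: the j-th probe lands
// j*j cells past the initial hash index, wrapping around.
public class QuadProbe {
    public static int probe(int initial, int j, int arraySize) {
        return (initial + j * j) % arraySize;
    }

    public static void main(String[] args) {
        // Probes from initial index 3 in a 59-cell array: 4 7 12 19
        for (int j = 1; j <= 4; j++)
            System.out.print(probe(3, j, 59) + " ");
        System.out.println();
    }
}
```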
23,053 | figure initial probe initial probe asuccessful search for bunsuccessful search for quadratic probes to see how quadratic probes lookstart up this applet and create new hash table of items using the new button when you're asked to select double or quadratic probeclick the quad button after the new table is createdfill it four/fifths full using the fill button ( items in -cell arraythis is too fullbut it will generate longer probes so you can study the probe algorithm incidentallyif you try to fill the hash table too fullyou may see the message can' complete fill this occurs when the probe sequences get very long every additional step in the probe sequence makes bigger step size if the sequence is too longthe step size will eventually exceed the capacity of its integer variableso the applet shuts down the fill process before this happens when the table is filledselect an existing key value and use the find key to see whether the algorithm can find it often the key value is located at the initial cellor the one adjacent to it if you're patienthoweveryou'll find key that requires three or four stepsand you'll see the step size lengthen for each step you can also use find to search for non-existent keythis search continues until an empty cell is encountered |
TIP

Important: Always make the array size a prime number; use 59 instead of 60, for example. If the array size is not prime, an endless sequence of steps may occur during a probe. If this happens during a Fill operation, the applet will be paralyzed.

The Problem with Quadratic Probes

Quadratic probes eliminate the clustering problem we saw with the linear probe, which is called primary clustering. However, quadratic probes suffer from a different and more subtle clustering problem. This occurs because all the keys that hash to a particular cell follow the same sequence in trying to find a vacant space. Suppose several keys hash to the same index and are inserted in that order. Then the second such key will require a one-step probe, the third a four-step probe, and the fourth a nine-step probe. Each additional item with a key that hashes to that index will require a longer probe. This phenomenon is called secondary clustering.

Secondary clustering is not a serious problem, but quadratic probing is not often used because there's a slightly better solution: double hashing. To eliminate secondary clustering as well as primary clustering, we can use this other approach. Secondary clustering occurs because the algorithm that generates the sequence of steps in the quadratic probe always generates the same steps: 1, 4, 9, 16, and so on. What we need is a way to generate probe sequences that depend on the key instead of being the same for every key. Then numbers with different keys that hash to the same index will use different probe sequences.

The solution is to hash the key a second time, using a different hash function, and use the result as the step size. For a given key the step size remains constant throughout a probe, but it's different for different keys.

Experience has shown that this secondary hash function must have certain characteristics:

- It must not be the same as the primary hash function.
- It must never output 0 (otherwise, there would be no step; every probe would land on the same cell, and the algorithm would go into an endless loop).

Experts have
discovered that functions of the following form work well:

stepSize = constant - (key % constant)
23,055 | where constant is prime and smaller than the array size for examplestepsize (key )this is the secondary hash function used in the hashdouble workshop applet different keys may hash to the same indexbut they will (most likelygenerate different step sizes with this hash function the step sizes are all in the range to this is shown in figure asuccessful search for figure initial probe initial probe bunsuccessful search for double hashing the hashdouble applet with double hashing you can use the hashdouble workshop applet to see how double hashing works it starts up automatically in double-hashing modebut if it' in quadratic modeyou can switch to double by creating new table with the new button and clicking the double button when prompted to best see probes at workyou'll need to fill the table rather fullsay to about nine/tenths capacity or more even with such high load factorsmost data items will be found immediately by the first hash functiononly few will require extended probe sequences try finding existing keys when one needs probe sequenceyou'll see how all the steps are the same size for given keybut that the step size is different--between and --for different keys |
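Here is how the pair of hash functions might look in Java, following the form above with 5 as the constant, as in the secondary hash function just shown; the class and method names are illustrative:

```java
// Sketch of the two hash functions used in double hashing.
public class DoubleHash {
    public static int hashFunc1(int key, int arraySize) {
        return key % arraySize;        // primary hash: the cell index
    }

    public static int hashFunc2(int key) {
        return 5 - key % 5;            // secondary hash: step size 1..5, never 0
    }

    public static void main(String[] args) {
        System.out.println(hashFunc2(25));  // 5
        System.out.println(hashFunc2(27));  // 3
    }
}
```

Because key % 5 is always between 0 and 4, the step size 5 - (key % 5) is always between 1 and 5, so it can never be 0.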
23,056 | hash tables java code for double hashing listing shows the complete listing for hashdouble javawhich uses double hashing it' similar to the hash java program (listing )but uses two hash functionsone for finding the initial index and the second for generating the step size as beforethe user can show the table contentsinsert an itemdelete an itemand find an item listing the hashdouble java program /hashdouble java /demonstrates hash table with double hashing /to run this programc:>java hashdoubleapp import java io *///////////////////////////////////////////////////////////////class dataitem /(could have more itemsprivate int idata/data item (key//public dataitem(int ii/constructor idata ii//public int getkey(return idata///end class dataitem ///////////////////////////////////////////////////////////////class hashtable private dataitem[hasharray/array is the hash table private int arraysizeprivate dataitem nonitem/for deleted items /hashtable(int size/constructor arraysize sizehasharray new dataitem[arraysize]nonitem new dataitem(- )/public void displaytable(system out print("table")for(int = <arraysizej++ |
23,057 | listing continued if(hasharray[ !nullsystem out print(hasharray[jgetkey()")else system out print("*")system out println("")/public int hashfunc (int keyreturn key arraysize/public int hashfunc (int key/non-zeroless than array sizedifferent from hf /array size must be relatively prime to and return key //insert dataitem public void insert(int keydataitem item/(assumes table not fullint hashval hashfunc (key)/hash the key int stepsize hashfunc (key)/get step size /until empty cell or - while(hasharray[hashval!null &hasharray[hashvalgetkey(!- hashval +stepsize/add the step hashval %arraysize/for wraparound hasharray[hashvalitem/insert item /end insert(/public dataitem delete(int key/delete dataitem int hashval hashfunc (key)/hash the key int stepsize hashfunc (key)/get step size |
23,058 | listing hash tables continued while(hasharray[hashval!null/until empty cell/is correct hashvalif(hasharray[hashvalgetkey(=keydataitem temp hasharray[hashval]/save item hasharray[hashvalnonitem/delete item return temp/return item hashval +stepsize/add the step hashval %arraysize/for wraparound return null/can' find item /end delete(/public dataitem find(int key/find item with key /(assumes table not fullint hashval hashfunc (key)/hash the key int stepsize hashfunc (key)/get step size while(hasharray[hashval!null/until empty cell/is correct hashvalif(hasharray[hashvalgetkey(=keyreturn hasharray[hashval]/yesreturn item hashval +stepsize/add the step hashval %arraysize/for wraparound return null/can' find item //end class hashtable ///////////////////////////////////////////////////////////////class hashdoubleapp public static void main(string[argsthrows ioexception int akeydataitem adataitemint sizen/get sizes system out print("enter size of hash table") |
23,059 | listing continued size getint()system out print("enter initial number of items") getint()/make table hashtable thehashtable new hashtable(size)for(int = <nj++/insert data akey (int)(java lang math random( size)adataitem new dataitem(akey)thehashtable insert(akeyadataitem)while(true/interact with user system out print("enter first letter of ")system out print("showinsertdeleteor find")char choice getchar()switch(choicecase ' 'thehashtable displaytable()breakcase ' 'system out print("enter key value to insert")akey getint()adataitem new dataitem(akey)thehashtable insert(akeyadataitem)breakcase ' 'system out print("enter key value to delete")akey getint()thehashtable delete(akey)breakcase ' 'system out print("enter key value to find")akey getint()adataitem thehashtable find(akey)if(adataitem !nullsystem out println("found akey)else |
23,060 | listing hash tables continued system out println("could not find akey)breakdefaultsystem out print("invalid entry\ ")/end switch /end while /end main(//public static string getstring(throws ioexception inputstreamreader isr new inputstreamreader(system in)bufferedreader br new bufferedreader(isr)string br readline()return //public static char getchar(throws ioexception string getstring()return charat( )//public static int getint(throws ioexception string getstring()return integer parseint( )///end class hashdoubleapp ///////////////////////////////////////////////////////////////output and operation of this program are similar to those of hash java table shows what happens when items are inserted into -cell hash table using double hashing the step sizes run from to table filling -cell table using double hashing item number key hash value step size cells in probe sequence |
Table (continued)

Item Number    Key    Hash Value    Step Size    Cells in Probe Sequence

The first keys mostly hash to a vacant cell (one of them is an anomaly). After that, as the array gets more full, the probe sequences become quite long. Here's the resulting array of keys:

* *

Table Size a Prime Number

Double hashing requires that the size of the hash table is a prime number. To see why, imagine a situation in which the table size is not a prime number. For example, suppose the array size is 15 (indices from 0 to 14), and that a particular key hashes to an initial index of 0 with a step size of 5. The probe sequence will be 0, 5, 10, 0, 5, 10, and so on, repeating endlessly. Only these three cells are ever examined, so the algorithm will never find the empty cells that might be waiting at 1, 2, 3, and so on. The algorithm will crash and burn.

If the array size were 13, which is prime, the probe sequence eventually visits every cell. It's 0, 5, 10, 2, 7, 12, 4, 9, 1, 6, 11, 3, 8, 0, and so on. If there is even one empty cell, the probe will find it. Using a prime number as the array size makes it impossible for any number to divide it evenly, so the probe sequence will eventually check every cell.
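The prime-size argument can be verified with a few lines of code that count how many distinct cells a fixed-step probe sequence visits before it starts repeating (this demo is mine, not the book's):

```java
// Counts the distinct cells visited by probing from index 0 with a
// fixed step size, to show why a prime table size matters.
public class PrimeSize {
    public static int cellsVisited(int arraySize, int step) {
        boolean[] seen = new boolean[arraySize];
        int count = 0;
        int index = 0;
        while (!seen[index]) {         // stop when the sequence repeats
            seen[index] = true;
            count++;
            index = (index + step) % arraySize;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(cellsVisited(15, 5));  // 3: only cells 0, 5, 10
        System.out.println(cellsVisited(13, 5));  // 13: every cell is reached
    }
}
```

With a non-prime size the step can divide the size evenly and the sequence collapses to a few cells; with a prime size, any step from 1 up to size-1 eventually visits every cell.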
23,062 | hash tables similar effect occurs using the quadratic probe in that casehoweverthe step size gets larger with each step and will eventually overflow the variable holding itthus preventing an endless loop in generaldouble hashing is the probe sequence of choice when open addressing is used separate chaining in open addressingcollisions are resolved by looking for an open cell in the hash table different approach is to install linked list at each index in the hash table data item' key is hashed to the index in the usual wayand the item is inserted into the linked list at that index other items that hash to the same index are simply added to the linked listthere' no need to search for empty cells in the primary array figure shows how separate chaining looks empty empty array figure linked lists example of separate chaining separate chaining is conceptually somewhat simpler than the various probe schemes used in open addressing howeverthe code is longer because it must include the mechanism for the linked listsusually in the form of an additional class the hashchain workshop applet to see how separate chaining worksstart the hashchain workshop applet it displays an array of linked listsas shown in figure |
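A separate-chaining table can be sketched very compactly with Java's built-in LinkedList; the book's example program uses its own list class instead, so treat this as an illustration of the idea only:

```java
import java.util.LinkedList;

// Minimal sketch of separate chaining: each cell holds a linked list,
// and colliding keys simply join the list at their hashed index.
public class ChainTable {
    private final LinkedList<Integer>[] cells;

    @SuppressWarnings("unchecked")
    public ChainTable(int size) {
        cells = new LinkedList[size];
        for (int j = 0; j < size; j++)
            cells[j] = new LinkedList<>();
    }

    public void insert(int key) {
        cells[key % cells.length].add(key);       // no probing needed
    }

    public boolean find(int key) {
        return cells[key % cells.length].contains(key);
    }

    public void delete(int key) {
        cells[key % cells.length].removeFirstOccurrence(key);
    }
}
```

Inserting 3 and 10 into a 7-cell table puts both keys on the list at cell 3; deletion simply removes the node from that list, with no need for the special "deleted" marker that open addressing requires.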
23,063 | figure separate chaining in the hashchain workshop applet each element of the array occupies one line of the displayand the linked lists extend from left to right initiallythere are cells in the array ( liststhis is more than fits on the screenyou can move the display up and down with the scrollbar to see the entire array the display shows up to six items per list you can create hash table with up to lists and use load factors up to higher load factors may cause the linked lists to exceed six items and run off the right edge of the screenmaking it impossible to see all the items (this may happen very occasionally even at the load factor experiment with the hashchain applet by inserting some new items with the ins button you'll see how the red arrow goes immediately to the correct list and inserts the item at the beginning of the list the lists in the hashchain applet are not sortedso insertion does not require searching through the list (the example program will demonstrate sorted lists try to find specified items using the find button during find operationif there are several items on the listthe red arrow must step through the items looking for the correct one for successful searchhalf the items in the list must be examined on the averageas we discussed in "linked lists for an unsuccessful searchall the items must be examined load factors the load factor (the ratio of the number of items in hash table to its sizeis typically different in separate chaining than in open addressing in separate chaining it' normal to put or more items into an cell arraythusthe load factor can be or greater there' no problem with thissome locations will simply contain two or more items in their lists |
Of course, if there are many items on the lists, access time is reduced because access to a specified item requires searching through an average of half the items on the list. Finding the initial cell takes fast O(1) time, but searching through a list takes time proportional to M, the average number of items on the list. This is O(M) time. Thus, we don't want the lists to become too full.

A load factor of 1, as shown in the initial Workshop applet, is common. With this load factor, roughly one-third of the cells will be empty, one-third will hold one item, and one-third will hold two or more items.

In open addressing, performance degrades badly as the load factor increases above one-half or two-thirds. In separate chaining the load factor can rise above 1 without hurting performance very much. This makes separate chaining a more robust mechanism, especially when it's hard to predict in advance how much data will be placed in the hash table.

Duplicates

Duplicates are allowed and may be generated in the fill process. All items with the same key will be inserted in the same list, so if you need to discover all of them, you must search the entire list in both successful and unsuccessful searches. This lowers performance. The Find operation in the applet finds only the first of several duplicates.

Deletion

In separate chaining, deletion poses no special problems as it does in open addressing. The algorithm hashes to the proper list and then deletes the item from the list. Because probes aren't used, it doesn't matter if the list at a particular cell becomes empty. We've included a Del button in the Workshop applet to show how deletion works.

Table Size

With separate chaining, making the table size a prime number is not as important as it is with quadratic probes and double hashing. There are no probes in separate chaining, so we don't need to worry that a probe will go into an endless sequence because the step size divides evenly into the array size. On the other hand, certain kinds of key distributions can cause data to cluster when
the array size is not prime number we'll have more to say about this problem when we discuss hash functions buckets another approach similar to separate chaining is to use an array at each location in the hash tableinstead of linked list such arrays are sometimes called buckets this approach is not as efficient as the linked list approachhoweverbecause of the |
23,065 | problem of choosing the size of the buckets if they're too smallthey may overflowand if they're too largethey waste memory linked listswhich allocate memory dynamicallydon' have this problem java code for separate chaining the hashchain java program includes sortedlist class and an associated link class sorted lists don' speed up successful searchbut they do cut the time of an unsuccessful search in half (as soon as an item larger than the search key is reachedwhich on average is half the items in listthe search can be declared failure deletion times are also cut in halfhoweverinsertion times are lengthened because the new item can' just be inserted at the beginning of the listits proper place in the ordered list must be located before it' inserted if the lists are shortthe increase in insertion times may not be important if many unsuccessful searches are anticipatedit may be worthwhile to use the slightly more complicated sorted listrather than an unsorted list howeveran unsorted list is preferred if insertion speed is more important the hashchain java programshown in listing begins by constructing hash table with table size and number of items entered by the user the user can then insertfindand delete itemsand display the list for the entire hash table to be viewed on the screenthe size of the table must be no greater than or so listing the hashchain java program /hashchain java /demonstrates hash table with separate chaining /to run this programc:>java hashchainapp import java io *///////////////////////////////////////////////////////////////class link /(could be other itemsprivate int idata/data item public link next/next link in list /public link(int it/constructor idatait/public int getkey(return idata/public void displaylink(/display this link system out print(idata ") |
23,066 | listing hash tables continued /end class link ///////////////////////////////////////////////////////////////class sortedlist private link first/ref to first list item /public void sortedlist(/constructor first null/public void insert(link thelink/insert linkin order int key thelink getkey()link previous null/start at first link current first/until end of listwhilecurrent !null &key current getkey(/or current keyprevious currentcurrent current next/go to next item if(previous==null/if beginning of listfirst thelink/first --new link else /not at beginningprevious next thelink/prev --new link thelink next current/new link --current /end insert(/public void delete(int key/delete link /(assumes non-empty listlink previous null/start at first link current first/until end of listwhilecurrent !null &key !current getkey(/or key =currentprevious currentcurrent current next/go to next link /disconnect link if(previous==null/if beginning of list first first next/delete first link else /not at beginning |
23,067 | listing continued previous next current next/delete current link /end delete(/public link find(int key/find link link current first/start at first /until end of listwhile(current !null ¤t getkey(<key/or key too smallif(current getkey(=key/is this the linkreturn current/found itreturn link current current next/go to next item return null/didn' find it /end find(/public void displaylist(system out print("list (first-->last)")link current first/start at beginning of list while(current !null/until end of listcurrent displaylink()/print data current current next/move to next link system out println("")/end class sortedlist ///////////////////////////////////////////////////////////////class hashtable private sortedlist[hasharray/array of lists private int arraysize/public hashtable(int size/constructor arraysize sizehasharray new sortedlist[arraysize]/create array for(int = <arraysizej++/fill array hasharray[jnew sortedlist()/with lists |
23,068 | listing hash tables continued /public void displaytable(for(int = <arraysizej++/for each cellsystem out print( ")/display cell number hasharray[jdisplaylist()/display list /public int hashfunc(int key/hash function return key arraysize/public void insert(link thelink/insert link int key thelink getkey()int hashval hashfunc(key)/hash the key hasharray[hashvalinsert(thelink)/insert at hashval /end insert(/public void delete(int key/delete link int hashval hashfunc(key)/hash the key hasharray[hashvaldelete(key)/delete link /end delete(/public link find(int key/find link int hashval hashfunc(key)/hash the key link thelink hasharray[hashvalfind(key)/get link return thelink/return link //end class hashtable ///////////////////////////////////////////////////////////////class hashchainapp public static void main(string[argsthrows ioexception |
23,069 | listing continued int akeylink adataitemint sizenkeyspercell /get sizes system out print("enter size of hash table")size getint()system out print("enter initial number of items") getint()/make table hashtable thehashtable new hashtable(size)for(int = <nj++/insert data akey (int)(java lang math random(keyspercell size)adataitem new link(akey)thehashtable insert(adataitem)while(true/interact with user system out print("enter first letter of ")system out print("showinsertdeleteor find")char choice getchar()switch(choicecase ' 'thehashtable displaytable()breakcase ' 'system out print("enter key value to insert")akey getint()adataitem new link(akey)thehashtable insert(adataitem)breakcase ' 'system out print("enter key value to delete")akey getint()thehashtable delete(akey)breakcase ' 'system out print("enter key value to find") |
23,070 | listing hash tables continued akey getint()adataitem thehashtable find(akey)if(adataitem !nullsystem out println("found akey)else system out println("could not find akey)breakdefaultsystem out print("invalid entry\ ")/end switch /end while /end main(//public static string getstring(throws ioexception inputstreamreader isr new inputstreamreader(system in)bufferedreader br new bufferedreader(isr)string br readline()return //public static char getchar(throws ioexception string getstring()return charat( )//public static int getint(throws ioexception string getstring()return integer parseint( )///end class hashchainapp ///////////////////////////////////////////////////////////////here' the output when the user creates table with listsinserts items into itand displays it with the optionenter size of hash table enter initial number of items enter first letter of showinsertdeleteor finds |
23,071 | list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last) list (first-->last)if you insert more items into this tableyou'll see the lists grow longer but maintain their sorted order you can delete items as well we'll return to the question of when to use separate chaining when we discuss hash table efficiency later in this hash functions in this section we'll explore the issue of what makes good hash function and see whether we can improve the approach to hashing strings mentioned at the beginning of this quick computation good hash function is simpleso it can be computed quickly the major advantage of hash tables is their speed if the hash function is slowthis speed will be degraded hash function with many multiplications and divisions is not good idea (the bit-manipulation facilities of java or ++such as shifting bits right to divide number by multiple of can sometimes be used to good advantage the purpose of hash function is to take range of key values and transform them into index values in such way that the key values are distributed randomly across all the indices of the hash table keys may be completely random or not so random |
23,072 | hash tables random keys so-called perfect hash function maps every key into different table location this is only possible for keys that are unusually well behaved and whose range is small enough to be used directly as array indices (as in the employee-number example at the beginning of this in most cases neither of these situations existsand the hash function will need to compress larger range of keys into smaller range of index numbers the distribution of key values in particular database determines what the hash function needs to do in this we've assumed that the data was randomly distributed over its entire range in this situation the hash function index key arraysizeis satisfactory it involves only one mathematical operationand if the keys are truly randomthe resulting indices will be random tooand therefore well distributed non-random keys howeverdata is often distributed non-randomly imagine database that uses car-part numbers as keys perhaps these numbers are of the form -- this is interpreted as followsdigits - supplier number ( to currently up to digits - category code ( up to digits - month of introduction ( to digits - year of introduction ( to digits - serial number ( to but never exceeds digit toxic risk flag ( or digits - checksum (sum of other fieldsmodulo the key used for the part number shown would be , , , , , howeversuch keys are not randomly distributed the majority of numbers from to , , , , , can' actually occur (for examplesupplier numbers higher than category codes that aren' multiples of and months from to alsothe checksum is not independent of the other numbers some work should be done to these part numbers to help ensure that they form range of more truly random numbers |
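A quick experiment shows why such skewed keys interact badly with certain table sizes (a point taken up again shortly). The sketch below is illustrative only: it hashes keys that are all multiples of 5, much like the part-number fields above whose values are far from uniformly distributed, into a 15-cell table and into a 13-cell (prime) table, and counts how many distinct cells are ever used:

```java
import java.util.HashSet;

public class PrimeSizeDemo
   {
   // Count how many distinct array indices a set of keys actually lands on.
   static int distinctCells(int tableSize)
      {
      HashSet<Integer> used = new HashSet<>();
      for(int key = 0; key < 1000; key += 5)   // keys sharing the divisor 5
         used.add(key % tableSize);
      return used.size();
      }

   public static void main(String[] args)
      {
      // 15 shares the divisor 5 with every key: only 3 cells are ever used.
      System.out.println(distinctCells(15));
      // 13 is prime, so no key divides evenly into it: all 13 cells are used.
      System.out.println(distinctCells(13));
      }
   }
```

With the non-prime size, all the data piles up in 3 of the 15 cells; with the prime size, the same keys spread across every cell.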
23,073 | don' use non-data the key fields should be squeezed down until every bit counts for examplethe category codes should be changed to run from to alsothe checksum should be removed because it doesn' add any additional informationit' deliberately redundant various bit-twiddling techniques are appropriate for compressing the various fields in the key use all the data every part of the key (except non-dataas just describedshould contribute to the hash function don' just use the first four digits or some such expurgation the more data that contributes to the keythe more likely it is that the keys will hash evenly into the entire range of indices sometimes the range of keys is so large it overflows type int or type long variables we'll see how to handle overflow when we talk about hashing strings in moment to summarizethe trick is to find hash function that' simple and fastyet excludes the non-data parts of the key and uses all the data use prime number for the modulo base often the hash function involves using the modulo operator (%with the table size we've already seen that it' important for the table size to be prime number when using quadratic probe or double hashing howeverif the keys themselves may not be randomly distributedit' important for the table size to be prime number no matter what hashing system is used this is true becauseif many keys share divisor with the array sizethey may tend to hash to the same locationcausing clustering using prime table size eliminates this possibility for exampleif the table size is multiple of in our car-part examplethe category codes will all hash to index numbers that are multiples of howeverwith prime number such as you are guaranteed that no keys will divide evenly into the table size the moral is to examine your keys carefully and tailor your hash algorithm to remove any irregularity in the distribution of the keys hashing strings we saw at the beginning of this how to convert short strings to key numbers by 
multiplying digit codes by powers of a constant. In particular, we saw that the four-letter word cats could turn into a number by calculating

key = 3*27^3 + 1*27^2 + 20*27^1 + 19*27^0
This approach has the desirable attribute of involving all the characters in the input string. The calculated key value can then be hashed into an array index in the usual way:

index = (key) % arraySize;

Here's a Java method that finds the key value of a string:

public static int hashFunc1(String key)
   {
   int hashVal = 0;
   int pow27 = 1;                        // 1, 27, 27*27, etc.
   for(int j=key.length()-1; j>=0; j--)  // right to left
      {
      int letter = key.charAt(j) - 96;   // get char code
      hashVal += pow27 * letter;         // times power of 27
      pow27 *= 27;                       // next power of 27
      }
   return hashVal % arraySize;
   }  // end hashFunc1()

The loop starts at the rightmost letter in the word. If there are N letters, this is N-1. The numerical equivalent of the letter, according to the code we devised at the beginning of this chapter (a=1, b=2, and so on), is placed in letter. This is then multiplied by a power of 27, which is 1 for the letter at N-1, 27 for the letter at N-2, and so on.

The hashFunc1() method is not as efficient as it might be. Aside from the character conversion, there are two multiplications and an addition inside the loop. We can eliminate a multiplication by taking advantage of a mathematical identity called Horner's method. (Horner was an English mathematician, 1773-1827.) This states that an expression like

var4*n^4 + var3*n^3 + var2*n^2 + var1*n^1 + var0*n^0

can be written as

(((var4*n + var3)*n + var2)*n + var1)*n + var0

To evaluate this expression, we can start inside the innermost parentheses and work outward. If we translate this to a Java method, we have the following code:

public static int hashFunc2(String key)
   {
   int hashVal = key.charAt(0) - 96;
   for(int j=1; j<key.length(); j++)     // left to right
      {
      int letter = key.charAt(j) - 96;   // get char code
      hashVal = hashVal * 27 + letter;   // multiply and add
      }
   return hashVal % arraySize;           // mod
   }  // end hashFunc2()

Here we start with the leftmost letter of the word (which is somewhat more natural than starting on the right), and we have only one multiplication and one addition each time through the loop (aside from extracting the character from the string).

The hashFunc2() method unfortunately can't handle strings longer than about seven letters. Longer strings cause the value of hashVal to exceed the size of type int. (If we used type long, the same problem would still arise for somewhat longer strings.)

Can we modify this basic approach so we don't overflow any variables? Notice that the key we eventually end up with is always less than the array size because we apply the modulo operator. It's not the final index that's too big; it's the intermediate key values.

It turns out that with Horner's formulation we can apply the modulo operator (%) at each step in the calculation. This gives the same result as applying the modulo operator once at the end but avoids overflow. (It does add an operation inside the loop.) The hashFunc3() method shows how this looks:

public static int hashFunc3(String key)
   {
   int hashVal = 0;
   for(int j=0; j<key.length(); j++)     // left to right
      {
      int letter = key.charAt(j) - 96;   // get char code
      hashVal = (hashVal * 27 + letter) % arraySize;  // mod
      }
   return hashVal;                       // no mod
   }  // end hashFunc3()

This approach or something like it is normally taken to hash a string. Various bit-manipulation tricks can be played as well, such as using a base of 32 (or a larger power of 2) instead of 27, so that multiplication can be effected using the shift operator (<<), which is faster than the modulo operator (%).

You can use an approach similar to this to convert any kind of string to a number suitable for hashing. The strings can be words, names, or any other concatenation of characters.
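The claim that applying % at each step gives the same result as one % at the end follows from modular arithmetic, and it is easy to check. The sketch below (the method names are mine, and lowercase input is assumed) compares a hashFunc3-style loop against an overflow-proof reference computed exactly with java.math.BigInteger:

```java
import java.math.BigInteger;

public class HornerCheck
   {
   // Horner's method with % applied at every step: never overflows an int.
   static int hornerHash(String key, int arraySize)
      {
      int hashVal = 0;
      for(int j=0; j<key.length(); j++)
         {
         int letter = key.charAt(j) - 96;          // a=1, b=2, ...
         hashVal = (hashVal * 27 + letter) % arraySize;
         }
      return hashVal;
      }

   // Reference: compute the full polynomial exactly, then take % once.
   static int referenceHash(String key, int arraySize)
      {
      BigInteger val = BigInteger.ZERO;
      BigInteger base = BigInteger.valueOf(27);
      for(int j=0; j<key.length(); j++)
         val = val.multiply(base).add(BigInteger.valueOf(key.charAt(j) - 96));
      return val.mod(BigInteger.valueOf(arraySize)).intValue();
      }

   public static void main(String[] args)
      {
      // Far too long for an int without the step-by-step modulo:
      String s = "separatechaining";
      System.out.println(hornerHash(s, 1009) == referenceHash(s, 1009)); // true
      }
   }
```

The two methods agree for strings of any length, even ones whose full polynomial value would overflow every primitive type.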
Folding

Another reasonable hash function involves breaking the key into groups of digits and adding the groups. This ensures that all the digits influence the hash value. The number of digits in a group should correspond to the size of the array. That is, for an array of 1,000 items, use groups of three digits each.

For example, suppose you want to hash nine-digit Social Security numbers for linear probing. If the array size is 1,000, you would divide the nine-digit number into three groups of three digits. If a particular SSN was 123-45-6789, you would calculate a key value of 123+456+789 = 1368. You can use the % operator to trim such sums so the highest index is 999; in this case, 1368%1000 = 368. If the array size is 100, you would need to break the nine-digit key into four two-digit numbers and one one-digit number: 12+34+56+78+9 = 189, and 189%100 = 89.

It's easier to imagine how this works when the array size is a multiple of 10. However, for best results it should be a prime number, as we've seen for other hash functions. We'll leave an implementation of this scheme as an exercise.

Hashing Efficiency

We've noted that insertion and searching in hash tables can approach O(1) time. If no collision occurs, only a call to the hash function and a single array reference are necessary to insert a new item or find an existing item. This is the minimum access time.

If collisions occur, access times become dependent on the resulting probe lengths. Each cell accessed during a probe adds another time increment to the search for a vacant cell (for insertion) or for an existing cell. During an access, a cell must be checked to see whether it's empty, and, in the case of searching or deletion, whether it contains the desired item.

Thus, an individual search or insertion time is proportional to the length of the probe. This is in addition to a constant time for the hash function.

The average probe length (and therefore the average access time) is dependent on the load factor (the ratio of items in the table to the size of the table). As the load factor increases, probe lengths grow longer. We'll look at the relationship
between probe lengths and load factors for the various kinds of hash tables we've studied open addressing the loss of efficiency with high load factors is more serious for the various open addressing schemes than for separate chaining |
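Before turning to the probe-length formulas, the digit-folding function described a few paragraphs back is short enough to sketch here. This is one possible reading of the scheme (grouping the digits left to right; it is not one of the book's listings):

```java
public class FoldDemo
   {
   // Break a digit string into groups of groupSize digits, sum the groups,
   // and trim the sum with % so it fits the table.
   static int foldHash(String digits, int groupSize, int tableSize)
      {
      int sum = 0;
      for(int i = 0; i < digits.length(); i += groupSize)
         {
         int end = Math.min(i + groupSize, digits.length());
         sum += Integer.parseInt(digits.substring(i, end));   // one group
         }
      return sum % tableSize;
      }

   public static void main(String[] args)
      {
      // SSN 123-45-6789, 1,000-cell table: 123 + 456 + 789 = 1368; 1368 % 1000 = 368
      System.out.println(foldHash("123456789", 3, 1000));
      // 100-cell table: 12 + 34 + 56 + 78 + 9 = 189; 189 % 100 = 89
      System.out.println(foldHash("123456789", 2, 100));
      }
   }
```

Both results match the worked example in the folding discussion above.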
In open addressing, unsuccessful searches generally take longer than successful searches. During a probe sequence, the algorithm can stop as soon as it finds the desired item, which is, on the average, halfway through the probe sequence. On the other hand, it must go all the way to the end of the sequence before it's sure it can't find an item.

Linear Probing

The following equations show the relationship between probe length (P) and load factor (L) for linear probing. For a successful search it's

P = (1 + 1 / (1-L)) / 2

and for an unsuccessful search it's

P = (1 + 1 / (1-L)^2) / 2

These formulas are from Knuth (see Appendix B, "Further Reading"), and their derivation is quite complicated. The figure below shows the result of graphing these equations. (Figure: linear probe performance; average probe length versus load factor, for successful and unsuccessful searches. Both curves rise steeply as the load factor approaches 1.)
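These formulas are easy to tabulate. A small sketch of my own, plugging in the Knuth expressions just given:

```java
public class LinearProbeCost
   {
   // Average probe lengths for linear probing (Knuth), load factor L < 1.
   static double successful(double L)
      { return 0.5 * (1 + 1 / (1 - L)); }

   static double unsuccessful(double L)
      { return 0.5 * (1 + 1 / ((1 - L) * (1 - L))); }

   public static void main(String[] args)
      {
      for(double L : new double[]{ 1.0/3, 0.5, 2.0/3, 0.9 })
         System.out.printf("L=%.2f  success=%.2f  failure=%.2f%n",
                           L, successful(L), unsuccessful(L));
      }
   }
```

At L = 1/2 this prints 1.50 and 2.50 probes; at L = 2/3, 2.00 and 5.00; at L = 0.9 the unsuccessful figure has already climbed to 50.50, which is why linear probing demands low load factors.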
At a load factor of 1/2, a successful search takes 1.5 comparisons and an unsuccessful search takes 2.5. At a load factor of 2/3, the numbers are 2.0 and 5.0. At higher load factors the numbers become very large.

The moral, as you can see, is that the load factor must be kept under 2/3 and preferably under 1/2. On the other hand, the lower the load factor, the more memory is needed for a given amount of data. The optimum load factor in a particular situation depends on the trade-off between memory efficiency, which decreases with lower load factors, and speed, which increases.

Quadratic Probing and Double Hashing

Quadratic probing and double hashing share their performance equations. These equations indicate a modest superiority over linear probing. For a successful search, the formula (again from Knuth) is

P = -log2(1 - L) / L

For an unsuccessful search it is

P = 1 / (1 - L)

The figure below shows graphs of these formulas. At a load factor of 0.5, successful and unsuccessful searches both require an average of two probes. At a 2/3 load factor, the numbers are 2.37 and 3.0, and at 0.9 they're 3.7 and 10.0. Thus, somewhat higher load factors can be tolerated for quadratic probing and double hashing than for linear probing.

Separate Chaining

The efficiency analysis for separate chaining is different, and generally easier, than for open addressing.

We want to know how long it takes to search for or insert an item into a separate-chaining hash table. We'll assume that the most time-consuming part of these operations is comparing the search key of the item with the keys of other items in the list. We'll also assume that the time required to hash to the appropriate list, and to determine when the end of a list has been reached, is equivalent to one key comparison. Thus, all operations require 1+nComps time, where nComps is the number of key comparisons.

Let's say that the hash table consists of arraySize elements, each of which holds a list, and that N data items have been inserted in the table. Then, on the average, each list will hold N divided by arraySize items:

averageListLength = N / arraySize
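Stepping back to the open-addressing formulas for a moment, the quadratic-probe/double-hashing costs (successful: -log2(1 - L) / L; unsuccessful: 1 / (1 - L)) can be checked numerically with a small sketch of my own, building log2 from natural logarithms:

```java
public class QuadraticCost
   {
   // Average probe lengths for quadratic probing / double hashing (Knuth).
   static double log2(double x)
      { return Math.log(x) / Math.log(2); }

   static double successful(double L)
      { return -log2(1 - L) / L; }

   static double unsuccessful(double L)
      { return 1 / (1 - L); }

   public static void main(String[] args)
      {
      System.out.printf("%.2f %.2f%n", successful(0.5), unsuccessful(0.5));
      System.out.printf("%.2f %.2f%n", successful(0.9), unsuccessful(0.9));
      }
   }
```

At a load factor of 0.5 both formulas give 2.00 probes; at 0.9 an unsuccessful search averages 10 probes, far better than the linear-probing figure at the same load factor.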
(Figure: quadratic-probe and double-hashing performance; average probe length versus load factor, for successful and unsuccessful searches.)

This is the same as the definition of the load factor:

loadFactor = N / arraySize

so the average list length equals the load factor.

Searching

In a successful search, the algorithm hashes to the appropriate list and then searches along the list for the item. On the average, half the items must be examined before the correct one is located. Thus, the search time is

1 + loadFactor / 2

This is true whether the lists are ordered or not. In an unsuccessful search, if the lists are unordered, all the items must be examined, so the time is

1 + loadFactor
These formulas are graphed in the figure below. (Figure: separate-chaining performance; average probe length versus load factor, for successful and unsuccessful searches. Both grow linearly.)

For an ordered list, only half the items must be examined in an unsuccessful search, so the time is

1 + loadFactor / 2

the same as for a successful search.

In separate chaining it's typical to use a load factor of about 1.0 (the number of data items equals the array size). Smaller load factors don't improve performance significantly, but the time for all operations increases linearly with load factor, so going beyond 2 or so is generally a bad idea.

Insertion

If the lists are not ordered, insertion is always immediate, in the sense that no comparisons are necessary. The hash function must still be computed, so let's call the insertion time 1. If the lists are ordered, then, as with an unsuccessful search, an average of half the items in each list must be examined, so the insertion time is 1 + loadFactor / 2.

Open Addressing Versus Separate Chaining

If open addressing is to be used, double hashing seems to be the preferred system by a small margin over quadratic probing. The exception is the situation in which plenty of memory is available and the data won't expand after the table is created; in this case linear probing is somewhat simpler to implement and, if load factors below 0.5 are used, causes little performance penalty
23,081 | if the number of items that will be inserted in hash table isn' known when the table is createdseparate chaining is preferable to open addressing increasing the load factor causes major performance penalties in open addressingbut performance degrades only linearly in separate chaining when in doubtuse separate chaining its drawback is the need for linked list classbut the payoff is that adding more data than you anticipated won' cause performance to slow to crawl hashing and external storage at the end of trees and external storage,we discussed using btrees as data structures for external (disk-basedstorage let' look briefly at the use of hash tables for external storage recall from that disk file is divided into blocks containing many recordsand that the time to access block is much larger than any internal processing on data in main memory for these reasons the overriding consideration in devising an external storage strategy is minimizing the number of block accesses on the other handexternal storage is not expensive per byteso it may be acceptable to use large amounts of itmore than is strictly required to hold the dataif by so doing we can speed up access time this is possible using hash tables table of file pointers the central feature in external hashing is hash table containing block numberswhich refer to blocks in external storage the hash table is sometimes called an index (in the sense of book' indexit can be stored in main memory orif it is too largestored externally on diskwith only part of it being read into main memory at time even if it fits entirely in main memorya copy will probably be maintained on the disk and read into memory when the file is opened non-full blocks let' reuse the example from in which the block size is , bytesand record is bytes thusa block can hold records every entry in the hash table points to one of these blocks let' say there are blocks in particular file the index (hash tablein main memory holds pointers to the 
file blocks, which start at 0 at the beginning of the file and run up to 99.

In external hashing it's important that blocks don't become full. Thus, we might store an average of 8 records per block. Some blocks would have more records, and some fewer. There would be about 800 records in the file. This arrangement is shown in the figure below.
(Figure: external hashing. Keys such as dewitt, white, bercerra, chong, appleby, decosta, milano, and freeman hash to array indices; each index holds a block number, and each block in the file holds a limited number of records. Some blocks are full, some partly full, and some empty.)

All records with keys that hash to the same value are located in the same block. To find a record with a particular key, the search algorithm hashes the key, uses the hash value as an index to the hash table, gets the block number at that index, and reads the block.

This process is efficient because only one block access is necessary to locate a given item. The downside is that considerable disk space is wasted because the blocks are, by design, not full.

To implement this scheme, we must choose the hash function and the size of the hash table with some care so that a limited number of keys hash to the same value. In our example, we want only eight records per key, on the average.

Full Blocks

Even with a good hash function, a block will occasionally become full. This situation can be handled using variations of the collision-resolution schemes discussed for internal hash tables: open addressing and separate chaining
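A minimal in-memory model of this scheme might look as follows. This is entirely illustrative: the "disk" is a 2-D array, block numbers stand in for file offsets, readBlock counts accesses so the one-block-access property is visible, and the sketch ignores the full-block case discussed next:

```java
public class ExternalHashSketch
   {
   static final int NUM_BLOCKS = 100, RECORDS_PER_BLOCK = 16;
   static int[][] disk = new int[NUM_BLOCKS][RECORDS_PER_BLOCK]; // stand-in for the file
   static int[] count = new int[NUM_BLOCKS];   // records used per block
   static int blockReads = 0;                  // counts simulated disk accesses

   static int hashToBlock(int key)             // the in-memory index entry
      { return key % NUM_BLOCKS; }

   static int[] readBlock(int b)               // one "disk access"
      { blockReads++; return disk[b]; }

   static void insert(int key)                 // sketch: assumes block not full
      {
      int b = hashToBlock(key);
      disk[b][count[b]++] = key;
      }

   static boolean find(int key)
      {
      int b = hashToBlock(key);
      int[] block = readBlock(b);              // exactly one block access
      for(int i=0; i<count[b]; i++)
         if(block[i] == key)
            return true;
      return false;
      }

   public static void main(String[] args)
      {
      insert(1234);
      insert(5634);                            // both hash to block 34
      blockReads = 0;
      System.out.println(find(5634) + " after " + blockReads + " block read(s)");
      }
   }
```

Keys 1234 and 5634 both land in block 34, and either can be found with a single block read; the scan within the block happens in fast main memory.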
In open addressing, if, during insertion, one block is found to be full, the algorithm inserts the new record in a neighboring block. In linear probing this is the next block, but it could also be selected using a quadratic probe or double hashing. In separate chaining, special overflow blocks are made available; when a primary block is found to be full, the new record is inserted in the overflow block.

Full blocks are undesirable because an additional disk access is necessary for the second block; this doubles the access time. However, this is acceptable if it happens rarely.

We've discussed only the simplest hash table implementation for external storage. There are many more complex approaches that are beyond the scope of this book.

Summary

- A hash table is based on an array.
- The range of key values is usually greater than the size of the array.
- A key value is hashed to an array index by a hash function.
- An English-language dictionary is a typical example of a database that can be efficiently handled with a hash table.
- The hashing of a key to an already-filled array cell is called a collision.
- Collisions can be handled in two major ways: open addressing and separate chaining.
- In open addressing, data items that hash to a full array cell are placed in another cell in the array.
- In separate chaining, each array element consists of a linked list. All data items hashing to a given array index are inserted in that list.
- We discussed three kinds of open addressing: linear probing, quadratic probing, and double hashing.
- In linear probing the step size is always 1, so if x is the array index calculated by the hash function, the probe goes to x, x+1, x+2, x+3, and so on.
- The number of such steps required to find a specified item is called the probe length.
- In linear probing, contiguous sequences of filled cells appear. They are called primary clusters, and they reduce performance.
- In quadratic probing the offset from x is the square of the step number, so the probe goes to x, x+1, x+4, x+9, x+16, and so on.
- Quadratic probing eliminates primary clustering but suffers from the less severe secondary clustering.
- Secondary clustering occurs because all the keys that hash to the same value follow the same sequence of steps during a probe.
- All keys that hash to the same value follow the same probe sequence because the step size does not depend on the key, but only on the hash value.
- In double hashing the step size depends on the key and is obtained from a secondary hash function.
- If the secondary hash function returns a value s in double hashing, the probe goes to x, x+s, x+2s, x+3s, x+4s, and so on, where s depends on the key but remains constant during the probe.
- The load factor is the ratio of data items in a hash table to the array size.
- The maximum load factor in open addressing should be around 0.5. For double hashing at this load factor, searches will have an average probe length of 2.
- Search times go to infinity as load factors approach 1.0 in open addressing.
- It's crucial that an open-addressing hash table does not become too full.
- A load factor of 1.0 is appropriate for separate chaining. At this load factor a successful search has an average probe length of 1.5, and an unsuccessful search, 2.0.
- Probe lengths in separate chaining increase linearly with load factor.
- A string can be hashed by multiplying each character by a different power of a constant, adding the products, and using the modulo operator (%) to reduce the result to the size of the hash table.
- To avoid overflow, we can apply the modulo operator at each step in the process, if the polynomial is expressed using Horner's method.
- Hash table sizes should generally be prime numbers. This is especially important in quadratic probing and separate chaining.
- Hash tables can be used for external storage. One way to do this is to have the elements in the hash table contain disk-file block numbers.

Questions

These questions are intended as a self-test for readers. Answers may be found in the appendix.
1. Using big O notation, say how long it takes (ideally) to find an item in a hash table.
2. A __________ transforms a range of key values into a range of index values.
3. Open addressing refers to
   a. keeping many of the cells in the array unoccupied.
   b. keeping an open mind about which address to use.
   c. probing at cell x+1, x+2, and so on until an empty cell is found.
   d. looking for another location in the array when the one you want is occupied.
4. Using the next available position after an unsuccessful probe is called __________.
5. What are the first five step sizes in quadratic probing?
6. Secondary clustering occurs because
   a. many keys hash to the same location.
   b. the sequence of step lengths is always the same.
   c. too many items with the same key are inserted.
   d. the hash function is not perfect.
7. Separate chaining involves the use of a __________ at each location.
8. A reasonable load factor in separate chaining is __________.
9. True or false: A possible hash function for strings involves multiplying each character by an ever-increasing power.
10. The best technique when the amount of data is not well known is
    a. linear probing.
    b. quadratic probing.
    c. double hashing.
    d. separate chaining.
11. If digit folding is used in a hash function, the number of digits in each group should reflect __________.
12. True or false: In linear probing, an unsuccessful search takes longer than a successful search.
13. In separate chaining the time to insert a new item
    a. increases linearly with the load factor.
    b. is proportional to the number of items in the table.
    c. is proportional to the number of lists.
    d. is proportional to the percentage of full cells in the array.

14. True or False: In external hashing, it's important that the records don't become full.

15. In external hashing, all records with keys that hash to the same value are located in __________.

Experiments

Carrying out these experiments will help to provide insights into the topics covered in the chapter. No programming is involved.

- In linear probing, the time for an unsuccessful search is related to the cluster size. Using the Hash workshop applet, fill the table partway and find the average cluster size. Consider an isolated cell (that is, with empty cells on both sides) to be a cluster of size 1. To find the average, you could count the number of cells in each cluster and divide by the number of clusters, but there's an easier way. What is it? Repeat this experiment for a half-dozen fills of the same size and average the cluster sizes. Then repeat the entire process for higher load factors. Do your results agree with the chart shown earlier in the chapter?

- With the HashDouble workshop applet, make a small quadratic hash table with a size that is not a prime number. Fill it very full. Now search for non-existent key values. Try different keys until you find one that causes the quadratic probe to go into an unending sequence. This happens because the quadratic step size, modulo a non-prime array size, forms a repeating series. The moral: make your array size a prime number.

- With the HashChain applet, create an array and fill it with enough items that the load factor exceeds 1. Inspect the linked lists that are displayed. Add the lengths of all these linked lists and divide by the number of lists to find the average list length. On average, you'll need to search this length in an unsuccessful search. (Actually, there's a quicker way to find this average length. What is it?)
Programming Projects

Writing programs to solve the Programming Projects helps to solidify your understanding of the material and demonstrates how the chapter's concepts are applied. (As noted in the Introduction, qualified instructors may obtain completed solutions to the Programming Projects on the publisher's Web site.)

- Modify the hash.java program to use quadratic probing.

- Implement a linear probe hash table that stores strings. You'll need a hash function that converts a string to an index number; see the section "Hashing Strings" in this chapter. Assume the strings will be lowercase words, so 26 characters will suffice.

- Write a hash function to implement a digit-folding approach in the hash function (as described in the "Hash Functions" section of this chapter). Your program should work for any array size and any key length. Use linear probing. Accessing a group of digits in a number may be easier than you think. Does it matter if the array size is not a multiple of 10?

- Write a rehash() method for the hash.java program. It should be called by insert() to move the entire hash table to an array about twice as large whenever the load factor exceeds 0.5. The new array size should be a prime number. Refer to the section "Expanding the Array" in this chapter. Don't forget you'll need to handle items that have been "deleted," that is, written over with -1.

- Instead of using a linked list to resolve collisions, as in separate chaining, use a binary search tree. That is, create a hash table that is an array of trees. You can use the hashChain.java program as a starting point and the Tree class from the tree.java program in the binary trees chapter. To display a small tree-based hash table, you could use an inorder traversal of each tree. The advantage of a tree over a linked list is that it can be searched in O(log N) instead of O(N) time. This time savings can be a significant advantage if very high load factors are encountered: checking N items takes a maximum of N comparisons in a list but only about log N in a tree. Duplicates can present problems in both trees and hash tables, so add some code that prevents a duplicate key from being inserted in the hash table. (Beware: The find() method in Tree assumes a non-empty tree.) To shorten the listing for this program, you can forget about deletion, which for trees requires a lot of code.
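Two of the projects above involve pieces that can be sketched independently: the string-hashing project (Horner's method with the modulo operator applied at each step, as described in the "Hashing Strings" section) and the rehash() project's need for a prime array size. The following is a minimal sketch; the class and method names are my own, and the base 27 (treating the 26 lowercase letters as base-27 digits) and the table size 1009 are illustrative choices.

```java
// Sketches of helpers for the string-hashing and rehash() projects.
public class HashHelpers {
    // Horner's method with modulo at each step, so the intermediate
    // value never overflows. Lowercase letters map to 1..26.
    public static int stringHash(String key, int arraySize) {
        int hashVal = 0;
        for (int i = 0; i < key.length(); i++) {
            int letter = key.charAt(i) - 'a' + 1;          // 'a'..'z' -> 1..26
            hashVal = (hashVal * 27 + letter) % arraySize; // mod each step
        }
        return hashVal;
    }

    // For rehash(): find the first prime >= min as the new array size.
    // Trial division is fine here; it runs only when the table resizes.
    public static int nextPrime(int min) {
        for (int n = Math.max(min, 2); ; n++)
            if (isPrime(n))
                return n;
    }

    private static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0)
                return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(stringHash("cats", 1009));  // prints 806
        System.out.println(nextPrime(2 * 59));         // prints 127
    }
}
```

For example, a table of size 59 that is rehashed to about twice its size would grow to 127, the first prime at or above 118.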
Heaps

In this chapter:
- Introduction to Heaps
- Java Code for Heaps
- A Tree-Based Heap
- Heapsort

We saw in "Stacks and Queues" that a priority queue is a data structure that offers convenient access to the data item with the smallest (or largest) key. Priority queues may be used for task scheduling in computers, where some programs and activities should be executed sooner than others and are therefore given higher priority.

Another example is in weapons systems, say in a navy cruiser. Numerous threats (airplanes, missiles, submarines, and so on) are detected and must be prioritized. For example, a missile that's a short distance from the cruiser is assigned a higher priority than an aircraft a long distance away, so that countermeasures (surface-to-air missiles, for example) can deal with it first.

Priority queues are also used internally in other computer algorithms. In "Weighted Graphs" we'll see priority queues used in graph algorithms, such as Dijkstra's algorithm.

A priority queue is an abstract data type (ADT) offering methods that allow removal of the item with the maximum (or minimum) key value, insertion, and sometimes other operations. As with other ADTs, priority queues can be implemented using a variety of underlying structures. Earlier in the book we saw a priority queue implemented as an ordered array. The trouble with that approach is that, even though removal of the largest item is accomplished in fast O(1) time, insertion requires slow O(N) time, because an average of half the items in the array must be moved to insert the new one in order.
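The ordered-array priority queue just mentioned can be sketched as follows. This is my own minimal version, not the book's listing: removal of the maximum is O(1) because the largest key sits in the last occupied cell of the ordered array, while insertion is O(N) because larger items must be shifted to make room.

```java
// A sketch of a priority queue backed by an ordered array (ascending
// order, so the maximum key is always in the last occupied cell).
public class OrderedArrayPQ {
    private final long[] a;
    private int n = 0;

    public OrderedArrayPQ(int maxSize) { a = new long[maxSize]; }

    public void insert(long key) {       // O(N): shift larger keys up
        int i = n - 1;
        while (i >= 0 && a[i] > key) {   // walk down past larger keys,
            a[i + 1] = a[i];             // shifting each one up a cell
            i--;
        }
        a[i + 1] = key;                  // drop the new key into the gap
        n++;
    }

    public long removeMax() {            // O(1): max is the last cell
        return a[--n];
    }

    public static void main(String[] args) {
        OrderedArrayPQ pq = new OrderedArrayPQ(10);
        pq.insert(30); pq.insert(50); pq.insert(10); pq.insert(40);
        System.out.println(pq.removeMax());   // prints 50
        System.out.println(pq.removeMax());   // prints 40
    }
}
```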
In this chapter we'll describe another structure that can be used to implement a priority queue: the heap. A heap is a kind of tree. It offers both insertion and deletion in O(log N) time. Thus, it's not quite as fast for deletion, but much faster for insertion. It's the method of choice for implementing priority queues where speed is important and there will be many insertions.

NOTE: Don't confuse the term heap, used here for a special kind of binary tree, with the same term used to mean the portion of computer memory available to a programmer with new in languages like Java and C++.

Introduction to Heaps

A heap is a binary tree with these characteristics:

- It's complete. This means it's completely filled in, reading from left to right across each row, although the last row need not be full. The figure below shows complete and incomplete trees.
- It's (usually) implemented as an array. We described in the binary trees chapter how binary trees can be stored in arrays, rather than using references to connect the nodes.
- Each node in a heap satisfies the heap condition, which states that every node's key is larger than (or equal to) the keys of its children.

[Figure: a) a complete and b) an incomplete binary tree.]

The next figure shows a heap and its relationship to the array used to implement it. The array is what's stored in memory; the heap is only a conceptual representation. Notice that the tree is complete and that the heap condition is satisfied for all the nodes.
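The heap condition can be checked directly on the underlying array. The following is a minimal sketch with my own class and method names; because a heap is complete, completeness is implicit in using cells 0 through n-1.

```java
// A sketch of a check that an array of n keys satisfies the heap
// condition: every node's key is >= the keys of both its children.
public class HeapCheck {
    public static boolean isMaxHeap(int[] keys, int n) {
        for (int i = 0; i < n; i++) {
            int left = 2 * i + 1, right = 2 * i + 2;   // child indices
            if (left < n && keys[left] > keys[i]) return false;
            if (right < n && keys[right] > keys[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] good = {100, 90, 80, 30, 60, 50, 70, 20, 10, 40, 55};
        int[] bad  = {100, 90, 80, 95, 60};  // 95 exceeds its parent 90
        System.out.println(isMaxHeap(good, good.length));  // prints true
        System.out.println(isMaxHeap(bad, bad.length));    // prints false
    }
}
```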
[Figure: a heap, with the root and last node labeled, and its underlying array.]

The fact that a heap is a complete binary tree implies that there are no "holes" in the array used to represent it. Every cell is filled, from 0 to N-1.

We'll assume in this chapter that the maximum key (rather than the minimum) is in the root. A priority queue based on such a heap is a descending-priority queue. (We discussed ascending-priority queues in "Stacks and Queues.")

Priority Queues, Heaps, and ADTs

We'll be talking about heaps in this chapter, although heaps are mostly used to implement priority queues. However, there's a very close relationship between a priority queue and the heap used to implement it. This relationship is demonstrated in the following abbreviated code:

class Heap
   {
   private Node heapArray[];
   public void insert(Node nd)
      { ... }
   public Node remove()
      { ... }
   }
class PriorityQueue
   {
   private Heap theHeap;
   public void insert(Node nd)
      { theHeap.insert(nd); }
   public Node remove()
      { return theHeap.remove(); }
   }

The methods for the PriorityQueue class are simply wrapped around the methods for the underlying Heap class; they have the same functionality. This example makes it conceptually clear that a priority queue is an ADT that can be implemented in a variety of ways, while a heap is a more fundamental kind of data structure. In this chapter, for simplicity, we'll simply show the heap's methods without the PriorityQueue wrapping.

Weakly Ordered

A heap is weakly ordered compared with a binary search tree, in which all a node's left descendants have keys less than all its right descendants. This implies, as we saw, that in a binary search tree you can traverse the nodes in order by following a simple algorithm. In a heap, traversing the nodes in order is difficult because the organizing principle (the heap condition) is not as strong as the organizing principle in a tree. All you can say about a heap is that, along every path from the root to a leaf, the nodes are arranged in descending order. As you can see in the figure above, the nodes to the left or right of a given node, or on higher or lower levels (provided they're not on the same path), can have keys larger or smaller than the node's key. Except where they share the same nodes, paths are independent of each other.

Because heaps are weakly ordered, some operations are difficult or impossible. Besides its failure to support traversal, a heap also does not allow convenient searching for a specified key. This is because there's not enough information to decide which of a node's two children to pick in trying to descend to a lower level during the search. It follows that a node with a specified key can't be deleted, at least in O(log N) time, because there's no way to find it. (These operations can be carried out by looking at every cell of the array in sequence, but this is only possible in slow O(N) time.)

Thus, the organization of a heap may seem dangerously close to randomness. Nevertheless, the ordering is
just sufficient to allow fast removal of the maximum node and fast insertion of new nodes. These operations are all that's needed to use
a heap as a priority queue. We'll discuss briefly how these operations are carried out and then see them in action in a workshop applet.

Removal

Removal means removing the node with the maximum key. This node is always the root, so removing it is easy. The root is always at index 0 of the heap array:

   maxNode = heapArray[0];

The problem is that once the root is gone, the tree is no longer complete; there's an empty cell. This "hole" must be filled in. We could shift all the elements in the array down one cell, but there's a much faster approach. Here are the steps for removing the maximum node:

1. Remove the root.
2. Move the last node into the root.
3. Trickle the last node down until it's below a larger node and above a smaller one.

The last node is the rightmost node in the lowest occupied level of the tree. This corresponds to the last filled cell in the array (see the last node in the figure above). To copy this node into the root is straightforward:

   heapArray[0] = heapArray[N-1];
   N--;  // the removal of the root decreases the size of the array by one

To trickle (the terms bubble or percolate are also used) a node up or down means to move it along a path step by step, swapping it with the node ahead of it, checking at each step to see whether it's in its proper position. In step 3 the node at the root is too small for that position, so it's trickled down the heap into its proper place. We'll see the code for this later.

Step 2 restores the completeness characteristic of the heap (no holes), and step 3 restores the heap condition (every node larger than its children). The removal process is shown in the next figure. In part a) the last node is copied to the root, which is removed. In parts b), c), and d) the last node is trickled down to its appropriate position, which happens to be on the bottom row. (This isn't always the case; the trickle-down process may stop at a middle row as well.) Part e) shows the node in its correct position.
23,093 | heaps removed ab last node swap cd swap swap figure removing the maximum node at each position of the target node the trickle-down algorithm checks which child is larger it then swaps the target node with the larger child if it tried to swap with the smaller childthat child would become the parent of larger childwhich violates the heap condition correct and incorrect swaps are shown in figure figure which child to swap swapping the smaller child swapping the larger child wrong correct |
Insertion

Inserting a node is also easy. Insertion uses trickle up, rather than trickle down. Initially, the node to be inserted is placed in the first open position at the end of the array, increasing the array size by one:

   heapArray[N] = newNode;
   N++;

The problem is that it's likely that this will destroy the heap condition. This happens if the new node's key is larger than its newly acquired parent. Because this parent is on the bottom of the heap, it's likely to be small, so the new node is likely to be larger. Thus, the new node will usually need to be trickled upward until it's below a node with a larger key and above a node with a smaller key. The insertion process is shown in the next figure.

[Figure: inserting a node; the new node is placed after the last node and swapped upward.]
The trickle-up algorithm is somewhat simpler than trickling down because two children don't need to be compared. A node has only one parent, and the target node is simply swapped with its parent. In the figure the final correct position for the new node happens to be the root, but a new node can also end up at an intermediate level.

By comparing the removal and insertion figures, you can see that if you remove a node and then insert the same node, the result is not necessarily the restoration of the original heap. A given set of nodes can be arranged in many valid heaps, depending on the order in which nodes are inserted.

Not Really Swapped

In the preceding figures we showed nodes being swapped in the trickle-down and trickle-up processes. Swapping is conceptually the easiest way to understand insertion and deletion, and indeed some heap implementations actually use swaps. The next figure shows a simplified version of swaps used in the trickle-down process. After three swaps, node A will end up in position D, and nodes B, C, and D will each move up one level.

[Figure: trickling with a) swaps and b) copies.]

However, a swap requires three copies, so the three swaps shown in part a) take nine copies. We can reduce the total number of copies necessary in a trickle algorithm by substituting copies for swaps. Part b) shows how five copies do the work of three swaps. First, node A is saved temporarily. Then B is copied over A, C is copied over B, and D is copied over C. Finally, A is copied back from temporary storage onto position D. We have reduced the number of copies from nine to five.

In the figure we're moving node A three levels. The savings in copy time grow larger as the number of levels increases because the two copies from and to temporary
storage account for less of the total. For a large number of levels, the savings in the number of copies approach a factor of three.

Another way to visualize trickle-up and trickle-down processes being carried out with copies is to think of a "hole" (the absence of a node) moving down in trickle up and up in trickle down. For example, in part b) of the figure, copying A to temp creates a "hole" at A. (The "hole" actually consists of the earlier copy of the node that will be moved; it's still there, but it's irrelevant.) Copying B to A moves the "hole" from A to B, in the opposite direction from the node. Step by step the "hole" trickles downward.

The Heap Workshop Applet

The Heap Workshop applet demonstrates the operations we discussed in the preceding section: it allows you to insert new items into a heap and remove the largest item. In addition, you can change the priority of a given item.

When you start up the Heap Workshop applet, you'll see a display similar to the figure below.

[Figure: the Heap Workshop applet.]

There are four buttons: Fill, Chng, Rem, and Ins, for fill, change, remove, and insert. Let's see how they work.

The Fill Button

The heap contains 10 nodes when the applet is first started. Using the Fill button, you can create a new heap with any number of nodes from 1 to 31. Press Fill repeatedly, and type in the desired number when prompted.
The Change Button

It's possible to change the priority of an existing node. This procedure is useful in many situations. For example, in our cruiser example, a threat such as an approaching airplane may reverse course away from the carrier; its priority should be lowered to reflect this new development, although the aircraft would remain in the priority queue until it was out of radar range.

To change the priority of a node, repeatedly press the Chng button. When prompted, click on the node with the mouse. This will position the red arrow on the node. Then, when prompted, type in the node's new priority. If the node's priority is raised, it will trickle upward to a new position. If the priority is lowered, the node will trickle downward.

The Remove Button

Repeatedly pressing the Rem button causes the node with the highest key, located at the root, to be removed. You'll see it disappear and then be replaced by the last (rightmost) node on the bottom row. Finally, this node will trickle down until it reaches the position that reestablishes the heap order.

The Insert Button

A new node is always inserted initially in the first available array cell, just to the right of the last node on the bottom row of the heap. From there it trickles up to the appropriate position. Pressing the Ins button repeatedly carries out this operation.

Java Code for Heaps

The complete code for heap.java is shown later in this section. Before we get to it, we'll focus on the individual operations of insertion, removal, and change. Here are some points to remember from the binary trees chapter about representing a tree as an array. For a node at index x in the array:

- Its parent is (x-1) / 2.
- Its left child is 2*x + 1.
- Its right child is 2*x + 2.

These relationships can be seen in the heap-and-array figure shown earlier.

NOTE: Remember that the / symbol, when applied to integers, performs integer division, in which the answer is rounded to the lowest integer.
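The index relationships above can be written as tiny methods; the following is a minimal sketch with my own class and method names, showing the integer-division behavior the note mentions.

```java
// A sketch of the array-index relationships for a heap stored in an
// array: parent at (x-1)/2 (integer division), children at 2x+1, 2x+2.
public class HeapIndex {
    public static int parent(int x)     { return (x - 1) / 2; }
    public static int leftChild(int x)  { return 2 * x + 1; }
    public static int rightChild(int x) { return 2 * x + 2; }

    public static void main(String[] args) {
        // Node at index 5: parent is (5-1)/2 = 2; children are 11, 12.
        System.out.println(parent(5));      // prints 2
        System.out.println(leftChild(5));   // prints 11
        System.out.println(rightChild(5));  // prints 12
        // Integer division rounds down, so the parent of 6 is also 2.
        System.out.println(parent(6));      // prints 2
    }
}
```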
Insertion

We place the trickle-up algorithm in its own method. The insert() method, which includes a call to this trickleUp() method, is straightforward:

public boolean insert(int key)
   {
   if(currentSize==maxSize)             // if array is full,
      return false;                     //    failure
   Node newNode = new Node(key);        // make a new node
   heapArray[currentSize] = newNode;    // put it at the end
   trickleUp(currentSize++);            // trickle it up
   return true;                         // success
   }  // end insert()

We check to make sure the array isn't full and then make a new node using the key value passed as an argument. This node is inserted at the end of the array. Finally, the trickleUp() routine is called to move this node up to its proper position.

In trickleUp() (shown below) the argument is the index of the newly inserted item. We find the parent of this position and then save the node in a variable called bottom. Inside the while loop, the variable index will trickle up the path toward the root, pointing to each node in turn. The while loop runs as long as we haven't reached the root (index > 0) and the key (iData) of index's parent is less than the new node.

The body of the while loop executes one step of the trickle-up process. It first copies the parent node into index, moving the node down. (This has the effect of moving the "hole" upward.) Then it moves index upward by giving it its parent's index, and giving its parent its parent's index.

public void trickleUp(int index)
   {
   int parent = (index-1) / 2;
   Node bottom = heapArray[index];

   while( index > 0 &&
          heapArray[parent].getKey() < bottom.getKey() )
      {
      heapArray[index] = heapArray[parent];  // move node down
      index = parent;                        // move index up
      parent = (parent-1) / 2;               // parent <- its parent
      }  // end while
   heapArray[index] = bottom;
   }  // end trickleUp()
Finally, when the loop has exited, the newly inserted node, which has been temporarily stored in bottom, is inserted into the cell pointed to by index. This is the first location where it's not larger than its parent, so inserting it here satisfies the heap condition.

Removal

The removal algorithm is also not complicated if we subsume the trickle-down algorithm into its own routine. We save the node from the root, copy the last node (at index currentSize-1) into the root, and call trickleDown() to place this node in its appropriate location:

public Node remove()                    // delete item with max key
   {                                    // (assumes non-empty list)
   Node root = heapArray[0];                // save the root
   heapArray[0] = heapArray[--currentSize]; // root <- last
   trickleDown(0);                          // trickle down the root
   return root;                             // return removed node
   }  // end remove()

This method returns the node that was removed; the user of the heap usually needs to process it in some way.

The trickleDown() routine is more complicated than trickleUp() because we must determine which of the two children is larger. First, we save the node at index in a variable called top. If trickleDown() has been called from remove(), index is the root, but, as we'll see, it can be called from other routines as well.

The while loop will run as long as index is not on the bottom row, that is, as long as it has at least one child. Within the loop we check if there is a right child (there may be only a left), and if so, compare the children's keys, setting largerChild appropriately. Then we check if the key of the original node (now in top) is greater than that of largerChild; if so, the trickle-down process is complete and we exit the loop.

public void trickleDown(int index)
   {
   int largerChild;
   Node top = heapArray[index];         // save root
   while(index < currentSize/2)         // while node has at
      {                                 //    least one child,
      int leftChild = 2*index+1;
      int rightChild = leftChild+1;
                                        // find larger child
      if(rightChild < currentSize &&
         heapArray[leftChild].getKey() <
         heapArray[rightChild].getKey())
         largerChild = rightChild;
      else
         largerChild = leftChild;
                                        // top >= largerChild?
      if(top.getKey() >= heapArray[largerChild].getKey())
         break;
                                        // shift child up
      heapArray[index] = heapArray[largerChild];
      index = largerChild;              // go down
      }  // end while
   heapArray[index] = top;              // index <- root
   }  // end trickleDown()
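The insert and remove behavior described in this section can also be observed with the standard library's heap-based java.util.PriorityQueue, which is a min-heap by default; a reversed comparator gives the descending-priority behavior used in this chapter. A minimal sketch:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// A sketch of max-heap behavior using the standard library's
// heap-based PriorityQueue (min-heap by default, reversed here).
public class MaxHeapDemo {
    // Insert the keys, then remove repeatedly; each removal yields the
    // current maximum, so the keys come out in descending order.
    public static String drainDescending(int[] keys) {
        PriorityQueue<Integer> pq =
            new PriorityQueue<>(Comparator.reverseOrder());
        for (int k : keys)
            pq.add(k);                      // trickle up: O(log n)
        StringBuilder sb = new StringBuilder();
        while (!pq.isEmpty()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(pq.remove());         // trickle down: O(log n)
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] keys = {70, 40, 50, 20, 60, 100, 80, 30, 10, 90};
        System.out.println(drainDescending(keys));
        // prints 100 90 80 70 60 50 40 30 20 10
    }
}
```

Note that, as with the heap in this chapter, the library queue supports only fast insertion and removal of the extreme item; searching it for an arbitrary key is still a linear scan.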