Notes

Next, consider deleting a key that sits in a leaf, where the leaf does not have an extra key. The deletion results in a node with only one key, which is not acceptable for a B-tree of this order. If the sibling node to the immediate left or right has an extra key, we can borrow: a key is moved down from the parent, and a key is moved up from this sibling. In our specific case, the sibling to the right has an extra key, so the successor of the deleted key (the last key in the node where the deletion occurred) is moved down from the parent, and the sibling's extra key is moved up. (Of course, the remaining keys are shifted over so that each key can be inserted in its proper place.)

Finally, let's delete a key that causes lots of problems. Although the key is in a leaf, the leaf has no extra keys, nor do the siblings to the immediate right or left. In such a case the leaf has to be combined with one of these two siblings. This includes moving down the parent's key that was between those two leaves. In our example, we combine the leaf containing the deleted key's neighbour with the adjacent leaf, and we also move down the parent's key.

Lovely Professional University
Of course, you immediately see that the parent node now contains only one key. This is not acceptable. If this problem node had a sibling to its immediate left or right with a spare key, then we would again "borrow" a key. Suppose for the moment that the right sibling had one more key in it. We would then move a key down from the parent to the node with too few keys, and move the sibling's key up to where the parent's key had been. However, the old left subtree of the moved key would then have to become the right subtree of the key moved down; in other words, that subtree would be attached via the pointer field to the right of the key's new location. Since in our example we have no way to borrow a key from a sibling, we must again combine with the sibling and move down a key from the parent. In this case, the tree shrinks in height by one.

B-tree Algorithms

A B-tree is a data structure that maintains an ordered set of data and allows efficient operations to find, delete, insert, and browse the data. In this discussion, each piece of data stored in a B-tree will be called a "key", because each key is unique and can occur in the B-tree in only one location.

A B-tree consists of "node" records containing the keys, and pointers that link the nodes of the B-tree together. Every B-tree is of some "order n", meaning nodes contain from n to 2n keys, and nodes are thereby always at least half full of keys. Keys are kept in sorted order within each node. A corresponding list of pointers is effectively interspersed between the keys to indicate where to search for a key if it isn't in the current node. A node containing k keys always also contains k + 1 pointers.
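As a concrete illustration of these definitions, here is a minimal sketch (the class and method names are hypothetical, not part of the text) of a node record together with a check of the key-count and pointer-count invariants described above:

```python
class BTreeNode:
    """One node of an order-n B-tree: n..2n keys (root excepted);
    a node with k keys has exactly k + 1 child pointers."""

    def __init__(self, keys, children=None):
        self.keys = keys                  # kept in sorted order
        self.children = children or []    # empty list for a leaf

    def is_leaf(self):
        return not self.children

    def satisfies_invariants(self, order, is_root=False):
        k = len(self.keys)
        if sorted(self.keys) != self.keys:            # keys must stay sorted
            return False
        if not is_root and not (order <= k <= 2 * order):
            return False                              # at least half full
        # an internal node with k keys must have exactly k + 1 children
        return self.is_leaf() or len(self.children) == k + 1


# a tiny order-1 fragment: root [Marin] with children [Aptos] and [Seattle]
root = BTreeNode(["Marin"], [BTreeNode(["Aptos"]), BTreeNode(["Seattle"])])
```

The root is exempt from the minimum-key rule, mirroring the special case the text describes for a freshly created tree.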
Example: Here is a portion of a B-tree of order 2 (nodes have at least 2 keys and 3 pointers). Nodes are delimited with [square brackets]. The keys are city names, and are kept sorted in each node. On either side of every key are pointers linking the key to subsequent nodes:

    start here ->      [Chicago | Hoboken]
                      /          |          \
      [Aptos Boston]   [Denver Detroit]   [San-Jose Seattle]

To find the key "Dallas", we begin searching at the top "root" node. "Dallas" is not in the node but sorts between "Chicago" and "Hoboken", so we follow the middle pointer to the next node. Again, "Dallas" is not in the node but sorts before "Denver", so we follow that node's first pointer down to the next node. Eventually, we will either locate the key, or encounter a "leaf" node at the bottom level of the B-tree with no pointers to any lower nodes and without the key we want, indicating the key is nowhere in the B-tree.

Below is another fragment, of an order 1 B-tree (nodes have at least 1 key and 2 pointers). Searching for the key "Chicago" begins at "Marin", follows the first pointer to "Aptos" (since Chicago sorts before Marin), then follows that node's second pointer down to the next level (since Chicago sorts after Aptos):

          [Marin]
         /        \
    [Aptos]    [Seattle]

Searching a B-tree for a key always begins at the root node and follows pointers from node to node until either the key is located, or the search fails because a leaf node is reached and there are no more pointers to follow.

B-trees grow when new keys are inserted. Since the root node initially begins with just one key, the root node is a special exception and the only node allowed to have fewer than n keys in an order n B-tree. Below is an order 2 B-tree with integer keys. Except for the special root node, order 2 requires every node to have from 2 to 4 keys and 3 to 5 pointers. Empty slots, where future keys have not yet been stored, are marked in the figure.

[Figure: an order 2 B-tree with integer keys; unused key slots shown empty]
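The search procedure just described can be sketched as follows. The dict-based node layout and the `search` helper are illustrative assumptions, not a prescribed implementation:

```python
from bisect import bisect_left

def search(node, key):
    """Search a B-tree for key.
    Nodes are dicts: {'keys': sorted list, 'children': list of nodes}."""
    while node:
        i = bisect_left(node["keys"], key)
        if i < len(node["keys"]) and node["keys"][i] == key:
            return True                 # key found in this node
        if not node["children"]:
            return False                # reached a leaf without the key
        node = node["children"][i]      # descend between neighbouring keys
    return False


# the city-name example: root [Chicago | Hoboken] over three leaves
leaf = lambda *ks: {"keys": list(ks), "children": []}
tree = {"keys": ["Chicago", "Hoboken"],
        "children": [leaf("Aptos", "Boston"),
                     leaf("Denver", "Detroit"),
                     leaf("San-Jose", "Seattle")]}
```

Searching for "Dallas" in this tree follows the middle pointer, then the first pointer, and fails at a leaf, exactly as in the worked example.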
Notes

To insert a key, we first simply search for that key. If it is found, the key is already in the tree and the insertion is superfluous. Otherwise, we must end up at a leaf node at the bottom level of the tree where the key would be stored. In the simple case, the leaf node has room for another key, so the key is inserted in the leaf node in sorted order:

[Figure: the leaf node after the simple insertion]

Now suppose we insert a key whose initial search leads to a leaf node that is already full with 4 keys. Adding another key would violate the rule that order 2 B-trees can't have more than 4 keys per node. Because of this "overflow" condition, the leaf node is split into two leaf nodes: the leftmost 2 keys are put in the left node, the rightmost 2 keys are put in the right node, and the middle key is "promoted" by inserting it into the parent node above the leaf.

[Figures: the tree before and after the split]

In this case, the parent node had room for the promoted key. But if the parent node were also already full, then it too would have to split. Indeed, splitting may propagate all the way up to the root node. When the root splits, the B-tree grows in height by one level, and a new root with a single promoted key is formed (again a situation in which a root node has fewer than n keys, just as when the root stores the very first key placed in the B-tree).

B-trees shrink when keys are deleted. To delete a key, first perform the usual search operation to locate the node containing the key. (If the key isn't found, it isn't in the tree and can't be deleted.) If the found key is not in a leaf, move it to a leaf by swapping the key with the logical "next" key in the B-tree. The "next" key is always the first key in the leftmost leaf of the right subtree.

Example: in this B-tree we want to delete a key that is not in a leaf ("xx" indicates key values that don't matter):

[Figure: the path from the key, down its right subtree, following leftmost pointers to a leaf]
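The split step of insertion can be sketched like this. It is a simplified, leaf-only sketch; `split_leaf` is a hypothetical helper, not the text's algorithm verbatim:

```python
def split_leaf(keys, new_key, order):
    """Insert new_key into an already-full leaf of 2*order keys, then split.
    Returns (left_keys, promoted_key, right_keys): the leftmost `order`
    keys, the middle key promoted to the parent, and the rightmost keys."""
    keys = sorted(keys + [new_key])   # now 2*order + 1 keys: one too many
    mid = len(keys) // 2              # the middle key is promoted
    return keys[:mid], keys[mid], keys[mid + 1:]
```

For an order 2 leaf holding 3, 5, 9, 12, inserting 7 yields left node [3, 5], promoted key 7, and right node [9, 12].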
You follow the pointer immediately to the right of the key to find its right subtree, then follow the leftmost pointers in each subnode until a leaf is reached. The first key in that leaf is the logical "next" key after the one being deleted, in the sorted list of all keys in the tree. By swapping the two keys, we move the key to be deleted into a leaf node, setting up the deletion without violating the key order or pointer order of the overall B-tree.

Once the key we want is in a leaf, we can delete it. If at least n keys remain in the node, we're done; otherwise it is an "underflow", since every node (except the root) must have at least n keys.
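The swap-with-successor step can be sketched as follows; `next_key` is a hypothetical helper over the same dict-based nodes used earlier:

```python
def next_key(node, key):
    """Return the logical 'next' key after `key`: the first key in the
    leftmost leaf of key's right subtree.
    Nodes are dicts: {'keys': sorted list, 'children': list of nodes}."""
    i = node["keys"].index(key)
    sub = node["children"][i + 1]     # pointer immediately right of the key
    while sub["children"]:
        sub = sub["children"][0]      # keep following leftmost pointers
    return sub["keys"][0]             # first key of the leftmost leaf


leaf = lambda *ks: {"keys": list(ks), "children": []}
tree = {"keys": [20],
        "children": [leaf(5, 10),
                     {"keys": [30],
                      "children": [leaf(22, 25), leaf(35, 40)]}]}
```

In this small tree the logical successor of 20 is 22: we enter 20's right subtree and walk leftmost pointers down to the leaf [22, 25].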
Notes

If a node underflows, we may be able to "redistribute" keys by borrowing some from a neighboring node. For example, in the order 2 B-tree below, a key is deleted, which causes its node to underflow, since only one key is left. So keys from the neighbor on the left are "shifted through" the parent node and redistributed, so that both leaf nodes end up with at least 2 keys:

[Figures: the tree before and after the redistribution]

But if the underflow node and the neighbor node together have too few keys to redistribute, the two nodes have to be combined. For example, when a key is deleted from the B-tree below, causing an underflow, the neighbor node can't afford to give up any keys for redistribution. So one node is discarded, and the parent key moves down with the remaining keys to fill up a single node:

[Figures: the tree before and after combining]

In the above case, moving the key out of the parent node left enough keys remaining. But if the parent node had only n keys to begin with, then the parent node also would underflow when its key was moved down to combine with the leaf keys. Indeed, underflow and the combining of nodes may propagate all the way up to the root node. When the root underflows, the B-tree shrinks in height by one level, and the nodes under the old root combine to form a new root.

The payoff of the B-tree insert and delete rules is that B-trees are always "balanced". Searching an unbalanced tree may require traversing an arbitrary and unpredictable number of nodes and pointers:

[Figures: an unbalanced tree of nodes versus a balanced tree of the same nodes]

Because a balanced tree keeps all leaves at the same depth, there is no runaway pointer overhead. Indeed, even very large B-trees can guarantee that only a small number of nodes must be retrieved to find a given key. For example, a B-tree of millions of keys with dozens of keys per node never needs to retrieve more than a handful of nodes to find any key.
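That height guarantee can be made concrete with a small calculation. The function below is an illustrative sketch (not one of the text's algorithms): a fully packed order-n B-tree of depth d holds (2n + 1)^d - 1 keys, so the worst-case number of node reads is the smallest d for which that capacity covers all the keys.

```python
def max_node_reads(num_keys, order):
    """Worst-case number of node reads to find a key in an order-`order`
    B-tree: the smallest depth d with (2*order + 1)**d - 1 >= num_keys,
    since each node holds at most 2*order keys and 2*order + 1 children."""
    fanout = 2 * order + 1
    d, capacity = 0, 0
    while capacity < num_keys:
        d += 1
        capacity = fanout ** d - 1    # keys in a full tree of depth d
    return d


# ten million keys, nodes holding up to 50 keys: only a few reads
reads = max_node_reads(10_000_000, 25)
```

Even for ten million keys, the loop terminates after a single-digit number of levels, which is exactly why B-trees suit disk storage.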
Notes

Task: Consider the following binary search tree. Splay this tree at each of the following keys in turn: d, b, g, f, a, d, b, d.

Applications

Databases: A database is a collection of data organized in a fashion that facilitates updating, retrieving, and managing the data. The data can consist of anything, including, but not limited to, names, addresses, pictures, and numbers. Databases are commonplace and are used every day. For example, an airline reservation system might maintain a database of available flights, customers, and tickets issued. A teacher might maintain a database of student names and grades.

Because computers excel at quickly and accurately manipulating, storing, and retrieving data, databases are often maintained electronically, using a database management system. Database management systems are essential components of many everyday business operations. Databases serve as the foundation for accounting systems, inventory systems, medical record-keeping systems, airline reservation systems, and countless other important aspects of modern businesses.

It is not uncommon for a database to contain millions of records requiring many gigabytes of storage. For a database to be useful and usable, it must support the desired operations, such as retrieval and storage, quickly. Because databases cannot typically be maintained entirely in memory, B-trees are often used to index the data and to provide fast access. For example, searching an unindexed and unsorted database containing n key values has a worst-case running time of O(n); if the same data is indexed with a B-tree, the same search operation runs in O(log n). To perform a search for a single key on a set of one million keys (1,000,000), a linear search will require at most 1,000,000 comparisons; if the same data is indexed with a B-tree, only on the order of a hundred comparisons are required in the worst case. Clearly, indexing large amounts of data can significantly improve search performance. Although other balanced tree structures can be used, a B-tree also optimizes the costly disk accesses that are of concern when dealing with large data sets.
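The comparison counts above can be checked with a rough calculation. This is an illustrative sketch using the standard CLRS-style height bound for a B-tree of minimum degree t (the function name and the choice t = 10 are assumptions, not from the text):

```python
import math

def worst_case_comparisons(n, t):
    """Loose worst case for searching a B-tree of minimum degree t holding
    n keys: the height bound gives about log_t((n + 1) / 2) levels, and at
    most 2*t - 1 key comparisons are made in each visited node."""
    levels = math.ceil(math.log((n + 1) / 2, t))
    return levels * (2 * t - 1)


linear_cost = 1_000_000                            # unindexed: up to n
btree_cost = worst_case_comparisons(1_000_000, 10)  # indexed: ~a hundred
```

For one million keys and t = 10 the bound works out to 114 comparisons, versus up to a million for a linear scan, which is the gap the paragraph describes.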
Notes

Concurrent Access to B-trees: Databases typically run in multiuser environments where many users can concurrently perform operations on the database. Unfortunately, this common scenario introduces complications. Example: imagine a database storing bank account balances, and assume that someone attempts to withdraw an amount from an account. First, the current balance is checked to ensure sufficient funds; after the funds are disbursed, the balance of the account is reduced. This approach works flawlessly until concurrent transactions are considered. Suppose that another person simultaneously attempts to withdraw from the same account. At the same time the account balance is checked for the first person, the account balance is also retrieved for the second person. Since neither person is requesting more funds than are currently available, both requests are satisfied. The balance after the first person's transaction is recorded; next, the balance after the second person's transaction is recorded, overwriting the first. Unfortunately, two withdrawals have been disbursed, but the account balance has been decreased by only one of them. Clearly, this behavior is undesirable, and special precautions must be taken.

A B-tree suffers from similar problems in a multiuser environment. If two or more processes are manipulating the same tree, it is possible for the tree to become corrupt, resulting in data loss or errors. The simplest solution is to serialize access to the data structure: if another process is using the tree, all other processes must wait. Although this is feasible in many cases, it can place an unnecessary and costly limit on performance, because many operations actually can be performed concurrently without risk. Locking, introduced by Gray and refined by many others, provides a mechanism for controlling concurrent operations on data structures in order to prevent undesirable side effects and to ensure consistency.

Summary

B-trees are balanced trees that are optimized for situations when part or all of the tree must be maintained in secondary storage, such as a magnetic disk. A B-tree is a specialized multiway tree designed especially for use on disk. In a B-tree each node may contain a large number of keys; the number of subtrees of each node, then, may also be large. A B-tree is designed to branch out in this large number of directions and to contain many keys in each node, so that the height of the tree is relatively small. This means that only a small number of nodes must be read from disk to retrieve an item. The goal is fast access to the data, and with disk drives this means reading a very small number of records. Note that a large node size (with lots of keys in the node) also fits with the fact that with a disk drive one can usually read a fair amount of data at once.

Keywords

B-tree algorithms: A B-tree is a data structure that maintains an ordered set of data and allows efficient operations to find, delete, insert, and browse the data.
B-trees: B-trees are balanced trees that are optimized for situations when part or all of the tree must be maintained in secondary storage, such as a magnetic disk.

Notes

Self Assessment

Choose the appropriate answers:

1. A B-tree has a minimum number of allowable children for each node known as the:
   (a) minimization factor  (b) maximization factor  (c) operational factor  (d) situational factor

2. A binary tree is a special type of tree having degree:
   (a) 1  (b) 2  (c) 3  (d) 4

Fill in the blanks:

3. The search operation on a B-tree is ______ to a search on a binary tree.
4. A ______ is a collection of data organized in a fashion that facilitates updating, retrieving, and managing the data.
5. A B-tree is kept balanced by requiring that all leaf nodes are at the same ______.
6. The ______ operation creates an empty B-tree by allocating a new root node that has no keys and is a leaf node.

State whether the following statements are true or false:

7. Deletion of a key from a B-tree is possible.
8. B-trees do not shrink when keys are deleted.
9. The lower and upper bounds on the number of child nodes are typically fixed for a particular implementation.
10. To perform an insertion on a B-tree, the appropriate node for the key must be located using an algorithm similar to B-Tree-Search.

Review Questions

1. Generalize the amortized analysis given in the text for incrementing four-digit binary integers to n-digit binary integers.
2. Describe the deletion of an item from a B-tree.
3. Describe the structure of a B-tree. Also explain the operations of a B-tree.
4. Explain how you would insert an item into a B-tree.
Notes

5. You have the B-tree given in the figure below. Explain how to delete a key from it.
6. There are 24 possible ordered sequences of four keys, but only 14 distinct binary trees with four nodes. Therefore, these binary trees are not equally likely to occur as search trees. Find which binary search tree corresponds to each of the 24 possible ordered sequences, and thereby find the probability of building each of the binary search trees from randomly ordered input.
7. "A node containing k keys always also contains k + 1 pointers." Discuss.
8. "The split operation transforms a full node with 2t - 1 keys into two nodes with t - 1 keys each." Explain.
9. Explain the process of deleting a key from a B-tree with the help of a suitable example.

Binary trees are defined recursively, so algorithms for manipulating binary trees are usually best written recursively. In programming with binary trees, be aware of the problems generally associated with recursive algorithms. Be sure that your algorithm terminates under any condition and that it correctly treats the trivial case of an empty tree.

Answers: Self Assessment

1. (a)  2. (b)  3. analogous  4. database  5. depth  6. B-Tree-Create  7. true  8. false  9. true  10. true

Further Readings

Books:
Brian Kernighan and Dennis Ritchie, The C Programming Language, Prentice Hall
Burkhard Monien and Thomas Ottmann (eds.), Data Structures and Efficient Algorithms, Springer
Kruse, Data Structures and Program Design, Prentice Hall of India, New Delhi
Mark Allen Weiss, Data Structures and Algorithm Analysis in C, second ed., Addison-Wesley Publishing
R.G. Dromey, How to Solve It by Computer, Cambridge University Press
Shi-Kuo Chang, Data Structures and Algorithms, World Scientific
Sorenson and Tremblay, An Introduction to Data Structures with Algorithms
Thomas Cormen, Charles E. Leiserson, Ronald Rivest, Introduction to Algorithms, Prentice-Hall of India Pvt. Limited, New Delhi
Timothy Budd, Classic Data Structures in C++, Addison Wesley

Online links:
www.en.wikipedia.org
www.web-source.net
www.webopedia.com
Anil Sharma, Lovely Professional University

Unit: Hashing

Notes

Contents
- Objectives
- Introduction
- Hashing
- Linear Probing or Linear Open Addressing
- Rehashing
- Overflow Chaining
- Hash Functions
- Open Hashing
- Closed Hashing
- Rehashing
- Summary
- Keywords
- Self Assessment
- Review Questions
- Further Readings

Objectives

After studying this unit, you will be able to:
- Explain the concept of hashing
- Know of hash functions
- Realise open hashing and closed hashing
- Describe rehashing

Introduction

The search time of each algorithm discussed so far depends on the number n of elements in the collection of data. This unit presents a searching technique, called hashing or hash addressing, which is essentially independent of the number n. Hashing is the transformation of a string of characters into a usually shorter fixed-length value or key that represents the original string. Hashing is used to index and retrieve items in a database because it is faster to find the item using the shorter hashed key than to find it using the original value. It is also used in many encryption algorithms.

A hash function is a unary function used by hashed associative containers: it maps its argument to a result of type size_t. A hash function must be deterministic and stateless. That is, the return value must depend only on the argument, and equal arguments must yield equal results.
Hashing

Notes

In many applications we require the use of a data object called a symbol table. A symbol table is nothing but a set of pairs (name, value), where value represents a collection of attributes associated with the name, and this collection of attributes depends upon the program element identified by the name. For example, if a name x is used to identify an array in a program, then the attributes associated with x are the number of dimensions, the lower bound and upper bound of each dimension, and the element type. Therefore a symbol table can be thought of as a linear list of pairs (name, value), and hence you can use a list data object for realizing a symbol table.

A symbol table is referred to or accessed frequently, either for adding a name, or for storing the attributes of a name, or for retrieving the attributes of a name. Therefore accessing efficiency is a prime concern while designing a symbol table, and the most common way of implementing a symbol table is to use a hash table.

Hashing is a method of directly computing the index of the table by using some suitable mathematical function called a hash function. The hash function operates on the name to be stored in the symbol table, or whose attributes are to be retrieved from the symbol table. If h is a hash function and x is a name, then h(x) gives the index of the table where x, along with its attributes, can be stored. If x is already stored in the table, then h(x) gives the index of the table where it is stored, so that the attributes of x can be retrieved.

There are various methods of defining a hash function. One is the division method: take the sum of the values of the characters, divide it by the size of the table, and take the remainder. This gives an integer value lying in the range 0 to (n - 1), if the size of the table is n.

Another method is the mid-square method. Here the identifier is first squared, and then the appropriate number of bits from the middle of the square is used as the hash value. Since the middle bits of the square usually depend on all the characters in the identifier, it is expected that different identifiers will result in different hash values. The number of middle bits that you select depends on the table size: if r is the number of middle bits used to form the hash value, then the table size must be 2^r. Hence when you use this method the table size is required to be a power of 2.

Another method is folding, in which the identifier is partitioned into several parts, all but the last part being of the same length. These parts are then added together to obtain the hash value.

To store a name, or to add attributes of a name, you compute the hash value of the name and place the name or attributes, as the case may be, at the place in the table whose index is the hash value of the name. For retrieving the attribute values of a name kept in the symbol table, apply the hash function to the name to obtain the index of the table where the attributes are found. Hence no comparisons are required, and the time required for retrieval is independent of the table size: retrieval is possible in a constant amount of time, namely the time taken to compute the hash function. Therefore a hash table seems to be the best choice for realization of the symbol table, but there is one problem associated with hashing: collisions.

A hash collision occurs when two identifiers are mapped to the same hash value. This happens because a hash function defines a mapping from the set of valid identifiers to the set of integers used as indices of the table. The domain of this mapping is much larger than its range, so the mapping is many-to-one. Therefore, when implementing a hash table, a suitable collision handling mechanism must be provided, to be activated whenever a collision occurs.

Collision handling involves finding an alternative location for one of the two colliding symbols. For example, if x and y are different identifiers and h(x) = h(y) = i, then x and y are colliding symbols. If x is encountered before y, the ith entry of the table will be used to accommodate the symbol x; later, when y comes, there is a hash collision, and you have to find an alternative location for either x or y. This means you find a suitable alternative location and either accommodate y in that location, or move x to that location
and place y in the ith location of the table. Various methods are available for obtaining an alternative location to handle a collision; they differ in the way the search is made for an alternative location. The following are the commonly used collision handling techniques.

Linear Probing or Linear Open Addressing

In this method, if for an identifier x, h(x) = i, and the ith location is already occupied, then you search for a location close to the ith location by doing a linear search starting from the (i + 1)th location. That is, you start from the (i + 1)th location and do a linear search until you get an empty location, and once you get an empty location you accommodate x there.

Overflow Chaining

This is a method of implementing a hash table in which collisions are handled automatically. You use two tables: a symbol table to accommodate identifiers and their attributes, and a hash table, which is an array of pointers to symbol table entries. Each symbol table entry is made of three fields: the first for holding the identifier, the second for holding the attributes, and the third for holding a link or pointer that can be made to point to any symbol table entry.

Insertions into the symbol table are done as follows. If x is the symbol to be inserted, it is added to the next available entry of the symbol table. The hash value of x is then computed. If h(x) = i and the ith hash table pointer is not yet pointing to any symbol table entry, the ith hash table pointer is made to point to the symbol table entry in which x is stored. If the ith hash table pointer is already pointing to some symbol table entry, then the link field of the entry containing x is made to point to the entry to which the ith hash table pointer currently points, and the ith hash table pointer is made to point to the entry containing x. This is equivalent to building a linked list on the ith index of the hash table.

The retrieval of attributes is done as follows: if x is a symbol, you obtain h(x), use this value as the index of the hash table, and traverse the list built on this index to get the entry which contains x. A typical hash table implemented using this technique is shown below. Suppose a set of symbols is to be stored, using the hash function h(symbol) = (value of the first letter of the symbol) mod n, where n is the size of the table. Symbols whose first letters hash to the same value end up chained on the same index, and the contents of the symbol table will be the one shown in the figure below.
[Figure: hash table implementation using overflow chaining for collision handling]

Notes

Consider using the division method of hashing to store a sequence of values in a hash table, with the sequential (linear probing) method used for resolving collisions. Since the division method is to be used, the hash function is h(key) = key mod n, where n is the size of the table and key is the value to be stored. Start with the first value, compute its hash value, and store the value at that index in the table; repeat for each subsequent value. Whenever h(key) gives an index that is already occupied, there is a collision: you then look for the nearest empty location after that index (wrapping around to the start of the table if necessary) and store the value there.
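The insertion process just described can be simulated directly. The values and table size below are hypothetical, since the original example's numbers were not preserved:

```python
def insert_all(values, size):
    """Division-method hashing (h = key % size) with linear probing:
    on a collision, scan forward (wrapping around) to the first empty slot."""
    table = [None] * size
    for key in values:
        i = key % size
        while table[i] is not None:   # collision: try the next slot
            i = (i + 1) % size
        table[i] = key
    return table


# 42 -> slot 2, 55 -> 5, 73 -> 3; 102 collides at 2 and probes to 4;
# 12 collides at 2 and probes past 3, 4, 5 to slot 6
table = insert_all([42, 55, 73, 102, 12], 10)
```

Notice how one early collision (102) makes a later key (12) probe even further: this chain-reaction is the "primary clustering" discussed later in the unit.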
Notes

The hash table therefore is the one shown below:

[Figure: the resulting hash table after all insertions]

Task: "A hash function must be deterministic and stateless." Discuss.

Hash Functions

Some of the methods of defining a hash function are discussed below.

Modular arithmetic: First the key is converted to an integer, then it is divided by the size of the index range, and the remainder is taken to be the hash value. The spread achieved depends very much on the modulus. If the modulus is a power of a small integer like 2 or 10, then many keys tend to map to the same index, while other indices remain unused. The best choice for the modulus is often, but not always, a prime number, which usually has the effect of spreading the keys quite uniformly.

Truncation: This method ignores part of the key and uses the remaining part directly as the hash value (considering non-numeric fields by their numerical codes). If the keys, for example, are eight-digit numbers and the hash table has 1000 entries, then the first, second, and fifth digits from the right might make up the hash value. It is a fast method, but it often fails to distribute keys evenly.

Folding: In this method, the identifier is partitioned into several parts, all but the last part being of the same length. These parts are then added together to obtain the hash value. For example, an eight-digit integer can be divided into groups of three, three, and two digits; the groups are then added together, and truncated if necessary, to fall in the proper range of indices. Since all information in the key can affect the value of the function, folding often achieves a better spread of indices than truncation.

Mid-square method: In this method, the identifier is squared (considering non-numeric fields by their numerical codes), and then the appropriate number of bits from the middle
of the square are used to get the hash value. Since the middle bits of the square usually depend on all the characters in the identifier, it is expected that different identifiers will result in different values. The number of middle bits that we select depends on the table size: if r is the number of middle bits used to form the hash value, then the table size will be 2^r. Hence when you use the mid-square method the table size should be a power of 2.

Notes

Open Hashing

The simplest form of open hashing defines each slot in the hash table to be the head of a linked list. All records that hash to a particular slot are placed on that slot's linked list.

[Figure: a hash table in which each slot stores one record and a link pointer to the rest of the list]

Records within a slot's list can be ordered in several ways: by insertion order, by key value order, or by frequency-of-access order. Ordering the list by key value provides an advantage in the case of an unsuccessful search, because we know to stop searching the list once we encounter a key greater than the one being searched for. If records on the list are unordered or ordered by frequency, then an unsuccessful search must visit every record on the list.

Given a table of size M storing N records, the hash function will (ideally) spread the records evenly among the M positions in the table, yielding on average N/M records for each list. Assuming that the table has more slots than there are records to be stored, you can hope that few slots will contain more than one record. In the case where a list is empty or has only one record, a search requires only one access to the list. Thus, the average cost for hashing should be Θ(1). However, if clustering causes many records to hash to only a few of the slots, then the cost to access a record will be much higher, because many elements on the linked list must be searched.

Open hashing is most appropriate when the hash table is kept in main memory, with the lists implemented by a standard in-memory linked list. Storing an open hash table on disk in an efficient way is difficult, because members of a given linked list might be stored on different disk blocks. This would result in multiple disk accesses when searching for a particular key value, which defeats the purpose of using hashing.

Let U be the universe of keys:
(a) integers
(b) character strings
(c) complex bit patterns
Notes

Let B be the set of hash values (also called the buckets or bins): B = {0, 1, ..., m - 1}, where m is a positive integer. A hash function h : U -> B associates buckets (hash values) to keys. There are two main issues:

Collisions: if x1 and x2 are two different keys, it is possible that h(x1) = h(x2). This is called a collision. Collision resolution is the most important issue in hash table implementations.

Hash functions: choosing a hash function that minimizes the number of collisions and also hashes uniformly is another critical issue.

Closed Hashing

All elements are stored in the hash table itself. This avoids pointers; we only compute the sequence of slots to be examined. Collisions are handled by generating a sequence of rehash values. Given a key x, it has a hash value h(x, 0) and a set of rehash values h(x, 1), h(x, 2), ..., h(x, m - 1). We require that for every key x, the probe sequence be a permutation of <0, 1, ..., m - 1>. This ensures that every hash table position is eventually considered as a slot for storing a record with key value x.

Search(x, T): the search continues until you find the element (successful search) or an empty slot (unsuccessful search).
Delete(x, T): do not delete if the search is unsuccessful. If the search is successful, then put the label "deleted" (different from an empty slot) in the slot.
Insert(x, T): no need to insert if the search is successful. If the search is unsuccessful, insert at the first position carrying a "deleted" tag (or the first empty slot).
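The closed hashing operations described above (search until an empty slot, deletion via a "deleted" tag, insertion reusing tagged slots) can be sketched as follows; this is a minimal linear-probing sketch, with names of my own choosing:

```python
DELETED = object()   # tombstone, deliberately distinct from an empty slot

class ClosedHashTable:
    """Closed hashing: all keys live in the table itself."""

    def __init__(self, m):
        self.m = m
        self.table = [None] * m

    def _probe(self, key):
        # linear probe sequence h0, h1, ... : a permutation of 0..m-1
        h = hash(key) % self.m
        for i in range(self.m):
            yield (h + i) % self.m

    def search(self, key):
        for i in self._probe(key):
            if self.table[i] is None:      # empty slot: unsuccessful search
                return False
            if self.table[i] == key:       # tombstones are skipped over
                return True
        return False

    def insert(self, key):
        if self.search(key):
            return                         # already present: nothing to do
        for i in self._probe(key):
            if self.table[i] is None or self.table[i] is DELETED:
                self.table[i] = key        # reuse the first tagged/empty slot
                return

    def delete(self, key):
        for i in self._probe(key):
            if self.table[i] is None:
                return                     # not present
            if self.table[i] == key:
                self.table[i] = DELETED    # tag it; do not empty the slot
                return


t = ClosedHashTable(7)
t.insert(10)
t.insert(17)     # 10 and 17 both hash to slot 3; 17 probes on to slot 4
t.delete(10)     # leaves a tombstone in slot 3
```

The tombstone matters: after deleting 10, a search for 17 must probe past slot 3 rather than stop there, which is exactly why a deleted slot cannot simply be emptied.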
Task: "Open hashing is most appropriate when the hash table is kept in main memory, with the lists implemented by a standard in-memory linked list." Explain.

Rehashing

This is another method of collision handling. In this method you find an alternative empty location by modifying the hash function and applying the modified hash function to the colliding symbol. For example, if x is a symbol and h(x) = i, and the ith location is already occupied, then modify the hash function h to h1 and compute h1(x). If h1(x) = j and the jth location is empty, then accommodate x in the jth location. Otherwise, once again modify h1 to some h2 and repeat the process till the collision gets handled. Once the collision gets handled, we revert back to the original hash function before considering the next symbol. Denote h(x, 0) by simply h(x).

Linear probing: h(x, i) = (h(x) + i) mod m.

Quadratic probing: h(x, i) = (h(x) + c1·i + c2·i²) mod m, where c1 and c2 are constants.

Double hashing: h(x, i) = (h(x) + i·h2(x)) mod m, where h2 is another hash function.

Comparison of rehashing methods:

Linear probing: m distinct probe sequences; primary clustering.

Quadratic probing: m distinct probe sequences; no primary clustering, but secondary clustering.

Double hashing: m² distinct probe sequences; no primary clustering and no secondary clustering.

Summary

Hash functions are mostly used in hash tables, to quickly locate a data record (for example, a dictionary definition) given its search key (the headword). Specifically, the hash function is used to map the search key to the index of a slot in the table where the corresponding record is supposedly stored. Rehashing schemes use a second hashing operation when there is a collision.
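The three rehashing formulas can be written directly as small helper functions. The names below are my own; `h` stands for the initial hash value h(x), `h2` for the second hash value h2(x), and `m` for the table size.

```java
// Sketch of the i-th probe under each rehashing scheme.
public class ProbeSequences {
    static int linear(int h, int i, int m) {
        return (h + i) % m;
    }

    static int quadratic(int h, int i, int c1, int c2, int m) {
        return (h + c1 * i + c2 * i * i) % m;
    }

    static int doubleHash(int h, int h2, int i, int m) {
        return (h + i * h2) % m; // h2 must be nonzero
    }

    public static void main(String[] args) {
        int m = 7, h = 3;

        // Linear probing visits 3, 4, 5, 6, 0, 1, 2: a permutation of 0..m-1.
        int[] seen = new int[m];
        for (int i = 0; i < m; i++) seen[linear(h, i, m)]++;
        for (int count : seen) assert count == 1;

        // Double hashing with prime m and step h2 = 5 also visits every
        // slot exactly once, but the order depends on h2 as well as h.
        int h2 = 5;
        int[] seen2 = new int[m];
        for (int i = 0; i < m; i++) seen2[doubleHash(h, h2, i, m)]++;
        for (int count : seen2) assert count == 1;

        System.out.println("ok");
    }
}
```

Because the double-hashing order depends on both h(x) and h2(x), two keys that collide on the first probe usually diverge afterwards, which is why it avoids the clustering effects listed above.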
Keywords

Folding: In folding, the identifier is partitioned into several parts, all but the last part being of the same length.

Hash function: A hash function is a unary function that is used by hashed associative containers.

Hashing: Hashing is the transformation of a string of characters into a usually shorter fixed-length value or key that represents the original string.

Rehashing: In rehashing we find an alternative empty location by modifying the hash function and applying the modified hash function to the colliding symbol.

Self Assessment

Fill in the blanks:

1. The simplest form of open hashing defines each slot in the hash table to be the head of a ................
2. ................ is most appropriate when the hash table is kept in main memory, with the lists implemented by a standard in-memory linked list.
3. ................ resolution is the most important issue in hash table implementations.
4. A hash function must be ................ and stateless.
5. A ................ is referred to or accessed frequently, either for adding a name, or for storing the attributes of the name, or for retrieving the attributes of the name.
6. Hashing is a method of directly computing the index of the table by using some suitable mathematical function called a ................
7. Collision handling involves finding out an alternative location for one of the two colliding ................
8. The ................ method ignores part of the key and uses the remainder part directly as the hash value.

Review Questions

1. Describe overflow chaining.
2. Describe various hash functions in detail.
3. Describe rehashing.
4. Devise a simple, easy to calculate hash function for mapping three-letter words to integers between 0 and n − 1, inclusive. Find the values of your function on the words PAL, LAP, PAM, MAP, PAT, PET, SET, SAT, TAT, BAT. Try for as few collisions as possible.
5. Suppose that a hash table contains hash_size entries indexed from 0 through hash_size − 1 and that a given list of keys is to be mapped into the table.

(a) Determine the hash addresses and find how many collisions occur when these keys are reduced by applying the operation key mod hash_size.
(b) Determine the hash addresses and find how many collisions occur when these keys are first folded by adding their digits together (in ordinary decimal representation) and then applying mod hash_size.

(c) Find a hash function that will produce no collisions for these keys. (A hash function that has no collisions for a fixed set of keys is called perfect.)

(d) Repeat the previous parts of this exercise for another value of hash_size. (A hash function that produces no collisions for a fixed set of keys that completely fill the hash table is called minimal perfect.)

6. Another method for resolving collisions with open addressing is to keep a separate array, called the overflow table, into which are put all entries that collide with an occupied location. They can either be inserted with another hash function or simply inserted in order, with sequential search used for retrieval. Discuss the advantages and disadvantages of this method.

7. With linear probing, it is possible to delete an entry without using a second special key, as follows. Mark the deleted entry empty. Search until another empty position is found. If the search finds a key whose hash address is at or before the just-emptied position, then move it back there, make its previous position empty, and continue from the new empty position. Write an algorithm to implement this method. Do the retrieval and insertion algorithms need modification?

8. In a chained hash table, suppose that it makes sense to speak of an order for the keys, and suppose that the nodes in each chain are kept in order by key. Then a search can be terminated as soon as it passes the place where the key should be, if present. How many fewer probes will be done, on average, in an unsuccessful search? In a successful search? How many probes are needed, on average, to insert a new node in the right place? Compare your answers with the corresponding numbers derived in the text for the case of unordered chains.

9. In our discussion of chaining, the hash table itself contained only lists, one for each of the chains. One variant method is to place the first actual entry of each chain in the
hash table itself (an empty position is indicated by an impossible key, as with open addressing). With a given load factor, calculate the effect on space of this method, as a function of the number of words (except links) in each entry. (A link takes one word.)

10. Distinguish between linear and quadratic probing.

11. Consider using the division method of hashing to store a given list of values in a hash table of a given size.

Answers: Self Assessment

1. linked list
2. open hashing
3. collision
4. deterministic
5. symbol table
6. hash function
7. symbols
8. truncation
Further Readings

Books:

Brian Kernighan and Dennis Ritchie, The C Programming Language, Prentice Hall.
Burkhard Monien and Thomas Ottmann, Data Structures and Efficient Algorithms, Springer.
Kruse, Data Structure and Program Design, Prentice Hall of India, New Delhi.
Mark Allen Weiss, Data Structures and Algorithm Analysis in C, Second Ed., Addison-Wesley Publishing.
R.G. Dromey, How to Solve It by Computer, Cambridge University Press.
Shi-Kuo Chang, Data Structures and Algorithms, World Scientific.
Sorenson and Tremblay, An Introduction to Data Structures with Algorithms.
Thomas Cormen, Charles E. Leiserson and Ronald Rivest, Introduction to Algorithms, Prentice-Hall of India Pvt. Limited, New Delhi.
Timothy Budd, Classic Data Structures in C++, Addison Wesley.

Online links:

www.en.wikipedia.org
www.web-source.net
www.webopedia.com
Unit: Heaps

Contents

Objectives
Introduction
Heaps
Binary Heaps
  Complete Trees
  Implementation
  Putting Items into a Binary Heap
  Removing Items from a Binary Heap
Applications of Heaps
  Discrete Event Simulation
  Implementation
D-Heaps
Summary
Keywords
Self Assessment
Review Questions
Further Readings

Objectives

After studying this unit, you will be able to:

Describe heaps
State the concept of binary heaps
Discuss applications of heaps
Define d-heaps

Introduction

A heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B). This implies that an element with the greatest key is always in the root node, and so such a heap is sometimes called a max-heap. (Alternatively, if the comparison is reversed, the smallest element is always in the root node, which results in a min-heap.) The heap is one maximally efficient implementation of an abstract data type called a priority queue. Heaps are crucial in several efficient graph algorithms.

Heaps

A heap is a storage pool in which regions of memory are dynamically allocated. For example, in C++, the space for a variable is allocated essentially in one of three possible places: global
variables are allocated in the space of initialized static variables; the local variables of a procedure are allocated in the procedure's activation record, which is typically found in the processor stack; and dynamically allocated variables are allocated in the heap. In this unit, the term heap is taken to mean the storage pool for dynamically allocated variables.

Consider heaps and heap-ordered trees in the context of priority queue implementations. While it may be possible to use a heap to manage a dynamic storage pool, typical implementations do not. In this context, the technical meaning of the term heap is closer to its dictionary definition: "a pile of many things."

A binary tree has the heap property iff it is empty, or the key in the root is larger than that in either child and both subtrees have the heap property. A heap can be used as a priority queue: the highest priority item is at the root and is trivially extracted. But if the root is deleted, you are left with two subtrees and you must efficiently re-create a single tree with the heap property. The value of the heap structure is that you can both extract the highest priority item and insert a new one in O(log n) time.

Binary Heaps

A binary heap is a heap-ordered binary tree which has a very special shape called a complete tree. As a result of its special shape, a binary heap can be implemented using an array as the underlying foundational data structure. Array subscript calculations are used to find the parent and the children of a given node in the tree, and since an array is used, the storage overhead associated with the subtree fields contained in the nodes of the trees is eliminated.

Complete Trees

Complete trees and perfect trees are closely related, yet quite distinct. As pointed out in the preceding unit, a perfect binary tree of height h has exactly 2^(h+1) − 1 internal nodes. Since the only permissible numbers of nodes are of the form 2^(h+1) − 1, there is no perfect binary tree which contains, say, 4 or 6 nodes. However, you want a data structure that can hold an arbitrary number of objects, so you cannot use a
perfect binary tree. Instead, you use a complete binary tree, which is defined as follows.

Definition (Complete Binary Tree): A complete binary tree of height h ≥ 0 is a binary tree T = {R, TL, TR} with the following properties:

1. If h = 0, then TL = ∅ and TR = ∅.
2. For h > 0 there are two possibilities:
(a) TL is a perfect binary tree of height h − 1 and TR is a complete binary tree of height h − 1, or
(b) TL is a complete binary tree of height h − 1 and TR is a perfect binary tree of height h − 2.

The figure below shows an example of a complete binary tree of height four. Notice that the left subtree of the root is a complete binary tree of height three, and its right subtree is a perfect binary tree
of height two. Similarly, further down, a left subtree may be a perfect binary tree of height two while the corresponding right subtree is a complete binary tree of height two.

Figure: A complete binary tree.

Does there exist a complete binary tree with exactly n nodes for every integer n ≥ 0? The following theorem addresses this question indirectly, by defining the relationship between the height of a complete tree and the number of nodes it contains.

Task: Discuss how complete and perfect trees are closely related.

Theorem: A complete binary tree of height h ≥ 0 contains at least 2^h and at most 2^(h+1) − 1 nodes.

Proof: First, we prove the lower bound by induction. Let m_h be the minimum number of nodes in a complete binary tree of height h. To prove the lower bound we must show that m_h = 2^h.

Base case: There is exactly one node in a tree of height zero. Therefore, m_0 = 1 = 2^0.

Inductive hypothesis: Assume that m_k = 2^k for 0 ≤ k ≤ h, for some h ≥ 0. Consider the complete binary tree of height h + 1 which has the smallest number of nodes. Its left subtree is a complete tree of height h having the smallest number of nodes, and its right subtree is a perfect tree of height h − 1. From the inductive hypothesis, there are 2^h nodes in the left subtree, and there are exactly 2^h − 1 nodes in the perfect right subtree. Thus, m_{h+1} = 2^h + (2^h − 1) + 1 = 2^(h+1).

Therefore, by induction, m_h = 2^h for all h ≥ 0, which proves the lower bound.

Next, we prove the upper bound by induction. Let M_h be the maximum number of nodes in a complete binary tree of height h. To prove the upper bound we must show that M_h = 2^(h+1) − 1.

Base case: There is exactly one node in a tree of height zero. Therefore, M_0 = 1 = 2^1 − 1.

Inductive hypothesis: Assume that M_k = 2^(k+1) − 1 for 0 ≤ k ≤ h, for some h ≥ 0. Consider the complete binary tree of height h + 1 which has the largest number of nodes. Its left subtree is a perfect tree of height h, and its right subtree is a complete tree of height h having the largest number of nodes.
There are exactly 2^(h+1) − 1 nodes in the perfect left subtree. From the inductive hypothesis, there are 2^(h+1) − 1 nodes in the right subtree. Thus, M_{h+1} = (2^(h+1) − 1) + (2^(h+1) − 1) + 1 = 2^(h+2) − 1.

Therefore, by induction, M_h = 2^(h+1) − 1 for all h ≥ 0, which proves the upper bound.

It follows from the theorem that there exists exactly one complete binary tree that contains exactly n internal nodes for every integer n ≥ 0. It also follows from the theorem that the height of a complete binary tree containing n internal nodes is ⌊log₂ n⌋.

Why are we interested in complete trees? As it turns out, complete trees have some useful characteristics. For example, in the preceding unit you saw that the internal path length of a tree, i.e. the sum of the depths of all the internal nodes, determines the average time for various operations. A complete binary tree has the nice property that it has the smallest possible internal path length:

Theorem: The internal path length of a binary tree with n nodes is at least as big as the internal path length of a complete binary tree with n nodes.

Proof: Consider a binary tree with n nodes that has the smallest possible internal path length. Clearly, there can only be one node at depth zero: the root. Similarly, at most two nodes can be at depth one, at most four nodes can be at depth two, and so on. Therefore, the internal path length of a tree with n nodes is always at least as large as the sum of the first n terms in the series 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, .... But this summation is precisely the internal path length of a complete binary tree.

Since the depth of the average node in a tree is obtained by dividing the internal path length of the tree by n, the preceding theorem tells us that complete trees are the best possible, in the sense that the average depth of a node in a complete tree is the smallest possible. But how small is small? That is, does the average depth grow logarithmically with n? The following theorem addresses this question.

Theorem: The internal path length of a complete binary tree with n nodes is O(n log n).

Proof: The proof of this theorem is left as an exercise for the reader.

From this theorem you may conclude that the internal
path length of a complete tree is O(n log n). Consequently, the depth of the average node in a complete tree is O(log n).

Implementation

A binary heap is a heap-ordered complete binary tree which is implemented using an array. In a min heap the smallest key is found at the root, and since the root is always found in the first position of the array, finding the smallest key is a trivial operation in a binary heap. In this section we will describe the implementation of a priority queue as a binary heap. As shown in the figure below, we define a concrete class called BinaryHeap for this purpose.
Figure: Object class hierarchy (Comparable, AbstractObject, AbstractContainer, Container, BinaryHeap, PriorityQueue, MergeablePriorityQueue, Tree, BinomialQueue, AbstractTree, BinaryTree, LeftistHeap, GeneralTree, BinomialTree).

The program below introduces the BinaryHeap class. The BinaryHeap class extends the AbstractContainer class introduced earlier, and it implements the PriorityQueue interface.

public class BinaryHeap
    extends AbstractContainer
    implements PriorityQueue
{
    protected Comparable[] array;
    // ...
}

Program: BinaryHeap fields.

Putting Items into a Binary Heap

There are two requirements which must be satisfied when an item is inserted in a binary heap. First, the resulting tree must have the correct shape. Second, the tree must remain heap-ordered. The figure below illustrates the way in which this is done. Since the resulting tree must be a complete tree, there is only one place in the tree where a node can be added. That is, since the bottom level must be filled from left to right, the node must be added at the next available position in the bottom level of the tree, as shown in part (a) of the figure.

Figure: Inserting an item into a binary heap.
In this example, a new item is to be inserted. Note that you cannot simply drop the new item into the next position in the complete tree, because the resulting tree would no longer be heap-ordered. Instead, the hole in the heap is moved toward the root by moving items down in the heap, as shown in parts (b) and (c) of the figure. The process of moving items down terminates either when you reach the root of the tree, or when the hole has been moved to a position in which, when the new item is inserted, the result is a heap.

The program below gives the code for inserting an item in a binary heap. The enqueue method of the BinaryHeap class takes as its argument the item to be inserted in the heap. If the priority queue is full, an exception is thrown. Otherwise, the item is inserted as described above.

public class BinaryHeap
    extends AbstractContainer
    implements PriorityQueue
{
    protected Comparable[] array;

    public void enqueue (Comparable object)
    {
        if (count == array.length - 1)
            throw new ContainerFullException ();
        ++count;
        int i = count;
        while (i > 1 && array [i / 2].isGT (object))
        {
            array [i] = array [i / 2];
            i /= 2;
        }
        array [i] = object;
    }
    // ...
}

Program: BinaryHeap class enqueue method.

The implementation of the algorithm is actually remarkably simple. The while loop moves the hole in the heap up by moving items down. When the loop terminates, the new item can be inserted at position i. Therefore, the loop terminates either at the root, i = 1, or when the key in the parent of i, which is found at position i/2, is smaller than the item to be inserted. Notice too that a good optimizing compiler will recognize that the subscript calculations involve only division by two; the divisions can therefore be replaced by bitwise right shifts, which usually run much more quickly.

Since the depth of a complete binary tree with n nodes is ⌊log n⌋, the worst case running time for the enqueue operation is O(log n · T(isGT)), where T(isGT) is the time required to compare two objects. If T(isGT) = O(1), the enqueue operation is simply O(log n) in the worst case.
Removing Items from a Binary Heap

The dequeueMin method removes from a priority queue the item having the smallest key. In order to remove the smallest item, it first needs to be located. Therefore, the dequeueMin operation is closely related to findMin.

The smallest item is always at the root of a min heap. Therefore, the findMin operation is trivial. The program below gives the code for the findMin method of the BinaryHeap class. Assuming that no exception is thrown, the running time of findMin is clearly O(1).

public class BinaryHeap
    extends AbstractContainer
    implements PriorityQueue
{
    protected Comparable[] array;

    public Comparable findMin ()
    {
        if (count == 0)
            throw new ContainerEmptyException ();
        return array [1];
    }
    // ...
}

Program: BinaryHeap class findMin method.

Since the bottom row of a complete tree is filled from left to right as items are added, it follows that the bottom row must be emptied from right to left as items are removed. So, you have a problem: the datum to be removed from the heap by dequeueMin is in the root, but the node to be removed from the heap is in the bottom row.

Figure: Removing an item from a binary heap.
Part (a) of the figure illustrates the problem. The dequeueMin operation removes the smallest key from the heap, but it is a node in the bottom row that must be removed from the tree to make it into a complete tree again.

When the key is removed from the root, a hole is created in the tree, as shown in part (b) of the figure. The trick is to move the hole down in the tree to a point where the left-over key, i.e. the key from the node removed from the bottom row, can be reinserted into the tree.

To move a hole down in the tree, you consider the children of the empty node and move up the one with the smallest key. Moving up the smallest key ensures that the result will be a min heap. The process of moving up continues either until the hole has been pushed down to a leaf node, or until the hole has been pushed to a point where the left-over key can be inserted into the heap. In the example shown in parts (b)-(d) of the figure, the hole is pushed from the root node to a leaf node, where the left-over key is ultimately placed.

The program below gives the code for the dequeueMin method of the BinaryHeap class. This method implements the deletion algorithm described above. The main loop moves the hole in the tree down by moving up the child with the smallest key, until either a leaf node is reached or the hole has been moved down to a point where the last element of the array can be reinserted.

public class BinaryHeap
    extends AbstractContainer
    implements PriorityQueue
{
    protected Comparable[] array;

    public Comparable dequeueMin ()
    {
        if (count == 0)
            throw new ContainerEmptyException ();
        Comparable result = array [1];
        Comparable last = array [count];
        --count;
        int i = 1;
        while (2 * i <= count)
        {
            int child = 2 * i;
            if (child + 1 <= count
                && array [child + 1].isLT (array [child]))
                child += 1;
            if (last.isLE (array [child]))
                break;
            array [i] = array [child];
            i = child;
        }
        array [i] = last;
        return result;
    }
    // ...
}

Program: BinaryHeap class dequeueMin method.
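The enqueue and dequeueMin logic can be exercised with a simplified, self-contained version using int keys. This is my own sketch, modeled on the BinaryHeap above with the same 1-based array layout; repeatedly dequeuing returns the keys in ascending order.

```java
import java.util.Arrays;

// Simplified int-based min heap, 1-based array: the minimum is at array[1].
public class MinHeap {
    private final int[] array;
    private int count;

    public MinHeap(int capacity) { array = new int[capacity + 1]; }

    public void enqueue(int x) {
        int i = ++count;
        while (i > 1 && array[i / 2] > x) { // move the hole toward the root
            array[i] = array[i / 2];
            i /= 2;
        }
        array[i] = x;
    }

    public int dequeueMin() {
        int result = array[1];
        int last = array[count--];
        int i = 1;
        while (2 * i <= count) {            // push the hole down
            int child = 2 * i;
            if (child + 1 <= count && array[child + 1] < array[child])
                child++;                    // pick the smaller child
            if (last <= array[child])
                break;
            array[i] = array[child];
            i = child;
        }
        array[i] = last;
        return result;
    }

    public static void main(String[] args) {
        MinHeap h = new MinHeap(10);
        for (int x : new int[]{5, 2, 8, 1, 9}) h.enqueue(x);
        int[] out = new int[5];
        for (int i = 0; i < 5; i++) out[i] = h.dequeueMin();
        assert Arrays.equals(out, new int[]{1, 2, 5, 8, 9});
        System.out.println("ok");
    }
}
```

Dequeuing every item in turn is exactly the extraction phase of heapsort, which is a handy way to convince yourself that both operations preserve the heap property.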
In the worst case, the hole must be pushed from the root to a leaf node. Each iteration of the loop makes at most two object comparisons and moves the hole down one level. Therefore, the running time of the dequeueMin operation is O(log n · T(compare)), where n = count is the number of items in the heap and T(compare) is the cost of isLT or isLE. If T(compare) = O(1), the dequeueMin operation is simply O(log n) in the worst case.

Task: Discuss the uses of binary heaps.

Applications of Heaps

The main applications of heaps are:

1. Discrete event simulation
2. Implementation

Discrete Event Simulation

One of the most important applications of priority queues is in discrete event simulation. Simulation is a tool which is used to study the behavior of complex systems. The first step in simulation is modeling: you construct a mathematical model of the system you wish to study. Then you write a computer program to evaluate the model.

The systems studied using discrete event simulation have the following characteristics: the system has a state which evolves or changes with time; changes in state occur at distinct points in simulation time; and a state change moves the system from one state to another instantaneously. State changes are called events.

Example: Suppose we wish to study the service received by customers in a bank, where a single teller is serving customers. If the teller is not busy when a customer arrives at the bank, that customer is immediately served. On the other hand, if the teller is busy when another customer arrives, that customer joins a queue and waits to be served.

You can model this system as a discrete event process, as shown in the figure. The state of the system is characterized by the state of the server (the teller), which is either busy or idle, and by the number of customers in the queue. The events which cause state changes are the arrival of a customer and the departure of a customer.

Figure: A simple queueing system (customers arriving, queue, server, customers departing).

If the server is idle when a customer arrives, the server immediately begins to serve the customer and therefore changes its state to busy. If the server is busy when a customer arrives, that customer joins the queue.
When the server finishes serving a customer, that customer departs. If the queue is not empty, the server immediately commences serving the next customer. Otherwise, the server becomes idle.

How do you keep track of which event to simulate next? Each event (arrival or departure) occurs at a discrete point in simulation time. In order to ensure that the simulation program is correct, it must compute the events in order. This is called the causality constraint: events cannot change the past.

In our model, when the server begins to serve a customer, you can compute the departure time of that customer. So, when a customer arrives at the server, schedule an event in the future which corresponds to the departure of that customer. In order to ensure that events are processed in order, you keep them in a priority queue in which the time of the event is its priority. Since you always process the pending event with the smallest time next, and since an event can schedule new events only in the future, the causality constraint will not be violated.

Implementation

This section presents the simulation of a system comprised of a single queue and server, as shown in the figure above. The program below defines the class Event, which represents events in the simulation. There are two parts to an event: a type (either arrival or departure) and a time.

public class Simulation
{
    static class Event
        extends Association
    {
        public static final int ARRIVAL = 0;
        public static final int DEPARTURE = 1;

        Event (int type, double time)
            { super (new Db (time), new Int (type)); }

        double getTime ()
            { return ((Db) getKey ()).doubleValue (); }

        int getType ()
            { return ((Int) getValue ()).intValue (); }
    }
    // ...
}

Program: Event class.

An association is an ordered pair comprised of a key and a value. In the case of the Event class, the key is the time of the event and the value is the type of the event. Therefore, the events in a priority queue are prioritized by their times.

The program below defines the run method, which implements the discrete event simulation. This method takes one argument, timeLimit, which specifies the total amount of time to be
simulated. The Simulation class contains a single field, called eventList, which is a priority queue. This priority queue is used to hold the events during the course of the simulation.

public class Simulation
{
    PriorityQueue eventList = new LeftistHeap ();

    public void run (double timeLimit)
    {
        boolean serverBusy = false;
        int numberInQueue = 0;
        RandomVariable serviceTime = new ExponentialRV ();
        RandomVariable interArrivalTime = new ExponentialRV ();

        eventList.enqueue (new Event (Event.ARRIVAL, 0));
        while (!eventList.isEmpty ())
        {
            Event event = (Event) eventList.dequeueMin ();
            double t = event.getTime ();
            if (t > timeLimit)
            {
                eventList.purge ();
                break;
            }
            switch (event.getType ())
            {
            case Event.ARRIVAL:
                if (!serverBusy)
                {
                    serverBusy = true;
                    eventList.enqueue (new Event (
                        Event.DEPARTURE, t + serviceTime.nextDouble ()));
                }
                else
                    ++numberInQueue;
                eventList.enqueue (new Event (
                    Event.ARRIVAL, t + interArrivalTime.nextDouble ()));
                break;
            case Event.DEPARTURE:
                if (numberInQueue == 0)
                    serverBusy = false;
                else
                {
                    --numberInQueue;
                    eventList.enqueue (new Event (
                        Event.DEPARTURE, t + serviceTime.nextDouble ()));
                }
                break;
            }
        }
    }
}

Program: Application of priority queues, discrete event simulation.
The state of the system being simulated is represented by the two variables serverBusy and numberInQueue. The first is a boolean value which indicates whether the server is busy. The second keeps track of the number of customers in the queue.

In addition to the state variables, there are two instances of the class ExponentialRV, which implements the RandomVariable interface. This interface defines a method called nextDouble which is used to sample the random number generator. Every time nextDouble is called, a different (random) result is returned. The random values are exponentially distributed around a mean value which is specified in the constructor; in this case, both serviceTime and interArrivalTime produce random distributions with the same mean.

It is assumed that the eventList priority queue is initially empty. The simulation begins by enqueueing a customer arrival at time zero. The while loop constitutes the main simulation loop. This loop continues as long as the eventList is not empty, i.e. as long as there is an event to be simulated.

Each iteration of the simulation loop begins by dequeuing the next event in the event list. If the time of that event exceeds timeLimit, the event is discarded, the eventList is purged, and the simulation is terminated. Otherwise, the simulation proceeds.

The simulation of an event depends on the type of that event. The switch statement invokes the appropriate code for the given event. If the event is a customer arrival and the server is not busy, serverBusy is set to true and the serviceTime random number generator is sampled to determine the amount of time required to serve the customer; a customer departure is scheduled at the appropriate time in the future. On the other hand, if the server is already busy when the customer arrives, we add one to the numberInQueue variable. Another customer arrival is scheduled after every customer arrival: the interArrivalTime random number generator is
sampled, and the arrival is scheduled at the appropriate time in the future.

If the event is a customer departure and the queue is empty, the server becomes idle. When a customer departs and there are still customers in the queue, the next customer in the queue is served: numberInQueue is decreased by one and the serviceTime random number generator is sampled to determine the amount of time required to serve the next customer, and a customer departure is scheduled at the appropriate time in the future.

Clearly, the execution of the run method given in the program mimics the modeled system. Of course, the program given produces no output. For it to be of any practical value, the simulation program should be instrumented to allow the user to study its behavior. For example, the user may be interested in knowing statistics such as the average queue length and the average time that a customer waits for service. Such instrumentation can be easily incorporated into the given framework.

D-Heaps

The d-ary heap or d-heap is a priority queue data structure, a generalization of the binary heap in which the nodes have d children instead of 2. Thus, a binary heap is a 2-heap. The d-ary heap consists of an array of n items, each of which has a priority associated with it. These items may be viewed as the nodes in a complete d-ary tree, listed in breadth-first traversal order: the item at position 0 of the array forms the root of the tree, the items at positions 1 through d are its children, the next d² items are its grandchildren, etc. Thus, the parent of the item at position i (for any i > 0) is the item at position ⌈i/d⌉ − 1, and its children are the items at positions di + 1 through di + d. According to the heap property, in a min-heap, each item has a priority
that is at least as large as its parent; in a max-heap, each item has a priority that is no larger than its parent's.

Notes

The minimum-priority item in a min-heap (or the maximum-priority item in a max-heap) may always be found at position 1 of the array. To remove this item from the priority queue, the last item in the array is moved into its place, and the length of the array is decreased by one. Then, while the item and its children do not satisfy the heap property, the item is swapped with one of its children (the one with the smallest priority in a min-heap, or the one with the largest priority in a max-heap), moving it downward in the tree and later in the array, until eventually the heap property is satisfied. The same downward-swapping procedure may be used to increase the priority of an item in a min-heap, or to decrease the priority of an item in a max-heap.

To insert a new item into the heap, the item is appended to the end of the array, and then, while the heap property is violated, it is swapped with its parent, moving it upward in the tree and earlier in the array, until eventually the heap property is satisfied. The same upward-swapping procedure may be used to decrease the priority of an item in a min-heap, or to increase the priority of an item in a max-heap.

To create a new heap from an array of n items, one may loop over the items in reverse order, starting from the item at position n and ending at the item at position 1, applying the downward-swapping procedure for each item.

To implement Prim's algorithm efficiently, we need a data structure that will store the vertices of G in a way that allows the vertex joined by the minimum-cost edge to be selected quickly. A heap is a data structure consisting of a collection of items, each having a key. The basic operations on a heap are:

insert(i, k, h): add item i to heap h using k as the key value.
deletemin(h): delete and return an item of minimum key from h.
changekey(i, k, h): change the key of item i in heap h to k.
key(i, h): return the key value for item i.

The heap is among the most widely applicable non-elementary data structures. Heaps can be implemented efficiently using a heap-ordered tree: each tree node contains one item, and each item has a real-valued key. The key of each node is at least as large as the key of its parent (excepting the root). For an integer d ≥ 2, a d-heap is a heap-ordered d-ary tree that is "heap-shaped". Let T be an infinite d-ary tree, with vertices numbered in breadth-first order; a subtree of T is heap-shaped if its vertices have consecutive numbers starting at the root. The depth of a d-heap with n nodes is at most ⌈log_d n⌉.

Lovely Professional University
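The array-based heap construction just described — applying the downward-swapping procedure to each item in reverse order — can be sketched in Java as follows. The class and method names here are illustrative, not from the text, and 0-based array indices are used (so the "position 1" item sits at index 0):

```java
// A minimal sketch of the array-based binary min-heap described above.
// buildHeap applies the downward-swapping (sift-down) procedure to each
// item from the last internal position back to the root.
public class BuildHeap {
    // Sift the item at index i down until the heap property holds.
    static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int c = 2 * i + 1;                     // left child
            if (c + 1 < n && a[c + 1] < a[c]) c++; // pick the smaller child
            if (a[i] <= a[c]) break;               // heap property satisfied
            int t = a[i]; a[i] = a[c]; a[c] = t;   // swap with smaller child
            i = c;
        }
    }

    // Build a min-heap in place by sifting down in reverse order.
    public static void buildHeap(int[] a) {
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
    }

    public static void main(String[] args) {
        int[] a = {9, 4, 7, 1, 3, 8};
        buildHeap(a);
        System.out.println(a[0]); // the minimum item is now at the front
    }
}
```

After buildHeap runs, the smallest key occupies the first array slot, exactly as the text states for position 1 of a min-heap.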
Implementing d-heaps as arrays

The nodes of a d-heap can be stored in an array in breadth-first order. This allows the indices of parents and children to be calculated directly, eliminating the need for pointers: if x is the index of an item, then ⌈(x − 1)/d⌉ is the index of its parent p(x), and the indices of the children of x lie in the range d(x − 1) + 2, ..., dx + 1. When the key of an item is decreased, we can restore heap order by repeatedly swapping the item with its parent; similarly, when an item's key is increased, we swap it downward.

d-heap operations

item function findmin(heap h);
    if h = { } → return null | h ≠ { } → return h(1) fi
end

procedure siftup(item i, integer x, modifies heap h);
    integer p;
    p := ⌈(x − 1)/d⌉;
    do p ≠ 0 and key(h(p)) > key(i) →
        h(x) := h(p); x := p; p := ⌈(x − 1)/d⌉
    od;
    h(x) := i
end

procedure insert(item i, modifies heap h);
    siftup(i, |h| + 1, h)
end

integer function minchild(integer x, heap h);
    integer i, minc;
    minc := d(x − 1) + 2;
    if minc > |h| → return 0 fi;
    i := minc + 1;
    do i ≤ min{dx + 1, |h|} →
        if key(h(i)) < key(h(minc)) → minc := i fi;
        i := i + 1
    od;
    return minc
end

procedure siftdown(item i, integer x, modifies heap h);
    integer c;
    c := minchild(x, h);
    do c ≠ 0 and key(h(c)) < key(i) →
        h(x) := h(c); x := c; c := minchild(x, h)
    od;
    h(x) := i
end

procedure delete(item i, modifies heap h);
    item j;
    j := h(|h|); h(|h|) := null;
    if i ≠ j →
        if key(j) < key(i) → siftup(j, h⁻¹(i), h)
        | key(j) ≥ key(i) → siftdown(j, h⁻¹(i), h)
        fi
    fi
end

item function deletemin(modifies heap h);
    item i;
    if h = { } → return null fi;
    i := h(1);
    delete(h(1), h);
    return i
end

procedure changekey(item i, keytype k, modifies heap h);
    keytype ki;
    ki := key(i); key(i) := k;
    if k < ki → siftup(i, h⁻¹(i), h)
    | k ≥ ki → siftdown(i, h⁻¹(i), h)
    fi
end

d-heap algorithms

A d-heap is a tree with the property that a parent's value is smaller than (or equal to) the value of any of its children. For example, the min-heap we have seen in class is a 2-heap. Given the number of elements n in the d-heap, in terms of d and n, what is the time cost in the worst case (big-oh notation) of each of the following operations?

1. buildheap: builds a d-heap from a list of n naturals read from standard input.
2. insertheap: inserts a new element into the d-heap.
3. decreasekey(p, d): lowers the value of the item at position p by a positive amount d.
4. increasekey(p, d): increases the value of the item at position p by a positive amount d.
5. remove(p): removes the node at position p from the d-heap. This is done by performing decreasekey(p, ∞) and then performing deletemin(h).
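As a runnable companion to the pseudocode above, here is a compact Java sketch of a d-ary min-heap; the names are my own, and it uses 0-based array indices, so the parent of position x is (x − 1)/d and the children of x occupy positions dx + 1 through dx + d:

```java
import java.util.ArrayList;
import java.util.List;

// A d-ary min-heap corresponding to the siftup/siftdown pseudocode above,
// with 0-based indices: parent(x) = (x-1)/d, children(x) = d*x+1 .. d*x+d.
public class DHeap {
    private final int d;
    private final List<Integer> h = new ArrayList<>();

    public DHeap(int d) { this.d = d; }

    public void insert(int key) {            // siftup from the new last slot
        h.add(key);
        int x = h.size() - 1;
        while (x > 0 && h.get((x - 1) / d) > key) {
            h.set(x, h.get((x - 1) / d));
            x = (x - 1) / d;
        }
        h.set(x, key);
    }

    private int minChild(int x) {            // index of smallest child, or -1
        int first = d * x + 1;
        if (first >= h.size()) return -1;
        int minc = first;
        for (int i = first + 1; i <= Math.min(d * x + d, h.size() - 1); i++)
            if (h.get(i) < h.get(minc)) minc = i;
        return minc;
    }

    public int deleteMin() {                 // move last item to root, siftdown
        int min = h.get(0);
        int last = h.remove(h.size() - 1);
        if (!h.isEmpty()) {
            int x = 0, c = minChild(0);
            while (c != -1 && h.get(c) < last) {
                h.set(x, h.get(c));
                x = c;
                c = minChild(x);
            }
            h.set(x, last);
        }
        return min;
    }

    public static void main(String[] args) {
        DHeap heap = new DHeap(3);           // a 3-heap
        for (int k : new int[]{5, 2, 9, 1, 7}) heap.insert(k);
        System.out.println(heap.deleteMin() + " " + heap.deleteMin()); // 1 2
    }
}
```

Because each node has d children, siftup does at most ⌈log_d n⌉ swaps, while siftdown costs O(d · log_d n) comparisons — the trade-off the questions above are probing.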
The best-known application of a d-heap is Dijkstra's algorithm. Dijkstra's algorithm (named after its discoverer, the Dutch computer scientist E. W. Dijkstra) solves the problem of finding the shortest path from a point in a graph (the source) to a destination, with non-negative edge weights. It turns out that one can find the shortest paths from a given source to all vertices (points) in the graph in the same time; hence, this problem is sometimes called the single-source shortest-paths problem. Dijkstra's algorithm is a greedy algorithm, which finds shortest paths from a single source to all other vertices in the graph. Before describing the algorithm formally, let us study the method through an example.

Figure: A directed graph with no negative edges.

Dijkstra's algorithm keeps two sets of vertices:

S — the set of vertices whose shortest paths from the source have already been determined.
V − S — the set of remaining vertices.

The other data structures needed are:

d — an array of best estimates of the shortest path to each vertex from the source.
pi — an array of predecessors for each vertex; a predecessor is a vertex to which a shortest path has already been determined.

The basic operation of Dijkstra's algorithm is edge relaxation. If there is an edge from u to v, then the shortest known path from s to u can be extended to a path from s to v by adding the edge (u, v) at the end. This path will have length d[u] + w(u, v). If this is less than d[v], we can replace the current value of d[v] with the new value.

The predecessor list is an array of indices, one for each vertex of the graph. Each vertex entry contains the index of its predecessor in a path through the graph.

Operation of the algorithm

The following sequence of diagrams illustrates the operation of Dijkstra's algorithm. The bold vertices indicate the vertices to which shortest paths have been determined.

1. Initialize the graph: all the vertices have infinite costs except the source vertex, which has zero cost.
2. From all the vertices, choose the closest vertex to the source. As we initialized d[s] to 0, it is s itself (shown in a bold circle). Add it to S.

3. Relax all vertices adjacent to s, i.e. u and x, and update the best estimates of u and x with their distances from s.

4. Choose the nearest vertex, x. Relax all vertices adjacent to x, and update the predecessors for u, v and y (the predecessor of each becomes x). Add x to S.

5. Now y is the closest vertex; add it to S. Relax v and adjust its predecessor.

6. u is now closest; add it to S and adjust its adjacent vertex, v.
7. Finally, add v to S. The predecessor list now defines the shortest path from each node back to s.

Dijkstra's algorithm

1. Initialise d and pi: for each vertex v in V(g), set d[v] := infinity and pi[v] := nil; set d[s] := 0.
2. Set S to empty.
3. While V − S is not empty:
   (a) Sort the vertices in V − S according to the current best estimate of their distance from the source; u := extract-min(V − S).
   (b) Add vertex u, the closest vertex in V − S, to S.
   (c) Relax all the vertices still in V − S connected to u:
       relax(Node u, Node v, double w[][]):
           if d[v] > d[u] + w[u][v] then
               d[v] := d[u] + w[u][v]; pi[v] := u

In summary, this algorithm starts by assigning a weight of infinity to all vertices, and then selecting a source and assigning a weight of zero to it. Vertices are added to the set S, for which the shortest paths are known. When a vertex u is selected, the weights of its adjacent vertices are relaxed. Once all vertices are relaxed, their predecessor vertices are updated (pi). The cycle of selection, weight relaxation and predecessor update is repeated until the shortest path to all vertices has been found.

Summary

A heap is a partially sorted binary tree. Although a heap is not completely in order, it conforms to a sorting principle: every node has a value less than either of its children (for the sake of simplicity, I will assume that all orderings are from least to greatest).
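Dijkstra's algorithm as described above can be sketched in Java with a priority queue supplying the extract-min step. The graph, class and method names below are illustrative assumptions (the predecessor array pi is omitted for brevity):

```java
import java.util.*;

// A sketch of Dijkstra's algorithm: d[] holds the best known distances,
// and the closest unsettled vertex is extracted repeatedly while its
// outgoing edges are relaxed. edges[u] is a list of {v, weight} pairs.
public class Dijkstra {
    public static int[] shortestPaths(List<int[]>[] edges, int s) {
        int n = edges.length;
        int[] d = new int[n];
        Arrays.fill(d, Integer.MAX_VALUE);
        d[s] = 0;
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[1] - b[1]);
        pq.add(new int[]{s, 0});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();          // extract-min: closest vertex
            int u = top[0];
            if (top[1] > d[u]) continue;    // stale queue entry, skip
            for (int[] e : edges[u]) {      // relax each edge (u, v)
                int v = e[0], w = e[1];
                if (d[u] + w < d[v]) {
                    d[v] = d[u] + w;
                    pq.add(new int[]{v, d[v]});
                }
            }
        }
        return d;
    }

    public static void main(String[] args) {
        // A small hypothetical graph, not the one in the figure.
        List<int[]>[] g = new List[4];
        for (int i = 0; i < 4; i++) g[i] = new ArrayList<>();
        g[0].add(new int[]{1, 4}); g[0].add(new int[]{2, 1});
        g[2].add(new int[]{1, 2}); g[1].add(new int[]{3, 5});
        System.out.println(Arrays.toString(shortestPaths(g, 0)));
    }
}
```

The "stale entry" check plays the role of re-sorting V − S: outdated distances left in the queue are simply skipped when polled.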
Additionally, a heap is a "complete" tree — a complete tree is one in which there are no gaps between leaves.

For instance, a tree with a root node that has only one child must have that child as the left node. More precisely, a complete tree is one that has every level filled in before adding a node to the next level, and one that has the nodes in a given level filled in from left to right, with no breaks.

Keywords

Binary heap: A binary heap is a heap-ordered binary tree which has a very special shape called a complete tree.

Discrete event simulation: One of the most important applications of priority queues is in discrete event simulation.

d-heap: A d-heap is a priority queue data structure, a generalization of the binary heap in which the nodes have d children instead of 2.

Heap: A heap is a specialized tree-based data structure that satisfies the heap property: if B is a child node of A, then key(A) ≥ key(B).

Self Assessment

Fill in the blanks:

1. A heap is a partially sorted ............
2. Complete trees and perfect trees are ............
3. In a heap the ............ is found at the root.
4. The ............ method removes from a priority queue the item having the smallest key.
5. The ............ of an event depends on the type of that event.
6. The minimum priority item in a min-heap may always be found at ............ of the array.
7. The heap is one maximally-efficient implementation of an abstract data type called a ............
8. ............ calculations are used to find the parent and the children of a given node in the tree.

Review Questions

1. What do you mean by heaps? Explain a complete binary tree.
2. Prove that "a complete binary tree of height h ≥ 0 contains at least 2^h and at most 2^(h+1) − 1 nodes."
3. Prove that "the internal path length of a binary tree with n nodes is at least as big as the internal path length of a complete binary tree with n nodes."
4. Explain the implementation of a binary heap.
5. Describe how you will put items into binary heaps.
6. Write a method and the corresponding recursive function to traverse a binary tree (in whatever order you find convenient) and dispose of all its nodes. Use this method to implement a Binary_tree destructor.
7. Consider a heap of n keys, with xk being the key in position k (in the contiguous representation) for 1 ≤ k ≤ n. Prove that the height of the subtree rooted at xk is the greatest integer not exceeding lg(n/k), for all k satisfying 1 ≤ k ≤ n.
8. "A heap can be used as a priority queue: the highest priority item is at the root and is trivially extracted." Discuss.
9. "The enqueue method of the BinaryHeap class takes as its argument the item to be inserted in the heap." Explain.

Answers: Self Assessment

1. binary tree  2. closely related  3. smallest key  4. dequeuemin  5. simulation  6. position 1  7. priority queue  8. array subscript

Further Readings

Books:
Brian Kernighan and Dennis Ritchie, The C Programming Language, Prentice Hall
Burkhard Monien, Thomas Ottmann, Data Structures and Efficient Algorithms, Springer
Kruse, Data Structure and Program Design, Prentice Hall of India, New Delhi
Mark Allen Weiss, Data Structures and Algorithm Analysis in C, Second ed., Addison-Wesley Publishing
R. G. Dromey, How to Solve it by Computer, Cambridge University Press
Shi-Kuo Chang, Data Structures and Algorithms, World Scientific
Sorenson and Tremblay, An Introduction to Data Structures with Algorithms
Thomas Cormen, Charles E. Leiserson, Ronald Rivest, Introduction to Algorithms, Prentice-Hall of India Pvt. Limited, New Delhi
Timothy Budd, Classic Data Structures in C++, Addison Wesley

Online links:
www.en.wikipedia.org
www.web-source.net
www.webopedia.com
Unit: Leftist Heaps and Binomial Queues

Contents
Objectives
Introduction
Leftist Heaps
    Leftist Trees
    Implementation
    Merging Leftist Heaps
    Putting Items into a Leftist Heap
    Removing Items from a Leftist Heap
Skew Heaps
Binomial Queues
Summary
Keywords
Self Assessment
Review Questions
Further Readings

Objectives

After studying this unit, you will be able to:
State the concept of leftist heaps
Realise skew heaps
Explain binomial queues

Introduction

Numerous data structures have been developed that can support efficient implementations of all the priority-queue operations. Most of them are based on a direct linked representation of heap-ordered trees. Two links are needed for moving down the tree (either to both children in a binary tree, or to the first child and next sibling in a binary-tree representation of a general tree), and one link to the parent is needed for moving up the tree. Developing implementations of the heap-ordering operations that work for any (heap-ordered) tree shape with explicit nodes and links or another representation is generally straightforward. The difficulty lies in dynamic operations such as insert, remove, and join, which require us to modify the tree structure. Different data structures are based on different strategies for modifying the tree structure while still maintaining balance in the tree.

Leftist Heaps

A leftist heap is a heap-ordered binary tree which has a very special shape called a leftist tree. One of the nice properties of leftist heaps is that it is possible to merge two leftist heaps efficiently. As a result, leftist heaps are suited for the implementation of mergeable priority queues.
Leftist Trees

A leftist tree is a tree which tends to "lean" to the left. The tendency to lean to the left is defined in terms of the shortest path from the root to an external node: in a leftist tree, the shortest path to an external node is always found on the right.

Every node in a binary tree has associated with it a quantity called its null path length, which is defined as follows.

Null path and null path length: Consider an arbitrary node x in some binary tree T. The null path of node x is the shortest path in T from x to an external node of T. The null path length of node x is the length of its null path.

Sometimes it is convenient to talk about the null path length of an entire tree rather than of a node.

Null path length of a tree: The null path length of an empty tree is zero, and the null path length of a non-empty binary tree T = {R, TL, TR} is the null path length of its root R.

When a new node or subtree is attached to a given tree, it is usually attached in place of an external node. Since the null path length of a tree is the length of the shortest path from the root of the tree to an external node, the null path length gives a lower bound on the cost of insertion. Example: the running time for insertion in a binary search tree is at least d · t(compare) = O(d), where d is the null path length of the tree.

A leftist tree is a tree in which the shortest path to an external node is always on the right. This informal idea is defined more precisely in terms of the null path lengths as follows.

Definition (leftist tree): A leftist tree is a binary tree T with the following properties: either T = ∅, or T = {R, TL, TR}, where both TL and TR are leftist trees which have null path lengths dL and dR, respectively, such that dL ≥ dR.

Figure shows an example of a leftist heap. A leftist heap is simply a heap-ordered leftist tree. The external depth of each node is shown to the right of the node in the figure. The figure clearly shows that it is not necessarily the case in a leftist tree that the number of nodes to the left of a given node is greater than the number to the right. However, it is always the case that the null path length on the left is greater than or equal to the null path length on the right for every node in the tree.
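The definition of null path length above translates directly into a short recursive computation. The following Java sketch (with an illustrative Node class, not the book's) also checks the leftist condition that npl(left) ≥ npl(right) holds at every node:

```java
// Computes the null path length defined above: the npl of an empty tree
// is zero, and the npl of a node is one plus the minimum npl of its two
// subtrees (the length of the shortest path to an external node).
public class NullPathLength {
    public static class Node {
        Node left, right;
        public Node(Node l, Node r) { left = l; right = r; }
    }

    public static int npl(Node t) {
        if (t == null) return 0;                       // empty tree
        return 1 + Math.min(npl(t.left), npl(t.right));
    }

    // A tree is leftist if npl(left) >= npl(right) at every node.
    public static boolean isLeftist(Node t) {
        if (t == null) return true;
        return npl(t.left) >= npl(t.right)
                && isLeftist(t.left) && isLeftist(t.right);
    }

    public static void main(String[] args) {
        // Root whose left child has its own left child; right child is a leaf.
        Node t = new Node(new Node(new Node(null, null), null),
                          new Node(null, null));
        System.out.println(npl(t) + " " + isLeftist(t)); // 2 true
    }
}
```

Note that a leaf has null path length 1 under this definition, since its (empty) subtrees have null path length zero.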
Figure: A leftist heap.

The reason for our interest in leftist trees is illustrated by the following theorem.

Theorem: Consider a leftist tree T which contains n internal nodes. The path leading from the root of T downwards to the rightmost external node contains at most ⌊log₂(n + 1)⌋ nodes.

Proof: Assume that T has null path length d. Then T must contain at least 2^(d−1) leaves; otherwise, there would be a shorter path than d from the root of T to an external node. A binary tree with exactly l leaves has exactly l − 1 non-leaf internal nodes. Since T has at least 2^(d−1) leaves, it must contain at least n ≥ 2^d − 1 internal nodes altogether. Therefore d ≤ log₂(n + 1). Since T is a leftist tree, the shortest path to an external node must be the path on the right. Thus, the length of the path to the rightmost external node is at most log₂(n + 1).

There is an interesting dichotomy between AVL balanced trees and leftist trees. The shape of an AVL tree satisfies the AVL balance condition, which stipulates that the heights of the left and right subtrees of every node may differ by at most one. The effect of AVL balancing is to ensure that the height of the tree is O(log n). On the other hand, leftist trees have an "imbalance condition" which requires the null path length of the left subtree to be greater than or equal to that of the right subtree. The effect of this condition is to ensure that the length of the right path in a leftist tree is O(log n). Therefore, by devising algorithms for manipulating leftist heaps which only follow the right path of the heap, you can achieve running times which are logarithmic in the number of nodes.

The dichotomy also extends to the structure of the algorithms. For example, an imbalance sometimes results from an insertion in an AVL tree; the imbalance is rectified by doing rotations. Similarly, an insertion into a leftist tree may result in a violation of the "imbalance condition": that is, the null path length of the right subtree of a node may become greater than that of the left subtree. Fortunately, it is possible to restore the proper condition simply by swapping the left and right subtrees of that node.

Implementation

This section presents an implementation of leftist heaps that is based on the binary tree implementation. The program below introduces the LeftistHeap class. The LeftistHeap class extends the BinaryTree class introduced earlier, and it implements the MergeablePriorityQueue interface.
public class LeftistHeap
    extends BinaryTree
    implements MergeablePriorityQueue
{
    protected int nullPathLength;
    // ...
}

Program: LeftistHeap fields.

Merging Leftist Heaps

In order to merge two leftist heaps, say h1 and h2, declared as follows:

MergeablePriorityQueue h1 = new LeftistHeap();
MergeablePriorityQueue h2 = new LeftistHeap();

we invoke the merge method like this:

h1.merge(h2);

The effect of the merge method is to take all the nodes from h2 and to attach them to h1, thus leaving h2 as the empty heap.

In order to achieve a logarithmic running time, it is important for the merge method to do all its work on the right sides of h1 and h2. It turns out that the algorithm for merging leftist heaps is actually quite simple. To begin with, if h1 is the empty heap, then we can simply swap the contents of h1 and h2. Otherwise, let us assume that the root of h2 is larger than the root of h1. Then we can merge the two heaps by recursively merging h2 with the right subheap of h1. After doing so, it may turn out that the right subheap of h1 now has a larger null path length than the left subheap. This we rectify by swapping the left and right subheaps so that the result is again leftist. On the other hand, if h2 initially has the smaller root, we simply exchange the roles of h1 and h2 and proceed as above.

Figure illustrates the merge operation. In this example, we wish to merge the two trees h1 and h2 shown in Figure (a). Since h2 has the larger root, it is recursively merged with the right subtree of h1. The result of that merge replaces the right subtree of h1, as shown in Figure (b). Since the null path length of the right subtree is now greater than that of the left, the subtrees of h1 are swapped, giving the leftist heap shown in Figure (c).
Figure: Merging leftist heaps.

The program below gives the code for the merge method of the LeftistHeap class. The merge method makes use of two other methods, swapContents and swapSubtrees. The swapContents method takes as its argument a leftist heap, and exchanges all the contents (key and subtrees) of this heap with the given one. The swapSubtrees method exchanges the left and right subtrees of this node. The implementation of these routines is trivial and is left as a project for the reader. Clearly, the worst-case running time for each of these routines is O(1).

The merge method only visits nodes on the rightmost paths of the trees being merged. Suppose we are merging two trees, say h1 and h2, with null path lengths d1 and d2, respectively. Then the running time of the merge method is

O((d1 + d2) · t(isGT)) = O(d1 + d2),
where t(isGT) is the time required to compare two keys. If we assume that the time to compare two keys is a constant, then we get O(log n1 + log n2), where n1 and n2 are the number of internal nodes in trees h1 and h2, respectively.

public class LeftistHeap
    extends BinaryTree
    implements MergeablePriorityQueue
{
    protected int nullPathLength;

    public void merge (MergeablePriorityQueue queue)
    {
        LeftistHeap arg = (LeftistHeap) queue;
        if (isEmpty ())
            swapContents (arg);
        else if (!arg.isEmpty ())
        {
            if (((Comparable) getKey ()).isGT ((Comparable) arg.getKey ()))
                swapContents (arg);
            getRightHeap ().merge (arg);
            if (getLeftHeap ().nullPathLength < getRightHeap ().nullPathLength)
                swapSubtrees ();
            nullPathLength = 1 + Math.min (
                getLeftHeap ().nullPathLength,
                getRightHeap ().nullPathLength);
        }
    }
    // ...
}

Program: LeftistHeap class merge method.

Putting Items into a Leftist Heap

The enqueue method of the LeftistHeap class is used to put items into the heap. enqueue is easily implemented using the merge operation. That is, to enqueue an item in a given heap, we simply create a new heap containing the one item to be enqueued and merge it with the given heap. The algorithm to do this is shown in the following program.

public class LeftistHeap
    extends BinaryTree
    implements MergeablePriorityQueue
{
    protected int nullPathLength;

    public void enqueue (Comparable object)
    {
        merge (new LeftistHeap (object));
    }
    // ...
}

Program: LeftistHeap class enqueue method.
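The merge and enqueue methods above depend on the book's BinaryTree machinery. As a self-contained illustration of the same technique, here is a minimal pointer-based sketch (all names are my own): merge keeps the smaller root, recursively merges the other heap into the right subtree, and swaps subtrees whenever the right null path length exceeds the left:

```java
// A self-contained leftist-heap sketch, independent of the book's
// BinaryTree/MergeablePriorityQueue classes.
public class LeftistNode {
    public int key, npl;             // npl = null path length of this node
    public LeftistNode left, right;

    public LeftistNode(int key) { this.key = key; this.npl = 1; }

    public static LeftistNode merge(LeftistNode a, LeftistNode b) {
        if (a == null) return b;
        if (b == null) return a;
        if (b.key < a.key) { LeftistNode t = a; a = b; b = t; }
        a.right = merge(a.right, b);           // work only on the right path
        int l = a.left == null ? 0 : a.left.npl;
        int r = a.right.npl;
        if (r > l) {                           // swap to restore leftist shape
            LeftistNode t = a.left; a.left = a.right; a.right = t;
            int s = l; l = r; r = s;
        }
        a.npl = 1 + r;                         // r is now the smaller npl
        return a;
    }

    // enqueue is just a merge with a one-node heap.
    public static LeftistNode insert(LeftistNode h, int key) {
        return merge(h, new LeftistNode(key));
    }

    public static void main(String[] args) {
        LeftistNode h = null;
        for (int k : new int[]{6, 2, 8, 1}) h = insert(h, k);
        System.out.println(h.key);             // smallest key is at the root
    }
}
```

Since every step descends a right path, the work is bounded by the two right-path lengths, matching the O(log n1 + log n2) bound stated above.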
The expression for the running time of the insert operation follows directly from that of the merge operation. That is, the time required for the insert operation in the worst case is

O(d · t(isGT)) = O(d),

where d is the null path length of the heap into which the item is inserted. If we assume that two keys can be compared in constant time, the running time for insert becomes simply O(log n), where n is the number of nodes in the tree into which the item is inserted.

Task: Discuss the use of the enqueue method.

Removing Items from a Leftist Heap

The findMin method locates the item with the smallest key in a given priority queue, and the dequeueMin method removes it from the queue. Since the smallest item in a heap is found at the root, the findMin operation is easy to implement; the program below shows how it can be done. Clearly, the running time of the findMin operation is O(1).

public class LeftistHeap
    extends BinaryTree
    implements MergeablePriorityQueue
{
    protected int nullPathLength;

    public Comparable findMin ()
    {
        if (isEmpty ())
            throw new ContainerEmptyException ();
        return (Comparable) getKey ();
    }
    // ...
}

Program: LeftistHeap class findMin method.

Since the smallest item in a heap is at the root, the dequeueMin operation must delete the root node. Since a leftist heap is a binary tree, the root has at most two children. In general, when the root is deleted, we are left with two non-empty leftist heaps. Since we already have an efficient way to merge leftist heaps, the solution is to simply merge the two children of the root to obtain a single heap again. The program below shows how the dequeueMin operation of the LeftistHeap class can be implemented.

public class LeftistHeap
    extends BinaryTree
    implements MergeablePriorityQueue
{
    protected int nullPathLength;

    public Comparable dequeueMin ()
    {
        if (isEmpty ())
            throw new ContainerEmptyException ();
        Comparable result = (Comparable) getKey ();
        LeftistHeap oldLeft = (LeftistHeap) getLeftHeap ();
        LeftistHeap oldRight = (LeftistHeap) getRightHeap ();
        purge ();
        swapContents (oldLeft);
        merge (oldRight);
        return result;
    }
    // ...
}

Program: LeftistHeap class dequeueMin method.

The running time of dequeueMin is determined by the time required to merge the two children of the root, since the rest of the work in dequeueMin can be done in constant time. Consider the running time to delete the root of a leftist heap with n internal nodes. The running time to merge the left and right subtrees is

O((dL + dR) · t(isGT)) = O(dL + dR),

where dL and dR are the null path lengths of the left and right subtrees, respectively. In the worst case, both dL and dR are O(log n), so dequeueMin runs in O(log n) time.

Skew Heaps

There are several ways to implement heaps in a self-adjusting fashion. The one we shall discuss is called skew heaps, as proposed by Sleator and Tarjan, and is analogous to leftist heaps. A skew heap is a heap-ordered binary tree; that is, it is a binary tree with one key in each node, so that for each node x other than the root, the key at node x is no less than the key at the parent of x. To represent such a tree, we store in each node x its associated key, denoted key(x), and two pointers, left(x) and right(x), to its left child and right child, respectively. If x has no left child we define left(x) = null; if x has no right child we define right(x) = null. Access to the tree is by a pointer to its root; we represent an empty tree by a pointer to null.

With this representation we can carry out the various heap operations as follows. We perform makeheap(h) in O(1) time by initializing h to null. Since heap order implies that the root holds a minimum key in the tree, we can carry out findmin(h) in O(1) time by returning the key at the root, returning null if the heap is empty. We perform insert and deletemin using union: to carry out insert(x, h), we make x into a one-node heap and union it with h; to carry out deletemin(h), if h is not empty we replace h by the union of its left and right subtrees and return the key at the original root (if h is originally empty we return null). To perform union(h1, h2), we form a single tree by traversing down the right paths of h1 and h2, merging them into a single right path with keys in nondecreasing order. First assume the left subtrees of nodes along the merge path do not change (Figure (a)). The time for the union operation is bounded by a constant times the length of the merge path. To make union efficient, we must keep right paths short. In leftist heaps this is done by maintaining the invariant that, for any node x, the right path descending from x is a shortest path down to a missing node. Maintaining this invariant requires storing at every node the length of a shortest path down to a missing node; after the merge we walk back up the merge path, updating the shortest path lengths and swapping left and right children as necessary to maintain the leftist property. The length of the right path in a leftist heap of n nodes is at most ⌊log₂ n⌋, implying an O(log n)
worst-case time bound for each of the heap operations, where n is the number of nodes in the heap or heaps involved.

Figure: Union of two skew heaps. (a) Merge of the right paths. (b) Swapping of children along the path formed by the merge.

In our self-adjusting version of this data structure, we perform the union operation by merging the right paths of the two trees and then swapping the left and right children of every node on the merge path except the lowest (Figure (b)); this makes the potentially long right path formed by the merge into a left path. We call the resulting data structure a skew heap.

Task: "A skew heap is a heap-ordered binary tree." Explain.

Binomial Queues

A binomial queue is a priority queue that is implemented not as a single tree but as a collection of heap-ordered trees. A collection of trees is called a forest. Each of the trees in a binomial queue
has a very special shape called a binomial tree. Binomial trees are general trees: the maximum degree of a node is not fixed. The remarkable characteristic of binomial queues is that the merge operation is similar in structure to binary addition, i.e., the collection of binomial trees that make up the binomial queue is like the set of bits that make up the binary representation of a non-negative integer. Furthermore, the merging of two binomial queues is done by adding the binomial trees that make up those queues in the same way that the bits are combined when adding two binary numbers.

Binomial Trees

A binomial tree is a general tree with a very special shape.

Definition (binomial tree): The binomial tree of order k ≥ 0 with root R is the tree Bk defined as follows:

1. If k = 0, Bk = B0 = {R}; i.e., the binomial tree of order zero consists of a single node, R.
2. If k > 0, Bk = {R, B0, B1, ..., Bk−1}; i.e., the binomial tree of order k > 0 comprises the root R and k binomial subtrees, B0, B1, ..., Bk−1.

Figure shows the first five binomial trees, B0 through B4. It follows directly from the definition that the root of Bk, the binomial tree of order k, has degree k. Since k may be arbitrarily large, so too can the degree of the root. Furthermore, the root of a binomial tree has the largest fan-out of any of the nodes in that tree.

Figure: Binomial trees.
The number of nodes in a binomial tree of order k is a function of k.

Theorem: The binomial tree of order k, Bk, contains 2^k nodes.

Proof (by induction): Let nk be the number of nodes in Bk, a binomial tree of order k.

Base case: By definition, B0 consists of a single node. Therefore n0 = 1 = 2^0.

Inductive hypothesis: Assume that nk = 2^k for k = 0, 1, ..., l, for some l ≥ 0. Consider the binomial tree of order l + 1: B(l+1) = {R, B0, B1, ..., Bl}. Therefore, the number of nodes in B(l+1) is given by

n(l+1) = 1 + (n0 + n1 + ... + nl) = 1 + (2^0 + 2^1 + ... + 2^l) = 1 + (2^(l+1) − 1) = 2^(l+1).

Therefore, by induction on l, nk = 2^k for all k ≥ 0.

It follows from the theorem that binomial trees only come in sizes that are a power of two: nk ∈ {1, 2, 4, 8, ...}. Furthermore, for a given power of two, there is exactly one shape of binomial tree.

Theorem: The height of Bk, the binomial tree of order k, is k.

Proof (by induction): Let hk be the height of Bk, a binomial tree of order k.

Base case: By definition, B0 consists of a single node. Therefore h0 = 0.

Inductive hypothesis: Assume that hk = k for k = 0, 1, ..., l, for some l ≥ 0. Consider the binomial tree of order l + 1: B(l+1) = {R, B0, B1, ..., Bl}. Therefore, the height of B(l+1) is given by

h(l+1) = 1 + max{h0, h1, ..., hl} = 1 + max{0, 1, ..., l} = l + 1.

Therefore, by induction on l, hk = k for all k ≥ 0.

The second theorem tells us that the height of a binomial tree of order k is k, and the first tells us that the number of nodes is nk = 2^k. Therefore, the height of Bk is exactly log₂ nk.

Figure shows that there are two ways to think about the construction of binomial trees: a binomial tree Bk consists of a root node to which the binomial trees B0, B1, ..., Bk−1 are attached, as shown in Figure (a).
Figure: Two views of a binomial tree. (a) Bk = {R, B0, B1, ..., Bk−1}. (b) Bk composed of two binomial trees of order k − 1.

Alternatively, we can think of Bk as being comprised of two binomial trees of order k − 1. For example, the figure shows that B4 is made up of two instances of B3. In general, suppose we have two trees of order k − 1, say B′(k−1) and B″(k−1), where B′(k−1) = {R, B0, B1, ..., B(k−2)}. Then we can construct a binomial tree of order k by combining the trees to get Bk = {R, B0, B1, ..., B(k−2), B″(k−1)}.

Why do we call Bk a binomial tree? It is because the number of nodes at a given depth in the tree is determined by the binomial coefficient, and the binomial coefficient derives its name from the binomial theorem. The binomial theorem tells us how to compute the nth power of a binomial, and a binomial is an expression which consists of two terms, such as x + y. That is why it is called a binomial tree.

Task: "A binomial tree is a general tree with a very special shape." Discuss.
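The two-copies-of-B(k−1) construction can be illustrated with a short Java sketch (all names are illustrative): repeatedly linking pairs of equal-order trees builds Bk, and we can check that B3 has 2³ = 8 nodes and height 3, as the theorems above predict:

```java
import java.util.ArrayList;
import java.util.List;

// A binomial tree B_k is formed by making the root of one B_(k-1) a child
// of the root of another, so B_k has exactly 2^k nodes and height k.
public class BinomialTree {
    int key;
    List<BinomialTree> children = new ArrayList<>();

    public BinomialTree(int key) { this.key = key; }

    // Link two trees of equal order: the larger root becomes a child
    // (this keeps heap order, as in a binomial queue merge).
    public static BinomialTree link(BinomialTree a, BinomialTree b) {
        if (b.key < a.key) { BinomialTree t = a; a = b; b = t; }
        a.children.add(b);
        return a;
    }

    public int size() {
        int n = 1;
        for (BinomialTree c : children) n += c.size();
        return n;
    }

    public int height() {
        int h = 0;
        for (BinomialTree c : children) h = Math.max(h, 1 + c.height());
        return h;
    }

    public static void main(String[] args) {
        // Build B_3 from eight one-node trees (B_0) by repeated linking,
        // mirroring the carry propagation of binary addition.
        BinomialTree[] t = new BinomialTree[8];
        for (int i = 0; i < 8; i++) t[i] = new BinomialTree(i);
        for (int step = 1; step < 8; step *= 2)
            for (int i = 0; i < 8; i += 2 * step)
                t[i] = link(t[i], t[i + step]);
        System.out.println(t[0].size() + " " + t[0].height()); // 8 3
    }
}
```

The linking loop is exactly the "carry" of binary addition: two trees of order k combine into one of order k + 1, which is why merging binomial queues mirrors adding binary numbers.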
Summary

A power-of-2 heap is a left-heap-ordered tree consisting of a root node with an empty right subtree and a complete left subtree. The tree corresponding to a power-of-2 heap by the left-child, right-sibling correspondence is called a binomial tree. Binomial trees and power-of-2 heaps are equivalent. We work with both representations because binomial trees are slightly easier to visualize, whereas the simple representation of power-of-2 heaps leads to simpler implementations.

Keywords

Binomial queue: A binomial queue is a priority queue that is implemented not as a single tree but as a collection of heap-ordered trees.

Leftist heap: A leftist heap is a heap-ordered binary tree which has a very special shape called a leftist tree.

Leftist tree: A leftist tree is a tree which tends to "lean" to the left. The tendency to lean to the left is defined in terms of the shortest path from the root to an external node.

Skew heap: A skew heap is a heap-ordered binary tree.

Self Assessment

Fill in the blanks:

1. A collection of trees is called a ............
2. Every node in a binary tree has associated with it a quantity called its ............
3. The null path length of an empty tree is ............
4. A leftist tree is a tree in which the shortest path to an external node is always on the ............
5. The ............ method of the LeftistHeap class is used to put items into the heap.
6. The ............ method locates the item with the smallest key in a given priority queue.
7. Skew heaps were proposed by ............
8. Each of the trees in a binomial queue has a very special shape called a ............

Review Questions

1. What do you mean by leftist heaps? Describe null path and null path length.
2. Write the methods to implement queues by the simple but slow technique of keeping the front of the queue always in the first position of a linear array.
3. Prove that "a leftist tree T which contains n internal nodes has a path leading from the root of T downwards to the rightmost external node containing at most ⌊log₂(n + 1)⌋ nodes."
4. Write methods to implement queues in a circular array with one unused entry in the array. That is, we consider that the array is full when the rear is two positions before the front; when the rear is one position before, it will always indicate an empty queue.
5. Prove that "the binomial tree of order k, Bk, contains 2^k nodes" by induction, letting nk be the number of nodes in Bk, a binomial tree of order k.
6. Write a menu-driven demonstration program for manipulating a deque of characters, similar to the Extended_queue demonstration program.
7. Write the class definition and the method implementations needed to implement a deque in a linear array.
8. Write the methods needed to implement a deque in a circular array. Consider the class Deque as derived from the class Queue.
9. Write a method to implement queues, where the implementation does not keep a count of the entries in the queue but instead uses the special conditions rear = −1 and front = 0 to indicate an empty queue.

Answers: Self Assessment

1. forest  2. null path length  3. zero  4. right  5. enqueue  6. findMin  7. Sleator and Tarjan  8. binomial tree

Further Readings

Books:
Brian Kernighan and Dennis Ritchie, The C Programming Language, Prentice Hall
Burkhard Monien, Thomas Ottmann, Data Structures and Efficient Algorithms, Springer
Kruse, Data Structure and Program Design, Prentice Hall of India, New Delhi
Mark Allen Weiss, Data Structures and Algorithm Analysis in C, Second ed., Addison-Wesley Publishing
R. G. Dromey, How to Solve it by Computer, Cambridge University Press
Shi-Kuo Chang, Data Structures and Algorithms, World Scientific
Sorenson and Tremblay, An Introduction to Data Structures with Algorithms
Thomas Cormen, Charles E. Leiserson, Ronald Rivest, Introduction to Algorithms, Prentice-Hall of India Pvt. Limited, New Delhi
Timothy Budd, Classic Data Structures in C++, Addison Wesley

Online links:
www.en.wikipedia.org
www.web-source.net
www.webopedia.com
Mandeep Kaur, Lovely Professional University

Unit: Sorting

Contents

Objectives
Introduction
Internal Sorting
  Insertion Sort
  Algorithm of Insertion Sort
  Complexity Analysis
  Shell Sort
  Heap Sort
  Merge Sort
  Merging of Two Sorted Lists
  Quick Sort
  Bucket Sort
External Sorting
Summary
Keywords
Self Assessment
Review Questions
Further Readings

Objectives

After studying this unit, you will be able to:
- Discuss internal sorting
- Explain heap sort, merge sort and quick sort
- Describe external sorting
- Discuss the implementation of heaps

Introduction

Retrieval of information is made easier when it is stored in some predefined order. Sorting is, therefore, a very important computer application activity. Many sorting algorithms are available, and different environments require different sorting methods. Sorting algorithms can be characterised in the following two ways:

1. Simple algorithms which require on the order of n^2 (written as O(n^2)) comparisons to sort n items.
2. Sophisticated algorithms that require O(n log n) comparisons to sort n items.
The difference lies in the fact that the first method moves data only over small distances in the process of sorting, whereas the second method moves data over large distances, so that items settle into the proper order sooner, thus resulting in fewer comparisons. Performance of a sorting algorithm can also depend on the degree of order already present in the data.

There are two basic categories of sorting methods: internal sorting and external sorting. Internal sorting is applied when the entire collection of data to be sorted is small enough for the sorting to take place within main memory. The time required to read or write is not considered significant in evaluating the performance of internal sorting methods. External sorting methods are applied to a larger collection of data which resides on secondary devices; read and write access times are a major concern in determining the sorting performance of such methods.

Searching is the process of looking for something: finding one piece of data that has been stored within a whole group of data. It is often the most time-consuming part of many computer programs. There are a variety of methods, or algorithms, used to search for a data item, depending on how much data there is to look through, what kind of data it is, what type of structure the data is stored in, and even where the data is stored — inside computer memory or on some external medium.

Till now, we have studied a variety of data structures, their types, their uses and so on. Here we will concentrate on some techniques to search for a particular piece of data or information from a large amount of data. There are basically two types of searching techniques: linear (sequential) search and binary search.

Searching is a very common task in day-to-day life, where we are involved, at some time or other, in searching either for something needful at home, office or market, or for a word in a dictionary. We see that if things are organised in some manner, then search becomes efficient and fast. All the above facts apply to our computer programs also. Suppose we have a telephone directory stored in memory in an array which contains names and numbers. Now, what happens if we have to find a number? The answer is: search for that number in the array according to the (given) name. If the names were organised in some order, searching would have been fast.

Internal Sorting

The function of sorting, or ordering a list of objects according to some linear order, is so fundamental that it is ubiquitous in engineering applications in all disciplines. There are two broad categories of sorting methods: internal sorting takes place in the main memory, where we can take advantage of the random access nature of the main memory; external sorting is necessary when the number and size of objects are prohibitive to be accommodated in the main memory.

Problem: given records r1, r2, ..., rn, with key values k1, k2, ..., kn, produce the records in the order ri1, ri2, ..., rin, such that ki1 <= ki2 <= ... <= kin.

The complexity of a sorting algorithm can be measured in terms of:
(a) number of algorithm steps to sort n records;
(b) number of comparisons between keys (appropriate when the keys are long character strings);
(c) number of times records must be moved (appropriate when record size is large).
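The sequential search over the telephone-directory array mentioned above can be sketched in C. The struct layout and the names seq_search, entry, name and number are our own illustration, not from the text:

```c
#include <string.h>

/* A directory entry: a name and a telephone number. */
struct entry {
    const char *name;
    const char *number;
};

/* Sequential (linear) search: scan the array from the front until the
   name matches.  Returns the index of the entry, or -1 if absent. */
int seq_search(const struct entry dir[], int n, const char *name)
{
    int i;
    for (i = 0; i < n; i++)
        if (strcmp(dir[i].name, name) == 0)
            return i;
    return -1;
}
```

If the names were kept in sorted order, a binary search would locate an entry in O(log n) comparisons instead of O(n); this is the point the introduction is making.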
Any sorting algorithm that uses comparisons of keys needs at least O(n log n) time to accomplish the sorting.

Sorting Methods

Internal (in memory): quick sort, heap sort, bubble sort, insertion sort, selection sort, shell sort.
External (appropriate for secondary storage): merge sort, radix sort, poly-phase sort.

Insertion Sort

This is a naturally occurring sorting method, exemplified by a card player arranging the cards dealt to him. He picks up the cards as they are dealt and inserts them into the required position. Thus, at every step, we insert an item into its proper place in an already ordered list.

Example: The original figure traces insertion sort on a small list: at each step the next element is inserted before, between or after the elements already placed, until the whole list is sorted.

Thus, to find the correct position, search the list till an item just greater than the target is found; shift all the items from this point one down the list; insert the target in the vacated slot; repeat this process for all the elements in the list. This results in a sorted list.

Algorithm of Insertion Sort

The insertion sort algorithm somewhat resembles selection sort. The array is imaginarily divided into two parts — a sorted one and an unsorted one. At the beginning, the sorted part contains the first element of the array and the unsorted one contains the rest. At every step, the algorithm takes the first element in the
unsorted part and inserts it into the right place in the sorted one. When the unsorted part becomes empty, the algorithm stops. Sketchily, one step of the insertion sort algorithm looks like this:

    | sorted partial result | x | unsorted data |
becomes
    | sorted partial result, now including x | unsorted data |

Example: the original figure traces insertion sort on a small list containing a negative value. At each step the next element to be inserted is taken from the unsorted part and shifted leftwards past all larger elements until the left boundary or a smaller element is reached, and it is then inserted there.

The Ideas of Insertion

The main operation of the algorithm is insertion. The task is to insert a value into the sorted part of the array. Let us see the variants of how we can do it.

"Sifting down" using swaps. The simplest way to insert the next element into the sorted part is to sift it down until it occupies the correct position. Initially the element stays right after the sorted part. At each step the algorithm compares the element with the one before it and, if they stay in reversed order, swaps them. Let us see an illustration.
The original figure shows the element being swapped leftwards, one position at a time, until sifting is done.

This approach writes the sifted element to a temporary position many times. The next implementation eliminates those unnecessary writes.

Shifting instead of swapping. We can modify the previous algorithm so that it writes the sifted element only to its final, correct position: larger elements are shifted one place to the right, and when sifting is done the element is inserted into its final position (again illustrated in the original figure). This is the most commonly used modification of the insertion sort.

Using binary search. It is reasonable to use the binary search algorithm to find the proper place for insertion. This variant of the insertion sort is called binary insertion sort. After the position for insertion is found, the algorithm shifts that part of the array and inserts the element. This version has a lower number of comparisons, but the overall average complexity remains O(n^2). From a practical point of view this improvement is not very important, because insertion sort is used on quite small data sets.

Complexity Analysis

Insertion sort's overall complexity is O(n^2) on average, regardless of the method of insertion. On almost-sorted arrays insertion sort shows better performance, up to O(n). In case of applying insertion sort to a sorted array, the number of writes is O(n) on average, but the number of comparisons may vary depending on the insertion algorithm: it is O(n^2) when the shifting or swapping methods are used, and O(n log n) for binary insertion sort. From the point of view of practical application, the average complexity of the insertion sort is not so important: as mentioned above, insertion sort is applied to quite small data sets.

Shell Sort

Most of the sorting algorithms seen so far, such as insertion sort and selection sort, have a run time complexity of O(n^2). We also know that no sorting algorithm can have time complexity less than O(n), since we need to scan the list at least once; for example, insertion sort's best case complexity is O(n). Can we get better performance, with run time complexity between O(n) and O(n^2)? The answer is yes, and there are many algorithms with complexity in this range, with varying tradeoffs. In this note, we describe one approach: shell sort.

Shell sort has evolved as a trial and error algorithm. Analytical results are not available, but empirically it has been found that the
shell sort improves the efficiency and decreases the run time complexity well below O(n^2). Donald Shell discovered the shell sort, and hence it is known as the shell sort algorithm. The algorithm for shell sort can be defined in two steps:

Step 1: Divide the original list into smaller lists.
Step 2: Sort the individual sub-lists using any known sorting algorithm (like bubble sort, insertion sort, selection sort, etc.).

void shellsort(int a[], int n)
{
    int i, j, k, h, v;
    int cols[] = {1391376, 463792, 198768, 86961, 33936, 13776, 4592,
                  1968, 861, 336, 112, 48, 21, 7, 3, 1};
    for (k = 0; k < 16; k++) {
        h = cols[k];
        for (i = h; i < n; i++) {
            v = a[i];
            j = i;
            while (j >= h && a[j - h] > v) {
                a[j] = a[j - h];
                j = j - h;
            }
            a[j] = v;
        }
    }
}

Many questions arise: How should I divide the list? Which sorting algorithm should I use? How many times will I have to execute steps 1 and 2? And the most puzzling question: if I am anyway using bubble, insertion or selection sort, then how can I achieve an improvement in efficiency? I shall discuss each of these questions one by one.

For dividing the original list into smaller lists, we choose a value k, which is known as the increment. Based on the value of k, we split the list into k sub-lists. For example, if we choose 5 as the value for the increment k, then we get the following sub-lists:

first_list contains the elements at positions 1, 6, 11, 16, 21, ... of the original list;
second_list contains the elements at positions 2, 7, 12, 17, 22, ...; third_list the elements at positions 3, 8, 13, 18, 23, ...; fourth_list the elements at positions 4, 9, 14, 19, 24, ...; and fifth_list the elements at positions 5, 10, 15, 20, 25, ....

So the i-th sub-list will contain every k-th element of the original list, starting from the (i - 1)-th index. According to the algorithm mentioned above, for each iteration the list is divided and then sorted. If we use the same value of k, we will get the same sub-lists, and every time we will sort the same sub-lists, which will not result in the ordered final list. Note that sorting the five sub-lists independently does not ensure that the full list is sorted, so we need to change the value of k (increase or decrease?) for every iteration. To know whether the array is sorted, we would need to scan the full list. We also know that the number of sub-lists we get is equal to the value of k; so if we decide to reduce the value of k after every iteration, we will also reduce the number of sub-lists in every iteration. Eventually, when k is set to 1, we will have only one sub-list. Hence we know the termination condition for our algorithm: k = 1. Since for every iteration we are decreasing the value of the increment k, the algorithm is also known as "diminishing increment sort".

Any sorting algorithm, or combination of algorithms, can be used for sorting the sub-lists; e.g. some shell sort implementations use bubble sort for the last iteration and insertion sort for the other iterations. We use insertion sort in this tutorial.

How does shell sort give better performance if the sorting is ultimately done by algorithms like insertion sort and bubble sort? The rationale is as follows. If the list is either small or almost sorted, then insertion sort is efficient, since fewer elements will be shifted. In shell sort, as we have already seen, the original list is divided into smaller lists based on the value of the increment, and these are sorted. Initially the value of k is fairly large, giving a large number of small sub-lists. We know that simple sorting algorithms like insertion sort are effective for small lists, since the data movements are over short distances.
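As a reference point for this discussion, straight insertion sort with the "shifting" variant can be sketched in C (the function name and details are our own, not a listing from the text):

```c
/* Straight insertion sort using the "shifting" variant: the element
   being inserted is held in v while larger elements are shifted one
   place to the right; v is written only to its final position. */
void insertion_sort(int a[], int n)
{
    int i, j, v;
    for (i = 1; i < n; i++) {
        v = a[i];
        j = i;
        while (j > 0 && a[j - 1] > v) {
            a[j] = a[j - 1];   /* shift the larger element right */
            j--;
        }
        a[j] = v;              /* insert into the vacated slot */
    }
}
```

On a nearly sorted array the inner while loop rarely executes, which is exactly why insertion sort is the method of choice for the later, almost-sorted passes of shell sort.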
As k reduces in value, the length of the sub-lists increases; but since the earlier sub-lists have been sorted, we expect the full list to look more and more sorted. Again, note that algorithms such as insertion sort are fairly efficient when they work with nearly sorted lists. Thus the inefficiency arising out of working with larger lists is partly compensated by the lists being increasingly sorted. This is the intuitive explanation for the performance of shell sort.

How to choose the value of the increment k? Till date there is no convincing answer to what should be the optimal sequence for the increment k, but it has been empirically found that a larger number of increments gives more efficient results. It is also advisable not to use empirical sequences whose successive increments divide one another; if we do so, for different iterations we may get almost the same elements in the sub-lists to compare and sort, which will not result in better performance. The original example illustrates this: when the smaller increment divides the larger one, the second family of sub-lists regroups elements that were already compared and sorted against each other in the previous sub-lists.
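One "diminishing increment" pass — insertion-sorting all k interleaved sub-lists for a given increment k — can be written as a single loop in C (a sketch with our own naming):

```c
/* One diminishing-increment pass: insertion-sort each of the k
   interleaved sub-lists (every k-th element) in place.  Starting the
   outer loop at index k covers all k sub-lists at once. */
void h_sort_pass(int a[], int n, int k)
{
    int i, j, v;
    for (i = k; i < n; i++) {
        v = a[i];
        j = i;
        while (j >= k && a[j - k] > v) {
            a[j] = a[j - k];   /* shift within this sub-list */
            j -= k;
        }
        a[j] = v;
    }
}
```

Calling h_sort_pass with each increment of the chosen sequence, ending with k = 1, yields a complete shell sort; the final k = 1 pass is ordinary insertion sort on an almost-sorted list.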
After the first iteration, when the increment is reduced to a value that divides the previous one, we can compare the two families of sub-lists: in the second family, we find ourselves comparing and sorting pairs of elements that had already been sorted against each other in the previous sub-lists. So we should choose the values of k in such a way that in every iteration we get almost different elements in the sub-lists to compare and sort. There are proven and recommended ways of generating the increment sequences; some of them are discussed below.

1. Donald Shell suggested h_t = floor(n/2), h_k = floor(h_{k+1}/2), which gives the increment series n/2, n/4, ..., 1.
2. Hibbard suggested the sequence of increments 2^k - 1, that is: 1, 3, 7, 15, 31, ....
3. Another way of calculating the increments was suggested by Knuth, and we have used this method for generating the increments in this tutorial: h_1 = 1, h_{i+1} = 3*h_i + 1, stopping at h_t when h_{t+1} > n. This gives the increment series 1, 4, 13, 40, ....

Task: Discuss insertion sorting techniques.

Other than these, there are many series which have been found empirically and which perform well.

Example: We will take an example to illustrate shell sort. Take the list shown in the original figure and compute the increment values using the Knuth formula; suppose we get three increments, the last being 1. When the largest increment k is used, we get k sub-lists, and the elements are divided among the sub-lists in the following way: every k-th element of the list, starting with the (i - 1)-th index, will be in the i-th sub-list.
After sorting the sub-lists, the resulting list is partially ordered. We now reduce the increment value and get the next family of sub-lists; after sorting these sub-lists, the resulting list is closer still to sorted order. We further reduce the increment value to 1, which is our last increment value. Now there is only one list, which, after sorting, gives us the final sorted list.

Now try to understand and analyse what is happening — how the elements are moved. After the first iteration, with the largest increment, an element that started far from its final position (in the original figure, a negative element deep in the list) has already travelled most of the way towards the front. Using insertion sort on the whole list, moving such an element across the whole list would require a very large number of movements of the elements in the list. Shell sort performs better than insertion sort by reducing the number of movements of elements in the list; this is achieved by the large strides the elements take in the beginning cycles.

Upper Bound

Theorem: With the h-sequence 1, 3, 7, 15, ..., 2^k - 1, ..., shellsort needs O(n * sqrt(n)) steps for sorting a sequence of length n (Papernov/Stasevic [PS]).

Proof: Let h_t be the increment closest to sqrt(n). We analyse the behaviour of shellsort separately for the increments h_k with k <= t and with k > t.

First let k <= t. Since, by the conditions mentioned above, h_{k+1} and h_{k+2} are relatively prime and in O(h_k), O(n * h_k) sorting steps suffice for h_k-sorting the data sequence. Since
the h_k form a geometric series, the sum of all h_k with k <= t is in O(h_t) = O(sqrt(n)); thus O(n * sqrt(n)) sorting steps are needed for this part.

Now let k > t. When the sequence is arranged as an array with h_k columns, there are n/h_k elements in each column. Thus O((n/h_k)^2) sorting steps are needed to sort each column, since insertion sort has quadratic complexity. There are h_k columns, therefore the number of sorting steps for h_k-sorting the entire data sequence is in O((n/h_k)^2 * h_k) = O(n^2 / h_k). Again, the n^2/h_k form a geometric series whose sum is in O(n^2 / h_t) = O(n * sqrt(n)). Therefore, again, O(n * sqrt(n)) steps are needed for this part, where k > t.

It can be shown that for this h-sequence the upper bound is tight. But there is another h-sequence that leads to more efficient behaviour of shellsort.

Theorem: With the h-sequence consisting of the numbers of the form 2^p * 3^q, shellsort needs O(n * log(n)^2) steps for sorting a sequence of length n (Pratt [Pra]).

Proof: If g = 2 and h = 3, then (g - 1)(h - 1) = 2. So, in a 2- and 3-sorted sequence, to the right of each element only the next element can be smaller. Therefore, O(n) sorting steps suffice to sort the sequence with insertion sort. Considering the elements with odd and with even index separately, it becomes clear that again O(n) sorting steps suffice to make a 4- and 6-sorted sequence 2-sorted. Similarly, O(n) sorting steps suffice to make a 6- and 9-sorted sequence 3-sorted, and so on. The above h-sequence has the property that for each h_k, 2*h_k and 3*h_k also occur, so O(n) sorting steps suffice for each h_k. Altogether there are O(log(n)^2) elements in the h-sequence; thus the complexity of shellsort with this h-sequence is in O(n * log(n)^2).

The h-sequence of Pratt performs best asymptotically, but it consists of O(log(n)^2) elements. In particular, if the data sequence is presorted, an h-sequence with fewer elements is better, since the data sequence has to be scanned (by the for-i-loop in the program) for each h_k, even if only a few sorting steps are performed. By combining the arguments of these two theorems, h-sequences with O(log(n)) elements can be derived that lead to very good performance in practice, as for instance the h-sequence of the program above. But, unfortunately, there seems to be no h-sequence that gives shellsort a worst case performance of O(n * log(n)). It is an open question whether possibly the average complexity is in O(n * log(n)).

Heap Sort

Heapsort is a sorting technique which sorts a contiguous list of length n with O(n * log(n)) comparisons and movements of entries, even in the worst case. Hence it achieves worst-case bounds better than those of quicksort, and for contiguous lists it is better than mergesort, since it needs only a small and constant amount of space apart from the list being sorted.

Heapsort proceeds in two phases. First, all the entries in the list are arranged to satisfy the heap property; then the top of the heap is removed and another entry is promoted to take its place, repeatedly. Therefore we need a procedure which builds an initial heap, arranging all the entries in the list to satisfy the heap property. The procedure which builds the initial heap uses a procedure adjust(x, i, n), which adjusts the i-th entry in the list — whose entries at positions 2i and 2i + 1 already satisfy the heap property — in such a manner that the entry at position i will also satisfy it.
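The two-phase scheme just described can also be written as a compact, self-contained 0-indexed C variant, convenient for experimenting (the names sift_down and heap_sort are our own, not the book's):

```c
/* Restore the max-heap property for the subtree rooted at i,
   considering only the first n elements (0-indexed array). */
void sift_down(int a[], int i, int n)
{
    int j, v = a[i];
    while ((j = 2 * i + 1) < n) {
        if (j + 1 < n && a[j] < a[j + 1])
            j++;                 /* pick the larger child */
        if (v >= a[j])
            break;               /* heap property already holds */
        a[i] = a[j];             /* move the child up */
        i = j;
    }
    a[i] = v;
}

void heap_sort(int a[], int n)
{
    int i, t;
    for (i = n / 2 - 1; i >= 0; i--)   /* phase 1: build initial heap */
        sift_down(a, i, n);
    for (i = n - 1; i > 0; i--) {      /* phase 2: remove maximum repeatedly */
        t = a[0]; a[0] = a[i]; a[i] = t;
        sift_down(a, 0, i);
    }
}
```

This is the same algorithm as the listing that follows, only with arrays indexed from 0 (children of i at 2i + 1 and 2i + 2) instead of from 1.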
The following code implements the algorithm (arrays are used 1-indexed: the root of the heap is at position 1, and the children of position i are at positions 2i and 2i + 1):

void adjust(int x[], int i, int n)
{
    int j, k, flag;
    k = x[i];
    flag = 1;
    j = 2 * i;
    while (j <= n && flag) {
        if ((j < n) && (x[j] < x[j + 1]))
            j++;                      /* pick the larger child */
        if (k >= x[j])
            flag = 0;
        else {
            x[j / 2] = x[j];          /* move the child up */
            j = 2 * j;
        }
    }
    x[j / 2] = k;
}

void build_initial_heap(int x[], int n)
{
    int i;
    for (i = n / 2; i >= 1; i--)
        adjust(x, i, n);
}

void exchange(int *a, int *b)
{
    int t;
    t = *a;
    *a = *b;
    *b = t;
}

void heapsort(int x[], int n)
{
    int i;
    build_initial_heap(x, n);
    for (i = n - 1; i >= 1; i--) {
        exchange(&x[1], &x[i + 1]);   /* move current maximum to the end */
        adjust(x, 1, i);
    }
}

Analysis of Heapsort

In each pass of the while loop in the procedure adjust(x, i, n), the position j is doubled; hence the number of passes cannot exceed log(n div i). Therefore the computation time of adjust is O(log(n div i)).
The procedure build_initial_heap calls the adjust procedure for values of i ranging from n/2 down to 1. Hence the total number of iterations will be bounded by:

    log(n/1) + log(n/2) + ... + log(n/(n/2))

This comes out to be some constant times n; hence the computation time of build_initial_heap is O(n).

The heapsort procedure calls adjust (n - 1) times; hence the total number of iterations made in the heapsort will be bounded by:

    log 2 + log 3 + ... + log n

which comes out to be approximately n * log(n). Hence the computing time of heapsort is O(n * log(n)) + O(n) = O(n * log(n)). The only additional space needed by heapsort is space for one record, to carry out the exchange.

The original figure illustrates this with a list to be sorted, its initial heap, and the output after each pass of heapsort.

Merge Sort

This is another sorting technique having the same average and worst case time complexity, O(n * log(n)), but requiring an additional list of size n. The technique that we use is the merging of two sorted lists of sizes n and m respectively to form a single sorted list of size (n + m). Given a list of size n to be sorted, instead of viewing it to be one single list of size n, we start by viewing it to be n lists
each of size 1, and we merge the first list with the second list to form a single sorted list of size 2. Similarly, we merge the third and the fourth lists to form a second single sorted list of size 2, and so on; this completes one pass. We then consider the first sorted list of size 2 and the second sorted list of size 2 and merge them to form a single sorted list of size 4. Similarly, we merge the third and the fourth sorted lists, each of size 2, to form the second single sorted list of size 4, and so on; this completes the second pass. In the third pass we merge these adjacent sorted lists, each of size 4, to form sorted lists of size 8. We continue this process till finally we end up with a single sorted list of size n, as shown in the original figure, where sorted lists of sizes 1, 2, 4 and 8 are built up pass by pass.

To carry out the above task, we require a procedure to merge two sorted lists of sizes n and m respectively to form a single sorted list of size (n + m). We also require a procedure to carry out one pass of the list, merging the adjacent sorted lists of the specified size. This is because we have to carry out repeated passes of the given list: in the first pass we merge the adjacent lists of size 1, in the second pass we merge the adjacent lists of size 2, and so on. Therefore we will call this procedure by varying the size of the lists to be merged. Here is an implementation of the same:

void merge(int x[], int y[], int l, int m, int n)
{
    int i, j, k;
    i = l;
    j = m + 1;
    k = l;
    while ((i <= m) && (j <= n)) {
        if (x[i] < x[j]) {
            y[k] = x[i];
            i++;
        } else {
            y[k] = x[j];
            j++;
        }
        k++;
    }
    while (i <= m) {
        y[k] = x[i];
        k++;
        i++;
    }
    while (j <= n) {
        y[k] = x[j];
        k++;
        j++;
    }
}
void mpass(int x[], int y[], int l, int n)
{
    int i;
    i = 1;
    while (i <= (n - 2 * l + 1)) {
        merge(x, y, i, i + l - 1, i + 2 * l - 1);
        i = i + 2 * l;
    }
    if ((i + l - 1) < n)
        merge(x, y, i, i + l - 1, n);   /* one full and one partial list remain */
    else
        while (i <= n) {                /* fewer than l elements remain: copy */
            y[i] = x[i];
            i++;
        }
}
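The merge/mpass pair can be exercised with a self-contained 0-indexed variant (our own sketch, not the book's listing): run lengths double each pass, and an auxiliary array receives each pass's output.

```c
#include <stdlib.h>
#include <string.h>

/* Merge two adjacent sorted runs x[lo..mid-1] and x[mid..hi-1] into y. */
static void merge_runs(int x[], int y[], int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) y[k++] = (x[i] <= x[j]) ? x[i++] : x[j++];
    while (i < mid) y[k++] = x[i++];
    while (j < hi)  y[k++] = x[j++];
}

/* Bottom-up merge sort: each pass merges adjacent runs of length len. */
void merge_sort(int x[], int n)
{
    int *y = malloc(n * sizeof(int));
    int len, lo, mid, hi;
    for (len = 1; len < n; len *= 2) {
        for (lo = 0; lo < n; lo += 2 * len) {
            mid = (lo + len < n) ? lo + len : n;
            hi  = (lo + 2 * len < n) ? lo + 2 * len : n;
            merge_runs(x, y, lo, mid, hi);
        }
        memcpy(x, y, n * sizeof(int));  /* pass complete: copy back */
    }
    free(y);
}
```

Each pass touches every element once, and the run length doubles per pass, giving the log2(n) passes of O(n) work described in the analysis below.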
void msort(int x[], int n)
{
    int l = 1;
    int y[MAXSIZE];              /* auxiliary list of size n */
    while (l < n) {
        mpass(x, y, l, n);
        l = 2 * l;
        mpass(y, x, l, n);       /* merge back, interchanging the roles of x and y */
        l = 2 * l;
    }
}

The merging of two sub-lists, the first running from index l to m and the second running from index (m + 1) to n, requires no more than (n - l + 1) iterations; hence, if l = 1, no more than n iterations, where n is the size of the list to be sorted. Therefore, if n is the size of the list to be sorted, every pass that the merge routine performs requires time proportional to O(n), and since the number of passes required to be performed is log2(n), the time complexity of the algorithm is O(n * log2(n)), both average and worst case. The merge sort requires an additional list of size n.

Multi-way Merge Sort

The two-phase, multiway merge-sort algorithm is similar to the external-memory merge-sort algorithm presented in the previous section. Phase 1 is the same, but in Phase 2 the main loop is performed only once, merging all of the roughly n/M runs into one run in one go. To achieve this, multiway merging is performed instead of using the two-way merge algorithm. The idea of multiway merging is the same as for two-way merging, but instead of having two input buffers (bf1 and bf2) of B elements, we have n/M input buffers, each B elements long. Each buffer corresponds to one unfinished (or active) run; initially, all runs are active. Each buffer has a pointer to the first unchosen element in that buffer (analogous to the two buffer pointers in the two-way merge). The multiway merging is performed by repeating these steps:

1. Find the smallest element among the unchosen elements of all the input buffers. Linear search is sufficient, but if the CPU cost is also important, a minimum priority queue can be used to store pointers to all the unchosen elements in the input buffers; in such a case, finding the smallest element is logarithmic in the number of the active runs.

2. Move the smallest element to the first available position of the output buffer. If the output buffer is full, write it to the disk and reinitialize the buffer to hold the next output page.

3. If the buffer from which the smallest element was just taken is now exhausted of elements, read the next page from the corresponding run. If no pages remain in that run, consider the run finished (no longer active).
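The three merging steps above can be sketched in C with a linear scan for the minimum. This is a simplified in-memory sketch: the runs are plain arrays rather than disk pages, and all names (kway_merge, runs, len) are our own.

```c
/* k-way merging by linear scan: runs[r] points at the next unchosen
   element of run r, and len[r] is that run's remaining length.
   Repeatedly pick the smallest head element and append it to out. */
void kway_merge(const int *runs[], int len[], int k, int out[])
{
    int o = 0, r, best;
    for (;;) {
        best = -1;
        for (r = 0; r < k; r++)      /* step 1: find smallest unchosen */
            if (len[r] > 0 && (best < 0 || *runs[r] < *runs[best]))
                best = r;
        if (best < 0)
            break;                   /* all runs exhausted */
        out[o++] = *runs[best]++;    /* step 2: move it to the output */
        len[best]--;                 /* step 3: advance within the run */
    }
}
```

In the external-memory setting, "advancing within the run" would refill the corresponding one-page buffer from disk, and filling the output array would instead flush a full output page.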
When only one active run remains, the algorithm finishes up as in the final lines of the two-way merge: it just copies all the remaining elements to the end of the file.

Figure: Main-memory organization for multiway merging — input buffers, one for each unfinished run, with pointers to the first unchosen elements; the smallest unchosen element is selected for the output buffer.

It is easy to see that Phase 2 of the two-phase, multiway merge-sort algorithm performs only Θ(n) I/O operations, and this is also the running time of the whole algorithm. In spite of this, the algorithm has a limitation — it cannot sort very large files. If Phase 1 of the algorithm produces more runs than there are main-memory pages available for input buffers, all the runs cannot be merged in one go in Phase 2, because each run requires a one-page input buffer in main memory and one page of main memory is reserved for the output buffer. How large should the file be for this to happen?

Multiway Merge Sort of Very Large Files

Sometimes there may be a need to sort extremely large files, or there is only a small amount of available main memory. As described in the previous section, two-phase, multiway merge sort may not work in such situations. A natural way to extend the two-phase, multiway merge sort for files of any size is to do not one but many iterations in Phase 2 of the algorithm. That is, we employ the external-memory merge-sort algorithm, but instead of using the two-way merge, we use the multiway merging (as described in the previous section) to merge a group of runs from one file into a single run in the other file. Then, in each iteration of the main loop of Phase 2, we reduce the number of runs by a factor of M - 1. What is the running time of this algorithm, which we call simply multiway merge sort? Phase 1 and each iteration of the main loop of Phase 2 take Θ(n) operations. After Phase 1 we start up with roughly n/M runs; each iteration of the main loop of Phase 2 reduces the number of runs by a factor of M - 1, and we stop when we have just one run. Thus, there are about log_{M-1}(n/M) iterations of the main loop of Phase 2. Therefore, the total running time of the algorithm is Θ(n · log_{M-1}(n/M)) = Θ(n · log_M n). Remember that the cost of the earlier external-memory merge-sort algorithm is Θ(n · log_2(n/M)); thus, multiway merge sort is faster by a factor of about log_2 M. Actually,
Θ(n · log_M n) is a lower bound for the problem of external-memory sorting; that is, multiway merge sort is an asymptotically optimal algorithm.

Merging of Two Sorted Lists

Assume that the two lists to be merged are sorted in descending order. Compare the first element of the first list with the first element of the second list. If the element of the first list is greater, then place it in the resultant list, and advance the index of the first list and the index of the resultant list so that they point to the next term. If the element of the first list is smaller, place the element of the second list in the resultant list, and advance the index of the second list and the index of the resultant list so that they point to the next term. Repeat this process until all the elements of either the first list or the second list are compared. If some elements remain to be compared in the first list or in the second list, place those elements in the resultant list, advancing the corresponding index of that list and the index of the resultant list.

The original walkthrough takes two sorted lists and builds the resultant list element by element in exactly this fashion: at each step the two current elements are compared and the appropriate one is moved to the resultant list; when one list is exhausted, the remainder of the other is copied over.

Lab Exercise: Program

#include <stdio.h>
#include <conio.h>

#define N 5                     /* size of each input list (assumed here) */

void read(int *, int);
void dis(int *, int);
void sort(int *, int);
void merge(int *, int *, int *, int);

void main(void)
{
    int a[N], b[N], c[2 * N];
    clrscr();
    printf("enter the elements of first list \n");
    read(a, N);                 /* read the list */
    printf("the elements of first list are \n");
    dis(a, N);                  /* display the first list */
    printf("enter the elements of second list \n");
    read(b, N);                 /* read the list */
    printf("the elements of second list are \n");
    dis(b, N);                  /* display the second list */
    sort(a, N);
    printf("the sorted list is:\n");
    dis(a, N);
    sort(b, N);
    printf("the sorted list is:\n");
    dis(b, N);
    merge(a, b, c, N);
    printf("the elements of merged list are \n");
    dis(c, 2 * N);              /* display the merged list */
    getch();
}

void read(int x[], int k)
{
    int j;
    for (j = 0; j < k; j++)
        scanf("%d", &x[j]);
    fflush(stdin);
}

void dis(int x[], int k)
{
    int j;
    for (j = 0; j < k; j++)
        printf("%d ", x[j]);
    printf("\n");
}

void sort(int arr[], int k)
{
    int temp;
    int i, j;
    for (i = 0; i < k; i++)
        for (j = 0; j < k - i - 1; j++)
            if (arr[j] < arr[j + 1]) {      /* descending order */
                temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
}
void merge(int a[], int b[], int c[], int k)
{
    int ptra = 0, ptrb = 0, ptrc = 0;
    while (ptra < k && ptrb < k) {
        if (a[ptra] > b[ptrb]) {
            c[ptrc] = a[ptra];
            ptra++;
        } else {
            c[ptrc] = b[ptrb];
            ptrb++;
        }
        ptrc++;
    }
    while (ptra < k) {
        c[ptrc] = a[ptra];
        ptra++;
        ptrc++;
    }
    while (ptrb < k) {
        c[ptrc] = b[ptrb];
        ptrb++;
        ptrc++;
    }
}

A sample run reads the two lists, displays them, displays each list sorted in descending order, and finally displays the merged list.
Task: Write a program for merge sort.

Quick Sort

In this method, an array x[1], ..., x[n] is sorted by picking some value in the array as a key element. We then swap the first element of the list with the key element, so that the key comes to the first position. We then find out the proper place of the key in the list. The proper place is that position in the list where, if the key is placed, all elements to the left of it are smaller than the key, and all the elements to the right of it are greater than the key.

To obtain the proper position of the key, we traverse the list in both directions, using the indices i and j respectively. We initialize i to that index which is one more than the index of the key element; i.e., if the list to be sorted has indices running from m to n, then the key element is at index m, hence we initialize i to (m + 1). The index i is incremented till we get an element at the i-th position greater than the key value. Similarly, we initialize j to n, and go on decrementing j till we get an element having a value less than the key value. We then check whether i and j have crossed each other. If not, then we interchange the elements at the i-th and j-th positions and continue the process of incrementing i and decrementing j till they cross each other. When i and j have crossed each other, we interchange the key element (at the m-th position) with the element at the j-th position. This brings the key element to the j-th position, and we find that the elements to its left are less than it and the elements to its right are greater than it. Therefore we can split the given list into two sub-lists: the first one made of elements from the m-th position to the (j - 1)-th position, and the second one made of elements from the (j + 1)-th position to the n-th position. We then repeat the same procedure with each of the sub-lists separately. Here is an implementation:

void qsort(int x[], int m, int n)
{
    int key, i, j, k;
    if (m < n) {
        k = getkeyposition(x, m, n);
        interchange(&x[m], &x[k]);
        key = x[m];
        i = m + 1;
        j = n;
        while (i < j) {
            while ((i < n) && (x[i] <= key))
                i++;
            while ((j > m) && (x[j] > key))
                j--;
            if (i < j)
                interchange(&x[i], &x[j]);
        }
        interchange(&x[m], &x[j]);
        qsort(x, m, j - 1);
        qsort(x, j + 1, n);
    }
}
24,176
notes ++ifi jinterchange(& [ ]& [ ])interchange(& [ ]& [ ])qsort(xmj- )qsort(xj+ )void interchange(int *iint *jint temptemp * * * * tempchoice of the key we can choose any entry in the list as the key the choice of first entry is often poor choice for keysince if the list is already sortedthen there will be no element less than the first element selected as keyand so one of the sub-lists will be empty hence we choose key near the center of the listin the hope that our choice will partition the list in such manner that about half comes on each side of key therefore the function getkeyposition isint getkeyposition(int iint jreturn( + mod )the choice of the key near the center is also arbitraryand hence it is not necessary that it will always divide the list nicely in to halfit may also happen that one sub-list is much larger than other hence some other method of selecting key should be used good way to choose key is to use random number generator to choose the position of next key in each activation of quicksort therefore the function getkeyposition isint getkeyposition(int iint jreturn random number in the range of to consider the following list lovely professional university indices
when qsort is activated first time key and = and = is incremented till it becomes because at position the value is greater than keyj is not decrementedbecause at position the value that we have is less than the key since <jwe interchange the rd element and th element then is incremented till it becomes and is decremented till it becomes since jwe interchange the key element that is the element at position with the element at position and call qsort recursively with the left sub-list made of elements from position to and right sub-list made of elements from position to as shown below notes indices right sub-list left sub-list by continuing in this fashionwe finally get the sorted list the average case time complexity of the quick sort algorithm can be decided as followswe assume that every time the list gets splitted into two approximately equal sized sub-lists if the size of given list is nthen it gets splitted into two sub lists of size approximately / each of these sub -lists further gets splitted into two sub-lists of size / and this is continued till the size becomes when the quick sort works with list of size it places the key element (which we take the first element of the list under considerationat its proper position in the list this requires no more than iterations after placing the key element at its proper position in the list of size nquick sort activates itself two times to work with left and right sub-listseach assumed to be of size / therefore if (nis time required to sort list of size since the time required to sort the list of size is equal to the sum of the time required to place the key element at its proper position in the list of size and the time required to sort the left and right sub-lists each assumed to be of size / (ncomes out to bet(nc* * ( / where is constant and ( / is the time required to sort the list of size / similarly the time required to sort the list of size / is equal to the sum of the time required to place the key element at 
its proper position in a list of size n/2 and the time required to sort the left and right sub-lists, each assumed to be of size n/4. T(n/2) comes out to be:

T(n/2) = c*(n/2) + 2*T(n/4)

where T(n/4) is the time required to sort a list of size n/4. Similarly, the time required to sort a list of size n/4 is:

T(n/4) = c*(n/4) + 2*T(n/8)

and so on. Finally we get:

T(n) = c*n + 2*T(n/2)
     = c*n + 2*(c*(n/2) + 2*T(n/4))
     = c*n + c*n + 4*T(n/4)
     = 2*c*n + 4*(c*(n/4) + 2*T(n/8))
     = 3*c*n + 8*T(n/8)
     ...
     = (log n)*c*n + n*T(1)

Therefore T(n) is proportional to n*log(n), and we conclude that the average time complexity of the quick sort algorithm is O(n log n). But the worst-case time complexity is O(n^2). The reason for this is that in the worst case one of the two sub-lists will always be empty, and the other will be of size (n - 1), where n is the size of the original list. Therefore in the worst case T(n) comes out to be:

T(n) = c*n + T(n - 1)
     = c*n + c*(n - 1) + T(n - 2)
     = c*(n + (n - 1) + (n - 2) + ... + 2) + T(1)
     which is approximately c*(n^2)/2

Therefore T(n) is proportional to n^2, and hence the order is O(n^2).

Space Complexity

The average case space complexity is O(log n), because the space complexity depends on the maximum number of activations that can exist simultaneously. We find that if we assume that every time the list gets split into approximately two lists of equal size, then the maximum number of activations that will exist simultaneously is log n. In the worst case there exist n activations, because the depth of the recursion is n. Hence the worst-case space complexity is O(n).

Algorithms of Quick Sort

The divide-and-conquer strategy is used in quicksort. Below, the recursion step is described:

1. Choose a pivot value. We take the value of the middle element as the pivot value, but it can be any value which is in the range of the sorted values, even if it isn't present in the array.

2. Partition. Rearrange the elements in such a way that all elements which are less than the pivot go to the left part of the array and all elements greater than the pivot go to the right part. Values equal to the pivot can stay in any part of the array. Notice that the array may be divided into non-equal parts.

3. Sort both parts. Apply the quicksort algorithm recursively to the left and the right parts.

Partition Algorithm in Detail

There are two indices, i and j; at the very beginning of the partition algorithm i points to the first element in the array and j points to the last one. Then the algorithm moves i forward until an element with a value greater than or equal to the pivot is found. Index j is moved backward until an element with a value less than or equal to the pivot is found. If i <= j then they are swapped, i steps to the next position (i + 1) and j steps to the previous one (j - 1). The algorithm stops when i becomes greater than j. After partition, all values before the i-th element are less than or equal to the pivot and all values after the j-th element are greater than or equal to the pivot.
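The two-index scan just described can be sketched as a small C routine. The function name, the bounds-checking guards, and the sample data used below are illustrative choices, not taken from the text.

```c
/* Two-index partition of a[lo..hi] around a given pivot value.
   i moves forward past elements < pivot, j moves backward past
   elements > pivot; out-of-place pairs are swapped and both
   indices advance.  On return (with i > j), a[lo..j] <= pivot
   and a[j+1..hi] >= pivot. */
int partition(int a[], int lo, int hi, int pivot)
{
    int i = lo, j = hi, tmp;

    while (i <= j) {
        while (i <= hi && a[i] < pivot) i++;  /* step i forward     */
        while (j >= lo && a[j] > pivot) j--;  /* step j backward    */
        if (i <= j) {                         /* swap, advance both */
            tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++; j--;
        }
    }
    return j;
}
```

After this step, quicksort would recurse on a[lo..j] and a[j+1..hi], mirroring step 3 above.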
notes examplesort { using quicksort swap pivot value pivot value swap and swap and swap and stop partition run quick sort recursively sorted noticethat we show here only the first recursion stepin order not to make example too long butin fact{ and { are sorted then recursively why does it workon the partition step algorithm divides the array into two parts and every element from the left part is less or equal than every element from the right part also and satisfy <pivot < inequality after completion of the recursion calls both of the parts become sorted andtaking into account arguments stated abovethe whole array is sorted exampleconsider the following list to be sorted in ascending order 'add your man(ignore blanksn [ quicksort ( > 'nl - 'yvthere forel [ 'avthere forer lovely professional university
notes < swap ( , , to get [ [ 'ovthere forel [ 'mvthere forer < swap ( , , to get [ [ 'uvl [ 'mv; break swap ( , , to get [ at this point 'nis in its correct place [ ] [ to [ constitutes sub list [ to [ constitutes sublist now quick sort ( quick sort ( the quick sort algorithm uses the ( log ncomparisons on average the performance can be improved by keeping in mind the following points switch to faster sorting scheme like insertion sort when the sublist size becomes comparatively small use better dividing element in the implementations we have always used [nas the dividing element useful method for the selection of dividing element is the median-of three method select any elements from the list use the median of these as the dividing element bucket sort bucket sort runs in linear time on the average it assumes that the input is generated by random process that distributes elements uniformly over the interval [ the idea of bucket sort is to divide the interval [ into equal-sized subintervalsor bucketsand then distribute the input numbers into the buckets since the inputs are uniformly distributed over ( )we don' expect many numbers to fall into each bucket to produce the outputsimply sort the numbers in each bucket and then go through the bucket in orderlisting the elements in each the code assumes that input is in -element array and each element in satisfies < [ < we also need an auxiliary array [ - for linked-lists (buckets lovely professional university
BUCKET_SORT(A)
1. n <- length[A]
2. for i <- 1 to n
3.     do insert A[i] into list B[floor(n * A[i])]
4. for i <- 0 to n - 1
5.     do sort list B[i] with insertion sort
6. concatenate the lists B[0], B[1], ..., B[n - 1] together in order

Example: Given an input array A, the array B of sorted lists, or buckets, is obtained after line 3; bucket B[i] holds values in the interval [i/n, (i + 1)/n). The sorted output consists of a concatenation, in order, of the lists: first B[0], then B[1], then B[2], and so on, the last one being B[n - 1].

Analysis

All lines except line 5 take O(n) time in the worst case. We can see by inspection that the total time to examine all buckets in line 5 is O(n - 1), i.e., O(n). The only interesting part of the analysis is the time taken by insertion sort in line 5. Let n_i be the random variable denoting the number of elements in the bucket B[i]. Since the expected time to sort by INSERTION_SORT is O(n^2), the expected time to sort the elements in bucket B[i] is E[O(n_i^2)] = O(E[n_i^2]). Therefore, the total expected time to sort all the elements in all the buckets is:

sum for i = 0 to n - 1 of O(E[n_i^2]) = O(sum for i = 0 to n - 1 of E[n_i^2])    (1)

In order to evaluate this summation, we must determine the distribution of each random variable n_i. We have n elements and n buckets. The probability that a given element falls in bucket B[i] is 1/n. Note: this problem is the same as the "balls-and-bins" problem.

Therefore, the probability follows the binomial distribution, which has:

mean: E[n_i] = np = 1
variance: Var[n_i] = np(1 - p) = 1 - 1/n

For any random variable, E[n_i^2] = Var[n_i] + E^2[n_i] = (1 - 1/n) + 1^2 = 2 - 1/n = Theta(1).

Putting this value in equation (1) above (and doing some tweaking), we have:

expected time for INSERTION_SORT = O(n)

Now back to our original problem. In the above bucket sort algorithm, we observe:

T(n) = [time to insert n elements in array A] + [time to go through auxiliary array B[0 .. n - 1]] + O(n) (sort by INSERTION_SORT)
     = O(n) + O(n - 1) + O(n)
     = O(n)

Therefore, the entire bucket sort algorithm runs in linear expected time.

Task: Discuss bubble sort with a suitable example.

External Sorting

External sorting refers to the sorting of a file that is on disk (or tape); internal sorting refers to the sorting of an array of data that is in RAM. The main concern with external sorting is to minimize disk access, since reading a disk block takes about a million times longer than accessing an item in RAM (according to Shaffer; see the reference at the end of this document). Perhaps the simplest form of external sorting is to use a fast internal sort with good locality of reference (which means that it tends to reference nearby items, not widely scattered items) and hope that your operating system's virtual memory can handle it (quicksort is one sort algorithm that is generally very fast and has good locality of reference). If the file is too huge, however, even virtual memory might be unable to fit it. Also, the performance may not be too great, due to the large amount of time it takes to access data on disk.

Methods

Most external sort routines are based on mergesort. They typically break a large data file into a number of shorter, sorted "runs". These can be produced by repeatedly reading a section of the data file into RAM, sorting it with an ordinary quicksort, and writing the sorted data to disk. After the sorted runs have been generated, a merge algorithm is used to combine sorted files into longer sorted files. The simplest scheme is to use a two-way merge: merge two sorted files into one sorted file, then merge two more, and so on until there is just one
large sorted file better scheme is multiway merge algorithmit might merge perhaps shorter runs together lovely professional university
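The basic merging step behind these schemes can be sketched in C. This is a minimal sketch assuming the runs are already sorted and held in memory as arrays; a real external sort would stream each run from disk a buffer at a time. The function name and sizes are illustrative.

```c
/* Merge two sorted runs into one sorted output.  In a true external
   sort the runs would be files read block by block; in-memory
   arrays stand in for them here. */
void merge_runs(const int run1[], int n1,
                const int run2[], int n2, int out[])
{
    int i = 0, j = 0, k = 0;

    while (i < n1 && j < n2)              /* emit the smaller head */
        out[k++] = (run1[i] <= run2[j]) ? run1[i++] : run2[j++];
    while (i < n1) out[k++] = run1[i++];  /* copy any leftovers    */
    while (j < n2) out[k++] = run2[j++];
}
```

Each two-way merging pass halves the number of runs, so r initial runs are combined into one sorted file after about log2(r) passes over the data; a multiway merge reduces the number of passes further.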
summary notes arranging objects in specified order is called sorting bubble sortinsertion sortselection sortquick sortheap sortradix sort are some of very common search algorithms the comparison starts with the first element (or at the last elementand continues sequentially till either we find match or the end of the list is encountered linear search makes as many comparisons as there are elements in the array even if the array is sorted (either in ascending or descending orderthe number of comparisons remains the same keywords bubble sorta sorting technique in which the largest element of the remaining list bubbles up to its proper ordering position in each pass through the list heap sorta sorting technique in which all the entries in the list are arranged to satisfy heap propertyand then top of the heap is removed and another entry is promoted to take its place repeatedly merge sorta sorting technique in which the given list is broken down into smaller lists repeatedly until the list become easy to sortthen the sorted lists are merged to obtain the final sorted list quick sorta sorting technique in which an array is sorted by picking some value in the array as key elementthen swapping the first element of the list with the key element so that the key comes in the first position the proper place of key in the list is found out repeatedly sortinga technique to arrange the elements of list in some pre-specified order self assessment choose the appropriate answers which one is not the method of internal sorting(aheap sort (bmerge sort (cquick sort (dbubble sort polyphase sort is (ainternal sort (bexternal sort (cboth of the above (dnone is sorting technique which sorts contiguous list of length with (nlog (ncomparisons and movement of entrieseven in the worst case (aheap sort (bmerge sort (cquick sort lovely professional university
notes ( bubble sort heap sort proceeds in (afive phase (bthree phases (cone phase (dtwo phase fill in the blanks arranging objects in specified order is called the average time complexity of the quick sort algorithm is the computing time of heapsort is the time complexity of the algorithm is both average and worst case state whether the following statements are true or false the order of linear search in worst case is ( / linear search is more efficient than binary search for binary searchthe array has to be sorted in ascending order only file is collection of records and record is in turn collection of fields review questions what is sortingexplain insertion sorting in details distinguish between quick and heap sort explain -way merge sort distinguish between linear and binary search explain the application of searching consider the list given below whose elements are arranged in an ascending order assume that binary search technique is used find out the number of probes required to find each entry in the list sort the list given below by applying the heapsort method lovely professional university
notes consider an unsorted array [nof integer elements that may have many elements present more than once it is required to store only the distinct elements of the array in separate array the information about the number of times each element is replicated is maintained in third array for examplec[ would indicate the number of times the element [ occurs in array write program to generate the arrays and cgiven an array write program that finds the largest and the second largest elements in an unsorted array the program should make just single scan of the array distinguish between internal and external method of sorting answersself assessment ( ( ( sorting (nlog ( log( ) ( (nlog ( ) false false true ( false further readings books brian kernighan and dennis ritchiethe programming languageprentice hall burkhard moniendata structures and efficient algorithmsthomas ottmannspringer krusedata structure program designprentice hall of indianew delhi mark allen welesdata structure algorithm analysis in csecond ed addisonwesley publishing rg dromeyhow to solve it by computercambridge university press shi-kuo changdata structures and algorithmsworld scientific shi-kuo changdata structures and algorithmsworld scientific sorenson and tremblayan introduction to data structure with algorithms thomas cormencharles eleiserson ronald rivestintroduction to algorithms prentice-hall of india pvt limitednew delhi timothy buddclassic data structures in ++addison wesley online links www en wikipedia org www web-source net www webopedia com lovely professional university
mandeep kaurlovely professional university unit graphs notes contents objectives introduction defining graph basic graph terminology representations of graphs adjacent matrix adjacency list representation shortest path algorithms summary keywords self assessment review questions further readings objectives after studying this unityou will be able toz define graph realise basic graph terminology explain representation of graphs discuss shortest path algorithms introduction in this unitwe introduce you to an important mathematical structure called graph graphs have found applications in subjects as diverse as sociologychemistrygeography and engineering sciences they are also widely used in solving games and puzzles in computer sciencegraphs are used in many areas one of which is computer design in day-to-day applicationsgraphs find their importance as representations of many kinds of physical structure we use graphs as models of practical situations involving routesthe vertices represent the cities and edges represent the roads or some other linksspecially in transportation managementassignment problems and many more optimization problems electric circuits are another obvious example where interconnections between objects play central role circuits elements like transistorsresistorsand capacitors are intricately wired together such circuits can be represented and processed within computer in order to answer simple questions like "is everything connected together?as well as complicated questions like "if this circuit is builtwill it work? lovely professional university
defining graph notes graph consists of set of vertice (nodesand set of edges (arcswe write =( ,ev is finite and non empty set of vertices is set of pairs of verticesthese pairs are called edges therefore ( )read as of gis set of verticesand ( )read as of gis set of edges an edge ( , )is pair of vertices and wand is said to be incident with and graph may be pictorically represented as given in figure figure graph we have numbered the nodes as , , , and therefore ( ( and ( {( )( )( )( )( )( )( )you may notice that the edge incident with node and node is written as ( , )we could also have written ( , instead of ( , the same applies to all the other edges thereforewe may say that ordering of vertices is not significant here this is true for an undirected graph in an undirected graphpair of vertices representing any edge is unordered thus ( ,wand ( ,vrepresent the same edge in directed graph each edge is an ordered pair of verticesi each edge is represented by directed pair if ( , )then is tail or initial vertex and is head or final vertex subsequently ( ,wand ( ,vrepresent two different edges directed graph may be pictorically represented as given in figure lovely professional university
notes figure directed graph the direction is indicated by an arrow the set of vertices for this graph remains the same as that of the graph in the earlier examplei ( ( , , , , however the set of edges would be ( {( , )( , )( , )( , )( , )( , )( , )do you notice the differencenote arrow is always from tail vertex to head vertex in our further discussion on graphswe would refer to directed graph as digraph and undirected graph as graph basic graph terminology good deal of nomenclature is associated with graphs most of the terms have straight foward definitionsand it is convenient to put them in one place even though we would not be using some of them until later adjacent vertices vertex is said to be adjacent to vertex if there is an edge ( or ( figure vertices adjacent to node are , , and and that to node are and lovely professional university
notes find out the vertices adjacent to remaining nodes of the graph patha path from vertex to vertex is sequence of verticeseach adjacent to the next consider the above example again , , is path , , is path , , is pathis , , pathhow many paths are there from vertex to vertex you may notice that there is path existing in the above example which starts at vertex and finishes at vertex path , , , , , such path is called cycle cycle is path in which first and last vertices are the same do we have path from any vertex to any other vertex in the above exampleif you see it carefullyyou may find the answer to the above question as yes such graph is said to be connected graph graph is called connected if there exists path from any vertex to any other vertex there are graphs which are unconnected consider the graph in figure figure an unconnected graph it is an unconnected graph you may say that these are two graphs and not one look at the figure in its totality and apply the definition of graph does it satisfy the definition of graphit does thereforeit is one graph having two unconnected components since there are unconnected componentsit is an unconnected graph so far we have talked of pathscycles and connectivity of undirected graph in digraph the path is called directed path and cycle as directed cycle figure digraph lovely professional university
notes in figure , is directed path is directed path is not directed path there is no directed cycle in the above graph you may verify the above statement digraph is called strongly connected if there is directed path from any vertex to any other vertex consider the digraph given in figure figure weakly connected graph there does not exist directed path from vertex to vertex also from vertex to other verticesand so on thereforeit is weakly connected graph let us make is strongly connected figure strongly connected graph thegraph in figure strongly connected graph you may notice that we have added just one arc from vertex to vertex lovely professional university
notes an alternative could be as given in figure figure strongly connected graph there may exist more alternate structures make at least one more alternate structure for the same diagraph you must have observed that there is no limitation of number of edges incident on one vertex it could be noneoneor more the number of edges incident on vertex determines its degree in digraph we attach an indegree and an outdegree to each of the vertice in figure the indegree of vertex is and outdegree is indegree of vertex is the number of edges for which vertex is head and outdegree is the number of edges for which vertex is tail figure tree let us define special type of graph called tree graph is tree if it has two propertiesit is connectedand there are no cycles in the graph lovely professional university
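The indegree and outdegree just defined can be computed in a single pass over the edges. Here the digraph is assumed to be given as parallel arrays of edge tails and heads; this representation, the function name, and the test graph are illustrative choices.

```c
/* Compute indegree and outdegree of every vertex of a digraph given
   as nedges directed edges; edge e runs from tail[e] to head[e]. */
void degrees(const int tail[], const int head[], int nedges,
             int indeg[], int outdeg[], int nvertices)
{
    int v, e;

    for (v = 0; v < nvertices; v++)
        indeg[v] = outdeg[v] = 0;
    for (e = 0; e < nedges; e++) {
        outdeg[tail[e]]++;    /* the edge leaves its tail vertex  */
        indeg[head[e]]++;     /* the edge enters its head vertex  */
    }
}
```

For an undirected graph, the degree of a vertex is simply the sum of its indegree and outdegree counted over each incident edge once.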
The graph depicted in the figure above is a tree, and so are the ones depicted in parts (a) to (e) of the tree-structure figure. Because of their special structure and properties, trees occur in many different applications in computer science.

Did u know? In the figure above, how many outdegrees are present at each vertex?

Representations of Graphs

A graph is a mathematical structure and finds its application in many areas of interest in which problems need to be solved using computers. Thus, this mathematical structure must be represented as some kind of data structure. Two such representations are commonly used. These are:

1. Adjacency matrix
2. Adjacency list representation

The choice of representation depends on the application and the functions to be performed on the graph.

Adjacency Matrix

The adjacency matrix A for a graph G = (V, E) with n vertices is an n x n matrix of bits, such that A[i][j] = 1 iff there is an edge from v_i to v_j, and A[i][j] = 0 if there is no such edge.
notes table (ashows the adjacency matrix for the graph given in figure table adjacency matrix for the graph in figure vertice you may observe that the adjacency matrix for an undirected graph is symmetricas the lower and upper triangles are same also all the diagonal elements are zero since we consider graphs without any self loops let us find adjacency matrix for digraph given in figure table adjacency matrix for digraph in figure vertice the total number of ' account for the number of edges in the digraph the number of ' in each row tells the outdegree of the corresponding vertex adjacency list representation in this representationwe store graph as linked structure we store all the vertices in list and then for each vertexwe have linked list of its adjacent vertices let us see it through an example consider the graph given in figure figure the adjacency list representation needs list of all of its nodesi and for each node linked list of its adjacent nodes lovely professional university
notes therefore we shall havetable adjacency list structure for graph in figure note that adjacent vertices may appear in the adjacency list in arbitrary order also an arrow from to in the list linked to does not mean that and are adjacent the adjacency list representation is better for sparse graphs because the space required is ( )as contrasted with the ( required by the adjacency matrix representation shortest path algorithms dijkstra developed an algorithm to determine the shorted path between two nodes in graph it is also possible to find the shortest paths from given source node to all nodes in graph at the same timehence this problem is sometimes called the single-source shortest paths problem the shortest path problem may be expressed as followsgiven connected graph (ve)with weighted edges and fixed vertex in vto find shortest path from to each vertex in the weights assigned to the edges may represent distancecosteffort or any other attribute that needs to be minimized in the graph solution to this problem could be found by finding spanning tree of the graph the graph representing all the paths from one vertex to all the others must be spanning tree it must include all vertices there will also be no cycles as cycle would define more than one path from the selected vertex to at least one other vertex the algorithm finds the routesby cost precedence let' assume that every cost is positive number the algorithm is equally applicable to grapha digraphor even to mixed graph with only some of its sides directed if we consider digraphthen every other case is fully covered as well since no directed side can be considered directed sides of equal cost for every direction the algorithm is based on the fact that every minimal path containing more than one side is the expansion of another minimal path containing side less this happens because all costs are considered as positive numbers in this way the first route ( found by the algorithm will be one arc routethat is 
from the starting point to one of the sides directly connected to this starting point the next route ( will be one arc route itselfor two arc routebut in this case will be an expansion of ( lovely professional university
notes here is the algorithm let be the set of all the vertices of the graph and be the set of all the vertices considered for the determination of the minimal path set ={ while there are still vertices in (asort the vertices in according to the current best estimate of their distance from the source (badd uthe closest vertex in sto (cre-compute the distances for the vertices in (dconsider the following example for illustration find the shortest path from node to node in the following graph label on an edge indicates the distance between the two nodes the edge connects applying dijkstra algorithm {xdistances of all the nodes from the nodes in the sxa xb xc xd xe xy sinceminimum distance from to - is (xb) {xband {xb distances of all the nodes from the nodes in the sxa xc xd xe xy xba xbc xbd xbe xby sinceminimum distance from to is (xby) {xbyand {xby distances of all the nodes from the nodes in the sxa xc xd xe xba xbc xbd xbe xbya xbyc xbyd xbye continuing in similar mannerwe find that the shortest path between nodes and is xby with cost value lovely professional university
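The loop above can be written compactly in C over an adjacency matrix. NV, INF, the function name, and the graph used for testing are illustrative; the inner loops correspond to steps (a)-(c): pick the closest unconsidered vertex, add it to the set, then re-compute the distance estimates of the remaining vertices.

```c
#include <limits.h>

#define NV  4            /* number of vertices (illustrative) */
#define INF INT_MAX      /* marker for "no edge" / "unreached" */

/* w[u][v] is the weight of edge (u,v), or INF if absent.
   dist[v] receives the length of a shortest path from src to v. */
void dijkstra(int w[NV][NV], int src, int dist[NV])
{
    int done[NV] = {0};
    int i, u, v;

    for (v = 0; v < NV; v++) dist[v] = INF;
    dist[src] = 0;

    for (i = 0; i < NV; i++) {
        u = -1;                           /* closest unconsidered vertex */
        for (v = 0; v < NV; v++)
            if (!done[v] && (u == -1 || dist[v] < dist[u]))
                u = v;
        if (u == -1 || dist[u] == INF)    /* the rest are unreachable */
            break;
        done[u] = 1;
        for (v = 0; v < NV; v++)          /* re-compute best estimates */
            if (!done[v] && w[u][v] != INF && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
}
```

Because every cost is a positive number, once a vertex is added to the considered set its distance is final, which is exactly why the greedy choice of the closest vertex is safe.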
notes summary graphs provide in excellent way to describe the essential features of many applications graphs are mathematical structures and are found to be useful in problem solving they may be implemented in many ways by the use of different kinds of data structures graph traversalsdepth first as well as breadth firstare also required in many applications existence of cycles makes graph traversal challengingand this leads to finding some kind of acyclic subgraph of graph that is we actually reach at the problems of finding shortest path and of finding minimum cost spanning tree problems keywords adjacenttwo vertices in an undirected graph are called adjacent edgesa graph consists of set vwhose members are called the vertices of gtogether with set of pairs of distinct vertices from these pairs are called the edges of free treea free tree is defined as connected undirected graph with no cycles graphgraph is mathematical structure and finds its application in many areas of interest in which problems need to be solved using computers undirected graphif the pairs are unorderedthen is called an undirected graph self assessment fill in the blanks graph may have many the weight of tree is just the sum of weights of its the ends when no more paths are found in an undirected graphpair of vertices representing any edge is in digraph the path is called directed path and cycle as state whether the following statements are true or false dijkstra developed an algorithm to determine the shorted path between two nodes in graph the algorithm is not equally applicable to grapha digraphor even to mixed graph with only some of its sides directed cycle is not path in which first and last vertices are the same good deal of nomenclature is associated with graphs in digraph the path is called directed path and cycle as directed cycle review questions what do you mean by shortest path find out the minimum number of edges in strongly connected diagraph on vertices lovely professional 
university
test the program for obtaining the depth first spanning tree for the following graph " graph may have many spanning treesfor instance the complete graph on four vertices has sixteen spanning treesexplain define vertices of graph graph is regular if every vertex has the same valence (that isif it is adjacent to the same number of other verticesfor regular grapha good implementation is to keep the vertices in linked list and the adjacency lists contiguous the length of all the adjacency lists is called the degree of the graph the topological sorting functions as presented in the text are deficient in error checking modify the (adepth-first and (bbreadth-first functions so that they will detect any (directedcycles in the graph and indicate what vertices cannot be placed in any topological order because they lie on cycle how can we determine maximal spanning tree in network write digraph methods called write that will write pertinent information specifying graph to the terminal the graph is to be implemented with an adjacency tableb linked vertex list with linked adjacency listsc contiguous vertex list of linked adjacency lists notes implement and test the method for determining shortest distances in directed graphs with weights answersself assessment spanning trees edges algorithm unordered directed cycle true false false true true lovely professional university
notes further readings books brian kernighan and dennis ritchiethe programming languageprentice hall burkhard moniendata structures and efficient algorithmsthomas ottmannspringer krusedata structure program designprentice hall of indianew delhi mark allen welesdata structure algorithm analysis in second ed addisonwesley publishing rg dromeyhow to solve it by computercambridge university press shi-kuo changdata structures and algorithmsworld scientific shi-kuo changdata structures and algorithmsworld scientific sorenson and tremblayan introduction to data structure with algorithms thomas cormencharles eleiserson ronald rivestintroduction to algorithms prentice-hall of india pvt limitednew delhi timothy buddclassic data structures in ++addison wesley online links www en wikipedia org www web-source net www webopedia com lovely professional university
unit network flows unit network flows notes contents objectives introduction network flow ford fulkerson method comparison networks network flow problem minimum spanning tree kruskal' algorithm prim' algorithm summary keywords self assessment review questions further readings objectives after studying this unityou will be able toexplain network flow describe problem of network flow know minimum spanning tree introduction we use graphs as models of practical situations involving routesthe vertices represent the cities and edges represent the roads or some other linksspecially in transportation managementassignment problems and many more optimization problems electric circuits are another obvious example where interconnections between objects play central role circuits elements like transistorsresistorsand capacitors are intricately wired together such circuits can be represented and processed within computer in order to answer simple questions like "is everything connected together?as well as complicated questions like "if this circuit is builtwill it work? network flow network flow is an advanced branch of graph theory the problem resolves around special type of weighted directed graph with two special verticesthe source vertexwhich has no incoming edgeand the sink vertexwhich has no outgoing edge by conventionthe source vertex is usually labelled and the sink vertex labelled lovely professional university